id | title | text | formulas | url
---|---|---|---|---|
903376 | Hyperbolic discounting | Economics concept
In economics, hyperbolic discounting is a time-"inconsistent" model of delay discounting. It is one of the cornerstones of behavioral economics and its brain-basis is actively being studied by neuroeconomics researchers.
According to the discounted utility approach, intertemporal choices are no different from other choices, except that some consequences are delayed and hence must be anticipated and discounted (i.e., reweighted to take into account the delay).
Given two similar rewards, humans show a preference for one that arrives in a more prompt timeframe. Humans are said to "discount" the value of the later reward, by a factor that increases with the length of the delay. In the financial world, this process is normally modeled in the form of exponential discounting, a time-"consistent" model of discounting. Many psychological studies have since demonstrated deviations in instinctive preference from the constant discount rate assumed in exponential discounting. Hyperbolic discounting is an alternative mathematical model that agrees more closely with these findings.
According to hyperbolic discounting, valuations fall relatively rapidly for earlier delay periods (as in, from now to one week), but then fall more slowly for longer delay periods (for instance, more than a few days). For example, in an early study subjects said they would be indifferent between receiving $15 immediately or $30 after 3 months, $60 after 1 year, or $100 after 3 years. These indifferences reflect annual discount rates that declined from 277% to 139% to 63% as delays got longer. This contrasts with exponential discounting, in which valuation falls by a constant factor per unit delay and the discount rate stays the same.
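These declining rates can be reproduced directly from the quoted indifference points. A minimal Python sketch, assuming a continuously compounded annual rate (that convention is an assumption made here for illustration):

```python
import math

# Annual discount rates implied by the indifference points quoted above:
# $15 now versus $30 in 3 months, $60 in 1 year, or $100 in 3 years.
immediate = 15.0
for delayed, years in ((30.0, 0.25), (60.0, 1.0), (100.0, 3.0)):
    rate = math.log(delayed / immediate) / years   # continuously compounded
    print(f"${delayed:.0f} after {years:g} year(s): {rate:.0%} per year")
# Output: roughly 277%, 139% and 63% -- a rate that declines with the delay,
# unlike the constant rate assumed by exponential discounting.
```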
The standard experiment used to reveal a test subject's hyperbolic discounting curve is to compare short-term preferences with long-term preferences. For instance: "Would you prefer a dollar today or three dollars tomorrow?" or "Would you prefer a dollar in one year or three dollars in one year and one day?" It has been claimed that a significant fraction of subjects will take the lesser amount today, but will gladly wait one extra day in a year in order to receive the higher amount instead. Individuals with such preferences are described as "present-biased".
The most important consequence of hyperbolic discounting is that it creates temporary preferences for small rewards that occur sooner over larger, later ones. Individuals using hyperbolic discounting reveal a strong tendency to make choices that are inconsistent over time – they make choices today that their future self would prefer not to have made, despite knowing the same information. This dynamic inconsistency happens because hyperbolas distort the relative value of options with a fixed difference in delays in proportion to how far the choice-maker is from those options.
Observations.
The phenomenon of hyperbolic discounting is implicit in Richard Herrnstein's "matching law", which states that when dividing their time or effort between two non-exclusive, ongoing sources of reward, most subjects allocate in direct proportion to the rate and size of rewards from the two sources, and in inverse proportion to their delays. That is, subjects' choices "match" these parameters.
After the report of this effect in the case of delay, George Ainslie pointed out that in a single choice between a larger, later and a smaller, sooner reward, inverse proportionality to delay would be described by a plot of value by delay that had a hyperbolic shape, and that when the smaller, sooner reward is preferred, this preference can be reversed by increasing both rewards' delays by the same absolute amount. Ainslie's research showed that a substantial number of subjects reported that they would prefer $50 immediately rather than $100 in six months, but would NOT prefer $50 in 3 months rather than $100 in nine months, even though this was the same choice seen at 3 months' greater distance. More significantly, those subjects who said they preferred $50 in 3 months to $100 in 9 months said they would NOT prefer $50 in 12 months to $100 in 18 months—again, the same pair of options at a different distance—showing that the preference-reversal effect did not depend on the excitement of getting an immediate reward. Nor does it depend on human culture; the first preference reversal findings were in rats and pigeons.
Many subsequent experiments have confirmed that spontaneous preferences by both human and nonhuman subjects follow a hyperbolic curve rather than the conventional, exponential curve that would produce consistent choice over time. For instance, when offered the choice between $50 now and $100 a year from now, many people will choose the immediate $50. However, given the choice between $50 in five years or $100 in six years almost everyone will choose $100 in six years, even though that is the same choice seen at five years' greater distance.
Hyperbolic discounting has also been found to relate to real-world examples of self-control. Indeed, a variety of studies have used measures of hyperbolic discounting to find that drug-dependent individuals discount delayed consequences more than matched nondependent controls, suggesting that extreme delay discounting is a fundamental behavioral process in drug dependence. Some evidence suggests pathological gamblers also discount delayed outcomes at higher rates than matched controls. Whether high rates of hyperbolic discounting precede addictions or vice versa is currently unknown, although some studies have reported that high-rate discounters are more likely to consume alcohol and cocaine than lower-rate discounters. Likewise, some have suggested that high-rate hyperbolic discounting makes unpredictable (gambling) outcomes more satisfying.
The degree of discounting is vitally important in describing hyperbolic discounting, especially in the discounting of specific rewards such as money. The discounting of monetary rewards varies across age groups due to the varying discount rate. The rate depends on a variety of factors, including the species being observed, age, experience, and the amount of time needed to consume the reward.
Mathematical model.
Step-by-step explanation.
Suppose that in a study, participants are offered the choice between taking "x" dollars immediately or taking "y" dollars "n" days later. Suppose further that one participant in that study employs exponential discounting and another employs hyperbolic discounting. Both participants know that they can invest the money they receive today in a savings plan that gives them an interest of "r". Both of them realize that they should take "x" dollars immediately if the future value of the savings plan will yield more than "y" dollars "n" days later. Each participant correctly understands the fundamental question being asked: "For any given value of "y" dollars and "n" days, what is the minimum amount "x" of dollars that I should be willing to accept? In other words, how many dollars would I need to invest today to get "y" dollars "n" days from now?" Each will take "x" dollars if "x" is greater than the answer that they calculate, and each will take "y" dollars "n" days from now if "x" is smaller than that answer. However, the methods that they use to calculate that amount and the answers that they get will be different, and only the exponential discounter will use the correct method and get a reliably correct result: the exponential discounter computes the compound-interest present value "y"/(1 + "r")^"n", whereas the hyperbolic discounter effectively divides by [1 + "n"×"r"] instead.
As "n" becomes very large, the value of (1 + "r")"n" becomes much larger than the value of [1 + "n"×"r"], with the effect that the value of "y" / (1 + "r")"n" becomes much smaller than the value of "y"/["1 + n×r"]. Therefore, the minimum value of "x" (the number of dollars in the immediate choice) that suffices to be greater than that amount will be much smaller than the hyperbolic discounter thinks, with the result that they will perceive "x"-values in the range from "y"/(1 + "r" )"n" to "y"/["1 + n×r"] inclusive as being too small and, as a result, irrationally turn those alternatives down when they are in fact the better investment.
Formal model.
Hyperbolic discounting is mathematically described as
formula_0
where "g"("D") is the discount factor that multiplies the value of the reward, "D" is the delay in the reward, and "k" is a parameter governing the degree of discounting (for example, the interest rate). This is compared with the formula for exponential discounting:
formula_1
Comparison.
Consider formula_2 an exponential discounting function with formula_3 and formula_4 a hyperbolic function with formula_5, and suppose both use units of weeks to measure "D", the delay. Then the exponential discounting a week from "now" ("D"=0) is formula_6, and the exponential discounting from "D" weeks of delay to formula_7 weeks is formula_8, so the incremental discount associated with an additional week of delay is the same. For the hyperbolic model using "g"("D"), the discount for a week from now is formula_9, which is the same as for "f" in the exponential model, while the incremental discount for an additional week after a delay of "D" weeks is not the same: formula_10. From this one can see that the two models of discounting are the same "now"; this is the reason for the choice of interest rate parameters "k". However, when "D" is much greater than 1, formula_11 so that the hyperbolic discounting of an additional week after a long delay is almost no discount at all, while the exponential discount factor is still 1/2, so there is still substantial discounting in the far future. Hyperbolic discounting places very little discount on an additional week of delay beyond an already large delay, while exponential discounting places a constant discount on every week of delay, whether it is far in the future or the next week.
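The comparison can be reproduced in a few lines of Python; the functions below follow the exponential and hyperbolic formulas above with k = ln 2 and k = 1 respectively:

```python
import math

def f(D, k=math.log(2)):
    return math.exp(-k * D)          # exponential: f(D) = e^(-kD)

def g(D, k=1.0):
    return 1.0 / (1.0 + k * D)       # hyperbolic: g(D) = 1/(1 + kD)

# Incremental discount for one additional week of delay, starting at delay D.
for D in (0, 1, 10, 100):
    print(D, round(f(D + 1) / f(D), 3), round(g(D + 1) / g(D), 3))
# The exponential ratio is 1/2 at every D; the hyperbolic ratio is 1/2 only
# at D = 0 and approaches 1 as D grows.
```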
Quasi-hyperbolic approximation.
The "quasi-hyperbolic" discount function (sometimes called "beta-delta discounting"), proposed by Laibson (1997), approximates the hyperbolic discount function above in discrete time by
formula_12
where "β" and "δ" are constants between 0 and 1; and "D" is the delay in the reward, but now it takes only integer values. The condition "f"(0) = 1 states that rewards taken at the present time are not discounted.
Quasi-hyperbolic discounting retains much of the analytical tractability of exponential discounting while capturing the key qualitative feature of hyperbolic discounting.
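A minimal sketch of the beta-delta discount function (the particular values of β and δ are arbitrary illustrative choices):

```python
def quasi_hyperbolic(D, beta=0.7, delta=0.95):
    # Beta-delta discount factor: 1 at D = 0, beta * delta**D for D >= 1.
    # The beta and delta values are arbitrary illustrative choices.
    return 1.0 if D == 0 else beta * delta ** D

print([round(quasi_hyperbolic(D), 3) for D in range(6)])
# [1.0, 0.665, 0.632, 0.6, 0.57, 0.542]: a one-off drop between D = 0 and
# D = 1 (governed by beta), followed by ordinary exponential decay (delta).
```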
Explanations.
Uncertain risks.
Whether discounting future gains is rational or not—and at what rate such gains should be discounted—depends greatly on circumstances. Many examples exist in the financial world, for example, where it is reasonable to assume that there is an implicit risk that the reward will not be available at the future date, and furthermore that this risk increases with time. Consider paying $50 for dinner today or delaying payment for sixty years but paying $100,000. In this case, the restaurateur would be reasonable to discount the promised future value as there is significant risk that it might not be paid (e.g. due to the death of the restaurateur or the diner).
Uncertainty of this type can be quantified with Bayesian analysis. For example, suppose that the probability for the reward to be available after time "t" is, for known hazard rate λ,
formula_13
but the rate is unknown to the decision maker. If the prior probability distribution of λ is
formula_14
then the decision maker will expect that the probability of the reward after time "t" is
formula_15
which is exactly the hyperbolic discount rate. Similar conclusions can be obtained from other plausible distributions for λ.
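This marginalisation can be checked numerically; the sketch below integrates the exponential survival probability against the exponential prior for arbitrary example values of k and t and compares the result with 1/(1 + kt):

```python
import numpy as np
from scipy.integrate import quad

k, t = 0.3, 5.0   # arbitrary example values for the prior scale and the delay

# Marginalise the survival probability exp(-lambda*t) over the unknown
# hazard rate lambda, using the exponential prior p(lambda) = exp(-lambda/k)/k.
integrand = lambda lam: np.exp(-lam * t) * np.exp(-lam / k) / k
expected, _ = quad(integrand, 0, np.inf)

print(expected, 1.0 / (1.0 + k * t))   # both are approximately 0.4
```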
Applications.
More recently these observations about discount functions have been used to study saving for retirement, borrowing on credit cards, drug addiction, and procrastination.
It has frequently been used to explain addiction.
Hyperbolic discounting has also been offered as an explanation of the divergence between privacy attitudes and behaviour.
Present values of annuities.
Present value of a standard annuity.
The present value of a series of equal annual cash flows in arrears discounted hyperbolically is
formula_16
where "V" is the present value, "P" is the annual cash flow, "D" is the number of annual payments and "k" is the factor governing the discounting.
Criticism.
Several alternative explanations of non-exponential discounting have been proposed. An article from 2003 noted that this pattern might be better explained by a similarity heuristic than by hyperbolic discounting. Subjects have also reported changing relative preferences as they see more details of what they are choosing—a “temporal construal” effect.
A study by Daniel Read introduces "subadditive discounting": the fact that discounting over a delay increases if the delay is divided into smaller intervals. This hypothesis may explain the main finding of many studies in support of hyperbolic discounting—the observation that impatience declines with time–while also accounting for observations not predicted by hyperbolic discounting. However, although these observations depart from exponential discounting, they do not entail preference reversal as time from the choice to the earlier reward increases.
Arousal of appetite or emotion does sometimes lead to preference reversal, and this has been the most widely accepted alternative to a simply hyperbolic function: hyperboloid or quasi-hyperbolic discounting fuses exponential curves with an arousal bump as a visceral reward becomes imminent. Such cases are obviously important, but still do not account for cases where either both or neither choice is made during arousal.
The most obvious objection to hyperbolic discounting is that many or most people learn to choose consistently over time in most situations. Similarly, a 2014 paper criticized the existing studies for mostly using data collected from university students and being too quick to conclude that the hyperbolic model of discounting is correct. Human experiments have frequently reported wide between-subject variations. If overcoming the tendency to temporary preference takes learning, the next obvious task for experimenters is to test theories of how and when this learning occurs (e.g. Ainslie, 2012).
References.
| [
{
"math_id": 0,
"text": "g(D)=\\frac{1}{1+kD}\\,"
},
{
"math_id": 1,
"text": "f(D)=e^{-kD}\\,"
},
{
"math_id": 2,
"text": "f(D)=2^{-D}\\,"
},
{
"math_id": 3,
"text": "k = \\ln 2 \\approx 0.69"
},
{
"math_id": 4,
"text": "g(D)=\\frac{1}{1+D}\\,"
},
{
"math_id": 5,
"text": "k = 1"
},
{
"math_id": 6,
"text": "\\frac{f(1)}{f(0)}=\\frac{1}{2}\\,"
},
{
"math_id": 7,
"text": "D+1"
},
{
"math_id": 8,
"text": "\\frac{f(D+1)}{f(D)}=\\frac{1}{2}\\,"
},
{
"math_id": 9,
"text": "\\frac{g(1)}{g(0)}=\\frac{1}{2}\\,"
},
{
"math_id": 10,
"text": "\\frac{g(D+1)}{g(D)}=1-\\frac{1}{D+2}\\,"
},
{
"math_id": 11,
"text": "\\frac{g(D+1)}{g(D)}\\approx 1\\,"
},
{
"math_id": 12,
"text": "f(D)=\\begin{cases}\n1 \\quad D = 0\\\\\n\\beta \\delta^D \\quad D = 1, 2, 3, ...\n\\end{cases} "
},
{
"math_id": 13,
"text": "P(R_t|\\lambda) = \\exp(-\\lambda t),\\,"
},
{
"math_id": 14,
"text": "p(\\lambda) = \\exp(-\\lambda/k)/k,\\,"
},
{
"math_id": 15,
"text": "P(R_t) = \\int_0^\\infty P(R_t|\\lambda) p(\\lambda) d\\lambda = \\frac{1}{1 + k t},\\,"
},
{
"math_id": 16,
"text": "V = P \\frac{\\ln(1+kD)}{k},\\,"
}
] | https://en.wikipedia.org/wiki?curid=903376 |
9033954 | Hafner–Sarnak–McCurley constant | The Hafner–Sarnak–McCurley constant is a mathematical constant representing the probability that the determinants of two randomly chosen square integer matrices will be relatively prime. The probability depends on the matrix size, "n", in accordance with the formula
formula_0
where "pk" is the "k"th prime number. The constant is the limit of this expression as "n" approaches infinity. Its value is roughly 0.3532363719... (sequence in the OEIS). | [
{
"math_id": 0,
"text": "D(n)=\\prod_{k=1}^{\\infty}\\left\\{1-\\left[1-\\prod_{j=1}^n(1-p_k^{-j})\\right]^2\\right\\},"
}
] | https://en.wikipedia.org/wiki?curid=9033954 |
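A minimal numerical sketch of the Hafner–Sarnak–McCurley product formula for D(n) above, truncating the infinite product over primes (SymPy supplies the primes; the truncation limit and the matrix sizes shown are arbitrary choices):

```python
from sympy import primerange

def hsm_product(n, prime_limit=100_000):
    # Truncated product over all primes below prime_limit for matrix size n.
    total = 1.0
    for p in primerange(2, prime_limit):
        inner = 1.0
        for j in range(1, n + 1):
            inner *= 1.0 - p ** (-j)
        total *= 1.0 - (1.0 - inner) ** 2
    return total

for n in (1, 2, 4, 8):
    print(n, round(hsm_product(n), 6))
# n = 1 gives 6/pi^2, about 0.6079 (the probability that two random integers
# are coprime); the values decrease toward about 0.3532 as n grows.
```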
9034035 | Computer-aided diagnosis | Type of diagnosis assisted by computers
Computer-aided detection (CADe), also called computer-aided diagnosis (CADx), refers to systems that assist doctors in the interpretation of medical images. Imaging techniques in X-ray, MRI, endoscopy, and ultrasound diagnostics yield a great deal of information that the radiologist or other medical professional has to analyze and evaluate comprehensively in a short time. CAD systems process digital images or videos for typical appearances and to highlight conspicuous sections, such as possible diseases, in order to offer input to support a decision taken by the professional.
CAD also has potential future applications in digital pathology with the advent of whole-slide imaging and machine learning algorithms. So far its application has been limited to quantifying immunostaining but is also being investigated for the standard H&E stain.
CAD is an interdisciplinary technology combining elements of artificial intelligence and computer vision with radiological and pathology image processing. A typical application is the detection of a tumor. For instance, some hospitals use CAD to support preventive medical check-ups in mammography (diagnosis of breast cancer), the detection of polyps in colonoscopy, and lung cancer.
Computer-aided detection (CADe) systems are usually confined to marking conspicuous structures and sections. Computer-aided diagnosis (CADx) systems evaluate the conspicuous structures. For example, in mammography CAD highlights microcalcification clusters and hyperdense structures in the soft tissue. This allows the radiologist to draw conclusions about the condition of the pathology. Another application is CADq, which quantifies, "e.g.", the size of a tumor or the tumor's behavior in contrast medium uptake. Computer-aided simple triage (CAST) is another type of CAD, which performs a fully automatic initial interpretation and triage of studies into some meaningful categories ("e.g." negative and positive). CAST is particularly applicable in emergency diagnostic imaging, where a prompt diagnosis of critical, life-threatening condition is required.
Although CAD has been used in clinical environments for over 40 years, CAD usually does not substitute the doctor or other professional, but rather plays a supporting role. The professional (generally a radiologist) is generally responsible for the final interpretation of a medical image. However, the goal of some CAD systems is to detect earliest signs of abnormality in patients that human professionals cannot, as in diabetic retinopathy, architectural distortion in mammograms, ground-glass nodules in thoracic CT, and non-polypoid (“flat”) lesions in CT colonography.
History.
In the late 1950s, with the dawn of modern computers, researchers in various fields started exploring the possibility of building computer-aided medical diagnostic (CAD) systems. These first CAD systems used flow-charts, statistical pattern-matching, probability theory, or knowledge bases to drive their decision-making process.
In the early 1970s, some of the very early CAD systems in medicine, which were often referred to as “expert systems” in medicine, were developed and used mainly for educational purposes. Examples include the MYCIN expert system, the Internist-I expert system and the CADUCEUS (expert system).
In these early developments, researchers aimed at building entirely automated CAD / expert systems, and their expectations of what computers could accomplish were unrealistically optimistic. However, after the breakthrough paper “Reducibility among Combinatorial Problems” by Richard M. Karp, it became clear that there were limitations, but also potential opportunities, when one develops algorithms to solve groups of important computational problems.
As a result of the new understanding of the various algorithmic limitations that Karp discovered in the early 1970s, researchers started to realize the serious limitations of CAD and expert systems in medicine. The recognition of these limitations led investigators to develop new kinds of CAD systems using more advanced approaches. Thus, by the late 1980s and early 1990s the focus shifted to the use of data mining approaches for the purpose of building more advanced and flexible CAD systems.
In 1998, the first commercial CAD system for mammography, the ImageChecker system, was approved by the US Food and Drug Administration (FDA). In the following years several commercial CAD systems for analyzing mammography, breast MRI, and medical imaging of the lung, colon, and heart also received FDA approval. Currently, CAD systems are used as a diagnostic aid to support physicians in making better medical decisions.
Methodology.
CAD is fundamentally based on highly complex pattern recognition. X-ray or other types of images are scanned for suspicious structures. Normally a few thousand images are required to optimize the algorithm. Digital image data are copied to a CAD server in a DICOM-format and are prepared and analyzed in several steps.
"1. Preprocessing" for
"2. Segmentation" for
"3. Structure/ROI (Region of Interest) Analyze"
Every detected region is analyzed individually for special characteristics:
"4. Evaluation / classification"
After the structure is analyzed, every ROI is evaluated individually (scoring) for the probability of a TP. The following procedures are examples of classification algorithms.
If the detected structures have reached a certain threshold level, they are highlighted in the image for the radiologist. Depending on the CAD system, these markings can be saved permanently or temporarily. The latter's advantage is that only the markings which are approved by the radiologist are saved. False hits should not be saved, because they would otherwise make a later examination more difficult.
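The processing chain described above can be summarised schematically. The Python sketch below uses a synthetic image and deliberately simple stand-ins for each step (mean-filter preprocessing, threshold segmentation, a toy ROI score); a real CAD system would use far more sophisticated algorithms at every stage:

```python
import numpy as np

def preprocess(image):
    # Noise reduction with a simple 3x3 mean filter (illustrative only).
    padded = np.pad(image, 1, mode="edge")
    shifted = [padded[i:i + image.shape[0], j:j + image.shape[1]]
               for i in range(3) for j in range(3)]
    return np.mean(shifted, axis=0)

def segment(image, threshold):
    # Candidate regions: pixels above an intensity threshold.
    return image > threshold

def score_roi(mask, image):
    # Toy "classifier": score a candidate region by intensity and size.
    return float(image[mask].mean() * np.sqrt(mask.sum()))

rng = np.random.default_rng(0)
image = rng.random((64, 64))
image[20:26, 30:36] += 1.0                     # synthetic "lesion"

smoothed = preprocess(image)
candidates = segment(smoothed, threshold=0.9)
score = score_roi(candidates, smoothed)
print("candidate pixels:", int(candidates.sum()), "score:", round(score, 2))
# Only regions whose score exceeds a chosen threshold would be highlighted
# for the radiologist.
```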
Relation to provider metrics.
Sensitivity and specificity.
CAD systems seek to highlight suspicious structures. Today's CAD systems cannot detect 100% of pathological changes. The hit rate (sensitivity) can be up to 90% depending on system and application. A correct hit is termed a True Positive (TP), while the incorrect marking of healthy sections constitutes a False Positive (FP). The fewer FPs indicated, the higher the specificity. A low specificity reduces the acceptance of the CAD system because the user has to identify all of these wrong hits. The FP rate in lung overview examinations (CAD Chest) could be reduced to 2 per examination. In other segments ("e.g." CT lung examinations) the FP rate could be 25 or more. In CAST systems the FP rate must be extremely low (less than 1 per examination) to allow a meaningful study triage.
Absolute detection rate.
The absolute detection rate of a radiologist is an alternative metric to sensitivity and specificity. Overall, results of clinical trials about sensitivity, specificity, and the absolute detection rate can vary markedly. Each study result depends on its basic conditions and has to be evaluated on those terms. The following facts have a strong influence:
Challenges.
Despite the many developments that CAD has achieved since the dawn of computers, there are still certain challenges that CAD systems face today.
Some challenges are related to various algorithmic limitations in the procedures of a CAD system, including input data collection, preprocessing, processing and system assessments. Algorithms are generally designed to select a single likely diagnosis, thus providing suboptimal results for patients with multiple, concurrent disorders. Today, input data for CAD mostly come from electronic health records (EHR). Effective design, implementation and analysis of EHRs is a major necessity for any CAD system.
Due to the massive availability of data and the need to analyze such data, big data is also one of the biggest challenges that CAD systems face today. The increasingly vast amount of patient data is a serious problem. Patient data are often complex and can be semi-structured or unstructured, requiring highly developed approaches to store, retrieve and analyze them in a reasonable time.
During the preprocessing stage, input data must be normalized. The normalization of input data includes noise reduction and filtering.
Processing may contain a few sub-steps depending on the application. The three basic sub-steps in medical imaging are segmentation, feature extraction / selection, and classification. These sub-steps require advanced techniques to analyze input data with less computational time. Although much effort has been devoted to creating innovative techniques for these procedures of CAD systems, no single best algorithm has emerged for any individual step. Ongoing research into innovative algorithms for all aspects of CAD systems is essential.
There is also a lack of standardized assessment measures for CAD systems. This may make it difficult to obtain approval for commercial use from governing bodies such as the FDA. Moreover, while many positive developments of CAD systems have been demonstrated, studies validating their algorithms for clinical practice have yet to be confirmed.
Other challenges are related to the problem of healthcare providers adopting new CAD systems in clinical practice. Some negative studies may discourage the use of CAD. In addition, the lack of training of health professionals in the use of CAD sometimes leads to incorrect interpretation of the system outcomes.
Applications.
CAD is used in the diagnosis of breast cancer, lung cancer, colon cancer, prostate cancer, bone metastases, coronary artery disease, congenital heart defect, pathological brain detection, fracture detection, Alzheimer's disease, and diabetic retinopathy.
Breast cancer.
CAD is used in screening mammography (X-ray examination of the female breast). Screening mammography is used for the early detection of breast cancer. CAD systems are often utilized to help classify a tumor as malignant (cancerous) or benign (non-cancerous). CAD is especially established in the US and the Netherlands and is used in addition to human evaluation, usually by a radiologist.
The first CAD system for mammography was developed in a research project at the University of Chicago. Today it is commercially offered by iCAD and Hologic. However, while achieving high sensitivities, CAD systems tend to have very low specificity and the benefits of using CAD remain uncertain. A 2008 systematic review on computer-aided detection in screening mammography concluded that CAD does not have a significant effect on cancer detection rate, but does undesirably increase recall rate ("i.e." the rate of false positives). However, it noted considerable heterogeneity in the impact on recall rate across studies.
Recent advances in machine learning, deep-learning and artificial intelligence technology have enabled the development of CAD systems that are clinically proven to assist radiologists in addressing the challenges of reading mammographic images by improving cancer detection rates and reducing false positives and unnecessary patient recalls, while significantly decreasing reading times.
Procedures to evaluate mammography based on magnetic resonance imaging (MRI) exist too.
Lung cancer (bronchial carcinoma).
In the diagnosis of lung cancer, computed tomography with special three-dimensional CAD systems is established and considered an appropriate second opinion. For this, a volumetric dataset with up to 3,000 single images is prepared and analyzed. Round lesions (lung cancer, metastases and benign changes) from 1 mm are detectable. Today all well-known vendors of medical systems offer corresponding solutions.
Early detection of lung cancer is valuable. However, the random detection of lung cancer in the early stage (stage 1) in the X-ray image is difficult. Round lesions that vary from 5–10 mm are easily overlooked. The routine application of CAD Chest Systems may help to detect small changes without initial suspicion. A number of researchers developed CAD systems for detection of lung nodules (round lesions less than 30 mm) in chest radiography and CT, and CAD systems for diagnosis ("e.g.", distinction between malignant and benign) of lung nodules in CT. Virtual dual-energy imaging improved the performance of CAD systems in chest radiography.
Colon cancer.
CAD is available for detection of colorectal polyps in the colon in CT colonography. Polyps are small growths that arise from the inner lining of the colon. CAD detects the polyps by identifying their characteristic "bump-like" shape. To avoid excessive false positives, CAD ignores the normal colon wall, including the haustral folds.
Cardiovascular disease.
State-of-the-art methods in cardiovascular computing, cardiovascular informatics, and mathematical and computational modeling can provide valuable tools in clinical decision-making. CAD systems with novel image-analysis-based markers as input can aid vascular physicians to decide with higher confidence on best suitable treatment for cardiovascular disease patients.
Reliable early detection and risk stratification of carotid atherosclerosis is of utmost importance for predicting strokes in asymptomatic patients. To this end, various noninvasive and low-cost markers have been proposed, using ultrasound-image-based features. These combine echogenicity, texture, and motion characteristics to assist clinical decisions towards improved prediction, assessment and management of cardiovascular risk.
CAD is available for the automatic detection of significant (causing more than 50% stenosis) coronary artery disease in coronary CT angiography (CCTA) studies.
Congenital heart defect.
Early detection of pathology can be the difference between life and death. CADe can be done by auscultation with a digital stethoscope and specialized software, also known as computer-aided auscultation. Murmurs, irregular heart sounds, caused by blood flowing through a defective heart, can be detected with high sensitivity and specificity. Computer-aided auscultation is sensitive to external noise and bodily sounds and requires an almost silent environment to function accurately.
Pathological brain detection (PBD).
Chaplot et al. was the first to use Discrete Wavelet Transform (DWT) coefficients to detect pathological brains. Maitra and Chatterjee employed the Slantlet transform, which is an improved version of DWT. Their feature vector of each image is created by considering the magnitudes of Slantlet transform outputs corresponding to six spatial positions chosen according to a specific logic.
In 2010, Wang and Wu presented a forward neural network (FNN) based method to classify a given MR brain image as normal or abnormal. The parameters of FNN were optimized via adaptive chaotic particle swarm optimization (ACPSO). Results over 160 images showed that the classification accuracy was 98.75%.
In 2011, Wu and Wang proposed using DWT for feature extraction, PCA for feature reduction, and FNN with scaled chaotic artificial bee colony (SCABC) as classifier.
In 2013, Saritha et al. were the first to apply wavelet entropy (WE) to detect pathological brains. Saritha also suggested the use of spider-web plots. Later, Zhang et al. proved that removing spider-web plots did not influence the performance. A genetic pattern search method was applied to distinguish abnormal brains from normal controls; its classification accuracy was reported as 95.188%. Das et al. proposed the use of the Ripplet transform. Zhang et al. proposed the use of particle swarm optimization (PSO). Kalbkhani et al. suggested the use of a GARCH model.
In 2014, El-Dahshan et al. suggested the use of pulse coupled neural network.
In 2015, Zhou et al. suggested application of naive Bayes classifier to detect pathological brains.
Alzheimer's disease.
CADs can be used to identify subjects with Alzheimer's and mild cognitive impairment from normal elder controls.
In 2014, Padma "et al." used combined wavelet statistical texture features to segment and classify AD benign and malignant tumor slices. Zhang et al. found that a kernel support vector machine decision tree had 80% classification accuracy, with an average computation time of 0.022 s for each image classification.
In 2019, Signaevsky "et al." were the first to report a trained Fully Convolutional Network (FCN) for detection and quantification of neurofibrillary tangles (NFT) in Alzheimer's disease and an array of other tauopathies. The trained FCN achieved high precision and recall in naive digital whole slide image (WSI) semantic segmentation, correctly identifying NFT objects using a SegNet model trained for 200 epochs. The FCN reached near-practical efficiency with an average processing time of 45 min per WSI per graphics processing unit (GPU), enabling reliable and reproducible large-scale detection of NFTs. The measured performance on test data of eight naive WSIs across various tauopathies resulted in recall, precision, and an F1 score of 0.92, 0.72, and 0.81, respectively.
Eigenbrain is a novel brain feature that can help to detect AD, based on principal component analysis (PCA) or independent component analysis decomposition. Polynomial kernel SVM has been shown to achieve good accuracy. The polynomial KSVM performs better than linear SVM and RBF kernel SVM. Other approaches with decent results involve the use of texture analysis, morphological features, or high-order statistical features.
Nuclear medicine.
CADx is available for nuclear medicine images. Commercial CADx systems for the diagnosis of bone metastases in whole-body bone scans and coronary artery disease in myocardial perfusion images exist.
With high sensitivity and an acceptable false-lesion detection rate, computer-aided automatic lesion detection systems have been demonstrated to be useful and will probably be able to help nuclear medicine physicians identify possible bone lesions in the future.
Diabetic retinopathy.
Diabetic retinopathy is a disease of the retina that is diagnosed predominantly by fundoscopic images. Diabetic patients in industrialised countries generally undergo regular screening for the condition. Imaging is used to recognize early signs of abnormal retinal blood vessels. Manual analysis of these images can be time-consuming and unreliable. CAD has been employed to enhance the accuracy, sensitivity, and specificity of automated detection methods. The use of some CAD systems to replace human graders can be safe and cost effective.
Image pre-processing, followed by feature extraction and classification, are the two main stages of these CAD algorithms.
Pre-processing methods.
"Image normalization" is minimizing the variation across the entire image. Intensity variations in areas between periphery and central macular region of the eye have been reported to cause inaccuracy of vessel segmentation. Based on the 2014 review, this technique was the most frequently used and appeared in 11 out of 40 recently (since 2011) published primary research.
"Histogram equalization" is useful in enhancing contrast within an image. This technique is used to increase "local contrast." At the end of the processing, areas that were dark in the input image would be brightened, greatly enhancing the contrast among the features present in the area. On the other hand, brighter areas in the input image would remain bright or be reduced in brightness to equalize with the other areas in the image. Besides vessel segmentation, other features related to diabetic retinopathy can be further separated by using this pre-processing technique. Microaneurysm and hemorrhages are red lesions, whereas exudates are yellow spots. Increasing contrast between these two groups allow better visualization of lesions on images. With this technique, 2014 review found that 10 out of the 14 recently (since 2011) published primary research.
"Green channel filtering" is another technique that is useful in differentiating lesions rather than vessels. This method is important because it provides the maximal contrast between diabetic retinopathy-related lesions. Microaneurysms and hemorrhages are red lesions that appear dark after application of green channel filtering. In contrast, exudates, which appear yellow in normal image, are transformed into bright white spots after green filtering. This technique is mostly used according to the 2014 review, with appearance in 27 out of 40 published articles in the past three years. In addition, green channel filtering can be used to detect center of optic disc in conjunction with double-windowing system.
"Non-uniform illumination correction" is a technique that adjusts for non-uniform illumination in fundoscopic image. Non-uniform illumination can be a potential error in automated detection of diabetic retinopathy because of changes in statistical characteristics of image. These changes can affect latter processing such as feature extraction and are not observable by humans. Correction of non-uniform illumination (f') can be achieved by modifying the pixel intensity using known original pixel intensity (f), and average intensities of local (λ) and desired pixels (μ) (see formula below). Walter-Klein transformation is then applied to achieve the uniform illumination. This technique is the least used pre-processing method in the review from 2014.
formula_0
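A minimal NumPy/SciPy sketch of this pixel-wise correction, taking λ as a box-filter local mean and μ as the global mean of the image (both choices are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)
f = rng.random((256, 256)) + np.linspace(0.0, 0.5, 256)   # uneven illumination

lam = uniform_filter(f, size=65)   # local average intensity (lambda)
mu = f.mean()                      # desired average intensity (mu)
f_corrected = f + mu - lam         # f' = f + mu - lambda

print(round(float(f.std()), 3), round(float(f_corrected.std()), 3))
# The corrected image has a flatter background (smaller standard deviation).
```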
"Morphological operations" is the second least used pre-processing method in 2014 review. The main objective of this method is to provide contrast enhancement, especially darker regions compared to background.
Feature extractions and classifications.
After pre-processing of the funduscopic image, the image will be further analyzed using different computational methods. However, the current literature agrees that some methods are used more often than others during vessel segmentation analyses. These methods are SVM, multi-scale, vessel-tracking, region growing, and model-based approaches.
"Support vector machine" is by far the most frequently used classifier in vessel segmentation, up to 90% of cases. SVM is a supervised learning model that belongs to the broader category of pattern recognition technique. The algorithm works by creating a largest gap between distinct samples in the data. The goal is to create the largest gap between these components that minimize the potential error in classification. In order to successfully segregate blood vessel information from the rest of the eye image, SVM algorithm creates support vectors that separate the blood vessel pixel from the rest of the image through a supervised environment. Detecting blood vessel from new images can be done through similar manner using support vectors. Combination with other pre-processing technique, such as green channel filtering, greatly improves the accuracy of detection of blood vessel abnormalities. Some beneficial properties of SVM include
"Multi-scale" approach is a multiple resolution approach in vessel segmentation. At low resolution, large-diameter vessels can first be extracted. By increasing resolution, smaller branches from the large vessels can be easily recognized. Therefore, one advantage of using this technique is the increased analytical speed. Additionally, this approach can be used with 3D images. The surface representation is a surface normal to the curvature of the vessels, allowing the detection of abnormalities on vessel surface.
"Vessel tracking" is the ability of the algorithm to detect "centerline" of vessels. These centerlines are maximal peak of vessel curvature. Centers of vessels can be found using directional information that is provided by Gaussian filter. Similar approaches that utilize the concept of centerline are the skeleton-based and differential geometry-based.
"Region growing" approach is a method of detecting neighboring pixels with similarities. A seed point is required for such method to start. Two elements are needed for this technique to work: similarity and spatial proximity. A neighboring pixel to the seed pixel with similar intensity is likely to be the same type and will be added to the growing region. One disadvantage of this technique is that it requires manual selection of seed point, which introduces bias and inconsistency in the algorithm. This technique is also being used in optic disc identification.
"Model-based" approaches employ representation to extract vessels from images. Three broad categories of model-based are known: deformable, parametric, and template matching. Deformable methods uses objects that will be deformed to fit the contours of the objects on the image. Parametric uses geometric parameters such as tubular, cylinder, or ellipsoid representation of blood vessels. Classical snake contour in combination with blood vessel topological information can also be used as a model-based approach. Lastly, template matching is the usage of a template, fitted by stochastic deformation process using Hidden Markov Mode 1.
Effects on employment.
Automation of medical diagnosis labor (for example, quantifying red blood cells) has some historical precedent. The deep learning revolution of the 2010s has already produced AI that are more accurate in many areas of visual diagnosis than radiologists and dermatologists, and this gap is expected to grow.
Some experts, including many doctors, are dismissive of the effects that AI will have on medical specialties.
In contrast, many economists and artificial intelligence experts believe that fields such as radiology will be massively disrupted, with unemployment or downward pressure on the wages of radiologists; hospitals will need fewer radiologists overall, and many of the radiologists who still exist will require substantial retraining. Geoffrey Hinton, the "Godfather of deep learning", argues that in light of the likely advances expected in the next five or ten years, hospitals should immediately stop training radiologists, as their time-consuming and expensive training on visual diagnosis will soon be mostly obsolete, leading to a glut of traditional radiologists.
An op-ed in "JAMA" argues that pathologists and radiologists should merge into a single "information specialist" role, and state that "To avoid being replaced by computers, radiologists must allow themselves to be displaced by computers." Information specialists would be trained in "Bayesian logic, statistics, data science", and some genomics and biometrics; manual visual pattern recognition would be greatly de-emphasized compared with current onerous radiology training.
Footnotes.
References.
| [
{
"math_id": 0,
"text": "f' = f + \\mu-\\lambda"
}
] | https://en.wikipedia.org/wiki?curid=9034035 |
9034184 | Pressure altimeter | Altitude can be determined based on the measurement of atmospheric pressure. The greater the altitude, the lower the pressure. When a barometer is supplied with a nonlinear calibration so as to indicate altitude, the instrument is a type of altimeter called a pressure altimeter or barometric altimeter. A pressure altimeter is the altimeter found in most aircraft, and skydivers use wrist-mounted versions for similar purposes. Hikers and mountain climbers use wrist-mounted or hand-held altimeters, in addition to other navigational tools such as a map, magnetic compass, or GPS receiver.
Calibration.
The calibration of an altimeter follows the equation
formula_0
where c is a constant, T is the absolute temperature, P is the pressure at altitude z, and Po is the pressure at sea level. The constant c depends on the acceleration of gravity and the molar mass of the air.
However, one must be aware that this type of altimeter relies on "density altitude" and its readings can vary by hundreds of feet owing to a sudden change in air pressure, such as from a cold front, without any actual change in altitude.
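A short sketch of the calibration equation above, assuming an isothermal atmosphere, the natural logarithm, and c = R/(Mg) with standard-atmosphere constants (the pressure reading is an arbitrary example):

```python
import math

R = 8.314462   # J/(mol*K), universal gas constant
M = 0.0289644  # kg/mol, molar mass of dry air
g = 9.80665    # m/s^2, standard gravity

def pressure_altitude(P_hpa, P0_hpa=1013.25, T_kelvin=288.15):
    c = R / (M * g)                          # constant from gravity and molar mass
    return c * T_kelvin * math.log(P0_hpa / P_hpa)

print(round(pressure_altitude(900.0), 1), "m")   # about 1000 m for 900 hPa
```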
The most common unit of measurement used for altimeter calibration worldwide is hectopascals (hPa), except for North America and Japan, where inches of mercury (inHg) are used. To obtain an accurate altitude reading in either feet or meters, the local barometric pressure must be calibrated correctly using the barometric formula.
History.
The scientific principles behind the pressure altimeter were first described in 1772 by Rev. Alexander Bryce, a Scottish minister and astronomer, who realised that the principles of a barometer could be adapted to measure height.
Applications.
Use in hiking, climbing and skiing.
A barometric altimeter, used along with a topographic map, can help to verify one's location. It is more reliable, and often more accurate, than a GPS receiver for measuring altitude; the GPS signal may be unavailable, for example, when one is deep in a canyon, or it may give wildly inaccurate altitudes when all available satellites are near the horizon. Because barometric pressure changes with the weather, hikers must periodically re-calibrate their altimeters when they reach a known altitude, such as a trail junction or peak marked on a topographical map.
Skydiving.
An altimeter is the most important piece of skydiving equipment, after the parachute itself. Altitude awareness is crucial at all times during the jump, and determines the appropriate response to maintain safety.
Since altitude awareness is so important in skydiving, there is a wide variety of altimeter designs made specifically for use in the sport, and a non-student skydiver will typically use two or more altimeters in a single jump:
The exact choice of altimeters depends heavily on the individual skydiver's preferences, experience level, primary disciplines, as well as the type of the jump. On one end of the spectrum, a low-altitude demonstration jump with water landing and no free fall might waive the mandated use of altimeters and use none at all. In contrast, a jumper doing freeflying jumps and flying a high performance canopy might use a mechanical analogue altimeter for easy reference in free fall, an in-helmet audible for breakaway altitude warning, additionally programmed with swoop guide tones for canopy flying, as well as a digital altimeter on an armband for quickly glancing the precise altitude on approach. Another skydiver doing similar types of jumps might wear a digital altimeter for their primary visual one, preferring the direct altitude readout of a numeric display.
Use in aircraft.
In aircraft, an aneroid altimeter or aneroid barometer measures the atmospheric pressure from a static port outside the aircraft. Air pressure decreases with an increase of altitude—approximately 100 hectopascals per 800 meters, one inch of mercury per 1,000 feet, or one hectopascal per 30 feet near sea level.
The aneroid altimeter is calibrated to show the pressure directly as an altitude above mean sea level, in accordance with a mathematical model atmosphere defined by the International Standard Atmosphere (ISA). Older aircraft used a simple aneroid barometer where the needle made less than one revolution around the face from zero to full scale. This design evolved to three-pointer altimeters with a primary needle and one or more secondary needles that show the number of revolutions, similar to a clock face. In other words, each needle points to a different digit of the current altitude measurement. However, this design has fallen out of favor due to the risk of misreading in stressful situations. The design evolved further to drum-type altimeters, the final step in analogue instrumentation, where each revolution of a single needle accounted for 1,000 feet, with thousand-foot increments recorded on a numerical odometer-type drum. To determine altitude, a pilot had first to read the drum to determine the thousands of feet, then look at the needle for the hundreds of feet. Modern analogue altimeters in transport aircraft are typically drum-type. The latest development in clarity is an Electronic flight instrument system with integrated digital altimeter displays. This technology has trickled down from airliners and military planes until it is now standard in many general aviation aircraft.
Modern aircraft use a "sensitive altimeter". On a sensitive altimeter, the sea-level reference pressure can be adjusted with a setting knob. The reference pressure, in inches of mercury in Canada and the United States, and hectopascals (previously millibars) elsewhere, is displayed in the small "Kollsman window," on the face of the aircraft altimeter. This is necessary, since sea level reference atmospheric pressure at a given location varies over time with temperature and the movement of pressure systems in the atmosphere.
In aviation terminology, the regional or local air pressure at mean sea level (MSL) is called the QNH or "altimeter setting", and the pressure that will calibrate the altimeter to show the height above ground at a given airfield is called the QFE of the field. An altimeter cannot, however, be adjusted for variations in air temperature. Differences in temperature from the ISA model will accordingly cause errors in indicated altitude.
In aerospace, the mechanical stand-alone altimeters which are based on diaphragm bellows were replaced by integrated measurement systems which are called air data computers (ADC). This module measures altitude, speed of flight and outside temperature to provide more precise output data allowing automatic flight control and flight level division. Multiple altimeters can be used to design a pressure reference system to provide information about the airplane's position angles to further support inertial navigation system calculations.
Pilots can perform preflight altimeter checks by setting the barometric scale to the current reported altimeter setting. The altimeter pointers should indicate the surveyed field elevation of the airport. The Federal Aviation Administration requires that if the indication differs from the surveyed field elevation by more than a specified tolerance, the instrument should be recalibrated.
Other modes of transport.
The altimeter is an optional instrument in off-road vehicles to aid in navigation. Some high-performance luxury cars that were never intended to leave paved roads, such as the Duesenberg in the 1930s, have also been equipped with altimeters.
References.
| [
{
"math_id": 0,
"text": "z=c\\;T\\;\\log(P_o/P),"
}
] | https://en.wikipedia.org/wiki?curid=9034184 |
9035542 | Extragalactic cosmic ray | Extragalactic cosmic rays are very-high-energy particles that flow into the Solar System from beyond the Milky Way galaxy. While at low energies, the majority of cosmic rays originate within the Galaxy (such as from supernova remnants), at high energies the cosmic ray spectrum is dominated by these extragalactic cosmic rays. The exact energy at which the transition from galactic to extragalactic cosmic rays occurs is not clear, but it is in the range 1017 to 1018 eV.
Observation.
The observation of extragalactic cosmic rays requires detectors with an extremely large surface area, due to the very limited flux. As a result, extragalactic cosmic rays are generally detected with ground-based observatories, by means of the extensive air showers they create. These ground based observatories can be either surface detectors, which observe the air shower particles which reach the ground, or air fluorescence detectors (also called 'fly's eye' detectors), which observe the fluorescence caused by the interaction of the charged air shower particles with the atmosphere. In either case, the ultimate aim is to find the mass and energy of the primary cosmic ray which created the shower. Surface detectors accomplish this by measuring the density of particles at the ground, while fluorescence detectors do so by measuring the depth of shower maximum (the depth from the top of the atmosphere at which the maximum number of particles are present in the shower). The two currently operating high energy cosmic ray observatories, the Pierre Auger Observatory and the Telescope Array, are hybrid detectors which use both of these methods. This hybrid methodology allows for a full three-dimensional reconstruction of the air shower, and gives much better directional information as well as more accurate determination of the type and energy of the primary cosmic ray than either technique on its own.
Pierre Auger Observatory.
The Pierre Auger Observatory, located in the Mendoza province in Argentina, consists of 1660 surface detectors, each separated by 1.5 km and covering a total area of 3000 km², and 27 fluorescence detectors at 4 different locations overlooking the surface detectors. The observatory has been in operation since 2004, and began operating at full capacity in 2008 once construction was completed. The surface detectors are water Cherenkov detectors, each detector being a tank 3.6 m in diameter. One of the Pierre Auger Observatory's most notable results is the detection of a dipole anisotropy in the arrival directions of cosmic rays with energy greater than 8 x 10^18 eV, which was the first conclusive indication of their extragalactic origin.
Telescope Array.
The Telescope Array is located in the state of Utah in the United States of America, and consists of 507 surface detectors separated by 1.2 km and covering a total area of 700 km², and 3 fluorescence detector stations with 12-14 fluorescence detectors at each station. The Telescope Array was constructed by a collaboration between the teams formerly operating the Akeno Giant Air Shower Array (AGASA), which was a surface detector array in Japan, and the High Resolution Fly's Eye (HiRes), which was an air fluorescence detector also located in Utah. The Telescope Array was initially designed to detect cosmic rays with energy above 10^19 eV, but an extension to the project, the Telescope Array Low Energy extension (TALE), is currently underway and will allow observation of cosmic rays with energies above 3 x 10^16 eV.
Spectrum and Composition.
Two clear and long-known features of the spectrum of extragalactic cosmic rays are the 'ankle', which is a flattening of the spectrum at around 5 x 10^18 eV, and suppression of the cosmic ray flux at high energies (above about 4 x 10^19 eV). More recently the Pierre Auger Observatory also observed a steepening of the cosmic ray spectrum above the ankle, before the steep cutoff above 10^19 eV. The spectrum measured by the Pierre Auger Observatory does not appear to depend on the arrival direction of the cosmic rays. However, there are some discrepancies between the spectrum (specifically the energy at which the suppression of flux occurs) measured by the Pierre Auger Observatory in the Southern hemisphere and the Telescope Array in the Northern hemisphere. It is unclear whether this is the result of an unknown systematic error or a true difference between the cosmic rays arriving at the Northern and Southern hemispheres.
The interpretation of these features of the cosmic ray spectrum depends on the details of the model assumed. Historically, the ankle is interpreted as the energy at which the steep Galactic cosmic ray spectrum transitions to a flat extragalactic spectrum. However, diffusive shock acceleration in supernova remnants, which is the predominant source of cosmic rays below 10^15 eV, can accelerate protons only up to 3 x 10^15 eV and iron up to 8 x 10^16 eV. Thus there must be an additional source of Galactic cosmic rays up to around 10^18 eV. On the other hand, the 'dip' model assumes that the transition between Galactic and extragalactic cosmic rays occurs at about 10^17 eV. This model assumes that extragalactic cosmic rays are composed purely of protons, and the ankle is interpreted as being due to pair production arising from interactions of cosmic rays with the Cosmic Microwave Background (CMB). This suppresses the cosmic ray flux and thus causes a flattening of the spectrum. Older data, as well as more recent data from the Telescope Array, favour a pure proton composition. However, recent Auger data suggest a composition which is dominated by light elements up to 2 x 10^18 eV, but becomes increasingly dominated by heavier elements with increasing energy. In this case a source of the protons below 2 x 10^18 eV is needed.
The suppression of flux at high energies is generally assumed to be due to the Greisen–Zatsepin–Kuz'min (GZK) effect in the case of protons, or due to photodisintegration by the CMB (the Gerasimova-Rozental or GR effect) in the case of heavy nuclei. However it could also be because of the nature of the sources, that is because of the maximum energy to which sources can accelerate cosmic rays.
As mentioned above, the Telescope Array and the Pierre Auger Observatory give different results for the most likely composition. However, the data used to infer composition from these two observatories are consistent once all systematic effects are taken into account. The composition of extragalactic cosmic rays thus remains ambiguous.
Origin.
Unlike solar or galactic cosmic rays, little is known about the origins of extragalactic cosmic rays. This is largely due to a lack of statistics: only about 1 extragalactic cosmic ray particle per square kilometer per year reaches the Earth's surface (see figure). The possible sources of these cosmic rays must satisfy the Hillas criterion,
formula_0
where E is the energy of the particle, q its electric charge, B is the magnetic field in the source and R the size of the source. This criterion comes from the fact that for a particle to be accelerated to a given energy, its Larmor radius must be less than the size of the accelerating region. Once the Larmor radius of the particle is greater than the size of the accelerating region, it escapes and does not gain any more energy. As a consequence of this, heavier nuclei (with a greater number of protons), if present, can be accelerated to higher energies than protons within the same source.
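As a rough numerical illustration of this criterion (not part of the original discussion), the sketch below evaluates the maximum energy in SI units, where the bound for an ultrarelativistic particle reads E ≲ qBRc (the formula above is written in natural units with c = 1). The field strength and source size used here are round-number assumptions, not measured parameters of any particular source.
E_CHARGE = 1.602176634e-19    # elementary charge in coulombs
C_LIGHT = 2.99792458e8        # speed of light in m/s
PARSEC = 3.0857e16            # metres per parsec
MICROGAUSS = 1e-10            # tesla per microgauss
def hillas_max_energy_eV(Z, B_microgauss, R_kpc):
    # Maximum energy (in eV) to which a source of size R and magnetic field B
    # can confine and accelerate a nucleus of charge Z: E_max = Z e B R c.
    B = B_microgauss * MICROGAUSS          # magnetic field in tesla
    R = R_kpc * 1e3 * PARSEC               # source size in metres
    E_joules = Z * E_CHARGE * B * R * C_LIGHT
    return E_joules / E_CHARGE             # convert joules to electronvolts
# A proton (Z = 1) in a 1 microgauss field of kiloparsec size reaches roughly 10^18 eV;
# iron (Z = 26) in the same hypothetical source reaches 26 times more.
print(f"proton: {hillas_max_energy_eV(1, 1.0, 1.0):.1e} eV")
print(f"iron:   {hillas_max_energy_eV(26, 1.0, 1.0):.1e} eV")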
Active galactic nuclei.
Active galactic nuclei (AGNs) are well known to be some of the most energetic objects in the universe, and are therefore often considered as candidates for the production of extragalactic cosmic rays. Given their extremely high luminosity, AGNs can accelerate cosmic rays to the required energies even if only 1/1000 of their energy is used for this acceleration. There is some observational support for this hypothesis. Analysis of cosmic ray measurements with the Pierre Auger Observatory suggests a correlation between the arrival directions of cosmic rays with the highest energies (more than 5 × 10^19 eV) and the positions of nearby active galaxies. In 2017, IceCube detected a high-energy neutrino with energy 290 TeV whose direction was consistent with a flaring blazar, TXS 0506+056, which strengthened the case for AGNs as a source of extragalactic cosmic rays. Since high-energy neutrinos are assumed to come from the decay of pions produced by the interaction of correspondingly high-energy protons with the Cosmic Microwave Background (CMB) (photo-pion production), or from the photodisintegration of energetic nuclei, and since neutrinos travel essentially unimpeded through the universe, they can be traced back to the source of high-energy cosmic rays.
Clusters of galaxies.
Galaxy clusters continuously accrete gas and galaxies from filaments of the cosmic web. As the cold gas which is accreted falls into the hot intracluster medium, it gives rise to shocks at the outskirts of the cluster, which could accelerate cosmic rays through the diffusive shock acceleration mechanism. Large scale radio halos and radio relics, which are expected to be due to synchrotron emission from relativistic electrons, show that clusters do host high energy particles. Studies have found that shocks in clusters can accelerate iron nuclei to 10^20 eV, which is nearly as much as the most energetic cosmic rays observed by the Pierre Auger Observatory. However, if clusters do accelerate protons or nuclei to such high energies, they should also produce gamma ray emission due to the interaction of the high-energy particles with the intracluster medium. This gamma ray emission has not yet been observed, which is difficult to explain.
Gamma ray bursts.
Gamma ray bursts (GRBs) were originally proposed as a possible source of extragalactic cosmic rays because the energy required to produce the observed flux of cosmic rays was similar to their typical luminosity in γ-rays, and because they could accelerate protons to energies of 10^20 eV through diffusive shock acceleration. Long gamma ray bursts are especially interesting as possible sources of extragalactic cosmic rays in light of the evidence for a heavier composition at higher energies. Long GRBs are associated with the death of massive stars, which are well known to produce heavy elements. However, in this case many of the heavy nuclei would be photo-disintegrated, leading to considerable neutrino emission also associated with GRBs, which has not been observed. Some studies have suggested that a specific population of GRBs known as low-luminosity GRBs might resolve this, as the lower luminosity would lead to less photo-dissociation and neutrino production. These low-luminosity GRBs could also simultaneously account for the observed high-energy neutrinos. However, it has also been argued that these low-luminosity GRBs are not energetic enough to be a major source of high energy cosmic rays.
Neutron stars.
Neutron stars are formed from the core collapse of massive stars, and, as with GRBs, can be a source of heavy nuclei. In models with neutron stars - specifically young pulsars or magnetars - as the source of extragalactic cosmic rays, heavy elements (mainly iron) are stripped from the surface of the object by the electric field created by the magnetized neutron star's rapid rotation. This same electric field can accelerate iron nuclei up to 10^20 eV. The photodisintegration of the heavy nuclei would produce lighter elements with lower energies, matching the observations of the Pierre Auger Observatory. In this scenario, the cosmic rays accelerated by neutron stars within the Milky Way could fill in the 'transition region' between Galactic cosmic rays produced in supernova remnants, and extragalactic cosmic rays. | [
{
"math_id": 0,
"text": "E = qBR"
}
] | https://en.wikipedia.org/wiki?curid=9035542 |
9035647 | Loss given default | Loss given default or LGD is the share of an asset that is lost if a borrower defaults.
It is a common parameter in risk models and also a parameter used in the calculation of economic capital, expected loss or regulatory capital under Basel II for a banking institution. It is an attribute of any exposure to a bank's client. Exposure is the amount that one may lose in an investment.
The LGD is closely linked to the expected loss, which is defined as the product of the LGD, the probability of default (PD) and the exposure at default (EAD).
Definition.
LGD is the share of an asset that is lost when a borrower defaults. The "recovery rate" is defined as 1 minus the LGD, the share of an asset that is recovered when a borrower defaults.
Loss given default is facility-specific because such losses are generally understood to be influenced by key transaction characteristics such as the presence of collateral and the degree of subordination.
How to calculate LGD.
The LGD calculation is easily understood with the help of an example: If the client defaults with an outstanding debt of $200,000 and the bank or insurance is able to sell the security (e.g. a condo) for a net price of $160,000 (including costs related to the repurchase), then the LGD is 20% (= $40,000 / $200,000).
Theoretically, LGD is calculated in different ways, but the most popular is 'gross' LGD, where total losses are divided by exposure at default (EAD). Another method is to divide losses by the unsecured portion of a credit line (where security covers a portion of EAD). This is known as 'Blanco' LGD. If collateral value is zero in the last case then Blanco LGD is equivalent to gross LGD. Different types of statistical methods can be used to do this.
Gross LGD is most popular amongst academics because of its simplicity and because academics only have access to bond market data, where collateral values often are unknown, uncalculated or irrelevant. Blanco LGD is popular amongst some practitioners (banks) because banks often have many secured facilities, and banks would like to decompose their losses between losses on unsecured portions and losses on secured portions due to depreciation of collateral quality. The latter calculation is also a subtle requirement of Basel II, but most banks are not sophisticated enough at this time to make those types of calculations.
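A minimal Python sketch of the two measures, using the figures from the example above; the function names and the handling of a fully secured exposure are illustrative choices, not a prescribed methodology.
def gross_lgd(ead, recovery):
    # Gross LGD: total loss divided by the exposure at default.
    return (ead - recovery) / ead
def blanco_lgd(ead, recovery, collateral_value):
    # 'Blanco' LGD: loss divided by the unsecured portion of the exposure.
    # With zero collateral the unsecured portion equals EAD, so it reduces to gross LGD.
    unsecured = ead - collateral_value
    if unsecured <= 0:
        return 0.0            # fully secured exposure; convention chosen here only for illustration
    return max(ead - recovery, 0.0) / unsecured
ead, recovery = 200_000.0, 160_000.0
print(gross_lgd(ead, recovery))                                 # 0.2  (20%, as in the example)
print(blanco_lgd(ead, recovery, collateral_value=0.0))          # 0.2  (no collateral: same as gross LGD)
print(blanco_lgd(ead, recovery, collateral_value=160_000.0))    # 1.0  (loss measured against the unsecured 40,000)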
Calculating LGD under the foundation approach (for corporate, sovereign and bank exposure).
To determine required capital for a bank or financial institution under Basel II, the institution has to calculate risk-weighted assets. This requires estimating the LGD for each corporate, sovereign and bank exposure. There are two approaches for deriving this estimate: a foundation approach and an advanced approach.
Exposure without collateral.
Under the foundation approach, BIS prescribes fixed LGD ratios for certain classes of unsecured exposures:
Exposure with collateral.
Simple LGD example: If the client defaults, with an outstanding debt of 200,000 (EAD) and the bank is able to sell the security for a net price of 160,000 (including costs related to the repurchase), then 40,000, or 20%, of the EAD are lost - the LGD is 20%.
The effective loss given default (formula_0) applicable to a collateralized transaction can be expressed as
formula_1
A haircut is also applied for any currency mismatch between the collateral and the exposure (the standard supervisory haircut for currency risk, where exposure and collateral are denominated in different currencies, is 8%).
The haircuts He (appropriate to the exposure) and Hc (appropriate to the collateral) have to be derived from the table of standard supervisory haircuts.
However, under certain special circumstances the supervisors, i.e. the local central banks, may choose not to apply the haircuts specified under the comprehensive approach, but instead to apply a zero H.
Calculating LGD under the advanced approach (and for the retail-portfolio under the foundation approach).
Under the A-IRB approach and for the retail-portfolio under the F-IRB approach, the bank itself determines the appropriate loss given default to be applied to each exposure, on the basis of robust data and analysis. The analysis must be capable of being validated both internally and by supervisors. Thus, a bank using internal loss given default estimates for capital purposes might be able to differentiate loss given default values on the basis of a wider set of transaction characteristics (e.g. product type, wider range of collateral types) as well as borrower characteristics. These values would be expected to represent a conservative view of long-run averages. A bank wishing to use its own estimates of LGD will need to demonstrate to its supervisor that it can meet additional minimum requirements pertinent to the integrity and reliability of these estimates.
An LGD model assesses the value and/or the quality of the security the bank holds for providing the loan – the security can be machinery such as cars, trucks or construction machines, or it can be a mortgage, a custody account or a commodity. The higher the value of the security, the lower the LGD and thus the potential loss the bank or insurer faces in the case of a default. Banks using the A-IRB approach have to determine LGD values, whereas banks within the F-IRB only have to do so for the retail portfolio. For example, as of 2013, there were nine companies in the United Kingdom with their own mortgage LGD models. In Switzerland there were two banks as of 2013. In Germany many thrifts – especially the market leader Bausparkasse Schwäbisch Hall – have their own mortgage LGD models. In the corporate asset class many German banks still only use the values given by the regulator under the F-IRB approach.
Repurchase value estimators (RVEs) have proven to be the best kind of tools for LGD estimates. The repurchase value ratio provides the percentage of the value of the house/apartment (mortgages) or machinery at a given time compared to its purchase price.
Downturn LGD.
Under Basel II, banks and other financial institutions are recommended to calculate 'downturn LGD' (downturn loss given default), which reflects the losses occurring during a 'downturn' in a business cycle for regulatory purposes. Downturn LGD is interpreted in many ways, and most financial institutions that are applying for IRB approval under BIS II often have differing definitions of what Downturn conditions are. One definition is at least two consecutive quarters of negative growth in real GDP. Often, negative growth is also accompanied by a negative output gap in an economy (where potential production exceeds actual demand).
The calculation of LGD (or downturn LGD) poses significant challenges to modelers and practitioners. Final resolutions of defaults can take many years, and final losses, and hence final LGD, cannot be calculated until all of this information is available. Furthermore, practitioners often lack data, since BIS II implementation is rather new and financial institutions may have only just started collecting the information necessary for calculating the individual elements that LGD is composed of: EAD, direct and indirect losses, security values and potential, expected future recoveries. Another challenge, and maybe the most significant, is the fact that default definitions vary between institutions. This often results in differing so-called cure-rates, or percentages of defaults without losses. The calculation of (average) LGD is often based on both defaults with losses and defaults without losses. Naturally, when more defaults without losses are added to a sample pool of observations, LGD becomes lower. This is often the case when default definitions become more 'sensitive' to credit deterioration or 'early' signs of default. When institutions use different definitions, LGD parameters therefore become non-comparable.
Many institutions are scrambling to produce estimates of downturn LGD, but often resort to 'mapping' since downturn data is often lacking. Mapping is the process of guesstimating losses under a downturn by taking existing LGD and adding a supplement or buffer, which is supposed to represent a potential increase in LGD when a downturn occurs. LGD often decreases for some segments during a downturn since there is a relatively larger increase of defaults that result in higher cure-rates, often the result of temporary credit deterioration that disappears after the downturn period is over. Furthermore, LGD values decrease for defaulting financial institutions under economic downturns because governments and central banks often rescue these institutions in order to maintain financial stability.
In 2010, researchers at Moody's Analytics quantified an LGD in line with the target probability event intended to be captured under Basel. They illustrate that the Basel downturn LGD guidelines may not be sufficiently conservative. Their results are based on a structural model that incorporates systematic risk in recovery.
Correcting for different default definitions.
One problem facing practitioners is the comparison of LGD estimates (usually averages) arising from different time periods where differing default definitions have been in place. The following formula can be used to compare LGD estimates from one time period (say x) with another time period (say y):
LGD_y = LGD_x × (1 − Cure Rate_y) / (1 − Cure Rate_x)
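A small sketch of this adjustment; the numbers below are made up purely for illustration.
def adjust_lgd(lgd_x, cure_rate_x, cure_rate_y):
    # Restate an average LGD measured under default definition x as an
    # estimate under definition y, per the formula above.
    return lgd_x * (1 - cure_rate_y) / (1 - cure_rate_x)
# An average LGD of 40% observed with a 10% cure rate, restated for a more
# 'sensitive' default definition with a 30% cure rate:
print(adjust_lgd(0.40, cure_rate_x=0.10, cure_rate_y=0.30))   # ~0.311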
Country-specific LGD.
In Australia, the prudential regulator APRA has set an interim minimum downturn LGD of 20 per cent on residential mortgages for all applicants for the advanced Basel II approaches. The 20 per cent floor is not risk sensitive and is designed to encourage authorised deposit-taking institutions (ADIs) to undertake further work, which APRA believes would be closer to the 20 per cent on average than ADIs’ original estimates.
Importance.
LGD warrants more attention than it has been given in the past decade, during which credit risk models often assumed that LGD was time-invariant. Movements in LGD often result in proportional movements in required economic capital. According to BIS (2006), institutions implementing Advanced-IRB instead of Foundation-IRB will experience larger decreases in Tier 1 capital, and the internal calculation of LGD is a factor separating the two methods.
| [
{
"math_id": 0,
"text": "L_{GD}^*"
},
{
"math_id": 1,
"text": "\nL_{GD}^* = L_{GD}\\cdot\\frac{E^*}{E}\n"
}
] | https://en.wikipedia.org/wiki?curid=9035647 |
903686 | Teleparallelism | Theory of gravity
Teleparallelism (also called teleparallel gravity) was an attempt by Albert Einstein to base a unified theory of electromagnetism and gravity on the mathematical structure of distant parallelism, also referred to as absolute parallelism or teleparallelism. In this theory, a spacetime is characterized by a curvature-free linear connection in conjunction with a metric tensor field, both defined in terms of a dynamical tetrad field.
Teleparallel spacetimes.
The crucial new idea, for Einstein, was the introduction of a tetrad field, i.e., a set {X1, X2, X3, X4} of four vector fields defined on "all" of M such that for every "p" ∈ "M" the set {X1("p"), X2("p"), X3("p"), X4("p")} is a basis of "TpM", where "TpM" denotes the fiber over p of the tangent vector bundle TM. Hence, the four-dimensional spacetime manifold M must be a parallelizable manifold. The tetrad field was introduced to allow the distant comparison of the direction of tangent vectors at different points of the manifold, hence the name distant parallelism. His attempt failed because there was no Schwarzschild solution in his simplified field equation.
In fact, one can define the connection of the parallelization {X"i"} (also called the Weitzenböck connection) to be the linear connection ∇ on M such that
formula_0
where "v" ∈ "TpM" and "f""i" are (global) functions on M; thus "f""i"X"i" is a global vector field on M. In other words, the coefficients of Weitzenböck connection ∇ with respect to {X"i"} are all identically zero, implicitly defined by:
formula_1
hence
formula_2
for the connection coefficients (also called Weitzenböck coefficients) in this global basis. Here ω^k is the dual global basis (or coframe), defined by ω^i(X_j) = δ^i_j (the Kronecker delta).
This is what usually happens in R^n, in any affine space or Lie group (for example the 'curved' sphere S^3, which is nevertheless 'Weitzenböck flat').
Using the transformation law of a connection, or equivalently the ∇ properties, we have the following result.
Proposition. In a natural basis, associated with local coordinates ("U", "xμ"), i.e., in the holonomic frame ∂"μ", the (local) connection coefficients of the Weitzenböck connection are given by:
formula_3
where X_i = h_i^μ ∂_μ for i, μ = 1, 2, …, n are the local expressions of a global object, that is, the given tetrad.
The Weitzenböck connection has vanishing curvature, but – in general – non-vanishing torsion.
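The following SymPy sketch (an illustration added here, not part of the source) makes these statements concrete on a hypothetical two-dimensional example: the orthonormal polar coframe on the punctured Euclidean plane, h^1 = dr, h^2 = r dθ. It computes the connection coefficients from the proposition above, then checks that the curvature of this connection vanishes identically while its torsion does not; the sign convention chosen for the torsion is an arbitrary assumption.
import sympy as sp
r, th = sp.symbols('r theta', positive=True)
x = [r, th]
h = sp.Matrix([[1, 0],
               [0, r]])      # h[i, mu] = h^i_mu, the tetrad components
h_inv = h.inv()              # h_inv[mu, i] = h_i^mu, the inverse tetrad
n = 2
# Weitzenböck connection coefficients: Gamma^beta_{mu nu} = h_i^beta * d_nu h^i_mu
Gamma = [[[sum(h_inv[b, i] * sp.diff(h[i, mu], x[nu]) for i in range(n))
           for nu in range(n)] for mu in range(n)] for b in range(n)]
# Torsion: the antisymmetric part of the connection coefficients
torsion = {(b, mu, nu): sp.simplify(Gamma[b][nu][mu] - Gamma[b][mu][nu])
           for b in range(n) for mu in range(n) for nu in range(n)}
def curvature(b, s, mu, nu):
    # R^b_{s mu nu} = d_mu Gamma^b_{s nu} - d_nu Gamma^b_{s mu}
    #                 + Gamma^b_{l mu} Gamma^l_{s nu} - Gamma^b_{l nu} Gamma^l_{s mu}
    expr = sp.diff(Gamma[b][s][nu], x[mu]) - sp.diff(Gamma[b][s][mu], x[nu])
    expr += sum(Gamma[b][l][mu] * Gamma[l][s][nu] - Gamma[b][l][nu] * Gamma[l][s][mu]
                for l in range(n))
    return sp.simplify(expr)
print("nonzero torsion components:", {k: v for k, v in torsion.items() if v != 0})
print("curvature vanishes:", all(curvature(b, s, mu, nu) == 0
                                 for b in range(n) for s in range(n)
                                 for mu in range(n) for nu in range(n)))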
Given the frame field {X"i"}, one can also define a metric by conceiving of the frame field as an orthonormal vector field. One would then obtain a pseudo-Riemannian metric tensor field g of signature (3,1) by
formula_4
where
formula_5
The corresponding underlying spacetime is called, in this case, a Weitzenböck spacetime.
It is worth noting that these 'parallel vector fields' give rise to the metric tensor as a byproduct.
New teleparallel gravity theory.
New teleparallel gravity theory (or new general relativity) is a theory of gravitation on Weitzenböck spacetime, and attributes gravitation to the torsion tensor formed of the parallel vector fields.
In the new teleparallel gravity theory the fundamental assumptions are as follows:
In 1961 Christian Møller revived Einstein's idea, and Pellegrini and Plebanski found a Lagrangian formulation for "absolute parallelism".
Møller tetrad theory of gravitation.
In 1961, Møller showed that a tetrad description of gravitational fields allows a more rational treatment of the energy-momentum complex than in a theory based on the metric tensor alone. The advantage of using tetrads as gravitational variables was connected with the fact that this allowed one to construct expressions for the energy-momentum complex which had more satisfactory transformation properties than in a purely metric formulation. In 2015, it was shown that the total energy of matter and gravitation is proportional to the Ricci scalar of three-space up to the linear order of perturbation.
New translation teleparallel gauge theory of gravity.
Independently in 1967, Hayashi and Nakano revived Einstein's idea, and Pellegrini and Plebanski started to formulate the gauge theory of the spacetime translation group. Hayashi pointed out the connection between the gauge theory of the spacetime translation group and absolute parallelism. The first fiber bundle formulation was provided by Cho. This model was later studied by Schweizer et al., Nitsch and Hehl, Meyer; more recent advances can be found in Aldrovandi and Pereira, Gronwald, Itin, Maluf and da Rocha Neto, Münch, Obukhov and Pereira, and Schucking and Surowitz.
Nowadays, teleparallelism is studied purely as a theory of gravity without trying to unify it with electromagnetism. In this theory, the gravitational field turns out to be fully represented by the translational gauge potential "Baμ", as it should be for a gauge theory for the translation group.
If this choice is made, then there is no longer any Lorentz gauge symmetry because the internal Minkowski space fiber—over each point of the spacetime manifold—belongs to a fiber bundle with the Abelian group R4 as structure group. However, a translational gauge symmetry may be introduced thus: instead of seeing tetrads as fundamental, we introduce a fundamental R4 translational gauge symmetry (which acts upon the internal Minkowski space fibers affinely so that this fiber is once again made local) with a connection B and a "coordinate field" x taking on values in the Minkowski space fiber.
More precisely, let π : ℳ → M be the Minkowski fiber bundle over the spacetime manifold M. For each point p ∈ M, the fiber ℳ_p is an affine space. In a fiber chart (V, ψ), coordinates are usually denoted by ψ = (x^μ, x^a), where x^μ are coordinates on the spacetime manifold M, and x^a are coordinates in the fiber ℳ_p.
Using the abstract index notation, let a, b, c,… refer to the fiber ℳ_p and μ, ν,… refer to the tangent bundle TM. In any particular gauge, the value of x^a at the point p is given by the section
formula_6
The covariant derivative
formula_7
is defined with respect to the connection form B, a 1-form assuming values in the Lie algebra of the translational abelian group R4. Here, d is the exterior derivative of the ath "component" of x, which is a scalar field (so this isn't a pure abstract index notation). Under a gauge transformation by the translation field αa,
formula_8
and
formula_9
and so, the covariant derivative of x^a = ξ^a(p) is gauge invariant. This is identified with the translational (co-)tetrad
formula_10
which is a one-form which takes on values in the Lie algebra of the translational Abelian group R4, whence it is gauge invariant. But what does this mean? x^a = ξ^a(p) is a local section of the (pure translational) affine internal bundle ℳ, another important structure in addition to the translational gauge field B^a_μ. Geometrically, this field determines the origin of the affine spaces; it is known as Cartan's radius vector. In the gauge-theoretic framework, the one-form
formula_11
arises as the nonlinear translational gauge field with "ξa" interpreted as the Goldstone field describing the spontaneous breaking of the translational symmetry.
A crude analogy: Think of the internal Minkowski fiber ℳ_p as the computer screen and the internal displacement as the position of the mouse pointer. Think of a curved mousepad as spacetime and the position of the mouse as the spacetime position. Keeping the orientation of the mouse fixed, if we move the mouse about the curved mousepad, the position of the mouse pointer (internal displacement) also changes, and this change is path dependent; i.e., it does not depend only upon the initial and final position of the mouse. The change in the internal displacement as we move the mouse about a closed path on the mousepad is the torsion.
Another crude analogy: Think of a crystal with line defects (edge dislocations and screw dislocations but not disclinations). The parallel transport of a point of M along a path is given by counting the number of (up/down, forward/backwards and left/right) crystal bonds traversed. The Burgers vector corresponds to the torsion. Disclinations correspond to curvature, which is why they are neglected.
The torsion—that is, the translational field strength of Teleparallel Gravity (or the translational "curvature")—
formula_12
is gauge invariant.
We can always choose the gauge where x^a is zero everywhere, although ℳ_p is an affine space and also a fiber; thus the origin must be defined on a point-by-point basis, which can be done arbitrarily. This leads us back to the theory where the tetrad is fundamental.
Teleparallelism refers to any theory of gravitation based upon this framework. There is a particular choice of the action that makes it exactly equivalent to general relativity, but there are also other choices of the action which are not equivalent to general relativity. In some of these theories, there is no equivalence between inertial and gravitational masses.
Unlike in general relativity, gravity is due not to the curvature of spacetime but to the torsion thereof.
Non-gravitational contexts.
There exists a close analogy between the geometry of spacetime and the structure of defects in a crystal. Dislocations are represented by torsion, disclinations by curvature. These defects are not independent of each other. A dislocation is equivalent to a disclination-antidisclination pair, and a disclination is equivalent to a string of dislocations. This is the basic reason why Einstein's theory based purely on curvature can be rewritten as a teleparallel theory based only on torsion. There exist, moreover, infinitely many ways of rewriting Einstein's theory, depending on how much of the curvature one wants to reexpress in terms of torsion, the teleparallel theory being merely one specific version of these.
A further application of teleparallelism occurs in quantum field theory, namely, two-dimensional non-linear sigma models with target space on simple geometric manifolds, whose renormalization behavior is controlled by a Ricci flow, which includes torsion. This torsion modifies the Ricci tensor and hence leads to an infrared fixed point for the coupling, on account of teleparallelism ("geometrostasis").
| [
{
"math_id": 0,
"text": "\\nabla_v\\left(f^i\\mathrm X_i\\right)=\\left(vf^i\\right)\\mathrm X_i(p),"
},
{
"math_id": 1,
"text": "\\nabla_{\\mathrm{X}_i} \\mathrm{X}_j = 0,"
},
{
"math_id": 2,
"text": "{W^k}_{ij} = \\omega^k\\left(\\nabla_{\\mathrm{X}_i} \\mathrm{X}_j\\right)\\equiv 0,"
},
{
"math_id": 3,
"text": "{\\Gamma^{\\beta}}_{\\mu\\nu}= h^{\\beta}_{i} \\partial_{\\nu} h^{i}_{\\mu},"
},
{
"math_id": 4,
"text": "g\\left(\\mathrm{X}_i,\\mathrm{X}_j\\right)=\\eta_{ij},"
},
{
"math_id": 5,
"text": "\\eta_{ij}=\\operatorname{diag}(-1,-1,-1,1)."
},
{
"math_id": 6,
"text": "x^\\mu \\to \\left(x^\\mu,x^a = \\xi^a(p)\\right)."
},
{
"math_id": 7,
"text": "D_\\mu \\xi^a \\equiv \\left(d \\xi^a\\right)_\\mu + {B^a}_\\mu = \\partial_\\mu \\xi^a + {B^a}_\\mu"
},
{
"math_id": 8,
"text": "x^a\\to x^a+\\alpha^a"
},
{
"math_id": 9,
"text": "{B^a}_\\mu\\to {B^a}_\\mu - \\partial_\\mu \\alpha^a"
},
{
"math_id": 10,
"text": "{h^a}_\\mu = \\partial_\\mu \\xi^a + {B^a}_\\mu"
},
{
"math_id": 11,
"text": "h^a = {h^a}_\\mu dx^\\mu = \\left(\\partial_\\mu \\xi^a + {B^a}_\\mu\\right)dx^{\\mu}"
},
{
"math_id": 12,
"text": "{T^a}_{\\mu\\nu} \\equiv \\left(DB^a\\right)_{\\mu\\nu} = D_\\mu {B^a}_\\nu - D_\\nu {B^a}_\\mu,"
}
] | https://en.wikipedia.org/wiki?curid=903686 |
9038490 | Minute ventilation | Volume of air breathed per minute
Minute ventilation (or respiratory minute volume or minute volume) is the volume of gas inhaled (inhaled minute volume) or exhaled (exhaled minute volume) from a person's lungs per minute. It is an important parameter in respiratory medicine due to its relationship with blood carbon dioxide levels. It can be measured with devices such as a Wright respirometer or can be calculated from other known respiratory parameters. Although minute volume can be viewed as a unit of volume, it is usually treated in practice as a flow rate (given that it represents a volume change over time). A typical calculation (in metric units) is 0.5 L × 12 breaths/min = 6 L/min.
Several symbols can be used to represent minute volume. They include formula_0 (V̇ or V-dot) or Q (which are general symbols for flow rate), MV, and VE.
Determination of minute volume.
Minute volume can either be measured directly or calculated from other known parameters.
Measurement of minute volume.
Minute volume is the amount of gas inhaled or exhaled from a person's lungs in one minute. It can be measured by a Wright respirometer or other device capable of cumulatively measuring gas flow, such as mechanical ventilators.
Calculation of minute volume.
If both tidal volume (VT) and respiratory rate (ƒ or RR) are known, minute volume can be calculated by multiplying the two values. One must also take care to consider the effect of dead space on alveolar ventilation, as seen below in "Relationship to other physiological rates".
formula_1
Physiological significance of minute volume.
Blood carbon dioxide (PaCO2) levels generally vary inversely with minute volume. For example, a person with increased minute volume (e.g. due to hyperventilation) should demonstrate a lower blood carbon dioxide level. The healthy human body will alter minute volume in an attempt to maintain physiologic homeostasis. A normal minute volume while resting is about 5–8 litres per minute in humans. Minute volume generally decreases when at rest, and increases with exercise. For example, during light activities minute volume may be around 12 litres. Riding a bicycle increases minute ventilation by a factor of 2 to 4 depending on the level of exercise involved. Minute ventilation during moderate exercise may be between 40 and 60 litres per minute.
Hyperventilation is the term for having a minute ventilation higher than physiologically appropriate. Hypoventilation describes a minute volume less than physiologically appropriate.
Relationship to other physiological rates.
Minute volume comprises the sum of alveolar ventilation and dead space ventilation. That is:
formula_2
where formula_3 is alveolar ventilation, and formula_4 represents dead space ventilation.
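A minimal numerical sketch of these relationships; the tidal volume, dead space and respiratory rate below are illustrative textbook-style values, not measurements.
def minute_ventilation(tidal_volume_l, rate_per_min):
    # minute ventilation = VT x f
    return tidal_volume_l * rate_per_min
def alveolar_ventilation(tidal_volume_l, dead_space_l, rate_per_min):
    # alveolar ventilation = (VT - VD) x f, the part of minute ventilation reaching the alveoli
    return (tidal_volume_l - dead_space_l) * rate_per_min
vt, vd, f = 0.5, 0.15, 12                 # litres, litres, breaths per minute (assumed values)
ve = minute_ventilation(vt, f)            # 6.0 L/min
va = alveolar_ventilation(vt, vd, f)      # 4.2 L/min
print(ve, va, ve - va)                    # dead space ventilation = 1.8 L/min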
| [
{
"math_id": 0,
"text": " \\dot{V}"
},
{
"math_id": 1,
"text": " \\dot{V} = V_T \\times f "
},
{
"math_id": 2,
"text": " \\dot{V} = \\dot{V}_A + \\dot{V}_D "
},
{
"math_id": 3,
"text": " \\dot{V}_A"
},
{
"math_id": 4,
"text": " \\dot{V}_D "
}
] | https://en.wikipedia.org/wiki?curid=9038490 |
9042378 | Kleene–Brouwer order | In descriptive set theory, the Kleene–Brouwer order or Lusin–Sierpiński order is a linear order on finite sequences over some linearly ordered set formula_0, that differs from the more commonly used lexicographic order in how it handles the case when one sequence is a prefix of the other. In the Kleene–Brouwer order, the prefix is later than the longer sequence containing it, rather than earlier.
The Kleene–Brouwer order generalizes the notion of a postorder traversal from finite trees to trees that are not necessarily finite. For trees over a well-ordered set, the Kleene–Brouwer order is itself a well-ordering if and only if the tree has no infinite branch. It is named after Stephen Cole Kleene, Luitzen Egbertus Jan Brouwer, Nikolai Luzin, and Wacław Sierpiński.
Definition.
If formula_1 and formula_2 are finite sequences of elements from formula_3, we say that formula_4 when there is an formula_5 such that either formula_6 and formula_7 is defined but formula_8 is undefined (that is, formula_2 is a proper prefix of formula_1), or formula_6 and formula_9.
Here, the notation formula_10 refers to the prefix of formula_1 up to but not including formula_7.
In simple terms, formula_4 whenever formula_2 is a prefix of formula_1 (i.e. formula_2 terminates before formula_1, and they are equal up to that point) or formula_1 is to the "left" of formula_2 on the first place they differ.
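A small Python sketch of this comparison on finite sequences represented as tuples (an illustration, not part of the source); sorting a finite tree with it reproduces the postorder traversal described in the next section.
import functools
def kb_less(t, s):
    # t <_KB s  iff  s is a proper prefix of t, or t is smaller at the first place they differ.
    for n in range(min(len(t), len(s))):
        if t[n] != s[n]:
            return t[n] < s[n]        # first disagreement decides
    return len(t) > len(s)            # equal on the common part: the longer sequence comes first
kb_key = functools.cmp_to_key(lambda a, b: -1 if kb_less(a, b) else (1 if kb_less(b, a) else 0))
print(kb_less((0, 1), (0,)))   # True: (0,) is a prefix of (0, 1)
print(kb_less((0, 2), (1,)))   # True: the sequences first differ at position 0, and 0 < 1
print(sorted([(), (0,), (0, 0), (0, 1), (1,)], key=kb_key))
# [(0, 0), (0, 1), (0,), (1,), ()] -- the root (empty sequence) comes last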
Tree interpretation.
A tree, in descriptive set theory, is defined as a set of finite sequences that is closed under prefix operations. The parent in the tree of any sequence is the shorter sequence formed by removing its final element. Thus, any set of finite sequences can be augmented to form a tree, and the Kleene–Brouwer order is a natural ordering that may be given to this tree. It is a generalization to potentially-infinite trees of the postorder traversal of a finite tree: at every node of the tree, the child subtrees are given their left to right ordering, and the node itself comes after all its children. The fact that the Kleene–Brouwer order is a linear ordering (that is, that it is transitive as well as being total) follows immediately from this, as any three sequences on which transitivity is to be tested form (with their prefixes) a finite tree on which the Kleene–Brouwer order coincides with the postorder.
The significance of the Kleene–Brouwer ordering comes from the fact that if formula_3 is well-ordered, then a tree over formula_3 is well-founded (having no infinitely long branches) if and only if the Kleene–Brouwer ordering is a well-ordering of the elements of the tree.
Recursion theory.
In recursion theory, the Kleene–Brouwer order may be applied to the computation trees of implementations of total recursive functionals. A computation tree is well-founded if and only if the computation performed by it is total recursive. Each state formula_11 in a computation tree may be assigned an ordinal number formula_12, the supremum of the ordinal numbers formula_13 where formula_14 ranges over the children of formula_11 in the tree. In this way, the total recursive functionals themselves can be classified into a hierarchy, according to the minimum value of the ordinal at the root of a computation tree, minimized over all computation trees that implement the functional. The Kleene–Brouwer order of a well-founded computation tree is itself a recursive well-ordering, and at least as large as the ordinal assigned to the tree, from which it follows that the levels of this hierarchy are indexed by recursive ordinals.
History.
This ordering was used by Lusin and Sierpiński, and then again by Brouwer. Brouwer does not cite any references, but Moschovakis argues that he may either have seen the earlier paper of Lusin and Sierpiński, or have been influenced by earlier work of the same authors leading to this work. Much later, Kleene studied the same ordering, and credited it to Brouwer.
| [
{
"math_id": 0,
"text": "(X, <)"
},
{
"math_id": 1,
"text": "t"
},
{
"math_id": 2,
"text": "s"
},
{
"math_id": 3,
"text": "X"
},
{
"math_id": 4,
"text": "t <_{KB} s"
},
{
"math_id": 5,
"text": "n"
},
{
"math_id": 6,
"text": "t\\upharpoonright n = s\\upharpoonright n"
},
{
"math_id": 7,
"text": "t(n)"
},
{
"math_id": 8,
"text": "s(n)"
},
{
"math_id": 9,
"text": "t(n)<s(n)"
},
{
"math_id": 10,
"text": "t\\upharpoonright n"
},
{
"math_id": 11,
"text": "x"
},
{
"math_id": 12,
"text": "||x||"
},
{
"math_id": 13,
"text": "1+||y||"
},
{
"math_id": 14,
"text": "y"
}
] | https://en.wikipedia.org/wiki?curid=9042378 |
904378 | Montgomery modular multiplication | Algorithm for fast modular multiplication
In modular arithmetic computation, Montgomery modular multiplication, more commonly referred to as Montgomery multiplication, is a method for performing fast modular multiplication. It was introduced in 1985 by the American mathematician Peter L. Montgomery.
Montgomery modular multiplication relies on a special representation of numbers called Montgomery form. The algorithm uses the Montgomery forms of a and b to efficiently compute the Montgomery form of "ab" mod "N". The efficiency comes from avoiding expensive division operations. Classical modular multiplication reduces the double-width product "ab" using division by N and keeping only the remainder. This division requires quotient digit estimation and correction. The Montgomery form, in contrast, depends on a constant R > N which is coprime to N, and the only division necessary in Montgomery multiplication is division by R. The constant R can be chosen so that division by R is easy, significantly improving the speed of the algorithm. In practice, R is always a power of two, since division by powers of two can be implemented by bit shifting.
The need to convert a and b into Montgomery form and their product out of Montgomery form means that computing a single product by Montgomery multiplication is slower than the conventional or Barrett reduction algorithms. However, when performing many multiplications in a row, as in modular exponentiation, intermediate results can be left in Montgomery form. Then the initial and final conversions become a negligible fraction of the overall computation. Many important cryptosystems such as RSA and Diffie–Hellman key exchange are based on arithmetic operations modulo a large odd number, and for these cryptosystems, computations using Montgomery multiplication with R a power of two are faster than the available alternatives.
Modular arithmetic.
Let N denote a positive integer modulus. The quotient ring Z/"N"Z consists of residue classes modulo N, that is, its elements are sets of the form
formula_0
where a ranges across the integers. Each residue class is a set of integers such that the difference of any two integers in the set is divisible by N (and the residue class is maximal with respect to that property; integers aren't left out of the residue class unless they would violate the divisibility condition). The residue class corresponding to a is denoted ā. Equality of residue classes is called congruence and is denoted
formula_1
Storing an entire residue class on a computer is impossible because the residue class has infinitely many elements. Instead, residue classes are stored as representatives. Conventionally, these representatives are the integers a for which 0 ≤ "a" ≤ "N" − 1. If a is an integer, then the representative of ā is written "a" mod "N". When writing congruences, it is common to identify an integer with the residue class it represents. With this convention, the above equality is written "a" ≡ "b" mod "N".
Arithmetic on residue classes is done by first performing integer arithmetic on their representatives. The output of the integer operation determines a residue class, and the output of the modular operation is determined by computing the residue class's representative. For example, if "N" = 17, then the sum of the residue classes of 7 and of 15 is computed by finding the integer sum 7 + 15 = 22, then determining 22 mod 17, the integer between 0 and 16 whose difference with 22 is a multiple of 17. In this case, that integer is 5, so 7 + 15 ≡ 5 (mod 17).
Montgomery form.
If a and b are integers in the range [0, "N" − 1], then their sum is in the range [0, 2"N" − 2] and their difference is in the range [−"N" + 1, "N" − 1], so determining the representative in [0, "N" − 1] requires at most one subtraction or addition (respectively) of N. However, the product "ab" is in the range [0, "N"^2 − 2"N" + 1]. Storing the intermediate integer product "ab" requires twice as many bits as either a or b, and efficiently determining the representative in [0, "N" − 1] requires division. Mathematically, the integer between 0 and "N" − 1 that is congruent to "ab" can be expressed by applying the Euclidean division theorem:
formula_2
where q is the quotient formula_3 and r, the remainder, is in the interval [0, "N" − 1]. The remainder r is "ab" mod "N". Determining r can be done by computing q, then subtracting "qN" from "ab". For example, again with formula_4, the product is determined by computing formula_5, dividing formula_6, and subtracting formula_7.
Because the computation of q requires division, it is undesirably expensive on most computer hardware. Montgomery form is a different way of expressing the elements of the ring in which modular products can be computed without expensive divisions. While divisions are still necessary, they can be done with respect to a different divisor R. This divisor can be chosen to be a power of two, for which division can be replaced by shifting, or a whole number of machine words, for which division can be replaced by omitting words. These divisions are fast, so most of the cost of computing modular products using Montgomery form is the cost of computing ordinary products.
The auxiliary modulus R must be a positive integer such that gcd("R", "N") = 1. For computational purposes it is also necessary that division and reduction modulo R are inexpensive, and the modulus is not useful for modular multiplication unless "R" > "N". The "Montgomery form" of the residue class ā with respect to R is "aR" mod "N", that is, it is the representative of the residue class of "aR". For example, suppose that "N" = 17 and that "R" = 100. The Montgomery forms of 3, 5, 7, and 15 are 300 mod 17 = 11, 500 mod 17 = 7, 700 mod 17 = 3, and 1500 mod 17 = 4.
Addition and subtraction in Montgomery form are the same as ordinary modular addition and subtraction because of the distributive law:
formula_8
formula_9
Note that doing the operation in Montgomery form does not lose information compared to doing it in the quotient ring Z/"N"Z. This is a consequence of the fact that, because gcd("R", "N") = 1, multiplication by R is an isomorphism on the additive group Z/"N"Z. For example, (7 + 15) mod 17 = 5, which in Montgomery form becomes (3 + 4) mod 17 = 7.
Multiplication in Montgomery form, however, is seemingly more complicated. The usual product of "aR" and "bR" does not represent the product of a and b because it has an extra factor of R:
formula_10
Computing products in Montgomery form requires removing the extra factor of R. While division by R is cheap, the intermediate product ("aR" mod "N")("bR" mod "N") is not divisible by R because the modulo operation has destroyed that property. So for instance, the product of the Montgomery forms of 7 and 15 modulo 17, with "R" = 100, is the product of 3 and 4, which is 12. Since 12 is not divisible by 100, additional effort is required to remove the extra factor of R.
Removing the extra factor of R can be done by multiplying by an integer "R"′ such that "RR"′ ≡ 1 (mod "N"), that is, by an "R"′ whose residue class is the modular inverse of R mod N. Then, working modulo N,
formula_11
The integer "R"′ exists because of the assumption that R and N are coprime. It can be constructed using the extended Euclidean algorithm. The extended Euclidean algorithm efficiently determines integers "R"′ and "N"′ that satisfy Bézout's identity:
0 < "R"′ < "N", 0 < "N"′ < "R", and:
formula_12
This shows that it is possible to do multiplication in Montgomery form. A straightforward algorithm to multiply numbers in Montgomery form is therefore to multiply "aR" mod "N", "bR" mod "N", and "R"′ as integers and reduce modulo N.
For example, to multiply 7 and 15 modulo 17 in Montgomery form, again with "R" = 100, compute the product of 3 and 4 to get 12 as above. The extended Euclidean algorithm implies that 8⋅100 − 47⋅17 = 1, so "R"′ = 8. Multiply 12 by 8 to get 96 and reduce modulo 17 to get 11. This is the Montgomery form of 3, as expected.
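A short Python sketch of this straightforward method (an illustration only): it computes R′ and N′ satisfying the Bézout identity above and then strips the extra factor of R by multiplying by R′. Python's built-in pow(x, -1, m) (available from Python 3.8) is used here in place of a hand-written extended Euclidean algorithm.
def montgomery_constants(R, N):
    # Return (R_prime, N_prime) with R*R_prime - N*N_prime == 1,
    # 0 < R_prime < N and 0 < N_prime < R; requires gcd(R, N) == 1 and R > N.
    R_prime = pow(R, -1, N)             # R * R_prime ≡ 1 (mod N)
    N_prime = (R * R_prime - 1) // N    # then R*R_prime - N*N_prime = 1 exactly
    return R_prime, N_prime
R, N = 100, 17
R_prime, N_prime = montgomery_constants(R, N)
print(R_prime, N_prime, R * R_prime - N * N_prime)   # 8 47 1, matching the example above
aR, bR = 3, 4                        # Montgomery forms of 7 and 15 for R = 100, N = 17
print((aR * bR * R_prime) % N)       # 11, the Montgomery form of 3 = 7*15 mod 17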
The REDC algorithm.
While the above algorithm is correct, it is slower than multiplication in the standard representation because of the need to multiply by "R"′ and divide by N. "Montgomery reduction", also known as REDC, is an algorithm that simultaneously computes the product by "R"′ and reduces modulo N more quickly than the naïve method. Unlike conventional modular reduction, which focuses on making the number smaller than N, Montgomery reduction focuses on making the number more divisible by R. It does this by adding a small multiple of N which is carefully chosen to cancel the residue modulo R. Dividing the result by R yields a much smaller number. This number is so much smaller that it is nearly the reduction modulo N, and computing the reduction modulo N requires only a final conditional subtraction. Because all computations are done using only reduction and divisions with respect to R, not N, the algorithm runs faster than a straightforward modular reduction by division.
function REDC is
input: Integers "R" and "N" with gcd("R", "N") = 1,
Integer "N"′ in [0, "R" − 1] such that "NN"′ ≡ −1 mod "R",
Integer "T" in the range [0, "RN" − 1].
output: Integer "S" in the range [0, "N" − 1] such that "S" ≡ "TR"^(−1) mod "N"
"m" ← (("T" mod "R")"N"′) mod "R"
"t" ← ("T" + "mN") / "R"
if "t" ≥ "N" then
return "t" − "N"
else
return "t"
end if
end function
To see that this algorithm is correct, first observe that m is chosen precisely so that "T" + "mN" is divisible by R. A number is divisible by R if and only if it is congruent to zero mod R, and we have:
formula_13
Therefore, t is an integer. Second, the output is either t or "t" − "N", both of which are congruent to "t" mod "N", so to prove that the output is congruent to "TR"^(−1) mod "N", it suffices to prove that t is congruent to "TR"^(−1) mod "N". Indeed, t satisfies:
formula_14
Therefore, the output has the correct residue class. Third, m is in [0, "R" − 1], and therefore "T" + "mN" is between 0 and ("RN" − 1) + ("R" − 1)"N" < 2"RN". Hence t is less than 2"N", and because it's an integer, this puts t in the range [0, 2"N" − 1]. Therefore, reducing t into the desired range requires at most a single subtraction, so the algorithm's output lies in the correct range.
To use REDC to compute the product of 7 and 15 modulo 17, first convert to Montgomery form and multiply as integers to get 12 as above. Then apply REDC with "R" = 100, "N" = 17, "N"′ = 47, and "T" = 12. The first step sets m to 12 ⋅ 47 mod 100 = 64. The second step sets t to (12 + 64 ⋅ 17) / 100. Notice that 12 + 64 ⋅ 17 is 1100, a multiple of 100 as expected. t is set to 11, which is less than 17, so the final result is 11, which agrees with the computation of the previous section.
As another example, consider the product 7 ⋅ 15 mod 17 but with "R" = 10. Using the extended Euclidean algorithm, compute −5 ⋅ 10 + 3 ⋅ 17 = 1, so "N"′ will be −3 mod 10 = 7. The Montgomery forms of 7 and 15 are 70 mod 17 = 2 and 150 mod 17 = 14, respectively. Their product 28 is the input T to REDC, and since 28 < "RN" = 170, the assumptions of REDC are satisfied. To run REDC, set m to (28 mod 10) ⋅ 7 mod 10 = 56 mod 10 = 6. Then 28 + 6 ⋅ 17 = 130, so "t" = 13. Because 30 mod 17 = 13, this is the Montgomery form of 3 = 7 ⋅ 15 mod 17.
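A direct Python transcription of the REDC pseudocode (a sketch, with the same preconditions), reproducing both worked examples:
def redc(T, R, N, N_prime):
    # Return T * R^(-1) mod N, assuming gcd(R, N) = 1, N*N_prime ≡ -1 (mod R)
    # and 0 <= T < R*N.
    m = ((T % R) * N_prime) % R
    t = (T + m * N) // R
    return t - N if t >= N else t
print(redc(12, 100, 17, 47))   # 11: the first example (T = 3*4, the product of the Montgomery forms of 7 and 15)
print(redc(28, 10, 17, 7))     # 13: the second example with R = 10 (T = 2*14)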
Arithmetic in Montgomery form.
Many operations of interest modulo N can be expressed equally well in Montgomery form. Addition, subtraction, negation, comparison for equality, multiplication by an integer not in Montgomery form, and greatest common divisors with N may all be done with the standard algorithms. The Jacobi symbol can be calculated as formula_15 as long as formula_16 is stored.
When "R" > "N", most other arithmetic operations can be expressed in terms of REDC. This assumption implies that the product of two representatives mod N is less than RN, the exact hypothesis necessary for REDC to generate correct output. In particular, the product of "aR" mod "N" and "bR" mod "N" is REDC(("aR" mod "N")("bR" mod "N")). The combined operation of multiplication and REDC is often called "Montgomery multiplication".
Conversion into Montgomery form is done by computing REDC(("a" mod "N")("R"^2 mod "N")). Conversion out of Montgomery form is done by computing REDC("aR" mod "N"). The modular inverse of "aR" mod "N" is REDC(("aR" mod "N")^(−1)("R"^3 mod "N")). Modular exponentiation can be done using exponentiation by squaring by initializing the initial product to the Montgomery representation of 1, that is, to "R" mod "N", and by replacing the multiply and square steps by Montgomery multiplies.
Performing these operations requires knowing at least "N"′ and "R"^2 mod "N". When R is a power of a small positive integer b, "N"′ can be computed by Hensel's lemma: The inverse of N modulo b is computed by a naïve algorithm (for instance, if "b" = 2 then the inverse is 1), and Hensel's lemma is used repeatedly to find the inverse modulo higher and higher powers of b, stopping when the inverse modulo R is known; "N"′ is the negation of this inverse. The constants "R" mod "N" and "R"^3 mod "N" can be generated as REDC("R"^2 mod "N") and as REDC(("R"^2 mod "N")("R"^2 mod "N")). The fundamental operation is to compute REDC of a product. When standalone REDC is needed, it can be computed as REDC of a product with 1 mod "N".
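A sketch of modular exponentiation carried out in Montgomery form, reusing the redc() helper from the sketch in the previous section; the toy modulus and the choice R = 2^10 are assumptions made purely for illustration.
def montgomery_modexp(a, e, N, R, N_prime):
    # Compute a^e mod N, keeping intermediate values in Montgomery form.
    R2 = (R * R) % N                              # precomputed constant R^2 mod N
    a_bar = redc((a % N) * R2, R, N, N_prime)     # convert a to Montgomery form: aR mod N
    x_bar = R % N                                 # Montgomery form of 1
    while e > 0:
        if e & 1:
            x_bar = redc(x_bar * a_bar, R, N, N_prime)   # Montgomery multiply
        a_bar = redc(a_bar * a_bar, R, N, N_prime)       # Montgomery square
        e >>= 1
    return redc(x_bar, R, N, N_prime)             # convert the result out of Montgomery form
N, R = 997, 1 << 10                    # odd modulus and R a power of two with R > N
N_prime = (-pow(N, -1, R)) % R         # N * N_prime ≡ -1 (mod R)
print(montgomery_modexp(314, 271, N, R, N_prime) == pow(314, 271, N))   # True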
Montgomery arithmetic on multiprecision integers.
Most cryptographic applications require numbers that are hundreds or even thousands of bits long. Such numbers are too large to be stored in a single machine word. Typically, the hardware performs multiplication mod some base B, so performing larger multiplications requires combining several small multiplications. The base B is typically 2 for microelectronic applications, 2^8 for 8-bit firmware, or 2^32 or 2^64 for software applications.
The REDC algorithm requires products modulo R, and typically "R" > "N" so that REDC can be used to compute products. However, when R is a power of B, there is a variant of REDC which requires products only of machine word sized integers. Suppose that positive multi-precision integers are stored little endian, that is, x is stored as an array "x"[0], ..., "x"[ℓ - 1] such that 0 ≤ "x"["i"] < "B" for all i and "x" = ∑ "x"["i"] "B"^"i". The algorithm begins with a multiprecision integer T and reduces it one word at a time. First an appropriate multiple of N is added to make T divisible by B. Then a multiple of N is added to make T divisible by "B"^2, and so on. Eventually T is divisible by R, and after division by R the algorithm is in the same place as REDC was after the computation of t.
function MultiPrecisionREDC is
Input: Integer "N" with gcd("B", "N") = 1, stored as an array of "p" words,
Integer "R" = "B"^"r", --thus, "r" = log_"B" "R"
Integer "N"′ in [0, "B" − 1] such that "NN"′ ≡ −1 (mod "B"),
Integer "T" in the range 0 ≤ "T" < "RN", stored as an array of "r" + "p" words.
Output: Integer "S" in [0, "N" − 1] such that "TR"^(−1) ≡ "S" (mod "N"), stored as an array of "p" words.
Set "T"["r" + "p"] = 0 "(extra carry word)"
for 0 ≤ "i" < "r" do
"--loop1- Make T divisible by Bi+1"
"c" ← 0
"m" ← "T"["i"] ⋅ "N"′ mod "B"
for 0 ≤ "j" < "p" do
"--loop2- Add the m ⋅ N[j] and the carry from earlier, and find the new carry"
"x" ← "T"["i" + "j"] + "m" ⋅ "N"["j"] + "c"
"T"["i" + "j"] ← "x" mod "B"
"c" ← ⌊"x" / "B"⌋
end for
for "p" ≤ "j" ≤ "r" + "p" − "i" do
"--loop3- Continue carrying"
"x" ← "T"["i" + "j"] + "c"
"T"["i" + "j"] ← "x" mod "B"
"c" ← ⌊"x" / "B"⌋
end for
end for
for 0 ≤ "i" ≤ "p" do
"S"["i"] ← "T"["i" + "r"]
end for
if "S" ≥ "N" then
return "S" − "N"
else
return S
end if
end function
The final comparison and subtraction is done by the standard algorithms.
The above algorithm is correct for essentially the same reasons that REDC is correct. Each time through the i loop, m is chosen so that "T"["i"] + "mN"[0] is divisible by B. Then mNB^i is added to T. Because this quantity is zero mod N, adding it does not affect the value of "T" mod "N". If m_i denotes the value of m computed in the i-th iteration of the loop, then the algorithm sets S to ("T" + (∑ m_i B^i)"N") / "R". Because MultiPrecisionREDC and REDC produce the same output, the sum ∑ m_i B^i is the same as the choice of m that the REDC algorithm would make.
The last word of T, "T"["r" + "p"] (and consequently "S"["p"]), is used only to hold a carry, as the reduction result before the final subtraction is bounded by 0 ≤ "S" < 2"N". It follows that this extra carry word can be avoided completely if it is known in advance that "R" ≥ 2"N". On a typical binary implementation, this is equivalent to saying that this carry word can be avoided if the number of bits of N is smaller than the number of bits of R. Otherwise, the carry will be either zero or one. Depending upon the processor, it may be possible to store this word as a carry flag instead of a full-sized word.
It is possible to combine multiprecision multiplication and REDC into a single algorithm. This combined algorithm is usually called Montgomery multiplication. Several different implementations are described by Koç, Acar, and Kaliski. The algorithm may use as little as "p" + 2 words of storage (plus a carry bit).
As an example, let "B" = 10, "N" = 997, and "R" = 1000. Suppose that "a" = 314 and "b" = 271. The Montgomery representations of a and b are 314000 mod 997 = 942 and 271000 mod 997 = 813. Compute 942 ⋅ 813 = 765846. The initial input T to MultiPrecisionREDC will be [6, 4, 8, 5, 6, 7]. The number N will be represented as [7, 9, 9]. The extended Euclidean algorithm says that −299 ⋅ 10 + 3 ⋅ 997 = 1, so "N"′ will be 7.
i ← 0
m ← 6 ⋅ 7 mod 10 = 2
j T c
0 0485670 2 "(After first iteration of first loop)"
1 0485670 2
2 0485670 2
3 0487670 0 "(After first iteration of second loop)"
4 0487670 0
5 0487670 0
6 0487670 0
i ← 1
m ← 4 ⋅ 7 mod 10 = 8
j T c
0 0087670 6 "(After first iteration of first loop)"
1 0067670 8
2 0067670 8
3 0067470 1 "(After first iteration of second loop)"
4 0067480 0
5 0067480 0
i ← 2
m ← 6 ⋅ 7 mod 10 = 2
j T c
0 0007480 2 "(After first iteration of first loop)"
1 0007480 2
2 0007480 2
3 0007400 1 "(After first iteration of second loop)"
4 0007401 0
Therefore, before the final comparison and subtraction, "S" = 1047. The final subtraction yields the number 50. Since the Montgomery representation of 314 ⋅ 271 mod 997 = 349 is 349000 mod 997 = 50, this is the expected result.
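A Python transcription of MultiPrecisionREDC operating on little-endian digit arrays (a sketch following the pseudocode above), which reproduces this worked example:
def multiprecision_redc(T_words, N_words, B, r, N_prime):
    # Return S ≡ T * B^(-r) (mod N) with 0 <= S < N, where T and N are given as
    # little-endian digit arrays in base B and N*N_prime ≡ -1 (mod B).
    p = len(N_words)
    T = list(T_words) + [0] * (r + p + 1 - len(T_words))   # pad up to r + p words plus a carry word
    for i in range(r):
        c = 0
        m = (T[i] * N_prime) % B
        for j in range(p):                                  # add m * N * B^i, digit by digit
            x = T[i + j] + m * N_words[j] + c
            T[i + j], c = x % B, x // B
        for j in range(p, r + p - i + 1):                   # propagate the remaining carry
            x = T[i + j] + c
            T[i + j], c = x % B, x // B
    S = T[r:r + p + 1]                                      # keep the top p words plus the carry
    s_val = sum(d * B**k for k, d in enumerate(S))
    n_val = sum(d * B**k for k, d in enumerate(N_words))
    return s_val - n_val if s_val >= n_val else s_val
# B = 10, N = 997 = [7, 9, 9], R = 10^3, N' = 7, T = 942 * 813 = 765846 = [6, 4, 8, 5, 6, 7]:
print(multiprecision_redc([6, 4, 8, 5, 6, 7], [7, 9, 9], B=10, r=3, N_prime=7))   # 50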
When working in base 2, determining the correct m at each stage is particularly easy: m is simply equal to the current working bit (if the bit is 0, then m is zero; if it is 1, then m is one). Furthermore, because each step of MultiPrecisionREDC requires knowing only the lowest bit, Montgomery multiplication can be easily combined with a carry-save adder.
Side-channel attacks.
Because Montgomery reduction avoids the correction steps required in conventional division when quotient digit estimates are inaccurate, it is mostly free of the conditional branches which are the primary targets of timing and power side-channel attacks; the sequence of instructions executed is independent of the input operand values. The only exception is the final conditional subtraction of the modulus, but it is easily modified (to always subtract something, either the modulus or zero) to make it resistant. It is of course necessary to ensure that the exponentiation algorithm built around the multiplication primitive is also resistant.
| [
{
"math_id": 0,
"text": "\\{ a + kN \\colon k \\in \\mathbf{Z} \\},"
},
{
"math_id": 1,
"text": "\\bar a \\equiv \\bar b \\pmod{N}."
},
{
"math_id": 2,
"text": "ab = qN + r,"
},
{
"math_id": 3,
"text": "\\lfloor ab / N \\rfloor"
},
{
"math_id": 4,
"text": "N=17"
},
{
"math_id": 5,
"text": "7 \\cdot 15 = 105"
},
{
"math_id": 6,
"text": "\\lfloor 105 / 17 \\rfloor = 6"
},
{
"math_id": 7,
"text": "105 - 6 \\cdot 17 = 105 - 102 = 3"
},
{
"math_id": 8,
"text": "aR + bR = (a + b)R,"
},
{
"math_id": 9,
"text": "aR - bR = (a - b)R."
},
{
"math_id": 10,
"text": "(aR \\bmod N)(bR \\bmod N) \\bmod N = (abR)R \\bmod N."
},
{
"math_id": 11,
"text": "(aR \\bmod N)(bR \\bmod N)R' \\equiv (aR)(bR)R^{-1} \\equiv (ab)R \\pmod{N}."
},
{
"math_id": 12,
"text": "RR' - NN' = 1."
},
{
"math_id": 13,
"text": "T + mN \\equiv T + (((T \\bmod R)N') \\bmod R)N \\equiv T + T N' N \\equiv T - T \\equiv 0 \\pmod{R}."
},
{
"math_id": 14,
"text": "t \\equiv (T + mN)R^{-1} \\equiv TR^{-1} + (mR^{-1})N \\equiv TR^{-1} \\pmod{N}."
},
{
"math_id": 15,
"text": "\\big(\\tfrac{a}{N}\\big) = \\big(\\tfrac{aR}{N}\\big) / \\big(\\tfrac{R}{N}\\big)"
},
{
"math_id": 16,
"text": "\\big(\\tfrac{R}{N}\\big)"
}
] | https://en.wikipedia.org/wiki?curid=904378 |
90446 | Equality (mathematics) | Relationship asserting that two quantities are the same
In mathematics, equality is a relationship between two quantities or, more generally, two mathematical expressions, asserting that the quantities have the same value, or that the expressions represent the same mathematical object. Equality between "A" and "B" is written "A" = "B", and pronounced ""A" equals "B"". In this equality, "A" and "B" are the "members" of the equality and are distinguished by calling them "left-hand side" or "left member", and "right-hand side" or "right member". Two objects that are not equal are said to be distinct.
A formula such as formula_0, where x and y are any expressions, means that x and y denote or represent the same object. For example,
formula_1
are two notations for the same number. Similarly, using set builder notation,
formula_2
since the two sets have the same elements. (This equality results from the axiom of extensionality that is often expressed as "two sets that have the same elements are equal".)
The truth of an equality depends on an interpretation of its members. In the above examples, the equalities are true if the members are interpreted as numbers or sets, but are false if the members are interpreted as expressions or sequences of symbols.
An identity, such as formula_3, means that if x is replaced with any number, then the two expressions take the same value. This may also be interpreted as saying that the two sides of the equals sign represent the same function (equality of functions), or that the two expressions denote the same polynomial (equality of polynomials).
Etymology.
The word is derived from the Latin "aequālis" ("equal", "like", "comparable", "similar"), which itself stems from "aequus" ("equal", "level", "fair", "just").
Basic properties.
If restricted to the elements of a given set formula_15, the first three basic properties of equality (reflexivity, symmetry, and transitivity) make equality an equivalence relation on formula_15. In fact, equality is the unique equivalence relation on formula_15 whose equivalence classes are all singletons.
Equality as predicate.
In logic, a predicate is a proposition which may have some free variables. Equality is a predicate, which may be true for some values of the variables (if any) and false for other values. More specifically, equality is a binary relation (i.e., a two-argument predicate) which may produce a truth value ("true" or "false") from its arguments. In computer programming, equality is called a Boolean-valued expression, and its computation from the two expressions is known as comparison.
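A small illustration of equality as a Boolean-valued comparison (an added example, not from the source):

```python
print(1.5 == 3 / 2)                           # True: two expressions, one value
print({x for x in range(1, 4)} == {1, 2, 3})  # True: sets with the same elements are equal
print("1.5" == "3/2")                         # False: as strings of symbols they differ
```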
Identities.
When "A" and "B" may be viewed as functions of some variables, then "A" = "B" means that "A" and "B" define the same function. Such an equality of functions is sometimes called an identity. An example is formula_16 Sometimes, but not always, an identity is written with a triple bar: formula_17
Equations.
An equation is the problem of finding values of some variable, called unknown, for which the specified equality is true. Each value of the unknown for which the equation holds is called a solution of the given equation; also stated as satisfying the equation. For example, the equation formula_18 has the values formula_19 and formula_20 as its only solutions. The terminology is used similarly for equations with several unknowns.
An equation can be used to define a set. For example, the set of all solution pairs formula_21 of the equation formula_22 forms the unit circle in analytic geometry; therefore, this equation is called the equation of the unit circle.
An identity is an equality that is true for all values of its variables in a given domain. An "equation" may sometimes mean an identity, but more often than not, it specifies a subset of the variable space to be the subset where the equation is true. There is no standard notation that distinguishes an equation from an identity, or other use of the equality relation: one has to guess an appropriate interpretation from the semantics of expressions and the context.
See also: Equation solving
In logic.
In mathematical logic and mathematical philosophy, equality is often described through the following properties:
Law of identity: formula_23; that is, each thing is equal to itself.
Substitution property: formula_26 for every formula_27; that is, if formula_28, then formula_29 implies formula_30.
For example: For all real numbers "a" and "b", if "a" = "b", then "a" ≥ 0 implies "b" ≥ 0 (here, formula_31 is "x" ≥ 0)
These properties offer a formal reinterpretation of equality from how it is defined in standard Zermelo–Fraenkel set theory (ZFC) or other formal foundations. In ZFC, equality only means that two sets have the same elements. However, outside of set theory, mathematicians don't tend to view their objects of interest as sets. For instance, many mathematicians would say that the expression "formula_32" (see union) is an abuse of notation or meaningless. This is a more abstracted framework which can be grounded in ZFC (that is, both axioms can be proved within ZFC as well as most other formal foundations), but is closer to how most mathematicians use equality.
Note that this says "Equality implies these two properties", not that "These properties define equality"; this is intentional. This makes it an "incomplete axiomatization" of equality. That is, it does not say what equality "is", only what "equality" must satisfy. However, the two axioms as stated are still generally useful, even as an "incomplete axiomatization" of equality, as they are usually sufficient for deducing most properties of equality that mathematicians care about. (See the following subsection.)
If these properties were to define a "complete axiomatization" of equality (that is, if they were to define equality), then the converse of the second statement would have to be true. The converse of the Substitution property is "the identity of indiscernibles", which states that two distinct things cannot have all their properties in common. In mathematics, the "identity of indiscernibles" is usually rejected since indiscernibles in mathematical logic are not necessarily forbidden. Set equality in ZFC is capable of declaring these indiscernibles as not equal, but an equality defined by these properties is not. Thus these properties form a strictly weaker notion of equality than set equality in ZFC. Outside of pure math, the "identity of indiscernibles" has attracted much controversy and criticism, especially from corpuscular philosophy and quantum mechanics. This is why the properties are said to not form a complete axiomatization.
However, apart from cases dealing with indiscernibles, these properties taken as axioms of equality are equivalent to equality as defined in ZFC.
These are sometimes taken as the definition of equality, such as in some areas of first-order logic.
Derivations of basic properties.
The "Law of identity" is distinct from reflexivity in two main ways: first, the Law of Identity applies only to cases of equality, and second, it is not restricted to elements of a set. However, many mathematicians refer to both as "Reflexivity", which is generally harmless.
This is also sometimes included in the axioms of equality, but isn't necessary as it can be deduced from the other two axioms as shown above.
Approximate equality.
There are some logic systems that do not have any notion of equality. This reflects the undecidability of the equality of two real numbers, defined by formulas involving the integers, the basic arithmetic operations, the logarithm and the exponential function. In other words, there cannot exist any algorithm for deciding such an equality (see Richardson's theorem).
The binary relation "is approximately equal" (denoted by the symbol formula_51) between real numbers or other things, even if more precisely defined, is not transitive (since many small differences can add up to something big). However, equality almost everywhere "is" transitive.
A questionable equality under test may be denoted using the formula_52 symbol.
Relation with equivalence, congruence, and isomorphism.
Viewed as a relation, equality is the archetype of the more general concept of an equivalence relation on a set: those binary relations that are reflexive, symmetric and transitive. The identity relation is an equivalence relation. Conversely, let "R" be an equivalence relation, and let us denote by "xR" the equivalence class of "x", consisting of all elements "z" such that "x R z". Then the relation "x R y" is equivalent to the equality "xR" = "yR". It follows that equality is the finest equivalence relation on any set "S" in the sense that it is the relation that has the smallest equivalence classes (every class is reduced to a single element).
In some contexts, equality is sharply distinguished from "equivalence" or "isomorphism." For example, one may distinguish "fractions" from "rational numbers," the latter being equivalence classes of fractions: the fractions formula_53 and formula_54 are distinct as fractions (as different strings of symbols) but they "represent" the same rational number (the same point on a number line). This distinction gives rise to the notion of a quotient set.
Similarly, the sets
formula_55 and formula_56
are not equal sets – the first consists of letters, while the second consists of numbers – but they are both sets of three elements and thus isomorphic, meaning that there is a bijection between them. For example
formula_57
However, there are other choices of isomorphism, such as
formula_58
and these sets cannot be identified without making such a choice – any statement that identifies them "depends on choice of identification". This distinction, between equality and isomorphism, is of fundamental importance in category theory and is one motivation for the development of category theory.
In some cases, one may consider as equal two mathematical objects that are only equivalent for the properties and structure being considered. The word congruence (and the associated symbol formula_59) is frequently used for this kind of equality, and is defined as the quotient set of the isomorphism classes between the objects. In geometry for instance, two geometric shapes are said to be equal or congruent when one may be moved to coincide with the other, and the equality/congruence relation is the isomorphism classes of isometries between shapes. Similarly to isomorphisms of sets, the difference between isomorphisms and equality/congruence between such mathematical objects with properties and structure was one motivation for the development of category theory, as well as for homotopy type theory and univalent foundations.
Equality in set theory.
Equality of sets is axiomatized in set theory in two different ways, depending on whether the axioms are based on a first-order language with or without equality.
Set equality based on first-order logic with equality.
In first-order logic with equality, the axiom of extensionality states that two sets which "contain" the same elements are the same set.
Incorporating half of the work into the first-order logic may be regarded as a mere matter of convenience, as noted by Lévy.
"The reason why we take up first-order predicate calculus "with equality" is a matter of convenience; by this we save the labor of defining equality and proving all its properties; this burden is now assumed by the logic."
Set equality based on first-order logic without equality.
In first-order logic without equality, two sets are "defined" to be equal if they contain the same elements. Then the axiom of extensionality states that two equal sets "are contained in" the same sets.
Notes.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "x=y,"
},
{
"math_id": 1,
"text": "1.5= 3/2,"
},
{
"math_id": 2,
"text": "\\{x \\mid x\\in \\Z \\text{ and } 0<x\\le 3\\} = \\{1,2,3\\},"
},
{
"math_id": 3,
"text": "(x+1)^2=x^2+2x+1,"
},
{
"math_id": 4,
"text": "f(x)"
},
{
"math_id": 5,
"text": "f(a) = f(b)"
},
{
"math_id": 6,
"text": "2a-5 = 2b-5"
},
{
"math_id": 7,
"text": "f(x) = 2x-5"
},
{
"math_id": 8,
"text": "a^2 = 2b^2"
},
{
"math_id": 9,
"text": "a^2 /b^2 = 2"
},
{
"math_id": 10,
"text": "f(x,y) = x/y^2"
},
{
"math_id": 11,
"text": "y = b"
},
{
"math_id": 12,
"text": "g(a) = h(a)"
},
{
"math_id": 13,
"text": "\\frac{d}{da}g(a) = \\frac{d}{da}h(a)"
},
{
"math_id": 14,
"text": "f(x) = \\frac{dx}{da}"
},
{
"math_id": 15,
"text": "S"
},
{
"math_id": 16,
"text": "\\left(x + 1\\right)\\left(x + 1\\right) = x^2 + 2 x + 1."
},
{
"math_id": 17,
"text": "\\left(x + 1\\right)\\left(x + 1\\right) \\equiv x^2 + 2 x + 1."
},
{
"math_id": 18,
"text": "x^2 - 6x + 5=0"
},
{
"math_id": 19,
"text": "x=1"
},
{
"math_id": 20,
"text": "x=5"
},
{
"math_id": 21,
"text": "(x,y)"
},
{
"math_id": 22,
"text": "x^2 + y^2 = 1"
},
{
"math_id": 23,
"text": "\\forall a(a = a)"
},
{
"math_id": 24,
"text": "a"
},
{
"math_id": 25,
"text": "a = a"
},
{
"math_id": 26,
"text": "(a=b) \\implies \\bigl[ \\phi(a) \\Rightarrow \\phi(b) \\bigr] "
},
{
"math_id": 27,
"text": "\\phi(x),"
},
{
"math_id": 28,
"text": "a=b"
},
{
"math_id": 29,
"text": "\\phi(a)"
},
{
"math_id": 30,
"text": "\\phi(b)"
},
{
"math_id": 31,
"text": "\\phi(x)"
},
{
"math_id": 32,
"text": "1 \\cup 2"
},
{
"math_id": 33,
"text": "xRy \\Leftrightarrow x=y"
},
{
"math_id": 34,
"text": "a \\in S"
},
{
"math_id": 35,
"text": "a=a"
},
{
"math_id": 36,
"text": "aRa"
},
{
"math_id": 37,
"text": "a,b \\in S"
},
{
"math_id": 38,
"text": "aRb"
},
{
"math_id": 39,
"text": "\\phi(x) : xRa"
},
{
"math_id": 40,
"text": "(a=b) \\implies (aRa \\Rightarrow bRa)"
},
{
"math_id": 41,
"text": "bRa"
},
{
"math_id": 42,
"text": "a,b,c \\in S"
},
{
"math_id": 43,
"text": "bRc"
},
{
"math_id": 44,
"text": "\\phi(x): xRc"
},
{
"math_id": 45,
"text": "(b=a) \\implies (bRc \\Rightarrow aRc)"
},
{
"math_id": 46,
"text": "b=a"
},
{
"math_id": 47,
"text": "aRc"
},
{
"math_id": 48,
"text": "\\phi(x): f(a) = f(x)"
},
{
"math_id": 49,
"text": "(a=b) \\implies [(f(a) = f(a)) \\Rightarrow (f(a) = f(b))]"
},
{
"math_id": 50,
"text": "f(a) = f(a)"
},
{
"math_id": 51,
"text": "\\approx"
},
{
"math_id": 52,
"text": "\\stackrel{?}{=} "
},
{
"math_id": 53,
"text": "1/2"
},
{
"math_id": 54,
"text": "2/4"
},
{
"math_id": 55,
"text": "\\{\\text{A}, \\text{B}, \\text{C}\\} "
},
{
"math_id": 56,
"text": "\\{ 1, 2, 3 \\} "
},
{
"math_id": 57,
"text": "\\text{A} \\mapsto 1, \\text{B} \\mapsto 2, \\text{C} \\mapsto 3."
},
{
"math_id": 58,
"text": "\\text{A} \\mapsto 3, \\text{B} \\mapsto 2, \\text{C} \\mapsto 1,"
},
{
"math_id": 59,
"text": "\\cong"
},
{
"math_id": 60,
"text": "x = y \\implies \\forall z, (z \\in x \\iff z \\in y)"
},
{
"math_id": 61,
"text": "x = y \\implies \\forall z, (x \\in z \\iff y \\in z)"
},
{
"math_id": 62,
"text": "(\\forall z, (z \\in x \\iff z \\in y)) \\implies x = y"
},
{
"math_id": 63,
"text": "(x = y) \\ := \\ \\forall z, (z \\in x \\iff z \\in y)"
}
] | https://en.wikipedia.org/wiki?curid=90446 |
904490 | Pure submodule | Module components with flexibility in module theory
In mathematics, especially in the field of module theory, the concept of pure submodule provides a generalization of direct summand, a type of particularly well-behaved piece of a module. Pure modules are complementary to flat modules and generalize Prüfer's notion of pure subgroups. While flat modules are those modules which leave short exact sequences exact after tensoring, a pure submodule defines a short exact sequence (known as a pure exact sequence) that remains exact after tensoring with any module. Similarly a flat module is a direct limit of projective modules, and a pure exact sequence is a direct limit of split exact sequences.
Definition.
Let "R" be a ring (associative, with 1), let "M" be a (left) module over "R", let "P" be a submodule of "M" and let "i": "P" → "M" be the natural injective map. Then "P" is a pure submodule of "M" if, for any (right) "R"-module "X", the natural induced map id"X" ⊗ "i" : "X" ⊗ "P" → "X" ⊗ "M" (where the tensor products are taken over "R") is injective.
Analogously, a short exact sequence
formula_0
of (left) "R"-modules is pure exact if the sequence stays exact when tensored with any (right) "R"-module "X". This is equivalent to saying that "f"("A") is a pure submodule of "B".
Equivalent characterizations.
Purity of a submodule can also be expressed element-wise; it is really a statement about the solvability of certain systems of linear equations. Specifically, "P" is pure in "M" if and only if the following condition holds: for any "m"-by-"n" matrix ("a""ij") with entries in "R", and any set "y"1, ..., "y""m" of elements of "P", if there exist elements "x"1, ..., "x""n" in "M" such that
formula_1
then there also exist elements "x"1′, ..., "x""n"′ in "P" such that
formula_2
Another characterization is: a sequence is pure exact if and only if it is the filtered colimit (also known as direct limit) of split exact sequences
formula_3
Properties.
Suppose
formula_0
is a short exact sequence of "R"-modules, then:
If formula_0 is pure-exact, and "F" is a finitely presented "R"-module, then every homomorphism from "F" to "C" can be lifted to "B", i.e. to every "u" : "F" → "C" there exists "v" : "F" → "B" such that "gv"="u". | [
{
"math_id": 0,
"text": "0 \\longrightarrow A\\,\\ \\stackrel{f}{\\longrightarrow}\\ B\\,\\ \\stackrel{g}{\\longrightarrow}\\ C \\longrightarrow 0"
},
{
"math_id": 1,
"text": "\\sum_{j=1}^n a_{ij}x_j = y_i \\qquad\\mbox{ for } i=1,\\ldots,m"
},
{
"math_id": 2,
"text": "\\sum_{j=1}^n a_{ij}x'_j = y_i \\qquad\\mbox{ for } i=1,\\ldots,m"
},
{
"math_id": 3,
"text": "0 \\longrightarrow A_i \\longrightarrow B_i \\longrightarrow C_i \\longrightarrow 0."
}
] | https://en.wikipedia.org/wiki?curid=904490 |
90465 | Super-Poulet number | A super-Poulet number is a Poulet number, or pseudoprime to base 2, whose every divisor "d" divides
2^"d" − 2.
For example, 341 is a super-Poulet number: it has positive divisors {1, 11, 31, 341} and we have:
(2^11 - 2) / 11 = 2046 / 11 = 186
(2^31 - 2) / 31 = 2147483646 / 31 = 69273666
(2^341 - 2) / 341 = 13136332798696798888899954724741608669335164206654835981818117894215788100763407304286671514789484550
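A short checker for this definition (an added sketch; the divisor enumeration is naive and only intended for small numbers such as 341):

```python
def divisors(n):
    divs = set()
    d = 1
    while d * d <= n:
        if n % d == 0:
            divs.update((d, n // d))
        d += 1
    return sorted(divs)

def is_super_poulet(n):
    # Composite n such that every divisor d of n divides 2**d - 2.
    divs = divisors(n)
    if n < 4 or len(divs) == 2:      # exclude 1, 2, 3 and all primes
        return False
    return all(pow(2, d, d) == 2 % d for d in divs)

print(is_super_poulet(341))                               # True
print([n for n in range(4, 3000) if is_super_poulet(n)])  # [341, 1387, 2047, 2701]
```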
When formula_0 is not prime, then it and every composite divisor of it are pseudoprimes to base 2, and hence super-Poulet numbers.
The super-Poulet numbers below 10,000 are (sequence in the OEIS):
Super-Poulet numbers with 3 or more distinct prime divisors.
It is relatively easy to construct super-Poulet numbers with 3 distinct prime divisors: if the three pairwise products of three primes are all Poulet numbers, then the product of the three primes is a super-Poulet number.
Example:
2701 = 37 * 73 is a Poulet number,
4033 = 37 * 109 is a Poulet number,
7957 = 73 * 109 is a Poulet number;
so 294409 = 37 * 73 * 109 is a super-Poulet number.
Super-Poulet numbers with up to 7 distinct prime factors can be obtained with the following sets of primes:
For example, 1118863200025063181061994266818401 = 6421 * 12841 * 51361 * 57781 * 115561 * 192601 * 205441 is a super-Poulet number with 7 distinct prime factors, whose 120 composite divisors are all Poulet numbers. | [
{
"math_id": 0,
"text": " \\frac{ \\Phi_n(2)}{gcd(n, \\Phi_n(2))}"
}
] | https://en.wikipedia.org/wiki?curid=90465 |
9049338 | Gustav Herglotz | German mathematician
Gustav Herglotz (2 February 1881 – 22 March 1953) was a German Bohemian mathematician and physicist, best known for his works on the theory of relativity and seismology.
Biography.
Gustav Ferdinand Joseph Wenzel Herglotz was born in Volary (house no. 28) to the public notary Gustav Herglotz (also a Doctor of Law) and his wife Maria née Wachtel. The family were Sudeten Germans. He began studying mathematics and astronomy at the University of Vienna in 1899, and attended lectures by Ludwig Boltzmann. During his studies he formed friendships with his fellow students Paul Ehrenfest, Hans Hahn and Heinrich Tietze. In 1900 he went to the LMU Munich and received his doctorate in 1902 under Hugo von Seeliger. Afterwards, he went to the University of Göttingen, where he habilitated under Felix Klein. In 1904 he became Privatdozent for Astronomy and Mathematics there, and in 1907 Professor extraordinarius. In 1908 he became Professor extraordinarius in Vienna, and in 1909 at the University of Leipzig. From 1925 until becoming emeritus in 1947 he was again in Göttingen, as the successor of Carl Runge in the chair of applied mathematics. One of his students was Emil Artin.
Work.
Herglotz worked in the fields of seismology, number theory, celestial mechanics, theory of electrons, special relativity, general relativity, hydrodynamics, and refraction theory.
Among his mathematical results is the Herglotz representation theorem: every holomorphic function "f" on the open unit disc "D" with positive real part and "f"(0) = 1 can be written as
formula_1
The theorem also asserts that the probability measure is unique to "f".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "R_3"
},
{
"math_id": 1,
"text": "\\forall z \\in D \\ \\ f (z) \\ = \\ \\int_{\\partial D} \\frac{\\lambda + z}{\\lambda - z}\\ d\\mu(\\lambda)."
}
] | https://en.wikipedia.org/wiki?curid=9049338 |
904940 | First-countable space | Topological space where each point has a countable neighbourhood basis
In topology, a branch of mathematics, a first-countable space is a topological space satisfying the "first axiom of countability". Specifically, a space formula_0 is said to be first-countable if each point has a countable neighbourhood basis (local base). That is, for each point formula_1 in formula_0 there exists a sequence formula_2 of neighbourhoods of formula_1 such that for any neighbourhood formula_3 of formula_1 there exists an integer formula_4 with formula_5 contained in formula_6
Since every neighborhood of any point contains an open neighborhood of that point, the neighbourhood basis can be chosen without loss of generality to consist of open neighborhoods.
Examples and counterexamples.
The majority of 'everyday' spaces in mathematics are first-countable. In particular, every metric space is first-countable. To see this, note that the set of open balls centered at formula_1 with radius formula_7 for positive integers "n" forms a countable local base at formula_8
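A one-line version of the standard argument (a sketch added here; the symbols U and ε are introduced only for this argument):

```latex
% In a metric space, given any neighbourhood U of x, choose \varepsilon > 0 with
% B(x, \varepsilon) \subseteq U; then every integer n > 1/\varepsilon satisfies
\[
  B\!\left(x, \tfrac{1}{n}\right) \subseteq B(x, \varepsilon) \subseteq U,
\]
% so the countably many balls B(x, 1/n) already form a local base at x.
```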
An example of a space that is not first-countable is the cofinite topology on an uncountable set (such as the real line). More generally, the Zariski topology on an algebraic variety over an uncountable field is not first-countable.
Another counterexample is the ordinal space formula_9 where formula_10 is the first uncountable ordinal number. The element formula_10 is a limit point of the subset formula_11 even though no sequence of elements in formula_11 has the element formula_10 as its limit. In particular, the point formula_10 in the space formula_9 does not have a countable local base. Since formula_10 is the only such point, however, the subspace formula_12 is first-countable.
The quotient space formula_13 where the natural numbers on the real line are identified as a single point is not first countable. However, this space has the property that for any subset formula_14 and every element formula_1 in the closure of formula_15 there is a sequence in A converging to formula_8 A space with this sequence property is sometimes called a Fréchet–Urysohn space.
First-countability is strictly weaker than second-countability. Every second-countable space is first-countable, but any uncountable discrete space is first-countable but not second-countable.
Properties.
One of the most important properties of first-countable spaces is that given a subset formula_15 a point formula_1 lies in the closure of formula_14 if and only if there exists a sequence formula_16 in formula_14 that converges to formula_8 (In other words, every first-countable space is a Fréchet-Urysohn space and thus also a sequential space.) This has consequences for limits and continuity. In particular, if formula_17 is a function on a first-countable space, then formula_17 has a limit formula_18 at the point formula_1 if and only if for every sequence formula_19 where formula_20 for all formula_21 we have formula_22 Also, if formula_17 is a function on a first-countable space, then formula_17 is continuous if and only if whenever formula_19 then formula_23
In first-countable spaces, sequential compactness and countable compactness are equivalent properties. However, there exist examples of sequentially compact, first-countable spaces that are not compact (these are necessarily not metrizable spaces). One such space is the ordinal space formula_24 Every first-countable space is compactly generated.
Every subspace of a first-countable space is first-countable. Any countable product of first-countable spaces is first-countable, although uncountable products need not be.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "N_1, N_2, \\ldots"
},
{
"math_id": 3,
"text": "N"
},
{
"math_id": 4,
"text": "i"
},
{
"math_id": 5,
"text": "N_i"
},
{
"math_id": 6,
"text": "N."
},
{
"math_id": 7,
"text": "1/n"
},
{
"math_id": 8,
"text": "x."
},
{
"math_id": 9,
"text": "\\omega_1 + 1 = \\left[0, \\omega_1\\right]"
},
{
"math_id": 10,
"text": "\\omega_1"
},
{
"math_id": 11,
"text": "\\left[0, \\omega_1\\right)"
},
{
"math_id": 12,
"text": "\\omega_1 = \\left[0, \\omega_1\\right)"
},
{
"math_id": 13,
"text": "\\R / \\N"
},
{
"math_id": 14,
"text": "A"
},
{
"math_id": 15,
"text": "A,"
},
{
"math_id": 16,
"text": "\\left(x_n\\right)_{n=1}^{\\infty}"
},
{
"math_id": 17,
"text": "f"
},
{
"math_id": 18,
"text": "L"
},
{
"math_id": 19,
"text": "x_n \\to x,"
},
{
"math_id": 20,
"text": "x_n \\neq x"
},
{
"math_id": 21,
"text": "n,"
},
{
"math_id": 22,
"text": "f\\left(x_n\\right) \\to L."
},
{
"math_id": 23,
"text": "f\\left(x_n\\right) \\to f(x)."
},
{
"math_id": 24,
"text": "\\left[0, \\omega_1\\right)."
}
] | https://en.wikipedia.org/wiki?curid=904940 |
90500 | Boosting (machine learning) | Method in machine learning
<templatestyles src="Machine learning/styles.css"/>
In machine learning, boosting is an ensemble meta-algorithm used in supervised learning primarily to reduce bias (and also variance), and a family of machine learning algorithms that convert weak learners into strong ones.
The concept of boosting is based on the question posed by Kearns and Valiant (1988, 1989): "Can a set of weak learners create a single strong learner?" A weak learner is defined to be a classifier that is only slightly correlated with the true classification (it can label examples better than random guessing). In contrast, a strong learner is a classifier that is arbitrarily well-correlated with the true classification.
Robert Schapire answered the question posed by Kearns and Valiant in the affirmative in a paper published in 1990. This has had significant ramifications in machine learning and statistics, most notably leading to the development of boosting.
When first introduced, the "hypothesis boosting problem" simply referred to the process of turning a weak learner into a strong learner. "Informally, [the hypothesis boosting] problem asks whether an efficient learning algorithm […] that outputs a hypothesis whose performance is only slightly better than random guessing [i.e. a weak learner] implies the existence of an efficient algorithm that outputs a hypothesis of arbitrary accuracy [i.e. a strong learner]." Algorithms that achieve hypothesis boosting quickly became simply known as "boosting". Freund and Schapire's arcing (Adapt[at]ive Resampling and Combining), as a general technique, is more or less synonymous with boosting.
Boosting algorithms.
While boosting is not algorithmically constrained, most boosting algorithms consist of iteratively learning weak classifiers with respect to a distribution and adding them to a final strong classifier. When they are added, they are weighted in a way that is related to the weak learners' accuracy. After a weak learner is added, the data weights are readjusted, known as "re-weighting". Misclassified input data gain a higher weight and examples that are classified correctly lose weight. Thus, future weak learners focus more on the examples that previous weak learners misclassified.
There are many boosting algorithms. The original ones, proposed by Robert Schapire (a recursive majority gate formulation), and Yoav Freund (boost by majority), were not adaptive and could not take full advantage of the weak learners. Schapire and Freund then developed AdaBoost, an adaptive boosting algorithm that won the prestigious Gödel Prize.
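The reweighting scheme described above can be sketched as follows for binary labels in {-1, +1}, with scikit-learn decision stumps standing in for the weak learners (an AdaBoost-style illustration, not a reference implementation; NumPy and scikit-learn are assumed to be available):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, n_rounds=50):
    """Minimal AdaBoost-style sketch for labels y in {-1, +1}."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    w = np.full(len(y), 1.0 / len(y))            # uniform data weights to start
    learners, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = float(np.dot(w, pred != y))        # weighted training error
        if err <= 0.0 or err >= 0.5:             # perfect, or no better than chance
            break
        alpha = 0.5 * np.log((1.0 - err) / err)  # this learner's vote
        w *= np.exp(-alpha * y * pred)           # misclassified examples gain weight
        w /= w.sum()                             # the "re-weighting" step
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas

def adaboost_predict(learners, alphas, X):
    score = sum(a * h.predict(np.asarray(X, dtype=float)) for a, h in zip(alphas, learners))
    return np.where(score >= 0, 1, -1)           # sign of the weighted vote
```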
Only algorithms that are provable boosting algorithms in the probably approximately correct learning formulation can accurately be called "boosting algorithms". Other algorithms that are similar in spirit to boosting algorithms are sometimes called "leveraging algorithms", although they are also sometimes incorrectly called boosting algorithms.
The main variation between many boosting algorithms is their method of weighting training data points and hypotheses. AdaBoost is very popular and the most significant historically as it was the first algorithm that could adapt to the weak learners. It is often the basis of introductory coverage of boosting in university machine learning courses. There are many more recent algorithms such as LPBoost, TotalBoost, BrownBoost, xgboost, MadaBoost, LogitBoost, and others. Many boosting algorithms fit into the AnyBoost framework, which shows that boosting performs gradient descent in a function space using a convex cost function.
Object categorization in computer vision.
Given images containing various known objects in the world, a classifier can be learned from them to automatically classify the objects in future images. Simple classifiers built based on some image feature of the object tend to be weak in categorization performance. Using boosting methods for object categorization is a way to unify the weak classifiers in a special way to boost the overall ability of categorization.
Problem of object categorization.
Object categorization is a typical task of computer vision that involves determining whether or not an image contains some specific category of object. The idea is closely related with recognition, identification, and detection. Appearance based object categorization typically contains feature extraction, learning a classifier, and applying the classifier to new examples. There are many ways to represent a category of objects, e.g. from shape analysis, bag of words models, or local descriptors such as SIFT, etc. Examples of supervised classifiers are Naive Bayes classifiers, support vector machines, mixtures of Gaussians, and neural networks. However, research has shown that object categories and their locations in images can be discovered in an unsupervised manner as well.
Status quo for object categorization.
The recognition of object categories in images is a challenging problem in computer vision, especially when the number of categories is large. This is due to high intra class variability and the need for generalization across variations of objects within the same category. Objects within one category may look quite different. Even the same object may appear unalike under different viewpoint, scale, and illumination. Background clutter and partial occlusion add difficulties to recognition as well. Humans are able to recognize thousands of object types, whereas most of the existing object recognition systems are trained to recognize only a few, e.g. human faces, cars, simple objects, etc. Research has been very active on dealing with more categories and enabling incremental additions of new categories, and although the general problem remains unsolved, several multi-category objects detectors (for up to hundreds or thousands of categories) have been developed. One means is by feature sharing and boosting.
Boosting for binary categorization.
AdaBoost can be used for face detection as an example of binary categorization. The two categories are faces versus background. The general algorithm is as follows:
After boosting, a classifier constructed from 200 features could yield a 95% detection rate under a formula_0 false positive rate.
Another application of boosting for binary categorization is a system that detects pedestrians using patterns of motion and appearance. This work is the first to combine both motion information and appearance information as features to detect a walking person. It takes a similar approach to the Viola-Jones object detection framework.
Boosting for multi-class categorization.
Compared with binary categorization, multi-class categorization looks for common features that can be shared across the categories at the same time. They turn out to be more generic, edge-like features. During learning, the detectors for each category can be trained jointly. Compared with training separately, joint training generalizes better, needs less training data, and requires fewer features to achieve the same performance.
The main flow of the algorithm is similar to the binary case. What is different is that a measure of the joint training error shall be defined in advance. During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories shall be encouraged). This can be done via converting multi-class classification into a binary one (a set of categories versus the rest), or by introducing a penalty error from the categories that do not have the feature of the classifier.
In the paper "Sharing visual features for multiclass and multiview object detection", A. Torralba et al. used GentleBoost for boosting and showed that, when training data is limited, learning via sharing features does a much better job than no sharing, given the same number of boosting rounds. Also, for a given performance level, the total number of features required (and therefore the run time cost of the classifier) for the feature-sharing detectors is observed to scale approximately logarithmically with the number of classes, i.e., slower than the linear growth in the non-sharing case. Similar results are shown in the paper "Incremental learning of object detectors using a visual shape alphabet", yet the authors used AdaBoost for boosting.
Convex vs. non-convex boosting algorithms.
Boosting algorithms can be based on convex or non-convex optimization algorithms. Convex algorithms, such as AdaBoost and LogitBoost, can be "defeated" by random noise such that they can't learn basic and learnable combinations of weak hypotheses. This limitation was pointed out by Long & Servedio in 2008. However, by 2009, multiple authors demonstrated that boosting algorithms based on non-convex optimization, such as BrownBoost, can learn from noisy datasets and can specifically learn the underlying classifier of the Long–Servedio dataset.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "10^{-5}"
}
] | https://en.wikipedia.org/wiki?curid=90500 |
9050760 | Multi-compartment model | Type of mathematical model
A multi-compartment model is a type of mathematical model used for describing the way materials or energies are transmitted among the "compartments" of a system. Sometimes, the physical system that we try to model in equations is too complex, so it is much easier to discretize the problem and reduce the number of parameters. Each compartment is assumed to be a homogeneous entity within which the entities being modeled are equivalent. A multi-compartment model is classified as a lumped parameters model. Similar to more general mathematical models, multi-compartment models can treat variables as continuous, such as a differential equation, or as discrete, such as a Markov chain. Depending on the system being modeled, they can be treated as stochastic or deterministic.
Multi-compartment models are used in many fields including pharmacokinetics, epidemiology, biomedicine, systems theory, complexity theory, engineering, physics, information science and social science. Circuit systems can be viewed as multi-compartment models as well. Most commonly, the mathematics of multi-compartment models is simplified to provide only a single parameter—such as concentration—within a compartment.
In Systems Theory.
In systems theory, it involves the description of a network whose components are compartments that represent a population of elements that are equivalent with respect to the manner in which they process input signals to the compartment.
Single-compartment model.
Possibly the simplest application of a multi-compartment model is single-cell concentration monitoring. If the volume of a cell is "V", the mass of solute is "q", the input is "u"("t") and the secretion of the solution is proportional to its density within the cell, then the concentration of the solution "C" within the cell over time is given by
formula_0
formula_1
Where "k" is the proportionality constant.
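A numerical sketch of this single-compartment model (the parameter values and the constant infusion u(t) = u0 are made up for illustration; SciPy is assumed to be available):

```python
import numpy as np
from scipy.integrate import solve_ivp

V, k, u0 = 5.0, 0.8, 2.0                      # illustrative cell volume, rate, and input

# dq/dt = u(t) - k*q with a constant input u(t) = u0, starting from q(0) = 0.
sol = solve_ivp(lambda t, q: u0 - k * q, (0.0, 10.0), [0.0], dense_output=True)

t = np.linspace(0.0, 10.0, 6)
C = sol.sol(t)[0] / V                         # concentration C = q / V
print(C)                                      # approaches the steady state u0 / (k * V) = 0.5
```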
Software.
Simulation Analysis and Modeling 2 SAAM II is a software system designed specifically to aid in the development and testing of multi-compartment models. It has a user-friendly graphical user interface
wherein compartmental models are constructed by creating a visual representation of the model. From this model, the program automatically creates systems of ordinary differential equations. The program can both
simulate and fit models to data, returning optimal parameter estimates and associated statistics. It was developed by scientists working on metabolism and hormone kinetics (e.g., glucose, lipids, or insulin). It was then used for tracer studies and pharmacokinetics. Although a multi-compartment model can in principle be developed and run in other software, such as MATLAB or C++, the user interface offered by SAAM II allows the modeler (and non-modelers) to better control the system, especially when the complexity increases.
Discrete Compartmental Model.
Discrete models are concerned with discrete variables, often a time step formula_2. An example of a discrete multi-compartment model is a discrete version of the Lotka–Volterra model. Here, consider two compartments, prey and predators, denoted by formula_3 and formula_4 respectively. The compartments are coupled to each other by mass action terms in each equation. Over a discrete time-step formula_2, we get
formula_5
Here formula_6 is the discrete time, formula_7 is the prey growth term, formula_8 is the loss of prey to predation, formula_9 is the predator growth term, formula_10 is the predator death term, and formula_11 and formula_12 are the (positive) parameters of the model.
These equations are easily solved iteratively.
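For instance, a direct iteration of the two discrete compartments (all numerical values below are illustrative choices, not values from the source):

```python
alpha, beta, delta, gamma = 1.1, 0.4, 0.1, 0.4   # made-up rate parameters
dt = 0.001                                       # small discrete time step
x, y = 10.0, 10.0                                # initial prey and predator levels

for step in range(20000):                        # advance 20 time units
    x, y = (x + alpha * x * dt - beta * x * y * dt,
            y + delta * x * y * dt - gamma * y * dt)
print(x, y)                                      # the two populations oscillate over time
```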
Continuous Compartmental Model.
The discrete Lotka-Volterra example above can be turned into a continuous version by rearranging and taking the limit as formula_13.
formula_14
This yields a system of ordinary differential equations. Treating this model as differential equations allows the implementation of calculus methods to study the dynamics of the system more in-depth.
Multi-Compartment Model.
As the number of compartments increases, the model can become very complex and its solutions are usually beyond hand calculation.
The formulae for n-cell multi-compartment models become:
formula_15
Where
formula_16 for formula_17 (as the total 'contents' of all compartments is constant in a closed system)
Or in matrix forms:
formula_18
Where
formula_19 and formula_20 (as the total 'contents' of all compartments is constant in a closed system)
In the special case of a closed system (see below) i.e. where formula_21 then there is a general solution.
formula_22
Where formula_23, formula_24, ... and formula_25 are the eigenvalues of formula_26; formula_27, formula_28, ... and formula_29 are the respective eigenvectors of formula_26; and formula_30, formula_31, ... and formula_32 are constants.
However, it can be shown that, given the above requirement that the total 'contents' of a closed system be constant, for every pair of eigenvalue and eigenvector either formula_33 or formula_34, and also that one eigenvalue is 0, say formula_23.
So
formula_35
Where
formula_36 for formula_37
This solution can be rearranged:
formula_38
This somewhat inelegant equation demonstrates that all solutions of an "n-cell" multi-compartment model with constant or no inputs are of the form:
formula_39
Where formula_40 is an "n"×"n" matrix and formula_24, formula_41, ... and formula_25 are constants.
Where formula_42
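A numerical illustration of the closed-system case formula_21 (the 3-compartment rate matrix below is made up, with columns summing to zero as required; NumPy and SciPy are assumed):

```python
import numpy as np
from scipy.linalg import expm

K = np.array([[-0.5,  0.2,  0.0],
              [ 0.5, -0.6,  0.3],
              [ 0.0,  0.4, -0.3]])      # each column sums to zero (closed system)
q0 = np.array([1.0, 0.0, 0.0])          # all material starts in compartment 1

for t in (0.5, 2.0, 20.0):
    q = expm(K * t) @ q0                # q(t) = exp(Kt) q(0) since u = 0
    print(t, q, q.sum())                # the total contents stays equal to 1

eigvals, eigvecs = np.linalg.eig(K)     # one eigenvalue is 0; it gives the equilibrium mode
```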
Model topologies.
Generally speaking, as the number of compartments increases, it is challenging to find the algebraic and numerical solutions of the model. However, there are special cases of models, which rarely exist in nature, whose topologies exhibit certain regularities that make the solutions easier to find. The model can be classified according to the interconnection of cells and input/output characteristics:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{\\mathrm{d}q}{\\mathrm{d}t}=u(t)-kq"
},
{
"math_id": 1,
"text": "C=\\frac{q}{V}"
},
{
"math_id": 2,
"text": "\\Delta t"
},
{
"math_id": 3,
"text": "x(t)"
},
{
"math_id": 4,
"text": "y(t)"
},
{
"math_id": 5,
"text": "\\begin{align}\nx(t+\\Delta t) &= x(t) + \\alpha x(t)\\Delta t - \\beta x(t) y(t) \\Delta t\\\\\ny(t+\\Delta t) &= y(t) +\\delta x(t) y(t) \\Delta t- \\gamma y(t)\\Delta t. \n\\end{align}"
},
{
"math_id": 6,
"text": "t"
},
{
"math_id": 7,
"text": "\\alpha x(t)\\Delta t"
},
{
"math_id": 8,
"text": "\\beta x(t) y(t) \\Delta t"
},
{
"math_id": 9,
"text": "\\delta x(t) y(t) \\Delta t"
},
{
"math_id": 10,
"text": "\\gamma y(t) \\Delta t"
},
{
"math_id": 11,
"text": "\\alpha, \\beta, \\delta, "
},
{
"math_id": 12,
"text": "\\gamma "
},
{
"math_id": 13,
"text": "\\Delta t \\rightarrow 0"
},
{
"math_id": 14,
"text": "\\begin{align}\n&\\lim_{\\Delta t \\rightarrow 0} \\frac{x(t + \\Delta t)-x(t)}{\\Delta t} \\equiv \\frac{d x}{dt} = \\alpha x - \\beta x y\\\\\n&\\lim_{\\Delta t \\rightarrow 0}\\frac{y(t + \\Delta t)-y(t)}{\\Delta t}\\equiv \\frac{d y}{dt} = \\delta x y - \\gamma y\n\\end{align} "
},
{
"math_id": 15,
"text": "\n\\begin{align}\n\\dot{q}_1=q_1 k_{11}+q_2 k_{12}+\\cdots+q_n k_{1n}+u_1(t) \\\\\n\\dot{q}_2=q_1 k_{21}+q_2 k_{22}+\\cdots+q_n k_{2n}+u_2(t) \\\\\n\\vdots\\\\\n\\dot{q}_n=q_1 k_{n1}+q_2 k_{n2}+\\cdots+q_n k_{nn}+u_n(t)\n\\end{align}\n"
},
{
"math_id": 16,
"text": "0=\\sum^n_{i=1}{k_{ij}}"
},
{
"math_id": 17,
"text": "j=1,2,\\dots,n"
},
{
"math_id": 18,
"text": "\n\\mathbf{\\dot{q}}=\\mathbf{Kq}+\\mathbf{u}"
},
{
"math_id": 19,
"text": "\\mathbf{K}=\\begin{bmatrix}\nk_{11}& k_{12} &\\cdots &k_{1n}\\\\\nk_{21}& k_{22} & \\cdots&k_{2n}\\\\\n\\vdots&\\vdots&\\ddots&\\vdots \\\\\nk_{n1}& k_{n2} &\\cdots &k_{nn}\\\\\n\\end{bmatrix} \n\\mathbf{q}=\\begin{bmatrix}\nq_1 \\\\\nq_2 \\\\\n\\vdots \\\\\nq_n\n\\end{bmatrix}\n\\mathbf{u}=\\begin{bmatrix}\nu_1(t) \\\\\nu_2(t) \\\\\n\\vdots \\\\\nu_n(t)\n\\end{bmatrix}\n"
},
{
"math_id": 20,
"text": "\n\\begin{bmatrix}\n1 & 1 &\\cdots & 1\\\\\n\\end{bmatrix}\\mathbf{K}=\\begin{bmatrix}\n0 & 0 &\\cdots & 0\\\\\n\\end{bmatrix} "
},
{
"math_id": 21,
"text": "\\mathbf{u}=0"
},
{
"math_id": 22,
"text": "\\mathbf{q} = c_1 e^{\\lambda_1 t} \\mathbf{v_1} + c_2 e^{\\lambda_2 t} \\mathbf{v_2} + \\cdots + c_n e^{\\lambda_n t} \\mathbf{v_n}"
},
{
"math_id": 23,
"text": "\\lambda_1"
},
{
"math_id": 24,
"text": "\\lambda_2"
},
{
"math_id": 25,
"text": "\\lambda_n"
},
{
"math_id": 26,
"text": "\\mathbf{K}"
},
{
"math_id": 27,
"text": "\\mathbf{v_1}"
},
{
"math_id": 28,
"text": "\\mathbf{v_2}"
},
{
"math_id": 29,
"text": "\\mathbf{v_n}"
},
{
"math_id": 30,
"text": "c_1"
},
{
"math_id": 31,
"text": "c_2"
},
{
"math_id": 32,
"text": "c_n"
},
{
"math_id": 33,
"text": "\\lambda=0"
},
{
"math_id": 34,
"text": "\n\\begin{bmatrix}\n1 & 1 &\\cdots & 1\\\\\n\\end{bmatrix}\\mathbf{v}=0"
},
{
"math_id": 35,
"text": "\\mathbf{q} = c_1 \\mathbf{v_1} + c_2 e^{\\lambda_2 t} \\mathbf{v_2} + \\cdots + c_n e^{\\lambda_n t} \\mathbf{v_n}"
},
{
"math_id": 36,
"text": "\n\\begin{bmatrix}\n1 & 1 &\\cdots & 1\\\\\n\\end{bmatrix}\\mathbf{v_i}=0"
},
{
"math_id": 37,
"text": "\\mathbf{i}=2, 3, \\dots n"
},
{
"math_id": 38,
"text": "\n\\mathbf{q} = \n\\Bigg[ \\mathbf{v_1}\\begin{bmatrix}\n c_1 & 0 & \\cdots & 0 \\\\\n \\end{bmatrix}\n+ \\mathbf{v_2}\\begin{bmatrix}\n 0 & c_2 & \\cdots & 0 \\\\\n \\end{bmatrix}\n+ \\dots + \\mathbf{v_n}\\begin{bmatrix}\n 0 & 0 & \\cdots & c_n \\\\\n \\end{bmatrix} \\Bigg]\n\\begin{bmatrix}\n1 \\\\\ne^{\\lambda_2t} \\\\\n \\vdots \\\\\ne^{\\lambda_nt} \\\\\n\\end{bmatrix}\n"
},
{
"math_id": 39,
"text": " \\mathbf{q} = \\mathbf{A}\n\\begin{bmatrix}\n1 \\\\\ne^{\\lambda_2t} \\\\\n \\vdots \\\\\ne^{\\lambda_nt} \\\\\n\\end{bmatrix}\n"
},
{
"math_id": 40,
"text": "\\mathbf{A}"
},
{
"math_id": 41,
"text": "\\lambda_3"
},
{
"math_id": 42,
"text": "\\begin{bmatrix}\n1 & 1 &\\cdots & 1\\\\\n\\end{bmatrix}\\mathbf{A}=\\begin{bmatrix}\n a & 0 & \\cdots & 0 \\\\\n \\end{bmatrix}"
}
] | https://en.wikipedia.org/wiki?curid=9050760 |
9052285 | Zoeppritz equations | In geophysics and reflection seismology, the Zoeppritz equations are a set of equations that describe the partitioning of seismic wave energy at an interface, due to mode conversion. They are named after their author, the German geophysicist Karl Bernhard Zoeppritz, who died before they were published in 1919.
The equations are important in geophysics because they relate the amplitude of a P-wave incident upon a plane interface, and the amplitudes of the reflected and refracted P- and S-waves, to the angle of incidence. They are the basis for investigating the factors affecting the amplitude of a returning seismic wave when the angle of incidence is altered — also known as amplitude versus offset analysis — which is a helpful technique in the detection of petroleum reservoirs.
The Zoeppritz equations were not the first to describe the amplitudes of reflected and refracted waves at a plane interface. Cargill Gilston Knott used an approach in terms of potentials almost 20 years earlier, in 1899, to derive Knott's equations. Both approaches are valid, but Zoeppritz's approach is more easily understood.
Equations.
The Zoeppritz equations consist of four equations with four unknowns
formula_0
R_P, R_S, T_P, and T_S are the reflected P, reflected S, transmitted P, and transmitted S-wave amplitude coefficients, respectively; formula_1 = angle of incidence, formula_2 = angle of the transmitted P-wave, formula_3 = angle of the reflected S-wave and formula_4 = angle of the transmitted S-wave. Inverting the matrix form of the Zoeppritz equations gives the coefficients as a function of angle.
Although the four equations can be solved for the four unknowns, they do not give an intuitive understanding of how the reflection amplitudes vary with the rock properties involved (density, velocity etc.). Several attempts have been made to develop approximations to the Zoeppritz equations, such as Bortfeld's (1961) and Aki & Richards’ (1980), but the most successful of these is Shuey's, which assumes Poisson's ratio to be the elastic property most directly related to the angular dependence of the reflection coefficient.
Shuey equation.
The 3-term Shuey equation can be written a number of ways, the following is a common form:
formula_5
where
formula_6
and
formula_7 ; formula_8
where formula_9=angle of incidence; formula_10 = P-wave velocity in medium; formula_11 = P-wave velocity contrast across interface;formula_12 = S-wave velocity in medium; formula_13 = S-wave velocity contrast across interface; formula_14 = density in medium; formula_15 = density contrast across interface;
A proposed better approximation of the Zoeppritz equations is:
formula_16
and
formula_17
In the Shuey equation, R(0) is the reflection coefficient at normal incidence and is controlled by the contrast in acoustic impedances. G, often referred to as the AVO gradient, describes the variation of reflection amplitudes at intermediate offsets and the third term, F, describes the behaviour at large angles/far offsets that are close to the critical angle.
This equation can be further simplified by assuming that the angle of incidence is less than 30 degrees (i.e. the offset is relatively small), so the third term will tend to zero. This is the case in most seismic surveys and gives the “Shuey approximation”:
formula_18
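A small sketch of the two-term approximation (the use of simple layer averages for formula_10, formula_12 and formula_14 and of plain differences for the contrasts is a common convention assumed here, not something specified above):

```python
import numpy as np

def shuey_two_term(theta_deg, vp1, vs1, rho1, vp2, vs2, rho2):
    # R(theta) ~ R(0) + G sin^2(theta), for incidence angles up to roughly 30 degrees.
    vp,  dvp  = 0.5 * (vp1 + vp2),  vp2 - vp1
    vs,  dvs  = 0.5 * (vs1 + vs2),  vs2 - vs1
    rho, drho = 0.5 * (rho1 + rho2), rho2 - rho1
    r0 = 0.5 * (dvp / vp + drho / rho)                                         # normal incidence
    g = 0.5 * dvp / vp - 2.0 * (vs / vp) ** 2 * (drho / rho + 2.0 * dvs / vs)  # AVO gradient
    theta = np.radians(theta_deg)
    return r0 + g * np.sin(theta) ** 2

# Hypothetical shale-over-sand interface (values chosen only for illustration):
print(shuey_two_term(np.array([0.0, 10.0, 20.0, 30.0]),
                     vp1=2900.0, vs1=1330.0, rho1=2.29,
                     vp2=3200.0, vs2=1700.0, rho2=2.22))
```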
Further reading.
A full derivation of these equations can be found in most exploration geophysics text books, such as:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n \\begin{bmatrix} \n R_\\mathrm{P} \\\\\n R_\\mathrm{S} \\\\\n T_\\mathrm{P} \\\\\n T_\\mathrm{S} \\\\\n \\end{bmatrix} \n =\n \\begin{bmatrix}\n -\\sin\\theta_1 & -\\cos\\phi_1 & \\sin\\theta_2& \\cos\\phi_2 \\\\\n \\cos\\theta_1 & -\\sin\\phi_1 & \\cos\\theta_2 & -\\sin\\phi_2 \\\\\n \\sin 2\\theta_1 & \\frac{V_\\mathrm{P1}}{V_\\mathrm{S1}}\\cos 2\\phi_1 & \\frac{\\rho_2V_\\mathrm{S2}^2 V_\\mathrm{P1}}{\\rho_1V_\\mathrm{S1}^2 V_\\mathrm{P2}}\\sin2\\theta_2 & \\frac{\\rho_2 V_\\mathrm{S2} V_\\mathrm{P1}} {\\rho_1 V_\\mathrm{S1}^2} \\cos 2 \\phi_2 \\\\\n -\\cos 2\\phi_1 & \\frac{V_\\mathrm{S1}}{V_\\mathrm{P1}}\\sin 2\\phi_1 & \\frac{\\rho_2 V_{P2}}{\\rho_1 V_{P1}}\\cos 2\\phi_2 & -\\frac{\\rho_2 V_{S2}}{\\rho_1V_{P1}}\\sin2\\phi_2\\end{bmatrix}^{-1}\n \\begin{bmatrix}\n \\sin\\theta_1 \\\\\n \\cos\\theta_1 \\\\\n \\sin 2\\theta_1 \\\\\n \\cos 2\\phi_1 \\\\\n \\end{bmatrix}\n"
},
{
"math_id": 1,
"text": "{\\theta}_{1}"
},
{
"math_id": 2,
"text": "{\\theta}_{2}"
},
{
"math_id": 3,
"text": "{\\phi}_{1}"
},
{
"math_id": 4,
"text": "{\\phi}_{2}"
},
{
"math_id": 5,
"text": "R(\\theta ) = R(0) + G \\sin^2 \\theta + F ( \\tan^2 \\theta - \\sin^2 \\theta )"
},
{
"math_id": 6,
"text": "R(0) = \\frac{1}{2} \\left ( \\frac{\\Delta V_\\mathrm{P}}{V_\\mathrm{P}} + \\frac{\\Delta \\rho}{\\rho} \\right ) "
},
{
"math_id": 7,
"text": "G = \\frac{1}{2} \\frac{\\Delta V_\\mathrm{P}}{V_\\mathrm{P}} - 2 \\frac{V^2_\\mathrm{S}}{V^2_\\mathrm{P}} \\left ( \\frac{\\Delta \\rho}{\\rho} + 2 \\frac{\\Delta V_\\mathrm{S}}{V_\\mathrm{S}} \\right ) "
},
{
"math_id": 8,
"text": "F = \\frac{1}{2}\\frac{\\Delta V_\\mathrm{P}}{V_\\mathrm{P}} "
},
{
"math_id": 9,
"text": "{\\theta}"
},
{
"math_id": 10,
"text": "{V_p}"
},
{
"math_id": 11,
"text": "{{\\Delta}V_p}"
},
{
"math_id": 12,
"text": "{V_s}"
},
{
"math_id": 13,
"text": "{{\\Delta}V_s}"
},
{
"math_id": 14,
"text": "{{\\rho}}"
},
{
"math_id": 15,
"text": "{{\\Delta}{\\rho}}"
},
{
"math_id": 16,
"text": "R(\\theta ) = R(0) - A \\sin^2 \\theta "
},
{
"math_id": 17,
"text": "A = 2 \\frac{V^2_\\mathrm{S}}{V^2_\\mathrm{P}} \\left ( \\frac{\\Delta \\rho}{\\rho} + 2 \\frac{\\Delta V_\\mathrm{S}}{V_\\mathrm{S}} \\right ) "
},
{
"math_id": 18,
"text": "R(\\theta ) = R(0) + G \\sin^2 \\theta "
}
] | https://en.wikipedia.org/wiki?curid=9052285 |
9052595 | Ramanujan summation | Mathematical techniques for summing divergent infinite series
Ramanujan summation is a technique invented by the mathematician Srinivasa Ramanujan for assigning a value to divergent infinite series. Although the Ramanujan summation of a divergent series is not a sum in the traditional sense, it has properties that make it mathematically useful in the study of divergent infinite series, for which conventional summation is undefined.
Summation.
Since there are no properties of an entire sum, the Ramanujan summation functions as a property of partial sums. If we take the Euler–Maclaurin summation formula together with the correction rule using Bernoulli numbers, we see that:
formula_0
Ramanujan wrote this again for different limits of the integral and the corresponding summation for the case in which "p" goes to infinity:
formula_1
where "C" is a constant specific to the series and its analytic continuation and the limits on the integral were not specified by Ramanujan, but presumably they were as given above. Comparing both formulae and assuming that "R" tends to 0 as "x" tends to infinity, we see that, in a general case, for functions "f"("x") with no divergence at "x" = 0:
formula_2
where Ramanujan assumed formula_3 By taking formula_4 we normally recover the usual summation for convergent series. For functions "f"("x") with no divergence at "x" = 1, we obtain:
formula_5
alternatively, applying smoothed sums.
The convergent version of summation for functions with appropriate growth condition is then:
formula_6
<templatestyles src="Crossreference/styles.css" />
Ramanujan summation of divergent series.
In the following text, formula_7 indicates "Ramanujan summation". This formula originally appeared in one of Ramanujan's notebooks, without any notation to indicate that it exemplified a novel method of summation.
For example, the formula_7 of 1 − 1 + 1 − ⋯ is:
formula_8
Ramanujan had calculated "sums" of known divergent series. It is important to mention that the Ramanujan sums are not the sums of the series in the usual sense, i.e. the partial sums do not converge to this value, which is denoted by the symbol formula_9 In particular, the formula_7 sum of 1 + 2 + 3 + 4 + ⋯ was calculated as:
formula_10
Extending to positive even powers, this gave:
formula_11
and for odd powers the approach suggested a relation with the Bernoulli numbers:
formula_12
It has been proposed to use "C"(1) rather than "C"(0) as the result of Ramanujan's summation, since then it can be assured that one series formula_13 admits one and only one Ramanujan summation, defined as the value in 1 of the only solution of the difference equation formula_14 that verifies the condition formula_15.
This demonstration of Ramanujan's summation (denoted as formula_16) does not coincide with the earlier defined Ramanujan's summation, "C"(0), nor with the summation of convergent series, but it has interesting properties, such as: If "R"("x") tends to a finite limit when "x" → 1, then the series formula_16 is convergent, and we have
formula_17
In particular we have:
formula_18
where γ is the Euler–Mascheroni constant.
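A quick numerical check of this last identity, using the limit expression formula_17 with "f"("n") = 1/"n" (an added illustration, not from the source):

```python
import math

# H_N - integral of dt/t from 1 to N = H_N - ln(N) should approach gamma = 0.5772156649...
for N in (10, 1_000, 100_000):
    H_N = sum(1.0 / n for n in range(1, N + 1))
    print(N, H_N - math.log(N))
```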
Extension to integrals.
Ramanujan resummation can be extended to integrals; for example, using the Euler–Maclaurin summation formula, one can write
formula_19
which is the natural extension to integrals of the Zeta regularization algorithm.
This recurrence equation is finite, since for formula_20,
formula_21
Note that this involves (see zeta function regularization)
formula_22.
With formula_23, the application of this Ramanujan resummation leads to finite results in the renormalization of quantum field theories.
{
"math_id": 0,
"text": "\\begin{align}\n\\frac 1 2 f(0) + f(1) + \\cdots + f(n - 1) + \\frac 1 2 f(n) &= \\frac {f(0) + f(n)} {2} + \\sum_{k=1}^{n-1} f(k) = \\sum_{k=0}^{n} f(k) - \\frac {f(0) + f(n)} {2} \\\\\n &= \\int_0^n f(x)\\,dx + \\sum_{k=1}^p \\frac{B_{2k}}{(2k)!}\\left[f^{(2k-1)}(n) - f^{(2k-1)}(0)\\right] + R_p\n\\end{align}"
},
{
"math_id": 1,
"text": "\n\\sum_{k=a}^{x}f(k)=C+\\int_a^x f(t)dt+\\frac{1}{2}f(x)+\\sum_{k=1}^{\\infty}\\frac{B_{2k}}{(2k)!}f^{(2k-1)}(x)\n"
},
{
"math_id": 2,
"text": "C(a)=\\int_0^a f(t)\\,dt - \\frac 1 2 f(0)-\\sum_{k=1}^\\infty \\frac{B_{2k}}{(2k)!}f^{(2k-1)}(0)"
},
{
"math_id": 3,
"text": "a= 0."
},
{
"math_id": 4,
"text": "a= \\infty"
},
{
"math_id": 5,
"text": "C(a) = \\int_1^a f(t)\\,dt+ \\frac{1}{2}f(1) - \\sum_{k=1}^\\infty \\frac{B_{2k}}{(2k)!} f^{(2k-1)}(1)"
},
{
"math_id": 6,
"text": "f(1)+f(2)+f(3)+\\cdots=-\\frac{f(0)} 2 + i\\int_0^\\infty \\frac{f(it)-f(-it)}{e^{2\\pi t}-1} \\, dt"
},
{
"math_id": 7,
"text": "(\\mathfrak{R})"
},
{
"math_id": 8,
"text": "1 - 1 + 1 - \\cdots = \\frac 1 2\\quad (\\mathfrak{R})."
},
{
"math_id": 9,
"text": "(\\mathfrak{R})."
},
{
"math_id": 10,
"text": "1+2+3+\\cdots = -\\frac{1}{12} \\quad (\\mathfrak{R})"
},
{
"math_id": 11,
"text": "1 + 2^{2k} + 3^{2k} + \\cdots = 0\\quad (\\mathfrak{R})"
},
{
"math_id": 12,
"text": "1+2^{2k-1}+3^{2k-1}+\\cdots = -\\frac{B_{2k}}{2k}\\quad (\\mathfrak{R})"
},
{
"math_id": 13,
"text": "\\textstyle \\sum_{k=1}^{\\infty}f(k) "
},
{
"math_id": 14,
"text": "R(x) - R(x + 1) = f(x)"
},
{
"math_id": 15,
"text": "\\textstyle \\int_1^2 R(t)\\,dt = 0"
},
{
"math_id": 16,
"text": "\\textstyle\\sum_{n \\ge 1}^{\\mathfrak{R}} f(n)"
},
{
"math_id": 17,
"text": "\\sum_{n \\ge 1}^{\\mathfrak{R}} f(n) = \\lim_{N \\to \\infty} \\left[\\sum_{n = 1}^{N}f(n) - \\int_1^N f(t)\\,dt\\right]"
},
{
"math_id": 18,
"text": "\\sum_{n \\ge 1}^\\mathfrak{R} \\frac{1}{n} = \\gamma"
},
{
"math_id": 19,
"text": "\\begin{align}\n\\int_a^\\infty x^{m-s} \\, dx &= \\frac{m-s}{2} \\int_a^\\infty \\, dx + \\zeta (s-m)-\\sum_{i=1}^a \\left[i^{m-s} +a^{m-s}\\right]\\\\\n&\\qquad -\\sum_{r=1}^\\infty \\frac{B_{2r} \\theta (m-s+1)}{(2r)!\\Gamma (m-2r+2-s)} (m-2r+1-s) \\int_a^\\infty x^{m-2r-s} \\, dx \n\\end{align}"
},
{
"math_id": 20,
"text": " m-2r < -1"
},
{
"math_id": 21,
"text": "\\int_a^\\infty dx \\, x^{m-2r}= -\\frac{a^{m-2r+1}}{m-2r+1}. "
},
{
"math_id": 22,
"text": "I(n, \\Lambda) = \\int_0^\\Lambda dx \\, x^n"
},
{
"math_id": 23,
"text": " \\Lambda \\to \\infty"
}
] | https://en.wikipedia.org/wiki?curid=9052595 |
9053035 | Halley's method | A method of numerically finding roots of a function
In numerical analysis, Halley's method is a root-finding algorithm used for functions of one real variable with a continuous second derivative. Edmond Halley was an English mathematician and astronomer who introduced the method now called by his name.
The algorithm is second in the class of Householder's methods, after Newton's method. Like the latter, it iteratively produces a sequence of approximations to the root; their rate of convergence to the root is cubic. Multidimensional versions of this method exist.
Halley's method exactly finds the roots of a linear-over-linear Padé approximation to the function, in contrast to Newton's method or the Secant method which approximate the function linearly, or Muller's method which approximates the function quadratically.
Method.
Halley's method is a numerical algorithm for solving the nonlinear equation "f"("x") = 0. In this case, the function "f" has to be a function of one real variable. The method consists of a sequence of iterations:
formula_0
beginning with an initial guess "x"0.
If "f" is a three times continuously differentiable function and "a" is a zero of "f" but not of its derivative, then, in a neighborhood of "a", the iterates "x""n" satisfy:
formula_1
This means that the iterates converge to the zero if the initial guess is sufficiently close, and that the convergence is cubic.
The following alternative formulation shows the similarity between Halley's method and Newton's method. The expression formula_2 is computed only once, and it is particularly useful when formula_3 can be simplified:
formula_4
When the second derivative is very close to zero, Halley's method iteration is almost the same as Newton's method iteration.
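The iteration translates directly into code. The following is a minimal Python sketch (the test function, its derivatives, the starting guess and the tolerance are illustrative choices, not part of the method itself):

```python
def halley(f, df, d2f, x0, tol=1e-12, max_iter=50):
    """Find a root of f by Halley's (rational) method.

    f, df, d2f: the function and its first and second derivatives.
    x0: initial guess.
    """
    x = x0
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        # Halley update: x_{n+1} = x_n - 2 f f' / (2 f'^2 - f f'')
        step = 2 * fx * dfx / (2 * dfx**2 - fx * d2fx)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: the cube root of 2, i.e. the root of f(x) = x^3 - 2.
root = halley(lambda x: x**3 - 2,
              lambda x: 3 * x**2,
              lambda x: 6 * x,
              x0=1.0)
print(root)  # approximately 1.2599210498948732
```

Because convergence is cubic, the number of correct digits roughly triples at each step once the iterate is close to the root.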
Derivation.
Consider the function
formula_5
Any root "r" of "f" that is "not" a root of its derivative is a root of "g" (i.e., formula_6 when formula_7), and any root "r" of "g" must be a root of "f" provided the derivative of "f" at "r" is not infinite. Applying Newton's method to "g" gives
formula_8
with
formula_9
and the result follows. Notice that if "f" ′("c") = 0, then one cannot apply this at "c" because "g"("c") would be undefined.
Cubic convergence.
Suppose "a" is a root of "f" but not of its derivative. And suppose that the third derivative of "f" exists and is continuous in a neighborhood of "a" and "x""n" is in that neighborhood. Then Taylor's theorem implies:
formula_10
and also
formula_11
where ξ and η are numbers lying between "a" and "x""n". Multiply the first equation by formula_12 and subtract from it the second equation times formula_13 to give:
formula_14
Canceling formula_15 and re-organizing terms yields:
formula_16
Put the second term on the left side and divide through by
formula_17
to get:
formula_18
Thus:
formula_19
The limit of the coefficient on the right side as "x""n" → "a" is:
formula_20
If we take "K" to be a little larger than the absolute value of this, we can take absolute values of both sides of the formula and replace the absolute value of the coefficient by its upper bound near "a" to get:
formula_21
which is what was to be proved.
To summarize,
formula_22
Halley's irrational method.
Halley actually developed "two" third-order root-finding methods. The above, using only a division, is referred to as "Halley's rational method". A second, "irrational" method uses a square root as well:
formula_23
This iteration was "deservedly preferred" to the rational method by Halley on the grounds that the denominator is smaller, making the division easier. A second advantage is that it tends to have about half of the error of the rational method, a benefit which multiplies as it is iterated. On a computer, it would appear to be slower as it has two slow operations (division and square root) instead of one, but on modern computers the reciprocal of the denominator can be computed at the same time as the square root via instruction pipelining, so the latency of each iteration differs very little. | [
{
"math_id": 0,
"text": "x_{n+1} = x_n - \\frac {2 f(x_n) f'(x_n)} {2 {[f'(x_n)]}^2 - f(x_n) f''(x_n)} "
},
{
"math_id": 1,
"text": "| x_{n+1} - a | \\le K \\cdot {| x_n - a |}^3,\\text{ for some }K > 0."
},
{
"math_id": 2,
"text": "f(x_n)/f'(x_n)"
},
{
"math_id": 3,
"text": "f''(x_n)/f'(x_n)"
},
{
"math_id": 4,
"text": "x_{n+1} = x_n -\\frac{f(x_n)}{f'(x_n) - \\frac{f(x_n)}{f'(x_n)}\\frac{f''(x_n)}{2}}= x_n - \\frac {f(x_n)} {f'(x_n)} \\left[1 - \\frac {f(x_n)}{f'(x_n)} \\cdot \\frac {f''(x_n)} {2 f'(x_n)} \\right]^{-1}."
},
{
"math_id": 5,
"text": "g(x) = \\frac {f(x)} {\\sqrt{|f'(x)|}}."
},
{
"math_id": 6,
"text": "g(r)=0"
},
{
"math_id": 7,
"text": "f(r)=0 \\ne {\\sqrt{|f'(r)|}}"
},
{
"math_id": 8,
"text": "x_{n+1} = x_n - \\frac{g(x_n)}{g'(x_n)}"
},
{
"math_id": 9,
"text": "g'(x) = \\frac{2[f'(x)]^2 - f(x) f''(x)}{2 f'(x) \\sqrt{|f'(x)|}}, "
},
{
"math_id": 10,
"text": "0 = f(a) = f(x_n) + f'(x_n) (a - x_n) + \\frac{f''(x_n)}{2} (a - x_n)^2 + \\frac{f'''(\\xi)}{6} (a - x_n)^3"
},
{
"math_id": 11,
"text": "0 = f(a) = f(x_n) + f'(x_n) (a - x_n) + \\frac{f''(\\eta)}{2} (a - x_n)^2,"
},
{
"math_id": 12,
"text": "2f'(x_n)"
},
{
"math_id": 13,
"text": "f''(x_n)(a - x_n)"
},
{
"math_id": 14,
"text": "\\begin{align}\n0 &= 2 f(x_n) f'(x_n) + 2 [f'(x_n)]^2 (a - x_n) + f'(x_n) f''(x_n) (a - x_n)^2 + \\frac{f'(x_n) f'''(\\xi)}{3} (a - x_n)^3 \\\\\n&\\qquad- f(x_n) f''(x_n) (a - x_n) - f'(x_n) f''(x_n) (a - x_n)^2 - \\frac{f''(x_n) f''(\\eta)}{2} (a - x_n)^3.\n\\end{align}"
},
{
"math_id": 15,
"text": "f'(x_n) f''(x_n) (a - x_n)^2"
},
{
"math_id": 16,
"text": "0 = 2 f(x_n) f'(x_n) + \\left(2 [f'(x_n)]^2 - f(x_n) f''(x_n) \\right) (a - x_n) + \\left(\\frac{f'(x_n) f'''(\\xi)}{3} - \\frac{f''(x_n) f''(\\eta)}{2} \\right) (a - x_n)^3."
},
{
"math_id": 17,
"text": " 2 [f'(x_n)]^2 - f(x_n) f''(x_n) "
},
{
"math_id": 18,
"text": "a - x_n = \\frac{-2f(x_n) f'(x_n)}{2[f'(x_n)]^2 - f(x_n) f''(x_n)} - \\frac{2f'(x_n) f'''(\\xi) - 3 f''(x_n) f''(\\eta)}{6(2 [f'(x_n)]^2 - f(x_n) f''(x_n))} (a - x_n)^3."
},
{
"math_id": 19,
"text": "a - x_{n+1} = - \\frac{2 f'(x_n) f'''(\\xi) - 3 f''(x_n) f''(\\eta)}{12[f'(x_n)]^2 - 6 f(x_n) f''(x_n)} (a - x_n)^3."
},
{
"math_id": 20,
"text": "-\\frac{2 f'(a) f'''(a) - 3 f''(a) f''(a)}{12 [f'(a)]^2 - 6 f(a) f''(a)}."
},
{
"math_id": 21,
"text": "|a - x_{n+1}| \\leq K |a - x_n|^3"
},
{
"math_id": 22,
"text": "\\Delta x_{i+1} =\\frac{3(f'')^2 - 2f' f'''}{12(f')^2} (\\Delta x_i)^3 + O[\\Delta x_i]^4, \\qquad \\Delta x_i \\triangleq x_i - a."
},
{
"math_id": 23,
"text": "x_{n+1} = x_n - \\frac{f'(x_n) - \\sqrt{[f'(x_n)]^2 - 2f(x_n)f''(x_n)}}{f''(x_n)}"
}
] | https://en.wikipedia.org/wiki?curid=9053035 |
905348 | Continuous phase modulation | Continuous phase modulation (CPM) is a method for modulation of data commonly used in wireless modems. In contrast to other coherent digital phase modulation techniques where the carrier phase
abruptly resets to zero at the start of every symbol (e.g. M-PSK), with CPM the carrier phase is modulated in a continuous manner. For instance, with QPSK the carrier instantaneously jumps from a sine to a cosine (i.e. a 90 degree phase shift) whenever one of the two message bits of the current symbol differs from the two message bits of the previous symbol. This discontinuity requires a relatively large percentage of the power to occur outside of the intended band (e.g., high fractional out-of-band power), leading to poor spectral efficiency. Furthermore, CPM is typically implemented as a constant-envelope waveform, i.e., the transmitted carrier power is constant.
Therefore, CPM is attractive because the phase continuity yields high spectral efficiency, and the constant envelope yields excellent power efficiency. The primary drawback is the high implementation complexity required for an optimal receiver.
Phase memory.
Each symbol is modulated by gradually changing the phase of the carrier from the starting value to the final value, over the symbol duration. The modulation and demodulation of CPM is complicated by the fact that the initial phase of each symbol is determined by the cumulative total phase of all previous transmitted symbols, which is known as the "phase memory".
Therefore, the optimal receiver cannot make decisions on any isolated symbol without taking the entire sequence of transmitted symbols into account. This requires a maximum-likelihood sequence estimator (MLSE), which is efficiently implemented using the Viterbi algorithm.
Phase trajectory.
Minimum-shift keying (MSK) is another name for CPM with an excess bandwidth of 1/2 and a linear "phase trajectory". Although this linear phase trajectory is continuous, it is not "smooth" since the derivative of the phase is not continuous. The spectral efficiency of CPM can be further improved by using a smooth phase trajectory. This is typically accomplished by filtering the phase trajectory prior to modulation, commonly using a raised cosine or a Gaussian filter. The raised cosine filter has zero crossings offset by exactly one symbol time, and so it can yield a "full-response" CPM waveform that prevents intersymbol interference (ISI).
Partial response CPM.
Partial-response signaling, such as duo-binary signaling, is a form of intentional ISI where
a certain number of adjacent symbols interfere with each symbol in a controlled manner.
A MLSE must be used to optimally demodulate any signal in the presence of ISI. Whenever
the amount of ISI is known, such as with any partial-response signaling scheme, MLSE can be used to determine the exact symbol sequence (in the absence of noise). Since the optimal demodulation of full-response CPM already requires MLSE detection, using partial-response signaling requires little additional complexity, but can afford a comparatively smoother phase trajectory, and thus, even greater spectral efficiency. One extremely popular form of partial-response CPM is GMSK, which is used by GSM in most of the world's 2nd generation cell phones. It is also used in 802.11 FHSS, Bluetooth, and many other proprietary wireless modems.
Continuous-phase frequency-shift keying.
Continuous-phase frequency-shift keying (CPFSK) is a commonly used variation of frequency-shift keying (FSK), which is itself a special case of analog frequency modulation. FSK is a method of modulating digital data onto a sinusoidal carrier wave, encoding the information present in the data to variations in the carrier's instantaneous frequency between one of two frequencies (referred to as the space frequency and mark frequency). In general, a standard FSK signal does not have continuous phase, as the modulated waveform cuts instantaneously between two sinusoids with different frequencies.
As the name suggests, the phase of a CPFSK is in fact continuous; this attribute is desirable for signals that are to be transmitted over a bandlimited channel, as discontinuities in a signal introduce wideband frequency components. In addition, some classes of amplifiers exhibit nonlinear behavior when driven with nearly discontinuous signals; this could have undesired effects on the shape of the transmitted signal.
Theory.
If a finitely valued digital signal to be transmitted (the message) is "m"("t"), then the corresponding CPFSK signal is
formula_0
where "Ac" represents the amplitude of the CPFSK signal, "fc" is the base carrier frequency, and "Df" is a parameter that controls the frequency deviation of the modulated signal. The integral inside the cosine's argument is what gives the CPFSK signal its continuous phase; an integral over any finitely valued function (which "m"("t") is assumed to be) will not contain any discontinuities. If the message signal is assumed to be causal, then the limits on the integral change to a lower bound of zero and an upper bound of "t".
Note that this does not mean that "m"("t") must be continuous; in fact, most ideal digital data waveforms contain discontinuities. However, even a discontinuous message signal will generate a proper CPFSK signal.
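As a sketch of this construction, the following Python/NumPy fragment builds a CPFSK waveform from an NRZ message by accumulating the integral numerically; all parameter values (carrier frequency, deviation constant, bit rate, sampling rate) are hypothetical and chosen only for illustration:

```python
import numpy as np

fs, fc, bit_rate = 50_000, 1_000.0, 1_000   # sample rate, carrier, bit rate (Hz)
Df = 2 * np.pi * 500.0                      # deviation constant (rad/s per unit of m)
bits = np.array([1, 0, 0, 1, 1, 0, 1, 0])

samples_per_bit = fs // bit_rate
t = np.arange(len(bits) * samples_per_bit) / fs
m = np.repeat(2.0 * bits - 1.0, samples_per_bit)   # NRZ message m(t) in {-1, +1}

# Running integral of m(t): even though m(t) jumps at bit boundaries,
# its integral, and hence the carrier phase, stays continuous.
integral_m = np.cumsum(m) / fs
s = np.cos(2 * np.pi * fc * t + Df * integral_m)   # Ac = 1
```

In this example the instantaneous frequency toggles between "fc" plus or minus 500 Hz, while the phase never jumps.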
References.
Notation for the CPFSK waveform was taken from: | [
{
"math_id": 0,
"text": "s(t) = A_c \\cos\\left(2 \\pi f_c t + D_f \\int_{-\\infty}^{t} m(\\alpha) d \\alpha\\right)\\,"
}
] | https://en.wikipedia.org/wiki?curid=905348 |
9057666 | Blue Monday (date) | Supposed saddest day of the year
Blue Monday is the name given to a day in January (typically the third Monday of the month) said by a UK travel company, Sky Travel, to be the most depressing day of the year. The concept was first published in a 2005 press release from the company, which claimed to have calculated the date using an "equation". It takes into account weather conditions and thus only applies to the Northern Hemisphere.
Some have dismissed the idea as pseudoscience.
History.
This date was published in a press release under the name of Cliff Arnall, who was at the time a tutor at the Centre for Lifelong Learning, a Further Education centre attached to Cardiff University. "The Guardian" columnist Ben Goldacre reported that the press release was delivered substantially pre-written to a number of academics via public relations agency Porter Novelli, along with an offer of money to those who offered to put their names to it. A statement later printed in "The Guardian" sought to distance leaders of Cardiff University from Arnall: "Cardiff University has asked us to point out that Cliff Arnall … was a former part-time tutor at the university but left in February."
Variations of the story have been repeatedly reused by other companies in press releases, with 2014 seeing Blue Monday invoked by legal firms and retailers of bottled water and alcoholic drinks. Some versions of the story purport to analyse trends in social media posts to calculate the date.
In 2018, Arnall told a reporter at the "Independent" newspaper that it was "never his intention to make the day sound negative", but rather "to inspire people to take action and make bold life decisions". It was also reported that he was working with Virgin Atlantic and Virgin Holidays, having "made it his mission to challenge some of the negative news associated with January and to debunk the melancholic mind-set of 'Blue Monday'".
Date.
The date is generally reported as falling on the third Monday in January, but also on the second or fourth Monday. The first such date declared was 24 January in 2005 as part of a Sky Travel press release.
Calculation.
The formula uses many factors, including: weather conditions, debt level (the difference between debt accumulated and ability to pay), time since Christmas, time since new year's resolutions have been broken, low motivational levels, and the feeling of a need to take action. One relationship used by Arnall in 2006 was:
formula_0
where Tt = travel time; D = delays; C = time spent on cultural activities; R = time spent relaxing; ZZ = time spent sleeping; St = time spent in a state of stress; P = time spent packing; Pr = time spent in preparation. Units of measurement are not defined; as all the factors involve time, dimensional analysis of the "formula" shows that it violates the fundamental property of dimensional homogeneity and is thus meaningless.
The 2005 press release and a 2009 press release used a different formula:
formula_1
where W = weather, D = debt, d = monthly salary, T = time since Christmas, Q = time since the failure of new year's resolutions, M = low motivational levels, and Na = the feeling of a need to take action. Again, no units were defined; the lack of any explanation for what is meant by "weather" and "low motivational levels" means the dimensional homogeneity of the resulting formula cannot be assessed or verified, rendering it similarly meaningless.
Ben Goldacre has observed that the equations "fail even to make mathematical sense on their own terms", pointing out that under Arnall's original equation, packing for ten hours and preparing for 40 will always guarantee a good holiday, and that "you can have an infinitely good weekend by staying at home and cutting your travel time to zero". Dean Burnett, a neuroscientist who has worked in the psychology department of Cardiff University, has described the work as "farcical", with "nonsensical measurements".
In 2016, Arnall claimed to have attempted to "overturn" his "theory" by visiting the Canary Islands; his claim was publicised by the Canary Islands Tourism Board.
Happiest day.
Arnall also says, in a press release commissioned by Wall's ice cream, that he has calculated the happiest day of the year – in 2005, 24 June, in 2006, 23 June, in 2008, 20 June and in 2010, 18 June. So far, this date has fallen close to Midsummer in the Northern Hemisphere (June 21 to 24).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{(C \\times R \\times ZZ)}{((Tt + D) \\times St)} + (P \\times Pr)>400"
},
{
"math_id": 1,
"text": "\\frac{[W + (D-d)] \\times T^Q}{M \\times N_a}"
}
] | https://en.wikipedia.org/wiki?curid=9057666 |
9057679 | Akhmim wooden tablets | Ancient Egyptian texts
The Akhmim wooden tablets, also known as the Cairo wooden tablets are two wooden writing tablets from ancient Egypt, solving arithmetical problems. They each measure around and are covered with plaster. The tablets are inscribed on both sides. The hieroglyphic inscriptions on the first tablet include a list of servants, which is followed by a mathematical text. The text is dated to year 38 (it was at first thought to be from year 28) of an otherwise unnamed king's reign. The general dating to the early Egyptian Middle Kingdom combined with the high regnal year suggests that the tablets may date to the reign of the 12th Dynasty pharaoh Senusret I, c. 1950 BC. The second tablet also lists several servants and contains further mathematical texts.
The tablets are currently housed at the Museum of Egyptian Antiquities in Cairo. The text was reported by Daressy in 1901 and later analyzed and published in 1906.
The first half of the tablet details five multiplications of a "hekat", a unit of volume made up of 64 "dja", by 1/3, 1/7, 1/10, 1/11 and 1/13. The answers were written in binary Eye of Horus quotients and exact Egyptian fraction remainders, scaled to a 1/320 factor named "ro". The second half of the document proved the correctness of the five division answers by multiplying each two-part quotient-and-remainder answer by its respective divisor (3, 7, 10, 11 and 13), which returned the "ab initio" hekat unity, 64/64.
In 2002, Hana Vymazalová obtained a fresh copy of the text from the Cairo Museum and confirmed that all five two-part answers were correctly checked for accuracy by the scribe, each returning the 64/64 hekat unity. Minor typographical errors in Daressy's copy of two problems, the division by 11 and 13 data, were corrected at this time. That all five divisions had been exact was suspected by Daressy but was not proven until 1906.
Mathematical content.
1/3 case.
The first problem divides 1 "hekat" by writing it as formula_0 + (5 "ro") (which equals 1) and dividing that expression by 3.
In modern mathematical notation, one might say that the scribe showed that 3 times the "hekat" fraction (1/4 + 1/16 + 1/64) is equal to 63/64, and that 3 times the remainder part, (1 + 2/3) "ro", is equal to 5 "ro", which is equal to 1/64 of a "hekat", which sums to the initial hekat unity (64/64).
Other fractions.
The other problems on the tablets were computed by the same technique. The scribe used the identity 1 "hekat" = 320 "ro" and divided 64 by 7, 10, 11 and 13. For instance, in the 1/11 computation, the division of 64 by 11 gave 5 with a remainder 45/11 "ro". This was equivalent to (1/16 + 1/64) "hekat" + (4 + 1/11) "ro". Checking the work required the scribe to multiply the two-part number by 11, which gave the result 63/64 + 1/64 = 64/64, as all five proofs reported.
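The scribe's five checks can be reproduced exactly with rational arithmetic. The sketch below (a hypothetical use of Python's fractions module; the quotient-and-remainder splitting simply restates the description above) confirms that each two-part answer, multiplied back by its divisor, returns the 64/64 hekat unity:

```python
from fractions import Fraction as F

RO = F(1, 320)   # 1 ro = 1/320 hekat, so 1/64 hekat = 5 ro

for n in [3, 7, 10, 11, 13]:
    q, r = divmod(64, n)                  # quotient in 1/64 hekat units, plus remainder
    answer = F(q, 64) + F(r, n) * 5 * RO  # binary-quotient part + remainder scaled in ro
    assert n * answer == 1                # multiplying back returns the 64/64 hekat unity
    print(n, answer == F(1, n))           # each two-part answer is exactly 1/n
```

For "n" = 3 this reproduces the first problem: the quotient 21/64 is the Eye of Horus part 1/4 + 1/16 + 1/64, and the remainder term equals (1 + 2/3) "ro".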
Accuracy.
The computations show several minor mistakes. For instance, in the 1/7 computations, formula_3 was said to be 12 and the double of that 24 in all of the copies of the problem. The mistake takes place in exactly the same place in each of the versions of this problem, but the scribe manages to find the correct answer in spite of this error since the 64/64 hekat unity guided his thinking. The fourth copy of the 1/7 division contains an extra minor error in one of the lines.
The 1/11 computation occurs four times and the problems appear right next to one another, leaving the impression that the scribe was practicing the computation procedure. The 1/13 computation appears once in its complete form and twice more with only partial computations. There are errors in the computations, but the scribe does find the correct answer. 1/10 is the only fraction computed only once. There are no mistakes in the computations for this problem.
Hekat problems in other texts.
The Rhind Mathematical Papyrus (RMP) contained over 60 examples of "hekat" multiplication and division in RMP 35, 36, 37, 38, 47, 80, 81, 82, 83 and 84. The problems were different since the hekat unity was changed from the 64/64 binary hekat and ro remainder standard as needed to a second 320/320 standard recorded in 320 ro statements. Some examples include:
The "Ebers Papyrus" is a famous late Middle Kingdom medical text. Its raw data were written in "hekat" one-parts suggested by the Akhmim wooden tablets, handling divisors greater than 64.
References.
Other: | [
{
"math_id": 0,
"text": "1/2 + 1/4 + 1/8 + 1/16 + 1/32 + 1/64"
},
{
"math_id": 1,
"text": "1/4 + 1/16 + 1/64"
},
{
"math_id": 2,
"text": "1/4 + 1/16 + 1/64 + (1 + 2/3) ro"
},
{
"math_id": 3,
"text": " 2 \\times 7"
}
] | https://en.wikipedia.org/wiki?curid=9057679 |
905850 | Supermodular function | Mathematical function class
In mathematics, a function
formula_0
is supermodular if
formula_1
for all formula_2, formula_3, where formula_4 denotes the componentwise maximum and formula_5 the componentwise minimum of formula_2 and formula_6.
If −"f" is supermodular then "f" is called submodular, and if the inequality is changed to an equality the function is modular.
If "f" is twice continuously differentiable, then supermodularity is equivalent to the condition
formula_7
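The lattice inequality above can be spot-checked numerically on sample points; the following sketch is a sampling test rather than a proof, and the two example functions ("xy", which is supermodular, and "-xy", which is submodular) are only illustrations:

```python
import numpy as np

def looks_supermodular(f, points, tol=1e-12):
    """Check f(x v y) + f(x ^ y) >= f(x) + f(y) on all pairs of sample points."""
    for x in points:
        for y in points:
            join, meet = np.maximum(x, y), np.minimum(x, y)   # componentwise max / min
            if f(join) + f(meet) < f(x) + f(y) - tol:
                return False
    return True

rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(50, 2))
print(looks_supermodular(lambda z: z[0] * z[1], pts))    # True
print(looks_supermodular(lambda z: -z[0] * z[1], pts))   # False
```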
Supermodularity in economics and game theory.
The concept of supermodularity is used in the social sciences to analyze how one agent's decision affects the incentives of others.
Consider a symmetric game with a smooth payoff function formula_8 defined over actions formula_9 of two or more players formula_10. Suppose the action space is continuous; for simplicity, suppose each action is chosen from an interval: formula_11. In this context, supermodularity of formula_8 implies that an increase in player formula_12's choice formula_9 increases the marginal payoff formula_13 of action formula_14 for all other players formula_15. That is, if any player formula_12 chooses a higher formula_9, all other players formula_15 have an incentive to raise their choices formula_14 too. Following the terminology of Bulow, Geanakoplos, and Klemperer (1985), economists call this situation strategic complementarity, because players' strategies are complements to each other. This is the basic property underlying examples of multiple equilibria in coordination games.
The opposite case of supermodularity of formula_8, called submodularity, corresponds to the situation of strategic substitutability. An increase in formula_9 lowers the marginal payoff to all other player's choices formula_14, so strategies are substitutes. That is, if formula_12 chooses a higher formula_9, other players have an incentive to pick a "lower" formula_14.
For example, Bulow et al. consider the interactions of many imperfectly competitive firms. When an increase in output by one firm raises the marginal revenues of the other firms, production decisions are strategic complements. When an increase in output by one firm lowers the marginal revenues of the other firms, production decisions are strategic substitutes.
A supermodular utility function is often related to complementary goods. However, this view is disputed.
Submodular functions of subsets.
Supermodularity and submodularity are also defined for functions defined over subsets of a larger set. Intuitively, a submodular function over the subsets demonstrates "diminishing returns". There are specialized techniques for optimizing submodular functions.
Let "S" be a finite set. A function formula_16 is submodular if for any formula_17 and formula_18, formula_19. For supermodularity, the inequality is reversed.
The definition of submodularity can equivalently be formulated as
formula_20
for all subsets "A" and "B" of "S".
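For a small ground set the diminishing-returns definition can be checked exhaustively. The sketch below uses a hypothetical coverage function as its example; coverage functions are a standard instance of submodularity:

```python
from itertools import chain, combinations

def subsets(s):
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def is_submodular(f, ground):
    """Check f(A + x) - f(A) >= f(B + x) - f(B) for all A subset of B and x outside B."""
    for A in subsets(ground):
        for B in subsets(ground):
            if not A <= B:
                continue
            for x in ground - B:
                if f(A | {x}) - f(A) < f(B | {x}) - f(B):
                    return False
    return True

ground = frozenset(range(5))
covers = {0: {1, 2}, 1: {2, 3}, 2: {3, 4}, 3: {4, 5}, 4: {5, 6}}   # hypothetical data
f = lambda S: len(set().union(*(covers[i] for i in S))) if S else 0
print(is_submodular(f, ground))   # True: "number of elements covered" has diminishing returns
```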
Theory and enumeration algorithms for finding local and global maxima (minima) of submodular (supermodular) functions can be found in "Maximization of submodular functions: Theory and enumeration algorithms", B. Goldengorin.
Notes and references.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f\\colon \\mathbb{R}^k \\to \\mathbb{R}"
},
{
"math_id": 1,
"text": "\nf(x \\uparrow y) + f(x \\downarrow y) \\geq f(x) + f(y)\n"
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "y \\isin \\mathbb{R}^{k}"
},
{
"math_id": 4,
"text": "x \\uparrow y"
},
{
"math_id": 5,
"text": "x \\downarrow y"
},
{
"math_id": 6,
"text": "y"
},
{
"math_id": 7,
"text": " \\frac{\\partial ^2 f}{\\partial z_i\\, \\partial z_j} \\geq 0 \\mbox{ for all } i \\neq j."
},
{
"math_id": 8,
"text": "\\,f"
},
{
"math_id": 9,
"text": "\\,z_i"
},
{
"math_id": 10,
"text": "i \\in {1,2,\\dots,N}"
},
{
"math_id": 11,
"text": "z_i \\in [a,b]"
},
{
"math_id": 12,
"text": "\\,i"
},
{
"math_id": 13,
"text": "df/dz_j"
},
{
"math_id": 14,
"text": "\\,z_j"
},
{
"math_id": 15,
"text": "\\,j"
},
{
"math_id": 16,
"text": "f\\colon 2^S \\to \\mathbb{R}"
},
{
"math_id": 17,
"text": "A \\subset B \\subset S"
},
{
"math_id": 18,
"text": "x \\in S \\setminus B"
},
{
"math_id": 19,
"text": "f(A \\cup \\{x\\})-f(A) \\geq f(B \\cup \\{x\\})-f(B)"
},
{
"math_id": 20,
"text": " f(A)+f(B) \\geq f(A \\cap B) + f(A \\cup B) "
}
] | https://en.wikipedia.org/wiki?curid=905850 |
9062245 | T-criterion | The T-failure criterion is a set of material failure criteria that can be used to predict both brittle and ductile failure.
These criteria were designed as a replacement for the von Mises yield criterion which predicts the unphysical result that pure hydrostatic tensile loading of metals never leads to failure. The T-criteria use the volumetric stress in addition to the deviatoric stress used by the von Mises criterion and are similar to the Drucker Prager yield criterion. T-criteria have been designed on the basis of energy considerations and the observation that the reversible elastic energy density storage process has a limit which can be used to determine when a material has failed.
Description.
Only in the case of pure shear does the strain energy density stored in the material, calculated as the area under the formula_0-formula_1 curve, represent the total amount of energy stored. In all other cases, there is a divergence between the actual and calculated stored energy in the material, which is maximum in the case of pure hydrostatic loading, where, according to the von Mises criterion, no energy is stored. This paradox is resolved if a second constitutive equation is introduced that relates hydrostatic pressure p with the volume change formula_2. These two curves, namely formula_3 and (p-formula_2), are essential for a complete description of material behaviour up to failure. Thus, two criteria must be accounted for when considering failure, along with two constitutive equations that describe the material response. According to this criterion, an upper limit to allowable strains is set either by a critical value ΤV,0 of the elastic energy density due to volume change (dilatational energy) or by a critical value ΤD,0 of the elastic energy density due to change in shape (distortional energy). The volume of material is considered to have failed by extensive plastic flow when the distortional energy Τd reaches the critical value ΤD,0 or by extensive dilatation when the dilatational energy Τv reaches a critical value ΤV,0. The two critical values ΤD,0 and ΤV,0 are considered material constants independent of the shape of the volume of material considered and the induced loading, but dependent on the strain rate and temperature.
Deployment for Isotropic Metals.
For the development of the criterion, a continuum mechanics approach is adopted. The material volume is considered to be a continuous medium with no particular form or manufacturing defect. It is also considered to behave as a linear elastic isotropically hardening material, where stresses and strains are related by the generalized Hooke's law and by the incremental theory of plasticity with the von Mises flow rule. For such materials, the following assumptions are considered to hold:
(a) The total increment of a strain component formula_4 is decomposed into an elastic increment formula_5 and a plastic increment formula_6:
formula_7 (1)
(b) The elastic strain increment formula_5 is given by Hooke’s law:
formula_8(2)
where formula_9the shear modulus, formula_10 the Poisson’s ratio and formula_11 the Krönecker delta.
(c) The plastic strain increment formula_6 is proportional to the respective deviatoric stress:
formula_12(3)
where formula_13 and formula_14 is an infinitesimal scalar. (3) implies that the plastic strain increment:
(d) The increment in plastic work per unit volume, using (3), is:
formula_15 (4)
and the increment in strain energy, formula_16, equals the total differential of the potential formula_17:
formula_18(5)
where
formula_19, formula_20 and for metals following the von Mises yield law, by definition
formula_21(6)
formula_22(7)
are the equivalent stress and strain respectively.
In (5) the first term of the right-hand side, formula_23, is the increment in elastic energy for unit volume change due to hydrostatic pressure. Its integral over a load path is the total amount of dilatational strain energy density stored in the material. The second term, formula_24, is the energy required for an infinitesimal distortion of the material. The integral of this quantity is the distortional strain energy density. The theory of plastic flow permits the evaluation of stresses, strains and strain energy densities along a path provided that formula_14 in (3) is known. In elasticity, linear or nonlinear, formula_14 vanishes. In the case of strain hardening materials, formula_14 can be evaluated by recording the formula_25 curve in a pure shear experiment. The hardening function after point "y" in Figure 1 is then:
formula_26(8)
and the infinitesimal scalar formula_14 is:
formula_27 (9)
where formula_28 is the infinitesimal increase in plastic work (see Figure 1). The elastic part of the total distortional strain energy density is:
formula_29 (10)
where formula_30 is the elastic part of the equivalent strain. When there is no nonlinear elastic behaviour, integrating (10) gives the elastic distortional strain energy density:
formula_31 (11)
Similarly, by integrating the increment in elastic energy for unit volume change due to hydrostatic pressure, formula_32, the dilatational strain energy density is:
formula_33 (12)
assuming that the unit volume change formula_34 is the elastic straining, proportional to the hydrostatic pressure p (Figure 2): formula_35 or formula_36 (13)
where formula_19, formula_20 and formula_37 the bulk modulus of the material.
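As an illustration of how (11) and (12) are used, the two elastic strain energy densities can be evaluated for a given stress state and compared with the thresholds ΤD,0 and ΤV,0. The elastic constants and the stress tensor in the Python sketch below are hypothetical values, and the equivalent stress is computed with the standard von Mises expression:

```python
import numpy as np

# Hypothetical elastic constants (steel-like) and a hypothetical stress state in MPa.
E, nu = 200e3, 0.3
G, K = E / (2 * (1 + nu)), E / (3 * (1 - 2 * nu))
sigma = np.array([[250.0,  40.0,   0.0],
                  [ 40.0, 100.0,   0.0],
                  [  0.0,   0.0,  50.0]])

p = np.trace(sigma) / 3.0                  # hydrostatic pressure
s = sigma - p * np.eye(3)                  # deviatoric stress
sigma_eq = np.sqrt(1.5 * np.sum(s * s))    # von Mises equivalent stress

T_D = sigma_eq**2 / (6 * G)                # distortional strain energy density, eq. (11)
T_V = p**2 / (2 * K)                       # dilatational strain energy density, eq. (12)
print(T_D, T_V)                            # to be compared with the material's T_D,0 and T_V,0
```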
In summary, in order to use (12) and (13) to determine the failure of a material volume, the following assumptions hold:
Limitations.
The criterion will not predict any failure due to distortion for elastic-perfectly plastic, rigid-plastic, or strain softening materials. For the case of nonlinear elasticity, appropriate calculations for the integrals in (12) and (13), accounting for the nonlinear elastic material properties, must be performed. The two threshold values for the elastic strain energy formula_38 and formula_39 are derived from experimental data. A drawback of the criterion is that elastic strain energy densities are small and comparatively hard to derive. Nevertheless, example values are presented in the literature as well as applications where the T-criterion appears to perform quite well. | [
{
"math_id": 0,
"text": "\\bar{\\sigma}"
},
{
"math_id": 1,
"text": "\\bar{\\epsilon}"
},
{
"math_id": 2,
"text": "\\Theta"
},
{
"math_id": 3,
"text": "\\bar{\\sigma}-\\bar{\\epsilon}"
},
{
"math_id": 4,
"text": "d\\epsilon_{i,j}"
},
{
"math_id": 5,
"text": "d\\epsilon_{i,j}^e"
},
{
"math_id": 6,
"text": "d\\epsilon_{i,j}^p"
},
{
"math_id": 7,
"text": "d\\epsilon_{i,j}=d\\epsilon_{i,j}^e+d\\epsilon_{i,j}^p"
},
{
"math_id": 8,
"text": "d\\epsilon_{i,j}^e=\\cfrac{1}{2G}(d{\\sigma}_{i,j}-\\cfrac{3\\nu}{1+{\\nu}}{\\delta}_{i,j}dp)"
},
{
"math_id": 9,
"text": "G=\\cfrac{E}{2(1+\\nu)}"
},
{
"math_id": 10,
"text": "\\nu"
},
{
"math_id": 11,
"text": "{\\delta}_{i,j}"
},
{
"math_id": 12,
"text": "d\\epsilon_{i,j}^p=s_{i,j}d{\\lambda}"
},
{
"math_id": 13,
"text": "s_{i,j}={\\sigma}_{i,j}-{\\delta}_{i,j}p"
},
{
"math_id": 14,
"text": "d{\\lambda}"
},
{
"math_id": 15,
"text": "dw_{p}={\\sigma}_{i,j}d{\\epsilon}_{i,j}^p={\\sigma}_{i,j}s_{i,j}d{\\lambda}"
},
{
"math_id": 16,
"text": "dT"
},
{
"math_id": 17,
"text": "{\\Pi}"
},
{
"math_id": 18,
"text": "dT=d{\\Pi}=pd{\\Theta}+{\\sigma}d{\\epsilon}=dT_{V}+dT_{D}^*"
},
{
"math_id": 19,
"text": "{\\Theta}={\\epsilon}_{11}+{\\epsilon}_{22}+{\\epsilon}_{33}"
},
{
"math_id": 20,
"text": "p=\\cfrac{1}{3}({\\sigma}_{11}+{\\sigma}_{22}+{\\sigma}_{33})"
},
{
"math_id": 21,
"text": "\\bar{\\sigma}=\\cfrac{1}{2}\\sqrt{2}[({\\sigma}_{11}-{\\sigma}_{22})^2+({\\sigma}_{22}-{\\sigma}_{33})^2+({\\sigma}_{33}-{\\sigma}_{11})^2]^{1/2}"
},
{
"math_id": 22,
"text": "\\bar{\\epsilon}=\\cfrac{1}{2}\\sqrt{2}[({\\epsilon}_{11}-{\\epsilon}_{22})^2+({\\epsilon}_{22}-{\\epsilon}_{33})^2+({\\epsilon}_{33}-{\\epsilon}_{11})^2]^{1/2}"
},
{
"math_id": 23,
"text": "dT_{V}=pd\\Theta"
},
{
"math_id": 24,
"text": "dT_{D}^*=\\bar{\\sigma}d\\bar{\\epsilon}"
},
{
"math_id": 25,
"text": "\\bar{\\sigma}=\\bar{\\sigma}(\\bar{\\epsilon})"
},
{
"math_id": 26,
"text": "H(\\bar{\\sigma},\\bar{\\epsilon})=\\cfrac{d\\bar{\\sigma}}{d\\bar{\\epsilon}}"
},
{
"math_id": 27,
"text": "d{\\lambda}=\\cfrac{3}{2\\bar\\sigma^2}dw_p(H)"
},
{
"math_id": 28,
"text": "dw_p(H)"
},
{
"math_id": 29,
"text": "dT_D=\\bar{\\sigma}d\\bar{\\epsilon}^e"
},
{
"math_id": 30,
"text": "\\bar{\\epsilon}^e"
},
{
"math_id": 31,
"text": "T_D=\\int\\bar{\\sigma}d\\bar{\\epsilon}^e=\\cfrac{1}{6G}\\bar{\\sigma}^2"
},
{
"math_id": 32,
"text": "dT_V=pd\\Theta"
},
{
"math_id": 33,
"text": "T_V=\\int{pd\\Theta}=\\cfrac{1}{2K}p^2=\\cfrac{1}{2}K{\\Theta}^2"
},
{
"math_id": 34,
"text": "{\\Theta}"
},
{
"math_id": 35,
"text": "{\\Theta}=\\cfrac{1}{K}p"
},
{
"math_id": 36,
"text": "d{\\Theta}=\\cfrac{1}{K}dp"
},
{
"math_id": 37,
"text": "K=\\cfrac{E}{3(1-2\\nu)}"
},
{
"math_id": 38,
"text": "T_{V,0}"
},
{
"math_id": 39,
"text": "T_{D,0}"
}
] | https://en.wikipedia.org/wiki?curid=9062245 |
906601 | Source-synchronous | Technique used for timing symbols on a digital interface
Source-Synchronous clocking refers to a technique used for timing symbols on a digital interface. Specifically, it refers to the technique of having the transmitting device send a clock signal along with the data signals. The timing of the unidirectional data signals is referenced to the clock (often called the strobe) sourced by the same device that generates those signals, and not to a global clock (i.e. generated by a bus master). Compared to other digital clocking topologies like system-synchronous clocks, where a global clock source is fed to all devices in the system, a source-synchronous clock topology can attain far higher speeds.
This type of clocking is common in high-speed interfaces between micro-chips, including DDR SDRAM, SGI XIO interface, Intel Front Side Bus for the x86 and Itanium processors, HyperTransport, SPI-4.2 and many others.
Reasons for usage.
A reason that source-synchronous clocking is useful is that it has been observed that all of the circuits within a given semiconductor device experience roughly the same process-voltage-temperature (PVT) variation. This means signal propagation delay experienced by the data through a device tracks the delay experienced by the clock through that same device over PVT. This advantage allows higher speed operation as compared to the traditional technique of providing the clock from a third device to both the transmitter and the receiver. Another benefit is that higher complexity data-recovery or clock-data-recovery circuits (such as PLLs) are not required when this technique is used.
Alternatively, rather than higher clock speeds, large systems that take advantage of source-synchronous clocking can benefit from a higher tolerance of PVT variation in their individual components.
Timing Analysis.
Synchronous logic elements such as flip-flops have static timing criteria that must be satisfied in order for them to work correctly. In a system-synchronous clock topology where a skew-aligned clock is fed to all devices, the criteria are
formula_0
A source-synchronous clock topology eliminates two of these factors, formula_1 and formula_2. The former is eliminated since both clock and data signals are driven by identical flip-flops on the same silicon at the same temperature and voltage, thereby equalizing the formula_1 seen by both clock and data. The latter is eliminated for the same reason - since the clock and data are driven by identical devices and (ideally) connected with wires of equal length, the skew between clock and data is greatly reduced. For this reason, formula_3 can be reduced significantly. Since frequency is inversely proportional to clock period, the clock frequency increases as a result.
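A toy budget calculation can illustrate the point. The numbers below are hypothetical, and dropping the two terms follows the simplified claim above rather than a detailed interface analysis:

```python
# Hypothetical timing numbers in nanoseconds.
t_setup, t_ko, t_skew = 0.5, 1.2, 0.8

t_clock_system = t_setup + t_ko + t_skew   # system-synchronous budget
t_clock_source = t_setup                   # source-synchronous budget (t_ko, t_skew dropped)

for name, t_clock in [("system-synchronous", t_clock_system),
                      ("source-synchronous", t_clock_source)]:
    print(f"{name}: T_clock > {t_clock:.1f} ns, i.e. below {1e3 / t_clock:.0f} MHz")
```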
Drawbacks.
One drawback of using source-synchronous clocking is the creation of a separate clock-domain at the receiving device, namely the clock-domain of the strobe generated by the transmitting device. This strobe clock-domain is often not synchronous to the core clock domain of the receiving device. For proper operation of the received data with other data already present in the device, an additional stage of synchronization logic is required to transfer the received data into the core clock-domain of the receiving device. This stage can often be found alongside source synchronous logic. This usually results in greater system complexity compared to globally clocked systems, but the benefits are generally much greater than this increase in complexity.
Implementation Variations.
In bi-directional data transfer buses, two opposing unidirectional strobes can be used, one sent from each device. Often the strobe is free running in this case; that is, the strobe continues to toggle whether or not data is being transferred.
Another variation is the sharing of the same bus to transfer the strobe. In this case the strobe can only be transferred by the device that is sending the data and may require transmission of preambles and postambles to indicate the start and end of the strobes. (Example: DDR2).
In large ASICs or processors, multiple strobes and data groups (data bits that are associated to the same strobe) may exist between the same two devices to account for the slightly different PVT variations in different regions of the same die. | [
{
"math_id": 0,
"text": "T_{clock} > T_{setup} + T_{ko} + T_{skew}"
},
{
"math_id": 1,
"text": "T_{ko}"
},
{
"math_id": 2,
"text": "T_{skew}"
},
{
"math_id": 3,
"text": "T_{clock}"
}
] | https://en.wikipedia.org/wiki?curid=906601 |
9067 | Division ring | Algebraic structure also called skew field
In algebra, a division ring, also called a skew field, is a nontrivial ring in which division by nonzero elements is defined. Specifically, it is a nontrivial ring in which every nonzero element a has a multiplicative inverse, that is, an element usually denoted "a"–1, such that "a"·"a"–1 = "a"–1·"a" = 1. So, (right) "division" may be defined as "a" / "b" = "a"·"b"–1, but this notation is avoided, as one may have "a"·"b"–1 ≠ "b"–1·"a".
A commutative division ring is a field. Wedderburn's little theorem asserts that all finite division rings are commutative and therefore finite fields.
Historically, division rings were sometimes referred to as fields, while fields were called "commutative fields". In some languages, such as French, the word equivalent to "field" ("corps") is used for both commutative and noncommutative cases, and the distinction between the two cases is made by adding qualifiers such as "corps commutatif" (commutative field) or "corps gauche" (skew field).
All division rings are simple. That is, they have no two-sided ideal besides the zero ideal and the ring itself.
Relation to fields and linear algebra.
All fields are division rings, and every non-field division ring is noncommutative. The best known example is the ring of quaternions. If one allows only rational instead of real coefficients in the constructions of the quaternions, one obtains another division ring. In general, if "R" is a ring and "S" is a simple module over "R", then, by Schur's lemma, the endomorphism ring of "S" is a division ring; every division ring arises in this fashion from some simple module.
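A small numerical illustration of the quaternion example: every nonzero quaternion has a two-sided multiplicative inverse, yet multiplication is not commutative. The helper functions below are a hypothetical sketch, not a standard library API:

```python
def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qinv(q):
    """Inverse of a nonzero quaternion: conjugate divided by the squared norm."""
    n2 = sum(c * c for c in q)
    return (q[0] / n2, -q[1] / n2, -q[2] / n2, -q[3] / n2)

i, j = (0, 1, 0, 0), (0, 0, 1, 0)
print(qmul(i, j), qmul(j, i))   # (0, 0, 0, 1) versus (0, 0, 0, -1): ij = k but ji = -k
a = (1.0, 2.0, -1.0, 0.5)
print(qmul(a, qinv(a)))         # approximately (1, 0, 0, 0): a times its inverse is 1
```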
Much of linear algebra may be formulated, and remains correct, for modules over a division ring "D" instead of vector spaces over a field. Doing so, one must specify whether one is considering right or left modules, and some care is needed in properly distinguishing left and right in formulas. In particular, every module has a basis, and Gaussian elimination can be used. So, everything that can be defined with these tools works on division algebras. Matrices and their products are defined similarly. However, a matrix that is left invertible need not be right invertible, and if it is, its right inverse can differ from its left inverse.
Determinants are not defined over noncommutative division algebras, and everything that requires this concept cannot be generalized to noncommutative division algebras.
Working in coordinates, elements of a finite-dimensional right module can be represented by column vectors, which can be multiplied on the right by scalars, and on the left by matrices (representing linear maps); for elements of a finite-dimensional left module, row vectors must be used, which can be multiplied on the left by scalars, and on the right by matrices. The dual of a right module is a left module, and vice versa. The transpose of a matrix must be viewed as a matrix over the opposite division ring "D"op in order for the rule ("AB")T = "B"T"A"T to remain valid.
Every module over a division ring is free; that is, it has a basis, and all bases of a module have the same number of elements. Linear maps between finite-dimensional modules over a division ring can be described by matrices; the fact that linear maps by definition commute with scalar multiplication is most conveniently represented in notation by writing them on the "opposite" side of vectors as scalars are. The Gaussian elimination algorithm remains applicable. The column rank of a matrix is the dimension of the right module generated by the columns, and the row rank is the dimension of the left module generated by the rows; the same proof as for the vector space case can be used to show that these ranks are the same and define the rank of a matrix.
Division rings are the only rings over which every module is free: a ring "R" is a division ring if and only if every "R"-module is free.
The center of a division ring is commutative and therefore a field. Every division ring is therefore a division algebra over its center. Division rings can be roughly classified according to whether or not they are finite dimensional or infinite dimensional over their centers. The former are called "centrally finite" and the latter "centrally infinite". Every field is one dimensional over its center. The ring of Hamiltonian quaternions forms a four-dimensional algebra over its center, which is isomorphic to the real numbers.
Main theorems.
Wedderburn's little theorem: All finite division rings are commutative and therefore finite fields. (Ernst Witt gave a simple proof.)
Frobenius theorem: The only finite-dimensional associative division algebras over the reals are the reals themselves, the complex numbers, and the quaternions.
Related notions.
Division rings "used to be" called "fields" in an older usage. In many languages, a word meaning "body" is used for division rings, in some languages designating either commutative or noncommutative division rings, while in others specifically designating commutative division rings (what we now call fields in English). A more complete comparison is found in the article on fields.
The name "skew field" has an interesting semantic feature: a modifier (here "skew") "widens" the scope of the base term (here "field"). Thus a field is a particular type of skew field, and not all skew fields are fields.
While division rings and algebras as discussed here are assumed to have associative multiplication, nonassociative division algebras such as the octonions are also of interest.
A near-field is an algebraic structure similar to a division ring, except that it has only one of the two distributive laws.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
External links.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\sigma: \\Complex \\to \\Complex"
},
{
"math_id": 1,
"text": "\\Complex"
},
{
"math_id": 2,
"text": "\\Complex((z,\\sigma))"
},
{
"math_id": 3,
"text": "z"
},
{
"math_id": 4,
"text": "\\alpha\\in\\Complex"
},
{
"math_id": 5,
"text": "z^i\\alpha := \\sigma^i(\\alpha) z^i"
},
{
"math_id": 6,
"text": "i\\in\\mathbb{Z}"
},
{
"math_id": 7,
"text": "\\sigma"
},
{
"math_id": 8,
"text": "F"
}
] | https://en.wikipedia.org/wiki?curid=9067 |
906703 | Clenshaw algorithm | In numerical analysis, the Clenshaw algorithm, also called Clenshaw summation, is a recursive method to evaluate a linear combination of Chebyshev polynomials. The method was published by Charles William Clenshaw in 1955. It is a generalization of Horner's method for evaluating a linear combination of monomials.
It generalizes to more than just Chebyshev polynomials; it applies to any class of functions that can be defined by a three-term recurrence relation.
Clenshaw algorithm.
In full generality, the Clenshaw algorithm computes the weighted sum of a finite series of functions formula_0:
formula_1
where formula_2 is a sequence of functions that satisfy the linear recurrence relation
formula_3
where the coefficients formula_4 and formula_5 are known in advance.
The algorithm is most useful when formula_0 are functions that are complicated to compute directly, but formula_4 and formula_5 are particularly simple. In the most common applications, formula_6 does not depend on formula_7, and formula_8 is a constant that depends on neither formula_9 nor formula_7.
To perform the summation for given series of coefficients formula_10, compute the values formula_11 by the "reverse" recurrence formula:
formula_12
Note that this computation makes no direct reference to the functions formula_0. After computing formula_13 and formula_14,
the desired sum can be expressed in terms of them and the simplest functions formula_15 and formula_16:
formula_17
See Fox and Parker for more information and stability analyses.
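The reverse recurrence translates directly into code. The following Python sketch is a generic implementation (the function name and signature are illustrative); the monomial-basis call at the end reproduces Horner's rule, the special case discussed next:

```python
def clenshaw(a, alpha, beta, phi0, phi1, x):
    """Evaluate S(x) = sum_{k=0}^{n} a[k] * phi_k(x) by the Clenshaw recurrence.

    alpha(k, x), beta(k, x): coefficients of the recurrence
        phi_{k+1}(x) = alpha(k, x) * phi_k(x) + beta(k, x) * phi_{k-1}(x).
    phi0, phi1: the values phi_0(x) and phi_1(x).
    """
    n = len(a) - 1
    b_k1 = b_k2 = 0.0                      # b_{n+1}(x) and b_{n+2}(x)
    for k in range(n, 0, -1):              # k = n, n-1, ..., 1
        b_k = a[k] + alpha(k, x) * b_k1 + beta(k + 1, x) * b_k2
        b_k1, b_k2 = b_k, b_k1
    return phi0 * a[0] + phi1 * b_k1 + beta(1, x) * phi0 * b_k2

# Example: 1 + 2x + 3x^2 in the monomial basis (alpha = x, beta = 0), at x = 2.
print(clenshaw([1, 2, 3], lambda k, x: x, lambda k, x: 0.0,
               phi0=1.0, phi1=2.0, x=2.0))   # 17.0, with phi_1(x) = x = 2
```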
Examples.
Horner as a special case of Clenshaw.
A particularly simple case occurs when evaluating a polynomial of the form
formula_18
The functions are simply
formula_19
and are produced by the recurrence coefficients formula_20 and formula_21.
In this case, the recurrence formula to compute the sum is
formula_22
and, in this case, the sum is simply
formula_23
which is exactly the usual Horner's method.
Special case for Chebyshev series.
Consider a truncated Chebyshev series
formula_24
The coefficients in the recursion relation for the Chebyshev polynomials are
formula_25
with the initial conditions
formula_26
Thus, the recurrence is
formula_27
and the final sum is
formula_28
One way to evaluate this is to continue the recurrence one more step, and compute
formula_29
(note the doubled "a"0 coefficient) followed by
formula_30
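A Python sketch of this special case, cross-checked against NumPy's Chebyshev evaluator:

```python
import numpy as np

def chebyshev_clenshaw(a, x):
    """Evaluate a[0] + a[1]*T_1(x) + ... + a[n]*T_n(x) by Clenshaw's recurrence."""
    b_k1 = b_k2 = 0.0                                # b_{n+1} and b_{n+2}
    for k in range(len(a) - 1, 0, -1):               # k = n, ..., 1
        b_k1, b_k2 = a[k] + 2 * x * b_k1 - b_k2, b_k1
    return a[0] + x * b_k1 - b_k2

a = [1.0, -0.5, 0.25, 0.125]
x = 0.3
print(chebyshev_clenshaw(a, x))
print(np.polynomial.chebyshev.chebval(x, a))         # should agree
```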
Meridian arc length on the ellipsoid.
Clenshaw summation is extensively used in geodetic applications. A simple application is summing the trigonometric series to compute the meridian arc distance on the surface of an ellipsoid. These have the form
formula_31
Leaving off the initial formula_32 term, the remainder is a summation of the appropriate form. There is no leading term because formula_33.
The recurrence relation for formula_34 is
formula_35
making the coefficients in the recursion relation
formula_36
and the evaluation of the series is given by
formula_37
The final step is made particularly simple because formula_38, so the end of the recurrence is simply formula_39; the formula_32 term is added separately:
formula_40
Note that the algorithm requires only the evaluation of two trigonometric quantities formula_41 and formula_42.
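A Python sketch of this sine-series summation; the coefficients below are placeholders rather than values for a real ellipsoid, and the direct summation is included only as a cross-check:

```python
import math

def meridian_arc(theta, C):
    """Sum C[0]*theta + C[1]*sin(theta) + ... + C[n]*sin(n*theta) by Clenshaw."""
    c, s = math.cos(theta), math.sin(theta)   # the only two trig evaluations needed
    b_k1 = b_k2 = 0.0
    for k in range(len(C) - 1, 0, -1):        # k = n, ..., 1
        b_k1, b_k2 = C[k] + 2 * c * b_k1 - b_k2, b_k1
    return C[0] * theta + b_k1 * s

C = [1.0, -2.5e-3, 2.6e-6, -3.4e-9]           # placeholder coefficients
theta = 0.7
direct = C[0] * theta + sum(C[k] * math.sin(k * theta) for k in range(1, len(C)))
print(meridian_arc(theta, C), direct)          # the two values should agree
```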
Difference in meridian arc lengths.
Sometimes it is necessary to compute the difference of two meridian arcs in a way that maintains high relative accuracy. This is accomplished by using trigonometric identities to write
formula_43
Clenshaw summation can be applied in this case
provided we simultaneously compute formula_44
and perform a matrix summation,
formula_45
where
formula_46
The first element of formula_47 is the average
value of formula_48 and the second element is the average slope.
formula_49 satisfies the recurrence
relation
formula_50
where
formula_51
takes the place of formula_52 in the recurrence relation, and formula_53.
The standard Clenshaw algorithm can now be applied to yield
formula_54
where formula_55 are 2×2 matrices. Finally we have
formula_56
This technique can be used in the limit formula_57 and formula_58 to simultaneously compute formula_59 and the derivative formula_60, provided that, in evaluating formula_61 and formula_62, we take formula_63.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\phi_k(x)"
},
{
"math_id": 1,
"text": "S(x) = \\sum_{k=0}^n a_k \\phi_k(x)"
},
{
"math_id": 2,
"text": "\\phi_k,\\; k=0, 1, \\ldots"
},
{
"math_id": 3,
"text": "\\phi_{k+1}(x) = \\alpha_k(x)\\,\\phi_k(x) + \\beta_k(x)\\,\\phi_{k-1}(x),"
},
{
"math_id": 4,
"text": "\\alpha_k(x)"
},
{
"math_id": 5,
"text": "\\beta_k(x)"
},
{
"math_id": 6,
"text": "\\alpha(x)"
},
{
"math_id": 7,
"text": "k"
},
{
"math_id": 8,
"text": "\\beta"
},
{
"math_id": 9,
"text": "x"
},
{
"math_id": 10,
"text": "a_0, \\ldots, a_n"
},
{
"math_id": 11,
"text": "b_k(x)"
},
{
"math_id": 12,
"text": " \\begin{align}\n b_{n+1}(x) &= b_{n+2}(x) = 0, \\\\\n b_k(x) &= a_k + \\alpha_k(x)\\,b_{k+1}(x) + \\beta_{k+1}(x)\\,b_{k+2}(x).\n\\end{align} "
},
{
"math_id": 13,
"text": "b_2(x)"
},
{
"math_id": 14,
"text": "b_1(x)"
},
{
"math_id": 15,
"text": "\\phi_0(x)"
},
{
"math_id": 16,
"text": "\\phi_1(x)"
},
{
"math_id": 17,
"text": "S(x) = \\phi_0(x)\\,a_0 + \\phi_1(x)\\,b_1(x) + \\beta_1(x)\\,\\phi_0(x)\\,b_2(x)."
},
{
"math_id": 18,
"text": "S(x) = \\sum_{k=0}^n a_k x^k."
},
{
"math_id": 19,
"text": " \\begin{align}\n \\phi_0(x) &= 1, \\\\\n \\phi_k(x) &= x^k = x\\phi_{k-1}(x)\n\\end{align} "
},
{
"math_id": 20,
"text": "\\alpha(x) = x"
},
{
"math_id": 21,
"text": "\\beta = 0"
},
{
"math_id": 22,
"text": "b_k(x) = a_k + x b_{k+1}(x)"
},
{
"math_id": 23,
"text": "S(x) = a_0 + x b_1(x) = b_0(x),"
},
{
"math_id": 24,
"text": "p_n(x) = a_0 + a_1 T_1(x) + a_2 T_2(x) + \\cdots + a_n T_n(x)."
},
{
"math_id": 25,
"text": "\\alpha(x) = 2x, \\quad \\beta = -1,"
},
{
"math_id": 26,
"text": "T_0(x) = 1, \\quad T_1(x) = x."
},
{
"math_id": 27,
"text": "b_k(x) = a_k + 2xb_{k+1}(x) - b_{k+2}(x)"
},
{
"math_id": 28,
"text": "p_n(x) = a_0 + xb_1(x) - b_2(x)."
},
{
"math_id": 29,
"text": "b_0(x) = a_0 + 2xb_1(x) - b_2(x),"
},
{
"math_id": 30,
"text": "p_n(x) = \\tfrac{1}{2} \\left[a_0+b_0(x) - b_2(x)\\right]."
},
{
"math_id": 31,
"text": "m(\\theta) = C_0\\,\\theta + C_1\\sin \\theta + C_2\\sin 2\\theta + \\cdots + C_n\\sin n\\theta."
},
{
"math_id": 32,
"text": "C_0\\,\\theta"
},
{
"math_id": 33,
"text": "\\phi_0(\\theta) = \\sin 0\\theta = \\sin 0 = 0"
},
{
"math_id": 34,
"text": "\\sin k\\theta"
},
{
"math_id": 35,
"text": "\\sin (k+1)\\theta = 2 \\cos\\theta \\sin k\\theta - \\sin (k-1)\\theta,"
},
{
"math_id": 36,
"text": "\\alpha_k(\\theta) = 2\\cos\\theta, \\quad \\beta_k = -1."
},
{
"math_id": 37,
"text": "\\begin{align}\n b_{n+1}(\\theta) &= b_{n+2}(\\theta) = 0, \\\\\n b_k(\\theta) &= C_k + 2\\cos \\theta \\,b_{k+1}(\\theta) - b_{k+2}(\\theta),\\quad\\mathrm{for\\ } n\\ge k \\ge 1.\n\\end{align}"
},
{
"math_id": 38,
"text": "\\phi_0(\\theta) = \\sin 0 = 0"
},
{
"math_id": 39,
"text": "b_1(\\theta)\\sin(\\theta)"
},
{
"math_id": 40,
"text": "m(\\theta) = C_0\\,\\theta + b_1(\\theta)\\sin \\theta."
},
{
"math_id": 41,
"text": "\\cos \\theta"
},
{
"math_id": 42,
"text": "\\sin \\theta"
},
{
"math_id": 43,
"text": "\n m(\\theta_1)-m(\\theta_2) = C_0(\\theta_1-\\theta_2) + \\sum_{k=1}^n 2 C_k\n \\sin\\bigl({\\textstyle\\frac12}k(\\theta_1-\\theta_2)\\bigr)\n \\cos\\bigl({\\textstyle\\frac12}k(\\theta_1+\\theta_2)\\bigr).\n"
},
{
"math_id": 44,
"text": "m(\\theta_1)+m(\\theta_2)"
},
{
"math_id": 45,
"text": "\n \\mathsf M(\\theta_1,\\theta_2) = \\begin{bmatrix}\n (m(\\theta_1) + m(\\theta_2)) / 2\\\\\n (m(\\theta_1) - m(\\theta_2)) / (\\theta_1 - \\theta_2)\n \\end{bmatrix} =\n C_0 \\begin{bmatrix} \\mu \\\\ 1 \\end{bmatrix} +\n \\sum_{k=1}^n C_k \\mathsf F_k(\\theta_1,\\theta_2),\n"
},
{
"math_id": 46,
"text": " \\begin{align}\n \\delta &= \\tfrac{1}{2}(\\theta_1-\\theta_2), \\\\[1ex]\n \\mu &= \\tfrac{1}{2}(\\theta_1+\\theta_2), \\\\[1ex]\n \\mathsf F_k(\\theta_1,\\theta_2) &=\n \\begin{bmatrix}\n \\cos k \\delta \\sin k \\mu \\\\\n \\dfrac{\\sin k \\delta}\\delta \\cos k \\mu\n \\end{bmatrix}.\n\\end{align} "
},
{
"math_id": 47,
"text": "\\mathsf M(\\theta_1,\\theta_2)"
},
{
"math_id": 48,
"text": "m"
},
{
"math_id": 49,
"text": "\\mathsf F_k(\\theta_1,\\theta_2)"
},
{
"math_id": 50,
"text": "\n \\mathsf F_{k+1}(\\theta_1,\\theta_2) =\n \\mathsf A(\\theta_1,\\theta_2) \\mathsf F_k(\\theta_1,\\theta_2) -\n \\mathsf F_{k-1}(\\theta_1,\\theta_2),\n"
},
{
"math_id": 51,
"text": "\n \\mathsf A(\\theta_1,\\theta_2) = 2\\begin{bmatrix}\n \\cos \\delta \\cos \\mu & -\\delta\\sin \\delta \\sin \\mu \\\\\n - \\displaystyle\\frac{\\sin \\delta}\\delta \\sin \\mu & \\cos \\delta \\cos \\mu\n \\end{bmatrix}\n"
},
{
"math_id": 52,
"text": "\\alpha"
},
{
"math_id": 53,
"text": "\\beta=-1"
},
{
"math_id": 54,
"text": " \\begin{align}\n \\mathsf B_{n+1} &= \\mathsf B_{n+2} = \\mathsf 0, \\\\[1ex]\n \\mathsf B_k &= C_k \\mathsf I + \\mathsf A \\mathsf B_{k+1} -\n \\mathsf B_{k+2}, \\qquad\\mathrm{for\\ } n\\ge k \\ge 1, \\\\[1ex]\n \\mathsf M(\\theta_1,\\theta_2) &=\n C_0 \\begin{bmatrix}\\mu\\\\1\\end{bmatrix} +\n \\mathsf B_1 \\mathsf F_1(\\theta_1,\\theta_2),\n\\end{align}"
},
{
"math_id": 55,
"text": "\\mathsf B_k"
},
{
"math_id": 56,
"text": "\n \\frac{m(\\theta_1) - m(\\theta_2)}{\\theta_1 - \\theta_2} =\n \\mathsf M_2(\\theta_1, \\theta_2).\n"
},
{
"math_id": 57,
"text": "\\theta_2 = \\theta_1 = \\mu"
},
{
"math_id": 58,
"text": " \\delta = 0 "
},
{
"math_id": 59,
"text": "m(\\mu)"
},
{
"math_id": 60,
"text": "dm(\\mu)/d\\mu"
},
{
"math_id": 61,
"text": "\\mathsf F_1"
},
{
"math_id": 62,
"text": "\\mathsf A"
},
{
"math_id": 63,
"text": "\\lim_{\\delta \\to 0} (\\sin k \\delta)/\\delta = k"
}
] | https://en.wikipedia.org/wiki?curid=906703 |
9067359 | Steiner's calculus problem | Steiner's problem, asked and answered by , is the problem of finding the maximum of the function
formula_0
It is named after Jakob Steiner.
The maximum is at formula_1, where "e" denotes the base of the natural logarithm. One can determine that by solving the equivalent problem of maximizing
formula_2
Applying the first derivative test, the derivative of formula_3 is
formula_4
so formula_5 is positive for formula_6 and negative for formula_7, which implies that formula_8 – and therefore formula_9 – is increasing for formula_6 and decreasing for formula_10 Thus, formula_11 is the unique global maximum of formula_12 | [
{
"math_id": 0,
"text": "f(x)=x^{1/x}.\\,"
},
{
"math_id": 1,
"text": "x = e"
},
{
"math_id": 2,
"text": "g(x) = \\ln f(x) = \\frac{\\ln x}{x}."
},
{
"math_id": 3,
"text": "g"
},
{
"math_id": 4,
"text": "g'(x) = \\frac{1-\\ln x}{x^2},"
},
{
"math_id": 5,
"text": "g'(x)"
},
{
"math_id": 6,
"text": "0<x<e"
},
{
"math_id": 7,
"text": "x>e"
},
{
"math_id": 8,
"text": "g(x)"
},
{
"math_id": 9,
"text": "f(x)"
},
{
"math_id": 10,
"text": "x>e."
},
{
"math_id": 11,
"text": "x=e"
},
{
"math_id": 12,
"text": "f(x)."
}
] | https://en.wikipedia.org/wiki?curid=9067359 |
9067941 | History of quantum mechanics | The history of quantum mechanics is a fundamental part of the . The major chapters of this history begin with the emergence of quantum ideas to explain individual phenomena—blackbody radiation, the photoelectric effect, solar emission spectra—an era called the Old or Older quantum theories. Building on the technology developed in classical mechanics, the invention of wave mechanics by Erwin Schrödinger and expansion by many others triggers the "modern" era beginning around 1925. Paul Dirac's relativistic quantum theory work lead him to explore quantum theories of radiation, culminating in quantum electrodynamics, the first quantum field theory. The history of quantum mechanics continues in the history of quantum field theory. The history of quantum chemistry, theoretical basis of chemical structure, reactivity, and bonding, interlaces with the events discussed in this article.
The phrase "quantum mechanics" was coined (in German, "Quantenmechanik") by the group of physicists including Max Born, Werner Heisenberg, and Wolfgang Pauli, at the University of Göttingen in the early 1920s, and was first used in Born's 1925 paper "Zur Quantenmechanik".
The word "quantum" comes from the Latin word for "how much" (as does "quantity"). Something that is "quantized", as the energy of Planck's harmonic oscillators, can only take specific values. For example, in most countries, money is effectively quantized, with the "quantum of money" being the lowest-value coin in circulation. Mechanics is the branch of science that deals with the action of forces on objects. So, quantum mechanics is the part of mechanics that deals with objects for which particular properties are quantized.
Triumph and trouble at the end of the classical era.
The discoveries of the 19th century, both the successes and failures, set the stage for the emergence of quantum mechanics.
Wave theory of light.
Beginning in 1670 and progressing over three decades, Isaac Newton developed and championed his corpuscular theory, arguing that the perfectly straight lines of reflection demonstrated light's particle nature, as at that time no wave theory demonstrated travel in straight lines. He explained refraction by positing that particles of light accelerated laterally upon entering a denser medium. Around the same time, Newton's contemporaries Robert Hooke and Christiaan Huygens, and later Augustin-Jean Fresnel, mathematically refined the wave viewpoint, showing that if light traveled at different speeds in different media, refraction could be easily explained as the medium-dependent propagation of light waves. The resulting Huygens–Fresnel principle was extremely successful at reproducing light's behaviour and was consistent with Thomas Young's discovery of wave interference of light by his double-slit experiment in 1801. The wave view did not immediately displace the ray and particle view, but began to dominate scientific thinking about light in the mid 19th century, since it could explain polarization phenomena that the alternatives could not.
James Clerk Maxwell discovered that he could apply his previously formulated equations, now known as Maxwell's equations, with a slight modification to describe self-propagating waves of oscillating electric and magnetic fields. It quickly became apparent that visible light, ultraviolet light, and infrared light were all electromagnetic waves of differing frequency. This theory became a critical ingredient in the beginning of quantum mechanics.
Emerging atomic theory.
During the early 19th century, chemical research by John Dalton and Amedeo Avogadro lent weight to the atomic theory of matter, an idea that James Clerk Maxwell, Ludwig Boltzmann and others built upon to establish the kinetic theory of gases. The successes of kinetic theory gave further credence to the idea that matter is composed of atoms, yet the theory also had shortcomings that would only be resolved by the development of quantum mechanics. The existence of atoms was not universally accepted among physicists or chemists; Ernst Mach, for example, was a staunch anti-atomist.
The earliest hints of problems in classical mechanics were raised in relation to the temperature dependence of the properties of gases.
Ludwig Boltzmann suggested in 1877 that the energy levels of a physical system, such as a molecule, could be discrete (rather than continuous). Boltzmann's rationale for the presence of discrete energy levels in molecules such as those of iodine gas had its origins in his statistical thermodynamics and statistical mechanics theories and was backed up by mathematical arguments, as would also be the case twenty years later with the first quantum theory put forward by Max Planck.
Electrons.
In the final days of the 1800s, J. J. Thomson established that electrons carry a charge equal in magnitude but opposite in sign to that of a hydrogen ion, while having a mass over one thousand times smaller. Many such electrons were known to be associated with every atom.
Radiation theory.
Throughout the 1800s many studies investigated details in the spectrum of intensity versus frequency for light emitted by flames, by the Sun, or by red-hot objects. The Rydberg formula effectively summarized the dark lines seen in the spectrum, but Rydberg provided no physical model to explain them. The spectrum emitted by red-hot objects could be explained at either long or short wavelengths, but only by two different theories that disagreed with each other.
Old quantum theory.
Quantum mechanics developed in two distinct phases. The first phase, known as the old quantum theory, began around 1900 with radically new approaches to explaining physical phenomena not understood by the classical mechanics of the 1800s.
Max Planck introduces quanta to explain black-body radiation.
Thermal radiation is electromagnetic radiation emitted from the surface of an object due to the object's internal energy. If an object is heated sufficiently, it starts to emit light at the red end of the visible spectrum, as it becomes red hot.
Heating it further causes the color to change from red to yellow, white, and blue, as it emits light at increasingly shorter wavelengths (higher frequencies). A perfect emitter is also a perfect absorber: when it is cold, such an object looks perfectly black, because it absorbs all the light that falls on it and emits none. Consequently, an ideal thermal emitter is known as a black body, and the radiation it emits is called black-body radiation.
By the late 19th century, thermal radiation had been fairly well characterized experimentally. Several formulas had been created that could describe some of the experimental measurements of thermal radiation: how the wavelength at which the radiation is strongest changes with temperature is given by Wien's displacement law, and the overall power emitted per unit area is given by the Stefan–Boltzmann law. The best theoretical explanation of the experimental results was the Rayleigh–Jeans law, which agrees with experimental results well at large wavelengths (or, equivalently, low frequencies), but strongly disagrees at short wavelengths (or high frequencies). In fact, at short wavelengths, classical physics predicted that energy would be emitted by a hot body at an infinite rate. This result, which is clearly wrong, is known as the ultraviolet catastrophe. Physicists searched for a single theory that explained all the experimental results.
The first model that was able to explain the full spectrum of thermal radiation was put forward by Max Planck in 1900. He proposed a mathematical model in which the thermal radiation was in equilibrium with a set of harmonic oscillators. To reproduce the experimental results, he had to assume that each oscillator emitted an integer number of units of energy at its single characteristic frequency, rather than being able to emit any arbitrary amount of energy. In other words, the energy emitted by an oscillator was "quantized". The quantum of energy for each oscillator, according to Planck, was proportional to the frequency of the oscillator; the constant of proportionality is now known as the Planck constant.
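The contrast between the classical Rayleigh–Jeans prediction and Planck's quantized result can be made concrete with a short numerical sketch. The Python snippet below is illustrative only; the temperature of 5000 K and the sampled frequencies are assumed values chosen for demonstration.

```python
import math

h  = 6.626e-34   # Planck constant, J*s
c  = 2.998e8     # speed of light, m/s
kB = 1.381e-23   # Boltzmann constant, J/K

def rayleigh_jeans(nu, T):
    """Classical spectral radiance B_nu(T) = 2 nu^2 kB T / c^2."""
    return 2.0 * nu**2 * kB * T / c**2

def planck(nu, T):
    """Planck's law B_nu(T) = (2 h nu^3 / c^2) / (exp(h nu / kB T) - 1)."""
    return (2.0 * h * nu**3 / c**2) / math.expm1(h * nu / (kB * T))

T = 5000.0  # kelvin, assumed for illustration
for nu in (1e13, 1e14, 1e15, 1e16):  # infrared through ultraviolet
    print(f"nu = {nu:.0e} Hz: Rayleigh-Jeans = {rayleigh_jeans(nu, T):.3e}, "
          f"Planck = {planck(nu, T):.3e}")
# The two agree at low frequency; at 1e16 Hz the classical value keeps growing
# (the "ultraviolet catastrophe") while Planck's law is vanishingly small.
```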
Planck's law was the first quantum theory in physics, and Planck won the Nobel Prize in 1918 "in recognition of the services he rendered to the advancement of Physics by his discovery of energy quanta". At the time, however, Planck's view was that quantization was purely a heuristic mathematical construct, rather than (as is now believed) a fundamental change in our understanding of the world.
Albert Einstein applies quanta to explain the photoelectric effect.
In 1887, Heinrich Hertz observed that when light with sufficient frequency hits a metallic surface, the surface emits cathode rays. Ten years later, J. J. Thomson showed that the cathode rays reported by many were actually streams of "corpuscles", which quickly came to be called electrons. In 1902, Philipp Lenard discovered that the maximum possible energy of an ejected electron is unrelated to the intensity of the light. This observation is at odds with classical electromagnetism, which predicts that the electron's energy should be proportional to the intensity of the incident radiation.
In 1905, Albert Einstein suggested that even though continuous models of light worked extremely well for time-averaged optical phenomena, for instantaneous transitions the energy in light occurs in a finite number of energy quanta.
From the introduction section of his March 1905 quantum paper "On a heuristic viewpoint concerning the emission and transformation of light", Einstein states:
<templatestyles src="Template:Blockquote/styles.css" />According to the assumption to be contemplated here, when a light ray is spreading from a point, the energy is not distributed continuously over ever-increasing spaces, but consists of a finite number of "energy quanta" that are localized in points in space, move without dividing, and can be absorbed or generated only as a whole.
This statement has been called the most revolutionary sentence written by a physicist of the twentieth century.
The energy of a single quantum of light of frequency formula_0 is given by the frequency multiplied by the Planck constant formula_1:
formula_2
Einstein assumed that a light quantum transfers all of its energy to a single electron, imparting at most an energy "hf" to the electron. Therefore, only the light frequency determines the maximum energy that can be imparted to the electron; the intensity of the photoemission is proportional to the light beam intensity.
Einstein argued that it takes a certain amount of energy, called the "work function" and denoted by φ, to remove an electron from the metal. This amount of energy is different for each metal. If the energy of the light quanta is less than the work function, then it does not carry sufficient energy to remove the electron from the metal. The threshold frequency, "f"0, is the frequency of a light quanta whose energy is equal to the work function:
formula_3
If "f" is greater than "f"0, the energy "hf" is enough to remove an electron. The ejected electron has a kinetic energy, "E"k, which is, at most, equal to the light energy minus the energy needed to dislodge the electron from the metal:
formula_4
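These relations lend themselves to a simple worked example. In the Python sketch below, the 2.3 eV work function is an assumed, representative value (roughly that of an alkali metal), not a quantity taken from the text above.

```python
h  = 6.626e-34      # Planck constant, J*s
eV = 1.602e-19      # joules per electronvolt

phi = 2.3 * eV      # assumed work function
f0  = phi / h       # threshold frequency: phi = h * f0
print(f"threshold frequency f0 = {f0:.2e} Hz")

f = 7.5e14          # blue-green light, Hz (assumed)
E_k = h * f - phi   # Einstein's relation E_k = h f - phi
if E_k > 0:
    print(f"max kinetic energy = {E_k / eV:.2f} eV")
else:
    print("no photoemission: f is below the threshold frequency")
```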
Einstein's description of light as being composed of energy quanta extended Planck's notion of quantized energy, which is that a single quantum of a given frequency, "f", delivers an invariant amount of energy, "hf".
In nature, single quanta are rarely encountered. The Sun and the emission sources available in the 19th century emit vast amounts of energy every second. The Planck constant, "h", is so tiny that the amount of energy in each quantum, "hf", is very small. The light we see includes many trillions of such quanta.
Quantization of matter: the Bohr model of the atom.
By the dawn of the 20th century, the evidence required a model of the atom with a diffuse cloud of negatively charged electrons surrounding a small, dense, positively charged nucleus. These properties suggested a model in which electrons circle the nucleus like planets orbiting a star. The classical model of the atom is called the planetary model, or sometimes the Rutherford model—after Ernest Rutherford who proposed it in 1911, based on the Geiger–Marsden gold foil experiment, which first demonstrated the existence of the nucleus. However, it was also known that the atom in this model would be unstable: according to classical theory, orbiting electrons are undergoing centripetal acceleration, and should therefore give off electromagnetic radiation, the loss of energy also causing them to spiral toward the nucleus, colliding with it in a fraction of a second.
A second, related puzzle was the emission spectrum of atoms. When a gas is heated, it gives off light only at discrete frequencies. For example, the visible light given off by hydrogen consists of four different colors, as shown in the picture below. The intensity of the light at different frequencies is also different. By contrast, white light consists of a continuous emission across the whole range of visible frequencies. By the end of the nineteenth century, a simple rule known as Balmer's formula showed how the frequencies of the different lines related to each other, though without explaining why this was, or making any prediction about the intensities. The formula also predicted some additional spectral lines in ultraviolet and infrared light that had not been observed at the time. These lines were later observed experimentally, raising confidence in the value of the formula.
<templatestyles src="Template:Hidden begin/styles.css"/>The mathematical formula describing hydrogen's emission spectrum
In 1885 the Swiss mathematician Johann Balmer discovered that each wavelength "λ" (lambda) in the visible spectrum of hydrogen is related to some integer "n" by the equation
formula_5
where "B" is a constant Balmer determined is equal to 364.56 nm.
In 1888 Johannes Rydberg generalized and greatly increased the explanatory utility of Balmer's formula. He predicted that "λ" is related to two integers "n" and "m" according to what is now known as the Rydberg formula:
formula_6
where "R" is the Rydberg constant, equal to 0.0110 nm−1, and "n" must be greater than "m".
The Rydberg formula accounts for the four visible wavelengths of hydrogen by setting "m" = 2 and "n" = 3, 4, 5, 6. It also predicts additional wavelengths in the emission spectrum: for "m" = 1 and "n" > 1, the emission spectrum should contain certain ultraviolet wavelengths, and for "m" = 3 and "n" > 3, it should also contain certain infrared wavelengths. Experimental observation of these wavelengths came two decades later: in 1908 Louis Paschen found some of the predicted infrared wavelengths, and in 1914 Theodore Lyman found some of the predicted ultraviolet wavelengths.
Both Balmer's formula and the Rydberg formula involve integers: in modern terms, they imply that some property of the atom is quantized. Understanding exactly what this property was, and why it was quantized, was a major part of the development of quantum mechanics, as shown in the rest of this article.
In 1905, Albert Einstein used kinetic theory to explain Brownian motion. French physicist Jean Baptiste Perrin used the model in Einstein's paper to experimentally determine the mass, and the dimensions, of atoms, thereby giving direct empirical verification of the atomic theory.
In 1913 Niels Bohr proposed a new model of the atom that included quantized electron orbits: electrons still orbit the nucleus much as planets orbit around the Sun, but they are permitted to inhabit only certain orbits, not to orbit at any arbitrary distance. When an atom emitted (or absorbed) energy, the electron did not move in a continuous trajectory from one orbit around the nucleus to another, as might be expected classically. Instead, the electron would jump instantaneously from one orbit to another, giving off the emitted light in the form of a photon. The possible energies of photons given off by each element were determined by the differences in energy between the orbits, and so the emission spectrum for each element would contain a number of lines.
Starting from only one simple assumption about the rule that the orbits must obey, the Bohr model was able to relate the observed spectral lines in the emission spectrum of hydrogen to previously known constants. In Bohr's model, the electron was not allowed to emit energy continuously and crash into the nucleus: once it was in the closest permitted orbit, it was stable forever. Bohr's model did not explain why the orbits should be quantized in that way, nor was it able to make accurate predictions for atoms with more than one electron, or to explain why some spectral lines are brighter than others.
Some fundamental assumptions of the Bohr model were soon proven wrong—but the key result that the discrete lines in emission spectra are due to some property of the electrons in atoms being quantized is correct. The way that the electrons actually behave is strikingly different from Bohr's atom, and from what we see in the world of our everyday experience; this modern quantum mechanical model of the atom is discussed below.
<templatestyles src="Template:Hidden begin/styles.css"/>A more detailed explanation of the Bohr model
Bohr theorized that the angular momentum, "L", of an electron is quantized:
formula_7
where "n" is an integer and "h" and "ħ" are the Planck constant and Planck reduced constant respectively. Starting from this assumption, Coulomb's law and the equations of circular motion show that an electron with "n" units of angular momentum orbits a proton at a distance "r" given by
formula_8,
where "k"e is the Coulomb constant, "m" is the mass of an electron, and "e" is the charge on an electron.
For simplicity this is written as
formula_9
where "a"0, called the Bohr radius, is equal to 0.0529 nm.
The Bohr radius is the radius of the smallest allowed orbit.
The energy of the electron is the sum of its kinetic and potential energies. The electron has kinetic energy by virtue of its actual motion around the nucleus, and potential energy because of its electromagnetic interaction with the nucleus. In the Bohr model this energy can be calculated, and is given by
formula_10.
Thus Bohr's assumption that angular momentum is quantized means that an electron can inhabit only certain orbits around the nucleus and that it can have only certain energies. A consequence of these constraints is that the electron does not crash into the nucleus: it cannot continuously emit energy, and it cannot come closer to the nucleus than "a"0 (the Bohr radius).
An electron loses energy by jumping instantaneously from its original orbit to a lower orbit; the extra energy is emitted in the form of a photon. Conversely, an electron that absorbs a photon gains energy, hence it jumps to an orbit that is farther from the nucleus.
Each photon from glowing atomic hydrogen is due to an electron moving from a higher orbit, with radius "rn", to a lower orbit, "rm". The energy "E"γ of this photon is the difference in the energies "En" and "Em" of the electron:
formula_11
Since Planck's equation shows that the photon's energy is related to its wavelength by "E"γ = "hc"/"λ", the wavelengths of light that can be emitted are given by
formula_12
This equation has the same form as the Rydberg formula, and predicts that the constant "R" should be given by
formula_13
Therefore, the Bohr model of the atom can predict the emission spectrum of hydrogen in terms of fundamental constants. The model can be easily modified to account for the emission spectrum of any system consisting of a nucleus and a single electron (that is, ions such as He+ or O7+, which contain only one electron) but cannot be extended to an atom with two electrons such as neutral helium. However, it was not able to make accurate predictions for multi-electron atoms, or to explain why some spectral lines are brighter than others.
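The claim that the Bohr model yields the Rydberg constant from fundamental constants can be verified numerically. The Python sketch below uses standard SI values for the constants and is illustrative only.

```python
import math

k_e = 8.988e9        # Coulomb constant, N*m^2/C^2
e   = 1.602e-19      # elementary charge, C
m_e = 9.109e-31      # electron mass, kg
h   = 6.626e-34      # Planck constant, J*s
c   = 2.998e8        # speed of light, m/s

# Bohr radius a0 = h^2 / (4 pi^2 k_e m_e e^2)
a0 = h**2 / (4 * math.pi**2 * k_e * m_e * e**2)
print(f"a0 = {a0 * 1e9:.4f} nm")            # ~0.0529 nm, as quoted above

# Energy of level n: E_n = -k_e e^2 / (2 a0 n^2)
for n in (1, 2, 3):
    E_n = -k_e * e**2 / (2 * a0 * n**2)
    print(f"E_{n} = {E_n / e:.2f} eV")       # about -13.6, -3.4, -1.5 eV

# Rydberg constant R = k_e e^2 / (2 a0 h c); compare with 0.0110 nm^-1
R = k_e * e**2 / (2 * a0 * h * c)
print(f"R = {R * 1e-9:.4f} nm^-1")
```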
An important step was taken in the evolution of quantum theory at the first Solvay Congress of 1911. There the top physicists of the scientific community met to discuss the problem of “Radiation and the Quanta.” By this time the Ernest Rutherford model of the atom had been published, but much of the discussion involving atomic structure revolved around the quantum model of Arthur Haas in 1910. Also, at the Solvay Congress in 1911 Hendrik Lorentz suggested after Einstein's talk on quantum structure that the energy of a rotator be set equal to nhv. This was followed by other quantum models such as the John William Nicholson model of 1912 which was nuclear and discretized angular momentum. Nicholson had introduced the spectra into his atomic model by using the oscillations of electrons in a nuclear atom perpendicular to the orbital plane thereby maintaining stability. Nicholson's atomic spectra identified many unattributed lines in solar and nebular spectra.
In 1913, Bohr explained the spectral lines of the hydrogen atom, again by using quantization, in his paper of July 1913 "On the Constitution of Atoms and Molecules" in which he discussed and cited the Nicholson model. In the Bohr model, the hydrogen atom is pictured as a heavy, positively charged nucleus orbited by a light, negatively charged electron. The electron can only exist in certain, discretely separated orbits, labeled by their angular momentum, which is restricted to be an integer multiple of the reduced Planck constant.
The model's key success lay in explaining the Rydberg formula for the spectral emission lines of atomic hydrogen by using the transitions of electrons between orbits. While the Rydberg formula had been known experimentally, it did not gain a theoretical underpinning until the Bohr model was introduced. Not only did the Bohr model explain the reasons for the structure of the Rydberg formula, it also provided a justification for the fundamental physical constants that make up the formula's empirical results.
Moreover, the application of Planck's quantum theory to the electron allowed Ștefan Procopiu in 1911–1913, and subsequently Niels Bohr in 1913, to calculate the magnetic moment of the electron, which was later called the "magneton"; similar quantum computations, but with numerically quite different values, were subsequently made possible for both the magnetic moments of the proton and the neutron that are three orders of magnitude smaller than that of the electron.
These theories, though successful, were strictly phenomenological: during this time, there was no rigorous justification for quantization, aside, perhaps, from Henri Poincaré's discussion of Planck's theory in his 1912 paper. They are collectively known as the "old quantum theory".
Spin quantization.
Quantization of the orbital angular momentum of the electron combined with the magnetic moment of the electron suggested that atoms with a magnetic moment should show quantized behavior in a magnetic field.
In 1922, Otto Stern and Walther Gerlach set out to test this theory. They heated silver in a vacuum tube equipped with a series of narrow aligned slits, creating a molecular beam of silver atoms. They shot this beam through an inhomogeneous magnetic field. Rather than a continuous pattern of silver atoms, they found two distinct bunches.
In classical mechanics, a magnet thrown through a magnetic field may be deflected a small or large distance upwards or downwards, depending on the orientation of its north pole (pointing up, down, or somewhere in between). The atoms that Stern and Gerlach shot through the magnetic field acted similarly. However, while the magnets could be deflected variable distances, the atoms would always be deflected a constant distance either up or down. This implied that the property of the atom that corresponds to the magnet's orientation must be quantized, taking one of two values (either up or down), as opposed to being chosen freely from any angle.
The choice of the orientation of the magnetic field used in the Stern–Gerlach experiment is arbitrary. In the animation shown here, the field is vertical and so the atoms are deflected either up or down. If the magnet is rotated a quarter turn, the atoms are deflected either left or right. Using a vertical field shows that the spin along the vertical axis is quantized, and using a horizontal field shows that the spin along the horizontal axis is quantized.
The results of the Stern-Gerlach experiment caused a sensation, most especially because leading scientists, including Einstein and Paul Ehrenfest, argued that the silver atoms should have random orientations in the conditions of the experiment: quantization should not have been observable. At least five years would elapse before this mystery was resolved: quantization was observed, but it was not due to orbital angular momentum.
In 1925 Ralph Kronig proposed that electrons behave as if they self-rotate, or "spin", about an axis. Spin would generate a tiny magnetic moment that would split the energy levels responsible for spectral lines, in agreement with existing measurements. Two electrons in the same orbital would occupy distinct quantum states if they "spun" in opposite directions, thus satisfying the exclusion principle. Unfortunately, the theory had two significant flaws: two values computed by Kronig were off by a factor of two. Kronig's senior colleagues discouraged his work and it was never published.
Ten months later, Dutch physicists George Uhlenbeck and Samuel Goudsmit at Leiden University published their theory of electron self-rotation. The model, like Kronig's, was essentially classical but resulted in a quantum prediction.
de Broglie's matter wave hypothesis.
In 1924 Louis de Broglie published a breakthrough hypothesis: matter has wave properties. Building on Einstein's proposal that the photoelectric effect can be described using quantized energy transfers, and on Einstein's separate proposal, from special relativity, that mass at rest is equivalent to energy via formula_14, de Broglie proposed that matter in motion has an associated wave with wavelength formula_15, where formula_16 is the momentum of the moving matter. Requiring an integer number of his wavelengths to encircle the atom, he explained the quantization of Bohr's orbits. Simultaneously this showed that the wave behavior of light was essentially a quantum effect.
De Broglie expanded the Bohr model of the atom by showing that an electron in orbit around a nucleus could be thought of as having wave-like properties. In particular, an electron is observed only in situations that permit a standing wave around a nucleus. An example of a standing wave is a violin string, which is fixed at both ends and can be made to vibrate. The waves created by a stringed instrument appear to oscillate in place, moving from crest to trough in an up-and-down motion. The wavelength of a standing wave is related to the length of the vibrating object and the boundary conditions. For example, because the violin string is fixed at both ends, it can carry standing waves of wavelengths formula_17, where "l" is the length and "n" is a positive integer. De Broglie suggested that the allowed electron orbits were those for which the circumference of the orbit would be an integer number of wavelengths. The electron's wavelength, therefore, determines that only Bohr orbits of certain distances from the nucleus are possible. In turn, at any distance from the nucleus smaller than a certain value, it would be impossible to establish an orbit. The minimum possible distance from the nucleus is called the Bohr radius. De Broglie's treatment of the Bohr atom was ultimately unsuccessful, but his hypothesis served as a starting point for Schrödinger's wave equation.
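The scale of de Broglie wavelengths can be illustrated with a short calculation using λ = h/p. In the Python sketch below, the electron speed and the mass and speed of the ball are assumed, representative values.

```python
h   = 6.626e-34   # Planck constant, J*s
m_e = 9.109e-31   # electron mass, kg

# Electron moving at about 1% of light speed (assumed value)
v = 3.0e6                      # m/s
lam_electron = h / (m_e * v)   # de Broglie wavelength lambda = h / p
print(f"electron: lambda ~ {lam_electron * 1e9:.3f} nm")  # a fraction of a nanometre

# A 0.15 kg ball thrown at 30 m/s, for contrast
lam_ball = h / (0.15 * 30.0)
print(f"ball:     lambda ~ {lam_ball:.2e} m")  # ~1.5e-34 m, far too small to observe
```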
Matter behaving as a wave was first demonstrated experimentally for electrons: a beam of electrons can exhibit diffraction, just like a beam of light or a water wave. Three years after de Broglie published his hypothesis, two different groups demonstrated electron diffraction. At the University of Aberdeen, George Paget Thomson and Alexander Reid passed a beam of electrons through a thin celluloid film, and later metal films, and observed the predicted interference patterns. (Alexander Reid, who was Thomson's graduate student, performed the first experiments, but he died soon after in a motorcycle accident and is rarely mentioned.) At Bell Labs, Clinton Joseph Davisson and Lester Halbert Germer reflected an electron beam from a nickel sample and observed the well-defined diffracted beams returning from the crystal that wave models predicted. De Broglie was awarded the Nobel Prize in Physics in 1929 for his hypothesis; Thomson and Davisson shared the Nobel Prize for Physics in 1937 for their experimental work.
Building on de Broglie's approach, modern quantum mechanics was born in 1925, when the German physicists Werner Heisenberg, Max Born, and Pascual Jordan developed matrix mechanics and the Austrian physicist Erwin Schrödinger invented wave mechanics and the non-relativistic Schrödinger equation as an approximation of the generalised case of de Broglie's theory. Schrödinger subsequently showed that the two approaches were equivalent. The first applications of quantum mechanics to physical systems were the algebraic determination of the hydrogen spectrum by Wolfgang Pauli and the treatment of diatomic molecules by Lucy Mensing.
Development of modern quantum mechanics.
The end of the first era of quantum mechanics was triggered by de Broglie's publication of his hypothesis of matter waves, leading to Schrödinger's discovery of wave mechanics for matter. Accurate predictions of the absorption spectrum of hydrogen ensured wide acceptance of the new quantum theory.
Matrix mechanics.
In 1925, Werner Heisenberg attempted to solve one of the problems that the Bohr model left unanswered, explaining the intensities of the different lines in the hydrogen emission spectrum. Through a series of mathematical analogies, he wrote out the quantum-mechanical analog for the classical computation of intensities. Shortly afterward, Heisenberg's colleague Max Born realized that Heisenberg's method of calculating the probabilities for transitions between the different energy levels could best be expressed by using the mathematical concept of matrices.
Heisenberg formulated an early version of the uncertainty principle in 1927, analyzing a thought experiment where one attempts to measure an electron's position and momentum simultaneously. However, Heisenberg did not give precise mathematical definitions of what the "uncertainty" in these measurements meant, a step that would be taken soon after by Earle Hesse Kennard, Wolfgang Pauli, and Hermann Weyl.
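Kennard's formulation bounds the product of the standard deviations of position and momentum, σx σp ≥ ħ/2. The Python sketch below is illustrative only; the 0.1 nm confinement length is an assumed, atom-sized value.

```python
hbar = 1.055e-34   # reduced Planck constant, J*s
m_e  = 9.109e-31   # electron mass, kg

sigma_x = 1.0e-10                    # assume an electron confined to ~0.1 nm
sigma_p_min = hbar / (2 * sigma_x)   # Kennard bound: sigma_x * sigma_p >= hbar / 2
v_spread = sigma_p_min / m_e         # corresponding spread in velocity

print(f"minimum momentum spread ~ {sigma_p_min:.2e} kg*m/s")
print(f"velocity spread         ~ {v_spread:.2e} m/s")  # ~6e5 m/s: far from negligible
```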
Schrödinger and the wave mechanics.
In the first half of 1926, building on de Broglie's hypothesis, Erwin Schrödinger developed the equation that describes the behavior of a quantum-mechanical wave. The mathematical model, called the Schrödinger equation after its creator, is central to quantum mechanics, defines the permitted stationary states of a quantum system, and describes how the quantum state of a physical system changes in time. The wave itself is described by a mathematical function known as a "wave function". Schrödinger said that the wave function provides the "means for predicting the probability of measurement results".
Schrödinger was able to calculate the energy levels of hydrogen by treating a hydrogen atom's electron as a classical wave, moving in a well of the electrical potential created by the proton. This calculation accurately reproduced the energy levels of the Bohr model.
In May 1926, Schrödinger proved that Heisenberg's matrix mechanics and his own wave mechanics made the same predictions about the properties and behavior of the electron; mathematically, the two theories had an underlying common form. Yet the two men disagreed on the interpretation of their mutual theory. For instance, Heisenberg accepted the theoretical prediction of jumps of electrons between orbitals in an atom, but Schrödinger hoped that a theory based on continuous wave-like properties could avoid what he called (as paraphrased by Wilhelm Wien) "this nonsense about quantum jumps". In the end, Heisenberg's approach won out, and quantum jumps were confirmed.
Copenhagen interpretation.
Bohr, Heisenberg, and others tried to explain what these experimental results and mathematical models really mean. The term "Copenhagen interpretation" has been applied to their views in retrospect, glossing over differences among them. While no definitive statement of "the" Copenhagen interpretation exists, the following ideas are widely seen as characteristic of it.
Application to the hydrogen atom.
Bohr's model of the atom was essentially a planetary one, with the electrons orbiting around the nuclear "sun". However, the uncertainty principle states that an electron cannot simultaneously have an exact location and velocity in the way that a planet does. Instead of classical orbits, electrons are said to inhabit "atomic orbitals". An orbital is the "cloud" of possible locations in which an electron might be found, a distribution of probabilities rather than a precise location. Each orbital is three dimensional, rather than the two-dimensional orbit, and is often depicted as a three-dimensional region within which there is a 95 percent probability of finding the electron.
Schrödinger was able to calculate the energy levels of hydrogen by treating a hydrogen atom's electron as a wave, represented by the "wave function" "Ψ", in an electric potential well, "V", created by the proton. The solutions to Schrödinger's equation are distributions of probabilities for electron positions and locations. Orbitals have a range of different shapes in three dimensions. The energies of the different orbitals can be calculated, and they accurately match the energy levels of the Bohr model.
Within Schrödinger's picture, each electron has four properties: the energy level of its orbital (how close to or far from the nucleus the electron-wave sits), the shape of the orbital, the orientation of the orbital, and the electron's spin.
The collective name for these properties is the quantum state of the electron. The quantum state can be described by giving a number to each of these properties; these are known as the electron's quantum numbers. The quantum state of the electron is described by its wave function. The Pauli exclusion principle demands that no two electrons within an atom may have the same values of all four numbers.
The first property describing the orbital is the principal quantum number, "n", which is the same as in the Bohr model. "n" denotes the energy level of each orbital. The possible values for "n" are integers:
formula_18
The next quantum number, the azimuthal quantum number, denoted "l", describes the shape of the orbital. The shape is a consequence of the angular momentum of the orbital. The angular momentum represents the resistance of a spinning object to speeding up or slowing down under the influence of external force. The azimuthal quantum number represents the orbital angular momentum of an electron around its nucleus. The possible values for "l" are integers from 0 to "n − 1" (where "n" is the principal quantum number of the electron):
formula_19
The shape of each orbital is usually referred to by a letter, rather than by its azimuthal quantum number. The first shape ("l"=0) is denoted by the letter "s" (a mnemonic being ""s"phere"). The next shape is denoted by the letter "p" and has the form of a dumbbell. The other orbitals have more complicated shapes (see atomic orbital), and are denoted by the letters "d", "f", "g", etc.
The third quantum number, the magnetic quantum number, describes the magnetic moment of the electron, and is denoted by "m""l" (or simply "m"). The possible values for "m""l" are integers from −"l" to "l" (where "l" is the azimuthal quantum number of the electron):
formula_20
The magnetic quantum number measures the component of the angular momentum in a particular direction. The choice of direction is arbitrary; conventionally the z-direction is chosen.
The fourth quantum number, the spin quantum number (pertaining to the "orientation" of the electron's spin) is denoted "ms", with values +1⁄2 or −1⁄2.
The chemist Linus Pauling wrote, by way of example:
<templatestyles src="Template:Blockquote/styles.css" />In the case of a helium atom with two electrons in the 1"s" orbital, the Pauli Exclusion Principle requires that the two electrons differ in the value of one quantum number. Their values of "n", "l", and "ml" are the same. Accordingly they must differ in the value of "ms", which can have the value of +<templatestyles src="Fraction/styles.css" />1⁄2 for one electron and −<templatestyles src="Fraction/styles.css" />1⁄2 for the other."
It is the underlying structure and symmetry of atomic orbitals, and the way that electrons fill them, that leads to the organization of the periodic table. The way the atomic orbitals on different atoms combine to form molecular orbitals determines the structure and strength of chemical bonds between atoms.
The field of quantum chemistry was pioneered by physicists Walter Heitler and Fritz London, who published a study of the covalent bond of the hydrogen molecule in 1927. Quantum chemistry was subsequently developed by a large number of workers, including the American theoretical chemist Linus Pauling at Caltech and John C. Slater, into various theories such as molecular orbital theory and valence bond theory.
Dirac, relativity, and development of the formal methods.
Starting around 1927, Paul Dirac began the process of unifying quantum mechanics with special relativity by proposing the Dirac equation for the electron. The Dirac equation achieves the relativistic description of the wavefunction of an electron that Schrödinger failed to obtain. It predicts electron spin and led Dirac to predict the existence of the positron. He also pioneered the use of operator theory, including the influential bra–ket notation, as described in his famous 1930 textbook. During the same period, Hungarian polymath John von Neumann formulated the rigorous mathematical basis for quantum mechanics as the theory of linear operators on Hilbert spaces, as described in his likewise famous 1932 textbook. These, like many other works from the founding period, still stand, and remain widely used.
Quantum field theory.
Beginning in 1927, researchers attempted to apply quantum mechanics to fields instead of single particles, resulting in quantum field theories. Early workers in this area include P.A.M. Dirac, W. Pauli, V. Weisskopf, and P. Jordan. This area of research culminated in the formulation of quantum electrodynamics by R.P. Feynman, F. Dyson, J. Schwinger, and S. Tomonaga during the 1940s. Quantum electrodynamics describes a quantum theory of electrons, positrons, and the electromagnetic field, and served as a model for subsequent quantum field theories.
The theory of quantum chromodynamics was formulated beginning in the early 1960s. The theory as we know it today was formulated by Politzer, Gross and Wilczek in 1975.
Building on pioneering work by Schwinger, Higgs and Goldstone, the physicists Glashow, Weinberg and Salam independently showed how the weak nuclear force and quantum electrodynamics could be merged into a single electroweak force, for which they received the 1979 Nobel Prize in Physics.
Quantum information.
Quantum information science developed in the latter decades of the 20th century, beginning with theoretical results like Holevo's theorem, the concept of generalized measurements or POVMs, the proposal of quantum key distribution by Bennett and Brassard, and Shor's algorithm.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f"
},
{
"math_id": 1,
"text": "h"
},
{
"math_id": 2,
"text": "E = hf"
},
{
"math_id": 3,
"text": "\\varphi = h f_0."
},
{
"math_id": 4,
"text": "E_\\text{k} = hf - \\varphi = h(f - f_0)."
},
{
"math_id": 5,
"text": "\\lambda = B\\left(\\frac{n^2}{n^2-4}\\right) \\qquad\\qquad n = 3,4,5,6"
},
{
"math_id": 6,
"text": " \\frac{1}{\\lambda} = R \\left(\\frac{1}{m^2} - \\frac{1}{n^2}\\right),"
},
{
"math_id": 7,
"text": "L = n\\frac{h}{2\\pi}=n\\hbar"
},
{
"math_id": 8,
"text": "r = \\frac{n^2 h^2}{4 \\pi^2 k_e m e^2}"
},
{
"math_id": 9,
"text": "r = n^2 a_0,\\!"
},
{
"math_id": 10,
"text": "E = -\\frac{k_{\\mathrm{e}}e^2}{2a_0} \\frac{1}{n^2}"
},
{
"math_id": 11,
"text": "E_{\\gamma} = E_n - E_m = \\frac{k_{\\mathrm{e}}e^2}{2a_0}\\left(\\frac{1}{m^2}-\\frac{1}{n^2}\\right)"
},
{
"math_id": 12,
"text": "\\frac{1}{\\lambda} = \\frac{k_{\\mathrm{e}}e^2}{2 a_0 h c}\\left(\\frac{1}{m^2}-\\frac{1}{n^2}\\right)."
},
{
"math_id": 13,
"text": "R = \\frac{k_{\\mathrm{e}}e^2}{2 a_0 h c} ."
},
{
"math_id": 14,
"text": "E=m_0c^2"
},
{
"math_id": 15,
"text": "\\lambda=h/p"
},
{
"math_id": 16,
"text": "p"
},
{
"math_id": 17,
"text": "\\frac{2l}{n}"
},
{
"math_id": 18,
"text": "n = 1, 2, 3\\ldots"
},
{
"math_id": 19,
"text": "l = 0, 1, \\ldots, n-1."
},
{
"math_id": 20,
"text": "m_l = -l, -(l-1), \\ldots, 0, \\ldots, (l-1), l."
}
] | https://en.wikipedia.org/wiki?curid=9067941 |
906878 | Metamaterial | Materials engineered to have properties that have not yet been found in nature
A metamaterial (from the Greek word μετά "meta", meaning "beyond" or "after", and the Latin word "materia", meaning "matter" or "material") is a type of material engineered to have a property that is rarely observed in naturally occurring materials. They are made from assemblies of multiple elements fashioned from composite materials such as metals and plastics. These materials are usually arranged in repeating patterns, at scales that are smaller than the wavelengths of the phenomena they influence. Metamaterials derive their properties not from the properties of the base materials, but from their newly designed structures. Their precise shape, geometry, size, orientation and arrangement give them smart properties capable of manipulating electromagnetic waves (by blocking, absorbing, enhancing, or bending waves) to achieve benefits that go beyond what is possible with conventional materials.
Appropriately designed metamaterials can affect waves of electromagnetic radiation or sound in a manner not observed in bulk materials. Those that exhibit a negative index of refraction for particular wavelengths have been the focus of a large amount of research. These materials are known as negative-index metamaterials.
Potential applications of metamaterials are diverse and include optical filters, medical devices, remote aerospace applications, sensor detection and infrastructure monitoring, smart solar power management, lasers, crowd control, radomes, high-frequency battlefield communication and lenses for high-gain antennas, improving ultrasonic sensors, and even shielding structures from earthquakes. Metamaterials offer the potential to create super-lenses. Such a lens can allow imaging below the diffraction limit, that is, the minimum resolution d = λ/(2NA) that can be achieved by conventional lenses with numerical aperture NA and illumination wavelength λ. Sub-wavelength optical metamaterials, when integrated with optical recording media, can be used to achieve optical data densities higher than the diffraction limit allows. A form of 'invisibility' was demonstrated using gradient-index materials. Acoustic and seismic metamaterials are also research areas.
Metamaterial research is interdisciplinary and involves such fields as electrical engineering, electromagnetics, classical optics, solid state physics, microwave and antenna engineering, optoelectronics, material sciences, nanoscience and semiconductor engineering.
<templatestyles src="Template:TOC limit/styles.css" />
History.
Explorations of artificial materials for manipulating electromagnetic waves began at the end of the 19th century. Some of the earliest structures that may be considered metamaterials were studied by Jagadish Chandra Bose, who in 1898 researched substances with chiral properties. Karl Ferdinand Lindman studied wave interaction with metallic helices as artificial chiral media in the early twentieth century.
In the late 1940s, Winston E. Kock from AT&T Bell Laboratories developed materials that had similar characteristics to metamaterials. In the 1950s and 1960s, artificial dielectrics were studied for lightweight microwave antennas. Microwave radar absorbers were researched in the 1980s and 1990s as applications for artificial chiral media.
Negative-index materials were first described theoretically by Victor Veselago in 1967. He proved that such materials could transmit light. He showed that the phase velocity could be made anti-parallel to the direction of Poynting vector. This is contrary to wave propagation in naturally occurring materials.
In 1995, John M. Guerra fabricated a sub-wavelength transparent grating (later called a photonic metamaterial) having 50 nm lines and spaces, and then coupled it with a standard oil immersion microscope objective (the combination later called a super-lens) to resolve a grating in a silicon wafer also having 50 nm lines and spaces. This super-resolved image was achieved with illumination having a wavelength of 650 nm in air.
In 2000, John Pendry was the first to identify a practical way to make a left-handed metamaterial, a material in which the right-hand rule is not followed. Such a material allows an electromagnetic wave to convey energy (have a group velocity) against its phase velocity. Pendry's idea was that metallic wires aligned along the direction of a wave could provide negative permittivity (dielectric function ε < 0). Natural materials (such as ferroelectrics) display negative permittivity; the challenge was achieving negative permeability (μ < 0). In 1999 Pendry demonstrated that a split ring (C shape) with its axis placed along the direction of wave propagation could do so. In the same paper, he showed that a periodic array of wires and rings could give rise to a negative refractive index. Pendry also proposed a related negative-permeability design, the Swiss roll.
In 2000, David R. Smith et al. reported the experimental demonstration of functioning electromagnetic metamaterials by horizontally stacking, periodically, split-ring resonators and thin wire structures. A method was provided in 2002 to realize negative-index metamaterials using artificial lumped-element loaded transmission lines in microstrip technology. In 2003, complex (both real and imaginary parts of) negative refractive index and imaging by flat lens using left handed metamaterials were demonstrated. By 2007, experiments that involved negative refractive index had been conducted by many groups. At microwave frequencies, the first, imperfect invisibility cloak was realized in 2006.
From the standpoint of governing equations, contemporary researchers can classify the realm of metamaterials into three primary branches: Electromagnetic/Optical wave metamaterials, other wave metamaterials, and diffusion metamaterials. These branches are characterized by their respective governing equations, which include Maxwell's equations (a wave equation describing transverse waves), other wave equations (for longitudinal and transverse waves), and diffusion equations (pertaining to diffusion processes). Crafted to govern a range of diffusion activities, diffusion metamaterials prioritize diffusion length as their central metric. This crucial parameter experiences temporal fluctuations while remaining immune to frequency variations. In contrast, wave metamaterials, designed to adjust various wave propagation paths, consider the wavelength of incoming waves as their essential metric. This wavelength remains constant over time, though it adjusts with frequency alterations. Fundamentally, the key metrics for diffusion and wave metamaterials present a stark divergence, underscoring a distinct complementary relationship between them. For comprehensive information, please refer to Section I.B, "Evolution of metamaterial physics," in Ref.
Electromagnetic metamaterials.
An electromagnetic metamaterial affects electromagnetic waves that impinge on or interact with its structural features, which are smaller than the wavelength. To behave as a homogeneous material accurately described by an effective refractive index, its features must be much smaller than the wavelength.
The unusual properties of metamaterials arise from the resonant response of each constituent element rather than their spatial arrangement into a lattice. This allows the use of local effective material parameters (permittivity and permeability). The resonance effect related to the mutual arrangement of elements is responsible for Bragg scattering, which underlies the physics of photonic crystals, another class of electromagnetic materials. Unlike the local resonances, Bragg scattering and the corresponding Bragg stop-band have a low-frequency limit determined by the lattice spacing. The subwavelength approximation ensures that the Bragg stop-bands with the strong spatial dispersion effects are at higher frequencies and can be neglected. The criterion for shifting the local resonance below the lower Bragg stop-band makes it possible to build a photonic phase transition diagram in a parameter space, for example, size and permittivity of the constituent element. Such a diagram displays the domain of structure parameters for which metamaterial properties can be observed in the electromagnetic material.
For microwave radiation, the features are on the order of millimeters. Microwave frequency metamaterials are usually constructed as arrays of electrically conductive elements (such as loops of wire) that have suitable inductive and capacitive characteristics. Many microwave metamaterials use split-ring resonators.
Photonic metamaterials are structured on the nanometer scale and manipulate light at optical frequencies. Photonic crystals and frequency-selective surfaces such as diffraction gratings, dielectric mirrors and optical coatings exhibit similarities to subwavelength structured metamaterials. However, these are usually considered distinct from metamaterials, as their function arises from diffraction or interference and thus cannot be approximated as a homogeneous material. However, material structures such as photonic crystals are effective in the visible light spectrum. The middle of the visible spectrum has a wavelength of approximately 560 nm (for sunlight). Photonic crystal structures are generally half this size or smaller, that is < 280 nm.
Plasmonic metamaterials utilize surface plasmons, which are packets of electrical charge that collectively oscillate at the surfaces of metals at optical frequencies.
Frequency selective surfaces (FSS) can exhibit subwavelength characteristics and are known variously as artificial magnetic conductors (AMC) or High Impedance Surfaces (HIS). FSS display inductive and capacitive characteristics that are directly related to their subwavelength structure.
Electromagnetic metamaterials can be divided into different classes, as follows:
Negative refractive index.
Negative-index metamaterials (NIM) are characterized by a negative index of refraction. Other terms for NIMs include "left-handed media", "media with a negative refractive index", and "backward-wave media". NIMs where the negative index of refraction arises from simultaneously negative permittivity and negative permeability are also known as double negative metamaterials or double negative materials (DNG).
Assuming a material well-approximated by a real permittivity and permeability, the relationship between permittivity formula_0, permeability formula_1 and refractive index "n" is given by formula_2. All known non-metamaterial transparent materials (glass, water, ...) possess positive formula_0 and formula_1. By convention the positive square root is used for "n". However, some engineered metamaterials have formula_0 and formula_3. Because the product formula_4 is positive, "n" is real. Under such circumstances, it is necessary to take the negative square root for "n". When both formula_0 and formula_1 are positive (negative), waves travel in the "forward" ("backward") direction. Electromagnetic waves cannot propagate in materials with formula_0 and formula_1 of opposite sign as the refractive index becomes imaginary. Such materials are opaque for electromagnetic radiation and examples include plasmonic materials such as metals (gold, silver, ...).
The foregoing considerations are simplistic for actual materials, which must have complex-valued formula_0 and formula_1. The real parts of both formula_0 and formula_1 do not have to be negative for a passive material to display negative refraction. Indeed, a negative refractive index for circularly polarized waves can also arise from chirality. Metamaterials with negative "n" have numerous interesting properties.
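The branch choice described above can be implemented directly. In the Python sketch below (illustrative values; it assumes the exp(-iωt) time convention, in which passive media have non-negative imaginary parts of ε and μ), taking the principal square roots of ε and μ separately, rather than of their product, automatically selects the negative-real-part branch for a slightly lossy double-negative medium.

```python
import cmath

def refractive_index(eps, mu):
    """n = sqrt(eps) * sqrt(mu) with principal square-root branches.
    With the exp(-i w t) convention (Im eps, Im mu >= 0 for passive media),
    this yields Re(n) < 0 for a slightly lossy double-negative medium."""
    return cmath.sqrt(eps) * cmath.sqrt(mu)

print(refractive_index(2.25 + 0.0j, 1.0 + 0.0j))     # glass-like: n ~ +1.5
print(refractive_index(-1.0 + 0.01j, -1.0 + 0.01j))  # double negative: n ~ -1 + 0.01j
print(refractive_index(-1.0 + 0.01j, 1.0 + 0.0j))    # single negative: n mostly imaginary
```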
Negative index of refraction derives mathematically from the vector triplet E, H and k.
For plane waves propagating in electromagnetic metamaterials, the electric field, magnetic field and wave vector follow a left-hand rule, the reverse of the behavior of conventional optical materials.
To date, only metamaterials exhibit a negative index of refraction.
Single negative.
Single negative (SNG) metamaterials have either negative relative permittivity (ε) or negative relative permeability (μ), but not both. They act as metamaterials when combined with a different, complementary SNG, jointly acting as a DNG.
Epsilon negative media (ENG) display a negative ε while μ is positive. Many plasmas exhibit this characteristic. For example, noble metals such as gold or silver are ENG in the infrared and visible spectrums.
Mu-negative media (MNG) display a positive ε and negative μ. Gyrotropic or gyromagnetic materials exhibit this characteristic. A gyrotropic material is one that has been altered by the presence of a quasistatic magnetic field, enabling a magneto-optic effect. A magneto-optic effect is a phenomenon in which an electromagnetic wave propagates through such a medium. In such a material, left- and right-rotating elliptical polarizations can propagate at different speeds. When light is transmitted through a layer of magneto-optic material, the result is called the Faraday effect: the polarization plane can be rotated, forming a Faraday rotator. The results of such a reflection are known as the magneto-optic Kerr effect (not to be confused with the nonlinear Kerr effect). Two gyrotropic materials with reversed rotation directions of the two principal polarizations are called optical isomers.
Joining a slab of ENG material and a slab of MNG material results in properties such as resonances, anomalous tunneling, transparency and zero reflection. Like negative-index materials, SNGs are innately dispersive, so their ε, μ and refractive index n are functions of frequency.
Hyperbolic.
Hyperbolic metamaterials (HMMs) behave as a metal for one polarization or direction of light propagation and as a dielectric for the other, due to the negative and positive permittivity tensor components, giving extreme anisotropy. The material's dispersion relation in wavevector space forms a hyperboloid and therefore it is called a hyperbolic metamaterial. The extreme anisotropy of HMMs leads to directional propagation of light within and on the surface. HMMs have shown various potential applications, such as sensing, reflection modulation, imaging, steering of optical signals, and enhanced plasmon resonance effects.
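The hyperboloidal isofrequency surface mentioned here follows from the extraordinary-wave dispersion relation of a uniaxial effective medium, (kx² + ky²)/εz + kz²/εt = (ω/c)², with the optic axis along z: when εt and εz have opposite signs the surface is open rather than a closed ellipsoid. The Python sketch below classifies the surface from assumed permittivity components; the type I/type II labels follow a common convention for hyperbolic media.

```python
def isofrequency_type(eps_t, eps_z):
    """Classify the extraordinary-wave isofrequency surface of a uniaxial medium,
    (kx^2 + ky^2)/eps_z + kz^2/eps_t = (w/c)^2, with the optic axis along z."""
    if eps_t > 0 and eps_z > 0:
        return "closed ellipsoid (ordinary anisotropic dielectric)"
    if eps_t > 0 and eps_z < 0:
        return "two-sheet hyperboloid (commonly called type I hyperbolic)"
    if eps_t < 0 and eps_z > 0:
        return "one-sheet hyperboloid (commonly called type II hyperbolic)"
    return "no propagating extraordinary waves (both components negative)"

# Illustrative, assumed permittivity components:
print(isofrequency_type( 2.0,  3.0))
print(isofrequency_type( 2.0, -3.0))
print(isofrequency_type(-2.0,  3.0))
```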
Bandgap.
Electromagnetic bandgap metamaterials (EBG or EBM) control light propagation. This is accomplished either with photonic crystals (PC) or left-handed materials (LHM). PCs can prohibit light propagation altogether. Both classes can allow light to propagate in specific, designed directions and both can be designed with bandgaps at desired frequencies. The period size of EBGs is an appreciable fraction of the wavelength, creating constructive and destructive interference.
PCs are distinguished from sub-wavelength structures, such as tunable metamaterials, because a PC derives its properties from its bandgap characteristics. PCs are sized to match the wavelength of light, unlike other metamaterials, which expose sub-wavelength structure. Furthermore, PCs function by diffracting light. In contrast, metamaterials do not use diffraction.
PCs have periodic inclusions that inhibit wave propagation due to the inclusions' destructive interference from scattering. The photonic bandgap property of PCs makes them the electromagnetic analog of electronic semi-conductor crystals.
EBGs have the goal of creating high quality, low loss, periodic, dielectric structures. An EBG affects photons in the same way semiconductor materials affect electrons. PCs are the perfect bandgap material, because they allow no light propagation. Each unit of the prescribed periodic structure acts like one atom, albeit of a much larger size.
EBGs are designed to prevent the propagation of an allocated bandwidth of frequencies, for certain arrival angles and polarizations. Various geometries and structures have been proposed to fabricate EBG's special properties. In practice it is impossible to build a flawless EBG device.
EBGs have been manufactured for frequencies ranging from a few gigahertz (GHz) to a few terahertz (THz), radio, microwave and mid-infrared frequency regions. EBG application developments include a transmission line, woodpiles made of square dielectric bars and several different types of low gain antennas.
Double positive medium.
Double positive media (DPS) do occur in nature, such as naturally occurring dielectrics. Permittivity and magnetic permeability are both positive and wave propagation is in the forward direction. Artificial materials have been fabricated that combine DPS, ENG and MNG properties.
Bi-isotropic and bianisotropic.
Categorizing metamaterials into double or single negative, or double positive, normally assumes that the metamaterial has independent electric and magnetic responses described by ε and μ. However, in many cases, the electric field causes magnetic polarization, while the magnetic field induces electrical polarization, known as magnetoelectric coupling. Such media are denoted as bi-isotropic. Media that exhibit magnetoelectric coupling and that are anisotropic (which is the case for many metamaterial structures), are referred to as bi-anisotropic.
Four material parameters are intrinsic to magnetoelectric coupling of bi-isotropic media, relating the electric (E) and magnetic (H) field strengths to the electric (D) and magnetic (B) flux densities. These parameters are ε, μ, "κ" and χ, or permittivity, permeability, strength of chirality, and the Tellegen parameter, respectively. In this type of media, the material parameters do not vary with changes along a rotated coordinate system of measurements; in this sense they are invariant or scalar.
The intrinsic magnetoelectric parameters, "κ" and "χ", affect the phase of the wave. The effect of the chirality parameter is to split the refractive index. In isotropic media this results in wave propagation only if ε and μ have the same sign. In bi-isotropic media with "χ" assumed to be zero, and "κ" a non-zero value, different results appear. Either a backward wave or a forward wave can occur. Alternatively, two forward waves or two backward waves can occur, depending on the strength of the chirality parameter.
In the general case, the constitutive relations for bi-anisotropic materials read
formula_6
formula_7
where formula_8 and formula_9 are the permittivity and the permeability tensors, respectively, whereas formula_10 and formula_11 are the two magneto-electric tensors. If the medium is reciprocal, permittivity and permeability are symmetric tensors, and formula_12, where formula_13 is the chiral tensor describing chiral electromagnetic and reciprocal magneto-electric response. The chiral tensor can be expressed as formula_14, where formula_15 is the trace of formula_16, I is the identity matrix, N is a symmetric trace-free tensor, and J is an antisymmetric tensor. Such decomposition allows us to classify the reciprocal bianisotropic response and we can identify the following three main classes: (i) chiral media (formula_17), (ii) pseudochiral media (formula_18), (iii) omega media (formula_19).
Chiral.
Handedness of metamaterials is a potential source of confusion as the metamaterial literature includes two conflicting uses of the terms "left-" and "right-handed". The first refers to one of the two circularly polarized waves that are the propagating modes in chiral media. The second relates to the triplet of electric field, magnetic field and Poynting vector that arise in negative refractive index media, which in most cases are not chiral.
Generally a chiral and/or bianisotropic electromagnetic response is a consequence of 3D geometrical chirality: 3D-chiral metamaterials are made by embedding 3D-chiral structures in a host medium and they show chirality-related polarization effects such as optical activity and circular dichroism. The concept of 2D chirality also exists and a planar object is said to be chiral if it cannot be superposed onto its mirror image unless it is lifted from the plane. 2D-chiral metamaterials that are anisotropic and lossy have been observed to exhibit directionally asymmetric transmission (reflection, absorption) of circularly polarized waves due to circular conversion dichroism. On the other hand, bianisotropic response can arise from geometrically achiral structures possessing neither 2D nor 3D intrinsic chirality. Plum and colleagues investigated magneto-electric coupling due to extrinsic chirality, where the arrangement of an (achiral) structure together with the radiation wave vector is different from its mirror image, and observed large, tuneable linear optical activity, nonlinear optical activity, specular optical activity and circular conversion dichroism. Rizza "et al." suggested 1D chiral metamaterials where the effective chiral tensor is not vanishing if the system is geometrically one-dimensional chiral (the mirror image of the entire structure cannot be superposed onto it by using translations without rotations).
3D-chiral metamaterials are constructed from chiral materials or resonators in which the effective chirality parameter formula_20 is non-zero.
Wave propagation properties in such chiral metamaterials demonstrate that negative refraction can be realized in metamaterials with a strong chirality and positive formula_0 and formula_1.
This is because the refractive index formula_21 has distinct values for left and right circularly polarized waves, given by
formula_22
It can be seen that a negative index will occur for one polarization if formula_20 > formula_23. In this case, it is not necessary that either or both formula_0 and formula_1 be negative for backward wave propagation. A negative refractive index due to chirality was first observed simultaneously and independently by Plum "et al." and Zhang "et al." in 2009.
FSS based.
Frequency selective surface-based metamaterials block signals in one waveband and pass those at another waveband. They have become an alternative to fixed frequency metamaterials. They allow for optional changes of frequencies in a single medium, rather than the restrictive limitations of a fixed frequency response.
Other types.
Elastic.
These metamaterials use different parameters to achieve a negative index of refraction in materials that are not electromagnetic. Furthermore, "a new design for elastic metamaterials that can behave either as liquids or solids over a limited frequency range may enable new applications based on the control of acoustic, elastic and seismic waves." They are also called mechanical metamaterials.
Acoustic.
Acoustic metamaterials control, direct and manipulate sound in the form of sonic, infrasonic or ultrasonic waves in gases, liquids and solids. As with electromagnetic waves, sonic waves can exhibit negative refraction.
Control of sound waves is mostly accomplished through the bulk modulus "β", mass density "ρ" and chirality. The bulk modulus and density are analogs of permittivity and permeability in electromagnetic metamaterials. Related to this is the mechanics of sound wave propagation in a lattice structure. Materials also have mass and intrinsic degrees of stiffness. Together, these form a resonant system and the mechanical (sonic) resonance may be excited by appropriate sonic frequencies (for example audible pulses).
Structural.
Structural metamaterials provide properties such as crushability and light weight. Using projection micro-stereolithography, microlattices can be created using forms much like trusses and girders. Materials four orders of magnitude stiffer than conventional aerogel, but with the same density, have been created. Such materials can withstand a load of at least 160,000 times their own weight by over-constraining the materials.
A ceramic nanotruss metamaterial can be flattened and revert to its original state.
Thermal.
Typically materials found in nature, when homogeneous, are thermally isotropic. That is to say, heat passes through them at roughly the same rate in all directions. However, thermal metamaterials are usually anisotropic due to their highly organized internal structure. Composite materials with highly aligned internal particles or structures, such as fibers and carbon nanotubes (CNT), are examples of this.
Nonlinear.
Metamaterials may be fabricated that include some form of nonlinear media, whose properties change with the power of the incident wave. Nonlinear media are essential for nonlinear optics. Most optical materials have a relatively weak response, meaning that their properties change by only a small amount for large changes in the intensity of the electromagnetic field. The local electromagnetic fields of the inclusions in nonlinear metamaterials can be much larger than the average value of the field. Besides, remarkable nonlinear effects have been predicted and observed if the metamaterial effective dielectric permittivity is very small (epsilon-near-zero media). In addition, exotic properties such as a negative refractive index, create opportunities to tailor the phase matching conditions that must be satisfied in any nonlinear optical structure.
Liquid.
Metafluids offer programmable properties such as viscosity, compressibility, and optical behavior. One approach employed 50–500 micron diameter air-filled elastomer spheres suspended in silicone oil. The spheres compress under pressure, and regain their shape when the pressure is relieved. Their properties differ across those two states. Unpressurized, they scatter light, making them opaque. Under pressure, they collapse into half-moon shapes, focusing light, and becoming transparent. The pressure response could allow them to act as a sensor or as a dynamic hydraulic fluid. Like cornstarch, it can act as either a Newtonian or a non-Newtonian fluid. Under pressure, it becomes non-Newtonian – meaning its viscosity changes in response to shear force.
Hall metamaterials.
In 2009, Marc Briane and Graeme Milton proved mathematically that one can in principle invert the sign of the Hall coefficient of a three-dimensional composite of three materials, using only materials whose Hall coefficients share a single sign. Later, in 2015, Muamer Kadic et al. showed that a simple perforation of an isotropic material can lead to a change of sign of its Hall coefficient. This theoretical claim was finally demonstrated experimentally by Christian Kern et al.
In 2015, it was also demonstrated by Christian Kern et al. that an anisotropic perforation of a single material can lead to a still more unusual effect, namely the parallel Hall effect. This means that the induced electric field inside a conducting medium is no longer orthogonal to the current and the magnetic field, but is actually parallel to the latter.
Meta-biomaterials.
Meta-biomaterials have been purposefully crafted to engage with biological systems, amalgamating principles from both metamaterial science and biological areas. Engineered at the nanoscale, these materials adeptly manipulate electromagnetic, acoustic, or thermal properties to facilitate biological processes. Through meticulous adjustment of their structure and composition, meta-biomaterials hold promise in augmenting various biomedical technologies such as medical imaging, drug delivery, and tissue engineering. This underscores the importance of comprehending biological systems through the interdisciplinary lens of materials science.
Frequency bands.
Terahertz.
Terahertz metamaterials interact at terahertz frequencies, usually defined as 0.1 to 10 THz. Terahertz radiation lies at the far end of the infrared band, just after the end of the microwave band. This corresponds to millimeter and submillimeter wavelengths between the 3 mm (EHF band) and 0.03 mm (long-wavelength edge of far-infrared light).
Photonic.
Photonic metamaterials interact with optical frequencies (mid-infrared). The sub-wavelength period distinguishes them from photonic band gap structures.
Tunable.
Tunable metamaterials allow arbitrary adjustment of the frequency dependence of the refractive index. A tunable metamaterial expands beyond the bandwidth limitations of left-handed materials by constructing various types of metamaterials.
Plasmonic.
Plasmonic metamaterials exploit surface plasmons, which are produced by the interaction of light at metal-dielectric interfaces. Under specific conditions, the incident light couples with the surface plasmons to create self-sustaining, propagating electromagnetic waves or surface waves known as surface plasmon polaritons. Bulk plasma oscillations make possible the effect of negative mass (density).
Applications.
Metamaterials are under consideration for many applications. Metamaterial antennas are commercially available.
In 2007, one researcher stated that for metamaterial applications to be realized, energy loss must be reduced, materials must be extended into three-dimensional isotropic materials and production techniques must be industrialized.
Antennas.
Metamaterial antennas are a class of antennas that use metamaterials to improve performance. Demonstrations showed that metamaterials could enhance an antenna's radiated power. Materials that can attain negative permeability allow for properties such as small antenna size, high directivity and tunable frequency.
Absorber.
A metamaterial absorber manipulates the loss components of metamaterials' permittivity and magnetic permeability, to absorb large amounts of electromagnetic radiation. This is a useful feature for photodetection and solar photovoltaic applications. Loss components are also relevant in applications of negative refractive index (photonic metamaterials, antenna systems) or transformation optics (metamaterial cloaking, celestial mechanics), but often are not used in these applications.
Superlens.
A "superlens" is a two or three-dimensional device that uses metamaterials, usually with negative refraction properties, to achieve resolution beyond the diffraction limit (ideally, infinite resolution). Such a behaviour is enabled by the capability of double-negative materials to yield negative phase velocity. The diffraction limit is inherent in conventional optical devices or lenses.
Cloaking devices.
Metamaterials are a potential basis for a practical cloaking device. The proof of principle was demonstrated on October 19, 2006. No practical cloaks are publicly known to exist.
Radar cross-section (RCS-)reducing metamaterials.
Metamaterials have applications in stealth technology, which reduces RCS in any of various ways (e.g., absorption, diffusion, redirection). Conventionally, the RCS has been reduced either by radar-absorbent material (RAM) or by purpose shaping of the targets such that the scattered energy can be redirected away from the source. While RAMs have narrow frequency band functionality, purpose shaping limits the aerodynamic performance of the target. More recently, metamaterials or metasurfaces are synthesized that can redirect the scattered energy away from the source using either array theory or generalized Snell's law. This has led to aerodynamically favorable shapes for the targets with the reduced RCS.
Seismic protection.
Seismic metamaterials counteract the adverse effects of seismic waves on man-made structures.
Sound filtering.
Metamaterials textured with nanoscale wrinkles could control sound or light signals, such as changing a material's color or improving ultrasound resolution. Uses include nondestructive material testing, medical diagnostics and sound suppression. The materials can be made through a high-precision, multi-layer deposition process. The thickness of each layer can be controlled within a fraction of a wavelength. The material is then compressed, creating precise wrinkles whose spacing can cause scattering of selected frequencies.
Guided mode manipulations.
Metamaterials can be integrated with optical waveguides to tailor guided electromagnetic waves (meta-waveguides). Subwavelength structures like metamaterials can be integrated with, for instance, silicon waveguides to develop polarization beam splitters and optical couplers, adding new degrees of freedom for controlling light propagation at the nanoscale in integrated photonic devices. Other applications such as integrated mode converters, polarization (de)multiplexers, structured light generation, and on-chip bio-sensors can be developed.
Theoretical models.
All materials are made of atoms, which are dipoles. These dipoles modify light velocity by a factor "n" (the refractive index). In a split ring resonator the ring and wire units act as atomic dipoles: the wire acts as a ferroelectric atom, the ring acts as an inductor "L", and the open section acts as a capacitor "C". The ring as a whole acts as an LC circuit. When the electromagnetic field passes through the ring, an induced current is created. The generated field is perpendicular to the light's magnetic field. The magnetic resonance results in a negative permeability; the refractive index is negative as well. (The lens is not truly flat, since the structure's capacitance imposes a slope for the electric induction.)
Several (mathematical) material models describe the frequency response in DNGs. One of these is the Lorentz model, which describes electron motion in terms of a driven-damped harmonic oscillator. The Debye relaxation model applies when the acceleration component of the Lorentz model is small compared to the other components of the equation. The Drude model applies when the restoring force component is negligible and the coupling coefficient is generally the plasma frequency. Other component distinctions call for the use of one of these models, depending on polarity or purpose.
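As a rough illustration of these dispersion models (the parameter values are arbitrary and normalized, not fitted to any real metamaterial), the Lorentz and Drude permittivities can be evaluated as follows:

```python
import numpy as np

def lorentz_eps(omega, omega_p, omega_0, gamma):
    """Lorentz model: driven-damped harmonic oscillator response
    (e^{-i omega t} time convention assumed)."""
    return 1.0 + omega_p**2 / (omega_0**2 - omega**2 - 1j * gamma * omega)

def drude_eps(omega, omega_p, gamma):
    """Drude model: the Lorentz model with the restoring force (omega_0) set to zero."""
    return 1.0 - omega_p**2 / (omega**2 + 1j * gamma * omega)

# Illustrative, dimensionless parameters (frequencies normalized to the plasma frequency).
omega = np.linspace(0.1, 2.0, 5)
print(lorentz_eps(omega, omega_p=1.0, omega_0=0.8, gamma=0.05))
print(drude_eps(omega, omega_p=1.0, gamma=0.05))   # Re(eps) < 0 below the plasma frequency
```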
Three-dimensional composites of metal/non-metallic inclusions periodically/randomly embedded in a low permittivity matrix are usually modeled by analytical methods, including mixing formulas and scattering-matrix based methods. The particle is modeled by either an electric dipole parallel to the electric field or a pair of crossed electric and magnetic dipoles parallel to the electric and magnetic fields, respectively, of the applied wave. These dipoles are the leading terms in the multipole series. They are the only existing ones for a homogeneous sphere, whose polarizability can be easily obtained from the Mie scattering coefficients. In general, this procedure is known as the "point-dipole approximation", which is a good approximation for metamaterials consisting of composites of electrically small spheres. Merits of these methods include low calculation cost and mathematical simplicity.
Three concepts (the negative-index medium, the non-reflecting crystal and the superlens) are foundations of metamaterial theory. Other first-principles techniques for analyzing triply-periodic electromagnetic media may be found in computing photonic band structure.
Institutional networks.
MURI.
The Multidisciplinary University Research Initiative (MURI) encompasses dozens of universities and a few government organizations. Participating universities include UC Berkeley, UC Los Angeles, UC San Diego, the Massachusetts Institute of Technology, and Imperial College London. The sponsors are the Office of Naval Research and the Defense Advanced Research Projects Agency.
MURI supports research that intersects more than one traditional science and engineering discipline to accelerate both research and translation to applications. As of 2009, 69 academic institutions were expected to participate in 41 research efforts.
Metamorphose.
The Virtual Institute for Artificial Electromagnetic Materials and Metamaterials "Metamorphose VI AISBL" is an international association to promote artificial electromagnetic materials and metamaterials. It organizes scientific conferences, supports specialized journals, creates and manages research programs, provides training programs (including PhD programs and training for industrial partners), and facilitates technology transfer to European industry.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\varepsilon_r"
},
{
"math_id": 1,
"text": "\\mu_r"
},
{
"math_id": 2,
"text": " n =\\pm\\sqrt{\\varepsilon_\\mathrm{r}\\mu_\\mathrm{r}}"
},
{
"math_id": 3,
"text": "\\mu_r < 0"
},
{
"math_id": 4,
"text": "\\varepsilon_r\\mu_r"
},
{
"math_id": 5,
"text": "k c = \\omega\\sqrt{\\mu \\varepsilon}"
},
{
"math_id": 6,
"text": " \\mathbf{D} = \\varepsilon \\mathbf{E} + \\xi \\mathbf{H}, "
},
{
"math_id": 7,
"text": " \\mathbf{B} = \\zeta \\mathbf{E} + \\mu \\mathbf{H}, "
},
{
"math_id": 8,
"text": " \\varepsilon "
},
{
"math_id": 9,
"text": " \\mu "
},
{
"math_id": 10,
"text": " \\xi "
},
{
"math_id": 11,
"text": " \\zeta "
},
{
"math_id": 12,
"text": " \\xi=-\\zeta^T=-i \\kappa^T "
},
{
"math_id": 13,
"text": " \\kappa "
},
{
"math_id": 14,
"text": " \\kappa=\\tfrac{1}{3}\\operatorname{tr}(\\kappa) I+N+J "
},
{
"math_id": 15,
"text": " \\operatorname{tr}(\\kappa) "
},
{
"math_id": 16,
"text": " \\kappa "
},
{
"math_id": 17,
"text": " \\operatorname{tr}(\\kappa) \\neq 0, N \\neq 0, J=0 "
},
{
"math_id": 18,
"text": " \\operatorname{tr}(\\kappa) = 0, N \\neq 0, J=0 "
},
{
"math_id": 19,
"text": " \\operatorname{tr}(\\kappa) = 0, N = 0, J \\neq 0 "
},
{
"math_id": 20,
"text": "\\kappa"
},
{
"math_id": 21,
"text": "n"
},
{
"math_id": 22,
"text": "n = \\pm\\sqrt{\\varepsilon_r\\mu_r} \\pm \\kappa"
},
{
"math_id": 23,
"text": "\\sqrt{\\varepsilon_r\\mu_r}"
}
] | https://en.wikipedia.org/wiki?curid=906878 |
9070022 | Flexible polyhedron | In geometry, a flexible polyhedron is a polyhedral surface without any boundary edges, whose shape can be continuously changed while keeping the shapes of all of its faces unchanged. The Cauchy rigidity theorem shows that in dimension 3 such a polyhedron cannot be convex (this is also true in higher dimensions).
The first examples of flexible polyhedra, now called Bricard octahedra, were discovered by Raoul Bricard (1897). They are self-intersecting surfaces isometric to an octahedron. The first example of a flexible non-self-intersecting surface in formula_0, the Connelly sphere, was discovered by Robert Connelly (1977). Steffen's polyhedron is another non-self-intersecting flexible polyhedron derived from Bricard's octahedra.
Bellows conjecture.
In the late 1970s Connelly and D. Sullivan formulated the bellows conjecture stating that the volume of a flexible polyhedron is invariant under flexing. This conjecture was proved for polyhedra homeomorphic to a sphere by I. Kh. Sabitov (1995)
using elimination theory, and then proved for general orientable 2-dimensional polyhedral surfaces by Robert Connelly, I. Sabitov, and Anke Walz (1997). The proof extends Piero della Francesca's formula for the volume of a tetrahedron to a formula for the volume of any polyhedron. The extended formula shows that the volume must be a root of a polynomial whose coefficients depend only on the lengths of the polyhedron's edges. Since the edge lengths cannot change as the polyhedron flexes, the volume must remain at one of the finitely many roots of the polynomial, rather than changing continuously.
Scissor congruence.
Connelly conjectured that the Dehn invariant of a flexible polyhedron is invariant under flexing. This was known as the strong bellows conjecture or (after it was proven in 2018) the strong bellows theorem. Because all configurations of a flexible polyhedron have both the same volume and the same Dehn invariant, they are scissors congruent to each other, meaning that for any two of these configurations it is possible to dissect one of them into polyhedral pieces that can be reassembled to form the other. The total mean curvature of a flexible polyhedron, defined as the sum of the products of edge lengths with exterior dihedral angles, is a function of the Dehn invariant that is also known to stay constant while a polyhedron flexes.
Generalizations.
Flexible 4-polytopes in 4-dimensional Euclidean space and 3-dimensional hyperbolic space were studied by Hellmuth Stachel (2000). In dimensions formula_1, flexible polytopes were constructed by .
References.
Notes.
<templatestyles src="Reflist/styles.css" />
Primary sources.
<templatestyles src="Refbegin/styles.css" />
Secondary sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{R}^3"
},
{
"math_id": 1,
"text": "n\\geq 5"
}
] | https://en.wikipedia.org/wiki?curid=9070022 |
907108 | Alfvén wave | Low-frequency plasma wave
In plasma physics, an Alfvén wave, named after Hannes Alfvén, is a type of plasma wave in which ions oscillate in response to a restoring force provided by an effective tension on the magnetic field lines.
Definition.
An Alfvén wave is a low-frequency (compared to the ion gyrofrequency) travelling oscillation of the ions and magnetic field in a plasma. The ion mass density provides the inertia and the magnetic field line tension provides the restoring force. Alfvén waves propagate in the direction of the magnetic field, and the motion of the ions and the perturbation of the magnetic field are transverse to the direction of propagation. However, Alfvén waves existing at oblique incidences will smoothly change into magnetosonic waves when the propagation is perpendicular to the magnetic field.
Alfvén waves are dispersionless.
Alfvén velocity.
The low-frequency relative permittivity formula_0 of a magnetized plasma is given by
formula_1
where B is the magnetic flux density, formula_2 is the speed of light, formula_3 is the permeability of the vacuum, and the mass density is the sum
formula_4
over all species of charged plasma particles (electrons as well as all types of ions).
Here species formula_5 has number density formula_6
and mass per particle formula_7.
The phase velocity of an electromagnetic wave in such a medium is
formula_8
For the case of an Alfvén wave
formula_9
where
formula_10
is the Alfvén wave group velocity.
If formula_11, then formula_12. On the other hand, when formula_13, formula_14. That is, at high field or low density, the group velocity of the Alfvén wave approaches the speed of light, and the Alfvén wave becomes an ordinary electromagnetic wave.
Neglecting the contribution of the electrons to the mass density, formula_15, where formula_16 is the ion number density and formula_17 is the mean ion mass per particle, so that
formula_18
Alfvén time.
In plasma physics, the Alfvén time formula_19 is an important timescale for wave phenomena. It is related to the Alfvén velocity by:
formula_20
where formula_21 denotes the characteristic scale of the system. For example, formula_21 could be the minor radius of the torus in a tokamak.
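For example, with assumed tokamak-like parameters (the numbers below are illustrative only, not taken from any specific device), the Alfvén time comes out at roughly a tenth of a microsecond:

```python
import math

MU0 = 4e-7 * math.pi
M_D = 2 * 1.672621e-27          # approximate deuteron mass, kg (illustrative)

# Assumed tokamak-like parameters, purely for illustration.
B, n_i, a_minor = 5.0, 1e20, 1.0     # tesla, m^-3, m
v_a = B / math.sqrt(MU0 * n_i * M_D)
tau_a = a_minor / v_a
print(v_a, tau_a)    # v_A ~ 8e6 m/s, tau_A ~ 1e-7 s
```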
Relativistic case.
The Alfvén wave velocity in relativistic magnetohydrodynamics is
formula_22
where e is the total energy density of plasma particles, formula_23 is the total plasma pressure, and
formula_24
is the magnetic pressure. In the non-relativistic limit, where formula_25, this formula reduces to the one given previously.
History.
The coronal heating problem.
The study of Alfvén waves began with the coronal heating problem, a longstanding question in heliophysics. It was unclear why the solar corona is so hot (about one million kelvins) compared to its surface (the photosphere), which is only a few thousand kelvins. Intuitively, it would make sense to see a decrease in temperature when moving away from a heat source, but this does not seem to be the case even though the photosphere is denser and would generate more heat than the corona.
In 1942, Hannes Alfvén proposed in "Nature" the existence of an electromagnetic-hydrodynamic wave which would carry energy from the photosphere to heat up the corona and the solar wind. He claimed that the sun had all the necessary criteria to support these waves and they may in turn be responsible for sun spots. He stated:
If a conducting liquid is placed in a constant magnetic field, every motion of the liquid gives rise to an E.M.F. which produces electric currents. Owing to the magnetic field, these currents give mechanical forces which change the state of motion of the liquid. Thus a kind of combined electromagnetic–hydrodynamic wave is produced.
This would eventually turn out to be Alfvén waves. He received the 1970 Nobel Prize in Physics for this discovery.
Experimental studies and observations.
The convection zone of the sun, the region beneath the photosphere in which energy is transported primarily by convection, is sensitive to the motion of the core due to the rotation of the sun. Together with varying pressure gradients beneath the surface, electromagnetic fluctuations produced in the convection zone induce random motion on the photospheric surface and produce Alfvén waves. The waves then leave the surface, travel through the chromosphere and transition zone, and interact with the ionized plasma. The wave itself carries energy and some of the electrically charged plasma.
In the early 1990s, de Pontieu and Haerendel suggested that Alfvén waves may also be associated with the plasma jets known as spicules. It was theorized these brief spurts of superheated gas were carried by the combined energy and momentum of their own upward velocity, as well as the oscillating transverse motion of the Alfvén waves.
In 2007, Alfvén waves were reportedly observed for the first time traveling towards the corona by Tomczyk "et al"., but their predictions could not conclude that the energy carried by the Alfvén waves was sufficient to heat the corona to its enormous temperatures, for the observed amplitudes of the waves were not high enough. However, in 2011, McIntosh "et al". reported the observation of highly energetic Alfvén waves combined with energetic spicules which could sustain heating the corona to its million-kelvin temperature. These observed amplitudes (20.0 km/s against 2007's observed 0.5 km/s) contained over one hundred times more energy than the ones observed in 2007. The short period of the waves also allowed more energy transfer into the coronal atmosphere. The 50,000 km-long spicules may also play a part in accelerating the solar wind past the corona. Alfvén waves are routinely observed in solar wind, in particular in fast solar wind streams. The role of Alfvénic oscillations in the interaction between fast solar wind and the Earth's magnetosphere is currently under debate.
However, the above-mentioned discoveries of Alfvén waves in the complex Sun's atmosphere, starting from the Hinode era in 2007 for the next 10 years, mostly fall in the realm of Alfvénic waves essentially generated as a mixed mode due to transverse structuring of the magnetic and plasma properties in the localized flux tubes. In 2009, Jess "et al". reported the periodic variation of H-alpha line-width as observed by Swedish Solar Telescope (SST) above chromospheric bright-points. They claimed first direct detection of the long-period (126–700 s), incompressible, torsional Alfvén waves in the lower solar atmosphere.
After the seminal work of Jess "et al". (2009), in 2017 Srivastava "et al". detected the existence of high-frequency torsional Alfvén waves in the Sun's chromospheric fine-structured flux tubes. They discovered that these high-frequency waves carry substantial energy capable of heating the Sun's corona and also in originating the supersonic solar wind. In 2018, using spectral imaging observations, non-LTE (local thermodynamic equilibrium) inversions and magnetic field extrapolations of sunspot atmospheres, Grant et al. found evidence for elliptically polarized Alfvén waves forming fast-mode shocks in the outer regions of the chromospheric umbral atmosphere. They provided quantification of the degree of physical heat provided by the dissipation of such Alfvén wave modes above active region spots.
In 2024, a paper was published in the journal "Science" detailing a set of observations of what turned out to be the same jet of solar wind made by Parker Solar Probe and Solar Orbiter in February 2022, and implying Alfvén waves were what kept the jet's energy high enough to match the observations.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\varepsilon"
},
{
"math_id": 1,
"text": " \\varepsilon = 1 + \\frac{c^2\\,\\mu_0\\,\\rho}{B^2}"
},
{
"math_id": 2,
"text": "c"
},
{
"math_id": 3,
"text": "\\mu_0"
},
{
"math_id": 4,
"text": " \\rho = \\sum_s n_s m_s ,"
},
{
"math_id": 5,
"text": "s"
},
{
"math_id": 6,
"text": "n_s"
},
{
"math_id": 7,
"text": "m_s"
},
{
"math_id": 8,
"text": " v = \\frac{c}{\\sqrt{\\varepsilon}} = \\frac{c}{\\sqrt{1 + \\dfrac{c^2 \\mu_0 \\rho}{B^2}}}"
},
{
"math_id": 9,
"text": " v = \\frac{v_A}{\\sqrt{1 + \\dfrac{v_A^2}{c^2}}}"
},
{
"math_id": 10,
"text": " v_A \\equiv \\frac{B}{\\sqrt{\\mu_0\\,\\rho}}"
},
{
"math_id": 11,
"text": "v_A \\ll c"
},
{
"math_id": 12,
"text": "v \\approx v_A"
},
{
"math_id": 13,
"text": "v_A \\to \\infty"
},
{
"math_id": 14,
"text": "v \\to c"
},
{
"math_id": 15,
"text": "\\rho = n_i \\, m_i"
},
{
"math_id": 16,
"text": "n_i"
},
{
"math_id": 17,
"text": "m_i"
},
{
"math_id": 18,
"text": "v_A \\approx \\left(2.18 \\times 10^{11}\\,\\text{cm}\\,\\text{s}^{-1}\\right) \\left(\\frac{m_i}{m_p}\\right)^{-\\frac{1}{2}} \\left(\\frac{n_i}{1~\\text{cm}^{-3}}\\right)^{-\\frac{1}{2}} \\left(\\frac{B}{1~\\text{G}}\\right)."
},
{
"math_id": 19,
"text": "\\tau_A"
},
{
"math_id": 20,
"text": "\\tau_A = \\frac{a}{v_A}"
},
{
"math_id": 21,
"text": "a"
},
{
"math_id": 22,
"text": "v = \\frac{c}{\\sqrt{1 + \\dfrac{e + P}{2 P_m}}}"
},
{
"math_id": 23,
"text": "P"
},
{
"math_id": 24,
"text": " P_m = \\frac{B^2}{2 \\mu_0}"
},
{
"math_id": 25,
"text": "P \\ll e \\approx \\rho c^2"
}
] | https://en.wikipedia.org/wiki?curid=907108 |
9075104 | Dark Energy Survey | Project to measure the expansion of the universe
The Dark Energy Survey (DES) is an astronomical survey designed to constrain the properties of dark energy. It uses images taken in the near-ultraviolet, visible, and near-infrared to measure the expansion of the universe using Type Ia supernovae, baryon acoustic oscillations, the number of galaxy clusters, and weak gravitational lensing. The collaboration is composed of research institutions and universities from the United States, Australia, Brazil, the United Kingdom, Germany, Spain, and Switzerland. The collaboration is divided into several scientific working groups. The director of DES is Josh Frieman.
The DES began by developing and building the Dark Energy Camera (DECam), an instrument designed specifically for the survey. This camera has a wide field of view and high sensitivity, particularly in the red part of the visible spectrum and in the near infrared. Observations were performed with DECam mounted on the 4-meter Víctor M. Blanco Telescope, located at the Cerro Tololo Inter-American Observatory (CTIO) in Chile. Observing sessions ran from 2013 to 2019; as of 2021 the DES collaboration has published results from the first three years of the survey.
DECam.
DECam, short for the Dark Energy Camera, is a large camera built to replace the previous prime focus camera on the Victor M. Blanco Telescope. The camera consists of three major components: mechanics, optics, and CCDs.
Mechanics.
The mechanics of the camera consists of a filter changer with an 8-filter capacity and shutter. There is also an optical barrel that supports 5 corrector lenses, the largest of which is 98 cm in diameter. These components are attached to the CCD focal plane which is cooled to with liquid nitrogen in order to reduce thermal noise in the CCDs. The focal plane is also kept in an extremely low vacuum of to prevent the formation of condensation on the sensors. The entire camera with lenses, filters, and CCDs weighs approximately 4 tons. When mounted at the prime focus it was supported with a hexapod system allowing for real time focal adjustment.
Optics.
The camera is outfitted with u, g, r, i, z, and Y filters spanning roughly from 340–1070 nm, similar to those used in the Sloan Digital Sky Survey (SDSS). This allows DES to obtain photometric redshift measurements to z≈1. DECam also contains five lenses acting as corrector optics to extend the telescope's field of view to a diameter of 2.2°, one of the widest fields of view available for ground-based optical and infrared imaging. One significant difference between previous charge-coupled devices (CCD) at the Victor M. Blanco Telescope and DECam is the improved quantum efficiency in the red and near-infrared wavelengths.
CCDs.
The scientific sensor array on DECam is an array of 62 2048×4096 pixel back-illuminated CCDs totaling 520 megapixels; an additional 12 2048×2048 pixel CCDs (50 Mpx) are used for guiding the telescope, monitoring focus, and alignment. The full DECam focal plane contains 570 megapixels. The CCDs for DECam use high resistivity silicon manufactured by Dalsa and LBNL with 15×15 micron pixels. By comparison, the OmniVision Technologies back-illuminated CCD that was used in the iPhone 4 has a 1.75×1.75 micron pixel with 5 megapixels. The larger pixels allow DECam to collect more light per pixel, improving low light sensitivity which is desirable for an astronomical instrument. DECam's CCDs also have a 250-micron crystal depth; this is significantly larger than most consumer CCDs. The additional crystal depth increases the path length travelled by entering photons. This, in turn, increases the probability of interaction and allows the CCDs to have an increased sensitivity to lower energy photons, extending the wavelength range to 1050 nm. Scientifically this is important because it allows one to look for objects at a higher redshift, increasing statistical power in the studies mentioned above. When placed in the telescope's focal plane each pixel has a width of 0.27″ on the sky, resulting in a total field of view of 3 square degrees.
Survey.
DES imaged 5,000 square degrees of the southern sky in a footprint that overlaps with the South Pole Telescope and Stripe 82 (in large part avoiding the Milky Way). The survey took 758 observing nights spread over six annual sessions between August and February to complete, covering the survey footprint ten times in five photometric bands ("g", "r, i, z", and "Y"). The survey reached a depth of 24th magnitude in the i band over the entire survey area. Longer exposure times and faster observing cadence were made in five smaller patches totaling 30 square degrees to search for supernovae.
First light was achieved on 12 September 2012; after a verification and testing period, scientific survey observations started in August 2013. The last observing session was completed on 9 January 2019.
Other surveys using DECam.
After completion of the Dark Energy Survey, the Dark Energy Camera was used for other sky surveys:
Observing.
Each year from August through February, observers stay in dormitories on the mountain. During a weeklong period of work, observers sleep during the day and use the telescope and camera at night. Some DES members work at the telescope console to monitor operations while others monitor camera operations and data processing.
For the wide-area footprint observations, DES takes a new image roughly every two minutes: the exposures are typically 90 seconds long, with another 30 seconds for reading out the camera data and slewing to point the telescope at its next target. Despite the restrictions on each exposure, the team also needs to consider different sky conditions for the observations, such as moonlight and cloud cover.
To get better images, the DES team uses a computer algorithm called the "Observing Tactician" (ObsTac) to help with sequencing observations. It optimizes among different factors, such as the date and time, weather conditions, and the position of the moon. ObsTac automatically points the telescope in the best direction and selects the exposure, using the best light filter. It also decides whether to take a wide-area or time-domain survey image, depending on whether or not the exposure will also be used for supernova searches.
Results.
Cosmology.
The Dark Energy Survey collaboration has published several papers presenting its cosmology results. Most of these results come from the first-year and third-year data. The cosmological conclusions rely on a multi-probe methodology, which mainly combines data from galaxy-galaxy lensing, weak lensing shape measurements, cosmic shear, galaxy clustering and the photometric data set.
For the first-year data collected by DES, the collaboration presented cosmological constraints from galaxy clustering and weak lensing, and from cosmic shear measurements. From galaxy clustering and weak lensing, it found formula_0 and formula_1 for ΛCDM, and formula_2, formula_3 and formula_4 at 68% confidence limits for ωCDM. Combining the most significant measurements of cosmic shear in a galaxy survey, the collaboration showed that formula_5 at 68% confidence limits in ΛCDM, and formula_6 with formula_7 in ωCDM. Other cosmological analyses of the first-year data included a derivation and validation of redshift distribution estimates and their uncertainties for the galaxies used as weak lensing sources. The DES team also published a paper summarizing the entire first-year photometric data set for cosmology.
For the third-year data collected by DES, the collaboration updated the cosmological constraints to formula_8 for the ΛCDM model with the new cosmic shear measurements. From the third-year galaxy clustering and weak lensing results, DES updated the constraints to formula_9 and formula_10 in ΛCDM at 68% confidence limits, and formula_11, formula_12 and formula_13 in ωCDM at 68% confidence limits. Similarly, the DES team published its third-year photometric data set for cosmology, comprising nearly 5000 deg2 of grizY imaging in the south Galactic cap, including nearly 390 million objects, with depth reaching S/N ~ 10 for extended objects up to formula_14 ~ 23.0 and top-of-the-atmosphere photometric uniformity < 3 mmag.
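The parameter S8 appearing in these constraints is defined as σ8(Ωm/0.3)^0.5. As a minimal illustration, inverting that definition with the year-3 central values quoted above (uncertainties ignored) gives the implied σ8; this is only an arithmetic sketch, not a result published by the collaboration.

```python
# S8 = sigma8 * (Omega_m / 0.3)**0.5, inverted with the year-3 central values.
S8, Omega_m = 0.776, 0.339
sigma8 = S8 / (Omega_m / 0.3) ** 0.5
print(sigma8)    # ~0.73
```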
Weak lensing.
Weak lensing was measured statistically by measuring the shear-shear correlation function, a two-point function, or its Fourier transform, the shear power spectrum. In April 2015, the Dark Energy Survey released mass maps using cosmic shear measurements of about 2 million galaxies from the science verification data between August 2012 and February 2013. In 2021 weak lensing was used to map the dark matter in a region of the southern hemisphere sky, in 2022 together with galaxy clustering data to give new cosmological constraints, and in 2023 with data from the Planck telescope and the South Pole Telescope to give further improved constraints.
Another major part of the weak lensing analysis is calibrating the redshifts of the source galaxies. In December 2020 and June 2021, the DES team published two papers presenting their results on calibrating the redshifts of the source galaxies in order to map the matter density field with gravitational lensing.
Gravitational waves.
After LIGO detected the gravitational wave signal GW170817, the first from a binary neutron star merger, DES made follow-up observations using DECam. With DECam's independent discovery of the optical source, the DES team established its association with GW170817 by showing that none of the 1500 other sources found within the event localization region could plausibly be associated with the event. The team monitored the source for over two weeks and provided the light curve data as a machine-readable file. From the observation data set, DES concluded that the optical counterpart it identified near NGC 4993 is associated with GW170817. This discovery ushered in the era of multi-messenger astronomy with gravitational waves and demonstrated the power of DECam to identify the optical counterparts of gravitational-wave sources.
Dwarf galaxies.
In March 2015, two teams released their discoveries of several new potential dwarf galaxy candidates found in Year 1 DES data. In August 2015, the Dark Energy Survey team announced the discovery of eight additional candidates in Year 2 DES data. Later on, the Dark Energy Survey team found more dwarf galaxies. With more dwarf galaxy results, the team was able to examine further properties of the detected dwarf galaxies, such as their chemical abundances, stellar population structure, stellar kinematics and metallicities. In February 2019, the team also discovered a sixth star cluster in the Fornax dwarf spheroidal galaxy and a tidally disrupted ultra-faint dwarf galaxy.
Baryon acoustic oscillations.
The signature of baryon acoustic oscillations (BAO) can be observed in the distribution of tracers of the matter density field and used to measure the expansion history of the Universe. BAO can also be measured using purely photometric data, though at less significance. The DES observation sample consists of 7 million galaxies distributed over a footprint of 4100 deg2 with 0.6 < zphoto < 1.1 and a typical redshift uncertainty of 0.03(1+z). The team combines the likelihoods derived from angular correlations and spherical harmonics to constrain the ratio of the comoving angular diameter distance at the effective redshift of the sample to the sound horizon scale at the drag epoch, formula_15.
Type Ia supernova observations.
In May 2019, the Dark Energy Survey team published its first cosmology results using Type Ia supernovae, based on the DES-SN3YR sample. The team found Ωm = 0.331 ± 0.038 with a flat ΛCDM model and Ωm = 0.321 ± 0.018, w = −0.978 ± 0.059 with a flat wCDM model. Analyzing the same DES-SN3YR data, they also found a new value of the Hubble constant, formula_16. This result is in excellent agreement with the Hubble constant measurement from the Planck Collaboration in 2018. In June 2019, a follow-up paper was published by the DES team discussing the systematic uncertainties and the validation of using supernovae to measure the cosmology results mentioned before. The team also published its photometric pipeline and light curve data in another paper the same month.
Minor planets.
Several minor planets were discovered by DeCam in the course of "The Dark Energy Survey", including high-inclination trans-Neptunian objects (TNOs).
The MPC has assigned the IAU code W84 for DeCam's observations of small Solar System bodies. As of October 2019, the MPC inconsistently credits the discovery of nine numbered minor planets, all of them trans-Neptunian objects, to either "DeCam" or "Dark Energy Survey". The list does not contain any unnumbered minor planets potentially discovered by DeCam, as discovery credits are only given upon a body's numbering, which in turn depends on a sufficiently secure orbit determination.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S_8=\\sigma_8(\\Omega_m/0.3)^{0.5}= 0.773_{-0.020}^{+0.026}"
},
{
"math_id": 1,
"text": "\\Omega_m= 0.267_{-0.017}^{+0.030}"
},
{
"math_id": 2,
"text": "S_8= 0.782_{-0.024}^{+0.036}"
},
{
"math_id": 3,
"text": "\\Omega_m= 0.284_{-0.030}^{+0.033}"
},
{
"math_id": 4,
"text": "\\omega= -0.82_{-0.20}^{+0.21}"
},
{
"math_id": 5,
"text": "\\sigma_8(\\Omega_m/0.3)^{0.5}= 0.782_{-0.027}^{+0.027}"
},
{
"math_id": 6,
"text": "\\sigma_8(\\Omega_m/0.3)^{0.5}= 0.777_{-0.038}^{+0.036}"
},
{
"math_id": 7,
"text": "\\omega= -0.95_{-0.36}^{+0.33}"
},
{
"math_id": 8,
"text": "\\sigma_8(\\Omega_m/0.3)^{0.5}= 0.759_{-0.025}^{+0.023}"
},
{
"math_id": 9,
"text": "S_8=\\sigma_8(\\Omega_m/0.3)^{0.5}= 0.776_{-0.017}^{+0.017}"
},
{
"math_id": 10,
"text": "\\Omega_m= 0.339_{-0.031}^{+0.032}"
},
{
"math_id": 11,
"text": "S_8=\\sigma_8(\\Omega_m/0.3)^{0.5}= 0.775_{-0.024}^{+0.026}"
},
{
"math_id": 12,
"text": "\\Omega_m= 0.352_{-0.041}^{+0.035}"
},
{
"math_id": 13,
"text": "\\omega= -0.98_{-0.20}^{+0.32}"
},
{
"math_id": 14,
"text": "i_{AB}"
},
{
"math_id": 15,
"text": "D_m(Z_eff = 0.835)/r_d=18.92\\pm0.51"
},
{
"math_id": 16,
"text": "H_0= 67.1 \\pm1.3\\,\\mathrm{km\\,s^{-1}\\,Mpc^{-1}}"
}
] | https://en.wikipedia.org/wiki?curid=9075104 |
907524 | Von Neumann regular ring | Rings admitting weak inverses
In mathematics, a von Neumann regular ring is a ring "R" (associative, with 1, not necessarily commutative) such that for every element "a" in "R" there exists an "x" in "R" with "a" = "axa". One may think of "x" as a "weak inverse" of the element "a;" in general "x" is not uniquely determined by "a". Von Neumann regular rings are also called absolutely flat rings, because these rings are characterized by the fact that every left "R"-module is flat.
Von Neumann regular rings were introduced by von Neumann (1936) under the name of "regular rings", in the course of his study of von Neumann algebras and continuous geometry. Von Neumann regular rings should not be confused with the unrelated regular rings and regular local rings of commutative algebra.
An element "a" of a ring is called a von Neumann regular element if there exists an "x" such that "a" = "axa". An ideal formula_0 is called a (von Neumann) regular ideal if for every element "a" in formula_0 there exists an element "x" in formula_0 such that "a" = "axa".
Examples.
Every field (and every skew field) is von Neumann regular: for "a" ≠ 0 we can take "x" = "a"−1. An integral domain is von Neumann regular if and only if it is a field. Every direct product of von Neumann regular rings is again von Neumann regular.
Another important class of examples of von Neumann regular rings are the rings M"n"("K") of "n"-by-"n" square matrices with entries from some field "K". If "r" is the rank of "A" ∈ M"n"("K"), Gaussian elimination gives invertible matrices "U" and "V" such that
formula_1
(where "I""r" is the "r"-by-"r" identity matrix). If we set "X" = "V"−1"U"−1, then
formula_2
More generally, the "n" × "n" matrix ring over any von Neumann regular ring is again von Neumann regular.
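Over the real numbers, a weak inverse of any square matrix can also be produced numerically: the Moore–Penrose pseudoinverse X = A+ always satisfies AXA = A. The sketch below is a numerical illustration of this fact, not the Gaussian-elimination construction given above.

```python
import numpy as np

# A deliberately rank-deficient matrix: row 2 is twice row 1.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [0.0, 1.0, 1.0]])

X = np.linalg.pinv(A)              # Moore-Penrose pseudoinverse as a weak inverse
print(np.allclose(A @ X @ A, A))   # True: A X A = A, so A is von Neumann regular
```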
If "V" is a vector space over a field (or skew field) "K", then the endomorphism ring End"K"("V") is von Neumann regular, even if "V" is not finite-dimensional.
Generalizing the above examples, suppose "S" is some ring and "M" is an "S"-module such that every submodule of "M" is a direct summand of "M" (such modules "M" are called "semisimple"). Then the endomorphism ring End"S"("M") is von Neumann regular. In particular, every semisimple ring is von Neumann regular. Indeed, the semisimple rings are precisely the Noetherian von Neumann regular rings.
The ring of affiliated operators of a finite von Neumann algebra is von Neumann regular.
A Boolean ring is a ring in which every element satisfies "a"2 = "a". Every Boolean ring is von Neumann regular: taking "x" = "a" gives "axa" = "a"3 = "a".
Facts.
The following statements are equivalent for the ring "R":
The corresponding statements for right modules are also equivalent to "R" being von Neumann regular.
Every von Neumann regular ring has Jacobson radical {0} and is thus semiprimitive (also called "Jacobson semi-simple").
In a commutative von Neumann regular ring, for each element "x" there is a unique element "y" such that "xyx"="x" and "yxy"="y", so there is a canonical way to choose the "weak inverse" of "x".
The following statements are equivalent for the commutative ring "R":
Also, the following are equivalent: for a commutative ring "A"
Generalizations and specializations.
Special types of von Neumann regular rings include "unit regular rings" and "strongly von Neumann regular rings" and rank rings.
A ring "R" is called unit regular if for every "a" in "R", there is a unit "u" in "R" such that "a" = "aua". Every semisimple ring is unit regular, and unit regular rings are directly finite rings. An ordinary von Neumann regular ring need not be directly finite.
A ring "R" is called strongly von Neumann regular if for every "a" in "R", there is some "x" in "R" with "a" = "aax". The condition is left-right symmetric. Strongly von Neumann regular rings are unit regular. Every strongly von Neumann regular ring is a subdirect product of division rings. In some sense, this more closely mimics the properties of commutative von Neumann regular rings, which are subdirect products of fields. For commutative rings, von Neumann regular and strongly von Neumann regular are equivalent. In general, the following are equivalent for a ring "R":
Generalizations of von Neumann regular rings include π-regular rings, left/right semihereditary rings, left/right nonsingular rings and semiprimitive rings.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathfrak{i}"
},
{
"math_id": 1,
"text": "A = U \\begin{pmatrix}I_r &0\\\\\n0 &0\\end{pmatrix} V"
},
{
"math_id": 2,
"text": "AXA= U \\begin{pmatrix}I_r &0\\\\\n0 &0\\end{pmatrix} \\begin{pmatrix}I_r &0\\\\\n0 &0\\end{pmatrix} V = U \\begin{pmatrix}I_r &0\\\\\n0 &0\\end{pmatrix} V = A."
},
{
"math_id": 3,
"text": "\\mathrm{Spec}(R) \\to \\mathbb{A}^1"
},
{
"math_id": 4,
"text": "\\{0\\} \\sqcup \\mathbb{G}_m \\to \\mathbb{A}^1"
}
] | https://en.wikipedia.org/wiki?curid=907524 |
907542 | Digital Signal 1 | Digital Signal 1 (DS1, sometimes DS-1) is a T-carrier signaling scheme devised by Bell Labs. DS1 is the primary digital telephone standard used in the United States, Canada and Japan and is able to transmit up to 24 multiplexed voice and data calls over telephone lines. E-carrier is used in place of T-carrier outside the United States, Canada, Japan, and South Korea. DS1 is the logical bit pattern used over a physical T1 line; in practice, the terms "DS1" and "T1" are often used interchangeably.
Overview.
T1 refers to the primary digital telephone carrier system used in North America. T1 is one line type of the PCM T-carrier hierarchy. T1 describes the cabling, signal type, and signal regeneration requirements of the carrier system.
The signal transmitted on a T1 line, referred to as the DS1 signal, consists of serial bits transmitted at the rate of 1.544 Mbit/s. The type of line code used is called Alternate Mark Inversion (AMI). Digital Signal Designation is the classification of digital bit rates in the digital multiplex hierarchy used in transport of telephone signals from one location to another. DS-1 is a communications protocol for multiplexing the bitstreams of up to 24 telephone calls, along with two special bits: a framing bit (for frame synchronization) and a maintenance-signaling bit, transmitted over a digital circuit called T1. T1's maximum data transmission rate is 1.544 megabits per second.
Bandwidth.
A DS1 telecommunication circuit multiplexes 24 DS0s. The twenty-four DS0s, each sampled 8,000 times per second (one 8-bit PCM sample from each DS0 per DS1 frame), consume 1.536 Mbit/s of bandwidth. One framing bit adds 8 kbit/s of overhead, for a total of 1.544 Mbit/s, calculated as follows:
formula_0
DS1 is a full-duplex circuit, concurrently transmitting and receiving 1.544 Mbit/s.
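The rate calculation above can be spelled out in a few lines; this is a trivial arithmetic check, not a model of the physical line.

```python
# DS1 line-rate arithmetic: 24 channels of 8-bit samples plus 1 framing bit,
# repeated 8,000 frames per second.
channels, bits_per_sample, frames_per_second = 24, 8, 8000
payload_bps = channels * bits_per_sample * frames_per_second   # 1,536,000 bit/s
framing_bps = 1 * frames_per_second                            # 8,000 bit/s
print(payload_bps, framing_bps, payload_bps + framing_bps)     # total 1,544,000 bit/s
```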
DS1 frame synchronization.
Frame synchronization is necessary to identify the timeslots within each 24-channel frame. Synchronization takes place by allocating a framing, or 193rd, bit. This results in 8 kbit/s of framing data, for each DS1. Because this 8-kbit/s channel is used by the transmitting equipment as overhead, only 1.536 Mbit/s is actually passed on to the user. Two types of framing schemes are superframe (SF) and extended superframe (ESF). A superframe consists of twelve consecutive 193-bit frames, whereas an extended superframe consists of twenty-four consecutive 193-bit frames of data. Due to the unique bit sequences exchanged, the framing schemes are not compatible with each other. These two types of framing (SF, and ESF) use their 8 kbit/s framing channel in different ways.
Connectivity and alarms.
Connectivity refers to the ability of the digital carrier to carry customer data from either end to the other. In some cases, the connectivity may be lost in one direction and maintained in the other. In all cases, the terminal equipment, i.e., the equipment that marks the endpoints of the DS1, defines the connection by the quality of the received framing pattern.
Alarms.
Alarms are normally produced by the receiving terminal equipment when the framing is compromised. There are three defined alarm indication signal states, identified by a legacy color scheme: red, yellow and blue.
Red alarm indicates the alarming equipment is unable to recover the framing reliably. Corruption or loss of the signal will produce "red alarm". Connectivity has been lost toward the alarming equipment. There is no knowledge of connectivity toward the far end.
Yellow alarm, also known as remote alarm indication (RAI), indicates reception of a data or framing pattern that reports the far end is in "red alarm". The alarm is carried differently in SF (D4) and ESF (D5) framing. For SF framed signals, the user bandwidth is manipulated and "bit two in every DS0 channel shall be a zero." The resulting loss of payload data while transmitting a yellow alarm is undesirable, and was resolved in ESF framed signals by using the data link layer. "A repeating 16-bit pattern consisting of eight 'ones' followed by eight 'zeros' shall be transmitted continuously on the ESF data link, but may be interrupted for a period not to exceed 100-ms per interruption." Both types of alarms are transmitted for the duration of the alarm condition, but for at least one second.
Blue alarm, also known as alarm indication signal (AIS) indicates a disruption in the communication path between the terminal equipment and line repeaters or DCS. If no signal is received by the intermediary equipment, it produces an unframed, all-ones signal. The receiving equipment displays a "red alarm" and sends the signal for "yellow alarm" to the far end because it has no framing, but at intermediary interfaces the equipment will report "AIS" or Alarm Indication Signal. AIS is also called "all ones" because of the data and framing pattern.
These alarm states are also lumped under the term Carrier Group Alarm (CGA). The meaning of CGA is that connectivity on the digital carrier has failed. The result of the CGA condition varies depending on the equipment function. Voice equipment typically coerces the robbed bits for signaling to a state that will result in the far end properly handling the condition, while applying an often different state to the customer equipment connected to the alarmed equipment. Simultaneously, the customer data is often coerced to a 0x7F pattern, signifying a zero-voltage condition on voice equipment. Data equipment usually passes whatever data may be present, if any, leaving it to the customer equipment to deal with the condition.
Inband T1 versus T1 PRI.
Additionally, for voice T1s there are two main types: so-called "plain" or Inband T1s and PRI (Primary Rate Interface). While both carry voice telephone calls in similar fashion, PRIs are commonly used in call centers and provide not only the 23 actual usable telephone lines (known as "B" channels for "bearer") but also a 24th line (known as the "D" channel for "data") that carries line signaling information. This special "D" channel carries: Caller ID (CID) and automatic number identification (ANI) data, required channel type (usually a B, or bearer, channel), call handle, Dialed Number Identification Service (DNIS) info, requested channel number and a request for response.
Inband T1s are also capable of carrying CID and ANI information if the carrier configures them to send DTMF *ANI*DNIS*; however, PRIs handle this more efficiently. While an inband T1 seemingly has a slight advantage because 24 lines are available to make calls (as opposed to a PRI's 23), each channel in an inband T1 must perform its own setup and tear-down of each call. A PRI uses the 24th channel as a data channel to perform all the overhead operations of the other 23 channels (including CID and ANI). Although an inband T1 has 24 channels, the 23-channel PRI can set up more calls faster due to its dedicated 24th signaling channel (the D channel).
Before T1 PRI existed, there was T1 CAS. T1 CAS is not common today, but it still exists. CAS stands for Channel Associated Signaling and is also referred to as robbed-bit signaling; it is a technology with roots in the 1960s and earlier.
Origin of name.
The name T1 came from the carrier letter assigned by AT&T to the technology in 1957. When digital systems were first proposed and developed, AT&T decided to skip Q, R, and S, and to use T, for "time division". The naming system ended with the letter T, which designated fiber networks. The intended successors of the T1 system of networks, called "T1C", "T2", "T3", and "T4", were not commercial successes and disappeared quickly. Signals that would have been carried on these systems, called "DS1", "DS2", "DS3", and "DS4", are now carried on T1 infrastructure.
"DS-1" means "Digital Service – Level 1" and has to do with the signal carried—as opposed to the network that delivers it (originally 24 digitized voice channels over a T1). Since the practice of naming networks ended with the letter "T", the terms "T1" and "DS1" have become synonymous and encompass a variety of services including voice, data, and "clear-channel pipes". The line speed is always 1.544 Mbit/s, but the payload can vary greatly.
Alternative technologies.
Dark fiber: "Dark fiber" refers to unused fibers available for use. Dark fiber has been, and still is, available for sale on the wholesale market for both metro and wide area links, but it may not be available in all markets or city pairs.
Dark fiber capacity is typically used by network operators to build SONET and dense wavelength-division multiplexing (DWDM) networks, usually involving meshes of self-healing rings. Now, it is also used by end-user enterprises to expand Ethernet local area networks, especially since the adoption of IEEE standards for gigabit Ethernet and 10 Gigabit Ethernet over single-mode fiber. Running Ethernet networks between geographically separated buildings is a practice known as "WAN elimination".
DS1C is a digital signal equivalent to two "Digital Signal 1" circuits, with extra bits to conform to a signaling standard of 3.152 Mbit/s. Few (if any) of these circuit capacities are still in use today. In the early days of digital and data transmission, the three-megabit-per-second data rate was used to link mainframe computers together. The physical side of this circuit is called T1C.
Semiconductor.
The T1/E1 protocol is implemented as a "line interface unit" in silicon. The semiconductor chip contains a decoder/encoder, loop backs, jitter attenuators, receivers, and drivers. Additionally, there are usually multiple interfaces and they are labeled as dual, quad, octal, etc., depending upon the number.
The transceiver chip's primary purpose is to retrieve information from the "line", i.e., the conductive line that traverses distance, by receiving the pulses and converting the signal, which has been subjected to noise, jitter, and other interference, into a clean digital pulse on the other interface of the chip.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\begin{align}\n & \\left( 24\\,\\frac{\\mathrm{channels}}{\\mathrm{frame}} \\times 8\\,\\frac{\\mathrm{bits}}{\\mathrm{channel}} \\times 8,000\\,\\frac{\\mathrm{frames}}{\\mathrm{second}} \\right) + \\left( 1\\,\\frac{\\mathrm{framing\\ bit}}{\\mathrm{frame}}\n\\times 8,000\\,\\frac{\\mathrm{frames}}{\\mathrm{second}} \\right) \\\\\n = {} & 1,536,000\\,\\frac{\\mathrm{bits}}{\\mathrm{second}} + 8,000 \\frac{\\mathrm{bits}}{\\mathrm{second}} \\\\\n = {} & 1,544,000\\,\\frac{\\mathrm{bits}}{\\mathrm{second}} \\\\\n \\equiv {} & 1.544\\,\\frac{\\mathrm{Mbit}}{\\mathrm{second}}\n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=907542 |
9075610 | Bernstein inequalities (probability theory) | Inequalities in probability theory
In probability theory, Bernstein inequalities give bounds on the probability that the sum of random variables deviates from its mean. In the simplest case, let "X"1, ..., "X""n" be independent Bernoulli random variables taking values +1 and −1 with probability 1/2 each (this distribution is also known as the Rademacher distribution). Then for every positive formula_0,
formula_1
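As a numerical illustration (not part of the original statement), the inequality for Rademacher variables can be checked by Monte Carlo simulation; the sample size, ε and the number of trials below are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(0)
    n, eps, trials = 200, 0.2, 20_000

    # X_i take the values +1 and -1 with probability 1/2 each (Rademacher distribution).
    samples = rng.choice([-1.0, 1.0], size=(trials, n))
    means = samples.mean(axis=1)

    empirical = np.mean(np.abs(means) > eps)                        # estimate of P(|mean| > eps)
    bound = 2.0 * np.exp(-n * eps**2 / (2.0 * (1.0 + eps / 3.0)))   # right-hand side of the inequality

    print(empirical, bound)   # the empirical probability stays below the Bernstein bound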
Bernstein inequalities were proven and published by Sergei Bernstein in the 1920s and 1930s. Later, these inequalities were rediscovered several times in various forms. Thus, special cases of the Bernstein inequalities are also known as the Chernoff bound, Hoeffding's inequality and Azuma's inequality.
The martingale case of the Bernstein inequality is known as Freedman's inequality and its refinement is known as Hoeffding's inequality.
Some of the inequalities.
1. Let formula_2 be independent zero-mean random variables. Suppose that formula_3 almost surely, for all formula_4 Then, for all positive formula_5,
formula_6
2. Let formula_2 be independent zero-mean random variables. Suppose that for some positive real formula_7 and every integer formula_8,
formula_9
Then
formula_10
3. Let formula_2 be independent zero-mean random variables. Suppose that
formula_11
for all integer formula_12 Denote
formula_13
Then,
formula_14
4. Bernstein also proved generalizations of the inequalities above to weakly dependent random variables. For example, inequality (2) can be extended as follows. Let formula_2 be possibly non-independent random variables. Suppose that for all integers formula_15,
formula_16
Then
formula_17
More general results for martingales can be found in Fan et al. (2015).
Proofs.
The proofs are based on an application of Markov's inequality to the random variable
formula_18
for a suitable choice of the parameter formula_19.
Generalizations.
The Bernstein inequality can be generalized to Gaussian random matrices. Let formula_20 be a scalar, where formula_21 is a complex Hermitian matrix and formula_22 is a complex vector of size formula_23. The vector formula_24 is a Gaussian vector of size formula_23. Then for any formula_25, we have
formula_26
where formula_27 is the vectorization operation and formula_28, where formula_29 is the largest eigenvalue of formula_21. The proof is detailed in the references. Another similar inequality is formulated as
formula_30
where formula_31.
References.
<templatestyles src="Reflist/styles.css" />
A modern translation of some of these results can also be found in | [
{
"math_id": 0,
"text": "\\varepsilon"
},
{
"math_id": 1,
"text": "\\mathbb{P}\\left (\\left|\\frac{1}{n}\\sum_{i=1}^n X_i\\right| > \\varepsilon \\right ) \\leq 2\\exp \\left (-\\frac{n\\varepsilon^2}{2(1+\\frac{\\varepsilon}{3})} \\right)."
},
{
"math_id": 2,
"text": "X_1, \\ldots, X_n"
},
{
"math_id": 3,
"text": "|X_i|\\leq M"
},
{
"math_id": 4,
"text": "i."
},
{
"math_id": 5,
"text": "t"
},
{
"math_id": 6,
"text": "\\mathbb{P} \\left (\\sum_{i=1}^n X_i \\geq t \\right ) \\leq \\exp \\left ( -\\frac{\\tfrac{1}{2} t^2}{\\sum_{i = 1}^n \\mathbb{E} \\left[X_i^2 \\right ]+\\tfrac{1}{3} Mt} \\right )."
},
{
"math_id": 7,
"text": "L"
},
{
"math_id": 8,
"text": "k \\geq 2"
},
{
"math_id": 9,
"text": " \\mathbb{E} \\left[ \\left |X_i^k \\right |\\right ] \\leq \\frac{1}{2} \\mathbb{E} \\left[X_i^2\\right] L^{k-2} k!"
},
{
"math_id": 10,
"text": "\\mathbb{P} \\left (\\sum_{i=1}^n X_i \\geq 2t \\sqrt{\\sum \\mathbb{E} \\left [X_i^2 \\right ]} \\right ) < \\exp(-t^2), \\qquad \\text{for}\\quad 0 \\leq t \\leq \\frac{1}{2L}\\sqrt{\\sum \\mathbb{E} \\left[X_j^2\\right ]}. "
},
{
"math_id": 11,
"text": " \\mathbb{E} \\left[ \\left |X_i^k \\right |\\right ] \\leq \\frac{k!}{4!} \\left(\\frac{L}{5}\\right)^{k-4}"
},
{
"math_id": 12,
"text": "k \\geq 4."
},
{
"math_id": 13,
"text": " A_k = \\sum \\mathbb{E} \\left [ X_i^k\\right ]."
},
{
"math_id": 14,
"text": " \\mathbb{P} \\left( \\left| \\sum_{j=1}^n X_j - \\frac{A_3 t^2}{3A_2} \\right|\\geq \\sqrt{2A_2} \\, t \\left[ 1 + \\frac{A_4 t^2}{6 A_2^2} \\right] \\right) < 2 \\exp (- t^2), \\qquad \\text{for} \\quad 0 < t \\leq \\frac{5 \\sqrt{2A_2}}{4L}. "
},
{
"math_id": 15,
"text": "i>0"
},
{
"math_id": 16,
"text": "\\begin{align}\n\\mathbb{E} \\left. \\left [ X_i \\right | X_1, \\ldots, X_{i-1} \\right ] &= 0, \\\\\n\\mathbb{E} \\left. \\left [ X_i^2 \\right | X_1, \\ldots, X_{i-1} \\right ] &\\leq R_i \\mathbb{E} \\left [ X_i^2 \\right ], \\\\\n\\mathbb{E} \\left. \\left [ X_i^k \\right | X_1, \\ldots, X_{i-1} \\right ] &\\leq \\tfrac{1}{2} \\mathbb{E} \\left. \\left[ X_i^2 \\right | X_1, \\ldots, X_{i-1} \\right ] L^{k-2} k!\n\\end{align}"
},
{
"math_id": 17,
"text": "\\mathbb{P} \\left( \\sum_{i=1}^n X_i \\geq 2t \\sqrt{\\sum_{i=1}^n R_i \\mathbb{E}\\left [ X_i^2 \\right ]} \\right) < \\exp(-t^2), \\qquad \\text{for}\\quad 0 < t \\leq \\frac{1}{2L} \\sqrt{\\sum_{i=1}^n R_i \\mathbb{E} \\left [X_i^2 \\right ]}. "
},
{
"math_id": 18,
"text": " \\exp \\left ( \\lambda \\sum_{j=1}^n X_j \\right ),"
},
{
"math_id": 19,
"text": "\\lambda > 0"
},
{
"math_id": 20,
"text": "G = g^H A g + 2 \\operatorname{Re}(g^H a) "
},
{
"math_id": 21,
"text": "A"
},
{
"math_id": 22,
"text": "a"
},
{
"math_id": 23,
"text": "N"
},
{
"math_id": 24,
"text": "g \\sim \\mathcal{CN}(0,I)"
},
{
"math_id": 25,
"text": "\\sigma \\geq 0"
},
{
"math_id": 26,
"text": "\\mathbb{P} \\left( G \\leq \\operatorname{tr}(A) - \\sqrt{2\\sigma}\\sqrt{\\Vert \\operatorname{vec}(A) \\Vert^2 + 2 \\Vert a \\Vert^2 } - \\sigma s^-(A) \\right) < \\exp(-\\sigma), "
},
{
"math_id": 27,
"text": "\\operatorname{vec}"
},
{
"math_id": 28,
"text": "s^- (A) = \\max(-\\lambda_{\\max}(A),0)"
},
{
"math_id": 29,
"text": "\\lambda_{\\max}(A)"
},
{
"math_id": 30,
"text": "\\mathbb{P} \\left( G \\geq \\operatorname{tr}(A) + \\sqrt{2\\sigma}\\sqrt{\\Vert \\operatorname{vec}(A) \\Vert^2 + 2 \\Vert a \\Vert^2 } + \\sigma s^+(A) \\right) < \\exp(-\\sigma), "
},
{
"math_id": 31,
"text": "s^+(A) = \\max(\\lambda_{\\max}(A),0)"
}
] | https://en.wikipedia.org/wiki?curid=9075610 |
9078218 | Three cups problem | The three cups problem, also known as the three cup challenge and other variants, is a mathematical puzzle that, in its most common form, cannot be solved.
In the beginning position of the problem, one cup is upside-down and the other two are right-side up. The objective is to "turn all cups right-side up" in no more than six moves, turning over exactly two cups at each move.
The solvable (but trivial) version of this puzzle begins with one cup right-side up and two cups upside-down. To solve the puzzle in a single move, turn up the two cups that are upside down — after which all three cups are facing up. As a magic trick, a magician can perform the solvable version in a convoluted way, and then ask an audience member to solve the unsolvable version.
Proof of impossibility.
To see that the problem is insolvable (when starting with just one cup upside down), it suffices to concentrate on the number of cups the wrong way up. Denoting this number by formula_0, the goal of the problem is to change formula_0 from 1 to 0, i.e. by formula_1. The problem is insolvable because any move changes formula_0 by an even number. Since a move inverts two cups and each inversion changes formula_0 by formula_2 (if the cup was the right way up) or formula_1 (otherwise), a move changes formula_0 by the sum of two odd numbers, which is even, completing the proof.
Another way of looking at it is that, at the start, 2 cups are in the "right" orientation and 1 is "wrong". Changing one right cup and one wrong cup leaves the situation the same. Changing 2 right cups results in a situation with 3 wrong cups, after which the next move restores the original status of 1 wrong cup. Thus, any number of moves results in a situation either with 3 wrongs or with 1 wrong, and never with 0 wrongs.
More generally, this argument shows that for any number of cups, it is impossible to reduce formula_0 to 0 if it is initially odd. On the other hand, if formula_0 is even, inverting cups two at a time will eventually result in formula_0 equaling 0.
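The parity argument can also be confirmed by exhaustive search. The sketch below (illustrative only) enumerates every configuration reachable by repeatedly flipping exactly two cups and shows that the all-up state is reachable only when the number of upside-down cups is even at the start; this is stronger than the six-move limit, since it rules out a solution in any number of moves.

    from itertools import combinations

    def reachable_states(start):
        """All configurations reachable by repeatedly flipping exactly two cups."""
        seen, frontier = {start}, [start]
        while frontier:
            state = frontier.pop()
            for i, j in combinations(range(len(state)), 2):
                flipped = list(state)
                flipped[i], flipped[j] = not flipped[i], not flipped[j]
                nxt = tuple(flipped)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return seen

    all_up = (True, True, True)
    one_down = (False, True, True)    # the unsolvable starting position
    two_down = (False, False, True)   # the trivially solvable starting position

    print(all_up in reachable_states(one_down))   # False: never solvable
    print(all_up in reachable_states(two_down))   # True: solvable (in one move)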
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "W"
},
{
"math_id": 1,
"text": "-1"
},
{
"math_id": 2,
"text": "+1"
}
] | https://en.wikipedia.org/wiki?curid=9078218 |
9078833 | Chain sequence | In the analytic theory of continued fractions, a chain sequence is an infinite sequence {"a""n"} of non-negative real numbers chained together with another sequence {"g""n"} of non-negative real numbers by the equations
formula_0
where either (a) 0 ≤ "g""n" < 1, or (b) 0 < "g""n" ≤ 1. Chain sequences arise in the study of the convergence problem – both in connection with the parabola theorem, and also as part of the theory of positive definite continued fractions.
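As a small illustration of the definition (not taken from the original text), the sketch below builds the terms of a chain sequence from a given sequence of generators using exact rational arithmetic; the particular generator values echo the example discussed below, extended by one extra term for illustration.

    from fractions import Fraction

    def chain_sequence(generators):
        """a_n = (1 - g_{n-1}) * g_n for n = 1, 2, ..., given g_0, g_1, g_2, ..."""
        return [(1 - g_prev) * g_cur
                for g_prev, g_cur in zip(generators, generators[1:])]

    # Constant generators g_n = 1/2 produce the constant chain sequence 1/4, 1/4, ...
    print(chain_sequence([Fraction(1, 2)] * 6))

    # The generators 0, 1/4, 1/3, 3/8 from the example below (plus 2/5 as one further term)
    # produce exactly the same values.
    gens = [Fraction(0), Fraction(1, 4), Fraction(1, 3), Fraction(3, 8), Fraction(2, 5)]
    print(chain_sequence(gens))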
The infinite continued fraction of Worpitzky's theorem contains a chain sequence. A closely related theorem shows that
formula_1
converges uniformly on the closed unit disk |"z"| ≤ 1 if the coefficients {"a""n"} are a chain sequence.
An example.
The sequence {1/4, 1/4, 1/4, ...} appears as a limiting case in the statement of Worpitzky's theorem. Since this sequence is generated by setting "g"0 = "g"1 = "g"2 = ... = 1/2, it is clearly a chain sequence. This sequence has two important properties; one of them is that its generating sequence is not unique. For example, setting
formula_2
generates the same unending sequence {1/4, 1/4, 1/4, ...}. | [
{
"math_id": 0,
"text": "\na_1 = (1-g_0)g_1 \\quad a_2 = (1-g_1)g_2 \\quad a_n = (1-g_{n-1})g_n\n"
},
{
"math_id": 1,
"text": "\nf(z) = \\cfrac{a_1z}{1 + \\cfrac{a_2z}{1 + \\cfrac{a_3z}{1 + \\cfrac{a_4z}{\\ddots}}}} \\,\n"
},
{
"math_id": 2,
"text": "\ng_0 = 0 \\quad g_1 = {\\textstyle\\frac{1}{4}} \\quad g_2 = {\\textstyle\\frac{1}{3}} \\quad \ng_3 = {\\textstyle\\frac{3}{8}} \\;\\dots\n"
}
] | https://en.wikipedia.org/wiki?curid=9078833 |
907899 | X-ray tube | Vacuum tube that converts electrical input power into X-rays
An X-ray tube is a vacuum tube that converts electrical input power into X-rays. The availability of this controllable source of X-rays created the field of radiography, the imaging of partly opaque objects with penetrating radiation. In contrast to other sources of ionizing radiation, X-rays are only produced as long as the X-ray tube is energized. X-ray tubes are also used in CT scanners, airport luggage scanners, X-ray crystallography, material and structure analysis, and for industrial inspection.
Increasing demand for high-performance Computed tomography (CT) scanning and angiography systems has driven development of very high performance medical X-ray tubes.
History.
X-ray tubes evolved from experimental Crookes tubes with which X-rays were first discovered on November 8, 1895, by the German physicist Wilhelm Conrad Röntgen. The first-generation "cold cathode" or "Crookes" X-ray tubes were used until the 1920s. These tubes work by ionisation of residual gas within the tube. The positive ions bombard the cathode of the tube to release electrons, which are accelerated toward the anode and produce X-rays when they strike it. The Crookes tube was improved by William Coolidge in 1913. The "Coolidge" tube, also called a "hot cathode" tube, uses thermionic emission, where a tungsten cathode is heated to a sufficiently high temperature to emit electrons, which are then accelerated toward the anode in a near perfect vacuum.
Until the late 1980s, X-ray generators were merely high-voltage, AC to DC variable power supplies. In the late 1980s a different method of control was emerging, called high speed switching. This followed the electronics technology of switching power supplies (aka switch mode power supply), and allowed for more accurate control of the X-ray unit, higher quality results and reduced X-ray exposures.
Physics.
As with any vacuum tube, there is a cathode, which emits electrons into the vacuum and an anode to collect the electrons, thus establishing a flow of electrical current, known as the beam, through the tube. A high voltage power source, for example 30 to 150 kilovolts (kV), called the "tube voltage", is connected across cathode and anode to accelerate the electrons. The X-ray spectrum depends on the anode material and the accelerating voltage.
Electrons from the cathode collide with the anode material, usually tungsten, molybdenum or copper, and accelerate other electrons, ions and nuclei within the anode material. About 1% of the energy generated is emitted/radiated, usually perpendicular to the path of the electron beam, as X-rays. The rest of the energy is released as heat. Over time, tungsten will be deposited from the target onto the interior surface of the tube, including the glass surface. This will slowly darken the tube and was thought to degrade the quality of the X-ray beam. Vaporized tungsten condenses on the inside of the envelope over the "window" and thus acts as an additional filter and decreases the tube's ability to radiate heat. Eventually, the tungsten deposit may become sufficiently conductive that at high enough voltages, arcing occurs. The arc will jump from the cathode to the tungsten deposit, and then to the anode. This arcing causes an effect called "crazing" on the interior glass of the X-ray window. With time, the tube becomes unstable even at lower voltages, and must be replaced. At this point, the tube assembly (also called the "tube head") is removed from the X-ray system, and replaced with a new tube assembly. The old tube assembly is shipped to a company that reloads it with a new X-ray tube.
The two X-ray photon-generating effects are generally called the 'Characteristic effect' and the bremsstrahlung effect, a compound of the German "bremsen" meaning to brake, and "Strahlung" meaning radiation.
The range of photonic energies emitted by the system can be adjusted by changing the applied voltage, and installing aluminum filters of varying thicknesses. Aluminum filters are installed in the path of the X-ray beam to remove "soft" (non-penetrating) radiation. The number of emitted X-ray photons, or dose, are adjusted by controlling the current flow and exposure time.
Heat released.
Heat is produced in the focal spot of the anode. Since a small fraction (less than or equal to 1%) of electron energy is converted to X-rays, it can be ignored in heat calculations.
The quantity of heat produced (in joules) in the focal spot is given by:
formula_0
where formula_1 is the waveform factor,
formula_2 = peak AC voltage (in kilovolts),
formula_3 = tube current (in milliamperes),
formula_4 = exposure time (in seconds).
The heat unit (HU) was used in the past as an alternative to the joule. It is a convenient unit when a single-phase power source is connected to the X-ray tube. With full-wave rectification of a sine wave, formula_1 = formula_5, thus the heat unit:
1 HU = 0.707 J
1.4 HU = 1 J
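As a worked example of the formula above (the exposure settings are illustrative values, not taken from the text), the sketch below computes the heat released during a full-wave-rectified single-phase exposure and converts it to heat units.

    import math

    def tube_heat_joules(waveform_factor, kvp, ma, seconds):
        """E_heat = w * V_p * I * t; with V_p in kV and I in mA the product is in joules."""
        return waveform_factor * kvp * ma * seconds

    w_full_wave = 1 / math.sqrt(2)   # ~0.707 for a full-wave rectified sine wave
    E = tube_heat_joules(w_full_wave, kvp=80.0, ma=200.0, seconds=0.1)

    print(f"heat released: {E:.0f} J")
    print(f"heat units   : {E / 0.707:.0f} HU")   # using 1 HU = 0.707 J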
Types.
Crookes tube (cold cathode tube).
Crookes tubes generated the electrons needed to create X-rays by ionization of the residual air in the tube, instead of a heated filament, so they were partially but not completely evacuated. They consisted of a glass bulb with around 10⁻⁶ to 5×10⁻⁸ atmospheric pressure of air (0.1 to 0.005 Pa). They had an aluminum cathode plate at one end of the tube, and a platinum anode target at the other end. The anode surface was angled so that the X-rays would radiate through the side of the tube. The cathode was concave so that the electrons were focused on a small (~1 mm) spot on the anode, approximating a point source of X-rays, which resulted in sharper images. The tube had a third electrode, an anticathode connected to the anode. It improved the X-ray output, but the method by which it achieved this is not understood. A more common arrangement used a copper plate anticathode (similar in construction to the cathode) in line with the anode such that the anode was between the cathode and the anticathode.
To operate, a DC voltage of a few kilovolts to as much as 100 kV was applied between the anodes and the cathode, usually generated by an induction coil, or for larger tubes, an electrostatic machine.
Crookes tubes were unreliable. As time passed, the residual air would be absorbed by the walls of the tube, reducing the pressure. This increased the voltage across the tube, generating 'harder' X-rays, until eventually the tube stopped working. To prevent this, 'softener' devices were used (see picture). A small tube attached to the side of the main tube contained a mica sleeve or chemical that released a small amount of gas when heated, restoring the correct pressure.
The glass envelope of the tube would blacken with usage due to the X-rays affecting its structure.
Coolidge tube (hot cathode tube).
In the Coolidge tube, the electrons are produced by thermionic effect from a tungsten filament heated by an electric current. The filament is the cathode of the tube. The high voltage potential is between the cathode and the anode, the electrons are thus accelerated, then hit the anode.
There are two designs: end-window tubes and side-window tubes. End-window tubes usually have a "transmission target", which is thin enough to allow X-rays to pass through it (X-rays are emitted in the same direction as the electrons are moving). In one common type of end-window tube, the filament is around the anode ("annular" or ring-shaped), and the electrons follow a curved path (half of a toroid).
Side-window tubes use an electrostatic lens to focus the beam onto a very small spot on the anode. The anode is specially designed to dissipate the heat and wear resulting from this intense focused barrage of electrons. The anode is precisely angled at 1-20 degrees off perpendicular to the electron current to allow the escape of some of the X-ray photons which are emitted perpendicular to the direction of the electron current. The anode is usually made of tungsten or molybdenum. The tube has a window designed for escape of the generated X-ray photons.
The power of a Coolidge tube usually ranges from 0.1 to 18 kW.
Rotating anode tube.
A considerable amount of heat is generated in the focal spot (the area where the beam of electrons coming from the cathode strikes) of a stationary anode. A rotating anode, by contrast, lets the electron beam sweep a larger area of the anode, giving the advantage of a higher intensity of emitted radiation along with reduced damage to the anode compared to a stationary one.
The focal spot temperature can reach during an exposure, and the anode assembly can reach following a series of large exposures. Typical anodes are a tungsten-rhenium target on a molybdenum core, backed with graphite. The rhenium makes the tungsten more ductile and resistant to wear from the impact of the electron beams. The molybdenum conducts heat from the target. The graphite provides thermal storage for the anode, and minimizes the rotating mass of the anode.
Microfocus X-ray tube.
Some X-ray examinations (such as, e.g., non-destructive testing and 3-D microtomography) need very high-resolution images and therefore require X-ray tubes that can generate very small focal spot sizes, typically below 50 μm in diameter. These tubes are called microfocus X-ray tubes.
There are two basic types of microfocus X-ray tubes: solid-anode tubes and metal-jet-anode tubes.
Solid-anode microfocus X-ray tubes are in principle very similar to the Coolidge tube, but with the important distinction that care has been taken to be able to focus the electron beam into a very small spot on the anode. Many microfocus X-ray sources operate with focus spots in the range 5-20 μm, but in the extreme cases spots smaller than 1 μm may be produced.
The major drawback of solid-anode microfocus X-ray tubes is their very low operating power. To avoid melting the anode, the electron-beam power density must be below a maximum value. This value is somewhere in the range 0.4-0.8 W/μm depending on the anode material. This means that a solid-anode microfocus source with a 10 μm electron-beam focus can operate at a power in the range 4-8 W.
In metal-jet-anode microfocus X-ray tubes the solid metal anode is replaced with a jet of liquid metal, which acts as the electron-beam target. The advantage of the metal-jet anode is that the maximum electron-beam power density is significantly increased. Values in the range 3-6 W/μm have been reported for different anode materials (gallium and tin). In the case with a 10 μm electron-beam focus a metal-jet-anode microfocus X-ray source may operate at 30-60 W.
The major benefit of the increased power density level for the metal-jet X-ray tube is the possibility to operate with a smaller focal spot, say 5 μm, to increase image resolution and at the same time acquire the image faster, since the power is higher (15-30 W) than for solid-anode tubes with 10 μm focal spots.
Hazards of X-ray production from vacuum tubes.
Any vacuum tube operating at several thousand volts or more can produce X-rays as an unwanted byproduct, raising safety issues. The higher the voltage, the more penetrating the resulting radiation and the more the hazard. CRT displays, once common in color televisions and computer displays, operate at 3-40 kilovolts depending on size, making them the main concern among household appliances. Historically, concern has focused less on the cathode ray tube, since its thick glass envelope is impregnated with several pounds of lead for shielding, than on high voltage (HV) rectifier and voltage regulator tubes inside earlier TVs. In the late 1960s it was found that a failure in the HV supply circuit of some General Electric TVs could leave excessive voltages on the regulator tube, causing it to emit X-rays. The models were recalled and the ensuing scandal caused the US agency responsible for regulating this hazard, the Center for Devices and Radiological Health of the Food and Drug Administration (FDA), to require that all TVs include circuits to prevent excessive voltages in the event of failure. The hazard associated with excessive voltages was eliminated with the advent of all-solid-state TVs, which have no tubes other than the CRT. Since 1969, the FDA has limited TV X-ray emission to 0.5 mR (milliroentgen) per hour. As other screen technologies advanced, starting in the 1990s, the production of CRTs was slowly phased out. These other technologies, such as LED, LCD and OLED, are incapable of producing x-rays due to the lack of a high voltage transformer.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E_\\mathrm{heat} = w \\mathrm{V_p} \\mathrm{I} \\mathrm{t}"
},
{
"math_id": 1,
"text": "w"
},
{
"math_id": 2,
"text": "\\mathrm{V_p}"
},
{
"math_id": 3,
"text": "\\mathrm{I}"
},
{
"math_id": 4,
"text": "\\mathrm{t}"
},
{
"math_id": 5,
"text": "\\frac{1}{\\sqrt{2}} \\approx 0.707"
}
] | https://en.wikipedia.org/wiki?curid=907899 |
9080192 | Shocks and discontinuities (magnetohydrodynamics) | In magnetohydrodynamics (MHD), shocks and discontinuities are transition layers where properties of a plasma change from one equilibrium state to another. The relation between the plasma properties on both sides of a shock or a discontinuity can be obtained from the conservative form of the MHD equations, assuming conservation of mass, momentum, energy and of formula_0.
Rankine–Hugoniot jump conditions for MHD.
The jump conditions across a time-independent MHD shock or discontinuity are referred as the Rankine–Hugoniot equations for MHD. In the frame moving with the shock/discontinuity, those jump conditions can be written:
formula_1
where formula_2, v, p, B are the plasma density, velocity, (thermal) pressure and magnetic field respectively. The subscripts formula_3 and formula_4 refer to the tangential and normal components of a vector (with respect to the shock/discontinuity front). The subscripts 1 and 2 refer to the two states of the plasma on each side of the shock/discontinuity.
Contact and tangential discontinuities.
Contact and tangential discontinuities are transition layers across which there is no particle transport. Thus, in the frame moving with the discontinuity, formula_5.
Contact discontinuities are discontinuities for which the thermal pressure, the magnetic field and the velocity are continuous. Only the mass density and temperature change.
Tangential discontinuities are discontinuities for which the total pressure (sum of the thermal and magnetic pressures) is conserved. The normal component of the magnetic field is identically zero. The density, thermal pressure and tangential component of the magnetic field vector can be discontinuous across the layer.
Shocks.
Shocks are transition layers across which there is a transport of particles. There are three types of shocks in MHD: slow-mode, intermediate and fast-mode shocks.
Intermediate shocks are non-compressive (meaning that the plasma density does not change across the shock).
A special case of the intermediate shock is referred to as a rotational discontinuity. They are isentropic. All thermodynamic quantities are continuous across the shock, but the tangential component of the magnetic field can "rotate".
In general, however, intermediate shocks, unlike rotational discontinuities, can have a discontinuity in the pressure.
Slow-mode and fast-mode shocks are compressive and are associated with an increase in entropy. Across slow-mode shock, the tangential component of the magnetic field decreases. Across fast-mode shock it increases.
The type of shocks depend on the relative magnitude of the upstream velocity in the frame moving with the shock with respect to some characteristic speed. Those characteristic speeds, the slow and fast magnetosonic speeds, are related to the Alfvén speed, formula_6 and the sonic speed, formula_7 as follows:
formula_8
where formula_9 is the Alfvén speed and formula_10 is the angle between the incoming magnetic field and the shock normal vector.
The normal component of the slow shock propagates with velocity formula_11 in the frame moving with the upstream plasma, that of the intermediate shock with velocity formula_12 and that of the fast shock with velocity formula_13. The fast mode waves have higher phase velocities than the slow mode waves because the density and magnetic field are in phase, whereas the slow mode wave components are out of phase.
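The characteristic speeds can be evaluated directly from the expressions above; the plasma parameters in the sketch below are arbitrary illustrative values.

    import numpy as np

    def magnetosonic_speeds(c_s, v_a, theta_bn):
        """Slow and fast magnetosonic speeds for an angle theta_bn (in radians)
        between the upstream magnetic field and the shock normal."""
        s = c_s**2 + v_a**2
        d = np.sqrt(s**2 - 4.0 * c_s**2 * v_a**2 * np.cos(theta_bn)**2)
        return np.sqrt(0.5 * (s - d)), np.sqrt(0.5 * (s + d))

    # Illustrative values, e.g. in km/s: c_s = 50, V_A = 100, theta_Bn = 45 degrees.
    a_slow, a_fast = magnetosonic_speeds(50.0, 100.0, np.radians(45.0))
    print(a_slow, a_fast)   # ordering: a_slow <= V_A cos(theta_Bn) <= a_fast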
References.
Citations.
<templatestyles src="Reflist/styles.css" />
General references.
The original research on MHD shock waves can be found in the following papers.
<templatestyles src="Refbegin/styles.css" />
Textbooks.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\nabla \\cdot \\mathbf{B} "
},
{
"math_id": 1,
"text": "\\begin{cases}\n\\rho_1 v_{1 \\perp} = \\rho_2 v_{2 \\perp}, \\\\[1.2ex]\n\nB_{1 \\perp} = B_{2 \\perp}, \\\\[1.2ex]\n\n\\rho_1 v_{1 \\perp}^2 + p_1 + \\frac{1}{2 \\mu_0} B_{1 \\parallel}^2 = \\rho_2 v_{2 \\perp}^2+ p_2 + \\frac{1}{2 \\mu_0} B_{2 \\parallel}^2, \\\\[1.2ex]\n\n\\rho_1 v_{1 \\perp} \\mathbf{v}_{1 \\parallel} - \\frac{1}{\\mu_0} \\mathbf{B}_{1 \\parallel} B_{1 \\perp} =\n\\rho_2 v_{2 \\perp} \\mathbf{v}_{2 \\parallel} - \\frac{1}{\\mu_0} \\mathbf{B}_{2 \\parallel} B_{2 \\perp}, \\\\[1.2ex]\n\n\\displaystyle \\left(\\frac{\\gamma}{\\gamma-1}\\frac{p_1}{\\rho_1} + \\frac{v_1^2}{2}\\right) \\rho_1 v_{1 \\perp} + \\frac{1}{\\mu_0} \\left[ {v_{1 \\perp} B_{1 \\parallel}^2} - {B_{1 \\perp}(\\mathbf{B}_{1 \\parallel} \\cdot \\mathbf{v}_{1 \\parallel})} \\right] =\n\\left(\\frac{\\gamma}{\\gamma-1}\\frac{p_2}{\\rho_2} + \\frac{v_2^2}{2}\\right) \\rho_2 v_{2 \\perp} + \\frac{1}{\\mu_0} \\left[ {v_{2 \\perp} B_{2 \\parallel}^2} - {B_{2 \\perp}(\\mathbf{B}_{2 \\parallel} \\cdot \\mathbf{v}_{2 \\parallel})} \\right], \\\\[1.2ex]\n\n(\\mathbf{v} \\times \\mathbf{B})_{1 \\parallel} = (\\mathbf{v} \\times \\mathbf{B})_{2 \\parallel},\n\\end{cases}"
},
{
"math_id": 2,
"text": " \\rho"
},
{
"math_id": 3,
"text": "\\parallel"
},
{
"math_id": 4,
"text": "\\perp"
},
{
"math_id": 5,
"text": " v_{1 \\perp} = v_{2 \\perp} = 0"
},
{
"math_id": 6,
"text": " V_A "
},
{
"math_id": 7,
"text": " c_s "
},
{
"math_id": 8,
"text": "\\begin{align}\na_{\\text{slow}}^2 &= \\frac{1}{2} \\left[\\left(c_s^2 + V_A^2\\right)-\\sqrt{\\left(c_s^2+V_A^2\\right)^2-4c_s^2V_{A}^2 \\cos^{2} \\theta_{Bn}}\\,\\right], \\\\[1ex]\na_{\\text{fast}}^2 &= \\frac{1}{2} \\left[\\left(c_s^2 + V_A^2\\right)+\\sqrt{\\left(c_s^2+V_A^2\\right)^2-4c_s^2V_{A}^2 \\cos^{2} \\theta_{Bn}}\\,\\right],\n\\end{align}"
},
{
"math_id": 9,
"text": "V_{A}"
},
{
"math_id": 10,
"text": "\\theta_{Bn}"
},
{
"math_id": 11,
"text": "a_{\\mathrm{slow} }"
},
{
"math_id": 12,
"text": "V_{An}"
},
{
"math_id": 13,
"text": "a_{\\mathrm{fast}}"
}
] | https://en.wikipedia.org/wiki?curid=9080192 |
908081 | Coherent information | Coherent information is an entropy measure used in quantum information theory. It is a property of a quantum state ρ and a quantum channel formula_0; intuitively, it attempts to describe how much of the quantum information in the state will remain after the state goes through the channel. In this sense, it is intuitively similar to the mutual information of classical information theory. The coherent information is written formula_1.
Definition.
The coherent information is defined as formula_2 where formula_3 is the von Neumann entropy of the output and formula_4 is the entropy exchange between the state and the channel.
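For a channel given in Kraus form, both terms of the definition can be evaluated numerically: the first is the entropy of the output state, and one standard way to obtain the entropy exchange (not spelled out in this article) is as the entropy of the matrix W with entries W_ij = Tr(K_i ρ K_j†). The sketch below is an illustrative calculation for a qubit depolarizing channel with a maximally mixed input; the channel and parameter choice are assumptions made for the example.

    import numpy as np

    def von_neumann_entropy(rho):
        """Entropy in bits; numerically zero eigenvalues are skipped."""
        eigs = np.linalg.eigvalsh(rho)
        eigs = eigs[eigs > 1e-12]
        return float(-np.sum(eigs * np.log2(eigs)))

    def coherent_information(rho, kraus_ops):
        """I(rho, N) = S(N(rho)) - S(N, rho), using W_ij = Tr(K_i rho K_j^dagger)."""
        output = sum(K @ rho @ K.conj().T for K in kraus_ops)
        W = np.array([[np.trace(Ki @ rho @ Kj.conj().T) for Kj in kraus_ops]
                      for Ki in kraus_ops])
        return von_neumann_entropy(output) - von_neumann_entropy(W)

    # Qubit depolarizing channel with error probability p, maximally mixed input state.
    I2 = np.eye(2)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.diag([1.0, -1.0]).astype(complex)
    p = 0.1
    kraus = [np.sqrt(1 - 3 * p / 4) * I2, np.sqrt(p / 4) * X, np.sqrt(p / 4) * Y, np.sqrt(p / 4) * Z]

    print(coherent_information(I2 / 2, kraus))   # approaches 1 as p -> 0, decreases as p grows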
History.
The coherent information was introduced by Benjamin Schumacher and Michael A. Nielsen in a 1996 paper "Quantum data processing and error correction", which appeared in Physical Review A. The same quantity was independently introduced by Seth Lloyd in a paper called “The capacity of the noisy quantum channel” published in Physical Review A. | [
{
"math_id": 0,
"text": "\\mathcal{N}"
},
{
"math_id": 1,
"text": "I(\\rho, \\mathcal{N})"
},
{
"math_id": 2,
"text": "I(\\rho, \\mathcal{N})\\ \\stackrel{\\mathrm{def}}{=}\\ S(\\mathcal{N} \\rho) - S(\\mathcal{N},\\rho)"
},
{
"math_id": 3,
"text": " S(\\mathcal{N} \\rho)"
},
{
"math_id": 4,
"text": "S({\\mathcal N},\\rho)"
}
] | https://en.wikipedia.org/wiki?curid=908081 |
9082215 | Gaussian isoperimetric inequality | In mathematics, the Gaussian isoperimetric inequality, proved by Boris Tsirelson and Vladimir Sudakov, and later independently by Christer Borell, states that among all sets of given Gaussian measure in the "n"-dimensional Euclidean space, half-spaces have the minimal Gaussian boundary measure.
Mathematical formulation.
Let formula_0 be a measurable subset of formula_1 endowed with the standard Gaussian measure formula_2 with the density formula_3. Denote by
formula_4
the ε-extension of "A". Then the "Gaussian isoperimetric inequality" states that
formula_5
where
formula_6
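Half-spaces attain the bound: for A = {x : x₁ ≤ a} one has γⁿ(A_ε) = Φ(a + ε), so the left-hand side of the inequality equals φ(a) = φ(Φ⁻¹(γⁿ(A))). The sketch below (an illustrative numerical check, with an arbitrary value of a) verifies this with SciPy.

    from scipy.stats import norm

    a, eps = 0.3, 1e-6

    gamma_A = norm.cdf(a)                # Gaussian measure of the half-space {x1 <= a}
    gamma_A_eps = norm.cdf(a + eps)      # measure of its epsilon-extension

    boundary = (gamma_A_eps - gamma_A) / eps       # finite-difference version of the liminf
    lower_bound = norm.pdf(norm.ppf(gamma_A))      # phi(Phi^{-1}(gamma(A)))

    print(boundary, lower_bound)   # the two values agree: half-spaces are extremal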
Proofs and generalizations.
The original proofs by Sudakov, Tsirelson and Borell were based on Paul Lévy's spherical isoperimetric inequality.
Sergey Bobkov proved Bobkov's inequality, a functional generalization of the Gaussian isoperimetric inequality, proved from a certain "two point analytic inequality". Bakry and Ledoux gave another proof of Bobkov's functional inequality based on the semigroup techniques which works in a much more abstract setting. Later Barthe and Maurey gave yet another proof using the Brownian motion.
The Gaussian isoperimetric inequality also follows from Ehrhard's inequality.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\scriptstyle A"
},
{
"math_id": 1,
"text": "\\scriptstyle\\mathbf{R}^n "
},
{
"math_id": 2,
"text": "\\gamma^n"
},
{
"math_id": 3,
"text": " {\\exp(-\\|x\\|^2/2)}/(2\\pi)^{n/2}"
},
{
"math_id": 4,
"text": "A_\\varepsilon = \\left\\{ x \\in \\mathbf{R}^n \\, | \\, \n\\text{dist}(x, A) \\leq \\varepsilon \\right\\}"
},
{
"math_id": 5,
"text": "\\liminf_{\\varepsilon \\to +0} \n \\varepsilon^{-1} \\left\\{ \\gamma^n (A_\\varepsilon) - \\gamma^n(A) \\right\\}\n \\geq \\varphi(\\Phi^{-1}(\\gamma^n(A))),"
},
{
"math_id": 6,
"text": "\\varphi(t) = \\frac{\\exp(-t^2/2)}{\\sqrt{2\\pi}}\\quad{\\rm and}\\quad\\Phi(t) = \\int_{-\\infty}^t \\varphi(s)\\, ds. "
}
] | https://en.wikipedia.org/wiki?curid=9082215 |
9082922 | Dielectric elastomers | Dielectric elastomers (DEs) are smart material systems that produce large strains and are promising for Soft robotics, Artificial muscle, etc. They belong to the group of electroactive polymers (EAP). DE actuators (DEA) transform electric energy into mechanical work and vice versa. Thus, they can be used as both actuators, sensors, and energy-harvesting devices. They have high elastic energy density and fast response due to being lightweight, highly stretchable, and operating under the electrostatic principle. They have been investigated since the late 1990s. Many prototype applications exist. Every year, conferences are held in the US and Europe.
Working principles.
A DEA is a compliant capacitor, where a passive elastomer film is sandwiched between two compliant electrodes. When a voltage formula_0 is applied, the electrostatic pressure formula_1 arising from the Coulomb forces acts between the electrodes. The electrodes squeeze the elastomer film. The equivalent electromechanical pressure formula_2 is twice the electrostatic pressure formula_1 and is given by:
formula_3
where formula_4 is the vacuum permittivity, formula_5 is the dielectric constant of the polymer and formula_6 is the thickness of the elastomer film in the current state (during deformation). Usually, strains of DEAs are on the order of 10–35%; maximum values reach 300% (for the acrylic elastomer VHB 4910, commercially available from 3M, which also supports a high elastic energy density and a high electrical breakdown strength).
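As a numerical illustration of the expression above (the permittivity, voltage and film thickness are example values, not taken from the text):

    EPS_0 = 8.854e-12   # vacuum permittivity, F/m

    def equivalent_pressure(eps_r, voltage, thickness):
        """p_eq = eps_0 * eps_r * (U / z)^2, in pascals."""
        return EPS_0 * eps_r * (voltage / thickness) ** 2

    # Example: eps_r = 3 (a typical order of magnitude for elastomers), 3 kV across a 50 um film.
    p = equivalent_pressure(eps_r=3.0, voltage=3000.0, thickness=50e-6)
    print(f"{p / 1e3:.0f} kPa")   # on the order of 100 kPa for these values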
Ionic.
Replacing the electrodes with soft hydrogels allows ionic transport to replace electron transport. Aqueous ionic hydrogels can deliver potentials of multiple kilovolts, despite the onset of electrolysis at below 1.5 V.
The difference between the capacitance of the double layer and the dielectric leads to a potential across the dielectric that can be millions of times greater than that across the double layer. Potentials in the kilovolt range can be realized without electrochemically degrading the hydrogel.
Deformations are well controlled, reversible, and capable of high-frequency operation. The resulting devices can be perfectly transparent. High-frequency actuation is possible. Switching speeds are limited only by mechanical inertia. The hydrogel's stiffness can be thousands of times smaller than the dielectric's, allowing actuation without mechanical constraint across a range of nearly 100% at millisecond speeds. They can be biocompatible.
Remaining issues include drying of the hydrogels, ionic build-up, hysteresis, and electrical shorting.
Early experiments in semiconductor device research relied on ionic conductors to investigate field modulation of contact potentials in silicon and to enable the first solid-state amplifiers. Work since 2000 has established the utility of electrolyte gate electrodes. Ionic gels can also serve as elements of high-performance, stretchable graphene transistors.
Materials.
Films of carbon powder or grease loaded with carbon black were early choices as electrodes for the DEAs. Such materials have poor reliability and are not available with established manufacturing techniques. Improved characteristics can be achieved with liquid metal, sheets of graphene, coatings of carbon nanotubes, surface-implanted layers of metallic nanoclusters and corrugated or patterned metal films.
These options offer limited mechanical properties, sheet resistances, switching times and easy integration. Silicones and acrylic elastomers are other alternatives.
The requirements for an elastomer material are:
Mechanically prestretching the elastomer film offers the possibility of enhancing the electrical breakdown strength. Further reasons for prestretching include:
The elastomers show a visco-hyperelastic behavior. Models that describe large strains and viscoelasticity are required for the calculation of such actuators.
Materials used in research include graphite powder, silicone oil / graphite mixtures, gold electrodes. The electrode should be conductive and compliant. Compliance is important so that the elastomer is not constrained mechanically when elongated.
Films of polyacrylamide hydrogels formed with salt water can be laminated onto the dielectric surfaces, replacing electrodes.
DEs based on silicone (PDMS) and natural rubber are promising research fields. Properties such as fast response times and efficiency are superior using natural rubber based DEs compared to VHB (acrylic elastomer) based DEs for strains under 15%.
Instabilities in Dielectric elastomers.
Dielectric elastomer actuators are to be designed so as to avoid the phenomenon of dielectric breakdown in their whole course of motion. In addition to dielectric breakdown, DEAs are susceptible to another failure mode, referred to as the electromechanical instability, which arises due to nonlinear interaction between the electrostatic and the mechanical restoring forces. In several cases, the electromechanical instability precedes the dielectric breakdown. The instability parameters (critical voltage and the corresponding maximum stretch) are dependent on several factors, such as the level of prestretch, temperature, and the deformation-dependent permittivity. Additionally, they also depend on the voltage waveform used to drive the actuator.
Configurations.
Configurations include:
Applications.
Dielectric elastomers offer multiple potential applications with the potential to replace many electromagnetic actuators, pneumatics and piezo actuators. A list of potential applications include:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "U"
},
{
"math_id": 1,
"text": "p_{el}"
},
{
"math_id": 2,
"text": "p_{eq}"
},
{
"math_id": 3,
"text": "p_{eq}=\\varepsilon_0\\varepsilon_r\\frac{U^2}{z^2}"
},
{
"math_id": 4,
"text": "\\varepsilon_0"
},
{
"math_id": 5,
"text": "\\varepsilon_r"
},
{
"math_id": 6,
"text": "z"
}
] | https://en.wikipedia.org/wiki?curid=9082922 |
908548 | Conditional entropy | Measure of relative information in probability theory
In information theory, the conditional entropy quantifies the amount of information needed to describe the outcome of a random variable formula_1 given that the value of another random variable formula_0 is known. Here, information is measured in shannons, nats, or hartleys. The "entropy of formula_1 conditioned on formula_0" is written as formula_5.
Definition.
The conditional entropy of formula_1 given formula_0 is defined as
H(Y|X) = −∑x,y p(x, y) log ( p(x, y) / p(x) ),
where the sum runs over formula_7 and formula_8, the support sets of formula_0 and formula_1.
"Note:" Here, the convention is that the expression formula_9 should be treated as being equal to zero. This is because formula_10.
Intuitively, notice that by definition of expected value and of conditional probability, formula_11 can be written as formula_12, where formula_13 is defined as formula_14. One can think of formula_15 as associating each pair formula_16 with a quantity measuring the information content of formula_17 given formula_18. This quantity is directly related to the amount of information needed to describe the event formula_17 given formula_19. Hence by computing the expected value of formula_20 over all pairs of values formula_21, the conditional entropy formula_22 measures how much information, on average, is still needed to describe formula_24 once the value of formula_23 is known.
Motivation.
Let formula_25 be the entropy of the discrete random variable formula_1 conditioned on the discrete random variable formula_0 taking a certain value formula_26. Denote the support sets of formula_0 and formula_1 by formula_7 and formula_8. Let formula_1 have probability mass function formula_27. The unconditional entropy of formula_1 is calculated as formula_28, i.e.
formula_29
where formula_30 is the information content of the outcome of formula_1 taking the value formula_31. The entropy of formula_1 conditioned on formula_0 taking the value formula_26 is defined analogously by conditional expectation:
formula_32
Note that formula_5 is the result of averaging formula_25 over all possible values formula_26 that formula_0 may take. Also, if the above sum is taken over a sample formula_33, the expected value formula_34 is known in some domains as equivocation.
Given discrete random variables formula_0 with image formula_7 and formula_1 with image formula_8, the conditional entropy of formula_1 given formula_0 is defined as the weighted sum of formula_25 for each possible value of formula_26, using formula_35 as the weights:
formula_36
Properties.
Conditional entropy equals zero.
formula_37 if and only if the value of formula_1 is completely determined by the value of formula_0.
Conditional entropy of independent random variables.
Conversely, formula_38 if and only if formula_1 and formula_0 are independent random variables.
Chain rule.
Assume that the combined system determined by two random variables formula_0 and formula_1 has joint entropy formula_2, that is, we need formula_2 bits of information on average to describe its exact state. Now if we first learn the value of formula_0, we have gained formula_3 bits of information. Once formula_0 is known, we only need formula_39 bits to describe the state of the whole system. This quantity is exactly formula_5, which gives the "chain rule" of conditional entropy:
formula_40
The chain rule follows from the above definition of conditional entropy:
formula_41
In general, a chain rule for multiple random variables holds:
formula_42
It has a similar form to the chain rule in probability theory, except that addition instead of multiplication is used.
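Both the definition and the chain rule are easy to check numerically for a finite joint distribution; the joint probabilities in the sketch below are an arbitrary example.

    import numpy as np

    def entropy(p):
        """Shannon entropy in bits of a probability vector (zero entries are skipped)."""
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    # Arbitrary joint distribution p(x, y): rows index x, columns index y.
    p_xy = np.array([[0.10, 0.30],
                     [0.25, 0.35]])
    p_x = p_xy.sum(axis=1)

    # H(Y|X) = -sum_{x,y} p(x,y) log2( p(x,y) / p(x) )
    H_y_given_x = -np.sum(p_xy * np.log2(p_xy / p_x[:, None]))

    print(H_y_given_x, entropy(p_xy.ravel()) - entropy(p_x))   # chain rule: both values coincide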
Bayes' rule.
Bayes' rule for conditional entropy states
formula_43
"Proof." formula_44 and formula_45. Symmetry entails formula_46. Subtracting the two equations implies Bayes' rule.
If formula_1 is conditionally independent of formula_47 given formula_0 we have:
formula_48
Other properties.
For any formula_0 and formula_1:
formula_49
where formula_6 is the mutual information between formula_0 and formula_1.
For independent formula_0 and formula_1:
formula_50 and formula_51
Although the specific-conditional entropy formula_52 can be either less or greater than formula_3 for a given random variate formula_53 of formula_1, formula_4 can never exceed formula_3.
Conditional differential entropy.
Definition.
The above definition is for discrete random variables. The continuous version of discrete conditional entropy is called "conditional differential (or continuous) entropy". Let formula_0 and formula_1 be continuous random variables with a joint probability density function formula_54. The differential conditional entropy formula_55 is defined as
h(X|Y) = −∫ f(x, y) log f(x|y) dx dy,
with the integral taken over the supports of formula_0 and formula_1.
Properties.
In contrast to the conditional entropy for discrete random variables, the conditional differential entropy may be negative.
As in the discrete case there is a chain rule for differential entropy:
formula_56
Notice however that this rule may not be true if the involved differential entropies do not exist or are infinite.
Conditional differential entropy is also used in the definition of the mutual information between continuous random variables:
formula_57
formula_58 with equality if and only if formula_0 and formula_1 are independent.
Relation to estimator error.
The conditional differential entropy yields a lower bound on the expected squared error of an estimator. For any random variable formula_0, observation formula_1 and estimator formula_59 the following holds:
formula_60
This is related to the uncertainty principle from quantum mechanics.
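The bound is tight in the jointly Gaussian case: if X and Y are standard normal with correlation ρ, then h(X|Y) = ½ ln(2πe(1 − ρ²)), the right-hand side equals 1 − ρ², and the conditional-mean estimator X̂(Y) = ρY attains it. The sketch below is an illustrative simulation of this special case (the correlation and sample size are arbitrary choices).

    import numpy as np

    rng = np.random.default_rng(0)
    rho, n = 0.8, 1_000_000

    # Jointly Gaussian (X, Y), both standard normal, with correlation rho.
    y = rng.standard_normal(n)
    x = rho * y + np.sqrt(1 - rho**2) * rng.standard_normal(n)

    mse = np.mean((x - rho * y) ** 2)                          # error of the estimator rho * Y

    h_cond = 0.5 * np.log(2 * np.pi * np.e * (1 - rho**2))     # h(X|Y) in nats
    bound = np.exp(2 * h_cond) / (2 * np.pi * np.e)            # (1/(2*pi*e)) * e^{2 h(X|Y)}

    print(mse, bound)   # both are approximately 1 - rho**2 = 0.36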
Generalization to quantum theory.
In quantum information theory, the conditional entropy is generalized to the conditional quantum entropy. The latter can take negative values, unlike its classical counterpart.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "Y"
},
{
"math_id": 2,
"text": "\\Eta(X,Y)"
},
{
"math_id": 3,
"text": "\\Eta(X)"
},
{
"math_id": 4,
"text": "\\Eta(X|Y)"
},
{
"math_id": 5,
"text": "\\Eta(Y|X)"
},
{
"math_id": 6,
"text": "\\operatorname{I}(X;Y)"
},
{
"math_id": 7,
"text": "\\mathcal X"
},
{
"math_id": 8,
"text": "\\mathcal Y"
},
{
"math_id": 9,
"text": "0 \\log 0"
},
{
"math_id": 10,
"text": "\\lim_{\\theta\\to0^+} \\theta\\, \\log \\theta = 0"
},
{
"math_id": 11,
"text": "\\displaystyle H(Y|X) "
},
{
"math_id": 12,
"text": " H(Y|X) = \\mathbb{E}[f(X,Y)]"
},
{
"math_id": 13,
"text": " f "
},
{
"math_id": 14,
"text": "\\displaystyle f(x,y) := -\\log\\left(\\frac{p(x, y)}{p(x)}\\right) = -\\log(p(y|x))"
},
{
"math_id": 15,
"text": "\\displaystyle f"
},
{
"math_id": 16,
"text": "\\displaystyle (x, y)"
},
{
"math_id": 17,
"text": "\\displaystyle (Y=y)"
},
{
"math_id": 18,
"text": "\\displaystyle (X=x)"
},
{
"math_id": 19,
"text": "(X=x)"
},
{
"math_id": 20,
"text": "\\displaystyle f "
},
{
"math_id": 21,
"text": "(x, y) \\in \\mathcal{X} \\times \\mathcal{Y}"
},
{
"math_id": 22,
"text": "\\displaystyle H(Y|X)"
},
{
"math_id": 23,
"text": " X "
},
{
"math_id": 24,
"text": " Y "
},
{
"math_id": 25,
"text": "\\Eta(Y|X=x)"
},
{
"math_id": 26,
"text": "x"
},
{
"math_id": 27,
"text": "p_Y{(y)}"
},
{
"math_id": 28,
"text": "\\Eta(Y) := \\mathbb{E}[\\operatorname{I}(Y)]"
},
{
"math_id": 29,
"text": "\\Eta(Y) = \\sum_{y\\in\\mathcal Y} {\\mathrm{Pr}(Y=y)\\,\\mathrm{I}(y)} \n= -\\sum_{y\\in\\mathcal Y} {p_Y(y) \\log_2{p_Y(y)}},"
},
{
"math_id": 30,
"text": "\\operatorname{I}(y_i)"
},
{
"math_id": 31,
"text": "y_i"
},
{
"math_id": 32,
"text": "\\Eta(Y|X=x)\n= -\\sum_{y\\in\\mathcal Y} {\\Pr(Y = y|X=x) \\log_2{\\Pr(Y = y|X=x)}}."
},
{
"math_id": 33,
"text": "y_1, \\dots, y_n"
},
{
"math_id": 34,
"text": "E_X[ \\Eta(y_1, \\dots, y_n \\mid X = x)]"
},
{
"math_id": 35,
"text": "p(x)"
},
{
"math_id": 36,
"text": "\n\\begin{align}\n\\Eta(Y|X)\\ &\\equiv \\sum_{x\\in\\mathcal X}\\,p(x)\\,\\Eta(Y|X=x)\\\\\n& =-\\sum_{x\\in\\mathcal X} p(x)\\sum_{y\\in\\mathcal Y}\\,p(y|x)\\,\\log_2\\, p(y|x)\\\\\n& =-\\sum_{x\\in\\mathcal X, y\\in\\mathcal Y}\\,p(x)p(y|x)\\,\\log_2\\,p(y|x)\\\\\n& =-\\sum_{x\\in\\mathcal X, y\\in\\mathcal Y}p(x,y)\\log_2 \\frac {p(x,y)} {p(x)}. \n\\end{align}\n"
},
{
"math_id": 37,
"text": "\\Eta(Y|X)=0"
},
{
"math_id": 38,
"text": "\\Eta(Y|X) = \\Eta(Y)"
},
{
"math_id": 39,
"text": "\\Eta(X,Y)-\\Eta(X)"
},
{
"math_id": 40,
"text": "\\Eta(Y|X)\\, = \\, \\Eta(X,Y)- \\Eta(X)."
},
{
"math_id": 41,
"text": "\\begin{align} \n\\Eta(Y|X) &= \\sum_{x\\in\\mathcal X, y\\in\\mathcal Y}p(x,y)\\log \\left(\\frac{p(x)}{p(x,y)} \\right) \\\\[4pt]\n &= \\sum_{x\\in\\mathcal X, y\\in\\mathcal Y}p(x,y)(\\log (p(x)) - \\log (p(x,y))) \\\\[4pt]\n &= -\\sum_{x\\in\\mathcal X, y\\in\\mathcal Y}p(x,y)\\log (p(x,y)) + \\sum_{x\\in\\mathcal X, y\\in\\mathcal Y}{p(x,y)\\log(p(x))} \\\\[4pt]\n & = \\Eta(X,Y) + \\sum_{x \\in \\mathcal X} p(x)\\log (p(x) ) \\\\[4pt]\n & = \\Eta(X,Y) - \\Eta(X). \n\\end{align}"
},
{
"math_id": 42,
"text": " \\Eta(X_1,X_2,\\ldots,X_n) =\n \\sum_{i=1}^n \\Eta(X_i | X_1, \\ldots, X_{i-1}) "
},
{
"math_id": 43,
"text": "\\Eta(Y|X) \\,=\\, \\Eta(X|Y) - \\Eta(X) + \\Eta(Y)."
},
{
"math_id": 44,
"text": "\\Eta(Y|X) = \\Eta(X,Y) - \\Eta(X)"
},
{
"math_id": 45,
"text": "\\Eta(X|Y) = \\Eta(Y,X) - \\Eta(Y)"
},
{
"math_id": 46,
"text": "\\Eta(X,Y) = \\Eta(Y,X)"
},
{
"math_id": 47,
"text": "Z"
},
{
"math_id": 48,
"text": "\\Eta(Y|X,Z) \\,=\\, \\Eta(Y|X)."
},
{
"math_id": 49,
"text": "\\begin{align}\n \\Eta(Y|X) &\\le \\Eta(Y) \\, \\\\\n \\Eta(X,Y) &= \\Eta(X|Y) + \\Eta(Y|X) + \\operatorname{I}(X;Y),\\qquad \\\\\n \\Eta(X,Y) &= \\Eta(X) + \\Eta(Y) - \\operatorname{I}(X;Y),\\, \\\\\n \\operatorname{I}(X;Y) &\\le \\Eta(X),\\,\n\\end{align}"
},
{
"math_id": 50,
"text": "\\Eta(Y|X) = \\Eta(Y) "
},
{
"math_id": 51,
"text": "\\Eta(X|Y) = \\Eta(X) \\, "
},
{
"math_id": 52,
"text": "\\Eta(X|Y=y)"
},
{
"math_id": 53,
"text": "y"
},
{
"math_id": 54,
"text": "f(x,y)"
},
{
"math_id": 55,
"text": "h(X|Y)"
},
{
"math_id": 56,
"text": "h(Y|X)\\,=\\,h(X,Y)-h(X)"
},
{
"math_id": 57,
"text": "\\operatorname{I}(X,Y)=h(X)-h(X|Y)=h(Y)-h(Y|X)"
},
{
"math_id": 58,
"text": "h(X|Y) \\le h(X)"
},
{
"math_id": 59,
"text": "\\widehat{X}"
},
{
"math_id": 60,
"text": "\\mathbb{E}\\left[\\bigl(X - \\widehat{X}{(Y)}\\bigr)^2\\right] \n \\ge \\frac{1}{2\\pi e}e^{2h(X|Y)}"
}
] | https://en.wikipedia.org/wiki?curid=908548 |
908654 | Dynamometer | Machine used to measure force or mechanical power
A dynamometer, or "dyno" for short, is a device for simultaneously measuring the torque and rotational speed (RPM) of an engine, motor or other rotating prime mover so that its instantaneous power may be calculated, usually displayed by the dynamometer itself as kW or bhp.
In addition to being used to determine the torque or power characteristics of a machine under test, dynamometers are employed in a number of other roles. In standard emissions testing cycles such as those defined by the United States Environmental Protection Agency, dynamometers are used to provide simulated road loading of either the engine (using an engine dynamometer) or full powertrain (using a chassis dynamometer). Beyond simple power and torque measurements, dynamometers can be used as part of a testbed for a variety of engine development activities, such as the calibration of engine management controllers, detailed investigations into combustion behavior, and tribology.
In the medical terminology, hand-held dynamometers are used for routine screening of grip and hand strength, and the initial and ongoing evaluation of patients with hand trauma or dysfunction. They are also used to measure grip strength in patients where compromise of the cervical nerve roots or peripheral nerves is suspected.
In the rehabilitation, kinesiology, and ergonomics realms, force dynamometers are used for measuring the back, grip, arm, and/or leg strength of athletes, patients, and workers to evaluate physical status, performance, and task demands. Typically the force applied to a lever or through a cable is measured and then converted to a moment of force by multiplying by the perpendicular distance from the force to the axis of the lever.
Principles of operation of torque power (absorbing) dynamometers.
An absorbing dynamometer acts as a load that is driven by the prime mover that is under test (e.g. Pelton wheel). The dynamometer must be able to operate at any speed and load to any level of torque that the test requires.
Absorbing dynamometers are not to be confused with "inertia" dynamometers, which calculate power solely from the power required to accelerate a drive roller of known mass (inertia) and provide no variable load to the prime mover.
An absorption dynamometer is usually equipped with some means of measuring the operating torque and speed.
The power absorption unit (PAU) of a dynamometer absorbs the power developed by the prime mover. This power absorbed by the dynamometer is then converted into heat, which generally dissipates into the ambient air or transfers to cooling water that dissipates into the air. Regenerative dynamometers, in which the prime mover drives a DC motor as a generator to create load, make excess DC power and potentially - using a DC/AC inverter - can feed AC power back into the commercial electrical power grid.
Absorption dynamometers can be equipped with two types of control systems to provide different main test types.
Constant force.
The dynamometer has a "braking" torque regulator - the power absorption unit is configured to provide a set braking force torque load, while the prime mover is configured to operate at whatever throttle opening, fuel delivery rate, or any other variable it is desired to test. The prime mover is then allowed to accelerate the engine through the desired speed or RPM range. Constant force test routines require the PAU to be set slightly torque deficient as referenced to prime mover output to allow some rate of acceleration. Power is calculated based on rotational speed x torque x constant. The constant varies depending on the units used.
Constant speed.
If the dynamometer has a speed regulator (human or computer), the PAU provides a variable amount of braking force (torque) that is necessary to cause the prime mover to operate at the desired single test speed or RPM. The PAU braking load applied to the prime mover can be manually controlled or determined by a computer. Most systems employ eddy current, oil hydraulic, or DC motor produced loads because of their linear and quick load change abilities.
The power is calculated as the product of angular velocity and torque.
A "motoring dynamometer" acts as a motor that drives the equipment under test. It must be able to drive the equipment at any speed and develop any level of torque that the test requires. In common usage, AC or DC motors are used to drive the equipment or "load" device.
In most dynamometers power ("P") is not measured directly, but must be calculated from torque ("τ") and angular velocity ("ω") values or force ("F") and linear velocity ("v"):
formula_0
or
formula_1
where
"P" is the power in watts
"τ" is the torque in newton metres
"ω" is the angular velocity in radians per second
"F" is the force in newtons
"v" is the linear velocity in metres per second
Division by a conversion constant may be required, depending on the units of measure used.
For imperial or U.S. customary units,
formula_2
where
"P"hp is the power in horsepower
"τ"lb·ft is the torque in pound-feet
"ω"RPM is the rotational velocity in revolutions per minute
For metric units,
formula_3
where
"P"W is the power in Watts (W)
"τ"N·m is the torque in Newton metres (Nm)
"ω" is the rotational velocity in radians/second (rad/s)
"ω = ωRPM . π / 30 "
Detailed dynamometer description.
A dynamometer consists of an absorption (or absorber/driver) unit, and usually includes a means for measuring torque and rotational speed. An absorption unit consists of some type of rotor in a housing. The rotor is coupled to the engine or other equipment under test and is free to rotate at whatever speed is required for the test. Some means is provided to develop a braking torque between the rotor and housing of the dynamometer. The means for developing torque can be frictional, hydraulic, electromagnetic, or otherwise, according to the type of absorption/driver unit.
One means for measuring torque is to mount the dynamometer housing so that it is free to turn except as restrained by a torque arm. The housing can be made free to rotate by using trunnions connected to each end of the housing to support it in pedestal-mounted trunnion bearings. The torque arm is connected to the dyno housing and a weighing scale is positioned so that it measures the force exerted by the dyno housing in attempting to rotate. The torque is the force indicated by the scales multiplied by the length of the torque arm measured from the center of the dynamometer. A load cell transducer can be substituted for the scales in order to provide an electrical signal that is proportional to torque.
Another means to measure torque is to connect the engine to the dynamometer through a torque-sensing coupling or torque transducer. A torque transducer provides an electrical signal that is proportional to the torque.
With electrical absorption units, it is possible to determine torque by measuring the current drawn (or generated) by the absorber/driver. This is generally a less accurate method and not much practiced in modern times, but it may be adequate for some purposes.
When torque and speed signals are available, test data can be transmitted to a data acquisition system rather than being recorded manually. Speed and torque signals can also be recorded by a chart recorder or plotter.
Types of dynamometers.
In addition to classification as absorption, motoring, or universal, as described above, dynamometers can also be classified in other ways.
A dyno that is coupled directly to an engine is known as an "engine dyno".
A dyno that can measure torque and power delivered by the power train of a vehicle directly from the drive wheel or wheels (without removing the engine from the frame of the vehicle) is known as a "chassis dyno".
Dynamometers can also be classified by the type of absorption unit or absorber/driver that they use. Some units that are capable of absorption only can be combined with a motor to construct an absorber/driver or "universal" dynamometer.
Eddy current type absorber.
Eddy current (EC) dynamometers are currently the most common absorbers used in modern chassis dynos. The EC absorbers provide a quick load change rate for rapid load settling. Most are air cooled, but some are designed to require external water cooling systems.
Eddy current dynamometers require an electrically conductive core, shaft, or disc moving across a magnetic field to produce resistance to movement. Iron is a common material, but copper, aluminum, and other conductive materials are also usable.
In current (2009) applications, most EC brakes use cast iron discs similar to vehicle disc brake rotors, and use variable electromagnets to change the magnetic field strength to control the amount of braking.
The electromagnet voltage is usually controlled by a computer, using changes in the magnetic field to match the power output being applied.
Sophisticated EC systems allow steady state and controlled acceleration rate operation.
Powder dynamometer.
A powder dynamometer is similar to an eddy current dynamometer, but a fine magnetic powder is placed in the air gap between the rotor and the coil. The resulting flux lines create "chains" of metal particulate that are constantly built and broken apart during rotation, creating great torque. Powder dynamometers are typically limited to lower RPM due to heat dissipation problems.
Hysteresis dynamometers.
Hysteresis dynamometers use a magnetic rotor, sometimes of AlNiCo alloy, that is moved through flux lines generated between magnetic pole pieces. The magnetisation of the rotor is thus cycled around its B-H characteristic, dissipating energy proportional to the area between the lines of that graph as it does so.
Unlike eddy current brakes, which develop no torque at standstill, the hysteresis brake develops largely constant torque, proportional to its magnetising current (or magnet strength in the case of permanent magnet units) over its entire speed range. Units often incorporate ventilation slots, though some have provision for forced air cooling from an external supply.
Hysteresis and eddy current dynamometers are two of the most useful technologies for small dynamometers.
Electric motor/generator dynamometer.
Electric motor/generator dynamometers are a specialized type of adjustable-speed drive. The absorption/driver unit can be either an alternating current (AC) motor or a direct current (DC) motor. Either an AC motor or a DC motor can operate as a generator that is driven by the unit under test or a motor that drives the unit under test. When equipped with appropriate control units, electric motor/generator dynamometers can be configured as universal dynamometers. The control unit for an AC motor is a variable-frequency drive, while the control unit for a DC motor is a DC drive. In both cases, regenerative control units can transfer power from the unit under test to the electric utility. Where permitted, the operator of the dynamometer can receive payment (or credit) from the utility for the returned power via net metering.
In engine testing, universal dynamometers can not only absorb the power of the engine, but can also drive the engine for measuring friction, pumping losses, and other factors.
Electric motor/generator dynamometers are generally more costly and complex than other types of dynamometers.
Fan brake.
A fan is used to blow air to provide engine load. The torque absorbed by a fan brake may be adjusted by changing the gearing or the fan itself, or by restricting the airflow through the fan. Due to the low viscosity of air, this variety of dynamometer is inherently limited in the amount of torque that it can absorb.
Force lubricated oil shear brake.
An oil shear brake has a series of friction discs and steel plates similar to the clutches in an automobile automatic transmission. The shaft carrying the friction discs is attached to the load through a coupling. A piston pushes the stack of friction discs and steel plates together creating shear in the oil between the discs and plates applying a torque. Torque can be controlled pneumatically or hydraulically. Force lubrication maintains a film of oil between the surfaces to eliminate wear. Reaction is smooth down to zero RPM without stick-slip. Loads up to hundreds of thermal horsepower can be absorbed through the required force lubrication and cooling unit. Most often, the brake is kinetically grounded through a torque arm anchored by a strain gauge which produces a current under load fed to the dynamometer control. Proportional or servo control valves are generally used to allow the dynamometer control to apply pressure to provide the program torque load with feedback from the strain gauge closing the loop. As torque requirements go up there are speed limitations.
Hydraulic brake.
The hydraulic brake system consists of a hydraulic pump (usually a gear-type pump), a fluid reservoir, and piping between the two parts. Inserted in the piping is an adjustable valve, and between the pump and the valve is a gauge or other means of measuring hydraulic pressure. In simplest terms, the engine is brought up to the desired RPM and the valve is incrementally closed. As the pump's outlet is restricted, the load increases and the throttle is simply opened until the desired throttle opening is reached. Unlike most other systems, power is calculated by factoring flow volume (calculated from pump design specifications), hydraulic pressure, and RPM. Brake HP, whether figured from pressure, volume, and RPM, or measured with a different load cell-type brake dyno, should produce essentially identical power figures. Hydraulic dynos are renowned for having the quickest load change ability, just slightly surpassing eddy current absorbers. The downside is that they require large quantities of hot oil under high pressure and an oil reservoir.
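A minimal Python sketch of this pressure-and-flow based calculation (assuming an ideal gear pump with a stated displacement and ignoring pump losses; the numbers are arbitrary) might look as follows:

def hydraulic_power_watts(flow_m3_per_s, pressure_pa):
    # Absorbed power is approximately volumetric flow rate * pressure drop across the valve
    return flow_m3_per_s * pressure_pa

def gear_pump_flow_m3_per_s(displacement_cc_per_rev, speed_rpm):
    # Flow from pump design specifications: displacement per revolution * shaft speed
    return (displacement_cc_per_rev * 1e-6) * (speed_rpm / 60.0)

flow = gear_pump_flow_m3_per_s(100.0, 3000.0)   # 100 cc/rev at 3000 RPM -> 0.005 m^3/s
print(hydraulic_power_watts(flow, 20e6))        # at 20 MPa -> 100 kW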
Water brake-type absorber.
The hydraulic dynamometer (also referred to as the water brake absorber) was invented by British engineer William Froude in 1877 in response to a request by the Admiralty to produce a machine capable of absorbing and measuring the power of large naval engines. Water brake absorbers are relatively common today. They are noted for their high power capability, small size, light weight, and relatively low manufacturing costs as compared to other, quicker reacting, "power absorber" types.
Their drawbacks are that they can take a relatively long period of time to "stabilize" their load amount, and that they require a constant supply of water to the "water brake housing" for cooling. Environmental regulations may prohibit "flow through" water, in which case large water tanks are installed to prevent contaminated water from entering the environment.
The schematic shows the most common type of water brake, known as the "variable level" type. Water is added until the engine is held at a steady RPM against the load, with the water then kept at that level and replaced by constant draining and refilling (which is needed to carry away the heat created by absorbing the horsepower). The housing attempts to rotate in response to the torque produced, but is restrained by the scale or torque metering cell that measures the torque.
Compound dynamometers.
In most cases, motoring dynamometers are symmetrical; a 300 kW AC dynamometer can absorb 300 kW as well as motor at 300 kW. This is an uncommon requirement in engine testing and development. Sometimes, a more cost-effective solution is to attach a larger absorption dynamometer with a smaller motoring dynamometer. Alternatively, a larger absorption dynamometer and a simple AC or DC motor may be used in a similar manner, with the electric motor only providing motoring power when required (and no absorption). The (cheaper) absorption dynamometer is sized for the maximum required absorption, whereas the motoring dynamometer is sized for motoring. A typical size ratio for common emission test cycles and most engine development is approximately 3:1. Torque measurement is somewhat complicated since there are two machines in tandem - an inline torque transducer is the preferred method of torque measurement in this case. An eddy-current or waterbrake dynamometer, with electronic control combined with a variable frequency drive and AC induction motor, is a commonly used configuration of this type. Disadvantages include requiring a second set of test cell services (electrical power and cooling), and a slightly more complicated control system. Attention must be paid to the transition between motoring and braking in terms of control stability.
How dynamometers are used for engine testing.
Dynamometers are useful in the development and refinement of modern engine technology. The concept is to use a dyno to measure and compare power transfer at different points on a vehicle, thus allowing the engine or drivetrain to be modified to get more efficient power transfer. For example, if an engine dyno shows that a particular engine produces a certain torque at the crankshaft while a chassis dyno shows a lower figure at the drive wheels, the difference indicates the magnitude of the drivetrain losses. Dynamometers are typically very expensive pieces of equipment, and so are normally used only in certain fields that rely on them for a particular purpose.
Types of dynamometer systems.
A 'brake' dynamometer applies variable load on the prime mover (PM) and measures the PM's ability to move or hold the RPM as related to the "braking force" applied. It is usually connected to a computer that records applied braking torque and calculates engine power output based on information from a "load cell" or "strain gauge" and a speed sensor.
An 'inertia' dynamometer provides a fixed inertial mass load, calculates the power required to accelerate that fixed and known mass, and uses a computer to record RPM and acceleration rate to calculate torque. The engine is generally tested from somewhat above idle to its maximum RPM and the output is measured and plotted on a graph.
A 'motoring' dynamometer provides the features of a brake dyno system, but in addition, can "power" (usually with an AC or DC motor) the PM and allow testing of very small power outputs (for example, duplicating speeds and loads that are experienced when operating a vehicle traveling downhill or during on/off throttle operations).
Types of dynamometer test procedures.
There are essentially three types of dynamometer test procedures: sweep tests, steady-state tests, and transient tests.
Types of sweep tests.
In every type of sweep test, there remains the issue of potential power reading error due to the variable engine/dyno/vehicle total rotating mass. Many modern computer-controlled brake dyno systems are capable of deriving that "inertial mass" value, so as to eliminate this error.
A "sweep test" will almost always be suspect, as many "sweep" users ignore the rotating mass factor, preferring to use a blanket "factor" on every test on every engine or vehicle. Simple inertia dyno systems aren't capable of deriving "inertial mass", and thus are forced to use the same (assumed) inertial mass on every vehicle tested.
Using steady state testing eliminates the rotating inertial mass error of a sweep test, as there is no acceleration during this type of test.
Transient test characteristics.
Aggressive throttle movements, engine speed changes, and engine motoring are characteristics of most transient engine tests. The usual purpose of these tests is vehicle emissions development and homologation. In some cases, the lower-cost eddy-current dynamometer is used to test one of the transient test cycles for early development and calibration. An eddy current dyno system offers fast load response, which allows rapid tracking of speed and load, but does not allow motoring. Since most required transient tests contain a significant amount of motoring operation, a transient test cycle with an eddy-current dyno will generate different emissions test results. Final adjustments are required to be done on a motoring-capable dyno.
Engine dynamometer.
An engine dynamometer measures power and torque directly from the engine's crankshaft (or flywheel), when the engine is removed from the vehicle. These dynos do not account for power losses in the drivetrain, such as the gearbox, transmission, and differential.
Chassis dynamometer (rolling road).
A chassis dynamometer, sometimes referred to as a rolling road, measures power delivered to the surface of the "drive roller" by the drive wheels. The vehicle is typically strapped down over the roller or rollers, which the drive wheels turn, and the power output is measured at the rollers.
Modern roller-type chassis dyno systems use the "Salvisberg roller", which improves traction and repeatability, as compared to the use of smooth or knurled drive rollers. Chassis dynamometers can be fixed or portable, and can do much more than display RPM, power, and torque. With modern electronics and quick reacting, low inertia dyno systems, it is now possible to tune to best power and the smoothest runs in real time.
Other types of chassis dynamometers are available that eliminate the potential for wheel slippage on old style drive rollers, attaching directly to the vehicle's hubs for direct torque measurement from the axle.
Motor vehicle emissions development and homologation dynamometer test systems often integrate emissions sampling, measurement, engine speed and load control, data acquisition, and safety monitoring into a complete test cell system. These test systems usually include complex emissions sampling equipment (such as constant volume samplers and raw exhaust gas sample preparation systems) and analyzers. These analyzers are much more sensitive and much faster than a typical portable exhaust gas analyzer. Response times of well under one second are common, and are required by many transient test cycles. In retail settings it is also common to tune the air-fuel ratio using a wideband oxygen sensor that is graphed along with the RPM.
Integration of the dynamometer control system with automatic calibration tools for engine system calibration is often found in development test cell systems. In these systems, the dynamometer load and engine speed are varied to many engine operating points, while selected engine management parameters are varied and the results recorded automatically. Later analysis of this data may then be used to generate engine calibration data used by the engine management software.
Because of frictional and mechanical losses in the various drivetrain components, the measured wheel brake horsepower is generally 15-20 percent less than the brake horsepower measured at the crankshaft or flywheel on an engine dynamometer.
History.
The Graham-Desaguliers Dynamometer was invented by George Graham and mentioned in the writings of John Desaguliers in 1719. Desaguliers modified the first dynamometers, and so the instrument became known as the Graham-Desaguliers dynamometer.
The Regnier dynamometer was invented and made public in 1798 by Edmé Régnier, a French rifle maker and engineer.
A patent was issued (dated June 1817) to Siebe and Marriot of Fleet Street, London for an improved weighing machine.
Gaspard de Prony invented the de Prony brake in 1821.
Macneill's road indicator was invented by John Macneill in the late 1820s, further developing Marriot's patented weighing machine.
Froude Ltd, of Worcester, UK, manufactures engine and vehicle dynamometers. They credit William Froude with the invention of the hydraulic dynamometer in 1877, and say that the first commercial dynamometers were produced in 1881 by their predecessor company, Heenan & Froude.
In 1928, the German company "Carl Schenck Eisengießerei & Waagenfabrik" built the first vehicle dynamometers for brake tests that have the basic design of modern vehicle test stands.
The eddy current dynamometer was invented by Martin and Anthony Winther around 1931, but at that time, DC Motor/generator dynamometers had been in use for many years. A company founded by the Winthers brothers, Dynamatic Corporation, manufactured dynamometers in Kenosha, Wisconsin until 2002. Dynamatic was part of Eaton Corporation from 1946 to 1995. In 2002, Dyne Systems of Jackson, Wisconsin acquired the Dynamatic dynamometer product line. Starting in 1938, Heenan & Froude manufactured eddy current dynamometers for many years under license from Dynamatic and Eaton.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P=\\tau\\cdot\\omega"
},
{
"math_id": 1,
"text": "P=F \\cdot v"
},
{
"math_id": 2,
"text": "P_\\mathrm{hp}={\\tau_\\mathrm{lb \\cdot ft}\\cdot\\omega_\\mathrm{RPM} \\over 5252}"
},
{
"math_id": 3,
"text": "P_\\mathrm{W}=\\tau_\\mathrm{N \\cdot m}\\cdot\\omega"
}
] | https://en.wikipedia.org/wiki?curid=908654 |
9087 | Dynamical system | Mathematical model of the time dependence of a point in space
In mathematics, a dynamical system is a system in which a function describes the time dependence of a point in an ambient space, such as in a parametric curve. Examples include the mathematical models that describe the swinging of a clock pendulum, the flow of water in a pipe, the random motion of particles in the air, and the number of fish each springtime in a lake. The most general definition unifies several concepts in mathematics such as ordinary differential equations and ergodic theory by allowing different choices of the space and how time is measured. Time can be measured by integers, by real or complex numbers or can be a more general algebraic object, losing the memory of its physical origin, and the space may be a manifold or simply a set, without the need of a smooth space-time structure defined on it.
At any given time, a dynamical system has a state representing a point in an appropriate state space. This state is often given by a tuple of real numbers or by a vector in a geometrical manifold. The "evolution rule" of the dynamical system is a function that describes what future states follow from the current state. Often the function is deterministic, that is, for a given time interval only one future state follows from the current state. However, some systems are stochastic, in that random events also affect the evolution of the state variables.
In physics, a dynamical system is described as a "particle or ensemble of particles whose state varies over time and thus obeys differential equations involving time derivatives". In order to make a prediction about the system's future behavior, an analytical solution of such equations or their integration over time through computer simulation is realized.
The study of dynamical systems is the focus of dynamical systems theory, which has applications to a wide variety of fields such as mathematics, physics, biology, chemistry, engineering, economics, history, and medicine. Dynamical systems are a fundamental part of chaos theory, logistic map dynamics, bifurcation theory, the self-assembly and self-organization processes, and the edge of chaos concept.
Overview.
The concept of a dynamical system has its origins in Newtonian mechanics. There, as in other natural sciences and engineering disciplines, the evolution rule of dynamical systems is an implicit relation that gives the state of the system for only a short time into the future. (The relation is either a differential equation, difference equation or other time scale.) To determine the state for all future times requires iterating the relation many times—each advancing time a small step. The iteration procedure is referred to as "solving the system" or "integrating the system". If the system can be solved, then, given an initial point, it is possible to determine all its future positions, a collection of points known as a "trajectory" or "orbit".
Before the advent of computers, finding an orbit required sophisticated mathematical techniques and could be accomplished only for a small class of dynamical systems. Numerical methods implemented on electronic computing machines have simplified the task of determining the orbits of a dynamical system.
For simple dynamical systems, knowing the trajectory is often sufficient, but most dynamical systems are too complicated to be understood in terms of individual trajectories. The difficulties arise because:
History.
Many people regard French mathematician Henri Poincaré as the founder of dynamical systems. Poincaré published two now classical monographs, "New Methods of Celestial Mechanics" (1892–1899) and "Lectures on Celestial Mechanics" (1905–1910). In them, he successfully applied the results of his research to the problem of the motion of three bodies and studied in detail the behavior of solutions (frequency, stability, asymptotic behavior, and so on). These papers included the Poincaré recurrence theorem, which states that certain systems will, after a sufficiently long but finite time, return to a state very close to the initial state.
Aleksandr Lyapunov developed many important approximation methods. His methods, which he developed in 1899, make it possible to define the stability of sets of ordinary differential equations. He created the modern theory of the stability of a dynamical system.
In 1913, George David Birkhoff proved Poincaré's "Last Geometric Theorem", a special case of the three-body problem, a result that made him world-famous. In 1927, he published his "Dynamical Systems". Birkhoff's most durable result has been his 1931 discovery of what is now called the ergodic theorem. Combining insights from physics on the ergodic hypothesis with measure theory, this theorem solved, at least in principle, a fundamental problem of statistical mechanics. The ergodic theorem has also had repercussions for dynamics.
Stephen Smale made significant advances as well. His first contribution was the Smale horseshoe that jumpstarted significant research in dynamical systems. He also outlined a research program carried out by many others.
Oleksandr Mykolaiovych Sharkovsky developed Sharkovsky's theorem on the periods of discrete dynamical systems in 1964. One of the implications of the theorem is that if a discrete dynamical system on the real line has a periodic point of period 3, then it must have periodic points of every other period.
In the late 20th century the dynamical system perspective to partial differential equations started gaining popularity. Palestinian mechanical engineer Ali H. Nayfeh applied nonlinear dynamics in mechanical and engineering systems. His pioneering work in applied nonlinear dynamics has been influential in the construction and maintenance of machines and structures that are common in daily life, such as ships, cranes, bridges, buildings, skyscrapers, jet engines, rocket engines, aircraft and spacecraft.
Formal definition.
In the most general sense,
a dynamical system is a tuple ("T", "X", Φ) where "T" is a monoid, written additively, "X" is a non-empty set and Φ is a function
formula_0
with
formula_1 (where formula_2 is the 2nd projection map)
and for any "x" in "X":
formula_3
formula_4
for formula_5 and formula_6, where we have defined the set formula_7 for any "x" in "X".
In particular, in the case that formula_8 we have for every "x" in "X" that formula_9 and thus that Φ defines a monoid action of "T" on "X".
The function Φ("t","x") is called the evolution function of the dynamical system: it associates to every point "x" in the set "X" a unique image, depending on the variable "t", called the evolution parameter. "X" is called phase space or state space, while the variable "x" represents an initial state of the system.
We often write
formula_10
formula_11
if we take one of the variables as constant. The function
formula_12
is called the flow through "x" and its graph is called the trajectory through "x". The set
formula_13
is called the orbit through "x".
The orbit through "x" is the image of the flow through "x".
A subset "S" of the state space "X" is called Φ-invariant if for all "x" in "S" and all "t" in "T"
formula_14
Thus, in particular, if "S" is Φ-invariant, formula_15 for all "x" in "S". That is, the flow through "x" must be defined for all time for every element of "S".
More commonly there are two classes of definitions for a dynamical system: one is motivated by ordinary differential equations and is geometrical in flavor; and the other is motivated by ergodic theory and is measure theoretical in flavor.
Geometrical definition.
In the geometrical definition, a dynamical system is the tuple formula_16. formula_17 is the domain for time – there are many choices, usually the reals or the integers, possibly restricted to be non-negative. formula_18 is a manifold, i.e. locally a Banach space or Euclidean space, or in the discrete case a graph. "f" is an evolution rule "t" → "f" "t" (with formula_19) such that "f t" is a diffeomorphism of the manifold to itself. So, f is a "smooth" mapping of the time-domain formula_20 into the space of diffeomorphisms of the manifold to itself. In other terms, "f"("t") is a diffeomorphism, for every time "t" in the domain formula_20 .
Real dynamical system.
A "real dynamical system", "real-time dynamical system", "continuous time dynamical system", or "flow" is a tuple ("T", "M", Φ) with "T" an open interval in the real numbers R, "M" a manifold locally diffeomorphic to a Banach space, and Φ a continuous function. If Φ is continuously differentiable we say the system is a "differentiable dynamical system". If the manifold "M" is locally diffeomorphic to R"n", the dynamical system is "finite-dimensional"; if not, the dynamical system is "infinite-dimensional". This does not assume a symplectic structure. When "T" is taken to be the reals, the dynamical system is called "global" or a "flow"; and if "T" is restricted to the non-negative reals, then the dynamical system is a "semi-flow".
Discrete dynamical system.
A "discrete dynamical system", "discrete-time dynamical system" is a tuple ("T", "M", Φ), where "M" is a manifold locally diffeomorphic to a Banach space, and Φ is a function. When "T" is taken to be the integers, it is a "cascade" or a "map". If "T" is restricted to the non-negative integers we call the system a "semi-cascade".
Cellular automaton.
A "cellular automaton" is a tuple ("T", "M", Φ), with "T" a lattice such as the integers or a higher-dimensional integer grid, "M" is a set of functions from an integer lattice (again, with one or more dimensions) to a finite set, and Φ a (locally defined) evolution function. As such cellular automata are dynamical systems. The lattice in "M" represents the "space" lattice, while the one in "T" represents the "time" lattice.
Multidimensional generalization.
Dynamical systems are usually defined over a single independent variable, thought of as time. A more general class of systems are defined over multiple independent variables and are therefore called multidimensional systems. Such systems are useful for modeling, for example, image processing.
Compactification of a dynamical system.
Given a global dynamical system (R, "X", Φ) on a locally compact and Hausdorff topological space "X", it is often useful to study the continuous extension Φ* of Φ to the one-point compactification "X*" of "X". Although we lose the differential structure of the original system we can now use compactness arguments to analyze the new system (R, "X*", Φ*).
In compact dynamical systems the limit set of any orbit is non-empty, compact and connected.
Measure theoretical definition.
A dynamical system may be defined formally as a measure-preserving transformation of a measure space, the triplet ("T", ("X", Σ, "μ"), Φ). Here, "T" is a monoid (usually the non-negative integers), "X" is a set, and ("X", Σ, "μ") is a probability space, meaning that Σ is a sigma-algebra on "X" and μ is a finite measure on ("X", Σ). A map Φ: "X" → "X" is said to be Σ-measurable if and only if, for every σ in Σ, one has formula_21. A map Φ is said to preserve the measure if and only if, for every "σ" in Σ, one has formula_22. Combining the above, a map Φ is said to be a measure-preserving transformation of "X" , if it is a map from "X" to itself, it is Σ-measurable, and is measure-preserving. The triplet ("T", ("X", Σ, "μ"), Φ), for such a Φ, is then defined to be a dynamical system.
The map Φ embodies the time evolution of the dynamical system. Thus, for discrete dynamical systems the iterates formula_23 for every integer "n" are studied. For continuous dynamical systems, the map Φ is understood to be a finite time evolution map and the construction is more complicated.
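As a toy illustration of the definition, a minimal Python sketch (using a six-point probability space with the uniform measure and an arbitrarily chosen bijection) checks the measure-preservation condition formula_22 on a subset:

from fractions import Fraction

X = range(6)
mu = {x: Fraction(1, 6) for x in X}          # uniform probability measure on a six-point space
phi = {0: 1, 1: 2, 2: 0, 3: 4, 4: 5, 5: 3}   # a bijection of X (two 3-cycles)

def preimage_measure(S):
    # mu(phi^{-1} S) for a subset S of X
    return sum(mu[x] for x in X if phi[x] in S)

S = {0, 3, 4}
print(preimage_measure(S) == sum(mu[x] for x in S))   # True: phi preserves mu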
Relation to geometric definition.
The measure theoretical definition assumes the existence of a measure-preserving transformation. Many different invariant measures can be associated to any one evolution rule. If the dynamical system is given by a system of differential equations the appropriate measure must be determined. This makes it difficult to develop ergodic theory starting from differential equations, so it becomes convenient to have a dynamical systems-motivated definition within ergodic theory that side-steps the choice of measure and assumes the choice has been made. A simple construction (sometimes called the Krylov–Bogolyubov theorem) shows that for a large class of systems it is always possible to construct a measure so as to make the evolution rule of the dynamical system a measure-preserving transformation. In the construction a given measure of the state space is summed for all future points of a trajectory, assuring the invariance.
Some systems have a natural measure, such as the Liouville measure in Hamiltonian systems, chosen over other invariant measures, such as the measures supported on periodic orbits of the Hamiltonian system. For chaotic dissipative systems the choice of invariant measure is technically more challenging. The measure needs to be supported on the attractor, but attractors have zero Lebesgue measure and the invariant measures must be singular with respect to the Lebesgue measure. A small region of phase space shrinks under time evolution.
For hyperbolic dynamical systems, the Sinai–Ruelle–Bowen measures appear to be the natural choice. They are constructed on the geometrical structure of stable and unstable manifolds of the dynamical system; they behave physically under small perturbations; and they explain many of the observed statistics of hyperbolic systems.
Construction of dynamical systems.
The concept of "evolution in time" is central to the theory of dynamical systems as seen in the previous sections: the basic reason for this fact is that the starting motivation of the theory was the study of time behavior of classical mechanical systems. But a system of ordinary differential equations must be solved before it becomes a dynamic system. For example consider an initial value problem such as the following:
formula_24
formula_25
where formula_26 denotes the time derivative of the state "x", "x"("t") is a point in the state space "M", and "v"("t","x") is a vector field on "M".
There is no need for higher order derivatives in the equation, nor for the parameter "t" in "v"("t","x"), because these can be eliminated by considering systems of higher dimensions.
Depending on the properties of this vector field, the mechanical system is called autonomous, when "v"("t","x") = "v"("x") does not depend explicitly on time, or homogeneous, when "v"("t",0) = 0 for all "t".
The solution can be found using standard ODE techniques and is denoted as the evolution function already introduced above
formula_27
The dynamical system is then ("T", "M", Φ).
Some formal manipulation of the system of differential equations shown above gives a more general form of equations a dynamical system must satisfy
formula_28
where formula_29 is a functional from the set of evolution functions to the field of the complex numbers.
This equation is useful when modeling mechanical systems with complicated constraints.
Many of the concepts in dynamical systems can be extended to infinite-dimensional manifolds—those that are locally Banach spaces—in which case the differential equations are partial differential equations.
Examples.
<templatestyles src="Div col/styles.css"/>
Linear dynamical systems.
Linear dynamical systems can be solved in terms of simple functions and the behavior of all orbits classified. In a linear system the phase space is the "N"-dimensional Euclidean space, so any point in phase space can be represented by a vector with "N" numbers. The analysis of linear systems is possible because they satisfy a superposition principle: if "u"("t") and "w"("t") satisfy the differential equation for the vector field (but not necessarily the initial condition), then so will "u"("t") + "w"("t").
Flows.
For a flow, the vector field v("x") is an affine function of the position in the phase space, that is,
formula_30
with "A" a matrix, "b" a vector of numbers and "x" the position vector. The solution to this system can be found by using the superposition principle (linearity).
The case "b" ≠ 0 with "A" = 0 is just a straight line in the direction of "b":
formula_31
When "b" is zero and "A" ≠ 0 the origin is an equilibrium (or singular) point of the flow, that is, if "x"0 = 0, then the orbit remains there.
For other initial conditions, the equation of motion is given by the exponential of a matrix: for an initial point "x"0,
formula_32
When "b" = 0, the eigenvalues of "A" determine the structure of the phase space. From the eigenvalues and the eigenvectors of "A" it is possible to determine if an initial point will converge or diverge to the equilibrium point at the origin.
The distance between two different initial conditions in the case "A" ≠ 0 will change exponentially in most cases, either converging exponentially fast towards a point, or diverging exponentially fast. Linear systems display sensitive dependence on initial conditions in the case of divergence. For nonlinear systems this is one of the (necessary but not sufficient) conditions for chaotic behavior.
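A minimal numerical sketch (assuming NumPy and SciPy are available; the matrix is an arbitrary example with eigenvalues −1 ± 2i) evaluates the flow formula_32 with a matrix exponential and shows the decay towards the equilibrium at the origin:

import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 2.0],
              [-2.0, -1.0]])        # eigenvalues -1 +/- 2i: a stable spiral
x0 = np.array([1.0, 0.0])

for t in (0.0, 1.0, 2.0, 4.0):
    x_t = expm(t * A) @ x0          # Phi^t(x0) = exp(tA) x0
    print(t, np.linalg.norm(x_t))   # the norm decays like exp(-t)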
Maps.
A discrete-time, affine dynamical system has the form of a matrix difference equation:
formula_33
with "A" a matrix and "b" a vector. As in the continuous case, the change of coordinates "x" → "x" + (1 − "A") –1"b" removes the term "b" from the equation. In the new coordinate system, the origin is a fixed point of the map and the solutions are of the linear system "A" "n""x"0.
The solutions for the map are no longer curves, but points that hop in the phase space. The orbits are organized in curves, or fibers, which are collections of points that map into themselves under the action of the map.
As in the continuous case, the eigenvalues and eigenvectors of "A" determine the structure of phase space. For example, if "u"1 is an eigenvector of "A", with a real eigenvalue smaller than one, then the straight line given by the points along "α" "u"1, with "α" ∈ R, is an invariant curve of the map. Points on this straight line run into the fixed point.
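A minimal sketch of the discrete case (assuming NumPy; the matrix and vector are arbitrary examples with eigenvalues inside the unit circle) iterates formula_33 and confirms convergence to the fixed point "x" = (1 − "A")−1"b":

import numpy as np

A = np.array([[0.5, 0.1],
              [0.0, 0.3]])          # all eigenvalues inside the unit circle
b = np.array([1.0, 1.0])
x_fixed = np.linalg.solve(np.eye(2) - A, b)   # fixed point (1 - A)^(-1) b

x = np.array([10.0, -4.0])
for _ in range(50):
    x = A @ x + b                   # x_{n+1} = A x_n + b
print(np.allclose(x, x_fixed))      # the orbit converges to the fixed point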
There are also many other discrete dynamical systems.
Local dynamics.
The qualitative properties of dynamical systems do not change under a smooth change of coordinates (this is sometimes taken as a definition of qualitative): a "singular point" of the vector field (a point where "v"("x") = 0) will remain a singular point under smooth transformations; a "periodic orbit" is a loop in phase space and smooth deformations of the phase space cannot alter it being a loop. It is in the neighborhood of singular points and periodic orbits that the structure of a phase space of a dynamical system can be well understood. In the qualitative study of dynamical systems, the approach is to show that there is a change of coordinates (usually unspecified, but computable) that makes the dynamical system as simple as possible.
Rectification.
A flow in most small patches of the phase space can be made very simple. If "y" is a point where the vector field "v"("y") ≠ 0, then there is a change of coordinates for a region around "y" where the vector field becomes a series of parallel vectors of the same magnitude. This is known as the rectification theorem.
The "rectification theorem" says that away from singular points the dynamics of a point in a small patch is a straight line. The patch can sometimes be enlarged by stitching several patches together, and when this works out in the whole phase space "M" the dynamical system is "integrable". In most cases the patch cannot be extended to the entire phase space. There may be singular points in the vector field (where "v"("x") = 0); or the patches may become smaller and smaller as some point is approached. The more subtle reason is a global constraint, where the trajectory starts out in a patch, and after visiting a series of other patches comes back to the original one. If the next time the orbit loops around phase space in a different way, then it is impossible to rectify the vector field in the whole series of patches.
Near periodic orbits.
In general, in the neighborhood of a periodic orbit the rectification theorem cannot be used. Poincaré developed an approach that transforms the analysis near a periodic orbit to the analysis of a map. Pick a point "x"0 in the orbit γ and consider the points in phase space in that neighborhood that are perpendicular to "v"("x"0). These points are a Poincaré section "S"("γ", "x"0), of the orbit. The flow now defines a map, the Poincaré map "F" : "S" → "S", for points starting in "S" and returning to "S". Not all these points will take the same amount of time to come back, but the times will be close to the time it takes "x"0.
The intersection of the periodic orbit with the Poincaré section is a fixed point of the Poincaré map "F". By a translation, the point can be assumed to be at "x" = 0. The Taylor series of the map is "F"("x") = "J" · "x" + O("x"2), so a change of coordinates "h" can only be expected to simplify "F" to its linear part
formula_34
This is known as the conjugation equation. Finding conditions for this equation to hold has been one of the major tasks of research in dynamical systems. Poincaré first approached it assuming all functions to be analytic and in the process discovered the non-resonant condition. If "λ"1, ..., "λ""ν" are the eigenvalues of "J" they will be resonant if one eigenvalue is an integer linear combination of two or more of the others. As terms of the form "λ""i" – Σ (multiples of other eigenvalues) occur in the denominator of the terms for the function "h", the non-resonant condition is also known as the small divisor problem.
Conjugation results.
The results on the existence of a solution to the conjugation equation depend on the eigenvalues of "J" and the degree of smoothness required from "h". As "J" does not need to have any special symmetries, its eigenvalues will typically be complex numbers. When the eigenvalues of "J" are not on the unit circle, the dynamics near the fixed point "x"0 of "F" is called "hyperbolic" and when the eigenvalues are on the unit circle and complex, the dynamics is called "elliptic".
In the hyperbolic case, the Hartman–Grobman theorem gives the conditions for the existence of a continuous function that maps the neighborhood of the fixed point of the map to the linear map "J" · "x". The hyperbolic case is also "structurally stable". Small changes in the vector field will only produce small changes in the Poincaré map and these small changes will reflect in small changes in the position of the eigenvalues of "J" in the complex plane, implying that the map is still hyperbolic.
The Kolmogorov–Arnold–Moser (KAM) theorem gives the behavior near an elliptic point.
Bifurcation theory.
When the evolution map Φ"t" (or the vector field it is derived from) depends on a parameter μ, the structure of the phase space will also depend on this parameter. Small changes may produce no qualitative changes in the phase space until a special value "μ"0 is reached. At this point the phase space changes qualitatively and the dynamical system is said to have gone through a bifurcation.
Bifurcation theory considers a structure in phase space (typically a fixed point, a periodic orbit, or an invariant torus) and studies its behavior as a function of the parameter "μ". At the bifurcation point the structure may change its stability, split into new structures, or merge with other structures. By using Taylor series approximations of the maps and an understanding of the differences that may be eliminated by a change of coordinates, it is possible to catalog the bifurcations of dynamical systems.
The bifurcations of a hyperbolic fixed point "x"0 of a system family "Fμ" can be characterized by the eigenvalues of the first derivative of the system "DF""μ"("x"0) computed at the bifurcation point. For a map, the bifurcation will occur when there are eigenvalues of "DFμ" on the unit circle. For a flow, it will occur when there are eigenvalues on the imaginary axis. For more information, see the main article on Bifurcation theory.
Some bifurcations can lead to very complicated structures in phase space. For example, the Ruelle–Takens scenario describes how a periodic orbit bifurcates into a torus and the torus into a strange attractor. In another example, Feigenbaum period-doubling describes how a stable periodic orbit goes through a series of period-doubling bifurcations.
Ergodic systems.
In many dynamical systems, it is possible to choose the coordinates of the system so that the volume (really a ν-dimensional volume) in phase space is invariant. This happens for mechanical systems derived from Newton's laws as long as the coordinates are the position and the momentum and the volume is measured in units of (position) × (momentum). The flow takes points of a subset "A" into the points Φ "t"("A") and invariance of the phase space means that
formula_35
In the Hamiltonian formalism, given a coordinate it is possible to derive the appropriate (generalized) momentum such that the associated volume is preserved by the flow. The volume is said to be computed by the Liouville measure.
In a Hamiltonian system, not all possible configurations of position and momentum can be reached from an initial condition. Because of energy conservation, only the states with the same energy as the initial condition are accessible. The states with the same energy form an energy shell Ω, a sub-manifold of the phase space. The volume of the energy shell, computed using the Liouville measure, is preserved under evolution.
For systems where the volume is preserved by the flow, Poincaré discovered the recurrence theorem: Assume the phase space has a finite Liouville volume and let "F" be a phase space volume-preserving map and "A" a subset of the phase space. Then almost every point of "A" returns to "A" infinitely often. The Poincaré recurrence theorem was used by Zermelo to object to Boltzmann's derivation of the increase in entropy in a dynamical system of colliding atoms.
One of the questions raised by Boltzmann's work was the possible equality between time averages and space averages, what he called the ergodic hypothesis. The hypothesis states that the length of time a typical trajectory spends in a region "A" is vol("A")/vol(Ω).
The ergodic hypothesis turned out not to be the essential property needed for the development of statistical mechanics and a series of other ergodic-like properties were introduced to capture the relevant aspects of physical systems. Koopman approached the study of ergodic systems by the use of functional analysis. An observable "a" is a function that to each point of the phase space associates a number (say instantaneous pressure, or average height). The value of an observable can be computed at another time by using the evolution function φ t. This introduces an operator "U" "t", the transfer operator,
formula_36
By studying the spectral properties of the linear operator "U" it becomes possible to classify the ergodic properties of Φ "t". In using the Koopman approach of considering the action of the flow on an observable function, the finite-dimensional nonlinear problem involving Φ "t" gets mapped into an infinite-dimensional linear problem involving "U".
The Liouville measure restricted to the energy surface Ω is the basis for the averages computed in equilibrium statistical mechanics. An average in time along a trajectory is equivalent to an average in space computed with the Boltzmann factor exp(−β"H"). This idea has been generalized by Sinai, Bowen, and Ruelle (SRB) to a larger class of dynamical systems that includes dissipative systems. SRB measures replace the Boltzmann factor and they are defined on attractors of chaotic systems.
Nonlinear dynamical systems and chaos.
Simple nonlinear dynamical systems and even piecewise linear systems can exhibit a completely unpredictable behavior, which might seem to be random, despite the fact that they are fundamentally deterministic. This seemingly unpredictable behavior has been called "chaos". Hyperbolic systems are precisely defined dynamical systems that exhibit the properties ascribed to chaotic systems. In hyperbolic systems the tangent space perpendicular to a trajectory can be well separated into two parts: one with the points that converge towards the orbit (the "stable manifold") and another of the points that diverge from the orbit (the "unstable manifold").
This branch of mathematics deals with the long-term qualitative behavior of dynamical systems. Here, the focus is not on finding precise solutions to the equations defining the dynamical system (which is often hopeless), but rather to answer questions like "Will the system settle down to a steady state in the long term, and if so, what are the possible attractors?" or "Does the long-term behavior of the system depend on its initial condition?"
The chaotic behavior of complex systems is not the issue. Meteorology has been known for years to involve complex—even chaotic—behavior. Chaos theory has been so surprising because chaos can be found within almost trivial systems. The logistic map is only a second-degree polynomial; the horseshoe map is piecewise linear.
Solutions of Finite Duration.
For non-linear autonomous ODEs it is possible under some conditions to develop solutions of finite duration, meaning here that, from its own dynamics, the system will reach the value zero at an ending time and stay at zero forever after. These finite-duration solutions cannot be analytical functions on the whole real line, and because they are non-Lipschitz functions at their ending time, they are not unique solutions of Lipschitz differential equations.
As an example, the equation:
formula_37
admits the finite-duration solution:
formula_38
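A minimal numerical check (simple forward Euler integration with an arbitrary step size) compares a direct integration of the equation with the closed-form solution and shows that the state reaches zero near the ending time "x" = 2 and stays there:

import math

def exact(x):
    # y(x) = (1/4) * (1 - x/2 + |1 - x/2|)^2
    return 0.25 * (1.0 - x / 2.0 + abs(1.0 - x / 2.0)) ** 2

def sgn(y):
    return (y > 0) - (y < 0)

y, x, dx = 1.0, 0.0, 1e-4
while x < 3.0:
    y += dx * (-sgn(y) * math.sqrt(abs(y)))   # y' = -sgn(y) sqrt(|y|)
    x += dx

print(y, exact(3.0))   # both are essentially zero after the ending time x = 2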
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
Works providing a broad coverage:
Introductory texts with a unique perspective:
Textbooks
Popularizations: | [
{
"math_id": 0,
"text": "\\Phi: U \\subseteq (T \\times X) \\to X"
},
{
"math_id": 1,
"text": "\\mathrm{proj}_{2}(U) = X"
},
{
"math_id": 2,
"text": "\\mathrm{proj}_{2}"
},
{
"math_id": 3,
"text": "\\Phi(0,x) = x"
},
{
"math_id": 4,
"text": "\\Phi(t_2,\\Phi(t_1,x)) = \\Phi(t_2 + t_1, x),"
},
{
"math_id": 5,
"text": "\\, t_1,\\, t_2 + t_1 \\in I(x)"
},
{
"math_id": 6,
"text": "\\ t_2 \\in I(\\Phi(t_1, x)) "
},
{
"math_id": 7,
"text": " I(x) := \\{ t \\in T : (t,x) \\in U \\}"
},
{
"math_id": 8,
"text": " U = T \\times X "
},
{
"math_id": 9,
"text": " I(x) = T "
},
{
"math_id": 10,
"text": "\\Phi_x(t) \\equiv \\Phi(t,x)"
},
{
"math_id": 11,
"text": "\\Phi^t(x) \\equiv \\Phi(t,x)"
},
{
"math_id": 12,
"text": "\\Phi_x:I(x) \\to X"
},
{
"math_id": 13,
"text": "\\gamma_x \\equiv\\{\\Phi(t,x) : t \\in I(x)\\}"
},
{
"math_id": 14,
"text": "\\Phi(t,x) \\in S."
},
{
"math_id": 15,
"text": "I(x) = T"
},
{
"math_id": 16,
"text": " \\langle \\mathcal{T}, \\mathcal{M}, f\\rangle "
},
{
"math_id": 17,
"text": "\\mathcal{T}"
},
{
"math_id": 18,
"text": "\\mathcal{M}"
},
{
"math_id": 19,
"text": "t\\in\\mathcal{T}"
},
{
"math_id": 20,
"text": " \\mathcal{T}"
},
{
"math_id": 21,
"text": "\\Phi^{-1}\\sigma \\in \\Sigma"
},
{
"math_id": 22,
"text": "\\mu(\\Phi^{-1}\\sigma ) = \\mu(\\sigma)"
},
{
"math_id": 23,
"text": "\\Phi^n = \\Phi \\circ \\Phi \\circ \\dots \\circ \\Phi"
},
{
"math_id": 24,
"text": "\\dot{\\boldsymbol{x}}=\\boldsymbol{v}(t,\\boldsymbol{x})"
},
{
"math_id": 25,
"text": "\\boldsymbol{x}|_{{t=0}}=\\boldsymbol{x}_0"
},
{
"math_id": 26,
"text": "\\dot{\\boldsymbol{x}}"
},
{
"math_id": 27,
"text": "\\boldsymbol{{x}}(t)=\\Phi(t,\\boldsymbol{{x}}_0)"
},
{
"math_id": 28,
"text": "\\dot{\\boldsymbol{x}}-\\boldsymbol{v}(t,\\boldsymbol{x})=0 \\qquad\\Leftrightarrow\\qquad \\mathfrak{{G}}\\left(t,\\Phi(t,\\boldsymbol{{x}}_0)\\right)=0"
},
{
"math_id": 29,
"text": "\\mathfrak{G}:{{(T\\times M)}^M}\\to\\mathbf{C}"
},
{
"math_id": 30,
"text": " \\dot{x} = v(x) = A x + b,"
},
{
"math_id": 31,
"text": "\\Phi^t(x_1) = x_1 + b t. "
},
{
"math_id": 32,
"text": "\\Phi^t(x_0) = e^{t A} x_0. "
},
{
"math_id": 33,
"text": " x_{n+1} = A x_n + b, "
},
{
"math_id": 34,
"text": " h^{-1} \\circ F \\circ h(x) = J \\cdot x."
},
{
"math_id": 35,
"text": " \\mathrm{vol} (A) = \\mathrm{vol} ( \\Phi^t(A) ). "
},
{
"math_id": 36,
"text": " (U^t a)(x) = a(\\Phi^{-t}(x)). "
},
{
"math_id": 37,
"text": "y'= -\\text{sgn}(y)\\sqrt{|y|},\\,\\,y(0)=1"
},
{
"math_id": 38,
"text": "y(x)=\\frac{1}{4}\\left(1-\\frac{x}{2}+\\left|1-\\frac{x}{2}\\right|\\right)^2"
}
] | https://en.wikipedia.org/wiki?curid=9087 |
9087850 | Cahn–Hilliard equation | Description of phase separation
The Cahn–Hilliard equation (after John W. Cahn and John E. Hilliard) is an equation of mathematical physics which describes the process of phase separation, spinodal decomposition, by which the two components of a binary fluid spontaneously separate and form domains pure in each component. If formula_0 is the concentration of the fluid, with formula_1 indicating domains, then the equation is written as
formula_2
where formula_3 is a diffusion coefficient with units of formula_4 and formula_5 gives the length of the transition regions between the domains. Here formula_6 is the partial time derivative and formula_7 is the Laplacian in formula_8 dimensions. Additionally, the quantity formula_9 is identified as a chemical potential.
Related to it is the Allen–Cahn equation, as well as the stochastic Allen–Cahn and the stochastic Cahn–Hilliard equations.
Features and applications.
Of interest to mathematicians is the existence of a unique solution of the Cahn–Hilliard equation, given by smooth initial data. The proof relies essentially on the existence of a Lyapunov functional. Specifically, if we identify
formula_10
as a free energy functional, then
formula_11
so that the free energy does not grow in time. This also indicates segregation into domains is the asymptotic outcome of the evolution of this equation.
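A minimal numerical illustration (an explicit finite-difference sketch in one dimension with periodic boundaries, assuming NumPy; the grid, time step and parameter values are arbitrary choices) integrates the equation from small random initial data and prints the free energy formula_10, which should not grow in time:

import numpy as np

rng = np.random.default_rng(0)
N, dx, dt, D, gamma = 128, 1.0, 0.01, 1.0, 1.0
c = 0.1 * rng.standard_normal(N)             # small random perturbation of the mixed state c = 0

def lap(u):                                  # periodic 1-D Laplacian
    return (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2

def free_energy(u):
    grad = (np.roll(u, -1) - u) / dx
    return np.sum(0.25 * (u**2 - 1.0)**2 + 0.5 * gamma * grad**2) * dx

for step in range(20001):
    mu = c**3 - c - gamma * lap(c)           # chemical potential
    c = c + dt * D * lap(mu)                 # explicit Euler update of the Cahn-Hilliard equation
    if step % 5000 == 0:
        print(step, free_energy(c))          # the free energy should not increase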
In real experiments, the segregation of an initially mixed binary fluid into domains is observed. The segregation is characterized by the following facts.
The Cahn–Hilliard equation finds applications in diverse fields: in complex fluids and soft matter (interfacial fluid flow, polymer science and industrial applications). The solution of the Cahn–Hilliard equation for a binary mixture has been demonstrated to coincide well with the solution of a Stefan problem and the model of Thomas and Windle. Of interest to researchers at present is the coupling of the phase separation of the Cahn–Hilliard equation to the Navier–Stokes equations of fluid flow.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "c"
},
{
"math_id": 1,
"text": "c=\\pm1"
},
{
"math_id": 2,
"text": "\\frac{\\partial c}{\\partial t} = D\\nabla^2\\left(c^3-c-\\gamma\\nabla^2 c\\right),"
},
{
"math_id": 3,
"text": "D"
},
{
"math_id": 4,
"text": "\\text{Length}^2/\\text{Time}"
},
{
"math_id": 5,
"text": "\\sqrt{\\gamma}"
},
{
"math_id": 6,
"text": "\\partial/{\\partial t}"
},
{
"math_id": 7,
"text": "\\nabla^2"
},
{
"math_id": 8,
"text": "n"
},
{
"math_id": 9,
"text": "\\mu = c^3-c-\\gamma\\nabla^2 c"
},
{
"math_id": 10,
"text": "F[c]=\\int d^n x \\left[\\frac{1}{4}\\left(c^2-1\\right)^2+\\frac{\\gamma}{2}\\left|\\nabla c\\right|^2\\right],"
},
{
"math_id": 11,
"text": "\\frac{d F}{dt} = -\\int d^n x \\left|\\nabla\\mu\\right|^2,"
},
{
"math_id": 12,
"text": "c(x) = \\tanh\\left(\\frac{x}{\\sqrt{2\\gamma}}\\right),"
},
{
"math_id": 13,
"text": "L(t)"
},
{
"math_id": 14,
"text": "L(t)\\propto t^{1/3}"
},
{
"math_id": 15,
"text": "\\frac{\\partial c}{\\partial t} = -\\nabla \\cdot \\mathbf{j}(x),"
},
{
"math_id": 16,
"text": "\\mathbf{j}(x)=-D\\nabla \\mu"
},
{
"math_id": 17,
"text": "C=\\int d^n x c\\left(x,t\\right)"
},
{
"math_id": 18,
"text": "\\frac{dC}{dt}=0"
}
] | https://en.wikipedia.org/wiki?curid=9087850 |
9089819 | Buckley–Leverett equation | Conservation law for two-phase flow in porous media
In fluid dynamics, the Buckley–Leverett equation is a conservation equation used to model two-phase flow in porous media. The Buckley–Leverett equation or the Buckley–Leverett "displacement" describes an immiscible displacement process, such as the displacement of oil by water, in a one-dimensional or quasi-one-dimensional reservoir. This equation can be derived from the mass conservation equations of two-phase flow, under the assumptions listed below.
Equation.
In a quasi-1D domain, the Buckley–Leverett equation is given by:
formula_0
where formula_1 is the wetting-phase (water) saturation, formula_2 is the total flow rate, formula_3 is the rock porosity, formula_4 is the area of the cross-section in the sample volume, and formula_5 is the fractional flow function of the wetting phase. Typically, formula_5 is an S-shaped, nonlinear function of the saturation formula_6, which characterizes the relative mobilities of the two phases:
formula_7
where formula_8 and formula_9 denote the wetting and non-wetting phase mobilities. formula_10 and formula_11 denote the relative permeability functions of each phase and formula_12 and formula_13 represent the phase viscosities.
Assumptions.
The Buckley–Leverett equation is derived based on the following assumptions:
General solution.
The characteristic velocity of the Buckley–Leverett equation is given by:
formula_14
The hyperbolic nature of the equation implies that the solution of the Buckley–Leverett equation has the form formula_15, where formula_16 is the characteristic velocity given above. The non-convexity of the fractional flow function formula_5 also gives rise to the well known Buckley-Leverett profile, which consists of a shock wave immediately followed by a rarefaction wave.
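A small Python sketch of these quantities is given below. The quadratic (Corey-type) relative-permeability curves and all numerical values are assumptions chosen only to illustrate the S-shaped fractional flow function and the characteristic velocity; they are not specified in the article.

```python
import numpy as np

# Fractional flow f_w(S_w) and characteristic velocity U(S_w) = (Q / (phi*A)) * df_w/dS_w.
mu_w, mu_n = 1.0e-3, 5.0e-3            # assumed phase viscosities [Pa s]
Q, phi, A = 1.0e-4, 0.2, 1.0           # assumed total rate [m^3/s], porosity, area [m^2]

def f_w(Sw):
    krw = Sw**2                         # assumed wetting relative permeability
    krn = (1.0 - Sw)**2                 # assumed non-wetting relative permeability
    lam_w, lam_n = krw / mu_w, krn / mu_n
    return lam_w / (lam_w + lam_n)

Sw = np.linspace(0.01, 0.99, 99)
U = Q / (phi * A) * np.gradient(f_w(Sw), Sw)   # numerical df_w/dS_w

i = int(np.argmax(U))
print(f"f_w(0.2) = {f_w(0.2):.3f}, f_w(0.5) = {f_w(0.5):.3f}, f_w(0.8) = {f_w(0.8):.3f}")
print(f"fastest characteristic near S_w = {Sw[i]:.2f}, U = {U[i]:.3e} m/s")
```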
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\frac{\\partial S_w}{\\partial t} + \\frac{\\partial}{\\partial x}\\left( \\frac{Q}{\\phi A} f_w(S_w) \\right) = 0,\n"
},
{
"math_id": 1,
"text": "S_w(x,t)"
},
{
"math_id": 2,
"text": "Q"
},
{
"math_id": 3,
"text": "\\phi"
},
{
"math_id": 4,
"text": "A"
},
{
"math_id": 5,
"text": "f_w(S_w)"
},
{
"math_id": 6,
"text": "S_w"
},
{
"math_id": 7,
"text": "\nf_w(S_w) = \\frac{\\lambda_w}{\\lambda_w + \\lambda_n} = \\frac{ \\frac{k_{rw}}{\\mu_w} }{ \\frac{k_{rw}}{\\mu_w} + \\frac{k_{rn}}{\\mu_n} },\n"
},
{
"math_id": 8,
"text": "\\lambda_w"
},
{
"math_id": 9,
"text": "\\lambda_n"
},
{
"math_id": 10,
"text": "k_{rw}(S_w)"
},
{
"math_id": 11,
"text": "k_{rn}(S_w)"
},
{
"math_id": 12,
"text": "\\mu_w"
},
{
"math_id": 13,
"text": "\\mu_n"
},
{
"math_id": 14,
"text": "U(S_w) = \\frac{Q}{\\phi A} \\frac{\\mathrm{d} f_w}{\\mathrm{d} S_w}."
},
{
"math_id": 15,
"text": "S_w(x,t) = S_w(x - U t)"
},
{
"math_id": 16,
"text": "U"
}
] | https://en.wikipedia.org/wiki?curid=9089819 |
9089928 | Cardiac shunt | Blood flow pattern in the heart which deviates from normal circulation
In cardiology, a cardiac shunt is a pattern of blood flow in the heart that deviates from the normal circuit of the circulatory system. It may be described as right-left, left-right or bidirectional, or as systemic-to-pulmonary or pulmonary-to-systemic. The direction may be controlled by left and/or right heart pressure, a biological or artificial heart valve or both. The presence of a shunt may also affect left and/or right heart pressure either beneficially or detrimentally.
Terminology.
The left and right sides of the heart are named from a dorsal view, i.e., looking at the heart from the back or from the perspective of the person whose heart it is. There are four chambers in a heart: an atrium (upper) and a ventricle (lower) on both the left and right sides. In mammals and birds, blood from the body goes to the right side of the heart first. Blood enters the upper right atrium, is pumped down to the right ventricle and from there to the lungs via the pulmonary artery. Blood going to the lungs is called the pulmonary circulation. When the blood returns to the heart from the lungs via the pulmonary vein, it goes to the left side of the heart, entering the upper left atrium. Blood is then pumped to the lower left ventricle and from there out of the heart to the body via the aorta. This is called the systemic circulation. A cardiac shunt is when blood follows a pattern that deviates from the systemic circulation, i.e., from the body to the right atrium, down to the right ventricle, to the lungs, from the lungs to the left atrium, down to the left ventricle and then out of the heart back to the systemic circulation.
A left-to-right shunt is when blood from the left side of the heart goes to the right side of the heart. This can occur either through a hole in the ventricular or atrial septum that divides the left and the right heart or through a hole in the walls of the arteries leaving the heart, called great vessels. Left-to-right shunts occur when the systolic blood pressure in the left heart is higher than the right heart, which is the normal condition in birds and mammals.
Congenital shunts in humans.
The most common congenital heart defects (CHDs) which cause shunting are atrial septal defects (ASD), patent foramen ovale (PFO), ventricular septal defects (VSD), and patent ductus arteriosi (PDA). In isolation, these defects may be asymptomatic, or they may produce symptoms which can range from mild to severe, and which can either have an acute or a delayed onset. However, these shunts are often present in combination with other defects; in these cases, they may still be asymptomatic, mild or severe, acute or delayed, but they may also work to counteract the negative symptoms caused by another defect (as with d-Transposition of the great arteries).
Acquired shunts in humans.
Biological.
Some acquired shunts are modifications of congenital ones: a balloon septostomy can enlarge a foramen ovale (if performed on a newborn), PFO or ASD; or prostaglandin can be administered to a newborn to prevent the ductus arteriosus from closing. Biological tissues may also be used to construct artificial passages.
Evaluation can be done during a cardiac catheterization with a "shunt run" by taking blood samples from the superior vena cava (SVC), inferior vena cava (IVC), right atrium, right ventricle, pulmonary artery, and systemic arterial circulation. Abrupt increases in oxygen saturation support a left-to-right shunt, and lower-than-normal systemic arterial oxygen saturation supports a right-to-left shunt.
Samples from the SVC & IVC are used to calculate mixed venous oxygen saturation using the Flamm formula
formula_0
and Qp:Qs ratio
formula_1
where formula_2 is the pulmonary venous saturation, formula_3 is the pulmonary arterial saturation, formula_4 is the systemic arterial saturation, and formula_5 is the mixed-venous saturation. The Qp:Qs ratio is based upon the Fick principle; it reduces to the above equation and eliminates the need to know the cardiac output and hemoglobin concentration.
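A minimal Python sketch of the calculation is shown below. The saturation values are made-up illustrative numbers for a left-to-right shunt, not data from the article.

```python
def flamm_mixed_venous(svc, ivc):
    """Mixed-venous saturation estimated as (3*SVC + IVC)/4 (Flamm formula)."""
    return 0.75 * svc + 0.25 * ivc

def qp_qs(pv, pa, sa, sv):
    """Qp:Qs from saturations: (S_A - S_V) / (P_V - P_A)."""
    return (sa - sv) / (pv - pa)

svc, ivc = 0.70, 0.74        # caval samples (illustrative)
pa = 0.85                    # pulmonary artery, stepped up by a left-to-right shunt
pv, sa = 0.98, 0.97          # pulmonary vein and systemic arterial

sv = flamm_mixed_venous(svc, ivc)
print(f"mixed venous = {sv:.2f}, Qp:Qs = {qp_qs(pv, pa, sa, sv):.1f}")   # about 2:1
```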
Mechanical.
Mechanical shunts such as the Blalock-Taussig shunt are used in some cases of CHD to control blood flow or blood pressure.
Reptile.
All reptiles have the capacity for cardiac shunts.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S_vO_2 = \\frac{3}{4} \\times SVC + \\frac{1}{4} \\times IVC"
},
{
"math_id": 1,
"text": "Qp:Qs = \\frac{\\text{change in oxygen concentration across the pulmonary circulation}}{\\text{change in oxygen concentration across the systemic circulation}} = \\frac{P_V - P_A}{S_A - S_V}"
},
{
"math_id": 2,
"text": "P_V"
},
{
"math_id": 3,
"text": "P_A"
},
{
"math_id": 4,
"text": "S_A"
},
{
"math_id": 5,
"text": "S_V"
}
] | https://en.wikipedia.org/wiki?curid=9089928 |
909019 | Sun-synchronous orbit | Type of geocentric orbit
A Sun-synchronous orbit (SSO), also called a heliosynchronous orbit, is a nearly polar orbit around a planet, in which the satellite passes over any given point of the planet's surface at the same local mean solar time. More technically, it is an orbit arranged so that it precesses through one complete revolution each year, so it always maintains the same relationship with the Sun.
Applications.
A Sun-synchronous orbit is useful for imaging, reconnaissance, and weather satellites, because every time that the satellite is overhead, the surface illumination angle on the planet underneath it is nearly the same. This consistent lighting is a useful characteristic for satellites that image the Earth's surface in visible or infrared wavelengths, such as weather and spy satellites, and for other remote-sensing satellites, such as those carrying ocean and atmospheric remote-sensing instruments that require sunlight. For example, a satellite in Sun-synchronous orbit might ascend across the equator twelve times a day, each time at approximately 15:00 mean local time.
Special cases of the Sun-synchronous orbit are the noon/midnight orbit, where the local mean solar time of passage for equatorial latitudes is around noon or midnight, and the dawn/dusk orbit, where the local mean solar time of passage for equatorial latitudes is around sunrise or sunset, so that the satellite rides the terminator between day and night. Riding the terminator is useful for active radar satellites, as the satellites' solar panels can always see the Sun, without being shadowed by the Earth. It is also useful for some satellites with passive instruments that need to limit the Sun's influence on the measurements, as it is possible to always point the instruments towards the night side of the Earth. The dawn/dusk orbit has been used for solar-observing scientific satellites such as TRACE, Hinode and PROBA-2, affording them a nearly continuous view of the Sun.
Orbital precession.
A Sun-synchronous orbit is achieved by having the osculating orbital plane precess (rotate) approximately one degree eastward each day with respect to the celestial sphere to keep pace with the Earth's movement around the Sun. This precession is achieved by tuning the inclination to the altitude of the orbit (see Technical details) such that Earth's equatorial bulge, which perturbs inclined orbits, causes the orbital plane of the spacecraft to precess with the desired rate. The plane of the orbit is not fixed in space relative to the distant stars, but rotates slowly about the Earth's axis.
Typical Sun-synchronous orbits around Earth are about in altitude, with periods in the 96–100-minute range, and inclinations of around 98°. This is slightly retrograde compared to the direction of Earth's rotation: 0° represents an equatorial orbit, and 90° represents a polar orbit.
Sun-synchronous orbits are possible around other oblate planets, such as Mars. A satellite orbiting a planet such as Venus that is almost spherical will need an outside push to maintain a Sun-synchronous orbit.
Technical details.
The angular precession per orbit for an Earth orbiting satellite is approximately given by
formula_0
where
"J"2
is the coefficient for the second zonal term related to the oblateness of the Earth,
"R"E ≈ 6378 km is the mean radius of the Earth,
p is the semi-latus rectum of the orbit,
i is the inclination of the orbit to the equator.
An orbit will be Sun-synchronous when the precession rate "ρ" equals the mean motion of the Earth about the Sun "n"E, which is 360° per sidereal year (), so we must set "n"E = "ρ", where TE is the Earth's orbital period, while T is the period of the spacecraft around the Earth.
As the orbital period of a spacecraft is
formula_1
where a is the semi-major axis of the orbit, and μ is the standard gravitational parameter of the planet ( for Earth); as "p" ≈ "a" for a circular or almost circular orbit, it follows that
formula_2
or when ρ is 360° per year,
formula_3
As an example, with a = , i.e., for an altitude "a" − "R"E ≈ of the spacecraft over Earth's surface, this formula gives a Sun-synchronous inclination of 98.7°.
Note that according to this approximation cos "i" equals −1 when the semi-major axis equals , which means that only lower orbits can be Sun-synchronous. The period can be in the range from 88 minutes for a very low orbit (a = , i = 96°) to 3.8 hours (a = , but this orbit would be equatorial, with i = 180°). A period longer than 3.8 hours may be possible by using an eccentric orbit with p < but a > .
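A short Python sketch of these formulas is given below; the 800 km altitude is just an illustrative input, and the gravitational parameter and 12 352 km reference length are the standard values used above.

```python
import math

MU_EARTH = 398600.4418          # km^3/s^2, Earth's standard gravitational parameter
R_EARTH = 6378.0                # km, Earth radius value used in the article

def sun_sync_inclination(altitude_km):
    """Inclination from cos(i) = -(a / 12352 km)^(7/2); raises if none exists."""
    a = R_EARTH + altitude_km
    cos_i = -((a / 12352.0) ** 3.5)
    if cos_i < -1.0:
        raise ValueError("no Sun-synchronous inclination for this altitude")
    return math.degrees(math.acos(cos_i)), a

def period_minutes(a_km):
    """Keplerian period T = 2*pi*sqrt(a^3 / mu), in minutes."""
    return 2.0 * math.pi * math.sqrt(a_km**3 / MU_EARTH) / 60.0

inc, a = sun_sync_inclination(800.0)
print(f"a = {a:.0f} km, inclination = {inc:.1f} deg, period = {period_minutes(a):.1f} min")
```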
If one wants a satellite to fly over some given spot on Earth every day at the same hour, the satellite must complete a whole number of orbits per day. Assuming a circular orbit, this comes down to between 7 and 16 orbits per day, as doing less than 7 orbits would require an altitude above the maximum for a Sun-synchronous orbit, and doing more than 16 would require an orbit inside the Earth's atmosphere or surface. The resulting valid orbits are shown in the following table. (The table has been calculated assuming the periods given. The orbital period that should be used is actually slightly longer. For instance, a retrograde equatorial orbit that passes over the same spot after 24 hours has a true period about ≈ 1.0027 times longer than the time between overpasses. For non-equatorial orbits the factor is closer to 1.)
When one says that a Sun-synchronous orbit goes over a spot on the Earth at the same "local time" each time, this refers to mean solar time, not to apparent solar time. The Sun will not be in exactly the same position in the sky during the course of the year (see Equation of time and Analemma).
Sun-synchronous orbits are mostly selected for Earth observation satellites, with an altitude typically between 600 and over the Earth surface. Even if an orbit remains Sun-synchronous, however, other orbital parameters such as argument of periapsis and the orbital eccentricity evolve, due to higher-order perturbations in the Earth's gravitational field, the pressure of sunlight, and other causes. Earth observation satellites, in particular, prefer orbits with constant altitude when passing over the same spot. Careful selection of eccentricity and location of perigee reveals specific combinations where the rate of change of perturbations are minimized, and hence the orbit is relatively stable – a frozen orbit, where the motion of position of the periapsis is stable. The ERS-1, ERS-2 and Envisat of European Space Agency, as well as the MetOp spacecraft of EUMETSAT and RADARSAT-2 of the Canadian Space Agency, are all operated in such Sun-synchronous frozen orbits.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Delta \\Omega = -3\\pi \\frac{J_2 R_\\text{E}^2}{p^2} \\cos i,"
},
{
"math_id": 1,
"text": "T = 2\\pi \\sqrt{\\frac{a^3}{\\mu}},"
},
{
"math_id": 2,
"text": "\\begin{align}\n \\rho &\\approx -\\frac{3J_2 R_\\text{E}^2 \\sqrt{\\mu}\\cos i}{2a^{7/2}} \\\\\n &= -(360^\\circ\\text{ per year}) \\times \\left(\\frac{a}{12\\,352\\text{ km}}\\right)^{-7/2} \\cos i \\\\\n &= -(360^\\circ\\text{ per year}) \\times \\left(\\frac{T}{3.795\\text{ h}}\\right)^{-7/3} \\cos i,\n\\end{align}"
},
{
"math_id": 3,
"text": "\n \\cos i \\approx -\\frac{2\\rho}{3 J_2 R_\\text{E}^2 \\sqrt{\\mu}} a^{7/2} =\n -\\left(\\frac{a}{12\\,352\\text{ km}}\\right)^{7/2} =\n -\\left(\\frac{T}{3.795\\text{ h}}\\right)^{7/3}.\n"
}
] | https://en.wikipedia.org/wiki?curid=909019 |
9090619 | Stern–Volmer relationship | Relationship describing the kinetics of intermolecular photochemical deactivation
The Stern–Volmer relationship, named after Otto Stern and Max Volmer, allows the kinetics of a photophysical "intermolecular" deactivation process to be explored.
Processes such as fluorescence and phosphorescence are examples of "intramolecular" deactivation (quenching) processes. An "intermolecular" deactivation is where the presence of another chemical species can accelerate the decay rate of a chemical in its excited state. In general, this process can be represented by a simple equation:
formula_0
or
formula_1
where A is one chemical species, Q is another (known as a quencher) and * designates an excited state.
The kinetics of this process follows the Stern–Volmer relationship:
formula_2
Where formula_3 is the intensity, or rate of fluorescence, without a quencher, formula_4 is the intensity, or rate of fluorescence, with a quencher, formula_5 is the quencher rate coefficient, formula_6 is the lifetime of the emissive excited state of A without a quencher present, and formula_7 is the concentration of the quencher.
For "diffusion-limited" quenching ("i.e.", quenching in which the time for quencher particles to diffuse toward and collide with excited particles is the limiting factor, and almost all such collisions are effective), the quenching rate coefficient is given by formula_8, where formula_9 is the ideal gas constant, formula_10 is temperature in kelvins and formula_11 is the viscosity of the solution. This formula is derived from the Stokes–Einstein relation and is only useful in this form in the case of two spherical particles of identical radius that react every time they approach a distance R, which is equal to the sum of their two radii. The more general expression for the diffusion limited rate constant is
formula_12
Where formula_13 and formula_14 are the radii of the two molecules and formula_15 is an approach distance at which unity reaction efficiency is expected (this is an approximation).
In reality, only a fraction of the collisions with the quencher are effective at quenching, so the true quenching rate coefficient must be determined experimentally.
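In practice, the ratio formula_3/formula_4 is plotted against the quencher concentration; the slope of the resulting line is the Stern–Volmer constant formula_5·formula_6, from which formula_5 follows once the unquenched lifetime is known. Below is a small Python sketch of such a fit; the lifetime, quencher concentrations and intensity ratios are synthetic illustrative numbers, not measurements from the article.

```python
import numpy as np

# Fit of synthetic Stern-Volmer data: I0/I = 1 + K_SV * [Q], with K_SV = k_q * tau_0.
tau0 = 5.0e-9                                            # s, assumed unquenched lifetime
Q = np.array([0.0, 0.02, 0.04, 0.06, 0.08, 0.10])        # mol/L
ratio = np.array([1.00, 1.41, 1.79, 2.22, 2.58, 3.05])   # synthetic I0/I values

K_SV = np.polyfit(Q, ratio - 1.0, 1)[0]                  # slope, L/mol
k_q = K_SV / tau0                                        # L mol^-1 s^-1
print(f"K_SV = {K_SV:.1f} L/mol, k_q = {k_q:.2e} L mol^-1 s^-1")

# Diffusion-limited estimate k_q = 8RT/(3*eta), here for water near room temperature.
R_gas, T, eta = 8.314, 298.0, 8.9e-4                     # SI units (eta of water assumed)
print(f"diffusion-limited k_q = {8*R_gas*T/(3*eta)*1e3:.2e} L mol^-1 s^-1")
```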
See also.
Optode, a chemical sensor that makes use of this relationship | [
{
"math_id": 0,
"text": "\n\\mathrm{A}^* + \\mathrm{Q} \\rightarrow \\mathrm{A} + \\mathrm{Q}\n"
},
{
"math_id": 1,
"text": "\n\\mathrm{A}^* + \\mathrm{Q} \\rightarrow \\mathrm{A} + \\mathrm{Q}^*\n"
},
{
"math_id": 2,
"text": "\n\\frac{I_f^0}{I_f} = 1+k_q\\tau_0\\cdot[\\mathrm{Q}]\n"
},
{
"math_id": 3,
"text": "I_f^0"
},
{
"math_id": 4,
"text": "I_f"
},
{
"math_id": 5,
"text": "k_q"
},
{
"math_id": 6,
"text": "\\tau_0"
},
{
"math_id": 7,
"text": "[\\mathrm{Q}]"
},
{
"math_id": 8,
"text": "k_q = {8RT}/{3\\eta}"
},
{
"math_id": 9,
"text": "R"
},
{
"math_id": 10,
"text": "T"
},
{
"math_id": 11,
"text": "\\eta"
},
{
"math_id": 12,
"text": "k_q = \\frac{2RT}{3\\eta}[\\frac{r_b + r_a}{r_br_a}]d_{cc}"
},
{
"math_id": 13,
"text": "r_a"
},
{
"math_id": 14,
"text": "r_b"
},
{
"math_id": 15,
"text": "d_{cc}"
}
] | https://en.wikipedia.org/wiki?curid=9090619 |
909227 | Risk factor | Variable associated with an increased risk of disease or infection
In epidemiology, a risk factor or determinant is a variable associated with an increased risk of disease or infection.
Due to a lack of harmonization across disciplines, determinant, in its more widely accepted scientific meaning, is often used as a synonym. The main difference lies in the realm of practice: medicine (clinical practice) versus public health. As an example from clinical practice, low ingestion of dietary sources of vitamin C is a known risk factor for developing scurvy. Specific to public health policy, a determinant is a health risk that is general, abstract, related to inequalities, and difficult for an individual to control. For example, poverty is known to be a determinant of an individual's standard of health.
Risk factors may be used to identify high-risk people.
Correlation vs causation.
Risk factors or determinants are correlational and not necessarily causal, because correlation does not prove causation. For example, being young cannot be said to cause measles, but young people have a higher rate of measles because they are less likely to have developed immunity during a previous epidemic. Statistical methods are frequently used to assess the strength of an association and to provide causal evidence, for example in the study of the link between smoking and lung cancer. Statistical analysis along with the biological sciences can establish that risk factors are causal. Some prefer the term risk factor to mean causal determinants of increased rates of disease, and for unproven links to be called possible risks, associations, etc.
When done thoughtfully and based on research, identification of risk factors can be a strategy for medical screening.
Terms of description.
Mainly taken from risk factors for breast cancer, risk factors can be described in terms of, for example:
Example.
At a wedding, 74 people ate the chicken and 22 of them were ill, while of the 35 people who had the fish or vegetarian meal only 2 were ill. Did the chicken make the people ill?
formula_0
So the chicken eaters' risk = 22/74 = 0.297
And non-chicken eaters' risk = 2/35 = 0.057.
Those who ate the chicken had a risk over five times as high as those who did not, that is, a relative risk of more than five. This suggests that eating chicken was the cause of the illness, but this is "not" proof.
This example of a risk factor is described in terms of the relative risk it confers, which is evaluated by comparing the risk of those exposed to the potential risk factor to those not exposed.
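The arithmetic of the example can be reproduced with a few lines of Python:

```python
# Risk among the exposed (chicken eaters) and unexposed, and the relative risk,
# using the counts from the wedding example above.
chicken_ill, chicken_total = 22, 74
other_ill, other_total = 2, 35

risk_exposed = chicken_ill / chicken_total      # 0.297
risk_unexposed = other_ill / other_total        # 0.057
print(f"relative risk = {risk_exposed / risk_unexposed:.1f}")   # > 5
```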
General determinants.
The probability of an outcome usually depends on an interplay between multiple associated variables. When performing epidemiological studies to evaluate one or more determinants for a specific outcome, the other determinants may act as confounding factors, and need to be controlled for, e.g. by stratification. The potentially confounding determinants vary with the outcome studied, but the following general confounders are common to most epidemiological associations, and are the determinants most commonly controlled for in epidemiological studies:
Other less commonly adjusted for possible confounders include:
Risk marker.
A "risk marker" is a variable that is quantitatively associated with a disease or other outcome, but direct alteration of the risk marker does not necessarily alter the risk of the outcome. For example, driving-while-intoxicated (DWI) history is a risk marker for pilots as epidemiologic studies indicate that pilots with a DWI history are significantly more likely than their counterparts without a DWI history to be involved in aviation crashes.
History.
The term "risk factor" was coined by former Framingham Heart Study director, William B. Kannel in a 1961 article in "Annals of Internal Medicine".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Risk = \\frac {\\mbox{number of persons experiencing event (food poisoning)}} {\\mbox{number of persons exposed to risk factor (food)}}"
}
] | https://en.wikipedia.org/wiki?curid=909227 |
9092737 | Third derivative | Rate of change of the second derivative
In calculus, a branch of mathematics, the third derivative or third-order derivative is the rate at which the second derivative, or the rate of change of the rate of change, is changing. The third derivative of a function formula_0 can be denoted by
formula_1
Other notations for differentiation can be used, but the above are the most common.
Mathematical definitions.
Let formula_2. Then formula_3 and formula_4. Therefore, the third derivative of "f" is, in this case,
formula_5
or, using Leibniz notation,
formula_6
Now for a more general definition. Let "f" be any function of "x" such that "f" ′′ is differentiable. Then the third derivative of "f" is given by
formula_7
The third derivative is the rate at which the second derivative ("f"′′("x")) is changing.
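The worked example above can be checked symbolically; the snippet below uses SymPy purely as an illustration.

```python
import sympy as sp

x = sp.symbols('x')
f = x**4

print(sp.diff(f, x, 3))                 # 24*x, the third derivative of x**4

# The same result via the definition f''' = (f'')' :
second = sp.diff(f, x, 2)               # 12*x**2
print(sp.diff(second, x))               # 24*x
```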
Applications in geometry.
In differential geometry, the torsion of a curve — a fundamental property of curves in three dimensions — is computed using third derivatives of coordinate functions (or the position vector) describing the curve.
Applications in physics.
In physics, particularly kinematics, jerk is defined as the third derivative of the position function of an object. It is, essentially, the rate at which acceleration changes. In mathematical terms:
formula_8
where j("t") is the jerk function with respect to time, and r("t") is the position function of the object with respect to time.
Economic examples.
When campaigning for a second term in office, U.S. President Richard Nixon announced that the rate of increase of inflation was decreasing, which has been noted as "the first time a sitting president used the third derivative to advance his case for reelection." Since inflation is itself a derivative—the rate at which the purchasing power of money decreases—then the rate of increase of inflation is the derivative of inflation, opposite in sign to the second time derivative of the purchasing power of money. Stating that a function is decreasing is equivalent to stating that its derivative is negative, so Nixon's statement is that the second derivative of inflation is negative, and so the third derivative of purchasing power is positive.
Since Nixon's statement allowed for the rate of inflation to increase, his statement did not necessarily indicate price stability.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "y = f(x)"
},
{
"math_id": 1,
"text": "\\frac{d^3y}{dx^3},\\quad f'''(x),\\quad\\text{or }\\frac{d^3}{dx^3}[f(x)]."
},
{
"math_id": 2,
"text": "f(x) = x^4"
},
{
"math_id": 3,
"text": "f'(x) = 4x^3"
},
{
"math_id": 4,
"text": "f''(x) = 12x^2"
},
{
"math_id": 5,
"text": "f'''(x) = 24x"
},
{
"math_id": 6,
"text": "\\frac{d^3}{dx^3}[x^4] = 24x."
},
{
"math_id": 7,
"text": "\\frac{d^3}{dx^3}[f(x)] = \\frac{d}{dx}[f''(x)]."
},
{
"math_id": 8,
"text": "\\mathbf{j}(t)=\\frac{d^3\\mathbf{r}}{dt^3}"
}
] | https://en.wikipedia.org/wiki?curid=9092737 |
9093213 | Defense-Independent Component ERA | Defense-Independent Component ERA (DICE) is a 21st-century variation on Component ERA, one of an increasing number of baseball sabermetrics that fall under the umbrella of defense independent pitching statistics. DICE was created by Clay Dreslough in 2001.
The formula for Defense-Independent Component ERA (DICE) is:
formula_0
In that equation, "HR" is home runs, "BB" is walks, "HBP" is hit batters, "K" is strikeouts, and "IP" is innings pitched. That equation gives a number that is better at predicting a pitcher's ERA in the following year than the pitcher's actual ERA in the current year.
Component ERA was created by Bill James to create a more accurate way of evaluating pitchers than earned run average (ERA). Whereas ERA is significantly affected by luck (such as whether the component hits are allowed consecutively), Component ERA eliminates this factor and assigns a weight to each of the recorded 'components' of a pitchers performance. For CERA, these are singles, doubles, triples, home runs, walks and hit batters.
DICE is an improvement on CERA that removes the contribution of the pitcher's defense, instead estimating a pitcher's ERA from the components of his pitching record that don't involve defense: home runs, walks, hit batters and strikeouts.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\text{DICE} =3.00 + \\frac {13HR + 3(BB + HBP) - 2K}{IP}"
}
] | https://en.wikipedia.org/wiki?curid=9093213 |
9093977 | Viktor Wagner | Russian mathematician (1908–1981)
Viktor Vladimirovich Wagner, also Vagner () (4 November 1908 – 15 August 1981) was a Russian mathematician, best known for his work in differential geometry and on semigroups.
Wagner was born in Saratov and studied at Moscow State University, where Veniamin Kagan was his advisor. He became the first geometry chair at Saratov State University. He received the Lobachevsky Medal in 1937.
Wagner was also awarded "the Order of Lenin, the Order of the Red Banner, and the title of Honoured Scientist RSFSR. Moreover, he was also accorded that rarest of privileges in the USSR: permission to travel abroad."
Wagner is credited with noting that the collection of partial transformations on a set "X" forms a semigroup formula_0 which is a subsemigroup of the semigroup formula_1 of binary relations on the same set "X", where the semigroup operation is composition of relations. "This simple unifying observation, which is nevertheless an important psychological hurdle, is attributed by Schein (1986) to V.V. Wagner."
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal{PT}_X"
},
{
"math_id": 1,
"text": "\\mathcal{B}_X"
}
] | https://en.wikipedia.org/wiki?curid=9093977 |
909422 | Antimagic square | Mathematical object
An antimagic square of order "n" is an arrangement of the numbers 1 to "n"2 in a square, such that the sums of the "n" rows, the "n" columns and the two diagonals form a sequence of 2"n" + 2 consecutive integers. The smallest antimagic squares have order 4. Antimagic squares contrast with magic squares, where each row, column, and diagonal sum must have the same value.
Examples.
Order 4 antimagic squares.
In both of these antimagic squares of order 4, the rows, columns and diagonals sum to ten different numbers in the range 29–38.
Order 5 antimagic squares.
In the antimagic square of order 5 on the left, the rows, columns and diagonals sum up to numbers between 60 and 71. In the antimagic square on the right, the rows, columns and diagonals add up to numbers in the range 59–70.
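The defining property is easy to check by machine. The sketch below verifies one known order-4 antimagic square (its ten sums cover the consecutive range 29–38 quoted above); the particular matrix is given here for illustration and may differ from the squares referenced in this section.

```python
import numpy as np

def is_antimagic(square):
    """True if the square uses 1..n^2 and its 2n+2 sums are consecutive integers."""
    sq = np.asarray(square)
    n = sq.shape[0]
    sums = list(sq.sum(axis=0)) + list(sq.sum(axis=1))
    sums += [np.trace(sq), np.trace(np.fliplr(sq))]
    sums = sorted(int(s) for s in sums)
    consecutive = all(b - a == 1 for a, b in zip(sums, sums[1:]))
    uses_1_to_n2 = sorted(int(v) for v in sq.flatten()) == list(range(1, n * n + 1))
    return uses_1_to_n2 and consecutive

example = [[ 2, 15,  5, 13],
           [16,  3,  7, 12],
           [ 9,  8, 14,  1],
           [ 6,  4, 11, 10]]
print(is_antimagic(example))   # True: the sums are 29, 30, ..., 38
```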
Generalizations.
A sparse antimagic square (SAM) is a square matrix of size "n" by "n" of nonnegative integers whose nonzero entries are the consecutive integers formula_0 for some formula_1, and whose row-sums and column-sums constitute a set of consecutive integers. If the diagonals are included in the set of consecutive integers, the array is known as a sparse totally anti-magic square (STAM). Note that a STAM is not necessarily a SAM, and vice versa.
A filling of the "n" × "n" square with the numbers 1 to "n"2 in a square, such that the rows, columns, and diagonals all sum to different values has been called a "heterosquare". (Thus, they are the relaxation in which no particular values are required for the row, column, and diagonal sums.) There are no heterosquares of order 2, but heterosquares exist for any order "n" ≥ 3: if "n" is odd, filling the square in a spiral pattern will produce a heterosquare, and if "n" is even, a heterosquare results from writing the numbers 1 to "n"2 in order, then exchanging 1 and 2. It is suspected that there are exactly 3120 essentially different heterosquares of order 3.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "1,\\ldots,m"
},
{
"math_id": 1,
"text": "m\\leq n^2"
}
] | https://en.wikipedia.org/wiki?curid=909422 |
909777 | Conditional quantum entropy | Measure of relative information in quantum information theory
The conditional quantum entropy is an entropy measure used in quantum information theory. It is a generalization of the conditional entropy of classical information theory. For a bipartite state formula_0, the conditional entropy is written formula_1, or formula_2, depending on the notation being used for the von Neumann entropy. The quantum conditional entropy was defined in terms of a conditional density operator formula_3 by Nicolas Cerf and Chris Adami, who showed that quantum conditional entropies can be negative, something that is forbidden in classical physics. The negativity of quantum conditional entropy is a sufficient criterion for quantum non-separability.
In what follows, we use the notation formula_4 for the von Neumann entropy, which will simply be called "entropy".
Definition.
Given a bipartite quantum state formula_0, the entropy of the joint system AB is formula_5, and the entropies of the subsystems are formula_6 and formula_7. The von Neumann entropy measures an observer's uncertainty about the value of the state, that is, how much the state is a mixed state.
By analogy with the classical conditional entropy, one defines the conditional quantum entropy as formula_8.
An equivalent operational definition of the quantum conditional entropy (as a measure of the quantum communication cost or surplus when performing quantum state merging) was given by Michał Horodecki, Jonathan Oppenheim, and Andreas Winter.
Properties.
Unlike the classical conditional entropy, the conditional quantum entropy can be negative. This is true even though the (quantum) von Neumann entropy of single variable is never negative. The negative conditional entropy is also known as the coherent information, and gives the additional number of bits above the classical limit that can be transmitted in a quantum dense coding protocol. Positive conditional entropy of a state thus means the state cannot reach even the classical limit, while the negative conditional entropy provides for additional information.
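A minimal numerical sketch of this negativity, for the maximally entangled two-qubit Bell state, is given below (entropies in bits); it assumes nothing beyond the definition formula_8.

```python
import numpy as np

def von_neumann_entropy(rho):
    """Entropy in bits, ignoring numerically zero eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def trace_out_A(rho_ab):
    """Partial trace over the first qubit of a two-qubit density matrix."""
    r = rho_ab.reshape(2, 2, 2, 2)       # indices a, b, a', b'
    return np.einsum('abac->bc', r)

psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)     # (|00> + |11>)/sqrt(2)
rho_ab = np.outer(psi, psi.conj())

S_AB = von_neumann_entropy(rho_ab)
S_B = von_neumann_entropy(trace_out_A(rho_ab))
print(f"S(AB) = {S_AB:.3f}, S(B) = {S_B:.3f}, S(A|B) = {S_AB - S_B:.3f}")   # S(A|B) = -1
```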
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rho^{AB}"
},
{
"math_id": 1,
"text": "S(A|B)_\\rho"
},
{
"math_id": 2,
"text": "H(A|B)_\\rho"
},
{
"math_id": 3,
"text": " \\rho_{A|B} "
},
{
"math_id": 4,
"text": "S(\\cdot)"
},
{
"math_id": 5,
"text": "S(AB)_\\rho \\ \\stackrel{\\mathrm{def}}{=}\\ S(\\rho^{AB})"
},
{
"math_id": 6,
"text": "S(A)_\\rho \\ \\stackrel{\\mathrm{def}}{=}\\ S(\\rho^A) = S(\\mathrm{tr}_B\\rho^{AB})"
},
{
"math_id": 7,
"text": "S(B)_\\rho"
},
{
"math_id": 8,
"text": "S(A|B)_\\rho \\ \\stackrel{\\mathrm{def}}{=}\\ S(AB)_\\rho - S(B)_\\rho"
}
] | https://en.wikipedia.org/wiki?curid=909777 |
9098731 | Coherent addition | Method of laser power scaling
Coherent addition (or coherent combining) of lasers
is a method of power scaling. It allows increasing the output power and brightness of a single-transverse-mode laser.
Usually, the term coherent addition applies to fiber lasers. As the ability of pumping and/or cooling of a single laser is saturated, several similar lasers can be forced to oscillate in phase with a common coupler. The first nonlinear theory of the coherent addition of laser sets had been developed by Nikolay Basov with co-workers in 1965.
For Nd:YAG laser set beam combination had been realized by means of SBS phase conjugating mirror.
The coherent addition was demonstrated in power scaling of Raman lasers.
Limits of coherent addition.
The addition of lasers reduces the number of longitudinal modes in the output beam; the more lasers are combined, the smaller the number of longitudinal modes in the output. Simple estimates show that the number of output modes decreases exponentially with the number of lasers combined. On the order of eight lasers can be combined in this way. Further increasing the number of combined lasers requires exponential growth of the spectral bandwidth of the gain and/or of the length of the partial lasers.
The same conclusion can be made also on the base of more detailed simulations.
Practically, the combination of more than ten lasers with a passive combining arrangement appears to be difficult. However, active coherent combining of lasers has the potential to scale to very large numbers of channels.
Nonlinear coherent addition of lasers.
Nonlinear interactions of light waves are used widely to synchronize the laser beams
in multichannel optical systems. Self-adjustment of phases may be robustly achievable in a binary-tree array of beam splitters and by degenerate four-wave-mixing Kerr phase conjugation in chirped-pulse-amplification extreme-light facilities. Such a phase-conjugating Michelson interferometer increases the brightness as formula_0, where formula_1 is the number of phase-locked channels.
Talbot coherent addition.
Constructive interference due to Talbot self-imaging forces the lasers in the array into transverse mode locking. The Fresnel number formula_2 of a one-dimensional formula_3 element laser array phase-locked by a Talbot cavity is given by formula_4
For a two-dimensional formula_5 element laser array phase-locked by a Talbot cavity, the Fresnel number formula_2 scales as
formula_6 as well. Talbot phase-locking techniques are applicable to thin disk diode-pumped solid-state laser arrays.
Field applications of beam combination.
Laser beam combination of dozens of fiber lasers via a multispectral technique at the 50 kW output power level has been implemented in the Dragonfire (weapon) laser system, with promising deployment aboard future Royal Navy warships, British Army armoured vehicles and fighter aircraft of the Royal Air Force, including the BAE Systems Tempest.
{
"math_id": 0,
"text": "N^2"
},
{
"math_id": 1,
"text": "N"
},
{
"math_id": 2,
"text": "F"
},
{
"math_id": 3,
"text": "N-"
},
{
"math_id": 4,
"text": "F=N^2."
},
{
"math_id": 5,
"text": "N^2-"
},
{
"math_id": 6,
"text": "F=N^2 "
}
] | https://en.wikipedia.org/wiki?curid=9098731 |
9100897 | Satellite knot | Type of mathematical knot
In the mathematical theory of knots, a satellite knot is a knot that contains an incompressible, non-boundary-parallel torus in its complement. Every knot is either a hyperbolic knot, a torus knot, or a satellite knot. The class of satellite knots includes composite knots, cable knots, and Whitehead doubles. A satellite "link" is one that orbits a companion knot "K" in the sense that it lies inside a regular neighborhood of the companion.
A satellite knot formula_0 can be picturesquely described as follows: start by taking a nontrivial knot formula_1 lying inside an unknotted solid torus formula_2. Here "nontrivial" means that the knot formula_1 is not allowed to sit inside of a 3-ball in formula_2 and formula_1 is not allowed to be isotopic to the central core curve of the solid torus. Then tie up the solid torus into a nontrivial knot.
This means there is a non-trivial embedding formula_3 and formula_4. The central core curve of the solid torus formula_2 is sent to a knot formula_5, which is called the "companion knot" and is thought of as the planet around which the "satellite knot" formula_0 orbits. The construction ensures that formula_6 is a non-boundary parallel incompressible torus in the complement of formula_0. Composite knots contain a certain kind of incompressible torus called a swallow-follow torus, which can be visualized as swallowing one summand and following another summand.
Since formula_2 is an unknotted solid torus, formula_7 is a tubular neighbourhood of an unknot formula_8. The 2-component link formula_9 together with the embedding formula_10 is called the "pattern" associated to the satellite operation.
A convention: people usually demand that the embedding formula_11 is "untwisted" in the sense that formula_10 must send the standard longitude of formula_2 to the standard longitude of formula_12. Said another way, given any two disjoint curves formula_13, formula_10 preserves their linking numbers i.e.: formula_14.
Basic families.
When formula_15 is a torus knot, then formula_0 is called a "cable knot". Examples 3 and 4 are cable knots. The cable constructed with given winding numbers ("m","n") from another knot "K", is often called "the" ("m","n") cable of "K".
If formula_1 is a non-trivial knot in formula_16 and if a compressing disc for formula_2 intersects formula_1 in precisely one point, then formula_0 is called a "connect-sum". Another way to say this is that the pattern formula_9 is the connect-sum of a non-trivial knot formula_1 with a Hopf link.
If the link formula_9 is the Whitehead link, formula_0 is called a "Whitehead double". If formula_10 is untwisted, formula_0 is called an untwisted Whitehead double.
Examples.
Examples 5 and 6 are variants on the same construction. They both have two non-parallel, non-boundary-parallel incompressible tori in their complements, splitting the complement into the union of three manifolds. In 5, those manifolds are: the Borromean rings complement, trefoil complement, and figure-8 complement. In 6, the figure-8 complement is replaced by another trefoil complement.
Origins.
In 1949 Horst Schubert proved that every oriented knot in formula_16 decomposes as a connect-sum of prime knots in a unique way, up to reordering, making the monoid of oriented isotopy-classes of knots in formula_16 a free commutative monoid on countably-infinite many generators. Shortly after, he realized he could give a new proof of his theorem by a close analysis of the incompressible tori present in the complement of a connect-sum. This led him to study general incompressible tori in knot complements in his epic work "Knoten und Vollringe", where he defined satellite and companion knots.
Follow-up work.
Schubert's demonstration that incompressible tori play a major role in knot theory was one several early insights leading to the unification of 3-manifold theory and knot theory. It attracted Waldhausen's attention, who later used incompressible surfaces to show that a large class of 3-manifolds are homeomorphic if and only if their fundamental groups are isomorphic. Waldhausen conjectured what is now the Jaco–Shalen–Johannson-decomposition of 3-manifolds, which is a decomposition of 3-manifolds along spheres and incompressible tori. This later became a major ingredient in the development of geometrization, which can be seen as a partial-classification of 3-dimensional manifolds. The ramifications for knot theory were first described in the long-unpublished manuscript of Bonahon and Siebenmann.
Uniqueness of satellite decomposition.
In "Knoten und Vollringe", Schubert proved that in some cases, there is essentially a unique way to express a knot as a satellite. But there are also many known examples where the decomposition is not unique. With a suitably enhanced notion of satellite operation called splicing, the JSJ decomposition gives a proper uniqueness theorem for satellite knots. | [
{
"math_id": 0,
"text": "K"
},
{
"math_id": 1,
"text": "K'"
},
{
"math_id": 2,
"text": "V"
},
{
"math_id": 3,
"text": "f\\colon V \\to S^3"
},
{
"math_id": 4,
"text": "K = f\\left(K'\\right)"
},
{
"math_id": 5,
"text": "H"
},
{
"math_id": 6,
"text": "f(\\partial V)"
},
{
"math_id": 7,
"text": "S^3 \\setminus V"
},
{
"math_id": 8,
"text": "J"
},
{
"math_id": 9,
"text": "K' \\cup J"
},
{
"math_id": 10,
"text": "f"
},
{
"math_id": 11,
"text": "f \\colon V \\to S^3"
},
{
"math_id": 12,
"text": "f(V)"
},
{
"math_id": 13,
"text": "c_1, c_2 \\subset V"
},
{
"math_id": 14,
"text": "\\operatorname{lk}(f(c_1), f(c_2)) = \\operatorname{lk}(c_1, c_2)"
},
{
"math_id": 15,
"text": "K' \\subset \\partial V"
},
{
"math_id": 16,
"text": "S^3"
}
] | https://en.wikipedia.org/wiki?curid=9100897 |
910234 | Cut-elimination theorem | Theorem in formal logic
The cut-elimination theorem (or Gentzen's "Hauptsatz") is the central result establishing the significance of the sequent calculus. It was originally proved by Gerhard Gentzen in his landmark 1934 paper "Investigations in Logical Deduction" for the systems LJ and LK formalising intuitionistic and classical logic respectively. The cut-elimination theorem states that any judgement that possesses a proof in the sequent calculus making use of the cut rule also possesses a cut-free proof, that is, a proof that does not make use of the cut rule.
The cut rule.
A sequent is a logical expression relating multiple formulas, in the form "formula_0", which is to be read as "formula_1 proves formula_2", and (as glossed by Gentzen) should be understood as equivalent to the truth-function "If (formula_3 and formula_4 and formula_5 …) then (formula_6 or formula_7 or formula_8 …)." Note that the left-hand side (LHS) is a conjunction (and) and the right-hand side (RHS) is a disjunction (or).
The LHS may have arbitrarily many or few formulae; when the LHS is empty, the RHS is a tautology. In LK, the RHS may also have any number of formulae—if it has none, the LHS is a contradiction, whereas in LJ the RHS may only have one formula or none: here we see that allowing more than one formula in the RHS is equivalent, in the presence of the right contraction rule, to the admissibility of the law of the excluded middle. However, the sequent calculus is a fairly expressive framework, and there have been sequent calculi for intuitionistic logic proposed that allow many formulae in the RHS. From Jean-Yves Girard's logic LC it is easy to obtain a rather natural formalisation of classical logic where the RHS contains at most one formula; it is the interplay of the logical and structural rules that is the key here.
"Cut" is a rule in the normal statement of the sequent calculus, and equivalent to a variety of rules in other proof theories, which, given
and
allows one to infer
That is, it "cuts" the occurrences of the formula formula_9 out of the inferential relation.
Cut elimination.
The cut-elimination theorem states that (for a given system) any sequent provable using the rule Cut can be proved without use of this rule.
For sequent calculi that have only one formula in the RHS, the "Cut" rule reads, given
and
allows one to infer
If we think of formula_10 as a theorem, then cut-elimination in this case simply says that a lemma formula_9 used to prove this theorem can be inlined. Whenever the theorem's proof mentions lemma formula_9, we can substitute the proof of formula_9 for those occurrences. Consequently, the cut rule is admissible.
Consequences of the theorem.
For systems formulated in the sequent calculus, analytic proofs are those proofs that do not use Cut. Typically such a proof will be longer, of course, and not necessarily trivially so. In his essay "Don't Eliminate Cut!" George Boolos demonstrated that there was a derivation that could be completed in a page using cut, but whose analytic proof could not be completed in the lifespan of the universe.
The theorem has many, rich consequences:
Cut elimination is one of the most powerful tools for proving interpolation theorems. The possibility of carrying out proof search based on resolution, the essential insight leading to the Prolog programming language, depends upon the admissibility of Cut in the appropriate system.
For proof systems based on higher-order typed lambda calculus through a Curry–Howard isomorphism, cut elimination algorithms correspond to the strong normalization property (every proof term reduces in a finite number of steps into a normal form).
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A_1, A_2, A_3, \\ldots \\vdash B_1, B_2, B_3, \\ldots"
},
{
"math_id": 1,
"text": "A_1, A_2, A_3, \\ldots"
},
{
"math_id": 2,
"text": "B_1, B_2, B_3, \\ldots"
},
{
"math_id": 3,
"text": "A_1"
},
{
"math_id": 4,
"text": "A_2"
},
{
"math_id": 5,
"text": "A_3"
},
{
"math_id": 6,
"text": "B_1"
},
{
"math_id": 7,
"text": "B_2"
},
{
"math_id": 8,
"text": "B_3"
},
{
"math_id": 9,
"text": "A"
},
{
"math_id": 10,
"text": "B"
}
] | https://en.wikipedia.org/wiki?curid=910234 |
910263 | Hawaiian earring | Topological space defined by the union of circles
In mathematics, the Hawaiian earring formula_0 is the topological space defined by the union of circles in the Euclidean plane formula_1 with center formula_2 and radius formula_3 for formula_4 endowed with the subspace topology:
formula_5
The space formula_0 is homeomorphic to the one-point compactification of the union of a countable family of disjoint open intervals.
The Hawaiian earring is a one-dimensional, compact, locally path-connected metrizable space. Although formula_0 is locally homeomorphic to formula_6 at all non-origin points, formula_0 is not semi-locally simply connected at formula_7. Therefore, formula_0 does not have a simply connected covering space and is usually given as the simplest example of a space with this complication.
The Hawaiian earring looks very similar to the wedge sum of countably infinitely many circles; that is, the rose with infinitely many petals, but these two spaces are not homeomorphic. The difference between their topologies is seen in the fact that, in the Hawaiian earring, every open neighborhood of the point of intersection of the circles contains all but finitely many of the circles (an ε-ball around (0, 0) contains every circle whose radius is less than "ε"/2); in the rose, a neighborhood of the intersection point might not fully contain any of the circles. Additionally, the rose is not compact: the complement of the distinguished point is an infinite union of open intervals; to those add a small open neighborhood of the distinguished point to get an open cover with no finite subcover.
Fundamental group.
The Hawaiian earring is neither simply connected nor semilocally simply connected since, for all formula_8 the loop formula_9 parameterizing the nth circle is not homotopic to a trivial loop. Thus, formula_0 has a nontrivial fundamental group formula_10 sometimes referred to as the "Hawaiian earring group". The Hawaiian earring group formula_11 is uncountable, and it is not a free group. However, formula_11 is locally free in the sense that every finitely generated subgroup of formula_11 is free.
The homotopy classes of the individual loops formula_9 generate the free group formula_12 on a countably infinite number of generators, which forms a proper subgroup of formula_11. The uncountably many other elements of formula_11 arise from loops whose image is not contained in finitely many of the Hawaiian earring's circles; in fact, some of them are surjective. For example, the path that, on each interval formula_13, circumnavigates the nth circle is such a loop. More generally, one may form infinite products of the loops formula_9 indexed over any countable linear order provided that for each formula_14, the loop formula_9 and its inverse appear within the product only finitely many times.
It is a result of John Morgan and Ian Morrison that formula_11 embeds into the inverse limit formula_15 of the free groups with n generators, formula_16, where the bonding map from formula_16 to formula_17 simply kills the last generator of formula_16. However, formula_11 is a proper subgroup of the inverse limit since each loop in formula_0 may traverse each circle of formula_0 only finitely many times. An example of an element of the inverse limit that does not correspond to an element of formula_11 is an infinite product of commutators formula_18, which appears formally as the sequence formula_19 in the inverse limit formula_15.
First singular homology.
Katsuya Eda and Kazuhiro Kawamura proved that the abelianisation of formula_20 and therefore the first singular homology group formula_21 is isomorphic to the group
formula_22
The first summand formula_23 is the direct product of infinitely many copies of the infinite cyclic group (the Baer–Specker group). This factor represents the singular homology classes of loops that do not have winding number formula_24 around every circle of formula_0 and is precisely the first Čech singular homology group formula_25. Additionally, formula_23 may be considered as the "infinite abelianization" of formula_11, since every element in the kernel of the natural homomorphism formula_26 is represented by an infinite product of commutators. The second summand of formula_21 consists of homology classes represented by loops whose winding number around every circle of formula_27 is zero, i.e. the kernel of the natural homomorphism formula_28. The existence of the isomorphism with formula_29 is proven abstractly using infinite abelian group theory and does not have a geometric interpretation.
Higher dimensions.
It is known that formula_27 is an aspherical space, i.e. all higher homotopy and homology groups of formula_27 are trivial.
The Hawaiian earring can be generalized to higher dimensions. Such a generalization was used by Michael Barratt and John Milnor to provide examples of compact, finite-dimensional spaces with nontrivial singular homology groups in dimensions larger than that of the space. The formula_30-dimensional Hawaiian earring is defined as
formula_31
Hence, formula_32 is a countable union of k-spheres which have one single point in common, and the topology is given by a metric in which the sphere's diameters converge to zero as formula_33 Alternatively, formula_32 may be constructed as the Alexandrov compactification of a countable union of disjoint formula_34s. Recursively, one has that formula_35 consists of a convergent sequence, formula_36 is the original Hawaiian earring, and formula_37 is homeomorphic to the reduced suspension formula_38.
For formula_39, the formula_40-dimensional Hawaiian earring is a compact, formula_41-connected and locally formula_41-connected space. For formula_42, it is known that formula_43 is isomorphic to the Baer–Specker group formula_44
For formula_45 and formula_46 Barratt and Milnor showed that the singular homology group formula_47 is a nontrivial uncountable group for each such formula_48.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{H}"
},
{
"math_id": 1,
"text": "\\R^2"
},
{
"math_id": 2,
"text": "\\left(\\tfrac{1}{n},0\\right)"
},
{
"math_id": 3,
"text": "\\tfrac{1}{n}"
},
{
"math_id": 4,
"text": "n = 1, 2, 3, \\ldots "
},
{
"math_id": 5,
"text": "\\mathbb{H}=\\bigcup_{n=1}^{\\infty}\\left\\{(x,y)\\in\\mathbb{R}^2\\mid\\left(x-\\frac{1}{n}\\right)^2+y^2=\\left(\\frac{1}{n}\\right)^2\\right\\}."
},
{
"math_id": 6,
"text": "\\R"
},
{
"math_id": 7,
"text": "(0,0)"
},
{
"math_id": 8,
"text": "n\\geq 1,"
},
{
"math_id": 9,
"text": "\\ell_n"
},
{
"math_id": 10,
"text": "G=\\pi_1(\\mathbb{H},(0,0)),"
},
{
"math_id": 11,
"text": "G"
},
{
"math_id": 12,
"text": "\\langle [\\ell_n]\\mid n\\geq 1\\rangle"
},
{
"math_id": 13,
"text": "[ 2^{-n}, 2^{-n+1} ]"
},
{
"math_id": 14,
"text": "n\\geq 1"
},
{
"math_id": 15,
"text": "\\varprojlim F_n"
},
{
"math_id": 16,
"text": "F_n"
},
{
"math_id": 17,
"text": "F_{n-1}"
},
{
"math_id": 18,
"text": "\\prod_{n=2}^{\\infty}[\\ell_1\\ell_n\\ell_{1}^{-1}\\ell_{n}^{-1}]"
},
{
"math_id": 19,
"text": "\\left(1,[\\ell_1] [\\ell_2] [\\ell_{1}]^{-1}[\\ell_{2}]^{-1},[\\ell_1] [\\ell_2] [\\ell_{1}]^{-1}[\\ell_{2}]^{-1}[\\ell_1] [\\ell_3] [\\ell_{1}]^{-1}[\\ell_{3}]^{-1},\\dots\\right)"
},
{
"math_id": 20,
"text": "G,"
},
{
"math_id": 21,
"text": " H_1(\\mathbb{H})"
},
{
"math_id": 22,
"text": "\\left(\\prod_{i=1}^\\infty \\Z\\right) \\oplus \\left(\\prod_{i=1}^\\infty \\Z\\Big/ \\bigoplus_{i=1}^{\\infty}\\Z\\right)."
},
{
"math_id": 23,
"text": "\\prod_{i=1}^\\infty \\Z,"
},
{
"math_id": 24,
"text": " 0"
},
{
"math_id": 25,
"text": "\\check{H}_1(\\mathbb{H})"
},
{
"math_id": 26,
"text": "G\\to\\prod_{i=1}^\\infty \\Z "
},
{
"math_id": 27,
"text": " \\mathbb{H}"
},
{
"math_id": 28,
"text": "H_1(\\mathbb{H})\\to\\prod_{i=1}^{\\infty}\\mathbb{Z}"
},
{
"math_id": 29,
"text": "\\prod_{i=1}^\\infty \\Z \\Big/ \\bigoplus_{i=1}^{\\infty}\\Z"
},
{
"math_id": 30,
"text": "k"
},
{
"math_id": 31,
"text": " \\mathbb{H}_k=\\bigcup_{n\\in \\N}\\left\\{(x_0,x_1,\\ldots,x_k)\\in\\R^{k+1} : \\left(x_0-\\frac 1 n \\right)^2 + x_1^2 + \\cdots+x_k^2=\\frac{1}{n^2}\\right\\}."
},
{
"math_id": 32,
"text": "\\mathbb{H}_k"
},
{
"math_id": 33,
"text": "n\\to\\infty."
},
{
"math_id": 34,
"text": "\\R^k"
},
{
"math_id": 35,
"text": "\\mathbb{H}_0"
},
{
"math_id": 36,
"text": "\\mathbb{H}_1"
},
{
"math_id": 37,
"text": "\\mathbb{H}_{k+1}"
},
{
"math_id": 38,
"text": "\\Sigma\\mathbb{H}_{k}"
},
{
"math_id": 39,
"text": "<Math>k\\geq 1</Math>"
},
{
"math_id": 40,
"text": "<Math>k</Math>"
},
{
"math_id": 41,
"text": "(k-1)"
},
{
"math_id": 42,
"text": "k\\geq 2"
},
{
"math_id": 43,
"text": "\\pi_k(\\mathbb{H}_k)"
},
{
"math_id": 44,
"text": "\\prod_{i=1}^{\\infty}\\mathbb{Z}."
},
{
"math_id": 45,
"text": "q\\equiv 1\\bmod(k-1)"
},
{
"math_id": 46,
"text": "q>1,"
},
{
"math_id": 47,
"text": "H_q(\\mathbb{H}_k;\\Q)"
},
{
"math_id": 48,
"text": "q"
}
] | https://en.wikipedia.org/wiki?curid=910263 |
910274 | Decagonal number | A decagonal number is a figurate number that extends the concept of triangular and square numbers to the decagon (a ten-sided polygon). However, unlike the triangular and square numbers, the patterns involved in the construction of decagonal numbers are not rotationally symmetrical. Specifically, the "n"th decagonal number counts the dots in a pattern of "n" nested decagons, all sharing a common corner, where the "i"th decagon in the pattern has sides made of "i" dots spaced one unit apart from each other. The "n"th decagonal number is given by the following formula
formula_0.
The first few decagonal numbers are:
0, 1, 10, 27, 52, 85, 126, 175, 232, 297, 370, 451, 540, 637, 742, 855, 976, 1105, 1242, 1387, 1540, 1701, 1870, 2047, 2232, 2425, 2626, 2835, 3052, 3277, 3510, 3751, 4000, 4257, 4522, 4795, 5076, 5365, 5662, 5967, 6280, 6601, 6930, 7267, 7612, 7965, 8326 (sequence in the OEIS).
The "n"th decagonal number can also be calculated by adding the square of "n" to thrice the ("n"−1)th pronic number or, to put it algebraically, as
formula_1.
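The closed form and the pronic-number identity above can be checked directly; a minimal Python sketch:

```python
def decagonal(n: int) -> int:
    """Return the n-th decagonal number, 4n^2 - 3n."""
    return 4 * n * n - 3 * n

# First few values, matching the list above: 0, 1, 10, 27, 52, 85, 126, ...
assert [decagonal(n) for n in range(7)] == [0, 1, 10, 27, 52, 85, 126]

# Alternative form: the square of n plus thrice the (n-1)-th pronic number.
assert all(decagonal(n) == n * n + 3 * (n * n - n) for n in range(100))
```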
The decagonal numbers also satisfy the recurrence relations
formula_5
formula_6
formula_7 | [
{
"math_id": 0,
"text": "d_n = 4n^2 - 3n"
},
{
"math_id": 1,
"text": "D_n = n^2 + 3\\left(n^2 - n\\right)"
},
{
"math_id": 2,
"text": "D_n"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "48^{n-1}"
},
{
"math_id": 5,
"text": "D_n=D_{n-1}+8n-7 , D_0=0"
},
{
"math_id": 6,
"text": "D_n=2D_{n-1}-D_{n-2}+8, D_0=0,D_1=1"
},
{
"math_id": 7,
"text": "D_n=3D_{n-1}-3D_{n-2}+D_{n-3}, D_0=0, D_1=1, D_2=10"
}
] | https://en.wikipedia.org/wiki?curid=910274 |
910285 | Nonagonal number | A nonagonal number, or an enneagonal number, is a figurate number that extends the concept of triangular and square numbers to the nonagon (a nine-sided polygon). However, unlike the triangular and square numbers, the patterns involved in the construction of nonagonal numbers are not rotationally symmetrical. Specifically, the "n"th nonagonal number counts the dots in a pattern of "n" nested nonagons, all sharing a common corner, where the "i"th nonagon in the pattern has sides made of "i" dots spaced one unit apart from each other. The nonagonal number for "n" is given by the formula:
formula_0.
Nonagonal numbers.
The first few nonagonal numbers are:
0, 1, 9, 24, 46, 75, 111, 154, 204, 261, 325, 396, 474, 559, 651, 750, 856, 969, 1089, 1216, 1350, 1491, 1639, 1794, 1956, 2125, 2301, 2484, 2674, 2871, 3075, 3286, 3504, 3729, 3961, 4200, 4446, 4699, 4959, 5226, 5500, 5781, 6069, 6364, 6666, 6975, 7291, 7614, 7944, 8281, 8625, 8976, 9334, 9699 (sequence in the OEIS).
The parity of nonagonal numbers follows the pattern odd-odd-even-even.
Relationship between nonagonal and triangular numbers.
Letting formula_1 denote the "n"th nonagonal number, and using the formula formula_2 for the "n"th triangular number,
formula_3.
Test for nonagonal numbers.
formula_4.
If x is an integer, then n is the x-th nonagonal number. If x is not an integer, then n is not nonagonal.
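The formula and the test can be checked directly; a minimal Python sketch:

```python
import math

def nonagonal(n: int) -> int:
    """Return the n-th nonagonal number, n(7n - 5)/2."""
    return n * (7 * n - 5) // 2

def is_nonagonal(m: int) -> bool:
    """Apply the test above: x = (sqrt(56m + 25) + 5) / 14 must be an integer."""
    x, rem = divmod(math.isqrt(56 * m + 25) + 5, 14)
    return rem == 0 and nonagonal(x) == m

# First few values, matching the list above.
assert [nonagonal(n) for n in range(6)] == [0, 1, 9, 24, 46, 75]
assert is_nonagonal(9334) and not is_nonagonal(100)
```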
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac {n(7n - 5)}{2}"
},
{
"math_id": 1,
"text": "N_n"
},
{
"math_id": 2,
"text": "T_n = \\frac{n(n+1)}{2}"
},
{
"math_id": 3,
"text": " 7N_n + 3 = T_{7n-3}"
},
{
"math_id": 4,
"text": "\\mathsf{Let}~x = \\frac{\\sqrt{56n+25}+5}{14}"
}
] | https://en.wikipedia.org/wiki?curid=910285 |
910375 | Phakic intraocular lens | Lens implanted in eye in addition to the natural lens
A phakic intraocular lens (PIOL) is an intraocular lens that is implanted surgically into the eye to correct refractive errors without removing the natural lens (also known as "phakos", hence the term). Intraocular lenses that are implanted into eyes after the eye's natural lens has been removed during cataract surgery are known as pseudophakic.
Phakic intraocular lenses are indicated for patients with high refractive errors when the usual laser options for surgical correction (LASIK and PRK) are contraindicated. Phakic IOLs are designed to correct high myopia ranging from −5 to −20 D, provided the patient has an adequate anterior chamber depth (ACD) of at least 3 mm.
Three types of phakic IOLs are available:
Medical uses.
LASIK can correct myopia up to -12 to -14 D. The higher the intended correction, the thinner and flatter the cornea will be post-operatively. For LASIK surgery, one has to preserve a safe residual stromal bed of at least 250 μm, preferably 300 μm. Beyond these limits there is an increased risk of developing corneal ectasia (i.e. forward bulging of the cornea) due to a thin residual stromal bed, which results in loss of visual quality. Due to the risk of higher-order aberrations, there is a current trend toward reducing the upper limits of LASIK and PRK to around -8 to -10 D. Phakic intraocular lenses are safer than excimer laser surgery for those with significant myopia.
Phakic intraocular lenses are contraindicated in patients who do not have a stable refraction for at least 6 months or are 21 years of age or younger. Preexisting eye disorders such as uveitis are another contraindication.
Although PIOLs for hyperopia are being investigated, there is less enthusiasm for these lenses because the anterior chamber tends to be shallower than in myopic patients. A hyperopic model ICL (posterior chamber PIOL) is available.
A corneal endothelium cell count of less than 2000 to 2500 cells per mm2 is a relative contraindication for PIOL implantation.
Advantages.
PIOLs have the advantage of treating a much larger range of myopic and hyperopic refractive errors than can be safely and effectively treated with corneal refractive surgery. The skills required for insertion are, with a few exceptions, similar to those used in cataract surgery. The equipment is significantly less expensive than an excimer laser and is similar to that used for cataract surgery. In addition, the PIOL is removable; therefore, the refractive effect should theoretically be reversible. However, any intervening damage caused by the PIOL would most likely be permanent. When compared with clear lens extraction, or refractive lens exchange the PIOL has the advantage of preserving natural accommodation and may have a lower risk of postoperative retinal detachment because of the preservation of the crystalline lens and minimal vitreous destabilization.
Disadvantages.
PIOL insertion is an intraocular procedure, and as with all surgeries there are associated risks. In addition, each PIOL style has its own set of associated risks. In the case of PIOLs made of polymethylmethacrylate (PMMA), surgical insertion requires a larger incision, which may result in postoperative astigmatism. By comparison, PIOLs made of a foldable gel-like material require a very small incision due to the flexibility of the material, which significantly reduces the risk of astigmatism. In cases where refractive outcomes are not optimal, LASIK can be used for fine-tuning. If a patient eventually develops a visually significant cataract, the PIOL will have to be explanted at the time of cataract surgery, possibly through a larger-than-usual incision.
Another concern is progressive shallowing of the anterior chamber, which normally occurs with advancing age due to the growth of the eye's natural lens. Multiple studies have shown a 12–17 μm/year decrease in the anterior chamber depth with aging. If a phakic IOL is assumed to remain in the eye for 50 years, the overall decline in ACD may add up to 0.6–0.85 mm; long-term data about this effect are not available. This concern is more important for the implantable collamer lens because it is implanted in the narrowest part of the anterior segment.
Contraindications.
Lower levels of acceptable risk may be appropriate for implantation of phakic lenses than for cataract surgery, as the risk-benefit trade-off is less for improving vision than for restoring vision.
Preoperative evaluation.
Anterior chamber depth (ACD, i.e. the distance between the crystalline lens and cornea including the corneal thickness) is required before the surgery and measured with the use of ultrasound.
Iris-fixated IOLs are fixated to the iris and therefore have the advantage of coming in a single size (8.5 mm).
Sulcus-supported IOLs need to be implanted in the ciliary sulcus, whose diameter varies among individuals; therefore the anterior chamber diameter needs to be measured with a calliper or with eye-imaging instruments such as Orbscan and high-frequency ultrasound. A calliper and Orbscan measure the external limbus-to-limbus diameter of the anterior chamber (white-to-white diameter), which provides only an approximate estimate of the AC diameter, whereas UBM and OCT offer a more adequate measurement of the sulcus diameter (sulcus-to-sulcus diameter) and should be used when available.
Power calculation.
The power of a phakic lens is independent of the axial length of the eye. Rather, it depends on the central corneal power, the anterior chamber depth (ACD) and the patient's refraction (preoperative spherical equivalent). The most common formula for calculating the power of a phakic IOL is the following:
formula_0
P : Power of phakic IOL
n : Refractive Index of Aqueous (1.336)
K : Central corneal power in diopters
R : Patient Refraction at the corneal vertex
d : Effective lens position in mm
The effective lens position is calculated as the difference between the anterior chamber depth and the distance between the PIOL and the crystalline lens. Ultrasonographic examinations of PIOLs show that the lens–optic distance varies less than the cornea–optic distance. Therefore, it is preferable to use the measured ACD and subtract an ‘optic-lens’ constant from it to obtain the value of the ELP. For the Artisan/Verisyse lens the optic-lens constant is 0.84 mm. The ICL power is calculated using the Olsen–Feingold formula, a four-variable formula modified by a regression analysis of past results.
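As an illustration, the formula above can be evaluated directly; a minimal Python sketch with hypothetical example values (not clinical guidance):

```python
# Sketch of the phakic-IOL power formula quoted above; the example numbers
# (K, R, ACD, optic-lens constant) are illustrative assumptions only.
N_AQUEOUS = 1.336  # refractive index of aqueous humour

def phakic_iol_power(K: float, R: float, d_mm: float) -> float:
    """P = 1000n/(1000n/(K+R) - d) - 1000n/(1000n/K - d).

    K: central corneal power (D), R: refraction at the corneal vertex (D),
    d_mm: effective lens position in millimetres.
    """
    n = 1000 * N_AQUEOUS
    return n / (n / (K + R) - d_mm) - n / (n / K - d_mm)

# Hypothetical case: a -10 D myope, K = 43 D, measured ACD 3.6 mm,
# Artisan/Verisyse optic-lens constant 0.84 mm.
elp = 3.6 - 0.84
print(round(phakic_iol_power(43.0, -10.0, elp), 2))
```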
Surgical technique.
The Artisan (Verisyse) lens is implanted under pharmacological miosis. After creating a proper incision, the lens is grasped with curved holding forceps and inserted. Once in the anterior chamber, and while firmly holding the lens with the forceps, temporal and nasal iris tissue is enclavated with a special needle. The operation is completed with an iridectomy, and the incision is sutured.
The EVO ICL (STAAR® Surgical's phakic IOL) is implanted under pharmacological mydriasis in the retropupillary position, between the eye's iris and the crystalline lens, using a cartridge injector or forceps. Both eyes can usually be done on the same day.
Steroid antibiotic eye drops are usually prescribed for 2–4 weeks after surgery. Regular follow-ups are recommended.
Risk.
Though ICL surgery has been shown to be effective, it can sometimes result in complications such as:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P= {1000 n \\over{1000 n \\over K+R}-d}-{1000 n \\over{1000 n \\over K}-d}"
}
] | https://en.wikipedia.org/wiki?curid=910375 |
9104690 | Homogeneously Suslin set | In descriptive set theory, a set formula_0 is said to be homogeneously Suslin if it is the projection of a homogeneous tree. formula_0 is said to be formula_1-homogeneously Suslin if it is the projection of a formula_1-homogeneous tree.
If formula_2 is a formula_3 set and formula_1 is a measurable cardinal, then formula_4 is formula_1-homogeneously Suslin. This result is important in the proof that the existence of a measurable cardinal implies that formula_3 sets are determined.
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "S"
},
{
"math_id": 1,
"text": "\\kappa"
},
{
"math_id": 2,
"text": "A\\subseteq{}^\\omega\\omega"
},
{
"math_id": 3,
"text": "\\mathbf{\\Pi}_1^1"
},
{
"math_id": 4,
"text": "A"
}
] | https://en.wikipedia.org/wiki?curid=9104690 |
9104798 | Homogeneous tree | In descriptive set theory, a tree over a product set formula_0 is said to be homogeneous if there is a system of measures formula_1 such that the following conditions hold:
An equivalent definition is produced when the final condition is replaced with the following:
formula_7 is said to be formula_14-homogeneous if each formula_2 is formula_14-complete.
Homogeneous trees are involved in Martin and Steel's proof of projective determinacy.
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "Y\\times Z"
},
{
"math_id": 1,
"text": "\\langle\\mu_s\\mid s\\in{}^{<\\omega}Y\\rangle"
},
{
"math_id": 2,
"text": "\\mu_s"
},
{
"math_id": 3,
"text": "\\{t\\mid\\langle s,t\\rangle\\in T\\}"
},
{
"math_id": 4,
"text": "s_1\\subseteq s_2"
},
{
"math_id": 5,
"text": "\\mu_{s_1}(X)=1\\iff\\mu_{s_2}(\\{t\\mid t\\upharpoonright lh(s_1)\\in X\\})=1"
},
{
"math_id": 6,
"text": "x"
},
{
"math_id": 7,
"text": "T"
},
{
"math_id": 8,
"text": "\\langle\\mu_{x\\upharpoonright n}\\mid n\\in\\omega\\rangle"
},
{
"math_id": 9,
"text": "\\langle\\mu_s\\mid s\\in{}^\\omega Y\\rangle"
},
{
"math_id": 10,
"text": "[T]"
},
{
"math_id": 11,
"text": "\\forall n\\in\\omega\\,\\mu_{x\\upharpoonright n}(X_n)=1"
},
{
"math_id": 12,
"text": "f\\in{}^\\omega Z"
},
{
"math_id": 13,
"text": "\\forall n\\in\\omega\\,f\\upharpoonright n\\in X_n"
},
{
"math_id": 14,
"text": "\\kappa"
}
] | https://en.wikipedia.org/wiki?curid=9104798 |
910519 | Harmonic conjugate | Concept in mathematics
In mathematics, a real-valued function formula_0 defined on a connected open set formula_1 is said to have a conjugate (function) formula_2 if and only if they are respectively the real and imaginary parts of a holomorphic function formula_3 of the complex variable formula_4 That is, formula_5 is conjugate to formula_6 if formula_7 is holomorphic on formula_8 As a first consequence of the definition, they are both harmonic real-valued functions on formula_9. Moreover, the conjugate of formula_10 if it exists, is unique up to an additive constant. Also, formula_6 is conjugate to formula_5 if and only if formula_5 is conjugate to formula_11.
Description.
Equivalently, formula_12 is conjugate to formula_6 in formula_9 if and only if formula_6 and formula_5 satisfy the Cauchy–Riemann equations in formula_8 As an immediate consequence of the latter equivalent definition, if formula_6 is any harmonic function on formula_13 the function formula_14 is conjugate to formula_15 for then the Cauchy–Riemann equations are just formula_16 and the symmetry of the mixed second order derivatives, formula_17 Therefore, a harmonic function formula_6 admits a conjugated harmonic function if and only if the holomorphic function formula_18 has a primitive formula_3 in formula_19 in which case a conjugate of formula_6 is, of course, formula_20 So any harmonic function always admits a conjugate function whenever its domain is simply connected, and in any case it admits a conjugate locally at any point of its domain.
There is an operator taking a harmonic function "u" on a simply connected region in formula_21 to its harmonic conjugate "v" (putting e.g. "v"("x"0) = 0 on a given "x"0 in order to fix the indeterminacy of the conjugate up to constants). This is well known in applications as (essentially) the Hilbert transform; it is also a basic example in mathematical analysis, in connection with singular integral operators. Conjugate harmonic functions (and the transform between them) are also one of the simplest examples of a Bäcklund transform (two PDEs and a transform relating their solutions), in this case linear; more complex transforms are of interest in solitons and integrable systems.
Geometrically "u" and "v" are related as having "orthogonal trajectories", away from the zeros of the underlying holomorphic function; the contours on which "u" and "v" are constant cross at right angles. In this regard, "u" + "iv" would be the complex potential, where "u" is the potential function and "v" is the stream function.
Examples.
For example, consider the function formula_22
Since
formula_23
and
formula_24
it satisfies
formula_25
(formula_26 is the Laplace operator) and is thus harmonic. Now suppose we have a formula_2 such that the Cauchy–Riemann equations are satisfied:
formula_27
and
formula_28
Simplifying,
formula_29
and
formula_30
which when solved gives
formula_31
Observe that if the functions related to "u" and "v" were interchanged, the functions would not be harmonic conjugates, since the minus sign in the Cauchy–Riemann equations makes the relationship asymmetric.
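The worked example can also be checked symbolically; a minimal sketch, assuming sympy is available:

```python
from sympy import symbols, exp, sin, cos, diff, simplify

# Check that u = e^x sin y is harmonic and that v = -e^x cos y satisfies
# the Cauchy-Riemann equations with u, as derived above.
x, y = symbols('x y', real=True)
u = exp(x) * sin(y)
v = -exp(x) * cos(y)

assert simplify(diff(u, x, 2) + diff(u, y, 2)) == 0   # Laplace equation for u
assert simplify(diff(u, x) - diff(v, y)) == 0         # u_x = v_y
assert simplify(diff(u, y) + diff(v, x)) == 0         # u_y = -v_x
```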
The conformal mapping property of analytic functions (at points where the derivative is not zero) gives rise to a geometric property of harmonic conjugates. Clearly the harmonic conjugate of "x" is "y", and the lines of constant "x" and constant "y" are orthogonal. Conformality says that contours of constant "u"("x", "y") and "v"("x", "y") will also be orthogonal where they cross (away from the zeros of "f" ′("z")). That means that "v" is a specific solution of the orthogonal trajectory problem for the family of contours given by "u" (not the only solution, naturally, since we can take also functions of "v"): the question, going back to the mathematics of the seventeenth century, of finding the curves that cross a given family of non-intersecting curves at right angles.
Harmonic conjugate in geometry.
There is an additional occurrence of the term harmonic conjugate in mathematics, and more specifically in projective geometry. Two points "A" and "B" are said to be harmonic conjugates of each other with respect to another pair of points "C, D" if the cross ratio ("ABCD") equals −1. | [
{
"math_id": 0,
"text": "u(x,y)"
},
{
"math_id": 1,
"text": "\\Omega \\subset \\R^2"
},
{
"math_id": 2,
"text": "v(x,y)"
},
{
"math_id": 3,
"text": "f(z)"
},
{
"math_id": 4,
"text": "z:=x+iy\\in\\Omega."
},
{
"math_id": 5,
"text": "v"
},
{
"math_id": 6,
"text": "u"
},
{
"math_id": 7,
"text": "f(z):=u(x,y)+iv(x,y)"
},
{
"math_id": 8,
"text": "\\Omega."
},
{
"math_id": 9,
"text": "\\Omega"
},
{
"math_id": 10,
"text": "u,"
},
{
"math_id": 11,
"text": "-u"
},
{
"math_id": 12,
"text": " v"
},
{
"math_id": 13,
"text": "\\Omega\\subset\\R^2,"
},
{
"math_id": 14,
"text": " -u_y"
},
{
"math_id": 15,
"text": "u_x"
},
{
"math_id": 16,
"text": "\\Delta u = 0"
},
{
"math_id": 17,
"text": "u_{xy}=u_{yx}."
},
{
"math_id": 18,
"text": "g(z) := u_x(x,y) - i u_y(x,y)"
},
{
"math_id": 19,
"text": "\\Omega,"
},
{
"math_id": 20,
"text": "\\operatorname{Im} f(x+iy)."
},
{
"math_id": 21,
"text": "\\R^2"
},
{
"math_id": 22,
"text": "u(x,y) = e^x \\sin y. "
},
{
"math_id": 23,
"text": "{\\partial u \\over \\partial x } = e^x \\sin y, \\quad {\\partial^2 u \\over \\partial x^2} = e^x \\sin y"
},
{
"math_id": 24,
"text": "{\\partial u \\over \\partial y} = e^x \\cos y, \\quad {\\partial^2 u \\over \\partial y^2} = - e^x \\sin y,"
},
{
"math_id": 25,
"text": " \\Delta u = \\nabla^2 u = 0"
},
{
"math_id": 26,
"text": "\\Delta"
},
{
"math_id": 27,
"text": "{\\partial u \\over \\partial x} = {\\partial v \\over \\partial y} = e^x \\sin y"
},
{
"math_id": 28,
"text": "{\\partial u \\over \\partial y} = -{\\partial v \\over \\partial x} = e^x \\cos y."
},
{
"math_id": 29,
"text": "{\\partial v \\over \\partial y} = e^x \\sin y"
},
{
"math_id": 30,
"text": "{\\partial v \\over \\partial x} = -e^x \\cos y"
},
{
"math_id": 31,
"text": " v = -e^x \\cos y + C."
}
] | https://en.wikipedia.org/wiki?curid=910519 |
9105381 | Disk laser | A disk laser or active mirror (Fig.1) is a type of diode pumped solid-state laser characterized by a heat sink and laser output that are realized on opposite sides of a thin layer of active gain medium. Despite their name, disk lasers do not have to be circular; other shapes have also been tried. The thickness of the disk is considerably smaller than the laser beam diameter. Initially, this laser cavity configuration had been proposed and realized experimentally for thin slice semiconductor lasers.
The disk laser concept allows very high average and peak powers due to the large active area, leading to moderate power densities on the active material.
Active mirrors and disk lasers.
Initially, disk lasers were called "active mirrors", because the gain medium of a disk laser is essentially an optical mirror with reflection coefficient greater than unity. An active mirror is a thin disk-shaped double-pass optical amplifier.
The first active mirrors were developed in the Laboratory for Laser Energetics (United States).
A scalable diode-end-pumped disk Nd:YAG laser was proposed in the Talbot active-mirror configuration.
The concept was then developed by various research groups, in particular at the University of Stuttgart (Germany) for Yb:doped glasses.
In the "disk laser", the heat sink does not have to be transparent, so, it can be extremely efficient even with large transverse size formula_0 of the device (Fig.1).
The increase in size allows the power scaling to many kilowatts without significant modification of the design.
Limit of power scaling for disk lasers.
The power of such lasers is limited not only by the available pump power, but also by overheating, amplified spontaneous emission (ASE) and the background round-trip loss.
To avoid overheating, the size formula_0 should be increased with power scaling.
Then, to avoid strong losses due to the exponential growth of the ASE, the transverse-trip gain formula_1 cannot be large.
This requires reduction of the gain formula_2; this gain is determined by the reflectivity of the output coupler and the thickness formula_3.
The round-trip gain formula_4 should remain larger than the round-trip loss formula_5 (the difference formula_6 determines the optical energy that is output from the laser cavity at each round-trip).
Reducing the gain formula_2 at a given round-trip loss formula_7 requires increasing the thickness formula_8.
Then, at some critical size, the disk becomes too thick and cannot be pumped above threshold without overheating.
Some features of the power scaling can be seen from a simple model.
Let formula_9 be the saturation intensity of the medium, formula_10 the ratio of frequencies, and formula_11 the thermal loading parameter.
The key parameter
formula_12
determines the maximal power of the disk laser.
The corresponding optimal thickness can be estimated with
formula_13.
The corresponding optimal size
formula_14.
Roughly, the round-trip loss should scale in inverse proportion to the cube root of the required power.
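A minimal numerical sketch of these estimates (formula_12, formula_13, formula_14); the parameter values are arbitrary placeholders used only to expose the scaling with the round-trip loss:

```python
def disk_laser_estimates(Q, R, eta0, beta):
    """Return (P_k, h, L): P_k = eta0*R**2/(Q*beta**3), h ~ R/(Q*beta), L ~ R/(Q*beta**2)."""
    return (eta0 * R**2 / (Q * beta**3),
            R / (Q * beta),
            R / (Q * beta**2))

# Placeholder values in consistent (arbitrary) units, chosen only for illustration.
Q, R, eta0 = 1.0, 1.0, 0.9
for beta in (0.08, 0.04, 0.02):
    P_k, h, L = disk_laser_estimates(Q, R, eta0, beta)
    print(f"beta={beta:.2f}:  P_k={P_k:10.1f}  h={h:6.1f}  L={L:8.1f}")
# Halving the round-trip loss multiplies the limiting power P_k by 8 (inverse cube),
# the optimal thickness h by 2, and the optimal transverse size L by 4.
```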
An additional issue is the efficient delivery of pump energy.
At low round-trip gain, the single-pass absorption of the pump is also low, so recycling of the pump energy is required for efficient operation (see the additional mirror M at the left-hand side of figure 2). For power scaling, the medium should be optically thin, with many passes of pump energy required; lateral delivery of the pump energy might also be a possible solution.
Scaling of disk lasers via self-imaging.
Thin disk diode-pumped solid-state lasers may be scaled by means of transverse mode-locking in Talbot cavities. The remarkable feature of Talbot scaling is that Fresnel number formula_15 of the formula_16 element laser array phase-locked by self-imaging is given by:
formula_17
The limitation on the number of phase-locked emitters formula_18 is due to randomly distributed phase distortions across the active mirror, of the order of formula_19.
Anti-ASE cap.
In order to reduce the impact of ASE, an anti-ASE cap consisting of undoped material on the surface of a disk laser has been suggested.
Such a cap allows spontaneously emitted photons to escape from the active layer and prevents them from resonating in the cavity. Rays cannot bounce (Figure 3) as in an uncovered disk. This could allow an order of magnitude increase in the maximum power achievable by a disk laser. In both cases, the back reflection of the ASE from the edges of the disk should be suppressed. This can be done with absorbing layers, shown with green in Figure 4. At operation close to the maximal power, a significant part of the energy goes into ASE; therefore, the absorbing layers also should be supplied with heat sinks, which are not shown in the figure.
Key parameter for laser materials.
The estimate of the maximal power achievable at a given loss formula_20 is very sensitive to formula_20. The estimate of the upper bound of formula_20 at which the desired output power formula_21 is achievable is robust. This estimate is plotted versus normalized power formula_22 in figure 5. Here, formula_23 is the output power of the laser, and formula_24 is the dimensional scale of power; it is related to the key parameter
formula_25.
The thick dashed line represents the estimate for the uncovered disk. The thick solid line shows the same for the disk with an undoped cap. The thin solid line represents the qualitative estimate formula_26 without coefficients. Circles correspond to the experimental data for the power achieved and the corresponding estimates for the background loss formula_20. All future experiments, numerical simulations and estimates are expected to give values of formula_27 that are below the red dashed line in Fig. 5 for uncovered disks, and below the blue curve for disks with an anti-ASE cap. This can be interpreted as a scaling law for disk lasers.
In the vicinity of the curves mentioned, the efficiency of the disk laser is low; most of the pumping power goes to ASE and is absorbed at the edges of the device. In these cases, distributing the available pump energy among several disks may significantly improve the performance of the lasers. Indeed, several reported lasers use multiple elements combined in the same cavity.
Pulsed operation.
Similar scaling laws apply to pulsed operation. In the quasi-continuous-wave regime, the maximal mean power can be estimated by scaling the saturation intensity with the fill factor of the pump and with the product of the pump duration and the repetition rate. For short pulses, a more detailed analysis is required.
At moderate values of the repetition rate (say, higher than 1 Hz), the maximal energy of the output pulses is roughly inversely proportional to the cube of the background loss formula_20; the undoped cap may provide an additional order of magnitude of mean output power, under the condition that this cap does not contribute to the background loss.
At low repetition rates (and in the single-pulse regime) with sufficient pump power, there is no general limit on the energy, but the required size of the device grows quickly with the required pulse energy, setting the practical limit on energy; it is estimated that from a few joules to a few thousand joules can be extracted in an optical pulse from a single active element, depending on the level of the background internal loss of the signal in the disk.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "~L~"
},
{
"math_id": 1,
"text": "~u=GL~"
},
{
"math_id": 2,
"text": "G~"
},
{
"math_id": 3,
"text": "~h"
},
{
"math_id": 4,
"text": "~g=2Gh~"
},
{
"math_id": 5,
"text": "\\beta~"
},
{
"math_id": 6,
"text": "g\\!-\\!\\beta~"
},
{
"math_id": 7,
"text": "~\\beta~"
},
{
"math_id": 8,
"text": "h"
},
{
"math_id": 9,
"text": "Q~"
},
{
"math_id": 10,
"text": "\\eta_0=\\omega_{\\rm s}/\\omega_{\\rm p}~~"
},
{
"math_id": 11,
"text": "R~"
},
{
"math_id": 12,
"text": "P_{\\rm k}=\\eta_0\\frac{R^2}{Q\\beta^3}~"
},
{
"math_id": 13,
"text": "h \\sim \\frac{R}{Q \\beta}"
},
{
"math_id": 14,
"text": "L \\sim \\frac{R}{Q \\beta^2}"
},
{
"math_id": 15,
"text": "F"
},
{
"math_id": 16,
"text": "N-"
},
{
"math_id": 17,
"text": "F=(N-1)^2."
},
{
"math_id": 18,
"text": "N"
},
{
"math_id": 19,
"text": "\\lambda/10 \\div \\lambda/100"
},
{
"math_id": 20,
"text": "\\beta"
},
{
"math_id": 21,
"text": "P_{\\rm s}"
},
{
"math_id": 22,
"text": "s=P_{\\rm s}/P_{\\rm d}"
},
{
"math_id": 23,
"text": " P_{\\rm s}"
},
{
"math_id": 24,
"text": "P_{\\rm d}=R^2/Q"
},
{
"math_id": 25,
"text": "P_{\\rm k}=P_{\\rm d}/\\beta^3"
},
{
"math_id": 26,
"text": "\\beta=s^{1/3}"
},
{
"math_id": 27,
"text": "(\\beta, s)"
}
] | https://en.wikipedia.org/wiki?curid=9105381 |
9105584 | Leverett J-function | In petroleum engineering, the Leverett "J"-function is a dimensionless function of water saturation describing the capillary pressure,
formula_0
where formula_1 is the water saturation measured as a fraction, formula_2 is the capillary pressure (in pascal), formula_3 is the permeability (measured in m²), formula_4 is the porosity (0-1), formula_5 is the surface tension (in N/m) and formula_6 is the contact angle. The function is important in that it is constant for a given saturation within a reservoir, thus relating reservoir properties for neighboring beds.
The Leverett "J"-function is an attempt at extrapolating capillary pressure data for a given rock to rocks that are similar but with differing permeability, porosity and wetting properties. It assumes that the porous rock can be modelled as a bundle of non-connecting capillary tubes, where the factor formula_7 is a characteristic length of the capillaries' radii.
This function is also widely used in modeling two-phase flow in proton-exchange membrane fuel cells. A large degree of hydration is needed for good proton conductivity, while a large liquid water saturation in the pores of the catalyst layer or diffusion media will impede gas transport in the cathode.
The J-function for analyzing capillary pressure data is analogous to the TEM-function for analyzing relative permeability data.
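The definition can be evaluated directly; a minimal Python sketch with arbitrary illustrative inputs in SI units:

```python
import math

def leverett_j(p_c: float, k: float, phi: float,
               gamma: float, theta_rad: float) -> float:
    """J = p_c * sqrt(k / phi) / (gamma * cos(theta)).

    p_c in Pa, k in m^2, phi dimensionless (0-1), gamma in N/m, theta in radians.
    """
    return p_c * math.sqrt(k / phi) / (gamma * math.cos(theta_rad))

# Illustrative example: p_c = 5 kPa, k = 100 mD ~ 9.87e-14 m^2, phi = 0.2,
# gamma = 0.03 N/m, contact angle 30 degrees.
print(leverett_j(5e3, 9.87e-14, 0.2, 0.03, math.radians(30.0)))
```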
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "J(S_w) = \\frac{p_c(S_w) \\sqrt{k/\\phi}}{\\gamma \\cos \\theta}"
},
{
"math_id": 1,
"text": "S_w"
},
{
"math_id": 2,
"text": "p_c"
},
{
"math_id": 3,
"text": "k"
},
{
"math_id": 4,
"text": "\\phi"
},
{
"math_id": 5,
"text": "\\gamma"
},
{
"math_id": 6,
"text": "\\theta"
},
{
"math_id": 7,
"text": "\\sqrt{k/\\phi}"
}
] | https://en.wikipedia.org/wiki?curid=9105584 |
9105867 | FitzHugh–Nagumo model | Toy model of excitable media
The FitzHugh–Nagumo model (FHN) describes a prototype of an excitable system (e.g., a neuron).
It is an example of a relaxation oscillator because, if the external stimulus formula_0 exceeds a certain threshold value, the system will exhibit a characteristic excursion in phase space, before the variables formula_1 and formula_2 relax back to their rest values.
This behaviour is a sketch of neural spike generation, with a short, nonlinear elevation of the membrane voltage formula_1, diminished over time by a slower, linear recovery variable formula_2 representing sodium channel reactivation and potassium channel deactivation, after stimulation by an external input current.
The equations for this dynamical system read
formula_3
formula_4
The FitzHugh–Nagumo model is a simplified 2D version of the Hodgkin–Huxley model which models in a detailed manner activation and deactivation dynamics of a spiking neuron.
In turn, the Van der Pol oscillator is a special case of the FitzHugh–Nagumo model, with formula_5.
History.
It was named after Richard FitzHugh (1922–2007) who suggested the system in 1961 and Jinichi Nagumo "et al". who created the equivalent circuit the following year.
In the original papers of FitzHugh, this model was called Bonhoeffer–Van der Pol oscillator (named after Karl-Friedrich Bonhoeffer and Balthasar van der Pol) because it contains the Van der Pol oscillator as a special case for formula_5. The equivalent circuit was suggested by Jin-ichi Nagumo, Suguru Arimoto, and Shuji Yoshizawa.
Qualitative analysis.
Qualitatively, the dynamics of this system is determined by the relation between the three branches of the cubic nullcline and the linear nullcline.
The cubic nullcline is defined by formula_6.
The linear nullcline is defined by formula_7.
In general, the two nullclines intersect at one or three points, each of which is an equilibrium point. At large values of formula_8, far from the origin, the flow is a clockwise circular flow; consequently, the sum of the indices for the entire vector field is +1. This means that when there is one equilibrium point, it must be a clockwise spiral point or a node. When there are three equilibrium points, they must be two clockwise spiral points and one saddle point.
The type and stability of the equilibrium point with index +1 can be determined by computing the trace and determinant of its Jacobian:
formula_9
The point is stable iff the trace is negative. That is, formula_10.
The point is a spiral point iff formula_11. That is, formula_12.
The limit cycle is born when a stable spiral point becomes unstable by Hopf bifurcation.
Only when the linear nullcline pierces the cubic nullcline at three points does the system have a separatrix, namely the two branches of the stable manifold of the saddle point in the middle.
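The characteristic excursion described above can be reproduced by direct numerical integration; a minimal forward-Euler sketch (a = 0.7 and τ = 12.5 follow the figure parameters below, while b = 0.8, R = 1 and I_ext = 0.5 are common textbook choices assumed here for illustration):

```python
import numpy as np

def fhn(v0=-1.0, w0=-0.5, a=0.7, b=0.8, tau=12.5, R=1.0,
        I_ext=0.5, dt=0.01, steps=20000):
    """Integrate dv/dt = v - v^3/3 - w + R*I_ext, tau*dw/dt = v + a - b*w."""
    v, w = v0, w0
    vs = np.empty(steps)
    for i in range(steps):
        dv = v - v**3 / 3.0 - w + R * I_ext
        dw = (v + a - b * w) / tau
        v, w = v + dt * dv, w + dt * dw
        vs[i] = v
    return vs

trace = fhn()
print(trace.min(), trace.max())   # repeated excursions: a relaxation oscillation in v
```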
Gallery figures: FitzHugh–Nagumo model, with formula_13, and varying formula_14.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "I_{\\text{ext}}"
},
{
"math_id": 1,
"text": "v"
},
{
"math_id": 2,
"text": "w"
},
{
"math_id": 3,
"text": "\n\\dot{v}=v-\\frac{v^3}{3} - w + RI_{\\rm ext} \n"
},
{
"math_id": 4,
"text": "\n\\tau \\dot{w} = v+a-b w.\n"
},
{
"math_id": 5,
"text": " a=b=0 "
},
{
"math_id": 6,
"text": "\\dot v = 0 \\leftrightarrow w = v-v^3/3 + RI_{ext}"
},
{
"math_id": 7,
"text": "\\dot w = 0 \\leftrightarrow w = (v+a)/b"
},
{
"math_id": 8,
"text": "v^2 + w^2"
},
{
"math_id": 9,
"text": "(tr, \\det) = (1-b/\\tau - v^2, (v^2-1)b/\\tau +1/\\tau)"
},
{
"math_id": 10,
"text": "v^2 > 1-b/\\tau "
},
{
"math_id": 11,
"text": "4\\det - tr^2 > 0"
},
{
"math_id": 12,
"text": "(\\tau v^2 - b - \\tau)^2 < 4\\tau"
},
{
"math_id": 13,
"text": "a = 0.7, \\tau = 12.5, R = 0.1"
},
{
"math_id": 14,
"text": "b, I_{ext}"
}
] | https://en.wikipedia.org/wiki?curid=9105867 |
9107356 | 5-simplex | Regular 5-polytope
In five-dimensional geometry, a 5-simplex is a self-dual regular 5-polytope. It has six vertices, 15 edges, 20 triangle faces, 15 tetrahedral cells, and 6 5-cell facets. It has a dihedral angle of cos−1(), or approximately 78.46°.
The 5-simplex is a solution to the problem: "Make 20 equilateral triangles using 15 matchsticks, where each side of every triangle is exactly one matchstick."
Alternate names.
It can also be called a hexateron, or hexa-5-tope, as a 6-facetted polytope in 5-dimensions. The name "hexateron" is derived from "hexa-" for having six facets and "teron" (with "ter-" being a corruption of "tetra-") for having four-dimensional facets.
By Jonathan Bowers, a hexateron is given the acronym hix.
As a configuration.
This configuration matrix represents the 5-simplex. The rows and columns correspond to vertices, edges, faces, cells and 4-faces. The diagonal numbers say how many of each element occur in the whole 5-simplex. The nondiagonal numbers say how many of the column's element occur in or at the row's element. This self-dual simplex's matrix is identical to its 180 degree rotation.
formula_0
Regular hexateron cartesian coordinates.
The "hexateron" can be constructed from a 5-cell by adding a 6th vertex such that it is equidistant from all the other vertices of the 5-cell.
The Cartesian coordinates for the vertices of an origin-centered regular hexateron having edge length 2 are:
formula_1
The vertices of the "5-simplex" can be more simply positioned on a hyperplane in 6-space as permutations of (0,0,0,0,0,1) "or" (0,1,1,1,1,1). These constructions can be seen as facets of the 6-orthoplex or rectified 6-cube respectively.
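This 6-space description can be verified numerically; a minimal Python sketch checking the permutations of (0,0,0,0,0,1):

```python
import itertools, math

# The six distinct permutations of (0,0,0,0,0,1) lie in the hyperplane
# sum(x) = 1, and all 15 pairwise distances equal sqrt(2), so they form a
# regular 5-simplex (a scaled copy of the coordinates given above).
verts = set(itertools.permutations((0, 0, 0, 0, 0, 1)))
assert len(verts) == 6
assert all(sum(v) == 1 for v in verts)

dists = {math.dist(p, q) for p, q in itertools.combinations(verts, 2)}
assert len(dists) == 1 and math.isclose(dists.pop(), math.sqrt(2))
```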
Lower symmetry forms.
A lower symmetry form is a "5-cell pyramid" {3,3,3}∨( ), with [3,3,3] symmetry order 120, constructed as a 5-cell base in a 4-space hyperplane, and an apex point "above" the hyperplane. The five "sides" of the pyramid are made of 5-cell cells. These are seen as vertex figures of truncated regular 6-polytopes, like a truncated 6-cube.
Another form is {3,3}∨{ }, with [3,3,2,1] symmetry order 48, the joining of an orthogonal digon and a tetrahedron, orthogonally offset, with all pairs of vertices connected between. Another form is {3}∨{3}, with [3,2,3,1] symmetry order 36, and extended symmetry [[3,2,3],1], order 72. It represents joining of 2 orthogonal triangles, orthogonally offset, with all pairs of vertices connected between.
The form { }∨{ }∨{ } has symmetry [2,2,1,1], order 8, extended by permuting 3 segments as [3[2,2],1] or [4,3,1,1], order 48.
These are seen in the vertex figures of bitruncated and tritruncated regular 6-polytopes, like a bitruncated 6-cube and a tritruncated 6-simplex. The edge labels here represent the types of face along that direction, and thus represent different edge lengths.
The vertex figure of the omnitruncated 5-simplex honeycomb is a 5-simplex with a Petrie polygon cycle of 5 long edges. Its symmetry is isomorphic to the dihedral group Dih6 or the simple rotation group [6,2]+, order 12.
Compound.
The compound of two 5-simplexes in dual configurations can be seen in this A6 Coxeter plane projection, with red and blue 5-simplex vertices and edges. This compound has [[3,3,3,3]] symmetry, order 1440. The intersection of these two 5-simplexes is a uniform birectified 5-simplex.
Related uniform 5-polytopes.
It is first in a dimensional series of uniform polytopes and honeycombs, expressed by Coxeter as the 13k series. A degenerate 4-dimensional case exists as a 3-sphere tiling, a tetrahedral hosohedron.
It is first in a dimensional series of uniform polytopes and honeycombs, expressed by Coxeter as the 3k1 series. A degenerate 4-dimensional case exists as a 3-sphere tiling, a tetrahedral dihedron.
The 5-simplex, as 220 polytope is first in dimensional series 22k.
The regular 5-simplex is one of 19 uniform polytera based on the [3,3,3,3] Coxeter group, all shown here in A5 Coxeter plane orthographic projections. (Vertices are colored by projection overlap order, red, orange, yellow, green, cyan, blue, purple having progressively more vertices.)
Notes.
<templatestyles src="Reflist/styles.css" />
External links.
{
"math_id": 0,
"text": "\\begin{bmatrix}\\begin{matrix}6 & 5 & 10 & 10 & 5 \\\\ 2 & 15 & 4 & 6 & 4 \\\\ 3 & 3 & 20 & 3 & 3 \\\\ 4 & 6 & 4 & 15 & 2 \\\\ 5 & 10 & 10 & 5 & 6 \\end{matrix}\\end{bmatrix}"
},
{
"math_id": 1,
"text": "\\begin{align}\n&\\left(\\tfrac{1}\\sqrt{15},\\ \\tfrac{1}\\sqrt{10},\\ \\tfrac{1}\\sqrt{6},\\ \\tfrac{1}\\sqrt{3},\\ \\pm1\\right)\\\\[5pt]\n&\\left(\\tfrac{1}\\sqrt{15},\\ \\tfrac{1}\\sqrt{10},\\ \\tfrac{1}\\sqrt{6},\\ -\\tfrac{2}\\sqrt{3},\\ 0\\right)\\\\[5pt]\n&\\left(\\tfrac{1}\\sqrt{15},\\ \\tfrac{1}\\sqrt{10},\\ -\\tfrac\\sqrt{3}\\sqrt{2},\\ 0,\\ 0\\right)\\\\[5pt]\n&\\left(\\tfrac{1}\\sqrt{15},\\ -\\tfrac{2\\sqrt 2}\\sqrt{5},\\ 0,\\ 0,\\ 0\\right)\\\\[5pt]\n&\\left(-\\tfrac\\sqrt{5}\\sqrt{3},\\ 0,\\ 0,\\ 0,\\ 0\\right)\n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=9107356 |
9109 | Diophantine equation | Polynomial equation whose integer solutions are sought
In mathematics, a Diophantine equation is an equation, typically a polynomial equation in two or more unknowns with integer coefficients, for which only integer solutions are of interest. A linear Diophantine equation equates to a constant the sum of two or more monomials, each of degree one. An exponential Diophantine equation is one in which unknowns can appear in exponents.
Diophantine problems have fewer equations than unknowns and involve finding integers that simultaneously solve all equations. As such systems of equations define algebraic curves, algebraic surfaces, or, more generally, algebraic sets, their study is a part of algebraic geometry that is called "Diophantine geometry".
The word "Diophantine" refers to the Hellenistic mathematician of the 3rd century, Diophantus of Alexandria, who made a study of such equations and was one of the first mathematicians to introduce symbolism into algebra. The mathematical study of Diophantine problems that Diophantus initiated is now called Diophantine analysis.
While individual equations present a kind of puzzle and have been considered throughout history, the formulation of general theories of Diophantine equations (beyond the case of linear and quadratic equations) was an achievement of the twentieth century.
Examples.
In the following Diophantine equations, w, x, y, and z are the unknowns and the other letters are given constants:
Linear Diophantine equations.
One equation.
The simplest linear Diophantine equation takes the form
formula_0
where a, b and c are given integers. The solutions are described by the following theorem:
"This Diophantine equation has a solution" (where x and y are integers) "if and only if" c "is a multiple of the greatest common divisor of" a "and" b. "Moreover, if" ("x, y") "is a solution, then the other solutions have the form" ("x" + "kv, y" − "ku"), "where" k "is an arbitrary integer, and" u "and" v "are the quotients of" a "and" b "(respectively) by the greatest common divisor of" a "and" b.
Proof: If d is this greatest common divisor, Bézout's identity asserts the existence of integers e and f such that "ae" + "bf" = "d". If c is a multiple of d, then "c" = "dh" for some integer h, and ("eh, fh") is a solution. On the other hand, for every pair of integers x and y, the greatest common divisor d of a and b divides "ax" + "by". Thus, if the equation has a solution, then c must be a multiple of d. If "a" = "ud" and "b" = "vd", then for every solution ("x, y"), we have
formula_1
showing that ("x" + "kv, y" − "ku") is another solution. Finally, given two solutions such that
formula_2
one deduces that formula_3
As u and v are coprime, Euclid's lemma shows that v divides "x"2 − "x"1, and thus that there exists an integer k such that both
formula_4
Therefore,
formula_5
which completes the proof.
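A minimal computational sketch of this theorem, using the extended Euclidean algorithm to produce a Bézout identity and one particular solution:

```python
def extended_gcd(a: int, b: int):
    """Return (g, e, f) with g = gcd(a, b) = a*e + b*f (a, b not both zero)."""
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    g, e, f = extended_gcd(b, a % b)
    return (g, f, e - (a // b) * f)

def solve_linear_diophantine(a: int, b: int, c: int):
    """Return one integer solution (x, y) of a*x + b*y = c, or None if none exists."""
    g, e, f = extended_gcd(a, b)
    if c % g != 0:
        return None                 # c must be a multiple of gcd(a, b)
    h = c // g
    return (e * h, f * h)           # other solutions: (x + k*b/g, y - k*a/g)

print(solve_linear_diophantine(12, 42, 30))   # 12*(-15) + 42*5 = 30
```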
Chinese remainder theorem.
The Chinese remainder theorem describes an important class of linear Diophantine systems of equations: let formula_6 be k pairwise coprime integers greater than one, formula_7 be k arbitrary integers, and N be the product formula_8 The Chinese remainder theorem asserts that the following linear Diophantine system has exactly one solution formula_9 such that 0 ≤ "x" < "N", and that the other solutions are obtained by adding to x a multiple of N:
formula_10
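A minimal sketch of one standard constructive solution of such a system, using modular inverses (the example residues and moduli are illustrative):

```python
from math import prod

def crt(residues, moduli):
    """Return the unique 0 <= x < N with x = a_i (mod n_i), for pairwise coprime n_i."""
    N = prod(moduli)
    x = 0
    for a_i, n_i in zip(residues, moduli):
        N_i = N // n_i
        # pow(N_i, -1, n_i) is the modular inverse; it exists because the moduli are coprime.
        x += a_i * N_i * pow(N_i, -1, n_i)
    return x % N

# x = 2 (mod 3), x = 3 (mod 5), x = 2 (mod 7)  ->  x = 23, plus multiples of 105.
print(crt([2, 3, 2], [3, 5, 7]))
```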
System of linear Diophantine equations.
More generally, every system of linear Diophantine equations may be solved by computing the Smith normal form of its matrix, in a way that is similar to the use of the reduced row echelon form to solve a system of linear equations over a field. Using matrix notation every system of linear Diophantine equations may be written
formula_11
where A is an "m" × "n" matrix of integers, X is an "n" × 1 column matrix of unknowns and C is an "m" × 1 column matrix of integers.
The computation of the Smith normal form of A provides two unimodular matrices (that is matrices that are invertible over the integers and have ±1 as determinant) U and V of respective dimensions "m" × "m" and "n" × "n", such that the matrix
formula_12
is such that bi,i is not zero for i not greater than some integer k, and all the other entries are zero. The system to be solved may thus be rewritten as
formula_13
Calling yi the entries of "V"−1"X" and di those of "D" = "UC", this leads to the system
formula_14
This system is equivalent to the given one in the following sense: A column matrix of integers x is a solution of the given system if and only if "x" = "Vy" for some column matrix of integers y such that "By" = "D".
It follows that the system has a solution if and only if bi,i divides di for "i" ≤ "k" and "di" = 0 for "i" > "k". If this condition is fulfilled, the solutions of the given system are
formula_15
where "h""k"+1, …, "hn" are arbitrary integers.
Hermite normal form may also be used for solving systems of linear Diophantine equations. However, Hermite normal form does not directly provide the solutions; to get the solutions from the Hermite normal form, one has to successively solve several linear equations. Nevertheless, Richard Zippel wrote that the Smith normal form "is somewhat more than is actually needed to solve linear diophantine equations. Instead of reducing the equation to diagonal form, we only need to make it triangular, which is called the Hermite normal form. The Hermite normal form is substantially easier to compute than the Smith normal form."
Integer linear programming amounts to finding some integer solutions (optimal in some sense) of linear systems that include also inequations. Thus systems of linear Diophantine equations are basic in this context, and textbooks on integer programming usually have a treatment of systems of linear Diophantine equations.
Homogeneous equations.
A homogeneous Diophantine equation is a Diophantine equation that is defined by a homogeneous polynomial. A typical such equation is the equation of Fermat's Last Theorem
formula_16
As a homogeneous polynomial in n indeterminates defines a hypersurface in the projective space of dimension "n" − 1, solving a homogeneous Diophantine equation is the same as finding the rational points of a projective hypersurface.
Solving a homogeneous Diophantine equation is generally a very difficult problem, even in the simplest non-trivial case of three indeterminates (in the case of two indeterminates the problem is equivalent with testing if a rational number is the dth power of another rational number). A witness of the difficulty of the problem is Fermat's Last Theorem (for "d" > 2, there is no integer solution of the above equation), which needed more than three centuries of mathematicians' efforts before being solved.
For degrees higher than three, most known results are theorems asserting that there are no solutions (for example Fermat's Last Theorem) or that the number of solutions is finite (for example Faltings's theorem).
For degree three, there are general solving methods that work on almost all equations encountered in practice, but no algorithm is known that works for every cubic equation.
Degree two.
Homogeneous Diophantine equations of degree two are easier to solve. The standard solving method proceeds in two steps. One has first to find one solution, or to prove that there is no solution. When a solution has been found, all solutions are then deduced.
For proving that there is no solution, one may reduce the equation modulo p. For example, the Diophantine equation
formula_17
does not have any other solution than the trivial solution (0, 0, 0). In fact, by dividing x, y, and z by their greatest common divisor, one may suppose that they are coprime. The squares modulo 4 are congruent to 0 and 1. Thus the left-hand side of the equation is congruent to 0, 1, or 2, and the right-hand side is congruent to 0 or 3. Thus the equality may be obtained only if x, y, and z are all even, and are thus not coprime. Thus the only solution is the trivial solution (0, 0, 0). This shows that there is no rational point on a circle of radius formula_18 centered at the origin.
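The argument can be illustrated numerically; a minimal Python sketch:

```python
from itertools import product
from math import gcd

# Squares are congruent to 0 or 1 (mod 4), as used in the argument above.
print(sorted({(n * n) % 4 for n in range(4)}))   # [0, 1]

# A brute-force search over primitive triples (signs do not matter) finds none.
hits = [(x, y, z)
        for x, y, z in product(range(31), repeat=3)
        if (x, y, z) != (0, 0, 0)
        and gcd(gcd(x, y), z) == 1
        and x * x + y * y == 3 * z * z]
print(hits)   # []
```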
More generally, the Hasse principle allows deciding whether a homogeneous Diophantine equation of degree two has an integer solution, and computing a solution if one exists.
If a non-trivial integer solution is known, one may produce all other solutions in the following way.
Geometric interpretation.
Let
formula_19
be a homogeneous Diophantine equation, where formula_20 is a quadratic form (that is, a homogeneous polynomial of degree 2), with integer coefficients. The "trivial solution" is the solution where all formula_21 are zero. If formula_22 is a non-trivial integer solution of this equation, then formula_23 are the homogeneous coordinates of a rational point of the hypersurface defined by Q. Conversely, if formula_24 are homogeneous coordinates of a rational point of this hypersurface, where formula_25 are integers, then formula_26 is an integer solution of the Diophantine equation. Moreover, the integer solutions that define a given rational point are all sequences of the form
formula_27
where k is any integer, and d is the greatest common divisor of the formula_28
It follows that solving the Diophantine equation formula_19 is completely reduced to finding the rational points of the corresponding projective hypersurface.
Parameterization.
Let now formula_29 be an integer solution of the equation formula_30 As Q is a polynomial of degree two, a line passing through A crosses the hypersurface at a single other point, which is rational if and only if the line is rational (that is, if the line is defined by rational parameters). This allows parameterizing the hypersurface by the lines passing through A, and the rational points are those that are obtained from rational lines, that is, those that correspond to rational values of the parameters.
More precisely, one may proceed as follows.
By permuting the indices, one may suppose, without loss of generality that formula_31 Then one may pass to the affine case by considering the affine hypersurface defined by
formula_32
which has the rational point
formula_33
If this rational point is a singular point, that is if all partial derivatives are zero at R, all lines passing through R are contained in the hypersurface, and one has a cone. The change of variables
formula_34
does not change the rational points, and transforms q into a homogeneous polynomial in "n" − 1 variables. In this case, the problem may thus be solved by applying the method to an equation with fewer variables.
If the polynomial q is a product of linear polynomials (possibly with non-rational coefficients), then it defines two hyperplanes. The intersection of these hyperplanes is a rational flat, and contains rational singular points. This case is thus a special instance of the preceding case.
In the general case, consider the parametric equation of a line passing through R:
formula_35
Substituting this in q, one gets a polynomial of degree two in "x"1, that is zero for "x"1 = "r"1. It is thus divisible by "x"1 – "r"1. The quotient is linear in "x"1, and may be solved for expressing "x"1 as a quotient of two polynomials of degree at most two in formula_36 with integer coefficients:
formula_37
Substituting this in the expressions for formula_38 one gets, for "i" = 1, …, "n" − 1,
formula_39
where formula_40 are polynomials of degree at most two with integer coefficients.
Then, one can return to the homogeneous case. Let, for "i" = 1, …, "n",
formula_41
be the homogenization of formula_42 These quadratic polynomials with integer coefficients form a parameterization of the projective hypersurface defined by Q:
formula_43
A point of the projective hypersurface defined by Q is rational if and only if it may be obtained from rational values of formula_44 As formula_45 are homogeneous polynomials, the point is not changed if all ti are multiplied by the same rational number. Thus, one may suppose that formula_46 are coprime integers. It follows that the integer solutions of the Diophantine equation are exactly the sequences formula_47 where, for "i" = 1, ..., "n",
formula_48
where k is an integer, formula_46 are coprime integers, and d is the greatest common divisor of the n integers formula_49
One could hope that the coprimality of the ti, could imply that "d" = 1. Unfortunately this is not the case, as shown in the next section.
Example of Pythagorean triples.
The equation
formula_50
is probably the first homogeneous Diophantine equation of degree two that has been studied. Its solutions are the Pythagorean triples. This is also the homogeneous equation of the unit circle. In this section, we show how the above method allows retrieving Euclid's formula for generating Pythagorean triples.
For retrieving exactly Euclid's formula, we start from the solution (−1, 0, 1), corresponding to the point (−1, 0) of the unit circle. A line passing through this point may be parameterized by its slope:
formula_51
Putting this in the circle equation
formula_52
one gets
formula_53
Dividing by "x" + 1, results in
formula_54
which is easy to solve in x:
formula_55
It follows
formula_56
Homogenizing as described above one gets all solutions as
formula_57
where k is any integer, s and t are coprime integers, and d is the greatest common divisor of the three numerators. In fact, "d" = 2 if s and t are both odd, and "d" = 1 if one is odd and the other is even.
The "primitive triples" are the solutions where "k" = 1 and "s" > "t" > 0.
This description of the solutions differs slightly from Euclid's formula because Euclid's formula considers only the solutions such that x, y, and z are all positive, and does not distinguish between two triples that differ by the exchange of x and y.
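A minimal sketch generating primitive triples from this parameterization:

```python
from math import gcd

def primitive_triple(s: int, t: int):
    """Return ((s^2 - t^2)/d, 2*s*t/d, (s^2 + t^2)/d) for coprime s > t > 0,
    with d = 2 if s and t are both odd and d = 1 otherwise (the case k = 1 above)."""
    assert s > t > 0 and gcd(s, t) == 1
    d = 2 if (s % 2 == 1 and t % 2 == 1) else 1
    x, y, z = (s * s - t * t) // d, 2 * s * t // d, (s * s + t * t) // d
    assert x * x + y * y == z * z
    return x, y, z

print(primitive_triple(2, 1))   # (3, 4, 5)
print(primitive_triple(3, 2))   # (5, 12, 13)
print(primitive_triple(3, 1))   # (4, 3, 5) -- s and t both odd, so d = 2
```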
Diophantine analysis.
Typical questions.
The questions asked in Diophantine analysis include:
These traditional problems often lay unsolved for centuries, and mathematicians gradually came to understand their depth (in some cases), rather than treat them as puzzles.
Typical problem.
The given information is that a father's age is 1 less than twice that of his son, and that the digits AB making up the father's age are reversed in the son's age (i.e. BA). This leads to the equation 10"A" + "B" = 2(10"B" + "A") − 1, thus 19"B" − 8"A" = 1. Inspection gives the result "A" = 7, "B" = 3, and thus AB equals 73 years and BA equals 37 years. One may easily show that there is not any other solution with A and B positive integers less than 10.
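A minimal brute-force check of this puzzle:

```python
# Check 19*B - 8*A = 1 for digits A, B between 1 and 9.
solutions = [(A, B) for A in range(1, 10) for B in range(1, 10) if 19 * B - 8 * A == 1]
print(solutions)                         # [(7, 3)] -> father 73, son 37
assert 10 * 7 + 3 == 2 * (10 * 3 + 7) - 1
```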
Many well known puzzles in the field of recreational mathematics lead to diophantine equations. Examples include the cannonball problem, Archimedes's cattle problem and the monkey and the coconuts.
17th and 18th centuries.
In 1637, Pierre de Fermat scribbled on the margin of his copy of "Arithmetica": "It is impossible to separate a cube into two cubes, or a fourth power into two fourth powers, or in general, any power higher than the second into two like powers." Stated in more modern language, "The equation "an" + "bn" = "cn" has no solutions for any n higher than 2." Following this, he wrote: "I have discovered a truly marvelous proof of this proposition, which this margin is too narrow to contain." Such a proof eluded mathematicians for centuries, however, and as such his statement became famous as Fermat's Last Theorem. It was not until 1995 that it was proven by the British mathematician Andrew Wiles.
In 1657, Fermat attempted to solve the Diophantine equation 61"x"2 + 1 = "y"2 (solved by Brahmagupta over 1000 years earlier). The equation was eventually solved by Euler in the early 18th century, who also solved a number of other Diophantine equations. The smallest solution of this equation in positive integers is "x" = 226153980, "y" = 1766319049 (see Chakravala method).
Hilbert's tenth problem.
In 1900, David Hilbert proposed the solvability of all Diophantine equations as the tenth of his fundamental problems. In 1970, Yuri Matiyasevich solved it negatively, building on work of Julia Robinson, Martin Davis, and Hilary Putnam to prove that a general algorithm for solving all Diophantine equations cannot exist.
Diophantine geometry.
Diophantine geometry is the application of techniques from algebraic geometry which considers equations that also have a geometric meaning. The central idea of Diophantine geometry is that of a rational point, namely a solution to a polynomial equation or a system of polynomial equations, which is a vector in a prescribed field K, when K is "not" algebraically closed.
Modern research.
The oldest general method for solving a Diophantine equation—or for proving that there is no solution— is the method of infinite descent, which was introduced by Pierre de Fermat. Another general method is the Hasse principle that uses modular arithmetic modulo all prime numbers for finding the solutions. Despite many improvements these methods cannot solve most Diophantine equations.
The difficulty of solving Diophantine equations is illustrated by Hilbert's tenth problem, which was set in 1900 by David Hilbert; it was to find an algorithm to determine whether a given polynomial Diophantine equation with integer coefficients has an integer solution. Matiyasevich's theorem implies that such an algorithm cannot exist.
During the 20th century, a new approach was deeply explored: the use of algebraic geometry. In fact, a Diophantine equation can be viewed as the equation of a hypersurface, and the solutions of the equation are the points of the hypersurface that have integer coordinates.
This approach led eventually to the proof by Andrew Wiles in 1994 of Fermat's Last Theorem, stated without proof around 1637. This is another illustration of the difficulty of solving Diophantine equations.
Infinite Diophantine equations.
An example of an infinite Diophantine equation is:
formula_58
which can be expressed as "How many ways can a given integer n be written as the sum of a square plus twice a square plus thrice a square and so on?" The number of ways this can be done for each n forms an integer sequence. Infinite Diophantine equations are related to theta functions and infinite dimensional lattices. This equation always has a solution for any positive n. Compare this to:
formula_59
which does not always have a solution for positive n.
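For small "n" the number of representations can be counted directly, since only the finitely many terms with coefficient at most "n" can be nonzero. The sketch below (an illustration, not from the original article) counts representations with nonnegative integer variables; whether negative values are also counted is a convention choice that changes the counts but not the existence statements.

```python
def count_representations(n, coefficients):
    """Count solutions of n = sum(c * a_c**2) over nonnegative integers a_c."""
    ways = [1] + [0] * n          # ways[m] = representations of m found so far
    for c in coefficients:
        if c > n:
            break
        new = list(ways)          # a_c = 0 leaves ways[m] unchanged
        for m in range(n + 1):
            if ways[m]:
                a = 1
                while m + c * a * a <= n:
                    new[m + c * a * a] += ways[m]
                    a += 1
        ways = new
    return ways[n]

# Coefficients 1, 2, 3, ...: every positive n has at least one representation.
print([count_representations(n, range(1, n + 1)) for n in range(1, 9)])
# Coefficients 1, 4, 9, ...: some n (e.g. 2, 3, 7) have none.
print([count_representations(n, [j * j for j in range(1, n + 1)]) for n in range(1, 9)])
```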
Exponential Diophantine equations.
If a Diophantine equation has one or more additional variables that occur as exponents, it is an exponential Diophantine equation. Examples include:
A general theory for such equations is not available; particular cases such as Catalan's conjecture and Fermat's Last Theorem have been tackled. However, the majority are solved via ad-hoc methods such as Størmer's theorem or even trial and error.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "ax+by=c,"
},
{
"math_id": 1,
"text": "\\begin{align}\na(x+kv) + b(y-ku) &= ax+by+k(av-bu) \\\\\n&= ax+by+k(udv-vdu) \\\\\n&= ax+by,\n\\end{align}"
},
{
"math_id": 2,
"text": "ax_1 + by_1 = ax_2 + by_2 = c,"
},
{
"math_id": 3,
"text": "u(x_2 - x_1) + v(y_2 - y_1) = 0."
},
{
"math_id": 4,
"text": "x_2 - x_1 = kv, \\quad y_2 - y_1 = -ku."
},
{
"math_id": 5,
"text": "x_2 = x_1 + kv, \\quad y_2 = y_1 - ku,"
},
{
"math_id": 6,
"text": "n_1, \\dots, n_k"
},
{
"math_id": 7,
"text": "a_1, \\dots, a_k"
},
{
"math_id": 8,
"text": "n_1 \\cdots n_k."
},
{
"math_id": 9,
"text": "(x, x_1, \\dots, x_k)"
},
{
"math_id": 10,
"text": "\\begin{align}\nx &= a_1 + n_1\\,x_1\\\\\n&\\;\\;\\vdots\\\\\nx &= a_k + n_k\\,x_k\n\\end{align}"
},
{
"math_id": 11,
"text": "AX = C,"
},
{
"math_id": 12,
"text": "B = [b_{i,j}] = UAV"
},
{
"math_id": 13,
"text": "B (V^{-1}X) = UC."
},
{
"math_id": 14,
"text": "\\begin{align}\n& b_{i,i}y_i = d_i, \\quad 1 \\leq i \\leq k \\\\\n& 0y_i = d_i, \\quad k < i \\leq n.\n\\end{align}"
},
{
"math_id": 15,
"text": " V\\,\n\\begin{bmatrix}\n\\frac{d_1}{b_{1,1}}\\\\\n\\vdots\\\\\n\\frac{d_k}{b_{k,k}}\\\\\nh_{k+1}\\\\\n\\vdots\\\\\nh_n\n\\end{bmatrix}\\,, "
},
{
"math_id": 16,
"text": "x^d+y^d -z^d=0."
},
{
"math_id": 17,
"text": "x^2+y^2=3z^2,"
},
{
"math_id": 18,
"text": "\\sqrt{3},"
},
{
"math_id": 19,
"text": "Q(x_1, \\ldots, x_n)=0"
},
{
"math_id": 20,
"text": "Q(x_1, \\ldots, x_n)"
},
{
"math_id": 21,
"text": "x_i"
},
{
"math_id": 22,
"text": "(a_1, \\ldots, a_n)"
},
{
"math_id": 23,
"text": "\\left(a_1, \\ldots, a_n\\right)"
},
{
"math_id": 24,
"text": "\\left(\\frac {p_1}q, \\ldots, \\frac {p_n}q \\right)"
},
{
"math_id": 25,
"text": "q, p_1, \\ldots, p_n"
},
{
"math_id": 26,
"text": "\\left(p_1, \\ldots, p_n\\right)"
},
{
"math_id": 27,
"text": "\\left(k\\frac{p_1}d, \\ldots, k\\frac{p_n}d\\right),"
},
{
"math_id": 28,
"text": "p_i."
},
{
"math_id": 29,
"text": "A=\\left(a_1, \\ldots, a_n\\right)"
},
{
"math_id": 30,
"text": "Q(x_1, \\ldots, x_n)=0."
},
{
"math_id": 31,
"text": "a_n\\ne 0."
},
{
"math_id": 32,
"text": "q(x_1,\\ldots,x_{n-1})=Q(x_1, \\ldots, x_{n-1},1),"
},
{
"math_id": 33,
"text": "R= (r_1, \\ldots, r_{n-1})=\\left(\\frac{a_1}{a_n}, \\ldots, \\frac{a_{n-1}}{a_n}\\right)."
},
{
"math_id": 34,
"text": "y_i=x_i-r_i"
},
{
"math_id": 35,
"text": "\\begin{align}\nx_2 &= r_2 + t_2(x_1-r_1)\\\\\n&\\;\\;\\vdots\\\\\nx_{n-1} &= r_{n-1} + t_{n-1}(x_1-r_1).\n\\end{align}"
},
{
"math_id": 36,
"text": "t_2, \\ldots, t_{n-1},"
},
{
"math_id": 37,
"text": "x_1=\\frac{f_1(t_2, \\ldots, t_{n-1})}{f_n(t_2, \\ldots, t_{n-1})}."
},
{
"math_id": 38,
"text": "x_2, \\ldots, x_{n-1},"
},
{
"math_id": 39,
"text": "x_i=\\frac{f_i(t_2, \\ldots, t_{n-1})}{f_n(t_2, \\ldots, t_{n-1})},"
},
{
"math_id": 40,
"text": "f_1, \\ldots, f_n"
},
{
"math_id": 41,
"text": "F_i(t_1, \\ldots, t_{n-1})=t_1^2 f_i\\left(\\frac{t_2}{t_1}, \\ldots, \\frac{t_{n-1}}{t_1} \\right),"
},
{
"math_id": 42,
"text": "f_i."
},
{
"math_id": 43,
"text": "\\begin{align}\nx_1&= F_1(t_1, \\ldots, t_{n-1})\\\\\n&\\;\\;\\vdots\\\\\nx_n&= F_n(t_1, \\ldots, t_{n-1}).\n\\end{align}"
},
{
"math_id": 44,
"text": "t_1, \\ldots, t_{n-1}."
},
{
"math_id": 45,
"text": "F_1, \\ldots,F_n"
},
{
"math_id": 46,
"text": "t_1, \\ldots, t_{n-1}"
},
{
"math_id": 47,
"text": "(x_1, \\ldots, x_n)"
},
{
"math_id": 48,
"text": "x_i= k\\,\\frac{F_i(t_1, \\ldots, t_{n-1})}{d},"
},
{
"math_id": 49,
"text": "F_i(t_1, \\ldots, t_{n-1})."
},
{
"math_id": 50,
"text": "x^2+y^2-z^2=0"
},
{
"math_id": 51,
"text": "y=t(x+1)."
},
{
"math_id": 52,
"text": "x^2+y^2-1=0,"
},
{
"math_id": 53,
"text": "x^2-1 +t^2(x+1)^2=0."
},
{
"math_id": 54,
"text": "x-1+t^2(x+1)=0,"
},
{
"math_id": 55,
"text": "x=\\frac{1-t^2}{1+t^2}."
},
{
"math_id": 56,
"text": "y=t(x+1) = \\frac{2t}{1+t^2}."
},
{
"math_id": 57,
"text": "\\begin{align}\nx&=k\\,\\frac{s^2-t^2}{d}\\\\\ny&=k\\,\\frac{2st}{d}\\\\\nz&=k\\,\\frac{s^2+t^2}{d},\n\\end{align}"
},
{
"math_id": 58,
"text": "n = a^2 + 2b^2 + 3c^2 + 4d^2 + 5e^2 + \\cdots,"
},
{
"math_id": 59,
"text": "n = a^2 + 4b^2 + 9c^2 + 16d^2 + 25e^2 + \\cdots,"
}
] | https://en.wikipedia.org/wiki?curid=9109 |
910926 | Subspace topology | Inherited topology
In topology and related areas of mathematics, a subspace of a topological space "X" is a subset "S" of "X" which is equipped with a topology induced from that of "X" called the subspace topology (or the relative topology, or the induced topology, or the trace topology).
Definition.
Given a topological space formula_0 and a subset formula_1 of formula_2, the subspace topology on formula_1 is defined by
formula_3
That is, a subset of formula_1 is open in the subspace topology if and only if it is the intersection of formula_1 with an open set in formula_0. If formula_1 is equipped with the subspace topology then it is a topological space in its own right, and is called a subspace of formula_0. Subsets of topological spaces are usually assumed to be equipped with the subspace topology unless otherwise stated.
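For a concrete finite illustration (a hypothetical example, not part of the original article), the subspace topology can be computed directly from the definition by intersecting "S" with every open set of "X":

```python
# Subspace topology tau_S = { S ∩ U : U in tau } on a three-point space.
X = frozenset({1, 2, 3})
tau = {frozenset(), frozenset({1}), frozenset({1, 2}), X}  # a topology on X
S = frozenset({2, 3})

tau_S = {S & U for U in tau}
print(sorted(map(set, tau_S), key=len))
# [set(), {2}, {2, 3}] -- note that {2} is open in S even though it is not open in X
```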
Alternatively we can define the subspace topology for a subset formula_1 of formula_2 as the coarsest topology for which the inclusion map
formula_4
is continuous.
More generally, suppose formula_5 is an injection from a set formula_1 to a topological space formula_2. Then the subspace topology on formula_1 is defined as the coarsest topology for which formula_5 is continuous. The open sets in this topology are precisely the ones of the form formula_6 for formula_7 open in formula_2. formula_1 is then homeomorphic to its image in formula_2 (also with the subspace topology) and formula_5 is called a topological embedding.
A subspace formula_1 is called an open subspace if the injection formula_5 is an open map, i.e., if the forward image of an open set of formula_1 is open in formula_2. Likewise it is called a closed subspace if the injection formula_5 is a closed map.
Terminology.
The distinction between a set and a topological space is often blurred notationally, for convenience, which can be a source of confusion when one first encounters these definitions. Thus, whenever formula_1 is a subset of formula_2, and formula_0 is a topological space, then the unadorned symbols "formula_1" and "formula_2" can often be used to refer both to formula_1 and formula_2 considered as two subsets of formula_2, and also to formula_8 and formula_9 as the topological spaces, related as discussed above. So phrases such as "formula_1 an open subspace of formula_2" are used to mean that formula_8 is an open subspace of formula_9, in the sense used above; that is: (i) formula_10; and (ii) formula_1 is considered to be endowed with the subspace topology.
Examples.
In the following, formula_11 represents the real numbers with their usual topology.
Properties.
The subspace topology has the following characteristic property. Let formula_13 be a subspace of formula_2 and let formula_14 be the inclusion map. Then for any topological space formula_15 a map formula_16 is continuous if and only if the composite map formula_17 is continuous.
This property is characteristic in the sense that it can be used to define the subspace topology on formula_13.
We list some further properties of the subspace topology. In the following let formula_1 be a subspace of formula_2.
Preservation of topological properties.
If a topological space having some topological property implies its subspaces have that property, then we say the property is hereditary. If only closed subspaces must share the property we call it weakly hereditary.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(X, \\tau)"
},
{
"math_id": 1,
"text": "S"
},
{
"math_id": 2,
"text": "X"
},
{
"math_id": 3,
"text": "\\tau_S = \\lbrace S \\cap U \\mid U \\in \\tau \\rbrace."
},
{
"math_id": 4,
"text": "\\iota: S \\hookrightarrow X"
},
{
"math_id": 5,
"text": "\\iota"
},
{
"math_id": 6,
"text": "\\iota^{-1}(U)"
},
{
"math_id": 7,
"text": "U"
},
{
"math_id": 8,
"text": "(S,\\tau_S)"
},
{
"math_id": 9,
"text": "(X,\\tau)"
},
{
"math_id": 10,
"text": "S \\in \\tau"
},
{
"math_id": 11,
"text": "\\mathbb{R}"
},
{
"math_id": 12,
"text": "\\mathbb{Q}"
},
{
"math_id": 13,
"text": "Y"
},
{
"math_id": 14,
"text": "i : Y \\to X"
},
{
"math_id": 15,
"text": "Z"
},
{
"math_id": 16,
"text": "f : Z\\to Y"
},
{
"math_id": 17,
"text": "i\\circ f"
},
{
"math_id": 18,
"text": "f:X\\to Y"
},
{
"math_id": 19,
"text": "f:X\\to f(X)"
},
{
"math_id": 20,
"text": "A"
},
{
"math_id": 21,
"text": "S\\in\\tau"
},
{
"math_id": 22,
"text": "X\\setminus S\\in\\tau"
},
{
"math_id": 23,
"text": "B"
},
{
"math_id": 24,
"text": "B_S = \\{U\\cap S : U \\in B\\}"
}
] | https://en.wikipedia.org/wiki?curid=910926 |
91096 | Geodesic | Straight path on a curved surface or a Riemannian manifold
In geometry, a geodesic is a curve representing in some sense the shortest path (arc) between two points in a surface, or more generally in a Riemannian manifold. The term also has meaning in any differentiable manifold with a connection. It is a generalization of the notion of a "straight line".
The noun "geodesic" and the adjective "geodetic" come from "geodesy", the science of measuring the size and shape of Earth, though many of the underlying principles can be applied to any ellipsoidal geometry. In the original sense, a geodesic was the shortest route between two points on the Earth's surface. For a spherical Earth, it is a segment of a great circle (see also great-circle distance). The term has since been generalized to more abstract mathematical spaces; for example, in graph theory, one might consider a geodesic between two vertices/nodes of a graph.
In a Riemannian manifold or submanifold, geodesics are characterised by the property of having vanishing geodesic curvature. More generally, in the presence of an affine connection, a geodesic is defined to be a curve whose tangent vectors remain parallel if they are transported along it. Applying this to the Levi-Civita connection of a Riemannian metric recovers the previous notion.
Geodesics are of particular importance in general relativity. Timelike geodesics in general relativity describe the motion of free falling test particles.
Introduction.
A locally shortest path between two given points in a curved space, assumed to be a Riemannian manifold, can be defined by using the equation for the length of a curve (a function "f" from an open interval of R to the space), and then minimizing this length between the points using the calculus of variations. This has some minor technical problems because there is an infinite-dimensional space of different ways to parameterize the shortest path. It is simpler to restrict the set of curves to those that are parameterized "with constant speed" 1, meaning that the distance from "f"("s") to "f"("t") along the curve equals |"s"−"t"|. Equivalently, a different quantity may be used, termed the energy of the curve; minimizing the energy leads to the same equations for a geodesic (here "constant velocity" is a consequence of minimization). Intuitively, one can understand this second formulation by noting that an elastic band stretched between two points will contract its width, and in so doing will minimize its energy. The resulting shape of the band is a geodesic.
It is possible that several different curves between two points minimize the distance, as is the case for two diametrically opposite points on a sphere. In such a case, any of these curves is a geodesic.
A contiguous segment of a geodesic is again a geodesic.
In general, geodesics are not the same as "shortest curves" between two points, though the two concepts are closely related. The difference is that geodesics are only "locally" the shortest distance between points, and are parameterized with "constant speed". Going the "long way round" on a great circle between two points on a sphere is a geodesic but not the shortest path between the points. The map formula_0 from the unit interval on the real number line to itself gives the shortest path between 0 and 1, but is not a geodesic because the velocity of the corresponding motion of a point is not constant.
Geodesics are commonly seen in the study of Riemannian geometry and more generally metric geometry. In general relativity, geodesics in spacetime describe the motion of point particles under the influence of gravity alone. In particular, the path taken by a falling rock, an orbiting satellite, or the shape of a planetary orbit are all geodesics in curved spacetime. More generally, the topic of sub-Riemannian geometry deals with the paths that objects may take when they are not free, and their movement is constrained in various ways.
This article presents the mathematical formalism involved in defining, finding, and proving the existence of geodesics, in the case of Riemannian manifolds. The article Levi-Civita connection discusses the more general case of a pseudo-Riemannian manifold and geodesic (general relativity) discusses the special case of general relativity in greater detail.
Examples.
The most familiar examples are the straight lines in Euclidean geometry. On a sphere, the images of geodesics are the great circles. The shortest path from point "A" to point "B" on a sphere is given by the shorter arc of the great circle passing through "A" and "B". If "A" and "B" are antipodal points, then there are "infinitely many" shortest paths between them. Geodesics on an ellipsoid behave in a more complicated way than on a sphere; in particular, they are not closed in general (see figure).
Triangles.
A geodesic triangle is formed by the geodesics joining each pair out of three points on a given surface. On the sphere, the geodesics are great circle arcs, forming a spherical triangle.
Metric geometry.
In metric geometry, a geodesic is a curve which is everywhere locally a distance minimizer. More precisely, a curve "γ" : "I" → "M" from an interval "I" of the reals to the metric space "M" is a geodesic if there is a constant "v" ≥ 0 such that for any "t" ∈ "I" there is a neighborhood "J" of "t" in "I" such that for any "t"1, "t"2 ∈ "J" we have
formula_1
This generalizes the notion of geodesic for Riemannian manifolds. However, in metric geometry the geodesic considered is often equipped with natural parameterization, i.e. in the above identity "v" = 1 and
formula_2
If the last equality is satisfied for all "t"1, "t"2 ∈ "I", the geodesic is called a minimizing geodesic or shortest path.
In general, a metric space may have no geodesics, except constant curves. At the other extreme, any two points in a length metric space are joined by a minimizing sequence of rectifiable paths, although this minimizing sequence need not converge to a geodesic.
Riemannian geometry.
In a Riemannian manifold "M" with metric tensor "g", the length "L" of a continuously differentiable curve γ : ["a","b"] → "M" is defined by
formula_3
The distance "d"("p", "q") between two points "p" and "q" of "M" is defined as the infimum of the length taken over all continuous, piecewise continuously differentiable curves γ : ["a","b"] → "M" such that γ("a") = "p" and γ("b") = "q". In Riemannian geometry, all geodesics are locally distance-minimizing paths, but the converse is not true. In fact, only paths that are both locally distance minimizing and parameterized proportionately to arc-length are geodesics. Another equivalent way of defining geodesics on a Riemannian manifold, is to define them as the minima of the following action or energy functional
formula_4
All minima of "E" are also minima of "L", but "L" is a bigger set since paths that are minima of "L" can be arbitrarily re-parameterized (without changing their length), while minima of "E" cannot.
For a piecewise formula_5 curve (more generally, a formula_6 curve), the Cauchy–Schwarz inequality gives
formula_7
with equality if and only if formula_8 is equal to a constant a.e.; the path should be travelled at constant speed. It happens that minimizers of formula_9 also minimize formula_10, because they turn out to be affinely parameterized, and the inequality is an equality. The usefulness of this approach is that the problem of seeking minimizers of "E" is a more robust variational problem. Indeed, "E" is a "convex function" of formula_11, so that within each isotopy class of "reasonable functions", one ought to expect existence, uniqueness, and regularity of minimizers. In contrast, "minimizers" of the functional formula_10 are generally not very regular, because arbitrary reparameterizations are allowed.
The Euler–Lagrange equations of motion for the functional "E" are then given in local coordinates by
formula_12
where formula_13 are the Christoffel symbols of the metric. This is the geodesic equation, discussed below.
Calculus of variations.
Techniques of the classical calculus of variations can be applied to examine the energy functional "E". The first variation of energy is defined in local coordinates by
formula_14
The critical points of the first variation are precisely the geodesics. The second variation is defined by
formula_15
In an appropriate sense, zeros of the second variation along a geodesic γ arise along Jacobi fields. Jacobi fields are thus regarded as variations through geodesics.
By applying variational techniques from classical mechanics, one can also regard geodesics as Hamiltonian flows. They are solutions of the associated Hamilton equations, with (pseudo-)Riemannian metric taken as Hamiltonian.
Affine geodesics.
A geodesic on a smooth manifold "M" with an affine connection ∇ is defined as a curve γ("t") such that parallel transport along the curve preserves the tangent vector to the curve, so
formula_16 (1)
at each point along the curve, where formula_17 is the derivative with respect to formula_18. More precisely, in order to define the covariant derivative of formula_17 it is necessary first to extend formula_17 to a continuously differentiable vector field in an open set. However, the resulting value of (1) is independent of the choice of extension.
Using local coordinates on "M", we can write the geodesic equation (using the summation convention) as
formula_19
where formula_20 are the coordinates of the curve γ("t") and formula_21 are the Christoffel symbols of the connection ∇. This is an ordinary differential equation for the coordinates. It has a unique solution, given an initial position and an initial velocity. Therefore, from the point of view of classical mechanics, geodesics can be thought of as trajectories of free particles in a manifold. Indeed, the equation formula_16 means that the acceleration vector of the curve has no components in the direction of the surface (and therefore it is perpendicular to the tangent plane of the surface at each point of the curve). So, the motion is completely determined by the bending of the surface. This is also the idea of general relativity where particles move on geodesics and the bending is caused by gravity.
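Since the geodesic equation is an ordinary differential equation, it can be integrated numerically once the Christoffel symbols are known. The sketch below (an illustration, not part of the original article) integrates the geodesic equation on the unit sphere in spherical coordinates and checks that the metric speed of the solution stays constant, as expected for an affinely parameterized geodesic; the chosen initial data are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Unit sphere in coordinates (theta, phi); the nonzero Christoffel symbols are
#   Gamma^theta_{phi phi} = -sin(theta)cos(theta),
#   Gamma^phi_{theta phi} = Gamma^phi_{phi theta} = cos(theta)/sin(theta).
def geodesic_rhs(t, y):
    theta, phi, dtheta, dphi = y
    ddtheta = np.sin(theta) * np.cos(theta) * dphi ** 2
    ddphi = -2.0 * (np.cos(theta) / np.sin(theta)) * dtheta * dphi
    return [dtheta, dphi, ddtheta, ddphi]

y0 = [np.pi / 3, 0.0, 0.2, 1.0]        # initial position and velocity
sol = solve_ivp(geodesic_rhs, (0.0, 5.0), y0, rtol=1e-9, atol=1e-9)

theta, phi, dtheta, dphi = sol.y
speed2 = dtheta ** 2 + np.sin(theta) ** 2 * dphi ** 2   # g(v, v) along the curve
print(speed2.min(), speed2.max())       # nearly identical: constant speed
```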
Existence and uniqueness.
The "local existence and uniqueness theorem" for geodesics states that geodesics on a smooth manifold with an affine connection exist, and are unique. More precisely:
For any point "p" in "M" and for any vector "V" in "TpM" (the tangent space to "M" at "p") there exists a unique geodesic formula_22 : "I" → "M" such that
formula_23 and
formula_24
where "I" is a maximal open interval in R containing 0.
The proof of this theorem follows from the theory of ordinary differential equations, by noticing that the geodesic equation is a second-order ODE. Existence and uniqueness then follow from the Picard–Lindelöf theorem for the solutions of ODEs with prescribed initial conditions. γ depends smoothly on both "p" and "V".
In general, "I" may not be all of R as for example for an open disc in R2. Any γ extends to all of ℝ if and only if M is geodesically complete.
Geodesic flow.
Geodesic flow is a local R-action on the tangent bundle "TM" of a manifold "M" defined in the following way
formula_25
where "t" ∈ R, "V" ∈ "TM" and formula_26 denotes the geodesic with initial data formula_27. Thus, "formula_28 is the exponential map of the vector "tV". A closed orbit of the geodesic flow corresponds to a closed geodesic on "M".
On a (pseudo-)Riemannian manifold, the geodesic flow is identified with a Hamiltonian flow on the cotangent bundle. The Hamiltonian is then given by the inverse of the (pseudo-)Riemannian metric, evaluated against the canonical one-form. In particular the flow preserves the (pseudo-)Riemannian metric formula_29, i.e.
formula_30
In particular, when "V" is a unit vector, formula_26 remains unit speed throughout, so the geodesic flow is tangent to the unit tangent bundle. Liouville's theorem implies invariance of a kinematic measure on the unit tangent bundle.
Geodesic spray.
The geodesic flow defines a family of curves in the tangent bundle. The derivatives of these curves define a vector field on the total space of the tangent bundle, known as the geodesic spray.
More precisely, an affine connection gives rise to a splitting of the double tangent bundle TT"M" into horizontal and vertical bundles:
formula_31
The geodesic spray is the unique horizontal vector field "W" satisfying
formula_32
at each point "v" ∈ T"M"; here π∗ : TT"M" → T"M" denotes the pushforward (differential) along the projection π : T"M" → "M" associated to the tangent bundle.
More generally, the same construction allows one to construct a vector field for any Ehresmann connection on the tangent bundle. For the resulting vector field to be a spray (on the deleted tangent bundle T"M" \ {0}) it is enough that the connection be equivariant under positive rescalings: it need not be linear. That is, (cf. Ehresmann connection#Vector bundles and covariant derivatives) it is enough that the horizontal distribution satisfy
formula_33
for every "X" ∈ T"M" \ {0} and λ > 0. Here "d"("S"λ) is the pushforward along the scalar homothety formula_34 A particular case of a non-linear connection arising in this manner is that associated to a Finsler manifold.
Affine and projective geodesics.
Equation (1) is invariant under affine reparameterizations; that is, parameterizations of the form
formula_35
where "a" and "b" are constant real numbers. Thus apart from specifying a certain class of embedded curves, the geodesic equation also determines a preferred class of parameterizations on each of the curves. Accordingly, solutions of (1) are called geodesics with affine parameter.
An affine connection is "determined by" its family of affinely parameterized geodesics, up to torsion. The torsion itself does not, in fact, affect the family of geodesics, since the geodesic equation depends only on the symmetric part of the connection. More precisely, if formula_36 are two connections such that the difference tensor
formula_37
is skew-symmetric, then formula_38 and formula_39 have the same geodesics, with the same affine parameterizations. Furthermore, there is a unique connection having the same geodesics as formula_38, but with vanishing torsion.
Geodesics without a particular parameterization are described by a projective connection.
Computational methods.
Efficient solvers for the minimal geodesic problem on surfaces have been proposed by Mitchell, Kimmel, Crane, and others.
Ribbon test.
A ribbon "test" is a way of finding a geodesic on a physical surface. The idea is to fit a bit of paper around a straight line (a ribbon) onto a curved surface as closely as possible without stretching or squishing the ribbon (without changing its internal geometry).
For example, when a ribbon is wound as a ring around a cone, the ribbon would not lie on the cone's surface but stick out, so that circle is not a geodesic on the cone. If the ribbon is adjusted so that all its parts touch the cone's surface, it would give an approximation to a geodesic.
Mathematically the ribbon test can be formulated as finding a mapping formula_40 of a neighborhood formula_41 of a line formula_42 in a plane into a surface formula_43 so that the mapping formula_44 "doesn't change the distances around formula_42 by much"; that is, at the distance formula_45 from formula_46 we have formula_47 where formula_48 and formula_49 are metrics on formula_41 and formula_43.
Applications.
Geodesics serve as the basis to calculate:
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "t \\to t^2"
},
{
"math_id": 1,
"text": "d(\\gamma(t_1),\\gamma(t_2)) = v \\left| t_1 - t_2 \\right| ."
},
{
"math_id": 2,
"text": "d(\\gamma(t_1),\\gamma(t_2)) = \\left| t_1 - t_2 \\right| ."
},
{
"math_id": 3,
"text": "L(\\gamma)=\\int_a^b \\sqrt{ g_{\\gamma(t)}(\\dot\\gamma(t),\\dot\\gamma(t)) }\\,dt."
},
{
"math_id": 4,
"text": "E(\\gamma)=\\frac{1}{2}\\int_a^b g_{\\gamma(t)}(\\dot\\gamma(t),\\dot\\gamma(t))\\,dt."
},
{
"math_id": 5,
"text": "C^1"
},
{
"math_id": 6,
"text": "W^{1,2}"
},
{
"math_id": 7,
"text": "L(\\gamma)^2 \\le 2(b-a)E(\\gamma)"
},
{
"math_id": 8,
"text": "g(\\gamma',\\gamma')"
},
{
"math_id": 9,
"text": "E(\\gamma)"
},
{
"math_id": 10,
"text": "L(\\gamma)"
},
{
"math_id": 11,
"text": "\\gamma"
},
{
"math_id": 12,
"text": "\\frac{d^2x^\\lambda }{dt^2} + \\Gamma^{\\lambda}_{\\mu \\nu }\\frac{dx^\\mu }{dt}\\frac{dx^\\nu }{dt} = 0,"
},
{
"math_id": 13,
"text": "\\Gamma^\\lambda_{\\mu\\nu}"
},
{
"math_id": 14,
"text": "\\delta E(\\gamma)(\\varphi) = \\left.\\frac{\\partial}{\\partial t}\\right|_{t=0} E(\\gamma + t\\varphi)."
},
{
"math_id": 15,
"text": "\\delta^2 E(\\gamma)(\\varphi,\\psi) = \\left.\\frac{\\partial^2}{\\partial s \\, \\partial t} \\right|_{s=t=0} E(\\gamma + t\\varphi + s\\psi)."
},
{
"math_id": 16,
"text": " \\nabla_{\\dot\\gamma} \\dot\\gamma= 0"
},
{
"math_id": 17,
"text": "\\dot\\gamma"
},
{
"math_id": 18,
"text": "t"
},
{
"math_id": 19,
"text": "\\frac{d^2\\gamma^\\lambda }{dt^2} + \\Gamma^{\\lambda}_{\\mu \\nu }\\frac{d\\gamma^\\mu }{dt}\\frac{d\\gamma^\\nu }{dt} = 0\\ ,"
},
{
"math_id": 20,
"text": "\\gamma^\\mu = x^\\mu \\circ \\gamma (t)"
},
{
"math_id": 21,
"text": "\\Gamma^{\\lambda }_{\\mu \\nu }"
},
{
"math_id": 22,
"text": "\\gamma \\,"
},
{
"math_id": 23,
"text": "\\gamma(0) = p \\,"
},
{
"math_id": 24,
"text": "\\dot\\gamma(0) = V,"
},
{
"math_id": 25,
"text": "G^t(V)=\\dot\\gamma_V(t)"
},
{
"math_id": 26,
"text": "\\gamma_V"
},
{
"math_id": 27,
"text": "\\dot\\gamma_V(0)=V"
},
{
"math_id": 28,
"text": "G^t(V)=\\exp(tV)"
},
{
"math_id": 29,
"text": "g"
},
{
"math_id": 30,
"text": "g(G^t(V),G^t(V))=g(V,V). \\, "
},
{
"math_id": 31,
"text": "TTM = H\\oplus V."
},
{
"math_id": 32,
"text": "\\pi_* W_v = v\\,"
},
{
"math_id": 33,
"text": "H_{\\lambda X} = d(S_\\lambda)_X H_X\\,"
},
{
"math_id": 34,
"text": "S_\\lambda: X\\mapsto \\lambda X."
},
{
"math_id": 35,
"text": "t\\mapsto at+b"
},
{
"math_id": 36,
"text": "\\nabla, \\bar{\\nabla}"
},
{
"math_id": 37,
"text": "D(X,Y) = \\nabla_XY-\\bar{\\nabla}_XY"
},
{
"math_id": 38,
"text": "\\nabla"
},
{
"math_id": 39,
"text": "\\bar{\\nabla}"
},
{
"math_id": 40,
"text": "f: N(\\ell) \\to S"
},
{
"math_id": 41,
"text": "N"
},
{
"math_id": 42,
"text": "\\ell"
},
{
"math_id": 43,
"text": "S"
},
{
"math_id": 44,
"text": "f"
},
{
"math_id": 45,
"text": "\\varepsilon"
},
{
"math_id": 46,
"text": "l"
},
{
"math_id": 47,
"text": "g_N-f^*(g_S)=O(\\varepsilon^2)"
},
{
"math_id": 48,
"text": "g_N"
},
{
"math_id": 49,
"text": "g_S"
}
] | https://en.wikipedia.org/wiki?curid=91096 |
910967 | Joint entropy | Measure of information in probability and information theory
In information theory, joint entropy is a measure of the uncertainty associated with a set of variables.
Definition.
The joint Shannon entropy (in bits) of two discrete random variables formula_0 and formula_1 with images formula_2 and formula_3 is defined as
\Eta(X,Y) = -\sum_{x\in\mathcal X} \sum_{y\in\mathcal Y} P(x,y) \log_2[P(x,y)],
where formula_4 and formula_5 are particular values of formula_0 and formula_1, respectively, formula_6 is the joint probability of these values occurring together, and formula_7 is defined to be 0 if formula_8.
For more than two random variables formula_9 this expands to
\Eta(X_1, \ldots, X_n) = -\sum_{x_1} \cdots \sum_{x_n} P(x_1, \ldots, x_n) \log_2[P(x_1, \ldots, x_n)],
where formula_10 are particular values of formula_11, respectively, formula_12 is the probability of these values occurring together, and formula_13 is defined to be 0 if formula_14.
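As a small numerical illustration (a hypothetical table, not part of the original article), the joint entropy can be computed directly from a joint probability mass function, with zero-probability cells contributing nothing:

```python
import numpy as np

P = np.array([[0.25, 0.25],
              [0.40, 0.10]])           # hypothetical joint pmf P(x, y)

def joint_entropy(P):
    p = P[P > 0]                       # cells with P(x, y) = 0 contribute 0
    return -np.sum(p * np.log2(p))     # entropy in bits

print(joint_entropy(P))                # about 1.86 bits
```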
Properties.
Nonnegativity.
The joint entropy of a set of random variables is a nonnegative number.
formula_15
formula_16
Greater than individual entropies.
The joint entropy of a set of variables is greater than or equal to the maximum of all of the individual entropies of the variables in the set.
formula_17
formula_18
Less than or equal to the sum of individual entropies.
The joint entropy of a set of variables is less than or equal to the sum of the individual entropies of the variables in the set. This is an example of subadditivity. This inequality is an equality if and only if formula_0 and formula_1 are statistically independent.
formula_19
formula_20
Relations to other entropy measures.
Joint entropy is used in the definition of conditional entropy
formula_21,
and
formula_22.
It is also used in the definition of mutual information
formula_23.
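These identities are easy to check numerically. The self-contained sketch below (a hypothetical example, not from the original article) computes the marginal, joint, and conditional entropies and the mutual information of a small joint distribution and verifies the relations above, along with subadditivity:

```python
import numpy as np

P = np.array([[0.25, 0.25],
              [0.40, 0.10]])                 # hypothetical joint pmf P(x, y)

def H(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

H_XY = H(P)
H_X, H_Y = H(P.sum(axis=1)), H(P.sum(axis=0))
print(H_XY - H_Y)                            # conditional entropy H(X|Y)
print(H_X + H_Y - H_XY)                      # mutual information I(X;Y)
assert H_XY <= H_X + H_Y + 1e-12             # subadditivity
assert H_XY >= max(H_X, H_Y) - 1e-12         # at least the larger marginal entropy
```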
In quantum information theory, the joint entropy is generalized into the joint quantum entropy.
Joint differential entropy.
Definition.
The above definition is for discrete random variables and just as valid in the case of continuous random variables. The continuous version of discrete joint entropy is called "joint differential (or continuous) entropy". Let formula_0 and formula_1 be continuous random variables with a joint probability density function formula_24. The differential joint entropy formula_25 is defined as
h(X,Y) = -\int_{\mathcal X, \mathcal Y} f(x,y) \log f(x,y)\, dx\, dy.
For more than two continuous random variables formula_9 the definition is generalized to:
h(X_1, \ldots, X_n) = -\int f(x_1, \ldots, x_n) \log f(x_1, \ldots, x_n)\, dx_1 \ldots dx_n.
The integral is taken over the support of formula_26. It is possible that the integral does not exist, in which case we say that the differential entropy is not defined.
Properties.
As in the discrete case, the joint differential entropy of a set of random variables is less than or equal to the sum of the entropies of the individual random variables:
formula_27
The following chain rule holds for two random variables:
formula_28
In the case of more than two random variables this generalizes to:
formula_29
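For jointly Gaussian variables the differential entropies have closed forms, which makes the chain rule easy to verify. The sketch below (a hypothetical example, not from the original article) checks h(X,Y) = h(X) + h(Y|X) for a bivariate Gaussian, working in nats:

```python
import numpy as np

Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])          # hypothetical covariance of (X, Y)

def gaussian_h(cov):
    """Differential entropy (nats) of a Gaussian with covariance cov."""
    cov = np.atleast_2d(cov)
    n = cov.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** n * np.linalg.det(cov))

h_XY = gaussian_h(Sigma)
h_X = gaussian_h(Sigma[0, 0])
h_Y_given_X = gaussian_h(Sigma[1, 1] - Sigma[0, 1] ** 2 / Sigma[0, 0])
print(h_XY, h_X + h_Y_given_X)          # the two values agree (chain rule)
```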
Joint differential entropy is also used in the definition of the mutual information between continuous random variables:
formula_30
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "Y"
},
{
"math_id": 2,
"text": "\\mathcal X"
},
{
"math_id": 3,
"text": "\\mathcal Y"
},
{
"math_id": 4,
"text": "x"
},
{
"math_id": 5,
"text": "y"
},
{
"math_id": 6,
"text": "P(x,y)"
},
{
"math_id": 7,
"text": "P(x,y) \\log_2[P(x,y)]"
},
{
"math_id": 8,
"text": "P(x,y)=0"
},
{
"math_id": 9,
"text": "X_1, ..., X_n"
},
{
"math_id": 10,
"text": "x_1,...,x_n"
},
{
"math_id": 11,
"text": "X_1,...,X_n"
},
{
"math_id": 12,
"text": "P(x_1, ..., x_n)"
},
{
"math_id": 13,
"text": "P(x_1, ..., x_n) \\log_2[P(x_1, ..., x_n)]"
},
{
"math_id": 14,
"text": "P(x_1, ..., x_n)=0"
},
{
"math_id": 15,
"text": "\\Eta(X,Y) \\geq 0"
},
{
"math_id": 16,
"text": "\\Eta(X_1,\\ldots, X_n) \\geq 0"
},
{
"math_id": 17,
"text": "\\Eta(X,Y) \\geq \\max \\left[\\Eta(X),\\Eta(Y) \\right]"
},
{
"math_id": 18,
"text": "\\Eta \\bigl(X_1,\\ldots, X_n \\bigr) \\geq \\max_{1 \\le i \\le n} \n \\Bigl\\{ \\Eta\\bigl(X_i\\bigr) \\Bigr\\}"
},
{
"math_id": 19,
"text": "\\Eta(X,Y) \\leq \\Eta(X) + \\Eta(Y)"
},
{
"math_id": 20,
"text": "\\Eta(X_1,\\ldots, X_n) \\leq \\Eta(X_1) + \\ldots + \\Eta(X_n)"
},
{
"math_id": 21,
"text": "\\Eta(X|Y) = \\Eta(X,Y) - \\Eta(Y)\\,"
},
{
"math_id": 22,
"text": "\\Eta(X_1,\\dots,X_n) = \\sum_{k=1}^n \\Eta(X_k|X_{k-1},\\dots, X_1)"
},
{
"math_id": 23,
"text": "\\operatorname{I}(X;Y) = \\Eta(X) + \\Eta(Y) - \\Eta(X,Y)\\,"
},
{
"math_id": 24,
"text": "f(x,y)"
},
{
"math_id": 25,
"text": "h(X,Y)"
},
{
"math_id": 26,
"text": "f"
},
{
"math_id": 27,
"text": "h(X_1,X_2, \\ldots,X_n) \\le \\sum_{i=1}^n h(X_i)"
},
{
"math_id": 28,
"text": "h(X,Y) = h(X|Y) + h(Y)"
},
{
"math_id": 29,
"text": "h(X_1,X_2, \\ldots,X_n) = \\sum_{i=1}^n h(X_i|X_1,X_2, \\ldots,X_{i-1})"
},
{
"math_id": 30,
"text": "\\operatorname{I}(X,Y)=h(X)+h(Y)-h(X,Y)"
}
] | https://en.wikipedia.org/wiki?curid=910967 |
91099 | Business continuity planning | Prevention and recovery from threats that might affect a company
Business continuity may be defined as "the capability of an organization to continue the delivery of products or services at pre-defined acceptable levels following a disruptive incident", and business continuity planning (or business continuity and resiliency planning) is the process of creating systems of prevention and recovery to deal with potential threats to a company. In addition to prevention, the goal is to enable ongoing operations before and during execution of disaster recovery. Business continuity is the intended outcome of proper execution of both business continuity planning and disaster recovery.
Several business continuity standards have been published by various standards bodies to assist in checklisting ongoing planning tasks.
Business continuity requires a top-down approach to identify an organisation's minimum requirements to ensure its viability as an entity. An organization's resistance to failure is "the ability ... to withstand changes in its environment and still function". Often called resilience, this capability enables an organization either to endure environmental changes without having to permanently adapt, or to adopt a new way of working that better suits the new environmental conditions.
Overview.
Any event that could negatively impact operations should be included in the plan, such as supply chain interruption or loss of or damage to critical infrastructure (major machinery or computing/network resources). As such, BCP is a subset of risk management. In the U.S., government entities refer to the process as "continuity of operations planning" (COOP). A business continuity plan outlines a range of disaster scenarios and the steps the business will take in any particular scenario to return to regular trade. BCPs are written ahead of time and can also include precautions to be put in place. Usually created with the input of key staff as well as stakeholders, a BCP is a set of contingencies to minimize potential harm to businesses during adverse scenarios.
Resilience.
A 2005 analysis of how disruptions can adversely affect the operations of corporations, and of how investments in resilience can give a competitive advantage over entities not prepared for various contingencies, extended then-common business continuity planning practices. Business organizations such as the Council on Competitiveness embraced this resilience goal.
Adapting to change in an apparently slower, more evolutionary manner - sometimes over many years or decades - has been described as being more resilient, and the term "strategic resilience" is now used to mean not merely resisting a one-time crisis, but continuously anticipating and adjusting, "before the case for change becomes desperately obvious".
This approach is sometimes summarized as: preparedness, protection, response and recovery.
Resilience Theory can be related to the field of Public Relations. Resilience is a communicative process that is constructed by citizens, families, media system, organizations and governments through everyday talk and mediated conversation.
The theory is based on the work of Patrice M. Buzzanell, a professor at the Brian Lamb School of Communication at Purdue University. In her 2010 article, "Resilience: Talking, Resisting, and Imagining New Normalcies Into Being", Buzzanell discussed the ability of organizations to thrive after a crisis through building resistance. Buzzanell notes that there are five different processes that individuals use when trying to maintain resilience: crafting normalcy, affirming identity anchors, maintaining and using communication networks, putting alternative logics to work, and downplaying negative feelings while foregrounding positive emotions.
When looking at the resilience theory, the crisis communication theory is similar, but not the same. The crisis communication theory is based on the reputation of the company, while the resilience theory is based on the company's process of recovery. There are five main components of resilience: crafting normalcy, affirming identity anchors, maintaining and using communication networks, putting alternative logics to work, and downplaying negative feelings while foregrounding positive emotions. Each of these processes can be applicable to businesses in crisis times, making resilience an important factor for companies to focus on while training.
There are three main groups that are affected by a crisis: micro (individual), meso (group or organization) and macro (national or interorganizational). There are also two main types of resilience: proactive and post resilience. Proactive resilience means preparing for a crisis and creating a solid foundation for the company, dealing with issues at hand before they cause a possible shift in the work environment. Post resilience means continuing to maintain communication, checking in with employees, and accepting changes after an incident has happened. Resilience can be applied to any organization.
In New Zealand, the Canterbury University Resilient Organisations programme developed an assessment tool for benchmarking the Resilience of Organisations. It covers 11 categories, each having 5 to 7 questions. A "Resilience Ratio" summarizes this evaluation.
Continuity.
Plans and procedures are used in business continuity planning to ensure that the critical organizational operations required to keep an organization running continue to operate during events when key dependencies of operations are disrupted. Continuity does not need to apply to every activity which the organization undertakes. For example, under ISO 22301:2019, organizations are required to define their business continuity objectives, the minimum levels of product and service operations which will be considered acceptable and the maximum tolerable period of disruption (MTPD) which can be allowed.
A major cost in planning for this is the preparation of audit compliance management documents; automation tools are available to reduce the time and cost associated with manually producing this information.
Inventory.
Planners must have information about:
Analysis.
The analysis phase consists of:
Quantification of loss ratios must also include "dollars to defend a lawsuit." It has been estimated that a dollar spent in loss prevention can prevent "seven dollars of disaster-related economic loss."
Business impact analysis (BIA).
A business impact analysis (BIA) differentiates critical (urgent) and non-critical (non-urgent) organization functions/activities. A function may be considered critical if dictated by law.
Each function/activity typically relies on a combination of constituent components in order to operate:
For each function, two values are assigned:
Maximum RTO.
Maximum time constraints for how long an enterprise's key products or services can be unavailable or undeliverable before stakeholders perceive unacceptable consequences have been named as:
According to ISO 22301 the terms "maximum acceptable outage" and "maximum tolerable period of disruption" mean the same thing and are defined using exactly the same words. Some standards use the term "maximum downtime limit".
Consistency.
When more than one system crashes, recovery plans must balance the need for data consistency with other objectives, such as RTO and RPO.
Recovery Consistency Objective (RCO) is the name of this goal. It applies data consistency objectives, to define a measurement for the consistency of distributed business data within interlinked systems after a disaster incident. Similar terms used in this context are "Recovery Consistency Characteristics" (RCC) and "Recovery Object Granularity" (ROG).
While RTO and RPO are absolute per-system values, RCO is expressed as a percentage that measures the deviation between actual and targeted state of business data across systems for process groups or individual business processes.
The following formula calculates RCO with "n" representing the number of business processes and "entities" representing an abstract value for business data:
formula_0
100% RCO means that post recovery, no business data deviation occurs.
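As a small worked example (hypothetical figures, not part of the original article), applying the formula to aggregated entity counts across the business processes of a process group gives:

```python
# Hypothetical counts of business-data entities per process after recovery.
entities = [1200, 800, 500]        # entities per business process
inconsistent = [12, 0, 25]         # of which found inconsistent across systems

rco = 1 - sum(inconsistent) / sum(entities)
print(f"RCO = {rco:.2%}")          # 98.52%; 100% would mean no data deviation
```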
Threat and risk analysis (TRA).
After defining recovery requirements, each potential threat may require unique recovery steps (contingency plans or playbooks). Common threats include:
The above areas can cascade: Responders can stumble. Supplies may become depleted. During the 2002–2003 SARS outbreak, some organizations compartmentalized and rotated teams to match the incubation period of the disease. They also banned in-person contact during both business and non-business hours. This increased resiliency against the threat.
Impact scenarios.
Impact scenarios are identified and documented:
These should reflect the widest possible damage.
Tiers of preparedness.
SHARE's seven tiers of disaster recovery, released in 1992, were updated in 2012 by IBM as an eight-tier model:
Solution design.
Two main requirements from the impact analysis stage are:
This phase overlaps with disaster recovery planning.
The solution phase determines:
Standards.
ISO Standards.
There are many standards that are available to support business continuity planning and management.
The International Organization for Standardization (ISO) has for example developed a whole series of standards on Business continuity management systems under responsibility of technical committee ISO/TC 292:
British standards.
The British Standards Institution (BSI Group) released a series of standards which have since been withdrawn and replaced by the ISO standards above.
Within the UK, BS 25999-2:2007 and BS 25999-1:2006 were being used for business continuity management across all organizations, industries and sectors. These documents give a practical plan to deal with most eventualities—from extreme weather conditions to terrorism, IT system failure, and staff sickness.
In 2004, following crises in the preceding years, the UK government passed the Civil Contingencies Act 2004: businesses must have continuity planning measures in place to survive and continue to thrive whilst working towards keeping the impact of any incident as minimal as possible.
The Act was separated into two parts:
Part 1: civil protection, covering roles & responsibilities for local responders
Part 2: emergency powers.
In the United Kingdom, resilience is implemented locally by the Local Resilience Forum.
Implementation and testing.
The implementation phase involves policy changes, material acquisitions, staffing and testing.
Testing and organizational acceptance.
The 2008 book "Exercising for Excellence", published by The British Standards Institution, identified three types of exercises that can be employed when testing business continuity plans.
While start and stop times are pre-agreed, the actual duration might be unknown if events are allowed to run their course.
Maintenance.
Biannual or annual maintenance of a BCP manual is broken down into three periodic activities.
Issues found during the testing phase often must be reintroduced to the analysis phase.
Information and targets.
The BCP manual must evolve with the organization, and maintain information about who has to know what:
Technical.
Specialized technical resources must be maintained. Checks include:
Testing and verification of recovery procedures.
Software and work process changes must be documented and validated, including verification that documented work process recovery tasks and supporting disaster recovery infrastructure allow staff to recover within the predetermined recovery time objective.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{RCO} = 1 - \\frac{(\\text{number of inconsistent entities})_n}{(\\text{number of entities})_n}"
}
] | https://en.wikipedia.org/wiki?curid=91099 |
91100 | Michelson–Morley experiment | 1887 investigation of the speed of light
The Michelson–Morley experiment was an attempt to measure the motion of the Earth relative to the luminiferous aether, a supposed medium permeating space that was thought to be the carrier of light waves. The experiment was performed between April and July 1887 by American physicists Albert A. Michelson and Edward W. Morley at what is now Case Western Reserve University in Cleveland, Ohio, and published in November of the same year.
The experiment compared the speed of light in perpendicular directions in an attempt to detect the relative motion of matter, including their laboratory, through the luminiferous aether, or "aether wind" as it was sometimes called. The result was negative, in that Michelson and Morley found no significant difference between the speed of light in the direction of movement through the presumed aether, and the speed at right angles. This result is generally considered to be the first strong evidence against some aether theories, as well as initiating a line of research that eventually led to special relativity, which rules out motion against an aether. Of this experiment, Albert Einstein wrote, "If the Michelson–Morley experiment had not brought us into serious embarrassment, no one would have regarded the relativity theory as a (halfway) redemption."
Michelson–Morley type experiments have been repeated many times with steadily increasing sensitivity. These include experiments from 1902 to 1905, and a series of experiments in the 1920s. More recently, in 2009, optical resonator experiments confirmed the absence of any aether wind at the 10^−17 level. Together with the Ives–Stilwell and Kennedy–Thorndike experiments, Michelson–Morley type experiments form one of the fundamental tests of special relativity.
Detecting the aether.
Physics theories of the 19th century assumed that just as surface water waves must have a supporting substance, i.e., a "medium", to move across (in this case water), and audible sound requires a medium to transmit its wave motions (such as air or water), so light must also require a medium, the "luminiferous aether", to transmit its wave motions. Because light can travel through a vacuum, it was assumed that even a vacuum must be filled with aether. Because the speed of light is so great, and because material bodies pass through the "aether" without obvious friction or drag, it was assumed to have a highly unusual combination of properties. Designing experiments to investigate these properties was a high priority of 19th-century physics.
Earth orbits around the Sun at a speed of around 30 km/s, or 108,000 km/h. The Earth is in motion, so two main possibilities were considered: (1) The aether is stationary and only partially dragged by Earth (proposed by Augustin-Jean Fresnel in 1818), or (2) the aether is completely dragged by Earth and thus shares its motion at Earth's surface (proposed by Sir George Stokes, 1st Baronet in 1844). In addition, James Clerk Maxwell (1865) recognized the electromagnetic nature of light and developed what are now called Maxwell's equations, but these equations were still interpreted as describing the motion of waves through an aether, whose state of motion was unknown. Eventually, Fresnel's idea of an (almost) stationary aether was preferred because it appeared to be confirmed by the Fizeau experiment (1851) and the aberration of star light.
According to the stationary and the partially dragged aether hypotheses, Earth and the aether are in relative motion, implying that a so-called "aether wind" (Fig. 2) should exist. Although it would be possible, in theory, for the Earth's motion to match that of the aether at one moment in time, it was not possible for the Earth to remain at rest with respect to the aether at all times, because of the variation in both the direction and the speed of the motion. At any given point on the Earth's surface, the magnitude and direction of the wind would vary with time of day and season. By analyzing the return speed of light in different directions at various different times, it was thought to be possible to measure the motion of the Earth relative to the aether. The expected relative difference in the measured speed of light was quite small, given that the velocity of the Earth in its orbit around the Sun has a magnitude of about one hundredth of one percent of the speed of light.
During the mid-19th century, measurements of aether wind effects of first order, i.e., effects proportional to "v"/"c" ("v" being Earth's velocity, "c" the speed of light) were thought to be possible, but no direct measurement of the speed of light was possible with the accuracy required. For instance, the Fizeau wheel could measure the speed of light to perhaps 5% accuracy, which was quite inadequate for measuring directly a first-order 0.01% change in the speed of light. A number of physicists therefore attempted to make measurements of indirect first-order effects not of the speed of light itself, but of variations in the speed of light (see First order aether-drift experiments). The Hoek experiment, for example, was intended to detect interferometric fringe shifts due to speed differences of oppositely propagating light waves through water at rest. The results of such experiments were all negative. This could be explained by using Fresnel's dragging coefficient, according to which the aether and thus light are partially dragged by moving matter. Partial aether-dragging would thwart attempts to measure any first order change in the speed of light. As pointed out by Maxwell (1878), only experimental arrangements capable of measuring second order effects would have any hope of detecting aether drift, i.e., effects proportional to "v"^2/"c"^2. Existing experimental setups, however, were not sensitive enough to measure effects of that size.
1881 and 1887 experiments.
Michelson experiment (1881).
Michelson had a solution to the problem of how to construct a device sufficiently accurate to detect aether flow. In 1877, while teaching at his alma mater, the United States Naval Academy in Annapolis, Michelson conducted his first known light speed experiments as a part of a classroom demonstration. In 1881, he left active U.S. Naval service while in Germany concluding his studies. In that year, Michelson used a prototype experimental device to make several more measurements.
The device he designed, later known as a Michelson interferometer, sent yellow light from a sodium flame (for alignment), or white light (for the actual observations), through a half-silvered mirror that was used to split it into two beams traveling at right angles to one another. After leaving the splitter, the beams traveled out to the ends of long arms where they were reflected back into the middle by small mirrors. They then recombined on the far side of the splitter in an eyepiece, producing a pattern of constructive and destructive interference whose transverse displacement would depend on the relative time it takes light to transit the longitudinal "vs." the transverse arms. If the Earth is traveling through an aether medium, a light beam traveling parallel to the flow of that aether will take longer to reflect back and forth than would a beam traveling perpendicular to the aether, because the increase in elapsed time from traveling against the aether wind is more than the time saved by traveling with the aether wind. Michelson expected that the Earth's motion would produce a fringe shift equal to 0.04 fringes—that is, 1/25 of the separation between areas of the same intensity. He did not observe the expected shift; the greatest average deviation that he measured (in the northwest direction) was only 0.018 fringes; most of his measurements were much less. His conclusion was that Fresnel's hypothesis of a stationary aether with partial aether dragging would have to be rejected, and thus he confirmed Stokes' hypothesis of complete aether dragging.
However, Alfred Potier (and later Hendrik Lorentz) pointed out to Michelson that he had made an error of calculation, and that the expected fringe shift should have been only 0.02 fringes. Michelson's apparatus was subject to experimental errors far too large to say anything conclusive about the aether wind. Definitive measurement of the aether wind would require an experiment with greater accuracy and better controls than the original. Nevertheless, the prototype was successful in demonstrating that the basic method was feasible.
Michelson–Morley experiment (1887).
In 1885, Michelson began a collaboration with Edward Morley, spending considerable time and money to confirm with higher accuracy Fizeau's 1851 experiment on Fresnel's drag coefficient, to improve on Michelson's 1881 experiment, and to establish the wavelength of light as a standard of length. At this time Michelson was professor of physics at the Case School of Applied Science, and Morley was professor of chemistry at Western Reserve University (WRU), which shared a campus with the Case School on the eastern edge of Cleveland. Michelson suffered a mental health crisis in September 1885, from which he recovered by October 1885. Morley ascribed this breakdown to the intense work of Michelson during the preparation of the experiments. In 1886, Michelson and Morley successfully confirmed Fresnel's drag coefficient – this result was also considered as a confirmation of the stationary aether concept.
This result strengthened their hope of finding the aether wind. Michelson and Morley created an improved version of the Michelson experiment with more than enough accuracy to detect this hypothetical effect. The experiment was performed in several periods of concentrated observations between April and July 1887, in the basement of Adelbert Dormitory of WRU (later renamed Pierce Hall, demolished in 1962).
As shown in the diagram to the right, the light was repeatedly reflected back and forth along the arms of the interferometer, increasing the path length to about 11 m. At this length, the drift would be about 0.4 fringes. To make that easily detectable, the apparatus was assembled in a closed room in the basement of the heavy stone dormitory, eliminating most thermal and vibrational effects. Vibrations were further reduced by building the apparatus on top of a large block of sandstone (Fig. 1), about a foot thick and five feet square, which was then floated in a circular trough of mercury. They estimated that effects of about 0.01 fringe would be detectable.
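The quoted figure of roughly 0.4 fringes can be reproduced from the classical second-order prediction for the fringe shift on rotating the apparatus by 90 degrees, approximately 2"Lv"^2/(λ"c"^2). The sketch below (an illustration, not from the original paper) assumes an effective wavelength of about 500 nm for the white-light fringes:

```python
# Expected fringe shift for a 90-degree rotation: dN ~ 2 * L * v^2 / (wavelength * c^2)
L = 11.0            # effective optical path length in metres
v = 30e3            # Earth's orbital speed in m/s
c = 3.0e8           # speed of light in m/s
wavelength = 500e-9 # assumed effective wavelength of the white-light fringes

dN = 2 * L * v ** 2 / (wavelength * c ** 2)
print(dN)           # about 0.44 of a fringe
```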
Michelson and Morley and other early experimentalists using interferometric techniques in an attempt to measure the properties of the luminiferous aether, used (partially) monochromatic light only for initially setting up their equipment, always switching to white light for the actual measurements. The reason is that measurements were recorded visually. Purely monochromatic light would result in a uniform fringe pattern. Lacking modern means of environmental temperature control, experimentalists struggled with continual fringe drift even when the interferometer was set up in a basement. Because the fringes would occasionally disappear due to vibrations caused by passing horse traffic, distant thunderstorms and the like, an observer could easily "get lost" when the fringes returned to visibility. The advantages of white light, which produced a distinctive colored fringe pattern, far outweighed the difficulties of aligning the apparatus due to its low coherence length. As Dayton Miller wrote, "White light fringes were chosen for the observations because they consist of a small group of fringes having a central, sharply defined black fringe which forms a permanent zero reference mark for all readings." Use of partially monochromatic light (yellow sodium light) during initial alignment enabled the researchers to locate the position of equal path length, more or less easily, before switching to white light.
The mercury trough allowed the device to turn with close to zero friction, so that once having given the sandstone block a single push it would slowly rotate through the entire range of possible angles to the "aether wind", while measurements were continuously observed by looking through the eyepiece. The hypothesis of aether drift implies that because one of the arms would inevitably turn into the direction of the wind at the same time that another arm was turning perpendicularly to the wind, an effect should be noticeable even over a period of minutes.
The expectation was that the effect would be graphable as a sine wave with two peaks and two troughs per rotation of the device. This result could have been expected because during each full rotation, each arm would be parallel to the wind twice (facing into and away from the wind giving identical readings) and perpendicular to the wind twice. Additionally, due to the Earth's rotation, the wind would be expected to show periodic changes in direction and magnitude during the course of a sidereal day.
Because of the motion of the Earth around the Sun, the measured data were also expected to show annual variations.
Most famous "failed" experiment.
After all this thought and preparation, the experiment became what has been called the most famous failed experiment in history. Instead of providing insight into the properties of the aether, Michelson and Morley's article in the "American Journal of Science" reported the measurement to be as small as one-fortieth of the expected displacement (Fig. 7), but "since the displacement is proportional to the square of the velocity" they concluded that the measured velocity was "probably less than one-sixth" of the expected velocity of the Earth's motion in orbit and "certainly less than one-fourth". Although this small "velocity" was measured, it was considered far too small to be used as evidence of speed relative to the aether, and it was understood to be within the range of an experimental error that would allow the speed to actually be zero. For instance, Michelson wrote about the "decidedly negative result" in a letter to Lord Rayleigh in August 1887:
<templatestyles src="Template:Blockquote/styles.css" />The Experiments on the relative motion of the earth and ether have been completed and the result decidedly negative. The expected deviation of the interference fringes from the zero should have been 0.40 of a fringe – the maximum displacement was 0.02 and the average much less than 0.01 – and then not in the right place. As displacement is proportional to squares of the relative velocities it follows that if the ether does slip past the relative velocity is less than one sixth of the earth’s velocity.
From the standpoint of the then current aether models, the experimental results were conflicting. The Fizeau experiment and its 1886 repetition by Michelson and Morley apparently confirmed the stationary aether with partial aether dragging, and refuted complete aether dragging. On the other hand, the much more precise Michelson–Morley experiment (1887) apparently confirmed complete aether dragging and refuted the stationary aether. In addition, the Michelson–Morley null result was further substantiated by the null results of other second-order experiments of different kind, namely the Trouton–Noble experiment (1903) and the experiments of Rayleigh and Brace (1902–1904). These problems and their solution led to the development of the Lorentz transformation and special relativity.
After the "failed" experiment Michelson and Morley ceased their aether drift measurements and started to use their newly developed technique to establish the wavelength of light as a standard of length.
Light path analysis and consequences.
Observer resting in the aether.
The beam travel time in the longitudinal direction can be derived as follows: Light is sent from the source and propagates with the speed of light formula_0 in the aether. It passes through the half-silvered mirror at the origin at formula_1. The reflecting mirror is at that moment at distance formula_2 (the length of the interferometer arm) and is moving with velocity formula_3. The beam hits the mirror at time formula_4 and thus travels the distance formula_5. At this time, the mirror has traveled the distance formula_6. Thus formula_7 and consequently the travel time formula_8. The same consideration applies to the backward journey, with the sign of formula_3 reversed, resulting in formula_9 and formula_10. The total travel time formula_11 is:
formula_12
Michelson obtained this expression correctly in 1881; however, in the transverse direction he obtained the incorrect expression
formula_13
because he overlooked the increase in path length in the rest frame of the aether. This was corrected by Alfred Potier (1882) and Hendrik Lorentz (1886). The derivation in the transverse direction can be given as follows (analogous to the derivation of time dilation using a light clock): The beam is propagating at the speed of light formula_0 and hits the mirror at time formula_14, traveling the distance formula_15. At the same time, the mirror has traveled the distance formula_16 in the "x" direction. So in order to hit the mirror, the travel path of the beam is formula_2 in the "y" direction (assuming equal-length arms) and formula_16 in the "x" direction. This inclined travel path follows from the transformation from the interferometer rest frame to the aether rest frame. Therefore, the Pythagorean theorem gives the actual beam travel distance of formula_17. Thus formula_18 and consequently the travel time formula_19, which is the same for the backward journey. The total travel time formula_20 is:
formula_21
The time difference between formula_22 and formula_23 is given by
formula_24
To find the path difference, simply multiply the time difference by formula_25:
formula_26
The path difference is denoted by formula_27 because the beams are out of phase by some number of wavelengths (formula_28). To visualise this, consider taking the two beam paths along the longitudinal and transverse plane and laying them out straight (an animation of this is shown at minute 11:00, The Mechanical Universe, episode 41). One path will be longer than the other; this difference in length is formula_27. Alternatively, consider the rearrangement of the speed of light formula, formula_29.
If the relation formula_30 holds (that is, if the velocity of the aether is small relative to the speed of light), then the expression can be simplified using a first-order binomial expansion:
formula_31
So, rewriting the above in terms of powers:
formula_32
Applying the binomial simplification:
formula_33
Therefore:
formula_34
It can be seen from this derivation that the aether wind would manifest itself as a path difference between the two beams. The path difference is greatest when one arm of the interferometer lies along the aether wind (the orientation rotated by 90° gives an equal and opposite difference), and it vanishes when the arms are at 45° to the wind. The path difference can be any fraction of the wavelength, depending on the angle and speed of the aether wind.
To prove the existence of the aether, Michelson and Morley sought to find the "fringe shift". The idea was simple: the fringes of the interference pattern should shift when the apparatus is rotated by 90°, because the two beams exchange roles. To find the fringe shift, subtract the path difference in the second orientation from the path difference in the first, then divide by the wavelength, formula_28, of light:
formula_35
Note the difference between formula_27, which is some number of wavelengths, and formula_28, which is a single wavelength. As can be seen from this relation, the fringe shift "n" is a dimensionless quantity.
Since "L" ≈ 11 meters and λ ≈ 500 nanometers, the expected fringe shift was "n" ≈ 0.44. The negative result led Michelson to the conclusion that there is no measurable aether drift. However, he never accepted this on a personal level, and the negative result haunted him for the rest of his life.
Observer comoving with the interferometer.
If the same situation is described from the view of an observer co-moving with the interferometer, then the effect of aether wind is similar to the effect experienced by a swimmer, who tries to move with velocity formula_0 against a river flowing with velocity formula_3.
In the longitudinal direction the swimmer first moves upstream, so his velocity is diminished due to the river flow to formula_36. On his way back moving downstream, his velocity is increased to formula_37. This gives the beam travel times formula_4 and formula_38 as mentioned above.
In the transverse direction, the swimmer has to compensate for the river flow by moving at a certain angle against the flow direction, in order to sustain his exact transverse direction of motion and to reach the other side of the river at the correct location. This diminishes his speed to formula_39, and gives the beam travel time formula_14 as mentioned above.
Mirror reflection.
The classical analysis predicted a relative phase shift between the longitudinal and transverse beams which in Michelson and Morley's apparatus should have been readily measurable. What is not often appreciated (since there was no means of measuring it) is that motion through the hypothetical aether should also have caused the two beams to diverge as they emerged from the interferometer by about 10⁻⁸ radians.
For an apparatus in motion, the classical analysis requires that the beam-splitting mirror be slightly offset from an exact 45° if the longitudinal and transverse beams are to emerge from the apparatus exactly superimposed. In the relativistic analysis, Lorentz-contraction of the beam splitter in the direction of motion causes it to become more perpendicular by precisely the amount necessary to compensate for the angle discrepancy of the two beams.
Length contraction and Lorentz transformation.
A first step to explaining the Michelson and Morley experiment's null result was found in the FitzGerald–Lorentz contraction hypothesis, now simply called length contraction or Lorentz contraction, first proposed by George FitzGerald (1889) in a letter to the journal "Science" as "almost the only hypothesis that can reconcile" the apparent contradictions. It was also independently proposed by Hendrik Lorentz (1892). According to this law all objects physically contract by formula_40 along the line of motion (originally thought to be relative to the aether), formula_41 being the Lorentz factor. This hypothesis was partly motivated by Oliver Heaviside's discovery in 1888 that electrostatic fields are contracted along the line of motion. But since there was no reason at that time to assume that binding forces in matter are of electric origin, length contraction of matter in motion with respect to the aether was considered an ad hoc hypothesis.
If length contraction of formula_2 is inserted into the above formula for formula_42, then the light propagation time in the longitudinal direction becomes equal to that in the transverse direction:
formula_43
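The cancellation can be confirmed symbolically. The short SymPy sketch below (an illustration only) writes both round-trip times in terms of β = "v"/"c", substitutes the contracted length into the longitudinal expression, and verifies that the difference vanishes.
```python
import sympy as sp

L, c, beta = sp.symbols('L c beta', positive=True)   # beta stands for v/c

T_long = (2 * L / c) / (1 - beta**2)           # longitudinal round-trip time
T_trans = (2 * L / c) / sp.sqrt(1 - beta**2)   # transverse round-trip time

# contract the longitudinal arm: L -> L*sqrt(1 - beta**2)
T_long_contracted = T_long.subs(L, L * sp.sqrt(1 - beta**2))
print(sp.simplify(T_long_contracted - T_trans))   # prints 0
```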
However, length contraction is only a special case of the more general relation, according to which the transverse length is larger than the longitudinal length by the ratio formula_44. This can be achieved in many ways. If formula_45 is the moving longitudinal length and formula_46 the moving transverse length, with formula_47 being the rest lengths, then the relation is:
formula_48
formula_49 can be arbitrarily chosen, so there are infinitely many combinations to explain the Michelson–Morley null result. For instance, if formula_50 the relativistic value of length contraction of formula_45 occurs, but if formula_51 then no length contraction but an elongation of formula_46 occurs. This hypothesis was later extended by Joseph Larmor (1897), Lorentz (1904) and Henri Poincaré (1905), who developed the complete Lorentz transformation including time dilation in order to explain the Trouton–Noble experiment, the Experiments of Rayleigh and Brace, and Kaufmann's experiments. It has the form
formula_52
It remained to define the value of formula_49, which was shown by Lorentz (1904) to be unity. In general, Poincaré (1905) demonstrated that only formula_50 allows this transformation to form a group, so it is the only choice compatible with the principle of relativity, "i.e.," making the stationary aether undetectable. Given this, length contraction and time dilation obtain their exact relativistic values.
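The group property emphasized by Poincaré can be illustrated numerically. In the following sketch (an illustration, not a proof), two boosts along the "x" axis are composed and compared with a single boost at the relativistically added velocity; for the (t, x) block of the transformation the determinant equals the square of formula_49, so a unit determinant corresponds to formula_50.
```python
import numpy as np

c = 1.0                                   # units with c = 1 for readability

def boost(v):
    """Lorentz boost along x acting on (t, x), with the factor phi set to 1."""
    g = 1.0 / np.sqrt(1.0 - v**2 / c**2)
    return np.array([[g, -g * v / c**2],
                     [-g * v, g]])

v1, v2 = 0.3 * c, 0.6 * c
w = (v1 + v2) / (1.0 + v1 * v2 / c**2)    # relativistic velocity addition

print(np.allclose(boost(v1) @ boost(v2), boost(w)))   # True: the boosts close under composition
print(np.isclose(np.linalg.det(boost(v1)), 1.0))      # True: unit determinant, i.e. phi = 1
```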
Special relativity.
Albert Einstein formulated the theory of special relativity by 1905, deriving the Lorentz transformation and thus length contraction and time dilation from the relativity postulate and the constancy of the speed of light, thus removing the "ad hoc" character from the contraction hypothesis. Einstein emphasized the kinematic foundation of the theory and the modification of the notion of space and time, with the stationary aether no longer playing any role in his theory. He also pointed out the group character of the transformation. Einstein was motivated by Maxwell's theory of electromagnetism (in the form as it was given by Lorentz in 1895) and the lack of evidence for the luminiferous aether.
This allows a more elegant and intuitive explanation of the Michelson–Morley null result. In a comoving frame the null result is self-evident, since the apparatus can be considered as at rest in accordance with the relativity principle, thus the beam travel times are the same. In a frame relative to which the apparatus is moving, the same reasoning applies as described above in "Length contraction and Lorentz transformation", except the word "aether" has to be replaced by "non-comoving inertial frame". Einstein wrote in 1916:
<templatestyles src="Template:Blockquote/styles.css" />Although the estimated difference between these two times is exceedingly small, Michelson and Morley performed an experiment involving interference in which this difference should have been clearly detectable. But the experiment gave a negative result — a fact very perplexing to physicists. Lorentz and FitzGerald rescued the theory from this difficulty by assuming that the motion of the body relative to the æther produces a contraction of the body in the direction of motion, the amount of contraction being just sufficient to compensate for the difference in time mentioned above. Comparison with the discussion in Section 11 shows that also from the standpoint of the theory of relativity this solution of the difficulty was the right one. But on the basis of the theory of relativity the method of interpretation is incomparably more satisfactory. According to this theory there is no such thing as a "specially favoured" (unique) co-ordinate system to occasion the introduction of the æther-idea, and hence there can be no æther-drift, nor any experiment with which to demonstrate it. Here the contraction of moving bodies follows from the two fundamental principles of the theory, without the introduction of particular hypotheses; and as the prime factor involved in this contraction we find, not the motion in itself, to which we cannot attach any meaning, but the motion with respect to the body of reference chosen in the particular case in point. Thus for a co-ordinate system moving with the earth the mirror system of Michelson and Morley is not shortened, but it is shortened for a co-ordinate system which is at rest relatively to the sun.
The extent to which the null result of the Michelson–Morley experiment influenced Einstein is disputed. Alluding to some statements of Einstein, many historians argue that it played no significant role in his path to special relativity, while other statements of Einstein probably suggest that he was influenced by it. In any case, the null result of the Michelson–Morley experiment helped the notion of the constancy of the speed of light gain widespread and rapid acceptance.
It was later shown by Howard Percy Robertson (1949) and others (see Robertson–Mansouri–Sexl test theory), that it is possible to derive the Lorentz transformation entirely from the combination of three experiments. First, the Michelson–Morley experiment showed that the speed of light is independent of the "orientation" of the apparatus, establishing the relationship between longitudinal (β) and transverse (δ) lengths. Then in 1932, Roy Kennedy and Edward Thorndike modified the Michelson–Morley experiment by making the path lengths of the split beam unequal, with one arm being very short. The Kennedy–Thorndike experiment took place for many months as the Earth moved around the sun. Their negative result showed that the speed of light is independent of the "velocity" of the apparatus in different inertial frames. In addition it established that besides length changes, corresponding time changes must also occur, i.e., it established the relationship between longitudinal lengths (β) and time changes (α). So both experiments do not provide the individual values of these quantities. This uncertainty corresponds to the undefined factor formula_49 as described above. It was clear due to theoretical reasons (the group character of the Lorentz transformation as required by the relativity principle) that the individual values of length contraction and time dilation must assume their exact relativistic form. But a direct measurement of one of these quantities was still desirable to confirm the theoretical results. This was achieved by the Ives–Stilwell experiment (1938), measuring α in accordance with time dilation. Combining this value for α with the Kennedy–Thorndike null result shows that "β" must assume the value of relativistic length contraction. Combining "β" with the Michelson–Morley null result shows that "δ" must be zero. Therefore, the Lorentz transformation with formula_50 is an unavoidable consequence of the combination of these three experiments.
Special relativity is generally considered the solution to all negative aether drift (or isotropy of the speed of light) measurements, including the Michelson–Morley null result. Many high precision measurements have been conducted as tests of special relativity and modern searches for Lorentz violation in the photon, electron, nucleon, or neutrino sector, all of them confirming relativity.
Incorrect alternatives.
As mentioned above, Michelson initially believed that his experiment would confirm Stokes' theory, according to which the aether was fully dragged in the vicinity of the earth (see Aether drag hypothesis). However, complete aether drag contradicts the observed aberration of light and was contradicted by other experiments as well. In addition, Lorentz showed in 1886 that Stokes's attempt to explain aberration is contradictory.
Furthermore, the assumption that the aether is not carried in the vicinity, but only "within" matter, was very problematic as shown by the Hammar experiment (1935). Hammar directed one leg of his interferometer through a heavy metal pipe plugged with lead. If aether were dragged by mass, it was theorized that the mass of the sealed metal pipe would have been enough to cause a visible effect. Once again, no effect was seen, so aether-drag theories are considered to be disproven.
Walther Ritz's emission theory (or ballistic theory) was also consistent with the results of the experiment, not requiring aether. The theory postulates that light always has the same velocity with respect to the source. However, de Sitter noted that emitter theory predicted several optical effects that were not seen in observations of binary stars in which the light from the two stars could be measured in a spectrometer. If emission theory were correct, the light from the stars should experience unusual fringe shifting due to the velocity of the stars being added to the speed of the light, but no such effect could be seen. It was later shown by J. G. Fox that the original de Sitter experiments were flawed due to extinction, but in 1977 Brecher observed X-rays from binary star systems with similar null results. Furthermore, Filippas and Fox (1964) conducted terrestrial particle accelerator tests specifically designed to address Fox's earlier "extinction" objection, the results being inconsistent with source dependence of the speed of light.
Subsequent experiments.
Although Michelson and Morley went on to different experiments after their first publication in 1887, both remained active in the field. Other versions of the experiment were carried out with increasing sophistication. Morley was not convinced of his own results, and went on to conduct additional experiments with Dayton Miller from 1902 to 1904. Again, the result was negative within the margins of error.
Miller worked on increasingly larger interferometers, culminating in one with a 32-metre (effective) arm length that he tried at various sites, including on top of a mountain at the Mount Wilson Observatory. To avoid the possibility of the aether wind being blocked by solid walls, his mountaintop observations used a special shed with thin walls, mainly of canvas. From noisy, irregular data, he consistently extracted a small positive signal that varied with each rotation of the device, with the sidereal day, and on a yearly basis. His measurements in the 1920s amounted to approximately 10 km/s instead of the nearly 30 km/s expected from the Earth's orbital motion alone. He remained convinced this was due to partial entrainment or aether dragging, though he did not attempt a detailed explanation. He ignored critiques demonstrating the inconsistency of his results and the refutation by the Hammar experiment. Miller's findings were considered important at the time, and were discussed by Michelson, Lorentz and others at a meeting reported in 1928. There was general agreement that more experimentation was needed to check Miller's results. Miller later built a non-magnetic device to eliminate magnetostriction, while Michelson built one of non-expanding Invar to eliminate any remaining thermal effects. Other experimenters from around the world increased accuracy, eliminated possible side effects, or both. So far, no one has been able to replicate Miller's results, and modern experimental accuracies have ruled them out. Roberts (2006) has pointed out that the primitive data reduction techniques used by Miller and other early experimenters, including Michelson and Morley, were capable of "creating" apparent periodic signals even when none existed in the actual data. After reanalyzing Miller's original data using modern techniques of quantitative error analysis, Roberts found Miller's apparent signals to be statistically insignificant.
Using a special optical arrangement involving a 1/20 wave step in one mirror, Roy J. Kennedy (1926) and K.K. Illingworth (1927) (Fig. 8) converted the task of detecting fringe shifts from the relatively insensitive one of estimating their lateral displacements to the considerably more sensitive task of adjusting the light intensity on both sides of a sharp boundary for equal luminance. If they observed unequal illumination on either side of the step, such as in Fig. 8e, they would add or remove calibrated weights from the interferometer until both sides of the step were once again evenly illuminated, as in Fig. 8d. The number of weights added or removed provided a measure of the fringe shift. Different observers could detect changes as little as 1/1500 to 1/300 of a fringe. Kennedy also carried out an experiment at Mount Wilson, finding only about 1/10 the drift measured by Miller and no seasonal effects.
In 1930, Georg Joos conducted an experiment using an automated interferometer with arms forged from pressed quartz having a very low coefficient of thermal expansion, which took continuous photographic strip recordings of the fringes through dozens of revolutions of the apparatus. Displacements of 1/1000 of a fringe could be measured on the photographic plates. No periodic fringe displacements were found, placing an upper limit on the aether wind of 1.5 km/s.
In the table below, the expected values are related to the relative speed between Earth and Sun of 30 km/s. With respect to the speed of the solar system around the galactic center of about 220 km/s, or the speed of the solar system relative to the CMB rest frame of about 370 km/s, the null results of those experiments are even more obvious.
Recent experiments.
Optical tests.
Optical tests of the isotropy of the speed of light became commonplace. New technologies, including the use of lasers and masers, have significantly improved measurement precision. (In the following table, only Essen (1955), Jaseja (1964), and Shamir/Fox (1969) are experiments of Michelson–Morley type, "i.e.," comparing two perpendicular beams. The other optical experiments employed different methods.)
Recent optical resonator experiments.
During the early 21st century, there has been a resurgence in interest in performing precise Michelson–Morley type experiments using lasers, masers, cryogenic optical resonators, etc. This is in large part due to predictions of quantum gravity that suggest that special relativity may be violated at scales accessible to experimental study. The first of these highly accurate experiments was conducted by Brillet & Hall (1979), in which they analyzed a laser frequency stabilized to a resonance of a rotating optical Fabry–Pérot cavity. They set a limit on the anisotropy of the speed of light resulting from the Earth's motions of Δ"c"/"c" ≈ 10⁻¹⁵, where Δ"c" is the difference between the speed of light in the "x"- and "y"-directions.
As of 2015, optical and microwave resonator experiments have improved this limit to Δ"c"/"c" ≈ 10⁻¹⁸. In some of them, the devices were rotated or remained stationary, and some were combined with the Kennedy–Thorndike experiment. In particular, Earth's direction and velocity (ca. 370 km/s) relative to the CMB rest frame are ordinarily used as references in these searches for anisotropies.
Other tests of Lorentz invariance.
Examples of other experiments not based on the Michelson–Morley principle, i.e., non-optical isotropy tests achieving an even higher level of precision, are Clock comparison or Hughes–Drever experiments. In Drever's 1961 experiment, 7Li nuclei in the ground state, which has total angular momentum "J" = 3/2, were split into four equally spaced levels by a magnetic field. Each transition between a pair of adjacent levels should emit a photon of equal frequency, resulting in a single, sharp spectral line. However, since the nuclear wave functions for different "MJ" have different orientations in space relative to the magnetic field, any orientation dependence, whether from an aether wind or from a dependence on the large-scale distribution of mass in space (see Mach's principle), would perturb the energy spacings between the four levels, resulting in an anomalous broadening or splitting of the line. No such broadening was observed. Modern repeats of this kind of experiment have provided some of the most accurate confirmations of the principle of Lorentz invariance.
References.
Notes.
<templatestyles src="Reflist/styles.css" />
Experiments.
<templatestyles src="Reflist/styles.css" />
Bibliography (Series "A" references).
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "c"
},
{
"math_id": 1,
"text": "T=0"
},
{
"math_id": 2,
"text": "L"
},
{
"math_id": 3,
"text": "v"
},
{
"math_id": 4,
"text": "T_1"
},
{
"math_id": 5,
"text": "cT_1"
},
{
"math_id": 6,
"text": "vT_1"
},
{
"math_id": 7,
"text": "cT_1 =L+vT_1"
},
{
"math_id": 8,
"text": "T_1=L/(c-v)"
},
{
"math_id": 9,
"text": "cT_2 =L-vT_2"
},
{
"math_id": 10,
"text": "T_2 =L/(c+v)"
},
{
"math_id": 11,
"text": "T_\\ell=T_1+T_2"
},
{
"math_id": 12,
"text": "T_\\ell=\\frac{L}{c-v}+\\frac{L}{c+v} =\\frac{2L}{c}\\frac{1}{1-\\frac{v^2}{c^2}} \\approx\\frac{2L}{c} \\left(1+\\frac{v^2}{c^2}\\right)"
},
{
"math_id": 13,
"text": "T_t=\\frac{2L}{c},"
},
{
"math_id": 14,
"text": "T_3"
},
{
"math_id": 15,
"text": "cT_3"
},
{
"math_id": 16,
"text": "vT_3"
},
{
"math_id": 17,
"text": " \\sqrt{L^2+\\left(vT_3\\right)^2}"
},
{
"math_id": 18,
"text": " cT_3 =\\sqrt{L^2+\\left(vT_3\\right)^2}"
},
{
"math_id": 19,
"text": " T_3 =L/\\sqrt{c^2-v^2}"
},
{
"math_id": 20,
"text": "T_t=2T_3"
},
{
"math_id": 21,
"text": "T_t=\\frac{2L}{\\sqrt{c^2-v^2}}=\\frac{2L}{c}\\frac{1}{\\sqrt{1-\\frac{v^2}{c^2}}}\\approx\\frac{2L}{c} \\left(1+\\frac{v^2}{2c^2}\\right)"
},
{
"math_id": 22,
"text": "T_\\ell"
},
{
"math_id": 23,
"text": "T_t"
},
{
"math_id": 24,
"text": "T_\\ell-T_t=\\frac{2L}{c}\\left(\\frac{1}{1-\\frac{v^2}{c^2}}-\\frac{1}{\\sqrt{1-\\frac{v^2}{c^2}}}\\right)"
},
{
"math_id": 25,
"text": "c"
},
{
"math_id": 26,
"text": "\\Delta{\\lambda}_1=2L\\left(\\frac{1}{1-\\frac{v^2}{c^2}}-\\frac{1}{\\sqrt{1-\\frac{v^2}{c^2}}}\\right)"
},
{
"math_id": 27,
"text": "\\Delta \\lambda"
},
{
"math_id": 28,
"text": "\\lambda"
},
{
"math_id": 29,
"text": "c{\\Delta}T = \\Delta\\lambda"
},
{
"math_id": 30,
"text": "{v^2}/{c^2} << 1"
},
{
"math_id": 31,
"text": "(1-x)^n \\approx {1-nx}"
},
{
"math_id": 32,
"text": "\\Delta{\\lambda}_1 = 2L\\left(\\left({1-\\frac{v^2}{c^2}}\\right)^{-1}-\\left(1-\\frac{v^2}{c^2}\\right)^{-1/2}\\right)"
},
{
"math_id": 33,
"text": "\\Delta{\\lambda}_1 = 2L\\left( (1+\\frac{v^2}{c^2}) - (1+\\frac{v^2}{2c^2})\\right)={2L}\\frac{v^2}{2c^2}"
},
{
"math_id": 34,
"text": "\\Delta{\\lambda}_1={L}\\frac{v^2}{c^2}"
},
{
"math_id": 35,
"text": "n=\\frac{\\Delta\\lambda_1-\\Delta\\lambda_2}{\\lambda}\\approx\\frac{2Lv^2}{\\lambda c^2}."
},
{
"math_id": 36,
"text": "c-v"
},
{
"math_id": 37,
"text": "c+v"
},
{
"math_id": 38,
"text": "T_2"
},
{
"math_id": 39,
"text": "\\sqrt{c^2-v^2}"
},
{
"math_id": 40,
"text": "L/\\gamma"
},
{
"math_id": 41,
"text": "\\gamma=1/\\sqrt{1-v^2/c^2}"
},
{
"math_id": 42,
"text": "T_\\ell"
},
{
"math_id": 43,
"text": "T_\\ell=\\frac{2L\\sqrt{1-\\frac{v^2}{c^2}}}{c}\\frac{1}{1-\\frac{v^2}{c^2}}=\\frac{2L}{c} \\frac{1}{\\sqrt{1-\\frac{v^2}{c^2}}}=T_t"
},
{
"math_id": 44,
"text": "\\gamma"
},
{
"math_id": 45,
"text": "L_1"
},
{
"math_id": 46,
"text": "L_2"
},
{
"math_id": 47,
"text": "L'_1=L'_2"
},
{
"math_id": 48,
"text": "\\frac{L_2}{L_1}=\\frac{L'_2}{\\varphi}\\left/\\frac{L'_1}{\\gamma\\varphi}\\right.=\\gamma."
},
{
"math_id": 49,
"text": "\\varphi"
},
{
"math_id": 50,
"text": "\\varphi=1"
},
{
"math_id": 51,
"text": "\\varphi=1/\\gamma"
},
{
"math_id": 52,
"text": "x'=\\gamma\\varphi(x-vt),\\ y'=\\varphi y,\\ z'=\\varphi z,\\ t'=\\gamma\\varphi\\left(t-\\frac{vx}{c^2}\\right)"
}
] | https://en.wikipedia.org/wiki?curid=91100 |
91110 | Doubling the cube | Ancient geometric construction problem
Doubling the cube, also known as the Delian problem, is an ancient geometric problem. Given the edge of a cube, the problem requires the construction of the edge of a second cube whose volume is double that of the first. As with the related problems of squaring the circle and trisecting the angle, doubling the cube is now known to be impossible using only a compass and straightedge, but even in ancient times solutions were known that employed other methods.
The Egyptians, Indians, and particularly the Greeks were aware of the problem and made many futile attempts at solving what they saw as an obstinate but soluble problem. However, the nonexistence of a compass-and-straightedge solution was finally proven by Pierre Wantzel in 1837.
In algebraic terms, doubling a unit cube requires the construction of a line segment of length "x", where "x"³ = 2; in other words, "x" = formula_0, the cube root of two. This is because a cube of side length 1 has a volume of 1³ = 1, and a cube of twice that volume (a volume of 2) has a side length of the cube root of 2. The impossibility of doubling the cube is therefore equivalent to the statement that formula_0 is not a constructible number. This is a consequence of the fact that the coordinates of a new point constructed by a compass and straightedge are roots of polynomials over the field generated by the coordinates of previous points, of no greater degree than a quadratic. This implies that the degree of the field extension generated by a constructible point must be a power of 2. The field extension generated by formula_0, however, is of degree 3.
Proof of impossibility.
We begin with the unit line segment defined by points (0,0) and (1,0) in the plane. We are required to construct a line segment defined by two points separated by a distance of formula_0. It is easily shown that compass and straightedge constructions would allow such a line segment to be freely moved to touch the origin, parallel with the unit line segment - so equivalently we may consider the task of constructing a line segment from (0,0) to (formula_0, 0), which entails constructing the point (formula_0, 0).
Respectively, the tools of a compass and straightedge allow us to create circles centred on one previously defined point and passing through another, and to create lines passing through two previously defined points. Any newly defined point either arises as the result of the intersection of two such circles, as the intersection of a circle and a line, or as the intersection of two lines. An exercise of elementary analytic geometry shows that in all three cases, both the x- and y-coordinates of the newly defined point satisfy a polynomial of degree no higher than a quadratic, with coefficients that are additions, subtractions, multiplications, and divisions involving the coordinates of the previously defined points (and rational numbers). Restated in more abstract terminology, the new x- and y-coordinates have minimal polynomials of degree at most 2 over the subfield of formula_1 generated by the previous coordinates. Therefore, the degree of the field extension corresponding to each new coordinate is 2 or 1.
So, given a coordinate of any constructed point, we may proceed inductively backwards through the x- and y-coordinates of the points in the order that they were defined until we reach the original pair of points (0,0) and (1,0). As every field extension has degree 2 or 1, and as the field extension over formula_2 of the coordinates of the original pair of points is clearly of degree 1, it follows from the tower rule that the degree of the field extension over formula_2 of any coordinate of a constructed point is a power of 2.
Now, "p"("x") = "x"3 − 2 = 0 is easily seen to be irreducible over formula_3 – any factorisation would involve a linear factor ("x" − "k") for some "k" ∈ formula_3, and so "k" must be a root of "p"("x"); but also "k" must divide 2 (by the rational root theorem); that is, "k" = 1, 2, −1 or −2, and none of these are roots of "p"("x"). By Gauss's Lemma, "p"("x") is also irreducible over formula_2, and is thus a minimal polynomial over formula_2 for formula_0. The field extension formula_4 is therefore of degree 3. But this is not a power of 2, so by the above:
History.
The problem owes its name to a story concerning the citizens of Delos, who consulted the oracle at Delphi in order to learn how to defeat a plague sent by Apollo. According to Plutarch, however, the citizens of Delos consulted the oracle at Delphi to find a solution for their internal political problems at the time, which had intensified relationships among the citizens. The oracle responded that they must double the size of the altar to Apollo, which was a regular cube. The answer seemed strange to the Delians, and they consulted Plato, who was able to interpret the oracle as the mathematical problem of doubling the volume of a given cube, thus explaining the oracle as the advice of Apollo for the citizens of Delos to occupy themselves with the study of geometry and mathematics in order to calm down their passions.
According to Plutarch, Plato gave the problem to Eudoxus and Archytas and Menaechmus, who solved the problem using mechanical means, earning a rebuke from Plato for not solving the problem using pure geometry. This may be why the problem is referred to in the 350s BC by the author of the pseudo-Platonic "Sisyphus" (388e) as still unsolved. However another version of the story (attributed to Eratosthenes by Eutocius of Ascalon) says that all three found solutions but they were too abstract to be of practical value.
A significant development in finding a solution to the problem was the discovery by Hippocrates of Chios that it is equivalent to finding two geometric mean proportionals between a line segment and another with twice the length. In modern notation, this means that given segments of lengths "a" and 2"a", the duplication of the cube is equivalent to finding segments of lengths "r" and "s" so that
formula_5
In turn, this means that
formula_6
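A quick numerical check of Hippocrates' reduction, taking "a" = 1 so that the two mean proportionals are "r" = 2^(1/3) and "s" = 2^(2/3) (the value of "s" is implied by the proportion rather than stated above):
```python
a = 1.0
r = a * 2 ** (1 / 3)      # first mean proportional
s = a * 2 ** (2 / 3)      # second mean proportional

print(a / r, r / s, s / (2 * a))   # all three ratios equal 2**(-1/3) ~ 0.7937
print(r ** 3, 2 * a ** 3)          # the cube on r has twice the volume of the cube on a (up to rounding)
```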
But Pierre Wantzel proved in 1837 that the cube root of 2 is not constructible; that is, it cannot be constructed with straightedge and compass.
Solutions via means other than compass and straightedge.
Menaechmus' original solution involves the intersection of two conic curves. Other more complicated methods of doubling the cube involve neusis, the cissoid of Diocles, the conchoid of Nicomedes, or the Philo line. Pandrosion, probably a female mathematician of ancient Greece, found a numerically accurate approximate solution using planes in three dimensions, but was heavily criticized by Pappus of Alexandria for not providing a proper mathematical proof. Archytas solved the problem in the 4th century BC using geometric construction in three dimensions, determining a certain point as the intersection of three surfaces of revolution.
Descartes' theory of the geometric solution of equations uses a parabola to introduce cubic equations; in this way it is possible to set up an equation whose solution is the cube root of two. Note that the parabola itself is not constructible except by three-dimensional methods.
False claims of doubling the cube with compass and straightedge abound in mathematical crank literature (pseudomathematics).
Origami may also be used to construct the cube root of two by folding paper.
Using a marked ruler.
There is a simple neusis construction using a marked ruler for a length which is the cube root of 2 times another length.
In the configuration shown in the accompanying figure, AG is the given length times formula_0.
In music theory.
In music theory, a natural analogue of doubling is the octave (a musical interval caused by doubling the frequency of a tone), and a natural analogue of a cube is dividing the octave into three parts, each the same interval. In this sense, the problem of doubling the cube is solved by the major third in equal temperament. This is a musical interval that is exactly one third of an octave. It multiplies the frequency of a tone by formula_7, the side length of the Delian cube.
Explanatory notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sqrt[3]{2}"
},
{
"math_id": 1,
"text": "\\mathbb{R}"
},
{
"math_id": 2,
"text": "\\mathbb{Q}"
},
{
"math_id": 3,
"text": "\\mathbb{Z}"
},
{
"math_id": 4,
"text": "\\mathbb{Q} (\\sqrt[3]{2}):\\mathbb{Q}"
},
{
"math_id": 5,
"text": "\\frac{a}{r} = \\frac{r}{s} = \\frac{s}{2a} ."
},
{
"math_id": 6,
"text": "r=a\\cdot\\sqrt[3]{2}."
},
{
"math_id": 7,
"text": "2^{4/12}=2^{1/3}=\\sqrt[3]{2}"
}
] | https://en.wikipedia.org/wiki?curid=91110 |
91111 | Angle trisection | Construction of an angle equal to one third a given angle
Angle trisection is a classical problem of straightedge and compass construction of ancient Greek mathematics. It concerns construction of an angle equal to one third of a given arbitrary angle, using only two tools: an unmarked straightedge and a compass.
In 1837, Pierre Wantzel proved that the problem, as stated, is impossible to solve for arbitrary angles. However, some special angles can be trisected: for example, it is trivial to trisect a right angle.
It is possible to trisect an arbitrary angle by using tools other than straightedge and compass. For example, neusis construction, also known to ancient Greeks, involves simultaneous sliding and rotation of a marked straightedge, which cannot be achieved with the original tools. Other techniques were developed by mathematicians over the centuries.
Because it is defined in simple terms, but complex to prove unsolvable, the problem of angle trisection is a frequent subject of pseudomathematical attempts at solution by naive enthusiasts. These "solutions" often involve mistaken interpretations of the rules, or are simply incorrect.
Background and problem statement.
Using only an unmarked straightedge and a compass, Greek mathematicians found means to divide a line into an arbitrary set of equal segments, to draw parallel lines, to bisect angles, to construct many polygons, and to construct squares of equal or twice the area of a given polygon.
Three problems proved elusive, specifically, trisecting the angle, doubling the cube, and squaring the circle. The problem of angle trisection reads:
Construct an angle equal to one-third of a given arbitrary angle (or divide it into three equal angles), using only two tools:
Proof of impossibility.
Pierre Wantzel published a proof of the impossibility of classically trisecting an arbitrary angle in 1837. Wantzel's proof, restated in modern terminology, uses the concept of field extensions, a topic now typically combined with Galois theory. However, Wantzel published these results earlier than Évariste Galois (whose work, written in 1830, was published only in 1846) and did not use the concepts introduced by Galois.
The problem of constructing an angle of a given measure "θ" is equivalent to constructing two segments such that the ratio of their lengths is cos "θ". From a solution to one of these two problems, one may pass to a solution of the other by a compass and straightedge construction. The triple-angle formula gives an expression relating the cosines of the original angle and its trisection: cos "θ" = 4 cos³("θ"/3) − 3 cos("θ"/3).
It follows that, given a segment that is defined to have unit length, the problem of angle trisection is equivalent to constructing a segment whose length is the root of a cubic polynomial. This equivalence reduces the original geometric problem to a purely algebraic problem.
Every rational number is constructible. Every irrational number that is constructible in a single step from some given numbers is a root of a polynomial of degree 2 with coefficients in the field generated by these numbers. Therefore, any number that is constructible by a sequence of steps is a root of a minimal polynomial whose degree is a power of two. The angle π/3 radians (60 degrees, written 60°) is constructible. The argument below shows that it is impossible to construct a 20° angle. This implies that a 60° angle cannot be trisected, and thus that an arbitrary angle cannot be trisected.
Denote the set of rational numbers by Q. If 60° could be trisected, the degree of a minimal polynomial of cos 20° over Q would be a power of two. Now let "x" = cos 20°. Note that cos 60° = 1/2. Then by the triple-angle formula, cos 60° = 4"x"³ − 3"x", and so 4"x"³ − 3"x" = 1/2. Thus 8"x"³ − 6"x" − 1 = 0. Define "p"("t") to be the polynomial "p"("t") = 8"t"³ − 6"t" − 1.
Since "x" = cos 20° is a root of "p"("t"), the minimal polynomial for cos 20° is a factor of "p"("t"). Because "p"("t") has degree 3, if it is reducible over Q then it has a rational root. By the rational root theorem, this root must be ±1, ±1/2, ±1/4 or ±1/8, but none of these is a root. Therefore, "p"("t") is irreducible over Q, and the minimal polynomial for cos 20° is of degree 3.
So an angle of measure 60° cannot be trisected.
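The two computational facts underlying this proof — the triple-angle identity and the irreducibility of 8"t"³ − 6"t" − 1 — can be verified with SymPy, as in the following illustrative sketch.
```python
import sympy as sp

t = sp.symbols('t')

# the triple-angle identity: cos(3t) = 4*cos(t)**3 - 3*cos(t)
print(sp.expand_trig(sp.cos(3 * t)))

# p(t) = 8t^3 - 6t - 1 has no rational root among +/-1, +/-1/2, +/-1/4, +/-1/8
p = sp.Poly(8 * t**3 - 6 * t - 1, t, domain='QQ')
print(p.is_irreducible)                               # True

# hence the minimal polynomial of cos 20 degrees has degree 3, not a power of 2
print(sp.minimal_polynomial(sp.cos(sp.pi / 9), t))    # 8*t**3 - 6*t - 1
```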
Angles which can be trisected.
However, some angles can be trisected. For example, for any constructible angle "θ", an angle of measure 3"θ" can be trivially trisected by ignoring the given angle and directly constructing an angle of measure "θ". There are angles that are not constructible but are trisectible (despite the one-third angle itself being non-constructible). For example, 3π/7 is such an angle: five angles of measure 3π/7 combine to make an angle of measure 15π/7, which is a full circle plus the desired π/7.
For a positive integer N, an angle of measure 2π/N is "trisectible" if and only if 3 does not divide N. In contrast, 2π/N is "constructible" if and only if N is a power of 2 or the product of a power of 2 with the product of one or more distinct Fermat primes.
Algebraic characterization.
Again, denote the set of rational numbers by Q.
Theorem: An angle of measure "θ" may be trisected if and only if "q"("t") = 4"t"³ − 3"t" − cos("θ") is reducible over the field extension Q(cos("θ")).
The proof is a relatively straightforward generalization of the proof given above that a 60° angle is not trisectible.
Other numbers of parts.
For any nonzero integer N, an angle of measure 2π/N radians can be divided into n equal parts with straightedge and compass if and only if n is either a power of 2 or is a power of 2 multiplied by the product of one or more distinct Fermat primes, none of which divides N. In the case of trisection ("n" = 3, which is a Fermat prime), this condition becomes the above-mentioned requirement that N not be divisible by 3.
Other methods.
The general problem of angle trisection is solvable by using additional tools, and thus going outside of the original Greek framework of compass and straightedge.
Many incorrect methods of trisecting the general angle have been proposed. Some of these methods provide reasonable approximations; others (some of which are mentioned below) involve tools not permitted in the classical problem. The mathematician Underwood Dudley has detailed some of these failed attempts in his book "The Trisectors".
Approximation by successive bisections.
Trisection can be approximated by repetition of the compass and straightedge method for bisecting an angle. The geometric series 1/3 = 1/4 + 1/16 + 1/64 + 1/256 + ⋯ or 1/3 = 1/2 − 1/4 + 1/8 − 1/16 + ⋯ can be used as a basis for the bisections. An approximation to any degree of accuracy can be obtained in a finite number of steps.
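As an illustration of this idea (using the first of the two series), the following short Python sketch halves the angle twice per term and sums the results; after twenty steps the error is already far smaller than anything that could be drawn.
```python
def trisect_by_bisection(angle, steps=20):
    """Approximate angle/3 using only halvings, via 1/3 = 1/4 + 1/16 + 1/64 + ...

    Each term is obtained from the previous one by two successive bisections,
    so the whole procedure uses nothing but compass-and-straightedge operations.
    """
    approx = 0.0
    term = angle
    for _ in range(steps):
        term /= 4.0          # two bisections
        approx += term
    return approx

angle = 60.0
print(trisect_by_bisection(angle), angle / 3)   # 19.99999999998181... versus 20.0
```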
Using origami.
Trisection, like many constructions impossible by ruler and compass, can easily be accomplished by the operations of paper folding, or origami. Huzita's axioms (types of folding operations) can construct cubic extensions (cube roots) of given lengths, whereas ruler-and-compass can construct only quadratic extensions (square roots).
Using a linkage.
There are a number of simple linkages which can be used to make an instrument to trisect angles including Kempe's Trisector and Sylvester's Link Fan or Isoklinostat.
With a right triangular ruler.
In 1932, Ludwig Bieberbach published in "Journal für die reine und angewandte Mathematik" his work "Zur Lehre von den kubischen Konstruktionen". He states therein (free translation):
"As is known ... every cubic construction can be traced back to the trisection of the angle and to the multiplication of the cube, that is, the extraction of the third root. I need only to show how these two classical tasks can be solved by means of the right angle hook."
The construction begins with drawing a circle passing through the vertex P of the angle to be trisected, centered at A on an edge of this angle, and having B as its second intersection with the edge. A circle centered at P and of the same radius intersects the line supporting the edge in A and O.
Now the "right triangular ruler" is placed on the drawing in the following manner: one leg of its right angle passes through O; the vertex of its right angle is placed at a point S on the line PC in such a way that the second leg of the ruler is tangent at E to the circle centered at A. It follows that the original angle is trisected by the line PE, and the line PD perpendicular to SE and passing through P. This line can be drawn either by using again the right triangular ruler, or by using a traditional straightedge and compass construction. With a similar construction, one can improve the location of E, by using that it is the intersection of the line SE and its perpendicular passing through A.
"Proof:" One has to prove the angle equalities formula_0 and formula_1 The three lines OS, PD, and AE are parallel. As the line segments OP and PA are equal, these three parallel lines delimit two equal segments on every other secant line, and in particular on their common perpendicular SE. Thus "SD'" = "D'E", where D' is the intersection of the lines PD and SE. It follows that the right triangles and are congruent, and thus that formula_2 the first desired equality. On the other hand, the triangle PAE is isosceles, since all radiuses of a circle are equal; this implies that formula_3 One has also formula_4 since these two angles are alternate angles of a transversal to two parallel lines. This proves the second desired equality, and thus the correctness of the construction.
With an auxiliary curve.
There are certain curves called trisectrices which, if drawn on the plane using other methods, can be used to trisect arbitrary angles. Examples include the trisectrix of Colin Maclaurin, given in Cartesian coordinates by the implicit equation
formula_5
and the Archimedean spiral. The spiral can, in fact, be used to divide an angle into "any" number of equal parts.
Archimedes described how to trisect an angle using the Archimedean spiral in On Spirals around 225 BC.
With a marked ruler.
Another means to trisect an arbitrary angle by a "small" step outside the Greek framework is via a ruler with two marks a set distance apart. The next construction is originally due to Archimedes and is called a "neusis construction", i.e., one that uses tools other than an "un-marked" straightedge. The diagrams we use show this construction for an acute angle, but it indeed works for any angle up to 180 degrees.
This requires three facts from geometry (at right): (1) a full set of angles on a straight line adds to 180°; (2) the angle sum of a triangle is 180°; and (3) the two base angles of an isosceles triangle are equal.
Let l be the horizontal line in the adjacent diagram. Angle a (left of point B) is the subject of trisection. First, a point A is drawn on one of the angle's rays, one unit apart from B. A circle of radius AB is drawn. Then, the markedness of the ruler comes into play: one mark of the ruler is placed at A and the other at B. While keeping the ruler (but not the mark) touching A, the ruler is slid and rotated until one mark is on the circle and the other is on the line l. The mark on the circle is labeled C and the mark on the line is labeled D. This ensures that "CD" = "AB". A radius BC is drawn to make it obvious that line segments AB, BC, and CD all have equal length. Now, triangles ABC and BCD are isosceles, thus (by Fact 3 above) each has two equal angles.
Hypothesis: Given AD is a straight line, and AB, BC, and CD all have equal length.
Conclusion: angle "b" = "a"/3.
Proof:
Because triangle BCD is isosceles (BC = CD), its base angles at B and D both equal "b" (Fact 3). The angles "e" and "c" at the point C are supplementary, since A, C, and D lie on one straight line, so by Fact 1 formula_6. The angle sum of triangle BCD (Fact 2) gives formula_7. Subtracting the two equations yields formula_8. Triangle ABC is also isosceles, since AB and BC are both radii of the circle, so its angle at A likewise equals "c". Finally, the angles "a" and ABD together make up the straight line l at B, so Facts 1 and 2 applied to triangle ABD show that "a" equals the sum of that triangle's angles at D and at A; that is, formula_9,
and the theorem is proved.
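The construction can also be checked numerically. The Python sketch below (an editorial illustration) places B at the origin with the line l along the x-axis and AB = 1, as described above, locates D by a bisection search so that CD = AB, and returns the resulting angle at D; for acute input angles it reproduces one third of the given angle.
```python
import math

def marked_ruler_trisection(a_deg):
    """Simulate the marked-ruler construction for an acute angle a (in degrees)."""
    a = math.radians(a_deg)
    A = (-math.cos(a), math.sin(a))          # A on the circle of radius AB = 1 about B = (0, 0)

    def cd_length(d):
        # Power of the point D = (d, 0) with respect to the unit circle: DC * DA = d**2 - 1,
        # where A is the far intersection of the line DA with that circle.
        DA = math.hypot(A[0] - d, A[1])
        return (d * d - 1.0) / DA

    lo, hi = 1.0 + 1e-12, 2.0                # CD grows from 0 as D slides away from the circle
    for _ in range(80):                      # bisection on d so that CD = AB = 1
        mid = 0.5 * (lo + hi)
        if cd_length(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    d = 0.5 * (lo + hi)
    return 180.0 - math.degrees(math.atan2(A[1], A[0] - d))   # angle between DA and l at D

for a in (30.0, 45.0, 60.0, 75.0):
    print(a, marked_ruler_trisection(a))     # the second value is a/3 in each case
```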
Again, this construction stepped outside the framework of allowed constructions by using a marked straightedge.
With a string.
Thomas Hutcheson published an article in the "Mathematics Teacher" that used a string instead of a compass and straight edge. A string can be used as either a straight edge (by stretching it) or a compass (by fixing one point and identifying another), but can also wrap around a cylinder, the key to Hutcheson's solution.
Hutcheson constructed a cylinder from the angle to be trisected by drawing an arc across the angle, completing it as a circle, and constructing from that circle a cylinder on which, say, an equilateral triangle was inscribed (a 360-degree angle divided in three). This was then "mapped" onto the angle to be trisected, with a simple proof of similar triangles.
With a "tomahawk".
A "tomahawk" is a geometric shape consisting of a semicircle and two orthogonal line segments, such that the length of the shorter segment is equal to the circle radius. Trisection is executed by leaning the end of the tomahawk's shorter segment on one ray, the circle's edge on the other, so that the "handle" (longer segment) crosses the angle's vertex; the trisection line runs between the vertex and the center of the semicircle.
While a tomahawk is constructible with compass and straightedge, it is not generally possible to construct a tomahawk in any desired position. Thus, the above construction does not contradict the nontrisectibility of angles with ruler and compass alone.
As a tomahawk can be used as a set square, it can also be used for trisecting angles by the method described above for the right triangular ruler.
The tomahawk produces the same geometric effect as the paper-folding method: the distance between the circle center and the tip of the shorter segment is twice the radius, which is guaranteed to contact the angle. It is also equivalent to the use of an architect's L-ruler (carpenter's square).
With interconnected compasses.
An angle can be trisected with a device that is essentially a four-pronged version of a compass, with linkages between the prongs designed to keep the three angles between adjacent prongs equal.
Uses of angle trisection.
A cubic equation with real coefficients can be solved geometrically with compass, straightedge, and an angle trisector if and only if it has three real roots.
A regular polygon with "n" sides can be constructed with ruler, compass, and angle trisector if and only if formula_10 where "r, s, k" ≥ 0 and where the "p""i" are distinct primes greater than 3 of the form formula_11 (i.e. Pierpont primes greater than 3).
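For illustration, the primes of the form formula_11 greater than 3 (the Pierpont primes entering this condition) can be enumerated with a short script; the following sketch lists those below 1000, the cutoff being arbitrary.
```python
from sympy import isprime

# primes greater than 3 of the form 2**t * 3**u + 1, below 1000
candidates = {2**t * 3**u + 1 for t in range(11) for u in range(7)}
pierpont = sorted(p for p in candidates if 3 < p < 1000 and isprime(p))
print(pierpont)
# [5, 7, 13, 17, 19, 37, 73, 97, 109, 163, 193, 257, 433, 487, 577, 769]
```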
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\widehat{EPD}= \\widehat{DPS}"
},
{
"math_id": 1,
"text": "\\widehat{BPE} = \\widehat{EPD}."
},
{
"math_id": 2,
"text": "\\widehat{EPD}= \\widehat{DPS},"
},
{
"math_id": 3,
"text": "\\widehat{APE}=\\widehat{AEP}."
},
{
"math_id": 4,
"text": "\\widehat{AEP}=\\widehat{EPD},"
},
{
"math_id": 5,
"text": "2x(x^2+y^2)=a(3x^2-y^2),"
},
{
"math_id": 6,
"text": " e + c = 180"
},
{
"math_id": 7,
"text": " e + 2b = 180"
},
{
"math_id": 8,
"text": " c = 2b"
},
{
"math_id": 9,
"text": "a=c+b=2b+b=3b"
},
{
"math_id": 10,
"text": "n=2^r3^sp_1p_2\\cdots p_k,"
},
{
"math_id": 11,
"text": "2^t3^u +1"
}
] | https://en.wikipedia.org/wiki?curid=91111 |
9111167 | William Rutherford (mathematician) | William Rutherford (1798–1871) was an English mathematician famous for his calculation of 208 digits of the mathematical constant π in 1841.
Only the first 152 calculated digits were later found to be correct, but that still broke the record of the time, which had been held by the Slovenian mathematician Jurij Vega since 1789 (first 126 digits correct). Rutherford used the following formula:
formula_0
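For illustration, the following Python sketch (a modern convenience, of course, not Rutherford's hand computation) evaluates this formula with decimal arithmetic and reproduces the opening digits of π; raising the working precision recovers as many digits as desired.
```python
from decimal import Decimal, getcontext

def arctan_recip(n, digits):
    """arctan(1/n) from its Taylor series, evaluated with Decimal arithmetic."""
    getcontext().prec = digits + 10
    x = Decimal(1) / n
    term, total, k, sign = x, x, 3, -1
    while abs(term) > Decimal(10) ** -(digits + 5):
        term = x ** k / k
        total += sign * term
        k += 2
        sign = -sign
    return total

digits = 60
pi = 4 * (4 * arctan_recip(5, digits) - arctan_recip(70, digits) + arctan_recip(99, digits))
print(str(pi)[:digits + 2])    # 3.14159265358979323846...
```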
Life.
Rutherford was born about 1798. He was a master at a school at Woodburn from 1822 to 1825, when he went to Hawick, Roxburghshire, and he was later (1832–1837) a master at Corporation Academy, Berwick-on-Tweed.
In 1838 Rutherford obtained a mathematical post at the Royal Military Academy, Woolwich. He was a member of the council of the Royal Astronomical Society from 1844 to 1847, and honorary secretary in 1845 and 1846. He was a friend of Wesley S. B. Woolhouse.
Rutherford retired from his post at Woolwich about 1864, and died on 16 September 1871, at his residence, Tweed Cottage, Maryon Road, Charlton, at the age of seventy-three.
Works.
Rutherford was the editor, with Stephen Fenwick and (for the first volume only) with Thomas Stephen Davies, of "The Mathematician", vol. i. 1845, vol. ii. 1847, vol. iii. 1850, to which he contributed many papers. He sent problems, solutions and papers to "The Ladies' Diary" from 1822 to 1869, and also contributed to the "Gentlemen's Diary". His mathematical studies were of a traditional type.
Rutherford edited
He published also:
He also wrote mathematical pamphlets, including one on the solution of spherical triangles.
References and notes.
<templatestyles src="Reflist/styles.css" />
This article incorporates text from a publication now in the public domain: the "Dictionary of National Biography".
{
"math_id": 0,
"text": " {\\pi\\over 4} = 4 \\arctan \\left({1\\over 5}\\right) - \\arctan \\left({1\\over 70}\\right) + \\arctan \\left({1\\over 99}\\right)"
}
] | https://en.wikipedia.org/wiki?curid=9111167 |
9111849 | Shape-memory polymer | Materials that can retain several shapes
Shape-memory polymers (SMPs) are polymeric smart materials that have the ability to return from a deformed state (temporary shape) to their original (permanent) shape when induced by an external stimulus (trigger), such as temperature change.
<templatestyles src="Template:Quote_box/styles.css" />
IUPAC definition
Polymer that, after heating and being subjected to a plastic deformation, resumes its original shape
when heated above its glass-transition or melting temperature
"Note:"
Properties of shape-memory polymers.
SMPs can retain two or sometimes three shapes, and the transition between those is often induced by temperature change. In addition to temperature change, the shape change of SMPs can also be triggered by an electric or magnetic field, light or solution. Like polymers in general, SMPs cover a wide range of properties from stable to biodegradable, from soft to hard, and from elastic to rigid, depending on the structural units that constitute the SMP. SMPs include thermoplastic and thermoset (covalently cross-linked) polymeric materials. SMPs are known to be able to store up to three different shapes in memory. SMPs have demonstrated recoverable strains of above 800%.
Two important quantities that are used to describe shape-memory effects are the strain recovery rate ("R"r) and strain fixity rate ("R"f). The strain recovery rate describes the ability of the material to memorize its permanent shape, while the strain fixity rate describes the ability of switching segments to fix the mechanical deformation.
formula_0
formula_1
where formula_2 is the cycle number, formula_3 is the maximum strain imposed on the material, and formula_4 and formula_5 are the strains of the sample in two successive cycles in the stress-free state before yield stress is applied.
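As a minimal illustration (the strain values below are hypothetical, not measured data), the two rates can be computed directly from their definitions:

```python
# Strain recovery rate R_r(N) and strain fixity rate R_f(N), as defined above.
def strain_recovery_rate(eps_m, eps_p_N, eps_p_Nminus1):
    """R_r(N) = (eps_m - eps_p(N)) / (eps_m - eps_p(N-1))."""
    return (eps_m - eps_p_N) / (eps_m - eps_p_Nminus1)

def strain_fixity_rate(eps_u_N, eps_m):
    """R_f(N) = eps_u(N) / eps_m."""
    return eps_u_N / eps_m

eps_m = 1.00                 # maximum imposed strain (hypothetical)
eps_p = [0.00, 0.02, 0.03]   # residual strains after cycles 0, 1, 2 (hypothetical)
eps_u = 0.97                 # fixed strain after unloading in cycle 2 (hypothetical)

print(strain_recovery_rate(eps_m, eps_p[2], eps_p[1]))  # ~0.99
print(strain_fixity_rate(eps_u, eps_m))                 # 0.97
```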
Shape-memory effect can be described briefly as the following mathematical model:
formula_6
formula_7
where formula_8 is the glassy modulus, formula_9 is the rubbery modulus, formula_10 is viscous flow strain and formula_11 is strain for formula_12.
Triple-shape memory.
While most traditional shape-memory polymers can only hold a permanent and temporary shape, recent technological advances have allowed the introduction of triple-shape-memory materials. Much as a traditional double-shape-memory polymer will change from a temporary shape back to a permanent shape at a particular temperature, triple-shape-memory polymers will switch from one temporary shape to another at the first transition temperature, and then back to the permanent shape at another, higher activation temperature. This is usually achieved by combining two double-shape-memory polymers with different glass transition temperatures or when heating a programmed shape-memory polymer first above the glass transition temperature and then above the melting transition temperature of the switching segment.
Description of the thermally induced shape-memory effect.
Polymers exhibiting a shape-memory effect have both a visible, current (temporary) form and a stored (permanent) form. Once the latter has been manufactured by conventional methods, the material is changed into another, temporary form by processing through heating, deformation, and finally, cooling. The polymer maintains this temporary shape until the shape change into the permanent form is activated by a predetermined external stimulus. The secret behind these materials lies in their molecular network structure, which contains at least two separate phases. The phase showing the highest thermal transition, "Tperm", is the temperature that must be exceeded to establish the physical crosslinks responsible for the permanent shape. The switching segments, on the other hand, are the segments with the ability to soften past a certain transition temperature ("Ttrans") and are responsible for the temporary shape. In some cases this is the glass transition temperature ("Tg") and others the melting temperature ("Tm"). Exceeding "Ttrans" (while remaining below "Tperm") activates the switching by softening these switching segments and thereby allowing the material to resume its original (permanent) form. Below "Ttrans", flexibility of the segments is at least partly limited. If "Tm" is chosen for programming the SMP, strain-induced crystallization of the switching segment can be initiated when it is stretched above "Tm" and subsequently cooled below "Tm". These crystallites form covalent netpoints which prevent the polymer from reforming its usual coiled structure. The hard to soft segment ratio is often between 5/95 and 95/5, but ideally this ratio is between 20/80 and 80/20. The shape-memory polymers are effectively viscoelastic and many models and analysis methods exist.
Thermodynamics of the shape-memory effect.
In the amorphous state, polymer chains assume a completely random distribution within the matrix. W represents the probability of a strongly coiled conformation, which is the conformation with maximum entropy, and is the most likely state for an amorphous linear polymer chain. This relationship is represented mathematically by Boltzmann's entropy formula "S" = "k" ln "W", where "S" is the entropy and "k" is Boltzmann's constant.
In the transition from the glassy state to a rubber-elastic state by thermal activation, the rotations around segment bonds become increasingly unimpeded. This allows chains to assume other possibly, energetically equivalent conformations with a small amount of disentangling. As a result, the majority of SMPs will form compact, random coils because this conformation is entropically favored over a stretched conformation.
Polymers in this elastic state with number average molecular weight greater than 20,000 stretch in the direction of an applied external force. If the force is applied for a short time, the entanglement of polymer chains with their neighbors will prevent large movement of the chain and the sample recovers its original conformation upon removal of the force. If the force is applied for a longer period of time, however, a relaxation process takes place whereby a plastic, irreversible deformation of the sample takes place due to the slipping and disentangling of the polymer chains.
To prevent the slipping and flow of polymer chains, cross-linking can be used, both chemical and physical.
Physically crosslinked SMPs.
Linear block copolymers.
Representative shape-memory polymers in this category are polyurethanes, polyurethanes with ionic or mesogenic components made by prepolymer method. Other block copolymers also show the shape-memory effect, such as, block copolymer of polyethylene terephthalate (PET) and polyethyleneoxide (PEO), block copolymers containing polystyrene and poly(1,4-butadiene), and an ABA triblock copolymer made from poly(2-methyl-2-oxazoline) and polytetrahydrofuran.
Other thermoplastic polymers.
A linear, amorphous polynorbornene (Norsorex, developed by CdF Chemie/Nippon Zeon) or organic-inorganic hybrid polymers consisting of polynorbornene units that are partially substituted by polyhedral oligosilsesquioxane (POSS) also have shape-memory effect.
Another example reported in the literature is a copolymer consisting of polycyclooctene (PCOE) and poly(5-norbornene-exo,exo-2,3-dicarboxylic anhydride) (PNBEDCA), which was synthesized through ring-opening metathesis polymerization (ROMP). Then the obtained copolymer P(COE-co-NBEDCA) was readily modified by grafting reaction of NBEDCA units with polyhedral oligomeric silsesquioxanes (POSS) to afford a functionalized copolymer P(COE-co-NBEDCA-g-POSS). It exhibits shape-memory effect.
Chemically crosslinked SMPs.
The main limitation of physically crosslinked polymers for the shape-memory application is irreversible deformation during memory programming due to the creep. The network polymer can be synthesized by either polymerization with multifunctional (3 or more) crosslinker or by subsequent crosslinking of a linear or branched polymer. They form insoluble materials which swell in certain solvents.
Crosslinked polyurethane.
This material can be made by using excess diisocyanate or by using a crosslinker such as glycerin or trimethylolpropane. Introducing covalent crosslinks improves creep resistance and increases the recovery temperature and the recovery window.
PEO based crosslinked SMPs.
The PEO-PET block copolymers can be crosslinked by using maleic anhydride, glycerin or dimethyl 5-isophthalates as a crosslinking agent. The addition of 1.5 wt% maleic anhydride increased the shape recovery from 35% to 65% and the tensile strength from 3 to 5 MPa.
Thermoplastic shape-memory.
While shape-memory effects are traditionally limited to thermosetting plastics, some thermoplastic polymers, most notably PEEK, can be used as well.
Light-induced SMPs.
Light-activated shape-memory polymers (LASMP) use processes of photo-crosslinking and photo-cleaving to change "Tg". Photo-crosslinking is achieved by using one wavelength of light, while a second wavelength of light reversibly cleaves the photo-crosslinked bonds. The effect achieved is that the material may be reversibly switched between an elastomer and a rigid polymer. Light does not change the temperature, only the cross-linking density within the material. For example, it has been reported that polymers containing cinnamic groups can be fixed into predetermined shapes by UV light illumination (> 260 nm) and then recover their original shape when exposed to UV light of a different wavelength (< 260 nm). Examples of photoresponsive switches include cinnamic acid and cinnamylidene acetic acid.
Electro-active SMPs.
The use of electricity to activate the shape-memory effect of polymers is desirable for applications where it would not be possible to use heat and is another active area of research. Some current efforts use conducting SMP composites with carbon nanotubes, short carbon fibers (SCFs), carbon black, or metallic Ni powder. These conducting SMPs are produced by chemically surface-modifying multi-walled carbon nanotubes (MWNTs) in a mixed solvent of nitric acid and sulfuric acid, with the purpose of improving the interfacial bonding between the polymers and the conductive fillers. The shape-memory effect in these types of SMPs has been shown to be dependent on the filler content and the degree of surface modification of the MWNTs, with the surface-modified versions exhibiting good energy conversion efficiency and improved mechanical properties.
Another technique being investigated involves the use of surface-modified super-paramagnetic nanoparticles. When introduced into the polymer matrix, remote actuation of shape transitions is possible. An example of this involves the use of oligo (e-caprolactone)dimethacrylate/butyl acrylate composite with between 2 and 12% magnetite nanoparticles. Nickel and hybrid fibers have also been used with some degree of success.
Shape-memory polymers vs. shape-memory alloys.
Shape-memory polymers differ from shape memory alloys (SMAs) by their glass transition or melting transition from a hard to a soft phase which is responsible for the shape-memory effect. In shape-memory alloys martensitic/austenitic transitions are responsible for the shape-memory effect.
There are numerous advantages that make SMPs more attractive than shape memory alloys. They have a high capacity for elastic deformation (up to 200% in most cases), much lower cost, lower density, a broad range of application temperatures which can be tailored, easy processing, potential biocompatibility and biodegradability, and possibly superior mechanical properties to those of SMAs.
Applications.
Industrial applications.
One of the first conceived industrial applications was in robotics where shape-memory (SM) foams were used to provide initial soft pretension in gripping. These SM foams could be subsequently hardened by cooling, making a shape adaptive grip. Since this time, the materials have seen widespread usage in, for example, the building industry (foam which expands with warmth to seal window frames), sports wear (helmets, judo and karate suits) and in some cases with thermochromic additives for ease of thermal profile observation. Polyurethane SMPs are also applied as an autochoke element for engines.
Application in photonics.
One field in which SMPs are having a significant impact is photonics. Due to the shape changing capability, SMPs enable the production of functional and responsive photonic gratings. By using modern soft lithography techniques such as replica molding, it is possible to imprint periodic nanostructures, with sizes of the order of magnitude of visible light, onto the surface of shape memory polymeric blocks. As a result of the refractive index periodicity, these systems diffract light. By taking advantage of the polymer's shape memory effect, it is possible to reprogram the lattice parameter of the structure and consequently tune its diffractive behavior. Another application of SMPs in photonics is shape changing random lasers. By doping SMPs with highly scattering particles such as titania it is possible to tune the light transport properties of the composite. Additionally, optical gain may be introduced by adding a molecular dye to the material. By configuring both the amount of scatters and of the organic dye, a light amplification regime may be observed when the composites are optically pumped. Shape memory polymers have also been used in conjunction with nanocellulose to fabricate composites exhibiting both chiroptical properties and thermo-activated shape memory effect.
Medical applications.
Most medical applications of SMP have yet to be developed, but devices with SMP are now beginning to hit the market. Recently, this technology has expanded to applications in orthopedic surgery.
Additionally, SMPs are now being used in various ophthalmic devices including punctal plugs, glaucoma shunts and intraocular lenses.
Potential medical applications.
SMPs are smart materials with potential applications as, e.g., intravenous cannula, self-adjusting orthodontic wires and selectively pliable tools for small scale surgical procedures where currently metal-based shape-memory alloys such as Nitinol are widely used. Another application of SMP in the medical field could be its use in implants: for example minimally invasive, through small incisions or natural orifices, implantation of a device in its small temporary shape. Shape-memory technologies have shown great promise for cardiovascular stents, since they allow a small stent to be inserted along a vein or artery and then expanded to prop it open. After activating the shape memory by temperature increase or mechanical stress, it would assume its permanent shape. Certain classes of shape-memory polymers possess an additional property: biodegradability. This offers the option to develop temporary implants. In the case of biodegradable polymers, after the implant has fulfilled its intended use, e.g. healing/tissue regeneration has occurred, the material degrades into substances which can be eliminated by the body. Thus full functionality would be restored without the necessity for a second surgery to remove the implant. Examples of this development are vascular stents and surgical sutures. When used in surgical sutures, the shape-memory property of SMPs enables wound closure with self-adjusting optimal tension, which avoids tissue damage due to overtightened sutures and supports healing and regeneration. SMPs also have potential for use as compression garments and hands-free door openers, whereby the latter can be produced via so-called 4D printing.
Potential industrial applications.
Further potential applications include self-repairing structural components, such as e.g. automobile fenders in which dents are repaired by application of temperature. After an undesired deformation, such as a dent in the fender, these materials "remember" their original shape. Heating them activates their "memory". In the example of the dent, the fender could be repaired with a heat source, such as a hair-dryer. The impact results in a temporary form, which changes back to the original form upon heating—in effect, the plastic repairs itself. SMPs may also be useful in the production of aircraft which would morph during flight. Currently, the Defense Advanced Research Projects Agency DARPA is testing wings which would change shape by 150%.
The realization of a better control over the switching behavior of polymers is seen as key factor to implement new technical concepts. For instance, an accurate setting of the onset temperature of shape recovering can be exploited to tune the release temperature of information stored in a shape memory polymer. This may pave the way for the monitoring of temperature abuses of food or pharmaceuticals.
Recently, a new manufacturing process, mnemosynation, was developed at Georgia Tech to enable mass production of crosslinked SMP devices, which would otherwise be cost-prohibitive using traditional thermoset polymerization techniques. Mnemosynation was named for the Greek goddess of memory, Mnemosyne, and is the controlled imparting of memory on an amorphous thermoplastic materials utilizing radiation-induced covalent crosslinking, much like vulcanization imparts recoverable elastomeric behavior on rubbers using sulfur crosslinks. Mnemosynation combines advances in ionizing radiation and tuning the mechanical properties of SMPs to enable traditional plastics processing (extrusion, blow molding, injection molding, resin transfer molding, etc.) and allows thermoset SMPs in complex geometries. The customizable mechanical properties of traditional SMPs are achievable with high throughput plastics processing techniques to enable mass producible plastic products with thermosetting shape-memory properties: low residual strains, tunable recoverable force and adjustable glass transition temperatures.
Brand protection and anti-counterfeiting.
Shape memory polymers may serve as technology platform for a safe way of information storage and release. Overt anti-counterfeiting labels have been constructed that display a visual symbol or code when exposed to specific chemicals. Multifunctional labels may even make counterfeiting increasingly difficult. Shape memory polymers have already been made into shape memory film by extruder machine, with covert and overt 3D embossed pattern internally, and 3D pattern will be released to be embossed or disappeared in just seconds irreversibly as soon as it is heated; Shape memory film can be used as label substrates or face stock for anti-counterfeiting, brand protection, tamper-evident seals, anti-pilferage seals, etc.
Multifunctional composites.
Using shape memory polymers as matrices, multifunctional composite materials can be produced. Such composites can have temperature-dependent shape morphing (i.e. shape memory) characteristics. This phenomenon allows these composites to be potentially used to create deployable structures such as booms, hinges, wings, etc. While using SMPs can help produce one-way shape morphing structures, it has been reported that using SMPs in combination with shape memory alloys allows creation of more complex shape memory composites that are capable of two-way shape memory deformation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "R_r(N) = \\frac{\\varepsilon_m - \\varepsilon_p(N)}{\\varepsilon_m - \\varepsilon_p(N-1)}"
},
{
"math_id": 1,
"text": "R_f(N) = \\frac{\\varepsilon_u(N)}{\\varepsilon_m}"
},
{
"math_id": 2,
"text": "N"
},
{
"math_id": 3,
"text": "\\varepsilon_m"
},
{
"math_id": 4,
"text": "\\varepsilon_p(N)"
},
{
"math_id": 5,
"text": "\\varepsilon_p(N-1)"
},
{
"math_id": 6,
"text": "R_f(N) = 1 - \\frac{E_f}{E_g}"
},
{
"math_id": 7,
"text": "R_r(N) = 1 - \\frac{f_{IR}}{f_\\alpha (1 - E_f/E_g)}"
},
{
"math_id": 8,
"text": "E_g"
},
{
"math_id": 9,
"text": "E_r"
},
{
"math_id": 10,
"text": "f_{IR}"
},
{
"math_id": 11,
"text": "f_{\\alpha}"
},
{
"math_id": 12,
"text": "t >> t_r"
}
] | https://en.wikipedia.org/wiki?curid=9111849 |
911229 | Cavity ring-down spectroscopy | Optical spectroscopic technique
Cavity ring-down spectroscopy (CRDS) is a highly sensitive optical spectroscopic technique that enables measurement of absolute optical extinction by samples that scatter and absorb light. It has been widely used to study gaseous samples which absorb light at specific wavelengths, and in turn to determine mole fractions down to the parts per trillion level. The technique is also known as cavity ring-down laser absorption spectroscopy (CRLAS).
A typical CRDS setup consists of a laser that is used to illuminate a high-finesse optical cavity, which in its simplest form consists of two highly reflective mirrors. When the laser is in resonance with a cavity mode, intensity builds up in the cavity due to constructive interference. The laser is then turned off in order to allow the measurement of the exponentially decaying light intensity leaking from the cavity. During this decay, light is reflected back and forth thousands of times between the mirrors giving an effective path length for the extinction on the order of a few kilometers.
If a light-absorbing material is now placed in the cavity, the mean lifetime decreases as fewer bounces through the medium are required before the light is fully absorbed, or absorbed to some fraction of its initial intensity. A CRDS setup measures how long it takes for the light to decay to 1/"e" of its initial intensity, and this "ringdown time" can be used to calculate the concentration of the absorbing substance in the gas mixture in the cavity.
Detailed description.
Cavity ring-down spectroscopy is a form of laser absorption spectroscopy. In CRDS, a laser pulse is trapped in a highly reflective (typically R > 99.9%) detection cavity. The intensity of the trapped pulse will decrease by a fixed percentage during each round trip within the cell due to absorption, scattering by the medium within the cell, and reflectivity losses. The intensity of light within the cavity is then determined as an exponential function of time.
formula_0
The principle of operation is based on the measurement of a decay rate rather than an absolute absorbance. This is one reason for the increased sensitivity over traditional absorption spectroscopy, as the technique is then immune to shot-to-shot laser fluctuations. The decay constant, τ, which is the time taken for the intensity of light to fall to 1/e of the initial intensity, is called the ring-down time and is dependent on the loss mechanism(s) within the cavity. For an empty cavity, the decay constant is dependent on mirror loss and various optical phenomena like scattering and refraction:
formula_1
where "n" is the index of refraction within the cavity, "c" is the speed of light in vacuum, "l" is the cavity length, "R" is the mirror reflectivity, and "X" takes into account other miscellaneous optical losses. This equation uses the approximation that ln(1+"x") ≈ "x" for "x" close to zero, which is the case under cavity ring-down conditions. Often, the miscellaneous losses are factored into an effective mirror loss for simplicity. An absorbing species in the cavity will increase losses according to the Beer-Lambert law. Assuming the sample fills the entire cavity,
formula_2
where α is the absorption coefficient for a specific analyte concentration at the cavity's resonance wavelength. The decadic absorbance, "A", due to the analyte can be determined from both ring-down times.
formula_3
Alternatively, the molar absorptivity, ε, and analyte concentration, "C", can be determined from the ratio of both ring-down times. If "X" can be neglected, one obtains
formula_4
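As a minimal sketch (the ring-down times and molar absorptivity below are hypothetical), the absorption coefficient and the analyte concentration follow directly from the two decay constants given above:

```python
# alpha = (n / c) * (1/tau - 1/tau_0) follows from the two ring-down
# expressions above; C then follows from alpha = 2.303 * epsilon * C.
C_LIGHT = 2.998e8            # speed of light in vacuum, m/s

def absorption_coefficient(tau, tau_0, n=1.0):
    """Absorption coefficient alpha (1/m) from sample and empty-cavity ring-down times (s)."""
    return (n / C_LIGHT) * (1.0 / tau - 1.0 / tau_0)

def analyte_concentration(alpha, epsilon):
    """Concentration from alpha = 2.303 * epsilon * C (units set by epsilon)."""
    return alpha / (2.303 * epsilon)

tau_0 = 40e-6                # empty-cavity ring-down time, s (hypothetical)
tau = 38e-6                  # ring-down time with absorber present, s (hypothetical)
alpha = absorption_coefficient(tau, tau_0)
print(alpha)                 # ~4.4e-6 per metre
```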
When a ratio of species' concentrations is the analytical objective, as for example in carbon-13 to carbon-12 measurements in carbon dioxide, the ratio of ring-down times measured for the same sample at the relevant absorption frequencies can be used directly with extreme accuracy and precision.
Advantages of CRDS.
There are two main advantages to CRDS over other absorption methods:
First, it is not affected by fluctuations in the laser intensity. In most absorption measurements, the light source must be assumed to remain steady between blank (no analyte), standard (known amount of analyte), and sample (unknown amount of analyte). Any drift (change in the light source) between measurements will introduce errors. In CRDS, the ringdown time does not depend on the intensity of the laser, so fluctuations of this type are not a problem. Because the ring-down time is independent of laser intensity, CRDS requires no calibration or comparison with standards.
Second, it is very sensitive due to its long pathlength. In absorption measurements, the smallest amount that can be detected is proportional to the length that the light travels through a sample. Since the light reflects many times between the mirrors, it ends up traveling long distances. For example, a laser pulse making 500 round trips through a 1-meter cavity will effectively have traveled through 1 kilometer of sample.
Thus, the advantages include: | [
{
"math_id": 0,
"text": "I(t) = I_0 \\exp \\left (- t / \\tau \\right)"
},
{
"math_id": 1,
"text": "\\tau_0 = \\frac{n}{c} \\cdot \\frac{l}{1-R+X}"
},
{
"math_id": 2,
"text": "\\tau = \\frac{n}{c} \\cdot \\frac{l}{1-R+X+ \\alpha l }"
},
{
"math_id": 3,
"text": "A = \\frac{n}{c} \\cdot \\frac{l}{2.303} \\cdot \\left ( \\frac{1}{\\tau} - \\frac{1}{\\tau_0} \\right) "
},
{
"math_id": 4,
"text": "\\frac{\\tau_0}{\\tau} =1+ \\frac{ \\alpha l }{1-R} = 1+\\frac{2.303 \\epsilon l C}{(1-R)}"
}
] | https://en.wikipedia.org/wiki?curid=911229 |
91127 | Fermat number | Positive integer of the form (2^(2^n))+1
In mathematics, a Fermat number, named after Pierre de Fermat, the first known to have studied them, is a positive integer of the form:formula_0 where "n" is a non-negative integer. The first few Fermat numbers are: 3, 5, 17, 257, 65537, 4294967297, 18446744073709551617, ... (sequence in the OEIS).
If 2"k" + 1 is prime and "k" > 0, then "k" itself must be a power of 2, so 2"k" + 1 is a Fermat number; such primes are called Fermat primes. As of 2023[ [update]], the only known Fermat primes are "F"0 = 3, "F"1 = 5, "F"2 = 17, "F"3 = 257, and "F"4 = 65537 (sequence in the OEIS).
Basic properties.
The Fermat numbers satisfy the following recurrence relations:
formula_1
formula_2
for "n" ≥ 1,
formula_3
formula_4
for "n" ≥ 2. Each of these relations can be proved by mathematical induction. From the second equation, we can deduce Goldbach's theorem (named after Christian Goldbach): no two Fermat numbers share a common integer factor greater than 1. To see this, suppose that 0 ≤ "i" < "j" and "F""i" and "F""j" have a common factor "a" > 1. Then "a" divides both
formula_5
and "F""j"; hence "a" divides their difference, 2. Since "a" > 1, this forces "a" = 2. This is a contradiction, because each Fermat number is clearly odd. As a corollary, we obtain another proof of the infinitude of the prime numbers: for each "F""n", choose a prime factor "p""n"; then the sequence {"p""n"} is an infinite sequence of distinct primes.
Primality.
Fermat numbers and Fermat primes were first studied by Pierre de Fermat, who conjectured that all Fermat numbers are prime. Indeed, the first five Fermat numbers "F"0, ..., "F"4 are easily shown to be prime. Fermat's conjecture was refuted by Leonhard Euler in 1732 when he showed that
formula_6
Euler proved that every factor of "F""n" must have the form "k" × 2"n"+1 + 1 (later improved to "k" × 2"n"+2 + 1 by Lucas) for "n" ≥ 2.
That 641 is a factor of "F"5 can be deduced from the equalities 641 = 27 × 5 + 1 and 641 = 24 + 54. It follows from the first equality that 27 × 5 ≡ −1 (mod 641) and therefore (raising to the fourth power) that 228 × 54 ≡ 1 (mod 641). On the other hand, the second equality implies that 54 ≡ −24 (mod 641). These congruences imply that 232 ≡ −1 (mod 641).
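These equalities and congruences are easy to confirm directly (an illustrative check, not part of the original argument):

```python
# Check 641 = 2^7 * 5 + 1 = 2^4 + 5^4, the congruence 2^32 ≡ -1 (mod 641),
# and Euler's factorization of F_5.
F5 = 2**32 + 1
assert 641 == 2**7 * 5 + 1 == 2**4 + 5**4
assert pow(2, 32, 641) == 640            # i.e. 2^32 ≡ -1 (mod 641)
assert F5 == 641 * 6700417
print("641 divides F_5 =", F5)
```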
Fermat was probably aware of the form of the factors later proved by Euler, so it seems curious that he failed to follow through on the straightforward calculation to find the factor. One common explanation is that Fermat made a computational mistake.
There are no other known Fermat primes "F""n" with "n" > 4, but little is known about Fermat numbers for large "n". In fact, each of the following is an open problem:
As of 2024, it is known that "F""n" is composite for 5 ≤ "n" ≤ 32, although of these, complete factorizations of "F""n" are known only for 0 ≤ "n" ≤ 11, and there are no known prime factors for "n" = 20 and "n" = 24. The largest Fermat number known to be composite is "F"18233954, and its prime factor 7 × 218233956 + 1 was discovered in October 2020.
Heuristic arguments.
Heuristics suggest that "F"4 is the last Fermat prime.
The prime number theorem implies that a random integer in a suitable interval around "N" is prime with probability 1/ln "N". If one uses the heuristic that a Fermat number is prime with the same probability as a random integer of its size, and that "F"5, ..., "F"32 are composite, then the expected number of Fermat primes beyond "F"4 (or equivalently, beyond "F"32) should be
formula_7
One may interpret this number as an upper bound for the probability that a Fermat prime beyond "F"4 exists.
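Evaluating the closed-form bound is a one-liner (illustrative only):

```python
# The bound (1/ln 2) * 2^-32 from the heuristic estimate above.
import math
print(2.0**-32 / math.log(2))   # ≈ 3.36e-10
```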
This argument is not a rigorous proof. For one thing, it assumes that Fermat numbers behave "randomly", but the factors of Fermat numbers have special properties. Boklan and Conway published a more precise analysis suggesting that the probability that there is another Fermat prime is less than one in a billion.
Anders Bjorn and Hans Riesel estimated the number of square factors of Fermat numbers from "F"5 onward as
formula_8
in other words, there are unlikely to be any non-squarefree Fermat numbers, and in general square factors of formula_9 are very rare for large "n".
Equivalent conditions.
Let formula_10 be the "n"th Fermat number. Pépin's test states that for "n" > 0,
formula_11 is prime if and only if formula_12
The expression formula_13 can be evaluated modulo formula_11 by repeated squaring. This makes the test a fast polynomial-time algorithm. But Fermat numbers grow so rapidly that only a handful of them can be tested in a reasonable amount of time and space.
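A direct implementation is short; Python's built-in three-argument pow performs the modular repeated squaring (an illustrative sketch, practical only for small "n"):

```python
# Pépin's test: for n > 0, F_n is prime iff 3^((F_n - 1)/2) ≡ -1 (mod F_n).
def pepin(n: int) -> bool:
    F = 2**(2**n) + 1
    return pow(3, (F - 1) // 2, F) == F - 1

print([pepin(n) for n in range(1, 10)])
# [True, True, True, True, False, False, False, False, False]
```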
There are some primality tests for numbers of the form "k" × 2"m" + 1, such as factors of Fermat numbers.
Proth's theorem (1878). Let "N" = "k" × 2"m" + 1 with odd "k" < 2"m". If there is an integer "a" such that
formula_14
then formula_15 is prime. Conversely, if the above congruence does not hold, and in addition
formula_16 (See Jacobi symbol)
then formula_15 is composite.
If "N" = "F""n" > 3, then the above Jacobi symbol is always equal to −1 for "a" = 3, and this special case of Proth's theorem is known as Pépin's test. Although Pépin's test and Proth's theorem have been implemented on computers to prove the compositeness of some Fermat numbers, neither test gives a specific nontrivial factor. In fact, no specific prime factors are known for "n" = 20 and 24.
Factorization.
Because of Fermat numbers' size, it is difficult to factorize or even to check primality. Pépin's test gives a necessary and sufficient condition for primality of Fermat numbers, and can be implemented by modern computers. The elliptic curve method is a fast method for finding small prime divisors of numbers. Distributed computing project "Fermatsearch" has found some factors of Fermat numbers. Yves Gallot's proth.exe has been used to find factors of large Fermat numbers. Édouard Lucas, improving Euler's above-mentioned result, proved in 1878 that every factor of the Fermat number formula_11, with "n" at least 2, is of the form formula_17 (see Proth number), where "k" is a positive integer. By itself, this makes it easy to prove the primality of the known Fermat primes.
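Lucas's result turns factor hunting into a search over a single parameter "k"; the sketch below (illustrative, with an arbitrary search limit) recovers the classical factors of "F"5 and "F"6:

```python
# Search for a nontrivial factor of F_n among numbers of the form
# k * 2^(n+2) + 1 (Lucas's form of the factors, n >= 2).
def lucas_form_factor(n, k_max=10**6):
    F = 2**(2**n) + 1
    step = 2**(n + 2)
    for k in range(1, k_max + 1):
        d = k * step + 1
        if F % d == 0:
            return k, d
    return None

print(lucas_form_factor(5))   # (5, 641):        641 = 5 * 2^7 + 1
print(lucas_form_factor(6))   # (1071, 274177):  274177 = 1071 * 2^8 + 1
```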
Factorizations of the first 12 Fermat numbers are:
As of 2023, only "F"0 to "F"11 have been completely factored. The distributed computing project Fermat Search is searching for new factors of Fermat numbers. The set of all Fermat factors is (or, sorted, ) in OEIS.
The following factors of Fermat numbers were known before 1950 (since then, digital computers have helped find more factors):
As of 2023, 368 prime factors of Fermat numbers are known, and 324 Fermat numbers are known to be composite. Several new Fermat factors are found each year.
Pseudoprimes and Fermat numbers.
Like composite numbers of the form 2"p" − 1, every composite Fermat number is a strong pseudoprime to base 2. This is because all strong pseudoprimes to base 2 are also Fermat pseudoprimes – i.e.,
formula_18
for all Fermat numbers.
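For example, "F"5 is composite yet passes the base-2 Fermat test (an illustrative check):

```python
# F_5 is divisible by 641, yet 2^(F_5 - 1) ≡ 1 (mod F_5), so it is a
# Fermat pseudoprime to base 2.
F5 = 2**32 + 1
assert F5 % 641 == 0                 # composite
assert pow(2, F5 - 1, F5) == 1       # base-2 Fermat test is nevertheless passed
print("F_5 is a base-2 Fermat pseudoprime")
```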
In 1904, Cipolla showed that the product of at least two distinct prime or composite Fermat numbers formula_19 formula_20 will be a Fermat pseudoprime to base 2 if and only if formula_21.
Other theorems about Fermat numbers.
<templatestyles src="Math_theorem/styles.css" />
Lemma. — If "n" is a positive integer,
formula_22
<templatestyles src="Math_proof/styles.css" />Proof
formula_23
<templatestyles src="Math_theorem/styles.css" />
<templatestyles src="Math_theorem/styles.css" />
<templatestyles src="Math_theorem/styles.css" />
Theorem (Édouard Lucas) — Any prime divisor "p" of formula_24 is of the form formula_25 whenever "n" > 1.
<templatestyles src="Math_proof/styles.css" />"Sketch of proof"
Let "G""p" denote the group of non-zero integers modulo "p" under multiplication, which has order "p" − 1. Notice that 2 (strictly speaking, its image modulo "p") has multiplicative order equal to formula_26 in "G""p" (since formula_27 is the square of formula_28 which is −1 modulo "F""n"), so that, by Lagrange's theorem, "p" − 1 is divisible by formula_29 and "p" has the form formula_30 for some integer "k", as Euler knew. Édouard Lucas went further. Since "n" > 1, the prime "p" above is congruent to 1 modulo 8. Hence (as was known to Carl Friedrich Gauss), 2 is a quadratic residue modulo "p", that is, there is integer "a" such that formula_31 Then the image of "a" has order formula_32 in the group "G""p" and (using Lagrange's theorem again), "p" − 1 is divisible by formula_32 and "p" has the form formula_33 for some integer "s".
In fact, it can be seen directly that 2 is a quadratic residue modulo "p", since
formula_34
Since an odd power of 2 is a quadratic residue modulo "p", so is 2 itself.
A Fermat number cannot be a perfect number or part of a pair of amicable numbers.
The series of reciprocals of all prime divisors of Fermat numbers is convergent.
If "n""n" + 1 is prime, there exists an integer "m" such that "n" = 22"m". The equation
"n""n" + 1 = "F"(2"m"+"m")
holds in that case.
Let the largest prime factor of the Fermat number "F""n" be "P"("F""n"). Then,
formula_35
Relationship to constructible polygons.
Carl Friedrich Gauss developed the theory of Gaussian periods in his "Disquisitiones Arithmeticae" and formulated a sufficient condition for the constructibility of regular polygons. Gauss stated that this condition was also necessary, but never published a proof. Pierre Wantzel gave a full proof of necessity in 1837. The result is known as the Gauss–Wantzel theorem:
An "n"-sided regular polygon can be constructed with compass and straightedge if and only if "n" is either a power of 2 or the product of a power of 2 and distinct Fermat primes: in other words, if and only if "n" is of the form "n" = 2"k" or "n" = 2"k""p"1"p"2..."p""s", where "k", "s" are nonnegative integers and the "p""i" are distinct Fermat primes.
A positive integer "n" is of the above form if and only if its totient "φ"("n") is a power of 2.
Applications of Fermat numbers.
Pseudorandom number generation.
Fermat primes are particularly useful in generating pseudo-random sequences of numbers in the range 1, ..., "N", where "N" is a power of 2. The most common method used is to take any seed value between 1 and "P" − 1, where "P" is a Fermat prime. Now multiply this by a number "A", which is greater than the square root of "P" and is a primitive root modulo "P" (i.e., it is not a quadratic residue). Then take the result modulo "P". The result is the new value for the RNG.
formula_36 (see linear congruential generator, RANDU)
This is useful in computer science, since most data structures have members with 2"X" possible values. For example, a byte has 256 (28) possible values (0–255). Therefore, to fill a byte or bytes with random values, a random number generator that produces values 1–256 can be used, with the byte taking the output value minus 1. Very large Fermat primes are of particular interest in data encryption for this reason. This method produces only pseudorandom values, as after "P" − 1 repetitions, the sequence repeats. A poorly chosen multiplier can result in the sequence repeating sooner than "P" − 1.
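A minimal sketch of such a generator, with "P" = "F"4 = 65537 (the multiplier is found by a short search rather than taken from any standard; since "P" − 1 is a power of two, primitive roots are easy to recognise):

```python
# V_{j+1} = (A * V_j) mod P with P = F_4 = 65537.  Because P - 1 = 2**16,
# a is a primitive root mod P exactly when a**((P-1)//2) ≡ -1 (mod P),
# so a suitable multiplier A > sqrt(P) can be found by a short search.
P = 65537

def is_primitive_root(a, p=P):
    return pow(a, (p - 1) // 2, p) == p - 1

A = next(a for a in range(257, P) if is_primitive_root(a))   # smallest A > sqrt(P)

def fermat_prime_lcg(seed, count):
    v = seed
    values = []
    for _ in range(count):
        v = (A * v) % P
        values.append(v)
    return values

print(A, fermat_prime_lcg(1, 5))
```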
Generalized Fermat numbers.
Numbers of the form formula_37 with "a", "b" any coprime integers, "a" > "b" > 0, are called generalized Fermat numbers. An odd prime "p" is a generalized Fermat number if and only if "p" is congruent to 1 (mod 4). (Here we consider only the case "n" > 0, so 3 = formula_38 is not a counterexample.)
An example of a probable prime of this form is 1215131072 + 242131072 (found by Kellen Shenton).
By analogy with the ordinary Fermat numbers, it is common to write generalized Fermat numbers of the form formula_39 as "F""n"("a"). In this notation, for instance, the number 100,000,001 would be written as "F"3(10). In the following we shall restrict ourselves to primes of this form, formula_39; such primes are called "Fermat primes base "a"". Of course, these primes exist only if "a" is even.
If we require "n" > 0, then Landau's fourth problem asks if there are infinitely many generalized Fermat primes "Fn"("a").
Generalized Fermat primes of the form Fn("a").
Because of the ease of proving their primality, generalized Fermat primes have become in recent years a topic for research within the field of number theory. Many of the largest known primes today are generalized Fermat primes.
Generalized Fermat numbers can be prime only for even a, because if a is odd then every generalized Fermat number will be divisible by 2. The smallest prime number formula_40 with formula_41 is formula_42, or 3032 + 1. Besides, we can define "half generalized Fermat numbers" for an odd base, a half generalized Fermat number to base "a" (for odd "a") is formula_43, and it is also to be expected that there will be only finitely many half generalized Fermat primes for each odd base.
In this list, the generalized Fermat numbers (formula_40) to an even "a" are formula_44; for odd "a", they are formula_45. If "a" is a perfect power with an odd exponent (sequence in the OEIS), then all generalized Fermat numbers can be algebraically factored, so they cannot be prime.
See for even bases up to 1000, and for odd bases. For the smallest number formula_46 such that formula_40 is prime, see OEIS: .
For the smallest even base a such that formula_40 is prime, see OEIS: .
The smallest bases "b=b(n)" such that "b"2"n" + 1 (for given "n= 0,1,2, ...") is prime are
2, 2, 2, 2, 2, 30, 102, 120, 278, 46, 824, 150, 1534, 30406, 67234, 70906, 48594, 62722, 24518, 75898, 919444, ... (sequence in the OEIS)
Conversely, the smallest "k=k(n)" such that (2"n")"k" + 1 (for given "n") is prime are
1, 1, 1, 0, 1, 1, 2, 1, 1, 2, 1, 2, 2, 1, 1, 0, 4, 1, ... (The next term is unknown) (sequence in the OEIS) (also see OEIS: and OEIS: )
A more elaborate theory can be used to predict the number of bases for which formula_40 will be prime for fixed formula_46. The number of generalized Fermat primes can be roughly expected to halve as formula_46 is increased by 1.
Generalized Fermat primes of the form Fn("a", "b").
It is also possible to construct generalized Fermat primes of the form formula_9. As in the case where "b"=1, numbers of this form will always be divisible by 2 if "a+b" is even, but it is still possible to define generalized half-Fermat primes of this type. For the smallest prime of the form formula_47 (for odd formula_48), see also OEIS: .
Largest known generalized Fermat primes.
The following is a list of the five largest known generalized Fermat primes. All five were discovered by participants in the PrimeGrid project.
On the Prime Pages one can find the current top 100 generalized Fermat primes.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "F_{n} = 2^{2^n} + 1,"
},
{
"math_id": 1,
"text": "\nF_{n} = (F_{n-1}-1)^{2}+1"
},
{
"math_id": 2,
"text": "\nF_{n} = F_{0} \\cdots F_{n-1} + 2"
},
{
"math_id": 3,
"text": "\nF_{n} = F_{n-1} + 2^{2^{n-1}}F_{0} \\cdots F_{n-2}"
},
{
"math_id": 4,
"text": "\nF_{n} = F_{n-1}^2 - 2(F_{n-2}-1)^2"
},
{
"math_id": 5,
"text": "F_{0} \\cdots F_{j-1}"
},
{
"math_id": 6,
"text": " F_{5} = 2^{2^5} + 1 = 2^{32} + 1 = 4294967297 = 641 \\times 6700417. "
},
{
"math_id": 7,
"text": " \\sum_{n \\ge 33} \\frac{1}{\\ln F_{n}} < \\frac{1}{\\ln 2} \\sum_{n \\ge 33} \\frac{1}{\\log_2(2^{2^n})} = \\frac{1}{\\ln 2} 2^{-32} < 3.36 \\times 10^{-10}."
},
{
"math_id": 8,
"text": "\n\\sum_{n \\ge 5} \\sum_{k \\ge 1} \\frac{1}{k (k 2^n + 1) \\ln(k 2^n)} < \\frac{\\pi^2}{6 \\ln 2} \\sum_{n \\ge 5} \\frac{1}{n 2^n} \\approx 0.02576;\n"
},
{
"math_id": 9,
"text": "a^{2^n} + b^{2^n}"
},
{
"math_id": 10,
"text": "F_n=2^{2^n}+1"
},
{
"math_id": 11,
"text": "F_n"
},
{
"math_id": 12,
"text": "3^{(F_n-1)/2}\\equiv-1\\pmod{F_n}."
},
{
"math_id": 13,
"text": "3^{(F_n-1)/2}"
},
{
"math_id": 14,
"text": "a^{(N-1)/2} \\equiv -1\\pmod{N}"
},
{
"math_id": 15,
"text": "N"
},
{
"math_id": 16,
"text": "\\left(\\frac{a}{N}\\right)=-1"
},
{
"math_id": 17,
"text": "k\\times2^{n+2}+1"
},
{
"math_id": 18,
"text": "2^{F_n-1} \\equiv 1 \\pmod{F_n}"
},
{
"math_id": 19,
"text": "F_{a} F_{b} \\dots F_{s},"
},
{
"math_id": 20,
"text": "a > b > \\dots > s > 1"
},
{
"math_id": 21,
"text": "2^s > a "
},
{
"math_id": 22,
"text": "a^n-b^n=(a-b)\\sum_{k=0}^{n-1} a^kb^{n-1-k}."
},
{
"math_id": 23,
"text": "\\begin{align}\n(a-b)\\sum_{k=0}^{n-1}a^kb^{n-1-k} &=\\sum_{k=0}^{n-1}a^{k+1}b^{n-1-k}-\\sum_{k=0}^{n-1}a^kb^{n-k}\\\\\n&=a^n+\\sum_{k=1}^{n-1}a^kb^{n-k}-\\sum_{k=1}^{n-1}a^kb^{n-k}-b^n\\\\\n&=a^n-b^n\n\\end{align}"
},
{
"math_id": 24,
"text": "F_n = 2^{2^n}+1"
},
{
"math_id": 25,
"text": "k2^{n+2}+1"
},
{
"math_id": 26,
"text": "2^{n+1}"
},
{
"math_id": 27,
"text": " 2^{2^{n+1}}"
},
{
"math_id": 28,
"text": "2^{2^n}"
},
{
"math_id": 29,
"text": "2^{n+1} "
},
{
"math_id": 30,
"text": "k2^{n+1} + 1"
},
{
"math_id": 31,
"text": "p|a^2-2."
},
{
"math_id": 32,
"text": "2^{n+2}"
},
{
"math_id": 33,
"text": "s2^{n+2} + 1"
},
{
"math_id": 34,
"text": "\\left(1 +2^{2^{n-1}} \\right)^{2} \\equiv 2^{1+2^{n-1}} \\pmod p."
},
{
"math_id": 35,
"text": "P(F_n) \\ge 2^{n+2}(4n+9) + 1."
},
{
"math_id": 36,
"text": "V_{j+1} = (A \\times V_j) \\bmod P"
},
{
"math_id": 37,
"text": "a^{2^{ \\overset{n} {}}} \\!\\!+ b^{2^{ \\overset{n} {}}}"
},
{
"math_id": 38,
"text": "2^{2^{0}} \\!+ 1"
},
{
"math_id": 39,
"text": "a^{2^{ \\overset{n} {}}} \\!\\!+ 1"
},
{
"math_id": 40,
"text": "F_n(a)"
},
{
"math_id": 41,
"text": "n>4"
},
{
"math_id": 42,
"text": "F_5(30)"
},
{
"math_id": 43,
"text": "\\frac{a^{2^n} \\!+ 1}{2}"
},
{
"math_id": 44,
"text": "a^{2^n} \\!+ 1"
},
{
"math_id": 45,
"text": "\\frac{a^{2^n} \\!\\!+ 1}{2}"
},
{
"math_id": 46,
"text": "n"
},
{
"math_id": 47,
"text": "F_n(a,b)"
},
{
"math_id": 48,
"text": "a+b"
}
] | https://en.wikipedia.org/wiki?curid=91127 |
911520 | Character table | In group theory, a branch of abstract algebra, a character table is a two-dimensional table whose rows correspond to irreducible representations, and whose columns correspond to conjugacy classes of group elements. The entries consist of characters, the traces of the matrices representing group elements of the column's class in the given row's group representation. In chemistry, crystallography, and spectroscopy, character tables of point groups are used to classify "e.g." molecular vibrations according to their symmetry, and to predict whether a transition between two states is forbidden for symmetry reasons. Many university level textbooks on physical chemistry, quantum chemistry, spectroscopy and inorganic chemistry devote a chapter to the use of symmetry group character tables.
Definition and example.
The irreducible complex characters of a finite group form a character table which encodes much useful information about the group "G" in a concise form. Each row is labelled by an irreducible character and the entries in the row are the values of that character on any representative of the respective conjugacy class of "G" (because characters are class functions). The columns are labelled by (representatives of) the conjugacy classes of "G". It is customary to label the first row by the character of the trivial representation, which is the trivial action of G on a 1-dimensional vector space by formula_0 for all formula_1. Each entry in the first row is therefore 1. Similarly, it is customary to label the first column by the identity. The entries of the first column are the values of the irreducible characters at the identity, the degrees of the irreducible characters. Characters of degree 1 are known as linear characters.
Here is the character table of "C"3 = "", the cyclic group with three elements and generator "u":
where ω is a primitive cube root of unity. The character table for general cyclic groups is (a scalar multiple of) the DFT matrix.
Another example is the character table of formula_2:
where (12) represents the conjugacy class consisting of (12), (13), (23), while (123) represents the conjugacy class consisting of (123), (132). To learn more about character table of symmetric groups, see .
The first row of the character table always consists of 1s, and corresponds to the trivial representation (the 1-dimensional representation consisting of 1×1 matrices containing the entry 1). Further, the character table is always square because (1) irreducible characters are pairwise orthogonal, and (2) no other non-trivial class function is orthogonal to every character. (A class function is one that is constant on conjugacy classes.) This is tied to the important fact that the irreducible representations of a finite group "G" are in bijection with its conjugacy classes. This bijection also follows by showing that the class sums form a basis for the center of the group algebra of "G", which has dimension equal to the number of irreducible representations of "G".
Orthogonality relations.
The space of complex-valued class functions of a finite group "G" has a natural inner product:
formula_3
where formula_4 denotes the complex conjugate of the value of formula_5 on formula_6. With respect to this inner product, the irreducible characters form an orthonormal basis
for the space of class functions, and this yields the orthogonality relation for the rows of the character
table:
formula_7
For formula_8 the orthogonality relation for columns is as follows:
formula_9
where the sum is over all of the irreducible characters formula_10 of "G" and the symbol formula_11 denotes the order of the centralizer of formula_6.
For an arbitrary character formula_10, it is irreducible if and only if formula_12.
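As an illustration (using the standard character table of formula_2 given above, whose characters are all real), the row orthogonality can be checked numerically:

```python
# Row orthogonality for the character table of S_3.
# Conjugacy classes: identity (size 1), transpositions (size 3), 3-cycles (size 2).
class_sizes = [1, 3, 2]
characters = [
    [1,  1,  1],    # trivial representation
    [1, -1,  1],    # sign representation
    [2,  0, -1],    # 2-dimensional standard representation
]
group_order = sum(class_sizes)   # |S_3| = 6

def inner_product(chi_a, chi_b):
    # All characters here are real, so complex conjugation can be omitted.
    return sum(s * a * b for s, a, b in zip(class_sizes, chi_a, chi_b)) / group_order

for i in range(3):
    for j in range(3):
        assert inner_product(characters[i], characters[j]) == (1 if i == j else 0)
print("rows are orthonormal")
```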
The orthogonality relations can aid many computations including:
If the irreducible representation "V" is non-trivial, then formula_15
More specifically, consider the regular representation, which is the permutation representation obtained from a finite group "G" acting on (the free vector space spanned by) itself. The characters of this representation are formula_16 and formula_17 for formula_6 not the identity. Then given an irreducible representation formula_18,
formula_19.
Then decomposing the regular representations as a sum of irreducible representations of "G", we get formula_20, from which we conclude
formula_21
over all irreducible representations formula_18. This sum can help narrow down the dimensions of the irreducible representations in a character table. For example, if the group has order 10 and 4 conjugacy classes (for instance, the dihedral group of order 10) then the only way to express the order of the group as a sum of four squares is formula_22, so we know the dimensions of all the irreducible representations.
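The dimension-counting argument in the last example can be reproduced by brute force (an illustrative check):

```python
# Ways to write 10 as a sum of four positive squares: the candidate
# dimension vectors for a group of order 10 with four conjugacy classes.
from itertools import combinations_with_replacement

solutions = [dims for dims in combinations_with_replacement(range(1, 4), 4)
             if sum(d * d for d in dims) == 10]
print(solutions)    # [(1, 1, 2, 2)]
```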
Properties.
Complex conjugation acts on the character table: since the complex conjugate of a representation is again a representation, the same is true for characters, and thus a character that takes on non-real complex values has a conjugate character.
Certain properties of the group "G" can be deduced from its character table:
The character table does not in general determine the group up to isomorphism: for example, the quaternion group and the dihedral group of order 8 have the same character table. Brauer asked whether the character table, together with the knowledge of how the powers of elements of its conjugacy classes are distributed, determines a finite group up to isomorphism. In 1964, this was answered in the negative by E. C. Dade.
The linear representations of G are themselves a group under the tensor product, since the tensor product of 1-dimensional vector spaces is again 1-dimensional. That is, if formula_24 and formula_25 are linear representations, then formula_26 defines a new linear representation. This gives rise to a group of linear characters, called the character group under the operation formula_27. This group is connected to Dirichlet characters and Fourier analysis.
Outer automorphisms.
The outer automorphism group acts on the character table by permuting columns (conjugacy classes) and accordingly rows, which gives another symmetry to the table. For example, abelian groups have the outer automorphism formula_28, which is non-trivial except for elementary abelian 2-groups, and outer because abelian groups are precisely those for which conjugation (inner automorphisms) acts trivially. In the example of formula_29 above, this map sends formula_30 and accordingly switches formula_31 and formula_32 (switching their values of formula_33 and formula_34). Note that this particular automorphism (negative in abelian groups) agrees with complex conjugation.
Formally, if formula_35 is an automorphism of "G" and formula_36 is a representation, then formula_37 is a representation. If formula_38 is an inner automorphism (conjugation by some element "a"), then it acts trivially on representations, because representations are class functions (conjugation does not change their value). Thus a given class of outer automorphisms acts on the characters – because inner automorphisms act trivially, the action of the automorphism group formula_39 descends to the quotient formula_40.
This relation can be used both ways: given an outer automorphism, one can produce new representations (if the representation is not equal on conjugacy classes that are interchanged by the outer automorphism), and conversely, one can restrict possible outer automorphisms based on the character table.
Finding the vibrational modes of a water molecule using the character table.
To find the total number of vibrational modes of a water molecule, the irreducible representation Γirreducible first needs to be calculated from the character table of the water molecule.
Finding Γreducible from the character table of the H2O molecule.
Water (<chem>H2O</chem>) molecule falls under the point group formula_41. Below is the character table of formula_41 point group, which is also the character table for a water molecule.
Here, the first row lists the possible symmetry operations of this point group and the first column lists the Mulliken symbols. The fifth and sixth columns are functions of the axis variables.
Functions:
When determining the characters for a representation, assign formula_62 if it remains unchanged, formula_63 if it moves, and formula_64 if it reverses its direction. A simple way to determine the characters for the reducible representation formula_65 is to multiply the "number of unshifted atom(s)" by the "contribution per atom" along each of the three axes (formula_66) when a symmetry operation is carried out.
Unless otherwise stated, for the identity operation formula_42, the "contribution per unshifted atom" for each atom is always formula_67, as no atom changes its position during this operation. For any reflection operation formula_68, the "contribution per atom" is always formula_62, since under a reflection an atom remains unchanged along two axes and reverses its direction along the third. For the inversion operation formula_69, the "contribution per unshifted atom" is always formula_70, as each of the three axes of an atom reverses its direction during this operation. The easiest way to calculate the "contribution per unshifted atom" for the formula_71 and formula_72 symmetry operations is to use the formulas below
formula_73
formula_74
where, formula_75
A simplified version of above statements is summarized in the table below
"Character of formula_65 for any symmetry operation formula_76 Number of unshifted atom(s) during this operation formula_77 Contribution per unshifted atom along each of three axis"
Calculating the irreducible representation Γirreducible from the reducible representation Γreducible along with the character table.
From the above discussion, a new character table for a water molecule (formula_41 point group) can be written as
Using the new character table including formula_78, the reducible representation for all motions of the <chem>H2O</chem> molecule can be reduced using the formula below:
formula_79
where,
formula_80 order of the group,
formula_81 character of the formula_65 for a particular class,
formula_82 character from the reducible representation for a particular class,
formula_83 the number of operations in the class
So,
formula_84
formula_85
formula_86
formula_87
So, the reduced representation for all motions of water molecule will be
formula_88
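The reduction can be reproduced with a few lines of Python (an illustrative sketch; the characters of formula_65 are the ones worked out above):

```python
# Reduction of Gamma_reducible = (9, -1, 3, 1) in the C_2v point group
# (order h = 4; one operation in each class: E, C_2, sigma_v(xz), sigma_v'(yz)).
h = 4
n_ops = [1, 1, 1, 1]
irreps = {
    "A1": [1,  1,  1,  1],
    "A2": [1,  1, -1, -1],
    "B1": [1, -1,  1, -1],
    "B2": [1, -1, -1,  1],
}
gamma_red = [9, -1, 3, 1]

for name, chi in irreps.items():
    N = sum(n * a * b for n, a, b in zip(n_ops, chi, gamma_red)) // h
    print(name, N)      # A1 3, A2 1, B1 3, B2 2
```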
Translational motion for water molecule.
Translational motion corresponds to the irreducible representations in the character table that have the formula_52, formula_53 and formula_44 functions.
As only the irreducible representations formula_48, formula_50 and formula_43 correspond to the formula_52, formula_53 and formula_44 functions,
formula_89
Rotational motion for water molecule.
Rotational motion corresponds to the irreducible representations in the character table that have the formula_54, formula_55 and formula_46 functions.
As only the irreducible representations formula_50, formula_48 and formula_45 correspond to the formula_54, formula_55 and formula_46 functions,
formula_90
Total vibrational modes for water molecule.
Total vibrational modes: formula_91
formula_92
formula_93
So, a total of formula_94 vibrational modes are possible for the water molecule; two of them are symmetric vibrational modes (as formula_95) and the other vibrational mode is antisymmetric (as formula_96)
Checking whether the water molecule is IR active or Raman active.
There are some rules for a particular mode to be IR active or Raman active.
As the vibrational modes of the water molecule, formula_97, contain formula_52, formula_53 or formula_44 functions as well as quadratic functions, the molecule has both IR-active and Raman-active vibrational modes.
Similar rules apply to the rest of the irreducible representations formula_98
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rho(g)=1"
},
{
"math_id": 1,
"text": "g\\in G"
},
{
"math_id": 2,
"text": "S_3"
},
{
"math_id": 3,
"text": "\\left\\langle \\alpha, \\beta \\right\\rangle := \\frac{1}{\\left| G \\right|} \\sum_{g \\in G} \\alpha(g) \\overline{\\beta(g)}"
},
{
"math_id": 4,
"text": "\\overline{\\beta(g)}"
},
{
"math_id": 5,
"text": "\\beta"
},
{
"math_id": 6,
"text": "g"
},
{
"math_id": 7,
"text": "\\left\\langle \\chi_i, \\chi_j \\right\\rangle = \\begin{cases} 0& \\mbox{ if } i \\ne j, \\\\ 1& \\mbox{ if } i=j. \\end{cases}"
},
{
"math_id": 8,
"text": "g, h \\in G"
},
{
"math_id": 9,
"text": "\\sum_{\\chi_i} \\chi_i(g) \\overline{\\chi_i(h)} = \\begin{cases} \\left| C_G(g) \\right|, &\\mbox{ if } g, h \\mbox{ are conjugate} \\\\ 0 &\\mbox{ otherwise.}\\end{cases}"
},
{
"math_id": 10,
"text": "\\chi_i"
},
{
"math_id": 11,
"text": "\\left| C_G(g) \\right|"
},
{
"math_id": 12,
"text": "\\left\\langle \\chi_i, \\chi_i \\right\\rangle = 1"
},
{
"math_id": 13,
"text": "V = \\left\\langle \\chi, \\chi_i \\right\\rangle"
},
{
"math_id": 14,
"text": "\\left| G \\right| = \\left| Cl(g) \\right| * \\sum_{\\chi_i} \\chi_i(g) \\overline{\\chi_i(g)}"
},
{
"math_id": 15,
"text": "\\sum_g \\chi(g) = 0."
},
{
"math_id": 16,
"text": "\\chi(e) = \\left| G \\right|"
},
{
"math_id": 17,
"text": "\\chi(g) = 0"
},
{
"math_id": 18,
"text": "V_i"
},
{
"math_id": 19,
"text": "\\left\\langle \\chi_{\\text{reg}}, \\chi_i \\right\\rangle = \\frac{1}{\\left| G \\right|}\\sum_{g \\in G} \\chi_i(g) \\overline{\\chi_{\\text{reg}}(g)} = \\frac{1}{\\left| G \\right|} \\chi_i(1) \\overline{\\chi_{\\text{reg}}(1)} = \\operatorname{dim} V_i"
},
{
"math_id": 20,
"text": "V_{\\text{reg}} = \\bigoplus V_i^{\\operatorname{dim} V_i}"
},
{
"math_id": 21,
"text": "|G| = \\operatorname{dim} V_{\\text{reg}} = \\sum(\\operatorname{dim} V_i)^2"
},
{
"math_id": 22,
"text": "10 = 1^2 + 1^2 + 2^2 + 2^2"
},
{
"math_id": 23,
"text": "|G| \\!\\times\\! |G|"
},
{
"math_id": 24,
"text": "\\rho_1:G \\to V_1"
},
{
"math_id": 25,
"text": "\\rho_2:G \\to V_2"
},
{
"math_id": 26,
"text": "\\rho_1\\otimes\\rho_2(g) = (\\rho_1(g)\\otimes\\rho_2(g))"
},
{
"math_id": 27,
"text": "[\\chi_1*\\chi_2](g) = \\chi_1(g)\\chi_2(g)"
},
{
"math_id": 28,
"text": "g \\mapsto g^{-1}"
},
{
"math_id": 29,
"text": "C_3"
},
{
"math_id": 30,
"text": "u \\mapsto u^2, u^2 \\mapsto u,"
},
{
"math_id": 31,
"text": "\\chi_1"
},
{
"math_id": 32,
"text": "\\chi_2"
},
{
"math_id": 33,
"text": "\\omega"
},
{
"math_id": 34,
"text": "\\omega^2"
},
{
"math_id": 35,
"text": "\\phi\\colon G \\to G"
},
{
"math_id": 36,
"text": "\\rho \\colon G \\to \\operatorname{GL}"
},
{
"math_id": 37,
"text": "\\rho^\\phi := g \\mapsto \\rho(\\phi(g))"
},
{
"math_id": 38,
"text": "\\phi = \\phi_a"
},
{
"math_id": 39,
"text": "\\mathrm{Aut}"
},
{
"math_id": 40,
"text": "\\mathrm{Out}"
},
{
"math_id": 41,
"text": "C_{2v}"
},
{
"math_id": 42,
"text": "E"
},
{
"math_id": 43,
"text": "A_1"
},
{
"math_id": 44,
"text": "z"
},
{
"math_id": 45,
"text": "A_2"
},
{
"math_id": 46,
"text": "R_z"
},
{
"math_id": 47,
"text": "xy"
},
{
"math_id": 48,
"text": "B_1"
},
{
"math_id": 49,
"text": "xz"
},
{
"math_id": 50,
"text": "B_2"
},
{
"math_id": 51,
"text": "yz"
},
{
"math_id": 52,
"text": "x"
},
{
"math_id": 53,
"text": "y"
},
{
"math_id": 54,
"text": "R_x"
},
{
"math_id": 55,
"text": "R_y"
},
{
"math_id": 56,
"text": "x^2+y^2"
},
{
"math_id": 57,
"text": "x^2-y^2"
},
{
"math_id": 58,
"text": "x^2"
},
{
"math_id": 59,
"text": "y^2"
},
{
"math_id": 60,
"text": "z^2"
},
{
"math_id": 61,
"text": "zx"
},
{
"math_id": 62,
"text": "1"
},
{
"math_id": 63,
"text": "0"
},
{
"math_id": 64,
"text": "-1"
},
{
"math_id": 65,
"text": "\\Gamma_{\\text{reducible}}"
},
{
"math_id": 66,
"text": "x,y,z"
},
{
"math_id": 67,
"text": "3"
},
{
"math_id": 68,
"text": "\\sigma"
},
{
"math_id": 69,
"text": "i"
},
{
"math_id": 70,
"text": "-3"
},
{
"math_id": 71,
"text": "C_n"
},
{
"math_id": 72,
"text": "S_n"
},
{
"math_id": 73,
"text": "C_n = 2\\cos\\theta+1"
},
{
"math_id": 74,
"text": "S_n = 2\\cos\\theta-1"
},
{
"math_id": 75,
"text": "\\theta = \\frac{360}{n}"
},
{
"math_id": 76,
"text": "="
},
{
"math_id": 77,
"text": "\\times"
},
{
"math_id": 78,
"text": "\\Gamma_{\\text{red}}"
},
{
"math_id": 79,
"text": "N = \\frac{1}{h}\\sum_{x}(X^x_i \\times X^x_r\\times n^x)"
},
{
"math_id": 80,
"text": "h ="
},
{
"math_id": 81,
"text": "X^x_i ="
},
{
"math_id": 82,
"text": "X^x_r ="
},
{
"math_id": 83,
"text": "n^x ="
},
{
"math_id": 84,
"text": "N_{A_1} = \\frac{1}{4}[(9\\times 1\\times 1)+((-1)\\times 1\\times 1)+(3\\times 1\\times 1)+(1\\times 1\\times 1)] = 3"
},
{
"math_id": 85,
"text": "N_{A_2} = \\frac{1}{4}[(9\\times 1\\times 1+((-1)\\times 1\\times 1)+(3\\times(-1)\\times 1)+(1\\times(-1)\\times 1)] = 1"
},
{
"math_id": 86,
"text": "N_{B_1} = \\frac{1}{4}[(9\\times 1\\times 1)+((-1)\\times(-1)\\times 1)+(3\\times 1\\times 1)+(1\\times(-1)\\times 1)] = 3"
},
{
"math_id": 87,
"text": "N_{B_2} = \\frac{1}{4}[(9\\times 1\\times 1)+((-1)\\times(-1)\\times 1)+(3\\times(-1)\\times 1)+(1\\times 1\\times 1)] = 2"
},
{
"math_id": 88,
"text": "\\Gamma_{\\text{irreducible}} = 3A_1 + A_2 + 3B_1 + 2B_2"
},
{
"math_id": 89,
"text": "\\Gamma_{\\text{translational}} = A_1 + B_1 + B_2"
},
{
"math_id": 90,
"text": "\\Gamma_{\\text{rotational}} = A_2 + B_1 + B_2"
},
{
"math_id": 91,
"text": "\\Gamma_{\\text{vibrational}} = \\Gamma_{\\text{irreducible}} - \\Gamma_{\\text{translational}} - \\Gamma_{\\text{rotational}}"
},
{
"math_id": 92,
"text": "= (3A_1 + A_2 + 3B_1 + 2B_2) - (A_1 + B_1 + B_2) - (A_2 + B_1 + B_2)"
},
{
"math_id": 93,
"text": "= 2A_1 + B_1"
},
{
"math_id": 94,
"text": "2+1 = 3"
},
{
"math_id": 95,
"text": "2A_1"
},
{
"math_id": 96,
"text": "1B_1"
},
{
"math_id": 97,
"text": "\\Gamma_{\\text{vibrational}}"
},
{
"math_id": 98,
"text": "\\Gamma_{\\text{irreducible}}, \\Gamma_{\\text{translational}}, \\Gamma_{\\text{rotational}}"
}
] | https://en.wikipedia.org/wiki?curid=911520 |
91161 | Andrey Kolmogorov | Soviet mathematician (1903–1987)
Andrey Nikolaevich Kolmogorov (Russian: Андрей Николаевич Колмогоров; 25 April 1903 – 20 October 1987) was a Soviet mathematician who played a central role in the creation of modern probability theory. He also contributed to the mathematics of topology, intuitionistic logic, turbulence, classical mechanics, algorithmic information theory and computational complexity.
Biography.
Early life.
Andrey Kolmogorov was born in Tambov, about 500 kilometers southeast of Moscow, in 1903. His unmarried mother, Maria Yakovlevna Kolmogorova, died giving birth to him. Andrey was raised by two of his aunts in Tunoshna (near Yaroslavl) at the estate of his grandfather, a well-to-do nobleman.
Little is known about Andrey's father. He was supposedly named Nikolai Matveyevich Katayev and had been an agronomist. Katayev had been exiled from Saint Petersburg to the Yaroslavl province after his participation in the revolutionary movement against the tsars. He disappeared in 1919 and was presumed to have been killed in the Russian Civil War.
Andrey Kolmogorov was educated in his aunt Vera's village school, and his earliest literary efforts and mathematical papers were printed in the school journal "The Swallow of Spring". Andrey (at the age of five) was the "editor" of the mathematical section of this journal. Kolmogorov's first mathematical discovery was published in this journal: at the age of five he noticed the regularity in the sum of the series of odd numbers: formula_0 etc.
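(For illustration only, the pattern he noticed—that the sum of the first "n" odd numbers equals "n" squared—can be checked in a couple of lines of Python:)

```python
# Sum of the first n odd numbers equals n squared: 1 = 1^2, 1 + 3 = 2^2, 1 + 3 + 5 = 3^2, ...
for n in range(1, 7):
    assert sum(2 * k - 1 for k in range(1, n + 1)) == n ** 2
print("pattern holds for n = 1..6")
```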
In 1910, his aunt adopted him, and they moved to Moscow, where he graduated from high school in 1920. Later that same year, Kolmogorov began to study at Moscow State University and at the same time Mendeleev Moscow Institute of Chemistry and Technology. Kolmogorov writes about this time: "I arrived at Moscow University with a fair knowledge of mathematics. I knew in particular the beginning of set theory. I studied many questions in articles in the Encyclopedia of Brockhaus and Efron, filling out for myself what was presented too concisely in these articles."
Kolmogorov gained a reputation for his wide-ranging erudition. While an undergraduate student in college, he attended the seminars of the Russian historian S. V. Bakhrushin, and he published his first research paper on the fifteenth and sixteenth centuries' landholding practices in the Novgorod Republic. During the same period (1921–22), Kolmogorov worked out and proved several results in set theory and in the theory of Fourier series.
Adulthood.
In 1922, Kolmogorov gained international recognition for constructing a Fourier series that diverges almost everywhere. Around this time, he decided to devote his life to mathematics.
In 1925, Kolmogorov graduated from Moscow State University and began to study under the supervision of Nikolai Luzin. He formed a lifelong close friendship with Pavel Alexandrov, a fellow student of Luzin; indeed, several researchers have concluded that the two friends were involved in a homosexual relationship, although neither acknowledged this openly during their lifetimes. Kolmogorov (together with Aleksandr Khinchin) became interested in probability theory. Also in 1925, he published his work in intuitionistic logic, "On the principle of the excluded middle," in which he proved that under a certain interpretation all statements of classical formal logic can be formulated as those of intuitionistic logic. In 1929, Kolmogorov earned his Doctor of Philosophy (Ph.D.) degree from Moscow State University. That same year, during a long trip, Kolmogorov and Alexandrov spent about a month on an island in Lake Sevan in Armenia.
In 1930, Kolmogorov went on his first long trip abroad, traveling to Göttingen and Munich and then to Paris. He had various scientific contacts in Göttingen, first with Richard Courant and his students working on limit theorems, where diffusion processes proved to be the limits of discrete random processes, then with Hermann Weyl in intuitionistic logic, and lastly with Edmund Landau in function theory. His pioneering work "About the Analytical Methods of Probability Theory" was published (in German) in 1931. Also in 1931, he became a professor at Moscow State University.
In 1933, Kolmogorov published his book "Foundations of the Theory of Probability", laying the modern axiomatic foundations of probability theory and establishing his reputation as the world's leading expert in this field. In 1935, Kolmogorov became the first chairman of the department of probability theory at Moscow State University. Around the same years (1936) Kolmogorov contributed to the field of ecology and generalized the Lotka–Volterra model of predator–prey systems.
During the Great Purge in 1936, Kolmogorov's doctoral advisor Nikolai Luzin became a high-profile target of Stalin's regime in what is now called the "Luzin Affair." Kolmogorov and several other students of Luzin testified against Luzin, accusing him of plagiarism, nepotism, and other forms of misconduct; the hearings eventually concluded that he was a servant to "fascistoid science" and thus an enemy of the Soviet people. Luzin lost his academic positions, but curiously he was neither arrested nor expelled from the Academy of Sciences of the Soviet Union. The question of whether Kolmogorov and others were coerced into testifying against their teacher remains a topic of considerable speculation among historians; all parties involved refused to publicly discuss the case for the rest of their lives. Soviet-Russian mathematician Semën Samsonovich Kutateladze concluded in 2013, after reviewing archival documents made available during the 1990s and other surviving testimonies, that the students of Luzin had initiated the accusations against Luzin out of personal acrimony; there was no definitive evidence that the students were coerced by the state, nor was there any definitive evidence to support their allegations of academic misconduct. Soviet historian of mathematics A.P. Yushkevich surmised that, unlike many of the other high-profile persecutions of the era, Stalin did not personally initiate the persecution of Luzin and instead eventually concluded that he was not a threat to the regime, which would explain the unusually mild punishment relative to other contemporaries.
In a 1938 paper, Kolmogorov "established the basic theorems for smoothing and predicting stationary stochastic processes"—a paper that had major military applications during the Cold War. In 1939, he was elected a full member (academician) of the USSR Academy of Sciences.
During World War II Kolmogorov contributed to the Soviet war effort by applying statistical theory to artillery fire, developing a scheme of stochastic distribution of barrage balloons intended to help protect Moscow from German bombers during the Battle of Moscow.
In his study of stochastic processes, especially Markov processes, Kolmogorov and the British mathematician Sydney Chapman independently developed a pivotal set of equations in the field that have been given the name of the Chapman–Kolmogorov equations.
Later, Kolmogorov focused his research on turbulence, beginning his publications in 1941. In classical mechanics, he is best known for the Kolmogorov–Arnold–Moser theorem, first presented in 1954 at the International Congress of Mathematicians. In 1957, working jointly with his student Vladimir Arnold, he solved a particular interpretation of Hilbert's thirteenth problem. Around this time he also began to develop, and has since been considered a founder of, algorithmic complexity theory – often referred to as Kolmogorov complexity theory.
Kolmogorov married Anna Dmitrievna Egorova in 1942. He pursued a vigorous teaching routine throughout his life both at the university level and also with younger children, as he was actively involved in developing a pedagogy for gifted children in literature, music, and mathematics. At Moscow State University, Kolmogorov occupied different positions including the heads of several departments: probability, statistics, and random processes; mathematical logic. He also served as the Dean of the Moscow State University Department of Mechanics and Mathematics.
In 1971, Kolmogorov joined an oceanographic expedition aboard the research vessel "Dmitri Mendeleev." He wrote a number of articles for the "Great Soviet Encyclopedia." In his later years, he devoted much of his effort to the mathematical and philosophical relationship between probability theory in abstract and applied areas.
Kolmogorov died in Moscow in 1987 and his remains were buried in the Novodevichy cemetery.
A quotation attributed to Kolmogorov is [translated into English]: "Every mathematician believes that he is ahead of the others. The reason none state this belief in public is because they are intelligent people."
Vladimir Arnold once said: "Kolmogorov – Poincaré – Gauss – Euler – Newton, are only five lives separating us from the source of our science."
Awards and honours.
Kolmogorov received numerous awards and honours both during and after his lifetime:
The following are named in Kolmogorov's honour:
<templatestyles src="Div col/styles.css"/>
Bibliography.
A bibliography of his works appeared in
Textbooks:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " 1 = 1^2; 1 + 3 = 2^2; 1 + 3 + 5 = 3^2, "
}
] | https://en.wikipedia.org/wiki?curid=91161 |
911658 | Reserve requirement | Type of regulation on commercial banks
Reserve requirements are central bank regulations that set the minimum amount that a commercial bank must hold in liquid assets. This minimum amount, commonly referred to as the commercial bank's reserve, is generally determined by the central bank on the basis of a specified proportion of deposit liabilities of the bank. This rate is commonly referred to as the cash reserve ratio or shortened as reserve ratio. Though the definitions vary, the commercial bank's reserves normally consist of cash held by the bank and stored physically in the bank vault (vault cash), plus the amount of the bank's balance in that bank's account with the central bank. A bank is at liberty to hold in reserve sums above this minimum requirement, commonly referred to as "excess reserves".
The reserve ratio is sometimes used by a country’s monetary authority as a tool in monetary policy, to influence the country's money supply by limiting or expanding the amount of lending by the banks. Monetary authorities increase the reserve requirement only after careful consideration because an abrupt change may cause liquidity problems for banks with low excess reserves; they generally prefer to use other monetary policy instruments to implement their monetary policy. In many countries (except Brazil, China, India, Russia), reserve requirements are generally not altered frequently in implementing a country's monetary policy because of the short-term disruptive effect on financial markets. In several countries, including the United States, there are today zero reserve requirements.
Policy objective.
One of the critical functions of a country's central bank is to maintain public confidence in the banking system, as under a fractional-reserve banking system banks are not expected to hold cash to cover all deposits liabilities in full. One of the mechanisms used by most central banks to further this objective is to set a reserve requirement to ensure that banks have, in normal circumstances, sufficient cash on hand in the event that large deposits are withdrawn, which may precipitate a bank run. The central bank in some jurisdictions, such as the European Union, does not require reserves to be held during the day, while in others, such as the United States, the central bank does not set a reserve requirement at all.
Bank deposits are usually of a relatively short-term duration, and may be “at call”, while loans made by banks tend to be longer-term, resulting in a risk that customers may at any time collectively wish to withdraw cash out of their accounts in excess of the bank reserves. The reserves only provide liquidity to cover withdrawals within the normal pattern. Banks and the central bank expect that in normal circumstances only a proportion of deposits will be withdrawn at the same time, and that the reserves will be sufficient to meet the demand for cash. However, banks routinely find themselves in a shortfall situation or may experience an unexpected bank run, when depositors wish to withdraw more funds than the reserves held by the bank. In that event, the bank experiencing the liquidity shortfall may routinely borrow short-term funds in the interbank lending market from banks with a surplus. In exceptional situations, the central bank may provide funds to cover the short-term shortfall as lender of last resort. When the bank liquidity problem exceeds the central bank’s desire to continue as "lender of last resort", as happened during the global financial crisis of 2007-2008, the government may try to restore confidence in the banking system, for example, by providing government guarantees.
Effects on money supply.
Textbook view.
Many textbooks describe a system in which reserve requirements can act as a tool of a country’s monetary policy, though this description bears little resemblance to reality, and many central banks impose no such requirements. The requirement commonly assumed in textbooks is 10%, though almost no central bank, and no major central bank, imposes such a ratio requirement.
With higher reserve requirements, there would be less funds available to banks for lending. Under this view, the money multiplier compounds the effect of bank lending on the money supply. The multiplier effect on the money supply is governed by the following formulas:
formula_0 : definitional relationship between monetary base "MB" (bank reserves plus currency held by the non-bank public) and the narrowly defined money supply, formula_1,
formula_2 : derived formula for the money multiplier "m", the factor by which lending and re-lending leads formula_1 to be a multiple of the monetary base:
where notationally,
formula_3 the currency ratio: the ratio of the public's holdings of currency (undeposited cash) to the public's holdings of demand deposits; and
formula_4 the total reserve ratio (the ratio of legally required plus non-required reserve holdings of banks to demand deposit liabilities of banks).
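A purely numerical sketch of the textbook relations above follows; the 20% currency ratio, 10% total reserve ratio and monetary base of 1,000 are illustrative assumptions, not data for any actual banking system:

```python
def money_multiplier(c, R):
    """Textbook multiplier m = (1 + c) / (c + R)."""
    return (1 + c) / (c + R)

c = 0.20      # currency ratio: currency held by the public / demand deposits (assumed)
R = 0.10      # total reserve ratio: reserves / demand deposit liabilities (assumed)
MB = 1000.0   # monetary base, in arbitrary units (assumed)

m = money_multiplier(c, R)
M1 = MB * m   # M1 = MB * m
print(round(m, 4), round(M1, 2))   # 4.0 4000.0
```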
This limit on the money supply does not apply in the real world.
Endogenous money view.
Central banks dispute the money multiplier theory of the reserve requirement and instead consider money as endogenous. See endogenous money.
Jaromir Benes and Michael Kumhof of the IMF Research Department report that the "deposit multiplier" of the undergraduate economics textbook, where monetary aggregates are created at the initiative of the central bank, through an initial injection of high-powered money into the banking system that gets multiplied through bank lending, turns the actual operation of the monetary transmission mechanism on its head. Benes and Kumhof assert that in most cases where banks ask for replenishment of depleted reserves, the central bank obliges. Under this view, reserves therefore impose no constraints, as the deposit multiplier is simply, in the words of Kydland and Prescott (1990), a myth. Under this theory, private banks almost fully control the money creation process.
Required reserves.
China.
The People's Bank of China uses changes in the reserve requirement as an inflation-fighting tool, and raised the reserve requirement ten times in 2007 and eleven times since the beginning of 2010.
India.
The Reserve Bank of India uses changes in the CRR as a liquidity management tool, and hiked it alongside the statutory liquidity ratio (SLR) to navigate the 2008 financial crisis. The RBI has also introduced and later withdrawn an incremental cash reserve ratio (I-CRR), over and above the CRR, for managing liquidity.
Countries and districts without reserve requirements.
Canada, the UK, New Zealand, Australia, Sweden and Hong Kong have no reserve requirements.
This does not mean that banks can—even in theory—create money without limit. On the contrary, banks are constrained by capital requirements, which are arguably more important than reserve requirements even in countries that have reserve requirements.
A commercial bank's overnight reserves are not permitted to become "negative". The central bank will step in to lend a bank funds if necessary so that this does not happen. Historically, a central bank might have run out of reserves to lend to banks with liquidity problems and so had to suspend redemptions, but this can no longer happen to modern central banks because of the end of the gold standard worldwide, which means that all nations use a fiat currency.
A zero reserve requirement cannot be explained by a theory that holds that monetary policy works by varying the quantity of money using the reserve requirement.
Even in the United States, which retained formal reserve requirements until 2020, the notion of controlling the money supply by targeting the quantity of base money fell out of favor many years ago, and now the pragmatic explanation of monetary policy refers to targeting the "interest rate" to control the broad money supply. (See also Regulation D (FRB).)
United Kingdom.
In the United Kingdom, commercial banks are called clearing banks with direct access to the clearing system.
The Bank of England, the central bank for the United Kingdom, previously set a voluntary reserve ratio, and not a minimum reserve requirement. In theory, this meant that commercial banks could retain zero reserves. The average cash reserve ratio across the entire United Kingdom banking system, though, was higher during that period, at about 0.15% as of 1999.
From 1971 to 1980, the commercial banks all agreed to a reserve ratio of 1.5%. In 1981 this requirement was abolished.
From 1981 to 2009, each commercial bank set out its own monthly voluntary reserve target in a contract with the Bank of England. Both shortfalls and excesses of reserves relative to the commercial bank's own target over an averaging period of one day would result in a charge, incentivising the commercial bank to stay near its target, a system known as "reserves averaging".
Upon the parallel introduction of quantitative easing and interest on excess reserves in 2009, banks were no longer required to set out a target, and so were no longer penalised for holding excess reserves; indeed, they were proportionally compensated for holding all their reserves at the Bank Rate (the Bank of England now uses the same interest rate for its bank rate, its deposit rate and its interest rate target). In the absence of an agreed target, the concept of excess reserves does not really apply to the Bank of England any longer, so it is technically incorrect to call its new policy "interest on excess reserves".
Canada.
Canada abolished its reserve requirement in 1992.
Australia.
Australia abolished "statutory reserve deposits" in 1988, which were replaced with 1% non-callable deposits.
United States.
In the Thomas Amendment to the Agricultural Adjustment Act of 1933, the Fed was granted the authority to set reserve requirements jointly with the president as one of several provisions that sought to mitigate or prevent deflation. The power was granted to the Fed, without presidential consent, in the Banking Act of 1935. Under the International Banking Act of 1978, the same reserve ratios would apply to branches of foreign banks operating in the United States.
The United States removed reserve requirements for nonpersonal time deposits and eurocurrency liabilities on Dec 27, 1990 and for net transaction accounts on March 27, 2020, thus eliminating reserve requirements altogether. Before that, the Board of Governors of the Federal Reserve System used to set reserve requirements (“liquidity ratio”) based on categories of deposit liabilities ("Net Transaction Accounts" or "NTAs") of depository institutions, such as commercial banks including U.S. branches of a foreign bank, savings and loan association, savings bank, and credit union. For a time, checking accounts were subject to reserve requirements, whereas there was no reserve requirement on savings accounts and time deposit accounts of individuals. The Board for some time set a zero reserve requirement for banks with eligible deposits up to $, 3% for banks up to $, and 10% thereafter. The total removal of reserve requirements followed the Federal Reserve's shift to an "ample-reserves" system, in which the Federal Reserve Banks pay member banks interest on excess reserves held by them.
The total amount of all NTAs held by customers with U.S. depository institutions, plus the U.S. paper currency and coin currency held by the nonbank public, is called M1.
Reserve requirements by country.
The reserve ratios set in each country and district vary. The following list is non-exhaustive:
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M_1=\\mathit{MB} \\times m \\,"
},
{
"math_id": 1,
"text": "M_1"
},
{
"math_id": 2,
"text": "m=\\frac{(1+c)}{(c+R)} = \\frac{1+\\frac{C}{D}}{\\frac{C}{D}+R}"
},
{
"math_id": 3,
"text": "c ="
},
{
"math_id": 4,
"text": "R ="
}
] | https://en.wikipedia.org/wiki?curid=911658 |
911820 | Normal modal logic | In logic, a normal modal logic is a set "L" of modal formulas such that "L" contains:
All propositional tautologies;
All instances of the Kripke schema: formula_0
and it is closed under:
Detachment rule (modus ponens): formula_1 implies formula_2;
Necessitation rule: formula_3 implies formula_4.
The smallest logic satisfying the above conditions is called K. Most modal logics commonly used nowadays (in terms of having philosophical motivations), e.g. C. I. Lewis's S4 and S5, are normal (and hence are extensions of K). However a number of deontic and epistemic logics, for example, are non-normal, often because they give up the Kripke schema.
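To illustrate how the schema formula_0 and the two closure rules work together, here is a standard textbook derivation in K of □(A ∧ B) → □A (an illustration added here, not taken from this article), written as a LaTeX fragment:

```latex
\begin{align*}
&1.\ (A \land B) \to A && \text{propositional tautology, hence in } L \\
&2.\ \Box\bigl((A \land B) \to A\bigr) && \text{necessitation applied to 1} \\
&3.\ \Box\bigl((A \land B) \to A\bigr) \to \bigl(\Box(A \land B) \to \Box A\bigr) && \text{instance of the Kripke schema} \\
&4.\ \Box(A \land B) \to \Box A && \text{modus ponens from 2 and 3}
\end{align*}
```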
Every normal modal logic is regular and hence classical.
Common normal modal logics.
The following table lists several common normal modal systems. The notation refers to the table at Kripke semantics § Common modal axiom schemata. Frame conditions for some of the systems were simplified: the logics are "sound and complete" with respect to the frame classes given in the table, but they may "correspond" to a larger class of frames.
| [
{
"math_id": 0,
"text": "\\Box(A\\to B)\\to(\\Box A\\to\\Box B)"
},
{
"math_id": 1,
"text": " A\\to B, A \\in L"
},
{
"math_id": 2,
"text": " B \\in L"
},
{
"math_id": 3,
"text": " A \\in L"
},
{
"math_id": 4,
"text": "\\Box A \\in L"
}
] | https://en.wikipedia.org/wiki?curid=911820 |
911833 | Graphene | Hexagonal lattice made of carbon atoms
Graphene is an allotrope of carbon consisting of a single layer of atoms arranged in a honeycomb nanostructure. The name is derived from "graphite" and the suffix -ene, reflecting the fact that the graphite allotrope of carbon contains numerous double bonds.
Each atom in a graphene sheet is connected to its three nearest neighbors by σ-bonds, and a delocalised π-bond, which contributes to a valence band that extends over the whole sheet. This type of bonding is also seen in polycyclic aromatic hydrocarbons. The valence band is touched by a conduction band, making graphene a semimetal with unusual electronic properties that are best described by theories for massless relativistic particles. Charge carriers in graphene show linear, rather than quadratic, dependence of energy on momentum, and field-effect transistors with graphene can be made that show bipolar conduction. Charge transport is ballistic over long distances; the material exhibits large quantum oscillations and large nonlinear diamagnetism. Graphene conducts heat and electricity very efficiently along its plane. The material strongly absorbs light of all visible wavelengths, which accounts for the black color of graphite, yet a single graphene sheet is nearly transparent because of its extreme thinness. On a microscopic scale, graphene is the strongest material ever measured.
Scientists theorized the potential existence and production of graphene for decades. It has likely been unknowingly produced in small quantities for centuries through the use of pencils and other similar applications of graphite. It was possibly observed in electron microscopes in 1962, but studied only while supported on metal surfaces.
In 2004, the material was rediscovered, isolated and investigated at the University of Manchester, by Andre Geim and Konstantin Novoselov. In 2010, Geim and Novoselov were awarded the Nobel Prize in Physics for their "groundbreaking experiments regarding the two-dimensional material graphene". High-quality graphene had proved to be surprisingly easy to isolate.
Graphene has become a valuable and useful nanomaterial due to its exceptionally high tensile strength, electrical conductivity, transparency, and being the thinnest two-dimensional material in the world. The global market for graphene was $9 million in 2012, with most of the demand from research and development in semiconductor, electronics, electric batteries, and composites.
The IUPAC (International Union of Pure and Applied Chemistry) recommends use of the name "graphite" for the three-dimensional material, and "graphene" only when the reactions, structural relations, or other properties of individual layers are discussed. A narrower definition, of "isolated or free-standing graphene", requires that the layer be sufficiently isolated from its environment, but would include layers suspended or transferred to silicon dioxide or silicon carbide.
History.
Structure of graphite and its intercalation compounds.
In 1859, Benjamin Brodie noted the highly lamellar structure of thermally reduced graphite oxide. Pioneers in X-ray crystallography attempted to determine the structure of graphite. The lack of large single-crystal graphite specimens contributed to the independent development of X-ray powder diffraction by Debye and Scherrer in 1915 and Hull in 1916; however, neither of their proposed structures was correct. In 1918, V. Kohlschütter and P. Haenni described the properties of graphite oxide paper. The structure of graphite was successfully determined from single-crystal X-ray diffraction by Bernal in 1924, although subsequent research has made small modifications to the unit cell parameters.
The theory of graphene was first explored by P. R. Wallace in 1947 as a starting point for understanding the electronic properties of 3D graphite. The emergent massless Dirac equation was first pointed out in 1984 separately by Gordon Walter Semenoff, and by David P. DiVincenzo and Eugene J. Mele. Semenoff emphasized the occurrence in a magnetic field of an electronic Landau level precisely at the Dirac point. This level is responsible for the anomalous integer quantum Hall effect.
Observations of thin graphite layers and related structures.
Transmission electron microscopy (TEM) images of thin graphite samples consisting of a few graphene layers were published by G. Ruess and F. Vogt in 1948. Eventually, single layers were also observed directly. Single layers of graphite were also observed by transmission electron microscopy within bulk materials, in particular inside soot obtained by chemical exfoliation.
In 1961–1962, Hanns-Peter Boehm published a study of extremely thin flakes of graphite, and coined the term "graphene" for the hypothetical single-layer structure. This paper reports graphitic flakes that give an additional contrast equivalent of down to ~0.4 nm or 3 atomic layers of amorphous carbon. This was the best possible resolution for 1960 TEMs. However, neither then nor today is it possible to argue how many layers were in those flakes. Now we know that the TEM contrast of graphene most strongly depends on focusing conditions. For example, it is impossible to distinguish between suspended monolayer and multilayer graphene by their TEM contrasts, and the only known way is to analyze the relative intensities of various diffraction spots. The first reliable TEM observations of monolayers are probably given in refs. 24 and 26 of Geim and Novoselov's 2007 review.
Starting in the 1970s, C. Oshima and others described single layers of carbon atoms that were grown epitaxially on top of other materials. This "epitaxial graphene" consists of a single-atom-thick hexagonal lattice of sp2-bonded carbon atoms, as in free-standing graphene. However, there is significant charge transfer between the two materials, and, in some cases, hybridization between the d-orbitals of the substrate atoms and π orbitals of graphene; which significantly alter the electronic structure compared to that of free-standing graphene.
The term "graphene" was used again in 1987 to describe single sheets of graphite as a constituent of graphite intercalation compounds, which can be seen as crystalline salts of the intercalant and graphene. It was also used in the descriptions of carbon nanotubes by R. Saito and Mildred and Gene Dresselhaus in 1992, and of polycyclic aromatic hydrocarbons in 2000 by S. Wang and others.
Efforts to make thin films of graphite by mechanical exfoliation started in 1990.
Initial attempts employed exfoliation techniques similar to the drawing method. Multilayer samples down to 10 nm in thickness were obtained.
In 2002, Robert B. Rutherford and Richard L. Dudman filed for a patent in the US on a method to produce graphene by repeatedly peeling off layers from a graphite flake adhered to a substrate, achieving a graphite thickness of . The key to success was high-throughput visual recognition of graphene on a properly chosen substrate, which provides a small but noticeable optical contrast.
Another U.S. patent was filed in the same year by Bor Z. Jang and Wen C. Huang for a method to produce graphene based on exfoliation followed by attrition.
In 2014, inventor Larry Fullerton patented a process for producing single-layer graphene sheets.
Full isolation and characterization.
Graphene was properly isolated and characterized in 2004 by Andre Geim and Konstantin Novoselov at the University of Manchester, UK. They pulled graphene layers from graphite with a common adhesive tape in a process called either micromechanical cleavage or the Scotch tape technique. The graphene flakes were then transferred onto a thin silicon dioxide (silica) layer on a silicon plate ("wafer"). The silica electrically isolated the graphene and weakly interacted with it, providing nearly charge-neutral graphene layers. The silicon beneath the SiO2 could be used as a "back gate" electrode to vary the charge density in the graphene over a wide range.
This work resulted in the two winning the Nobel Prize in Physics in 2010 "for groundbreaking experiments regarding the two-dimensional material graphene." Their publication, and the surprisingly easy preparation method that they described, sparked a "graphene gold rush". Research expanded and split off into many different subfields, exploring different exceptional properties of the material—quantum mechanical, electrical, chemical, mechanical, optical, magnetic, etc.
Exploring commercial applications.
Since the early 2000s, a number of companies and research laboratories have been working to develop commercial applications of graphene. In 2014 a National Graphene Institute was established with that purpose at the University of Manchester, with a £60 million initial funding. In North East England two commercial manufacturers, Applied Graphene Materials and Thomas Swan Limited have begun manufacturing. Cambridge Nanosystems is a large-scale graphene powder production facility in East Anglia.
Structure.
Graphene is a single layer (monolayer) of carbon atoms, tightly bound in a hexagonal honeycomb lattice. It is an allotrope of carbon in the form of a plane of sp2-bonded atoms with a molecular bond length of .
Bonding.
Three of the four outer-shell electrons of each atom in a graphene sheet occupy three sp2 hybrid orbitals – a combination of orbitals s, px and py — that are shared with the three nearest atoms, forming σ-bonds. The length of these bonds is about 0.142 nanometers.
The remaining outer-shell electron occupies a pz orbital that is oriented perpendicularly to the plane. These orbitals hybridize together to form two half-filled bands of free-moving electrons, π and π∗, which are responsible for most of graphene's notable electronic properties. Recent quantitative estimates of aromatic stabilization and limiting size derived from the enthalpies of hydrogenation (ΔHhydro) agree well with the literature reports.
Graphene sheets stack to form graphite with an interplanar spacing of .
Graphene sheets in solid form usually show evidence in diffraction for graphite's (002) layering. This is true of some single-walled nanostructures. However, unlayered graphene displaying only (hk0) rings has been observed in the core of presolar graphite onions. TEM studies show faceting at defects in flat graphene sheets and suggest a role for two-dimensional crystallization from a melt.
Geometry.
The hexagonal lattice structure of isolated, single-layer graphene can be directly seen with transmission electron microscopy (TEM) of sheets of graphene suspended between bars of a metallic grid. Some of these images showed a "rippling" of the flat sheet, with amplitude of about one nanometer. These ripples may be intrinsic to the material as a result of the instability of two-dimensional crystals, or may originate from the ubiquitous dirt seen in all TEM images of graphene. Photoresist residue, which must be removed to obtain atomic-resolution images, may be the "adsorbates" observed in TEM images, and may explain the observed rippling.
The hexagonal structure is also seen in scanning tunneling microscope (STM) images of graphene supported on silicon dioxide substrates. The rippling seen in these images is caused by conformation of graphene to the substrate's lattice, and is not intrinsic.
Stability.
Ab initio calculations show that a graphene sheet is thermodynamically unstable if its size is less than about 20 nm and becomes the most stable fullerene (as within graphite) only for molecules larger than 24,000 atoms.
Electronic properties.
Graphene is a zero-gap semiconductor because its conduction and valence bands meet at the Dirac points. The Dirac points are six locations in momentum space on the edge of the Brillouin zone, divided into two non-equivalent sets of three points. These sets are labeled K and K'. These sets give graphene a valley degeneracy of "gv" = 2. In contrast, for traditional semiconductors, the primary point of interest is generally Γ, where momentum is zero. Four electronic properties distinguish graphene from other condensed matter systems.
If the in-plane direction is confined rather than infinite, its electronic structure changes. These confined structures are referred to as graphene nanoribbons. If the nanoribbon has a "zig-zag" edge, the bandgap remains zero. If it has an "armchair" edge, the bandgap is non-zero.
Graphene's hexagonal lattice can be viewed as two interleaving triangular lattices. This perspective has been used to calculate the band structure for a single graphite layer using a tight-binding approximation.
Electronic spectrum.
Electrons propagating through graphene's honeycomb lattice effectively lose their mass, producing quasi-particles described by a 2D analogue of the Dirac equation rather than the Schrödinger equation for spin-1/2 particles.
Dispersion relation.
The cleavage technique led directly to the first observation of the anomalous quantum Hall effect in graphene in 2005 by Geim's group and by Philip Kim and Yuanbo Zhang. This effect provided direct evidence of graphene's theoretically predicted Berry's phase of massless Dirac fermions and proof of the Dirac fermion nature of electrons. These effects were previously observed in bulk graphite by Yakov Kopelevich, Igor A. Luk'yanchuk, and others, in 2003–2004.
When atoms are placed onto the graphene hexagonal lattice, the overlap between the "p"z(π) orbitals and the "s" or the "p"x and "p"y orbitals is zero by symmetry. Therefore, "p"z electrons forming the π bands in graphene can be treated independently. Within this π-band approximation, using a conventional tight-binding model, the dispersion relation (restricted to first-nearest-neighbor interactions only) that produces the energy of the electrons with wave vector "k" is:
formula_0
with the nearest-neighbor (π orbitals) hopping energy "γ"0 ≈ and the lattice constant "a" ≈. The conduction and valence bands correspond to the different signs. With one "p"z electron per atom in this model, the valence band is fully occupied, while the conduction band is vacant. The two bands touch at the zone corners (the "K" point in the Brillouin zone), where there is a zero density of states but no band gap. Thus, graphene exhibits a semimetallic (or zero-gap semiconductor) character, although this is not true for a graphene sheet rolled into a carbon nanotube due to its curvature. Two of the six Dirac points are independent, while the rest are equivalent by symmetry. Near the "K"-points, the energy depends "linearly" on the wave vector, similar to a relativistic particle. Since an elementary cell of the lattice has a basis of two atoms, the wave function has an effective 2-spinor structure.
Consequently, at low energies, even neglecting the true spin, electrons can be described by an equation formally equivalent to the massless Dirac equation. Hence, the electrons and holes are called Dirac fermions. This pseudo-relativistic description is restricted to the chiral limit, i.e., to vanishing rest mass "M"0, leading to interesting additional features:
formula_1
Here "vF" ~ (.003 c) is the Fermi velocity in graphene, which replaces the velocity of light in the Dirac theory; formula_2 is the vector of the Pauli matrices, formula_3 is the two-component wave function of the electrons, and "E" is their energy.
The equation describing the electrons' linear dispersion relation is:
formula_4
where the wavevector "q" is measured from the Brillouin zone vertex K, formula_5, and the zero of energy is set to coincide with the Dirac point. The equation uses a pseudospin matrix formula that describes two sublattices of the honeycomb lattice.
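A short numerical sketch (added here for illustration) evaluates the nearest-neighbour tight-binding bands in the standard convention and checks both the band touching at a K point and the linear, Dirac-like slope nearby; the hopping energy of 2.8 eV and carbon–carbon distance of 1.42 Å are typical literature values assumed for the example, not values taken from this text:

```python
import numpy as np

t = 2.8          # nearest-neighbour hopping energy gamma_0 in eV (assumed typical value)
a = 1.42e-10     # carbon-carbon distance in metres (assumed typical value)

def band_energy(kx, ky):
    """Magnitude of the nearest-neighbour tight-binding pi-band energy (bands are +/- this)."""
    f = (2 * np.cos(np.sqrt(3) * ky * a)
         + 4 * np.cos(np.sqrt(3) * ky * a / 2) * np.cos(3 * kx * a / 2))
    return t * np.sqrt(max(3 + f, 0.0))   # clip guards against round-off right at the Dirac point

# One K point of the Brillouin zone in this convention
Kx, Ky = 2 * np.pi / (3 * a), 2 * np.pi / (3 * np.sqrt(3) * a)

print(band_energy(Kx, Ky))                # ~0 eV: conduction and valence bands touch

q = 1e7                                   # small offset from K, in 1/m
print(band_energy(Kx + q, Ky))            # tight-binding value near K ...
print(1.5 * t * a * q)                    # ... matches the linear form (3/2) * t * a * |q|
```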
Single-atom wave propagation.
Electron waves in graphene propagate within a single-atom layer, making them sensitive to the proximity of other materials such as high-κ dielectrics, superconductors and ferromagnetics.
Ambipolar electron and hole transport.
Graphene exhibits high electron mobility at room temperature, with values reported in excess of . Hole and electron mobilities are nearly identical. The mobility is independent of temperature between and , showing minimal change even at room temperature (300 K), suggesting that the dominant scattering mechanism is defect scattering. Scattering by graphene's acoustic phonons intrinsically limits room temperature mobility in freestanding graphene to at a carrier density of .
The corresponding resistivity of graphene sheets is , lower than the resistivity of silver, which is the lowest known at room temperature. However, on SiO2 substrates, electron scattering by optical phonons of the substrate has a more significant effect than scattering by graphene's own phonons, limiting mobility to .
Charge transport can be affected by the adsorption of contaminants such as water and oxygen molecules, leading to non-repetitive and large hysteresis I-V characteristics. Researchers need to conduct electrical measurements in a vacuum. Coating the graphene surface with materials such as SiN, PMMA or h-BN has been proposed for protection. In January 2015, the first stable graphene device operation in air over several weeks was reported for graphene whose surface was protected by aluminum oxide. In 2015, lithium-coated graphene exhibited superconductivity, a first for graphene.
Electrical resistance in 40-nanometer-wide nanoribbons of epitaxial graphene changes in discrete steps. The ribbons' conductance exceeds predictions by a factor of 10. The ribbons can function more like optical waveguides or quantum dots, allowing electrons to flow smoothly along the ribbon edges. In copper, resistance increases proportionally with length as electrons encounter impurities.
Transport is dominated by two modes: one ballistic and temperature-independent, and the other thermally activated. Ballistic electrons resemble those in cylindrical carbon nanotubes. At room temperature, resistance increases abruptly at a specific length—the ballistic mode at 16 micrometres and the thermally activated mode at 160 nanometres (1% of the former length).
Graphene electrons can traverse micrometer distances without scattering, even at room temperature.
Electrical conductivity and charge transport.
Despite zero carrier density near the Dirac points, graphene exhibits a minimum conductivity on the order of formula_6. The origin of this minimum conductivity is still unclear. However, rippling of the graphene sheet or ionized impurities in the SiO2 substrate may lead to local puddles of carriers that allow conduction. Several theories suggest that the minimum conductivity should be formula_7; however, most measurements are of the order of formula_6 or greater and depend on impurity concentration.
Near zero carrier density, graphene exhibits positive photoconductivity and negative photoconductivity at high carrier density, governed by the interplay between photoinduced changes of both the Drude weight and the carrier scattering rate.
Graphene doped with various gaseous species (both acceptors and donors) can be returned to an undoped state by gentle heating in a vacuum. Even for dopant concentrations in excess of 1012 cm−2, carrier mobility exhibits no observable change. Graphene doped with potassium in ultra-high vacuum at low temperature can reduce mobility 20-fold. The mobility reduction is reversible on heating the graphene to remove the potassium.
Due to graphene's two dimensions, charge fractionalization (where the apparent charge of individual pseudoparticles in low-dimensional systems is less than a single quantum) is thought to occur. It may therefore be a suitable material for constructing quantum computers using anyonic circuits.
Chiral half-integer quantum Hall effect.
Quantum Hall effect in graphene.
The quantum Hall effect is a quantum mechanical version of the Hall effect, which is the production of transverse (perpendicular to the main current) conductivity in the presence of a magnetic field. The Hall effect is quantized, with the transverse conductivity formula_8 taking values at integer multiples (the "Landau level") of the basic quantity "e"2/"h" (where "e" is the elementary electric charge and "h" is the Planck constant). It can usually be observed only in very clean silicon or gallium arsenide solids at temperatures around and very high magnetic fields.
Graphene shows the quantum Hall effect with respect to conductivity quantization: the effect is unusual in that the sequence of steps is shifted by 1/2 with respect to the standard sequence and with an additional factor of 4. Graphene's Hall conductivity is formula_9, where "N" is the Landau level and the double valley and double spin degeneracies give the factor of 4. These anomalies are present not only at extremely low temperatures but also at room temperature, i.e. at roughly .
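Reading the quoted conductivity as σxy = ±4(N + 1/2)e2/h (the shifted, factor-of-4 sequence described above), a two-line sketch lists the first few plateau values; this is an illustration added here, not a calculation from the article:

```python
# Anomalous graphene sequence: sigma_xy = +/- 4 * (N + 1/2), in units of e^2/h
graphene_plateaus = [4 * (N + 0.5) for N in range(5)]
print(graphene_plateaus)          # [2.0, 6.0, 10.0, 14.0, 18.0]

# Conventional integer quantum Hall sequence, for comparison (units of e^2/h)
conventional_plateaus = list(range(1, 6))
print(conventional_plateaus)      # [1, 2, 3, 4, 5]
```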
Chiral electrons and anomalies.
This behavior is a direct result of graphene's chiral, massless Dirac electrons. In a magnetic field, their spectrum has a Landau level with energy precisely at the Dirac point. This level is a consequence of the Atiyah–Singer index theorem and is half-filled in neutral graphene, leading to the "+1/2" in the Hall conductivity. Bilayer graphene also shows the quantum Hall effect, but with only one of the two anomalies (i.e. formula_10). In the second anomaly, the first plateau at "N" = 0 is absent, indicating that bilayer graphene stays metallic at the neutrality point.
Unlike normal metals, graphene's longitudinal resistance shows maxima rather than minima for integral values of the Landau filling factor in measurements of the Shubnikov–de Haas oscillations, thus the term "integral" quantum Hall effect. These oscillations show a phase shift of π, known as Berry's phase. Berry's phase arises due to chirality or dependence (locking) of the pseudospin quantum number on momentum of low-energy electrons near the Dirac points. The temperature dependence of the oscillations reveals that the carriers have a non-zero cyclotron mass, despite their zero effective mass in the Dirac-fermion formalism.
Experimental observations.
Graphene samples prepared on nickel films, and on both the silicon face and carbon face of silicon carbide, show the anomalous effect directly in electrical measurements. Graphitic layers on the carbon face of silicon carbide show a clear Dirac spectrum in angle-resolved photoemission experiments, and the effect is observed in cyclotron resonance and tunneling experiments.
'Massive' electrons.
Graphene's unit cell has two identical carbon atoms and two zero-energy states: one where the electron resides on atom A, and the other on atom B. However, if the unit cell's two atoms are not identical, the situation changes. Research shows that placing hexagonal boron nitride (h-BN) in contact with graphene can alter the potential felt at atoms A and B sufficiently for the electrons to develop a mass and an accompanying band gap of about 30 meV (0.03 eV).
The mass can be positive or negative. An arrangement that slightly raises the energy of an electron on atom A relative to atom B gives it a positive mass, while an arrangement that raises the energy of atom B produces a negative electron mass. The two versions behave alike and are indistinguishable via optical spectroscopy. An electron traveling from a positive-mass region to a negative-mass region must cross an intermediate region where its mass once again becomes zero. This region is gapless and therefore metallic. Metallic modes bounding semiconducting regions of opposite-sign mass is a hallmark of a topological phase and display much the same physics as topological insulators.
If the mass in graphene can be controlled, electrons can be confined to massless regions by surrounding them with massive regions, allowing the patterning of quantum dots, wires, and other mesoscopic structures. It also produces one-dimensional conductors along the boundary. These wires would be protected against backscattering and could carry currents without dissipation.
Interactions and phenomena.
Strong magnetic fields.
In magnetic fields above 10 tesla, additional plateaus of the Hall conductivity at "σ""xy" = "νe"2/"h" with "ν" = 0, ±1, ±4 are observed. A plateau at "ν" = 3 and the fractional quantum Hall effect at "ν" = were also reported.
These observations with "ν" = 0, ±1, ±3, ±4 indicate that the four-fold degeneracy (two valley and two spin degrees of freedom) of the Landau energy levels is partially or completely lifted.
Casimir effect.
The Casimir effect is an interaction between disjoint neutral bodies provoked by the fluctuations of the electromagnetic vacuum. Mathematically, it can be explained by considering the normal modes of electromagnetic fields, which explicitly depend on the boundary conditions on the interacting bodies' surfaces. Due to graphene's strong interaction with the electromagnetic field as a one-atom-thick material, the Casimir effect has garnered significant interest.
Van der Waals force.
The Van der Waals force (or dispersion force) is also unusual, obeying an inverse cubic asymptotic power law in contrast to the usual inverse quartic law.
Permittivity.
Graphene's permittivity varies with frequency. Over a range from microwave to millimeter wave frequencies, it is approximately 3.3. This permittivity, combined with its ability to function as both a conductor and as insulator, theoretically allows compact capacitors made of graphene to store large amounts of electrical energy.
Optical properties.
Graphene exhibits unique optical properties, showing unexpectedly high opacity for an atomic monolayer in vacuum, absorbing approximately "πα" ≈ 2.3% of light from visible to infrared wavelengths, where "α" is the fine-structure constant. This is due to the unusual low-energy electronic structure of monolayer graphene, characterized by electron and hole conical bands meeting at the Dirac point, which is qualitatively different from more common quadratic massive bands. Based on the Slonczewski–Weiss–McClure (SWMcC) band model of graphite, calculations using Fresnel equations in the thin-film limit account for interatomic distance, hopping values, and frequency, thus assessing optical conductance.
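Since the quoted per-layer absorption is simply π times the fine-structure constant, a one-line check reproduces the ~2.3% figure (the CODATA value of α is assumed here, not taken from this text):

```python
import math

alpha = 7.2973525693e-3                 # fine-structure constant (CODATA 2018 value, assumed)
print(f"{math.pi * alpha:.4f}")         # 0.0229 -> about 2.3% absorption per graphene layer
```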
Experimental verification, though confirmed, lacks the precision required to improve upon existing techniques for determining the fine-structure constant.
Multi-parametric surface plasmon resonance.
Multi-parametric surface plasmon resonance has been utilized to characterize both thickness and refractive index of chemical-vapor-deposition (CVD)-grown graphene films. At a wavelength of , measured refractive index and extinction coefficient values are 3.135 and 0.897, respectively. Thickness determination yielded 3.7 Å across a 0.5 mm area, consistent with the 3.35 Å reported for the layer-to-layer carbon atom distance of graphite crystals. This method is applicable for real-time label-free interactions of graphene with organic and inorganic substances. The existence of unidirectional surface plasmons in nonreciprocal graphene-based gyrotropic interfaces has been theoretically demonstrated, offering tunability from THz to near-infrared and visible frequencies by controlling graphene's chemical potential. Particularly, the unidirectional frequency bandwidth can be 1–2 orders of magnitude larger than that achievable with metal under similar magnetic field conditions, stemming from graphene's extremely small effective electron mass.
Tunable band gap and optical response.
Graphene's band gap can be tuned from 0 to (about 5 micrometre wavelength) by applying voltage to a dual-gate bilayer graphene field-effect transistor (FET) at room temperature. The optical response of graphene nanoribbons is tunable into the terahertz regime by an applied magnetic field. Graphene/graphene oxide systems exhibit electrochromic behavior, enabling tuning of both linear and ultrafast optical properties.
Graphene-based Bragg grating.
A graphene-based Bragg grating (one-dimensional photonic crystal) has been fabricated, demonstrating its capability to excite surface electromagnetic waves in the periodic structure using a He–Ne laser as the light source.
Saturable absorption.
Graphene exhibits unique saturable absorption, which saturates when the input optical intensity exceeds a threshold value. This nonlinear optical behavior, termed saturable absorption, occurs across the visible to near-infrared spectrum, due to graphene's universal optical absorption and zero band gap. This property has enabled fullband mode locking in fiber lasers using graphene-based saturable absorbers, contributing significantly to ultrafast photonics. Additionally, the optical response of graphene/graphene oxide layers can be electrically tuned.
Saturable absorption in graphene can also occur in the microwave and terahertz bands, owing to its wideband optical absorption. This microwave saturable absorption demonstrates the possibility of graphene-based microwave and terahertz photonic devices, such as microwave saturable absorbers, modulators, polarizers, microwave signal processing elements, and broad-band wireless access networks.
Nonlinear Kerr effect.
Under intense laser illumination, graphene exhibits a nonlinear phase shift due to the optical nonlinear Kerr effect. Graphene demonstrates a large nonlinear Kerr coefficient of , nearly nine orders of magnitude larger than that of bulk dielectrics, suggesting its potential as a powerful nonlinear Kerr medium capable of supporting various nonlinear effects, including solitons.
Excitonic properties.
First-principles calculations incorporating quasiparticle corrections and many-body effects have been employed to study the electronic and optical properties of graphene-based materials. The approach is described as a three-stage process. With GW calculations, the properties of graphene-based materials are accurately investigated, including bulk graphene, nanoribbons, edge- and surface-functionalized armchair ribbons, hydrogen-saturated armchair ribbons, the Josephson effect in graphene SNS junctions with a single localized defect, and armchair ribbon scaling properties.
Spin transport.
Graphene is considered an ideal material for spintronics due to its minimal spin–orbit interaction, the near absence of nuclear magnetic moments in carbon, and weak hyperfine interaction. Electrical injection and detection of spin current have been demonstrated up to room temperature, with spin coherence length exceeding 1 micrometre observed at this temperature. Control of spin current polarity via electrical gating has been achieved at low temperatures.
Magnetic properties.
Strong magnetic fields.
Graphene's quantum Hall effect in magnetic fields above approximately 10 teslas reveals additional interesting features. Additional plateaus in the Hall conductivity at formula_11 with formula_12 have been observed, along with a plateau at formula_13 and a fractional quantum Hall effect at formula_14.
These observations with formula_15 indicate that the four-fold degeneracy (two valley and two spin degrees of freedom) of the Landau energy levels is partially or completely lifted. One hypothesis proposes that magnetic catalysis of symmetry breaking is responsible for this degeneracy lift.
Spintronic properties.
Graphene exhibits spintronic and magnetic properties concurrently. Low-defect graphene nanomeshes, fabricated using a non-lithographic approach, exhibit significant ferromagnetism even at room temperature. Additionally, a spin pumping effect has been observed for fields applied parallel to the planes of few-layer ferromagnetic nanomeshes, while a magnetoresistance hysteresis loop is evident under perpendicular fields. Charge-neutral graphene has demonstrated magnetoresistance exceeding 100% in magnetic fields generated by standard permanent magnets (approximately 0.1 tesla), marking a record magnetoresistivity "at room temperature" among known materials.
Magnetic substrates.
In 2014 researchers magnetized graphene by placing it on an atomically smooth layer of magnetic yttrium iron garnet, leaving graphene's electronic properties unaffected. Previous methods involved doping graphene with other substances; the dopant's presence negatively affected its electronic properties.
Mechanical properties.
The (two-dimensional) density of graphene is 0.763 mg per square meter.
Graphene is the strongest material ever tested, with an intrinsic tensile strength of (with representative engineering tensile strength ~50-60 GPa for stretching large-area freestanding graphene) and a Young's modulus (stiffness) close to . The Nobel announcement illustrated this by saying that a 1 square meter graphene hammock would support a cat but would weigh only as much as one of the cat's whiskers, at (about 0.001% of the weight of of paper).
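As a rough consistency check (not taken from the sources cited here), the areal density and the paper comparison can be reproduced from graphene's lattice geometry; the lattice constant of about 2.46 Å and a standard office-paper grammage of about 80 g/m² are assumed values:
\rho_{2\mathrm{D}} = \frac{2 m_\mathrm{C}}{A_\text{cell}} = \frac{2 \times 12.011 \times 1.66 \times 10^{-27}\,\mathrm{kg}}{\tfrac{\sqrt{3}}{2}\,(2.46 \times 10^{-10}\,\mathrm{m})^2} \approx 7.6 \times 10^{-7}\,\mathrm{kg\,m^{-2}} \approx 0.76\,\mathrm{mg\,m^{-2}},
\qquad \frac{0.763\,\mathrm{mg\,m^{-2}}}{80\,\mathrm{g\,m^{-2}}} \approx 1 \times 10^{-5} \approx 0.001\%.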
Large-angle bending of graphene monolayers with minimal strain demonstrates its mechanical robustness. Even under extreme deformation, monolayer graphene maintains excellent carrier mobility.
The spring constant of suspended graphene sheets has been measured using an atomic force microscope (AFM). Graphene sheets were suspended over SiO2 cavities where an AFM tip was used to apply a stress to the sheet to test its mechanical properties. Its spring constant was in the range 1–5 N/m and the stiffness was , which differs from that of bulk graphite. These intrinsic properties could lead to applications such as NEMS pressure sensors and resonators. Due to its large surface energy and out-of-plane ductility, flat graphene sheets are unstable with respect to scrolling, i.e. bending into a cylindrical shape, which is its lower-energy state.
In two-dimensional structures like graphene, thermal and quantum fluctuations cause relative displacements of the atoms. According to the Mermin–Wagner theorem, the amplitude of long-wavelength fluctuations grows logarithmically with the scale of the 2D structure, and would therefore be unbounded in structures of infinite size. Local deformation and elastic strain are negligibly affected by this long-range divergence in relative displacement. It is believed that a sufficiently large 2D structure, in the absence of applied lateral tension, will bend and crumple to form a fluctuating 3D structure. Researchers have observed ripples in suspended layers of graphene, and it has been proposed that the ripples are caused by thermal fluctuations in the material. As a consequence of these dynamical deformations, it is debatable whether graphene is truly a 2D structure. These ripples, when amplified by vacancy defects, induce a negative Poisson's ratio into graphene, resulting in the thinnest auxetic material known so far.
Graphene-nickel (Ni) composites, created through plating processes, exhibit enhanced mechanical properties due to strong Ni-graphene interactions inhibiting dislocation sliding in the Ni matrix.
Fracture toughness.
In 2014, researchers from Rice University and the Georgia Institute of Technology indicated that despite its strength, graphene is also relatively brittle, with a fracture toughness of about 4 MPa√m. This indicates that imperfect graphene is likely to crack in a brittle manner like ceramic materials, as opposed to many metallic materials, which tend to have fracture toughnesses in the range of 15–50 MPa√m. Later in 2014, the Rice team announced that graphene showed a greater ability to distribute force from an impact than any known material, ten times that of steel per unit weight. The force was transmitted at .
Polycrystalline graphene.
Various methods – most notably, chemical vapor deposition (CVD), as discussed in the section below – have been developed to produce the large-scale graphene needed for device applications. Such methods often synthesize polycrystalline graphene. The mechanical properties of polycrystalline graphene are affected by the nature of the defects, such as grain boundaries (GB) and vacancies, present in the system and by the average grain size.
Graphene grain boundaries typically contain heptagon-pentagon pairs. The arrangement of such defects depends on whether the GB is in the zig-zag or armchair direction. It further depends on the tilt-angle of the GB. In 2010, researchers from Brown University computationally predicted that as the tilt-angle increases, the grain boundary strength also increases. They showed that the weakest link in the grain boundary is at the critical bonds of the heptagon rings. As the grain boundary angle increases, the strain in these heptagon rings decreases, causing the grain boundary to be stronger than lower-angle GBs. They proposed that, in fact, for sufficiently large-angle GBs, the strength of the GB is similar to that of pristine graphene. In 2012, it was further shown that the strength can increase or decrease, depending on the detailed arrangements of the defects. These predictions have since been supported by experimental evidence. In a 2013 study led by James Hone's group, researchers probed the elastic stiffness and strength of CVD-grown graphene by combining nano-indentation and high-resolution TEM. They found that the elastic stiffness is identical to, and the strength only slightly lower than, those of pristine graphene. In the same year, researchers from UC Berkeley and UCLA probed bi-crystalline graphene with TEM and AFM. They found that the strength of grain boundaries indeed tends to increase with the tilt angle.
Vacancies are not only prevalent in polycrystalline graphene; they can also have significant effects on the strength of graphene. The general consensus is that the strength decreases with increasing density of vacancies. In fact, various studies have shown that for graphene with a sufficiently low density of vacancies, the strength does not vary significantly from that of pristine graphene. On the other hand, a high density of vacancies can severely reduce the strength of graphene.
Compared to the fairly well-understood nature of the effect that grain boundaries and vacancies have on the mechanical properties of graphene, there is no clear consensus on the general effect that the average grain size has on the strength of polycrystalline graphene. In fact, three notable theoretical/computational studies on this topic have led to three different conclusions. First, in 2012, Kotakoski and Myer studied the mechanical properties of polycrystalline graphene with a "realistic atomistic model", using molecular-dynamics (MD) simulation. To emulate the growth mechanism of CVD, they first randomly selected nucleation sites that are at least 5 Å (arbitrarily chosen) apart from other sites. Polycrystalline graphene was generated from these nucleation sites and was subsequently annealed at 3000 K, then quenched. Based on this model, they found that cracks are initiated at grain-boundary junctions, but the grain size does not significantly affect the strength. Second, in 2013, Z. Song et al. used MD simulations to study the mechanical properties of polycrystalline graphene with uniform-sized hexagon-shaped grains. The hexagon grains were oriented in various lattice directions and the GBs consisted of only heptagon, pentagon, and hexagonal carbon rings. The motivation behind such a model was that similar systems had been experimentally observed in graphene flakes grown on the surface of liquid copper. While they also noted that cracks are typically initiated at the triple junctions, they found that as the grain size decreases, the yield strength of graphene increases. Based on this finding, they proposed that polycrystalline graphene follows a pseudo Hall–Petch relationship. Third, in 2013, Z. D. Sha et al. studied the effect of grain size on the properties of polycrystalline graphene, by modelling the grain patches using Voronoi construction. The GBs in this model consisted of heptagons, pentagons, and hexagons, as well as squares, octagons, and vacancies. Through MD simulation, contrary to the aforementioned study, they found an inverse Hall–Petch relationship, where the strength of graphene increases as the grain size increases. Experimental observations and other theoretical predictions also gave differing conclusions, similar to the three given above. Such discrepancies show the complexity of the effects that grain size, arrangements of defects, and the nature of defects have on the mechanical properties of polycrystalline graphene.
Other properties.
Thermal conductivity.
Thermal transport in graphene is a burgeoning area of research, particularly for its potential applications in thermal management. Most experimental measurements have reported large uncertainties in the results of thermal conductivity due to limitations of the instruments used. Following predictions for graphene and related carbon nanotubes, early measurements of the thermal conductivity of suspended graphene reported an exceptionally large thermal conductivity up to , compared with the thermal conductivity of pyrolytic graphite of approximately at room temperature. However, later studies primarily on more scalable but more defective graphene derived by chemical vapor deposition have been unable to reproduce such high thermal conductivity measurements, producing a wide range of thermal conductivities between – for suspended single-layer graphene. The large range in the reported thermal conductivity can be caused by large measurement uncertainties as well as variations in the graphene quality and processing conditions.
In addition, it is known that when single-layer graphene is supported on an amorphous material, the thermal conductivity is reduced to about – at room temperature as a result of scattering of graphene lattice waves by the substrate, and can be even lower for few layer graphene encased in amorphous oxide. Likewise, polymeric residue can contribute to a similar decrease in the thermal conductivity of suspended graphene to approximately – for bilayer graphene.
Isotopic composition, specifically the ratio of 12C to 13C, significantly affects graphene's thermal conductivity. Isotopically pure 12C graphene exhibits higher thermal conductivity than either a 50:50 isotope ratio or the naturally occurring 99:1 ratio. It can be shown using the Wiedemann–Franz law that the thermal conduction is phonon-dominated. However, for a gated graphene strip, an applied gate bias causing a Fermi energy shift much larger than "k"B"T" can cause the electronic contribution to increase and dominate over the phonon contribution at low temperatures. The ballistic thermal conductance of graphene is isotropic.
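A back-of-the-envelope Wiedemann–Franz estimate illustrates why the conduction is phonon-dominated; the sheet resistance (~1 kΩ per square) and the effective thickness (0.335 nm) used below are typical assumed values, not figures from the studies discussed here:
\kappa_e = L\,\sigma\,T \approx (2.44 \times 10^{-8}\,\mathrm{W\,\Omega\,K^{-2}}) \times \frac{1}{(10^{3}\,\Omega)(3.35 \times 10^{-10}\,\mathrm{m})} \times 300\,\mathrm{K} \approx 20\,\mathrm{W\,m^{-1}\,K^{-1}},
which is orders of magnitude below the measured thermal conductivity of suspended graphene, so the remainder must be carried by phonons.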
Graphite, a 3D counterpart to graphene, exhibits a basal plane thermal conductivity exceeding (similar to diamond). In graphite, the c-axis (out-of-plane) thermal conductivity is a factor of ~100 smaller due to the weak binding forces between basal planes as well as the larger lattice spacing. In addition, the ballistic thermal conductance of graphene is shown to give the lower limit of the ballistic thermal conductances, per unit circumference and length, of carbon nanotubes.
Graphene's thermal conductivity is influenced by its three acoustic phonon modes: two in-plane modes (LA, TA) with a linear dispersion relation and one out-of-plane mode (ZA) with a quadratic dispersion relation. At low temperatures, the "T"1.5 thermal conductivity contribution of the out-of-plane mode dominates over the "T"2 dependence of the linear modes. Some graphene phonon bands exhibit negative Grüneisen parameters, resulting in a negative thermal expansion coefficient at low temperatures. The lowest negative Grüneisen parameters correspond to the lowest transverse acoustic ZA modes, whose frequencies increase with in-plane lattice parameter, akin to a stretched string with higher frequency vibrations.
Chemical.
Graphene has a theoretical specific surface area (SSA) of . This is much larger than that reported to date for carbon black (typically smaller than ) or for carbon nanotubes (CNTs), from ≈100 to , and is similar to that of activated carbon.
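The theoretical SSA can be obtained directly from the areal density quoted in the mechanical-properties section, counting both faces of the sheet (a back-of-the-envelope check rather than a sourced derivation):
\mathrm{SSA} = \frac{2}{\rho_{2\mathrm{D}}} = \frac{2}{7.63 \times 10^{-4}\,\mathrm{g\,m^{-2}}} \approx 2.6 \times 10^{3}\,\mathrm{m^2\,g^{-1}},
consistent with the commonly quoted theoretical value of about 2630 m²/g.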
Graphene is the only form of carbon (or solid material) in which every atom is available for chemical reaction from two sides (due to the 2D structure). Atoms at the edges of a graphene sheet have special chemical reactivity. Graphene has the highest ratio of edge atoms of any allotrope. Defects within a sheet increase its chemical reactivity. The onset temperature of reaction between the basal plane of single-layer graphene and oxygen gas is below . Graphene burns at very low temperature (e.g., ). Graphene is commonly modified with oxygen- and nitrogen-containing functional groups and analyzed by infrared spectroscopy and X-ray photoelectron spectroscopy. However, determination of structures of graphene with oxygen- and nitrogen-containing functional groups requires the structures to be well controlled.
In 2013, Stanford University physicists reported that single-layer graphene is a hundred times more chemically reactive than thicker multilayer sheets.
Graphene can self-repair holes in its sheets, when exposed to molecules containing carbon, such as hydrocarbons. Bombarded with pure carbon atoms, the atoms perfectly align into hexagons, completely filling the holes.
Biological.
Despite the promising results in different cell studies and proof of concept studies, there is still incomplete understanding of the full biocompatibility of graphene based materials. Different cell lines react differently when exposed to graphene, and it has been shown that the lateral size, form and surface chemistry of graphene flakes can elicit different biological responses on the same cell line.
There are indications that graphene has promise as a useful material for interacting with neural cells; studies on cultured neural cells show limited success.
Graphene also has some utility in osteogenics. Researchers at the Graphene Research Centre at the National University of Singapore (NUS) discovered in 2011 the ability of graphene to accelerate the osteogenic differentiation of human mesenchymal stem cells without the use of biochemical inducers.
Graphene can be used in biosensors; in 2015, researchers demonstrated that a graphene-based sensor can be used to detect a cancer risk biomarker. In particular, by using epitaxial graphene on silicon carbide, they were able to repeatably detect 8-hydroxydeoxyguanosine (8-OHdG), a DNA damage biomarker.
Support substrate.
The electronic properties of graphene can be significantly influenced by the supporting substrate. Studies of graphene monolayers on clean and hydrogen(H)-passivated silicon (100) (Si(100)/H) surfaces have been performed. The Si(100)/H surface does not perturb the electronic properties of graphene, whereas the interaction between the clean Si(100) surface and graphene changes the electronic states of graphene significantly. This effect results from the covalent bonding between C and surface Si atoms, modifying the π-orbital network of the graphene layer. The local density of states shows that the bonded C and Si surface states are highly disturbed near the Fermi energy.
Graphene layers and structural variants.
Monolayer sheets.
In 2013 a group of Polish scientists presented a production unit that allows the manufacture of continuous monolayer sheets. The process is based on graphene growth on a liquid metal matrix. The product of this process was called High Strength Metallurgical Graphene. In a study published in Nature, researchers used a single-layer graphene electrode and a novel surface-sensitive non-linear spectroscopy technique to investigate the topmost water layer at the electrochemically charged surface. They found that the interfacial water response to an applied electric field is asymmetric with respect to the nature of the applied field.
Bilayer graphene.
Bilayer graphene displays the anomalous quantum Hall effect, a tunable band gap and potential for excitonic condensation – making it a promising candidate for optoelectronic and nanoelectronic applications. Bilayer graphene can typically be found either in twisted configurations, where the two layers are rotated relative to each other, or in graphitic Bernal-stacked configurations, where half the atoms in one layer lie atop half the atoms in the other. Stacking order and orientation govern the optical and electronic properties of bilayer graphene.
One way to synthesize bilayer graphene is via chemical vapor deposition, which can produce large bilayer regions that almost exclusively conform to a Bernal stack geometry.
It has been shown that the two graphene layers can withstand significant strain or doping mismatch, which ultimately should lead to their exfoliation.
Turbostratic.
Turbostratic graphene exhibits weak interlayer coupling, and the interlayer spacing is increased with respect to Bernal-stacked multilayer graphene. Rotational misalignment preserves the 2D electronic structure, as confirmed by Raman spectroscopy. The D peak is very weak, whereas the 2D and G peaks remain prominent. A rather peculiar feature is that the I2D/IG ratio can exceed 10. However, most importantly, the M peak, which originates from AB stacking, is absent, whereas the TS1 and TS2 modes are visible in the Raman spectrum. The material is formed through conversion of non-graphenic carbon into graphenic carbon without providing sufficient energy to allow for the reorganization through annealing of adjacent graphene layers into crystalline graphitic structures.
Graphene superlattices.
Periodically stacked graphene and its insulating isomorph provide a fascinating structural element in implementing highly functional superlattices at the atomic scale, which offers possibilities in designing nanoelectronic and photonic devices. Various types of superlattices can be obtained by stacking graphene and its related forms. The energy band in layer-stacked superlattices is found to be more sensitive to the barrier width than that in conventional III–V semiconductor superlattices. When adding more than one atomic layer to the barrier in each period, the coupling of electronic wavefunctions in neighboring potential wells can be significantly reduced, which leads to the degeneration of continuous subbands into quantized energy levels. When varying the well width, the energy levels in the potential wells along the L-M direction behave distinctly from those along the K-H direction.
A superlattice corresponds to a periodic or quasi-periodic arrangement of different materials, and can be described by a superlattice period which confers a new translational symmetry to the system, impacting their phonon dispersions and subsequently their thermal transport properties.
Recently, uniform monolayer graphene-hBN structures have been successfully synthesized via lithography patterning coupled with chemical vapor deposition (CVD).
Furthermore, superlattices of graphene-hBN are ideal model systems for the realization and understanding of coherent (wave-like) and incoherent (particle-like) phonon thermal transport.
Nanostructured graphene forms.
Graphene nanoribbons.
Graphene nanoribbons ("nanostripes" in the "zig-zag"/"zigzag" orientation), at low temperatures, show spin-polarized metallic edge currents, which also suggests applications in the new field of spintronics. (In the "armchair" orientation, the edges behave like semiconductors.)
Graphene quantum dots.
A graphene quantum dot (GQD) is a graphene fragment with size less than 100 nm. The properties of GQDs are different from those of 'bulk' graphene due to quantum confinement effects, which only become apparent at sizes smaller than 100 nm.
Modified and functionalized graphene.
Graphene oxide.
Graphene oxide is usually produced through chemical exfoliation of graphite. A particularly popular technique is the improved Hummers' method. Using paper-making techniques on dispersed, oxidized and chemically processed graphite in water, the monolayer flakes form a single sheet and create strong bonds. These sheets, called graphene oxide paper, have a measured tensile modulus of 32 GPa. The chemical properties of graphite oxide are related to the functional groups attached to graphene sheets. These can change the polymerization pathway and similar chemical processes. Graphene oxide flakes in polymers display enhanced photo-conducting properties. Graphene is normally hydrophobic and impermeable to all gases and liquids (vacuum-tight). However, when formed into a graphene oxide-based capillary membrane, both liquid water and water vapor flow through as quickly as if the membrane was not present.
In 2022, an evaluation of the biological effects of graphene oxide using Drosophila was performed. Graphene oxide at low doses was evaluated for its biological effects on the larvae and imago of Drosophila melanogaster. Oral administration of graphene oxide at concentrations of 0.02–1% had a beneficial effect on the developmental rate and hatching ability of larvae. Long-term administration of a low dose of graphene oxide extended Drosophila lifespan and significantly enhanced resistance to environmental stresses. These results suggest that graphene oxide affects carbohydrate and lipid metabolism in adult Drosophila. These findings might provide a useful reference for assessing the biological effects of graphene oxide, which could play an important role in a variety of graphene-based biomedical applications.
Chemical modification.
Soluble fragments of graphene can be prepared in the laboratory through chemical modification of graphite. First, microcrystalline graphite is treated with an acidic mixture of sulfuric acid and nitric acid. A series of oxidation and exfoliation steps produce small graphene plates with carboxyl groups at their edges. These are converted to acid chloride groups by treatment with thionyl chloride; next, they are converted to the corresponding graphene amide via treatment with octadecylamine. The resulting material (circular graphene layers of thickness) is soluble in tetrahydrofuran, tetrachloromethane and dichloroethane.
Refluxing single-layer graphene oxide (SLGO) in solvents leads to size reduction and folding of individual sheets as well as loss of carboxylic group functionality, by up to 20%, indicating thermal instabilities of SLGO sheets dependent on their preparation methodology. When using thionyl chloride, acyl chloride groups result, which can then form aliphatic and aromatic amides with a reactivity conversion of around 70–80%.
Hydrazine reflux is commonly used for reducing SLGO to SLG(R), but titrations show that only around 20–30% of the carboxylic groups are lost, leaving a significant number available for chemical attachment. Analysis of SLG(R) generated by this route reveals that the system is unstable, and room-temperature stirring with HCl (< 1.0 M) leads to around 60% loss of COOH functionality. Room temperature treatment of SLGO with carbodiimides leads to the collapse of the individual sheets into star-like clusters that exhibited poor subsequent reactivity with amines (c. 3–5% conversion of the intermediate to the final amide). It is apparent that conventional chemical treatment of carboxylic groups on SLGO generates morphological changes of individual sheets that leads to a reduction in chemical reactivity, which may potentially limit their use in composite synthesis. Therefore, alternative chemical reaction types have been explored. SLGO has also been grafted with polyallylamine, cross-linked through epoxy groups. When filtered into graphene oxide paper, these composites exhibit increased stiffness and strength relative to unmodified graphene oxide paper.
Full hydrogenation from both sides of graphene sheet results in graphane, but partial hydrogenation leads to hydrogenated graphene. Similarly, both-side fluorination of graphene (or chemical and mechanical exfoliation of graphite fluoride) leads to fluorographene (graphene fluoride), while partial fluorination (generally halogenation) provides fluorinated (halogenated) graphene.
Graphene ligand/complex.
Graphene can be a ligand to coordinate metals and metal ions by introducing functional groups. Structures of graphene ligands are similar to e.g. metal-porphyrin complex, metal-phthalocyanine complex, and metal-phenanthroline complex. Copper and nickel ions can be coordinated with graphene ligands.
Advanced graphene structures.
Graphene fiber.
In 2011, researchers reported a novel yet simple approach to fabricate graphene fibers from chemical vapor deposition grown graphene films. The method was scalable and controllable, delivering tunable morphology and pore structure by controlling the evaporation of solvents with suitable surface tension. Flexible all-solid-state supercapacitors based on these graphene fibers were demonstrated in 2013.
In 2015, small graphene fragments were intercalated into the gaps formed by larger, coiled graphene sheets; after annealing, this provided pathways for conduction, while the fragments helped reinforce the fibers. The resulting fibers offered better thermal and electrical conductivity and mechanical strength. Thermal conductivity reached , while tensile strength reached .
In 2016, kilometer-scale continuous graphene fibers with outstanding mechanical properties and excellent electrical conductivity were produced by high-throughput wet-spinning of graphene oxide liquid crystals followed by graphitization through a full-scale synergetic defect-engineering strategy. These high-performance graphene fibers promise wide applications in functional textiles, lightweight motors, microelectronic devices, etc.
A team at Tsinghua University in Beijing, led by Wei Fei of the Department of Chemical Engineering, claims to be able to create a carbon nanotube fibre which has a tensile strength of .
3D graphene.
In 2013, a three-dimensional honeycomb of hexagonally arranged carbon was termed 3D graphene, and self-supporting 3D graphene was also produced. 3D structures of graphene can be fabricated by using either CVD or solution based methods. A 2016 review by Khurram and Xu et al. provided a summary of then-state-of-the-art techniques for fabrication of the 3D structure of graphene and other related two-dimensional materials.
In 2013, researchers at Stony Brook University reported a novel radical-initiated crosslinking method to fabricate porous 3D free-standing architectures of graphene and carbon nanotubes using nanomaterials as building blocks without any polymer matrix as support. These 3D graphene (all-carbon) scaffolds/foams have applications in several fields such as energy storage, filtration, thermal management and biomedical devices and implants.
A box-shaped graphene (BSG) nanostructure, appearing after mechanical cleavage of pyrolytic graphite, was reported in 2016. The discovered nanostructure is a multilayer system of parallel hollow nanochannels located along the surface and having a quadrangular cross-section. The thickness of the channel walls is approximately equal to 1 nm. Potential fields of BSG application include: ultra-sensitive detectors, high-performance catalytic cells, nanochannels for DNA sequencing and manipulation, high-performance heat sinking surfaces, rechargeable batteries of enhanced performance, nanomechanical resonators, electron multiplication channels in emission nanoelectronic devices, and high-capacity sorbents for safe hydrogen storage.
Three dimensional bilayer graphene has also been reported.
Pillared graphene.
Pillared graphene is a hybrid carbon structure consisting of an oriented array of carbon nanotubes connected at each end to a sheet of graphene. It was first described theoretically by George Froudakis and colleagues of the University of Crete in Greece in 2008. Pillared graphene has not yet been synthesised in the laboratory, but it has been suggested that it may have useful electronic properties, or serve as a hydrogen storage material.
Reinforced graphene.
Graphene reinforced with embedded carbon nanotube reinforcing bars ("rebar") is easier to manipulate, while improving the electrical and mechanical qualities of both materials.
Functionalized single- or multiwalled carbon nanotubes are spin-coated on copper foils and then heated and cooled, using the nanotubes themselves as the carbon source. Under heating, the functional carbon groups decompose into graphene, while the nanotubes partially split and form in-plane covalent bonds with the graphene, adding strength. π–π stacking domains add more strength. The nanotubes can overlap, making the material a better conductor than standard CVD-grown graphene. The nanotubes effectively bridge the grain boundaries found in conventional graphene. The technique eliminates the traces of substrate on which later-separated sheets were deposited using epitaxy.
Stacks of a few layers have been proposed as a cost-effective and physically flexible replacement for indium tin oxide (ITO) used in displays and photovoltaic cells.
Molded graphene.
In 2015, researchers from the University of Illinois at Urbana-Champaign (UIUC) developed a new approach for forming 3D shapes from flat, 2D sheets of graphene. A film of graphene that had been soaked in solvent to make it swell and become malleable was overlaid on an underlying substrate "former". The solvent evaporated over time, leaving behind a layer of graphene that had taken on the shape of the underlying structure. In this way they were able to produce a range of relatively intricate micro-structured shapes. Features vary from 3.5 to 50 μm. Pure graphene and gold-decorated graphene were each successfully integrated with the substrate.
Specialized graphene configurations.
Graphene aerogel.
An aerogel made of graphene layers separated by carbon nanotubes was measured at 0.16 milligrams per cubic centimeter. A solution of graphene and carbon nanotubes in a mold is freeze dried to dehydrate the solution, leaving the aerogel. The material has superior elasticity and absorption. It can recover completely after more than 90% compression, and absorb up to 900 times its weight in oil, at a rate of 68.8 grams per second.
Graphene nanocoil.
In 2015, a coiled form of graphene was discovered in graphitic carbon (coal). The spiraling effect is produced by defects in the material's hexagonal grid that cause it to spiral along its edge, mimicking a Riemann surface, with the graphene surface approximately perpendicular to the axis. When voltage is applied to such a coil, current flows around the spiral, producing a magnetic field. The phenomenon applies to spirals with either zigzag or armchair patterns, although with different current distributions. Computer simulations indicated that a conventional spiral inductor of 205 microns in diameter could be matched by a nanocoil just 70 nanometers wide, with a field strength reaching as much as 1 tesla.
The nano-solenoids analyzed through computer models at Rice should be capable of producing powerful magnetic fields of about 1 tesla, about the same as the coils found in typical loudspeakers, according to Yakobson and his team – and about the same field strength as some MRI machines. They found the magnetic field would be strongest in the hollow, nanometer-wide cavity at the spiral's center.
A solenoid made with such a coil behaves as a quantum conductor whose current distribution between the core and exterior varies with applied voltage, resulting in nonlinear inductance.
Crumpled graphene.
In 2016, Brown University introduced a method for 'crumpling' graphene, adding wrinkles to the material on a nanoscale. This was achieved by depositing layers of graphene oxide onto a shrink film that was then shrunk; the film was dissolved and the graphene was shrunk again on another sheet of film. The crumpled graphene became superhydrophobic, and, when used as a battery electrode, the material was shown to have as much as a 400% increase in electrochemical current density.
Production.
A rapidly increasing list of production techniques has been developed to enable graphene's use in commercial applications.
Isolated 2D crystals cannot be grown via chemical synthesis beyond small sizes even in principle, because the rapid growth of phonon density with increasing lateral size forces 2D crystallites to bend into the third dimension. In all cases, graphene must bond to a substrate to retain its two-dimensional shape.
Small graphene structures, such as graphene quantum dots and nanoribbons, can be produced by "bottom up" methods that assemble the lattice from organic molecule monomers (e. g. citric acid, glucose). "Top down" methods, on the other hand, cut bulk graphite and graphene materials with strong chemicals (e. g. mixed acids).
Exfoliation techniques.
Mechanical exfoliation.
Geim and Novoselov initially used adhesive tape to pull graphene sheets away from graphite. Achieving single layers typically requires multiple exfoliation steps. After exfoliation, the flakes are deposited on a silicon wafer. Crystallites larger than 1 mm and visible to the naked eye can be obtained.
As of 2014, exfoliation produced graphene with the lowest number of defects and highest electron mobility.
Alternatively, a sharp single-crystal diamond wedge can penetrate into the graphite source to cleave layers.
In 2014 defect-free, unoxidized graphene-containing liquids were made from graphite using mixers that produce local shear rates greater than .
Shear exfoliation is another method: by using a rotor–stator mixer, scalable production of defect-free graphene has become possible. It has been shown that, as turbulence is not necessary for mechanical exfoliation, low-speed ball milling is effective in the production of high-yield, water-soluble graphene.
Liquid phase exfoliation.
Liquid phase exfoliation (LPE) is a relatively simple method which involves dispersing graphite in a liquid medium to produce graphene by sonication or high shear mixing, followed by centrifugation. Restacking is an issue with this technique unless solvents with appropriate surface energy are used (e.g. NMP).
Adding a surfactant to a solvent prior to sonication prevents restacking by adsorbing to the graphene's surface. This produces a higher graphene concentration, but removing the surfactant requires chemical treatments.
LPE results in nanosheets with a broad size distribution and thicknesses roughly in the range of 1-10 monolayers. However, liquid cascade centrifugation can be used to size select the suspensions and achieve monolayer enrichment.
Sonicating graphite at the interface of two immiscible liquids, most notably heptane and water, produced macro-scale graphene films. The graphene sheets are adsorbed to the high energy interface between the materials and are kept from restacking. The sheets are up to about 95% transparent and conductive.
With definite cleavage parameters, the box-shaped graphene (BSG) nanostructure can be prepared on graphite crystal.
A major advantage of LPE is that it can be used to exfoliate many inorganic 2D materials beyond graphene, e.g. BN, MoS2, WS2.
Splitting monolayer carbon.
Graphene can be created by opening carbon nanotubes by cutting or etching. In one such method multi-walled carbon nanotubes are cut open in solution by action of potassium permanganate and sulfuric acid.
In 2014, carbon nanotube-reinforced graphene was made via spin coating and annealing functionalized carbon nanotubes.
Another approach sprays buckyballs at supersonic speeds onto a substrate. The balls cracked open upon impact, and the resulting unzipped cages then bonded together to form a graphene film.
Chemical synthesis methods.
Graphite oxide reduction.
P. Boehm reported producing monolayer flakes of reduced graphene oxide in 1962. Rapid heating of graphite oxide and exfoliation yields highly dispersed carbon powder with a few percent of graphene flakes.
Another method is reduction of graphite oxide monolayer films, e.g. by hydrazine with annealing in argon/hydrogen, which leaves an almost intact carbon framework while efficiently removing functional groups. Measured charge carrier mobility exceeded 1,000 cm²/(V·s) (10⁻¹ m²/(V·s)).
Burning a graphite oxide coated DVD produced a conductive graphene film (1,738 siemens per meter) and specific surface area (1,520 square meters per gram) that was highly resistant and malleable.
A dispersed reduced graphene oxide suspension was synthesized in water by a hydrothermal dehydration method without using any surfactant. The approach is facile, industrially applicable, environmentally friendly and cost effective. Viscosity measurements confirmed that the graphene colloidal suspension (graphene nanofluid) exhibits Newtonian behavior, with the viscosity showing close resemblance to that of water.
Molten salts.
Graphite particles can be corroded in molten salts to form a variety of carbon nanostructures including graphene. Hydrogen cations, dissolved in molten lithium chloride, can be discharged on cathodically polarized graphite rods, which then intercalate, peeling graphene sheets. The graphene nanosheets produced displayed a single-crystalline structure with a lateral size of several hundred nanometers and a high degree of crystallinity and thermal stability.
Electrochemical synthesis.
Electrochemical synthesis can exfoliate graphene. Varying a pulsed voltage controls thickness, flake area and number of defects, which in turn affect its properties. The process begins by bathing the graphite in a solvent for intercalation. The process can be tracked by monitoring the solution's transparency with an LED and photodiode.
Hydrothermal self-assembly.
Graphene has been prepared by using a sugar (e.g. glucose, fructose, etc.). This substrate-free "bottom-up" synthesis is safer, simpler and more environmentally friendly than exfoliation. The method, known as the "Tang-Lau method", can control thickness, ranging from monolayer to multilayers.
Sodium ethoxide pyrolysis.
Gram-quantities were produced by the reaction of ethanol with sodium metal, followed by pyrolysis and washing with water.
Microwave-assisted oxidation.
In 2012, microwave energy was reported to directly synthesize graphene in one step. This approach avoids use of potassium permanganate in the reaction mixture. It was also reported that by microwave radiation assistance, graphene oxide with or without holes can be synthesized by controlling microwave time. Microwave heating can dramatically shorten the reaction time from days to seconds.
Graphene can also be made by microwave assisted hydrothermal pyrolysis.
Thermal decomposition of silicon carbide.
Heating silicon carbide (SiC) to high temperatures () under low pressures (c. 10−6 torr, or 10−4 Pa) reduces it to graphene.
Vapor deposition and growth techniques.
Chemical vapor deposition.
Epitaxy.
Epitaxial graphene growth on silicon carbide is a wafer-scale technique to produce graphene. Epitaxial graphene may be coupled to surfaces weakly enough (by the active valence electrons that create van der Waals forces) to retain the two-dimensional electronic band structure of isolated graphene.
A normal silicon wafer coated with a layer of germanium (Ge) dipped in dilute hydrofluoric acid strips the naturally forming germanium oxide groups, creating hydrogen-terminated germanium. CVD can coat that with graphene.
Graphene can also be synthesized directly on the insulator TiO2, which has a high dielectric constant (high-κ): a two-step CVD process has been shown to grow graphene directly on TiO2 crystals or exfoliated TiO2 nanosheets without using any metal catalyst.
Metal substrates.
CVD graphene can be grown on metal substrates including ruthenium, iridium, nickel and copper.
Roll-to-roll.
In 2014, a two-step roll-to-roll manufacturing process was announced. The first roll-to-roll step produces the graphene via chemical vapor deposition. The second step binds the graphene to a substrate.
Cold wall.
Growing graphene in an industrial resistive-heating cold wall CVD system was claimed to produce graphene 100 times faster than conventional CVD systems, cut costs by 99% and produce material with enhanced electronic qualities.
Wafer scale CVD graphene.
CVD graphene is scalable and has been grown on deposited Cu thin film catalyst on 100 to 300 mm standard Si/SiO2 wafers on an Axitron Black Magic system. Monolayer graphene coverage of >95% is achieved on 100 to 300 mm wafer substrates with negligible defects, confirmed by extensive Raman mapping.
Solvent interface trapping method (SITM).
Reported by a group led by D. H. Adamson, graphene can be produced from natural graphite while preserving the integrity of the sheets using the solvent interface trapping method (SITM). SITM uses a high-energy interface, such as that between oil and water, to exfoliate graphite to graphene. Stacked graphite delaminates, or spreads, at the oil/water interface to produce few-layer graphene in a thermodynamically favorable process, in much the same way as small-molecule surfactants spread to minimize the interfacial energy. In this way, graphene behaves like a 2D surfactant. SITM has been reported for a variety of applications such as conductive polymer-graphene foams, conductive polymer-graphene microspheres, conductive thin films and conductive inks.
Carbon dioxide reduction.
A highly exothermic reaction combusts magnesium in an oxidation–reduction reaction with carbon dioxide, producing carbon nanoparticles including graphene and fullerenes.
Supersonic spray.
Supersonic acceleration of droplets through a Laval nozzle was used to deposit reduced graphene-oxide on a substrate. The energy of the impact rearranges the carbon atoms into flawless graphene.
Laser.
In 2014, a CO2 infrared laser was used to produce patterned porous three-dimensional laser-induced graphene (LIG) film networks from commercial polymer films. The resulting material exhibits high electrical conductivity and surface area. The laser induction process is compatible with roll-to-roll manufacturing processes. A similar material, laser-induced graphene fibers (LIGF), was reported in 2018.
Flash Joule heating.
In 2019, flash Joule heating (transient high-temperature electrothermal heating) was discovered to be a method to synthesize turbostratic graphene in bulk powder form. The method involves electrothermally converting various carbon sources, such as carbon black, coal, and food waste into micron-scale flakes of graphene. More recent works demonstrated the use of mixed plastic waste, waste rubber tires, and pyrolysis ash as carbon feedstocks. The graphenization process is kinetically controlled, and the energy dose is chosen to preserve the carbon in its graphenic state (excessive energy input leads to subsequent graphitization through annealing).
Ion implantation.
Accelerating carbon ions inside an electrical field into a semiconductor made of thin nickel films on a substrate of SiO2/Si, creates a wafer-scale () wrinkle/tear/residue-free graphene layer at a relatively low temperature of 500 °C.
CMOS-compatible graphene.
Integration of graphene in the widely employed CMOS fabrication process demands its transfer-free direct synthesis on dielectric substrates at temperatures below 500 °C. At the IEDM 2018, researchers from University of California, Santa Barbara, demonstrated a novel CMOS-compatible graphene synthesis process at 300 °C suitable for back-end-of-line (BEOL) applications. The process involves pressure-assisted solid-state diffusion of carbon through a thin-film of metal catalyst. The synthesized large-area graphene films were shown to exhibit high-quality (via Raman characterization) and similar resistivity values when compared with high-temperature CVD synthesized graphene films of same cross-section down to widths of 20 nm.
Simulation.
In addition to experimental investigation of graphene and graphene-based devices, their numerical modeling and simulation have been an important research topic. The Kubo formula provides an analytic expression for graphene's conductivity and shows that it is a function of several physical parameters including wavelength, temperature, and chemical potential. Moreover, a surface conductivity model, which describes graphene as an infinitesimally thin (two sided) sheet with a local and isotropic conductivity, has been proposed. This model permits derivation of analytical expressions for the electromagnetic field in the presence of a graphene sheet in terms of a dyadic Green function (represented using Sommerfeld integrals) and exciting electric current. Even though these analytical models and methods can provide results for several canonical problems for benchmarking purposes, many practical problems involving graphene, such as design of arbitrarily shaped electromagnetic devices, are analytically intractable. With the recent advances in the field of computational electromagnetics (CEM), various accurate and efficient numerical methods have become available for analysis of electromagnetic field/wave interactions on graphene sheets and/or graphene-based devices. A comprehensive summary of computational tools developed for analyzing graphene-based devices/systems has also been published.
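As an illustration of such surface-conductivity modeling, the sketch below evaluates only the intraband (Drude-like) term of graphene's Kubo conductivity in the closed form commonly used in the computational-electromagnetics literature; an exp(−iωt) time convention and a phenomenological scattering time are assumed, and the interband term (important at optical frequencies) is omitted.

import math

# Physical constants (SI units)
E_CHARGE = 1.602176634e-19   # elementary charge, C
K_B = 1.380649e-23           # Boltzmann constant, J/K
HBAR = 1.054571817e-34       # reduced Planck constant, J*s

def sigma_intraband(omega, mu_c, tau, temperature):
    """Intraband (Drude-like) part of graphene's sheet conductivity, in siemens.

    omega: angular frequency (rad/s); mu_c: chemical potential (J);
    tau: phenomenological scattering time (s); temperature in kelvin.
    """
    x = mu_c / (2.0 * K_B * temperature)
    # Drude weight: (2 e^2 k_B T / pi hbar^2) * ln(2 cosh(mu_c / (2 k_B T)))
    weight = (2.0 * E_CHARGE**2 * K_B * temperature / (math.pi * HBAR**2)) \
        * math.log(2.0 * math.cosh(x))
    return weight * 1j / (omega + 1j / tau)

# Example: mu_c = 0.2 eV, tau = 0.1 ps, T = 300 K, f = 1 THz
omega = 2.0 * math.pi * 1e12
print(sigma_intraband(omega, 0.2 * E_CHARGE, 1e-13, 300.0))

In the zero-frequency limit this reduces to the familiar Drude form, and at large chemical potential the conductivity scales with the Fermi level, which is what makes the response electrically tunable.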
Graphene analogs.
Graphene analogs (also referred to as "artificial graphene") are two-dimensional systems which exhibit similar properties to graphene. Graphene analogs have been studied intensively since the discovery of graphene in 2004. Researchers try to develop systems in which the physics is easier to observe and to manipulate than in graphene. In those systems, the particles used are not always electrons. They might be optical photons, microwave photons, plasmons, microcavity polaritons, or even atoms. Also, the honeycomb structure in which those particles evolve can be of a different nature than carbon atoms in graphene. It can be, respectively, a photonic crystal, an array of metallic rods, metallic nanoparticles, a lattice of coupled microcavities, or an optical lattice.
Applications.
Graphene is a transparent and flexible conductor that holds great promise for various material/device applications, including solar cells, light-emitting diodes (LED), integrated photonic circuit devices, touch panels, and smart windows or phones. Smartphone products with graphene touch screens are already on the market.
In 2013, Head announced their new range of graphene tennis racquets.
As of 2015, there is one product available for commercial use: a graphene-infused printer powder. Many other uses for graphene have been proposed or are under development, in areas including electronics, biological engineering, filtration, lightweight/strong composite materials, photovoltaics and energy storage. Graphene is often produced as a powder and as a dispersion in a polymer matrix. This dispersion is supposedly suitable for advanced composites, paints and coatings, lubricants, oils and functional fluids, capacitors and batteries, thermal management applications, display materials and packaging, solar cells, inks and 3D-printers' materials, and barriers and films.
On August 2, 2016, BAC's new Mono model was reported to be made out of graphene, a first for both a street-legal track car and a production car.
In January 2018, graphene based spiral inductors exploiting kinetic inductance at room temperature were first demonstrated at the University of California, Santa Barbara, led by Kaustav Banerjee. These inductors were predicted to allow significant miniaturization in radio-frequency integrated circuit applications.
The potential of epitaxial graphene on SiC for metrology has been shown since 2010, displaying quantum Hall resistance quantization accuracy of three parts per billion in monolayer epitaxial graphene. Over the years precisions of parts-per-trillion in the Hall resistance quantization and giant quantum Hall plateaus have been demonstrated. Developments in encapsulation and doping of epitaxial graphene have led to the commercialisation of epitaxial graphene quantum resistance standards.
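For reference, the resistance realized in such quantum Hall standards follows from the quantized Hall conductivity formula_11; on the plateau conventionally used for resistance metrology (filling factor ν = 2), and taking the standard values of h and e,
R_{xy} = \frac{h}{\nu e^2} = \frac{h}{2 e^2} \approx 12\,906.4\,\Omega.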
Novel uses for graphene continue to be researched and explored. One such use is in combination with water-based epoxy resins to produce anticorrosive coatings. The van der Waals nature of graphene and other two-dimensional (2D) materials also permits van der Waals heterostructures and integrated circuits based on van der Waals integration of 2D materials.
Graphene is utilized in detecting gasses and chemicals in environmental monitoring, developing highly sensitive biosensors for medical diagnostics, and creating flexible, wearable sensors for health monitoring. Graphene's transparency also enhances optical sensors, making them more effective in imaging and spectroscopy.
Toxicity.
One review on graphene toxicity published in 2016 by Lalwani et al. summarizes the in vitro, in vivo, antimicrobial and environmental effects and highlights the various mechanisms of graphene toxicity. Another review published in 2016 by Ou et al. focused on graphene-family nanomaterials (GFNs) and revealed several typical mechanisms such as physical destruction, oxidative stress, DNA damage, inflammatory response, apoptosis, autophagy, and necrosis.
A 2020 study showed that the toxicity of graphene is dependent on several factors such as shape, size, purity, post-production processing steps, oxidative state, functional groups, dispersion state, synthesis methods, route and dose of administration, and exposure times.
In 2014, research at Stony Brook University showed that graphene nanoribbons, graphene nanoplatelets and graphene nano–onions are non-toxic at concentrations up to 50 μg/ml. These nanoparticles do not alter the differentiation of human bone marrow stem cells towards osteoblasts (bone) or adipocytes (fat) suggesting that at low doses graphene nanoparticles are safe for biomedical applications. In 2013 research at Brown University found that 10 μm few-layered graphene flakes are able to pierce cell membranes in solution. They were observed to enter initially via sharp and jagged points, allowing graphene to be internalized in the cell. The physiological effects of this remain unknown, and this remains a relatively unexplored field. | [
{
"math_id": 0,
"text": "E(k_x,k_y)=\\pm\\,\\gamma_0\\sqrt{1+4\\cos^2{\\tfrac{1}{2}ak_x}+4\\cos{\\tfrac{1}{2}ak_x} \\cdot \\cos{\\tfrac{\\sqrt{3}}{2}ak_y}}"
},
{
"math_id": 1,
"text": "v_F\\, \\vec \\sigma \\cdot \\nabla \\psi(\\mathbf{r})\\,=\\,E\\psi(\\mathbf{r})."
},
{
"math_id": 2,
"text": "\\vec{\\sigma}"
},
{
"math_id": 3,
"text": "\\psi(\\mathbf{r})"
},
{
"math_id": 4,
"text": "E(q)=\\hbar v_F q"
},
{
"math_id": 5,
"text": "q=\\left|\\mathbf{k}-\\mathrm{K}\\right|"
},
{
"math_id": 6,
"text": "4e^2/h"
},
{
"math_id": 7,
"text": "4e^2/{(\\pi}h)"
},
{
"math_id": 8,
"text": "\\sigma_{xy}"
},
{
"math_id": 9,
"text": "\\sigma_{xy}=\\pm {4\\cdot\\left(N + 1/2 \\right)e^2}/h "
},
{
"math_id": 10,
"text": "\\sigma_{xy}=\\pm {4\\cdot N\\cdot e^2}/h "
},
{
"math_id": 11,
"text": "\\sigma_{xy}=\\nu e^2/h"
},
{
"math_id": 12,
"text": "\\nu=0,\\pm {1},\\pm {4}"
},
{
"math_id": 13,
"text": "\\nu=3"
},
{
"math_id": 14,
"text": "\\nu=1/3"
},
{
"math_id": 15,
"text": "\\nu=0,\\pm 1,\\pm 3, \\pm 4"
}
] | https://en.wikipedia.org/wiki?curid=911833 |
9118440 | Dvoretzky's theorem | In mathematics, Dvoretzky's theorem is an important structural theorem about normed vector spaces proved by Aryeh Dvoretzky in the early 1960s, answering a question of Alexander Grothendieck. In essence, it says that every sufficiently high-dimensional normed vector space will have low-dimensional subspaces that are approximately Euclidean. Equivalently, every high-dimensional bounded symmetric convex set has low-dimensional sections that are approximately ellipsoids.
A new proof found by Vitali Milman in the 1970s was one of the starting points for the development of asymptotic geometric analysis (also called "asymptotic functional analysis" or the "local theory of Banach spaces").
Original formulations.
For every natural number "k" ∈ N and every "ε" > 0 there exists a natural number "N"("k", "ε") ∈ N such that if ("X", ‖·‖) is any normed space of dimension "N"("k", "ε"), there exists a subspace "E" ⊂ "X" of dimension "k" and a positive definite quadratic form "Q" on "E" such that the corresponding Euclidean norm
formula_0
on "E" satisfies:
formula_1
In terms of the multiplicative Banach-Mazur distance "d" the theorem's conclusion can be formulated as:
formula_2
where formula_3 denotes the standard "k"-dimensional Euclidean space.
Since the unit ball of every normed vector space is a bounded, symmetric, convex set and the unit ball of every Euclidean space is an ellipsoid, the theorem may also be formulated as a statement about ellipsoid sections of convex sets.
Further developments.
In 1971, Vitali Milman gave a new proof of Dvoretzky's theorem, making use of the concentration of measure on the sphere to show that a random "k"-dimensional subspace satisfies the above inequality with probability very close to 1. The proof gives the sharp dependence on "k":
formula_4
where the constant "C"("ε") only depends on "ε".
We can thus state: for every "ε" > 0 there exists a constant C(ε) > 0 such that for every normed space ("X", ‖·‖) of dimension "N", there exists a subspace "E" ⊂ "X" of dimension
"k" ≥ "C"("ε") log "N" and a Euclidean norm |⋅| on "E" such that
formula_1
More precisely, let "S""N" − 1 denote the unit sphere with respect to some Euclidean structure "Q" on "X", and let "σ" be the invariant probability measure on "S""N" − 1. Then:
formula_5
formula_6
Here "c"1 is a universal constant. For given "X" and "ε", the largest possible "k" is denoted "k"*("X") and called the Dvoretzky dimension of "X".
The dependence on "ε" was studied by Yehoram Gordon, who showed that "k"*("X") ≥ "c"2 "ε"2 log "N". Another proof of this result was given by Gideon Schechtman.
Noga Alon and Vitali Milman showed that the logarithmic bound on the dimension of the subspace in Dvoretzky's theorem can be significantly improved, if one is willing to accept a subspace that is close either to a Euclidean space or to a Chebyshev space. Specifically, for some constant "c", every "N"-dimensional space has a subspace of dimension "k" ≥ exp("c"√(log "N")) that is close either to the Euclidean space formula_3 or to the "k"-dimensional Chebyshev space (ℓ∞).
Important related results were proved by Tadeusz Figiel, Joram Lindenstrauss and Milman.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "| \\cdot | = \\sqrt{Q(\\cdot)} "
},
{
"math_id": 1,
"text": " |x| \\leq \\|x\\| \\leq (1+\\varepsilon)|x| \\quad \\text{for every} \\ x \\in E."
},
{
"math_id": 2,
"text": "d(E,\\ \\ell_k^2)\\leq 1+\\varepsilon "
},
{
"math_id": 3,
"text": "\\ell_k^2"
},
{
"math_id": 4,
"text": "N(k,\\varepsilon)\\leq\\exp(C(\\varepsilon)k)"
},
{
"math_id": 5,
"text": "k = \\dim E \\geq C(\\varepsilon) \\, \\left(\\frac{\\int_{S^{N-1}} \\| \\xi \\| \\, d\\sigma(\\xi)}{\\max_{\\xi \\in S^{N-1}} \\| \\xi \\|}\\right)^2 \\, N. "
},
{
"math_id": 6,
"text": " c_1 \\sqrt{\\frac{\\log N}{N}}."
}
] | https://en.wikipedia.org/wiki?curid=9118440 |
911887 | Metaballs | N-dimensional isosurfaces which can meld together
In computer graphics, metaballs, also known as blobby objects, are organic-looking "n"-dimensional isosurfaces, characterised by their ability to meld together when in close proximity to create single, contiguous objects.
In solid modelling, polygon meshes are commonly used. In certain instances, however, metaballs are superior. Their "blobby" appearance makes them versatile tools, often used to model organic objects and also to create base meshes for sculpting.
The technique for rendering metaballs was invented by Jim Blinn in the early 1980s to model atom interactions for Carl Sagan's 1980 TV series "Cosmos". It is also referred to colloquially as the "jelly effect" in the motion and UX design community, commonly appearing in UI elements such as navigations and buttons. Metaball behavior corresponds to mitosis in cell biology, in which a cell divides to generate identical copies of itself.
Definition.
Each metaball is defined as a function in "n" dimensions (e.g., for three dimensions, formula_0; three-dimensional metaballs tend to be most common, with two-dimensional implementations popular as well). A thresholding value is also chosen, to define a solid volume. Then,
formula_1
determines whether the volume enclosed by the surface defined by the metaballs is filled at formula_2 or not.
More informally: consider two metaballs ("circles") in two dimensions and a point "P". Let "X" be the influence of the first circle at "P" (for example, the reciprocal of the distance to its centre) and "Y" the influence of the second. If "X" + "Y" exceeds the threshold, then "P" is part of the metaball shape. Evaluating this test over all points of interest (typically on a grid) yields the rendered surface.
Implementation.
A typical function chosen for metaballs is the inverse-square law, that is, the contribution to the thresholding function falls off in a bell-shaped curve as the distance from the centre of the metaball increases.
For the three-dimensional case,
formula_3
where formula_4 is the center of the metaball. The fast inverse square root technique may be used in this calculation.
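As a concrete illustration (not part of the original article), the following Python sketch evaluates this inverse-distance falloff for a two-dimensional scene and applies the threshold test described above; the grid size, ball positions, and threshold value are arbitrary choices made for the example.

```python
import numpy as np

def metaball_field(points, centers):
    """Sum the inverse-distance contribution of every metaball at each point."""
    field = np.zeros(len(points))
    for cx, cy in centers:
        d = np.sqrt((points[:, 0] - cx) ** 2 + (points[:, 1] - cy) ** 2)
        field += 1.0 / np.maximum(d, 1e-9)  # guard against division by zero at a centre
    return field

# Two metaballs on a small grid; positions and threshold are arbitrary.
xs, ys = np.meshgrid(np.linspace(0.0, 4.0, 200), np.linspace(0.0, 4.0, 200))
points = np.column_stack([xs.ravel(), ys.ravel()])
field = metaball_field(points, centers=[(1.5, 2.0), (2.5, 2.0)])

threshold = 1.2
inside = field >= threshold  # True wherever the summed influence clears the threshold
print(int(inside.sum()), "of", len(points), "sample points lie inside the merged blob")
```

Extracting the boundary of the thresholded region (for example with the marching cubes algorithm mentioned below, or its 2D analogue) then gives the rendered surface.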
Various other falloff functions have historically been used for reasons of computational efficiency. Desirable properties of the function include:
More complicated models use a Gaussian potential constrained to a finite radius or a mixture of polynomials to achieve smoothness. The Soft Object model by the Wyvill brothers provides a higher degree of smoothness.
A simple generalization of metaballs is to apply the falloff curve to distance-from-lines or distance-from-surfaces.
There are a number of ways to render the metaballs to the screen. In the case of three dimensional metaballs, the two most common are brute force raycasting and the marching cubes algorithm.
2D metaballs were a very common demo effect in the 1990s. The effect is also available as an XScreensaver module.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f(x,y,z)"
},
{
"math_id": 1,
"text": "\\sum_{i} \\mbox{metaball}_i(x,y,z) \\leq \\mbox{threshold}"
},
{
"math_id": 2,
"text": "(x,y,z)"
},
{
"math_id": 3,
"text": "f(x,y,z) = 1 / \\sqrt{(x-x_0)^2 + (y-y_0)^2 + (z-z_0)^2}"
},
{
"math_id": 4,
"text": "(x_0, y_0, z_0)"
}
] | https://en.wikipedia.org/wiki?curid=911887 |
911960 | Kripke semantics | Formal semantics for non-classical logic systems
Kripke semantics (also known as relational semantics or frame semantics, and often confused with possible world semantics) is a formal semantics for non-classical logic systems created in the late 1950s and early 1960s by Saul Kripke and André Joyal. It was first conceived for modal logics, and later adapted to intuitionistic logic and other non-classical systems. The development of Kripke semantics was a breakthrough in the theory of non-classical logics, because the model theory of such logics was almost non-existent before Kripke (algebraic semantics existed, but were considered 'syntax in disguise').
Semantics of modal logic.
The language of propositional modal logic consists of a countably infinite set of propositional variables, a set of truth-functional connectives (in this article formula_0 and formula_1), and the modal operator formula_2 ("necessarily"). The modal operator formula_3 ("possibly") is (classically) the dual of formula_2 and may be defined in terms of necessity like so: formula_4 ("possibly A" is defined as equivalent to "not necessarily not A").
Basic definitions.
A Kripke frame or modal frame is a pair formula_5, where "W" is a (possibly empty) set, and "R" is a binary relation on "W". Elements
of "W" are called "nodes" or "worlds", and "R" is known as the accessibility relation.
A Kripke model is a triple formula_6, where
formula_5 is a Kripke frame, and formula_7 is a relation between nodes of "W" and modal formulas, such that for all "w" ∈ "W" and modal formulas "A" and "B":
formula_8 if and only if formula_9,
formula_10 if and only if formula_9 or formula_11,
formula_12 if and only if formula_13 for every formula_14 such that formula_15.
We read formula_16 as “"w" satisfies
"A"”, “"A" is satisfied in "w"”, or
“"w" forces "A"”. The relation formula_7 is called the
"satisfaction relation", "evaluation", or "forcing relation".
The satisfaction relation is uniquely determined by its
value on propositional variables.
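As an illustration (not from the original article), the satisfaction relation on a finite Kripke model can be computed directly from these clauses. The encoding of formulas as nested tuples and the particular two-world model below are arbitrary choices made for this sketch.

```python
# Formulas are nested tuples: ('var', 'p'), ('not', A), ('imp', A, B), ('box', A).
def forces(w, formula, R, V):
    """Does world w force the formula, given accessibility R and valuation V?"""
    kind = formula[0]
    if kind == 'var':
        return formula[1] in V.get(w, set())
    if kind == 'not':
        return not forces(w, formula[1], R, V)
    if kind == 'imp':
        return (not forces(w, formula[1], R, V)) or forces(w, formula[2], R, V)
    if kind == 'box':
        return all(forces(u, formula[1], R, V) for u in R.get(w, set()))
    raise ValueError(kind)

# A two-world model: w1 sees w2, and p holds only at w2.
R = {'w1': {'w2'}, 'w2': set()}
V = {'w2': {'p'}}

box_p = ('box', ('var', 'p'))
axiom_T = ('imp', box_p, ('var', 'p'))     # instance of schema T: box p -> p
print(forces('w1', box_p, R, V))           # True: every successor of w1 forces p
print(forces('w1', axiom_T, R, V))         # False: w1 does not access itself
```

In this model □"p" is forced at "w"1 because its only successor forces "p", while the instance □"p" → "p" of schema T fails at "w"1, anticipating the correspondence between T and reflexive frames discussed below.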
A formula "A" is valid in:
We define Thm("C") to be the set of all formulas that are valid in
"C". Conversely, if "X" is a set of formulas, let Mod("X") be the
class of all frames which validate every formula from "X".
A modal logic (i.e., a set of formulas) "L" is sound with
respect to a class of frames "C", if "L" ⊆ Thm("C"). "L" is
complete wrt "C" if "L" ⊇ Thm("C").
Correspondence and completeness.
Semantics is useful for investigating a logic (i.e. a derivation system) only if the semantic consequence relation reflects its syntactical counterpart, the "syntactic consequence" relation ("derivability"). It is vital to know which modal logics are sound and complete with respect to a class of Kripke frames, and to determine also which class that is.
For any class "C" of Kripke frames, Thm("C") is a normal modal logic (in particular, theorems of the minimal normal modal logic, "K", are valid in every Kripke model). However, the converse does not hold in general: while most of the modal systems studied are complete of classes of frames described by simple conditions,
Kripke incomplete normal modal logics do exist. A natural example of such a system is Japaridze's polymodal logic.
A normal modal logic "L" corresponds to a class of frames "C", if "C" = Mod("L"). In other words, "C" is the largest class of frames such that "L" is sound wrt "C". It follows that "L" is Kripke complete if and only if it is complete of its corresponding class.
Consider the schema T : formula_17.
T is valid in any reflexive frame formula_5: if
formula_18, then formula_16
since "w" "R" "w". On the other hand, a frame which
validates T has to be reflexive: fix "w" ∈ "W", and
define satisfaction of a propositional variable "p" as follows:
formula_19 if and only if "w" "R" "u". Then
formula_20, thus formula_21
by T, which means "w" "R" "w" using the definition of
formula_7. T corresponds to the class of reflexive
Kripke frames.
It is often much easier to characterize the corresponding class of "L" than to prove its completeness, thus correspondence serves as a guide to completeness proofs. Correspondence is also used to show "incompleteness" of modal logics: suppose "L"1 ⊆ "L"2 are normal modal logics that correspond to the same class of frames, but "L"1 does not prove all theorems of "L"2. Then "L"1 is Kripke incomplete. For example, the schema formula_22 generates an incomplete logic, as it
corresponds to the same class of frames as GL (viz. transitive and
converse well-founded frames), but does not prove the GL-tautology formula_23.
Common modal axiom schemata.
The following table lists common modal axioms together with their corresponding classes. The naming of the axioms often varies; Here, axiom K is named after Saul Kripke; axiom T is named after the truth axiom in epistemic logic; axiom D is named after deontic logic; axiom B is named after L. E. J. Brouwer; and axioms 4 and 5 are named based on C. I. Lewis's numbering of symbolic logic systems.
Axiom K can also be rewritten as formula_25, which logically establishes modus ponens as a rule of inference in every possible world.
Note that for axiom D, formula_26 implicitly implies formula_24, which means that for every possible world in the model, there is always at least one possible world accessible from it (which could be itself). This implicit implication formula_27 is similar to the implicit implication by existential quantifier on the range of quantification.
Canonical models.
For any normal modal logic, "L", a Kripke model (called the canonical model) can be constructed that refutes precisely the non-theorems of
"L", by an adaptation of the standard technique of using maximal consistent sets as models. Canonical Kripke models play a
role similar to the Lindenbaum–Tarski algebra construction in algebraic
semantics.
A set of formulas is "L"-"consistent" if no contradiction can be derived from it using the theorems of "L", and Modus Ponens. A "maximal L-consistent set" (an "L"-"MCS"
for short) is an "L"-consistent set that has no proper "L"-consistent superset.
The canonical model of "L" is a Kripke model
formula_6, where "W" is the set of all "L"-"MCS",
and the relations "R" and formula_7 are as follows:
formula_28 if and only if for every formula formula_29, if formula_30 then formula_31,
formula_32 if and only if formula_33.
The canonical model is a model of "L", as every "L"-"MCS" contains
all theorems of "L". By Zorn's lemma, each "L"-consistent set
is contained in an "L"-"MCS", in particular every formula
unprovable in "L" has a counterexample in the canonical model.
The main application of canonical models is completeness proofs. Properties of the canonical model of K immediately imply completeness of K with respect to the class of all Kripke frames.
This argument does "not" work for arbitrary "L", because there is no guarantee that the underlying "frame" of the canonical model satisfies the frame conditions of "L".
We say that a formula or a set "X" of formulas is canonical
with respect to a property "P" of Kripke frames, if
A union of canonical sets of formulas is itself canonical.
It follows from the preceding discussion that any logic axiomatized by
a canonical set of formulas is Kripke complete, and
compact.
The axioms T, 4, D, B, 5, H, G (and thus
any combination of them) are canonical. GL and Grz are not
canonical, because they are not compact. The axiom M by itself is
not canonical (Goldblatt, 1991), but the combined logic S4.1 (in
fact, even K4.1) is canonical.
In general, it is undecidable whether a given axiom is
canonical. We know a nice sufficient condition: Henrik Sahlqvist identified a broad class of formulas (now called
Sahlqvist formulas) such that every Sahlqvist formula is canonical, the class of frames corresponding to a Sahlqvist formula is first-order definable, and there is an algorithm that computes the corresponding frame condition for a given Sahlqvist formula.
This is a powerful criterion: for example, all axioms
listed above as canonical are (equivalent to) Sahlqvist formulas.
Finite model property.
A logic has the finite model property (FMP) if it is complete
with respect to a class of finite frames. An application of this
notion is the decidability question: it
follows from
Post's theorem that a recursively axiomatized modal logic "L"
which has FMP is decidable, provided it is decidable whether a given
finite frame is a model of "L". In particular, every finitely
axiomatizable logic with FMP is decidable.
There are various methods for establishing FMP for a given logic.
Refinements and extensions of the canonical model construction often
work, using tools such as filtration or
unravelling. As another possibility,
completeness proofs based on cut-free
sequent calculi usually produce finite models
directly.
Most of the modal systems used in practice (including all listed above) have FMP.
In some cases, we can use FMP to prove Kripke completeness of a logic:
every normal modal logic is complete with respect to a class of
modal algebras, and a "finite" modal algebra can be transformed
into a Kripke frame. As an example, Robert Bull proved using this method
that every normal extension of S4.3 has FMP, and is Kripke
complete.
Multimodal logics.
Kripke semantics has a straightforward generalization to logics with
more than one modality. A Kripke frame for a language with
formula_34 as the set of its necessity operators
consists of a non-empty set "W" equipped with binary relations
"Ri" for each "i" ∈ "I". The definition of a
satisfaction relation is modified as follows:
formula_35 if and only if formula_36
A simplified semantics, discovered by Tim Carlson, is often used for
polymodal provability logics. A Carlson model is a structure
formula_37
with a single accessibility relation "R", and subsets
"Di" ⊆ "W" for each modality. Satisfaction is
defined as
formula_35 if and only if formula_38
Carlson models are easier to visualize and to work with than usual
polymodal Kripke models; there are, however, Kripke complete polymodal
logics which are Carlson incomplete.
Semantics of intuitionistic logic.
Kripke semantics for intuitionistic logic follows the same principles as the semantics of modal logic, but it uses a different definition of satisfaction.
An intuitionistic Kripke model is a triple
formula_39, where formula_40 is a preordered Kripke frame, and formula_7 satisfies the following conditions:
The negation of "A", ¬"A", could be defined as an abbreviation for "A" → ⊥. If for all "u" such that "w" ≤ "u", not "u" ⊩ "A", then "w" ⊩ "A" → ⊥ is vacuously true, so "w" ⊩ ¬"A".
Intuitionistic logic is sound and complete with respect to its Kripke
semantics, and it has the finite model property.
Intuitionistic first-order logic.
Let "L" be a first-order language. A Kripke
model of "L" is a triple
formula_47, where
formula_40 is an intuitionistic Kripke frame, "Mw" is a
(classical) "L"-structure for each node "w" ∈ "W", and
the following compatibility conditions hold whenever "u" ≤ "v":
Given an evaluation "e" of variables by elements of "Mw", we
define the satisfaction relation formula_48:
Here "e"("x"→"a") is the evaluation which gives "x" the
value "a", and otherwise agrees with "e".
Kripke–Joyal semantics.
As part of the independent development of sheaf theory, it was realised around 1965 that Kripke semantics was intimately related to the treatment of existential quantification in topos theory. That is, the 'local' aspect of existence for sections of a sheaf was a kind of logic of the 'possible'. Though this development was the work of a number of people, the name Kripke–Joyal semantics is often used in this connection.
Model constructions.
As in classical model theory, there are methods for
constructing a new Kripke model from other models.
The natural homomorphisms in Kripke semantics are called
p-morphisms (which is short for "pseudo-epimorphism", but the
latter term is rarely used). A p-morphism of Kripke frames
formula_5 and formula_64 is a mapping
formula_65 such that
A p-morphism of Kripke models formula_6 and
formula_66 is a p-morphism of their
underlying frames formula_65, which
satisfies
formula_21 if and only if formula_67, for any propositional variable "p".
P-morphisms are a special kind of bisimulations. In general, a
bisimulation between frames formula_5 and
formula_64 is a relation
"B ⊆ W × W’", which satisfies
the following “zig-zag” property:
A bisimulation of models is additionally required to preserve forcing
of atomic formulas:
if "w B w’", then formula_21 if and only if formula_68, for any propositional variable "p".
The key property which follows from this definition is that
bisimulations (hence also p-morphisms) of models preserve the
satisfaction of "all" formulas, not only propositional variables.
We can transform a Kripke model into a tree using
unravelling. Given a model formula_6 and a fixed
node "w"0 ∈ "W", we define a model
formula_66, where "W’" is the
set of all finite sequences
formula_69 such
that "wi R wi+1" for all
"i" < "n", and formula_70 if and only if
formula_71 for a propositional variable
"p". The definition of the accessibility relation "R’"
varies; in the simplest case we put
formula_72,
but many applications need the reflexive and/or transitive closure of
this relation, or similar modifications.
Filtration is a useful construction used to prove FMP for many logics. Let "X" be a set of
formulas closed under taking subformulas. An "X"-filtration of a
model formula_6 is a mapping "f" from "W" to a model
formula_66 such that
It follows that "f" preserves satisfaction of all formulas from
"X". In typical applications, we take "f" as the projection
onto the quotient of "W" over the relation
"u ≡X v" if and only if for all "A" ∈ "X", formula_13 if and only if formula_74.
As in the case of unravelling, the definition of the accessibility
relation on the quotient varies.
General frame semantics.
The main defect of Kripke semantics is the existence of Kripke incomplete logics, and logics which are complete but not compact. It can be remedied by equipping Kripke frames with extra structure which restricts the set of possible valuations, using ideas from algebraic semantics. This gives rise to the general frame semantics.
Computer science applications.
Blackburn et al. (2001) point out that because a relational structure is simply a set together with a collection of relations on that set, it is unsurprising that relational structures are to be found just about everywhere. As an example from theoretical computer science, they give labeled transition systems, which model program execution. Blackburn et al. thus claim because of this connection that modal languages are ideally suited in providing "internal, local perspective on relational structures." (p. xii)
History and terminology.
Similar work that predated Kripke's revolutionary semantic breakthroughs: | [
{
"math_id": 0,
"text": "\\to"
},
{
"math_id": 1,
"text": "\\neg"
},
{
"math_id": 2,
"text": "\\Box"
},
{
"math_id": 3,
"text": "\\Diamond"
},
{
"math_id": 4,
"text": "\\Diamond A := \\neg\\Box\\neg A"
},
{
"math_id": 5,
"text": "\\langle W,R\\rangle"
},
{
"math_id": 6,
"text": "\\langle W,R,\\Vdash\\rangle"
},
{
"math_id": 7,
"text": "\\Vdash"
},
{
"math_id": 8,
"text": "w\\Vdash\\neg A"
},
{
"math_id": 9,
"text": "w\\nVdash A"
},
{
"math_id": 10,
"text": "w\\Vdash A\\to B"
},
{
"math_id": 11,
"text": "w\\Vdash B"
},
{
"math_id": 12,
"text": "w\\Vdash\\Box A"
},
{
"math_id": 13,
"text": "u\\Vdash A"
},
{
"math_id": 14,
"text": "u"
},
{
"math_id": 15,
"text": "w\\; R\\; u"
},
{
"math_id": 16,
"text": "w\\Vdash A"
},
{
"math_id": 17,
"text": "\\Box A\\to A"
},
{
"math_id": 18,
"text": "w\\Vdash \\Box A"
},
{
"math_id": 19,
"text": "u\\Vdash p"
},
{
"math_id": 20,
"text": "w\\Vdash \\Box p"
},
{
"math_id": 21,
"text": "w\\Vdash p"
},
{
"math_id": 22,
"text": "\\Box(A\\leftrightarrow\\Box\nA)\\to\\Box A"
},
{
"math_id": 23,
"text": "\\Box\nA\\to\\Box\\Box A"
},
{
"math_id": 24,
"text": "\\Diamond\\top"
},
{
"math_id": 25,
"text": "\\Box [(A\\to B)\\land A]\\to \\Box B"
},
{
"math_id": 26,
"text": "\\Diamond A"
},
{
"math_id": 27,
"text": "\\Diamond A \\rightarrow \\Diamond\\top"
},
{
"math_id": 28,
"text": "X\\;R\\;Y"
},
{
"math_id": 29,
"text": "A"
},
{
"math_id": 30,
"text": "\\Box A\\in X"
},
{
"math_id": 31,
"text": "A\\in Y"
},
{
"math_id": 32,
"text": "X\\Vdash A"
},
{
"math_id": 33,
"text": "A\\in X"
},
{
"math_id": 34,
"text": "\\{\\Box_i\\mid\\,i\\in I\\}"
},
{
"math_id": 35,
"text": "w\\Vdash\\Box_i A"
},
{
"math_id": 36,
"text": "\\forall u\\,(w\\;R_i\\;u\\Rightarrow u\\Vdash A)."
},
{
"math_id": 37,
"text": "\\langle W,R,\\{D_i\\}_{i\\in I},\\Vdash\\rangle"
},
{
"math_id": 38,
"text": "\\forall u\\in D_i\\,(w\\;R\\;u\\Rightarrow u\\Vdash A)."
},
{
"math_id": 39,
"text": "\\langle W,\\le,\\Vdash\\rangle"
},
{
"math_id": 40,
"text": "\\langle W,\\le\\rangle"
},
{
"math_id": 41,
"text": "w\\le u"
},
{
"math_id": 42,
"text": "w\\Vdash A\\land B"
},
{
"math_id": 43,
"text": "w\\Vdash A\\lor B"
},
{
"math_id": 44,
"text": "u\\ge w"
},
{
"math_id": 45,
"text": "u\\Vdash B"
},
{
"math_id": 46,
"text": "w\\Vdash\\bot"
},
{
"math_id": 47,
"text": "\\langle W,\\le,\\{M_w\\}_{w\\in W}\\rangle"
},
{
"math_id": 48,
"text": "w\\Vdash A[e]"
},
{
"math_id": 49,
"text": "w\\Vdash P(t_1,\\dots,t_n)[e]"
},
{
"math_id": 50,
"text": "P(t_1[e],\\dots,t_n[e])"
},
{
"math_id": 51,
"text": "w\\Vdash(A\\land B)[e]"
},
{
"math_id": 52,
"text": "w\\Vdash B[e]"
},
{
"math_id": 53,
"text": "w\\Vdash(A\\lor B)[e]"
},
{
"math_id": 54,
"text": "w\\Vdash(A\\to B)[e]"
},
{
"math_id": 55,
"text": "u\\Vdash A[e]"
},
{
"math_id": 56,
"text": "u\\Vdash B[e]"
},
{
"math_id": 57,
"text": "w\\Vdash\\bot[e]"
},
{
"math_id": 58,
"text": "w\\Vdash(\\exists x\\,A)[e]"
},
{
"math_id": 59,
"text": "a\\in M_w"
},
{
"math_id": 60,
"text": "w\\Vdash A[e(x\\to a)]"
},
{
"math_id": 61,
"text": "w\\Vdash(\\forall x\\,A)[e]"
},
{
"math_id": 62,
"text": "a\\in M_u"
},
{
"math_id": 63,
"text": "u\\Vdash A[e(x\\to a)]"
},
{
"math_id": 64,
"text": "\\langle W',R'\\rangle"
},
{
"math_id": 65,
"text": "f\\colon W\\to W'"
},
{
"math_id": 66,
"text": "\\langle W',R',\\Vdash'\\rangle"
},
{
"math_id": 67,
"text": "f(w)\\Vdash'p"
},
{
"math_id": 68,
"text": "w'\\Vdash'p"
},
{
"math_id": 69,
"text": "s=\\langle w_0,w_1,\\dots,w_n\\rangle"
},
{
"math_id": 70,
"text": "s\\Vdash p"
},
{
"math_id": 71,
"text": "w_n\\Vdash p"
},
{
"math_id": 72,
"text": "\\langle w_0,w_1,\\dots,w_n\\rangle\\;R'\\;\\langle w_0,w_1,\\dots,w_n,w_{n+1}\\rangle"
},
{
"math_id": 73,
"text": "u\\Vdash\\Box A"
},
{
"math_id": 74,
"text": "v\\Vdash A"
}
] | https://en.wikipedia.org/wiki?curid=911960 |
9121395 | Filling factor | Filling factor, formula_0 is a quantity measuring the efficiency of absorption of pump in the core of a double-clad fiber.
Definition.
The efficiency of absorption of pumping energy in the fiber is an important parameter of a double-clad fiber laser. In many cases this efficiency can be approximated with
formula_1
where
formula_2 is the cross-sectional area of the cladding
formula_3 is the radius of the core (which is taken to be circular)
formula_4 is the absorption coefficient of pump light in the core
formula_5 is the length of the double-clad fiber, and
formula_6 is a dimensionless adjusting parameter, which is sometimes called the "filling factor"; formula_7.
The filling factor may depend on the initial distribution of the pump light, the shape of the cladding, and the position of the core within it.
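As a quick numerical illustration of the formula above (the fiber parameters below are hypothetical values chosen only for the example, not data from the article), the absorbed fraction of the pump can be computed directly in Python:

```python
import math

def absorbed_fraction(F, r, S, alpha, L):
    """Absorbed pump fraction: 1 - exp(-F * (pi r^2 / S) * alpha * L)."""
    return 1.0 - math.exp(-F * (math.pi * r ** 2 / S) * alpha * L)

# Hypothetical double-clad fiber: 10 um core radius, 125 um circular cladding,
# core absorption 25 1/m, 2 m of fiber, filling factor 0.8.
r = 10e-6
S = math.pi * (125e-6 / 2) ** 2
print(f"absorbed pump fraction: {absorbed_fraction(0.8, r, S, alpha=25.0, L=2.0):.0%}")
```

With these assumed numbers roughly two thirds of the pump is absorbed; a smaller filling factor or shorter fiber reduces the fraction accordingly.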
Application.
The large (close to unity) filling factor is important in double-clad amplifiers; it allows them to reduce the requirements for the brightness of the pump and to reduce the length of the fiber laser. Such a reduction is especially important for power scaling, since various nonlinear processes and stimulated scattering contribute to the degradation of the signal over long fiber lengths. Using the filling factor to estimate the efficiency of pump absorption in fiber lasers allows quick estimates without performing complicated numerical simulations.
{
"math_id": 0,
"text": "~F,~"
},
{
"math_id": 1,
"text": "1- \\exp\\left( - F \\frac{\\pi r^2}{S}\\alpha L \\right) ,"
},
{
"math_id": 2,
"text": "~S~"
},
{
"math_id": 3,
"text": "~r~"
},
{
"math_id": 4,
"text": "~\\alpha~"
},
{
"math_id": 5,
"text": "~L~"
},
{
"math_id": 6,
"text": "~F~"
},
{
"math_id": 7,
"text": "~0<F<1~"
}
] | https://en.wikipedia.org/wiki?curid=9121395 |
9121625 | Band offset | Band offset describes the relative alignment of the energy bands at a semiconductor heterojunction.
Introduction.
At semiconductor heterojunctions, energy bands of two different materials come together, leading to an interaction. The two band structures are positioned discontinuously with respect to each other, and they align close to the interface so that the Fermi energy level stays continuous throughout the two semiconductors. This alignment is caused by the discontinuity between the band structures of the two semiconductors and by the interaction of the two surfaces at the interface. The relative alignment of the energy bands at such semiconductor heterojunctions is called the band offset.
The band offsets can be determined by both intrinsic properties, that is, determined by properties of the bulk materials, as well as non-intrinsic properties, namely, specific properties of the interface. Depending on the type of the interface, the offsets can be very accurately considered intrinsic, or be able to be modified by manipulating the interfacial structure. Isovalent heterojunctions are generally insensitive to manipulation of the interfacial structure, whilst heterovalent heterojunctions can be influenced in their band offsets by the geometry, the orientation, and the bonds of the interface and the charge transfer between the heterovalent bonds. The band offsets, especially those at heterovalent heterojunctions depend significantly on the distribution of interface charge.
The band offsets are determined by two kinds of factors for the interface, the band discontinuities and the built-in potential. These discontinuities are caused by the difference in band gaps of the semiconductors and are distributed between two band discontinuities, the valence-band discontinuity, and the conduction-band discontinuity. The built-in potential is caused by the bands which bend close at the interface due to a charge imbalance between the two semiconductors, and can be described by Poisson's equation.
Semiconductor types.
The behaviour of semiconductor heterojunctions depend on the alignment of the energy bands at the interface and thus on the band offsets. The interfaces of such heterojunctions can be categorized in three types: straddling gap (referred to as type I), staggered gap (type II), and broken gap (type III).
These representations do not take band bending into account, which is a reasonable simplification when looking only at the interface itself, since band bending exerts its influence over a length scale of generally hundreds of ångströms. For a more accurate picture of the situation, the inclusion of band bending is important.
Experimental methods.
Two kinds of experimental techniques are used to describe band offsets. The first is an older family, historically the first used to probe the heterojunction built-in potential and band discontinuities; these are generally called transport methods and fall into two classes, capacitance-voltage (C-V) and current-voltage (I-V) techniques. These techniques extract the built-in potential by assuming a square-root dependence of the capacitance C on formula_0bi - qV, with formula_0bi the built-in potential, q the electron charge, and V the applied voltage. If the band extrema away from the interface, as well as their separation from the Fermi level, are known a priori from bulk doping, it becomes possible to obtain the conduction band offset and the valence band offset. The square-root dependence corresponds to an ideally abrupt transition at the interface, which may or may not be a good approximation of the real junction behaviour.
The second kind of technique consists of optical methods. Photon absorption is used effectively, as the conduction band and valence band discontinuities define quantum wells for the electrons and the holes. Optical techniques can be used to probe the direct transitions between sub-bands within the quantum wells, and with a few parameters known, such as the geometry of the structure and the effective mass, the transition energy measured experimentally can be used to probe the well depth. Band offset values are usually estimated using the optical response as a function of certain geometrical parameters or the intensity of an applied magnetic field. Light scattering can also be used to determine the well depth.
Alignment.
Prediction of the band alignment is at face value dependent on the heterojunction type, as well as whether or not the heterojunction in question is heterovalent or isovalent. However, quantifying this alignment proved a difficult task for a long time. Anderson's rule is used to construct energy band diagrams at heterojunctions between two semiconductors. It states that during the construction of an energy band diagram, the vacuum levels of the semiconductors on either side of the heterojunction should be equal.
Anderson's rule states that when we construct the heterojunction, we need to have both semiconductors on an equal vacuum energy level. This ensures that the energy bands of both the semiconductors are being held to the same reference point, from which ΔEc and ΔEv, the conduction band offset and valence band offset can be calculated. By having the same reference point for both semiconductors, ΔEc becomes equal to the built-in potential, Vbi = Φ1 - Φ2, and the behaviour of the bands at the interface can be predicted as can be seen at the picture above.
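A minimal numerical sketch of Anderson's rule follows (the electron affinities and band gaps below are rough, illustrative literature values; as the next paragraph explains, the rule itself often misestimates real offsets):

```python
def anderson_offsets(chi1, eg1, chi2, eg2):
    """Band offsets (eV) from Anderson's rule: align the two vacuum levels and
    read off the conduction- and valence-band discontinuities."""
    delta_ec = chi1 - chi2                  # conduction-band offset
    delta_ev = (chi2 + eg2) - (chi1 + eg1)  # valence-band offset
    return delta_ec, delta_ev

# Rough values in eV: GaAs (electron affinity ~4.1, gap ~1.42) on AlAs (~3.5, ~2.16).
dec, dev = anderson_offsets(4.1, 1.42, 3.5, 2.16)
print(f"Anderson's rule estimate: dEc ~ {dec:.2f} eV, dEv ~ {dev:.2f} eV")
```

By construction, the two offsets add up to the band-gap difference of the pair.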
Anderson's rule fails to predict real band offsets. This is primarily because Anderson's model assumes that the materials behave as if they were separated by a large vacuum distance; however, at these heterojunctions the space is filled by solids, there is no vacuum, and using the electron affinities defined at a vacuum surface leads to wrong results. Anderson's rule ignores the actual chemical bonding effects that occur at small or non-existent vacuum separation, which leads to wrong predictions about the band offsets.
A better theory for predicting band offsets has been linear-response theory. In this theory, interface dipoles have a significant impact on the lining up of the bands of the semiconductors. These interface dipoles, however, are not ions; rather, they are mathematical constructs based upon the difference in charge density between the bulk and the interface. Linear-response theory is based on first-principles calculations, which aim to solve the quantum-mechanical equations without input from experiment. In this theory, the band offset is the sum of two terms: the first term is intrinsic and depends solely on the bulk properties; the second term, which vanishes for isovalent and abrupt non-polar heterojunctions, depends on the interface geometry and can easily be calculated once the geometry is known, along with certain quantities such as the lattice parameters.
The goal of the model is to describe the difference between the two semiconductors, that is, the difference with respect to a chosen optimal average (whose contribution to the band offset should vanish). An example is GaAs-AlAs: one starts from a virtual crystal of Al0.5Ga0.5As and introduces an interface; a perturbation is then added to turn the crystal into pure GaAs on one side, whilst on the other side the perturbation transforms the crystal into pure AlAs. These perturbations are sufficiently small that they can be handled by linear-response theory, and the electrostatic potential lineup across the interface can then be obtained to first order from the charge-density response to those localized perturbations. Linear-response theory works well for semiconductors with similar potentials (such as GaAs-AlAs) as well as dissimilar potentials (such as GaAs-Ge), which was doubted at first. Predictions made by linear-response theory coincide exactly with those of self-consistent first-principles calculations. If the interfaces are polar, however, or non-abrupt nonpolar-oriented, additional effects must be taken into account; these additional terms require only simple electrostatics and remain within the linear-response approach.
References.
<templatestyles src="Reflist/styles.css" />
Franciosi A.; Van de Walle C.G: "Heterojunction band offset engineering", Surface Science Reports, Volume 25, Number 1, October 1996, pp. 1–140
Raymond T. Tung; Leeor Kronik: "Charge Density and Band Offsets at Heterovalent Semiconductor Interfaces"; http://onlinelibrary.wiley.com/doi/10.1002/adts.201700001/pdf
{
"math_id": 0,
"text": "\\Phi"
}
] | https://en.wikipedia.org/wiki?curid=9121625 |
912171 | Optical ring resonators | Set of waveguides including a closed loop
An optical ring resonator is a set of waveguides in which at least one is a closed loop coupled to some sort of light input and output. (These can be, but are not limited to being, waveguides.) The concepts behind optical ring resonators are the same as those behind whispering galleries except that they use light and obey the properties behind constructive interference and total internal reflection. When light of the resonant wavelength is passed through the loop from the input waveguide, the light builds up in intensity over multiple round-trips owing to constructive interference and is output to the output bus waveguide which serves as a detector waveguide. Because only a select few wavelengths will be at resonance within the loop, the optical ring resonator functions as a filter. Additionally, as implied earlier, two or more ring waveguides can be coupled to each other to form an add/drop optical filter.
Background.
Optical ring resonators work on the principles behind total internal reflection, constructive interference, and optical coupling.
Total internal reflection.
The light travelling through the waveguides in an optical ring resonator remains within the waveguides due to the ray optics phenomenon known as total internal reflection (TIR). TIR is an optical phenomenon that occurs when a ray of light strikes the boundary of a medium and fails to refract through the boundary. Given that the angle of incidence is larger than the critical angle (with respect to the normal of the surface) and the refractive index is lower on the other side of the boundary relative to the incident ray, TIR will occur and no light will be able to pass through. For an optical ring resonator to work well, total internal reflection conditions must be met and the light travelling through the waveguides must not be allowed to escape by any means.
Interference.
Interference is the process by which two waves superimpose to form a resultant wave of greater or less amplitude. Interference usually refers to the interaction of two distinct waves and it is a result of the linearity of Maxwell Equation. Interference could be constructive or destructive depending on the relative phase of the two waves. In constructive interference, the two waves have the same phase and, as a result, interfere in a way that the resulting wave amplitude will be equal to the sum of the two individual amplitudes. As the light in an optical ring resonator completes multiple circuits around the ring component, it will interfere with the other light still in the loop. As such, assuming there are no losses in the system such as those due to absorption, evanescence, or imperfect coupling and the resonance condition is met, the intensity of the light emitted from a ring resonator will be equal to the intensity of the light fed into the system.
Optical coupling.
Important for understanding how an optical ring resonator works is the concept of how the linear waveguides are coupled to the ring waveguide. When a beam of light passes through a waveguide, as shown in the graph on the right, part of the light is coupled into the optical ring resonator. The reason for this is the evanescent field, which extends outside the waveguide mode in an exponentially decreasing radial profile. In other words, if the ring and the waveguide are brought closely together, some light from the waveguide can couple into the ring. Three aspects affect the optical coupling: the distance, the coupling length, and the refractive indices of the waveguide, the ring resonator, and the medium between them. To optimize the coupling, the distance between the ring resonator and the waveguide is usually made small; the smaller the distance, the more readily optical coupling occurs. In addition, the coupling length affects the coupling as well. The coupling length represents the effective curve length of the ring resonator over which coupling with the waveguide can take place, and it has been shown that as the coupling length increases, coupling occurs more easily. Furthermore, the refractive indices of the waveguide material, the ring resonator material, and the medium material in between the waveguide and the ring resonator also affect the optical coupling. The medium material is usually the most important feature under study, since it has a great effect on the transmission of the light wave; its refractive index can be either large or small according to the application and purpose.
One more feature of optical coupling is critical coupling. At critical coupling, no light passes through the input waveguide after the light beam has been coupled into the optical ring resonator; the light is stored inside the resonator and is subsequently dissipated there.
Lossless coupling is when no light is transmitted all the way through the input waveguide to its own output; instead, all of the light is coupled into the ring waveguide (such as what is depicted in the image at the top of this page). For lossless coupling to occur, the following equation must be satisfied:
formula_0
where t is the transmission coefficient through the coupler and formula_1 is the taper-sphere mode coupling amplitude, also referred to as the coupling coefficient.
Theory.
To understand how optical ring resonators work, we must first understand the optical path length difference (OPD) of a ring resonator. This is given as follows for a single-ring ring resonator:
formula_2
where "r" is the radius of the ring resonator and "formula_3" is the effective index of refraction of the waveguide material. Due to the total internal reflection requirement, formula_3 must be greater than the index of refraction of the surrounding fluid in which the resonator is placed (e.g. air). For resonance to take place, the following resonant condition must be satisfied:
formula_4
where "formula_5" is the resonant wavelength and "m" is the mode number of the ring resonator. This equation means that in order for light to interfere constructively inside the ring resonator, the circumference of the ring must be an integer multiple of the wavelength of the light. As such, the mode number must be a positive integer for resonance to take place. As a result, when the incident light contains multiple wavelengths (such as white light), only the resonant wavelengths will be able to pass through the ring resonator fully.
The quality factor and the finesse of an optical ring resonator can be quantitatively described using the following formulas:
formula_6
formula_7
where "formula_8" is the finesse of the ring resonator, formula_9 is the operation frequency, formula_10 is the free spectral range and formula_11 is the full-width half-max of the transmission spectra. The quality factor is useful in determining the spectral range of the resonance condition for any given ring resonator. The quality factor is also useful for quantifying the amount of losses in the resonator as a low formula_12 factor is usually due to large losses.
Double ring resonators.
In a double ring resonator, two ring waveguides are used instead of one. They may be arranged in series (as shown on the right) or in parallel. When using two ring waveguides in series, the output of the double ring resonator will be in the same direction as the input (albeit with a lateral shift). When the input light meets the resonance condition of the first ring, it will couple into the ring and travel around inside of it. As subsequent loops around the first ring bring the light to the resonance condition of the second ring, the two rings will be coupled together and the light will be passed into the second ring. By the same method, the light will then eventually be transferred into the bus output waveguide. Therefore, in order to transmit light through a double ring resonator system, we will need to satisfy the resonant condition for both rings as follows:
formula_13
formula_14
where formula_15 and formula_16 are the mode numbers of the first and second ring respectively and they must remain as positive integer numbers. For the light to exit the ring resonator to the output bus waveguide, the wavelength of the light in each ring must be same. That is, formula_17 for resonance to occur. As such, we get the following equation governing resonance:
formula_18
Note that both formula_15 and formula_16 need to remain integers.
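The requirement that both rings resonate at the same wavelength can be explored numerically. In the sketch below (radii, effective index, wavelength window, and matching tolerance are all assumed values), only the wavelengths resonant in both rings survive, which illustrates why a series-coupled pair acts as a sparser filter than either ring alone:

```python
import math

def resonances(radius, n_eff, lambda_min, lambda_max):
    """Resonant wavelengths of one ring inside the wavelength window."""
    opl = 2 * math.pi * radius * n_eff
    return [opl / m for m in range(math.ceil(opl / lambda_max),
                                   math.floor(opl / lambda_min) + 1)]

def common_resonances(r1, r2, n_eff, lambda_min, lambda_max, tol=0.05e-9):
    """Wavelengths (matched to within tol) at which both rings are resonant."""
    hits = []
    for lam1 in resonances(r1, n_eff, lambda_min, lambda_max):
        for lam2 in resonances(r2, n_eff, lambda_min, lambda_max):
            if abs(lam1 - lam2) < tol:
                hits.append(0.5 * (lam1 + lam2))
    return hits

# Two rings of 10 um and 11 um radius: many individual resonances in 1500-1600 nm,
# but only one wavelength in that window is shared by both rings.
shared = common_resonances(10e-6, 11e-6, 2.4, 1500e-9, 1600e-9)
print([f"{lam * 1e9:.2f} nm" for lam in shared])
```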
A system of two ring resonators coupled to a single waveguide has also been shown to work as a tunable reflective filter (or an optical mirror). Forward propagating waves in the waveguide excite anti-clockwise rotating waves in both rings. Due to the inter-resonator coupling, these waves generate clockwise rotating waves in both rings which are in turn coupled to backward propagating (reflected) waves in the waveguide.
In this context, the use of nested ring resonator cavities has been demonstrated in recent studies. These nested ring resonators are designed to enhance the quality factor (Q-factor) and extend the effective light-matter interaction length: light traverses the nested cavity a number of times equal to the round trips of the main cavity multiplied by the round trips of the nested cavity.
Applications.
Due to the nature of the optical ring resonator and how it "filters" certain wavelengths of light passing through, it is possible to create high-order optical filters by cascading many optical ring resonators in series. This would allow for "small size, low losses, and integrability into [existing] optical networks." Additionally, since the resonance wavelengths can be changed by simply increasing or decreasing the radius of each ring, the filters can be considered tunable. This basic property can be used to create a sort of mechanical sensor. If an optical fiber experiences mechanical strain, the dimensions of the fiber will be altered, thus resulting in a change in the resonant wavelength of light emitted. This can be used to monitor fibers or waveguides for changes in their dimensions.
The tuning process can also be achieved by a change of refractive index using various means, including thermo-optic, electro-optic or all-optical effects. Electro-optic and all-optical tuning are faster than thermal and mechanical means, and hence find various applications, including in optical communication. Optical modulators based on a high-Q microring are reported to yield outstandingly small modulation power at speeds above 50 Gbit/s, at the cost of the tuning power needed to match the wavelength of the light source. A ring modulator placed in a Fabry-Perot laser cavity was reported to eliminate this tuning power by automatically matching the laser wavelength with that of the ring modulator, while maintaining high-speed, ultralow-power modulation of a Si microring modulator.
Optical ring, cylindrical, and spherical resonators have also proven useful in the field of biosensing, and a crucial research focus is the enhancement of their biosensing performance. One of the main benefits of using ring resonators in biosensing is the small volume of sample specimen required to obtain a given spectroscopic result, which leads to greatly reduced background Raman and fluorescence signals from the solvent and other impurities. Resonators have also been used to characterize a variety of absorption spectra for the purposes of chemical identification, particularly in the gaseous phase.
Another potential application for optical ring resonators are in the form of whispering gallery mode switches. "[Whispering Gallery Resonator] microdisk lasers are stable and switch reliably and hence, are suitable as switching elements in all-optical networks." An all-optical switch based on a high Quality factor cylindrical resonator has been proposed that allows for fast binary switching at low power.
Many researchers are interested in creating three-dimensional ring resonators with very high quality factors. These dielectric spheres, also called microsphere resonators, "were proposed as low-loss optical resonators with which to study cavity quantum electrodynamics with laser-cooled atoms or as ultrasensitive detectors for the detection of single trapped atoms.”
Ring resonators have also proved useful as single photon sources for quantum information experiments. Many materials used to fabricate ring resonator circuits have non-linear responses to light at high enough intensities. This non-linearity allows for frequency-conversion processes such as four-wave mixing and spontaneous parametric down-conversion, which generate photon pairs. Ring resonators amplify the efficiency of these processes as they allow the light to circulate around the ring.
{
"math_id": 0,
"text": "|\\Kappa|^2 + |t|^2 = \\mathbf{1}"
},
{
"math_id": 1,
"text": "\\Kappa"
},
{
"math_id": 2,
"text": "\\mathbf{OPD} = 2 \\pi r n_\\text{eff}"
},
{
"math_id": 3,
"text": "n_\\text{eff}"
},
{
"math_id": 4,
"text": "\\mathbf{OPD} = m \\lambda_{m}"
},
{
"math_id": 5,
"text": "\\lambda_{m}"
},
{
"math_id": 6,
"text": "\\mathbf{Q} = \\frac{\\nu}{\\delta\\nu}\n"
},
{
"math_id": 7,
"text": "\\mathcal{F} = \\frac{\\nu_{f}}{\\delta\\nu}"
},
{
"math_id": 8,
"text": "\\mathcal{F}"
},
{
"math_id": 9,
"text": "\\nu"
},
{
"math_id": 10,
"text": "\\nu_{f}"
},
{
"math_id": 11,
"text": "\\delta\\nu"
},
{
"math_id": 12,
"text": "Q"
},
{
"math_id": 13,
"text": "\\ 2 \\pi n_{1} R_{1} = m_{1} \\lambda_{1}"
},
{
"math_id": 14,
"text": "\\ 2 \\pi n_{2} R_{2} = m_{2} \\lambda_{2}"
},
{
"math_id": 15,
"text": "m_{1}"
},
{
"math_id": 16,
"text": "m_{2}"
},
{
"math_id": 17,
"text": "\\lambda_{1} = \\lambda_{2}"
},
{
"math_id": 18,
"text": "\\ \\frac{n_{1} R_{1}}{m_{1}} = \\frac{n_{2} R_{2}}{m_{2}} "
}
] | https://en.wikipedia.org/wiki?curid=912171 |
9122581 | Young–Laplace equation | Describing pressure difference over an interface in fluid mechanics
In physics, the Young–Laplace equation () is an algebraic equation that describes the capillary pressure difference sustained across the interface between two static fluids, such as water and air, due to the phenomenon of surface tension or wall tension, although use of the latter is only applicable if assuming that the wall is very thin. The Young–Laplace equation relates the pressure difference to the shape of the surface or wall and it is fundamentally important in the study of static capillary surfaces. It is a statement of normal stress balance for static fluids meeting at an interface, where the interface is treated as a surface (zero thickness):
formula_0
where formula_1 is the Laplace pressure, the pressure difference across the fluid interface (the exterior pressure minus the interior pressure), formula_2 is the surface tension (or wall tension), formula_3 is the unit normal pointing out of the surface, formula_4 is the mean curvature, and formula_5 and formula_6 are the principal radii of curvature. Note that only normal stress is considered, because a static interface is possible only in the absence of tangential stress.
The equation is named after Thomas Young, who developed the qualitative theory of surface tension in 1805, and Pierre-Simon Laplace who completed the mathematical description in the following year. It is sometimes also called the Young–Laplace–Gauss equation, as Carl Friedrich Gauss unified the work of Young and Laplace in 1830, deriving both the differential equation and boundary conditions using Johann Bernoulli's virtual work principles.
Soap films.
If the pressure difference is zero, as in a soap film without gravity, the interface will assume the shape of a minimal surface.
Emulsions.
The equation also explains the energy required to create an emulsion. To form the small, highly curved droplets of an emulsion, extra energy is required to overcome the large pressure that results from their small radius.
The Laplace pressure, which is greater for smaller droplets, causes the diffusion of molecules out of the smallest droplets in an emulsion and drives emulsion coarsening via Ostwald ripening.
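A short numerical sketch of this size dependence follows (the interfacial tension of 0.03 N/m is an assumed, order-of-magnitude value for an oil-water interface, not a figure from the article):

```python
def laplace_pressure(gamma, radius):
    """Pressure jump across a spherical droplet surface: delta_p = 2*gamma/R."""
    return 2.0 * gamma / radius

gamma = 0.03  # N/m, assumed oil-water interfacial tension
for r in (1e-6, 100e-9):  # 1 um droplet vs. 100 nm droplet
    print(f"radius {r * 1e9:6.0f} nm -> Laplace pressure {laplace_pressure(gamma, r) / 1e3:6.1f} kPa")
```

The tenfold smaller droplet sustains a tenfold larger internal pressure, which is the driving force behind the Ostwald ripening mentioned above.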
Capillary pressure in a tube.
In a sufficiently narrow (i.e., low Bond number) tube of circular cross-section (radius "a"), the interface between two fluids forms a meniscus that is a portion of the surface of a sphere with radius "R". The pressure jump across this surface is related to the radius and the surface tension γ by
formula_7
This may be shown by writing the Young–Laplace equation in spherical form with a contact-angle boundary condition and also a prescribed height boundary condition at, say, the bottom of the meniscus. The solution is a portion of a sphere, and the solution will exist "only" for the pressure difference shown above. This is significant because there isn't another equation or law to specify the pressure difference; the existence of a solution for one specific value of the pressure difference prescribes it.
The radius of the sphere will be a function only of the contact angle, θ, which in turn depends on the exact properties of the fluids and the container material with which the fluids in question are contacting/interfacing:
formula_8
so that the pressure difference may be written as:
formula_9
In order to maintain hydrostatic equilibrium, the induced capillary pressure is balanced by a change in height, "h", which can be positive or negative, depending on whether the wetting angle is less than or greater than 90°. For a fluid of density ρ:
formula_10
where "g" is the gravitational acceleration. This is sometimes known as the Jurin's law or Jurin height after James Jurin who studied the effect in 1718.
For a water-filled glass tube in air at sea level, the height of the water column is given approximately by:
formula_11
Thus for a 2 mm wide (1 mm radius) tube, the water would rise 14 mm. However, for a capillary tube with radius 0.1 mm, the water would rise 14 cm (about 6 inches).
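These figures can be reproduced with the formula above. In the sketch below the surface tension, contact angle, and density are typical values for water in a clean glass tube at room temperature, assumed here rather than quoted from the article:

```python
import math

def jurin_height(gamma, theta_deg, rho, g, radius):
    """Capillary rise h = 2*gamma*cos(theta) / (rho*g*a)."""
    return 2.0 * gamma * math.cos(math.radians(theta_deg)) / (rho * g * radius)

gamma, theta, rho, g = 0.0728, 20.0, 1000.0, 9.81   # assumed water/glass values
for a in (1e-3, 0.1e-3):                            # 1 mm and 0.1 mm tube radii
    h = jurin_height(gamma, theta, rho, g, a)
    print(f"tube radius {a * 1e3:.1f} mm -> rise {h * 100:.1f} cm")
```

With these assumed values the script reproduces the roughly 14 mm and 14 cm rises quoted above.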
Capillary action in general.
In the general case, for a free surface and where there is an applied "over-pressure", Δ"p", at the interface in equilibrium, there is a balance between the applied pressure, the hydrostatic pressure and the effects of surface tension. The Young–Laplace equation becomes:
formula_12
The equation can be non-dimensionalised in terms of its characteristic length-scale, the capillary length:
formula_13
and characteristic pressure
formula_14
For clean water at standard temperature and pressure, the capillary length is ~2 mm.
The non-dimensional equation then becomes:
formula_15
Thus, the surface shape is determined by only one parameter, the over pressure of the fluid, Δ"p"* and the scale of the surface is given by the capillary length. The solution of the equation requires an initial condition for position, and the gradient of the surface at the start point.
Axisymmetric equations.
The (nondimensional) shape, "r"("z") of an axisymmetric surface can be found by substituting general expressions for principal curvatures to give the hydrostatic Young–Laplace equations:
formula_16
formula_17
Application in medicine.
In medicine it is often referred to as the Law of Laplace, used in the context of cardiovascular physiology, and also respiratory physiology, though the latter use is often erroneous.
History.
Francis Hauksbee performed some of the earliest observations and experiments in 1709 and these were repeated in 1718 by James Jurin who observed that the height of fluid in a capillary column was a function only of the cross-sectional area at the surface, not of any other dimensions of the column.
Thomas Young laid the foundations of the equation in his 1804 paper "An Essay on the Cohesion of Fluids" where he set out in descriptive terms the principles governing contact between fluids (along with many other aspects of fluid behaviour). Pierre Simon Laplace followed this up in "Mécanique Céleste" with the formal mathematical description given above, which reproduced in symbolic terms the relationship described earlier by Young.
Laplace accepted the idea propounded by Hauksbee in his book "Physico-mechanical Experiments" (1709), that the phenomenon was due to a force of attraction that was insensible at sensible distances. The part which deals with the action of a solid on a liquid and the mutual action of two liquids was not worked out thoroughly, but ultimately was completed by Carl Friedrich Gauss. Franz Ernst Neumann (1798-1895) later filled in a few details.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{align}\n\\Delta p &= -\\gamma \\nabla \\cdot \\hat n \\\\\n&= -2\\gamma H_f \\\\\n&= -\\gamma \\left(\\frac{1}{R_1} + \\frac{1}{R_2}\\right)\n\\end{align}"
},
{
"math_id": 1,
"text": "\\Delta p"
},
{
"math_id": 2,
"text": "\\gamma"
},
{
"math_id": 3,
"text": "\\hat n"
},
{
"math_id": 4,
"text": "H_f"
},
{
"math_id": 5,
"text": "R_1"
},
{
"math_id": 6,
"text": "R_2"
},
{
"math_id": 7,
"text": "\\Delta p = \\frac{2 \\gamma}{R}."
},
{
"math_id": 8,
"text": "R = \\frac{a}{\\cos \\theta}"
},
{
"math_id": 9,
"text": "\\Delta p = \\frac{2 \\gamma \\cos \\theta}{a}."
},
{
"math_id": 10,
"text": " \\rho g h = \\frac{2 \\gamma \\cos \\theta}{a}."
},
{
"math_id": 11,
"text": "h\\approx {{1.4 \\times 10^{-5}} \\mathrm{m}^2\\over a}."
},
{
"math_id": 12,
"text": "\\Delta p = \\rho g h - \\gamma \\left( \\frac{1}{R_1} + \\frac{1}{R_2}\\right)"
},
{
"math_id": 13,
"text": "L_{c} = \\sqrt{\\frac{\\gamma}{\\rho g}},"
},
{
"math_id": 14,
"text": "p_{c} = \\frac {\\gamma} {L_{c}} = \\sqrt{ \\gamma \\rho g}."
},
{
"math_id": 15,
"text": "h^*- \\Delta p^*= \\left( \\frac{1}{{R_1}^{*}} + \\frac{1}{{R_2}^{*}}\\right)."
},
{
"math_id": 16,
"text": "\\frac{r''}{(1+r'^2)^{{3}/{2}}} - \\frac{1}{r(z) \\sqrt{1+r'^2} } = z - \\Delta p^*"
},
{
"math_id": 17,
"text": "\\frac{z''}{(1+z'^2)^{3/2}} + \\frac{z'}{r (1+z'^2)^{{1}/{2}} } = \\Delta p^*- z(r)."
}
] | https://en.wikipedia.org/wiki?curid=9122581 |
9124553 | Generalized assignment problem | Combinatorial optimization problem
In applied mathematics, the maximum generalized assignment problem is a problem in combinatorial optimization. This problem is a generalization of the assignment problem in which both tasks and agents have a size. Moreover, the size of each task might vary from one agent to the other.
This problem in its most general form is as follows: There are a number of agents and a number of tasks. Any agent can be assigned to perform any task, incurring some cost and profit that may vary depending on the agent-task assignment. Moreover, each agent has a budget and the sum of the costs of the tasks assigned to it cannot exceed this budget. It is required to find an assignment in which no agent exceeds its budget and the total profit of the assignment is maximized.
In special cases.
In the special case in which all the agents' budgets and all tasks' costs are equal to 1, this problem reduces to the assignment problem. When the costs and profits of all tasks do not vary between different agents, this problem reduces to the multiple knapsack problem. If there is a single agent, then this problem reduces to the knapsack problem.
Explanation of definition.
In the following, we have "n" kinds of items, formula_0 through formula_1 and "m" kinds of bins formula_2 through formula_3. Each bin formula_4 is associated with a budget formula_5. For a bin formula_4, each item formula_6 has a profit formula_7 and a weight formula_8. A solution is an assignment from items to bins. A feasible solution is a solution in which for each bin formula_4 the total weight of assigned items is at most formula_5. The solution's profit is the sum of profits for each item-bin assignment. The goal is to find a maximum profit feasible solution.
Mathematically the generalized assignment problem can be formulated as an integer program:
formula_9
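For intuition, the integer program can be solved exactly by exhaustive search on very small instances. The Python sketch below (with made-up profits, weights, and budgets) enumerates every assignment of items to bins, allowing items to be left unassigned, and keeps the best feasible one:

```python
from itertools import product

def gap_brute_force(profit, weight, budget):
    """Exact solver for tiny instances of the integer program above.
    profit[i][j] and weight[i][j] refer to bin i and item j; budget[i] is bin i's capacity.
    Each item is assigned to one bin or left out (encoded as -1)."""
    m, n = len(budget), len(profit[0])
    best_value, best_assignment = 0, None
    for assignment in product(range(-1, m), repeat=n):
        load, value = [0] * m, 0
        for j, i in enumerate(assignment):
            if i >= 0:
                load[i] += weight[i][j]
                value += profit[i][j]
        if all(load[i] <= budget[i] for i in range(m)) and value > best_value:
            best_value, best_assignment = value, assignment
    return best_value, best_assignment

# Toy instance with 2 bins and 3 items (all numbers are arbitrary).
profit = [[6, 4, 5], [5, 6, 4]]
weight = [[3, 2, 4], [2, 4, 3]]
budget = [5, 5]
print(gap_brute_force(profit, weight, budget))   # -> (14, (0, 0, 1))
```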
Complexity.
The generalized assignment problem is NP-hard. However, there are linear-programming relaxations which give a formula_10-approximation.
Greedy approximation algorithm.
For the problem variant in which not every item must be assigned to a bin, there is a family of algorithms for solving the GAP by using a combinatorial translation of any algorithm for the knapsack problem into an approximation algorithm for the GAP.
Using any formula_11-approximation algorithm ALG for the knapsack problem, it is possible to construct a (formula_12)-approximation for the generalized assignment problem in a greedy manner using a residual profit concept.
The algorithm constructs a schedule in iterations, where during iteration formula_13 a tentative selection of items to bin formula_14 is selected.
The selection for bin formula_14 might change as items might be reselected in a later iteration for other bins.
The residual profit of an item formula_15 for bin formula_14 is formula_7 if formula_15 is not selected for any other bin or formula_16 – formula_17 if formula_15 is selected for bin formula_18.
Formally: We use a vector formula_19 to indicate the tentative schedule during the algorithm. Specifically, formula_20 means the item formula_15 is scheduled on bin formula_14 and formula_21 means that item formula_15 is not scheduled. The residual profit in iteration formula_13 is denoted by formula_22, where formula_23 if item formula_15 is not scheduled (i.e. formula_21) and formula_24 if item formula_15 is scheduled on bin formula_18 (i.e. formula_25).
Formally:
Set formula_26
For formula_27 do:
Call ALG to find a solution to bin formula_14 using the residual profit function formula_22. Denote the selected items by formula_28.
Update formula_19 using formula_28, i.e., formula_20 for all formula_29.
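A minimal Python sketch of this greedy scheme is given below. The knapsack subroutine here is a toy exact solver by enumeration, standing in for any formula_11-approximation algorithm ALG, so it is usable only on very small instances; the function names and the bin-first data layout (p[j][i], w[j][i], t[j]) are illustrative choices, not part of the original description.

```python
from itertools import combinations

def knapsack_bruteforce(profits, weights, capacity):
    # Exact 0/1 knapsack by enumeration; stands in for ALG (tiny inputs only).
    best_profit, best_set = 0, []
    for r in range(len(profits) + 1):
        for subset in combinations(range(len(profits)), r):
            w = sum(weights[i] for i in subset)
            v = sum(profits[i] for i in subset)
            if w <= capacity and v > best_profit:
                best_profit, best_set = v, list(subset)
    return best_set

def greedy_gap(p, w, t):
    # p[j][i], w[j][i]: profit and weight of item i in bin j; t[j]: budget of bin j.
    m, n = len(t), len(p[0])
    T = [-1] * n                      # T[i] = bin of item i, or -1 if unscheduled
    for j in range(m):
        # residual profit of item i for bin j
        P = [p[j][i] if T[i] == -1 else p[j][i] - p[T[i]][i] for i in range(n)]
        S = knapsack_bruteforce(P, w[j], t[j])
        for i in S:
            T[i] = j                  # items may be taken away from earlier bins
    return T
```

Each bin is filled once, in order; an item grabbed by a later bin is simply removed from the earlier bin, which is exactly what the residual-profit values account for.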
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "a_1"
},
{
"math_id": 1,
"text": "a_n"
},
{
"math_id": 2,
"text": "b_1"
},
{
"math_id": 3,
"text": "b_m"
},
{
"math_id": 4,
"text": "b_i"
},
{
"math_id": 5,
"text": "t_i"
},
{
"math_id": 6,
"text": "a_j"
},
{
"math_id": 7,
"text": "p_{ij}"
},
{
"math_id": 8,
"text": "w_{ij}"
},
{
"math_id": 9,
"text": "\n\\begin{align}\n\\text{maximize } & \\sum_{i=1}^m\\sum_{j=1}^n p_{ij} x_{ij}. \\\\\n\\text{subject to } & \\sum_{j=1}^n w_{ij} x_{ij} \\le t_i & & i=1, \\ldots, m; \\\\\n& \\sum_{i=1}^m x_{ij} \\le 1 & & j=1, \\ldots, n; \\\\\n& x_{ij} \\in \\{0,1\\} & & i=1, \\ldots, m, \\quad j=1, \\ldots, n;\n\\end{align}\n"
},
{
"math_id": 10,
"text": "(1 - 1/e)"
},
{
"math_id": 11,
"text": "\\alpha"
},
{
"math_id": 12,
"text": "\\alpha + 1"
},
{
"math_id": 13,
"text": "j"
},
{
"math_id": 14,
"text": "b_j"
},
{
"math_id": 15,
"text": "x_i"
},
{
"math_id": 16,
"text": " p_{ij}"
},
{
"math_id": 17,
"text": "p_{ik} "
},
{
"math_id": 18,
"text": "b_k"
},
{
"math_id": 19,
"text": "T"
},
{
"math_id": 20,
"text": "T[i]=j"
},
{
"math_id": 21,
"text": "T[i]=-1"
},
{
"math_id": 22,
"text": "P_j"
},
{
"math_id": 23,
"text": "P_j[i]=p_{ij}"
},
{
"math_id": 24,
"text": "P_j[i]=p_{ij}-p_{ik}"
},
{
"math_id": 25,
"text": "T[i]=k"
},
{
"math_id": 26,
"text": "T[i]=-1 \\text{ for } i = 1\\ldots n"
},
{
"math_id": 27,
"text": "j=1,\\ldots,m"
},
{
"math_id": 28,
"text": "S_j"
},
{
"math_id": 29,
"text": "i \\in S_j"
}
] | https://en.wikipedia.org/wiki?curid=9124553 |
912495 | Fall factor | Mathematical ratio relevant to climbing safety
In lead climbing using a dynamic rope, the fall factor (f) is the ratio between the height ("h") a climber falls before the climber's rope begins to stretch and the rope length ("L") available to absorb the energy of the fall,
formula_0
It is the main factor determining the violence of the forces acting on the climber and the gear.
As a numerical example, consider a fall of 20 feet that occurs with 10 feet of rope out (i.e., the climber has placed no protection and falls from 10 feet above the belayer to 10 feet below—a factor 2 fall). This fall produces far more force on the climber and the gear than if a similar 20 foot fall had occurred 100 feet above the belayer. In the latter case (a fall factor of 0.2), the rope acts like a bigger, longer rubber band, and its stretch more effectively cushions the fall.
Sizes of fall factors.
The smallest possible fall factor is zero. This occurs, for example, in a top-rope fall onto a rope with no slack. The rope stretches, so although "h" = 0, there is still a fall.
When climbing from the ground up, the maximum possible fall factor is 1, since any greater fall would mean that the climber hit the ground.
In multi-pitch climbing (and big wall climbing), or in any climb where a leader starts from a position on an exposed ledge well above the ground, a fall factor in lead climbing can be as high as 2. This can occur only when a lead climber who has placed no protection falls past the belayer (falling twice the length of rope between them), or past the anchor if the climber is solo climbing the route using a self-belay. As soon as the climber clips the rope into protection above the belay, the fall factor drops below 2.
In falls occurring on a via ferrata, fall factors can be much higher. This is possible because the length of rope between the harness and the carabiner is short and fixed, while the distance the climber can fall depends on the gaps between anchor points of the safety cable (i.e. the climber's lanyard will fall down the safety cable until it reaches an anchor point); to mitigate this, via ferrata climbers can use energy absorbers.
Derivation and impact force.
The impact force is defined as the maximum tension in the rope when a climber falls. We first state an equation for this quantity and describe its interpretation, and then show its derivation and how it can be put into a more convenient form.
Equation for the impact force and its interpretation.
When modeling the rope as an undamped harmonic oscillator (HO) the impact force "Fmax" in the rope is given by:
formula_1
where "mg" is the climber's weight, "h" is the fall height and "k" is the spring constant of the portion of the rope that is in play.
We will see below that when varying the height of the fall while keeping the fall factor fixed, the quantity "hk" stays constant.
There are two factors of two involved in the interpretation of this equation. First, the maximum force on the top piece of protection is roughly 2"Fmax", since the gear acts as a simple pulley. Second, it may seem strange that even when "f=0", we have "Fmax"=2"mg" (so that the maximum force on the top piece is approximately 4"mg"). This is because a factor-zero fall is still a fall onto a slack rope. The average value of the tension over a full cycle of harmonic oscillation will be "mg", so that the tension will cycle between 0 and 2"mg".
Derivation of the equation.
Conservation of energy at rope's maximum elongation "xmax" gives
formula_2
The maximum force on the climber is "Fmax-mg". It is convenient to express things in terms of the elastic modulus "E" = "k L/q", which is a property of the material that the rope is constructed from; here "L" is the rope's length and "q" its cross-sectional area. Since "k" = "Eq/L", the product "hk" equals "Eqh/L" = "Eqf", and solution of the quadratic gives
formula_3
Other than fixed properties of the system, this form of the equation shows that the impact force depends only on the fall factor.
To use the HO model to obtain the impact force of real climbing ropes as a function of fall height "h" and climber's weight "mg", one must know the experimental value for "E" of a given rope. However, rope manufacturers give only the rope's impact force "F0" and its static and dynamic elongations, which are measured under standard UIAA fall conditions: a fall height "h0" of 2 × 2.3 m with an available rope length "L0" = 2.6 m leads to a fall factor "f0" = "h0/L0" = 1.77 and a fall velocity "v0" = (2"gh0")^(1/2) = 9.5 m/s at the end of falling the distance "h0". The mass "m0" used in the fall is 80 kg. Using these values to eliminate the unknown quantity "E" leads to an expression of the impact force as a function of arbitrary fall heights "h", arbitrary fall factors "f", and arbitrary gravity "g" of the form:
formula_4
Note that retaining "g0" from the UIAA-based derivation of "Eq" in the above "Fmax" formula ensures that the expression remains valid in different gravity fields, for example on a slope making an angle of less than 90 degrees with the horizontal. This simple undamped harmonic oscillator model of a rope does not, however, correctly describe the entire fall process of real ropes. Accurate measurements of the behaviour of a climbing rope during the entire fall can be explained if the undamped harmonic oscillator is complemented by a non-linear term up to the maximum impact force, and then, near the maximum force in the rope, internal friction in the rope is added that ensures the rapid relaxation of the rope to its rest position.
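A short numerical sketch of this last formula is given below. The rope rating "F0" = 8.9 kN used here is an assumed, typical single-rope value, not a figure from this article; by construction the function returns exactly "F0" when evaluated at the UIAA conditions ("m" = "m0", "g" = "g0", "f" = "f0").

```python
import math

def impact_force(m, f, F0=8.9e3, m0=80.0, f0=1.77, g=9.81, g0=9.81):
    # F_max = mg + sqrt((mg)^2 + F0*(F0 - 2*m0*g0)*(m/m0)*(g/g0)*(f/f0))
    mg = m * g
    return mg + math.sqrt(mg ** 2 + F0 * (F0 - 2 * m0 * g0) * (m / m0) * (g / g0) * (f / f0))

# Same 80 kg climber, same rope: the tension depends on the fall factor, not on the fall height.
for f in (0.2, 1.0, 1.77, 2.0):
    print(f"fall factor {f}: roughly {impact_force(80.0, f) / 1000:.1f} kN of rope tension")
```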
Effect of friction.
When the rope is clipped into several carabiners between the climber and the belayer, an additional type of friction occurs, the so-called dry friction between the rope and particularly the last clipped carabiner. "Dry" friction (i.e., a frictional force that is velocity-independent) leads to an effective rope length smaller than the available length "L" and thus increases the impact force. | [
{
"math_id": 0,
"text": "f = \\frac{h}{L}."
},
{
"math_id": 1,
"text": "F_{max} = mg + \\sqrt{(mg)^2 + 2mghk},"
},
{
"math_id": 2,
"text": " mgh = \\frac{1}{2}kx_{max}^2 - mgx_{max}\\ ; \\ F_{max} = k x_{max}. "
},
{
"math_id": 3,
"text": "F_{max} = mg + \\sqrt{(mg)^2 + 2mgEqf}."
},
{
"math_id": 4,
"text": "F_{max} = mg + \\sqrt{(mg)^2 + F_0(F_0-2m_0g_0)\\frac{m}{m_0}\\frac{g}{g_0}\\frac{f}{f_0}} "
}
] | https://en.wikipedia.org/wiki?curid=912495 |
9126127 | Fair computational tree logic | Fair computational tree logic is conventional computational tree logic studied with explicit fairness constraints.
Weak fairness / justice.
This imposes conditions such as: all processes execute infinitely often. If you consider the processes to be Pi, then the condition becomes:
formula_0
Strong fairness / compassion.
Here, if a process is requesting a resource infinitely often (R), it should be allowed to get the resource (C) infinitely often:
formula_1
Model checking for fair CTL.
Consider a Kripke model with a set "F" of fairness constraints. A path formula_2 is considered a fair path if and only if the path includes all members of "F" infinitely often.
Fair CTL model checking restricts the checks to only fair paths. There are two kinds of fair quantifiers:
1. Mf, si |= Aformula_3 if and only if formula_3 holds in "all" fair paths.
2. Mf, si |= Eformula_3 if and only if formula_3 holds in "one or more" fair paths.
A fair state is one from which at least one fair path originates. This translates to Mf, s |= EGtrue.
SCC-based approach.
A strongly connected component (SCC) of a directed graph is a maximal strongly connected subgraph—all the nodes are reachable from each other. A fair SCC is one that has an edge into at least one node for each of the fair conditions.
To check fair EG for a formula φ: restrict the model to the states satisfying φ, compute the strongly connected components of the restricted graph, keep only the fair SCCs, and return every state from which a fair SCC is reachable within the restricted graph.
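A compact Python sketch of this check is given below, using the networkx library only for SCC computation and reachability; the function name, the argument layout, and the choice of networkx are illustrative assumptions, not taken from any particular model checker.

```python
import networkx as nx

def fair_eg(G, phi_states, fairness_sets):
    """States satisfying fair EG phi.

    G: nx.DiGraph of the Kripke structure's transitions.
    phi_states: states satisfying phi.
    fairness_sets: iterable of sets of states (the fairness constraints).
    """
    H = G.subgraph(phi_states)
    fair_scc_states = set()
    for scc in nx.strongly_connected_components(H):
        # an infinite fair path needs a cycle: require a nontrivial SCC or a self-loop
        nontrivial = len(scc) > 1 or any(H.has_edge(s, s) for s in scc)
        if nontrivial and all(scc & F for F in fairness_sets):
            fair_scc_states |= scc
    # states of the restricted graph that can reach a fair SCC (including the SCC itself)
    result = set(fair_scc_states)
    for s in fair_scc_states:
        result |= nx.ancestors(H, s)
    return result
```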
Emerson Lei algorithm.
The fixed point characterization of EG is given by [EGφ] = νZ.([φ] ∩ [EX Z]), the greatest fixed point reached by iteration as in Kleene's fixed-point theorem. Restricted to fair paths, this becomes [EfGφ] = νZ.([φ] ∩ ⋂Fi∈F [EX E(Z U (Z ∧ Fi))]), meaning that φ holds in the current state and that, along the path, one keeps passing through states of Z until a state of Z satisfying the fairness condition Fi is reached, for every Fi in the set of fairness constraints F. In other words, the condition amounts to repeatedly reaching a sort of accepting point, where the accepting condition is the entire set of fair conditions. | [
{
"math_id": 0,
"text": "\\bigwedge GFP_{i}"
},
{
"math_id": 1,
"text": "\\bigwedge( GFR \\longrightarrow GFC)"
},
{
"math_id": 2,
"text": "\\pi = s_o, s_1 \\dots"
},
{
"math_id": 3,
"text": "\\phi"
}
] | https://en.wikipedia.org/wiki?curid=9126127 |
9126665 | Square root of 3 | Unique positive real number which when multiplied by itself gives 3
The square root of 3 is the positive real number that, when multiplied by itself, gives the number 3. It is denoted mathematically as formula_0 or formula_1. It is more precisely called the principal square root of 3 to distinguish it from the negative number with the same property. The square root of 3 is an irrational number. It is also known as Theodorus' constant, after Theodorus of Cyrene, who proved its irrationality.
In 2013, its numerical value in decimal notation was computed to ten billion digits. Its decimal expansion, written here to 65 decimal places, is given by OEIS:
The fraction formula_2 (...) can be used as a good approximation. Despite having a denominator of only 56, it differs from the correct value by less than formula_3 (approximately formula_4, with a relative error of formula_5). The rounded value of 1.732 is correct to within 0.01% of the actual value.
The fraction formula_6 (...) is accurate to formula_7.
Archimedes reported a range for its value: formula_8.
The upper limit formula_9 is an accurate approximation for formula_10 to formula_11 (six decimal places, relative error formula_12) and the lower limit formula_13 to formula_14 (four decimal places, relative error formula_15).
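These error claims are easy to check numerically; the following throwaway Python snippet (not part of the article) prints the signed error and the relative error of each fraction mentioned above.

```python
import math
from fractions import Fraction

s3 = math.sqrt(3)
for frac in (Fraction(97, 56), Fraction(716035, 413403),
             Fraction(1351, 780), Fraction(265, 153)):
    err = float(frac) - s3
    print(f"{frac}: error {err:+.2e}, relative error {err / s3:+.2e}")
```

A positive error means the fraction overshoots formula_0; running the snippet confirms that formula_9 lies above formula_10 while formula_13 lies below it.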
Expressions.
It can be expressed as the continued fraction [1; 1, 2, 1, 2, 1, 2, 1, …] (sequence in the OEIS).
A related matrix identity yields the same value: if
formula_16
then, as formula_17,
formula_18
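The convergence of this matrix recipe is easy to observe numerically. Below is a small Python sketch (an illustration only; the function name is invented) that builds the "n"-th power of the matrix by repeated multiplication and prints the estimate 2·"a"22/"a"12 - 1 next to the true value.

```python
import math

def sqrt3_from_matrix(n):
    # a11..a22 hold the entries of [[1, 2], [1, 3]] raised to the n-th power
    a11, a12, a21, a22 = 1, 2, 1, 3
    for _ in range(n - 1):
        a11, a12 = a11 + a12, 2 * a11 + 3 * a12
        a21, a22 = a21 + a22, 2 * a21 + 3 * a22
    return 2 * a22 / a12 - 1

for n in (2, 5, 10, 20):
    print(n, sqrt3_from_matrix(n), math.sqrt(3))
```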
It can also be expressed by generalized continued fractions such as
formula_19
which is [1; 1, 2, 1, 2, 1, 2, 1, …] evaluated at every second term.
Geometry and trigonometry.
The square root of 3 can be found as the leg length of an equilateral triangle that encompasses a circle with a diameter of 1.
If an equilateral triangle with sides of length 1 is cut into two equal halves by bisecting an internal angle so that the bisector meets the opposite side at a right angle, then the resulting right triangle's hypotenuse has length one and its legs have lengths formula_20 and formula_21. From this, formula_22, formula_23, and formula_24.
The square root of 3 also appears in algebraic expressions for various other trigonometric constants, including the sines of 3°, 12°, 15°, 21°, 24°, 33°, 39°, 48°, 51°, 57°, 66°, 69°, 75°, 78°, 84°, and 87°.
It is the distance between parallel sides of a regular hexagon with sides of length 1.
It is the length of the space diagonal of a unit cube.
The vesica piscis has a minor axis to major axis ratio equal to formula_25. This can be shown by constructing two equilateral triangles within it.
Other uses and occurrence.
Power engineering.
In power engineering, the voltage between two phases in a three-phase system equals formula_0 times the line to neutral voltage. This is because any two phases are 120° apart, and two points on a circle 120 degrees apart are separated by formula_0 times the radius (see geometry examples above).
Special functions.
It is known that most roots of the "n"th derivatives of formula_26 (where n < 18 and formula_27 is the Bessel function of the first kind of order formula_28) are transcendental. The only exceptions are the numbers formula_29, which are the algebraic roots of both formula_30 and formula_31.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sqrt {3}"
},
{
"math_id": 1,
"text": "3^{1/2}"
},
{
"math_id": 2,
"text": "\\frac{97}{56}"
},
{
"math_id": 3,
"text": "\\frac {1}{10,000}"
},
{
"math_id": 4,
"text": "9.2\\times 10^{-5}"
},
{
"math_id": 5,
"text": "5\\times 10^{-5}"
},
{
"math_id": 6,
"text": "\\frac {716,035}{413,403}"
},
{
"math_id": 7,
"text": "1\\times 10^{-11}"
},
{
"math_id": 8,
"text": "(\\frac{1351}{780})^{2}>3>(\\frac{265}{153})^{2}\n"
},
{
"math_id": 9,
"text": "\\frac {1351}{780}"
},
{
"math_id": 10,
"text": "\\sqrt {3}"
},
{
"math_id": 11,
"text": "\\frac {1}{608,400}"
},
{
"math_id": 12,
"text": "3 \\times 10^{-7}"
},
{
"math_id": 13,
"text": "\\frac {265}{153}"
},
{
"math_id": 14,
"text": "\\frac {2}{23,409}"
},
{
"math_id": 15,
"text": "1\\times 10^{-5}"
},
{
"math_id": 16,
"text": "\\begin{bmatrix}1 & 2 \\\\1 & 3 \\end{bmatrix}^n = \\begin{bmatrix}a_{11} & a_{12} \\\\a_{21} & a_{22} \\end{bmatrix}"
},
{
"math_id": 17,
"text": "n\\to\\infty"
},
{
"math_id": 18,
"text": " \\sqrt{3} = 2 \\cdot \\frac{a_{22}}{a_{12}} -1 "
},
{
"math_id": 19,
"text": " [2; -4, -4, -4, ...] = 2 - \\cfrac{1}{4 - \\cfrac{1}{4 - \\cfrac{1}{4 - \\ddots}}}"
},
{
"math_id": 20,
"text": "\\frac{1}{2}"
},
{
"math_id": 21,
"text": "\\frac{\\sqrt{3}}{2}"
},
{
"math_id": 22,
"text": "\\tan{60^\\circ}=\\sqrt{3}"
},
{
"math_id": 23,
"text": "\\sin{60^\\circ}=\\frac {\\sqrt{3}}{2}"
},
{
"math_id": 24,
"text": "\\cos{30^\\circ}=\\frac {\\sqrt{3}}{2}"
},
{
"math_id": 25,
"text": "1:\\sqrt{3}"
},
{
"math_id": 26,
"text": "J_\\nu^{(n)}(x)"
},
{
"math_id": 27,
"text": "J_\\nu(x)"
},
{
"math_id": 28,
"text": "\\nu"
},
{
"math_id": 29,
"text": "\\pm\\sqrt{3}"
},
{
"math_id": 30,
"text": "J_1^{(3)}(x)"
},
{
"math_id": 31,
"text": "J_0^{(4)}(x)"
}
] | https://en.wikipedia.org/wiki?curid=9126665 |