Cellulose synthase (GDP-forming) - Wikipedia
In enzymology, a cellulose synthase (GDP-forming) (EC 2.4.1.29) is an enzyme that catalyzes the chemical reaction
GDP-glucose + (1,4-beta-D-glucosyl)n ⇌ GDP + (1,4-beta-D-glucosyl)n+1
Thus, the two substrates of this enzyme are GDP-glucose and (1,4-beta-D-glucosyl)n, whereas its two products are GDP and (1,4-beta-D-glucosyl)n+1.
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is GDP-glucose:1,4-beta-D-glucan 4-beta-D-glucosyltransferase. Other names in common use include cellulose synthase (guanosine diphosphate-forming), cellulose synthetase, guanosine diphosphoglucose-1,4-beta-glucan glucosyltransferase, and guanosine diphosphoglucose-cellulose glucosyltransferase. This enzyme participates in starch and sucrose metabolism.
As of August 2019, no proteins with this activity are known in the UniProt/NiceZyme or Gene Ontology databases.
Flowers HM, Batra KK, Kemp J, Hassid WZ (1969). "Biosynthesis of cellulose in vitro from guanosine diphosphate D-glucose with enzymic preparations from Phaseolus aureus and Lupinus albus". J. Biol. Chem. 244 (18): 4969–74. PMID 5824571.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Cellulose_synthase_(GDP-forming)&oldid=917349013"
|
Mannokinase - Wikipedia
In enzymology, a mannokinase (EC 2.7.1.7) is an enzyme that catalyzes the chemical reaction
ATP + D-mannose ⇌ ADP + D-mannose 6-phosphate
Thus, the two substrates of this enzyme are ATP and D-mannose, whereas its two products are ADP and D-mannose 6-phosphate.
This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing groups (phosphotransferases) with an alcohol group as acceptor. The systematic name of this enzyme class is ATP:D-mannose 6-phosphotransferase. Other names in common use include mannokinase (phosphorylating), and D-fructose (D-mannose) kinase. This enzyme participates in fructose and mannose metabolism.
Bueding E, MacKinnon JA (1955). "Hexokinases of Schistosoma mansoni". J. Biol. Chem. 215 (2): 495–506. PMID 13242546.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Mannokinase&oldid=917518783"
|
GetLastResult - Maple Help
get the last computed value from a remote parallel compute node
GetLastResult(node)
The GetLastResult command gets the last result computed on the specified grid node.
The last result could be any activity initiated by the Launch, Run, Set, or Get commands.
This command can only be used when a node is finished computing. If the specified node is not yet done computing, GetLastResult will block and wait for the computation to finish.
The GetLastResult command is only available in local Grid mode.
Grid:-Run(0, int, [x, x])
Grid:-Run(1, int, ['sin(x)', x])
Grid:-GetLastResult(1)
-cos(x)
Grid:-GetLastResult(0)
x^2/2
The NULL value is a valid result that can be returned; the print command, for example, returns NULL. Note that without waiting, you may not see the displayed output in sequence. The displayed output should not be confused with the returned result.
Grid:-Run(1, print, [42])
r := Grid:-GetLastResult(1)
evalb(r = NULL)
true
The Grid[GetLastResult] command was introduced in Maple 2015.
Grid:-Get
|
From Asa Gray 18–19 August 1862
The dawn of hope in yours of July 23d, so brightened in your last, of the 28th gave us heart-felt joy.1 Thank God your dear boy is convalescent, by this time, we trust so decidedly so as to give you full relief from all anxiety. Really, if one can give so much satisfaction at so cheap a rate, one would become a stamp collector for the purpose of supplying the good fellow.2
When he gets quite well he is to write me a note and let me know what American stamps he lacks, and I will see that some of our young people help him. Tell him that so far, he is not so much indebted to the Harvard professor as to a venerable French gentleman of Philadelphia, Mr. Durand.—aged something like 75.3
I have lost the time in which I was to write you this week.—and can send only a hasty line. My old friend Dr. Torrey4 is expected to-morrow to visit us for several days, and I must clear up matters before his coming. But I shall scold him for not having supplied me with Specularia perfoliata, with the “precociously fertilized flowers”. (It seems a good term,—it express the fact.)5
Since my last a few Orchids have come in, e.g. our White Fringed Orchis P. blephariglottis.—but rather too old,—and while I was out, after sunset, a friend has brought me P. ciliaris—the Yellow Fringed O.— which I shall look at to-morrow.
I will record notes, and try to put on record the most important,—so it will not be worth while to send you so many details in the raw state.6
If you so wish it, perhaps I may make a note of some obs. elsewhere— But I like to stick such things into my Notices & Reviews.7 No matter if they are overlooked by the outsiders. If you know of them, or any one else who really wants to use such observations, that is all I care. I rather like to stick these farthing candles under the bushel,—and the same of larger lights, to a degree.
The Sept. no. of Sill. Jour is so delayed that your enclosures were back in time. I was sitting down to use them, when Silliman wrote me that I had already sent him more matter than he could print.8
And so, I was not sorry to let the Orchids go by till the Nov. no.—where I hope to use the cuts you sent me,9 and give some gossiping observations on several American orchideæ. But these must be regarded as only rapid reconnaisance.— All should be held subject to confirmation or the reverse by more careful & reiterated observations—
But I have two points, and both, I know, will please you.
1. Goodyera repens. Before reading your remark on p. 114 line 11, —i.e. having forgotten it.—I have confirmed it.10 It is very distinct, the difference between the early position when the proboscis must hit gland & remove pollinia, but cannot thrust pollinia down to stigma.—and the later position, as seen in flowers lower down the spike. Where there is room, the stigma is in sight on looking down the flower, and the pollinia on a pin go down to stigma as sure as can be.— But the difference is only slightly, if at all owing to any downward movement of the labellum.; it comes from a backward movement of the column, which becomes more erect.
2. I have another, and a different case of close-self-fertilization,—in Gymnadenia tridentata. The arrangement different from P. hyperborea,—11 and indeed, I do not see just how the pollen so surely gets on to the stigmas—on to an arm of stigma carried up each side of anther, & one between the cells. But they get pollen packets in the bud, and pollen-tubes are emitted abundantly.— It is most interesting case yet.—such determination to self-fertilize and yet I suspect pollinia are often removed by insects, & cross-fertilize occasionally. I will describe this.12 But it must be looked at more particularly next year.
19th | The charm of Platanthera ciliaris & blephariglottis is—the very long & narrow arms to the stigma, which, with the anterior portion of the anther containing the long caudicle, are like a stalk bearing the (small) disk on the extremity.— these thrust forward to meet the head of insect, who cannot get in the whole length of proboscis in to the long (1¼ inch) straight spur, before his face will come in to contact with the glands.
Unfortunately for my experiments, the little glands, so far projecting, dry up sooner than usual, & loose their viscidity, & the stalk its power of depression.
The slight differences between the two species—the white and the yellow—are interesting; but I will not trouble you now, as I keep notes on them.13
Torrey is here. He tells me he could not find any pollen-tubes emitted from Specularia perfoliata— the early fertilised flowers—while the pollen was still in the anthers.14
Good bye, in haste, Ever Yours | A. Gray
1.1 The dawn … reconnaisance.— 8.3] crossed pencil
10.1 1. Goodyera] opening square bracket, pencil
15.1 Torrey … anthers. 15.3] double scored pencil
Top of first page: ‘Keep’ circled pencil; ‘Orchids’ pencil; ‘Goodyera’ red crayon, enclosed in parentheses, double underl
Gray refers to Leonard Darwin’s gradual recovery from scarlet fever (see letters to Asa Gray, 23[–4] July [1862] and 28 July [1862]).
At CD’s request, Gray had sent a number of stamps from the United States for Leonard’s collection (see letter to Asa Gray, 10–20 June [1862], and letters from Asa Gray, 2–3 July 1862, 15 July [1862], and 21 July 1862). See also letters to Asa Gray, 23[–4] July [1862] and 28 July [1862].
Gray refers to the botanist and pharmacist, Elias Durand. Gray was Fisher Professor of natural history at Harvard University.
Gray refers to John Torrey, his former mentor and botanical collaborator.
In his letter to CD of 2–3 July 1862, Gray promised to obtain specimens of this species from Torrey, and to observe the behaviour of the pollen-tubes in those flowers that underwent ‘precocious fertilization’ (later known as cleistogamy). In reply, CD commented that the phenomenon seemed ‘too remarkable to be called “precocious flowering’” (letter to Asa Gray, 23[–4] July [1862]).
Platanthera blephariglottis and P. ciliaris are described in A. Gray 1862b, p. 424.
In the letters to Asa Gray, 23[–4] July [1862] and 28 July [1862], CD argued that Gray’s observations on American species of orchids were too good to be reported only in a review of Orchids, and that Gray should publish at least some of them separately.
Gray refers to his notes on American species of orchids, returned by CD with the letter to Asa Gray, 23[–4] July [1862]. Gray intended to include some of his observations in a follow-up article to his review of Orchids (A. Gray 1862b), to be published in the monthly American Journal of Science and Arts, edited by Benjamin Silliman Jr, and commonly referred to as ‘Silliman’s journal’.
CD had arranged, at Gray’s request, for John Murray to send Gray electrotype plates of three of the illustrations from Orchids, figuring Orchis mascula and O. pyramidalis, for reproduction in Gray’s review of the book (A. Gray 1862a; see letter from Asa Gray, 18 May 1862, and letter to Asa Gray, 10–20 June [1862]). The plates arrived too late for use in Gray’s review (see letter from Asa Gray, 21 July 1862 and nn. 3 and 4), but appeared in the follow-up article, published in the November number of the American Journal of Science and Arts (A. Gray 1862b).
In Orchids, p. 114, CD stated that in Goodyera repens the passage into the flower between the rostellum and labellum was contracted. He reported that, by analogy with Spiranthes autumnalis, he suspected the labellum moved ‘further from the column in mature flowers, in order to allow insects, with the pollinia adhering to their heads or probosces, to enter the flower more freely.’ Gray gave his observations on this point in A. Gray 1862b, p. 427; CD cited Gray’s confirmation of his view in ‘Fertilization of orchids’, p. 151 (Collected papers 2: 148).
Gray sent CD notes on Platanthera hyperborea with his letter of 2–3 July 1862; although the notes have not been found, it is clear from CD’s response in the letter to Asa Gray, 23[–4] July [1862], that Gray had told him that the species was often self-pollinated. In Orchids, one of CD’s purposes was to demonstrate that the ‘main object’ of the various ‘contrivances by which Orchids are fertilised’ was cross-fertilisation (p. 1), and he noted only one exception (p. 359). CD included P. hyperborea and Gymnadenia tridentata on an undated list of ‘self-fertilisers’ that is now in DAR 70: 167; he also included a modified discussion of the occurrence of ‘self-fertilisation’ in orchids in Orchids 2d ed., pp. 288–93.
Gray discussed Gymnadenia tridentata in A. Gray 1862c, p. 260 n. and A. Gray 1862b, p. 426. CD made undated notes referring himself to the latter account (DAR 70: 8, 17); he cited Gray’s observations in ‘Fertilization of orchids’, p. 147 (Collected papers 2: 144).
Notes and observations on orchids.
|
(acetyl-CoA carboxylase)-phosphatase - Wikipedia
In enzymology, a [acetyl-CoA carboxylase]-phosphatase (EC 3.1.3.44) is an enzyme that catalyzes the chemical reaction
[acetyl-CoA carboxylase] phosphate + H2O ⇌ [acetyl-CoA carboxylase] + phosphate
Thus, the two substrates of this enzyme are acetyl-CoA carboxylase phosphate and H2O, whereas its two products are acetyl-CoA carboxylase and phosphate.
This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric monoester bonds. The systematic name of this enzyme class is [acetyl-CoA:carbon-dioxide ligase (ADP-forming)]-phosphate phosphohydrolase.
Krakower GR, Kim KH (March 1981). "Purification and properties of acetyl-CoA carboxylase phosphatase". The Journal of Biological Chemistry. 256 (5): 2408–13. PMID 6257718.
Retrieved from "https://en.wikipedia.org/w/index.php?title=(acetyl-CoA_carboxylase)-phosphatase&oldid=917309835"
|
Vapor pressure/Citable Version - Citizendium
< Vapor pressure
This version approved either by the Approvals Committee, or an Editor from at least one of the listed workgroups. The Chemistry, Physics and Engineering Workgroups are responsible for this citable version. While we have done conscientious work, we cannot guarantee that this version is wholly free of mistakes. See here (not History) for authorship.
Vapor pressure (also known as equilibrium vapor pressure) is the pressure of a vapor in equilibrium with its liquid or solid phase.[1][2] At any given temperature, for a specific substance, there is a pressure at which the gas of that specific substance is in dynamic equilibrium with its liquid or solid forms. This is the vapor pressure of the specific substance at that temperature.
The Antoine equation [6][7] is a mathematical expression of the relation between the vapor pressure and the temperature of pure substances. The basic form of the equation is:

{\displaystyle \log P=A-{\frac {B}{C+T}}}

and it can be rearranged to make the temperature explicit:

{\displaystyle T={\frac {B}{A-\log P}}-C}

where P is the absolute vapor pressure, T is the temperature, and A, B, and C are substance-specific coefficients. Depending on the source of the coefficients, {\displaystyle \log } denotes either {\displaystyle \log _{10}} or {\displaystyle \log _{e}}. A simpler form with only two coefficients is sometimes used:

{\displaystyle \log P=A-{\frac {B}{T}}}

with the corresponding temperature-explicit form {\displaystyle T={\frac {B}{A-\log P}}}.
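As a small illustration of the Antoine equation and its temperature-explicit inverse, here is a sketch using base-10 logarithms; the constants for water (A = 8.07131, B = 1730.63, C = 233.426, giving P in mmHg with T in °C) are commonly cited fitted values and serve only as an example:

```python
import math

def antoine_pressure(T, A, B, C):
    # Antoine equation: log10(P) = A - B / (C + T)
    return 10.0 ** (A - B / (C + T))

def antoine_temperature(P, A, B, C):
    # Temperature-explicit form: T = B / (A - log10(P)) - C
    return B / (A - math.log10(P)) - C

# Illustrative fitted constants for water (P in mmHg, T in degrees Celsius).
A, B, C = 8.07131, 1730.63, 233.426
p_100 = antoine_pressure(100.0, A, B, C)  # close to 760 mmHg (1 atm)
```

Since the two forms are exact algebraic inverses, round-tripping a pressure through both functions recovers the original temperature.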
For an ideal mixture of liquids, the total vapor pressure is the mole-fraction-weighted sum of the pure-component vapor pressures (Raoult's law):

{\displaystyle P=\sum _{i}P_{i}^{o}\,x_{i}}

where P is the vapor pressure of the liquid mixture, {\displaystyle P_{i}^{o}} is the vapor pressure of pure component i in the liquid mixture, and {\displaystyle x_{i}} is the mole fraction of component i in the liquid mixture. The product {\displaystyle P_{i}^{o}\,x_{i}} is the partial pressure {\displaystyle p_{i}} of component i in the liquid mixture.
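The mixture formula is a simple weighted sum; a minimal sketch, with illustrative pure-component pressures close to benzene and toluene near room temperature (the numbers are examples, not authoritative data):

```python
def mixture_vapor_pressure(pure_pressures, mole_fractions):
    # P = sum_i P_i^o * x_i for an ideal liquid mixture
    if abs(sum(mole_fractions) - 1.0) > 1e-9:
        raise ValueError("mole fractions must sum to 1")
    return sum(p0 * x for p0, x in zip(pure_pressures, mole_fractions))

def partial_pressures(pure_pressures, mole_fractions):
    # p_i = P_i^o * x_i
    return [p0 * x for p0, x in zip(pure_pressures, mole_fractions)]

# Equimolar mixture of two components with pure vapor pressures 95.1 and 28.4.
total = mixture_vapor_pressure([95.1, 28.4], [0.5, 0.5])
```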
There are a number of methods for calculating the sublimation pressure (i.e., the vapor pressure) of a solid. One method is to calculate sublimation pressures [8] from extrapolated liquid vapor pressures if the heat of fusion is known. The heat of fusion has to be added in addition to the heat of vaporization to vaporize a solid. Assuming that the heat of fusion is temperature-independent and ignoring additional transition temperatures between different solid phases, the sublimation pressure can be calculated using this version of the Clausius-Clapeyron equation:
{\displaystyle \log \,P_{\mathrm {solid} }^{S}=\log \,P_{\mathrm {liquid} }^{S}-{\frac {\Delta H_{m}}{R}}\left({\frac {1}{T}}-{\frac {1}{T_{m}}}\right)}

where {\displaystyle P_{\mathrm {solid} }^{S}} is the sublimation pressure of the solid at a temperature {\displaystyle T\!<T_{m}}, {\displaystyle P_{\mathrm {liquid} }^{S}} is the extrapolated vapor pressure of the liquid at {\displaystyle T\!<T_{m}}, {\displaystyle \Delta H_{m}} is the heat of fusion, R is the gas constant, T is the sublimation temperature, and {\displaystyle T_{m}} is the melting-point temperature. This gives a fair estimation for temperatures not too far from the melting point. The equation also shows that the sublimation pressure is lower than the extrapolated liquid vapor pressure (ΔHm is positive) and that the difference increases with increased distance from the melting point.
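This estimate can be sketched in code; taking the logarithms in the equation as natural (consistent with the Clausius–Clapeyron derivation), the pressure ratio becomes an exponential:

```python
import math

R_GAS = 8.314  # gas constant, J/(mol*K)

def sublimation_pressure(p_liquid_extrapolated, heat_of_fusion, T, T_melt):
    # log P_solid = log P_liquid - (dHm/R)(1/T - 1/Tm), with natural logs,
    # so P_solid = P_liquid * exp(-(dHm/R)(1/T - 1/Tm)).
    exponent = -(heat_of_fusion / R_GAS) * (1.0 / T - 1.0 / T_melt)
    return p_liquid_extrapolated * math.exp(exponent)
```

At the melting point the two pressures coincide; below it, the sublimation pressure falls below the extrapolated liquid vapor pressure, as the text notes.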
Relative humidity is defined in terms of the vapor pressure of water:

{\displaystyle \%RH={\Big (}{\frac {p_{\mathrm {water} }}{P_{\mathrm {water} }^{o}}}{\Big )}100}

where {\displaystyle \%RH} is the relative humidity in percent, {\displaystyle p_{\mathrm {water} }} is the partial pressure of water vapor in the air, and {\displaystyle P_{\mathrm {water} }^{o}} is the equilibrium vapor pressure of water at the prevailing temperature.
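The definition reduces to a single ratio; a trivial sketch (both pressures in the same units, values illustrative):

```python
def relative_humidity_percent(p_water, p_sat):
    # %RH = (p_water / P_water^o) * 100
    return (p_water / p_sat) * 100.0
```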
↑ What is the Antoine Equation? (Chemistry Department, Frostburg State University, Maryland)
↑ Bruce Moller, Jürgen Rarey and Deresh Ramjugernath (2008). "Estimation of the vapour pressure of non-electrolyte organic compounds via group contributions and group interactions". J.Molecular Liquids 143 (1): 52-63.
Retrieved from "https://citizendium.org/wiki/index.php?title=Vapor_pressure/Citable_Version&oldid=606762"
|
Mechanics of Indentation into Micro- and Nanoscale Forests of Tubes, Rods, or Pillars | J. Eng. Mater. Technol. | ASME Digital Collection
Wang, L., Ortiz, C., and Boyce, M. C. (December 3, 2010). "Mechanics of Indentation into Micro- and Nanoscale Forests of Tubes, Rods, or Pillars." ASME. J. Eng. Mater. Technol. January 2011; 133(1): 011014. https://doi.org/10.1115/1.4002648
The force-depth behavior of indentation into fibrillar-structured surfaces, such as those consisting of forests of micro- or nanoscale tubes or rods, is a depth-dependent behavior governed by compression, bending, and buckling of the nanotubes. Using a micromechanical model of the indentation process, the effective elastic properties of the constituent tubes or rods, as well as the effective properties of the forest, can be deduced from load-depth curves of indentation into forests. These studies provide fundamental understanding of the mechanics of indentation of nanotube forests, showing the potential to use indentation to deduce individual nanotube or nanorod properties as well as the effective indentation properties of such nanostructured surface coatings. In particular, the indentation behavior can be engineered by tailoring various forest features: the force-depth behavior scales linearly with tube areal density m (number per unit area), tube moment of inertia I, tube modulus E, and indenter radius R, and scales inversely with the square of tube length (L²). This provides guidelines for designing forests to meet indentation-stiffness targets or for energy-storage applications in microdevice designs.
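The stated scaling law can be expressed as a relative measure; this sketch omits the model-dependent prefactor, so only ratios between parameter choices are meaningful, and the function name is my own:

```python
def forest_stiffness_scale(m, E, I, R, L):
    # Relative force-depth scale for indentation into a tube forest:
    # linear in areal density m, modulus E, moment of inertia I, and
    # indenter radius R; inverse in the square of tube length L.
    return m * E * I * R / (L * L)
```

For example, doubling the tube length quarters the force scale, while doubling the areal density doubles it.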
bending, buckling, compressibility, elasticity, indentation, micromechanics, nanorods, nanotube forest, nanotubes
|
Tusi couple - Knowpia
The Tusi couple is a mathematical device in which a small circle rotates inside a larger circle twice the diameter of the smaller circle. Rotations of the circles cause a point on the circumference of the smaller circle to oscillate back and forth in linear motion along a diameter of the larger circle. The Tusi couple is a 2-cusped hypocycloid.
An animated model of a Tusi couple.
The couple was first proposed by the 13th-century Persian astronomer and mathematician Nasir al-Din al-Tusi in his 1247 Tahrir al-Majisti (Commentary on the Almagest) as a solution for the latitudinal motion of the inferior planets,[1] and later used extensively as a substitute for the equant introduced over a thousand years earlier in Ptolemy's Almagest.[2][3]
Tusi's diagram of the Tusi couple, 13th century[4]
Tusi described the curve as follows:
If two coplanar circles, the diameter of one of which is equal to half the diameter of the other, are taken to be internally tangent at a point, and if a point is taken on the smaller circle—and let it be at the point of tangency—and if the two circles move with simple motions in opposite direction in such a way that the motion of the smaller [circle] is twice that of the larger so the smaller completes two rotations for each rotation of the larger, then that point will be seen to move on the diameter of the larger circle that initially passes through the point of tangency, oscillating between the endpoints.[5]
Algebraically, this can be expressed with complex numbers as
{\displaystyle \left(1-{\frac {1}{2}}\right)e^{i\theta }-{\frac {1}{2}}e^{-i\theta }=i\,\sin \theta .}
Other commentators have observed that the Tusi couple can be interpreted as a rolling curve where the rotation of the inner circle satisfies a no-slip condition as its tangent point moves along the fixed outer circle.
The term "Tusi couple" is a modern one, coined by Edward Stewart Kennedy in 1966.[6] It is one of several late Islamic astronomical devices bearing a striking similarity to models in Nicolaus Copernicus's De revolutionibus, including his Mercury model and his theory of trepidation. Historians suspect that Copernicus or another European author had access to an Arabic astronomical text, but an exact chain of transmission has not yet been identified,[7] although the 16th century scientist and traveler Guillaume Postel has been suggested.[8][9]
Since the Tusi-couple was used by Copernicus in his reformulation of mathematical astronomy, there is a growing consensus that he became aware of this idea in some way. It has been suggested[10][11] that the idea of the Tusi couple may have arrived in Europe leaving few manuscript traces, since it could have occurred without the translation of any Arabic text into Latin. One possible route of transmission may have been through Byzantine science; Gregory Chioniades translated some of al-Tusi's works from Arabic into Byzantine Greek. Several Byzantine Greek manuscripts containing the Tusi-couple are still extant in Italy.[12]
There are other sources for this mathematical model for converting circular motions to reciprocating linear motion. It is found in Proclus's Commentary on the First Book of Euclid[13] and the concept was known in Paris by the middle of the 14th Century. In his questiones on the Sphere (written before 1362), Nicole Oresme described how to combine circular motions to produce a reciprocating linear motion of a planet along the radius of its epicycle. Oresme's description is unclear and it is not certain whether this represents an independent invention or an attempt to come to grips with a poorly understood Arabic text.[14]
Although the Tusi couple was developed within an astronomical context, later mathematicians and engineers developed similar versions of what came to be called hypocycloid straight-line mechanisms. The mathematician Gerolamo Cardano designed a system known as Cardan's movement (also known as a Cardan gear).[15] Nineteenth-century engineers James White,[16] Matthew Murray,[17] as well as later designers, developed practical applications of the hypocycloid straight-line mechanism.
Hypotrochoid
The ellipses (green, cyan, red) are hypotrochoids of the Tusi couple.
A property of the Tusi couple is that points on the inner circle that are not on the circumference trace ellipses. These ellipses, and the straight line traced by the classic Tusi couple, are special cases of hypotrochoids.[18]
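Both behaviors, the straight-line trace of the classic point and the elliptical traces of interior points, can be verified numerically; a minimal sketch using the sign convention of the identity above, with the fixed circle taken as the unit circle (the parametrization is my own):

```python
import cmath
import math

def tusi_point(theta, d=0.5):
    # Position of a point fixed at distance d from the center of a circle of
    # radius 1/2 rolling inside a unit circle. With this sign convention the
    # classic d = 1/2 point traces the vertical diameter, i*sin(theta);
    # d < 1/2 gives an ellipse with semi-axes (1/2 - d) and (1/2 + d).
    return 0.5 * cmath.exp(1j * theta) - d * cmath.exp(-1j * theta)
```

Expanding the expression shows why: the real part is (1/2 − d)·cos θ and the imaginary part is (1/2 + d)·sin θ, which degenerates to a line segment exactly when d = 1/2.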
Murray's Hypocycloidal Engine, utilising a Tusi couple as a substitute for crosshead guides or parallel motion
^ George Saliba (1995), 'A History of Arabic Astronomy: Planetary Theories During the Golden Age of Islam', pp.152-155
^ "Late Medieval Planetary Theory", E. S. Kennedy, Isis 57, #3 (Autumn 1966), 365-378, JSTOR 228366.
^ Vatican Library, Vat. ar. 319 fol. 28 verso math19 NS.15 Archived 2014-12-24 at the Wayback Machine, fourteenth-century copy of a manuscript from Tusi
^ Translated in F. J. Ragep, Memoir on Astronomy II.11 [2], pp. 194, 196.
^ E. S. Kennedy, "Late Medieval Planetary Theory," p. 370.
^ Saliba, George (1996), "Writing the History of Arabic Astronomy: Problems and Differing Perspectives", Journal of the American Oriental Society, 116 (4): 709–18, doi:10.2307/605441, JSTOR 605441 , pp. 716-17.
^ Whose Science is Arabic Science in Renaissance Europe? by George Saliba, Columbia University
^ George Saliba (April 27, 2006). "Islamic Science and the Making of Renaissance Europe". Retrieved 2008-03-01.
^ Veselovsky, I. N. (1973). "Copernicus and Nasir al-Din al-Tusi". Journal for the History of Astronomy. 4: 128–30. Bibcode:1973JHA.....4..128V. doi:10.1177/002182867300400205. S2CID 118453340.
^ Claudia Kren, "The Rolling Device," pp. 490-2.
^ Veselovsky, I. N. (1973). "Copernicus and Nasir al-Din al-Tusi". Journal for the History of Astronomy. 4: 128. Bibcode:1973JHA.....4..128V. doi:10.1177/002182867300400205. S2CID 118453340.
^ "Appleton's dictionary of machines, mechanics, engine work, and engineering". 1857.
^ "Polly Model Engineering: Stationary Engine Kits - Anthony Mount Models".
^ Brande, W.T. (1875), A Dictionary of Science, Literature, & Art, Longmans, Green, and Company, p. 181, retrieved 2017-04-10
Di Bono, Mario (1995). "Copernicus, Amico, Fracastoro and Tusi's Device: Observations on the Use and Transmission of a Model". Journal for the History of Astronomy. 26: 133–154. Bibcode:1995JHA....26..133D. doi:10.1177/002182869502600203. S2CID 118330488.
Kennedy, E. S. (1966). "Late Medieval Planetary Theory". Isis. 57 (3): 365–378. doi:10.1086/350144.
Kren, Claudia (1971). "The Rolling Device of Naṣir al-Dīn al-Ṭūsī in the De spera of Nicole Oresme". Isis. 62 (4): 490–498. doi:10.1086/350791.
Ragep, F. J. "The Two Versions of the Tusi Couple," in From Deferent to Equant: A Volume of Studies in the History of Science in Ancient and Medieval Near East in Honor of E. S. Kennedy, ed. David King and George Saliba, Annals of the New York Academy of Sciences, 500. New York Academy of Sciences, 1987. ISBN 0-89766-396-9 (pbk.)
Ragep, F. J. Nasir al-Din al-Tusi's "Memoir on Astronomy," Sources in the History of Mathematics and Physical Sciences,12. 2 vols. Berlin/New York: Springer, 1993. ISBN 3-540-94051-0 / ISBN 0-387-94051-0.
Dennis W. Duke, Ancient Planetary Model Animations includes two links of interest:
An interactive Tusi couple
Arabic models for replacing the equant
George Saliba, "Whose Science is Arabic Science in Renaissance Europe?" Discusses the model of Nasir al-Din al-Tusi and the interactions of Arabic, Greek, and Latin astronomers.
|
Harvey’s Espresso Express, a drive-through coffee stop, is famous for its great house coffee, a blend of Colombian and Mocha Java beans. Their archrival, Jojo’s Java, sent a spy to steal their ratio for blending beans. The spy returned with a torn part of an old receipt that showed only the total number of pounds and the total cost, 18 pounds for $92.07. At first Jojo was angry, but then he realized that he knew the price per pound of each kind of coffee ($4.89 for Colombian and $5.43 for Mocha Java). Show how he could use equations to figure out how many pounds of each type of beans Harvey’s used.
First, define variables you can use to write equations.
c = the number of pounds of Colombian coffee beans
m = the number of pounds of Mocha Java coffee beans
Next, set up an equation to represent the information about cost.
4.89c+5.43m=92.07
Now set up a second equation to show the total number of pounds.
c+m=18
Finally, solve your system of equations.
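The system (c + m = 18 for the pounds and 4.89c + 5.43m = 92.07 for the cost) can be solved by substitution; a quick check in code:

```python
# Known totals from the receipt and the two per-pound prices.
total_pounds = 18.0
total_cost = 92.07
price_colombian = 4.89
price_mocha_java = 5.43

# Substitute c = 18 - m into the cost equation:
# 4.89*(18 - m) + 5.43*m = 92.07  =>  m = (92.07 - 4.89*18) / (5.43 - 4.89)
m = (total_cost - price_colombian * total_pounds) / (price_mocha_java - price_colombian)
c = total_pounds - m
```

This gives m = 7.5 pounds of Mocha Java and c = 10.5 pounds of Colombian, which indeed cost $92.07 in total.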
|
VanderLaan Circulant Type Matrices (2015)
Hongyan Pan, Zhaolin Jiang
Circulant matrices have become a satisfactory tool in control methods for modern complex systems. In this paper, VanderLaan circulant type matrices are presented, which include VanderLaan circulant, left circulant, and g-circulant matrices. The nonsingularity of these special matrices is discussed via the surprising properties of VanderLaan numbers. The exact determinants of VanderLaan circulant type matrices are given by structuring transformation matrices, determinants of well-known tridiagonal matrices, and tridiagonal-like matrices. The explicit inverses of these special matrices are obtained by structuring transformation matrices, inverses of known tridiagonal matrices, and quasi-tridiagonal matrices. Three kinds of norms and a lower bound for the spread of VanderLaan circulant and left circulant matrices are given separately, and the spectral norm of the VanderLaan g-circulant matrix is obtained.
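As background for readers unfamiliar with the structures named in the abstract, a circulant matrix is determined entirely by its first row, with each subsequent row cyclically shifted; a minimal sketch of the circulant and left circulant patterns (generic entries, not tied to the VanderLaan sequence):

```python
def circulant(first_row):
    # n x n circulant: row i is the first row shifted i places to the right.
    n = len(first_row)
    return [[first_row[(j - i) % n] for j in range(n)] for i in range(n)]

def left_circulant(first_row):
    # Left circulant: row i is the first row shifted i places to the left.
    n = len(first_row)
    return [[first_row[(i + j) % n] for j in range(n)] for i in range(n)]
```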
Hongyan Pan. Zhaolin Jiang. "VanderLaan Circulant Type Matrices." Abstr. Appl. Anal. 2015 1 - 11, 2015. https://doi.org/10.1155/2015/329329
|
Geometric evolutions driven by threshold dynamics | EMS Press
Geometric evolutions driven by threshold dynamics
We study threshold dynamics on \R^n which satisfies monotonicity, translation invariance, and finite propagation speed. We develop general schemes for the convergence of threshold dynamics to geometric evolutions governed by a velocity function depending on the normal direction alone.
Minsu Song, Geometric evolutions driven by threshold dynamics. Interfaces Free Bound. 7 (2005), no. 3, pp. 303–318
|
Amplitude-shift keying - Wikipedia
Amplitude-shift keying (ASK) is a form of amplitude modulation that represents digital data as variations in the amplitude of a carrier wave. In an ASK system, a symbol, representing one or more bits, is sent by transmitting a fixed-amplitude carrier wave at a fixed frequency for a specific time duration. For example, if each symbol represents a single bit, then the carrier signal could be transmitted at nominal amplitude when the input value is 1, but transmitted at reduced amplitude or not at all when the input value is 0.
Any digital modulation scheme uses a finite number of distinct signals to represent digital data. ASK uses a finite number of amplitudes, each assigned a unique pattern of binary digits. Usually, each amplitude encodes an equal number of bits. Each pattern of bits forms the symbol that is represented by the particular amplitude. The demodulator, which is designed specifically for the symbol-set used by the modulator, determines the amplitude of the received signal and maps it back to the symbol it represents, thus recovering the original data. Frequency and phase of the carrier are kept constant.
Like AM, an ASK is also linear and sensitive to atmospheric noise, distortions, propagation conditions on different routes in PSTN, etc. Both ASK modulation and demodulation processes are relatively inexpensive. The ASK technique is also commonly used to transmit digital data over optical fiber. For LED transmitters, binary 1 is represented by a short pulse of light and binary 0 by the absence of light. Laser transmitters normally have a fixed "bias" current that causes the device to emit a low light level. This low level represents binary 0, while a higher-amplitude lightwave represents binary 1.
The simplest and most common form of ASK operates as a switch, using the presence of a carrier wave to indicate a binary one and its absence to indicate a binary zero. This type of modulation is called on-off keying (OOK), and is used at radio frequencies to transmit Morse code (referred to as continuous wave operation).
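The OOK scheme described above can be sketched in a few lines of Python; the carrier frequency, sample rate, and symbol duration below are arbitrary illustration values, not parameters from the article:

```python
import numpy as np

def ook_modulate(bits, fc=8.0, fs=100.0, symbol_time=1.0):
    """On-off keying: transmit a carrier burst for a 1, nothing for a 0."""
    n_samp = int(fs * symbol_time)           # samples per symbol
    t = np.arange(n_samp) / fs
    carrier = np.sin(2 * np.pi * fc * t)     # fixed amplitude, fixed frequency
    return np.concatenate([bit * carrier for bit in bits])

signal = ook_modulate([1, 0, 1, 1])          # 4 symbols -> 400 samples
```

Each zero bit produces a silent symbol interval, each one bit a full-amplitude carrier burst.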
ASK diagram
An ASK system can be divided into three blocks. The first represents the transmitter, the second is a linear model of the effects of the channel, and the third shows the structure of the receiver. The following notation is used:
ht(t) is the impulse response of the transmit filter (the carrier pulse)
hc(t) is the impulse response of the channel
n(t) is the noise introduced by the channel
hr(t) is the impulse response of the filter at the receiver
L is the number of levels that are used for transmission
Ts is the time between the generation of two symbols
Different symbols are represented with different voltages. If the maximum allowed value for the voltage is A, then all the possible values are in the range [−A, A], and they are given by

{\displaystyle v_{i}={\frac {2A}{L-1}}i-A;\quad i=0,1,\dots ,L-1}

The distance between adjacent voltage levels is therefore

{\displaystyle \Delta ={\frac {2A}{L-1}}}
Considering the picture, the symbols v[n] are generated randomly by the source S; the impulse generator then creates impulses with area v[n], which are fed to the filter ht and transmitted through the channel. In other words, for each symbol a carrier wave with the corresponding amplitude is sent.
Out of the transmitter, the signal s(t) can be expressed in the form:
{\displaystyle s(t)=\sum _{n=-\infty }^{\infty }v[n]\cdot h_{t}(t-nT_{s})}
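The level mapping v_i and the transmitted sum above can be sketched as follows; the rectangular carrier burst used for h_t and all numeric parameters are illustrative assumptions, not values from the article:

```python
import numpy as np

def ask_levels(L, A=1.0):
    """Level mapping v_i = (2A/(L-1)) * i - A, for i = 0..L-1."""
    return 2.0 * A / (L - 1) * np.arange(L) - A

def ask_transmit(symbols, L, A=1.0, fc=4.0, fs=80, Ts=1.0):
    """s(t) = sum_n v[n] h_t(t - n Ts), with h_t a one-symbol carrier burst."""
    v = ask_levels(L, A)[symbols]             # symbol indices -> voltages
    t = np.arange(int(fs * Ts)) / fs
    h_t = np.cos(2 * np.pi * fc * t)          # illustrative pulse shape
    return np.concatenate([vi * h_t for vi in v])

s = ask_transmit([0, 3, 1, 2], L=4)           # 4-level ASK, 4 symbols
```

For L = 4 and A = 1 the levels come out evenly spaced over [−1, 1], as the formula for v_i requires.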
At the receiver, after filtering through hr(t), the signal is:
{\displaystyle z(t)=n_{r}(t)+\sum _{n=-\infty }^{\infty }v[n]\cdot g(t-nT_{s})}
{\displaystyle {\begin{aligned}n_{r}(t)&=n(t)*h_{r}(t)\\g(t)&=h_{t}(t)*h_{c}(t)*h_{r}(t)\end{aligned}}}
where * indicates the convolution between two signals. After the A/D conversion the signal z[k] can be expressed in the form:
{\displaystyle z[k]=n_{r}[k]+v[k]g[0]+\sum _{n\neq k}v[n]g[k-n]}
In this relationship, the second term represents the symbol to be extracted. The others are unwanted: the first one is the effect of noise, the third one is due to the intersymbol interference.
If the filters are chosen so that g(t) will satisfy the Nyquist ISI criterion, then there will be no intersymbol interference and the value of the sum will be zero, so:
{\displaystyle z[k]=n_{r}[k]+v[k]g[0]}
Probability of error
The probability density function of having an error of a given size can be modelled by a Gaussian function; the mean value will be the relative sent value, and its variance will be given by:
{\displaystyle \sigma _{N}^{2}=\int _{-\infty }^{+\infty }\Phi _{N}(f)\cdot |H_{r}(f)|^{2}df}
where {\displaystyle \Phi _{N}(f)} is the spectral density of the noise within the band and Hr(f) is the continuous Fourier transform of the impulse response of the filter hr(t).
The probability of making an error is given by:
{\displaystyle P_{e}=P_{e|H_{0}}\cdot P_{H_{0}}+P_{e|H_{1}}\cdot P_{H_{1}}+\cdots +P_{e|H_{L-1}}\cdot P_{H_{L-1}}=\sum _{k=0}^{L-1}P_{e|H_{k}}\cdot P_{H_{k}}}
where {\displaystyle P_{e|H_{0}}} is the conditional probability of making an error given that the symbol v0 has been sent, and {\displaystyle P_{H_{0}}} is the probability of sending the symbol v0. If each symbol is equally likely to be sent, then

{\displaystyle P_{H_{i}}={\frac {1}{L}}}
If we represent all the probability density functions on the same plot against the possible values of the voltage to be transmitted, we get a picture like this (the particular case of {\displaystyle L=4} is shown):
The probability of making an error after a single symbol has been sent is the area of the Gaussian function falling under the functions for the other symbols. If we call {\displaystyle P^{+}} the area under one side of the Gaussian, the sum of all the areas will be {\displaystyle 2LP^{+}-2P^{+}}. The total probability of making an error can be expressed in the form:

{\displaystyle P_{e}=2\left(1-{\frac {1}{L}}\right)P^{+}}
We now have to calculate the value of {\displaystyle P^{+}}. To do so, we can move the origin of the reference wherever we want: the area below the function will not change. In this case,
{\displaystyle P^{+}=\int _{\frac {Ag(0)}{L-1}}^{\infty }{\frac {1}{{\sqrt {2\pi }}\sigma _{N}}}e^{-{\frac {x^{2}}{2\sigma _{N}^{2}}}}dx={\frac {1}{2}}\operatorname {erfc} \left({\frac {Ag(0)}{{\sqrt {2}}(L-1)\sigma _{N}}}\right)}
where {\displaystyle \operatorname {erfc} (x)} is the complementary error function. Putting all these results together, the probability of making an error is:
{\displaystyle P_{e}=\left(1-{\frac {1}{L}}\right)\operatorname {erfc} \left({\frac {Ag(0)}{{\sqrt {2}}(L-1)\sigma _{N}}}\right)}
This relationship is valid when there is no intersymbol interference, i.e. {\displaystyle g(t)} is a Nyquist function.
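The final error-probability formula can be evaluated directly with the standard library's erfc; g(0) and the numeric values below are illustrative choices:

```python
from math import erfc, sqrt

def ask_error_probability(A, L, sigma_n, g0=1.0):
    """P_e = (1 - 1/L) * erfc( A*g(0) / (sqrt(2)*(L-1)*sigma_N) )."""
    return (1.0 - 1.0 / L) * erfc(A * g0 / (sqrt(2.0) * (L - 1) * sigma_n))

# the error rate falls sharply as the level spacing grows relative to the noise
p_noisy = ask_error_probability(A=1.0, L=4, sigma_n=0.2)
p_clean = ask_error_probability(A=1.0, L=4, sigma_n=0.05)
```

Note the limiting behaviour: as the noise grows without bound, erfc(0) = 1 and the error probability tends to 1 − 1/L, i.e. a random guess among the L levels.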
Calculating the Sensitivity of an Amplitude Shift Keying (ASK) Receiver Archived 2009-08-29 at the Wayback Machine
Retrieved from "https://en.wikipedia.org/w/index.php?title=Amplitude-shift_keying&oldid=1071486355"
|
Classification loss for naive Bayes classifier - MATLAB loss - MathWorks India
Determine Test Sample Classification Loss of Naive Bayes Classifier
Determine Test Sample Logit Loss of Naive Bayes Classifier
Classification loss for naive Bayes classifier
L = loss(Mdl,tbl,ResponseVarName) returns the Classification Loss, a scalar representing how well the trained naive Bayes classifier Mdl classifies the predictor data in table tbl compared to the true class labels in tbl.ResponseVarName.
loss normalizes the class probabilities in tbl.ResponseVarName to the prior class probabilities used by fitcnb for training, which are stored in the Prior property of Mdl.
L = loss(Mdl,X,Y) returns the classification loss based on the predictor data in matrix X compared to the true class labels in Y.
L = loss(___,Name,Value) specifies options using one or more name-value pair arguments in addition to any of the input argument combinations in previous syntaxes. For example, you can specify the loss function and the classification weights.
Determine the test sample classification error (loss) of a naive Bayes classifier. When you compare the same type of loss among many models, a lower loss indicates a better predictive model.
The naive Bayes classifier misclassifies approximately 4% of the test sample.
You might decrease the classification error by specifying better predictor distributions when you train the classifier with fitcnb.
Mdl = fitcnb(XTrain,YTrain,'ClassNames',{'setosa','versicolor','virginica'});
Determine how well the algorithm generalizes by estimating the test sample logit loss.
L = loss(Mdl,XTest,YTest,'LossFun','logit')
The logit loss is approximately 0.34.
If tbl contains the response variable used to train Mdl, then you do not need to specify ResponseVarName.
Class labels, specified as a categorical, character, or string array, logical or numeric vector, or cell array of character vectors. Y must have the same data type as Mdl.ClassNames. (The software treats string arrays as cell arrays of character vectors.)
The length of Y must be equal to the number of rows of tbl or X.
Example: loss(Mdl,tbl,Y,'Weights',W) weighs the observations in each row of tbl using the corresponding weight in each row of the variable W.
'mincost' is appropriate for classification scores that are posterior probabilities. Naive Bayes models return posterior probabilities as classification scores by default (see predict).
Suppose that n is the number of observations in X and K is the number of distinct classes (numel(Mdl.ClassNames), where Mdl is the input model). Your function must have this signature
Create C by setting C(p,q) = 1 if observation p is in class q, for each row. Set all other elements of row p to 0.
Cost is a K-by-K numeric matrix of misclassification costs. For example, Cost = ones(K) - eye(K) specifies a cost of 0 for correct classification and 1 for misclassification.
Observation weights, specified as a numeric vector or the name of a variable in tbl. The software weighs the observations in each row of X or tbl with the corresponding weights in Weights.
If you specify Weights as a numeric vector, then the size of Weights must be equal to the number of rows of X or tbl.
If you do not specify a loss function, then the software normalizes Weights to add up to 1.
Classification loss, returned as a scalar. L is a generalization or resubstitution quality measure. Its interpretation depends on the loss function and weighting scheme; in general, better classifiers yield smaller loss values.
The supported loss functions are defined as follows. The observation weights wj are normalized to sum to 1:

\sum _{j=1}^{n}{w}_{j}=1.

In the formulas below, mj denotes the classification margin of observation j.

Binomial deviance ('binodeviance'):

L=\sum _{j=1}^{n}{w}_{j}\mathrm{log}\left\{1+\mathrm{exp}\left[-2{m}_{j}\right]\right\}.

Observed misclassification cost ('classifcost'):

L=\sum _{j=1}^{n}{w}_{j}{c}_{{y}_{j}{\stackrel{^}{y}}_{j}},

where {\stackrel{^}{y}}_{j} is the predicted class label and {c}_{{y}_{j}{\stackrel{^}{y}}_{j}} is the cost of classifying an observation into class {\stackrel{^}{y}}_{j} when its true class is yj.

Classification error ('classiferror'), the weighted fraction of misclassified observations:

L=\sum _{j=1}^{n}{w}_{j}I\left\{{\stackrel{^}{y}}_{j}\ne {y}_{j}\right\},

where I{·} is the indicator function.

Cross-entropy loss ('crossentropy'):

L=-\sum _{j=1}^{n}\frac{{\stackrel{˜}{w}}_{j}\mathrm{log}\left({m}_{j}\right)}{Kn},

where the weights {\stackrel{˜}{w}}_{j} are normalized to sum to n rather than 1.

Exponential loss ('exponential'):

L=\sum _{j=1}^{n}{w}_{j}\mathrm{exp}\left(-{m}_{j}\right).

Hinge loss ('hinge'):

L=\sum _{j=1}^{n}{w}_{j}\mathrm{max}\left\{0,1-{m}_{j}\right\}.

Logit loss ('logit'):

L=\sum _{j=1}^{n}{w}_{j}\mathrm{log}\left(1+\mathrm{exp}\left(-{m}_{j}\right)\right).

Minimal expected misclassification cost ('mincost'): for observation j, the expected cost of classifying it into class k is

{\gamma }_{jk}={\left(f{\left({X}_{j}\right)}^{\prime }C\right)}_{k},

where f(Xj) is the column vector of class posterior probabilities and C is the cost matrix. The predicted label minimizes the expected cost,

{\stackrel{^}{y}}_{j}=\underset{k=1,...,K}{\text{argmin}}{\gamma }_{jk},

and the loss is

L=\sum _{j=1}^{n}{w}_{j}{c}_{j},

where cj is the minimal expected misclassification cost of observation j.

Quadratic loss ('quadratic'):

L=\sum _{j=1}^{n}{w}_{j}{\left(1-{m}_{j}\right)}^{2}.

For naive Bayes models, the expected cost of classifying an observation into class k is

{c}_{k}=\sum _{j=1}^{K}\stackrel{^}{P}\left(Y=j|{x}_{1},...,{x}_{P}\right)Cos{t}_{jk},

where the posterior probability is

\stackrel{^}{P}\left(Y=k|{x}_{1},..,{x}_{P}\right)=\frac{P\left({X}_{1},...,{X}_{P}|y=k\right)\pi \left(Y=k\right)}{P\left({X}_{1},...,{X}_{P}\right)},

P\left({X}_{1},...,{X}_{P}|y=k\right) is the conditional joint density of the predictors given class k, \pi \left(Y=k\right) is the class prior probability, and the joint density of the predictors is

P\left({X}_{1},...,{X}_{P}\right)=\sum _{k=1}^{K}P\left({X}_{1},...,{X}_{P}|y=k\right)\pi \left(Y=k\right).
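As a rough cross-check of these definitions, here is a hedged Python sketch using scikit-learn's GaussianNB (an analogue, not MathWorks code); the margin m_j is taken here as the posterior of the true class minus the largest posterior among the other classes, which is one common margin convention:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=1, stratify=y)

mdl = GaussianNB().fit(Xtr, ytr)
proba = mdl.predict_proba(Xte)

w = np.full(len(yte), 1.0 / len(yte))        # uniform weights summing to 1

# classification error: L = sum_j w_j * I{yhat_j != y_j}
err = np.sum(w * (mdl.predict(Xte) != yte))

# margin m_j: posterior of the true class minus the best wrong class
idx = np.arange(len(yte))
true_p = proba[idx, yte]
others = proba.copy()
others[idx, yte] = -np.inf
m = true_p - others.max(axis=1)

# logit loss: L = sum_j w_j * log(1 + exp(-m_j))
logit_loss = np.sum(w * np.log1p(np.exp(-m)))
```

Both quantities behave as the definitions predict: the error is a weighted misclassification fraction in [0, 1], and the logit loss is strictly positive even for a perfect classifier, since log(1 + exp(−m)) > 0 for any finite margin.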
ClassificationNaiveBayes | CompactClassificationNaiveBayes | predict | fitcnb | resubLoss
|
Alanine—oxo-acid transaminase - Wikipedia
Alanine—oxo-acid transaminase
In enzymology, an alanine-oxo-acid transaminase (EC 2.6.1.12) is an enzyme that catalyzes the chemical reaction
L-alanine + a 2-oxo acid
{\displaystyle \rightleftharpoons }
pyruvate + an L-amino acid
Thus, the two substrates of this enzyme are L-alanine and 2-oxo acid, whereas its two products are pyruvate and L-amino acid.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is L-alanine:2-oxo-acid aminotransferase. Other names in common use include L-alanine-alpha-keto acid aminotransferase, leucine-alanine transaminase, alanine-keto acid aminotransferase, and alanine-oxo acid aminotransferase. This enzyme participates in alanine and aspartate metabolism. It employs one cofactor, pyridoxal phosphate.
Altenbern RA, Housewright RD (1953). "Transaminases in smooth Brucella abortus, strain 19" (PDF). J. Biol. Chem. 204 (1): 159–67. PMID 13084587.
Rowsell EV (1956). "Transaminations with pyruvate and other alpha-keto acids". Biochem. J. 64 (2): 246–252. PMC 1199724. PMID 13363834.
Sallach HJ (1956). "Formation of serine from hydroxypyruvate and L-alanine" (PDF). J. Biol. Chem. 223 (2): 1101–1108.
Wilson DG, King KW, Burris RH (1954). "Transaminase reactions in plants" (PDF). J. Biol. Chem. 208 (2): 863–874.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Alanine—oxo-acid_transaminase&oldid=917333751"
|
2-hydroxy-3-oxopropionate reductase - Wikipedia
In enzymology, a 2-hydroxy-3-oxopropionate reductase (EC 1.1.1.60) is an enzyme that catalyzes the chemical reaction
(R)-glycerate + NAD(P)+
{\displaystyle \rightleftharpoons }
2-hydroxy-3-oxopropanoate + NAD(P)H + H+
The 3 substrates of this enzyme are (R)-glycerate, NAD+, and NADP+, whereas its 4 products are 2-hydroxy-3-oxopropanoate, NADH, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is (R)-glycerate:NAD(P)+ oxidoreductase. This enzyme is also called tartronate semialdehyde reductase. This enzyme participates in glyoxylate and dicarboxylate metabolism.
As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 1YB4.
Gotto AM, Kornberg HL (1961). "The metabolism of C2 compounds in micro-organisms. 7. Preparation and properties of crystalline tartronic semialdehyde reductase". Biochem. J. 81 (2): 273–84. doi:10.1042/bj0810273. PMC 1243334. PMID 13900766.
Retrieved from "https://en.wikipedia.org/w/index.php?title=2-hydroxy-3-oxopropionate_reductase&oldid=989158976"
|
A perfectly still water surface is an idealization rarely encountered in real-world applications: it is practically impossible to find an open water surface without any disturbance or waves, or even an enclosed one such as a swimming pool. This experiment was conducted to predict the evaporation rate from a wavy water surface under the different convection regimes (free, forced, and mixed) at turbulent airflow conditions, over a wide range of the ratio Gr/Re². The evaporation rate from the wavy water surface is strongly affected by the combination of wave steepness and the velocity of the main airflow above the surface. The experimental results show no single pattern governing which combinations increase the evaporation rate; only two facts can be noticed. First, the evaporation rate is larger than that measured under the same airflow velocity with no waves on the evaporating surface, because over a still surface the airflow is smooth and attached, whereas with increasing wave steepness (H/L, H/T) the airflow separates at the lee side of the wave crest near the bottom of the wave trough, generating vortices in the separation region; these vortices are unstable and increase turbulence, reducing the water surface's resistance to vertical transport of water vapor and increasing the evaporation rate. Second, the evaporation rates are somewhat lower than those measured under the same airflow velocity with smaller wave steepness, owing to the trapped-air region observed at the lee side of the wave crest near the bottom of the wave trough. The results also show that, within a given convection regime, the evaporation rate increases with airflow velocity. The study used the particle image velocimetry (PIV) technique to analyze the airflow structure above the evaporating wavy water surface.
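The convection regime is set by the ratio Gr/Re² mentioned above. A minimal Python sketch follows; the thresholds Gr/Re² < 0.1 for forced and Gr/Re² > 10 for free convection are common rules of thumb, not values taken from this study, and the helper uses the standard definition of the Grashof number:

```python
def grashof(beta, dT, L, nu, g=9.81):
    """Standard Grashof number: Gr = g * beta * dT * L^3 / nu^2."""
    return g * beta * dT * L**3 / nu**2

def convection_regime(Gr, Re):
    """Classify the regime from Gr/Re^2 (rule-of-thumb thresholds)."""
    ratio = Gr / Re**2
    if ratio < 0.1:
        return "forced"   # inertia of the imposed airflow dominates
    if ratio > 10.0:
        return "free"     # buoyancy dominates
    return "mixed"
```

For example, a strong fan-driven airflow over a mildly heated surface lands in the forced regime, while a hot surface in nearly still air lands in the free regime.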
|
Hydroxypyruvate reductase - Wikipedia
In enzymology, a hydroxypyruvate reductase (EC 1.1.1.81) is an enzyme that catalyzes the chemical reaction
D-glycerate + NAD(P)+
{\displaystyle \rightleftharpoons }
hydroxypyruvate + NAD(P)H + H+
The 3 substrates of this enzyme are D-glycerate, NAD+, and NADP+, whereas its 4 products are hydroxypyruvate, NADH, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is D-glycerate:NADP+ 2-oxidoreductase. Other names in common use include beta-hydroxypyruvate reductase, NADH:hydroxypyruvate reductase, and D-glycerate dehydrogenase. This enzyme participates in glycine, serine and threonine metabolism and glyoxylate and dicarboxylate metabolism.
Kleczkowski LA; Edwards GE (1989). "Identification of hydroxypyruvate and glyoxylate reductases in maize leaves". Plant Physiol. 91 (1): 278–286. doi:10.1104/pp.91.1.278. PMC 1061987. PMID 16667010.
Kleczkowski LA, Randall DD (1988). "Purification and characterization of a novel NADPH(NADH)-dependent hydroxypyruvate reductase from spinach leaves. Comparison of immunological properties of leaf hydroxypyruvate reductases". Biochem. J. 250 (1): 145–52. doi:10.1042/bj2500145. PMC 1148826. PMID 3281657.
Kohn LD, Jakoby WB (1968). "Tartaric acid metabolism. VII. Crystalline hydroxypyruvate reductase (D-glycerate dehydrogenase)". J. Biol. Chem. 243 (10): 2494–9. PMID 4385077.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Hydroxypyruvate_reductase&oldid=989151832"
|
(pyruvate dehydrogenase (acetyl-transferring))-phosphatase - Wikipedia
[pyruvate dehydrogenase (lipoamide)] phosphatase
In enzymology, a [pyruvate dehydrogenase (acetyl-transferring)]-phosphatase (EC 3.1.3.43) is an enzyme that catalyzes the chemical reaction
[pyruvate dehydrogenase (acetyl-transferring)] phosphate + H2O
{\displaystyle \rightleftharpoons }
[pyruvate dehydrogenase (acetyl-transferring)] + phosphate
Thus, the two substrates of this enzyme are pyruvate dehydrogenase (acetyl-transferring) phosphate and H2O, whereas its two products are pyruvate dehydrogenase (acetyl-transferring) and phosphate.
This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric monoester bonds. The systematic name of this enzyme class is [pyruvate dehydrogenase (acetyl-transferring)]-phosphate phosphohydrolase. Other names in common use include pyruvate dehydrogenase phosphatase, phosphopyruvate dehydrogenase phosphatase, [pyruvate dehydrogenase (lipoamide)]-phosphatase, and [pyruvate dehydrogenase (lipoamide)]-phosphate phosphohydrolase.
As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 2PNQ.
Linn TC, Pelley JW, Pettit FH, Hucho F, Randall DD, Reed LJ (1972). "α-Keto acid dehydrogenase complexes. XV. Purification and properties of the component enzymes of the pyruvate dehydrogenase complexes from bovine kidney and heart". Arch. Biochem. Biophys. 148 (2): 327–42. doi:10.1016/0003-9861(72)90151-8. PMID 4401694.
Reed LJ, Damuni Z, Merryfield ML (1985). "Regulation of mammalian pyruvate and branched-chain alpha-keto acid dehydrogenase complexes by phosphorylation-dephosphorylation". Curr. Top. Cell. Regul. Current Topics in Cellular Regulation. 27: 41–9. doi:10.1016/b978-0-12-152827-0.50011-6. ISBN 9780121528270. PMID 3004826.
Retrieved from "https://en.wikipedia.org/w/index.php?title=(pyruvate_dehydrogenase_(acetyl-transferring))-phosphatase&oldid=1084960407"
|
Phosphoserine transaminase - Wikipedia
Phosphoserine transaminase (EC 2.6.1.52, PSAT, phosphoserine aminotransferase, 3-phosphoserine aminotransferase, hydroxypyruvic phosphate-glutamic transaminase, L-phosphoserine aminotransferase, phosphohydroxypyruvate transaminase, phosphohydroxypyruvic-glutamic transaminase, 3-O-phospho-L-serine:2-oxoglutarate aminotransferase, SerC, PdxC, 3PHP transaminase) is an enzyme with systematic name O-phospho-L-serine:2-oxoglutarate aminotransferase.[1][2][3][4][5] This enzyme catalyses the following chemical reaction
(1) O-phospho-L-serine + 2-oxoglutarate
{\displaystyle \rightleftharpoons }
3-phosphonooxypyruvate + L-glutamate
(2) 4-phosphonooxy-L-threonine + 2-oxoglutarate
{\displaystyle \rightleftharpoons }
(3R)-3-hydroxy-2-oxo-4-phosphonooxybutanoate + L-glutamate
This enzyme is a pyridoxal-phosphate protein.
^ Hirsch H, Greenberg DM (May 1967). "Studies on phosphoserine aminotransferase of sheep brain". The Journal of Biological Chemistry. 242 (9): 2283–7. PMID 6022873.
^ Pizer LI (December 1963). "The pathway and control of serine biosynthesis in Escherichia coli". The Journal of Biological Chemistry. 238: 3934–44. PMID 14086727.
^ Zhao G, Winkler ME (January 1996). "A novel alpha-ketoglutarate reductase activity of the serA-encoded 3-phosphoglycerate dehydrogenase of Escherichia coli K-12 and its possible implications for human 2-hydroxyglutaric aciduria". Journal of Bacteriology. 178 (1): 232–9. PMC 177644. PMID 8550422.
^ Drewke C, Klein M, Clade D, Arenz A, Müller R, Leistner E (July 1996). "4-O-phosphoryl-L-threonine, a substrate of the pdxC(serC) gene product involved in vitamin B6 biosynthesis". FEBS Letters. 390 (2): 179–82. doi:10.1016/0014-5793(96)00652-7. PMID 8706854.
^ Zhao G, Winkler ME (January 1996). "4-Phospho-hydroxy-L-threonine is an obligatory intermediate in pyridoxal 5'-phosphate coenzyme biosynthesis in Escherichia coli K-12". FEMS Microbiology Letters. 135 (2–3): 275–80. doi:10.1111/j.1574-6968.1996.tb08001.x. PMID 8595869.
Phosphoserine+transaminase at the US National Library of Medicine Medical Subject Headings (MeSH)
Retrieved from "https://en.wikipedia.org/w/index.php?title=Phosphoserine_transaminase&oldid=956984975"
|
The gravitational flux {\displaystyle \phi } through a closed surface equals the volume integral of the divergence of the field:

{\displaystyle \phi =\iint {\textbf {g}}\cdot d{\textbf {s}}=\iiint \nabla \cdot {\textbf {g}}\;dxdydz}

or, written explicitly in terms of the bounding surface {\displaystyle S} and the volume {\displaystyle V} it encloses,

{\displaystyle \phi =\iint _{S}{\textbf {g}}\cdot \mathbf {\hat {n}} \;dS=\iiint _{V}\nabla \cdot {\textbf {g}}\;dV}

where {\displaystyle dS} is an element of the surface {\displaystyle S} and {\displaystyle dV} is the volume element.

This is a special case of the divergence theorem. Let {\displaystyle D} be a domain in {\displaystyle R^{m}} whose boundary {\displaystyle \partial D} is a piecewise-smooth surface in {\displaystyle R^{m-1}}. Then

{\displaystyle \int _{D}\mathbf {\nabla } \cdot \mathbf {Q} \;dV=\int _{\partial D}\mathbf {\hat {n}} \cdot \mathbf {Q} \;dS.}

Here {\displaystyle Q} is a continuously differentiable vector field defined on {\displaystyle D}, and

{\displaystyle \mathbf {\hat {n}} =({\bar {n}}_{1},{\bar {n}}_{2},...{\bar {n}}_{m}),}

is the outward-pointing unit normal to {\displaystyle \partial D.} The component {\displaystyle {\bar {n}}_{k}} is the direction cosine of the normal to {\displaystyle \partial D} with respect to the coordinate direction {\displaystyle {\hat {x}}_{k}}. With coordinates {\displaystyle \mathbf {x} =(x_{1},x_{2},...,x_{m})}, the gradient operator is

{\displaystyle \nabla \equiv \left({\frac {\partial }{\partial x_{1}}},{\frac {\partial }{\partial x_{2}}},...,{\frac {\partial }{\partial x_{m}}}\right)}

To prove the theorem for {\displaystyle m=3}, let {\displaystyle D} be a bounded domain with piecewise-smooth boundary {\displaystyle \partial D}, and split the volume integral into its three terms:

{\displaystyle \int _{D}\mathbf {\nabla } \cdot \mathbf {Q} \;dV=\int _{D}{\frac {\partial Q_{1}}{\partial x_{1}}}\;dV+\int _{D}{\frac {\partial Q_{2}}{\partial x_{2}}}\;dV+\int _{D}{\frac {\partial Q_{3}}{\partial x_{3}}}\;dV.}

Consider the {\displaystyle x_{1}} term and perform the {\displaystyle x_{1}} integration first:

{\displaystyle \int _{D}{\frac {\partial Q_{1}}{\partial x_{1}}}dV=\int _{X_{2}^{-}(x_{1},x_{3})}^{X_{2}^{+}(x_{1},x_{3})}\int _{X_{3}^{-}(x_{1},x_{2})}^{X_{3}^{+}(x_{1},x_{2})}\int _{X_{1}^{-}(x_{2},x_{3})}^{X_{1}^{+}(x_{2},x_{3})}{\frac {\partial Q_{1}}{\partial x_{1}}}\;dx_{1}dx_{2}dx_{3}}

{\displaystyle =\int _{X_{2}^{-}(X_{1}^{-},x_{3})}^{X_{2}^{+}(X_{1}^{+},x_{3})}\int _{X_{3}^{-}(X_{1}^{-},x_{2})}^{X_{3}^{+}(X_{1}^{+},x_{2})}\left[Q_{1}(X_{1}^{+},x_{2},x_{3})-Q_{1}(X_{1}^{-},x_{2},x_{3})\right]\;dx_{2}dx_{3}.}

Here {\displaystyle X_{k}^{+}} and {\displaystyle X_{k}^{-}} denote the largest and smallest values of {\displaystyle x_{k}} on {\displaystyle \partial D} for given values of the other two coordinates, i.e. the portions of {\displaystyle \partial D} bounding the domain from above and from below in the {\displaystyle x_{k}} direction.

Now consider the surface integral,

{\displaystyle \int _{\partial D}\mathbf {\hat {n}} \cdot \mathbf {Q} \;dS=\int _{\partial D}\left[{\bar {n}}_{1}Q_{1}+{\bar {n}}_{2}Q_{2}+{\bar {n}}_{3}Q_{3}\right]\;dS}

and again examine its {\displaystyle x_{1}} term:

{\displaystyle \int _{\partial D}{\bar {n}}_{1}Q_{1}\;dS=\int _{X_{2}^{-}(X_{1}^{-},x_{3})}^{X_{2}^{+}(X_{1}^{+},x_{3})}\int _{X_{3}^{-}(X_{1}^{-},x_{2})}^{X_{3}^{+}(X_{1}^{+},x_{2})}{\bar {n}}_{1}Q_{1}(x_{1},x_{2},x_{3})\;dS.}

On the portion of {\displaystyle \partial D} bounding the domain from above in the {\displaystyle x_{1}} direction, {\displaystyle Q_{1}=Q_{1}(X_{1}^{+},x_{2},x_{3})} and {\displaystyle {\bar {n}}_{1}dS=dx_{2}dx_{3}}; on the portion bounding it from below in the {\displaystyle x_{1}} direction, {\displaystyle Q_{1}=Q_{1}(X_{1}^{-},x_{2},x_{3})} and {\displaystyle {\bar {n}}_{1}dS=-dx_{2}dx_{3}}, since {\displaystyle {\bar {n}}_{1}\equiv \pm {\hat {x}}_{1}\cdot \mathbf {\hat {n}} } according to whether the outward normal points toward larger or smaller {\displaystyle x_{1}}. Provided {\displaystyle Q_{1}} and {\displaystyle \partial Q_{1}/\partial x_{1}} are continuous, the {\displaystyle x_{1}} term of the surface integral becomes

{\displaystyle \int _{\partial D}{\bar {n_{1}}}Q_{1}\;dS=\int _{X_{2}^{-}(X_{1}^{-},x_{3})}^{X_{2}^{+}(X_{1}^{+},x_{3})}\int _{X_{3}^{-}(X_{1}^{-},x_{2})}^{X_{3}^{+}(X_{1}^{+},x_{2})}\left[Q_{1}(X_{1}^{+},x_{2},x_{3})-Q_{1}(X_{1}^{-},x_{2},x_{3})\right]\;dx_{2}dx_{3}.}

with {\displaystyle X_{1}^{\pm }=X_{1}^{\pm }(x_{2},x_{3}).} This matches the {\displaystyle x_{1}} term of the volume integral found above. The same argument applies to the {\displaystyle Q_{2}} and {\displaystyle Q_{3}} terms, which proves the theorem for {\displaystyle m=3}; for {\displaystyle m>3} the argument is repeated in each coordinate.
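The theorem for m = 3 can be checked numerically; the test field below is chosen arbitrarily for illustration, not taken from the text:

```python
import numpy as np

# Sanity check of the divergence theorem on the unit cube D = [0,1]^3,
# with the test field Q = (x^2 y, y z, x z^2), whose divergence is
# 2xy + z + 2xz. Both sides should evaluate to 3/2.
n = 100
h = 1.0 / n
c = (np.arange(n) + 0.5) * h                 # midpoint-rule nodes on [0, 1]

# volume integral of div Q over D
x, y, z = np.meshgrid(c, c, c, indexing="ij")
vol = np.sum(2 * x * y + z + 2 * x * z) * h**3

# flux of Q through the six faces (with outward normals)
u, v = np.meshgrid(c, c, indexing="ij")      # the two in-face coordinates
flux = 0.0
flux += np.sum(1.0 ** 2 * u) * h**2          # x = 1 face: Q1 = x^2 y = u
flux -= np.sum(0.0 ** 2 * u) * h**2          # x = 0 face: Q1 = 0
flux += np.sum(1.0 * v) * h**2               # y = 1 face: Q2 = y z = v
flux -= np.sum(0.0 * v) * h**2               # y = 0 face: Q2 = 0
flux += np.sum(u * 1.0 ** 2) * h**2          # z = 1 face: Q3 = x z^2 = u
flux -= np.sum(u * 0.0 ** 2) * h**2          # z = 0 face: Q3 = 0
```

Because the integrands are low-degree polynomials, the midpoint rule is exact here and the two sides agree to machine precision.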
|
Investigation of Periodically Unsteady Flow in a Radial Pump by CFD Simulations and LDV Measurements | J. Turbomach. | ASME Digital Collection
Feng, J., Benra, F., and Dohmen, H. J. (September 7, 2010). "Investigation of Periodically Unsteady Flow in a Radial Pump by CFD Simulations and LDV Measurements." ASME. J. Turbomach. January 2011; 133(1): 011004. https://doi.org/10.1115/1.4000486
The periodically unsteady flow fields in a low-specific-speed radial diffuser pump have been investigated both numerically and experimentally, for the design condition (Qdes) and for one part-load condition (0.5Qdes). Three-dimensional, unsteady Reynolds-averaged Navier–Stokes equations are solved on high-quality structured grids with the shear stress transport turbulence model, using the CFD (computational fluid dynamics) code CFX-10. Furthermore, two-dimensional laser Doppler velocimetry (LDV) measurements are conducted in the interaction region between the impeller and the vaned diffuser, in order to capture the complex flow with abundant measurement data and to validate the CFD results. The analysis of the results focuses on the behavior of the periodic velocity and turbulence fields, as well as the associated unsteady phenomena due to the impeller–diffuser interaction. In addition, the CFD and LDV results are compared. The blade-orientation effects caused by the impeller rotation are quantitatively examined and compared in detail with the turbulence effect. This work offers a good data set for developing an understanding of the impeller–diffuser interaction and of how the flow varies with the impeller position relative to the diffuser in radial diffuser pumps.
computational fluid dynamics, Doppler measurement, laser velocimetry, Navier-Stokes equations, pulsatile flow, pumps
Blades, Computational fluid dynamics, Diffusers, Flow (Dynamics), Impellers, Pumps, Simulation, Turbulence, Unsteady flow, Design, Rotation
|
The rate of nuclear fusion is controlled by quantum tunneling through the Coulomb barrier. Start from the time-independent Schrödinger equation,

{\displaystyle \left(-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}+V\right)\psi =E\psi .\,\!}

For a free particle,

{\displaystyle E={\frac {k^{2}\hbar ^{2}}{2m}},\,\!}

where {\displaystyle k} is the wavenumber, while in a region of constant potential {\displaystyle V=V_{0}},

{\displaystyle E-V_{0}={\frac {k^{2}\hbar ^{2}}{2m}}.\,\!}

For energies below the barrier the wavenumber is imaginary and the wavefunction decays exponentially, so there is a finite probability of tunneling through a classically forbidden region.
The tunneling probability through the Coulomb barrier leads to a fusion cross section of the form

{\displaystyle \sigma (E)={\frac {S(E)}{E}}e^{-(E_{G}/E)^{1/2}}.\,\!}

Here {\displaystyle S(E)} is the astrophysical S-factor, a slowly varying function of the energy {\displaystyle E} that contains the nuclear physics, and {\displaystyle E_{G}} is the Gamow energy,

{\displaystyle E_{G}=(1{\rm {\;MeV}})Z_{1}^{2}Z_{2}^{2}{\frac {m_{r}}{m_{p}}}.\,\!}

where Z1 and Z2 are the charges of the two nuclei, mr is their reduced mass, and mp is the proton mass.
Now convert the cross section into a reaction rate. Let {\displaystyle n_{1}} and {\displaystyle n_{2}} be the number densities of the two reacting species. A particle of species 2 moving with speed v through the background of species 1 has mean free path

{\displaystyle \ell _{2}={\frac {1}{n_{1}\sigma }}\,\!}

and mean time between collisions

{\displaystyle \tau _{2}={\frac {1}{n_{1}\sigma v}}.\,\!}

The number of reactions per unit volume per unit time is therefore

{\displaystyle r_{12}={\frac {n_{2}}{\tau _{2}}}=n_{1}n_{2}\sigma v.\,\!}

Since the particles have a distribution of velocities, the product σv must be replaced by its thermal average:

{\displaystyle r_{12}=n_{1}n_{2}<\sigma (E)v>.\,\!}

where {\displaystyle <\sigma (E)v>} is

{\displaystyle <\sigma (E)v>=\int d^{3}v\;prob(v)\sigma (E)v.\,\!}
For a Maxwell–Boltzmann distribution of relative velocities,

{\displaystyle r_{12}=n_{1}n_{2}\int d^{3}v\sigma (E)v\left({\frac {m_{r}}{2\pi kT}}\right)^{3/2}e^{-{\frac {{\frac {1}{2}}m_{r}v^{2}}{kT}}}.\,\!}

Change the integration variable from velocity to energy using

{\displaystyle E={\frac {1}{2}}m_{r}v^{2},\,\!}

{\displaystyle dE=m_{r}vdv,\,\!}

so that

{\displaystyle d^{3}v=4\pi v^{2}dv=4\pi {\frac {v^{2}}{v}}{\frac {dE}{m_{r}}},\,\!}

{\displaystyle vd^{3}v={\frac {8\pi E}{m_{r}}}{\frac {dE}{m_{r}}}.\,\!}

The thermal average then becomes

{\displaystyle <\sigma (E)v>=\left({\frac {2}{kT}}\right)^{3/2}{\frac {1}{\sqrt {\pi m_{r}}}}\int dEE\sigma (E)e^{-E/kT}.\,\!}

Substituting the tunneling form of {\displaystyle \sigma (E)},

{\displaystyle <\sigma (E)v>=\left({\frac {2}{kT}}\right)^{3/2}{\frac {1}{\sqrt {\pi m_{r}}}}\int dES(E)e^{-(E_{G}/E)^{1/2}\;-\;E/kT}.\,\!}

Because {\displaystyle S(E)} varies slowly compared with the exponential, it can be pulled out of the integral:

{\displaystyle <\sigma (E)v>=\left({\frac {2}{kT}}\right)^{3/2}{\frac {1}{\sqrt {\pi m_{r}}}}S(E)I.\,\!}

where

{\displaystyle I=\int _{0}^{\infty }e^{-(E_{G}/E)^{1/2}\;-\;E/kT}dE.\,\!}
The integrand of I is sharply peaked at an energy {\displaystyle E_{0}}, the Gamow peak, where the competition between the Boltzmann factor (falling with energy) and the tunneling factor (rising with energy) produces a maximum. To find {\displaystyle E_{0}}, write the exponent as {\displaystyle f(E)=(E_{G}/E)^{1/2}+E/kT} and set its derivative to zero:

{\displaystyle {\frac {df}{dE}}=0={\frac {1}{kT}}-{\frac {E_{G}^{1/2}}{2E^{3/2}}}.\,\!}

which gives

{\displaystyle E_{0}=\left({\frac {1}{2}}E_{G}^{1/2}kT\right)^{2/3}.\,\!}

Inserting the expression for {\displaystyle E_{G}}, this is numerically

{\displaystyle E_{0}=(5.7\;{\rm {keV}})Z_{1}^{2/3}Z_{2}^{2/3}T_{7}^{2/3}\left({\frac {m_{r}}{m_{p}}}\right)^{1/3}.\,\!}

Since {\displaystyle E_{G}} is much larger than {\displaystyle kT}, the peak lies well above the typical thermal energy. Expanding f(E) to second order about the peak,

{\displaystyle f(E)=f(E_{0})+{\frac {1}{2}}(E-E_{0})^{2}f^{''}(E_{0}),\,\!}

with

{\displaystyle f^{''}(E_{0})={\frac {3E_{G}^{1/2}}{4E_{0}^{5/2}}}.\,\!}

the integral {\displaystyle I} can be evaluated as a Gaussian:

{\displaystyle I={\frac {e^{-f(E_{0})}{\sqrt {2\pi }}}{\sqrt {f^{''}(E_{0})}}}.\,\!}

Putting everything together,

{\displaystyle <\sigma (E)v>=2.6S(E_{0}){\frac {E_{G}^{1/6}}{(kT)^{2/3}{\sqrt {m_{r}}}}}e^{-3(E_{G}/4kT)^{1/3}}.\,\!}
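The Gamow-peak formula is easy to evaluate numerically; the Boltzmann constant and the solar-core temperature T ≈ 1.5×10⁷ K used below are standard values, not numbers taken from the text above:

```python
from math import sqrt

K_BOLTZ_KEV = 8.617e-8       # Boltzmann constant in keV per kelvin

def gamow_energy_keV(Z1, Z2, mr_over_mp):
    """E_G = (1 MeV) * Z1^2 * Z2^2 * (m_r / m_p), returned in keV."""
    return 1000.0 * Z1**2 * Z2**2 * mr_over_mp

def gamow_peak_keV(Z1, Z2, mr_over_mp, T):
    """E_0 = (0.5 * sqrt(E_G) * kT)^(2/3), with all energies in keV."""
    EG = gamow_energy_keV(Z1, Z2, mr_over_mp)
    kT = K_BOLTZ_KEV * T
    return (0.5 * sqrt(EG) * kT) ** (2.0 / 3.0)

# p + p at the solar core, T ~ 1.5e7 K, with reduced mass m_r = m_p/2:
E0 = gamow_peak_keV(1, 1, 0.5, 1.5e7)   # a few keV, well above kT ~ 1.3 keV
```

This reproduces the scaling of the numerical formula above: for p + p at T₇ ≈ 1.5, the peak sits near 6 keV, several times the thermal energy kT.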
Define {\displaystyle \epsilon } as the rate of energy generation per unit mass. The stellar luminosity is

{\displaystyle L=\int \epsilon dM_{r}=\int \epsilon 4\pi r^{2}\rho dr.\,\!}

or, in differential form,

{\displaystyle {\frac {dL_{r}}{dr}}=4\pi r^{2}\rho \epsilon .\,\!}

If {\displaystyle Q} is the energy released per reaction and {\displaystyle r_{12}} the reaction rate per unit volume, the energy generation rate {\displaystyle \epsilon } from reactions between species 1 and 2 is

{\displaystyle \epsilon _{12}={\frac {r_{12}Q}{\rho }}.\,\!}

The number densities are related to the mass fractions by

{\displaystyle n_{1}={\frac {X_{1}\rho }{m_{1}}}.\,\!}

where {\displaystyle X_{1}} is the mass fraction of species 1. Combining these with the thermally averaged rate gives

{\displaystyle \epsilon _{12}={\frac {2.6QS(E_{0})X_{1}X_{2}}{m_{1}m_{2}{\sqrt {m_{r}}}(kT)^{2/3}}}\rho E_{G}^{1/6}e^{-3(E_{G}/4kT)^{1/3}}.\,\!}
{\displaystyle \epsilon \propto \rho ^{\alpha }T^{\beta }.\,\!}
{\displaystyle \alpha }
{\displaystyle \beta }
{\displaystyle \alpha =1}
{\displaystyle \beta }
{\displaystyle \epsilon }
{\displaystyle \beta ={\frac {d\ln \epsilon }{d\ln T}}.\,\!}
{\displaystyle \epsilon }
{\displaystyle \beta =-{\frac {2}{3}}+\left({\frac {E_{G}}{4kT}}\right)^{1/3}.\,\!}
{\displaystyle \beta \approx 4.3}
{\displaystyle \epsilon _{pp}\propto \rho T^{4.3}\,\!}
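The quoted exponent can be checked with a short script (a sketch; the Gamow energy E_G ≈ 493 keV for p + p and kT = 0.862 keV × T_7 are assumed values):

```python
E_G_PP = 493.0        # Gamow energy for p + p, keV (assumed)
KT_PER_T7 = 0.86173   # kT in keV at T = 1e7 K

def beta_pp(T7):
    """Effective temperature exponent beta = -2/3 + (E_G / 4kT)^(1/3)."""
    kT = KT_PER_T7 * T7
    return -2.0 / 3.0 + (E_G_PP / (4.0 * kT)) ** (1.0 / 3.0)

# Near T ~ 1e7 K the pp rate scales roughly as T^4.3; the exponent
# falls slowly as the temperature rises.
print(beta_pp(1.2))
```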
{\displaystyle 10^{7}}
{\displaystyle T_{c}\sim 10^{7}}
{\displaystyle \rho \sim 1\;{\rm {g\;cm^{-3}}}}
{\displaystyle S(E)}
{\displaystyle Q}
{\displaystyle \epsilon }
{\displaystyle \epsilon \sim 10^{20}{\rm {\;erg/s/g}}.\,\!}
{\displaystyle L=\int dM_{r}\epsilon \sim \epsilon M_{\odot }.\,\!}
{\displaystyle L\sim 10^{54}{\rm {\;erg/s}}\sim 10^{20}L_{\odot }.\,\!}
{\displaystyle 10^{20}}
{\displaystyle E_{G}}
{\displaystyle 4p\rightarrow {}^{4}{\rm {He}}+{\rm {energy}}.\,\!}
{\displaystyle p+p\rightarrow {}^{2}{\rm {H}}+e^{+}+\nu _{e}.\,\!}
{\displaystyle S(0)\approx 3.78\times 10^{-22}\;{\rm {keV\;barn}}}
{\displaystyle {}^{2}{\rm {H}}+p\rightarrow {}^{3}{\rm {He}}+\gamma ,\,\!}
{\displaystyle \times 10^{-4}}
{\displaystyle {}^{3}{\rm {He}}+{}^{3}{\rm {He}}\rightarrow {}^{4}{\rm {He}}+2p,\,\!}
{\displaystyle \epsilon _{cycle}=r_{p-p\;step}Q_{cycle}/\rho .\,\!}
{\displaystyle \epsilon _{pp}\propto \rho T^{-2/3}e^{-15.7T_{7}^{-1/3}}.\,\!}
{\displaystyle \epsilon _{pp}=(5\times 10^{5}){\frac {\rho X^{2}}{T^{2/3}}}e^{-15.7T_{7}^{-1/3}}{\rm {erg/s/g}}.\,\!}
{\displaystyle L=\int \epsilon dM\sim \epsilon (center)M_{\odot },\,\!}
{\displaystyle {\frac {L}{L_{\odot }}}\sim 10^{7}{\frac {M}{M_{\odot }}}{\frac {1}{T_{7}^{2/3}}}e^{-15.7T_{7}^{-1/3}},\,\!}
{\displaystyle T_{c}\approx 10^{7}K.\,\!}
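Setting the order-of-magnitude luminosity estimate above equal to the solar luminosity and solving for T_7 numerically recovers the quoted central temperature. This sketch takes the prefactor 10^7 from the formula above as given:

```python
import math

def lum_ratio(T7):
    """Order-of-magnitude L/L_sun from the pp-chain estimate above."""
    return 1e7 / T7 ** (2.0 / 3.0) * math.exp(-15.7 / T7 ** (1.0 / 3.0))

# Bisection for the temperature at which the estimate gives L = L_sun.
low, high = 0.5, 2.0
for _ in range(60):
    mid = 0.5 * (low + high)
    if (lum_ratio(low) - 1.0) * (lum_ratio(mid) - 1.0) <= 0:
        high = mid
    else:
        low = mid
print(low)   # close to 1, i.e. T_c ~ 1e7 K
```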
{\displaystyle p+p\rightarrow {}^{2}H+e^{+}+\nu _{e}\,\!}
{\displaystyle {}^{2}H+p\rightarrow {}^{3}He+\gamma \,\!}
{\displaystyle {}^{3}He+{}^{3}He\rightarrow {}^{4}He+2p\,\!}
{\displaystyle 10^{7}}
{\displaystyle 10^{-7}}
{\displaystyle 10^{-31}}
{\displaystyle 10^{24}}
{\displaystyle \epsilon _{CNO}\approx (4\times 10^{27}){\frac {\rho }{T_{7}^{2/3}}}XZe^{-70.7T_{7}^{-1/3}}{\rm {\;erg/g/s}}.\,\!}
{\displaystyle \beta ={\frac {-2}{3}}+{\frac {23.6}{T_{7}^{1/3}}},\,\!}
{\displaystyle \epsilon \propto \rho T^{\beta }.\,\!}
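The same exercise for the CNO exponent, beta = -2/3 + 23.6 / T_7^(1/3), shows how much steeper the CNO cycle is than the pp chain (a sketch using only the formula above):

```python
def beta_cno(T7):
    """Effective temperature exponent of the CNO energy generation rate."""
    return -2.0 / 3.0 + 23.6 / T7 ** (1.0 / 3.0)

# Near T = 2e7 K the CNO rate scales roughly as T^18, far steeper
# than the pp chain's ~T^4.
print(beta_cno(2.0))
```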
{\displaystyle \sigma \sim 10^{-44}\left({\frac {E_{\nu }}{m_{e}c^{2}}}\right)^{2}{\rm {\;cm^{2}}}.\,\!}
{\displaystyle \ell ={\frac {1}{n\sigma }}.\,\!}
{\displaystyle E_{\nu }\sim }
{\displaystyle \ell \sim 10^{9}R_{\odot }.\,\!}
{\displaystyle {}^{37}Cl+\nu _{e}\rightarrow {}^{37}Ar+e^{-}.\,\!}
{\displaystyle 10^{22}}
{\displaystyle \nu _{e}+D\rightarrow p+p+e^{-}.\,\!}
{\displaystyle \nu +D\rightarrow p+n+\nu .\,\!}
|
Macaulay Duration vs. Modified Duration: An Overview
The Macaulay Duration
The Modified Duration
Comparing the Macaulay Duration and the Modified Duration
The Macaulay duration and the modified duration are chiefly used to calculate the duration of bonds. The Macaulay duration calculates the weighted average time before a bondholder would receive the bond's cash flows. Conversely, the modified duration measures the price sensitivity of a bond when there is a change in the yield to maturity.
There are a few different ways to approach the concept of duration, or a fixed-income asset's price sensitivity to changes in interest rates.
The Macaulay duration is the weighted average term to maturity of the cash flows from a bond, and is frequently used by portfolio managers who use an immunization strategy.
The modified duration of a bond is an adjusted version of the Macaulay duration and is used to calculate the changes in a bond's duration and price for each percentage change in the yield to maturity.
The Macaulay duration is calculated by multiplying each time period by its periodic coupon payment and dividing that product by (1 plus the periodic yield) raised to the time period. The values for all periods are summed. To this sum is added the total number of periods multiplied by the maturity value, divided by (1 plus the periodic yield) raised to the total number of periods. Finally, the result is divided by the current bond price.
\begin{aligned} &\text{Macaulay Duration}=\frac{\left( \sum_{t=1}^{n}{\frac{t*C}{\left(1+y\right)^t}} + \frac{n*M}{\left(1+y\right)^n } \right)}{\text{Current bond price}}\\ &\textbf{where:}\\ &C=\text{periodic coupon payment}\\ &y=\text{periodic yield}\\ &M=\text{the bond's maturity value}\\ &n=\text{duration of bond in periods}\\ \end{aligned}
A bond's price is calculated by multiplying the coupon payment by 1 minus (1 plus the yield to maturity) raised to the negative number of periods, divided by the required yield. The resulting value is added to the par value, or maturity value, of the bond divided by (1 plus the yield to maturity) raised to the total number of periods.
For example, assume a five-year bond has a maturity value of $5,000, pays a $60 annual coupon, and has a yield to maturity of 6%. Its Macaulay duration is 4.87 years: ((1*60) / (1+0.06) + (2*60) / (1 + 0.06) ^ 2 + (3*60) / (1 + 0.06) ^ 3 + (4*60) / (1 + 0.06) ^ 4 + (5*60) / (1 + 0.06) ^ 5 + (5*5000) / (1 + 0.06) ^ 5) / (60*((1- (1 + 0.06) ^ -5) / (0.06)) + (5000 / (1 + 0.06) ^ 5)).
The modified duration for this bond, with a yield to maturity of 6% and one coupon period per year, is 4.59 years (4.87 / (1 + 0.06/1)). The modified duration is thus 0.28 years lower than the Macaulay duration (4.87 - 4.59).
The percentage change in the price of the bond is the change in yield multiplied by the negative value of the modified duration, multiplied by 100%. For a 1% increase in yield, the resulting percentage change in the bond's price is -4.59% (0.01 * -4.59 * 100%).
\begin{aligned} &\text{Modified Duration}=\frac{\text{Macaulay Duration}}{\left( 1 + \frac{YTM}{n}\right)} \\ &\textbf{where:}\\ &YTM=\text{yield to maturity}\\ &n=\text{number of coupon periods per year} \end{aligned}
The modified duration is an adjusted version of the Macaulay duration that accounts for changes in the yield to maturity. The formula for the modified duration is the Macaulay duration divided by 1 plus the yield to maturity divided by the number of coupon periods per year. The modified duration determines the change in a bond's price for each percentage change in the yield to maturity.
For example, assume a six-year bond has a par value of $1,000 and an annual coupon rate of 8%. The Macaulay duration is calculated to be 4.99 years ((1*80) / (1 + 0.08) + (2*80) / (1 + 0.08) ^ 2 + (3*80) / (1 + 0.08) ^ 3 + (4*80) / (1 + 0.08) ^ 4 + (5*80) / (1 + 0.08) ^ 5 + (6*80) / (1 + 0.08) ^ 6 + (6*1000) / (1 + 0.08) ^ 6) / (80*(1- (1 + 0.08) ^ -6) / 0.08 + 1000 / (1 + 0.08) ^ 6).
The modified duration for this bond, with a yield to maturity of 8% and one coupon period per year, is 4.62 years (4.99 / (1 + 0.08/1)). The modified duration is thus 0.37 years lower than the Macaulay duration (4.99 - 4.62).
The percentage change in the price of the bond is the change in yield multiplied by the negative value of the modified duration, multiplied by 100%. For an interest rate increase from 8% to 9%, the resulting percentage change in the bond's price is -4.62% (0.01 * -4.62 * 100%).
Therefore, if interest rates rise 1% overnight, the price of the bond is expected to drop 4.62%.
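Both worked examples can be reproduced with a short script. This is a sketch under the article's assumptions (annual coupons, yield equal to the periodic yield); the function names are illustrative, not from any standard library:

```python
def macaulay_duration(coupon, par, periods, y):
    """Weighted-average time to receive the bond's cash flows, in periods."""
    pv_coupons = [coupon / (1 + y) ** t for t in range(1, periods + 1)]
    pv_par = par / (1 + y) ** periods
    price = sum(pv_coupons) + pv_par
    weighted = sum(t * pv for t, pv in zip(range(1, periods + 1), pv_coupons))
    weighted += periods * pv_par
    return weighted / price

def modified_duration(mac_dur, ytm, periods_per_year=1):
    """Macaulay duration adjusted for the periodic yield."""
    return mac_dur / (1 + ytm / periods_per_year)

# Five-year bond: $60 annual coupon, $5,000 maturity value, 6% yield
mac1 = macaulay_duration(60, 5000, 5, 0.06)
print(round(mac1, 2), round(modified_duration(mac1, 0.06), 2))  # 4.87 4.59

# Six-year bond: $1,000 par, 8% annual coupon, 8% yield
mac2 = macaulay_duration(80, 1000, 6, 0.08)
print(round(mac2, 2), round(modified_duration(mac2, 0.08), 2))  # 4.99 4.62
```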
The Modified Duration and Interest Rate Swaps
Modified duration could be extended to calculate the number of years it would take an interest rate swap to repay the price paid for the swap. An interest rate swap is the exchange of one set of cash flows for another and is based on interest rate specifications between the parties.
The modified duration is calculated by dividing the dollar value of a one basis point change of an interest rate swap leg, or series of cash flows, by the present value of the series of cash flows. The value is then multiplied by 10,000. The modified duration for each series of cash flows can also be calculated by dividing the dollar value of a basis point change of the series of cash flows by the notional value plus the market value. The fraction is then multiplied by 10,000.
The modified duration of both legs must be calculated to compute the modified duration of the interest rate swap. The difference between the two modified durations is the modified duration of the interest rate swap. The formula for the modified duration of the interest rate swap is the modified duration of the receiving leg minus the modified duration of the paying leg.
For example, assume bank A and bank B enter into an interest rate swap. The modified duration of the receiving leg of a swap is calculated as nine years and the modified duration of the paying leg is calculated as five years. The resulting modified duration of the interest rate swap is four years (9 years – 5 years).
Since the Macaulay duration measures the weighted average time an investor must hold a bond until the present value of the bond’s cash flows is equal to the amount paid for the bond, it is often used by bond managers looking to manage bond portfolio risk with immunization strategies.
In contrast, the modified duration identifies how much the duration changes for each percentage change in the yield while measuring how much a change in the interest rates impacts the price of a bond. Thus, the modified duration can provide a risk measure to bond investors by approximating how much the price of a bond could decline with an increase in interest rates. It's important to note that bond prices and interest rates have an inverse relationship with each other.
|
Display symbolic formula from string - MATLAB displayFormula - MathWorks Deutschland
displayFormula
Multiplication Formula of Matrix and Scalar
Display Differential Equation
Display and Evaluate Symbolic Expression
Display and Solve Quadratic Equation
Display symbolic formula from string
displayFormula(symstr)
displayFormula(symstr,old,new)
displayFormula(symstr) displays the symbolic formula from the string symstr without evaluating the operations. All workspace variables that are specified in symstr are replaced by their values.
displayFormula(symstr,old,new) replaces only the expression or variable old with new. Expressions or variables other than old are not replaced by their values.
Create a 3-by-3 matrix. Multiply the matrix by the scalar coefficient K^2.
A = [-1, 0, 1; 1, 2, 0; 1, 1, 0];
B = K^2*A
\left(\begin{array}{ccc}-{K}^{2}& 0& {K}^{2}\\ {K}^{2}& 2 {K}^{2}& 0\\ {K}^{2}& {K}^{2}& 0\end{array}\right)
The result automatically shows the multiplication being carried out element-wise.
Show the multiplication formula without evaluating the operations by using displayFormula. Input the formula as a string. The variable A in the string is replaced by its values.
displayFormula("F = K^2*A")
F={K}^{2} \left(\begin{array}{ccc}-1& 0& 1\\ 1& 2& 0\\ 1& 1& 0\end{array}\right)
Define a string that describes a differential equation.
S = "m*diff(y,t,t) == m*g-k*y";
Create a string array that combines the differential equation and additional text. Display the formula along with the text.
symstr = ["'The equation of motion is'"; S;"'where k is the elastic coefficient.'"];
\mathrm{The equation of motion is}
m \frac{{\partial }^{2}}{\partial {t}^{2}}\mathrm{ }y=m g-k y
\mathrm{where k is the elastic coefficient.}
Create a string S representing a symbolic expression.
S = "exp(2*pi*i)";
Create another string symstr that contains S.
symstr = "1 + S + S^2 + cos(S)"
symstr =
"1 + S + S^2 + cos(S)"
Display symstr as a formula without evaluating the operations by using displayFormula. S in symstr is replaced by its value.
1+{\mathrm{e}}^{2 \pi \mathrm{i}}+{\left({\mathrm{e}}^{2 \pi \mathrm{i}}\right)}^{2}+\mathrm{cos}\left({\mathrm{e}}^{2 \pi \mathrm{i}}\right)
To evaluate the strings S and symstr as symbolic expressions, use str2sym.
S = str2sym(S)
1
expr = str2sym(symstr)
S+\mathrm{cos}\left(S\right)+{S}^{2}+1
Substitute the variable S with its value by using subs. Evaluate the result in double precision using double.
double(subs(expr))
Define a string that represents a quadratic formula with the coefficients a, b, and c.
syms a b c k
symstr = "a*x^2 + b*x + c";
Display the quadratic formula, replacing a with k.
displayFormula(symstr,a,k)
k {x}^{2}+b x+c
Display the quadratic formula again, replacing a, b, and c with 2, 3, and -1, respectively.
displayFormula(symstr,[a b c],[2 3 -1])
2 {x}^{2}+3 x-1
To solve the quadratic equation, convert the string into a symbolic expression using str2sym. Use solve to find the zeros of the quadratic equation.
f = str2sym(symstr);
sol = solve(f)
\left(\begin{array}{c}-\frac{b+\sqrt{{b}^{2}-4 a c}}{2 a}\\ -\frac{b-\sqrt{{b}^{2}-4 a c}}{2 a}\end{array}\right)
Use subs to replace a, b, and c in the solution with 2, 3, and -1, respectively.
solValues = subs(sol,[a b c],[2 3 -1])
solValues =
\left(\begin{array}{c}-\frac{\sqrt{17}}{4}-\frac{3}{4}\\ \frac{\sqrt{17}}{4}-\frac{3}{4}\end{array}\right)
symstr — String representing symbolic formula
String representing a symbolic formula, specified as a character vector, string scalar, cell array of character vectors, or string array.
You can also combine a string that represents a symbolic formula with regular text (enclosed in single quotation marks) as a string array. For an example, see Display Differential Equation.
old — Expression or variable to be replaced
character vector | string scalar | cell array of character vectors | string array | symbolic variable | symbolic function | symbolic expression | symbolic array
Expression or variable to be replaced, specified as a character vector, string scalar, cell array of character vectors, string array, symbolic variable, function, expression, or array.
number | character vector | string scalar | cell array of character vectors | string array | symbolic number | symbolic variable | symbolic expression | symbolic array
New value, specified as a number, character vector, string scalar, cell array of character vectors, string array, symbolic number, variable, expression, or array.
str2sym | subs | syms | sym | solve
|
The equation of the plane containing the line (x-alpha)/(1)=(-Turito
Answer: The correct answer is 0. Since the two lines intersect, the shortest distance between them is 0.
y=k{x}^{2}
ma\mathrm{cos}\theta =mg\mathrm{cos}\left(90-\theta \right)
⇒\frac{a}{g}=\mathrm{tan}\theta ⇒\frac{a}{g}=\frac{dy}{dx}
⇒\frac{d}{dx}\left(k{x}^{2}\right)=\frac{a}{g}⇒x=\frac{a}{2gk}
v=\sqrt{5gL}\left(i\right)
{\left(\frac{v}{2}\right)}^{2}={v}^{2}-2gh\left(ii\right)
h=L\left(1-\mathrm{cos}\theta \right)\left(iii\right)
Solving Eqs.\left(i\right), \left(ii\right)and \left(iii\right), we get
\mathrm{cos}\theta =-\frac{7}{8}
or \theta ={\mathrm{cos}}^{-1}\left(-\frac{7}{8}\right)\approx 151°
\frac{x-4/3}{2}=\frac{y+6/5}{3}=\frac{z-3/2}{4}
\frac{5y+6}{8}=\frac{2z-3}{9}=\frac{3x-4}{5}
{\int }_{0}^{100} \left\{\sqrt{x}\right\}dx
{\int }_{0}^{1} \left|\mathrm{sin}\,2\pi x\right|dx
f:R\to R,\;f\left(x\right)=\left\{\begin{array}{ll}\left|x-\left[x\right]\right|,&\left[x\right]\text{ is odd}\\ \left|x-\left[x+1\right]\right|,&\left[x\right]\text{ is even}\end{array}\right. where \left[\cdot \right] denotes the greatest integer function
{\int }_{-2}^{4} f\left(x\right)dx
{\int }_{-\pi /4}^{\pi /4} \frac{{e}^{x}\left(x\mathrm{sin}x\right)}{{e}^{2x}-1}dx
\frac{l}{3}=\frac{m}{-3}=\frac{n}{-9}
\frac{l}{-1}=\frac{m}{+1}=\frac{n}{3}
\frac{x-2}{-1}=\frac{y+1}{1}=\frac{z+1}{3}
|
Erratum: “Modeling of Neutral Solute Transport in a Dynamically Loaded Porous Permeable Gel: Implications for Articular Cartilage Biosynthesis and Tissue Engineering” [ASME Journal of Biomechanical Engineering, 2003, 125, pp. 602–614] | J. Biomech Eng. | ASME Digital Collection
Erratum: “Modeling of Neutral Solute Transport in a Dynamically Loaded Porous Permeable Gel: Implications for Articular Cartilage Biosynthesis and Tissue Engineering” [ASME Journal of Biomechanical Engineering, 2003, 125, pp. 602–614]
Robert L. Mauck, Clark T. Hung, and Gerard A. Ateshian
This is a correction to: Modeling of Neutral Solute Transport in a Dynamically Loaded Porous Permeable Gel: Implications for Articular Cartilage Biosynthesis and Tissue Engineering
Mauck, R. L., Hung, C. T., and Ateshian, G. A. (September 27, 2004). "Erratum: “Modeling of Neutral Solute Transport in a Dynamically Loaded Porous Permeable Gel: Implications for Articular Cartilage Biosynthesis and Tissue Engineering” [ASME Journal of Biomechanical Engineering, 2003, 125, pp. 602–614]." ASME. J Biomech Eng. August 2004; 126(4): 541. https://doi.org/10.1115/1.1785817
bone, tissue engineering, biotransport, modelling, biomechanics, biorheology, porous materials, gels, permeability, biodiffusion, biological tissues, convection, biomedical materials
Biomechanical engineering, Cartilage, Modeling, Tissue engineering, Biological tissues, Biomaterials, Biomechanics, Biorheology, Biotransport, Bone, Convection, Permeability, Porous materials
Typesetting corrections for equations (32), (48), and (49):
u_{r}\left(r,0\right)=-\frac{R\theta \left(1-\kappa ^{f}\right)}{H_{A}+\lambda _{s}}\,c_{0}^{f}\,r,\quad c^{f}\left(r,0\right)=\kappa ^{f}c_{0}^{f},\quad p\left(r,0\right)=p_{0}-R\theta \left(1-\kappa ^{f}\right)c_{0}^{f}.
\left.\frac{\partial {\hat{u}}_{r}}{\partial \hat{r}}\right|_{\hat{r}=1}+\frac{\lambda _{s}}{H_{A}}\left[{\hat{u}}_{r}\left(1,\hat{t}\right)+\varepsilon \left(\hat{t}\right)\right]=-\frac{1-\kappa ^{f}}{1-\varphi ^{w}}R_{d}\,{\hat{c}}^{f*},\quad {\hat{c}}^{f}\left(1,\hat{t}\right)=\kappa ^{f}{\hat{c}}^{f*},\quad \hat{p}\left(1,\hat{t}\right)={\hat{p}}^{*}-\frac{1-\kappa ^{f}}{1-\varphi ^{w}}R_{d}\,{\hat{c}}^{f*},
{\hat{u}}_{r}\left(\hat{r},0\right)=-\frac{1-\kappa ^{f}}{1-\varphi ^{w}}R_{d}\,\frac{H_{A}\,\hat{r}}{H_{A}+\lambda _{s}}\,{\hat{c}}_{0}^{f},\quad {\hat{c}}^{f}\left(\hat{r},0\right)=\kappa ^{f}{\hat{c}}_{0}^{f},\quad \hat{p}\left(\hat{r},0\right)={\hat{p}}_{0}-\frac{1-\kappa ^{f}}{1-\varphi ^{w}}R_{d}\,{\hat{c}}_{0}^{f}.
|
Quantum Theory - Course Hero
As the wave-particle duality of light was being confirmed, in 1924 French physicist Louis de Broglie hypothesized that the same might be true of electrons. The idea that matter may also behave as a wave is called the de Broglie hypothesis. He also developed an equation for the wavelength (
\lambda
) of a particle having mass m and traveling at velocity v using Planck's constant, h, equal to
6.62607004\times10^{-34}\,\rm{kg}\cdot{\rm{m}}^2\rm{/s}
, which is called the de Broglie wavelength.
\lambda=\frac h{mv}
This equation can be used to calculate the wavelength of an electron if its mass and velocity are known. For example, assume an electron has
m=9.11\times10^{-31}\;\rm{kg}
v=5.31\times10^6\;\rm m/\rm s.
\begin{aligned}\lambda&=\frac{6.62607004\times10^{-34}\;\rm kg\cdot m^2/s}{(9.11\times10^{-31}\;\rm{kg})\,(5.31\times10^6\;\rm m/\rm s)}\\&=1.37\times10^{-10}\;\rm m\\&=0.137\;\rm{nm}\end{aligned}
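The arithmetic in this example can be checked directly. The following sketch uses only the constants given above:

```python
H = 6.62607004e-34  # Planck's constant, kg*m^2/s

def de_broglie_wavelength(mass_kg, velocity_m_s):
    """de Broglie wavelength (in meters) of a particle: lambda = h / (m v)."""
    return H / (mass_kg * velocity_m_s)

# Electron with m = 9.11e-31 kg moving at 5.31e6 m/s
lam = de_broglie_wavelength(9.11e-31, 5.31e6)
print(lam)   # ~1.37e-10 m, i.e. 0.137 nm
```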
Austrian physicist Erwin Schrödinger built on the de Broglie hypothesis. He considered that the fact that electrons can only have specific quantized energies made them similar to a standing wave, which is a wave that exists with fixed points at either end. For example, a standing wave is created when a guitar string is plucked. The string vibrates between the nut and the bridge but remains fixed at those points. Each point on a standing wave that stays fixed and does not oscillate is a node.
Nodes in a Standing Wave
A standing wave forms when a wave reflects off a medium in such a way that the waves appear stationary. Each point where the crest of a wave approaching the medium exactly meets the trough of the reflected wave forms a point of zero amplitude, called a node, that does not move.
In 1925 Schrödinger developed an equation to describe electrons as standing waves. This equation gives information about a wave in terms of time and position. In Schrödinger's equation,
\psi
is the wave function, which is a mathematical expression that gives information about measurable properties of a system, such as energy, momentum, and position.
\widehat H
is the Hamiltonian operator that determines the change of quantum states, and E is the energy of the system.
\widehat H\psi=E\psi
However, the Heisenberg uncertainty principle states that it is impossible to simultaneously measure the position and the momentum of a particle. It is only possible to know a range of probabilities where an electron might appear, not where it actually is. The range of probabilities where an electron might appear is an electron shell, which is one or more electron subshells that have the same quantum number
n
. Electron shells are determined by an electron's distance from the nucleus. However, an electron's angular position relative to the nucleus also affects where it might be found. An electron orbital is the area of an atom in which an electron has the greatest probability of being located. Each orbital can contain at most two electrons. An electron subshell is a group of electron energy levels with the same size and shape that have the same quantum numbers
n and \ell. For example, the orbitals 2px, 2py, and 2pz make up the 2p subshell.
Shapes of Electron Orbitals
An electron orbital shows the shape and orientation of an electron subshell. The notation for each orbital is from the subshell and which plane the electron occupies. For example,
d_{xy}
is in the d subshell and its shape is aligned with the x and y axes. The electrons fill orbitals in order of increasing energy.
Electron orbitals can be described based on quantum mechanics, the branch of science that deals with subatomic particles, their behaviors, and their interactions. A quantum number is a number that describes electrons in terms of the number of subshells, the shape of the orbital, the number of angular nodes, the energy levels, and the spin on the electrons. The first quantum number is the principal quantum number,
n
, which describes most of the energy in an electron and can be any integer but zero. The principal quantum number
n
is proportional to the distance of the radius of the electron orbital from the nucleus. It indicates the shell where the electron is found. For example, bromine has its outermost electrons in the fourth electron shell from the nucleus, so its principal quantum number is 4. In general, higher principal quantum numbers are associated with higher energies than lower ones.
Energy Levels of Principal Quantum Number
As the principal quantum number increases, the energy level of the associated orbitals increases.
The second quantum number is the orbital angular momentum number,
\ell
. This describes the subshell of the electron shell and can only be positive integer values or zero. Angular momentum limits the volume of space where an electron may be found and also relates to the shape of the subshell. Thus,
\ell
is 0 for an s subshell, 1 for a p subshell, 2 for a d subshell, 3 for an f subshell, and so on. This number is also related to
n
. The angular momentum number
\ell
consists of integers that range from 0 to
n-1
. For example, bromine has
n=4
, so its
\ell
values are 0, 1, 2, and 3.
The third quantum number is the magnetic quantum number,
m
, which indicates the orientation in space of a particular orbital. The value of
m
can be any integer from
-\ell
to +\ell
, including 0. So, for
\ell=2
(d subshell), the range for
m
includes –2, –1, 0, 1, and 2. These define the
d_{{x^2}-{y^2}},d_{z^2},d_{xy},d_{xz}
and d_{yz} orbitals.
The fourth quantum number is the spin quantum number,
s
. Its value is either
+1\rm{/}2
or -1\rm{/}2
and relates to the electron's spin as either up or down. According to the Pauli exclusion principle, no two electrons in an atom can have the same set of four quantum numbers, so paired electrons in the same orbital must have opposite spins. An electron with
s=+1/2
is called an alpha electron, while an electron with
s=-\rm{1/2}
is called a beta electron. Electrons with opposite spins are depicted as one arrow pointing upward and the other arrow pointing downward.
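The counting rules for n, ℓ, m, and s described above can be made concrete with a short enumeration. This is a sketch, not tied to any particular chemistry library:

```python
def orbitals(n):
    """All (l, m) orbital labels in shell n: l runs 0..n-1, m runs -l..+l."""
    return [(l, m) for l in range(n) for m in range(-l, l + 1)]

def electron_capacity(n):
    """Each orbital holds two electrons (s = +1/2 and s = -1/2), giving 2n^2."""
    return 2 * len(orbitals(n))

# Shell n=2: one s orbital (l=0) and three p orbitals (l=1) -> 4 orbitals
print(orbitals(2))                                   # [(0, 0), (1, -1), (1, 0), (1, 1)]
print([electron_capacity(n) for n in range(1, 5)])   # [2, 8, 18, 32]
```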
|
Metalogic — Wikipedia Republished // WIKI 2
Study of the properties of logical systems
Metalogic is the study of the metatheory of logic. Whereas logic studies how logical systems can be used to construct valid and sound arguments, metalogic studies the properties of logical systems.[1] Logic concerns the truths that may be derived using a logical system; metalogic concerns the truths that may be derived about the languages and systems that are used to express truths.[2]
The basic objects of metalogical study are formal languages, formal systems, and their interpretations. The study of interpretation of formal systems is the branch of mathematical logic that is known as model theory, and the study of deductive systems is the branch that is known as proof theory.
Main article: Formal language
A formal language is an organized set of symbols whose essential feature is that it can be defined precisely in terms of just the shapes and locations of those symbols. Such a language therefore can be defined without reference to the meanings of its expressions; it can exist before any interpretation is assigned to it, that is, before it has any meaning. First-order logic is expressed in some formal language. A formal grammar determines which symbols and sets of symbols are formulas in a formal language.
A formal language can be formally defined as a set A of strings (finite sequences) on a fixed alphabet α. Some authors, including Rudolf Carnap, define the language as the ordered pair <α, A>.[3] Carnap also requires that each element of α must occur in at least one string in A.
Main article: Formation rule
Formation rules (also called formal grammar) are a precise description of the well-formed formulas of a formal language. They are synonymous with the set of strings over the alphabet of the formal language that constitute well formed formulas. However, it does not describe their semantics (i.e. what they mean).
Main article: Formal system
A formal system (also called a logical calculus, or a logical system) consists of a formal language together with a deductive apparatus (also called a deductive system). The deductive apparatus may consist of a set of transformation rules (also called inference rules) or a set of axioms, or have both. A formal system is used to derive one expression from one or more other expressions.
A formal system can be formally defined as an ordered triple <α,
{\displaystyle {\mathcal {I}}}
{\displaystyle {\mathcal {D}}}
d>, where
{\displaystyle {\mathcal {D}}}
d is the relation of direct derivability. This relation is understood in a comprehensive sense such that the primitive sentences of the formal system are taken as directly derivable from the empty set of sentences. Direct derivability is a relation between a sentence and a finite, possibly empty set of sentences. Axioms are so chosen that every first place member of
{\displaystyle {\mathcal {D}}}
d is a member of
{\displaystyle {\mathcal {I}}}
and every second place member is a finite subset of
{\displaystyle {\mathcal {I}}}
A formal system can also be defined with only the relation
{\displaystyle {\mathcal {D}}}
d. In that case,
{\displaystyle {\mathcal {I}}}
and α can be omitted from the definitions of interpreted formal language and interpreted formal system. However, this method can be more difficult to understand and use.[3]
Main article: Formal proof
A formal proof is a sequence of well-formed formulas of a formal language, the last of which is a theorem of a formal system. The theorem is a syntactic consequence of all the well formed formulae that precede it in the proof system. For a well formed formula to qualify as part of a proof, it must result from applying a rule of the deductive apparatus of some formal system to the previous well formed formulae in the proof sequence.
Main articles: Interpretation (logic) and Formal semantics (logic)
An interpretation of a formal system is the assignment of meanings to the symbols and truth-values to the sentences of the formal system. The study of interpretations is called Formal semantics. Giving an interpretation is synonymous with constructing a model.
Metalanguage–object language
Main articles: Metalanguage and Object language
In metalogic, formal languages are sometimes called object languages. The language used to make statements about an object language is called a metalanguage. This distinction is a key difference between logic and metalogic. While logic deals with proofs in a formal system, expressed in some formal language, metalogic deals with proofs about a formal system which are expressed in a metalanguage about some object language.
Main articles: Syntax (logic) and Formal semantics (logic)
In metalogic, 'syntax' has to do with formal languages or formal systems without regard to any interpretation of them, whereas, 'semantics' has to do with interpretations of formal languages. The term 'syntactic' has a slightly wider scope than 'proof-theoretic', since it may be applied to properties of formal languages without any deductive systems, as well as to formal systems. 'Semantic' is synonymous with 'model-theoretic'.
Main article: Use–mention distinction
In metalogic, the words 'use' and 'mention', in both their noun and verb forms, take on a technical sense in order to identify an important distinction.[2] The use–mention distinction (sometimes referred to as the words-as-words distinction) is the distinction between using a word (or phrase) and mentioning it. Usually it is indicated that an expression is being mentioned rather than used by enclosing it in quotation marks, printing it in italics, or setting the expression by itself on a line. The enclosing in quotes of an expression gives us the name of an expression, for example:
'Metalogic' is the name of this article.
This article is about metalogic.
Main article: Type–token distinction
The type–token distinction is a distinction in metalogic that separates an abstract concept from the objects which are particular instances of the concept. For example, the particular bicycle in your garage is a token of the type of thing known as "the bicycle." While the bicycle in your garage is in a particular place at a particular time, that is not true of "the bicycle" as used in the sentence "The bicycle has become more popular recently." This distinction is used to clarify the meaning of symbols of formal languages.
Metalogical questions have been asked since the time of Aristotle. However, it was only with the rise of formal languages in the late 19th and early 20th century that investigations into the foundations of logic began to flourish. In 1904, David Hilbert observed that, in investigating the foundations of mathematics, logical notions are presupposed, and therefore a simultaneous account of metalogical and metamathematical principles was required. Today, metalogic and metamathematics are largely synonymous with each other, and both have been substantially subsumed by mathematical logic in academia. A possible alternative, less mathematical model may be found in the writings of Charles Sanders Peirce and other semioticians.
Results in metalogic consist of such things as formal proofs demonstrating the consistency, completeness, and decidability of particular formal systems.
Major results in metalogic include:
Proof of the uncountability of the power set of the natural numbers (Cantor's theorem 1891)
Löwenheim–Skolem theorem (Leopold Löwenheim 1915 and Thoralf Skolem 1919)
Proof of the consistency of truth-functional propositional logic (Emil Post 1920)
Proof of the semantic completeness of truth-functional propositional logic (Paul Bernays 1918),[4] (Emil Post 1920)[2]
Proof of the syntactic completeness of truth-functional propositional logic (Emil Post 1920)[2]
Proof of the decidability of truth-functional propositional logic (Emil Post 1920)[2]
Proof of the consistency of first-order monadic predicate logic (Leopold Löwenheim 1915)
Proof of the semantic completeness of first-order monadic predicate logic (Leopold Löwenheim 1915)
Proof of the decidability of first-order monadic predicate logic (Leopold Löwenheim 1915)
Proof of the consistency of first-order predicate logic (David Hilbert and Wilhelm Ackermann 1928)
Proof of the semantic completeness of first-order predicate logic (Gödel's completeness theorem 1930)
Proof of the cut-elimination theorem for the sequent calculus (Gentzen's Hauptsatz 1934)
Proof of the undecidability of first-order predicate logic (Church's theorem 1936)
Gödel's first incompleteness theorem 1931
Gödel's second incompleteness theorem 1931
Tarski's undefinability theorem (Gödel and Tarski in the 1930s)
Metalogic programming
^ Harry Gensler, Introduction to Logic, Routledge, 2001, p. 336.
^ a b c d e Hunter, Geoffrey, Metalogic: An Introduction to the Metatheory of Standard First-Order Logic, University of California Press, 1973
^ a b Rudolf Carnap (1958) Introduction to Symbolic Logic and its Applications, p. 102.
^ Hao Wang, Reflections on Kurt Gödel
Media related to Metalogic at Wikimedia Commons
Dragalin, A.G. (2001) [1994], "Meta-logic", Encyclopedia of Mathematics, EMS Press
|
Think about how you might sketch a parabola on a graph.
Do the sides of a parabola ever curve back in like the figure at right? Explain your reasoning.
No, because there is only one y-value for every x-value in a parabola.
Do the sides of the parabola approach straight vertical lines as shown in the figure at right? (In other words, do parabolas have asymptotes?) Give a reason for your answer.
The domain of a parabola is all real numbers, so the graph extends without bound to the left and right. The sides therefore never approach a fixed vertical line, and the parabola has no asymptotes.
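This can be backed by a short slope argument (a sketch for a general parabola y = ax² + bx + c; not part of the original answer key):

```latex
% tangent slope of y = ax^2 + bx + c:
y' = \frac{dy}{dx} = 2ax + b
% This is finite at every real x, so the tangent line is never vertical,
% and the sides of the parabola cannot approach a vertical asymptote.
```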
|
Revision as of 18:19, 2 September 2011 by Jmcbride (talk | contribs) (added note about missing lecture)
{\displaystyle \left(-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}+V\right)\psi =E\psi .\,\!}
{\displaystyle E={\frac {k^{2}\hbar ^{2}}{2m}}.\,\!}
{\displaystyle k}
{\displaystyle V=V_{0}}
{\displaystyle E-V_{0}={\frac {k^{2}\hbar ^{2}}{2m}}.\,\!}
{\displaystyle \sigma (E)={\frac {S(E)}{E}}e^{-(E_{G}/E)^{1/2}}.\,\!}
{\displaystyle S(E)}
{\displaystyle E}
{\displaystyle E_{G}}
{\displaystyle E_{G}=(1{\rm {\;MeV}})Z_{1}^{2}Z_{2}^{2}{\frac {m_{r}}{m_{p}}}.\,\!}
{\displaystyle n_{1}}
{\displaystyle n_{2}}
{\displaystyle \ell _{2}={\frac {1}{n_{1}\sigma }}\,\!}
{\displaystyle \tau _{2}={\frac {1}{n_{1}\sigma v}}.\,\!}
{\displaystyle r_{12}={\frac {n_{2}}{\tau _{2}}}=n_{1}n_{2}\sigma v.\,\!}
{\displaystyle r_{12}=n_{1}n_{2}<\sigma (E)v>.\,\!}
{\displaystyle <\sigma (E)v>}
{\displaystyle <\sigma (E)v>=\int d^{3}v\;prob(v)\sigma (E)v.\,\!}
{\displaystyle r_{12}=n_{1}n_{2}\int d^{3}v\sigma (E)v\left({\frac {m_{r}}{2\pi kT}}\right)^{3/2}e^{-{\frac {{\frac {1}{2}}m_{r}v^{2}}{kT}}}.\,\!}
{\displaystyle E={\frac {1}{2}}m_{r}v^{2},\,\!}
{\displaystyle dE=m_{r}vdv,\,\!}
{\displaystyle d^{3}v=4\pi v^{2}dv=4\pi {\frac {v^{2}}{v}}{\frac {dE}{m_{r}}},\,\!}
{\displaystyle vd^{3}v={\frac {8\pi E}{m_{r}}}{\frac {dE}{m_{r}}}.\,\!}
{\displaystyle <\sigma (E)v>=\left({\frac {2}{kT}}\right)^{3/2}{\frac {1}{\sqrt {\pi m_{r}}}}\int dEE\sigma (E)e^{-E/kT}.\,\!}
{\displaystyle \sigma (E)}
{\displaystyle <\sigma (E)v>=\left({\frac {2}{kT}}\right)^{3/2}{\frac {1}{\sqrt {\pi m_{r}}}}\int dES(E)e^{-(E_{G}/E)^{1/2}\;-\;E/kT}.\,\!}
{\displaystyle S(E)}
{\displaystyle <\sigma (E)v>=\left({\frac {2}{kT}}\right)^{3/2}{\frac {1}{\sqrt {\pi m_{r}}}}S(E)I.\,\!}
{\displaystyle I=\int _{0}^{\infty }e^{-(E_{G}/E)^{1/2}\;-\;E/kT}dE.\,\!}
{\displaystyle E_{0}}
{\displaystyle E_{0}}
{\displaystyle f(E)}
{\displaystyle {\frac {df}{dE}}=0={\frac {1}{kT}}-{\frac {E_{G}^{1/2}}{2E^{3/2}}}.\,\!}
{\displaystyle E_{0}=\left({\frac {1}{2}}E_{G}^{1/2}kT\right)^{2/3}.\,\!}
{\displaystyle E_{G}}
{\displaystyle E_{0}=(5.7\;{\rm {keV}})Z_{1}^{2/3}Z_{2}^{2/3}T_{7}^{2/3}\left({\frac {m_{r}}{m_{p}}}\right)^{1/3}.\,\!}
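As a quick numerical check of the Gamow-peak formula above, a short Python sketch (the function name and the choice T₇ ≈ 1.5 for the solar centre are assumptions for illustration):

```python
# Gamow peak: E0 = (5.7 keV) * Z1^(2/3) * Z2^(2/3) * T7^(2/3) * (m_r/m_p)^(1/3)
def gamow_peak_keV(Z1, Z2, T7, mr_over_mp):
    return 5.7 * (Z1 * Z2) ** (2.0 / 3.0) * T7 ** (2.0 / 3.0) * mr_over_mp ** (1.0 / 3.0)

# p-p fusion: Z1 = Z2 = 1, reduced mass m_r = m_p / 2, assumed T7 ~ 1.5
E0 = gamow_peak_keV(1, 1, 1.5, 0.5)
print(f"{E0:.1f} keV")  # ~5.9 keV: well above kT ~ 1.3 keV, far below E_G ~ 500 keV
```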
{\displaystyle E_{G}}
{\displaystyle kT}
{\displaystyle f(E)=f(E_{0})+{\frac {1}{2}}(E-E_{0})^{2}f^{''}(E_{0}),\,\!}
{\displaystyle f^{''}(E_{0})={\frac {3E_{G}^{1/2}}{4E_{0}^{5/2}}}.\,\!}
{\displaystyle I}
{\displaystyle I=e^{-f(E_{0})}{\sqrt {\frac {2\pi }{f^{''}(E_{0})}}}.\,\!}
{\displaystyle <\sigma (E)v>=2.6S(E_{0}){\frac {E_{G}^{1/6}}{(kT)^{2/3}{\sqrt {m_{r}}}}}e^{-3(E_{G}/4kT)^{1/3}}.\,\!}
{\displaystyle \epsilon }
{\displaystyle L=\int \epsilon dM_{r}=\int \epsilon 4\pi r^{2}\rho dr.\,\!}
{\displaystyle {\frac {dL_{r}}{dr}}=4\pi r^{2}\rho \epsilon .\,\!}
{\displaystyle Q}
{\displaystyle r_{12}}
{\displaystyle \epsilon }
{\displaystyle \epsilon _{12}={\frac {r_{12}Q}{\rho }}.\,\!}
{\displaystyle n_{1}={\frac {X_{1}\rho }{m_{1}}}.\,\!}
{\displaystyle X_{1}}
{\displaystyle \epsilon _{12}={\frac {2.6QS(E_{0})X_{1}X_{2}}{m_{1}m_{2}{\sqrt {m_{r}}}(kT)^{2/3}}}\rho E_{G}^{1/6}e^{-3(E_{G}/4kT)^{1/3}}.\,\!}
{\displaystyle \epsilon \propto \rho ^{\alpha }T^{\beta }.\,\!}
{\displaystyle \alpha }
{\displaystyle \beta }
{\displaystyle \alpha =1}
{\displaystyle \beta }
{\displaystyle \epsilon }
{\displaystyle \beta ={\frac {d\ln \epsilon }{d\ln T}}.\,\!}
{\displaystyle \epsilon }
{\displaystyle \beta =-{\frac {2}{3}}+\left({\frac {E_{G}}{4kT}}\right)^{1/3}.\,\!}
{\displaystyle \beta \approx 4.3}
{\displaystyle \epsilon _{pp}\propto \rho T^{4.3}\,\!}
{\displaystyle 10^{7}}
{\displaystyle T_{c}\sim 10^{7}}
{\displaystyle \rho \sim 1{\rm {\;g\;cm^{-3}}}}
{\displaystyle S(E)}
{\displaystyle Q}
{\displaystyle \epsilon }
{\displaystyle \epsilon \sim 10^{20}{\rm {\;erg/s/g}}.\,\!}
{\displaystyle L=\int dM_{r}\epsilon \sim \epsilon M_{\odot }.\,\!}
{\displaystyle L\sim 10^{54}{\rm {\;erg/s}}\sim 10^{20}L_{\odot }.\,\!}
{\displaystyle 10^{20}}
{\displaystyle E_{G}}
{\displaystyle 4p\rightarrow {}^{4}{\rm {He}}+{\rm {energy}}.\,\!}
{\displaystyle p+p\rightarrow {}^{2}{\rm {H}}+e^{+}+\nu _{e}.\,\!}
{\displaystyle S\approx 3.78\times 10^{-22}{\rm {\;keV\;barn}}}
{\displaystyle {}^{2}{\rm {H}}+p\rightarrow {}^{3}{\rm {He}}+\gamma ,\,\!}
{\displaystyle \times 10^{-4}}
{\displaystyle {}^{3}{\rm {He}}+{}^{3}{\rm {He}}\rightarrow {}^{4}{\rm {He}}+2p,\,\!}
{\displaystyle \epsilon _{cycle}=r_{p-p\;step}Q_{cycle}/\rho .\,\!}
{\displaystyle \epsilon _{pp}\propto \rho T^{-2/3}e^{-15.7T_{7}^{-1/3}}.\,\!}
{\displaystyle \epsilon _{pp}=(5\times 10^{5}){\frac {\rho X^{2}}{T^{2/3}}}e^{-15.7T_{7}^{-1/3}}{\rm {erg/s/g}}.\,\!}
{\displaystyle L=\int \epsilon dM\sim \epsilon (center)M_{\odot },\,\!}
{\displaystyle L_{\odot }\sim 10^{7}{\frac {M_{\odot }}{T_{7}^{2/3}}}e^{-15.7T_{7}^{-1/3}},\,\!}
{\displaystyle T_{c}\approx 10^{7}K.\,\!}
{\displaystyle p+p\rightarrow {}^{2}H+e^{+}+\nu _{e}\,\!}
{\displaystyle {}^{2}H+p\rightarrow {}^{3}He+\gamma \,\!}
{\displaystyle {}^{3}He+{}^{3}He\rightarrow {}^{4}He+2p\,\!}
{\displaystyle 10^{7}}
{\displaystyle 10^{-7}}
{\displaystyle 10^{-31}}
{\displaystyle 10^{24}}
{\displaystyle \epsilon _{CNO}\approx (4\times 10^{27}){\frac {\rho }{T_{7}^{2/3}}}XZe^{-70.7T_{7}^{-1/3}}{\rm {\;erg/g/s}}.\,\!}
{\displaystyle \beta ={\frac {-2}{3}}+{\frac {23.6}{T_{7}^{1/3}}},\,\!}
{\displaystyle \epsilon \propto \rho T^{\beta }.\,\!}
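The two temperature sensitivities can be compared directly from the exponents above: a rate ε ∝ ρ T^(−2/3) exp(−c T₇^(−1/3)) has β = −2/3 + (c/3) T₇^(−1/3), with c = 15.7 for p-p and c = 70.7 for CNO. A minimal sketch, with T₇ = 1.5 assumed for illustration:

```python
# beta = dln(eps)/dlnT for eps ∝ rho * T^(-2/3) * exp(-c * T7^(-1/3))
def beta(c, T7):
    return -2.0 / 3.0 + (c / 3.0) * T7 ** (-1.0 / 3.0)

T7 = 1.5  # assumed central solar temperature in units of 1e7 K
print(f"pp : beta = {beta(15.7, T7):.1f}")   # ~3.9, weakly temperature sensitive
print(f"CNO: beta = {beta(70.7, T7):.1f}")   # ~19.9, extremely temperature sensitive
```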
{\displaystyle \sigma \sim 10^{-44}\left({\frac {E_{\nu }}{m_{e}c^{2}}}\right)^{2}{\rm {\;cm^{2}}}.\,\!}
{\displaystyle \ell ={\frac {1}{n\sigma }}.\,\!}
{\displaystyle E_{\nu }\sim }
{\displaystyle \ell \sim 10^{9}R_{\odot }.\,\!}
{\displaystyle {}^{37}Cl+\nu _{e}\rightarrow {}^{37}Ar+e^{-}.\,\!}
{\displaystyle 10^{22}}
{\displaystyle \nu _{e}+D\rightarrow p+p+e^{-}.\,\!}
{\displaystyle \nu +D\rightarrow p+n+\nu .\,\!}
|
Juggler sequence - Wikipedia
Not to be confused with Juggling pattern.
In number theory, a juggler sequence is an integer sequence that starts with a positive integer a0, with each subsequent term in the sequence defined by the recurrence relation:
{\displaystyle a_{k+1}={\begin{cases}\left\lfloor a_{k}^{\frac {1}{2}}\right\rfloor ,&{\text{if }}a_{k}{\text{ is even}}\\\\\left\lfloor a_{k}^{\frac {3}{2}}\right\rfloor ,&{\text{if }}a_{k}{\text{ is odd}}.\end{cases}}}
Juggler sequences were publicised by American mathematician and author Clifford A. Pickover.[1] The name is derived from the rising and falling nature of the sequences, like balls in the hands of a juggler.[2]
For example, the juggler sequence starting with a0 = 3 is
{\displaystyle a_{1}=\lfloor 3^{\frac {3}{2}}\rfloor =\lfloor 5.196\dots \rfloor =5,}
{\displaystyle a_{2}=\lfloor 5^{\frac {3}{2}}\rfloor =\lfloor 11.180\dots \rfloor =11,}
{\displaystyle a_{3}=\lfloor 11^{\frac {3}{2}}\rfloor =\lfloor 36.482\dots \rfloor =36,}
{\displaystyle a_{4}=\lfloor 36^{\frac {1}{2}}\rfloor =\lfloor 6\rfloor =6,}
{\displaystyle a_{5}=\lfloor 6^{\frac {1}{2}}\rfloor =\lfloor 2.449\dots \rfloor =2,}
{\displaystyle a_{6}=\lfloor 2^{\frac {1}{2}}\rfloor =\lfloor 1.414\dots \rfloor =1.}
If a juggler sequence reaches 1, then all subsequent terms are equal to 1. It is conjectured that all juggler sequences eventually reach 1. This conjecture has been verified for initial terms up to 10^6,[3] but has not been proved. Juggler sequences therefore present a problem that is similar to the Collatz conjecture, about which Paul Erdős stated that "mathematics is not yet ready for such problems".
For a given initial term n, one defines l(n) to be the number of steps which the juggler sequence starting at n takes to first reach 1, and h(n) to be the maximum value in the juggler sequence starting at n. For small values of n we have:
n    juggler sequence               l(n)   h(n)
3    3, 5, 11, 36, 6, 2, 1          6      36
5    5, 11, 36, 6, 2, 1             5      36
7    7, 18, 4, 2, 1                 4      18
9    9, 27, 140, 11, 36, 6, 2, 1    7      140
10   10, 3, 5, 11, 36, 6, 2, 1      7      36
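The recurrence and the l(n), h(n) values above can be reproduced with a few lines of Python (a sketch; exact integer arithmetic via math.isqrt avoids floating-point error for very large terms, since floor(a^(3/2)) = isqrt(a³)):

```python
from math import isqrt

def juggler(n):
    """Juggler sequence from a0 = n down to 1, using exact integer arithmetic:
    floor(a**(1/2)) = isqrt(a) for even a, floor(a**(3/2)) = isqrt(a**3) for odd a."""
    seq = [n]
    while seq[-1] != 1:
        a = seq[-1]
        seq.append(isqrt(a) if a % 2 == 0 else isqrt(a ** 3))
    return seq

seq = juggler(3)
print(seq)                     # [3, 5, 11, 36, 6, 2, 1]
print(len(seq) - 1, max(seq))  # l(3) = 6 steps, h(3) = 36
```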
Juggler sequences can reach very large values before descending to 1. For example, the juggler sequence starting at a0 = 37 reaches a maximum value of 24906114455136. Harry J. Smith has determined that the juggler sequence starting at a0 = 48443 reaches a maximum value at a60 with 972,463 digits, before reaching 1 at a157.[4]
^ Pickover, Clifford A. (1992). "Chapter 40". Computers and the Imagination. St. Martin's Press. ISBN 978-0-312-08343-4.
^ Pickover, Clifford A. (2002). "Chapter 45: Juggler Numbers". The Mathematics of Oz: Mental Gymnastics from Beyond the Edge. Cambridge University Press. pp. 102–106. ISBN 978-0-521-01678-0.
^ Weisstein, Eric W. "Juggler Sequence". MathWorld.
^ Letter from Harry J. Smith to Clifford A. Pickover, 27 June 1992
Weisstein, Eric W. "Juggler sequence". MathWorld.
Juggler sequence (A094683) at the On-Line Encyclopedia of Integer Sequences. See also:
Number of steps needed for juggler sequence (A094683) started at n to reach 1.
n sets a new record for number of iterations to reach 1 in the juggler sequence problem.
Number of steps where the Juggler sequence reaches a new record.
Smallest number which requires n iterations to reach 1 in the juggler sequence problem.
Starting values that produce a larger juggler number than smaller starting values.
Juggler sequence calculator at Collatz Conjecture Calculation Center
Juggler Number pages by Harry J. Smith
Retrieved from "https://en.wikipedia.org/w/index.php?title=Juggler_sequence&oldid=1049492229"
|
An envelope, in technical analysis, refers to trend lines plotted both above and below the current price.
An envelope's upper and lower bands are typically generated by a simple moving average and a pre-determined distance above and below the moving average—but can be created using any number of other techniques.
Many traders treat it as a sell signal when the price reaches or crosses the upper band of an envelope channel, and as a buy signal when the price reaches or crosses the lower band.
Example of an Envelope
Moving average envelopes are the most common type of envelope indicator. Using either a simple or exponential moving average, an envelope is created by defining a fixed percentage to create upper and lower bounds.
Let's take a look at a five percent simple moving average envelope for the S&P 500 SPDR (SPY):
The calculations for this envelope are:
\begin{aligned} &\text{Upper Bound} = \text{SMA}_{50} + \text{SMA}_{50}*0.05\\ &\text{Lower Bound} = \text{SMA}_{50} - \text{SMA}_{50}*0.05\\ &\text{Midpoint} = \text{SMA}_{50}\\ \\ \textbf{where:}&\\ &\text{SMA}_{50}=\text{50-day Simple Moving Average} \\ \end{aligned}
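These calculations can be sketched in plain Python (the helper names are illustrative, and a hypothetical 3-period window with toy prices is used so the bands are easy to verify by hand; a real chart would use the 50-day SMA as in the formula above):

```python
def sma(prices, window):
    """Simple moving average; None until enough data points exist."""
    return [None if i + 1 < window else sum(prices[i + 1 - window:i + 1]) / window
            for i in range(len(prices))]

def envelope(prices, window=50, pct=0.05):
    """Upper/lower bands a fixed percentage above and below the SMA midpoint."""
    mid = sma(prices, window)
    upper = [None if m is None else m * (1 + pct) for m in mid]
    lower = [None if m is None else m * (1 - pct) for m in mid]
    return upper, mid, lower

upper, mid, lower = envelope([10, 12, 14, 13, 11], window=3, pct=0.05)
print(round(upper[2], 2), mid[2], round(lower[2], 2))  # 12.6 12.0 11.4
```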
Traders may have taken a short position in the exchange-traded fund when the price moved beyond the upper range and a long position when the price moved below the lower range. In these cases, the trader would have benefited from the reversion to the mean over the following periods.
Traders may set stop-loss points at a fixed percentage beyond the upper and lower bounds, while take-profit points are often set at the midpoint line.
|
Isotropic radiator — Wikipedia Republished // WIKI 2
Not to be confused with isotropic radiation.
Animated diagram of waves from an isotropic radiator (red dot). As they travel away from the source, the waves decrease in amplitude by the inverse of distance
{\displaystyle 1/r}
and in power by the inverse square of distance
{\displaystyle 1/r^{2}}
, shown by the declining contrast of the wavefronts. This diagram only shows the waves in one plane through the source; an isotropic source actually radiates in all three dimensions.
A depiction of an isotropic radiator of sound, published in Popular Science Monthly in 1878. Note how the rings are even and of the same width all the way around each circle, though they fade as they move away from the source.
An isotropic radiator is a theoretical point source of electromagnetic or sound waves which radiates the same intensity of radiation in all directions. It has no preferred direction of radiation. It radiates uniformly in all directions over a sphere centred on the source. Isotropic radiators are used as reference radiators with which other sources are compared, for example in determining the gain of antennas. A coherent isotropic radiator of electromagnetic waves is theoretically impossible, but incoherent radiators can be built. An isotropic sound radiator is possible because sound is a longitudinal wave.
The unrelated term isotropic radiation refers to radiation which has the same intensity in all directions, thus an isotropic radiator does not radiate isotropic radiation.
In physics, an isotropic radiator is a point radiation or sound source. At a distance, the sun is an isotropic radiator of electromagnetic radiation.
In antenna theory, an isotropic antenna is a hypothetical antenna radiating the same intensity of radio waves in all directions. It thus is said to have a directivity of 0 dBi (dB relative to isotropic) in all directions. Since it is entirely non-directional, it serves as a hypothetical worst-case against which directional antennas may be compared.
In reality, a coherent isotropic radiator of linear polarization can be shown to be impossible. Its radiation field could not be consistent with the Helmholtz wave equation (derived from Maxwell's equations) in all directions simultaneously. Consider a large sphere surrounding the hypothetical point source, in the far field of the radiation pattern so that at that radius the wave over a reasonable area is essentially planar. In the far field the electric (and magnetic) field of a plane wave in free space is always perpendicular to the direction of propagation of the wave. So the electric field would have to be tangent to the surface of the sphere everywhere, and continuous along that surface. However the hairy ball theorem shows that a continuous vector field tangent to the surface of a sphere must fall to zero at one or more points on the sphere, which is inconsistent with the assumption of an isotropic radiator with linear polarization.
Incoherent isotropic radiators are possible and do not violate Maxwell's equations.[citation needed] Acoustic isotropic radiators are possible because sound waves in a gas or liquid are longitudinal waves and not transverse waves.
Even though an isotropic antenna cannot exist in practice, it is used as a base of comparison to calculate the directivity of actual antennas. Antenna gain
{\displaystyle \scriptstyle G}
, which is equal to the antenna's directivity multiplied by the antenna efficiency, is defined as the ratio of the intensity
{\displaystyle \scriptstyle I}
(power per unit area) of the radio power received at a given distance from the antenna (in the direction of maximum radiation) to the intensity
{\displaystyle \scriptstyle I_{\text{iso}}}
received from a perfect lossless isotropic antenna at the same distance. This is called isotropic gain
{\displaystyle G={I \over I_{\text{iso}}}\,}
Gain is often expressed in logarithmic units called decibels (dB). When gain is calculated with respect to an isotropic antenna, these are called decibels isotropic (dBi)
{\displaystyle G\mathrm {(dBi)} =10\log {I \over I_{\text{iso}}}\,}
The gain of any perfectly efficient antenna averaged over all directions is unity, or 0 dBi.
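The dBi conversion above can be sketched in a couple of lines of Python (the function name is an illustrative assumption):

```python
from math import log10

def gain_dbi(I, I_iso):
    """Isotropic gain in decibels: G(dBi) = 10 * log10(I / I_iso)."""
    return 10 * log10(I / I_iso)

print(f"{gain_dbi(2.0, 1.0):.2f} dBi")  # 3.01 dBi: twice the isotropic intensity
print(f"{gain_dbi(1.0, 1.0):.2f} dBi")  # 0.00 dBi: the isotropic reference itself
```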
Isotropic receiver
In EMF measurement applications, an isotropic receiver (also called isotropic antenna) is a calibrated radio receiver with an antenna which approximates an isotropic reception pattern; that is, it has close to equal sensitivity to radio waves from any direction. It is used as a field measurement instrument to measure electromagnetic sources and calibrate antennas. The isotropic receiving antenna is usually approximated by three orthogonal antennas or sensing devices with a radiation pattern of the omnidirectional type
{\displaystyle \sin(\theta )}
, such as short dipoles or small loop antennas.
The parameter used to define accuracy in the measurements is called isotropic deviation.
In optics, an isotropic radiator is a point source of light. The sun approximates an isotropic radiator of light. Certain munitions such as flares and chaff have isotropic radiator properties. Whether a radiator is isotropic is independent of whether it obeys Lambert's law. As radiators, a spherical black body is both, a flat black body is Lambertian but not isotropic, a flat chrome sheet is neither, and by symmetry the Sun is isotropic, but not Lambertian on account of limb darkening.
An isotropic sound radiator is a theoretical loudspeaker radiating equal sound volume in all directions. Since sound waves are longitudinal waves, a coherent isotropic sound radiator is feasible; an example is a pulsing spherical membrane or diaphragm, whose surface expands and contracts radially with time, pushing on the air.[1]
Derivation of aperture of an isotropic antenna
Diagram of antenna and resistor in cavity
The aperture of an isotropic antenna can be derived by a thermodynamic argument.[2][3][4] Suppose an ideal (lossless) isotropic antenna A located within a thermal cavity CA, is connected via a lossless transmission line through a band-pass filter Fν to a matched resistor R in another thermal cavity CR (the characteristic impedance of the antenna, line and filter are all matched). Both cavities are at the same temperature
{\displaystyle T}
. The filter Fν only allows through a narrow band of frequencies from
{\displaystyle \nu }
to
{\displaystyle \nu +\Delta \nu }
. Both cavities are filled with blackbody radiation in equilibrium with the antenna and resistor. Some of this radiation is received by the antenna. The amount of this power
{\displaystyle P_{\text{A}}}
within the band of frequencies
{\displaystyle \Delta \nu }
passes through the transmission line and filter Fν and is dissipated as heat in the resistor. The rest is reflected by the filter back to the antenna and is reradiated into the cavity. The resistor also produces Johnson–Nyquist noise current due to the random motion of its molecules at the temperature
{\displaystyle T}
. The amount of this power
{\displaystyle P_{\text{R}}}
within the frequency band
{\displaystyle \Delta \nu }
passes through the filter and is radiated by the antenna. Since the entire system is at the same temperature it is in thermodynamic equilibrium; there can be no net transfer of power between the cavities, otherwise one cavity would heat up and the other would cool down in violation of the second law of thermodynamics. Therefore the power flows in both directions must be equal
{\displaystyle P_{\text{A}}=P_{\text{R}}}
The radio noise in the cavity is unpolarized, containing an equal mixture of polarization states. However any antenna with a single output is polarized, and can only receive one of two orthogonal polarization states. For example, a linearly polarized antenna cannot receive components of radio waves with electric field perpendicular to the antenna's linear elements; similarly a right circularly polarized antenna cannot receive left circularly polarized waves. Therefore the antenna only receives the component of power density S in the cavity matched to its polarization, which is half of the total power density
{\displaystyle S_{\text{matched}}={1 \over 2}S}
{\displaystyle B_{\nu }}
is the spectral radiance per hertz in the cavity; the power of black body radiation per unit area (meter²) per unit solid angle (steradian) per unit frequency (hertz) at frequency
{\displaystyle \nu }
and temperature
{\displaystyle T}
in the cavity. If
{\displaystyle A_{\text{e}}(\theta ,\phi )}
is the antenna's aperture, the amount of power in the frequency range
{\displaystyle \Delta \nu }
the antenna receives from an increment of solid angle
{\displaystyle d\Omega =d\theta d\phi }
coming from direction
{\displaystyle \theta ,\phi }
is
{\displaystyle dP_{\text{A}}(\theta ,\phi )=A_{\text{e}}(\theta ,\phi )S_{\text{matched}}\Delta \nu d\Omega ={1 \over 2}A_{\text{e}}(\theta ,\phi )B_{\nu }\Delta \nu d\Omega }
To find the total power in the frequency range
{\displaystyle \Delta \nu }
the antenna receives, this is integrated over all directions (a solid angle of
{\displaystyle 4\pi }
{\displaystyle P_{\text{A}}={1 \over 2}\int \limits _{4\pi }A_{\text{e}}(\theta ,\phi )B_{\nu }\Delta \nu d\Omega }
Since the antenna is isotropic, it has the same aperture
{\displaystyle A_{\text{e}}(\theta ,\phi )=A_{\text{e}}}
in any direction. So the aperture can be moved outside the integral. Similarly the radiance
{\displaystyle B_{\nu }}
in the cavity is the same in any direction
{\displaystyle P_{\text{A}}={1 \over 2}A_{\text{e}}B_{\nu }\Delta \nu \int \limits _{4\pi }d\Omega }
{\displaystyle P_{\text{A}}=2\pi A_{\text{e}}B_{\nu }\Delta \nu }
Radio waves are low enough in frequency so the Rayleigh–Jeans formula gives a very close approximation of the blackbody spectral radiance[5]
{\displaystyle B_{\nu }={2\nu ^{2}kT \over c^{2}}={2kT \over \lambda ^{2}}}
{\displaystyle P_{\text{A}}={4\pi A_{\text{e}}kT \over \lambda ^{2}}\Delta \nu }
The Johnson–Nyquist noise power produced by a resistor at temperature
{\displaystyle T}
over a frequency range
{\displaystyle \Delta \nu }
{\displaystyle P_{\text{R}}=kT\Delta \nu }
Since the cavities are in thermodynamic equilibrium
{\displaystyle P_{\text{A}}=P_{\text{R}}}
{\displaystyle {4\pi A_{\text{e}}kT \over \lambda ^{2}}\Delta \nu =kT\Delta \nu }
{\displaystyle A_{\text{e}}={\lambda ^{2} \over 4\pi }}
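This result is easy to evaluate numerically; a Python sketch (the function name is an illustrative assumption):

```python
from math import pi

C = 299_792_458.0  # speed of light, m/s

def isotropic_aperture_m2(freq_hz):
    """Effective aperture of a lossless isotropic antenna: A_e = lambda^2 / (4*pi)."""
    lam = C / freq_hz
    return lam ** 2 / (4 * pi)

# at 300 MHz the wavelength is ~1 m, so A_e ~ 1/(4*pi) ~ 0.0795 m^2
print(f"{isotropic_aperture_m2(300e6):.4f} m^2")
```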
E-plane and H-plane
^ Remsburg, Ralph (2011). Advanced Thermal Design of Electronic Equipment. Springer Science and Business Media. p. 534. ISBN 1441985093.
^ Pawsey, J. L.; Bracewell, R. N. (1955). Radio Astronomy. London: Oxford University Press. pp. 23–24.
^ Rohlfs, Kristen; Wilson, T. L. (2013). Tools of Radio Astronomy, 4th Edition. Springer Science and Business Media. pp. 134–135. ISBN 3662053942.
^ Condon, J. J.; Ransom, S. M. (2016). "Antenna Fundamentals". Essential Radio Astronomy course. US National Radio Astronomy Observatory (NRAO) website. Retrieved 22 August 2018.
^ The Rayleigh-Jeans formula is a good approximation as long as the energy in a radio photon is small compared with the thermal energy per degree of freedom:
{\displaystyle h\nu \ll kT}
. This is true throughout the radio spectrum at all ordinary temperatures.
Isotropic Radiators, Matzner and McDonald, arXiv Antennas
Antennas D.Jefferies
isotropic radiator AMS Glossary
U.S. Patent 4,130,023 - Method and apparatus for testing and evaluating loudspeaker performance
Non Lethal Concepts - Implications for Air Force Intelligence Published Aerospace Power Journal, Winter 1994
Cosmic Microwave Background - Introduction
Isotropic Radiators Holon Academic Institute of Technology
Isotropic radiator
Batwing antenna
Biconical antenna
Cage aerial
Choke ring antenna
Coaxial antenna
Crossed field antenna
Dielectric resonator antenna
Discone antenna
Folded unipole antenna
Franklin antenna
Ground-plane antenna
G5RV antenna
Halo antenna
Helical antenna
Inverted-F antenna
Inverted vee antenna
J-pole antenna
Mast radiator
Monopole antenna
Random wire antenna
Rubber ducky antenna
Sloper antenna
Turnstile antenna
T2FD antenna
Umbrella antenna
Whip antenna
Adcock antenna
AS-2259 Antenna
AWX antenna
Beverage antenna
Cassegrain antenna
Collinear antenna array
Conformal antenna
Corner reflector antenna
Curtain array
Folded inverted conformal antenna
Fractal antenna
Horn antenna
Log-periodic antenna
Loop antenna
Microstrip antenna
Moxon antenna
Offset dish antenna
Patch antenna
Phased array
Planar array
Parabolic antenna
Plasma antenna
Quad antenna
Reflective array antenna
Regenerative loop antenna
Rhombic antenna
Sector antenna
Short backfire antenna
Sterba antenna
Vivaldi antenna
Corner reflector (passive)
Evolved antenna
Ground dipole
Reconfigurable antenna
Reference antenna
Spiral antenna
|
Standard asteroid physical characteristics - Wikipedia
For most numbered asteroids, almost nothing is known apart from a few physical parameters and orbital elements. Some physical characteristics can only be estimated. The physical data is determined by making certain standard assumptions.
Data from the IRAS minor planet survey[1] or the Midcourse Space Experiment (MSX) minor planet survey[2] (available at the Planetary Data System Small Bodies Node (PDS)) is the usual source of the diameter.
For many asteroids, lightcurve analysis provides estimates of pole direction and diameter ratios. Pre-1995 estimates collected by Per Magnusson[3] are tabulated in the PDS,[4] with the most reliable data being the syntheses labeled in the data tables as "Synth". More recent determinations for several dozens of asteroids are collected at the web page of a Finnish research group in Helsinki which is running a systematic campaign to determine poles and shape models from lightcurves.[5]
These data can be used to obtain a better estimate of dimensions. A body's dimensions are usually given as a tri-axial ellipsoid, the axes of which are listed in decreasing order as a×b×c. If lightcurves provide the diameter ratios μ = a/b and ν = b/c, and IRAS provides a mean diameter d, one sets the geometric mean of the diameters
{\displaystyle d=(abc)^{\frac {1}{3}}\,\!}
for consistency, and obtains the three diameters:
{\displaystyle a=d\,(\mu ^{2}\nu )^{\frac {1}{3}}\,\!}
{\displaystyle b=d\,\left({\frac {\nu }{\mu }}\right)^{\frac {1}{3}}\,\!}
{\displaystyle c={\frac {d}{(\nu ^{2}\mu )^{\frac {1}{3}}}}\,\!}
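A short Python sketch of these three formulas (the input numbers are hypothetical; the check confirms that the geometric mean d and the ratios μ, ν are recovered exactly):

```python
def triaxial_diameters(d, mu, nu):
    """Recover a >= b >= c from the geometric-mean diameter d and the
    lightcurve ratios mu = a/b, nu = b/c, using the formulas above."""
    a = d * (mu ** 2 * nu) ** (1 / 3)
    b = d * (nu / mu) ** (1 / 3)
    c = d / (nu ** 2 * mu) ** (1 / 3)
    return a, b, c

a, b, c = triaxial_diameters(d=200.0, mu=1.2, nu=1.1)
print(round((a * b * c) ** (1 / 3), 6))  # 200.0 -- geometric mean is preserved
print(round(a / b, 6), round(b / c, 6))  # 1.2 1.1 -- ratios are preserved
```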
See also: Dynamic method
Barring detailed mass determinations,[6] the mass M can be estimated from the diameter and (assumed) density values ρ worked out as below.
{\displaystyle M={\frac {\pi abc\rho }{6}}\,\!}
Such estimates can be indicated as approximate by use of a tilde "~". Besides these "guesstimates", masses can be obtained for the larger asteroids by solving for the perturbations they cause in each other's orbits,[7] or when the asteroid has an orbiting companion of known orbital radius. The masses of the largest asteroids 1 Ceres, 2 Pallas, and 4 Vesta can also be obtained from perturbations of Mars.[8] While these perturbations are tiny, they can be accurately measured from radar ranging data from the Earth to spacecraft on the surface of Mars, such as the Viking landers.
Apart from a few asteroids whose densities have been investigated,[6] one has to resort to enlightened guesswork. See Carry[9] for a summary.
For many asteroids a value of ρ~2 g/cm3 has been assumed.
However, density depends on the asteroid's spectral type. Krasinsky et al. give calculated mean densities for C, S, and M class asteroids of 1.38, 2.71, and 5.32 g/cm3.[10] (Here "C" included Tholen classes C, D, P, T, B, G, and F, while "S" included Tholen classes S, K, Q, V, R, A, and E.) Assuming these class-dependent values (rather than a uniform ~2 g/cm3) gives a better estimate.
Surface gravity
Main article: Surface gravity
Spherical body
For a spherical body, the gravitational acceleration at the surface (g), is given by
{\displaystyle g_{\rm {spherical}}={\frac {GM}{r^{2}}}\,\!}
where G = 6.6742×10⁻¹¹ m³ s⁻² kg⁻¹ is the gravitational constant, M is the mass of the body, and r its radius.
Irregular body
For irregularly shaped bodies, the surface gravity will differ appreciably with location. The above formula then is only an approximation, as the calculations become more involved. The value of g at surface points closer to the center of mass is usually somewhat greater than at surface points farther out.
On a rotating body, the apparent weight experienced by an object on the surface is reduced by the centrifugal effect everywhere except at the poles. For an angle θ measured from the rotation pole (the colatitude), the centrifugal acceleration is
{\displaystyle g_{\rm {centrifugal}}=-\left({\frac {2\pi }{T}}\right)^{2}r\sin \theta }
where T is the rotation period in seconds and r is the equatorial radius. Its magnitude is maximized at the equator, where sin θ = 1. The negative sign indicates that it acts in the opposite direction to the gravitational acceleration g.
The effective acceleration is
{\displaystyle g_{\rm {effective}}=g_{\rm {gravitational}}+g_{\rm {centrifugal}}\ .}
Close binaries
If the body in question is a member of a close binary with components of comparable mass, the effect of the second body may also be non-negligible.
For surface gravity g and radius r of a spherically symmetric body, the escape velocity is:
{\displaystyle v_{e}={\sqrt {\frac {2GM}{r}}}}
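A minimal sketch of this formula (mass and radius are hypothetical, illustrative values), which also checks the equivalent form in terms of the surface gravity:

```python
import math

G = 6.6742e-11  # gravitational constant, m^3 kg^-1 s^-2
M = 1.0e18      # hypothetical mass, kg
r = 5.0e4       # hypothetical radius, m

v_esc = math.sqrt(2 * G * M / r)  # escape velocity, m/s

# Equivalent form using the surface gravity g = GM/r^2, so v_esc = sqrt(2 g r)
g = G * M / r**2
assert abs(v_esc - math.sqrt(2 * g * r)) < 1e-9
```

For a body this small the escape velocity is only about 52 m/s, which is why impact ejecta readily escapes small asteroids.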
Rotation period is usually taken from lightcurve parameters at the PDS.[11]
Spectral class[edit]
Spectral class is usually taken from the Tholen classification at the PDS.[12]
Absolute magnitude[edit]
Absolute magnitude is usually given by the IRAS minor planet survey[1] or the MSX minor planet survey[2] (available at the PDS).
Albedo[edit]
Astronomical albedos are usually given by the IRAS minor planet survey[1] or the MSX minor planet survey[2] (available at the PDS). These are geometric albedos. If there is no IRAS/MSX data a rough average of 0.1 can be used.
The simplest method which gives sensible results is to assume the asteroid behaves as a greybody in equilibrium with the incident solar radiation. Then, its mean temperature is obtained by equating the mean incident and radiated heat power. The total incident power is:
{\displaystyle R_{\mathrm {in} }={\frac {(1-A)L_{0}\pi r^{2}}{4\pi a^{2}}},}
where
{\displaystyle A\,\!}
is the asteroid albedo (precisely, the Bond albedo),
{\displaystyle a\,\!}
its semi-major axis,
{\displaystyle L_{0}\,\!}
is the solar luminosity (i.e. total power output 3.827×1026 W), and
{\displaystyle r}
the asteroid's radius. It has been assumed that the absorptivity is
{\displaystyle 1-A}
, the asteroid is spherical, it is on a circular orbit, and that the Sun's energy output is isotropic.
Using a greybody version of the Stefan–Boltzmann law, the radiated power (from the entire spherical surface of the asteroid) is:
{\displaystyle R_{\mathrm {out} }=4\pi r^{2}\epsilon \sigma T^{4},}
where
{\displaystyle \sigma \,\!}
is the Stefan–Boltzmann constant (5.6704×10−8 W/m2K4),
{\displaystyle T}
is the temperature in kelvins, and
{\displaystyle \epsilon \,\!}
is the asteroid's infra-red emissivity. Equating
{\displaystyle R_{\mathrm {in} }=R_{\mathrm {out} }}
and solving for the temperature gives
{\displaystyle T=\left({\frac {(1-A)L_{0}}{\epsilon \sigma 16\pi a^{2}}}\right)^{1/4}\,\!}
The standard value of
{\displaystyle \epsilon }
= 0.9, estimated from detailed observations of a few of the large asteroids, is used.
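Putting the pieces together (the orbit and albedo below are illustrative choices, not data for any specific asteroid):

```python
import math

L0    = 3.827e26   # solar luminosity, W
sigma = 5.6704e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
eps   = 0.9        # standard assumed infrared emissivity

def mean_temperature(A, a):
    """Greybody equilibrium temperature in kelvins; a is the semi-major
    axis in metres and A the (Bond) albedo."""
    return ((1 - A) * L0 / (eps * sigma * 16 * math.pi * a * a)) ** 0.25

au = 1.496e11                         # astronomical unit, m
T = mean_temperature(0.1, 2.77 * au)  # a main-belt orbit, assumed albedo 0.1
```

This evaluates to roughly 167 K, in line with typical main-belt surface temperatures.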
While this method gives a fairly good estimate of the average surface temperature, the local temperature varies greatly, as is typical for bodies without atmospheres.
A rough estimate of the maximum temperature can be obtained by assuming that when the Sun is overhead, the surface is in thermal equilibrium with the instantaneous solar radiation. This gives an average "sub-solar" temperature of
{\displaystyle T_{ss}={\sqrt {2}}\,T\approx 1.41\,T,}
where
{\displaystyle T}
is the average temperature calculated as above.
At perihelion, the radiation is maximised, and
{\displaystyle T_{ss}^{\rm {max}}={\sqrt {\frac {2}{1-e}}}\ T,}
where {\displaystyle e\,\!}
is the eccentricity of the orbit.
Temperature measurements and regular temperature variations[edit]
Infra-red observations are commonly combined with albedo to measure the temperature more directly. For example, L. F. Lim et al. [Icarus, Vol. 173, 385 (2005)] do this for 29 asteroids. These are measurements for a particular observing day, and the asteroid's surface temperature will change in a regular way depending on its distance from the Sun. From the Stefan–Boltzmann calculation above,
{\displaystyle T={\rm {constant}}\times {\frac {1}{\sqrt {d}}},}
where
{\displaystyle d\,\!}
is the distance from the Sun on any particular day. If the day of the relevant observations is known, the distance from the Sun on that day can be obtained online from e.g. the NASA orbit calculator,[13] and corresponding temperature estimates at perihelion, aphelion, etc. can be obtained from the expression above.
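The scaling can be applied directly: if a temperature has been measured at one heliocentric distance, the value at another distance follows from T ∝ 1/√d (the numbers below are purely illustrative):

```python
import math

def scaled_temperature(T1, d1, d2):
    """Temperature at distance d2 given temperature T1 at distance d1,
    using the greybody scaling T ∝ 1/sqrt(d); any common distance unit."""
    return T1 * math.sqrt(d1 / d2)

# Illustrative: 170 K observed at 2.8 AU, estimated at a 2.5 AU perihelion
T_peri = scaled_temperature(170.0, 2.8, 2.5)
```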
Albedo inaccuracy problem[edit]
There is a snag when using these expressions to estimate the temperature of a particular asteroid. The calculation requires the Bond albedo A (the proportion of total incoming power reflected, taking into account all directions), while the IRAS and MSX albedo data that is available for asteroids gives only the geometric albedo p which characterises only the strength of light reflected back to the source (the Sun).
While these two albedos are correlated, the numerical factor between them depends in a very nontrivial way on the surface properties. Actual measurements of Bond albedo are not forthcoming for most asteroids because they require measurements from high phase angles that can only be acquired by spacecraft that pass near or beyond the asteroid belt. Some complicated modelling of surface and thermal properties can lead to estimates of the Bond albedo given the geometric one, but this is far beyond the scope of a quick estimate for these articles. It can be obtained for some asteroids from scientific publications.
For want of a better alternative for most asteroids, the best that can be done here is to assume that these two albedos are equal, but keep in mind that there is an inherent inaccuracy in the resulting temperature values.
How large is this inaccuracy?
A glance at the examples in this table shows that for bodies in the asteroid albedo range, the typical difference between Bond and geometric albedo is 20% or less, with either quantity capable of being larger. Since the calculated temperature varies as (1-A)1/4, the dependence is fairly weak for typical asteroid A≈p values of 0.05−0.3.
The typical inaccuracy in calculated temperature from this source alone is then found to be about 2%. This translates to an uncertainty of about ±5 K for maximum temperatures.
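The stated ~2% figure can be checked directly from the (1−A)^(1/4) dependence:

```python
def temperature_error(A_true, A_assumed):
    """Fractional temperature error from using A_assumed instead of the
    true Bond albedo A_true, given T ∝ (1 - A)^(1/4)."""
    return abs(1 - ((1 - A_true) / (1 - A_assumed)) ** 0.25)

# A Bond albedo 20% above an assumed geometric albedo of 0.3 (near the
# top of the typical asteroid range):
err = temperature_error(0.36, 0.30)  # about 0.022, i.e. ~2%
```

At the low end of the albedo range the error is smaller still, so ~2% is indeed the typical worst case from this source.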
Other common data[edit]
Some other information for large numbers of asteroids can be found at the Planetary Data System Small Bodies Node.[14] Up-to-date information on pole orientation of several dozen asteroids is provided by Doc. Mikko Kaasalainen,[5] and can be used to determine axial tilt.
Another source of useful information is NASA's orbit calculator.[13]
^ a b c "IRAS Minor Planet Survey Supplemental IRAS Minor Planet Survey". PDS Asteroid/Dust Archive. Archived from the original on 2006-09-02. Retrieved 2006-10-21.
^ a b c "Midcourse Space Experiment (MSX) Infrared Minor Planet Survey". PDS Asteroid/Dust Archive. Archived from the original on 2006-09-02. Retrieved 2006-10-21.
^ Magnusson, Per (1989). "Pole determinations of asteroids". In Richard P. Binzel; Tom Gehrels; Mildred S. Matthews (eds.). Asteroids II. Tucson: University of Arizona Press. pp. 1180–1190.
^ "Asteroid Spin Vectors". Archived from the original on 2006-09-02. Retrieved 2006-10-21.
^ a b Modeled asteroids. rni.helsinki.fi. 2006-06-18.
^ a b For example "Asteroid Densities Compilation". PDS Asteroid/Dust Archive. Archived from the original on 2006-09-02. Retrieved 2006-10-21.
^ Hilton, James L. (November 30, 1999). "Masses of the Largest Asteroids". Archived from the original on February 12, 2009. Retrieved 2009-09-05.
^ Pitjeva, E. V. (2004). Estimations of masses of the largest asteroids and the main asteroid belt from ranging to planets, Mars orbiters and landers. 35th COSPAR Scientific Assembly. Held 18–25 July 2004. Paris, France. p. 2014. Bibcode:2004cosp...35.2014P.
^ Benoit Carry, Density of asteroids, Planetary & Space Science to be published, accessed Dec. 20, 2013
^ "Asteroid Lightcurve Parameters". PDS Asteroid/Dust Archive. Archived from the original on 2006-09-02. Retrieved 2006-10-21.
^ a b "Orbit Diagrams". NASA. Archived from the original on 2000-08-17. Retrieved 2006-06-18.
^ "Asteroid Data Sets". PDS Asteroid/Dust Archive. Archived from the original on 2006-09-28. Retrieved 2006-10-21.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Standard_asteroid_physical_characteristics&oldid=1072095086"
|
Glycerate kinase - Wikipedia
Glyc_kinase
PDB: 1to6
In enzymology, a glycerate kinase (EC 2.7.1.31) is an enzyme that catalyzes the chemical reaction
ATP + (R)-glycerate
{\displaystyle \rightleftharpoons }
ADP + 3-phospho-(R)-glycerate
Thus, the two substrates of this enzyme are ATP and (R)-glycerate, whereas its two products are ADP and either 3-phospho-(R)-glycerate or 2-phospho-(R)-glycerate.[1]
This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing groups (phosphotransferases) with an alcohol group as acceptor. The systematic name of this enzyme class is ATP:(R)-glycerate 3-phosphotransferase. Other names in common use include glycerate kinase (phosphorylating), D-glycerate 3-kinase, D-glycerate kinase, glycerate-3-kinase, GK, D-glyceric acid kinase, and ATP:D-glycerate 2-phosphotransferase. This enzyme participates in 3 metabolic pathways: serine/glycine/threonine metabolism, glycerolipid metabolism, and glyoxylate-dicarboxylate metabolism.
This enzyme had been thought to produce 3-phosphoglycerate, but some glycerate kinases produce 2-phosphoglycerate instead.[1]
As of late 2007, 3 structures have been solved for this class of enzymes, with PDB accession codes 1TO6, 1X3L, and 2B8N.
^ a b Bartsch, Oliver; Hagemann, Martin; Bauwe, Hermann (2008-09-03). "Only plant-type (GLYK) glycerate kinases produce d-glycerate 3-phosphate". FEBS Letters. 582 (20): 3025–3028. doi:10.1016/j.febslet.2008.07.038. ISSN 0014-5793. PMID 18675808. S2CID 28262946.
Doughty CC, Hayashi JA, Guenther HL (1966). "Purification and properties of D-glycerate 3-kinase from Escherichia coli". J. Biol. Chem. 241 (3): 568–72. PMID 5325263.
ICHIHARA A, GREENBERG DM (1957). "Studies on the purification and properties of D-glyceric acid kinase of liver". J. Biol. Chem. 225 (2): 949–58. PMID 13416296.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Glycerate_kinase&oldid=997975445"
|
Balance bounding box labels for object detection - MATLAB balanceBoxLabels - MathWorks Benelux
Balance bounding box labels for object detection
locationSet = balanceBoxLabels(boxLabels,blockedImages,blockSize,numObservations)
locationSet = balanceBoxLabels(boxLabels,blockedImages,blockSize,numObservations,Name,Value)
locationSet = balanceBoxLabels(boxLabels,blockedImages,blockSize,numObservations) balances bounding box labels, boxLabels, by oversampling blocks of images containing less frequent classes, contained in the collection of blocked image objects blockedImages. numObservations is the required number of block locations, and blockSize specifies the block size.
locationSet = balanceBoxLabels(boxLabels,blockedImages,blockSize,numObservations,Name,Value) specifies additional aspects of the selected blocks using name-value arguments.
Load box labels data that contains boxes and labels for one image. The height and width of each box are [20,20].
d = load('balanceBoxLabelsData.mat');
boxLabels = d.BoxLabels;
Create a blocked image of size [500,500].
blockedImages = blockedImage(zeros([500,500]));
Choose the image size of each observation.
blockSize = [50,50];
Visualize using a histogram to identify any class imbalance in the box labels.
blds = boxLabelDatastore(boxLabels);
datasetCount = countEachLabel(blds);
h1 = histogram('Categories',datasetCount.Label,'BinCounts',datasetCount.Count)
Values: [1 1 1 1 1 1 1 1 1 1 1 11]
NumDisplayBins: 12
Categories: {1x12 cell}
Measure the distribution of box labels. If the coefficient of variation is more than 1, then there is class imbalance.
cvBefore = std(datasetCount.Count)/mean(datasetCount.Count)
cvBefore = 1.5746
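To see where 1.5746 comes from, the same statistic can be reproduced outside MATLAB (a Python sketch; the counts mirror the example's histogram of 11 singleton classes plus one class with 11 boxes):

```python
import statistics

counts = [1] * 11 + [11]  # per-class box counts from the example histogram

# MATLAB's std normalizes by N-1, matching statistics.stdev
cv = statistics.stdev(counts) / statistics.mean(counts)  # ≈ 1.5746
```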
Choose a heuristic value for number of observations by finding the mean of the counts of each class, multiplied by the number of classes.
numClasses = height(datasetCount);
numObservations = mean(datasetCount.Count) * numClasses;
Control the amount a box can be cut using OverlapThreshold. Using a lower threshold value will cut objects more at the border of a block. Increase this value to reduce the amount an object can be clipped at the border, at the expense of less-balanced box labels.
ThresholdValue = 0.5;
Balance boxLabels using the balanceBoxLabels function.
locationSet = balanceBoxLabels(boxLabels,blockedImages,blockSize,...
numObservations,'OverlapThreshold',ThresholdValue);
[==================================================] 100%
Balancing box labels complete.
Count the labels that are contained within the image blocks.
bldsBalanced = boxLabelDatastore(boxLabels,locationSet);
balancedDatasetCount = countEachLabel(bldsBalanced);
Overlay another histogram against the original label count to see if the box labels are balanced. If the labels appear to be not balanced by looking at the histograms, increase the value for numObservations.
balancedLabels = balancedDatasetCount.Label;
balancedCount = balancedDatasetCount.Count;
h2 = histogram('Categories',balancedLabels,'BinCounts',balancedCount);
title(h2.Parent,"Balanced class labels (OverlapThreshold: " + ThresholdValue + ")" );
legend(h2.Parent,{'Before','After'});
Measure the distribution of the new balanced box labels.
cvAfter = std(balancedCount)/mean(balancedCount)
cvAfter = 0.4588
boxLabels — Labeled bounding box data
Labeled bounding box data, specified as a table with two columns.
The first column contains the bounding boxes and must be a cell vector. Each element in the cell vector is an M-by-4 matrix in the format [x, y, width, height], defining M boxes.
The second column must be a cell vector that contains the label names corresponding to each bounding box. Each element in the cell vector must be an M-by-1 categorical or string vector.
To create a box label table from ground truth data,
Use the Image Labeler or Video Labeler app to label your ground truth. Export the labeled ground truth data to your workspace.
Create a bounding box label datastore using the objectDetectorTrainingData function.
You can obtain boxLabels from the LabelData property of the box label datastore returned by objectDetectorTrainingData (blds.LabelData).
Labeled blocked images, specified as an array of blockedImage objects containing pixel label images.
Example: 'OverlapThreshold',1
Levels — Resolution level of each image
1 (default) | positive integer scalar | B-by-1 vector of positive integers
Resolution level of each image in the array of blockedImage objects, specified as a positive integer scalar or a B-by-1 vector of positive integers, where B is the length of the array of blockedImage objects.
Overlap threshold, specified as a positive scalar in the range [0,1]. When the overlap between a bounding box and a cropping window is greater than the threshold, boxes in the boxLabels input are clipped to the image block window border. When the overlap is less than the threshold, the boxes are discarded. When you lower the threshold, part of an object can get discarded. To reduce the amount an object can be clipped at the border, increase the threshold. Increasing the threshold can also cause less-balanced box labels.
The amount of overlap between the bounding box and a cropping window is defined as:
area\left(bboxA\cap window\right)/area\left(bboxA\right)
Display progress information, specified as a numeric or logical 1 (true) or 0 (false). Set this property to true to display information.
locationSet — Balanced box labels
Balanced box labels, returned as a blockLocationSet object. The object contains numObservations number of locations of balanced blocks, each of size blockSize.
To balance box labels, the function oversamples classes that are less represented in the blocked image or big image. The box labels are counted across the dataset and sorted based on each class count. Each image is split into several quadrants, based on the blockSize input value. The algorithm randomly picks several blocks within each quadrant with less-represented classes. Blocks without any objects are discarded. The balancing stops once the specified number of blocks has been selected.
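The selection strategy can be sketched as follows (a simplified toy model, not the actual balanceBoxLabels implementation: quadrant splitting is omitted and the block weighting is a plain inverse class count):

```python
import random

def pick_balanced_blocks(block_labels, num_observations, seed=0):
    """block_labels maps a block id to the list of class labels it contains.
    Blocks holding rarer classes are sampled more often; empty blocks are
    discarded, mirroring the algorithm described above."""
    rng = random.Random(seed)
    counts = {}
    for labels in block_labels.values():
        for lab in labels:
            counts[lab] = counts.get(lab, 0) + 1
    blocks = [b for b, labs in block_labels.items() if labs]  # drop empty blocks
    # Weight each block by its rarest class, so rare classes are oversampled
    weights = [max(1.0 / counts[lab] for lab in block_labels[b]) for b in blocks]
    return rng.choices(blocks, weights=weights, k=num_observations)
```

With nine blocks of a common class and one block of a rare class, roughly half the sampled blocks end up containing the rare class.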
You can check the success of balancing by comparing the histograms of label counts before and after balancing. You can also check the coefficient of variation value; for best results, it should be less than the original value. For more information, see Coefficient of Variation on the National Institute of Standards and Technology (NIST) website.
Replace bigimage object input with blockedImage object input for the second argument of this function.
This example selects blocks at resolution level 1 from a bigimage object.
boxLabels = load('balanceBoxLabelsData.mat').BoxLabels;
bim = bigimage(zeros([500,500]));
locationSet = balanceBoxLabels(boxLabels,bim,1, ...
bim = blockedImage(zeros([500,500]));
locationSet = balanceBoxLabels(boxLabels,bim, ...
blockLocationSet | blockedImage | blockedImageDatastore | boxLabelDatastore
|
From W. E. Darwin [19 February 1871]1
I recd “Man” this morning and am very glad to receive him.2 I shall very soon gobble him up.
It is a very neatly got up book, and I like the plain white edges. Please keep all reviews and letters about it till I have seen them. The Reviews will be fine fun no doubt.3
Please ask Mother to send me a ½d. saying whether there would be a bed for me at Uncle Ras’ on Saturday & Sunday next, as I perhaps may manage to come up.4
I am going on all right but slowly, it will be some weeks till I may walk fairly. I am very glad to hear Hen. is better & finds Tunbridge Wells cheerful,5 we are just beginning the School Board scrummage. I expect I shall have to stand, one cannot decline, but it will be a trimendous undertaking if one is elected.6
I enclose rather a funny letter from Sanford about pouting which may be burnt.7
The date is established by the reference to William’s copy of Descent; the publisher, John Murray, distributed CD’s presentation copies on or around 18 February 1871 (see Correspondence vol. 19, letter from R. F. Cooke, 15 February 1871, and letter from David Forbes, 18 February 1871). The first Sunday after 15 February 1871 was 19 February.
William’s name appears on the presentation list for Descent (Correspondence vol. 19, Appendix IV).
CD kept a scrapbook of reviews (DAR 226.2); a list of reviews of Descent appears in Correspondence vol. 19, Appendix V.
Erasmus Alvey Darwin lived at 6 Queen Anne Street, London. CD and Emma stayed at Erasmus’s house from 23 February to 2 March 1871 (Correspondence vol. 19, Appendix II).
On William’s injury, see the letter from W. E. Darwin, 6 February 1871. Henrietta Emma Darwin had an attack of measles in January (Emma Darwin’s diary (DAR 242)).
Following the Elementary Education Act of 1870, local boards were elected to oversee schools that received state aid; in some cases, the boards also procured land and materials for the building of new schools, appointed teachers, and advised on curricula (Stephens 1998).
The letter from George Edward Langham Somerset Sanford has not been found. He was an officer in the Royal Engineers residing in Fawley, Hampshire, with children who were aged two and three in 1871 (Census returns of England and Wales 1871 (The National Archives: Public Record Office (RG10/1185/9/10)). William had communicated observations from Sanford of a crying infant in 1868 (see Correspondence vol. 16, letter from W. E. Darwin to Emma Darwin, 28 February [1868], and letter from W. E. Darwin, [7 April 1868]).
Thanks CD for copy of Descent. Is considering running for School Board.
|
CoKeStTkTa-p - DispersiveWiki
J. Colliander, M. Keel, G. Staffilani, H. Takaoka, T. Tao. Global well-posedness and scattering in the energy space for the critical nonlinear Schrödinger equation in
{\displaystyle {\mathbb {R} }^{3}}
. Annals of Mathematics, to appear (2004), 1-85. MathSciNet, arXiv.
Retrieved from "https://dispersivewiki.org/DispersiveWiki/index.php?title=CoKeStTkTa-p&oldid=6003"
|
Molecular Diversity Analysis of Some Chilli (Capsicum spp.) Genotypes Using SSR Markers
1Department of Biotechnology, Sher-e-Bangla Agricultural University, Dhaka, Bangladesh
2Regional Spices Research Centre, Bangladesh Agricultural Research Institute (BARI), Dhaka, Bangladesh
PIC=1-\sum _{j=1}^{n}{\left({P}_{ij}\right)}^{2}
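The polymorphism information content (PIC) formula above, with P_ij the frequency of the j-th allele at the i-th marker, can be evaluated as (an illustrative sketch, not code from the paper):

```python
def pic(allele_freqs):
    """PIC = 1 - sum of squared allele frequencies for one marker locus."""
    assert abs(sum(allele_freqs) - 1.0) < 1e-9, "frequencies must sum to 1"
    return 1.0 - sum(p * p for p in allele_freqs)

# Two equally frequent alleles give the maximum biallelic value, 0.5:
value = pic([0.5, 0.5])
```

A monomorphic locus (one allele at frequency 1) gives PIC = 0, the uninformative extreme.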
Sharmin, A., Hoque, M.E., Haque, M.M. and Khatun, F. (2018) Molecular Diversity Analysis of Some Chilli (Capsicum spp.) Genotypes Using SSR Markers. American Journal of Plant Sciences, 9, 368-379. https://doi.org/10.4236/ajps.2018.93029
|
For each equation below, make tables that include x-values from
−2\ \text{to}\ 2
and draw each graph.
y = ( x - 1 ) ^ { 2 } ( x + 1 )
\left. \begin{array} { | c | c | } \hline x & { y } \\ \hline - 2 & { - 9 } \\ \hline - 1 & { 0 } \\ \hline 0 & { 1 } \\ \hline 1 & { 0 } \\ \hline 2 & { 3 } \\ \hline \end{array} \right.
y = ( x - 1 ) ^ { 2 } ( x + 1 ) ^ { 2 }
y = x ^ { 3 } - 4 x
Use the same method as in parts (a) and (b).
What are the parent functions for these equations?
What is the highest-valued exponent?
y = \left(x − 1\right)^{2} \left(x + 1\right)
y = \left(x^{2} − 2x + 1\right) \left(x + 1\right)
y = x^{3} − x^{2} − x + 1
The parent function is
y = x^{3}
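The tables for all three parts can be generated programmatically (a quick check, not part of the exercise itself):

```python
def table(f, xs=(-2, -1, 0, 1, 2)):
    """Evaluate f at each x, returning (x, y) pairs for the table."""
    return [(x, f(x)) for x in xs]

a = table(lambda x: (x - 1)**2 * (x + 1))       # part (a)
b = table(lambda x: (x - 1)**2 * (x + 1)**2)    # part (b)
c = table(lambda x: x**3 - 4*x)                 # part (c)
```

The values in `a` reproduce the table shown for part (a).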
|
Insertion_sort Knowpia
Simple implementation: Jon Bentley shows a three-line C++ version, and a five-line optimized version[1]
Adaptive, i.e., efficient for data sets that are already substantially sorted: the time complexity is O(kn) when each element in the input is no more than k places away from its sorted position
Animation of insertion sort
Worst-case performance: {\displaystyle O(n^{2})} comparisons and swaps
Best-case performance: {\displaystyle O(n)} comparisons, {\displaystyle O(1)} swaps
Average performance: {\displaystyle O(n^{2})} comparisons and swaps
Worst-case space complexity: {\displaystyle O(n)} total, {\displaystyle O(1)} auxiliary
When people manually sort cards in a bridge hand, most use a method that is similar to insertion sort.[2]
A graphical example of insertion sort. The partial sorted list (black) initially contains only the first element in the list. With each iteration one element (red) is removed from the "not yet checked for order" input data and inserted in-place into the sorted list.
Insertion sort iterates, consuming one input element each repetition, and grows a sorted output list. At each iteration, insertion sort removes one element from the input data, finds the location it belongs within the sorted list, and inserts it there. It repeats until no input elements remain.
The resulting array after k iterations has the property where the first k + 1 entries are sorted ("+1" because the first entry is skipped). In each iteration the first remaining entry of the input is removed and inserted into the result at the correct position, thus extending the result; each sorted element greater than the inserted element x is copied one place to the right as it is compared against x.
The most common variant of insertion sort, which operates on arrays, can be described as follows:
Suppose there exists a function called Insert designed to insert a value into a sorted sequence at the beginning of an array. It operates by beginning at the end of the sequence and shifting each element one place to the right until a suitable position is found for the new element. The function has the side effect of overwriting the value stored immediately after the sorted sequence in the array.
To perform an insertion sort, begin at the left-most element of the array and invoke Insert to insert each element encountered into its correct position. The ordered sequence into which the element is inserted is stored at the beginning of the array in the set of indices already examined. Each insertion overwrites a single value: the value being inserted.
Pseudocode of the complete algorithm follows, where the arrays are zero-based:[1]
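In the zero-based, swap-based form the following discussion analyses, the algorithm is:

```
i ← 1
while i < length(A)
    j ← i
    while j > 0 and A[j-1] > A[j]
        swap A[j] and A[j-1]
        j ← j - 1
    i ← i + 1
```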
The outer loop runs over all the elements except the first one, because the single-element prefix A[0:1] is trivially sorted, so the invariant that the first i entries are sorted is true from the start. The inner loop moves element A[i] to its correct place so that after the loop, the first i+1 elements are sorted. Note that the and-operator in the test must use short-circuit evaluation, otherwise the test might result in an array bounds error, when j=0 and it tries to evaluate A[j-1] > A[j] (i.e. accessing A[-1] fails).
After expanding the swap operation in-place as x ← A[j]; A[j] ← A[j-1]; A[j-1] ← x (where x is a temporary variable), a slightly faster version can be produced that moves A[i] to its position in one go and only performs one assignment in the inner loop body:[1]
i ← 1
while i < length(A)
    x ← A[i]
    j ← i - 1
    while j ≥ 0 and A[j] > x
        A[j+1] ← A[j]
        j ← j - 1
    A[j+1] ← x[3]
    i ← i + 1
The new inner loop shifts elements to the right to clear a spot for x = A[i].
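A minimal Python rendering of this shift-based variant (a sketch, not tied to any particular library):

```python
def insertion_sort(a):
    """In-place insertion sort using the one-assignment inner loop:
    elements greater than x are shifted right, then x is dropped in."""
    for i in range(1, len(a)):
        x = a[i]
        j = i - 1
        # Short-circuit 'and' prevents reading a[j] when j < 0
        while j >= 0 and a[j] > x:
            a[j + 1] = a[j]  # shift right to clear a spot for x
            j -= 1
        a[j + 1] = x
    return a
```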
The algorithm can also be implemented in a recursive way. The recursion just replaces the outer loop, calling itself and storing successively smaller values of n on the stack until n equals 0, where the function then returns up the call chain to execute the code after each recursive call starting with n equal to 1, with n increasing by 1 as each instance of the function returns to the prior instance. The initial call would be insertionSortR(A, length(A)-1).
function insertionSortR(array A, int n)
    if n > 0
        insertionSortR(A, n-1)
        x ← A[n]
        j ← n - 1
        while j ≥ 0 and A[j] > x
            A[j+1] ← A[j]
            j ← j - 1
        A[j+1] ← x
It does not make the code any shorter, nor does it reduce the execution time, but it increases the additional memory consumption from O(1) to O(N) (at the deepest level of recursion the stack contains N references to the A array, each with an accompanying value of the variable n from N down to 1).
Best, worst, and average casesEdit
The simplest worst case input is an array sorted in reverse order. The set of all worst case inputs consists of all arrays where each element is the smallest or second-smallest of the elements before it. In these cases every iteration of the inner loop will scan and shift the entire sorted subsection of the array before inserting the next element. This gives insertion sort a quadratic running time (i.e., O(n2)).
The average case is also quadratic,[4] which makes insertion sort impractical for sorting large arrays. However, insertion sort is one of the fastest algorithms for sorting very small arrays, even faster than quicksort; indeed, good quicksort implementations use insertion sort for arrays smaller than a certain threshold, including the subarrays that arise as quicksort subproblems. The exact threshold must be determined experimentally and depends on the machine, but is commonly around ten.
Example: The following table shows the steps for sorting the sequence {3, 7, 4, 9, 5, 2, 6, 1}. In each step, the key under consideration is underlined. The key that was moved (or left in place because it was the biggest yet considered) in the previous step is marked with an asterisk.
3* 7 4 9 5 2 6 1
3 7* 4 9 5 2 6 1
3 4* 7 9 5 2 6 1
3 4 7 9* 5 2 6 1
3 4 5* 7 9 2 6 1
2* 3 4 5 7 9 6 1
2 3 4 5 6* 7 9 1
1* 2 3 4 5 6 7 9
Relation to other sorting algorithmsEdit
Insertion sort is very similar to selection sort. As in selection sort, after k passes through the array, the first k elements are in sorted order. However, the fundamental difference between the two algorithms is that insertion sort scans backwards from the current key, while selection sort scans forwards. This results in selection sort making the first k elements the k smallest elements of the unsorted input, while in insertion sort they are simply the first k elements of the input.
The primary advantage of insertion sort over selection sort is that selection sort must always scan all remaining elements to find the absolute smallest element in the unsorted portion of the list, while insertion sort requires only a single comparison when the (k + 1)-st element is greater than the k-th element; when this is frequently true (such as if the input array is already sorted or partially sorted), insertion sort is distinctly more efficient compared to selection sort. On average (assuming the rank of the (k + 1)-st element is random), insertion sort will require comparing and shifting half of the previous k elements, meaning that insertion sort will perform about half as many comparisons as selection sort on average.
In the worst case for insertion sort (when the input array is reverse-sorted), insertion sort performs just as many comparisons as selection sort. However, a disadvantage of insertion sort over selection sort is that it requires more writes due to the fact that, on each iteration, inserting the (k + 1)-st element into the sorted portion of the array requires many element swaps to shift all of the following elements, while only a single swap is required for each iteration of selection sort. In general, insertion sort will write to the array O(n2) times, whereas selection sort will write only O(n) times. For this reason selection sort may be preferable in cases where writing to memory is significantly more expensive than reading, such as with EEPROM or flash memory.
While some divide-and-conquer algorithms such as quicksort and mergesort outperform insertion sort for larger arrays, non-recursive sorting algorithms such as insertion sort or selection sort are generally faster for very small arrays (the exact size varies by environment and implementation, but is typically between 7 and 50 elements). Therefore, a useful optimization in the implementation of those algorithms is a hybrid approach, using the simpler algorithm when the array has been divided to a small size.[1]
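As an illustration of that hybrid approach, a quicksort can hand small subarrays to insertion sort. In the sketch below, the cutoff value of 16 and the middle-element pivot choice are illustrative assumptions, not prescriptions from the text:

```c
#define CUTOFF 16  // illustrative threshold; tune experimentally per machine

// Sort the small range A[lo..hi] (inclusive) by insertion sort.
static void insertion_sort_range(int A[], int lo, int hi)
{
    for (int i = lo + 1; i <= hi; i++) {
        int x = A[i], j = i - 1;
        while (j >= lo && A[j] > x) { A[j + 1] = A[j]; j--; }
        A[j + 1] = x;
    }
}

// Quicksort on A[lo..hi] that falls back to insertion sort
// once a subarray is at most CUTOFF elements long.
void hybrid_quicksort(int A[], int lo, int hi)
{
    if (hi - lo + 1 <= CUTOFF) {            // small subarray: simpler algorithm wins
        insertion_sort_range(A, lo, hi);
        return;
    }
    int pivot = A[lo + (hi - lo) / 2];      // simple middle-element pivot
    int i = lo, j = hi;
    while (i <= j) {                        // Hoare-style partition
        while (A[i] < pivot) i++;
        while (A[j] > pivot) j--;
        if (i <= j) { int t = A[i]; A[i] = A[j]; A[j] = t; i++; j--; }
    }
    if (lo < j) hybrid_quicksort(A, lo, j);
    if (i < hi) hybrid_quicksort(A, i, hi);
}
```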
D.L. Shell made substantial improvements to the algorithm; the modified version is called Shell sort. The sorting algorithm compares elements separated by a distance that decreases on each pass. Shell sort has distinctly improved running times in practical work, with two simple variants requiring O(n^{3/2}) and O(n^{4/3}) running time.[5][6]
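A minimal Shellsort sketch in C, using Shell's original gap sequence n/2, n/4, ..., 1 (an illustrative assumption; the variants with the running-time bounds quoted above use other gap sequences):

```c
// Shellsort: gapped insertion sort with a shrinking gap.
// Each pass insertion-sorts the gap-separated subsequences.
void shell_sort(int A[], int n)
{
    for (int gap = n / 2; gap > 0; gap /= 2) {
        for (int i = gap; i < n; i++) {
            int x = A[i], j = i;
            while (j >= gap && A[j - gap] > x) {
                A[j] = A[j - gap];      // shift within the gapped subsequence
                j -= gap;
            }
            A[j] = x;
        }
    }
}
```

The final pass with gap 1 is plain insertion sort, but by then the array is nearly sorted, which is exactly the case where insertion sort runs fast.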
If the cost of comparisons exceeds the cost of swaps, as is the case for example with string keys stored by reference or with human interaction (such as choosing one of a pair displayed side-by-side), then using binary insertion sort may yield better performance.[7] Binary insertion sort employs a binary search to determine the correct location to insert new elements, and therefore performs ⌈log₂ n⌉ comparisons in the worst case. When each element in the array is searched for and inserted this is O(n log n).[7] The algorithm as a whole still has a running time of O(n²) on average because of the series of swaps required for each insertion.[7]
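A sketch of binary insertion sort for integer keys (illustrative; with expensive comparisons, as discussed above, the key type and comparison would differ):

```c
// Binary insertion sort: locate the insertion point for each key
// with binary search (ceil(log2 i) comparisons), then shift and insert.
// The shifts still cost O(i) per insertion, so total time is O(n^2).
void binary_insertion_sort(int A[], int n)
{
    for (int i = 1; i < n; i++) {
        int x = A[i];
        int lo = 0, hi = i;                 // search within sorted A[0..i-1]
        while (lo < hi) {
            int mid = lo + (hi - lo) / 2;
            if (A[mid] <= x) lo = mid + 1;  // "<=" keeps the sort stable
            else             hi = mid;
        }
        for (int j = i; j > lo; j--)        // shift to open a slot
            A[j] = A[j - 1];
        A[lo] = x;
    }
}
```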
The number of swaps can be reduced by calculating the position of multiple elements before moving them. For example, if the target position of two elements is calculated before they are moved into the proper position, the number of swaps can be reduced by about 25% for random data. In the extreme case, this variant works similar to merge sort.
A variant named binary merge sort uses a binary insertion sort to sort groups of 32 elements, followed by a final sort using merge sort. It combines the speed of insertion sort on small data sets with the speed of merge sort on large data sets.[8]
To avoid having to make a series of swaps for each insertion, the input could be stored in a linked list, which allows elements to be spliced into or out of the list in constant time when the position in the list is known. However, searching a linked list requires sequentially following the links to the desired position: a linked list does not have random access, so it cannot use a faster method such as binary search. Therefore, the running time required for searching is O(n), and the time for sorting is O(n2). If a more sophisticated data structure (e.g., heap or binary tree) is used, the time required for searching and insertion can be reduced significantly; this is the essence of heap sort and binary tree sort.
In 2006 Bender, Martin Farach-Colton, and Mosteiro published a new variant of insertion sort called library sort or gapped insertion sort that leaves a small number of unused spaces (i.e., "gaps") spread throughout the array. The benefit is that insertions need only shift elements over until a gap is reached. The authors show that this sorting algorithm runs with high probability in O(n log n) time.[9]
If a skip list is used, the insertion time is brought down to O(log n), and swaps are not needed because the skip list is implemented on a linked list structure. The final running time for insertion would be O(n log n).
List insertion sort is a variant of insertion sort. It reduces the number of movements.[citation needed]
List insertion sort code in CEdit
If the items are stored in a linked list, then the list can be sorted with O(1) additional space. The algorithm starts with an initially empty (and therefore trivially sorted) list. The input items are taken off the list one at a time, and then inserted in the proper place in the sorted list. When the input list is empty, the sorted list has the desired result.
struct LIST * SortList1(struct LIST * pList)
{
    // zero or one element in list
    if (pList == NULL || pList->pNext == NULL)
        return pList;
    // head is the first element of resulting sorted list
    struct LIST * head = NULL;
    while (pList != NULL) {
        struct LIST * current = pList;
        pList = pList->pNext;
        if (head == NULL || current->iValue < head->iValue) {
            // insert into the head of the sorted list
            // or as the first element into an empty sorted list
            current->pNext = head;
            head = current;
        } else {
            // insert current element into proper position in non-empty sorted list
            struct LIST * p = head;
            while (p != NULL) {
                if (p->pNext == NULL || // last element of the sorted list
                    current->iValue < p->pNext->iValue) { // middle of the list
                    // insert into middle of the sorted list or as the last element
                    current->pNext = p->pNext;
                    p->pNext = current;
                    break;
                }
                p = p->pNext;
            }
        }
    }
    return head;
}
The algorithm below uses a trailing pointer[10] for the insertion into the sorted list. A simpler recursive method rebuilds the list each time (rather than splicing) and can use O(n) stack space.
struct LIST
{
    struct LIST * pNext;
    int iValue;
};

struct LIST * SortList(struct LIST * pList)
{
    // zero or one element in list
    if (!pList || !pList->pNext)
        return pList;

    /* build up the sorted list from the empty list */
    struct LIST * pSorted = NULL;

    /* take items off the input list one by one until empty */
    while (pList != NULL) {
        /* remember the head */
        struct LIST * pHead = pList;
        /* trailing pointer for efficient splice */
        struct LIST ** ppTrail = &pSorted;

        /* pop head off list */
        pList = pList->pNext;

        /* splice head into sorted list at proper place */
        while (!(*ppTrail == NULL || pHead->iValue < (*ppTrail)->iValue)) { /* does head belong here? */
            /* no - continue down the list */
            ppTrail = &(*ppTrail)->pNext;
        }

        pHead->pNext = *ppTrail;
        *ppTrail = pHead;
    }

    return pSorted;
}
^ a b c d Bentley, Jon (2000). "Column 11: Sorting". Programming Pearls (2nd ed.). ACM Press / Addison-Wesley. pp. 115–116. ISBN 978-0-201-65788-3. OCLC 1047840657.
^ Sedgewick, Robert (1983), Algorithms, Addison-Wesley, pp. 95ff, ISBN 978-0-201-06672-2 .
^ Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2009) [1990], "Section 2.1: Insertion sort", Introduction to Algorithms (3rd ed.), MIT Press and McGraw-Hill, pp. 16–18, ISBN 0-262-03384-4 . See in particular p. 18.
^ Schwarz, Keith. "Why is insertion sort Θ(n^2) in the average case? (answer by "templatetypedef")". Stack Overflow.
^ Frank, R. M.; Lazarus, R. B. (1960). "A High-Speed Sorting Procedure". Communications of the ACM. 3 (1): 20–22. doi:10.1145/366947.366957.
^ Sedgewick, Robert (1986). "A New Upper Bound for Shellsort". Journal of Algorithms. 7 (2): 159–173. doi:10.1016/0196-6774(86)90001-5.
^ a b c Samanta, Debasis (2008). Classic Data Structures. PHI Learning. p. 549. ISBN 9788120337312.
^ "Binary Merge Sort"
^ Bender, Michael A.; Farach-Colton, Martín; Mosteiro, Miguel A. (2006), "Insertion sort is O(n log n)", Theory of Computing Systems, 39 (3): 391–397, arXiv:cs/0407003, doi:10.1007/s00224-005-1237-z, MR 2218409
^ Hill, Curt (ed.), "Trailing Pointer Technique", Euler, Valley City State University, retrieved 22 September 2012 .
Knuth, Donald (1998), "5.2.1: Sorting by Insertion", The Art of Computer Programming, vol. 3. Sorting and Searching (second ed.), Addison-Wesley, pp. 80–105, ISBN 0-201-89685-0 .
The Wikibook Algorithm implementation has a page on the topic of: Insertion sort
Wikimedia Commons has media related to Insertion sort.
Animated Sorting Algorithms: Insertion Sort at the Wayback Machine (archived 8 March 2015) – graphical demonstration
Adamovsky, John Paul, Binary Insertion Sort – Scoreboard – Complete Investigation and C Implementation, Pathcom .
Insertion Sort – a comparison with other O(n2) sorting algorithms, UK: Core war .
Insertion sort (C) (wiki), LiteratePrograms – implementations of insertion sort in C and several other programming languages
|
Rotational coupling between two driveline shafts - MATLAB - MathWorks Nordic
Maximum joint angle
Joint compliance
Initial base shaft angle
Initial torque from base to follower shaft
Rotational coupling between two driveline shafts
The Universal Joint block represents a rotational coupling between two driveline shafts. The coupling transfers torque between the shafts so they spin as a unit under an applied load. Two rotational degrees of freedom, internal to the coupling, allow the shafts to connect at an angle. This intersection angle varies according to the physical signal input from port A. Optional compliance, modeled as a parallel spring-damper set, allows the coupling to deform under load.
You can use the Universal Joint block as a connection between two rotational driveline components—for example, between the driving and driven shafts in an automobile drive train.
The ratio of the shaft angular velocities depends on two parameters: the intersection angle between the two shafts and the rotation angle of the base shaft. A physical signal input provides the intersection angle while a property inspector parameter provides the initial base shaft angle. These two angles fix the ratio of the two shaft angular velocities according to the nonlinear equation:
\omega_F = \frac{\cos(A)}{1 - \sin^2(A)\,\cos^2(\theta_B)}\,\omega_B,
ωF is the angular velocity of the follower shaft about its length axis.
ωB is the angular velocity of the base shaft about its length axis.
θB is the rotation angle of the base shaft about its length axis.
A is the intersection angle between base and follower shafts about the base shaft pin.
The two schematics in the figure illustrate the equation parameters. In each schematic, the left shaft represents the base shaft, while the right shaft represents the follower shaft. The right schematic shows the coupling seen in the left schematic after the shafts spin 90° about their length axes (dashed line segments).
In the figure, the intersection angle is the angle between the two shafts about the pin of the base shaft. The absolute value of this angle must fall in the range 0 ≤ A < Maximum intersection angle. The base shaft angle is the angle of the base shaft about its length axis. The base shaft angle is also the time-integral of the base shaft angular velocity, ωB.
B — Base shaft
Conserving rotational port associated with the base shaft.
F — Follower shaft
Conserving rotational port associated with the follower shaft.
A — Intersection angle
Physical signal input port for the intersection angle.
Maximum joint angle — Maximum joint intersection angle, A
pi/4 rad (default) | 0 ≤ A < pi/2 | scalar
Maximum intersection angle the joint allows. This angle measures the rotation between base and follower shafts about the base shaft pin. The value of this angle must fall in the range
0 ≤ A < π/2
Joint compliance — Compliance model
Compliance model for the block:
Off — Do not model stiffness and damping.
On — Model stiffness and damping.
If this parameter is set to On, the related compliance parameters are enabled.
Joint stiffness — Stiffness coefficient
1e6 N*m/rad (default) | scalar
Linear spring stiffness of the joint. The spring stiffness accounts for elastic energy storage in the joint due to material compliance.
This parameter is enabled when Joint compliance is set to On.
Joint damping — Damping coefficient
1e3 N*m/(rad/s) (default) | scalar
Linear damping coefficient of the joint. The damping coefficient accounts for energy dissipation in the joint due to material compliance.
Initial base shaft angle — Initial base shaft angle
Rotation angle of the base shaft about its length axis at the beginning of simulation.
Initial torque from base to follower shaft — Initial torque
Torque that the base shaft transfers to the follower shaft at the beginning of simulation. This torque determines the initial state of material compliance at the joint. Set this value to greater than zero to preload the shafts with torque. Changing this value alters the initial transient response due to material compliance.
This parameter is enabled when Joint compliance is set to On.
Belt Drive | Belt Pulley | Chain Drive | Flexible Shaft | Rope Drum
|
Covering degrees are determined by graph manifolds involved | EMS Press
Covering degrees are determined by graph manifolds involved
W. Thurston raised the following question in 1976: Suppose that a compact 3-manifold M is not covered by (surface) × S¹ or a torus bundle over S¹. If M₁ and M₂ are two homeomorphic finite covering spaces of M, do they have the same covering degree? For so-called geometric 3-manifolds (a famous conjecture is that all compact orientable 3-manifolds are geometric), it is known that the answer is affirmative if M is not a non-trivial graph manifold. In this paper, we prove that the answer for non-trivial graph manifolds is also affirmative. Hence the answer to Thurston's question is complete for geometric 3-manifolds. Some properties of 3-manifold groups are also derived.
Shicheng Wang, F. Yu, Covering degrees are determined by graph manifolds involved. Comment. Math. Helv. 74 (1999), no. 2, pp. 238–247
|
Phosphoenolpyruvate carboxykinase (diphosphate) - Wikipedia
Phosphoenolpyruvate carboxykinase (pyrophosphate) homodimer, Actinomyces israelii
Phosphoenolpyruvate carboxykinase (diphosphate) (EC 4.1.1.38, phosphopyruvate carboxylase, phosphoenolpyruvate carboxylase, PEP carboxyphosphotransferase, PEP carboxykinase, phosphopyruvate carboxykinase (pyrophosphate), PEP carboxylase, phosphoenolpyruvic carboxykinase, phosphoenolpyruvic carboxylase, phosphoenolpyruvate carboxykinase, phosphoenolpyruvate carboxytransphosphorylase, phosphoenolpyruvate carboxykinase, phosphoenolpyruvic carboxykinase, PEPCTrP, phosphoenolpyruvic carboxykinase (pyrophosphate), phosphoenolpyruvic carboxylase (pyrophosphate), phosphoenolpyruvate carboxyphosphotransferase, phosphoenolpyruvic carboxytransphosphorylase, phosphoenolpyruvate carboxylase (pyrophosphate), phosphopyruvate carboxylase (pyrophosphate), diphosphate:oxaloacetate carboxy-lyase (transphosphorylating)) is an enzyme with systematic name diphosphate:oxaloacetate carboxy-lyase (transphosphorylating; phosphoenolpyruvate-forming).[1] This enzyme catalyses the following chemical reaction
diphosphate + oxaloacetate
{\displaystyle \rightleftharpoons }
phosphate + phosphoenolpyruvate + CO2
This enzyme also catalyses the reaction:
phosphoenolpyruvate + GTP + CO2
{\displaystyle \rightleftharpoons }
pyruvate + GDP.
It is transcriptionally upregulated in the liver by glucagon.
^ Lochmüller H, Wood HG, Davis JJ (December 1966). "Phosphoenolpyruvate carboxytransphosphorylase. II. Crystallization and properties". The Journal of Biological Chemistry. 241 (23): 5678–91. PMID 4288896.
Phosphoenolpyruvate+carboxykinase+(diphosphate) at the US National Library of Medicine Medical Subject Headings (MeSH)
Retrieved from "https://en.wikipedia.org/w/index.php?title=Phosphoenolpyruvate_carboxykinase_(diphosphate)&oldid=1052064018"
|
Econometrics | Special Issue : Celebrated Econometricians: David Hendry
Celebrated Econometricians: David Hendry
Special Issue "Celebrated Econometricians: David Hendry"
Federal Reserve Board, Washington, DC, US
Interests: Econometrics and Statistics; Monetary Economics; Macroeconomics
Contributions for the Special Issue, in honor of David Hendry, should relate to an area of research in which David has made recent important contributions. Potential areas include the following: Exploring alternative modeling strategies and empirical methodologies for macro-econometrics; analyzing concepts and criteria for viable empirical modeling of time series; diagnostic testing and model specification techniques; computer automated procedures for model selection, especially when facing structural breaks; developing software for econometric analysis; empirical investigations of money demand, wage and price inflation, and climate change; empirical and theoretical analyses of forecasting, especially forecast failure and co-breaking; and the history of econometric thought.
Inquiries about this Special Issue should be addressed to: [email protected]
The COVID-19 pandemic resulted in the most abrupt changes in U.S. labor force participation and unemployment since the Second World War, with different consequences for men and women. This paper models the U.S. labor market to help to interpret the pandemic’s effects. After replicating and extending Emerson’s (2011) model of the labor market, we formulate a joint model of male and female unemployment and labor force participation rates for 1980–2019 and use it to forecast into the pandemic to understand the pandemic’s labor market consequences. Gender-specific differences were particularly large at the pandemic’s outset; lower labor force participation persists. Full article
We analyze real-time forecasts of US inflation over 1999Q3–2019Q4 and subsamples, investigating whether and how forecast accuracy and robustness can be improved with additional information such as expert judgment, additional macroeconomic variables, and forecast combination. The forecasts include those from the Federal Reserve Board’s Tealbook, the Survey of Professional Forecasters, dynamic models, and combinations thereof. While simple models remain hard to beat, additional information does improve forecasts, especially after 2009. Notably, forecast combination improves forecast accuracy over simpler models and robustifies against bad forecasts; aggregating forecasts of inflation’s components can improve performance compared to forecasting the aggregate directly; and judgmental forecasts, which may incorporate larger and more timely datasets in conjunction with model-based forecasts, improve forecasts at short horizons. Full article
S. Yanki Kalfa
(Hendry 1980, p. 403) The three golden rules of econometrics are “test, test, and test”. The current paper applies that approach to model the forecasts of the Federal Open Market Committee over 1992–2019 and to forecast those forecasts themselves. Monetary policy is forward-looking, and as part of the FOMC’s effort toward transparency, the FOMC publishes its (forward-looking) economic projections. The overall views on the economy of the FOMC participants–as characterized by the median of their projections for inflation, unemployment, and the Fed’s policy rate–are themselves predictable by information publicly available at the time of the FOMC’s meeting. Their projections also communicate systematic behavior on the part of the FOMC’s participants. Full article
Econometrics 2021, 9(3), 26; https://doi.org/10.3390/econometrics9030026 - 25 Jun 2021
We investigate forecasting in models that condition on variables for which future values are unknown. We consider the role of the significance level because it guides the binary decisions whether to include or exclude variables. The analysis is extended by allowing for a structural break, either in the first forecast period or just before. Theoretical results are derived for a three-variable static model, but generalized to include dynamics and many more variables in the simulation experiment. The results show that the trade-off for selecting variables in forecasting models in a stationary world, namely that variables should be retained if their noncentralities exceed unity, still applies in settings with structural breaks. This provides support for model selection at looser than conventional settings, albeit with many additional features explaining the forecast performance, and with the caveat that retaining irrelevant variables that are subject to location shifts can worsen forecast performance. Full article
We analyze the influence of climate change on soybean yields in a multivariate time-series framework for a major soybean producer and exporter—Argentina. Long-run relationships are found in partial systems involving climatic, technological, and economic factors. Automatic model selection simplifies dynamic specification for a model of soybean yields and permits encompassing tests of different economic hypotheses. Soybean yields adjust to disequilibria that reflect technological improvements to seed and crop practices. Climatic effects include (a) a positive effect from increased CO₂ concentrations, which may capture accelerated photosynthesis, and (b) a negative effect from high local temperatures, which could increase with continued global warming. Full article
We study the stability of estimated linear statistical relations of global mean temperature and global mean sea level with regard to data revisions. Using four different model specifications proposed in the literature, we compare coefficient estimates and long-term sea level projections using two different vintages of each of the annual time series, covering the periods 1880–2001 and 1880–2013. We find that temperature and sea level updates and revisions have a substantial influence both on the magnitude of the estimated coefficients of influence (differences of up to 50%) and therefore on long-term projections of sea level rise following the RCP4.5 and RCP6 scenarios (differences of up to 40 cm by the year 2100). This shows that in order to replicate earlier results that informed the scientific discussion and motivated policy recommendations, it is crucial to have access to and to work with the data vintages used at the time. Full article
We apply a bootstrap test to determine whether some forecasters are able to make superior probability assessments to others. In contrast to some findings in the literature for point predictions, there is evidence that some individuals really are better than others. The testing procedure controls for the different economic conditions the forecasters may face, given that each individual responds to only a subset of the surveys. One possible explanation for the different findings for point predictions and histograms is explored: that newcomers may make less accurate histogram forecasts than experienced respondents given the greater complexity of the task. Full article
Econometrics 2020, 8(2), 14; https://doi.org/10.3390/econometrics8020014 - 23 Apr 2020
In this paper, we propose a hybrid version of Dynamic Stochastic General Equilibrium models with an emphasis on parameter invariance and tracking performance at times of rapid changes (recessions). We interpret hypothetical balanced growth ratios as moving targets for economic agents that rely upon an Error Correction Mechanism to adjust to changes in target ratios driven by an underlying state Vector AutoRegressive process. Our proposal is illustrated by an application to a pilot Real Business Cycle model for the US economy from 1948 to 2019. An extensive recursive validation exercise over the last 35 years, covering 3 recessions, is used to highlight its parameters invariance, tracking and 1- to 3-step ahead forecasting performance, outperforming those of an unconstrained benchmark Vector AutoRegressive model. Full article
HAR Testing for Spurious Regression in Trend
The usual t test, the t test based on heteroskedasticity and autocorrelation consistent (HAC) covariance matrix estimators, and the heteroskedasticity and autocorrelation robust (HAR) test are three statistics that are widely used in applied econometric work. The use of these significance tests in trend regression is of particular interest given the potential for spurious relationships in trend formulations. Following a longstanding tradition in the spurious regression literature, this paper investigates the asymptotic and finite sample properties of these test statistics in several spurious regression contexts, including regression of stochastic trends on time polynomials and regressions among independent random walks. Concordant with existing theory (Phillips 1986, 1998; Sun 2004, 2014b) the usual t test and HAC standardized test fail to control size as the sample size n → ∞ in these spurious formulations, whereas HAR tests converge to well-defined limit distributions in each case and therefore have the capacity to be consistent and control size. However, it is shown that when the number of trend regressors K → ∞, all three statistics, including the HAR test, diverge and fail to control size as n → ∞. These findings are relevant to high-dimensional nonstationary time series regressions where machine learning methods may be employed. Full article
|
Glycerate 2-kinase - Wikipedia
Glycerate 2-kinase (EC 2.7.1.165, D-glycerate-2-kinase, glycerate kinase (2-phosphoglycerate forming), ATP:(R)-glycerate 2-phosphotransferase) is an enzyme with systematic name ATP:D-glycerate 2-phosphotransferase.[1][2][3][4][5][6] This enzyme catalyses the following chemical reaction
ATP + D-glycerate
{\displaystyle \rightleftharpoons }
ADP + 2-phospho-D-glycerate
A key enzyme in the nonphosphorylative Entner-Doudoroff pathway in archaea.
^ Liu B, Wu L, Liu T, Hong Y, Shen Y, Ni J (December 2009). "A MOFRL family glycerate kinase from the thermophilic crenarchaeon, Sulfolobus tokodaii, with unique enzymatic properties". Biotechnology Letters. 31 (12): 1937–41. doi:10.1007/s10529-009-0089-z. PMID 19690808.
^ Reher M, Bott M, Schönheit P (June 2006). "Characterization of glycerate kinase (2-phosphoglycerate forming), a key enzyme of the nonphosphorylative Entner-Doudoroff pathway, from the thermoacidophilic euryarchaeon Picrophilus torridus". FEMS Microbiology Letters. 259 (1): 113–9. doi:10.1111/j.1574-6968.2006.00264.x. PMID 16684110.
^ Liu B, Hong Y, Wu L, Li Z, Ni J, Sheng D, Shen Y (September 2007). "A unique highly thermostable 2-phosphoglycerate forming glycerate kinase from the hyperthermophilic archaeon Pyrococcus horikoshii: gene cloning, expression and characterization". Extremophiles. 11 (5): 733–9. doi:10.1007/s00792-007-0079-9. PMID 17563835.
^ Noh, M.; Jung, J.H.; Lee, S.B. (2006). "Purification and characterization of glycerate kinase from the thermoacidophilic archaeon Thermoplasma acidophilum: an enzyme belonging to the second glycerate kinase family". Biotechnol. Bioprocess Eng. 11 (4): 344–350. doi:10.1007/bf03026251.
^ Yoshida T, Fukuta K, Mitsunaga T, Yamada H, Izumi Y (December 1992). "Purification and characterization of glycerate kinase from a serine-producing methylotroph, Hyphomicrobium methylovorum GM2". European Journal of Biochemistry. 210 (3): 849–54. doi:10.1111/j.1432-1033.1992.tb17488.x. PMID 1336459.
^ Hubbard BK, Koch M, Palmer DR, Babbitt PC, Gerlt JA (October 1998). "Evolution of enzymatic activities in the enolase superfamily: characterization of the (D)-glucarate/galactarate catabolic pathway in Escherichia coli". Biochemistry. 37 (41): 14369–75. doi:10.1021/bi981124f. PMID 9772162.
Glycerate+2-kinase at the US National Library of Medicine Medical Subject Headings (MeSH)
Retrieved from "https://en.wikipedia.org/w/index.php?title=Glycerate_2-kinase&oldid=997805605"
|
Polynomials are expressions that can be written as a sum of terms of the form:
(any number)·x^(whole number)
Which of the following equations are polynomial equations? For those that are not polynomials, explain why not. Check the Lesson 8.1.1 Math Notes box for further details about polynomials.
f\left(x\right) = 8x^{5} + x^{2} + 6.5x^{4} + 6
This is a polynomial.
y=\frac{3}{5}x^6+19x^2
This is a polynomial.
y=2^x+8
This is not a polynomial because x is an exponent.
f\left(x\right) = 9 + \sqrt{x} − 3
This is not a polynomial because the power of x is a fraction, not a whole number.
P\left(x\right) = 7\left(x − 3\right)\left(x + 5\right)^{2}
This is a polynomial; when expanded, it is a sum of terms with whole-number exponents.
y=x^2+\frac{1}{x^2+5}
This is not a polynomial because the variable is in the denominator.
Write an equation for a new polynomial function and then write an equation for a new function that is not a polynomial.
Look at parts (a) through (f) for examples.
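The whole-number-exponent test described above can be sketched in code. This is an illustrative helper, not part of the lesson: each term is represented as a (coefficient, exponent) pair, and an expression counts as a polynomial exactly when every exponent is a non-negative whole number.

```python
# Illustrative check: a term is a (coefficient, exponent) pair, and an
# expression is a polynomial exactly when every exponent is a whole number >= 0.
def is_polynomial(terms):
    return all(isinstance(exp, int) and exp >= 0 for _, exp in terms)

# Part (a): 8x^5 + x^2 + 6.5x^4 + 6 -> polynomial
print(is_polynomial([(8, 5), (1, 2), (6.5, 4), (6, 0)]))   # True
# Part (d): 9 + x^(1/2) - 3 -> not a polynomial (fractional exponent)
print(is_polynomial([(9, 0), (1, 0.5), (-3, 0)]))          # False
```

Note that this encoding cannot represent forms like 2^x or 1/(x^2 + 5) at all, which is consistent with those not being polynomials.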
|
Protochlorophyllide reductase - Wikipedia
The reduction of ring D of protochlorophyllide completes the biosynthesis of chlorophyllide a.
light-independent protochlorophyllide reductase
Crystallographic structure of heterooctamer of a dark-operative protochlorophyllide oxidoreductase from Prochlorococcus marinus.[1]
In enzymology, protochlorophyllide reductases (POR)[2][3] are enzymes that catalyze the conversion from protochlorophyllide to chlorophyllide a. They are oxidoreductases participating in the biosynthetic pathway to chlorophylls.[4][5]
There are two structurally unrelated proteins with this sort of activity, referred to as light-dependent (LPOR) and dark-operative (DPOR). The light- and NADPH-dependent reductase is part of the short-chain dehydrogenase/reductase (SDR) superfamily and is found in plants and oxygenic photosynthetic bacteria,[6][7] while the ATP-dependent dark-operative version is a completely different protein, consisting of three subunits that exhibit significant sequence and quaternary structure similarity to the three subunits of nitrogenase.[8] This enzyme may be evolutionarily older, but due to its bound iron-sulfur clusters it is highly sensitive to free oxygen and does not function if the atmospheric oxygen concentration exceeds about 3%.[9] It is possible that evolutionary pressure associated with the great oxidation event resulted in the development of the light-dependent system.
The light-dependent version (EC 1.3.1.33) uses NADPH:
protochlorophyllide + NADPH + H+
{\displaystyle \rightleftharpoons }
chlorophyllide a + NADP+
While the light-independent or dark-operative version (EC 1.3.7.7) uses ATP and reduced ferredoxin:[10][11][12]
protochlorophyllide + reduced ferredoxin + 2 ATP + 2 H2O
{\displaystyle \rightleftharpoons }
chlorophyllide a + oxidized ferredoxin + 2 ADP + 2 phosphate
Light-dependent
The light-dependent version has the accepted name protochlorophyllide reductase. The systematic name is chlorophyllide-a:NADP+ 7,8-oxidoreductase. Other names in common use include NADPH2-protochlorophyllide oxidoreductase, NADPH-protochlorophyllide oxidoreductase, NADPH-protochlorophyllide reductase, protochlorophyllide oxidoreductase, and protochlorophyllide photooxidoreductase.
LPOR is one of only three known light-dependent enzymes. The enzyme enables light-dependent protochlorophyllide reduction via direct local hydride transfer from NADPH and a longer-range proton transfer along a defined structural pathway.[13] LPOR is a ~40 kDa monomeric enzyme, for which the structure has been solved by X-ray crystallography. It is part of the SDR superfamily, which includes alcohol dehydrogenase, and consists of a Rossmann-fold NADPH-binding site and a substrate-specific C-terminal region. The protochlorophyllide substrate is thought to bind to a cavity near the nicotinamide end of the bound NADPH.[7][13] LPOR is primarily found in plants and oxygenic photosynthetic bacteria, as well as in some algae.
Light-independent
The light-independent version has the accepted name of ferredoxin:protochlorophyllide reductase (ATP-dependent). Systematically it is known as ATP-dependent ferredoxin:protochlorophyllide-a 7,8-oxidoreductase. Other names in common use include light-independent protochlorophyllide reductase and dark-operative protochlorophyllide reductase (DPOR).
DPOR is a nitrogenase homologue[8] and adopts an almost identical overall architecture to both nitrogenase and the downstream chlorophyllide a reductase (COR). The enzyme consists of a catalytic heterotetramer and two transiently bound ATPase dimers (right).[14] Similar to nitrogenase, the reduction mechanism relies on an electron transfer from the iron-sulfur cluster of the ATPase domain, through a secondary cluster on the catalytic heterotetramer, and finally to the protochlorophyllide-bound active site (which, distinct from nitrogenase, does not contain FeMoco). The reduction requires significantly less input than the nitrogenase reaction, needing only a 2-electron reduction and 4 ATP equivalents, and as such may require an auto-inhibitory mechanism to avoid over-activity.[15]
DPOR can alternatively take as its substrate the compound with a second vinyl group (instead of an ethyl group) in the structure, in which case the reaction is
3,8-divinylprotochlorophyllide + reduced ferredoxin + 2 ATP + 2 H2O
{\displaystyle \rightleftharpoons }
3,8-divinylchlorophyllide a + oxidized ferredoxin + 2 ADP + 2 phosphate
This enzyme is present in photosynthetic bacteria, cyanobacteria, green algae and gymnosperms.[4][16]
^ PDB: 2ynm; Moser J, Lange C, Krausze J, Rebelein J, Schubert WD, Ribbe MW, Heinz DW, Jahn D (2013). "Structure of ADP-aluminium fluoride-stabilized protochlorophyllide oxidoreductase complex". Proc Natl Acad Sci U S A. 110 (6): 2094–2098. Bibcode:2013PNAS..110.2094M. doi:10.1073/pnas.1218303110. PMC 3568340. PMID 23341615.
^ Griffiths WT (1978). "Reconstitution of chlorophyllide formation by isolated etioplast membranes". Biochem. J. 174 (3): 681–92. doi:10.1042/bj1740681. PMC 1185970. PMID 31865.
^ Apel K, Santel HJ, Redlinger TE, Falk H (1980). "The protochlorophyllide holochrome of barley (Hordeum vulgare L.) Isolation and characterization of the NADPH:protochlorophyllide oxidoreductase". Eur. J. Biochem. 111 (1): 251–8. doi:10.1111/j.1432-1033.1980.tb06100.x. PMID 7439188.
^ a b Willows, Robert D. (2003). "Biosynthesis of chlorophylls from protoporphyrin IX". Natural Product Reports. 20 (6): 327–341. doi:10.1039/B110549N. PMID 12828371.
^ Nomata, Jiro; Kondo, Toru; Mizoguchi, Tadashi; Tamiaki, Hitoshi; Itoh, Shigeru; Fujita, Yuichi (May 2015). "Dark-operative protochlorophyllide oxidoreductase generates substrate radicals by an iron-sulphur cluster in bacteriochlorophyll biosynthesis". Scientific Reports. 4 (1): 5455. doi:10.1038/srep05455. ISSN 2045-2322. PMC 4071322. PMID 24965831.
^ a b Dong, Chen-Song; Zhang, Wei-Lun; Wang, Qiao; Li, Yu-Shuai; Wang, Xiao; Zhang, Min; Liu, Lin (2020-04-14). "Crystal structures of cyanobacterial light-dependent protochlorophyllide oxidoreductase". Proceedings of the National Academy of Sciences. 117 (15): 8455–8461. doi:10.1073/pnas.1920244117. ISSN 0027-8424. PMC 7165480. PMID 32234783.
^ a b Fujita Y, Bauer CE (2000). "Reconstitution of Light-independent Protochlorophyllide Reductase from Purified BchL and BchN-BchB Subunits". J. Biol. Chem. 275 (31): 23583–23588. [1]
^ Yamazaki S, Nomata J, Fujita Y (2006). "Differential operation of dual protochlorophyllide reductases for chlorophyll biosynthesis in response to environmental oxygen levels in the cyanobacterium Leptolyngbya boryana". Plant Physiology. 142: 911–922. [2]
^ Fujita Y, Matsumoto H, Takahashi Y, Matsubara H (March 1993). "Identification of a nifDK-like gene (ORF467) involved in the biosynthesis of chlorophyll in the cyanobacterium Plectonema boryanum". Plant & Cell Physiology. 34 (2): 305–14. PMID 8199775.
^ Nomata J, Ogawa T, Kitashima M, Inoue K, Fujita Y (April 2008). "NB-protein (BchN-BchB) of dark-operative protochlorophyllide reductase is the catalytic component containing oxygen-tolerant Fe-S clusters". FEBS Letters. 582 (9): 1346–50. doi:10.1016/j.febslet.2008.03.018. PMID 18358835.
^ Muraki N, Nomata J, Ebata K, Mizoguchi T, Shiba T, Tamiaki H, et al. (May 2010). "X-ray crystal structure of the light-independent protochlorophyllide reductase". Nature. 465 (7294): 110–4. Bibcode:2010Natur.465..110M. doi:10.1038/nature08950. PMID 20400946. S2CID 4427639.
^ a b Zhang, Shaowei; Heyes, Derren J.; Feng, Lingling; Sun, Wenli; Johannissen, Linus O.; Liu, Huanting; Levy, Colin W.; Li, Xuemei; Yang, Ji; Yu, Xiaolan; Lin, Min (2019-10-31). "Structural basis for enzymatic photocatalysis in chlorophyll biosynthesis". Nature. 574 (7780): 722–725. Bibcode:2019Natur.574..722Z. doi:10.1038/s41586-019-1685-2. ISSN 0028-0836. PMID 31645759. S2CID 204849396.
^ Moser, Jürgen; Lange, Christiane; Krausze, Joern; Rebelein, Johannes; Schubert, Wolf-Dieter; Ribbe, Markus W.; Heinz, Dirk W.; Jahn, Dieter (2013-02-05). "Structure of ADP-aluminium fluoride-stabilized protochlorophyllide oxidoreductase complex". Proceedings of the National Academy of Sciences. 110 (6): 2094–2098. Bibcode:2013PNAS..110.2094M. doi:10.1073/pnas.1218303110. ISSN 0027-8424. PMC 3568340. PMID 23341615.
^ Corless, Elliot I.; Saad Imran, Syed Muhammad; Watkins, Maxwell B.; Bacik, John-Paul; Mattice, Jenna R.; Patterson, Angela; Danyal, Karamatullah; Soffe, Mark; Kitelinger, Robert; Seefeldt, Lance C.; Origanti, Sofia (January 2021). "The flexible N-terminus of BchL autoinhibits activity through interaction with its [4Fe-4S] cluster and released upon ATP binding". Journal of Biological Chemistry. 296: 100107. doi:10.1074/jbc.RA120.016278. PMC 7948495. PMID 33219127.
Ferredoxin:protochlorophyllide+reductase+(ATP-dependent) at the US National Library of Medicine Medical Subject Headings (MeSH)
Retrieved from "https://en.wikipedia.org/w/index.php?title=Protochlorophyllide_reductase&oldid=1087456980"
|
Distance to the Moon - Everything Wiki
File:Moon names.svg
Lunar nearside, major maria and craters are labeled. Credit: Peter Freiman, Cmglee, and background photograph by Gregory H. Revera.
This laboratory is an activity for you to determine a distance to the Moon.
Some suggested entities to consider are right ascension, declination, circular orbit, longitude, and latitude.
Okay, this is an astronomy distance, or displacement, laboratory.
I will provide an example of calculations of the distance to the Moon. The rest is up to you.
Control groups
For measuring a distance to the Moon, what would make an acceptable control group? Think about a control group to compare your experimental results and calculations to.
File:Copernicus (LRO) 2.png
Crater Copernicus is imaged on the Moon. Credit: NASA (image by Lunar Reconnaissance Orbiter).
One way to estimate the distance to the Moon is to use a feature that occurs both on Earth and on the Moon. Subject to size and magnification, the perceived size of the feature should be a function of its distance. The diameter of the Meteor Crater in the image is 35 mm; for an observer at the level of the crater, it is 1,186 m in diameter. As the observer's distance from the crater increases, the angle subtended at the observer decreases from 90°, starting from a radius of 593 m for an observer level with the crater rim at the center.
Landsat 1 has a periapsis of 897 km and an apoapsis of 917 km, for an average of 907 km. If the Barringer Meteor Crater was photographed by Landsat 1, it produced a 35 mm diameter image at 907 km. This suggests a photographic "shrink factor". As the perceived size shrinks with distance from the crater, the "shrink factor" should remain constant until the camera onboard can no longer image the crater (spatial resolutions ranging from 15 to 60 meters). The tangent of the angle is 593 m / 907,000 m, or about 6.5 × 10−4.
For the Meteor Crater image:
{\displaystyle {\text{image size}}=1186\,\mathrm{m}\times {\frac {593\,\mathrm{m}}{2\times 907{,}000\,\mathrm{m}}}=593\,\mathrm{m}\times {\frac {593\,\mathrm{m}}{\text{distance}}}.}
For example, if the size of a crater in the image at the top of the page is 1 mm and the image was taken by the same Landsat 1 camera, then the distance to the crater is approximately 35 times the distance of Landsat 1 above the Meteor Crater, or roughly 32,000 km.
If there are 11 pixels/mm, then a comparable crater of one pixel diameter corresponds to a distance of 385,000 km, for a Moon at an average distance of 384,000 km.
"A lunar observation by Landsat could provide improved radiometric and geometric calibration of both the Thematic Mapper and the Multispectral Scanner in terms of absolute radiometry, determination of the modulation transfer function, and sensitivity to scattered light. A pitch of the spacecraft would be required."[1]
Therefore, Landsat can take an image of the Moon. If its minimal image size corresponds to a crater comparable to the Meteor Crater, then such a Landsat image would yield a value for the distance to the Moon.
The crater Copernicus is 93 km in diameter, or 78 times the size of the Meteor crater, which would be about 78 pixels in a Landsat 1 image from Earth orbit (see the Copernicus crater in the Moon image at the top right), giving a distance of about 386,000 km.
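The chain of comparisons above can be sketched numerically. This is a hedged illustration using only the figures quoted in the text (35 mm, 11 pixels/mm, 907 km, 1,186 m, 93 km, 78 pixels); with these rounded inputs the angular-size comparison gives about 351,000 km, the same order as the ~386,000 km quoted above.

```python
# Hedged sketch of the crater-comparison distance estimate, using the
# rounded figures quoted in the text.
meteor_d = 1186.0         # m, Barringer (Meteor) Crater diameter
landsat_alt = 907_000.0   # m, mean Landsat 1 altitude
image_mm = 35.0           # mm, Meteor Crater diameter on the Landsat image
px_per_mm = 11.0          # assumed pixel density of the image

# Calibration: angular size (radians) per pixel of image
rad_per_px = (meteor_d / landsat_alt) / (image_mm * px_per_mm)

copernicus_d = 93_000.0   # m, Copernicus crater diameter
copernicus_px = 78.0      # px, its diameter in a comparable image from Earth orbit

# Small-angle relation applied in reverse: distance = size / angular size
distance_m = copernicus_d / (copernicus_px * rad_per_px)
print(f"Estimated Earth-Moon distance: {distance_m / 1000:.0f} km")
```

The spread between this figure and the article's quoted value reflects the rounded inputs, not a different method.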
File:Supermoon by Landsat 8.jpg
The supermoon is seen by Landsat 8. Credit: U.S. Geological Survey.
"The moon has an elliptical orbit that carries it between 363,000 and 406,000 kilometers from Earth. A so-called supermoon happens when a full moon coincides with the moon's closest approach to Earth. Original image data was obtained as dated on or about July 12, 2014."[2]
Copernicus crater is upper left center.
A distance to the Moon
Many methods can be used to calculate a distance to the Moon, including lunar parallax. Here a comparison of similarly shaped craters is used with the Landsat system in orbit around Earth.
Landsat 1 has imaged the Meteor crater. By calculating the tangent of the angle subtended at the satellite and assuming stable optics, a calculation can be made using the Copernicus crater on the Moon.
The size of the Meteor Crater was measured on the Earth. The size of the Copernicus crater was measured by the Lunar Reconnaissance Orbiter. A shrink factor calculation was made using the radius of the Meteor crater and an average distance of Landsat 1 from the rocky surface of Earth. To calculate an estimated distance to the Moon at Copernicus crater, a likely pixel density was matched to the ratio of sizes.
A distance estimate of about 386,000 km was obtained.
Exact parameters for the Landsat 8 calibration using a recent super Moon were not obtained. An estimated pixel diameter of 78 corresponded to the size ratio and appears close. Quality of optics of the Landsat series was assumed constant.
A reasonable distance to the Moon can be obtained by comparing similar Earth and Moon features.
To assess your calculations, including your justification, analysis and discussion, I will provide such an assessment of my example for comparison.
Exact photometric parameters such as focal length, magnification, and resolution, which could have better justified the approach, were not used. "Shrink factor" with distance is an unfamiliar term.
The perceived width of a feature decreases the farther away vertically the observer travels.
↑ H. H. Kieffer and R. L. Wildey (September 1985). "Absolute calibration of Landsat instruments using the moon". Photogrammetric Engineering and Remote Sensing 51 (09): 1391-3. Bibcode: 1985PgERS..51.1391K. http://adsabs.harvard.edu/abs/1985PgERS..51.1391K. Retrieved 2015-06-17.
Retrieved from "https://everything.wiki/index.php?title=Distance_to_the_Moon&oldid=3568571"
|
June 2021 Antithetic multilevel sampling method for nonlinear functionals of measure
Łukasz Szpruch, Alvin Tse
Let μ ∈ 𝒫₂(ℝ^d), where 𝒫₂(ℝ^d) denotes the space of square-integrable probability measures, and consider a Borel-measurable function Φ : 𝒫₂(ℝ^d) → ℝ. In this paper we develop an antithetic multilevel Monte Carlo estimator (A-MLMC) for Φ(μ), which achieves a sharp error bound under mild regularity assumptions. The estimator takes as input the empirical laws μ^N = (1/N) ∑_{i=1}^N δ_{X_i}, where (a) (X_i)_{i=1}^N is a sequence of i.i.d. samples from μ, or (b) (X_i)_{i=1}^N is a system of interacting particles (diffusions) corresponding to a McKean–Vlasov stochastic differential equation (McKV-SDE). Each case requires a separate analysis. For a mean-field particle system, we also consider the empirical law induced by its Euler discretisation, which gives a fully implementable algorithm. As by-products of our analysis, we establish a dimension-independent rate of uniform strong propagation of chaos, as well as an L² estimate of the antithetic difference for i.i.d. random variables corresponding to general functionals defined on the space of probability measures.
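For the i.i.d. case (a), the antithetic level difference can be sketched as follows. This is an illustrative toy, not the paper's algorithm verbatim: the test functional is Φ(μ) = (∫x μ(dx))², the fine estimator at level ℓ uses n₀·2^ℓ samples, and the antithetic coarse estimator averages Φ over the two half-samples built from the same draws.

```python
import random

def phi(sample):
    """Toy nonlinear functional of the empirical measure: Phi(mu) = (mean of mu)^2."""
    m = sum(sample) / len(sample)
    return m * m

def amlmc(L, n0, base_reps, seed=0):
    """Antithetic MLMC sketch for i.i.d. samples: telescopes E[Phi(mu^{N_L})]."""
    rng = random.Random(seed)
    estimate = 0.0
    for level in range(L + 1):
        n = n0 * 2**level                  # particles per empirical law at this level
        reps = base_reps * 2**(L - level)  # more repetitions at the cheap coarse levels
        acc = 0.0
        for _ in range(reps):
            x = [rng.gauss(1.0, 1.0) for _ in range(n)]
            if level == 0:
                acc += phi(x)
            else:
                half = n // 2
                # antithetic difference: fine estimator minus the average of the
                # two coarse estimators built from the two halves of the same draws
                acc += phi(x) - 0.5 * (phi(x[:half]) + phi(x[half:]))
        estimate += acc / reps
    return estimate

print(amlmc(L=4, n0=8, base_reps=64))  # close to Phi(mu) = (E X)^2 = 1
```

The antithetic trick cancels the leading fluctuation of the level difference, which is what yields the sharper variance decay the abstract refers to.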
This work has been supported by The Alan Turing Institute under the Engineering and Physical Sciences Research Council Grant EP/N510129/1.
Łukasz Szpruch. Alvin Tse. "Antithetic multilevel sampling method for nonlinear functionals of measure." Ann. Appl. Probab. 31 (3) 1100 - 1139, June 2021. https://doi.org/10.1214/20-AAP1614
Received: 1 May 2019; Revised: 1 July 2020; Published: June 2021
Primary: 60H10 , 60K35 , 65C35
Keywords: antithetic multi-level Monte Carlo estimator , McKean–Vlasov SDEs , propagation of chaos , Wasserstein calculus
|
L_{\varphi}-Spaces and some Related Sequence Spaces
Johann Boos, Karl-Goswin Grosse-Erdmann, T. Leiger
An Embedding Theorem for Functions whose Fourier Transforms are Weighted Square Summable
Homogenization of the Stokes Equations with General Random Coefficients
Steiner Symmetrization and Periodic Solutions of Boundary Value Problems
On Fundamental Solutions of the Heat Conduction Difference Operator
Klaus Gürlebeck, A. Hommel
On the Canonical Proboscis
Robert Finn, T.L. Leise
The Smoothness of Solutions to Nonlinear Weakly Singular Integral Equations
Arvet Pedas, Gennadi Vainikko
Uniqueness Result for the Generalized Entropy Solutions to the Cauchy Problem for First-Order Partial Differential-Functional Equations
Zdzisław Kamont, H. Leszczyński
Some Bifurcation Results Including Banach Space Valued Parameters
The Grünwald-Letnikov Difference Operator and Regularization of the Weyl Fractional Differentiation
Vu Kim Tuan, Rudolf Gorenflo
Analyticity of some Kernels
On the Existence of Closed Orbits for a Differential System
Wang Hui-Feng, Yu Shu-Xiang
|
Gauss (unit) - Citizendium
In physics, gauss (symbol G) is the unit of strength of magnetic flux density |B| (also known as magnetic induction). The gauss belongs to the Gaussian and emu (electromagnetic) systems of units, which are cgs (centimeter-gram-second) systems. The unit is related to the SI unit tesla (T) as follows.
1 G ≡ 1 Mx/cm2 = 10−4 T,
where Mx (maxwell) is the Gaussian unit for magnetic flux.
The unit is named in honor of the German mathematician and physicist Carl Friedrich Gauss.
The gauss is defined through an electromotive force
{\displaystyle {\mathcal {E}}}
induced by a change in magnetic field B. For constant surface S and uniform rate of decrease of |B|, Faraday's law takes the simple form
{\displaystyle |\mathbf {B} |={\frac {\Phi }{S}}=-{\frac {t\,{\mathcal {E}}}{S}},}
where Φ is the magnetic flux passing through S and uniform rate of decrease means linearity in time:
{\displaystyle \Phi =-t{\mathcal {E}}.}
Hence, gauss is equal to maxwell per unit surface, where maxwell (symbol Mx) is the Gaussian unit for Φ, and |B| is a flux density.
In Gaussian units S is in cm2, time t in s,
{\displaystyle {\mathcal {E}}}
in abV ( = 10−8 volt), |B| in G, and Φ in Mx:
1 G = 1 Mx/cm2 = 1 abV•s/cm2
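The relations above fix the gauss–tesla conversion as a constant factor of 10⁻⁴. A minimal sketch (function names are ours, not from the article):

```python
# Illustrative converters based on 1 G = 10^-4 T (function names are ours).
def gauss_to_tesla(b_gauss):
    return b_gauss * 1e-4

def tesla_to_gauss(b_tesla):
    return b_tesla * 1e4

# Earth's magnetic field at the surface is roughly 0.5 G:
print(gauss_to_tesla(0.5))  # 5e-05  (i.e. 50 microtesla)
```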
The oersted is the Gaussian unit of strength of a magnetic field |H|. The oersted is defined by means of an electric current giving the field H.
Retrieved from "https://citizendium.org/wiki/index.php?title=Gauss_(unit)&oldid=499612"
|
Electrophoresis - Wikipedia
For specific types and uses of electrophoresis (for example, in various analytical methods and as a process of administering medicine, iontophoresis), see Electrophoresis (disambiguation).
Electrophoresis, from Ancient Greek ἤλεκτρον (ḗlektron, "amber") and φόρησις (phórēsis, "the act of bearing"), is the motion of dispersed particles relative to a fluid under the influence of a spatially uniform electric field.[1][2][3][4][5][6][7] Electrophoresis of positively charged particles (cations) is sometimes called cataphoresis, while electrophoresis of negatively charged particles (anions) is sometimes called anaphoresis.
1. Illustration of electrophoresis
2. Illustration of electrophoresis retardation
The electrokinetic phenomenon of electrophoresis was observed for the first time in 1807 by Russian professors Peter Ivanovich Strakhov and Ferdinand Frederic Reuss at Moscow University,[8] who noticed that the application of a constant electric field caused clay particles dispersed in water to migrate. It is ultimately caused by the presence of a charged interface between the particle surface and the surrounding fluid. It is the basis for analytical techniques used in chemistry for separating molecules by size, charge, or binding affinity.
Main article: History of electrophoresis
Suspended particles have an electric surface charge, strongly affected by surface adsorbed species,[9] on which an external electric field exerts an electrostatic Coulomb force. According to the double layer theory, all surface charges in fluids are screened by a diffuse layer of ions, which has the same absolute charge but opposite sign with respect to that of the surface charge. The electric field also exerts a force on the ions in the diffuse layer, in the direction opposite to that acting on the surface charge. This latter force is not actually applied to the particle, but to the ions in the diffuse layer located at some distance from the particle surface, and part of it is transferred all the way to the particle surface through viscous stress. This part of the force is also called the electrophoretic retardation force, or ERF in short. When the electric field is applied and the charged particle to be analyzed is in steady movement through the diffuse layer, the total resulting force is zero:
{\displaystyle F_{tot}=0=F_{el}+F_{f}+F_{ret}}
Considering the drag on the moving particles due to the viscosity of the dispersant, in the case of low Reynolds number and moderate electric field strength E, the drift velocity of a dispersed particle v is simply proportional to the applied field, which leaves the electrophoretic mobility μe defined as:[10]
{\displaystyle \mu _{e}={v \over E}.}
The most well known and widely used theory of electrophoresis was developed in 1903 by Smoluchowski:[11]
{\displaystyle \mu _{e}={\frac {\varepsilon _{r}\varepsilon _{0}\zeta }{\eta }}}
where εr is the dielectric constant of the dispersion medium, ε0 is the permittivity of free space (C² N−1 m−2), η is dynamic viscosity of the dispersion medium (Pa s), and ζ is zeta potential (i.e., the electrokinetic potential of the slipping plane in the double layer, units mV or V).
The Smoluchowski theory is very powerful because it works for dispersed particles of any shape at any concentration. However, it has limitations on its validity: for instance, it does not include the Debye length κ−1 (units m). Yet the Debye length must be important for electrophoresis, as follows immediately from Figure 2, "Illustration of electrophoresis retardation": increasing the thickness of the double layer (DL) moves the point of application of the retardation force farther from the particle surface, and the thicker the DL, the smaller the retardation force must be.
Detailed theoretical analysis proved that the Smoluchowski theory is valid only for sufficiently thin DL, when particle radius a is much greater than the Debye length:
{\displaystyle a\kappa \gg 1}
This model of the "thin double layer" offers tremendous simplifications not only for electrophoresis theory but for many other electrokinetic theories. It is valid for most aqueous systems, where the Debye length is usually only a few nanometers, and breaks down only for nano-colloids in solutions with ionic strength close to that of pure water.
The Smoluchowski theory also neglects the contributions from surface conductivity. This is expressed in modern theory as condition of small Dukhin number:
{\displaystyle Du\ll 1}
In the effort of expanding the range of validity of electrophoretic theories, the opposite asymptotic case was considered, when Debye length is larger than particle radius:
{\displaystyle a\kappa <\!\,1}
Under this condition of a "thick double layer", Hückel[12] predicted the following relation for electrophoretic mobility:
{\displaystyle \mu _{e}={\frac {2\varepsilon _{r}\varepsilon _{0}\zeta }{3\eta }}}
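The two limiting formulas can be compared numerically. A minimal sketch, with the constant and the example values (water at room temperature, ζ = 50 mV) assumed by us rather than taken from the article:

```python
EPS0 = 8.8541878128e-12  # vacuum permittivity epsilon_0, F/m

def mobility_smoluchowski(eps_r, zeta_v, eta):
    """Thin double layer (a*kappa >> 1): mu_e = eps_r * eps0 * zeta / eta."""
    return eps_r * EPS0 * zeta_v / eta

def mobility_huckel(eps_r, zeta_v, eta):
    """Thick double layer (a*kappa << 1): 2/3 of the Smoluchowski value."""
    return 2.0 * eps_r * EPS0 * zeta_v / (3.0 * eta)

# Water at room temperature (eps_r ~ 78.5, eta ~ 8.9e-4 Pa s), zeta = 50 mV:
mu = mobility_smoluchowski(78.5, 0.050, 8.9e-4)
print(f"Smoluchowski mobility: {mu:.2e} m^2/(V s)")  # about 3.9e-08
```

The fixed 2/3 ratio between the Hückel and Smoluchowski limits is visible directly in the two formulas above.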
There are several analytical theories that incorporate surface conductivity and eliminate the restriction of a small Dukhin number, pioneered by Overbeek[13] and Booth.[14] Modern, rigorous theories valid for any zeta potential and often any aκ stem mostly from Dukhin–Semenikhin theory.[15]
In the thin double layer limit, these theories confirm the numerical solution to the problem provided by O'Brien and White.[16]
Nonlinear frictiophoresis
^ Lyklema, J. (1995). Fundamentals of Interface and Colloid Science. Vol. 2. p. 3.208.
^ Russel, W.B.; Saville, D.A.; Schowalter, W.R. (1989). Colloidal Dispersions. Cambridge University Press.
^ Kruyt, H.R. (1952). Colloid Science. Vol. 1, Irreversible systems. Elsevier.
^ Dukhin, A.S.; Goetz, P.J. (2017). Characterization of liquids, nano- and micro- particulates and porous bodies using Ultrasound. Elsevier. ISBN 978-0-444-63908-0.
^ Anderson, J L (January 1989). "Colloid Transport by Interfacial Forces". Annual Review of Fluid Mechanics. 21 (1): 61–99. Bibcode:1989AnRFM..21...61A. doi:10.1146/annurev.fl.21.010189.000425. ISSN 0066-4189.
^ Hanaor, D.A.H.; Michelazzi, M.; Leonelli, C.; Sorrell, C.C. (2012). "The effects of carboxylic acids on the aqueous dispersion and electrophoretic deposition of ZrO2". Journal of the European Ceramic Society. 32 (1): 235–244. arXiv:1303.2754. doi:10.1016/j.jeurceramsoc.2011.08.015. S2CID 98812224.
^ Booth, F. (1948). "Theory of Electrokinetic Effects". Nature. 161 (4081): 83–86. Bibcode:1948Natur.161...83B. doi:10.1038/161083a0. PMID 18898334. S2CID 4115758.
Shim, J.; P. Dutta; C.F. Ivory (2007). "Modeling and simulation of IEF in 2-D microgeometries". Electrophoresis. 28 (4): 527–586. doi:10.1002/elps.200600402. PMID 17253629. S2CID 23274096.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Electrophoresis&oldid=1085255111"
|
Galactose-1-phosphate thymidylyltransferase - Wikipedia
In enzymology, a galactose-1-phosphate thymidylyltransferase (EC 2.7.7.32) is an enzyme that catalyzes the chemical reaction
dTTP + alpha-D-galactose 1-phosphate
{\displaystyle \rightleftharpoons }
diphosphate + dTDP-galactose
Thus, the two substrates of this enzyme are dTTP and alpha-D-galactose 1-phosphate, whereas its two products are diphosphate and dTDP-galactose.
This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing nucleotide groups (nucleotidyltransferases). The systematic name of this enzyme class is dTTP:alpha-D-galactose-1-phosphate thymidylyltransferase. Other names in common use include dTDP galactose pyrophosphorylase, galactose 1-phosphate thymidylyl transferase, thymidine diphosphogalactose pyrophosphorylase, and thymidine triphosphate:alpha-D-galactose 1-phosphate thymidylyltransferase. This enzyme participates in nucleotide sugars metabolism.
Pazur JH, Anderson JS (October 1963). "Thymidine triphosphate: alpha-D-galactose 1-phosphate thymidylyltransferase from Streptococcus faecalis grown on D-galactose". The Journal of Biological Chemistry. 238: 3155–60. PMID 14085355.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Galactose-1-phosphate_thymidylyltransferase&oldid=970475171"
|
Quadratic equation/Related Articles - Citizendium
Quadratic equation/Related Articles
< Quadratic equation
A list of Citizendium articles, and planned articles, about Quadratic equation.
See also changes related to Quadratic equation, or pages that link to Quadratic equation or to this page or whose text contains "Quadratic equation".
Polynomial equation [r]: An equation in which a polynomial in one or more variables is set equal to zero. [e]
Root (mathematics) [r]: Add brief definition or description
Factor [r]: Add brief definition or description
{\displaystyle i^{2}=-1}
Retrieved from "https://citizendium.org/wiki/index.php?title=Quadratic_equation/Related_Articles&oldid=562030"
|
Volume-preserving mean curvature flow of rotationally symmetric surfaces | EMS Press
Volume-preserving mean curvature flow of rotationally symmetric surfaces
A rotationally symmetric n-dimensional surface in
{\Bbb R}^{n+1}
, of enclosed volume V and with boundary in two parallel planes, is evolving under volume-preserving mean curvature flow. For large volume V, we obtain gradient and curvature estimates, leading to long-time existence of the flow, and convergence to a constant mean curvature surface.
Maria Athanassenas, Volume-preserving mean curvature flow of rotationally symmetric surfaces. Comment. Math. Helv. 72 (1997), no. 1, pp. 52–66
|
D-lactate dehydrogenase (cytochrome) - Wikipedia
In enzymology, a D-lactate dehydrogenase (cytochrome) (EC 1.1.2.4) is an enzyme that catalyzes the chemical reaction
(D)-lactate + 2 ferricytochrome c
{\displaystyle \rightleftharpoons }
pyruvate + 2 ferrocytochrome c
Thus, the two substrates of this enzyme are (D)-lactate and ferricytochrome c, whereas its two products are pyruvate and ferrocytochrome c.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donors with a cytochrome as acceptor. The systematic name of this enzyme class is (D)-lactate:ferricytochrome-c 2-oxidoreductase. Other names in common use include lactic acid dehydrogenase, D-lactate (cytochrome) dehydrogenase, cytochrome-dependent D-(−)-lactate dehydrogenase, D-lactate-cytochrome c reductase, and D-(−)-lactic cytochrome c reductase. This enzyme participates in pyruvate metabolism. It employs one cofactor, FAD. This type of enzyme has been characterized in animals, fungi, bacteria and, recently, in plants.[1][2] It is believed to be important in the detoxification of methylglyoxal through the glyoxylase pathway.
^ Atlante, A.; de Bari, L.; Valenti, D.; Pizzuto, R.; Paventi, G. & Passarella, S. (2005). "Transport and metabolism of D-lactate in Jerusalem artichoke mitochondria". Biochim. Biophys. Acta. 1708 (1): 13–22. doi:10.1016/j.bbabio.2005.03.003. PMID 15949980.
^ Martin Engqvist; Maria Fabiana Drincovich; Ulf-Ingo Flügge & Veronica G. Maurino (2009). "Two D-2-hydroxyacid dehydrogenases in Arabidopsis thaliana with catalytic capacities to participate in the last reactions of the methylglyoxal and {beta}-oxidation pathways". J Biol Chem. 284 (September 11): 25026–25037. doi:10.1074/jbc.M109.021253. PMC 2757207. PMID 19586914.
Gregolin C, Singer TP (1963). "The lactic dehydrogenase of yeast. III. D(-)Lactic cytochrome c reductase, a zinc-flavoprotein from aerobic yeast". Biochim. Biophys. Acta. 67: 201–18. doi:10.1016/0006-3002(63)91818-3. PMID 13950255.
Gregolin C, Singer TP, Kearney EB, Boeri E (1961). "The formation and enzymatic properties of the various lactic dehydrogenases of yeast". Ann. N. Y. Acad. Sci. 94 (3): 780–97. doi:10.1111/j.1749-6632.1961.tb35573.x. PMID 13901630.
Nygaard AP (1961). "D(−)-Lactate cytochrome c reductase, a flavoprotein from yeast". J. Biol. Chem. 236: 920–925.
Retrieved from "https://en.wikipedia.org/w/index.php?title=D-lactate_dehydrogenase_(cytochrome)&oldid=1032241664"
|
Oxalate CoA-transferase - Wikipedia
In enzymology, an oxalate CoA-transferase (EC 2.8.3.2) is an enzyme that catalyzes the chemical reaction
succinyl-CoA + oxalate
{\displaystyle \rightleftharpoons }
succinate + oxalyl-CoA
Thus, the two substrates of this enzyme are succinyl-CoA and oxalate, whereas its two products are succinate and oxalyl-CoA.
This enzyme belongs to the family of transferases, specifically the CoA-transferases. The systematic name of this enzyme class is succinyl-CoA:oxalate CoA-transferase. Other names in common use include succinyl-beta-ketoacyl-CoA transferase, and oxalate coenzyme A-transferase. This enzyme participates in glyoxylate and dicarboxylate metabolism.
Quayle JR, Keech DB, Taylor GA (1961). "Carbon assimilation by Pseudomonas oxalaticus (OXI). 4. Metabolism of oxalate in cell-free extracts of the organism grown on oxalate". Biochem. J. 78 (2): 225–36. PMC 1205258. PMID 16748872.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Oxalate_CoA-transferase&oldid=925698003"
|
On generalized solutions of two-phase flows for viscous incompressible fluids | EMS Press
We discuss the existence of generalized solutions of the flow of two immiscible, incompressible, viscous Newtonian and non-Newtonian fluids with and without surface tension in a domain \Omega\subseteq \R^d, d=2,3. In the case without surface tension, the existence of weak solutions is shown, but little is known about the interface between both fluids. If surface tension is present, the energy estimate gives an a priori bound on the (d-1)-dimensional Hausdorff measure of the interface, but the existence of weak solutions is open. This might be due to possible oscillation and concentration effects of the interface related to instabilities of the interface such as fingering, emulsification, or simply cancellation of area when two parts of the interface meet. Nevertheless, we show the existence of so-called measure-valued varifold solutions, where the interface is modeled by an oriented general varifold V(t), a non-negative measure on \Omega\times \mathbb{S}^{d-1}, with \mathbb{S}^{d-1} the unit sphere in \R^d. Moreover, it is shown that measure-valued varifold solutions are weak solutions if an energy equality is satisfied.
Helmut Abels, On generalized solutions of two-phase flows for viscous incompressible fluids. Interfaces Free Bound. 9 (2007), no. 1, pp. 31–65
|
Block Preconditioned SSOR Methods for H-Matrices Linear Systems
Zhao-Nian Pu, Xue-Zhong Wang
We present a block preconditioner and consider block preconditioned SSOR iterative methods for solving the linear system Ax=b. When A is an H-matrix, the convergence and some comparison results of the spectral radius for our methods are given. Numerical examples are also given to illustrate that our methods are valid.
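As context for what an SSOR sweep does, here is a minimal, point-wise (unpreconditioned) SSOR solver applied to a small strictly diagonally dominant system, which is one standard example of an H-matrix. This is an illustrative sketch only, not the block-preconditioned method of the paper:

```python
def ssor_sweep(A, b, x, omega, indices):
    """One SOR half-sweep over the given index order, updating x in place."""
    for i in indices:
        s = b[i] - sum(A[i][j] * x[j] for j in range(len(b)) if j != i)
        x[i] += omega * (s / A[i][i] - x[i])

def ssor_solve(A, b, omega=1.2, iters=200):
    """Plain point-wise SSOR iteration for Ax = b: each iteration is a
    forward SOR sweep followed by a backward sweep. A minimal sketch;
    the paper applies a block preconditioner to A before iterating."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        ssor_sweep(A, b, x, omega, range(n))
        ssor_sweep(A, b, x, omega, reversed(range(n)))
    return x

# strictly diagonally dominant tridiagonal system (an H-matrix);
# the exact solution is x = (1, 1, 1)
A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [3.0, 2.0, 3.0]
x = ssor_solve(A, b)
```

For an H-matrix, point SSOR converges for suitable omega; the paper's contribution is a block preconditioner that further shrinks the spectral radius of the iteration matrix.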
Zhao-Nian Pu, Xue-Zhong Wang. "Block Preconditioned SSOR Methods for H-Matrices Linear Systems." J. Appl. Math. 2013, 1-7 (2013). https://doi.org/10.1155/2013/213659
|
Weber number - Wikipedia
A splash after half a brick hits the water; the image is about half a meter across. Note the freely moving airborne water droplets, a phenomenon typical of high Reynolds number flows; the intricate non-spherical shapes of the droplets show that the Weber number is high. Also note the entrained bubbles in the body of the water, and an expanding ring of disturbance propagating away from the impact site.
The Weber number is a dimensionless number in fluid mechanics that is often useful in analysing fluid flows where there is an interface between two different fluids, especially for multiphase flows with strongly curved surfaces. It can be thought of as a measure of the relative importance of the fluid's inertia compared to its surface tension. The quantity is useful in analyzing thin film flows and the formation of droplets and bubbles.
It is named after Moritz Weber (1871–1951) and may be written as:
{\displaystyle {\mathit {We}}={\frac {\rho v^{2}l}{\sigma }}}
where
{\displaystyle \rho }
is the density of the fluid,
{\displaystyle v}
is its velocity,
{\displaystyle l}
is its characteristic length, typically the droplet diameter, and
{\displaystyle \sigma }
is the surface tension.
The modified Weber number,
{\displaystyle We^{*}={\frac {We}{48}},}
equals the ratio of the kinetic energy on impact to the surface energy,
{\displaystyle We^{*}={\frac {E_{kin}}{E_{surf}}},}
where
{\displaystyle E_{kin}=\pi \rho l^{3}U^{2}/24}
and
{\displaystyle E_{surf}=2\pi l^{2}\sigma .}
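As a quick numerical illustration of the definition, the Weber number of a small water droplet can be computed directly. The droplet size, speed, and fluid properties below are assumed example values, not figures from the article:

```python
# hypothetical case: a 2 mm water droplet moving at 5 m/s
rho = 998.0     # density of water, kg/m^3
v = 5.0         # velocity, m/s
l = 2e-3        # characteristic length (droplet diameter), m
sigma = 0.0728  # surface tension of water near 20 C, N/m

We = rho * v**2 * l / sigma      # Weber number
We_star = We / 48                # modified Weber number, E_kin / E_surf
print(We, We_star)
```

A value of We well above 1, as here, indicates that inertia dominates surface tension, consistent with the droplet break-up behavior the article describes.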
Weast, R.; Lide, D.; Astle, M.; Beyer, W. (1989–1990). CRC Handbook of Chemistry and Physics. 70th ed. Boca Raton, Florida: CRC Press, Inc. F-373, 376.
"https://hi.wikipedia.org/w/index.php?title=वेबर_संख्या&oldid=5092706" से लिया गया
|
Isotopy and invariants of Albert algebras | EMS Press
Isotopy and invariants of Albert algebras
Let k be a field with characteristic different from 2 and 3. Let B be a central simple algebra of degree 3 over a quadratic extension K/k which admits involutions of the second kind. In this paper, we prove that if the Albert algebras J(B,\sigma,u,\mu) and J(B,\tau,v,\nu) have the same f_3 and g_3 invariants, then they are isotopic. We prove that for a given Albert algebra J, there exists an Albert algebra J' with f_3(J')=0, f_5(J')=0, and g_3(J')=g_3(J). We conclude with a construction of Albert division algebras which are pure second Tits constructions.
Isotopy and invariants of Albert algebras. Comment. Math. Helv. 74 (1999), no. 2, pp. 297–305
|
2012 Optimal Results and Numerical Simulations for Flow Shop Scheduling Problems
Tao Ren, Yuandong Diao, Xiaochuan Luo
This paper considers the m-machine flow shop problem with two objectives: makespan with release dates and total quadratic completion time, respectively. For Fm|r_j|C_{max}, we prove the asymptotic optimality of any dense scheduling when the problem scale is large enough. For Fm||\Sigma C_j^2, an improvement strategy with local search is presented to promote the performance of the classical SPT heuristic. At the end of the paper, simulations show the effectiveness of the improvement strategy.
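The "SPT seed plus local search" idea can be sketched as follows. This is a minimal illustration under assumed job data, not the authors' implementation: `flow_shop_completions` uses the standard permutation flow shop recurrence, the SPT seed orders jobs by total processing time, and the local search only tries adjacent swaps on the sum-of-squared-completions objective:

```python
def flow_shop_completions(order, p):
    """Completion times on the last machine of a permutation flow shop;
    p[j][i] is the processing time of job j on machine i."""
    m = len(p[0])
    prev = [0.0] * m          # completion times of the previous job
    comps = []
    for j in order:
        cur = [0.0] * m
        for i in range(m):
            # a job starts on machine i when both the machine is free
            # and the job has finished on machine i-1
            ready = prev[i] if i == 0 else max(prev[i], cur[i - 1])
            cur[i] = ready + p[j][i]
        prev = cur
        comps.append(cur[-1])
    return comps

def spt_with_local_search(p):
    """SPT seed improved by adjacent pairwise swaps, minimizing the
    total quadratic completion time sum(C_j^2)."""
    order = sorted(range(len(p)), key=lambda j: sum(p[j]))
    cost = lambda o: sum(c * c for c in flow_shop_completions(o, p))
    improved = True
    while improved:
        improved = False
        for k in range(len(order) - 1):
            cand = order[:]
            cand[k], cand[k + 1] = cand[k + 1], cand[k]
            if cost(cand) < cost(order):
                order, improved = cand, True
    return order, cost(order)
```

On a single machine this reduces to the classical SPT rule, which is optimal for total completion time; the paper studies how far such dense schedules are from optimal as the instance grows.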
Tao Ren, Yuandong Diao, Xiaochuan Luo. "Optimal Results and Numerical Simulations for Flow Shop Scheduling Problems." J. Appl. Math. 2012, 1-9 (2012). https://doi.org/10.1155/2012/395947
|
Maltose phosphorylase - Wikipedia
maltose phosphorylase dimer, Lactobacillus brevis
In enzymology, a maltose phosphorylase (EC 2.4.1.8) is an enzyme that catalyzes the chemical reaction
maltose + phosphate
{\displaystyle \rightleftharpoons }
D-glucose + beta-D-glucose 1-phosphate
Thus, the two substrates of this enzyme are maltose and phosphate, whereas its two products are D-glucose and beta-D-glucose 1-phosphate.
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is maltose:phosphate 1-beta-D-glucosyltransferase. This enzyme participates in starch and sucrose metabolism.
As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 1H54.
Fitting C, Doudoroff M (1952). "Phosphorolysis of maltose by enzyme preparations from Neisseria meningitidis". J. Biol. Chem. 199 (1): 153–63. PMID 12999827.
Putman EW, Litt CF, Hassid WZ (1955). "The structure of D-glucose-D-xylose synthesized by maltose phosphorylase". J. Am. Chem. Soc. 77 (16): 4351–4353. doi:10.1021/ja01621a050.
Wood BJ, Rainbow C (1961). "The maltophosphorylase of beer lactobacilli". Biochem. J. 78: 204–209. PMC 1205197. PMID 13786484.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Maltose_phosphorylase&oldid=917518577"
|
Unit sphere — Wikipedia
Some 1-spheres.
{\displaystyle \|{\boldsymbol {x}}\|_{2}}
is the norm for Euclidean space discussed in the first section below.
In mathematics, a unit sphere is simply a sphere of radius one around a given center. More generally, it is the set of points of distance 1 from a fixed central point, where different norms can be used as general notions of "distance". A unit ball is the closed set of points of distance less than or equal to 1 from a fixed central point. Usually the center is at the origin of the space, so one speaks of "the unit ball" or "the unit sphere". Special cases are the unit circle and the unit disk.
In Euclidean space of n dimensions, the (n−1)-dimensional unit sphere is the set of all points
{\displaystyle (x_{1},\ldots ,x_{n})}
which satisfy the equation
{\displaystyle x_{1}^{2}+x_{2}^{2}+\cdots +x_{n}^{2}=1.}
The open unit n-ball is the set of all points satisfying the inequality
{\displaystyle x_{1}^{2}+x_{2}^{2}+\cdots +x_{n}^{2}<1,}
and the closed unit n-ball is the set of all points satisfying
{\displaystyle x_{1}^{2}+x_{2}^{2}+\cdots +x_{n}^{2}\leq 1.}
In three dimensions, the unit sphere is the level set
{\displaystyle f(x,y,z)=x^{2}+y^{2}+z^{2}=1.}
The volume of the unit ball in n-dimensional Euclidean space, and the surface area of the unit sphere, appear in many important formulas of analysis. The volume of the unit ball in n dimensions, which we denote Vn, can be expressed by making use of the gamma function. It is
{\displaystyle V_{n}={\frac {\pi ^{n/2}}{\Gamma (1+n/2)}}={\begin{cases}{\pi ^{n/2}}/{(n/2)!}&\mathrm {if~} n\geq 0\mathrm {~is~even,} \\~\\{\pi ^{\lfloor n/2\rfloor }2^{\lceil n/2\rceil }}/{n!!}&\mathrm {if~} n\geq 0\mathrm {~is~odd,} \end{cases}}}
where n!! is the double factorial.
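The closed form for V_n can be evaluated directly with the gamma function; a minimal sketch (using the standard relation A_{n−1} = nV_n for the sphere area):

```python
import math

def unit_ball_volume(n):
    """V_n = pi^(n/2) / Gamma(1 + n/2), the volume of the unit n-ball."""
    return math.pi ** (n / 2) / math.gamma(1 + n / 2)

def unit_sphere_area(n):
    """A_(n-1) = n * V_n, the surface area of the unit (n-1)-sphere."""
    return n * unit_ball_volume(n)
```

For example, `unit_ball_volume(3)` gives 4π/3 and `unit_sphere_area(3)` gives 4π, matching the familiar three-dimensional formulas.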
The surface area of the unit sphere in n dimensions is
{\displaystyle A_{n-1}=nV_{n}={\frac {n\pi ^{n/2}}{\Gamma (1+n/2)}}={\frac {2\pi ^{n/2}}{\Gamma (n/2)}}\,,}
where the last equality holds only for n > 0. For example,
{\displaystyle A_{0}=2}
is the "area" of the boundary of the unit ball
{\displaystyle [-1,1]\subset \mathbb {R} }
, which simply counts the two points. Then
{\displaystyle A_{1}=2\pi }
is the "area" of the boundary of the unit disc, which is the circumference of the unit circle.
{\displaystyle A_{2}=4\pi }
is the area of the boundary of the unit ball
{\displaystyle \{x\in \mathbb {R} ^{3}:x_{1}^{2}+x_{2}^{2}+x_{3}^{2}\leq 1\}}
, which is the surface area of the unit sphere
{\displaystyle \{x\in \mathbb {R} ^{3}:x_{1}^{2}+x_{2}^{2}+x_{3}^{2}=1\}}
The surface areas and the volumes for some values of n are as follows:

| n | A_{n-1} (surface area) | V_n (volume) |
| 0 | | (1/0!)π^0 = 1 |
| 1 | 1(2^1/1!!)π^0 = 2 | (2^1/1!!)π^0 = 2 |
| 2 | 2(1/1!)π^1 = 2π | (1/1!)π^1 = π |
| 3 | 3(2^2/3!!)π^1 = 4π | (2^2/3!!)π^1 = (4/3)π |
| 4 | 4(1/2!)π^2 = 2π^2 | (1/2!)π^2 = (1/2)π^2 |
| 5 | 5(2^3/5!!)π^2 = (8/3)π^2 | (2^3/5!!)π^2 = (8/15)π^2 |
| 6 | 6(1/3!)π^3 = π^3 | (1/3!)π^3 = (1/6)π^3 |
| 7 | 7(2^4/7!!)π^3 = (16/15)π^3 | (2^4/7!!)π^3 = (16/105)π^3 |
| 8 | 8(1/4!)π^4 = (1/3)π^4 | (1/4!)π^4 = (1/24)π^4 |
| 9 | 9(2^5/9!!)π^4 = (32/105)π^4 | (2^5/9!!)π^4 = (32/945)π^4 |
| 10 | 10(1/5!)π^5 = (1/12)π^5 | (1/5!)π^5 = (1/120)π^5 |
The surface areas satisfy the recursion
{\displaystyle A_{0}=2,\quad A_{1}=2\pi ,\quad A_{n}={\frac {2\pi }{n-1}}A_{n-2}{\text{ for }}n>1,}
and the volumes satisfy
{\displaystyle V_{0}=1,\quad V_{1}=2,\quad V_{n}={\frac {2\pi }{n}}V_{n-2}{\text{ for }}n>1.}
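The recursion is easy to cross-check against the gamma-function closed form; a minimal sketch:

```python
import math

def V(n):
    """Unit-ball volume via the recursion V_n = (2*pi/n) * V_(n-2),
    with base cases V_0 = 1 and V_1 = 2."""
    if n == 0:
        return 1.0
    if n == 1:
        return 2.0
    return 2 * math.pi / n * V(n - 2)

# cross-check against the closed form pi^(n/2) / Gamma(1 + n/2)
for n in range(11):
    assert abs(V(n) - math.pi ** (n / 2) / math.gamma(1 + n / 2)) < 1e-12
```

The two-step recursion explains why even dimensions produce pure powers of π while odd dimensions keep a double-factorial denominator.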
Main article: Hausdorff measure
The function
{\displaystyle 2^{-n}V_{n}={\frac {\pi ^{n/2}}{2^{n}\Gamma (1+n/2)}}}
at non-negative real values of n is sometimes used for normalization of Hausdorff measure.[1][2]
The open unit ball of a normed vector space
{\displaystyle V}
with the norm
{\displaystyle \|\cdot \|}
is given by
{\displaystyle \{x\in V:\|x\|<1\}.}
It is the topological interior of the closed unit ball of (V,||·||):
{\displaystyle \{x\in V:\|x\|\leq 1\}.}
The latter is the disjoint union of the former and their common border, the unit sphere of (V,||·||):
{\displaystyle \{x\in V:\|x\|=1\}.}
The 'shape' of the unit ball is entirely dependent on the chosen norm; it may well have 'corners', and for example may look like [−1,1]n, in the case of the max-norm in Rn. One obtains a naturally round ball as the unit ball pertaining to the usual Hilbert space norm, based in the finite-dimensional case on the Euclidean distance; its boundary is what is usually meant by the unit sphere.
Let
{\displaystyle x=(x_{1},\ldots ,x_{n})\in \mathbb {R} ^{n}.}
Define the usual
{\displaystyle \ell _{p}}
-norm for p ≥ 1 as:
{\displaystyle \|x\|_{p}=\left(\sum _{k=1}^{n}|x_{k}|^{p}\right)^{1/p}.}
Then
{\displaystyle \|x\|_{2}}
is the usual Hilbert space norm, and
{\displaystyle \|x\|_{1}}
is called the Hamming norm, or
{\displaystyle \ell _{1}}
-norm. The condition p ≥ 1 is necessary in the definition of the
{\displaystyle \ell _{p}}
-norm, as the unit ball in any normed space must be convex as a consequence of the triangle inequality. Let
{\displaystyle \|x\|_{\infty }}
denote the max-norm or
{\displaystyle \ell _{\infty }}
-norm of x.
Note that for the one-dimensional circumferences
{\displaystyle C_{p}}
of the two-dimensional unit balls, we have:
{\displaystyle C_{1}=4{\sqrt {2}}}
is the minimum value,
{\displaystyle C_{2}=2\pi \,,}
and
{\displaystyle C_{\infty }=8}
is the maximum value.
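These circumference values can be checked numerically by approximating the ℓp unit circle with a fine polygon. A sketch: the parametrization x = (cos t)^(2/p), y = (sin t)^(2/p) traces the first-quadrant arc of |x|^p + |y|^p = 1, and the Euclidean lengths of the chords are summed:

```python
import math

def lp_circumference(p, n=50_000):
    """Euclidean perimeter of the unit circle of the l_p norm,
    |x|^p + |y|^p = 1, by a polygonal approximation of the
    first-quadrant arc (the curve has four-fold symmetry)."""
    pts = []
    for k in range(n + 1):
        t = (math.pi / 2) * k / n
        # (cos t)^(2/p), (sin t)^(2/p) satisfies x^p + y^p = 1
        pts.append((math.cos(t) ** (2 / p), math.sin(t) ** (2 / p)))
    quarter = sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
    return 4 * quarter

print(lp_circumference(1))  # ≈ 4*sqrt(2) ≈ 5.657
print(lp_circumference(2))  # ≈ 2*pi ≈ 6.283
```

For p = 1 the "circle" is the diamond with vertices on the axes, whose perimeter 4√2 is exact; the p = ∞ square, with perimeter 8, would need separate handling since the exponent 2/p degenerates.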
All three of the above definitions can be straightforwardly generalized to a metric space, with respect to a chosen origin. However, topological considerations (interior, closure, border) need not apply in the same way (e.g., in ultrametric spaces, all of the three are simultaneously open and closed sets), and the unit sphere may even be empty in some metric spaces.
If V is a linear space with a real quadratic form F:V → R, then { p ∈ V : F(p) = 1 } may be called the unit sphere[3][4] or unit quasi-sphere of V. For example, the quadratic form
{\displaystyle x^{2}-y^{2}}
, when set equal to one, produces the unit hyperbola, which plays the role of the "unit circle" in the plane of split-complex numbers. Similarly, the quadratic form x² yields a pair of lines for the unit sphere in the dual number plane.
Unit circle
Unit disk
Unit sphere bundle
Unit square
^ The Chinese University of Hong Kong, Math 5011, Chapter 3, Lebesgue and Hausdorff Measures
^ Manin, Yuri I. "The notion of dimension in geometry and algebra" (PDF). Bulletin of the American Mathematical Society. 43 (2): 139–161. Retrieved 17 December 2021.
^ Takashi Ono (1994) Variations on a Theme of Euler: quadratic forms, elliptic curves, and Hopf maps, chapter 5: Quadratic spherical maps, page 165, Plenum Press, ISBN 0-306-44789-4
^ F. Reese Harvey (1990) Spinors and calibrations, "Generalized Spheres", page 42, Academic Press, ISBN 0-12-329650-1
Deza, E.; Deza, M. (2006), Dictionary of Distances, Elsevier, ISBN 0-444-52087-2 . Reviewed in Newsletter of the European Mathematical Society 64 (June 2007), p. 57. This book is organized as a list of distances of many types, each with a brief description.
Weisstein, Eric W. "Unit sphere". MathWorld.
|
Constant of integration - Wikipedia
In calculus, the constant of integration, often denoted by
{\displaystyle C}
, is a constant term added to an antiderivative of a function
{\displaystyle f(x)}
to indicate that the indefinite integral of
{\displaystyle f(x)}
(i.e., the set of all antiderivatives of
{\displaystyle f(x)}
), on a connected domain, is only defined up to an additive constant.[1][2][3] This constant expresses an ambiguity inherent in the construction of antiderivatives.
More specifically, if a function
{\displaystyle f(x)}
is defined on an interval, and
{\displaystyle F(x)}
is an antiderivative of
{\displaystyle f(x)}
, then the set of all antiderivatives of
{\displaystyle f(x)}
is given by the functions
{\displaystyle F(x)+C}
, where
{\displaystyle C}
is an arbitrary constant (meaning that any value of
{\displaystyle C}
makes
{\displaystyle F(x)+C}
a valid antiderivative). For that reason, the indefinite integral is often written as
{\textstyle \int f(x)\,dx=F(x)+C}
,[4] although the constant of integration might sometimes be omitted in lists of integrals for simplicity.
The derivative of any constant function is zero. Once one has found one antiderivative
{\displaystyle F(x)}
for
{\displaystyle f(x)}
, adding or subtracting any constant
{\displaystyle C}
will give us another antiderivative, because
{\textstyle {\frac {d}{dx}}(F(x)+C)={\frac {d}{dx}}F(x)+{\frac {d}{dx}}C=F'(x)=f(x)}
. The constant is a way of expressing that every function with at least one antiderivative will have an infinite number of them.
Let
{\displaystyle F:\mathbb {R} \to \mathbb {R} }
and
{\displaystyle G:\mathbb {R} \to \mathbb {R} }
be two everywhere differentiable functions. Suppose that
{\displaystyle F\,'(x)=G\,'(x)}
for every real number x. Then there exists a real number
{\displaystyle C}
such that
{\displaystyle F(x)-G(x)=C}
for all x.
To prove this, notice that
{\displaystyle [F(x)-G(x)]'=0.}
So
{\displaystyle F}
can be replaced by
{\displaystyle F-G}
, and
{\displaystyle G}
by the constant function
{\displaystyle 0}
, making the goal to prove that an everywhere differentiable function whose derivative is always zero must be constant:
Choose a real number
{\displaystyle a}
, and let
{\displaystyle C=F(a)}
. For any x, the fundamental theorem of calculus, together with the assumption that the derivative of
{\displaystyle F}
vanishes, implies that
{\displaystyle {\begin{aligned}&0=\int _{a}^{x}F'(t)\ dt\\&0=F(x)-F(a)\\&0=F(x)-C\\&F(x)=C\\\end{aligned}}}
thereby showing that
{\displaystyle F}
is a constant function.
Two facts are crucial in this proof. First, the real line is connected. If the real line were not connected, we would not always be able to integrate from our fixed a to any given x. For example, if we were to ask for functions defined on the union of intervals [0,1] and [2,3], and if a were 0, then it would not be possible to integrate from 0 to 3, because the function is not defined between 1 and 2. Here, there will be two constants, one for each connected component of the domain. In general, by replacing constants with locally constant functions, we can extend this theorem to disconnected domains. For example, there are two constants of integration for
{\textstyle \int dx/x}
, and infinitely many for
{\textstyle \int \tan x\,dx}
, so for example, the general form for the integral of 1/x is:[5][6]
{\displaystyle \int {\frac {dx}{x}}={\begin{cases}\ln \left|x\right|+C^{-}&x<0\\\ln \left|x\right|+C^{+}&x>0\end{cases}}}
Second,
{\displaystyle F}
and
{\displaystyle G}
were assumed to be everywhere differentiable. If
{\displaystyle F}
and
{\displaystyle G}
are not differentiable at even one point, then the theorem might fail. As an example, let
{\displaystyle F(x)}
be the Heaviside step function, which is zero for negative values of x and one for non-negative values of x, and let
{\displaystyle G(x)=0}
. Then the derivative of
{\displaystyle F}
is zero where it is defined, and the derivative of
{\displaystyle G}
is always zero. Yet it is clear that
{\displaystyle F}
and
{\displaystyle G}
do not differ by a constant. Even if it is assumed that
{\displaystyle F}
and
{\displaystyle G}
are everywhere continuous and almost everywhere differentiable, the theorem still fails. As an example, take
{\displaystyle F}
to be the Cantor function and again let
{\displaystyle G=0.}
For example, suppose one wants to find antiderivatives of
{\displaystyle \cos(x)}
. One such antiderivative is
{\displaystyle \sin(x)}
. Another one is
{\displaystyle \sin(x)+1}
. A third is
{\displaystyle \sin(x)-\pi }
. Each of these has derivative
{\displaystyle \cos(x)}
, so they are all antiderivatives of
{\displaystyle \cos(x)}
It turns out that adding and subtracting constants is the only flexibility we have in finding different antiderivatives of the same function. That is, all antiderivatives are the same up to a constant. To express this fact for
{\displaystyle \cos(x)}
, we write:
{\displaystyle \int \cos(x)\,dx=\sin(x)+C.}
Replacing
{\displaystyle C}
by a number will produce an antiderivative. By writing
{\displaystyle C}
instead of a number, however, a compact description of all the possible antiderivatives of
{\displaystyle \cos(x)}
is obtained;
{\displaystyle C}
is called the constant of integration. It is easily determined that all of these functions are indeed antiderivatives of
{\displaystyle \cos(x)}
:
{\displaystyle {\begin{aligned}{\frac {d}{dx}}[\sin(x)+C]&={\frac {d}{dx}}\sin(x)+{\frac {d}{dx}}C\\&=\cos(x)+0\\&=\cos(x)\end{aligned}}}
At first glance, it may seem that the constant is unnecessary, since it can be set to zero. Furthermore, when evaluating definite integrals using the fundamental theorem of calculus, the constant will always cancel with itself.
However, trying to set the constant to zero does not always make sense. For example,
{\displaystyle 2\sin(x)\cos(x)}
can be integrated in at least three different ways:
{\displaystyle {\begin{aligned}\int 2\sin(x)\cos(x)\,dx&=&\sin ^{2}(x)+C&=&-\cos ^{2}(x)+1+C&=&-{\frac {1}{2}}\cos(2x)+C\\\int 2\sin(x)\cos(x)\,dx&=&-\cos ^{2}(x)+C&=&\sin ^{2}(x)-1+C&=&-{\frac {1}{2}}\cos(2x)+C\\\int 2\sin(x)\cos(x)\,dx&=&-{\frac {1}{2}}\cos(2x)+C&=&\sin ^{2}(x)+C&=&-\cos ^{2}(x)+C\end{aligned}}}
So setting
{\displaystyle C}
to zero can still leave a constant. This means that, for a given function, there is no "simplest antiderivative".
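The three antiderivatives above can be checked numerically to differ only by constants, which is exactly why "setting C to zero" picks out a different function in each form; a quick sketch:

```python
import math

# three antiderivatives of 2*sin(x)*cos(x)
f1 = lambda x: math.sin(x) ** 2
f2 = lambda x: -math.cos(x) ** 2
f3 = lambda x: -0.5 * math.cos(2 * x)

# each pair differs by a constant, independent of x:
# f1 - f2 = sin^2 + cos^2 = 1, and f1 - f3 = 1/2 by the double-angle formula
for x in (0.0, 0.7, 2.5):
    assert abs((f1(x) - f2(x)) - 1.0) < 1e-12
    assert abs((f1(x) - f3(x)) - 0.5) < 1e-12
```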
Another problem with setting
{\displaystyle C}
equal to zero is that sometimes we want to find an antiderivative that has a given value at a given point (as in an initial value problem). For example, to obtain the antiderivative of
{\displaystyle \cos(x)}
that has the value 100 at x = π, only one value of
{\displaystyle C}
will work (in this case
{\displaystyle C=100}
).
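This initial value problem amounts to solving sin(π) + C = 100 for C; a one-line numerical check (floating-point sin(π) is only zero to machine precision):

```python
import math

# solve sin(pi) + C = 100: the unique antiderivative of cos(x)
# with F(pi) = 100
C = 100 - math.sin(math.pi)
F = lambda x: math.sin(x) + C

# the initial condition is satisfied
assert abs(F(math.pi) - 100) < 1e-9
```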
This restriction can be rephrased in the language of differential equations. Finding an indefinite integral of a function
{\displaystyle f(x)}
is the same as solving the differential equation
{\textstyle {\frac {dy}{dx}}=f(x)}
. Any differential equation will have many solutions, and each constant represents the unique solution of a well-posed initial value problem. Imposing the condition that our antiderivative takes the value 100 at x = π is an initial condition. Each initial condition corresponds to one and only one value of
{\displaystyle C}
, so without
{\displaystyle C}
it would be impossible to solve the problem.
There is another justification, coming from abstract algebra. The space of all (suitable) real-valued functions on the real numbers is a vector space, and the differential operator
{\textstyle {\frac {d}{dx}}}
is a linear operator. The operator
{\textstyle {\frac {d}{dx}}}
maps a function to zero if and only if that function is constant. Consequently, the kernel of
{\textstyle {\frac {d}{dx}}}
is the space of all constant functions. The process of indefinite integration amounts to finding a pre-image of a given function. There is no canonical pre-image for a given function, but the set of all such pre-images forms a coset. Choosing a constant is the same as choosing an element of the coset. In this context, solving an initial value problem is interpreted as lying in the hyperplane given by the initial conditions.
^ "Definition of constant of integration | Dictionary.com". www.dictionary.com. Retrieved 2020-08-14.
^ Weisstein, Eric W. "Constant of Integration". mathworld.wolfram.com. Retrieved 2020-08-14.
^ Banner, Adrian (2007). The calculus lifesaver : all the tools you need to excel at calculus. Princeton [u.a.]: Princeton University Press. p. 380. ISBN 978-0-691-13088-0.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Constant_of_integration&oldid=1048202490"
|
one word to say that if a Sunday be fine I shd. be very glad if you would test your observation on mid-styled & see what difference is in pods.—2 You might get ½ dozen pods from other plant. Also, if then you could step distance from the old long-styled to nearest other plant & look what form that is.— I shall not publish this year & shall work out whole case very carefully.—3
We are most sincerely sorry about Maude A.—4
In Haste. Your | C. Darwin
Etty came home yesterday very brisk.—5
Poor Mamma is unwell with very feverish cold.—6
Saturday. Down—
The date is established by the relationship to the letters from W. E. Darwin, 21 October [1862] and 28 October 1862; the intervening Saturday was 25 October 1862.
The letter containing William’s comments on the seed-pods of the mid-styled form of Lythrum salicaria has not been found; however, see the letter to W. E. Darwin, 30 [October 1862].
William was assisting his father by collecting seed-pods from wild plants of Lythrum salicaria (see letter from W. E. Darwin, 21 October [1862]); however, CD had concluded that he would have to perform a further 126 crosses before he could publish his results (see letter to J. D. Hooker, 27 [October 1862] and n. 11). CD’s paper, ‘Three forms of Lythrum salicaria’, was read before the Linnean Society of London on 16 June 1864; he reported observations based on William’s specimens on page 173 of the published paper (Collected papers 2: 109–10).
Maud Atherley. See letter from W. E. Darwin, 21 October [1862] and n. 5.
Emma Darwin’s diary (DAR 242) records that Henrietta Emma Darwin returned to Down House on 22 October 1862, and that on 24 October she was ‘languid in [the] m[ornin]g as before’.
On 25 October 1862, Emma Darwin recorded in her diary (DAR 242) that she took to her bed feeling ‘feverish’ with a ‘bad cold’.
Asks WED to make some observations on differences in pods of Lythrum.
|
A Treatise on Electricity and Magnetism/Part II/Chapter VII - Wikisource, the free online library
A Treatise on Electricity and Magnetism/Part II/Chapter VII
Part II, Chapter VII: Conduction in Three Dimensions
Chapter VIII: Resistance and Conductivity in Three Dimensions
A Treatise on Electricity and Magnetism — Part II, Chapter VII: Conduction in Three Dimensions. James Clerk Maxwell
Notation of Electric Currents.
285.] At any point let an element of area
{\displaystyle dS}
be taken normal to the axis of
{\displaystyle x}
, and let
{\displaystyle Q}
units of electricity pass across this area from the negative to the positive side in unit of time, then, if
{\displaystyle {\frac {Q}{dS}}}
becomes ultimately equal to
{\displaystyle u}
{\displaystyle dS}
is indefinitely diminished,
{\displaystyle u}
is said to be the Component of the electric current in the direction of
{\displaystyle x}
at the given point.
In the same way we may determine
{\displaystyle v}
and
{\displaystyle w}
, the components of the current in the directions of
{\displaystyle y}
and
{\displaystyle z}
.
286.] To determine the component of the current in any other direction
{\displaystyle OR}
through the given point
{\displaystyle O}
. Let
{\displaystyle l,m,n}
be the direction-cosines of
{\displaystyle OR}
, then cutting off from the axes of
{\displaystyle x,y,z}
portions equal to
{\displaystyle {\frac {r}{l}},{\frac {r}{m}},\;\;{\mbox{ and }}\;\;{\frac {r}{n}}}
respectively at
{\displaystyle A,B,{\text{ and }}C,}
the triangle
{\displaystyle ABC}
will be normal to
{\displaystyle OR.}
The area of this triangle
{\displaystyle ABC}
will be
{\displaystyle ds={\frac {1}{2}}{\frac {r^{2}}{lmn}},}
and by diminishing
{\displaystyle r}
this area may be diminished without limit.
The quantity of electricity which leaves the tetrahedron
{\displaystyle ABCO}
by the triangle
{\displaystyle ABC}
must be equal to that which enters it through the three triangles
{\displaystyle OBC,OCA,}
and
{\displaystyle OAB.}
The area of the triangle
{\displaystyle OBC}
is
{\displaystyle {\frac {1}{2}}{\frac {r^{2}}{mn}},}
and the component of the current normal to its plane is
{\displaystyle u,}
so that the quantity which enters through this triangle is
{\displaystyle {\frac {1}{2}}r^{2}{\frac {u}{mn}}.}
The quantities which enter through the triangles
{\displaystyle OCA}
{\displaystyle OAB}
respectively are
{\displaystyle {\frac {1}{2}}r^{2}{\frac {v}{nl}},\quad \quad {\mbox{and}}\quad \quad {\frac {1}{2}}r^{2}{\frac {w}{lm}}.}
If
{\displaystyle \gamma }
is the component of the velocity in the direction
{\displaystyle OR,}
then the quantity which leaves the tetrahedron through
{\displaystyle ABC}
is
{\displaystyle {\frac {1}{2}}{\frac {r^{2}\gamma }{lmn}}.}
Since this is equal to the quantity which enters through the three other triangles,
{\displaystyle {\frac {1}{2}}{\frac {r^{2}\gamma }{lmn}}={\frac {1}{2}}r^{2}\left\lbrace {\frac {u}{mn}}+{\frac {v}{nl}}+{\frac {w}{lm}}\right\rbrace ;}
multiplying by
{\displaystyle {\frac {2lmn}{r^{2}}},}
we get
{\displaystyle \gamma =lu+mv+nw.}
If we put
{\displaystyle u^{2}+v^{2}+w^{2}=\Gamma ^{2},}
and make
{\displaystyle l',m',n'}
such that
{\displaystyle u=l'\Gamma ,\quad v=m'\Gamma ,\quad {\mbox{and}}\quad w=n'\Gamma ;}
then
{\displaystyle \gamma =\Gamma (ll'+mm'+nn').}
Hence, if we define the resultant current as a vector whose magnitude is
{\displaystyle \Gamma ,}
and whose direction-cosines are
{\displaystyle l',m',n'}
, and if
{\displaystyle \gamma }
denotes the current resolved in a direction making an angle
{\displaystyle \theta }
with that of the resultant current, then
{\displaystyle \gamma =\Gamma \cos \theta ;}
shewing that the law of resolution of currents is the same as that of velocities, forces, and all other vectors.
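In modern vector notation, this resolution law is simply the dot product of the current vector with a unit direction vector; a short sketch (the numerical current components are hypothetical example values):

```python
import math

def current_component(current, direction):
    """gamma = l*u + m*v + n*w: the current resolved along a direction
    is the dot product of (u, v, w) with that direction's cosines."""
    u, v, w = current
    norm = math.sqrt(sum(c * c for c in direction))
    l, m, n = (c / norm for c in direction)  # direction cosines
    return l * u + m * v + n * w

# resolving the current along its own direction recovers Gamma = |J|,
# i.e. gamma = Gamma * cos(0)
J = (3.0, 4.0, 12.0)
Gamma = math.sqrt(sum(c * c for c in J))  # 13.0
assert abs(current_component(J, J) - Gamma) < 1e-12
```

Resolving along a coordinate axis returns the corresponding component u, v, or w, consistent with Art. 285.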
287.] To determine the condition that a given surface may be a surface of flow. Let
{\displaystyle F(x,y,z)=\lambda }
be the equation of a family of surfaces any one of which is given by making
{\displaystyle \lambda }
constant, then, if we make
{\displaystyle \left({\frac {d\lambda }{dx}}\right)^{2}+\left({\frac {d\lambda }{dy}}\right)^{2}+\left({\frac {d\lambda }{dz}}\right)^{2}={\frac {1}{N^{2}}},}
the direction-cosines of the normal, reckoned in the direction in which
{\displaystyle \lambda }
increases, are
{\displaystyle l=N{\frac {d\lambda }{dx}},\quad \quad m=N{\frac {d\lambda }{dy}},\quad \quad n=N{\frac {d\lambda }{dz}}.}
Hence, if
{\displaystyle \gamma }
is the component of the current normal to the surface,
{\displaystyle \gamma =N\left\{u{\frac {d\lambda }{dx}}+v{\frac {d\lambda }{dy}}+w{\frac {d\lambda }{dz}}\right\}.}
If
{\displaystyle \gamma =0}
there will be no current through the surface, and the surface may be called a Surface of Flow, because the lines of motion are in the surface.
288.] The equation of a surface of flow is therefore
{\displaystyle u{\frac {d\lambda }{dx}}+v{\frac {d\lambda }{dy}}+w{\frac {d\lambda }{dz}}=0}
If this equation is true for all values of
{\displaystyle \lambda ,}
all the surfaces of the family will be surfaces of flow.
289.] Let there be another family of surfaces, whose parameter is
{\displaystyle \lambda ',}
then, if these are also surfaces of flow, we shall have
{\displaystyle u{\frac {d\lambda '}{dx}}+v{\frac {d\lambda '}{dy}}+w{\frac {d\lambda '}{dz}}=0}
If there is a third family of surfaces of flow, whose parameter is
{\displaystyle \lambda '',}
{\displaystyle u{\frac {d\lambda ''}{dx}}+v{\frac {d\lambda ''}{dy}}+w{\frac {d\lambda ''}{dz}}=0}
Eliminating between these three equations,
{\displaystyle u,v,}
and
{\displaystyle w}
disappear together, and we find
{\displaystyle {\begin{vmatrix}{\frac {d\lambda }{dx}},&{\frac {d\lambda }{dy}},&{\frac {d\lambda }{dz}}\\{\frac {d\lambda '}{dx}},&{\frac {d\lambda '}{dy}},&{\frac {d\lambda '}{dz}}\\{\frac {d\lambda ''}{dx}},&{\frac {d\lambda ''}{dy}},&{\frac {d\lambda ''}{dz}}\end{vmatrix}}=0;}
{\displaystyle \mathrm {or} \quad \quad \lambda ''=\phi (\lambda ,\lambda ');}
that is,
{\displaystyle \lambda ''}
is some function of
{\displaystyle \lambda }
and
{\displaystyle \lambda '.}
290.] Now consider the four surfaces whose parameters are
{\displaystyle \lambda ,\lambda +\delta \lambda ,\lambda ',}
and
{\displaystyle \lambda '+\delta \lambda '.}
These four surfaces enclose a quadrilateral tube, which we may call the tube
{\displaystyle \delta \lambda \cdot \delta \lambda '.}
Since this tube is bounded by surfaces across which there is no flow, we may call it a Tube of Flow. If we take any two sections across the tube, the quantity which enters the tube at one section must be equal to the quantity which leaves it at the other, and since this quantity is therefore the same for every section of the tube, let us call it
{\displaystyle L\,\delta \lambda \cdot \delta \lambda ',}
where
{\displaystyle L}
is a function of
{\displaystyle \lambda }
and
{\displaystyle \lambda ',}
the parameters which determine the particular tube.
291.] If
{\displaystyle dS}
denotes the section of a tube of flow by a plane normal to
{\displaystyle x,}
we have by the theory of the change of the independent variables,
{\displaystyle \delta \lambda \cdot \delta \lambda '=\delta S\left({\frac {d\lambda }{dy}}{\frac {d\lambda '}{dz}}-{\frac {d\lambda }{dz}}{\frac {d\lambda '}{dy}}\right),}
and by the definition of the components of the current
{\displaystyle u\,\delta S=L\,\delta \lambda \cdot \delta \lambda '.}
{\displaystyle \left.{\begin{matrix}{\mbox{Hence}}&&u=L\left({\frac {d\lambda }{dy}}{\frac {d\lambda '}{dz}}-{\frac {d\lambda }{dz}}{\frac {d\lambda '}{dy}}\right).\\{\mbox{Similarly}}&&v=L\left({\frac {d\lambda }{dz}}{\frac {d\lambda '}{dx}}-{\frac {d\lambda }{dx}}{\frac {d\lambda '}{dz}}\right),\\&&w=L\left({\frac {d\lambda }{dx}}{\frac {d\lambda '}{dy}}-{\frac {d\lambda }{dy}}{\frac {d\lambda '}{dx}}\right).\end{matrix}}\right\}}
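These components of the current are built from the gradients of the two flow functions, and they satisfy the equation of continuity identically. This can be checked symbolically (a sketch in Python using the sympy library, taking L = 1; the variable names are mine):

```python
# Symbolic verification that the current components of equations (15),
# with L = 1, are divergence-free for arbitrary functions lambda, lambda'.
import sympy as sp

x, y, z = sp.symbols('x y z')
lam = sp.Function('lam')(x, y, z)    # stands for lambda
lamp = sp.Function('lamp')(x, y, z)  # stands for lambda'

u = sp.diff(lam, y) * sp.diff(lamp, z) - sp.diff(lam, z) * sp.diff(lamp, y)
v = sp.diff(lam, z) * sp.diff(lamp, x) - sp.diff(lam, x) * sp.diff(lamp, z)
w = sp.diff(lam, x) * sp.diff(lamp, y) - sp.diff(lam, y) * sp.diff(lamp, x)

# du/dx + dv/dy + dw/dz: every mixed second derivative cancels in pairs.
div = sp.diff(u, x) + sp.diff(v, y) + sp.diff(w, z)
assert sp.simplify(div) == 0
```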
292.] It is always possible, when one of the functions
{\displaystyle \lambda }
or
{\displaystyle \lambda '}
is known, to determine the other so that
{\displaystyle L}
may be equal to unity. For instance, let us take the plane of
{\displaystyle yz,}
and draw upon it a series of equidistant lines parallel to
{\displaystyle y,}
to represent the sections of the family
{\displaystyle \lambda '}
by this plane. In other words, let the function
{\displaystyle \lambda '}
be determined by the condition that when
{\displaystyle x=0,\;\lambda '=z.}
If we then make
{\displaystyle L=1,}
and therefore (when
{\displaystyle x=0}
)
{\displaystyle \lambda =\int u\,dy;}
then in the plane
{\displaystyle (x=0)}
the amount of electricity which passes through any portion will be
{\displaystyle \iint u\,dy\,dz=\iint d\lambda \,d\lambda '.}
Having determined the nature of the sections of the surfaces of flow by the plane of
{\displaystyle yz,}
the form of the surfaces elsewhere is determined by the conditions (8) and (9). The two functions
{\displaystyle \lambda }
{\displaystyle \lambda '}
thus determined are sufficient to determine the current at every point by equations (15), unity being substituted for
{\displaystyle L.}
On Lines of Flow.
293.] Let a series of values of
{\displaystyle \lambda }
and of
{\displaystyle \lambda '}
be chosen, the successive differences in each series being unity. The two series of surfaces defined by these values will divide space into a system of quadrilateral tubes through each of which there will be a unit current. By assuming the unit sufficiently small, the details of the current may be expressed by these tubes with any desired amount of minuteness. Then if any surface be drawn cutting the system of tubes, the quantity of the current which passes through this surface will be expressed by the number of tubes which cut it, since each tube carries unity of current.
The actual intersections of the surfaces may be called Lines of Flow. When the unit is taken sufficiently small, the number of lines of flow which cut a surface is approximately equal to the number of tubes of flow which cut it, so that we may consider the lines of flow as expressing not only the direction of the current but its strength, since each line of flow through a given section corresponds to a unit current.
On Current-Sheets and Current-Functions.
294.] A stratum of a conductor contained between two consecutive surfaces of flow of one system, say that of
{\displaystyle \lambda ',}
is called a Current-Sheet. The tubes of flow within this sheet are determined by the function
{\displaystyle \lambda .}
If
{\displaystyle \lambda _{A}}
and
{\displaystyle \lambda _{P}}
denote the values of
{\displaystyle \lambda }
at the points
{\displaystyle A}
and
{\displaystyle P}
respectively, then the current from right to left across any line drawn on the sheet from
{\displaystyle A}
to
{\displaystyle P}
is
{\displaystyle \lambda _{P}-\lambda _{A}.}
If
{\displaystyle AP}
be an element,
{\displaystyle ds,}
of a curve drawn on the sheet, the current which crosses this element from right to left is
{\displaystyle {\frac {d\lambda }{ds}}ds.}
This function,
{\displaystyle \lambda ,}
from which the distribution of the current in the sheet can be completely determined, is called the Current-Function.
Any thin sheet of metal or conducting matter bounded on both sides by air or some other non-conducting medium may be treated as a current-sheet, in which the distribution of the current may be expressed by means of a current-function. See Art. 647.
Equation of 'Continuity.'
295.] If we differentiate the three equations (15) with respect to
{\displaystyle x,y,z}
respectively, remembering that
{\displaystyle L}
is a function of
{\displaystyle \lambda }
and
{\displaystyle \lambda ',}
we find
{\displaystyle {\frac {du}{dx}}+{\frac {dv}{dy}}+{\frac {dw}{dz}}=0}
The corresponding equation in Hydrodynamics is called the Equation of 'Continuity.' The continuity which it expresses is the continuity of existence, that is, the fact that a material substance cannot leave one part of space and arrive at another, without going through the space between. It cannot simply vanish in the one place and appear in the other, but it must travel along a continuous path, so that if a closed surface be drawn, including the one place and excluding the other, a material substance in passing from the one place to the other must go through the closed surface. The most general form of the equation in hydrodynamics is
{\displaystyle {\frac {d(\rho u)}{dx}}+{\frac {d(\rho v)}{dy}}+{\frac {d(\rho w)}{dz}}+{\frac {d\rho }{dt}}=0;}
where
{\displaystyle \rho }
signifies the ratio of the quantity of the substance to the volume it occupies, that volume being in this case the differential element of volume, and
{\displaystyle (\rho u),\,(\rho v),}
and
{\displaystyle (\rho w)}
signify the ratio of the quantity of the substance which crosses an element of area in unit of time to that area, these areas being normal to the axes of
{\displaystyle x,\,y,}
and
{\displaystyle z}
respectively. Thus understood, the equation is applicable to any material substance, solid or fluid, whether the motion be continuous or discontinuous, provided the existence of the parts of that substance is continuous. If anything, though not a substance, is subject to the condition of continuous existence in time and space, the equation will express this condition. In other parts of Physical Science, as, for instance, in the theory of electric and magnetic quantities, equations of a similar form occur. We shall call such equations 'equations of continuity' to indicate their form, though we may not attribute to these quantities the properties of matter, or even continuous existence in time and space.
The equation (17), which we have arrived at in the case of electric currents, is identical with (18) if we make
{\displaystyle \rho =1,}
that is, if we suppose the substance homogeneous and incompressible. The equation, in the case of fluids, may also be established by either of the modes of proof given in treatises on Hydrodynamics. In one of these we trace the course and the deformation of a certain element of the fluid as it moves along. In the other, we fix our attention on an element of space, and take account of all that enters or leaves it. The former of these methods cannot be applied to electric currents, as we do not know the velocity with which the electricity passes through the body, or even whether it moves in the positive or the negative direction of the current. All that we know is the algebraical value of the quantity which crosses unit of area in unit of time, a quantity corresponding to
{\displaystyle (\rho u)}
in the equation (18). We have no means of ascertaining the value of either of the factors
{\displaystyle \rho }
or
{\displaystyle u,}
and therefore we cannot follow a particular portion of electricity in its course through the body. The other method of investigation, in which we consider what passes through the walls of an element of volume, is applicable to electric currents, and is perhaps preferable in point of form to that which we have given, but as it may be found in any treatise on Hydrodynamics we need not repeat it here.
Quantity of Electricity which passes through a given Surface.
296.] Let
{\displaystyle \Gamma }
be the resultant current at any point of the surface. Let
{\displaystyle dS}
be an element of the surface, and let
{\displaystyle \epsilon }
be the angle between
{\displaystyle \Gamma }
and the normal to the surface, then the total current through the surface will be
{\displaystyle \iint \Gamma \cos \epsilon \;dS,}
the integration being extended over the surface.
As in Art. 21, we may transform this integral into the form
{\displaystyle \iint \Gamma \cos \epsilon \;dS=\iiint \left({\frac {du}{dx}}+{\frac {dv}{dy}}+{\frac {dw}{dz}}\right)dx\;dy\;dz}
in the case of any closed surface, the limits of the triple integration being those included by the surface. This is the expression for the total efflux from the closed surface. Since in all cases of steady currents this must be zero whatever the limits of the integration, the quantity under the integral sign must vanish, and we obtain in this way the equation of continuity (17).
Retrieved from "https://en.wikisource.org/w/index.php?title=A_Treatise_on_Electricity_and_Magnetism/Part_II/Chapter_VII&oldid=1823415"
|
Shaft with torsional and bending compliance - MATLAB - MathWorks Nordic
N+1
N+1
{l}_{FE\text{_1}}={l}_{FE\text{_2}}=\cdots ={l}_{FE\text{_}N}=\frac{L}{N}
{k}_{FE\text{_1}}={k}_{FE\text{_2}}=\cdots ={k}_{FE\text{_}N}=k
{b}_{FE\text{_1}}={b}_{FE\text{_2}}=\cdots ={b}_{FE\text{_}N}=b
{I}_{FE\text{_1C}}={I}_{FE\text{_1R}}={I}_{FE\text{_2C}}={I}_{FE\text{_2R}}=\cdots ={I}_{FE\text{_}N\text{C}}={I}_{FE\text{_}N\text{R}}=\frac{I}{2N}
N\ge {N}_{\mathrm{min}}
l=\frac{L}{{N}_{\mathrm{min}}}
{l}_{1}={z}_{1}
{l}_{2}={l}_{3}=\frac{\left({z}_{2}-{z}_{1}\right)}{2}
{l}_{4}={l}_{5}=\frac{\left({z}_{3}-{z}_{2}\right)}{2}
{l}_{6}={l}_{7}=\frac{\left({z}_{4}-{z}_{3}\right)}{2}
{l}_{8}={l}_{9}=\frac{\left({z}_{5}-{z}_{4}\right)}{2}
{l}_{10}={l}_{11}=\frac{\left({z}_{6}-{z}_{5}\right)}{2}
{l}_{12}={z}_{7}-{z}_{6}
{J}_{p}=\frac{\pi }{32}\left({D}^{4}-{d}^{4}\right)
m=\frac{\pi }{4}\left({D}^{2}-{d}^{2}\right)\rho l
J=\frac{m}{8}\left({D}^{2}+{d}^{2}\right)=\rho l\cdot {J}_{p}
k={J}_{p}\cdot \frac{G}{l}
d=0
d>0
l
2\frac{ck}{{\omega }_{N}}
{\omega }_{N}=\sqrt{\frac{2k}{J}}.
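The element formulas above (polar moment of area, element mass, mass moment of inertia, torsional stiffness, and natural frequency) can be collected into a small helper. This is a sketch in Python; the function names and argument order are mine, not part of the block's documented interface:

```python
import math

def shaft_segment_props(D, d, rho, G, l):
    """Properties of one finite element of a hollow circular shaft:
    outer diameter D, inner diameter d, density rho, shear modulus G,
    element length l (all in SI units)."""
    Jp = math.pi / 32.0 * (D**4 - d**4)          # polar moment of area
    m = math.pi / 4.0 * (D**2 - d**2) * rho * l  # element mass
    J = m / 8.0 * (D**2 + d**2)                  # mass moment of inertia (= rho*l*Jp)
    k = Jp * G / l                               # torsional stiffness
    return Jp, m, J, k

def element_natural_frequency(k, J):
    """Torsional natural frequency omega_N = sqrt(2k/J) of one element."""
    return math.sqrt(2.0 * k / J)
```

For a solid shaft, set d = 0; the identity J = rho*l*Jp follows directly from substituting the expression for m.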
Fr=\left[{F}_{xB1},{F}_{yB1},{F}_{xI1},{F}_{yI1},{F}_{xF1},{F}_{yF1}\right]
M=\left[{M}_{xB1},{M}_{yB1},{M}_{xI1},{M}_{yI1},{M}_{xF1},{M}_{yF1}\right]
V=\left[{V}_{xB1},{V}_{yB1},{V}_{xI1},{V}_{yI1},{V}_{xF1},{V}_{yF1}\right]
M=\left[{M}_{xB1},{M}_{yB1},{M}_{xI1},{M}_{yI1},{M}_{xF1},{M}_{yF1}\right]
Fr=\left[{F}_{xB1},{F}_{yB1},{F}_{xF1},{F}_{yF1}\right]
Fr=\left[{F}_{xB1},{F}_{yB1},{F}_{xI1},{F}_{yI1},{F}_{xI2},{F}_{yI2},{F}_{xF1},{F}_{yF1}\right]
N+1
M\stackrel{¨}{\stackrel{\to }{x}}+\left(B+{G}_{Disk}\Omega \right)\stackrel{˙}{\stackrel{\to }{x}}+\left(K+{G}_{Disk}\stackrel{˙}{\Omega }\right)\stackrel{\to }{x}=\stackrel{\to }{f}.
4\left(N+1\right)×4\left(N+1\right)
4\left(N+1\right)×4\left(N+1\right)
4\left(N+1\right)×4\left(N+1\right)
4\left(N+1\right)×4\left(N+1\right)
\stackrel{\to }{x}
4\left(N+1\right)×1
\stackrel{\to }{f}
4\left(N+1\right)×1
M={M}_{1/2}+{M}_{2/3}+\dots {M}_{i/i+1}+\dots {M}_{N/N+1} + {\sum }^{\text{}}{M}_{disk, i},
{M}_{i/\left(i+1\right)}
{M}_{i/\left(i+1\right)}
\left(4i-3\right):\left(4i+4\right)
\left(4i-3\right):\left(4i+4\right)
{M}_{i/\left(i+1\right)}= \left[\begin{array}{cccccccccccc}0& & & & & & & & & & & \\ & \ddots & & & & & & & & & & \\ & & \frac{1}{2}m& 0& 0& 0& 0& 0& 0& 0& & \\ & & 0& \frac{1}{2}m& 0& 0& 0& 0& 0& 0& & \\ & & 0& 0& {I}_{d}& 0& 0& 0& 0& 0& & \\ & & 0& 0& 0& {I}_{d}& 0& 0& 0& 0& & \\ & & 0& 0& 0& 0& \frac{1}{2}m& 0& 0& 0& & \\ & & 0& 0& 0& 0& 0& \frac{1}{2}m& 0& 0& & \\ & & 0& 0& 0& 0& 0& 0& {I}_{d}& 0& & \\ & & 0& 0& 0& 0& 0& 0& 0& {I}_{d}& & \\ & & & & & & & & & & \ddots & \\ & & & & & & & & & & & 0\end{array}\right],
l
m= \left(\frac{\pi }{4}\right)\left({D}^{2}-{d}^{2}\right)\rho l
l
{I}_{d}=\frac{J}{4}+\frac{m}{6}{\left(\frac{l}{2}\right)}^{2}
{\sum }^{\text{}}{M}_{disk, i}
i
{M}_{disk, i}\left(\left[\left(4i-3\right):4i\right],\left[\left(4i-3\right):4i\right]\right)=\left[\begin{array}{cccc}{M}_{disk, i}& 0& 0& 0\\ 0& {M}_{disk, i}& 0& 0\\ 0& 0& {I}_{D,disk, i}& 0\\ 0& 0& 0& {I}_{D,disk, i}\end{array}\right],
{I}_{D,disk, i}\text{= 0}
B=\text{ }\alpha M\text{ +}\beta K+{B}_{support},
{B}_{support}\left(\left[\left(4\text{i }-\text{ }3\right)\text{ }:\text{ }4\text{i}\right],\text{ }\left[\left(4\text{i }-\text{ }3\right)\text{ }:\text{ }4\text{i}\right]\right)=\text{ }\left[\begin{array}{cccc}{b}_{xx}& {b}_{xy}& 0& 0\\ {b}_{yx}& {b}_{yy}& 0& 0\\ 0& 0& {b}_{\theta \theta }& 0\\ 0& 0& 0& {b}_{\phi \phi }\end{array}\right],
\left[{b}_{xx} {b}_{xy} {b}_{yx} {b}_{yy}\right]
\left[{b}_{\theta \theta } {b}_{\phi \phi }\right]
{G}_{disk, i}
{G}_{disk, i}\left(\left[\left(4i-3\right):4i\right],\left[\left(4i-3\right):4i\right]\right)=\left[\begin{array}{cccc}0& 0& 0& 0\\ 0& 0& 0& 0\\ 0& 0& 0& \Omega {I}_{P,disk, i}\\ 0& 0& -\Omega {I}_{P,disk, i}& 0\end{array}\right],
{I}_{P,disk, i}\text{= 0}
K ={K}_{1/2} + {K}_{2/3} + ...+{K}_{N/N+1} + {\sum }^{\text{}}{K}_{\text{support}},
{K}_{i/i+1}
{i}^{th}
{\left(i+1\right)}^{th}
\left(4i-3\right):\left(4i+4\right)
\left(4i-3\right):\left(4i+4\right)
{K}_{i/i+1}=\frac{2EI}{{l}^{3}} \left[\begin{array}{cccccccccccc}0& & & & & & & & & & & \\ & \ddots & & & & & & & & & & \\ & & 6& 0& 0& 3l& -6& 0& 0& 3l& & \\ & & 0& 6& -3l& 0& 0& -6& -3l& 0& & \\ & & 0& -3l& 2{l}^{2}& 0& 0& 3l& {l}^{2}& 0& & \\ & & 3l& 0& 0& 2{l}^{2}& -3l& 0& 0& {l}^{2}& & \\ & & -6& 0& 0& -3l& 6& 0& 0& -3l& & \\ & & 0& -6& 3l& 0& 0& 6& 3l& 0& & \\ & & 0& -3l& {l}^{2}& 0& 0& 3l& 2{l}^{2}& 0& & \\ & & 3l& 0& 0& {l}^{2}& -3l& 0& 0& 2{l}^{2}& & \\ & & & & & & & & & & \ddots & \\ & & & & & & & & & & & 0\end{array}\right],
l
{K}_{support}\left(\left[\left(4\text{i }-\text{ }3\right)\text{ }:\text{ }4\text{i}\right],\text{ }\left[\left(4\text{i }-\text{ }3\right)\text{ }:\text{ }4\text{i}\right]\right)=\text{ }\left[\begin{array}{cccc}{k}_{xx}& {k}_{xy}& 0& 0\\ {k}_{yx}& {k}_{yy}& 0& 0\\ 0& 0& {k}_{\theta \theta }& 0\\ 0& 0& 0& {k}_{\phi \phi }\end{array}\right],
\left[{k}_{xx} {k}_{xy} {k}_{yx} {k}_{yy}\right]
\left[{k}_{\theta \theta } {k}_{\phi \phi }\right]
{x}_{i}= 0, {y}_{i}= 0, {\theta }_{i}= 0, {\phi }_{i}= 0
{x}_{i}= 0, {y}_{i}= 0
{K}_{Support}\left(\Omega \right)=lookup\left( |{\Omega }_{Ref}|, {K}_{Support, Ref}, \Omega , interpolation=linear, extrapolation=nearest\right),
\stackrel{\to }{x}
{\left(i+1\right)}^{th}
\stackrel{\to }{x}=\left[\begin{array}{c}\begin{array}{c}\begin{array}{c}\begin{array}{c}⋮\\ \begin{array}{c}{x}_{i}\\ {y}_{i}\\ {\theta }_{i}\end{array}\\ {\phi }_{i}\\ \begin{array}{c}\begin{array}{c}{x}_{i+1}\\ {y}_{i+1}\\ {\theta }_{i+1}\end{array}\\ {\phi }_{i+1}\\ ⋮\end{array}\end{array}\end{array}\end{array}\end{array}\right].
{i}^{th}
{\stackrel{\to }{f}}_{4\left(i-1:i-2\right)}= \left[\begin{array}{c}m{\epsilon }_{j,offset}\left({\Omega }^{2}{}_{i}\mathrm{cos}\left({\phi }_{shaft,i}+{\phi }_{offset, j}\right) +\frac{\partial {\Omega }_{i}}{\partial t} \mathrm{sin}\left({\phi }_{shaft, i}+{\phi }_{offset, j}\right) \right)\\ m{\epsilon }_{j,offset}\left({\Omega }^{2}{}_{i}\mathrm{sin}\left({\phi }_{shaft,i}+{\phi }_{offset, j}\right) -\frac{\partial {\Omega }_{i}}{\partial t} \mathrm{cos}\left({\phi }_{shaft, i}+{\phi }_{offset, j}\right) \right)\end{array}\right],
4\left(N+1\right)
M\stackrel{¨}{\stackrel{\to }{x}}+\left(B+{G}_{Disk}\Omega \right)\stackrel{˙}{\stackrel{\to }{x}}+\left(K+{G}_{Disk}\stackrel{˙}{\Omega }\right)\stackrel{\to }{x}=\stackrel{\to }{f}.
\stackrel{\to }{x}
{N}_{Min, Eig}= round\left(\frac{L}{dz}\right),
4\left(N+1\right)×M
\stackrel{\to }{x}
\stackrel{\to }{x}
\stackrel{\to }{x}
{M}_{Modal}= {H}^{T}MH
{K}_{Modal} = {H}^{T}KH
{B}_{Modal} = {H}^{T}BH
{G}_{Modal} = {H}^{T}{G}_{Disk}H
{\stackrel{\to }{f}}_{Modal} = {H}^{T}\stackrel{\to }{f}
{M}_{Modal}\stackrel{¨}{\stackrel{\to }{\eta }}+\left({B}_{Modal}+{G}_{Modal}\Omega \right)\stackrel{˙}{\stackrel{\to }{\eta }}+\left({K}_{Modal}+{G}_{Modal}\stackrel{˙}{\Omega }\right)\stackrel{\to }{\eta } = {\stackrel{\to }{f}}_{Modal},
\stackrel{\to }{\eta }
\stackrel{\to }{x}=H\stackrel{\to }{\eta }
{M}_{Modal}\stackrel{¨}{\stackrel{\to }{\eta }}+ \left({B}_{Modal}\left(\Omega \right)+{G}_{Modal}\left(\Omega \right)\Omega \right)\stackrel{˙}{\stackrel{\to }{\eta }}+ \left({K}_{Modal}\left(\Omega \right)+{G}_{Modal}\left(\Omega \right)\stackrel{˙}{\Omega }\right)\stackrel{\to }{\eta } = {\stackrel{\to }{f}}_{Modal}\left(\Omega \right),
{K}_{Modal}\left(\Omega \right)=lookup\left( |{\Omega }_{Ref}|, {K}_{Modal, Ref}, \Omega , interpolation=linear, extrapolation=nearest\right) ,
{B}_{Modal}\left(\Omega \right)=lookup\left( |{\Omega }_{Ref}|, {B}_{Modal, Ref}, \Omega , interpolation=linear, extrapolation=nearest\right) ,
{G}_{Modal}\left(\Omega \right)=lookup\left( |{\Omega }_{Ref}|, {G}_{Modal, Ref}, \Omega , interpolation=linear, extrapolation=nearest\right) ,
{\stackrel{\to }{f}}_{Modal}\left(\Omega \right)=lookup\left( |{\Omega }_{Ref}|, {\stackrel{\to }{f}}_{Modal, Ref}, \Omega , interpolation=linear, extrapolation=nearest\right) ,
\stackrel{\to }{\eta }
|
A Mathematical Model of Alveolar Gas Exchange in Partial Liquid Ventilation | J. Biomech Eng. | ASME Digital Collection
Vinod Suresh,
Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI 48109
Joseph C. Anderson,
James B. Grotberg,
Department of Surgery, University of Michigan, Ann Arbor, MI 48109
Contributed by the Bioengineering Division for publication in the JOURNAL OF BIOMECHANICAL ENGINEERING. Manuscript received by the Bioengineering Division December 1, 2003; revision received September 8, 2004. Associate Editor: James Moore.
Suresh, V., Anderson, J. C., Grotberg, J. B., and Hirschl, R. B. (March 8, 2005). "A Mathematical Model of Alveolar Gas Exchange in Partial Liquid Ventilation." ASME. J Biomech Eng. February 2005; 127(1): 46–59. https://doi.org/10.1115/1.1835352
In partial liquid ventilation (PLV), perfluorocarbon (PFC) acts as a diffusion barrier to gas transport in the alveolar space since the diffusivities of oxygen and carbon dioxide in this medium are four orders of magnitude lower than in air. Therefore convection in the PFC layer resulting from the oscillatory motions of the alveolar sac during ventilation can significantly affect gas transport. For example, a typical value of the Péclet number in air ventilation is Pe∼0.01, whereas in PLV it is Pe∼20. To study the importance of convection, a single terminal alveolar sac is modeled as an oscillating spherical shell with gas, PFC, tissue and capillary blood compartments. Differential equations describing mass conservation within each compartment are derived and solved to obtain time periodic partial pressures. Significant partial pressure gradients in the PFC layer and partial pressure differences between the capillary and gas compartments
PC-Pg
are found to exist. Because Pe≫1, temporal phase differences are found to exist between
PC-Pg
and the ventilatory cycle that cannot be adequately described by existing non-convective models of gas exchange in PLV. The mass transfer rate is nearly constant throughout the breath when Pe≫1, but when Pe≪1 nearly 100% of the transport occurs during inspiration. A range of respiratory rates (RR), including those relevant to high frequency oscillation (HFO)+PLV, tidal volumes
VT
and perfusion rates are studied to determine the effect of heterogeneous distributions of ventilation and perfusion on gas exchange. The largest changes in
PCO2
PCCO2
occur at normal and low perfusion rates respectively as RR and
VT
are varied. At a given ventilation rate, a low RR-high
VT
combination results in higher
PCO2,
PCCO2
and
PC-Pg
than a high RR-low
VT
combination.
pneumodynamics, physiological models, diffusion barriers, biodiffusion, organic compounds, oxygen, carbon compounds, blood, biological tissues, differential equations, lung, mass transfer, convection, biothermics, Partial Liquid Ventilation, Liquid Breathing, Perfluorocarbon, Gas Exchange, Convection
Biological tissues, Blood, Carbon dioxide, Diffusion (Physics), Pressure, Ventilation, Convection, Lung, Tides, Mass transfer
Survival of Mammals Breathing Organic Liquids Equilibrated With Oxygen at Atmospheric Pressure
Partial Liquid Breathing With Perflubron Improves Arterial Oxygenation in Acute Canine Lung Injury
Improvement of Gas Exchange, Pulmonary Function, and Lung Injury With Partial Liquid Ventilation. A Study Model in a Setting of Severe Respiratory Failure
Perfluorocarbon-Associated Gas Exchange (Partial Liquid Ventilation) in Respiratory Distress Syndrome: A Prospective, Randomized, Controlled Study
Perfluorocarbon-Associated Gas Exchange Improves Oxygenation, Lung Mechanics, and Survival in a Model of Adult Respiratory Distress Syndrome
Computer Tomographic Assessment of Perfluorocarbon and Gas Distribution During Partial Liquid Ventilation for Acute Respiratory Failure
Initial Experience With Partial Liquid Ventilation in Pediatric Patients With the Acute Respiratory Distress Syndrome
Initial Experience With Partial Liquid Ventilation in Adult Patients With the Acute Respiratory Distress Syndrome
Prospective, Randomized, Controlled Pilot Study of Partial Liquid Ventilation in Adult Acute Respiratory Distress Syndrome
Partial Liquid Ventilation With Perflubron in Premature Infants With Severe Respiratory Distress Syndrome: The LiquiVent Study Group
Pulmonary Blood Flow Distribution During Partial Liquid Ventilation
Distribution of Pulmonary Blood Flow in the Perfluorocarbon-Filled Lung
Shunt and Ventilation-Perfusion Distribution During Partial Liquid Ventilation in Healthy Piglets
Effect of Increasing Perfluorocarbon Dose on V(Over Dot)A/Q(Over Dot) Distribution During Partial Liquid Ventilation in Acute Lung Injury
Regional VA, Q, and VA/Q During PLV: Effects of Nitroprusside and Inhaled Nitric Oxide
van Lobensels
Modeling Diffusion Limitation of Gas Exchange in Lungs Containing Perfluorocarbon
High-Frequency Oscillatory Ventilation of the Perfluorocarbon-Filled Lung: Dose-Response Relationships in an Animal Model of Acute Lung Injury
High-Frequency Oscillatory Ventilation and Partial Liquid Ventilation After Acute Lung Injury in Premature Lambs With Respiratory Distress Syndrome
Partial Liquid Ventilation: a Comparison Using Conventional and High-Frequency Techniques in an Animal Model of Acute Respiratory Failure
High-Frequency Partial Liquid Ventilation in Respiratory Distress Syndrome: Hemodynamics and Gas Exchange
Analysis of Stress Distribution in the Alveolar Septa of Normal and Simulated Emphysematic Lungs
Guyton, A. C., 1986, Textbook of Medical Physiology, 7th ed., W. B. Saunders Company, Philadelphia.
Simple, Accurate Equations for Human-Blood O2 Dissociation Computations
Digital Computer Procedure for Conversion of PcO2 Into Blood Co2 Content
Weibel, E. R., 1963, Morphometry of the Human Lung, Academic, New York, p. 151.
Haefeli-Bleuer
Morphometry of the Human Pulmonary Acinus
Perfluorocarbon-Associated Gas Exchange in Normal and Acid-Injured Large Sheep
Efficacy of Perfluorocarbon Partial Liquid Ventilation in a Large Animal Model of Acute Respiratory Failure
High-Frequency Oscillatory Ventilation With Partial Liquid Ventilation in a Model of Acute Respiratory Failure
Measurement of Tidal Lung Volumes in Neonates During High-Frequency Oscillation
Reliable Tidal Volume Estimates at the Airway Opening With an Infant Monitor During High-Frequency Oscillatory Ventilation
Diffusion Coefficients and Solubility Coefficients for Gases in Biological Fluids and Tissues: A Review
Diffusion-Coefficients of O2, N2, and Co2 in Fluorinated Ethers
Gas Transport in Human Lung
Flow Patterns in Models of Small Airway Units of Lung
Wei, H. H., and Grotberg, J. B., “Flow and Transport in a Rhythmically Breathing Alveolus Partially Filled With Liquid,” Phys. Fluids (Submitted).
|
IR Reflective Sensor 1-4mm - 1146_0 at Phidgets
Measure the distance of an opaque object up to 4mm away using reflected IR light. Connects to an Analog Input or VINT Hub port.
Other Distance Sensors
The 1146_0 IR Reflective Sensor uses an infra-red LED and a phototransistor to measure the distance of an object between 1mm and 4mm away. This sensor can also detect the presence of an object up to 9mm away, but it won't be able to reliably measure the distance.
This sensor works best with objects with smooth, opaque surfaces. Because of this, you can also use it to differentiate between a reflective object and a non-reflective object at the same distance.
Select the 1146 from the Sensor Type drop-down menu. The example will now convert the voltage into distance (mm) automatically. Converting the voltage to distance (mm) is not specific to this example, it is handled by the Phidget libraries, with functions you have access to when you begin developing!
The 1146 can detect the distance of an object from 1mm to 4mm away. Objects with smooth, opaque surfaces are typically easier to detect.
The 1146 voltage changes from 4.5V to 0.15V as the object is moving closer to the sensor from a distance of 4mm. When the object is more than 4mm away, you may notice some change in voltage as the object enters or leaves the sensor's field of view, but this value does not represent the actual distance to the object. The exact equation for the sensor is as follows:
The Phidget libraries can automatically convert sensor voltage into distance (mm) by selecting the appropriate SensorType. See the Phidget22 API for more details. The formula to translate voltage ratio into distance is:
{\displaystyle {\text{Distance (mm)}}=1.3927e^{({\text{VoltageRatio}}\times 1.967)}}
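The same conversion the Phidget libraries perform can be reproduced directly from the calibration formula above (a sketch in Python; the function name is mine):

```python
import math

def ir_1146_distance_mm(voltage_ratio):
    """Convert a 1146 VoltageRatio reading to distance in millimetres.
    Only meaningful inside the sensor's 1.5 mm - 4 mm measurement range."""
    return 1.3927 * math.exp(voltage_ratio * 1.967)
```

A larger voltage ratio maps to a larger reported distance, consistent with the voltage falling as the object approaches the sensor.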
Because this sensor uses infrared light, it works just as well in both dim and well lit areas.
Measurement Distance Min 1.5 mm
Measurement Distance Max 4 mm
Detecting Distance Min 200 μm
January 2013 0 N/A Product Release
Have a look at our distance sensors:
Measurement Distance Min
Measurement Distance Max
1146_0B $5.00 Distance (Infrared) VoltageRatio Input 1.5 mm 4 mm
3520_0 $14.00 Distance (Infrared) Sharp Adapter 40 mm 300 mm
3521_0 $12.00 Distance (Infrared) Sharp Adapter 100 mm 800 mm
3522_0 $16.00 Distance (Infrared) Sharp Adapter 200 mm 1.5 m
DST1000_0 $30.00 Infrared (Time-of-Flight) VINT 4 mm * 170 mm
DST1001_0 $30.00 Infrared (Time-of-Flight) VINT 20 mm * 650 mm
DST1002_0 $35.00 Infrared (Time-of-Flight) VINT 20 mm * 1.3 m
DST1200_0 $25.00 Distance (Sonar) VINT 40 mm 10 m
|
Elliptic-curve Diffie–Hellman - Wikipedia
Elliptic-curve Diffie–Hellman (ECDH) is a key agreement protocol that allows two parties, each having an elliptic-curve public–private key pair, to establish a shared secret over an insecure channel.[1][2][3] This shared secret may be directly used as a key, or to derive another key. The key, or the derived key, can then be used to encrypt subsequent communications using a symmetric-key cipher. It is a variant of the Diffie–Hellman protocol using elliptic-curve cryptography.
Key establishment protocol
The following example illustrates how a shared key is established. Suppose Alice wants to establish a shared key with Bob, but the only channel available for them may be eavesdropped by a third party. Initially, the domain parameters (that is,
{\displaystyle (p,a,b,G,n,h)}
in the prime case or
{\displaystyle (m,f(x),a,b,G,n,h)}
in the binary case) must be agreed upon. Also, each party must have a key pair suitable for elliptic curve cryptography, consisting of a private key
{\displaystyle d}
(a randomly selected integer in the interval
{\displaystyle [1,n-1]}
) and a public key represented by a point
{\displaystyle Q}
(where
{\displaystyle Q=d\cdot G}
, that is, the result of adding
{\displaystyle G}
to itself
{\displaystyle d}
times). Let Alice's key pair be
{\displaystyle (d_{\text{A}},Q_{\text{A}})}
and Bob's key pair be
{\displaystyle (d_{\text{B}},Q_{\text{B}})}
. Each party must know the other party's public key prior to execution of the protocol.
Alice computes point
{\displaystyle (x_{k},y_{k})=d_{\text{A}}\cdot Q_{\text{B}}}
. Bob computes point
{\displaystyle (x_{k},y_{k})=d_{\text{B}}\cdot Q_{\text{A}}}
. The shared secret is
{\displaystyle x_{k}}
(the x coordinate of the point). Most standardized protocols based on ECDH derive a symmetric key from
{\displaystyle x_{k}}
using some hash-based key derivation function.
The shared secret calculated by both parties is equal, because
{\displaystyle d_{\text{A}}\cdot Q_{\text{B}}=d_{\text{A}}\cdot d_{\text{B}}\cdot G=d_{\text{B}}\cdot d_{\text{A}}\cdot G=d_{\text{B}}\cdot Q_{\text{A}}}
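This equality can be demonstrated end to end on a toy curve. The sketch below, in Python, uses the small textbook curve y² = x³ + 2x + 2 (mod 17) with generator G = (5, 1) and group order 19; real deployments use standardized curves and constant-time implementations:

```python
# Toy ECDH over y^2 = x^3 + 2x + 2 (mod 17) -- illustrative only.
p, a = 17, 2
G, n = (5, 1), 19

def add(P, Q):
    """Add two curve points; None represents the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                       # P + (-P) = infinity
    if P == Q:                            # point doubling
        s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:                                 # point addition
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (s * s - x1 - x2) % p
    return (x3, (s * (x1 - x3) - y1) % p)

def mul(k, P):
    """Scalar multiplication k*P by double-and-add."""
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

dA, dB = 3, 7                        # private keys in [1, n-1]
QA, QB = mul(dA, G), mul(dB, G)      # public keys
assert mul(dA, QB) == mul(dB, QA)    # both parties reach the same point
```

The shared secret is the x coordinate of the common point, here x_k = 6.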
The only information about her key that Alice initially exposes is her public key. So, no party except Alice can determine Alice's private key (Alice of course knows it by having selected it), unless that party can solve the elliptic curve discrete logarithm problem. Bob's private key is similarly secure. No party other than Alice or Bob can compute the shared secret, unless that party can solve the elliptic curve Diffie–Hellman problem.
The public keys are either static (and trusted, say via a certificate) or ephemeral (also known as ECDHE, where final 'E' stands for "ephemeral"). Ephemeral keys are temporary and not necessarily authenticated, so if authentication is desired, authenticity assurances must be obtained by other means. Authentication is necessary to avoid man-in-the-middle attacks. If one of either Alice's or Bob's public keys is static, then man-in-the-middle attacks are thwarted. Static public keys provide neither forward secrecy nor key-compromise impersonation resilience, among other advanced security properties. Holders of static private keys should validate the other public key, and should apply a secure key derivation function to the raw Diffie–Hellman shared secret to avoid leaking information about the static private key. For schemes with other security properties, see MQV.
If Alice maliciously chooses invalid curve points for her key and Bob does not validate that Alice's points are part of the selected group, she can collect enough residues of Bob's key to derive his private key. Several TLS libraries were found to be vulnerable to this attack.[4]
While the shared secret may be used directly as a key, it can be desirable to hash the secret to remove weak bits due to the Diffie–Hellman exchange.
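A minimal sketch of that hashing step (illustrative only; standardized protocols use a proper KDF such as HKDF, and `derive_key` is a hypothetical helper name, not a library function):

```python
import hashlib

def derive_key(x_k: int, info: bytes = b"ecdh-demo", length: int = 32) -> bytes:
    """Hash the x-coordinate of the shared point into a uniform symmetric key.
    Sketch only: real protocols use a standardized KDF such as HKDF (RFC 5869)."""
    shared = x_k.to_bytes((x_k.bit_length() + 7) // 8 or 1, "big")
    return hashlib.sha256(shared + info).digest()[:length]
```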
Curve25519 is a popular set of elliptic curve parameters and reference implementation by Daniel J. Bernstein in C. Bindings and alternative implementations are also available.
LINE messenger app has used the ECDH protocol for its "Letter Sealing" end-to-end encryption of all messages sent through said app since October 2015.[5]
Signal Protocol uses ECDH to obtain post-compromise security. Implementations of this protocol are found in Signal, WhatsApp, Facebook Messenger and Skype.
^ NIST, Special Publication 800-56A, Recommendation for Pair-Wise Key Establishment Schemes Using Discrete Logarithm Cryptography, March, 2006.
^ Certicom Research, Standards for efficient cryptography, SEC 1: Elliptic Curve Cryptography, Version 2.0, May 21, 2009.
^ NSA Suite B Cryptography, Suite B Implementers' Guide to NIST SP 800-56A Archived 2016-03-06 at the Wayback Machine, July 28, 2009.
^ Tibor Jager; Jorg Schwenk; Juraj Somorovsky (2015-09-04). "Practical Invalid Curve Attacks on TLS-ECDH" (PDF). European Symposium on Research in Computer Security (ESORICS'15).
^ JI (13 October 2015). "New generation of safe messaging: "Letter Sealing"". LINE Engineers' Blog. LINE Corporation. Retrieved 5 February 2018.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Elliptic-curve_Diffie–Hellman&oldid=1085310059"
|
Implementing a Lucas-Kanade tracker from scratch
Understanding the basics of Optical Flow and XCode
§Theory
In Computer Vision, Optical Flow deals with the detection of apparent movement between the frames of a video, or between images. One of the simplest approaches is the Lucas-Kanade tracker, which solves the Optical Flow equation using the least-squares method.
§Method
The Optical Flow equation for Lucas-Kanade assumes that the change - or displacement - of moving objects between successive frames is small. This can be extended to fast-moving objects using other methods, and we will cover that in a separate post. Assuming small movement between frames, we end up with the following simple equation:
\begin{align} I_xV_x+I_yV_y = -I_t \end{align}
The proof can be found on Wikipedia, so I'm not going to go into detail. But in simple terms, I_x and I_y denote the intensity change in the x and y directions, V_x and V_y stand for the velocity components we need to solve for, and I_t is the change in intensity w.r.t. time (between frames).
If you're astute in math you will have noticed that this is a single equation in two unknowns (V_x and V_y). In order to solve it, we need to introduce additional restrictions, which means making assumptions about the data. Different solutions make different assumptions to handle different conditions.
§Lucas-Kanade Solution
Lucas-Kanade is one of the oldest solutions for the Optical Flow equation, and it assumes that the movement between successive frames is small and uniform within the window being considered. If we do this, we can assume that the solution for the equation we saw before is the same for all these pixels. Namely,
\begin{bmatrix} I_x(p_1) & I_y(p_1) \\ I_x(p_2) & I_y(p_2) \\ I_x(p_3) & I_y(p_3) \\ \vdots & \vdots \end{bmatrix} \begin{bmatrix} V_x \\ V_y \end{bmatrix} = \begin{bmatrix} -I_t(p_1) \\ -I_t(p_2) \\ -I_t(p_3) \\ \vdots \end{bmatrix}
Now, we have more equations than we have unknowns. Once again, the assumption of uniform velocity means V_x and V_y are just two variables for the whole window. Now all we need are windows to look into.
§Corner Tracking
We need to identify points where motion can be detected in successive frames. It stands to reason that motion is detected most easily at corners, as there may not be enough variation in uniform areas within the tracked object.
The science of corner detection is almost as deep as that of optical flow, and the two often go hand in hand. For this implementation, as we're focusing on Optical Flow (and because of my inexperience), we're going to pick the simplest. Moravec Corner-Detection makes the assumption that a corner is a point of low self-similarity. There are many complex mathematical implementations of this, but we're simply looking for a few corners so we can see our algorithm in action. But before we do that, we need to set up.
§XCode
We're going to need some functionality for capturing and representing images. For this purpose, (and this purpose alone) we're going to use OpenCV. Later on - once this concept is stable - we can start using functions from OpenCV so we're not building on reinvented wheels.
Setting up OpenCV in XCode turned out to be relatively painless. There are two tutorials online that made this easy, one on installing OpenCV libraries on Mac, and the other on linking the installed libraries to XCode. Make sure the libraries are installed by running the following piece of code.
If it compiles and links fine, we're ready for implementation.
§Moravec Corner Detection
Implementation here is quite easy. Here's a simplified version of the full system:
deque<Point> findCorners(Mat img, int xarea, int yarea, int thres, bool verbose=true) {
    deque<Point> corners;
    ofstream log; //This will be used for dumping raw data for corner analysis
    log.open("log.csv");
    log << "x,y,score1,score2\n";
    //Image for marking up corners
    Mat outimg = img.clone();
    int dimx = img.cols, dimy = img.rows;
    //Slide a window across the image, one window-size at a time
    for(int startx=0;(startx+xarea)<dimx;startx+=xarea) {
        for(int starty=0;(starty+yarea)<dimy;starty+=yarea) {
            Mat curarea = img(Range(starty,min(starty+yarea,dimy)),Range(startx,min(dimx,startx+xarea)));
            double results[2] = {0,0}; //results[0]: horizontal, results[1]: vertical
            for(int dir = 0;dir<4;dir++) {
                int newsx=startx,newsy=starty;
                //Check similarity against the neighbouring window in each direction
                switch(dir) {
                    case 0: newsx-=xarea; newsx = max(newsx,0); break;    //west
                    case 1: newsy-=yarea; newsy = max(newsy,0); break;    //north
                    case 2: newsx+=xarea; newsx = min(newsx,dimx); break; //east
                    case 3: newsy+=yarea; newsy = min(newsy,dimy); break; //south
                }
                Mat newarea = img(Range(newsy,min(newsy+yarea,dimy)),Range(newsx,min(newsx+xarea,dimx)));
                if(newarea.cols!=curarea.cols || newarea.rows!=curarea.rows)
                    continue; //clipped at the image border; skip this direction
                Mat diff = abs(curarea-newarea);
                results[dir%2] += mean(diff)[0]; //accumulate per-axis dissimilarity
            }
            //Two samples per axis, so average them
            results[0]/=2;
            results[1]/=2;
            //thresholding: a corner must be dissimilar along both axes
            if(results[0]>=thres && results[1]>=thres) {
                corners.push_back(Point(startx,starty));
                rectangle(outimg, Point(startx,starty), Point(startx+xarea,starty+yarea), Scalar(0),2);
            }
            if(verbose)
                log << startx << "," << starty << "," << results[0] << "," << results[1] << "\n";
        }
    }
    log.close();
    return corners;
}
We're isolating windows, looking at equal sized windows in each direction, and compiling a score that would tell us the probability of a particular window containing a corner. We're also taking into account the size of the image and any possible issues we could have with clipping.
It really shows how rusty my C++ is. There are many optimizations we can make. The simplest one would be to detect anomalies using larger windows, and zoom in using the same method to the required window size. This would cut down on the number of windows we need to analyze before we're done. Also, there is a better way of corner detection that doesn't depend on cardinal directions (north, south, east, west) for identification. However, there are already built and optimized state-of-the-art algorithms in the OpenCV library that we can use. For simplicity, we're going to keep corner detection as it is.
Looking at the results, we've managed to isolate a few useful markers to track motion:
§Lucas-Kanade Corner Tracking
Remember the equation we considered before:
A = \begin{bmatrix} I_x(p_1) & I_y(p_1) \\ I_x(p_2) & I_y(p_2) \\ I_x(p_3) & I_y(p_3) \\ \vdots & \vdots \end{bmatrix}
v = \begin{bmatrix} V_x \\ V_y \end{bmatrix}
T = \begin{bmatrix} -I_t(p_1) \\ -I_t(p_2) \\ -I_t(p_3) \\ \vdots \end{bmatrix}
Av = T
Since A is tall (one row per pixel), we solve in the least-squares sense:
v = (A^TA)^{-1}A^TT
And here we can see why we're looking for corners. The 2×2 matrix A^TA needs to be invertible, and by selecting corners we exclude the uniform areas of the image where it becomes singular.
First, we compute I_x and I_y for each pixel using the intensity values of adjacent pixels:
//Central differences; Ix and Iy should be signed Mats (e.g. CV_16S), since gradients can be negative
Ix.at<short>(y,x) = ((int)imgA.at<uchar>(y,px)-(int)imgA.at<uchar>(y,nx))/2;
Iy.at<short>(y,x) = ((int)imgA.at<uchar>(py,x)-(int)imgA.at<uchar>(ny,x))/2;
Then we compute I_t by subtracting intensity values between frames:
double curdI = ((int)imgA.at<uchar>(y,x)-(int)imgB.at<uchar>(y,x));
And here is the part I'm ashamed of. I was working with a vanilla install of XCode, and not wanting to bother with matrix libraries, I decided to implement it by hand:
double detG = (G[0][0]*G[1][1])-(G[0][1]*G[1][0]);
double Ginv[2][2] = {{0,0},{0,0}};
Ginv[0][0] = G[1][1]/detG;
Ginv[0][1] = -G[0][1]/detG;
Ginv[1][0] = -G[1][0]/detG;
Ginv[1][1] = G[0][0]/detG;
But it works, and final implementation shows both the corners and the velocity values we've computed.
This is an intermediate step, and it serves to illustrate the underlying concept behind Lucas-Kanade. Next we'll move on to Horn–Schunck and eventually arrive at a more application-specific solution.
The complete working implementation can be found here.
|
Google Maps Ranking - Proximity Study | Rankings
Google Maps Ranking – Proximity Study
Last updated on 2021-06-30 – Data can be found here
Google relies on user proximity to provide local results for keywords. How vital is the proximity factor? How fast does the ranking decrease by distance from the location of a business?
The study’s goals are to estimate the drop in the ranking by geographical distance and to measure the variability due to the local context (city).
For this study, we focused on personal injury lawyers in major US cities. We collected 20 top-ranking personal injury lawyers in each of the 50 largest cities.
For each of these law firms, we used the service Local Falcon to collect Google My Business rankings for listings that show up either in the Maps portion of the organic search or from a search in the Google Maps Local Finder (i.e. Google Maps).
We collected their rankings for the keyword car accident lawyer at 225 locations on a 15×15 grid centered on their geographic location.
This is an example for the city of Miami:
At the location of the law firm, it ranks 1st for the keyword car accident lawyer. Its ranking drops, however, as soon as we move further away from its location. At the fringe of the grid, the law firm no longer appears in the top 20 (its exact ranking is not tracked anymore by Local Falcon).
This drop in the ranking can vary drastically between law firms, even in the same city. We see this variation if we flank our initial example with two other samples from Miami:
On the left, we see a very rapid drop in the ranking. On the right, we witness the case of a law firm’s ranking that does not drop much. The grid is always centered on the location of the target law firm.
To account for this high variation between the firms, we need to gather several samples in each city; we collected 20. We used a radius of 10 miles. This allows us to highlight the drop in ranking around the firm’s exact location and identify the distance where most firms drop out of the top 20. Furthermore, for the ten largest cities, we also collected ten samples at a 5 miles radius, a finer granularity, to better highlight the drop in ranking around the firm’s location.
After collecting the data, we can reproduce the grid shown above with a heat map. Below is for instance the same law firm. Each tile is the rank of the law firm observed by Local Falcon on the 15×15 grid centered at the firm’s location. The grid measures 10 miles by 10 miles.
Then, we can visualize on the same 10-mile by 10-mile grid all of the 20 law firm samples collected in Miami (the ten first samples were collected on a 5-mile by 5-mile grid and are not shown below). Sample 12 is the one shown above. We observe that the law firm of sample 11 keeps ranking high even at a high distance, whereas sample law firm 13 directly drops out of the top 20 outside its location.
The grids for all 50 cities are shown in the Annex. The data for this study can be found here.
A majority of the 1,100 law firms (56%) rank 1st at their own location.
We want to compute the ranking by distance to a law firm’s location. So, we compute the geographical distance to the location of the target law firm from the latitude and longitude of each of the 225 measurements on the 15×15 grid. We then average the ranking of a law firm by mile distance to its own location.
There is a major caveat to the data collected with Local Falcon: it does not record rankings beyond 20 – the first page of search results; they are simply reported as “20+”. So, to numerically estimate the decline in ranking, for instance by computing the average rank at a certain distance from a law firm’s location, we need to impute a value for these missing ranks. For the sake of this study, we assigned the value of 25 to all “20+” measurements. While this is not perfect and impacts the computation of the average ranking, it still allows us to visualize the decline.
For instance, with our previous example in Miami, we see that the law firm ranked first at its own location (distance = 0 miles). The ranking drops quickly: the measurements taken between 0 and 1 miles average to a rank of ~9. From mile 3 onward, the average rank oscillates around 20. The further away from the location, the more often the firm ranks poorly or falls out of the top 20 entirely. Recall that we assigned the value of 25 to the “20+” measurements, which is reflected in the average. The average is shown in orange when above 20, i.e., where the law firm ranks mostly out of the top 20.
To obtain more stable measurements of the drop in ranking, we average the rankings across law firms, which is why we collected 20 samples per city.
2.1 Rank at Each Mile from Location
We start by visualizing the rank at each mile from the center location for each law firm in each city. Each line is a sample – a law firm. First, for the most populated and less populated city:
Then, for all 50 largest US cities:
We observe that the patterns are slightly different between cities. There is nevertheless a consistency: the drop in ranking varies greatly between law firms. Some law firms only see a slight drop in their ranking, even at 5 or 10 miles from their location. Other law firms quickly drop out of the top 20 (shown in orange on the plot). Because there is high variability between the law firms, it is helpful to show the average rank at each mile to highlight the general trend:
And for all 50 cities:
Pink signifies the average rank across all law firms. We see that the shape of the average rank by mile is similar between cities: it drops fast in the first mile and then slowly stabilizes. It is computed with a rank of 25 for the firms outside of the top 20, for which Local Falcon no longer records the rank. This distorts the “true” average, which is unknown and likely higher (i.e., worse) at large distances. Another potential distortion is that the ranking is expected to decline “continuously”, not stabilize at a particular value. The current impression of stabilization of the mean is due to the constant value of 25 attributed to the “20+” measurements. Nevertheless, our method allows for a visualization of an estimate of the average drop in each city. This estimate is simply more precise at smaller distances.
2.1.1 Drop from Initial Position (Relative Ranking)
To better compare the drop in ranking between law firms and cities, we can visualize their drop from their initial position – the relative ranking. Note that this drop is still computed with a value of 25 for the “+20” measurements. First, for the most populated and less populated city:
Then, for all 50 cities:
The drop is always 0 at the location of the firms. We observe that the shape of the average drop, despite slight variations, is similar between cities. We can superimpose all the drops in one single plot to show the average decline in ranking in relation to the distance from the location of a firm for each city:
Again, the average is computed with a constant value of 25 for the samples out of the top 20. This explains the stabilization of the curve at large distances. Nevertheless, all cities see a drop of -5 to -12 in the average ranking of the law firms in the first mile. The fall seems to be larger in Queens than in New Orleans.
2.1.2 The Drop Follows a Rule of Exponential Decay
As we just saw, the drop in terms of ranking has a similar shape in all cities. The drop seems to follow more or less a rule of exponential decay: it decreases at a rate proportional to its current value. At first, it falls fast and then reaches stability. The exponential decay function can be formalized like this:
Drop(d) = (Drop0 - DropFinal) * e^{-λd} + DropFinal
Where Drop(d) is the drop at a distance d and λ is the decay constant. Drop0 is the intercept, the drop at distance 0. The parameter DropFinal is included as a “correction” because we work with negative values (the drops in position are encoded as “-1”, “-2”, etc.). When fitting an exponential decay function to the average, we estimate λ. With an estimate of λ, we can use the exponential decay function to calculate the drop which would be expected, on average, at a certain distance d. We start by illustrating the decay with all the samples taken in all cities together. To better estimate the exponential decay function, we average the data each tenth of a mile. In pink, we see the average drop in ranking regardless of the city:
Then we can fit an exponential decay function to the average, in green:
An exponential decay function fits the average drop very well. The decay constant λ estimated by the fit is 2.3. The other two constants are estimated as DropFinal = -11.9 and Drop0 = -2.1. Note that the estimated drop at a distance of 0 miles is thus -2.1, which is not perfect, as we know it should be 0. We can use the fit to estimate the expected drop in ranking at any distance for an average law firm. For instance, the estimated drop at 1000 yards (0.59 mile) would be:
Drop(0.59) = (-2.1 + 11.9) * e^{-2.3 * 0.59} - 11.9
= -9.4 positions. This is just an estimate based on an average. We see on the plots above that, in reality, law firms drop following all sorts of trajectories, as illustrated by the plot being “filled” by black lines between 0 and -20. Note also that the caveat of having imputed missing “20+” measurements with the constant value of 25 impacts the average and thus the fit, especially the final stabilized value of -11.9 for the drop. Nevertheless, it is possible to fit such an exponential decay function separately for the averages in all cities. It would allow us to compute predictions of what the typical drop would look like in each city. For simplicity, here is the same plot showing only the average on all law firms and the exponential decay fit:
2.2 When are Law Firms Dropping Out of the Top 20?
Google Maps shows 20 results on the first search page, and Local Falcon does not collect rankings beyond the top 20. We saw above that the ranking drops fast in the first mile and that not all firms drop out of the top 20 even after 10 miles; this holds in all cities, regardless of their area. For example, a 10-mile radius is enough to completely cover the city of Boston and its surroundings, but this is not the case in Los Angeles. However, in both cases, we identify companies that exit the top 20 after 5 or 10 miles and others that never leave it. How does the proportion of law firms out of the top 20 change with distance? We first have a look at New York and Oklahoma City:
There is a radical difference here. The percentage of law firms that dropped out of the top 20 rises to 80% in New York, after around 10 miles. Whereas in Oklahoma City, this number never rises above 30%; a larger proportion of law firms rank well, even at large distances. The same figure, for all the 50 largest U.S. cities:
The percentage of law firms that exited the top 20 at the largest distance ranges from 27% in Pittsburgh to 92% in Queens. The cities appear by population size. It seems that the percentage of law firms that can remain in the top 20 is lower in the largest cities. This measure is likely an estimate of the competition in each city. Note that these percentages are computed on 20 sample law firms. Please also note that the largest distance is not exactly the same in all cities. These differences are due to the precision of Local Falcon, geolocation, and the computation of geographical distance from coordinates.
3 Summary and Key Observations
We sampled 20 personal injury law firms in the 50 largest U.S. cities. For each, we measured their ranking for the keyword “car accident lawyer” at 225 locations dispersed on a squared grid with a 10-mile “radius” around the original location of the firm, using Local Falcon (+ 10 samples with a radius of 5 miles for the ten largest cities).
We then compute the rankings and relative ranking (drop) of each law firm for each mile away from its location, as well as the percentage of the firms leaving the top 20 positions.
1. The ranking drops dramatically in the first mile, in all cities. On average, the drop in ranking in the first mile is -8 positions.
2. The drop in ranking varies greatly between law firms. Some top-ranking firms do not even see a dip in the 10-mile radius. This means that there is probably no distance guaranteeing that all of the law firms in a given city drop out of the top 20. On the other hand, some law firms drop very quickly out of the top 20. Often, these are firms that already did not rank 1st at their own location.
3. After the quick drop, the average ranking stabilizes or decreases much more slowly. This effect is partly due to observation 2: we compute an average between law firms still ranking well and law firms with a ranking imputed to 25 because they are out of the top 20. This effect, albeit with some slight variations, is seen in all cities.
4. This drop in the ranking follows an exponential decay rule, and this rule could be used to estimate the expected drop for any firm in any city at any distance. Caveats: in reality, the variance between the law firms is considerable and this is just an estimate of their average. Furthermore, this rule is based on assigning the value 25 to the “20+” ranks.
5. The percentage of law firms that dropped out of the top 20 at each mile of distance varies a lot between cities. In most cities, the largest increase in law firms dropping out of the top 20 takes place in the first mile. The maximum share of firms out of the top 20 varies dramatically between cities, ranging from 27% in Pittsburgh to 92% in Queens. These percentages can be used to estimate the probability that a law firm ranks in (or out of) the top 20 in each city at each mile. These results are likely a reflection of the competition among personal injury lawyers in each city.
We visualized the Local Falcon grids of all samples as heat maps in an annex: 10-Mile Grids for All Cities.
|
A Treatise on Electricity and Magnetism/Part IV/Chapter XVII - Wikisource, the free online library
A Treatise on Electricity and Magnetism/Part IV/Chapter XVII
A Treatise on Electricity and Magnetism — Comparison of Coils, by James Clerk Maxwell
COMPARISON OF COILS.
Experimental Determination of the Electrical Constants of a Coil.
752.] We have seen in Art. 717 that in a sensitive galvanometer the coils should be of small radius, and should contain many windings of the wire. It would be extremely difficult to determine the electrical constants of such a coil by direct measurement of its form and dimensions, even if we could obtain access to every winding of the wire in order to measure it. But in fact the greater number of the windings are not only completely hidden by the outer windings, but we are uncertain whether the pressure of the outer windings may not have altered the form of the inner ones after the coiling of the wire.
It is better therefore to determine the electrical constants of the coil by direct electrical comparison with a standard coil whose constants are known.
Since the dimensions of the standard coil must be determined by actual measurement, it must be made of considerable size, so that the unavoidable error of measurement of its diameter or circumference may be as small as possible compared with the quantity measured. The channel in which the coil is wound should be of rectangular section, and the dimensions of the section should be small compared with the radius of the coil. This is necessary, not so much in order to diminish the correction for the size of the section, as to prevent any uncertainty about the position of those windings of the coil which are hidden by the external windings[1]. The principal constants which we wish to determine are—
(1) The magnetic force at the centre of the coil due to a unit-current. This is the quantity denoted by {\displaystyle G_{1}} in Art. 700.
(2) The magnetic moment of the coil due to a unit-current. This is the quantity {\displaystyle g_{1}}.
753.] To determine {\displaystyle G_{1}}. Since the coils of the working galvanometer are much smaller than the standard coil, we place the galvanometer within the standard coil, so that their centres coincide, the planes of both coils being vertical and parallel to the earth's magnetic force. We have thus obtained a differential galvanometer one of whose coils is the standard coil, for which the value of {\displaystyle G_{1}} is known, while that of the other coil is {\displaystyle G_{1}^{\prime }}, the value of which we have to determine.
The magnet suspended in the centre of the galvanometer coil is acted on by the currents in both coils. If the strength of the current in the standard coil is {\displaystyle \gamma }, and that in the galvanometer coil {\displaystyle \gamma ^{\prime }}, then, if these currents flowing in opposite directions produce a deflexion {\displaystyle \delta } of the magnet,
{\displaystyle H\tan \delta =G_{1}^{\prime }\gamma ^{\prime }-G_{1}\gamma }
where {\displaystyle H} is the horizontal magnetic force of the earth.
If the currents are so arranged as to produce no deflexion, we may find {\displaystyle G_{1}^{\prime }} by the equation
{\displaystyle G_{1}^{\prime }={\frac {\gamma }{\gamma ^{\prime }}}G_{1}}
We may determine the ratio of {\displaystyle \gamma } to {\displaystyle \gamma ^{\prime }} in several ways. Since the value of {\displaystyle G_{1}} is in general greater for the galvanometer than for the standard coil, we may arrange the circuit so that the whole current {\displaystyle \gamma } flows through the standard coil, and is then divided so that {\displaystyle \gamma ^{\prime }} flows through the galvanometer and resistance coils, the combined resistance of which is {\displaystyle R_{1}}, while the remainder {\displaystyle \gamma -\gamma ^{\prime }} flows through another set of resistance coils whose combined resistance is {\displaystyle R_{2}}.
We have then, by Art. 276,
{\displaystyle \gamma ^{\prime }R_{1}=(\gamma -\gamma ^{\prime })R_{2}}
whence
{\displaystyle {\frac {\gamma }{\gamma ^{\prime }}}={\frac {R_{1}+R_{2}}{R_{2}}}}
and
{\displaystyle G_{1}^{\prime }={\frac {R_{1}+R_{2}}{R_{2}}}G_{1}}
If there is any uncertainty about the actual resistance of the galvanometer coil (on account, say, of an uncertainty as to its temperature) we may add resistance coils to it, so that the resistance of the galvanometer itself forms but a small part of {\displaystyle R_{1}}, and thus introduces but little uncertainty into the final result.
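As a numerical illustration (the resistance values are hypothetical, chosen only to make the arithmetic plain): if {\displaystyle R_{1}=90} and {\displaystyle R_{2}=10}, then

```latex
\frac{\gamma}{\gamma'} = \frac{R_1 + R_2}{R_2} = \frac{90 + 10}{10} = 10,
\qquad
G_1' = \frac{R_1 + R_2}{R_2}\, G_1 = 10\, G_1,
```

so that only a tenth of the whole current passes through the galvanometer branch, and the balance condition makes the galvanometer's constant ten times that of the standard coil.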
754.] To determine {\displaystyle g_{1}}, the magnetic moment of a small coil due to a unit-current flowing through it, the magnet is still suspended at the centre of the standard coil, but the small coil is moved parallel to itself along the common axis of both coils, till the same current, flowing in opposite directions round the coils, no longer deflects the magnet. If the distance between the centres of the coils is {\displaystyle r}, we have now
{\displaystyle G_{1}=2{\frac {g_{1}}{r^{3}}}+3{\frac {g_{2}}{r^{4}}}+4{\frac {g_{3}}{r^{5}}}+\mathrm {\&c} }
By repeating the experiment with the small coil on the opposite side of the standard coil, and measuring the distance between the positions of the small coil, we eliminate the uncertain error in the determination of the position of the centres of the magnet and of the small coil, and we get rid of the terms in {\displaystyle g_{2}} and {\displaystyle g_{4}}.
If the standard coil is so arranged that we can send the current through half the number of windings, so as to give a different value to {\displaystyle G_{1}}, we may determine a new value of {\displaystyle r}, and thus, as in Art. 454, we may eliminate the term involving {\displaystyle g_{3}}.
It is often possible, however, to determine {\displaystyle g_{3}} by direct measurement of the small coil with sufficient accuracy to make it available in calculating the value of the correction to be applied to {\displaystyle g_{1}} in the equation
{\displaystyle g_{1}={\frac {1}{2}}G_{1}r^{3}-2{\frac {g_{3}}{r^{2}}}}
where {\displaystyle g_{3}=-{\frac {1}{8}}\pi a^{2}(6a^{2}+3\xi ^{2}-2\eta ^{2})}, by Art. 700.
Comparison of Coefficients of Induction.
755.] It is only in a small number of cases that the direct calculation of the coefficients of induction from the form and position of the circuits can be easily performed. In order to attain a sufficient degree of accuracy, it is necessary that the distance between the circuits should be capable of exact measurement. But when the distance between the circuits is sufficient to prevent errors of measurement from introducing large errors into the result, the coefficient of induction itself is necessarily very much reduced in magnitude. Now for many experiments it is necessary to make the coefficient of induction large, and we can only do so by bringing the circuits close together, so that the method of direct measurement becomes impossible, and, in order to determine the coefficient of induction, we must compare it with that of a pair of coils arranged so that their coefficient may be obtained by direct measurement and calculation.
Fig. 61. Let {\displaystyle A} and {\displaystyle a} be the standard pair of coils, {\displaystyle B} and {\displaystyle b} the coils to be compared with them. Connect {\displaystyle A} and {\displaystyle B} in one circuit, and place the electrodes of the galvanometer, {\displaystyle G}, at {\displaystyle P} and {\displaystyle Q}, so that the resistance of {\displaystyle PAQ} is {\displaystyle R}, and that of {\displaystyle QBP} is {\displaystyle S}, {\displaystyle K} being the resistance of the galvanometer. Connect {\displaystyle a} and {\displaystyle b} in one circuit with the battery.
Let the current in {\displaystyle A} be {\displaystyle {\dot {x}}}, that in {\displaystyle B}, {\displaystyle {\dot {y}}}, and that in the galvanometer, {\displaystyle {\dot {x}}-{\dot {y}}}, that in the battery circuit being {\displaystyle \gamma }. Then, if {\displaystyle M_{1}} is the coefficient of induction between {\displaystyle A} and {\displaystyle a}, and {\displaystyle M_{2}} that between {\displaystyle B} and {\displaystyle b}, the integral induction current through the galvanometer at breaking the battery circuit is
{\displaystyle x-y=\gamma {\frac {{\frac {M_{1}}{R}}-{\frac {M_{2}}{S}}}{1+{\frac {K}{R}}+{\frac {K}{S}}}}}
By adjusting the resistances {\displaystyle R} and {\displaystyle S} till there is no current through the galvanometer at making or breaking the battery circuit, the ratio of {\displaystyle M_{2}} to {\displaystyle M_{1}} may be determined by measuring that of {\displaystyle S} to {\displaystyle R}.
Comparison of a Coefficient of Self-induction with a Coefficient of Mutual Induction.
Fig. 62. 756.] In the branch {\displaystyle AF} of Wheatstone's Bridge let a coil be inserted, the coefficient of self-induction of which we wish to find. Let us call it {\displaystyle L}.
In the connecting wire between {\displaystyle A} and the battery another coil is inserted. The coefficient of mutual induction between this coil and the coil in {\displaystyle AF} is {\displaystyle M}. It may be measured by the method described in Art. 755.
If the current from {\displaystyle A} to {\displaystyle F} is {\displaystyle x}, and that from {\displaystyle A} to {\displaystyle H} is {\displaystyle y}, then the current from {\displaystyle Z}, through {\displaystyle B}, to {\displaystyle A} is {\displaystyle x+y}. The external electromotive force from {\displaystyle A} to {\displaystyle F} is

{\displaystyle A-F=Px+L{\frac {dx}{dt}}+M\left({\frac {dx}{dt}}+{\frac {dy}{dt}}\right)} (9)

The external electromotive force along {\displaystyle AH} is

{\displaystyle A-H=Qy} (10)
If the galvanometer placed between {\displaystyle F} and {\displaystyle H} indicates no current, either transient or permanent, then by (9) and (10), since {\displaystyle H-F=0},

{\displaystyle Px=Qy} (11)

and

{\displaystyle L{\frac {dx}{dt}}+M\left({\frac {dx}{dt}}+{\frac {dy}{dt}}\right)=0} (12)

Since (11) gives {\displaystyle {\frac {dy}{dt}}={\frac {P}{Q}}{\frac {dx}{dt}}}, it follows that

{\displaystyle L=-\left(1+{\frac {P}{Q}}\right)M} (13)
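A one-line numerical sketch of equation (13), with illustrative values: once the bridge is balanced, the self-induction follows from the arms P, Q and the measured mutual induction M, which must be negative, as the text notes.

```python
# Eq. (13): L = -(1 + P/Q) * M, with illustrative values.
def self_induction(P, Q, M):
    return -(1 + P / Q) * M

print(self_induction(100.0, 50.0, -2.0e-3))  # about 0.006 (henry)
```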
Since {\displaystyle L} is always positive, {\displaystyle M} must be negative, and therefore the current must flow in opposite directions through the coils placed in {\displaystyle P} and in {\displaystyle B}. In making the experiment we may either begin by adjusting the resistances so that

{\displaystyle PS=QR} (14)

which is the condition that there may be no permanent current, and then adjust the distance between the coils till the galvanometer ceases to indicate a transient current on making and breaking the battery connexion; or, if this distance is not capable of adjustment, we may get rid of the transient current by altering the resistances {\displaystyle Q} and {\displaystyle S} in such a way that the ratio of {\displaystyle Q} to {\displaystyle S} remains unchanged.
If this double adjustment is found too troublesome, we may adopt a third method. Beginning with an arrangement in which the transient current due to self-induction is slightly in excess of that due to mutual induction, we may get rid of the inequality by inserting a conductor whose resistance is {\displaystyle W} between {\displaystyle A} and {\displaystyle Z}. The condition of no permanent current through the galvanometer is not affected by the introduction of {\displaystyle W}. We may therefore get rid of the transient current by adjusting the resistance of {\displaystyle W} alone. When this is done the value of {\displaystyle L} is

{\displaystyle L=-\left(1+{\frac {P}{Q}}+{\frac {P+R}{W}}\right)M} (15)
Comparison of the Coefficients of Self-induction of Two Coils.
757.] Insert the coils in two adjacent branches of Wheatstone's Bridge. Let {\displaystyle L} and {\displaystyle N} be the coefficients of self-induction of the coils inserted in {\displaystyle P} and {\displaystyle R} respectively; then the condition of no galvanometer current is

{\displaystyle \left(Px+L{\frac {dx}{dt}}\right)Sy=Qy\left(Rx+N{\frac {dx}{dt}}\right)} (16)

whence

{\displaystyle PS=QR}, for no permanent current, (17)

{\displaystyle {\frac {L}{P}}={\frac {N}{R}}}, for no transient current. (18)

Hence, by a proper adjustment of the resistances, both the permanent and the transient current can be got rid of, and then the ratio of {\displaystyle L} to {\displaystyle N} can be determined by a comparison of the resistances.
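A numerical sketch of equation (18), with illustrative values: once both currents are balanced, an unknown self-induction N follows from a known standard L and the measured resistances.

```python
# Eq. (18): L/P = N/R, so N = L * R / P. Values are illustrative.
def unknown_self_induction(L, P, R):
    return L * R / P

print(unknown_self_induction(0.5, 100.0, 40.0))  # 0.2
```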
↑ Large tangent galvanometers are sometimes made with a single circular conducting ring of considerable thickness, which is sufficiently stiff to maintain its form without any support. This is not a good plan for a standard instrument. The distribution of the current within the conductor depends on the relative conductivity of its various parts. Hence any concealed flaw in the continuity of the metal may cause the main stream of electricity to flow either close to the outside or close to the inside of the circular ring. Thus the true path of the current becomes uncertain. Besides this, when the current flows only once round the circle, especial care is necessary to avoid any action on the suspended magnet due to the current on its way to or from the circle, because the current in the electrodes is equal to that in the circle. In the construction of many instruments the action of this part of the current seems to have been altogether lost sight of.
The most perfect method is to make one of the electrodes in the form of a metal tube, and the other a wire covered with insulating material, and placed inside the tube and concentric with it. The external action of the electrodes when thus arranged is zero, by Art. 683.
Retrieved from "https://en.wikisource.org/w/index.php?title=A_Treatise_on_Electricity_and_Magnetism/Part_IV/Chapter_XVII&oldid=1823473"
L²-theory for the ∂̄-operator on compact complex spaces
Duke Math. J. 163(15): 2887–2934 (1 December 2014). DOI: 10.1215/00127094-2838545
Let X be a singular Hermitian complex space of pure dimension n. We use a resolution of singularities to give a smooth representation of the L²-∂̄-cohomology of (n,q)-forms on X. The central tool is an L²-resolution for the Grauert–Riemenschneider canonical sheaf K_X. As an application, we obtain a Grauert–Riemenschneider-type vanishing theorem for forms with values in almost positive line bundles. If X is a Gorenstein space with canonical singularities, then we also get an L²-representation of the flabby cohomology of the structure sheaf O_X. To understand also the L²-∂̄-cohomology of (0,q)-forms on X, we introduce a new kind of canonical sheaf, namely, the canonical sheaf of square-integrable holomorphic n-forms with some (Dirichlet) boundary condition at the singular set of X. If X has only isolated singularities, then we use an L²-resolution for that sheaf and a resolution of singularities to give a smooth representation of the L²-∂̄-cohomology of (0,q)-forms.
J. Ruppenthal, "L²-theory for the ∂̄-operator on compact complex spaces," Duke Math. J. 163 (15): 2887–2934, 1 December 2014. https://doi.org/10.1215/00127094-2838545
Primary: 32C35, 32J25, 32W05
Keywords: L²-theory, canonical sheaves, Cauchy–Riemann equations, Gorenstein singularities, resolution of singularities, singular complex spaces
Transitively twisted flows of 3-manifolds | EMS Press
A non-singular C¹ vector field X on a closed 3-manifold M generating a flow φ_t induces a flow of the bundle NX orthogonal to X. This flow further induces a flow Pφ_t of the projectivized bundle of NX. In this paper, we assume that the projectivized bundle is a trivial bundle, and study the lift ∠φ_t of Pφ_t to the infinite cyclic covering M × ℝ. We prove that the flow ∠φ_t is not minimal, and construct an example of φ_t for which ∠φ_t has a dense orbit. If φ_t is almost periodic and minimal, then ∠φ_t is shown to be classified into three cases: (1) all the orbits of ∠φ_t are bounded; (2) all the orbits of ∠φ_t are proper; (3) ∠φ_t has a dense orbit.
H. Nakayama, Transitively twisted flows of 3-manifolds. Comment. Math. Helv. 76 (2001), no. 4, pp. 577–588
Partial pressure/Citable Version - Citizendium
< Partial pressure
Dalton's law states that each gas in a mixture of ideal gases has a partial pressure which is the pressure that the gas would have if it alone occupied the same volume at the same temperature. The total pressure of a gas mixture is the sum of the partial pressures of each individual gas in the mixture.
Henry's law states that at a constant temperature, the partial pressure of a gas in equilibrium with a liquid solution containing some of the gas is directly proportional to the concentration of that gas in the liquid solution.
For more information, see: Ideal gas law.
John Dalton, an English chemist, meteorologist and physicist, first propounded his law of partial pressures in 1803 and published it in 1805. His statement that the total pressure of a gas mixture is the sum of the partial pressures of each individual gas in the mixture can be expressed mathematically. For example, for a mixture of three ideal gases (denoted as gases a, b and c):
{\displaystyle p_{t}=p_{a}+p_{b}+p_{c}} (1)

where:
{\displaystyle p_{t}} = total pressure of the gas mixture
{\displaystyle p_{a}} = partial pressure of gas a
{\displaystyle p_{b}} = partial pressure of gas b
{\displaystyle p_{c}} = partial pressure of gas c
Dalton's law applies only to gases that behave in accordance with the ideal gas law which is applicable for hypothetical gases with no intermolecular forces. The ideal gas law is a useful approximation for predicting the behavior of many gases over a wide range of temperatures and pressures.
However, real gases can deviate considerably from ideal gas behavior because of the intermolecular attractive and repulsive forces. The deviation is especially significant at low temperatures or high pressures. In other words, Dalton's law is not applicable for real gases at conditions where they deviate significantly from ideal gas behavior.
Dalton's law is also applicable only to gases at conditions under which they are mutually inert (i.e., they do not react with each other).
The mole fraction of an individual gas component in an ideal gas mixture can be expressed in terms of the component's partial pressure or the moles of the component:

{\displaystyle x_{i}={\frac {p_{i}}{p_{t}}}={\frac {n_{i}}{n}}} (2)

and the partial pressure of an individual gas component follows as:

{\displaystyle p_{i}=x_{i}\cdot p_{t}} (3)

where:
{\displaystyle x_{i}} = mole fraction of any individual gas component in the gas mixture
{\displaystyle p_{i}} = partial pressure of any individual gas component in the gas mixture
{\displaystyle p_{t}} = total pressure of the gas mixture
{\displaystyle n_{i}} = moles of any individual gas component in the gas mixture
{\displaystyle n} = total moles of the gas mixture

Using equation (2), Dalton's law as expressed in equation (1) may also be expressed as:

{\displaystyle p_{t}=(x_{a}\cdot p_{t})+(x_{b}\cdot p_{t})+(x_{c}\cdot p_{t})}
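A small numerical sketch of equations (1)–(3), using approximate sea-level air as illustrative data:

```python
# Partial pressures from mole fractions, eq. (3), and Dalton's law,
# eq. (1): the partial pressures sum back to the total pressure.
total_pressure = 101.325                         # kPa, illustrative
mole_fractions = {"N2": 0.78, "O2": 0.21, "Ar": 0.01}

partials = {gas: x * total_pressure for gas, x in mole_fractions.items()}
print(round(partials["O2"], 5))                  # 21.27825 kPa
print(round(sum(partials.values()), 3))          # 101.325 kPa
```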
For more information, see: Henry's Law.
William Henry, an English chemist, formulated Henry's law in 1803. It states that, at a constant temperature, the partial pressure of a gas in equilibrium with a liquid solution containing some of the gas is directly proportional to the concentration of that gas in the solution.

Henry's law is commonly expressed as:[1][2][3][4]

{\displaystyle p=kc\,} (4)

where:
{\displaystyle p} is the partial pressure of the solute gas above the liquid solution
{\displaystyle k} is the Henry's law constant in units such as L·atm/mol, atm/(mol fraction) or Pa·m³/mol
{\displaystyle c} is the concentration of the solute in the solution

Henry's law is sometimes written as:

{\displaystyle p={\frac {c}{k^{*}}}} (5)

where {\displaystyle k^{*}} is also referred to as the Henry's law constant. As seen by comparing equations (4) and (5), {\displaystyle k^{*}} is the reciprocal of {\displaystyle k}. Since both may be referred to as the Henry's law constant, readers of the technical literature must be quite careful to note which version of the Henry's law equation is being used.
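A numerical sketch of equations (4) and (5). The Henry's law constant used here (roughly 769 L·atm/mol, a typical order of magnitude for O₂ in water near 25 °C) is an assumed illustrative value:

```python
# Eq. (4): p = k*c, and the reciprocal form, eq. (5): p = c/k*.
k = 769.0                 # L*atm/mol, assumed illustrative value
c = 1.3e-3                # mol/L of dissolved gas
p = k * c                 # eq. (4)
k_star = 1.0 / k          # the other "Henry's law constant"
print(round(p, 4))        # 0.9997 atm
print(abs(c / k_star - p) < 1e-12)   # True: eqs. (4) and (5) agree
```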
Equilibrium constants of reactions involving gases
For more information, see: Law of mass action.
The Law of Mass Action, formulated in 1864 by Cato Guldberg and Peter Waage of Norway,[5] states that:[6][7][8][9]
The rate of a chemical reaction is proportional to the concentration of the reacting substances.
That law makes it possible to obtain the equilibrium reaction constant for reversible reactions involving gas reactants and gas products given the partial pressures of the reactant and product gases. As an example, for the following generalized reaction:
{\displaystyle w\,A+x\,B\leftrightarrow y\,C+z\,D} (6)

the equilibrium reaction constant is

{\displaystyle K_{p}={\frac {p_{C}^{y}\;p_{D}^{z}}{p_{A}^{w}\;p_{B}^{x}}}} (7)

where:
{\displaystyle K_{p}} = the equilibrium reaction constant
{\displaystyle w} = moles of gas reactant {\displaystyle A}
{\displaystyle x} = moles of gas reactant {\displaystyle B}
{\displaystyle y} = moles of gas product {\displaystyle C}
{\displaystyle z} = moles of gas product {\displaystyle D}
{\displaystyle p_{C}^{y}} = the partial pressure of {\displaystyle C} raised to the power {\displaystyle y}
{\displaystyle p_{D}^{z}} = the partial pressure of {\displaystyle D} raised to the power {\displaystyle z}
{\displaystyle p_{A}^{w}} = the partial pressure of {\displaystyle A} raised to the power {\displaystyle w}
{\displaystyle p_{B}^{x}} = the partial pressure of {\displaystyle B} raised to the power {\displaystyle x}

When the Law of Mass Action is expressed using partial pressures, as in equations (6) and (7) above, the equilibrium reaction constant is denoted by {\displaystyle K_{p}}. When expressed using concentrations (such as mole/m³) rather than partial pressures, the equilibrium reaction constant is denoted by {\displaystyle K_{c}}. The two constants are related by

{\displaystyle K_{p}=K_{c}(R\,T)^{(y+z-w-x)}}

where {\displaystyle R} is the universal gas constant and {\displaystyle T} is the absolute temperature.
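A numerical sketch of equations (6) and (7) for a concrete reaction, N₂ + 3 H₂ ↔ 2 NH₃ (so A = N₂ with w = 1, B = H₂ with x = 3, C = NH₃ with y = 2, and no D). The partial pressures are made-up equilibrium values:

```python
# Kp = (pC^y * pD^z) / (pA^w * pB^x), eq. (7); values illustrative (bar).
def equilibrium_constant(products, reactants):
    num = 1.0
    for p, stoich in products:
        num *= p ** stoich
    den = 1.0
    for p, stoich in reactants:
        den *= p ** stoich
    return num / den

kp = equilibrium_constant(products=[(0.5, 2)],             # NH3
                          reactants=[(2.0, 1), (1.0, 3)])  # N2, H2
print(kp)  # 0.125
```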
For reversible reactions, changes in the total pressure, temperature or reactant concentrations will shift the equilibrium position so as to favor either the right or left side of the reaction in accordance with Le Chatelier's Principle. However, the reaction kinetics may either oppose or enhance the equilibrium shift. In some cases, the reaction kinetics may be the over-riding factor to consider.
Using diving terminology, partial pressure is calculated as:

ppi = P × Fi

where:
ppi = partial pressure of gas component i = {\displaystyle p_{i}} in the terms used in this article
P = total pressure = {\displaystyle p_{t}} in the terms used in this article
Fi = volume fraction of gas component i = mole fraction, {\displaystyle x_{i}}, in the terms used in this article
ppN2 = partial pressure of nitrogen = {\displaystyle p_{{\mathrm {N} }_{2}}}
ppO2 = partial pressure of oxygen = {\displaystyle p_{{\mathrm {O} }_{2}}}

For example, at 50 metres (164 feet), the total absolute pressure is approximately 6 bar (600 kPa), and the partial pressures of the main components of air, oxygen (21% by volume) and nitrogen (79% by volume), are:

ppN2 = 6 bar × 0.79 = 4.7 bar absolute
ppO2 = 6 bar × 0.21 = 1.3 bar absolute
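The diving example above can be checked directly (same numbers as in the text):

```python
# ppi = P x Fi: partial pressures of air components at 50 m (6 bar).
def diving_partial_pressures(total_bar, fractions):
    return {gas: total_bar * f for gas, f in fractions.items()}

pp = diving_partial_pressures(6.0, {"N2": 0.79, "O2": 0.21})
print(round(pp["N2"], 2))  # 4.74 bar absolute
print(round(pp["O2"], 2))  # 1.26 bar absolute
```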
The minimum safe lower limit for the partial pressures of oxygen in a gas mixture is 0.16 bar (16 kPa) absolute. Hypoxia and sudden unconsciousness becomes a problem with an oxygen partial pressure of less than 0.16 bar absolute. The NOAA Diving Manual recommends a maximum single exposure of 45 minutes at 1.6 bar absolute, of 120 minutes at 1.5 bar absolute, of 150 minutes at 1.4 bar absolute, of 180 minutes at 1.3 bar absolute and of 210 minutes at 1.2 bar absolute. Oxygen toxicity, involving convulsions, becomes a risk when these oxygen partial pressures and exposures are exceeded. The partial pressure of oxygen determines the maximum operating depth of a gas mixture.
Nitrogen narcosis is a problem with gas mixes containing nitrogen. A typical planned maximum partial pressure of nitrogen for technical diving is 3.5 bar absolute, based on an equivalent air depth of 35 metres (115 feet).
↑ University of Delaware physical chemistry lecture
↑ Robert G. Mortimer (2000). Physical Chemistry, Second Edition. Academic Press. ISBN 0-12-508345-9.
↑ Green, Don W. and Perry, Robert H. (deceased) (1997). Perry's Chemical Engineers' Handbook, 6th Edition. McGraw-Hill. ISBN 0-07-049479-7. (See page 14-9)
↑ Online Introductory Chemistry: Solubility of gases in liquids
↑ E.W. Lund (1965). "Guldberg and Waage and the Law of Mass Action". J. Chem. Ed 42: 548-550.
↑ A.V. Jones, M. Clement, A. Higton and E. Golding (1999). Access to Chemistry, 1st Edition. Royal Society of Chemistry. ISBN 0-85404-564-3.
↑ Mass Action Law
↑ Michael Clugston and Rosalind Flemming (2000). Advanced Chemistry, 1st Edition. Oxford University Press. ISBN 0-19-914633-0.
↑ E.N. Ramsden (2000). A-Level Chemistry, 4th Edition. Nelson Thornes. ISBN 0-7487-5299-4.
↑ The Law of Mass Action From the website of Loyola University of Chicago. (Click through the slides from sld011.htm to sld024.htm)
Retrieved from "https://citizendium.org/wiki/index.php?title=Partial_pressure/Citable_Version&oldid=565784"
Credit Scorecards with Constrained Logistic Regression Coefficients - MATLAB & Simulink Example - MathWorks France
For customer i with predictor row x_i = [x_{i1} x_{i2} ... x_{iK}], response y_i ∈ {0, 1}, and coefficients b = [b_1 b_2 ... b_K], the probability of default is modeled with a logistic function,

p(Default_i) = 1 / (1 + e^{−b·x_i})

and the likelihood of observation i is

L_i = p(Default_i)^{y_i} × (1 − p(Default_i))^{1−y_i}

An observation weight w_i acts as if observation i were repeated w_i times; multiplying the same factor w_i times gives

p(Default_i)^{y_i} × p(Default_i)^{y_i} × ... × p(Default_i)^{y_i} (w_i times) = p(Default_i)^{w_i·y_i}

and similarly (1 − p(Default_i))^{w_i·(1−y_i)} for the non-default factor. The weighted likelihood of observation i is therefore

L_i = p(Default_i)^{w_i·y_i} × (1 − p(Default_i))^{w_i·(1−y_i)}

and the likelihood of the whole sample of N observations is L = L_1 × L_2 × ... × L_N, with log-likelihood

log(L) = Σ_{i=1}^{N} w_i [ y_i log(p(Default_i)) + (1 − y_i) log(1 − p(Default_i)) ]

The coefficients are estimated by maximizing log(L) subject to bound constraints such as

0 ≤ b_i ≤ 1, for all i = 1...K

and, optionally, constraints relating pairs of coefficients, for example |b_CustAge − b_CustIncome| < 0.1.
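A minimal Python sketch of such a constrained fit, assuming illustrative synthetic data and a simple projected-gradient ascent in place of the toolbox solver (every name and value here is a hypothetical stand-in, not the MathWorks implementation):

```python
import math
import random

# Maximize the weighted log-likelihood log(L) above subject to the
# box constraint 0 <= b_j <= 1, by gradient ascent with projection.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_constrained(X, y, w, steps=500, lr=0.2):
    K = len(X[0])
    b = [0.5] * K                           # start inside the box
    for _ in range(steps):
        grad = [0.0] * K
        for xi, yi, wi in zip(X, y, w):
            p = sigmoid(sum(bj * xj for bj, xj in zip(b, xi)))
            for j in range(K):
                grad[j] += wi * (yi - p) * xi[j]   # d log(L) / d b_j
        # ascent step, then project back onto [0, 1]
        b = [min(1.0, max(0.0, bj + lr * g / len(X)))
             for bj, g in zip(b, grad)]
    return b

random.seed(0)
X = [[random.uniform(-2, 2), random.uniform(-2, 2)] for _ in range(200)]
y = [1 if random.random() < sigmoid(0.9 * x1 + 0.2 * x2) else 0
     for x1, x2 in X]
b = fit_constrained(X, y, [1.0] * len(X))
print(all(0.0 <= bj <= 1.0 for bj in b))    # True: bounds respected
```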
Absorbing element - Citizendium
In algebra, an absorbing element or a zero element for a binary operation has a property similar to that of multiplication by zero.
Let {\displaystyle \star } be a binary operation on a set X. An element O of X is absorbing for {\displaystyle \star } if

{\displaystyle O\star x=O=x\star O\,}

holds for all x in X. An absorbing element, if it exists, is unique.
The zero (additive identity element) of a ring is an absorbing element for the ring multiplication.
The zero matrix is the absorbing element for matrix multiplication.
The empty set is the absorbing element for intersection of sets.
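The definition can be checked mechanically; a small Python sketch over the examples above (finite samples only, so this tests rather than proves the property):

```python
# An element `zero` is absorbing for `op` if op(zero, x) == zero == op(x, zero)
# for every x tested, per the definition in the text.
def is_absorbing(op, zero, elements):
    return all(op(zero, x) == zero == op(x, zero) for x in elements)

print(is_absorbing(lambda a, b: a * b, 0, range(-5, 6)))               # True
print(is_absorbing(lambda a, b: a & b, set(), [{1}, {1, 2}, set()]))   # True
print(is_absorbing(lambda a, b: a + b, 0, [1, 2]))                     # False: 0 is the identity, not absorbing
```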
Retrieved from "https://citizendium.org/wiki/index.php?title=Absorbing_element&oldid=552785"
Multifractal Analysis - MATLAB & Simulink - MathWorks 日本
Power Law Processes
Where is the Process Going Next? Persistent and Antipersistent Behavior
Measuring Fractal Dynamics of Heart Rate Variability
This example shows how to use wavelets to characterize local signal regularity. The ability to describe signal regularity is important when dealing with phenomena that have no characteristic scale. Signals with scale-free dynamics are widely observed in a number of different application areas including biomedical signal processing, geophysics, finance, and internet traffic. Whenever you apply some analysis technique to your data, you are invariably assuming something about the data. For example, if you use autocorrelation or power spectral density (PSD) estimation, you are assuming that your data is translation invariant, which means that signal statistics like mean and variance do not change over time. Signals with no characteristic scale are scale-invariant. This means that the signal statistics do not change if we stretch or shrink the time axis. Classical signal processing techniques typically fail to adequately describe these signals or reveal differences between signals with different scaling behavior. In these cases, fractal analysis can provide unique insights. Some of the following examples use pwelch for illustration. To execute that code, you must have Signal Processing Toolbox™.
An important class of signals with scale-free dynamics have autocorrelation or power spectral densities (PSD) that follow a power law. A power-law process has a PSD of the form C|ω|^(−α) for some positive constant, C, and some exponent α. In some instances, the signal of interest exhibits a power-law PSD. In other cases, the signal of interest is corrupted by noise with a power-law PSD. These noises are often referred to as colored. Being able to estimate the exponent from realizations of these processes has important implications. For one, it allows you to make inferences about the mechanism generating the data as well as providing empirical evidence to support or reject theoretical predictions. In the case of an interfering noise with a power-law PSD, it is helpful in designing effective filters.
Brown noise, or a Brownian process, is one such colored noise process with a theoretical exponent of α = 2. One way to estimate the exponent of a power law process is to fit a least-squares line to a log-log plot of the PSD.
load brownnoise;
[Pxx,F] = pwelch(brownnoise,kaiser(1000,10),500,1000,1);
plot(log10(F(2:end)),log10(Pxx(2:end)));
xlabel('log10(F)'); ylabel('log10(Pxx)');
title('Log-Log Plot of PSD Estimate')
Regress the log PSD values on the log frequencies. Note you must ignore zero frequency to avoid taking the log of zero.
Xpred = [ones(length(F(2:end)),1) log10(F(2:end))];
b = lscov(Xpred,log10(Pxx(2:end)));
y = b(1)+b(2)*log10(F(2:end));
plot(log10(F(2:end)),y,'r--');
title(['Estimated Slope is ' num2str(b(2))]);
Alternatively, you can use both discrete and continuous wavelet analysis techniques to estimate the exponent. The relationship between the Holder exponent, H, returned by dwtleader and wtmm and α in this scenario is α = 2H + 1.
[dhbrown,hbrown,cpbrown] = dwtleader(brownnoise);
hexp = wtmm(brownnoise);
fprintf('Wavelet leader estimate is %1.2f\n',-2*cpbrown(1)-1);
Wavelet leader estimate is -1.91
fprintf('WTMM estimate is %1.2f\n',-2*hexp-1);
WTMM estimate is -2.00
In this case, the estimate obtained by fitting a least-squares line to the log of the PSD estimate and those obtained using wavelet methods are in good agreement.
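For readers without MATLAB, the same least-squares log-log fit can be sketched in plain Python (synthetic brown noise and a crude direct-DFT periodogram; everything here is an illustrative stand-in for pwelch, not the toolbox code):

```python
import cmath
import math
import random

# Estimate the power-law exponent of synthetic brown noise by
# regressing log10(periodogram) on log10(frequency), skipping
# zero frequency. The slope should come out near -2.
random.seed(1)
n = 512
white = [random.gauss(0.0, 1.0) for _ in range(n)]
brown = [sum(white[:k + 1]) for k in range(n)]       # cumulative sum

pxx, logf = [], []
for k in range(1, n // 2):                           # skip k = 0
    X = sum(x * cmath.exp(-2j * math.pi * k * t / n)
            for t, x in enumerate(brown))
    pxx.append(abs(X) ** 2 / n)
    logf.append(math.log10(k / n))

logp = [math.log10(p) for p in pxx]
mf, mp = sum(logf) / len(logf), sum(logp) / len(logp)
slope = (sum((f - mf) * (p - mp) for f, p in zip(logf, logp))
         / sum((f - mf) ** 2 for f in logf))
print(slope)   # close to -2 for brown noise
```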
There are a number of real-world signals that exhibit nonlinear power-law behavior that depends on higher-order moments and scale. Multifractal analysis provides a way to describe these signals. Multifractal analysis consists of determining whether some type of power-law scaling exists for various statistical moments at different scales. If this scaling behavior is characterized by a single scaling exponent, or equivalently is a linear function of the moments, the process is monofractal. If the scaling behavior is a nonlinear function of the moments, the process is multifractal. The brown noise from the previous section is an example of a monofractal process, as demonstrated in a later section.
To illustrate how fractal analysis can reveal signal structure not apparent with more classic signal processing techniques, load RWdata.mat which contains two time series (Ts1 and Ts2) with 8000 samples each. Plot the data.
plot([Ts1 Ts2]); grid on;
legend('Ts1','Ts2','Location','NorthEast');
The signals have very similar second order statistics. If you look at the means, RMS values, and variances of Ts1 and Ts2, the values are almost identical. The PSD estimates are also very similar.
pwelch([Ts1 Ts2],kaiser(1e3,10))
The autocorrelation sequences decay very slowly for both time series and are not informative for differentiating the time series.
[xc1,lags] = xcorr(Ts1,300,'coef');
xc2 = xcorr(Ts2,300,'coef');
hs1 = stem(lags(301:end),xc1(301:end));
hs1.Marker = 'none';
title('Autocorrelation Sequence of Ts1');
Even at a lag of 300, the autocorrelations are 0.94 and 0.96 respectively.
The fact that these signals are very different is revealed through fractal analysis. Compute and plot the multifractal spectra of the two signals. In multifractal analysis, discrete wavelet techniques based on the so-called wavelet leaders are the most robust.
[dh1,h1,cp1,tauq1] = dwtleader(Ts1);
[dh2,h2,cp2,tauq2] = dwtleader(Ts2);
hp = plot(h1,dh1,'b-o',h2,dh2,'b-^');
hp(1).MarkerFaceColor = 'b';
hp(2).MarkerFaceColor = 'r';
xlabel('h'); ylabel('D(h)');
title('Multifractal Spectrum');
The multifractal spectrum effectively shows the distribution of scaling exponents for a signal. Equivalently, the multifractal spectrum provides a measure of how much the local regularity of a signal varies in time. A signal that is monofractal exhibits essentially the same regularity everywhere in time and therefore has a multifractal spectrum with narrow support. Conversely, a multifractal signal exhibits variations in signal regularity over time and has a multifractal spectrum with wider support. From the multifractal spectra shown here, Ts2 appears to be a monofractal signal characterized by a cluster of scaling exponents around 0.78. On the other hand, Ts1 demonstrates a wide range of scaling exponents, indicating that it is multifractal. Note the total range of scaling (Holder) exponents for Ts2 is just 0.14, while it is 4.6 times as big for Ts1. Ts2 is actually an example of a monofractal fractional Brownian motion (fBm) process with a Holder exponent of 0.8, and Ts1 is a multifractal random walk.
You can also use the scaling exponent outputs from dwtleader along with the 2nd cumulant to help classify a process as monofractal vs. multifractal. Recall a monofractal process has a linear scaling law as a function of the statistical moments, while a multifractal process has a nonlinear scaling law. dwtleader uses the range of moments from -5 to 5 in estimating these scaling laws. A plot of the scaling exponents for the fBm and multifractal random walk (MRW) process shows the difference.
plot(-5:5,tauq1,'b-o',-5:5,tauq2,'r-^');
xlabel('Q-th Moment'); ylabel('Scaling Exponents');
title('Scaling Exponents');
legend('MRW','fBm','Location','SouthEast');
The scaling exponents for the fBm process are a linear function of the moments, while the exponents for the MRW process show a departure from linearity. The same information is summarized by the 1st, 2nd, and 3rd cumulants. The first cumulant is the estimate of the slope, in other words, it captures the linear behavior. The second cumulant captures the first departure from linearity. You can think of the second cumulant as the coefficients for a second-order (quadratic) term, while the third cumulant characterizes a more complicated departure of the scaling exponents from linearity. If you examine the 2nd and 3rd cumulants for the MRW process, they are 6 and 42 times as large as the corresponding cumulants for the fBm data. In the latter case, the 2nd and 3rd cumulants are almost zero as expected.
For comparison, add the multifractal spectrum for the brown noise computed in an earlier example.
hp = plot(h1,dh1,'b-o',h2,dh2,'b-^',hbrown,dhbrown,'r-v');
hp(3).MarkerFaceColor = 'k';
legend('Ts1','Ts2','brown noise','Location','SouthEast');
Both the fractional Brownian process (Ts2) and the brown noise series are monofractal. However, a simple plot of the two time series shows that they appear quite different.
plot(brownnoise); title('Brown Noise');
plot(Ts2); title('fBm H=0.8'); grid on;
The fBm data is much smoother than the brown noise. Brown noise, also known as a random walk, has a theoretical Holder exponent of 0.5. This value forms a boundary between processes with Holder exponents, H, from 0<H<0.5 and those with Holder exponents in the interval 0.5<H<1. The former are called antipersistent and exhibit short memory. The latter are called persistent and exhibit long memory. In antipersistent time series, an increase in value at time t is followed with a decrease in value at time t+1 with a high probability. Similarly, a decrease in value at time t is typically followed by an increase in value at time t+1. In other words, the time series tends to always revert to its mean value. In persistent time series, increases in value tend to be followed by subsequent increases while decreases in value tend to be followed by subsequent decreases.
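The persistence/antipersistence distinction can be illustrated with a toy model (a sketch, not an fBm simulator): AR(1) increments with positive feedback mimic persistence, while negative feedback mimics antipersistent mean reversion.

```python
import random

# Lag-1 autocorrelation of increments d[t] = phi*d[t-1] + noise:
# phi > 0 gives persistent increments (increases tend to follow
# increases); phi < 0 gives antipersistent, mean-reverting ones.
def lag1_autocorr(d):
    m = sum(d) / len(d)
    num = sum((a - m) * (b - m) for a, b in zip(d[:-1], d[1:]))
    den = sum((a - m) ** 2 for a in d)
    return num / den

def ar1_increments(phi, n, seed):
    rng = random.Random(seed)
    d, prev = [], 0.0
    for _ in range(n):
        prev = phi * prev + rng.gauss(0.0, 1.0)
        d.append(prev)
    return d

persistent = lag1_autocorr(ar1_increments(0.8, 4000, seed=7))
antipersistent = lag1_autocorr(ar1_increments(-0.8, 4000, seed=7))
print(persistent > 0.0 > antipersistent)   # True
```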
To see some real-world examples of antipersistent time series, load and analyze the daily log returns for the Taiwan Weighted and Seoul Composite stock indices. The daily returns for both indices cover the approximate period from July, 1997 through April, 2016.
load StockCompositeData;
plot(SeoulComposite); title('Seoul Composite Index - 07/1997-04/2016');
ylabel('Log Returns'); grid on;
plot(TaiwanWeighted); title('Taiwan Weighted Index - 07/1997-04/2016');
ylabel('Log Returns');
Obtain and plot the multifractal spectra of these two time series.
[dhseoul,hseoul,cpseoul] = dwtleader(SeoulComposite);
[dhtaiwan,htaiwan,cptaiwan] = dwtleader(TaiwanWeighted);
plot(hseoul,dhseoul,'b-o','MarkerFaceColor','b');
plot(htaiwan,dhtaiwan,'r-^','MarkerFaceColor','r');
xlabel('h'); ylabel('D(h)'); grid on;
From the multifractal spectrum, it is clear that both time series are antipersistent. For comparison, plot the multifractal spectra of the two financial time series along with the brown noise and fBm data shown earlier.
plot(hbrown,dhbrown,'k-v','MarkerFaceColor','k');
plot(h2,dh2,'b-*','MarkerFaceColor','b');
legend('Seoul Composite','Taiwan Weighted Index','Brown Noise','FBM',...
    'Location','SouthEast');
Determining that a process is antipersistent or persistent is useful in predicting the future. For example, a time series with long memory that is increasing can be expected to continue increasing. While a time series that exhibits antipersistence can be expected to move in the opposite direction.
Normal human heart rate variability measured as RR intervals displays multifractal behavior. Further, reductions in this nonlinear scaling behavior are good predictors of cardiac disease and even mortality.
As an example of an induced change in the fractal dynamics of heart rate variability, consider a patient administered prostaglandin E1 due to a severe hypertensive episode. The data is part of RHRV, an R-based software package for heart rate variability analysis. The authors have kindly granted permission for its use in this example.
Load and plot the data. The vertical red line marks the beginning of the effect of the prostaglandin E1 on the heart rate and heart rate variability.
load hrvDrug;
plot(hrvDrug); grid on;
plot([4642 4642],[min(hrvDrug) max(hrvDrug)],'r','linewidth',2);
ylabel('Heart Rate'); xlabel('Sample');
Split the data into pre-drug and post-drug data sets. Obtain and plot the multifractal spectra of the two time series.
predrug = hrvDrug(1:4642);
postdrug = hrvDrug(4643:end);
[dhpre,hpre] = dwtleader(predrug);
[dhpost,hpost] = dwtleader(postdrug);
hl = plot(hpre,dhpre,'b-d',hpost,dhpost,'r-^');
hl(1).MarkerFaceColor = 'b';
hl(2).MarkerFaceColor = 'r';
legend('Predrug','Postdrug');
title('Multifractal Spectrum'); xlabel('h'); ylabel('D(h)');
The induction of the drug has led to a 50% reduction in the width of the fractal spectrum. This indicates a significant reduction in the nonlinear dynamics of the heart as measured by heart rate variability. In this case, the reduction of the fractal dimension was part of a medical intervention. In a different context, studies on groups of healthy individuals and patients with congestive heart failure have shown that differences in the multifractal spectra can differentiate these groups. Specifically, significant reductions in the width of the multifractal spectrum are a marker of cardiac dysfunction.
Rodriguez-Linares, L., A.J. Mendez, M.J. Lado, D.N. Olivieri, X.A. Vila, and I. Gomez-Conde, "An open source tool for heart rate variability spectral analysis", Computer Methods and Programs in Biomedicine, 103(1):39-50, 2011.
Wendt, H. and Abry, P. "Multifractality tests using bootstrapped wavelet leaders", IEEE Trans. Signal Processing, vol. 55, no. 10, pp. 4811-4820, 2007.
Wendt, H., Abry, P., and Jaffard, S. "Bootstrap for empirical multifractal analysis", IEEE Signal Processing Magazine, 24, 4, 38-48, 2007.
Jaffard, S., Lashermes, B., and Abry, P. "Wavelet leaders in multifractal analysis". In T. Qian, M.I. Vai and X. Yuesheng, editors. Wavelet Analysis and Applications, pp. 219-264, Birkhauser, 2006.
|
The Impact of Temperature on the Performance of Semiconductor Laser Diode - Ajman University
The Impact of Temperature on the Performance of Semiconductor Laser Diode
, Ahed H. Zyoud
Published in Science and Engineering Research Support Society
The features of a semiconductor laser diode (LD) depend strongly on the temperature of its chip. The effect of temperature on the performance of an uncooled semiconductor LD was studied experimentally. The results quantify the effect of temperature on several essential parameters that define the quality of the received output signal, such as threshold current, slope efficiency, bias barrier voltage, output power and the shape of the pulse. This is essential for selecting the most suitable LD for medical ophthalmologic diagnosis. We have found that a change in temperature leads to a change in the modes of the LD through external thermal noise; in this case, the performance of the LD changes as the operating temperature increases. Firstly, the results showed that as the temperature increases due to the current injection through the semiconductor laser between 12.5 and 22.5 °C, the threshold current and slope efficiency change over the ranges 11.4-11.8 mA and 189-188 mW, respectively. Secondly, this increase in temperature led to increases in the algorithmic threshold current and barrier voltage at rates of 0.0065 /°C and 3.0 mV/°C, respectively. Finally, while the Full Width Half Maximum (FWHM) and the Peak Channel Number shift (PCN) increase at rates of 0.65 Ch. No/°C and 0.208 Ch. No/°C, respectively, the peak of the pulse drops at a rate of 24.96 au/°C.
|
A note on degenerate corank-one singularities of integrable Hamiltonian systems | EMS Press
We prove that, in a neighborhood of a corank-1 singularity of an analytic integrable Hamiltonian system with n degrees of freedom, there is a locally-free analytic symplectic \({\Bbb T}^{n-1}\)-action which preserves the moment map, under some mild conditions. This result allows one to classify generic degenerate corank-one singularities of integrable Hamiltonian systems. It can also be applied to the study of (non)integrability of perturbations of integrable systems.
, A note on degenerate corank-one singularities of integrable Hamiltonian systems. Comment. Math. Helv. 75 (2000), no. 2, pp. 271–283
|
Barlaston.
My dearest F,
Have these precious seeds, sent by Dyer, sown in 3 Pots.2
Would it not be worth while to clean with tepid sponge ½ small cabbage or sea-kale leaf—leave for 2 or 3 days—then cut leaf off & gently submerge for some hours in water & compare stomata, whether open or shut, on the 2 halves?3
I enclose letter from George; he sent a card this morning (which in your mothers hands disappeared like a flash of lightning, never to be found again) saying that Routh says George is all right in his mathematical view.—4 You are a wicked man never to have told us a word about yourself or Bernard.—5 I like De Vries very much— I hardly ever saw so modest a man.—6
Ever yours | C. D.
The date is established by the address. The Darwins visited Barlaston, Staffordshire, the home of Emma Darwin’s brother Frank Wedgwood and his family, from 15 to 22 August 1878 (Emma Darwin’s diary (DAR 242)); the only Saturday during this period was 17 August.
William Turner Thiselton-Dyer had sent seeds of Trifolium resupinatum (Persian clover; see letter to W. T. Thiselton-Dyer, 24 August [1878]).
CD and Francis had been investigating the function of bloom, a waxy or powdery coating on leaves and other parts of plants. Francis later noted that he had been asked to investigate the relation between bloom and the location of stomata, or breathing pores, of leaves (F. Darwin 1886, p. 99). CD’s suggestion evidently relates to this work.
The letter from George Howard Darwin has not been found, but see the letter to G. H. Darwin, 17 [August 1878]. Edward John Routh was a well-known mathematics coach at the University of Cambridge.
Francis had joined the Darwins at Leith Hill Place, Surrey, on 8 August 1878 (letter from Francis Darwin, [4–7 August 1878]); he and his son Bernard Darwin returned to Down on 12 August (Emma Darwin’s diary (DAR 242)). Francis had been away from 3 June 1878, when he had travelled to Würzburg to work in the laboratory of Julius Sachs (letter to W. T. Thiselton-Dyer, 2 June 1878).
Hugo de Vries visited CD at Abinger on 14 August 1878 (Emma Darwin’s diary (DAR 242)).
Instructions to sow some seeds
and suggestions for experiment on effects of removal of bloom.
Likes Hugo de Vries very much; has hardly ever seen so modest a man.
|
Carnitine O-acetyltransferase - Wikipedia
1NM8, 1S5O
CRAT, carnitine O-acetyltransferase, CAT1, CAT, NBIA8
OMIM: 600184 MGI: 109501 HomoloGene: 598 GeneCards: CRAT
Carnitine O-acetyltransferase, also called carnitine acetyltransferase (CRAT, or CAT)[5] (EC 2.3.1.7), is an enzyme encoded by the CRAT gene that catalyzes the chemical reaction
acetyl-CoA + carnitine
{\displaystyle \rightleftharpoons }
CoA + acetylcarnitine
where the acetyl group displaces the hydrogen atom in the central hydroxyl group of carnitine.[6]
Thus, the two substrates of this enzyme are acetyl-CoA and carnitine, whereas its two products are CoA and O-acetylcarnitine. The reaction is highly reversible and does not depend on the order in which substrates bind.[6]
Different subcellular localizations of the CRAT mRNAs are thought to result from alternative splicing of the CRAT gene, as suggested by the divergent sequences in the 5' region of peroxisomal and mitochondrial CRAT cDNAs and the location of an intron where the sequences diverge. The alternative splicing of this gene results in three distinct isoforms, one of which contains an N-terminal mitochondrial transit peptide and has been shown to be located in mitochondria.[7]
This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is acetyl-CoA:carnitine O-acetyltransferase. Other names in common use include acetyl-CoA-carnitine O-acetyltransferase, acetylcarnitine transferase, carnitine acetyl coenzyme A transferase, carnitine acetylase, carnitine acetyltransferase, carnitine-acetyl-CoA transferase, and CATC. This enzyme participates in alanine and aspartate metabolism.
In general, carnitine acetyltransferases have molecular weights of about 70 kDa and contain approximately 600 residues. CRAT contains two domains, an N domain and a C domain, and is composed of 20 α helices and 16 β strands. The N domain consists of an eight-stranded β sheet flanked on both sides by eight α helices. A six-stranded mixed β sheet and eleven α helices comprise the enzyme's C domain.
When compared, the cores of the two domains show significantly similar peptide backbone folding. This occurs despite the fact that only 4% of the amino acids that comprise those peptide backbones correspond to one another.[5]
His343 is the catalytic residue in CRAT.[8] It is located at the interface between the enzyme’s C and N domains towards the heart of CRAT. His343 is accessible via two 15-18 Å channels that approach the residue from opposite ends of the CRAT enzyme. These channels are utilized by the substrates of CRAT, one channel for carnitine, and one for CoA. The side chain of His343 is positioned irregularly, with the δ1 ring nitrogen hydrogen bonded to the carbonyl oxygen on the amino acid backbone.[5][9][10]
CoA binding site[edit]
Because CRAT binds CoA, rather than acetyl-CoA, it appears that CRAT possesses the ability to hydrolyze acetyl-CoA before interacting with the lone CoA fragment at the binding site.[5] CoA is bound in a linear conformation with its pantothenic arm binding at the active site. Here, the pantothenic arm's terminal thiol group and the ε2 nitrogen on the catalytic His343 side chain form a hydrogen bond. The 3'-phosphate on CoA forms interactions with residues Lys419 and Lys423. Also at the binding site, the residues Asp430 and Glu453 form a direct hydrogen bond to one another. A mutation in either residue can result in a decrease in CRAT activity.[11][12]
Carnitine binding site[edit]
Carnitine binds to CRAT in a partially folded state, with its hydroxyl group and carboxyl group facing opposite directions. The site itself is composed of the C domain β sheet and particular residues from the N domain. Upon binding, a face of carnitine is left exposed to the space outside the enzyme. Like CoA, carnitine forms a hydrogen bond with the ε2 nitrogen on His343. In the case of carnitine, the bond is formed with its 3-hydroxyl group. This CRAT catalysis is stereospecific for carnitine, as the stereoisomer of the 3-hydroxyl group cannot sufficiently interact with the CRAT carnitine binding site. CRAT undergoes minor conformational changes upon binding with carnitine.[5][13][14]
Transferase mechanism (His343)
The His343 residue at the active site of CRAT acts as a base that is able to deprotonate the CoA thiol group or the carnitine 3-hydroxyl group, depending on the direction of the reaction. The structure of CRAT optimizes this reaction by allowing direct hydrogen bonding between His343 and both substrates. The deprotonated group is then free to attack the acetyl group of acetyl-CoA or acetylcarnitine at its carbonyl site. The reaction proceeds directly, without the formation of a His343-acetyl intermediate.
It is possible for catalysis to occur with only one of the two substrates. If either acetyl-CoA or acetylcarnitine binds to CRAT, a water molecule may fill the other binding site and act as an acetyl group acceptor.
Substrate-assisted catalysis[edit]
The literature suggests that the trimethylammonium group on carnitine may be a crucial factor in CRAT catalysis. This group exhibits a positive charge that stabilizes the oxyanion in the reaction's intermediate. This idea is supported by the fact that the positive charge of carnitine is unnecessary for active site binding, but vital for the catalysis to proceed. This has been shown through the synthesis of a carnitine analog lacking its trimethylammonium group. This compound was able to compete with carnitine in binding to CRAT, but was unable to induce a reaction.[15] The emergence of substrate-assisted catalysis has opened up new strategies for increasing synthetic substrate specificity.[16]
There is evidence that suggests that CRAT activity is necessary for the cell cycle to proceed from the G1 phase to the S phase.[17]
Those with an inherited deficiency in CRAT activity are at risk for developing severe heart and neurological problems.[5]
Reduced CRAT activity can be found in individuals suffering from Alzheimer’s disease.[5]
CRAT and its family of enzymes have great potential as targets for developing therapeutic treatments for Type 2 diabetes and other diseases.[18][19][20]
CRAT is known to interact with NEDD8, PEX5, SUMO1.[7]
^ a b c d e f g Jogl G, Tong L (Jan 2003). "Crystal structure of carnitine acetyltransferase and implications for the catalytic mechanism and fatty acid transport". Cell. 112 (1): 113–22. doi:10.1016/S0092-8674(02)01228-X. PMID 12526798. S2CID 18633987.
^ a b "Entrez Gene: CRAT carnitine acetyltransferase".
^ McGarry JD, Brown NF (Feb 1997). "The mitochondrial carnitine palmitoyltransferase system. From concept to molecular analysis". European Journal of Biochemistry. 244 (1): 1–14. doi:10.1111/j.1432-1033.1997.00001.x. PMID 9063439.
^ Jogl G, Hsiao YS, Tong L (Nov 2004). "Structure and function of carnitine acyltransferases". Annals of the New York Academy of Sciences. 1033 (1): 17–29. Bibcode:2004NYASA1033...17J. doi:10.1196/annals.1320.002. PMID 15591000. S2CID 24466239.
^ Wu D, Govindasamy L, Lian W, Gu Y, Kukar T, Agbandje-McKenna M, McKenna R (Apr 2003). "Structure of human carnitine acetyltransferase. Molecular basis for fatty acyl transfer". The Journal of Biological Chemistry. 278 (15): 13159–65. doi:10.1074/jbc.M212356200. PMID 12562770.
^ Ramsay RR, Gandour RD, van der Leij FR (Mar 2001). "Molecular enzymology of carnitine transfer and transport". Biochimica et Biophysica Acta (BBA) - Protein Structure and Molecular Enzymology. 1546 (1): 21–43. doi:10.1016/S0167-4838(01)00147-9. PMID 11257506.
^ Hsiao YS, Jogl G, Tong L (Sep 2006). "Crystal structures of murine carnitine acetyltransferase in ternary complexes with its substrates". The Journal of Biological Chemistry. 281 (38): 28480–7. doi:10.1074/jbc.M602622200. PMC 2940834. PMID 16870616.
^ Cronin CN (Sep 1997). "The conserved serine-threonine-serine motif of the carnitine acyltransferases is involved in carnitine binding and transition-state stabilization: a site-directed mutagenesis study". Biochemical and Biophysical Research Communications. 238 (3): 784–9. doi:10.1006/bbrc.1997.7390. PMID 9325168.
^ Hsiao YS, Jogl G, Tong L (Jul 2004). "Structural and biochemical studies of the substrate selectivity of carnitine acetyltransferase". The Journal of Biological Chemistry. 279 (30): 31584–9. doi:10.1074/jbc.M403484200. PMID 15155726.
^ Saeed A, McMillin JB, Wolkowicz PE, Brouillette WJ (Sep 1993). "Carnitine acyltransferase enzymic catalysis requires a positive charge on the carnitine cofactor". Archives of Biochemistry and Biophysics. 305 (2): 307–12. doi:10.1006/abbi.1993.1427. PMID 8373168.
^ Dall'Acqua W, Carter P (Jan 2000). "Substrate-assisted catalysis: molecular basis and biological significance". Protein Science. 9 (1): 1–9. doi:10.1110/ps.9.1.1. PMC 2144443. PMID 10739241.
^ Brunner S, Kramar K, Denhardt DT, Hofbauer R (Mar 1997). "Cloning and characterization of murine carnitine acetyltransferase: evidence for a requirement during cell cycle progression". The Biochemical Journal. 322 (2): 403–10. doi:10.1042/bj3220403. PMC 1218205. PMID 9065756.
^ Anderson RC (Feb 1998). "Carnitine palmitoyltransferase: a viable target for the treatment of NIDDM?". Current Pharmaceutical Design. 4 (1): 1–16. PMID 10197030.
^ Giannessi F, Chiodi P, Marzi M, Minetti P, Pessotto P, De Angelis F, Tassoni E, Conti R, Giorgi F, Mabilia M, Dell'Uomo N, Muck S, Tinti MO, Carminati P, Arduini A (Jul 2001). "Reversible carnitine palmitoyltransferase inhibitors with broad chemical diversity as potential antidiabetic agents". Journal of Medicinal Chemistry. 44 (15): 2383–6. doi:10.1021/jm010889+. PMID 11448219.
^ Wagman AS, Nuss JM (Apr 2001). "Current therapies and emerging targets for the treatment of diabetes". Current Pharmaceutical Design. 7 (6): 417–50. doi:10.2174/1381612013397915. PMID 11281851.
Chase JF, Pearson DJ, Tubbs PK (Jan 1965). "The Preparation of Crystalline Carnitine Acetyltransferase". Biochimica et Biophysica Acta (BBA) - Nucleic Acids and Protein Synthesis. 96: 162–5. doi:10.1016/0005-2787(65)90622-2. PMID 14285260.
Friedman S, Fraenkel G (Dec 1955). "Reversible enzymatic acetylation of carnitine". Archives of Biochemistry and Biophysics. 59 (2): 491–501. doi:10.1016/0003-9861(55)90515-4. PMID 13275966.
Miyazawa S, Ozasa H, Furuta S, Osumi T, Hashimoto T (Feb 1983). "Purification and properties of carnitine acetyltransferase from rat liver". Journal of Biochemistry. 93 (2): 439–51. doi:10.1093/oxfordjournals.jbchem.a134198. PMID 6404901.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Carnitine_O-acetyltransferase&oldid=1053697191"
|
Initial condition - Wikipedia
Parameter in differential equations and dynamical systems
The initial condition of a vibrating string
Evolution from the initial condition
In mathematics and particularly in dynamic systems, an initial condition, in some contexts called a seed value,[1]: pp. 160 is a value of an evolving variable at some point in time designated as the initial time (typically denoted t = 0). For a system of order k (the number of time lags in discrete time, or the order of the largest derivative in continuous time) and dimension n (that is, with n different evolving variables, which together can be denoted by an n-dimensional coordinate vector), generally nk initial conditions are needed in order to trace the system's variables forward through time.
Linear system[edit]
Discrete time[edit]
A linear matrix difference equation of the homogeneous (having no constant term) form
{\displaystyle X_{t+1}=AX_{t}}
has closed form solution
{\displaystyle X_{t}=A^{t}X_{0}}
predicated on the vector
{\displaystyle X_{0}}
of initial conditions on the individual variables that are stacked into the vector;
{\displaystyle X_{0}}
is called the vector of initial conditions or simply the initial condition, and contains nk pieces of information, n being the dimension of the vector X and k = 1 being the number of time lags in the system. The initial conditions in this linear system do not affect the qualitative nature of the future behavior of the state variable X; that behavior is stable or unstable based on the eigenvalues of the matrix A but not based on the initial conditions.
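The closed-form solution can be sketched numerically; the 2×2 matrix A and the initial condition vector below are hypothetical values chosen for illustration:

```python
import numpy as np

# Hypothetical 2x2 system X_{t+1} = A X_t with initial condition X_0.
A = np.array([[0.5, 0.1],
              [0.2, 0.4]])
X0 = np.array([1.0, 2.0])     # vector of initial conditions (n = 2, k = 1)

# Iterate the recursion for t steps...
t = 10
X_iter = X0.copy()
for _ in range(t):
    X_iter = A @ X_iter

# ...and compare with the closed-form solution X_t = A^t X_0.
X_closed = np.linalg.matrix_power(A, t) @ X0
assert np.allclose(X_iter, X_closed)

# Stability depends on the eigenvalues of A, not on X_0:
# here every |lambda| < 1, so X_t -> 0 for any initial condition.
assert max(abs(np.linalg.eigvals(A))) < 1
```

Changing `X0` changes the trajectory but not its qualitative behavior, which is fixed by the eigenvalues of `A`.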
A linear scalar equation of order k has the form
{\displaystyle x_{t}=a_{1}x_{t-1}+a_{2}x_{t-2}+\cdots +a_{k}x_{t-k}.}
Here the dimension is n = 1 and the order is k, so the necessary number of initial conditions to trace the system through time, either iteratively or via closed form solution, is nk = k. Again the initial conditions do not affect the qualitative nature of the variable's long-term evolution. The solution of this equation is found by using its characteristic equation
{\displaystyle \lambda ^{k}-a_{1}\lambda ^{k-1}-a_{2}\lambda ^{k-2}-\cdots -a_{k-1}\lambda -a_{k}=0}
to obtain the latter's k solutions, which are the characteristic values
{\displaystyle \lambda _{1},\dots ,\lambda _{k},}
for use in the solution equation
{\displaystyle x_{t}=c_{1}\lambda _{1}^{t}+\cdots +c_{k}\lambda _{k}^{t}.}
Here the constants
{\displaystyle c_{1},\dots ,c_{k}}
are found by solving a system of k different equations based on this equation, each using one of the k different values of t for which the specific initial condition
{\displaystyle x_{t}}
is known.
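This procedure can be sketched numerically for a hypothetical order-2 equation (the coefficients and initial conditions below are illustrative):

```python
import numpy as np

# Hypothetical order-2 example: x_t = 1.5*x_{t-1} - 0.5*x_{t-2},
# so k = 2 initial conditions are needed.
a1, a2 = 1.5, -0.5
x0, x1 = 2.0, 1.5                       # the two initial conditions

# Characteristic equation: lambda^2 - a1*lambda - a2 = 0.
lam = np.roots([1.0, -a1, -a2])         # characteristic values

# Solve for c_1, c_2 from x_0 = c1 + c2 and x_1 = c1*lam1 + c2*lam2.
V = np.vander(lam, 2, increasing=True).T
c = np.linalg.solve(V, np.array([x0, x1]))

def closed_form(t):
    return float(np.real(c @ lam ** t))

# The closed form must reproduce the iterated recursion.
xs = [x0, x1]
for t in range(2, 10):
    xs.append(a1 * xs[-1] + a2 * xs[-2])
assert all(abs(closed_form(t) - xs[t]) < 1e-9 for t in range(10))
```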
Continuous time[edit]
The continuous-time analogue is the linear matrix differential equation of the homogeneous form
{\displaystyle {\frac {dX}{dt}}=AX.}
Its behavior through time can be traced with a closed form solution conditional on an initial condition vector
{\displaystyle X_{0}}
. The number of required initial pieces of information is the dimension n of the system times the order k = 1 of the system, or n. The initial conditions do not affect the qualitative behavior (stable or unstable) of the system.
A linear scalar differential equation of order k has the form
{\displaystyle {\frac {d^{k}x}{dt^{k}}}+a_{k-1}{\frac {d^{k-1}x}{dt^{k-1}}}+\cdots +a_{1}{\frac {dx}{dt}}+a_{0}x=0.}
Here the number of initial conditions necessary for obtaining a closed form solution is the dimension n = 1 times the order k, or simply k. In this case the k initial pieces of information will typically not be different values of the variable x at different points in time, but rather the values of x and its first k – 1 derivatives, all at some point in time such as time zero. The initial conditions do not affect the qualitative nature of the system's behavior. The characteristic equation of this dynamic equation is
{\displaystyle \lambda ^{k}+a_{k-1}\lambda ^{k-1}+\cdots +a_{1}\lambda +a_{0}=0,}
whose solutions are the characteristic values
{\displaystyle \lambda _{1},\dots ,\lambda _{k};}
these are used in the solution equation
{\displaystyle x(t)=c_{1}e^{\lambda _{1}t}+\cdots +c_{k}e^{\lambda _{k}t}.}
This equation and its first k – 1 derivatives form a system of k equations that can be solved for the k parameters
{\displaystyle c_{1},\dots ,c_{k},}
given the known initial conditions on x and its k – 1 derivatives' values at some time t.
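The continuous-time case can be sketched the same way; the equation x'' + 3x' + 2x = 0 with x(0) = 1, x'(0) = 0 is an illustrative choice, checked against its hand-derived solution:

```python
import numpy as np

# Hypothetical order-2 example: x'' + 3x' + 2x = 0 with x(0)=1, x'(0)=0.
lam = np.roots([1.0, 3.0, 2.0])          # characteristic values: -1 and -2

# The k = 2 initial conditions fix c1, c2 via
#   c1 + c2 = x(0),  c1*lam1 + c2*lam2 = x'(0).
V = np.vstack([np.ones(2), lam])
c = np.linalg.solve(V, np.array([1.0, 0.0]))

def x(t):
    return np.real(c @ np.exp(lam * t))

# Sanity check against the hand-derived solution x(t) = 2e^{-t} - e^{-2t}.
ts = np.linspace(0.0, 5.0, 50)
assert np.allclose([x(t) for t in ts], 2 * np.exp(-ts) - np.exp(-2 * ts))
```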
Another initial condition
Evolution of this initial condition for an example PDE
Empirical laws and initial conditions[edit]
Every empirical law has the disquieting quality that one does not know its limitations. We have seen that there are regularities in the events in the world around us which can be formulated in terms of mathematical concepts with an uncanny accuracy. There are, on the other hand, aspects of the world concerning which we do not believe in the existence of any accurate regularities. We call these initial conditions.[2]
^ Baumol, William J. (1970). Economic Dynamics: An Introduction (3rd ed.). London: Collier-Macmillan. ISBN 0-02-306660-1.
^ Wigner, Eugene P. (1960). "The unreasonable effectiveness of mathematics in the natural sciences. Richard Courant lecture in mathematical sciences delivered at New York University, May 11, 1959". Communications on Pure and Applied Mathematics. 13: 1–14. Bibcode:1960CPAM...13....1W. doi:10.1002/cpa.3160130102. Archived from the original (PDF) on February 12, 2020.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Initial_condition&oldid=1075980363"
|
Power Spectral Density for Bioacousticians | Ecology & Informatics
Power Spectral Density for Bioacousticians
As almost all bioacoustic studies examine frequency in some way or another, it’s important to understand how audio signals are projected from the time domain to the frequency domain. The term frequency domain simply refers to viewing the content of a signal with respect to frequency, as opposed to time. Power spectral density or spectrogram plots are common frequency domain representations of signals. This post will deal with the process of executing this projection and provide context to the interpretation of measures derived from the frequency domain.
As this material can be quite intimidating for biologists, I’ve tried to present the information in a visual manner and write in a less formal language than you’d come across in an acoustics text. Don’t stress about getting lost in the formulas as it may take a few readings before it sinks in (at least it did for me).
The discrete Fourier transform (DFT) lies at the heart of signal processing. Any periodic signal with \(N\) samples can be reconstructed using \(N\) weighted sine functions across a spectrum of frequencies. This allows us to project time domain information into the frequency domain. If \( \mathbf{x} = \{x_0 , x_1 , \dots , x_{N-1} \} \) is a vector of pressure measurements over time from some recorded signal, the discrete Fourier transform can be written as:

\[ X_k = \sum_{n=0}^{N-1} x_n \cdot e^{-i 2 \pi n \frac{k}{N}} \]
This may look a bit abstract to those who are new to the concept. A more intuitive way to write this is to apply Euler’s formula \(e^{ix} = \cos(x) + i \cdot \sin(x)\) and rewrite it with respect to trigonometric functions you’re likely more familiar with.
Here, the term \(k \in [0, N-1]\) determines the frequency being examined. \(k\) can be converted to hertz with \(f = k \cdot \frac{\text{Fs}}{N}\), where \(\text{Fs}\) is the sample rate of the signal. The image below provides a visual breakdown of the components of the discrete Fourier transform when applied to a 100Hz pure tone.
We’ve shown the DFT with respect to two
k
values: 100Hz and 33.3Hz. The top panels show the pure tone in the time domain and the circular domain the DFT operates in. In the middle and lower plots, blue lines show the measured signal,
\color{blue}x_n
. Shaded green and red areas represent \( \color{green} x_n \cdot \cos \left( 2 \pi n \frac{k}{N} \right) \) and \( \color{red} -x_n \cdot \sin \left( 2 \pi n \frac{k}{N} \right) \) which are averaged and combined into a complex value where the complex modulus and argument show the amplitude and phase of the sine wave at the given
k
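A minimal numpy sketch of the transform described above, applied to a pure tone (the sample rate and length are illustrative choices, not values from the figures):

```python
import numpy as np

Fs = 1000                              # sample rate in Hz (illustrative)
N = 1000                               # number of samples (one second)
n = np.arange(N)
x = np.sin(2 * np.pi * 100 * n / Fs)   # 100 Hz pure tone

def dft(x, k):
    """X_k = sum_n x_n * exp(-i 2 pi n k / N)."""
    N = len(x)
    n = np.arange(N)
    return np.sum(x * np.exp(-2j * np.pi * n * k / N))

X = np.array([dft(x, k) for k in range(N)])
assert np.allclose(X, np.fft.fft(x))   # the naive DFT matches the FFT

# Converting k to hertz with f = k * Fs / N, the modulus |X_k|
# peaks at the tone's frequency.
peak_k = int(np.argmax(np.abs(X[:N // 2])))
assert peak_k * Fs / N == 100.0
```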
The complex moduli of the resulting values in \(X_k\) represent the spectral density of each frequency in the signal. If we plot the modulus of \(X_k\) from the DFT above and convert our \(k\) values to Hz, we may see something like the following spectral density plots.
The mirrored values on the two-sided spectrum are a result of the conjugate symmetric property of the DFT when analysing \(k\) values beyond the Nyquist frequency. The one-sided spectrum can be calculated by simply removing the negative frequencies and normalizing the remaining y-axis values by multiplying them by 2.
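A sketch of that one-sided calculation (tone and sample rate are illustrative; note the DC and Nyquist bins have no mirrored counterpart and are not doubled):

```python
import numpy as np

Fs, N = 1000, 1000
n = np.arange(N)
x = np.sin(2 * np.pi * 100 * n / Fs)   # unit-amplitude 100 Hz tone

X = np.fft.fft(x)
amp_two_sided = np.abs(X) / N          # normalized two-sided amplitudes
one_sided = 2 * amp_two_sided[:N // 2 + 1]
one_sided[0] /= 2                      # DC bin has no mirror image...
if N % 2 == 0:
    one_sided[-1] /= 2                 # ...and neither does Nyquist (even N)

freqs = np.arange(N // 2 + 1) * Fs / N
# The full tone amplitude (1.0) is recovered at 100 Hz.
assert abs(one_sided[freqs == 100.0][0] - 1.0) < 1e-9
```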
It’s important to note that as the Fourier transform works in a circular domain, it assumes periodic signals. This is an assumption that is never met in practice and simply means that the signal consists of a full cycle of a repeating pattern. To be clear, the figure below provides an illustrated guide to what we mean by periodicity.
When we apply the DFT to our non-periodic 100Hz tone, the negative values of the signal are over-represented. This causes the incorrect attribution of spectral energy to frequencies outside 100Hz.
As mentioned, measuring real signals inevitably results in violating the DFT assumption of periodicity. To reduce the effects of spectral leakage on our spectral density estimates, we can apply windowing and averaging.
Windowing reduces spectral leakage by applying an envelope function to our measured signal which tapers the amplitude toward the start and end of the signal. This results in a more periodic signal at the cost of altering the information within the signal, hence the choice of window function is important. Two common choices are the Hann and Hamming windows. Hann windows are designed to minimize spectral leakage across the whole spectrum, while Hamming windows offer less of a reduction in spectral leakage across the spectrum but provide more precise estimates of those frequencies present in the signal. The choice of window type needs to be determined with respect to the purpose of the analysis.
The figure above shows DFT results with and without the application of a Hann window. You may have noticed that the areas under each line in the DFT plot are equal. This is termed Parseval’s theorem. When we apply a DFT to a signal, the resulting energy projected onto the frequency domain must be equal to the energy found in the original time domain. Hence, high degrees of spectral leakage can lead to underestimates of the true spectral densities of frequencies (peaks in the DFT will be underestimated more severely) as the spectral energy from the true frequency is incorrectly attributed to other frequencies.
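The effect can be checked numerically. In this sketch a hypothetical 100.5 Hz tone is deliberately non-periodic over a one-second window, and leakage is measured as the fraction of spectral energy falling more than two bins from the peak:

```python
import numpy as np

Fs, N = 1000, 1000
n = np.arange(N)
x = np.sin(2 * np.pi * 100.5 * n / Fs)   # falls between DFT bins -> leakage

def power_spectrum(sig):
    return np.abs(np.fft.rfft(sig)) ** 2

def leakage(power):
    k = int(np.argmax(power))
    near = power[max(k - 2, 0):k + 3].sum()   # energy within +/-2 bins of peak
    return 1.0 - near / power.sum()

rect = power_spectrum(x)                  # no window (rectangular)
hann = power_spectrum(x * np.hanning(N))  # Hann-windowed

# The Hann window concentrates energy near the true frequency.
assert leakage(hann) < leakage(rect)
```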
As a consequence of windowing, information located near the beginning or end of our signal is going to be under-represented in the spectral density estimates. With ideal stationary signals this isn’t much of a concern, but when dealing with noisy, non-stationary signals this needs to be addressed. A popular routine to control for this problem is:
Split the signal into multiple small windows which overlap by 50%.
Apply a window function to each of these window segments to reduce the spectral leakage.
Average the DFT results across all window segments.
This protocol for estimating spectral density is termed Welch’s method and is the kernel of all power spectral density (PSD) and spectrogram plots reported in the bioacoustic literature.
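The three steps above are exactly what library implementations of Welch's method perform; a sketch using scipy (segment length, overlap, and the noisy test signal are illustrative choices):

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
Fs = 1000
t = np.arange(0, 10, 1 / Fs)                      # ten seconds of signal
x = np.sin(2 * np.pi * 100 * t) + 0.5 * rng.standard_normal(t.size)

# Hann-windowed 256-sample segments with 50% overlap, DFTs averaged.
f, psd = welch(x, fs=Fs, window='hann', nperseg=256, noverlap=128)

# The PSD peaks at the 100 Hz tone, within one frequency bin (Fs/256 Hz).
peak = f[np.argmax(psd)]
assert abs(peak - 100) < Fs / 256
```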
By choosing smaller window sizes, you're reducing the number of samples in the DFTs and thus reducing the frequency resolution of the resulting PSD plot. However, smaller window sizes allow averaging across more window segments, which results in a smoother plot providing a more accurate description of the spectral power. Converting our spectral density estimates to PSD values is simple, as we just need to convert from pressure (a root-power quantity) to power by squaring. We can then report our PSD values as decibels with \( 10 \log_{10} \left( \frac{P}{P_0} \right) \), where \(P\) and \(P_0\) represent your measured and reference power values, respectively. If we want to represent our measured signal as a spectrogram rather than a PSD plot, instead of averaging our PSD values over time we simply calculate the PSD values for each window segment and plot them with respect to time and frequency.
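The decibel conversion itself is a one-liner; since power is the square of pressure, 10·log10 of a power ratio equals 20·log10 of the corresponding pressure ratio:

```python
import numpy as np

# A pressure ratio of 10 corresponds to a power ratio of 100,
# i.e. a level of 20 dB relative to the reference.
pressure_ratio = 10.0
P, P0 = pressure_ratio ** 2, 1.0

db = 10 * np.log10(P / P0)
assert db == 20 * np.log10(pressure_ratio) == 20.0
```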
|
Covariance/Citable Version - Citizendium
This version approved either by the Approvals Committee, or an Editor from the listed workgroup. The Mathematics Workgroup is responsible for this citable version. While we have done conscientious work, we cannot guarantee that this version is wholly free of mistakes. See here (not History) for authorship.
The covariance — usually denoted as Cov — is a statistical parameter used to compare two real random variables on the same sample space (more precisely, the same probability space).
It is defined as the expectation (or mean value) of the product of the deviations (from their respective mean values) of the two variables.
The sign of the covariance indicates a linear trend between the two variables.
If one variable increases (in the mean) with the other, then the covariance is positive.
It is negative if one variable tends to decrease when the other increases.
If it is 0 then there is no linear correlation between the two variables.
In particular, this is the case for stochastically independent variables. But the converse is not true because there may still be other – nonlinear – dependencies.
The value of the covariance is scale-dependent and therefore does not show how strong the correlation is. For this purpose a normed version of the covariance is used — the correlation coefficient which is independent of scale.
The covariance of two real random variables X and Y with expectation (mean value)
{\displaystyle \mathrm {E} (X)=\mu _{X}\quad {\text{and}}\quad \mathrm {E} (Y)=\mu _{Y}}
is defined as
{\displaystyle \operatorname {Cov} (X,Y):=\mathrm {E} ((X-\mu _{X})(Y-\mu _{Y}))=\mathrm {E} (XY)-\mathrm {E} (X)\mathrm {E} (Y)}
If the two random variables are the same then their covariance is equal to the variance of the single variable: Cov(X,X) = Var(X).
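A quick numerical sketch of the definition, on synthetic data generated for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=1000)
y = 2 * x + rng.normal(size=1000)   # increases with x, so Cov(X,Y) > 0

# Cov(X,Y) = E((X - mu_X)(Y - mu_Y)) = E(XY) - E(X)E(Y)
cov = np.mean((x - x.mean()) * (y - y.mean()))
assert np.isclose(cov, np.mean(x * y) - x.mean() * y.mean())

# Cov(X,X) = Var(X), and the positive sign reflects the linear trend.
assert np.isclose(np.mean((x - x.mean()) ** 2), np.var(x))
assert cov > 0
```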
In a more general context of probability theory the covariance is a second-order central moment of the two-dimensional random variable (X,Y), often denoted as μ11.
For a finite set of data
{\displaystyle (x_{i},y_{i})\in \mathbb {R} ^{2}\ {\text{with}}\ i=1,\dots ,n}
the covariance is given by
{\displaystyle {1 \over n}\sum _{i=1}^{n}(x_{i}-{\overline {x}})(y_{i}-{\overline {y}})\qquad {\text{where}}\ {\overline {x}}:={1 \over n}\sum _{i=1}^{n}x_{i}\ {\text{and}}\ {\overline {y}}:={1 \over n}\sum _{i=1}^{n}y_{i}}
or, using a convenient notation
{\displaystyle [a_{i}]:=\sum _{i=1}^{n}a_{i}}
introduced by Gauss, by
{\displaystyle {1 \over n}\left([x_{i}y_{i}]-{1 \over n}[x_{i}][y_{i}]\right)}
This is equivalent to taking the uniform distribution where each item (xi,yi) has probability 1/n.
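The two equivalent forms of the sample covariance can be checked numerically; a minimal sketch with NumPy (the data values are made up):

```python
import numpy as np

x = np.array([1.0, 2.0, 4.0, 7.0])
y = np.array([2.0, 3.0, 5.0, 10.0])
n = len(x)

# Deviation form: (1/n) * sum((x_i - xbar) * (y_i - ybar))
cov_dev = np.mean((x - x.mean()) * (y - y.mean()))

# Gauss-bracket form: (1/n) * ([x_i y_i] - (1/n) [x_i][y_i])
cov_bracket = (np.sum(x * y) - np.sum(x) * np.sum(y) / n) / n

assert np.isclose(cov_dev, cov_bracket)
print(cov_dev)  # 7.0 for this data
```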
The expectation of the covariance of a random sample — taken from a probability distribution — depends on the size n of the sample and is slightly smaller than the covariance of the distribution.
An unbiased estimate of the covariance is
{\displaystyle \mathrm {Cov} (X,Y)={n \over n-1}\mathrm {Cov} (x_{i},y_{i})={1 \over n-1}\sum _{i=1}^{n}(x_{i}-{\overline {x}})(y_{i}-{\overline {y}})}
The distinction between the covariance of a sample and the estimated covariance of the distribution is not always clearly made. This explains why one finds both formulae for the covariance — that taking the mean with " 1 / n " and that with " 1 / (n-1) " instead.
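The two conventions can be compared directly; a sketch with NumPy, whose `np.cov` defaults to the unbiased 1/(n−1) estimator (`ddof=1`):

```python
import numpy as np

x = np.array([1.0, 2.0, 4.0, 7.0])
y = np.array([2.0, 3.0, 5.0, 10.0])
n = len(x)

# Covariance of the sample itself: divide by n
cov_sample = np.mean((x - x.mean()) * (y - y.mean()))

# Unbiased estimate of the distribution's covariance: divide by n - 1
cov_unbiased = cov_sample * n / (n - 1)

# NumPy's np.cov uses the 1/(n-1) convention by default
assert np.isclose(cov_unbiased, np.cov(x, y)[0, 1])
```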
The covariance is
(1) symmetric
(2) bilinear
(3) positive definite
{\displaystyle {\text{(1)}}\ \qquad \operatorname {Cov} (X,Y)=\operatorname {Cov} (Y,X)}
{\displaystyle {\text{(2a)}}\qquad \operatorname {Cov} (aX_{1}+bX_{2},Y)=a\cdot \operatorname {Cov} (X_{1},Y)+b\cdot \operatorname {Cov} (X_{2},Y)}
{\displaystyle {\text{(2b)}}\qquad \operatorname {Cov} (X,aY_{1}+bY_{2})=a\cdot \operatorname {Cov} (X,Y_{1})+b\cdot \operatorname {Cov} (X,Y_{2})}
{\displaystyle {\text{(3)}}\ \qquad \operatorname {Cov} (X,X)\geq 0\qquad {\text{and}}\qquad \operatorname {Cov} (X,X)=0\Leftrightarrow X=\mu _{X}\ {\text{almost surely}}}
Since the covariance cannot distinguish between random variables X1 and X2 that have the same deviation (i.e., X1 − E(X1) = X2 − E(X2) holds almost surely), it does not define an inner product for random variables, but only for random variables with mean 0 or, equivalently, for the deviations.
Retrieved from "https://citizendium.org/wiki/index.php?title=Covariance/Citable_Version&oldid=740529"
|
Mathematical Methods for Data Analysis Study Notes (1): Introduction - 咖啡与代码
Vocabulary for this chapter
quantitative 定量的
alternative 选择性的; 替代选择,可供选择的事物
criterion 准则
replicas 复制品
restrictions 限制
deterministic 确定性的
stochastic 随机的
probabilistic 概率性的
The body of knowledge involving quantitative approaches to decision making is referred to as
Operations Research (运筹学)
Decision Science (决策科学)
7 Steps of Problem Solving (First 5 steps are the process of decision making):
Determine the set of alternative solutions.
Determine the criteria for evaluating alternatives.
Choose an alternative (make a decision).
Decision-Making Process (5 of the 7 steps)
Problems in which the objective is to find the best solution with respect to one criterion are referred to as single-criterion decision problems (单准则决策问题).
Problems that involve more than one criterion are referred to as multicriteria decision problems (多准则决策问题).
Analysis Phase of Decision-Making Process
Qualitative Analysis (定性分析)
Quantitative Analysis (定量分析)
The role of qualitative and quantitative analysis:
Models are representations of real objects or situations.
Three forms of models are:
Iconic models (图像模型) - physical replicas (scalar representations) of real objects
Analog models (模拟模型) - physical in form, but do not physically resemble the object being modeled
Mathematical models (数学模型) - represent real world problems through a system of mathematical formulas and expressions based on key assumptions, estimates, or statistical analyses
Objective Function (目标函数): a mathematical expression that describes the problem’s objective, such as maximizing profit or minimizing cost.
Constraints (约束条件): a set of restrictions or limitations, such as production capacities.
Uncontrollable Inputs (不可控输入): environmental factors that are not under the control of the decision maker
Decision Variables (决策变量): controllable inputs; decision alternatives specified by the decision maker, such as the number of units of a product to produce.
A complete mathematical model for a simple production problem is:
\begin{array}{cl}{\text { Maximize }} & {10 x \text { (objective function) }} \\ {\text { subject to: }} & {5 x \leq 40 \quad(\text { constraint })} \\ {} & {x \geq 0 \quad(\text { constraint })}\end{array}
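The model above can be solved mechanically; a minimal sketch with SciPy's `linprog` (which minimizes, so the objective is negated):

```python
from scipy.optimize import linprog

# Maximize 10x  <=>  minimize -10x, subject to 5x <= 40 and x >= 0
res = linprog(c=[-10], A_ub=[[5]], b_ub=[40], bounds=[(0, None)])

print(res.x[0])   # optimal decision: x = 8
print(-res.fun)   # optimal objective value: 80
```

Here the single constraint 5x ≤ 40 binds, so the optimum sits at x = 8 with objective value 80.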
The flowchart for the production model:
Deterministic Model (确定性模型) – if all uncontrollable inputs to the model are known and cannot vary.
Stochastic (or Probabilistic) Model (随机(或概率)模型) – if any uncontrollable inputs are uncertain and subject to variation.
Cost/benefit considerations must be made in selecting an appropriate mathematical model.
Frequently a less complicated (and perhaps less precise) model is more appropriate than a more complex and accurate one, due to cost and ease of solution considerations.
The process of transforming model inputs into output:
Data preparation is not a trivial step, due to the time required and the possibility of data collection errors.
The analyst attempts to identify the alternative (the set of decision variable values) that provides the “best” output for the model.
The “best” output is the optimal (最佳的) solution.
If the alternative does not satisfy all of the model constraints, it is rejected as being infeasible (不可实行的), regardless of the objective function value.
If the alternative satisfies all of the model constraints, it is feasible (满足的) and a candidate (候选) for the “best” solution.
Often, goodness/accuracy of a model cannot be assessed until solutions are generated.
Small test problems having known, or at least expected, solutions can be used for model testing and validation.
If the model generates expected solutions, use the model on the full-scale problem.
If inaccuracies or potential shortcomings inherent in the model are identified, take corrective action such as:
Collection of more-accurate input data
A managerial report, based on the results of the model, should be prepared.
The report should be easily understood by the decision maker and should include:
the recommended decision
other pertinent information about the results (for example, how sensitive the model solution is to the assumptions and data used in the model)
The manager must oversee the implementation and follow-up evaluation of the decision.
The continued monitoring of the model’s performance might lead to model expansion or refinement.
Because implementation often requires people to change the way they do things, it often meets with resistance.
To help ensure successful implementation, include users throughout the modeling process.
Linear Programming (线性规划)
Integer Linear Programming (整数线性规划)
Project Scheduling: PERT/CPM - PERT (Program Evaluation and Review Technique 统筹法), CPM (Critical Path Method 关键路径方法)
Inventory Models (库存模型)
Waiting Line or Queueing Models (排队模型)
Simulation (模拟)
Forecasting (预测)
Markov-Process Models (马尔可夫过程模型)
Distribution/Network Models (网络模型)
|
Hyperelliptic curve - Knowpia
In algebraic geometry, a hyperelliptic curve is an algebraic curve of genus g > 1, given by an equation of the form
Fig. 1. A hyperelliptic curve
{\displaystyle y^{2}+h(x)y=f(x)}
where f(x) is a polynomial of degree n = 2g + 1 > 4 or n = 2g + 2 > 4 with n distinct roots, and h(x) is a polynomial of degree < g + 2 (if the characteristic of the ground field is not 2, one can take h(x) = 0).
A hyperelliptic function is an element of the function field of such a curve, or of the Jacobian variety on the curve; these two concepts are identical for elliptic functions, but different for hyperelliptic functions.
Fig. 1 is the graph of
{\displaystyle C:y^{2}=f(x)}
{\displaystyle f(x)=x^{5}-2x^{4}-7x^{3}+8x^{2}+12x=x(x+1)(x-3)(x+2)(x-2).}
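The factorization and the genus can be verified with SymPy; a sketch (the roots must be distinct, and an odd degree n = 2g + 1 gives g = (n − 1)/2):

```python
import sympy as sp

x = sp.symbols('x')
f = x**5 - 2*x**4 - 7*x**3 + 8*x**2 + 12*x

# The five roots are distinct, as required for a hyperelliptic curve
roots = sp.roots(f)
assert sorted(roots) == [-2, -1, 0, 2, 3]
assert all(m == 1 for m in roots.values())

# Degree n = 5 = 2g + 1 (odd case), so the genus is g = (n - 1) / 2 = 2
g = (sp.degree(f) - 1) // 2
print(g)  # 2
```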
Genus of the curve
The degree of the polynomial determines the genus of the curve: a polynomial of degree 2g + 1 or 2g + 2 gives a curve of genus g. When the degree is equal to 2g + 1, the curve is called an imaginary hyperelliptic curve. Meanwhile, a curve of degree 2g + 2 is termed a real hyperelliptic curve. This statement about genus remains true for g = 0 or 1, but those curves are not called "hyperelliptic". Rather, the case g = 1 (if we choose a distinguished point) is an elliptic curve. Hence the terminology.
Formulation and choice of model
While this model is the simplest way to describe hyperelliptic curves, such an equation will have a singular point at infinity in the projective plane. This feature is specific to the case n > 3. Therefore, in giving such an equation to specify a non-singular curve, it is almost always assumed that a non-singular model (also called a smooth completion), equivalent in the sense of birational geometry, is meant.
To be more precise, the equation defines a quadratic extension of C(x), and it is that function field that is meant. The singular point at infinity can be removed (since this is a curve) by the normalization (integral closure) process. It turns out that after doing this, there is an open cover of the curve by two affine charts: the one already given by
{\displaystyle y^{2}=f(x)}
and another one given by
{\displaystyle w^{2}=v^{2g+2}f(1/v).}
The glueing maps between the two charts are given by
{\displaystyle (x,y)\mapsto (1/x,y/x^{g+1})}
{\displaystyle (v,w)\mapsto (1/v,w/v^{g+1}),}
wherever they are defined.
In fact geometric shorthand is assumed, with the curve C being defined as a ramified double cover of the projective line, the ramification occurring at the roots of f, and also for odd n at the point at infinity. In this way the cases n = 2g + 1 and 2g + 2 can be unified, since we might as well use an automorphism of the projective plane to move any ramification point away from infinity.
Using Riemann–Hurwitz formula
Using the Riemann–Hurwitz formula, the hyperelliptic curve with genus g is defined by an equation with degree n = 2g + 2. Suppose f : X → P1 is a branched covering with ramification degree 2, where X is a curve with genus g and P1 is the Riemann sphere. Let g1 = g and g0 be the genus of P1 ( = 0 ), then the Riemann-Hurwitz formula turns out to be
{\displaystyle 2-2g_{1}=2(2-2g_{0})-\sum _{s\in X}(e_{s}-1)}
where the sum runs over all ramified points s on X. The number of ramified points is n, so n = 2g + 2.
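Explicitly, with g₀ = 0, g₁ = g, and e_s = 2 at each of the n ramification points (so e_s − 1 = 1), the formula gives:

```latex
2 - 2g_{1} = 2(2 - 2g_{0}) - \sum_{s\in X}(e_{s}-1)
\;\Longrightarrow\;
2 - 2g = 4 - n
\;\Longrightarrow\;
n = 2g + 2 .
```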
All curves of genus 2 are hyperelliptic, but for genus ≥ 3 the generic curve is not hyperelliptic. This is seen heuristically by a moduli space dimension check. Counting constants, with n = 2g + 2, the collection of n points subject to the action of the automorphisms of the projective line has (2g + 2) − 3 degrees of freedom, which is less than 3g − 3, the number of moduli of a curve of genus g, unless g is 2. Much more is known about the hyperelliptic locus in the moduli space of curves or abelian varieties, though it is harder to exhibit general non-hyperelliptic curves with simple models.[1] One geometric characterization of hyperelliptic curves is via Weierstrass points. More detailed geometry of non-hyperelliptic curves is read from the theory of canonical curves, the canonical mapping being 2-to-1 on hyperelliptic curves but 1-to-1 otherwise for g > 2. Trigonal curves are those that correspond to taking a cube root, rather than a square root, of a polynomial.
The definition by quadratic extensions of the rational function field works for fields in general except in characteristic 2; in all cases the geometric definition as a ramified double cover of the projective line is available, if the cover is assumed to be separable.
Hyperelliptic curves can be used in hyperelliptic curve cryptography for cryptosystems based on the discrete logarithm problem.
Hyperelliptic curves also appear composing entire connected components of certain strata of the moduli space of Abelian differentials.[2]
Hyperellipticity of genus-2 curves was used to prove Gromov's filling area conjecture in the case of fillings of genus = 1.
Hyperelliptic curves of given genus g have a moduli space, closely related to the ring of invariants of a binary form of degree 2g+2.
Hyperelliptic functions were first published by Adolph Göpel (1812-1847) in his last paper Abelsche Transcendenten erster Ordnung (Abelian transcendents of first order) (in Journal für reine und angewandte Mathematik, vol. 35, 1847). Independently Johann G. Rosenhain worked on that matter and published Umkehrungen ultraelliptischer Integrale erster Gattung (in Mémoires des savants etc., vol. 11, 1851).
"Hyper-elliptic curve", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
A user's guide to the local arithmetic of hyperelliptic curves
^ Poor, Cris (1996). "Schottky's form and the hyperelliptic locus". Proceedings of the American Mathematical Society. 124 (7): 1987–1991. doi:10.1090/S0002-9939-96-03312-6. MR 1327038.
^ Kontsevich, Maxim; Zorich, Anton (2003). "Connected components of the moduli spaces of Abelian differentials with prescribed singularities". Inventiones Mathematicae. 153 (3): 631–678. arXiv:math.GT/0201292. Bibcode:2003InMat.153..631K. doi:10.1007/s00222-003-0303-x. S2CID 14716447.
|
To Leonard Darwin 11 September [1876]1
My dear son Leonard.
I have dreadful news to tell you. You will have heard of Amy’s safe confinement.2 Everything went on well for about 48 hours & then she was seized with convulsion. These lasted for about 18 hours, accompanied by other very bad symptoms. Yesterday afternoon the Doctors thought her sinking, but to everyone’s surprise she lasted till this morning & I saw her expire at about 1/4 to seven o clock this morning. She was quite unconscious all the time & never suffered pain or knew, thank God, that she was leaving for ever her husband. But this is miserably poor consolation.
I know how much you were attached to her & how strong a friendship she had for you.—3 I think she was the most gentle & sweet creature I ever knew. God knows what will become of Frank— his life will be a mere wreck. He seemed quite bewildered & dazed yesterday. Your mother keeps up her strength pretty well & has just gone over to see about the Baby.
Poor Bessy4 is quite overwhelmed by the dreadful blow.
My dear Son | Your affectionate Father | Charles Darwin
It is just decided that she shall be buried in Wales, for which I am sorry.—5
The year is established by the reference to the death of Amy Darwin, Francis Darwin’s wife, who died in 1876 (ODNB s.v. Darwin, Francis).
Amy and Francis Darwin’s son, Bernard Darwin, was born on 7 September 1876 (ODNB).
CD’s sons and Amy’s brothers had met at Clapham Grammar School (B. Darwin 1955, p. 53).
Amy was buried at Holy Trinity Church, Corris, near Machynlleth, about five miles from her family’s home, Pantlludw.
Informs LD of the death of Francis Darwin’s wife, Amy.
|
Willful Blindness and Dishonesty
Sergio Da Silva1*, Raul Matsushita2, Thayana Gonçalves1
Willful blindness refers to situations where people choose not to look or not to question. We investigate the relationship between willful blindness and honesty using a sample of random participants to respond to two questionnaires. We contrast the responses reported with the intrinsic dishonesty of the group (the extent to which people lie when they are assured they cannot be caught). To measure intrinsic honesty, we conduct a die-in-a-cup task. Then, we build indices of perceived honesty and willful blindness that take into account intrinsic honesty. After comparing the indices, we find an inverse correlation between willful blindness and honesty. Thus, our sample suggests we cannot dismiss that those who exhibit more willful blindness are also more dishonest.
Willful Blindness, Honesty, Die-in-a-Cup Task, Intrinsic Honesty, Perceived Honesty
Silva, S. , Matsushita, R. and Gonçalves, T. (2019) Willful Blindness and Dishonesty. Open Access Library Journal, 6, 1-6. doi: 10.4236/oalib.1105953.
Deceit and self-deception are ubiquitous in both animal and human groups [1]. Though people like to think of themselves as honest, dishonesty pays. Thus, people may behave dishonestly enough to profit, but honestly enough to delude themselves of their own integrity [2]. Therefore, the degree of lying depends on the extent to which self-justifications are available [3]. Cheating and intrinsic dishonesty―that is, the extent to which people lie when they are assured they cannot be caught―are contextually dependent [4] [5]. Here, we focus on one particular context influencing honesty: willful blindness [6]. Willful blindness refers to situations where people choose not to look or not to question. However, people are responsible if they could have known, and should have known, something which instead they tried not to see [6]. Willful blindness can ruin private lives and bring down corporations [6]. It is not straightforward to equate willful blindness to dishonesty because not every situation is black and white with a clear-cut “ethical” answer. For this reason, here we investigate whether people who exhibit more willful blindness are also more dishonest.
We consider a die-in-a-cup task [7] to measure the intrinsic honesty of a group of volunteers, described in Section 2. Then, we apply questionnaires to assess the participants’ perception of honesty and willful blindness, and investigate the relationship between both.
We randomly recruited 101 volunteers from Florianopolis, southern Brazil. We identified each participant’s gender (male or female) and asked their age (whether above 25 or not). We ended up with 48 females and 53 males; 72 participants aged 25 and older, and 29 participants aged below 25 (mean age = 25.8; female mean age = 26.3; male mean age = 25.3). We first applied the die-in-a-cup task to gauge the degree of intrinsic honesty of the group of participants as a whole. The result of this task was used as a benchmark in our analysis, to be contrasted with individual perceptions of honesty, which were evaluated subsequently through a 10-item questionnaire. Finally, participants were asked to rate five vignettes about willful blindness. The responses took no more than five minutes on average, and the dataset is available at Figshare (https://doi.org/10.6084/m9.figshare.7571033.v1).
In the die-in-a-cup task, the experimenter (T.G.) asked the participants to roll a die twice, and report the first roll. They received one Brazilian real (R$ 1) if they reported a one, R$ 2 if they reported a two, and so on. However, a six earned them nothing. The experimenter could not see the results, and money was paid based entirely on what a participant said. The participants could then lie because it was clear they could not be caught. So we were measuring the intrinsic honesty of our sample of participants as a group. If everyone was being honest, the average claim would be R$ 2.50. If everyone was maximally dishonest, it would be R$ 5. If the participants reported the higher of the two rolls, rather than the first one, they were still cheating by bending the rules rather than flagrantly ignoring them. After all, lying depends on the available self-justifications, as observed [3]. In such a situation of “justified dishonesty,” the expected average payoff is R$ 3.47.
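The benchmark payoffs quoted above can be derived exactly; a sketch (the payoff of a reported k is k reais for k ≤ 5 and zero for a six):

```python
from fractions import Fraction

# Payoff in reais for each possible report: 1..5 pay face value, 6 pays nothing
payoff = {k: (k if k <= 5 else 0) for k in range(1, 7)}

# Fully honest: report the first roll
honest = sum(Fraction(payoff[k], 6) for k in range(1, 7))
print(honest)  # 5/2, i.e. R$ 2.50

# "Justified dishonesty": report whichever of the two rolls pays more
justified = Fraction(0)
for a in range(1, 7):
    for b in range(1, 7):
        justified += Fraction(max(payoff[a], payoff[b]), 36)
print(float(justified))  # 125/36 ≈ 3.47
```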
As for the questions of perceived honesty, participants were asked to rate as either “honest,” “dishonest,” or “very dishonest” each one of the ten statements below (they were also allowed not to respond to a question if they so wished):
1) Diverting millions of public money destined to public school meals
2) Using front companies for money laundering
3) Evading taxes
4) Bribing a police officer not to issue a ticket
5) Favoring relatives or friends using power or influence
6) Cutting in the line
7) Legally finding a way to escape paying taxes
8) Using a disabled parking permit for 10 minutes
9) Unduly keeping $2 in change
10) Forging a student I.D. card
To gauge the degree of willful blindness of the participants we exposed them to five situations (some of them real) and asked their verdict.
Situation 1. Let’s say your best friend got a lot of sound equipment for a significantly below-market value. In addition to a lower price, the seller did not provide an invoice for the product. Then, your friend decided to sell the goods, but he was intercepted by policemen who discover that they were stolen. Your friend claimed that he had no idea of the illicit origin of the products and that he did not even know the seller. In this case you would consider that your friend is: ( ) not guilty ( ) a little bit guilty ( ) very guilty ( ) I’d rather not answer
Situation 2. Imagine that you need to sell your property because you have to pay for an expensive emergency surgery. Your good is valued at R$ 200,000. One of the most interested buyers is a famous drug dealer who offers you to pay cash in full. If you will opt to sell the property, how guilty would you feel? ( ) not guilty ( ) a little bit guilty ( ) very guilty ( ) I’d rather not answer
Situation 3. In August 2005, a gang took more than R$ 164.7 million in a robbery of the Central Bank of Brazil in Fortaleza. The next day, the criminals bought 11 vehicles at a dealership totaling approximately R$1 million and paid in cash. In 2007, the owners of this car dealership were judged for not being suspicious of the illicit origin of the money. How guilty do you think the owners are in this case? ( ) not guilty ( ) a little bit guilty ( ) very guilty ( ) I’d rather not answer
Situation 4. Eduardo, a 25-year-old man, had just been robbed in Mexico. With no money to go back home to Brazil, he agrees to drive a vehicle across the border in exchange for 500 U.S. dollars offered by a group of suspected young men. Halfway down the road, he is approached by police officers who discover that the car contains more than 100 kilos of hidden drugs. Eduardo was arrested on drug charges. How guilty do you think he is? ( ) not guilty ( ) a little bit guilty ( ) very guilty ( ) I’d rather not answer
Situation 5. Alberto owns a guesthouse and is being accused of allowing illegal gambling at his premises. The defendant affirms that he had no knowledge of such illicit activity that had been taking place in his establishment. In the face of this, he reinforces his innocence by stating that such knowledge would be essential for the penal relevance of the action. You believe Alberto is: ( ) not guilty ( ) a little bit guilty ( ) very guilty ( ) I’d rather not answer
From the 101 participants, 21 failed to respond to all questions. For the remaining 80 respondents, Cronbach’s alpha for the 10-item perceived honesty questionnaire was 0.84, thus suggesting such items present good internal consistency (or perhaps that they are redundant). Cronbach’s alpha for the five-item willful blindness vignettes was 0.61. If this is lengthened by a factor of two (thus rendering a questionnaire with 10 items), Cronbach’s alpha jumps to 0.76.
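The jump from 0.61 to 0.76 when the questionnaire is "lengthened by a factor of two" is consistent with the Spearman–Brown prophecy formula; a quick check (assuming this standard formula is what was applied):

```python
def spearman_brown(alpha: float, factor: float) -> float:
    """Predicted reliability when a test is lengthened by `factor`."""
    return factor * alpha / (1 + (factor - 1) * alpha)

print(round(spearman_brown(0.61, 2), 2))  # 0.76
```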
As for the intrinsic honesty of the group, the mean of the first roll in the die-in-a-cup task reported by the 80 participants with complete data was 3.5 and the standard deviation was 1.51. Although the mean is above the threshold of justified dishonesty, that is, 3.47, there is no statistical difference between the two values (p-value = 0.86). Thus, the group as a whole cannot be considered as very dishonest because the value 3.5 is well below the threshold of maximal dishonesty, that is, 5. Our finding is not at odds with those in the benchmark study of Fischbacher & Föllmi-Heusi [7], who find in their experiment that not all dishonest participants lie to the fullest extent: a high share of participants reports a 4; only about 20 percent of the participants lie to the fullest extent possible, while 39 percent are fully honest.
The questionnaire of perceived honesty is composed of 10 items, each of them with three possible responses: “honest,” “dishonest,” or “very dishonest.” And the questionnaire of willful blindness has five items, each of them allowing three responses: “not guilty,” “a little bit guilty,” or “very guilty.” Thus, let $H_i$ be a perceived honesty index given by $H_i = \sum_{k=1}^{10} h_{ik}$, where $h_{ik}$ is the response of participant $i$ to the $k$-th item of the questionnaire. Similarly, a willful blindness index $B_i$ is given by $B_i = \sum_{k=1}^{5} b_{ik}$, where $b_{ik}$ is the response of participant $i$ to the $k$-th item.
To use the intrinsic honesty of the group as a whole as a reference, let $R_i$ be the result of the first roll for participant $i$. Thus, the remainder of the division by six, $r_i = R_i \bmod 6$, gauges the payoff in Brazilian real (R$) earned by each participant. Therefore, we expect a negative correlation between $r_i$ and $H_i$, because the rolls of the less honest participants lead to higher payoffs.
After taking the weighted scores h as “honest = 3,” “dishonest = 2” and “very dishonest = 1,” we found a negative linear correlation between the total scores on the 10-item scale (H) and the values from the die-in-a-cup task (that is, −0.18). Then, we assigned the scores b as “not guilty = 1,” “a little bit guilty = 2,” and “very guilty = 3,” and found a correlation of 0.11 between the willful blindness index B and the values from the die-in-a-cup task.
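The scoring and correlation step described above can be sketched as follows (synthetic responses stand in for the actual Figshare dataset; the seed and sample sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic responses: 80 participants, items scored 1..3 as described above
h = rng.integers(1, 4, size=(80, 10))  # perceived honesty items
b = rng.integers(1, 4, size=(80, 5))   # willful blindness vignettes

H = h.sum(axis=1)  # perceived honesty index H_i
B = b.sum(axis=1)  # willful blindness index B_i

# Pearson correlation between the two indices
r = np.corrcoef(H, B)[0, 1]
print(r)
```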
Figure 1 shows the dispersion between the indices of perceived honesty and willful blindness. The solid red line is the conditional mean curve
H|B
obtained by local polynomial regression through the non-parametric LOESS method. The correlation between willful blindness and perceived honesty was −0.326 (p-value = 0.003). So our sample suggests we cannot dismiss that participants who show more willful blindness are also more dishonest. Table 1 summarizes the participants’ scores related to the three tasks.
We investigate the relationship between willful blindness and honesty using a sample of random participants who perform a die-in-a-cup task and respond to two questionnaires. We contrast the responses reported with the intrinsic honesty of the group of participants, as measured by the die rolls. Intrinsic dishonesty refers to the extent to which people lie when they are assured they cannot be caught. Then, we build indices of perceived honesty and willful blindness that take into account intrinsic honesty. After comparing the indices, we find an inverse correlation between willful blindness and honesty. So our sample suggests we cannot dismiss that those who show more willful blindness are also more dishonest.
Figure 1. Negative correlation between the indices of willful blindness and honesty.
Table 1. Participants’ scores.
Financial support from CNPq, Capes and FAPDF is acknowledged.
This experiment is part of a larger project that is registered at Plataforma Brasil (Comissão Nacional de Ética em Pesquisa) under No. 64758617.2.0000.0121.
[1] Trivers, R. (2011) The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life. Basic Books, New York.
[2] Mazar, N., Amir, O. and Ariely, D. (2008) The Dishonesty of Honest People: A Theory of Self-Concept Maintenance. Journal of Marketing Research, 45, 633-644.
[3] Shalvi, S., Dana, J., Handgraaf, M.J.J. and De Dreu, C.K.W. (2011) Justified Ethicality: Observing Desired Counterfactuals Modifies Ethical Perceptions and Behavior. Organizational Behavior and Human Decision Processes, 115, 181-190.
[4] Gächter, S. and Schulz, J.F. (2016) Intrinsic Honesty and the Prevalence of Rule Violations across Societies. Nature, 531, 496-499. https://doi.org/10.1038/nature17160
[5] Herrmann, B., Thöni, C. and Gächter, S. (2008) Antisocial Punishment across Societies. Science, 319, 1362-1367. https://doi.org/10.1126/science.1153808
[6] Heffernan, M. (2011) Willful Blindness: Why We Ignore the Obvious at Our Peril. Bloomsbury, New York.
[7] Fischbacher, U. and Föllmi-Heusi, F. (2013) Lies in Disguise: An Experimental Study on Cheating. Journal of the European Economic Association, 11, 525-547.
|
A Treatise on Electricity and Magnetism/Part II/Chapter II - Wikisource, the free online library
A Treatise on Electricity and Magnetism — Part II, Chapter II: Conduction and Resistance, by James Clerk Maxwell
241.] If by means of an electrometer we determine the electric potential at different points of a circuit in which a constant electric current is maintained, we shall find that in any portion of the circuit consisting of a single metal of uniform temperature throughout, the potential at any point exceeds that at any other point farther on in the direction of the current by a quantity depending on the strength of the current and on the nature and dimensions of the intervening portion of the circuit. The difference of the potentials at the extremities of this portion of the circuit is called the External electromotive force acting on it. If the portion of the circuit under consideration is not homogeneous, but contains transitions from one substance to another, from metals to electrolytes, or from hotter to colder parts, there may be, besides the external electromotive force, Internal electromotive forces which must be taken into account.
The relations between Electromotive Force, Current, and Resistance were first investigated by Dr. G. S. Ohm, in a work published in 1827, entitled Die galvanische Kette, mathematisch bearbeitet, translated in Taylor's Scientific Memoirs. The result of these investigations in the case of homogeneous conductors is commonly called 'Ohm's Law.'
The electromotive force acting between the extremities of any part of a circuit is the product of the strength of the current and the Resistance of that part of the circuit.
Here a new term is introduced, the Resistance of a conductor, which is defined to be the ratio of the electromotive force to the strength of the current which it produces. The introduction of this term would have been of no scientific value unless Ohm had shewn, as he did experimentally, that it corresponds to a real physical quantity, that is, that it has a definite value which is altered only when the nature of the conductor is altered.
In the first place, then, the resistance of a conductor is independent of the strength of the current flowing through it.
In the second place the resistance is independent of the electric potential at which the conductor is maintained, and of the density of the distribution of electricity on the surface of the conductor.
It depends entirely on the nature of the material of which the conductor is composed, the state of aggregation of its parts, and its temperature.
The resistance of a conductor may be measured to within one ten thousandth or even one hundred thousandth part of its value, and so many conductors have been tested that our assurance of the truth of Ohm's Law is now very high. In the sixth chapter we shall trace its applications and consequences.
Generation of Heat by the Current.
242.] We have seen that when an electromotive force causes a current to flow through a conductor, electricity is transferred from a place of higher to a place of lower potential. If the transfer had been made by convection, that is, by carrying successive charges on a ball from the one place to the other, work would have been done by the electrical forces on the ball, and this might have been turned to account. It is actually turned to account in a partial manner in those dry pile circuits where the electrodes have the form of bells, and the carrier ball is made to swing like a pendulum between the two bells and strike them alternately. In this way the electrical action is made to keep up the swinging of the pendulum and to propagate the sound of the bells to a distance. In the case of the conducting wire we have the same transfer of electricity from a place of high to a place of low potential without any external work being done. The principle of the Conservation of Energy therefore leads us to look for internal work in the conductor. In an electrolyte this internal work consists partly of the separation of its components. In other conductors it is entirely converted into heat.
The energy converted into heat is in this case the product of the electromotive force into the quantity of electricity which passes. But the electromotive force is the product of the current into the resistance, and the quantity of electricity is the product of the current into the time. Hence the quantity of heat multiplied by the mechanical equivalent of unit of heat is equal to the square of the strength of the current multiplied into the resistance and into the time.
The heat developed by electric currents in overcoming the resistance of conductors has been determined by Dr. Joule, who first established that the heat produced in a given time is proportional to the square of the current, and afterwards by careful absolute measurements of all the quantities concerned, verified the equation
{\displaystyle JH=C^{2}Rt,}

where {\displaystyle J} is Joule's dynamical equivalent of heat, {\displaystyle H} the number of units of heat, {\displaystyle C} the strength of the current, {\displaystyle R} the resistance of the conductor, and {\displaystyle t} the time during which the current flows. These relations between electromotive force, work, and heat, were first fully explained by Sir W. Thomson in a paper on the application of the principle of mechanical effect to the measurement of electromotive forces[1].
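The relation JH = C²Rt can be put to numbers. The sketch below assumes the modern value of Joule's equivalent, about 4.186 joules per calorie, which is not a figure stated in the text:

```python
# A numeric sketch of Joule's law JH = C^2 * R * t: heat H (calories) produced
# by a current C (amperes) in a resistance R (ohms) over t seconds.
# J is Joule's dynamical equivalent of heat; 4.186 J/cal is the modern value
# (an assumption here, not a figure from the text).

J = 4.186  # joules per calorie

def heat_calories(current, resistance, seconds):
    """H = C^2 * R * t / J."""
    return current**2 * resistance * seconds / J

# 2 A through 10 ohms for 60 s dissipates 2^2 * 10 * 60 = 2400 J,
# or about 573 calories.
print(round(heat_calories(2.0, 10.0, 60.0), 1))  # 573.3
```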
243.] The analogy between the theory of the conduction of electricity and that of the conduction of heat is at first sight almost complete. If we take two systems geometrically similar, and such that the conductivity for heat at any part of the first is proportional to the conductivity for electricity at the corresponding part of the second, and if we also make the temperature at any part of the first proportional to the electric potential at the corresponding point of the second, then the flow of heat across any area of the first will be proportional to the flow of electricity across the corresponding area of the second.
Thus, in the illustration we have given, in which flow of electricity corresponds to flow of heat, and electric potential to temperature, electricity tends to flow from places of high to places of low potential, exactly as heat tends to flow from places of high to places of low temperature.
244.] The theory of potential and that of temperature may therefore be made to illustrate one another; there is, however, one remarkable difference between the phenomena of electricity and those of heat.
Suspend a conducting body within a closed conducting vessel by a silk thread, and charge the vessel with electricity. The potential of the vessel and of all within it will be instantly raised, but however long and however powerfully the vessel be electrified, and whether the body within be allowed to come in contact with the vessel or not, no signs of electrification will appear within the vessel, nor will the body within shew any electrical effect when taken out.
But if the vessel is raised to a high temperature, the body within will rise to the same temperature, but only after a considerable time, and if it is then taken out it will be found hot, and will remain so till it has continued to emit heat for some time.
The difference between the phenomena consists in the fact that bodies are capable of absorbing and emitting heat, whereas they have no corresponding property with respect to electricity. A body cannot be made hot without a certain amount of heat being supplied to it, depending on the mass and specific heat of the body, but the electric potential of a body may be raised to any extent in the way already described without communicating any electricity to the body.
245.] Again, suppose a body first heated and then placed inside the closed vessel. The outside of the vessel will be at first at the temperature of surrounding bodies, but it will soon get hot, and will remain hot till the heat of the interior body has escaped.
It is impossible to perform a corresponding electrical experiment. It is impossible so to electrify a body, and so to place it in a hollow vessel, that the outside of the vessel shall at first shew no signs of electrification but shall afterwards become electrified. It was for some phenomenon of this kind that Faraday sought in vain under the name of an absolute charge of electricity.
Heat may be hidden in the interior of a body so as to have no external action, but it is impossible to isolate a quantity of electricity so as to prevent it from being constantly in inductive relation with an equal quantity of electricity of the opposite kind.
There is nothing therefore among electric phenomena which corresponds to the capacity of a body for heat. This follows at once from the doctrine which is asserted in this treatise, that electricity obeys the same condition of continuity as an incompressible fluid. It is therefore impossible to give a bodily charge of electricity to any substance by forcing an additional quantity of electricity into it. See Arts. 61, 111, 329, 334.
↑ Phil. Mag., Dec. 1851.
Retrieved from "https://en.wikisource.org/w/index.php?title=A_Treatise_on_Electricity_and_Magnetism/Part_II/Chapter_II&oldid=10939113"
Cayley table - Knowpia
Named after the 19th century British mathematician Arthur Cayley, a Cayley table describes the structure of a finite group by arranging all the possible products of all the group's elements in a square table reminiscent of an addition or multiplication table. Many properties of a group – such as whether or not it is abelian, which elements are inverses of which elements, and the size and contents of the group's center – can be discovered from its Cayley table.
A simple example of a Cayley table is the one for the group {1, −1} under ordinary multiplication:

×	1	−1
1	1	−1
−1	−1	1
Cayley tables were first presented in Cayley's 1854 paper, "On The Theory of Groups, as depending on the symbolic equation θ n = 1". In that paper they were referred to simply as tables, and were merely illustrative – they came to be known as Cayley tables later on, in honour of their creator.
Structure and layout
Because many Cayley tables describe groups that are not abelian, the product ab with respect to the group's binary operation is not guaranteed to be equal to the product ba for all a and b in the group. In order to avoid confusion, the convention is that the factor that labels the row (termed nearer factor by Cayley) comes first, and that the factor that labels the column (or further factor) is second. For example, the intersection of row a and column b is ab and not ba, as in the following example:
×	a	b	c
a	a²	ab	ac
b	ba	b²	bc
c	ca	cb	c²
Properties and uses
The Cayley table tells us whether a group is abelian. Because the group operation of an abelian group is commutative, a group is abelian if and only if its Cayley table's values are symmetric along its diagonal axis. The cyclic group of order 3, above, and {1, −1} under ordinary multiplication, also above, are both examples of abelian groups, and inspection of the symmetry of their Cayley tables verifies this. In contrast, the smallest non-abelian group, the dihedral group of order 6, does not have a symmetric Cayley table.
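The symmetry criterion can be tested mechanically. A minimal sketch, using Z6 and S3 as illustrative groups (the encodings are ours, not from the text):

```python
# A group is abelian iff its Cayley table is symmetric about the main diagonal.
# Z6 (addition mod 6) is abelian; S3 (composition of permutations of three
# symbols) is the smallest non-abelian group.
from itertools import permutations

def cayley_table(elements, op):
    return [[op(a, b) for b in elements] for a in elements]

def is_symmetric(table):
    n = len(table)
    return all(table[i][j] == table[j][i] for i in range(n) for j in range(n))

# Z6 is cyclic, hence abelian: its table is symmetric.
z6 = cayley_table(list(range(6)), lambda a, b: (a + b) % 6)
print(is_symmetric(z6))  # True

# S3 is non-abelian: its table is not symmetric.
s3 = list(permutations(range(3)))
compose = lambda p, q: tuple(p[q[i]] for i in range(3))
print(is_symmetric(cayley_table(s3, compose)))  # False
```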
Because associativity is taken as an axiom when dealing with groups, it is often taken for granted when dealing with Cayley tables. However, Cayley tables can also be used to characterize the operation of a quasigroup, which does not assume associativity as an axiom (indeed, Cayley tables can be used to characterize the operation of any finite magma). Unfortunately, it is not generally possible to determine whether or not an operation is associative simply by glancing at its Cayley table, as it is with commutativity. This is because associativity depends on a three-term equation,
{\displaystyle (ab)c=a(bc)}
, while the Cayley table shows 2-term products. However, Light's associativity test can determine associativity with less effort than brute force.
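For small tables the brute-force check is at least easy to state: test (ab)c = a(bc) for every triple. A minimal sketch (the mod-4 tables are illustrative examples, not from the text):

```python
# Brute-force associativity check over all n^3 triples of a finite table.
# table[a][b] is the product of a and b, with elements encoded as 0..n-1.

def is_associative(elements, table):
    return all(
        table[table[a][b]][c] == table[a][table[b][c]]
        for a in elements for b in elements for c in elements
    )

n = 4
# Addition mod 4: a group operation, hence associative.
z4 = [[(a + b) % n for b in range(n)] for a in range(n)]
print(is_associative(range(n), z4))  # True

# Subtraction mod 4: still a Latin square (a quasigroup), but not associative.
sub4 = [[(a - b) % n for b in range(n)] for a in range(n)]
print(is_associative(range(n), sub4))  # False
```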
Permutations
Because the cancellation property holds for groups (and indeed even quasigroups), no row or column of a Cayley table may contain the same element twice. Thus each row and column of the table is a permutation of all the elements in the group. This greatly restricts which Cayley tables could conceivably define a valid group operation.
To see why a row or column cannot contain the same element more than once, let a, x, and y all be elements of a group, with x and y distinct. Then in the row representing the element a, the column corresponding to x contains the product ax, and similarly the column corresponding to y contains the product ay. If these two products were equal – that is to say, row a contained the same element twice, our hypothesis – then ax would equal ay. But because the cancellation law holds, we can conclude that if ax = ay, then x = y, a contradiction. Therefore, our hypothesis is incorrect, and a row cannot contain the same element twice. Exactly the same argument suffices to prove the column case, and so we conclude that each row and column contains no element more than once. Because the group is finite, the pigeonhole principle guarantees that each element of the group will be represented in each row and in each column exactly once.
Thus, the Cayley table of a group is an example of a latin square.
Another, perhaps simpler, proof: the cancellation property implies that for each x in the group, the one-variable function of y, f(x, y) = xy, must be a one-to-one map, and one-to-one maps on finite sets are permutations.
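The permutation property doubles as a mechanical validity check: every row and column of a candidate table must contain each element exactly once. A small sketch (Z5 is an illustrative choice):

```python
# Check whether a table is a Latin square: each row and each column must
# contain every symbol 0..n-1 exactly once.

def is_latin_square(table):
    n = len(table)
    symbols = set(range(n))
    rows_ok = all(set(row) == symbols for row in table)
    cols_ok = all({table[r][c] for r in range(n)} == symbols for c in range(n))
    return rows_ok and cols_ok

# Cayley table of Z5 under addition mod 5:
z5 = [[(a + b) % 5 for b in range(5)] for a in range(5)]
print(is_latin_square(z5))  # True

# Introducing a repeated entry in a row breaks the property:
bad = [row[:] for row in z5]
bad[0][1] = bad[0][0]  # duplicate symbol in row 0
print(is_latin_square(bad))  # False
```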
Constructing Cayley tables
Because of the structure of groups, one can very often "fill in" Cayley tables that have missing elements, even without having a full characterization of the group operation in question. For example, because each row and column must contain every element in the group, if all elements are accounted for save one, and there is one blank spot, without knowing anything else about the group it is possible to conclude that the element unaccounted for must occupy the remaining blank space. It turns out that this and other observations about groups in general allow us to construct the Cayley tables of groups knowing very little about the group in question. It should be noted, however, that a Cayley table constructed using the method that follows may fail to meet the associativity requirement of a group, and therefore represent a quasigroup.
The "identity skeleton" of a finite group
Because in any group, even a non-abelian group, every element commutes with its own inverse, it follows that the distribution of identity elements on the Cayley table will be symmetric across the table's diagonal. Identity elements lying on the diagonal mark the elements that are their own inverse.
Because the order of the rows and columns of a Cayley table is in fact arbitrary, it is convenient to order them in the following manner: beginning with the group's identity element, which is always its own inverse, list first all the elements that are their own inverse, followed by pairs of inverses listed adjacent to each other.
Then, for a finite group of a particular order, it is easy to characterize its "identity skeleton", so named because the identity elements on the Cayley table constructed in the manner described in the previous paragraph are clustered about the main diagonal – either they lie directly on it, or they are one removed from it.
It is relatively trivial to prove that groups with different identity skeletons cannot be isomorphic, though the converse is not true (for instance, the cyclic group C8 and the quaternion group Q are non-isomorphic but have the same identity skeleton).
Consider a six-element group with elements e, a, b, c, d, and f. By convention, e is the group's identity element. Because the identity element is always its own inverse, and inverses are unique, the fact that there are 6 elements in this group means that at least one element other than e must be its own inverse. So we have the following possible skeletons:
all elements are their own inverses,
all elements save d and f are their own inverses, each of these latter two being the other's inverse,
a is its own inverse, b and c are inverses, and d and f are inverses.
In our particular example, there does not exist a group of the first type of order 6; indeed, simply because a particular identity skeleton is conceivable does not in general mean that there exists a group that fits it.
Any group in which every element is its own inverse is abelian: let a and b be elements of the group, then ab = (ab)−1 = b−1a−1 = ba.
Filling in the identity skeleton
Once a particular identity skeleton has been decided on, it is possible to begin filling out the Cayley table. For example, take the identity skeleton of a group of order 6 of the second type outlined above:

×	e	a	b	c	d	f
e	e	·	·	·	·	·
a	·	e	·	·	·	·
b	·	·	e	·	·	·
c	·	·	·	e	·	·
d	·	·	·	·	·	e
f	·	·	·	·	e	·
Obviously, the e-row and the e-column can be filled out immediately.
Once this is done there are several possible options on how to proceed. We will focus on the value of ab. By the Latin square property, the only possibly valid values of ab are c, d, or f. However we can see that swapping around the two elements d and f would result in exactly the same table as we already have, save for arbitrarily selected labels. We would therefore expect both of these two options to result in the same outcome, up to isomorphism, and so we need only consider one of them.
It is also important to note that one or several of the values may (and do, in our case) later lead to contradiction – meaning simply that they were in fact not valid values at all.
ab = c
By alternatingly multiplying on the left and on the right it is possible to extend one equation into a loop of equations where any one implies all the others:
Multiplying ab = c on the left by a gives b = ac
Multiplying b = ac on the right by c gives bc = a
Multiplying bc = a on the left by b gives c = ba
Multiplying c = ba on the right by a gives ca = b
Multiplying ca = b on the left by c gives a = cb
Multiplying a = cb on the right by b gives ab = c
Filling in all of these products, the Cayley table now looks like this (new elements in red):

×	e	a	b	c	d	f
e	e	a	b	c	d	f
a	a	e	c	b	·	·
b	b	c	e	a	·	·
c	c	b	a	e	·	·
d	d	·	·	·	·	e
f	f	·	·	·	e	·
Since the Cayley table is a Latin square, the only possibly valid value of ad is f, and similarly the only possible value of af is d.
Filling in these values, the Cayley table now looks like this (new elements in blue):

×	e	a	b	c	d	f
e	e	a	b	c	d	f
a	a	e	c	b	f	d
b	b	c	e	a	·	·
c	c	b	a	e	·	·
d	d	·	·	·	·	e
f	f	·	·	·	e	·
Unfortunately, all elements of the group are already present either above or to the left of bd in the table so there is no value of bd that satisfies the Latin square property.
This means that the option we selected (ab = c) has led us to a point where no value can be assigned to bd without causing contradictions. We have therefore shown that ab ≠ c.
If we in a similar way show that all options lead to contradictions, then we must conclude that no group of order 6 exists with the identity skeleton that we started with.
ab = d
Multiplying ab = d on the left by a gives b = ad
Multiplying b = ad on the right by f gives bf = a
Multiplying bf = a on the left by b gives f = ba
Multiplying f = ba on the right by a gives fa = b
Multiplying fa = b on the left by d gives a = db
Multiplying a = db on the right by b gives ab = d
×	e	a	b	c	d	f
e	e	a	b	c	d	f
a	a	e	d	·	b	·
b	b	f	e	·	·	a
c	c	·	·	e	·	·
d	d	·	a	·	·	e
f	f	b	·	·	e	·
The remaining products of a, shown in blue, may now be entered using the Latin square property. For example, c is missing from row a and cannot occur twice in column c, hence ac = f.
Similarly, the remaining products of b, shown in green, may then be entered:
×	e	a	b	c	d	f
e	e	a	b	c	d	f
a	a	e	d	f	b	c
b	b	f	e	d	c	a
c	c	d	f	e	a	·
d	d	c	a	·	·	e
f	f	b	c	a	e	·
The remaining products, each of which is the only missing value in either a row or a column, may now be filled in using the Latin square property, shown in orange:

×	e	a	b	c	d	f
e	e	a	b	c	d	f
a	a	e	d	f	b	c
b	b	f	e	d	c	a
c	c	d	f	e	a	b
d	d	c	a	b	f	e
f	f	b	c	a	e	d
As we have managed to fill in the whole table without obtaining a contradiction, we have found a group of order 6, and inspection reveals it to be non-abelian. This group is in fact the smallest non-abelian group, the dihedral group D3.
Example of a quasigroup constructed using the above method
The Cayley table that follows may be constructed by entering an identity skeleton, filling in the first row and column, and then postulating that ab = c. The alternative assumption ab = d results in the group constructed above. The rest of the table follows as a Latin square. However, by reference to the table, (ac)b = db = a, while a(cb) = ad = b. It therefore fails the associativity axiom and represents a quasigroup rather than a group.
Permutation matrix generation
The standard form of a Cayley table has the order of the elements in the rows the same as the order in the columns. Another form is to arrange the elements of the columns so that the nth column corresponds to the inverse of the element in the nth row. In our example of D3, we need only switch the last two columns, since f and d are the only elements that are not their own inverses, but instead inverses of each other.
×	e	a	b	c	f = d⁻¹	d = f⁻¹
e	e	a	b	c	f	d
a	a	e	d	f	c	b
b	b	f	e	d	a	c
c	c	d	f	e	b	a
d	d	c	a	b	e	f
f	f	b	c	a	d	e
This particular example lets us create six permutation matrices (all elements 1 or 0, exactly one 1 in each row and column). The 6x6 matrix representing an element will have a 1 in every position that has the letter of the element in the Cayley table and a zero in every other position, the Kronecker delta function for that symbol. (Note that e is in every position down the main diagonal, which gives us the identity matrix for 6x6 matrices in this case, as we would expect.) Here is the matrix that represents our element a, for example:

0	1	0	0	0	0
1	0	0	0	0	0
0	0	0	0	1	0
0	0	0	0	0	1
0	0	1	0	0	0
0	0	0	1	0	0
This shows us directly that any group of order n is isomorphic to a subgroup of the permutation group Sn, which has order n!.
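A sketch of the matrix construction, using Z3 in place of D3 for brevity (the encoding of elements as 0, 1, 2 is ours):

```python
# Permutation matrices from a Cayley table whose columns are reordered so
# that column j holds the inverse of element j. Each element's matrix has a
# 1 exactly where that element appears in the table; e's matrix is then the
# identity, since every diagonal entry is g * g^-1 = e.

n = 3
inverse = [(-g) % n for g in range(n)]  # column order: inverses of the rows

# table[i][j] = i * (inverse of j) in Z3, i.e. (i - j) mod 3
table = [[(i + inverse[j]) % n for j in range(n)] for i in range(n)]

def perm_matrix(g):
    """0/1 matrix with a 1 wherever g appears in the reordered table."""
    return [[1 if table[i][j] == g else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

print(perm_matrix(0) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]])  # True: identity
# The matrices compose the way the group elements multiply: 1 + 1 = 2 in Z3.
print(matmul(perm_matrix(1), perm_matrix(1)) == perm_matrix(2))  # True
```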
The above properties depend on some axioms valid for groups. It is natural to consider Cayley tables for other algebraic structures, such as for semigroups, quasigroups, and magmas, but some of the properties above do not hold.
Cayley, Arthur. "On the theory of groups, as depending on the symbolic equation θ n = 1", Philosophical Magazine, Vol. 7 (1854), pp. 40–47. Available on-line at Google Books as part of his collected works.
Cayley, Arthur. "On the Theory of Groups", American Journal of Mathematics, Vol. 11, No. 2 (Jan 1889), pp. 139–157. Available at JSTOR.
The first menstrual flow beginning at puberty is called menarche.
At sea level, the atmospheric pressure is about 10⁵ Pa.
(a) Increases, increase
The frictional force increases with the increase in roughness of the surfaces in contact.
Birds produce sound by means of a ring of cartilage called the syrinx.
Lemon juice is an acidic solution.
1. The voice box in boys can be seen as the Adam's apple in their throat.
2. The pressure in a liquid at greater depth is greater.
3. Streamlined shape of an object helps reduce friction.
4. The unit used to measure loudness of sound is decibel.
5. The electrode with positive charge is called anode.
1. Cretinism (b) Deficiency of thyroxine
2. Weight (d) Force
3. Drag (a) Fluid
4. Galton's whistles (e) Training of cats and dogs
5. LED (c) Light emitting diode
Sound can travel through solid.
1. Force: Newton (N)
2. Pressure: Newton per square metre (Nm⁻²)
3. Frequency: Hertz (Hz)
4. Time period: Second (s)
5. Loudness of a sound: Decibel (dB)
6. Current: Ampere (A)
1. Stringed instruments: Guitar, Sitar
2. Wind instrument: Flute, Shehnai
3. Percussion instrument: Dholak, Tabla
4. Ghana vadya: Ghatam, Jal tarang
5. Conductors: Copper, Aluminium
6. Insulators: Wood, Plastic
1. Audible sounds: 20 to 20,000 Hz
2. Human voice: 60 to 13,000 Hz
3. Ultrasonics: Higher than 20,000 Hz
4. Infrasonics: Less than 20 Hz
Insulin controls the sugar metabolism in the human body. It checks the excess of sugar in the blood.
According to Pascal's Law, when some pressure is applied on any part of a liquid, an equal and uniform pressure gets transmitted throughout the whole liquid. This means that the pressure applied to a liquid in an enclosed vessel gets transmitted equally across the entire vessel containing the liquid.
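As a numeric illustration of Pascal's law, a hydraulic press transmits the applied pressure equally, so F₂/A₂ = F₁/A₁ (the piston figures below are our own example, not from the text):

```python
# Pascal's law in a hydraulic press: pressure applied at the small piston is
# transmitted undiminished through the liquid, so F2 = (F1 / A1) * A2.
# The numbers below are illustrative assumptions.

def output_force(f1, a1, a2):
    """Force on the large piston, given force f1 on a piston of area a1."""
    pressure = f1 / a1   # Pa, the same everywhere in the enclosed liquid
    return pressure * a2  # N

# 100 N on a 0.01 m^2 piston supports 1000 N on a 0.1 m^2 piston:
print(output_force(100, 0.01, 0.1))  # 1000.0
```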
Force of friction is an opposing force that comes into play when a body tries to move over the surface of another body. Surfaces of all the objects have irregularities in the form of minute hills and valleys. These irregularities in the surfaces of interacting objects get interlocked and oppose their relative motion over each other.
Vibration is a to and fro motion of an object about its mean position.
According to the principle of gravitation, all the bodies in the universe attract each other. The gravitational force between two bodies depends on their masses and on the distance between them. It is measured in newtons (N).

Gravitational force =

G\frac{{m}_{1}{m}_{2}}{{r}^{2}}

where G is the gravitational constant with a value of 6.67 × 10⁻¹¹ Nm²/kg²,
m₁ and m₂ are the masses of the two bodies, and
r is the distance between them.
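The formula can be evaluated directly. The Earth-Moon figures below are rough textbook values supplied by us, not by the text:

```python
# Newton's law of gravitation, F = G * m1 * m2 / r^2.
# Earth-Moon masses and separation below are approximate (assumptions).

G = 6.67e-11  # N m^2 / kg^2

def gravitational_force(m1, m2, r):
    return G * m1 * m2 / r**2

# Earth (~5.97e24 kg) and Moon (~7.35e22 kg), ~3.84e8 m apart:
f = gravitational_force(5.97e24, 7.35e22, 3.84e8)
print(f"{f:.2e} N")  # on the order of 2e20 N
```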
Factors affecting friction force:
1. Nature of the surface, i.e., the smoothness or roughness of the surface affects the force of friction.
2. The force of friction is directly proportional to the weight of an object.
Ill effects of noise pollution:
1. It may cause loss of hearing, which can lead to deafness.
2. It causes tension and anger and can interfere with the sleeping pattern of individuals.
3. It causes headache, hypertension and irritability.
Electrolysis is the process of the decomposition of an electrolyte by passing electric current through it.
Electrolysis could be explained as follows:
1. Two electrodes of a conducting material, such as graphite or iron, are connected to the opposite terminals of a battery and submerged in a solution of the electrolyte.
2. When the battery is switched on, the current starts flowing through the solution and the electrolyte dissociates. Cations start moving toward the cathode (negative electrode) and anions start moving toward the anode (positive electrode). This brings about a chemical change in the solution and the circuit gets completed due to the movement of ions.
3. The flow of current can be tested by connecting a bulb to the circuit. When we switch on the battery, the bulb glows, which proves that current flows through the electrolyte.
Terms involved in electrolysis:
Electrolyte: The solution of the chemical, containing cations and anions, in which the electrodes are submerged, is called an electrolyte.
Electrodes: The rods of a conducting material that are connected to the terminals of a battery are called electrodes. They conduct electricity through the electrolyte. The positive electrode is called the anode and the negative electrode is called the cathode.
Rd Sharma 2018 for Class 10 Math Chapter 16 - Probability
Rd Sharma 2018 Solutions for Class 10 Math Chapter 16 (Probability) are provided here with simple step-by-step explanations. These solutions are extremely popular among Class 10 students, as they come in handy for quickly completing homework and preparing for exams. All questions and answers from the Rd Sharma 2018 Book of Class 10 Math Chapter 16 are provided here for free. You will also love the ad-free experience on Meritnation's Rd Sharma 2018 Solutions. All Rd Sharma 2018 Solutions for Class 10 Math are prepared by experts and are 100% accurate.
(c) a multiple of 2 or 3
(d) an even prime number
(e) a number greater than 5
(f) a number lying between 2 and 6
(i) Probability of getting a prime number
(ii) Probability of getting 2 or 4
(iii) Probability of getting a multiple of 2 or 3.
(iv) Probability of getting an even number
(v) Probability of getting a number greater than five.
(vi) Probability of getting a number lying between 2 and 6
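The six single-die probabilities listed above follow from counting favourable outcomes among the six equally likely faces. A small sketch (the event encodings are ours):

```python
# Each probability = favourable outcomes / total outcomes for one fair die.
from fractions import Fraction

outcomes = range(1, 7)

def prob(event):
    favourable = sum(1 for x in outcomes if event(x))
    return Fraction(favourable, 6)  # auto-reduces, e.g. 4/6 -> 2/3

print(prob(lambda x: x in (2, 3, 5)))            # prime number -> 1/2
print(prob(lambda x: x in (2, 4)))               # 2 or 4 -> 1/3
print(prob(lambda x: x % 2 == 0 or x % 3 == 0))  # multiple of 2 or 3 -> 2/3
print(prob(lambda x: x % 2 == 0))                # even number -> 1/2
print(prob(lambda x: x > 5))                     # greater than 5 -> 1/6
print(prob(lambda x: 2 < x < 6))                 # between 2 and 6 -> 1/2
```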
\frac{7}{8}
\frac{16}{52}=\frac{4}{13}
\frac{\mathrm{number} \mathrm{of} \mathrm{favourable} \mathrm{elementary} \mathrm{events}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{elementary} \mathrm{events}}
\frac{16}{25}
\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{50}{350}=\frac{1}{7}
\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{150}{350}=\frac{3}{7}
\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{100}{350}=\frac{2}{7}
\therefore {a}_{n}=37\phantom{\rule{0ex}{0ex}}⇒3+\left(n-1\right)×2=37 \left[{a}_{n}=a+\left(n-1\right)d\right]\phantom{\rule{0ex}{0ex}}⇒2n+1=37\phantom{\rule{0ex}{0ex}}⇒2n=37-1=36\phantom{\rule{0ex}{0ex}}⇒n=18
\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{11}{18}
\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{3}{12}=\frac{1}{4}
\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{9}{12}=\frac{3}{4}
\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{20}{30}=\frac{2}{3}
\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{6}{30}=\frac{1}{5}
\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{25}{30}=\frac{5}{6}
\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{100}{180}=\frac{5}{9}
\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{30}{180}=\frac{1}{6}
\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{170}{180}=\frac{17}{18}
\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{70}{180}=\frac{7}{18}
\therefore {a}_{n}=49\phantom{\rule{0ex}{0ex}}⇒1+\left(n-1\right)×2=49 \left[{a}_{n}=a+\left(n-1\right)d\right]\phantom{\rule{0ex}{0ex}}⇒2n-1=49\phantom{\rule{0ex}{0ex}}⇒2n=49+1=50\phantom{\rule{0ex}{0ex}}⇒n=25
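Counts like the one above, how many terms of an arithmetic progression reach a given last term, follow from a_n = a + (n − 1)d. A small sketch:

```python
# Number of terms of the AP a, a+d, a+2d, ... whose last term is `last`,
# from a + (n-1)d = last  =>  n = (last - a)/d + 1.

def ap_term_count(a, d, last):
    assert (last - a) % d == 0  # `last` must actually belong to the AP
    return (last - a) // d + 1

print(ap_term_count(1, 2, 49))  # 25 odd numbers from 1 to 49
print(ap_term_count(3, 2, 37))  # 18, matching the earlier derivation
```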
\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{25}{49}
\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{9}{49}
\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{7}{49}=\frac{1}{7}
\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{1}{49}
\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{13}{20}
\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{8}{20}=\frac{2}{5}
=\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{9}{36}=\frac{1}{4}
=\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{4}{36}=\frac{1}{9}
=\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{11}{36}
=\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{25}{36}
\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{16}{36}=\frac{4}{9}
\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{8}{36}=\frac{2}{9}
\frac{\mathrm{Number} \mathrm{of} \mathrm{favourable} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{25}{36}
\frac{1}{11}.
=\frac{9}{49}
\frac{1}{49}
\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{4}{50}=\frac{2}{25}
\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{25}{36}
\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{10}{36}=\frac{5}{18}
\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{6}{46}=\frac{3}{23}
\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{26}{46}=\frac{13}{23}
\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{20}{46}=\frac{10}{23}
\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{2}{46}=\frac{1}{23}
\therefore {a}_{n}=59\phantom{\rule{0ex}{0ex}}⇒11+\left(n-1\right)×2=59 \left[{a}_{n}=a+\left(n-1\right)d\right]\phantom{\rule{0ex}{0ex}}⇒2n+9=59\phantom{\rule{0ex}{0ex}}⇒2n=59-9=50\phantom{\rule{0ex}{0ex}}⇒n=25
\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{25}{50}=\frac{1}{2}
\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{4}{50}=\frac{2}{25}
\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{10}{50}=\frac{1}{5}
\frac{\mathrm{Favourable} \mathrm{number} \mathrm{of} \mathrm{outcomes}}{\mathrm{Total} \mathrm{number} \mathrm{of} \mathrm{outcomes}}=\frac{4}{50}=\frac{2}{25}
\frac{\text{Favourable number of outcomes}}{\text{Total number of outcomes}}=\frac{2}{44}=\frac{1}{22}
\frac{\text{Favourable number of outcomes}}{\text{Total number of outcomes}}=\frac{22}{44}=\frac{1}{2}
\frac{\text{Favourable number of outcomes}}{\text{Total number of outcomes}}=\frac{0}{40}=0
\frac{\text{Favourable number of outcomes}}{\text{Total number of outcomes}}=\frac{20}{40}=\frac{1}{2}
\frac{\text{Favourable number of outcomes}}{\text{Total number of outcomes}}=\frac{4}{48}=\frac{1}{12}
\frac{\text{Favourable number of outcomes}}{\text{Total number of outcomes}}=\frac{24}{48}=\frac{1}{2}
\frac{\text{Favourable number of outcomes}}{\text{Total number of outcomes}}=\frac{8}{48}=\frac{1}{6}
\frac{\text{Favourable number of outcomes}}{\text{Total number of outcomes}}=\frac{2}{48}=\frac{1}{24}
\frac{\text{Favourable number of outcomes}}{\text{Total number of outcomes}}=\frac{20}{44}=\frac{5}{11}
\frac{\text{Favourable number of outcomes}}{\text{Total number of outcomes}}=\frac{24}{44}=\frac{6}{11}
\frac{\text{Favourable number of outcomes}}{\text{Total number of outcomes}}=\frac{9}{44}
\frac{\text{Favourable number of outcomes}}{\text{Total number of outcomes}}=\frac{9}{44}
\frac{\text{Favourable number of outcomes}}{\text{Total number of outcomes}}=\frac{20}{46}=\frac{10}{23}
\frac{\text{Favourable number of outcomes}}{\text{Total number of outcomes}}=\frac{6}{46}=\frac{3}{23}
\frac{\text{Favourable number of outcomes}}{\text{Total number of outcomes}}=\frac{13}{46}
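Each of the probabilities above is the ratio of favourable to total outcomes, reduced to lowest terms; a minimal check with Python's `fractions` module (the example pairs are taken from the lines above):

```python
from fractions import Fraction

# (favourable, total) pairs from the computations above;
# Fraction reduces each ratio to lowest terms automatically.
examples = [(2, 44), (22, 44), (4, 48), (20, 46), (13, 46)]
probs = [Fraction(f, t) for f, t in examples]
print(probs)  # [Fraction(1, 22), Fraction(1, 2), Fraction(1, 12), Fraction(10, 23), Fraction(13, 46)]
```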
\overline{E}
\overline{E}
\frac{\text{Favourable number of outcomes}}{\text{Total number of outcomes}}=\frac{21}{26}
=1-\frac{3}{8}=\frac{5}{8}
P\left(E\right)=\frac{3}{7}
\frac{3}{7}
\frac{4}{9}
\frac{5}{9}
\frac{1}{9}
\frac{2}{3}
\frac{4}{9}
\frac{5}{9}
\frac{1}{9}
\frac{2}{3}
\frac{1}{3}
\frac{2}{3}
\frac{1}{9}
\frac{2}{9}
\frac{1}{4}
\frac{3}{8}
\frac{1}{2}
\frac{1}{4}
\frac{1}{2}
\frac{1}{3}
\frac{1}{6}
\frac{2}{3}
\frac{x}{12}
\frac{2}{3}
\frac{1}{4}
\frac{1}{3}
\frac{4}{9}
\frac{7}{9}
\frac{1}{10}
\frac{3}{10}
\frac{7}{10}
\frac{9}{10}
\frac{3}{5}
\frac{2}{5}
\frac{2}{3}
\frac{1}{3}
\frac{1}{4}
\frac{1}{13}
\frac{1}{52}
\frac{12}{13}
\frac{2}{3}
\frac{1}{6}
\frac{1}{3}
\frac{5}{6}
\frac{2}{3}
-1.5
15%
0.7
\frac{19}{20}
\frac{1}{25}
\frac{1}{20}
\frac{17}{20}
\frac{13}{25}
\frac{21}{50}
\frac{12}{25}
\frac{23}{50}
\frac{1}{12}
\frac{1}{6}
\frac{3}{4}
\frac{1}{3}
\frac{3}{7}
\frac{1}{6}
\frac{1}{2}
\frac{2}{3}
\frac{1}{3}
\frac{1}{2}
\frac{1}{6}
\frac{\text{Favourable number of outcomes}}{\text{Total number of outcomes}}=\frac{3}{6}=\frac{1}{2}
\frac{1}{2}
\frac{1}{3}
\frac{1}{6}
\frac{5}{6}
\frac{\text{Favourable number of outcomes}}{\text{Total number of outcomes}}=\frac{3}{6}=\frac{1}{2}
\frac{7}{90}
\frac{10}{90}
\frac{4}{45}
\frac{9}{89}
\frac{\text{Favourable number of outcomes}}{\text{Total number of outcomes}}=\frac{8}{90}=\frac{4}{45}
\frac{4}{15}
\frac{2}{15}
\frac{1}{5}
\frac{1}{3}
\frac{\text{Favourable number of outcomes}}{\text{Total number of outcomes}}=\frac{3}{15}=\frac{1}{5}
\frac{1}{4}
\frac{1}{8}
\frac{3}{4}
\frac{7}{8}
\frac{\text{Favourable number of outcomes}}{\text{Total number of outcomes}}=\frac{3}{4}
\frac{1}{36}
\frac{1}{2}
\frac{1}{6}
\frac{1}{4}
\frac{\text{Favourable number of outcomes}}{\text{Total number of outcomes}}=\frac{9}{36}=\frac{1}{4}
\frac{2}{3}
\frac{1}{6}
\frac{1}{3}
\frac{11}{30}
\frac{\text{Favourable number of outcomes}}{\text{Total number of outcomes}}=\frac{10}{30}=\frac{1}{3}
\frac{1}{13}
\frac{9}{13}
\frac{4}{13}
\frac{12}{13}
\frac{\text{Favourable number of outcomes}}{\text{Total number of outcomes}}=\frac{48}{52}=\frac{12}{13}
\frac{5}{7}
\frac{2}{7}
\frac{3}{7}
\frac{1}{7}
\frac{7}{9}
\frac{5}{9}
\frac{2}{3}
\frac{1}{9}
\frac{2}{7}
\frac{5}{7}
\frac{6}{7}
\frac{1}{7}
\frac{1}{18}
\frac{7}{36}
\frac{1}{6}
\frac{2}{9}
\frac{6}{7}
\frac{1}{7}
\frac{5}{7}
\frac{a}{b}
\frac{17}{45}
\frac{1}{5}
\frac{17}{90}
\frac{8}{45}
\frac{2}{3}
\frac{1}{6}
\frac{1}{3}
\frac{5}{6}
\frac{2}{7}
\frac{4}{7}
\frac{5}{7}
\frac{6}{7}
\frac{3}{10}
\frac{29}{100}
\frac{1}{3}
\frac{7}{25}
\frac{1}{2}
\frac{1}{3}
\frac{1}{6}
\frac{1}{12}
\frac{\text{Favourable number of outcomes}}{\text{Total number of outcomes}}=\frac{6}{36}=\frac{1}{6}
\frac{7}{8}
\frac{1}{8}
\frac{5}{8}
\frac{3}{4}
\frac{\text{Favourable number of outcomes}}{\text{Total number of outcomes}}=\frac{7}{8}
\frac{1}{5}
\frac{3}{25}
\frac{4}{25}
\frac{2}{25}
\frac{\text{Favourable number of outcomes}}{\text{Total number of outcomes}}=\frac{4}{25}
|
Acetyl-CoA synthetase - Wikipedia
Acetyl-CoA synthetase (ACS) or acetate-CoA ligase is an enzyme (EC 6.2.1.1) involved in the metabolism of acetate. It is in the ligase class of enzymes, meaning that it catalyzes the formation of a new chemical bond between two large molecules.
The two molecules joined to make acetyl-CoA are acetate and coenzyme A (CoA). The complete reaction, with all substrates and products included, is:
ATP + Acetate + CoA <=> AMP + Pyrophosphate + Acetyl-CoA[1]
Once acetyl-CoA is formed, it can be used in the TCA cycle in aerobic respiration to produce energy and electron carriers. This is an alternative entry to the cycle; the more common route produces acetyl-CoA from pyruvate through the pyruvate dehydrogenase complex. The enzyme's activity takes place in the mitochondrial matrix, so that the products are in the proper place to be used in the following metabolic steps.[2] Acetyl-CoA can also be used in fatty acid synthesis, and a common function of the synthetase is to produce acetyl-CoA for this purpose.[3]
The reaction catalyzed by acetyl-CoA synthetase takes place in two steps. First, AMP must be bound by the enzyme to cause a conformational change in the active site, which allows the reaction to take place. The active site is referred to as the A-cluster.[4] A crucial lysine residue must be present in the active site to catalyze the first reaction, in which CoA is bound. CoA then rotates in the active site into the position where acetate can covalently bind to it. The covalent bond is formed between the sulfur atom in CoA and the central carbon atom of acetate.[5]
The ACS1 form of acetyl-CoA synthetase is encoded by the gene facA, which is activated by acetate and deactivated by glucose.[6]
The three-dimensional structure of the asymmetric ACS (RCSB PDB ID: 1PG3) reveals that it is composed of two subunits. Each subunit is composed primarily of two domains: the larger N-terminal domain contains 517 residues, while the smaller C-terminal domain contains 130.[7] Each subunit has an active site where the ligands are held. The crystallized structure of ACS was determined with CoA and adenosine-5′-propylphosphate bound to the enzyme. Adenosine-5′-propylphosphate was used because it is an ATP-competitive inhibitor that prevents conformational changes in the enzyme. The adenine ring of AMP/ATP is held in a hydrophobic pocket created by residues Ile512 and Trp413.[7]
The source of the crystallized structure is the organism Salmonella typhimurium (strain LT2 / SGSC1412 / ATCC 700720). The gene for ACS was transfected into Escherichia coli BL21(DE3) for expression. During chromatographic isolation of the enzyme, the subunits eluted individually, and the total structure was determined separately.[7] The structure was determined by X-ray diffraction at a resolution of 2.3 angstroms. The unit cell values and angles are provided in the following table:
3D structure of ACS (1PG3) rendered using PyMOL software.[8]
Axial view of ACS (1PG3) showing ligands bound to active site. Ligands used for crystallization (in image) are adenosine-5'-propylphosphate, CoA and ethanediol.
a = 59.981 Å   α = 90.00°
b = 143.160 Å   β = 91.57°
c = 71.934 Å   γ = 90.00°
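Since α = γ = 90° and β = 91.57°, the cell is monoclinic, and its volume follows from the general triclinic formula V = abc·√(1 − cos²α − cos²β − cos²γ + 2 cosα cosβ cosγ), which reduces to abc·sin β here. A minimal sketch (the function name and rounded result are illustrative, not from the source):

```python
import math

def cell_volume(a, b, c, alpha, beta, gamma):
    """Unit-cell volume from lattice parameters (lengths in angstroms, angles in degrees)."""
    ca, cb, cg = (math.cos(math.radians(x)) for x in (alpha, beta, gamma))
    return a * b * c * math.sqrt(1 - ca**2 - cb**2 - cg**2 + 2 * ca * cb * cg)

# Parameters reported for the ACS crystal (1PG3): alpha = gamma = 90 deg,
# beta = 91.57 deg, i.e. a monoclinic cell where V = a*b*c*sin(beta).
v = cell_volume(59.981, 143.160, 71.934, 90.00, 91.57, 90.00)
print(round(v))  # ~6.2e5 cubic angstroms
```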
The role of the ACS enzyme is to combine acetate and CoA to form acetyl-CoA, but its significance is much larger. The best-known functions of the product of this reaction are the roles of acetyl-CoA in the TCA cycle and in fatty acid production. The enzyme is also vital to histone acetylation and gene regulation.[9] The effect of this acetylation is far-reaching in mammals. Downregulation of the acs gene in the hippocampal region of mice not only lowers levels of histone acetylation but also impairs the animal's long-term spatial memory. This result points to a link between cellular metabolism, gene regulation and cognitive function.[9] The enzyme has also proven to be an interesting biomarker for the presence of tumors in colorectal carcinomas. When the gene is present, cells can take in acetate as a food source and convert it to acetyl-CoA under stressed conditions. In advanced carcinoma tumors, the genes for this enzyme were downregulated, indicating a poor 5-year survival rate.[10] Expression of the enzyme has also been linked to the development of metastatic tumor nodes, leading to a poor survival rate in patients with renal cell carcinomas.[11]
The activity of the enzyme is controlled in several ways. The essential lysine residue in the active site plays an important role in regulation of activity. The lysine residue can be deacetylated by another class of enzymes called sirtuins. In mammals, the cytoplasmic-nuclear synthetase (AceCS1) is activated by SIRT1, while the mitochondrial synthetase (AceCS2) is activated by SIRT3. This action increases the activity of the enzyme.[2] The exact location of the lysine residue varies between species, occurring at Lys-642 in humans, but is always present in the active site of the enzyme.[12] Since an essential allosteric change occurs with the binding of an AMP molecule, the presence of AMP can contribute to regulation of the enzyme: its concentration must be high enough for it to occupy the allosteric binding site and allow the other substrates to enter the active site. Copper ions also deactivate acetyl-CoA synthetase by occupying the proximal site of the A-cluster active site, which prevents the enzyme from accepting a methyl group to participate in the Wood-Ljungdahl pathway.[4] As in all enzymes, the presence of all reactants in the proper concentrations is also needed for proper functioning. Acetyl-CoA synthetase is also produced when it is needed for fatty acid synthesis; under normal conditions the gene is inactive, and certain transcription factors activate transcription when necessary.[3] In addition to sirtuins, a protein deacetylase (AcuC) can also modify acetyl-CoA synthetase at a lysine residue. Unlike sirtuins, however, AcuC does not require NAD+ as a cosubstrate.[13]
While acetyl-CoA synthetase's activity is usually associated with metabolic pathways, the enzyme also participates in gene expression. In yeast, acetyl-CoA synthetase delivers acetyl-CoA to histone acetyltransferases for histone acetylation. Without correct acetylation, DNA cannot condense into chromatin properly, which inevitably results in transcriptional errors.[14]
FAEE (C12) produced using the Keasling biosynthetic pathway in engineered E. coli (A2A). Different types are possible depending on the number of acetyl-CoA units incorporated (resulting in even-numbered chains).
Representative fatty acid molecule (palmitic acid, C16)
By taking advantage of the pathways that use acetyl-CoA as a substrate, engineered products can be obtained that have the potential to become consumer products. By overexpressing the acs gene and using acetate as a feedstock, the production of fatty acids (FAs) may be increased.[15] The use of acetate as a feedstock is uncommon, as acetate is a normal waste product of E. coli metabolism and is toxic to the organism at high levels. By adapting the E. coli to use acetate as a feedstock, these microbes were able to survive and produce their engineered products. These fatty acids could then be used as a biofuel after separation from the media, with further processing (transesterification) required to yield usable biodiesel fuel. The original adaptation protocol for inducing high levels of acetate uptake was developed in 1959 as a means to induce starvation mechanisms in E. coli.[16]
Transesterification of fatty acid to ester mechanism
{\displaystyle {\text{Acetate}}\Longrightarrow {\text{Acetyl-CoA}}}
{\displaystyle {\text{Acetyl-CoA}}\Longrightarrow {\text{FAs}}}
Acetyl-CoA from the breakdown of sugars in glycolysis has also been used to build fatty acids. The difference, however, is that the Keasling strain is able to synthesize its own ethanol and process (by transesterification) the fatty acid further to create stable fatty acid ethyl esters (FAEEs), removing the need for further processing before a fuel product usable in diesel engines is obtained.[17]
{\displaystyle {\text{glucose}}\Longrightarrow {\text{Acetyl-CoA}}}
Regulation changes to E. coli for production of FAEE from acetate.
Acetyl-CoA used in the production of both ethanol and fatty acids
{\displaystyle {\text{Acetyl-CoA}}\Longrightarrow {\text{fatty acid}}+{\text{ethanol}}}
Transesterification
{\displaystyle {\text{fatty acid}}+{\text{ethanol}}\Longrightarrow {\text{FAEE}}}
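The engineered route described above (acetate → acetyl-CoA → fatty acid + ethanol → FAEE) can be modeled as a toy reaction graph to confirm that FAEE is reachable from acetate alone; all labels here are illustrative, not identifiers from the cited work:

```python
# Toy model of the engineered route described in the text; labels are
# illustrative, not identifiers from the cited work.
reactions = [
    ({"acetate"}, {"acetyl-CoA"}),            # acetyl-CoA synthetase (overexpressed acs)
    ({"acetyl-CoA"}, {"fatty acid"}),         # fatty acid synthesis branch
    ({"acetyl-CoA"}, {"ethanol"}),            # ethanol branch (Keasling strain)
    ({"fatty acid", "ethanol"}, {"FAEE"}),    # transesterification
]

def reachable(feedstock):
    """Fixed-point closure: all metabolites producible from the feedstock."""
    pool = set(feedstock)
    changed = True
    while changed:
        changed = False
        for substrates, products in reactions:
            if substrates <= pool and not products <= pool:
                pool |= products
                changed = True
    return pool

print("FAEE" in reachable({"acetate"}))  # True
```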
Preliminary studies combining these two methods have produced FAEEs using acetate as the only carbon source.[18][unreliable source] The production levels of all the methods mentioned are not yet high enough for large-scale applications.
^ KEGG
^ a b Schwer B, Bunkenborg J, Verdin RO, Andersen JS, Verdin E (July 2006). "Reversible lysine acetylation controls the activity of the mitochondrial enzyme acetyl-CoA synthetase 2". Proceedings of the National Academy of Sciences of the United States of America. 103 (27): 10224–10229. doi:10.1073/pnas.0603968103. PMC 1502439. PMID 16788062.
^ a b Ikeda Y, Yamamoto J, Okamura M, Fujino T, Takahashi S, Takeuchi K, Osborne TF, Yamamoto TT, Ito S, Sakai J (September 2001). "Transcriptional regulation of the murine acetyl-CoA synthetase 1 gene through multiple clustered binding sites for sterol regulatory element-binding proteins and a single neighboring site for Sp1". The Journal of Biological Chemistry. 276 (36): 34259–69. doi:10.1074/jbc.M103848200. PMID 11435428.
^ a b Bramlett MR, Tan X, Lindahl PA (August 2003). "Inactivation of acetyl-CoA synthase/carbon monoxide dehydrogenase by copper". Journal of the American Chemical Society. 125 (31): 9316–7. doi:10.1021/ja0352855. PMID 12889960.
^ PDB: 1RY2; Jogl G, Tong L (February 2004). "Crystal structure of yeast acetyl-coenzyme A synthetase in complex with AMP". Biochemistry. 43 (6): 1425–31. doi:10.1021/bi035911a. PMID 14769018.
^ De Cima S, Rúa J, Perdiguero E, del Valle P, Busto F, Baroja-Mazo A, de Arriaga D (Apr 7, 2005). "An acetyl-CoA synthetase not encoded by the facA gene is expressed under carbon starvation in Phycomyces blakesleeanus". Research in Microbiology. 156 (5–6): 663–9. doi:10.1016/j.resmic.2005.03.003. PMID 15921892.
^ a b c PDB: 1PG3; Gulick AM, Starai VJ, Horswill AR, Homick KM, Escalante-Semerena JC (March 2003). "The 1.75 A crystal structure of acetyl-CoA synthetase bound to adenosine-5'-propylphosphate and coenzyme A". Biochemistry. 42 (10): 2866–73. doi:10.1021/bi0271603. PMID 12627952.
^ The PyMOL Molecular Graphics System, Version 2.0 Schrödinger, LLC.
^ a b Mews P, Donahue G, Drake AM, Luczak V, Abel T, Berger SL (June 2017). "Acetyl-CoA synthetase regulates histone acetylation and hippocampal memory". Nature. 546 (7658): 381–386. doi:10.1038/nature22405. PMC 5505514. PMID 28562591.
^ Bae JM, Kim JH, Oh HJ, Park HE, Lee TH, Cho NY, Kang GH (February 2017). "Downregulation of acetyl-CoA synthetase 2 is a metabolic hallmark of tumor progression and aggressiveness in colorectal carcinoma". Modern Pathology. 30 (2): 267–277. doi:10.1038/modpathol.2016.172. PMID 27713423. S2CID 2474320.
^ Zhang S, He J, Jia Z, Yan Z, Yang J (March 2018). "Acetyl-CoA synthetase 2 enhances tumorigenesis and is indicative of a poor prognosis for patients with renal cell carcinoma". Urologic Oncology. 36 (5): 243.e9–243.e20. doi:10.1016/j.urolonc.2018.01.013. PMID 29503142.
^ Hallows WC, Lee S, Denu JM (July 2006). "Sirtuins deacetylate and activate mammalian acetyl-CoA synthetases". Proceedings of the National Academy of Sciences of the United States of America. 103 (27): 10230–10235. doi:10.1073/pnas.0604392103. PMC 1480596. PMID 16790548.
^ Gardner JG, Grundy FJ, Henkin TM, Escalante-Semerena JC (August 2006). "Control of acetyl-coenzyme A synthetase (AcsA) activity by acetylation/deacetylation without NAD(+) involvement in Bacillus subtilis". Journal of Bacteriology. 188 (15): 5460–8. doi:10.1128/JB.00215-06. PMC 1540023. PMID 16855235.
^ Takahashi H, McCaffery JM, Irizarry RA, Boeke JD (July 2006). "Nucleocytosolic acetyl-coenzyme a synthetase is required for histone acetylation and global transcription". Molecular Cell. 23 (2): 207–17. doi:10.1016/j.molcel.2006.05.040. PMID 16857587.
^ Xiao Y, Ruan Z, Liu Z, Wu SG, Varman AM, Liu Y, Tang YJ (2013). "Engineering Escherichia coli to convert acetic acid to free fatty acids". Biochemical Engineering Journal. 76: 60–69. doi:10.1016/j.bej.2013.04.013.
^ Glasky AJ, Rafelson ME (August 1959). "The utilization of acetate-C14 by Escherichia coli grown on acetate as the sole carbon source". The Journal of Biological Chemistry. 234 (8): 2118–22. doi:10.1016/S0021-9258(18)69876-X. PMID 13673023.
^ Steen EJ, Kang Y, Bokinsky G, Hu Z, Schirmer A, McClure A, Del Cardayre SB, Keasling JD (January 2010). "Microbial production of fatty-acid-derived fuels and chemicals from plant biomass". Nature. 463 (7280): 559–62. doi:10.1038/nature08721. PMID 20111002. S2CID 4425677.
^ Banuelos S, Cervantes E, Perez E, Tang S (March 2017). From toxic byproduct to biofuels: Adapting engineered Escherichia coli to produce fatty acid ethyl esters from acetate. Stanford University Course: CHEMENG 185B (Report).
Retrieved from "https://en.wikipedia.org/w/index.php?title=Acetyl-CoA_synthetase&oldid=1081809336"
|
Aspartate ammonia-lyase - Wikipedia
Aspartate ammonia-lyase homotetramer, Bacillus sp. YM55-1
In enzymology, an aspartate ammonia-lyase (EC 4.3.1.1) is an enzyme that catalyzes the chemical reaction
L-aspartate
{\displaystyle \rightleftharpoons }
fumarate + NH3
Hence, this enzyme has one substrate, L-aspartate, and two products, fumarate and NH3. The reaction is the basis of the industrial synthesis of aspartate.[1]
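The deamination is atom-balanced: L-aspartate (C4H7NO4) splits into fumarate (C4H4O4) and ammonia (NH3). A quick sketch verifying the element counts (the formula parser is a simplified illustration, handling only element symbols followed by optional digits):

```python
import re
from collections import Counter

def formula_counts(formula):
    """Parse a simple molecular formula like 'C4H7NO4' into element counts."""
    counts = Counter()
    for elem, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        if elem:
            counts[elem] += int(num or 1)
    return counts

# L-aspartate -> fumarate + NH3 (neutral-species formulas)
aspartate = formula_counts("C4H7NO4")
fumarate = formula_counts("C4H4O4")
ammonia = formula_counts("NH3")
print(aspartate == fumarate + ammonia)  # True
```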
This enzyme belongs to the family of lyases, specifically ammonia lyases, which cleave carbon-nitrogen bonds. The systematic name of this enzyme class is L-aspartate ammonia-lyase (fumarate-forming). Other names in common use include aspartase, fumaric aminase, L-aspartase, and L-aspartate ammonia-lyase. This enzyme participates in alanine and aspartate metabolism and nitrogen metabolism.
As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 1J3U and 1JSW.
^ Karlheinz Drauz, Ian Grayson, Axel Kleemann, Hans-Peter Krimmer, Wolfgang Leuchtenberger, Christoph Weckbecker (2006). "Amino Acids". Ullmann's Encyclopedia of Industrial Chemistry. Weinheim: Wiley-VCH. doi:10.1002/14356007.a02_057.pub2.
Ellfolk N, Kjærgård T, Bánhidi ZG, Virtanen AI, Sörensen NA (1953). "Studies on aspartase. 1. Quantitative separation of aspartase from bacterial cells, and its partial purification". Acta Chem. Scand. 7: 824–830. doi:10.3891/acta.chem.scand.07-0824.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Aspartate_ammonia-lyase&oldid=1052156257"