Good Will Hunting
1997 film by Gus Van Sant
Good Will Hunting is a 1997 American drama film directed by Gus Van Sant and written by Ben Affleck and Matt Damon. It stars Robin Williams, Damon, Affleck, Stellan Skarsgård and Minnie Driver. The film tells the story of janitor Will Hunting, whose mathematical genius is discovered by a professor at MIT.
The film received acclaim from critics and grossed over $225 million during its theatrical run against a $10 million budget. At the 70th Academy Awards, it received nominations in nine categories, including Best Picture and Best Director, and won in two: Best Supporting Actor for Williams and Best Original Screenplay for Affleck and Damon. In 2014, it was ranked at number 53 in "The Hollywood Reporter"'s "100 Favorite Films" list.
Plot.
Twenty-year-old Will Hunting of South Boston is a self-taught math genius who was recently paroled after completing a prison term. He works as a janitor at MIT and spends his free time drinking with his friends Chuckie, Billy and Morgan.
When Professor Gerald Lambeau posts a difficult combinatorial mathematics problem on a blackboard as a challenge for his graduate students, Will solves the problem anonymously, stunning both the students and Lambeau. As a challenge to the unknown genius, Lambeau posts an even more difficult problem. He later catches Will writing the solution on the blackboard late at night, but initially thinks Will is vandalizing it and chases him off. At a bar, Will meets Skylar, an English woman about to graduate from Harvard College, who plans on attending medical school at Stanford.
Meanwhile, Lambeau, realizing Will was not vandalizing the problem but solving it, asks the campus maintenance staff about his whereabouts, but is informed that the janitor did not turn up for work. Lambeau learns Will was placed at MIT by a parole program, and obtains his parole officer's details.
Will and his friends start a fight with a gang that includes a member who used to bully Will as a child. Will is arrested after he attacks a responding police officer. Lambeau sits in on his court appearance and watches Will defend himself. He arranges for Will to avoid jail time if he agrees to study mathematics under Lambeau's supervision and participate in psychotherapy sessions. Will agrees, but treats his therapists with mockery.
In desperation, Lambeau calls on Dr. Sean Maguire, his college roommate, who now teaches psychology at Bunker Hill Community College. Unlike the other therapists, Sean actually challenges Will's defense mechanisms. During the first session, Will insults Sean's deceased wife, and Sean threatens him, but after a few unproductive sessions, Will finally begins to open up.
Will is particularly struck by Sean's story of how he met his wife, who later died of cancer, by giving up his ticket to the historic game six of the 1975 World Series after falling in love at first sight. Sean's explanation for surrendering his ticket was to "see about a girl", and he does not regret his decision. This encourages Will to build a relationship with Skylar, though he lies to her about his past and is reluctant to introduce her to his friends or show her his home. Will also challenges Sean to take an objective look at his own life, since Sean cannot move on from his wife's death.
Lambeau sets up a number of job interviews for Will, but he scorns them by sending Chuckie as his "chief negotiator", and by turning down a position at the NSA with a scathing critique of the agency's moral position. Skylar asks Will to move to California with her, but he refuses and tells her he is an orphan, and that his foster father physically abused him.
Will breaks up with Skylar and later storms out on Lambeau, dismissing the mathematical research he has been doing. Sean points out to Will that he is so adept at anticipating future failure in his interpersonal relationships that he deliberately sabotages them to avoid emotional pain. Chuckie likewise challenges Will over his resistance to taking any of the positions he interviews for, telling him he owes it to his friends to make the most of opportunities they will never have, even if it means leaving one day. He then tells Will that the best part of his day is the brief moment when he waits at his doorstep, thinking Will has moved on to something greater.
Will walks in on a heated argument between Sean and Lambeau over Will's potential. Lambeau leaves, and Sean and Will talk and it comes to light that they were both victims of child abuse. Sean helps him see that he is a victim of his own inner demons and to accept that it is not his fault, causing him to break down in tears in Sean's arms. Will accepts one of the job offers arranged by Lambeau. Having helped Will manage his problems, Sean reconciles with Lambeau, deciding to take a sabbatical.
Will's friends give him a used car for his 21st birthday so he can commute to work. Later, Chuckie goes to Will's house to pick him up, only to find that he is not there; to Chuckie's delight, Will has finally done what Chuckie had been wishing for all these years. Will leaves a note for Sean asking him to tell Lambeau that he had to go "see about a girl", revealing that he passed on the job offer and is instead heading to California to reunite with Skylar.
Cast.
* Matt Damon as Will Hunting
Production.
Development.
Matt Damon started writing the film as a final assignment for a playwriting class he was taking at Harvard University. Instead of writing a one-act play, Damon submitted a 40-page script. He wrote his then-girlfriend, medical student Skylar Satenstein, into the script (she is credited in the film's closing credits). Damon said that the only scene from that script to survive verbatim was the one in which Will (Damon) meets his therapist, Sean Maguire (Robin Williams), for the first time. Damon asked Ben Affleck to develop the screenplay together. They completed the script in 1994. At first, it was written as a thriller about a young man from the rough-and-tumble streets of South Boston who possesses a superior intelligence and is targeted by the government with heavy-handed recruitment.
Castle Rock Entertainment bought the script for $675,000 against $775,000, meaning that Damon and Affleck would earn an additional $100,000 if the film was produced, and they retained sole writing credit. Castle Rock Entertainment president Rob Reiner urged them to drop the thriller aspect of the story and to focus on the relationship between Will and his therapist. Terrence Malick told Affleck and Damon over dinner that the film ought to end with Will's decision to follow his girlfriend Skylar to California, not with the couple leaving together.
At Reiner's request, screenwriter William Goldman read the script. Goldman consistently denied the persistent rumor that he wrote "Good Will Hunting" or acted as a script doctor. In his book "Which Lie Did I Tell?", Goldman jokingly writes, "I did not just doctor it. I wrote the whole thing from scratch," before dismissing the rumor as false and saying that his only advice was agreeing with Reiner's suggestion.
Affleck and Damon proposed to act in the lead roles, but many studio executives said that they wanted Brad Pitt and Leonardo DiCaprio. Meanwhile, Kevin Smith was working with Affleck on "Mallrats" and with both Damon and Affleck on "Chasing Amy." Castle Rock Entertainment put the script in turnaround and gave Damon and Affleck 30 days to find another buyer who would reimburse Castle Rock Entertainment the money it had paid; otherwise the script would revert to the studio, and Damon and Affleck would be out. All of the studios that had been involved in the original bidding war for the screenplay turned the pair down, taking meetings with Affleck and Damon only to tell them so to their face.
As a last resort, Affleck passed the script to his "Chasing Amy" director Kevin Smith, who read it and promised to walk the script directly into Harvey Weinstein's office at Miramax Films. Weinstein read the script, loved it, and paid Castle Rock Entertainment their due, while also agreeing to let Damon and Affleck star in the film. Weinstein asked that a few scenes be removed, including an out-of-place, mid-script oral sex scene that Damon and Affleck added to trick executives who were not looking closely.
After buying the rights from Castle Rock Entertainment, Miramax Films put the film into production. Several well-known filmmakers were originally considered to direct, including Mel Gibson and Michael Mann. Originally, Affleck asked Kevin Smith whether he was interested in directing. He declined, saying that they needed a "good director", that he directed only projects that he wrote, and that he was not much of a visual director, but he still served as one of the film's co-executive producers. Damon and Affleck chose Gus Van Sant, whose work on previous films, like "Drugstore Cowboy", had left a favorable impression on the fledgling screenwriters. Miramax was persuaded and hired Van Sant to direct the film.
Filming.
Filming took place between April and June 1997. Although the story is set in Boston, and many of the scenes were shot on location in the Greater Boston area, many of the interior shots were filmed at locations in Toronto, with the University of Toronto standing in for MIT and Harvard University. The classroom scenes were filmed at McLennan Physical Laboratories (of the University of Toronto) and Central Technical School. Harvard normally disallows filming on its property, but permitted limited filming by the project after intervention by Harvard alumnus John Lithgow. Likewise, only the exterior shots of Bunker Hill Community College were filmed in Boston; however, Sean's office was built in Toronto as an exact replica of one at the college.
The interior bar scenes set in "Southie" were shot on location at Woody's L Street Tavern. Meanwhile, the homes of Will (190 West 6th Street) and Sean (259 E Street), although portrayed as some distance apart in the movie, are actually next door to each other on Bowen Street, the narrow street along which Chuckie drives before walking up to Will's back door.
The Bow and Arrow Pub, which was located at the corner of Bow Street and Massachusetts Avenue in Cambridge, doubled as the exterior of the Harvard bar in which Will meets Skylar for the first time. The Baskin-Robbins/Dunkin' Donuts featured in the "How do you like "them" apples?" scene was next door to the pub at the time of the film's release. The Harvard Bar interior scenes were filmed at the Upfront Bar and Grill on Front St. E. in Toronto.
The Tasty, at the corner of JFK and Brattle Streets, was the scene of Will and Skylar's first kiss. The Au Bon Pain, where Will and Skylar discuss the former's photographic memory, was at the corner of Dunster Street and Mass Ave.
The Boston Public Garden bench on which Will and Sean sat for a scene in the film became a temporary shrine after Williams's death in 2014.
Soundtrack.
The musical score for "Good Will Hunting" was composed by Danny Elfman, who had previously collaborated with Gus Van Sant on "To Die For" and would go on to score many of the director's other films. The film also features many songs written and recorded by singer-songwriter Elliott Smith. His song "Miss Misery" was nominated for the Academy Award for Best Original Song but lost to "My Heart Will Go On" from "Titanic". Elfman's score was also nominated for an Oscar but lost to "Titanic" as well. On September 11, 2006, "The Today Show" used Elfman's song "Weepy Donuts" while Matt Lauer spoke during the opening credits.
A soundtrack album for the film was released by Capitol Records on November 18, 1997, although only two of Elfman's cues appear on the release.
"Afternoon Delight" by the Starland Vocal Band was featured in the film but did not appear on the soundtrack album.
A limited-edition soundtrack album featuring Elfman's complete score from the film was released by Music Box Records on March 3, 2014. The soundtrack, issued in 1,500 copies, includes all of Elfman's cues (including music not featured on the rare Miramax Academy promo) and contains the songs by Elliott Smith. One of the tracks combines a Smith song with Elfman's arrangements.
Mathematics.
In an early version of the script, Will Hunting was going to be a physics prodigy, but Sheldon Glashow, a Nobel laureate in physics at Harvard, told Damon that the subject should be math instead of physics. Glashow referred Damon to his brother-in-law, Daniel Kleitman, a mathematics professor at MIT. Columbia University physics and math professor Brian Greene, speaking at a Tribeca Sloan retrospective, explained that in physics, "Having some deep insight about the universe [...] typically [is] a group project in the modern era," while "doing some mathematical theorem is a singular undertaking very often." In the spring of 1997, Damon and Affleck asked Kleitman to "speak math to us" for writing realistic dialogue, so Kleitman invited postdoc Tom Bohman to join him, giving them a "quick lecture". When asked for a problem that Will could solve, Kleitman and Bohman suggested the unsolved computer science P versus NP problem, but the movie used other problems.
Patrick O'Donnell, professor of physics at the University of Toronto, served as the mathematical consultant for the film.
The main hallway blackboard is used twice to reveal Will's talent, first to the audience, and second to Professor Lambeau. Damon based it on his artist brother Kyle visiting MIT's Infinite Corridor and writing "an incredibly elaborate, totally fake, version of an equation" on a blackboard, which lasted for months. Kyle returned to Matt, saying that MIT needed those blackboards "because these kids are so smart they just need to, you know, drop everything and solve problems!".
The first blackboard problem.
Near the start of the film, Will sets aside his mop to study a difficult problem posed by Lambeau on the blackboard. The problem has to do with intermediate-level graph theory, but Lambeau describes it as an advanced "Fourier system".
To answer the first part of the question, Will chalks up an adjacency matrix:
formula_0
To answer the second part, he determines the number of 3-step walks in the graph by finding the third power of the matrix:
formula_1
The third and fourth parts of the question concern generating functions. The other characters are astounded that a janitor shows such facility with matrices.
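The walk-counting step can be reproduced directly. A minimal sketch, assuming the film's graph is a multigraph (the 2s in the matrix record a double edge between the second and third nodes):

```python
# Adjacency matrix from the first blackboard problem; entry (i, j) of
# A^n counts the n-step walks from node i+1 to node j+1.
import numpy as np

A = np.array([
    [0, 1, 0, 1],
    [1, 0, 2, 1],
    [0, 2, 0, 0],
    [1, 1, 0, 0],
])

A3 = np.linalg.matrix_power(A, 3)

# This reproduces the second matrix Will chalks up on the board.
assert A3.tolist() == [
    [2, 7, 2, 3],
    [7, 2, 12, 7],
    [2, 12, 0, 2],
    [3, 7, 2, 2],
]
```

The 12 in position (2, 3), for instance, says there are twelve 3-step walks between the two nodes joined by the double edge.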
The second blackboard problem.
Lambeau subsequently poses a new challenge on the blackboard: state Cayley's formula and "draw all the homeomorphically irreducible trees with formula_2". Will writes eight of the ten trees correctly before Lambeau interrupts.
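The Cayley's-formula half of this challenge (there are n^(n-2) labeled trees on n vertices; drawing the ten homeomorphically irreducible trees is a separate task) can be checked by brute force for small n. A sketch with our own helper names:

```python
# Brute-force check of Cayley's formula: there are n^(n-2) labeled
# trees on n vertices. We enumerate every (n-1)-edge subset of the
# complete graph and count those that are connected, since a graph
# with n vertices and n-1 edges is a tree iff it is connected.
from itertools import combinations

def count_labeled_trees(n):
    all_edges = list(combinations(range(n), 2))
    count = 0
    for edges in combinations(all_edges, n - 1):
        parent = list(range(n))  # union-find over the vertices

        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]  # path compression
                v = parent[v]
            return v

        components = n
        for a, b in edges:
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[ra] = rb
                components -= 1
        if components == 1:  # connected, hence a tree
            count += 1
    return count

for n in range(2, 6):
    assert count_labeled_trees(n) == n ** (n - 2)
```

For n = 10 the formula gives 10^8 labeled trees, which is why only the much smaller homeomorphically irreducible family is drawn on the board.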
Reception.
Box office.
In the film's opening weekend in limited release, it grossed $272,912. When it opened nationwide in January 1998, it grossed $10,261,471 for the weekend. It went on to gross $138,433,435 in the United States and Canada, surpassing "Pulp Fiction" as Miramax's highest grossing film in that market at the time. It grossed $225,933,435 worldwide.
Critical response.
The film was met with widespread critical acclaim on release. On the review aggregator Rotten Tomatoes, the film holds an approval rating of 97%, based on 91 reviews, with an average rating of 8.10/10. The website's critical consensus reads: "It follows a predictable narrative arc, but "Good Will Hunting" adds enough quirks to the journey – and is loaded with enough powerful performances – that it remains an entertaining, emotionally rich drama." On Metacritic, the film has a weighted average score of 71 out of 100, based on 28 critics, indicating "generally favorable reviews". Audiences surveyed by CinemaScore gave the film an average grade of "A" on a scale of A+ to F.
Roger Ebert of the "Chicago Sun-Times" gave the film three stars out of four, writing that while the story is "predictable", it is "the individual moments, not the payoff, that make it so effective".
Duane Byrge of "The Hollywood Reporter" praised the performances of the cast, writing, "The acting is brilliant overall, with special praise to Matt Damon for his ragingly tender portrayal of the boy cursed with genius."
Peter Stack of the "San Francisco Chronicle" was equally positive, writing, "The glow goes well beyond a radiant performance by Matt Damon ... Intimate, heartfelt and wickedly funny, it's a movie whose impact lingers."
Owen Gleiberman, writing for "Entertainment Weekly", gave the film a "B", stating, ""Good Will Hunting" is stuffed – indeed, overstuffed – with heart, soul, audacity, and blarney. You may not believe a minute of it, but you don't necessarily want to stop watching." He also noted Damon's and Williams's chemistry, describing it as "a quicksilver intercepting each other's thoughts".
Janet Maslin of "The New York Times" called the screenplay "smart and touching", and praised Van Sant for directing with "style, shrewdness and clarity". She also complimented the production design and cinematography, which were able to effortlessly move the viewer from "classroom to dorm room to neighborhood bar", in a small setting.
Quentin Curtis of "The Daily Telegraph" opined that Williams's performance brought "sharpness and tenderness", calling the film a "crowd-pleaser, with bags of charm to spare. It doesn't bear thinking too much about its message ... Damon and Affleck's writing has real wit and vigour, and some depth."
Andrew O'Hehir of "Salon" stated that despite the "enjoyable characters", he thought that the film was somewhat superficial, writing, "there isn't a whole lot of movie to take home with you ... many will wake the next morning wondering why, with all that talent on hand, it amounts to so little in the end."
Writing for the BBC, Nev Pierce gave the film four stars out of five, describing it as "touching, without being sentimental", although he felt that some scenes were "odd lapses into self-help speak".
Emanuel Levy of "Variety" called the film a "beautifully realized tale ... engaging and often quite touching". He felt that the film's visual style showcased Van Sant's talent, but the plot was "quite predictable".
Academic response.
Several scholars have examined the role of class, religion and the cultural geography of Boston in the film. Jeffrey Herlihy-Mera observed that the residual Catholic–Protestant tensions in Boston are an important backdrop in the film, as Irish Catholics from Southie are aligned against ostensibly Protestant characters who are affiliated with Harvard and MIT. Emmett Winn has argued that character interactions show class conflict and stunted social mobility, while, similarly, David Lipset commented that class inequality is a driving subtext.
[
{
"math_id": 0,
"text": "A=\\begin{pmatrix} 0 & 1 & 0& 1 \\\\ 1 & 0 & 2 & 1 \\\\ 0 & 2 & 0 & 0 \\\\ 1 & 1 & 0 & 0 \\end{pmatrix}."
},
{
"math_id": 1,
"text": "A^3=\\begin{pmatrix} 2 & 7 & 2 & 3 \\\\ 7 & 2 & 12 & 7 \\\\ 2 & 12 & 0 & 2 \\\\ 3 & 7 & 2 & 2 \\end{pmatrix}."
},
{
"math_id": 2,
"text": "n=10"
}
] |
https://en.wikipedia.org/wiki?curid=142456
Causal structure
Causal relationships between points in a manifold
In mathematical physics, the causal structure of a Lorentzian manifold describes the causal relationships between points in the manifold.
Introduction.
In modern physics (especially general relativity) spacetime is represented by a Lorentzian manifold. The causal relations between points in the manifold are interpreted as describing which events in spacetime can influence which other events.
The causal structure of an arbitrary (possibly curved) Lorentzian manifold is made more complicated by the presence of curvature. Discussions of the causal structure for such manifolds must be phrased in terms of smooth curves joining pairs of points. Conditions on the tangent vectors of the curves then define the causal relationships.
Tangent vectors.
If formula_0 is a Lorentzian manifold (with metric formula_1 on the manifold formula_2), then the nonzero tangent vectors at each point in the manifold can be classified into three disjoint types. A tangent vector formula_3 is:
* timelike if formula_4,
* null if formula_5,
* spacelike if formula_6.
Here we use the formula_7 metric signature. We say that a tangent vector is non-spacelike if it is null or timelike.
The canonical Lorentzian manifold is Minkowski spacetime, where formula_8 and formula_1 is the flat Minkowski metric. The names for the tangent vectors come from the physics of this model. The causal relationships between points in Minkowski spacetime take a particularly simple form because the tangent space is also formula_9 and hence the tangent vectors may be identified with points in the space. The four-dimensional vector formula_10 is classified according to the sign of formula_11, where formula_12 is a Cartesian coordinate in 3-dimensional space, formula_13 is the constant representing the universal speed limit, and formula_14 is time. The classification of any vector in the space will be the same in all frames of reference that are related by a Lorentz transformation (but not by a general Poincaré transformation because the origin may then be displaced) because of the invariance of the metric.
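The sign-of-formula_11 classification can be sketched in a few lines. A minimal illustration in flat spacetime, with our own function name and units in which c = 1 by default:

```python
# Classify a Minkowski four-vector by the sign of
# g(X, X) = -c^2 t^2 + |r|^2, using the (-,+,+,+) signature.
def causal_character(t, x, y, z, c=1.0):
    g = -(c * t) ** 2 + x**2 + y**2 + z**2
    if g < 0:
        return "timelike"
    if g == 0:  # exact comparison; fine for the integer examples below
        return "null"
    return "spacelike"

assert causal_character(1, 0, 0, 0) == "timelike"   # inside the light cone
assert causal_character(1, 1, 0, 0) == "null"       # on the light cone
assert causal_character(0, 1, 0, 0) == "spacelike"  # outside the light cone
```

Because the classification depends only on the sign of an invariant, all Lorentz-related frames agree on it, as noted above.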
Time-orientability.
At each point in formula_2 the timelike tangent vectors in the point's tangent space can be divided into two classes. To do this we first define an equivalence relation on pairs of timelike tangent vectors.
If formula_3 and formula_15 are two timelike tangent vectors at a point we say that formula_3 and formula_15 are equivalent (written formula_16) if formula_17.
There are then two equivalence classes which between them contain all timelike tangent vectors at the point.
We can (arbitrarily) call one of these equivalence classes future-directed and call the other past-directed. Physically this designation of the two classes of future- and past-directed timelike vectors corresponds to a choice of an arrow of time at the point. The future- and past-directed designations can be extended to null vectors at a point by continuity.
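The two equivalence classes can be illustrated numerically in Minkowski spacetime. A sketch assuming c = 1, with function names of our own choosing:

```python
# X ~ Y iff g(X, Y) < 0 splits the timelike vectors into two classes.
# The class containing T = (1, 0, 0, 0) consists of the timelike
# vectors with positive time component; call it future-directed.
def g(X, Y):
    # Minkowski metric with (-,+,+,+) signature
    return -X[0] * Y[0] + sum(a * b for a, b in zip(X[1:], Y[1:]))

def equivalent(X, Y):
    return g(X, Y) < 0

T = (1, 0, 0, 0)
future = (2, 1, 0, 0)    # timelike, t > 0
past = (-2, 1, 0, 0)     # timelike, t < 0
assert g(future, future) < 0 and g(past, past) < 0  # both timelike
assert equivalent(T, future)       # same class as T: future-directed
assert not equivalent(T, past)     # opposite class: past-directed
```

Choosing which class to label "future" is exactly the arbitrary choice of an arrow of time described above.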
A Lorentzian manifold is time-orientable if a continuous designation of future-directed and past-directed for non-spacelike vectors can be made over the entire manifold.
Curves.
A path in formula_2 is a continuous map formula_18 where formula_19 is a nondegenerate interval (i.e., a connected set containing more than one point) in formula_20. A smooth path has formula_21 differentiable an appropriate number of times (typically formula_22), and a regular path has nonvanishing derivative.
A curve in formula_2 is the image of a path or, more properly, an equivalence class of path-images related by re-parametrisation, i.e. homeomorphisms or diffeomorphisms of formula_19. When formula_2 is time-orientable, the curve is oriented if the parameter change is required to be monotonic.
Smooth regular curves (or paths) in formula_2 can be classified depending on their tangent vectors. Such a curve is:
* chronological (or timelike) if the tangent vector is timelike at all points in the curve,
* null if the tangent vector is null at all points,
* spacelike if the tangent vector is spacelike at all points,
* causal (or non-spacelike) if the tangent vector is timelike or null at all points.
The requirements of regularity and nondegeneracy of formula_19 ensure that closed causal curves (such as those consisting of a single point) are not automatically admitted by all spacetimes.
If the manifold is time-orientable then the non-spacelike curves can further be classified depending on their orientation with respect to time.
A chronological, null or causal curve in formula_2 is:
* future-directed if, at every point of the curve, the tangent vector is future-directed,
* past-directed if, at every point of the curve, the tangent vector is past-directed.
These definitions only apply to causal (chronological or null) curves because only timelike or null tangent vectors can be assigned an orientation with respect to time.
Causal relations.
There are several causal relations between points formula_23 and formula_24 in the manifold formula_2:
* formula_23 chronologically precedes formula_24 (denoted formula_25) if there exists a future-directed chronological (timelike) curve from formula_23 to formula_24,
* formula_23 strictly causally precedes formula_24 (denoted formula_26) if there exists a future-directed causal (non-spacelike) curve from formula_23 to formula_24,
* formula_23 causally precedes formula_24 (denoted formula_27 or formula_28) if formula_23 strictly causally precedes formula_24 or formula_29,
* formula_23 horismos formula_24 (denoted formula_30 or formula_31) if formula_27 and formula_32.
These relations satisfy the following properties:
* formula_33 and formula_34 together imply formula_35,
* formula_27 and formula_36 together imply formula_35,
* the relations formula_37, formula_38 and formula_39 are transitive.
For a point formula_23 in the manifold formula_2 we define
formula_42
formula_44
We similarly define
formula_46
formula_48
Points contained in formula_50, for example, can be reached from formula_23 by a future-directed timelike curve.
The point formula_23 can be reached, for example, from points contained in formula_47 by a future-directed non-spacelike curve.
In Minkowski spacetime the set formula_41 is the interior of the future light cone at formula_23. The set formula_45 is the full future light cone at formula_23, including the cone itself.
These sets formula_51, defined for all formula_23 in formula_2, are collectively called the causal structure of formula_2.
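In Minkowski spacetime these sets admit closed-form membership tests, since the causal character of the straight displacement between two points decides the relation. A sketch assuming c = 1, with helper names of our own:

```python
# y is in I+(x) when the displacement y - x is future-directed timelike;
# y is in J+(x) when it is future-directed causal, or when y = x.
def interval(x, y):
    dt = y[0] - x[0]
    dr2 = sum((b - a) ** 2 for a, b in zip(x[1:], y[1:]))
    return -dt**2 + dr2, dt  # squared interval (−,+,+,+) and time difference

def in_I_plus(x, y):
    s, dt = interval(x, y)
    return s < 0 and dt > 0

def in_J_plus(x, y):
    s, dt = interval(x, y)
    return x == y or (s <= 0 and dt > 0)

origin = (0, 0, 0, 0)
assert in_I_plus(origin, (2, 1, 0, 0))       # interior of the future cone
assert not in_I_plus(origin, (1, 1, 0, 0))   # on the cone: not chronological
assert in_J_plus(origin, (1, 1, 0, 0))       # the cone itself lies in J+
assert not in_J_plus(origin, (0, 1, 0, 0))   # spacelike separation
```

This mirrors the remark above that in Minkowski spacetime I+(x) is the open interior of the future light cone while J+(x) includes the cone itself.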
For formula_52 a subset of formula_2 we define
formula_53
formula_54
For formula_55 two subsets of formula_2 we define:
* the chronological future of formula_52 relative to formula_56, formula_57, as formula_58,
* the causal future of formula_52 relative to formula_56, formula_59, as formula_60.
Properties.
See Penrose (1972), p13.
* formula_23 is in formula_71 if and only if formula_24 is in formula_41.
* formula_72
* formula_73
* formula_74
* formula_75
Topological properties:
* formula_76 is open for all points formula_23 in formula_2.
* formula_77 is open for all subsets formula_63.
* formula_78 for all subsets formula_63, where formula_79 denotes the closure of formula_52.
* formula_80
Conformal geometry.
Two metrics formula_81 and formula_82 are conformally related if formula_83 for some real function formula_84 called the conformal factor. (See conformal map).
Looking at the definitions of which tangent vectors are timelike, null and spacelike, we see that they remain unchanged if we use formula_81 or formula_82. For example, suppose formula_3 is a timelike tangent vector with respect to the metric formula_81. This means that formula_4. We then have that formula_85, so formula_3 is a timelike tangent vector with respect to formula_82 as well.
It follows from this that the causal structure of a Lorentzian manifold is unaffected by a conformal transformation.
A null geodesic remains a null geodesic under a conformal rescaling.
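This invariance is easy to check numerically: multiplying the metric by a positive conformal factor never changes the sign of g(X, X). A sketch in flat spacetime, with our own helper names:

```python
# Sign of g(X, X) is preserved under g -> Ω² g for any Ω² > 0, so
# timelike, null and spacelike vectors keep their causal character.
def minkowski(X, Y):
    return -X[0] * Y[0] + sum(a * b for a, b in zip(X[1:], Y[1:]))

def sign(v):
    return (v > 0) - (v < 0)

vectors = [(1, 0, 0, 0), (1, 1, 0, 0), (0, 1, 0, 0), (3, 1, 2, 1)]
for omega2 in (0.25, 1.0, 9.0):  # sample conformal factors Ω²
    for X in vectors:
        assert sign(omega2 * minkowski(X, X)) == sign(minkowski(X, X))
```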
Conformal infinity.
An infinite metric admits geodesics of infinite length/proper time. However, we can sometimes make a conformal rescaling of the metric with a conformal factor which falls off sufficiently fast to 0 as we approach infinity to get the conformal boundary of the manifold. The topological structure of the conformal boundary depends upon the causal structure.
In various spaces the conformal boundary takes different forms. In Minkowski spacetime, for example, it includes the future and past timelike infinities formula_86 and formula_87 (written collectively as formula_88), together with null and spacelike infinity.
Gravitational singularity.
If a geodesic terminates after a finite affine parameter, and it is not possible to extend the manifold to extend the geodesic, then we have a singularity.
The absolute event horizon is the past null cone of the future timelike infinity. It is generated by null geodesics which obey the Raychaudhuri optical equation.
[
{
"math_id": 0,
"text": "\\,(M,g)"
},
{
"math_id": 1,
"text": "g"
},
{
"math_id": 2,
"text": "M"
},
{
"math_id": 3,
"text": "X"
},
{
"math_id": 4,
"text": "\\,g(X,X) < 0"
},
{
"math_id": 5,
"text": "\\,g(X,X) = 0"
},
{
"math_id": 6,
"text": "\\,g(X,X) > 0"
},
{
"math_id": 7,
"text": "(-,+,+,+,\\cdots)"
},
{
"math_id": 8,
"text": "M=\\mathbb{R}^4"
},
{
"math_id": 9,
"text": "\\mathbb{R}^4"
},
{
"math_id": 10,
"text": "X = (t,r)"
},
{
"math_id": 11,
"text": "g(X,X) = - c^2 t^2 + \\|r\\|^2"
},
{
"math_id": 12,
"text": "r \\in \\mathbb{R}^3"
},
{
"math_id": 13,
"text": "c"
},
{
"math_id": 14,
"text": "t"
},
{
"math_id": 15,
"text": "Y"
},
{
"math_id": 16,
"text": "X \\sim Y"
},
{
"math_id": 17,
"text": "\\,g(X,Y) < 0"
},
{
"math_id": 18,
"text": "\\mu : \\Sigma \\to M"
},
{
"math_id": 19,
"text": "\\Sigma"
},
{
"math_id": 20,
"text": "\\mathbb{R}"
},
{
"math_id": 21,
"text": "\\mu"
},
{
"math_id": 22,
"text": "C^\\infty"
},
{
"math_id": 23,
"text": "x"
},
{
"math_id": 24,
"text": "y"
},
{
"math_id": 25,
"text": "\\,x \\ll y"
},
{
"math_id": 26,
"text": "x < y"
},
{
"math_id": 27,
"text": "x \\prec y"
},
{
"math_id": 28,
"text": "x \\le y"
},
{
"math_id": 29,
"text": "x=y"
},
{
"math_id": 30,
"text": "x \\to y"
},
{
"math_id": 31,
"text": " x \\nearrow y "
},
{
"math_id": 32,
"text": "x \\not\\ll y"
},
{
"math_id": 33,
"text": "x \\ll y"
},
{
"math_id": 34,
"text": "y \\prec z"
},
{
"math_id": 35,
"text": "x \\ll z"
},
{
"math_id": 36,
"text": "y \\ll z"
},
{
"math_id": 37,
"text": "\\ll"
},
{
"math_id": 38,
"text": "<"
},
{
"math_id": 39,
"text": "\\prec"
},
{
"math_id": 40,
"text": "\\to"
},
{
"math_id": 41,
"text": "\\,I^+(x)"
},
{
"math_id": 42,
"text": "\\,I^+(x) = \\{ y \\in M | x \\ll y\\}"
},
{
"math_id": 43,
"text": "\\,I^-(x)"
},
{
"math_id": 44,
"text": "\\,I^-(x) = \\{ y \\in M | y \\ll x\\}"
},
{
"math_id": 45,
"text": "\\,J^+(x)"
},
{
"math_id": 46,
"text": "\\,J^+(x) = \\{ y \\in M | x \\prec y\\}"
},
{
"math_id": 47,
"text": "\\,J^-(x)"
},
{
"math_id": 48,
"text": "\\,J^-(x) = \\{ y \\in M | y \\prec x\\}"
},
{
"math_id": 49,
"text": "y \\to x"
},
{
"math_id": 50,
"text": "\\, I^+(x)"
},
{
"math_id": 51,
"text": "\\,I^+(x) ,I^-(x), J^+(x), J^-(x)"
},
{
"math_id": 52,
"text": "S"
},
{
"math_id": 53,
"text": "I^\\pm[S] = \\bigcup_{x \\in S} I^\\pm(x) "
},
{
"math_id": 54,
"text": "J^\\pm[S] = \\bigcup_{x \\in S} J^\\pm(x) "
},
{
"math_id": 55,
"text": "S, T"
},
{
"math_id": 56,
"text": "T"
},
{
"math_id": 57,
"text": "I^+[S;T]"
},
{
"math_id": 58,
"text": "I^+[S] \\cap T"
},
{
"math_id": 59,
"text": "J^+[S;T]"
},
{
"math_id": 60,
"text": "J^+[S] \\cap T"
},
{
"math_id": 61,
"text": "I^-(x)"
},
{
"math_id": 62,
"text": "D^+ (S)"
},
{
"math_id": 63,
"text": "S \\subset M"
},
{
"math_id": 64,
"text": "q,r \\in S"
},
{
"math_id": 65,
"text": "r \\in I^{+}(q)"
},
{
"math_id": 66,
"text": "I^{+}[S]"
},
{
"math_id": 67,
"text": "\\gamma"
},
{
"math_id": 68,
"text": "J^+(\\gamma(t_1)) \\cap J^-(\\gamma(t_2))"
},
{
"math_id": 69,
"text": "\\gamma(t_1)"
},
{
"math_id": 70,
"text": "\\gamma(t_2)"
},
{
"math_id": 71,
"text": "\\,I^-(y)"
},
{
"math_id": 72,
"text": "x \\prec y \\implies I^-(x) \\subset I^-(y)"
},
{
"math_id": 73,
"text": "x \\prec y \\implies I^+(y) \\subset I^+(x)"
},
{
"math_id": 74,
"text": "I^+[S] = I^+[I^+[S]] \\subset J^+[S] = J^+[J^+[S]]"
},
{
"math_id": 75,
"text": "I^-[S] = I^-[I^-[S]] \\subset J^-[S] = J^-[J^-[S]]"
},
{
"math_id": 76,
"text": "I^\\pm(x)"
},
{
"math_id": 77,
"text": "I^\\pm[S]"
},
{
"math_id": 78,
"text": "I^\\pm[S] = I^\\pm[\\overline{S}]"
},
{
"math_id": 79,
"text": "\\overline{S}"
},
{
"math_id": 80,
"text": "I^\\pm[S] \\subset \\overline{J^\\pm[S]}"
},
{
"math_id": 81,
"text": "\\,g"
},
{
"math_id": 82,
"text": "\\hat{g}"
},
{
"math_id": 83,
"text": "\\hat{g} = \\Omega^2 g"
},
{
"math_id": 84,
"text": "\\Omega"
},
{
"math_id": 85,
"text": "\\hat{g}(X,X) = \\Omega^2 g(X,X) < 0"
},
{
"math_id": 86,
"text": "i^+"
},
{
"math_id": 87,
"text": "i^-"
},
{
"math_id": 88,
"text": "i^\\pm"
}
] |
https://en.wikipedia.org/wiki?curid=14246049
Memristor
Nonlinear two-terminal fundamental circuit element
A memristor (a portmanteau of "memory resistor") is a non-linear two-terminal electrical component relating electric charge and magnetic flux linkage. It was described and named in 1971 by Leon Chua, completing a theoretical quartet of fundamental electrical components that also comprises the resistor, capacitor and inductor.
Chua and Kang later generalized the concept to memristive systems. Such a system comprises a circuit of multiple conventional components that mimics key properties of the ideal memristor component, and it is also commonly referred to as a memristor. Several such memristor system technologies have been developed, notably ReRAM.
The identification of memristive properties in electronic devices has attracted controversy. Experimentally, the ideal memristor has yet to be demonstrated.
As a fundamental electrical component.
Chua in his 1971 paper identified a theoretical symmetry between the non-linear resistor (voltage vs. current), non-linear capacitor (voltage vs. charge), and non-linear inductor (magnetic flux linkage vs. current). From this symmetry he inferred the characteristics of a fourth fundamental non-linear circuit element, linking magnetic flux and charge, which he called the memristor. In contrast to a linear (or non-linear) resistor, the memristor has a dynamic relationship between current and voltage, including a memory of past voltages or currents. Other scientists had proposed dynamic memory resistors, such as the memistor of Bernard Widrow, but Chua's treatment introduced mathematical generality.
Derivation and characteristics.
The memristor was originally defined in terms of a non-linear functional relationship between magnetic flux linkage Φm("t") and the amount of electric charge that has flowed, "q"("t"):
formula_0
The "magnetic flux linkage", Φm, is generalized from the circuit characteristic of an inductor. It "does not" represent a magnetic field here. Its physical meaning is discussed below. The symbol Φm may be regarded as the integral of voltage over time.
In the relationship between Φm and q, the derivative of one with respect to the other depends on the value of one or the other, and so each memristor is characterized by its memristance function describing the charge-dependent rate of change of flux with charge.
formula_1
Substituting the flux as the time integral of the voltage, and charge as the time integral of current, the more convenient forms are:
formula_2
To relate the memristor to the resistor, capacitor, and inductor, it is helpful to isolate the term "M"("q"), which characterizes the device, and write it as a differential equation:
Resistor: "dV" = "R" "dI" (resistance "R", in V/A)
Capacitor: "dq" = "C" "dV" (capacitance "C", in C/V)
Inductor: "dΦ"m = "L" "dI" (inductance "L", in Wb/A)
Memristor: "dΦ"m = "M" "dq" (memristance "M", in Wb/C)
The above table covers all meaningful ratios of differentials of "I", "q", "Φ"m, and "V". No device can relate "dI" to "dq", or "dΦ"m to "dV", because "I" is the derivative of "q" and "Φ"m is the integral of "V".
It can be inferred from this that memristance is charge-dependent resistance. If "M"("q"("t")) is a constant, then we obtain Ohm's law "R"("t") = "V"("t")/"I"("t"). If "M"("q"("t")) is nontrivial, however, the equation is not equivalent because "q"("t") and "M"("q"("t")) can vary with time. Solving for voltage as a function of time produces
formula_3
This equation reveals that memristance defines a linear relationship between current and voltage, as long as "M" does not vary with charge. Nonzero current implies time varying charge. Alternating current, however, may reveal the linear dependence in circuit operation by inducing a measurable voltage without net charge movement—as long as the maximum change in "q" does not cause much change in "M".
Furthermore, the memristor is static if no current is applied. If "I"("t") = 0, we find "V"("t") = 0 and "M"("t") is constant. This is the essence of the memory effect.
Analogously, we can define formula_4 as memductance.
formula_5
The power consumption characteristic recalls that of a resistor, "I"2"R".
formula_6
As long as "M"("q"("t")) varies little, such as under alternating current, the memristor will appear as a constant resistor. If "M"("q"("t")) increases rapidly, however, current and power consumption will quickly stop.
"M"("q") is physically restricted to be positive for all values of "q" (assuming the device is passive and does not become superconductive at some "q"). A negative value would mean that it would perpetually supply energy when operated with alternating current.
Modelling and validation.
In order to understand the nature of memristor function, some knowledge of fundamental circuit theoretic concepts is useful, starting with the concept of device modeling.
Engineers and scientists seldom analyze a physical system in its original form. Instead, they construct a model which approximates the behaviour of the system. By analyzing the behaviour of the model, they hope to predict the behaviour of the actual system. The primary reason for constructing models is that physical systems are usually too complex to be amenable to a practical analysis.
In the 20th century, work was done on devices whose memristive characteristics the researchers did not recognize at the time. This has raised the suggestion that such devices should be recognised as memristors. Pershin and Di Ventra have proposed a test that can help to resolve some of the long-standing controversies about whether an ideal memristor does actually exist or is a purely mathematical concept.
The rest of this article primarily addresses memristors as related to ReRAM devices, since the majority of work since 2008 has been concentrated in this area.
Superconducting memristor component.
Dr. Paul Penfield, in a 1974 MIT technical report, mentions the memristor in connection with Josephson junctions. This was an early use of the word "memristor" in the context of a circuit device.
One of the terms in the current through a Josephson junction is of the form:
formula_7
where formula_8 is a constant based on the physical superconducting materials, formula_9 is the voltage across the junction and formula_10 is the current through the junction.
Through the late 20th century, research regarding this phase-dependent conductance in Josephson junctions was carried out. A more comprehensive approach to extracting this phase-dependent conductance appeared with Peotta and Di Ventra's seminal paper in 2014.
Memristor circuits.
Due to the practical difficulty of studying the ideal memristor, we will discuss other electrical devices which can be modelled using memristors. For a mathematical description of memristive devices (systems), see Theory.
A discharge tube can be modelled as a memristive device, with resistance being a function of the number of conduction electrons formula_11.
formula_12
formula_13 is the voltage across the discharge tube, formula_10 is the current flowing through it and formula_11 is the number of conduction electrons. A simple memristance function is formula_14. formula_15 and formula_16 are parameters depending on the dimensions of the tube and the gas fillings. An experimental identification of memristive behaviour is the "pinched hysteresis loop" in the formula_17 plane. For an experiment that shows such a characteristic for a common discharge tube, see "A physical memristor Lissajous figure" (YouTube). The video also illustrates how to understand deviations in the pinched hysteresis characteristics of physical memristors.
Thermistors can be modelled as memristive devices.
formula_18
formula_19 is a material constant, formula_20 is the absolute body temperature of the thermistor, formula_21 is the ambient temperature (both temperatures in Kelvin), formula_22 denotes the cold temperature resistance at formula_23, formula_24 is the heat capacitance and formula_25 is the dissipation constant for the thermistor.
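As a concrete (and heavily simplified) sketch of such a first-order memristive thermistor model, the following simulation treats the body temperature as the internal state variable, with v = R(T)·i and a heat balance between Joule heating and dissipation. All parameter values are assumed for illustration:

```python
import math

def simulate_thermistor(beta=3500.0, T0=293.0, R0=10000.0,
                        C_th=5e-3, delta=1e-3, I0=5e-3, freq=0.5,
                        dt=1e-3, duration=10.0):
    """First-order memristive thermistor model (illustrative parameters).

    State variable: body temperature T.  Electrical port: v = R(T) * i,
    with R(T) = R0 * exp(beta * (1/T - 1/T0)) and the heat balance
    C_th * dT/dt = v*i - delta*(T - T_ambient).
    """
    T, T_amb = T0, T0
    out = []
    steps = int(duration / dt)
    for n in range(steps):
        t = n * dt
        i = I0 * math.sin(2 * math.pi * freq * t)
        R = R0 * math.exp(beta * (1.0 / T - 1.0 / T0))
        v = R * i
        # Joule heating versus Newtonian cooling to the ambient
        T += dt * (v * i - delta * (T - T_amb)) / C_th
        out.append((t, i, v, T))
    return out
```

Because the resistance depends only on the state T and vanishes current implies vanishing voltage, the simulated v-i trajectory is again a pinched hysteresis loop.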
A fundamental phenomenon that has hardly been studied is memristive behaviour in pn-junctions. The memristor plays a crucial role in mimicking the charge storage effect in the diode base, and is also responsible for the conductivity modulation phenomenon (that is so important during forward transients).
Criticisms.
In 2008, a team at HP Labs found experimental evidence for Chua's memristor based on an analysis of a thin film of titanium dioxide, thus connecting the operation of ReRAM devices to the memristor concept. According to HP Labs, the memristor would operate in the following way: the memristor's electrical resistance is not constant but depends on the current that had previously flowed through the device, i.e., its present resistance depends on how much electric charge has previously flowed through it and in what direction; the device remembers its history (the so-called "non-volatility property"). When the electric power supply is turned off, the memristor remembers its most recent resistance until it is turned on again.
The HP Labs result was published in the scientific journal "Nature".
Following this claim, Leon Chua has argued that the memristor definition could be generalized to cover all forms of two-terminal non-volatile memory devices based on resistance switching effects. Chua also argued that the memristor is the oldest known circuit element, with its effects predating the resistor, capacitor, and inductor. There are, however, some serious doubts as to whether a genuine memristor can actually exist in physical reality. Additionally, some experimental evidence contradicts Chua's generalization since a non-passive nanobattery effect is observable in resistance switching memory. A simple test has been proposed by Pershin and Di Ventra to analyze whether such an ideal or generic memristor does actually exist or is a purely mathematical concept. Up to now, there seems to be no experimental resistance switching device (ReRAM) which can pass the test.
These devices are intended for applications in nanoelectronic memory devices, computer logic, and neuromorphic/neuromemristive computer architectures. In 2013, Hewlett-Packard CTO Martin Fink suggested that memristor memory may become commercially available as early as 2018. In March 2012, a team of researchers from HRL Laboratories and the University of Michigan announced the first functioning memristor array built on a CMOS chip.
According to the original 1971 definition, the memristor is the fourth fundamental circuit element, forming a non-linear relationship between electric charge and magnetic flux linkage. In 2011, Chua argued for a broader definition that includes all two-terminal non-volatile memory devices based on resistance switching. Williams argued that MRAM, phase-change memory and ReRAM are memristor technologies. Some researchers argued that biological structures such as blood and skin fit the definition. Others argued that the memory device under development by HP Labs and other forms of ReRAM are not memristors, but rather part of a broader class of variable-resistance systems, and that a broader definition of memristor is a scientifically unjustifiable land grab that favored HP's memristor patents.
In 2011, Meuffels and Schroeder noted that one of the early memristor papers included a mistaken assumption regarding ionic conduction. In 2012, Meuffels and Soni discussed some fundamental issues and problems in the realization of memristors. They indicated inadequacies in the electrochemical modeling presented in the "Nature" article "The missing memristor found" because the impact of concentration polarization effects on the behavior of metal−TiO2−"x"−metal structures under voltage or current stress was not considered. This critique was referred to by Valov "et al." in 2013.
In a kind of thought experiment, Meuffels and Soni furthermore revealed a severe inconsistency: If a current-controlled memristor with the so-called "non-volatility property" exists in physical reality, its behavior would violate Landauer's principle, which places a limit on the minimum amount of energy required to change "information" states of a system. This critique was finally adopted by Di Ventra and Pershin in 2013.
Within this context, Meuffels and Soni pointed to a fundamental thermodynamic principle: Non-volatile information storage requires the existence of free-energy barriers that separate the distinct internal memory states of a system from each other; otherwise, one would be faced with an "indifferent" situation, and the system would arbitrarily fluctuate from one memory state to another just under the influence of thermal fluctuations. When unprotected against thermal fluctuations, the internal memory states exhibit some diffusive dynamics, which causes state degradation. The free-energy barriers must therefore be high enough to ensure a low bit-error probability of bit operation. Consequently, there is always a lower limit of energy requirement – depending on the required bit-error probability – for intentionally changing a bit value in any memory device.
In the general concept of memristive system the defining equations are (see Theory):
formula_26
where "u"("t") is an input signal, and "y"("t") is an output signal. The vector x represents a set of "n" state variables describing the different internal memory states of the device. ẋ is the rate of change of the state vector x with time.
When one wants to go beyond mere curve fitting and aims at a real physical modeling of non-volatile memory elements, e.g., resistive random-access memory devices, one has to keep an eye on the aforementioned physical correlations. To check the adequacy of the proposed model and its resulting state equations, the input signal "u"("t") can be superposed with a stochastic term "ξ"("t"), which takes into account the existence of inevitable thermal fluctuations. The dynamic state equation in its general form then finally reads:
formula_27
where "ξ"("t") is, e.g., white Gaussian current or voltage noise. On the basis of an analytical or numerical analysis of the time-dependent response of the system to noise, a decision on the physical validity of the modeling approach can be made, e.g., would the system be able to retain its memory states in power-off mode?
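A minimal numerical sketch of this test, assuming the pure current-controlled memristor (state rate equal to the current) with the input switched off, shows the state performing a free random walk under the noise term alone. All magnitudes here are illustrative assumptions:

```python
import math
import random

def state_drift_under_noise(x0=0.5, sigma=0.05, dt=1e-3,
                            steps=2000, runs=200, seed=1):
    """Euler-Maruyama sketch of dx/dt = f(x, u) + xi(t) with u = 0.

    For a pure current-controlled memristor f is the current itself, so
    with the supply off only the noise term xi(t) acts and the state x
    performs an unbounded random walk: the memory state is not retained.
    sigma, dt and the other values are illustrative assumptions.
    """
    rng = random.Random(seed)
    finals = []
    for _ in range(runs):
        x = x0
        for _ in range(steps):
            x += sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        finals.append(x)
    mean = sum(finals) / runs
    var = sum((xf - mean) ** 2 for xf in finals) / runs
    return mean, var   # var grows roughly as sigma**2 * steps * dt
```

The variance of the final state grows linearly with elapsed time, which is the quantitative signature of the state degradation described above.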
Such an analysis was performed by Di Ventra and Pershin with regard to the genuine current-controlled memristor. As the proposed dynamic state equation provides no physical mechanism enabling such a memristor to cope with inevitable thermal fluctuations, a current-controlled memristor would erratically change its state in the course of time just under the influence of current noise. Di Ventra and Pershin thus concluded that memristors whose resistance (memory) states depend solely on the current or voltage history would be unable to protect their memory states against unavoidable Johnson–Nyquist noise and would permanently suffer from information loss, a so-called "stochastic catastrophe". A current-controlled memristor thus cannot exist as a solid-state device in physical reality.
The above-mentioned thermodynamic principle furthermore implies that the operation of two-terminal non-volatile memory devices (e.g. "resistance-switching" memory devices (ReRAM)) cannot be associated with the memristor concept, i.e., such devices cannot by themselves remember their current or voltage history. Transitions between distinct internal memory or resistance states are probabilistic in nature. The probability for a transition from state {i} to state {j} depends on the height of the free-energy barrier between both states. The transition probability can thus be influenced by suitably driving the memory device, i.e., by "lowering" the free-energy barrier for the transition {i} → {j} by means of, for example, an externally applied bias.
A "resistance switching" event can simply be enforced by setting the external bias to a value above a certain threshold value. This is the trivial case, i.e., the free-energy barrier for the transition {i} → {j} is reduced to zero. In case one applies biases below the threshold value, there is still a finite probability that the device will switch in the course of time (triggered by a random thermal fluctuation), but – as one is dealing with probabilistic processes – it is impossible to predict when the switching event will occur. That is the basic reason for the stochastic nature of all observed resistance-switching (ReRAM) processes. If the free-energy barriers are not high enough, the memory device can even switch spontaneously, without any applied bias.
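The dependence of the switching probability on the barrier height can be sketched with a simple Arrhenius estimate; the attempt frequency of 10^13 Hz is an assumed typical value, not taken from the text:

```python
import math

K_B = 1.380649e-23          # Boltzmann constant, J/K
EV = 1.602176634e-19        # 1 eV in joules

def switching_probability(barrier_eV, t_seconds, T=300.0, attempt_hz=1e13):
    """Probability that a thermal fluctuation carries the device over a
    free-energy barrier within t_seconds (simple Arrhenius sketch)."""
    rate = attempt_hz * math.exp(-barrier_eV * EV / (K_B * T))
    return 1.0 - math.exp(-rate * t_seconds)

# A 0.5 eV barrier is crossed almost immediately at room temperature,
# while a 1.5 eV barrier retains its state over years.
p_low = switching_probability(0.5, 1.0)
p_high = switching_probability(1.5, 3.15e7)   # ~1 year
```

This makes the trade-off concrete: reliable retention demands barriers many tens of kT high, which in turn sets a floor on the energy needed to switch intentionally.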
When a two-terminal non-volatile memory device is found to be in a distinct resistance state {j}, there exists therefore no physical one-to-one relationship between its present state and its foregoing voltage history. The switching behavior of individual non-volatile memory devices thus cannot be described within the mathematical framework proposed for memristor/memristive systems.
An extra thermodynamic curiosity arises from the definition that memristors/memristive devices should energetically act like resistors. The instantaneous electrical power entering such a device is completely dissipated as Joule heat to the surroundings, so no extra energy remains in the system after it has been brought from one resistance state x"i" to another one x"j". Thus, the internal energy of the memristor device in state x"i", "U"("V", "T", x"i"), would be the same as in state x"j", "U"("V", "T", x"j"), even though these different states would give rise to different device resistances, which must themselves be caused by physical alterations of the device's material.
Other researchers noted that memristor models based on the assumption of linear ionic drift do not account for asymmetry between set time (high-to-low resistance switching) and reset time (low-to-high resistance switching) and do not provide ionic mobility values consistent with experimental data. Non-linear ionic-drift models have been proposed to compensate for this deficiency.
A 2014 article from researchers of ReRAM concluded that Strukov's (HP's) initial/basic memristor modeling equations do not reflect the actual device physics well, whereas subsequent (physics-based) models such as Pickett's model or Menzel's ECM model (Menzel is a co-author of that article) have adequate predictability, but are computationally prohibitive. As of 2014, the search continues for a model that balances these issues; the article identifies Chang's and Yakopcic's models as potentially good compromises.
Martin Reynolds, an electrical engineering analyst with research outfit Gartner, commented that while HP was being sloppy in calling their device a memristor, critics were being pedantic in saying that it was not a memristor.
Experimental tests.
Chua suggested experimental tests to determine if a device may properly be categorized as a memristor:
(1) The Lissajous curve in the voltage–current plane is a pinched hysteresis loop when driven by any bipolar periodic voltage or current, irrespective of initial conditions.
(2) The area of each lobe of the pinched hysteresis loop shrinks as the frequency of the forcing signal increases.
(3) As the frequency tends to infinity, the hysteresis loop degenerates to a straight line through the origin, whose slope depends on the amplitude and shape of the forcing signal.
According to Chua all resistive switching memories including ReRAM, MRAM and phase-change memory meet these criteria and are memristors. However, the lack of data for the Lissajous curves over a range of initial conditions or over a range of frequencies complicates assessments of this claim.
Experimental evidence shows that redox-based resistance memory (ReRAM) includes a nanobattery effect that is contrary to Chua's memristor model. This indicates that the memristor theory needs to be extended or corrected to enable accurate ReRAM modeling.
Theory.
In 2008, researchers from HP Labs introduced a model for a memristance function based on thin films of titanium dioxide. For RON ≪ ROFF the memristance function was determined to be
formula_28
where ROFF represents the high resistance state, RON represents the low resistance state, "μ"v represents the mobility of dopants in the thin film, and "D" represents the film thickness. The HP Labs group noted that "window functions" were necessary to compensate for differences between experimental measurements and their memristor model due to non-linear ionic drift and boundary effects.
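A numerical sketch of this linear ionic-drift model follows, with a window function (here of the Joglekar type, one common choice) suppressing drift at the film boundaries. The parameter values are illustrative assumptions, not HP's published device values:

```python
import numpy as np

def hp_linear_drift(Ron=100.0, Roff=16000.0, D=10e-9, mu_v=1e-14,
                    w0=5e-9, I0=1e-4, freq=1.0, cycles=2, n=20000, p=10):
    """Linear ionic drift model of a TiO2-like memristor (illustrative
    parameters), with a Joglekar window f(x) = 1 - (2x - 1)^(2p).

    State w is the width of the doped (low-resistance) region:
        dw/dt = mu_v * (Ron / D) * i(t) * f(w / D)
        R(w)  = Ron * w/D + Roff * (1 - w/D)
    """
    t = np.linspace(0.0, cycles / freq, n)
    dt = t[1] - t[0]
    i = I0 * np.sin(2 * np.pi * freq * t)
    w = w0
    v = np.empty(n)
    for k in range(n):
        x = w / D
        R = Ron * x + Roff * (1.0 - x)
        v[k] = R * i[k]
        window = 1.0 - (2.0 * x - 1.0) ** (2 * p)
        w += dt * mu_v * (Ron / D) * i[k] * window
        w = min(max(w, 0.0), D)      # keep the state physical
    return t, i, v
```

The window function is exactly the kind of correction the HP group describes: it forces the drift rate to zero as the doped region approaches either boundary of the film.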
Operation as a switch.
For some memristors, applied current or voltage causes substantial change in resistance. Such devices may be characterized as switches by investigating the time and energy that must be spent to achieve a desired change in resistance. This assumes that the applied voltage remains constant. Solving for energy dissipation during a single switching event reveals that for a memristor to switch from "R"on to "R"off in time "T"on to "T"off, the charge must change by ΔQ = "Q"on−"Q"off.
formula_29
Substituting "V" = "I"("q")"M"("q"), and then ∫d"q"/"V" = ∆"Q"/"V" for constant "V", produces the final expression. This power characteristic differs fundamentally from that of a metal-oxide-semiconductor transistor, which is capacitor-based. Unlike the transistor, the final state of the memristor in terms of charge does not depend on bias voltage.
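At constant applied voltage the dissipated switching energy therefore reduces to the product of the voltage and the transferred charge. A quick numeric sketch (the example values are assumed):

```python
def switching_energy(V, delta_Q):
    """Energy dissipated while switching at constant voltage V as the
    charge changes by delta_Q:  E = V * delta_Q."""
    return V * delta_Q

# e.g. a 1 V write that moves 100 nC of charge dissipates 100 nJ
E = switching_energy(1.0, 100e-9)  # 1e-7 J
```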
The type of memristor described by Williams ceases to be ideal after switching over its entire resistance range, creating hysteresis, also called the "hard-switching regime". Another kind of switch would have a cyclic "M"("q") so that each "off"-"on" event would be followed by an "on"-"off" event under constant bias. Such a device would act as a memristor under all conditions, but would be less practical.
Memristive systems.
In the more general concept of an "n"-th order memristive system the defining equations are
formula_30
where "u"("t") is an input signal, "y"("t") is an output signal, the vector x represents a set of "n" state variables describing the device, and "g" and "f" are continuous functions. For a current-controlled memristive system the signal "u"("t") represents the current signal "i"("t") and the signal "y"("t") represents the voltage signal "v"("t"). For a voltage-controlled memristive system the signal "u"("t") represents the voltage signal "v"("t") and the signal "y"("t") represents the current signal "i"("t").
The "pure" memristor is a particular case of these equations, namely when the state "x" is simply the charge (x = "q"), which is related to the current via the time derivative d"q"/d"t" = "i"("t"). Thus for "pure" memristors "f" (i.e. the rate of change of the state) must be equal to or proportional to the current "i"("t").
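A generic first-order memristive system in this (g, f) form can be sketched as a small forward-Euler integrator; the memristance function in the example at the bottom is an arbitrary illustrative choice:

```python
import numpy as np

def simulate_memristive_system(g, f, x0, u, dt):
    """Integrate a first-order memristive system
        y(t)  = g(x, u, t) * u(t)
        dx/dt = f(x, u, t)
    with forward Euler.  g and f are user-supplied callables.
    """
    x = float(x0)
    y = np.empty(len(u))
    for k, uk in enumerate(u):
        t = k * dt
        y[k] = g(x, uk, t) * uk
        x += dt * f(x, uk, t)
    return y

# Pure current-controlled memristor: the state is the charge (f = i),
# and g is a memristance depending on the state alone (values assumed).
dt = 1e-4
i_drive = 1e-3 * np.sin(2 * np.pi * 1.0 * np.arange(20000) * dt)
v_out = simulate_memristive_system(g=lambda q, i, t: 1000.0 + 5e8 * q,
                                   f=lambda q, i, t: i,
                                   x0=0.0, u=i_drive, dt=dt)
```

Swapping in a state variable other than the charge (temperature, filament width, ionic concentration) turns the same skeleton into any of the memristive device models discussed earlier.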
Pinched hysteresis.
One of the resulting properties of memristors and memristive systems is the existence of a pinched hysteresis effect. For a current-controlled memristive system, the input "u"("t") is the current "i"("t"), the output "y"("t") is the voltage "v"("t"), and the slope of the curve represents the electrical resistance. The change in slope of the pinched hysteresis curves demonstrates switching between different resistance states which is a phenomenon central to ReRAM and other forms of two-terminal resistance memory. At high frequencies, memristive theory predicts the pinched hysteresis effect will degenerate, resulting in a straight line representative of a linear resistor. It has been proven that some types of non-crossing pinched hysteresis curves (denoted Type-II) cannot be described by memristors.
Memristive networks and mathematical models of circuit interactions.
The concept of memristive networks grew out of Chua and Kang's 1976 paper "Memristive Devices and Systems". Chua proposed the use of memristive devices as a means of building artificial neural networks that could simulate the behavior of the human brain. In fact, memristive devices in circuits have complex interactions due to Kirchhoff's laws.
A memristive network is a type of artificial neural network that is based on memristive devices, which are electronic components that exhibit the property of memristance.
In a memristive network, the memristive devices are used to simulate the behavior of neurons and synapses in the human brain. The network consists of layers of memristive devices, each of which is connected to other layers through a set of weights. These weights are adjusted during the training process, allowing the network to learn and adapt to new input data.
One advantage of memristive networks is that they can be implemented using relatively simple and inexpensive hardware, making them an attractive option for developing low-cost artificial intelligence systems. They also have the potential to be more energy efficient than traditional artificial neural networks, as they can store and process information using less power. However, the field of memristive networks is still in the early stages of development, and more research is needed to fully understand their capabilities and limitations.
For the simplest model, with only memristive devices and voltage generators in series, there is an exact, closed-form equation (the Caravelli–Traversa–Di Ventra equation, CTDV) which describes the evolution of the internal memory of the network for each device.
For a simple (but not realistic) memristor model of a switch between two resistance values, given by the Williams-Strukov model formula_31, with formula_32,
there is a set of nonlinearly coupled differential equations that takes the form:
formula_33
where formula_34 is the diagonal matrix with elements formula_35 on the diagonal, and formula_36 are constants based on the memristors' physical parameters. The vector formula_37 is the vector of voltage generators in series with the memristors. The circuit topology enters only through the projector operator formula_38, defined in terms of the cycle matrix of the graph. The equation provides a concise mathematical description of the interactions due to Kirchhoff's laws. Interestingly, the equation shares many properties with a Hopfield network, such as the existence of Lyapunov functions and classical tunnelling phenomena. In the context of memristive networks, the CTDV equation may be used to predict the behavior of memristive devices under different operating conditions, or to design and optimize memristive circuits for specific applications.
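A toy numerical sketch of dynamics of this type is given below. The equation form, the projector construction from the cycle matrix, and all parameter values are assumptions for illustration (sign and parameter conventions vary between papers), not a faithful reproduction of the published CTDV equation:

```python
import numpy as np

def ctdv_step(x, s, Omega, alpha, beta, chi, dt):
    """One Euler step of a CTDV-style network equation (sketch):
        dx/dt = -alpha*x + (1/beta) * (I - chi * Omega @ diag(x))^-1 @ Omega @ s
    with the internal states x clipped to [0, 1]."""
    n = len(x)
    rhs = np.linalg.solve(np.eye(n) - chi * Omega @ np.diag(x), Omega @ s)
    return np.clip(x + dt * (-alpha * x + rhs / beta), 0.0, 1.0)

# Toy circuit: a single loop of three memristive edges.  The projector
# onto the cycle space is Omega = A^T (A A^T)^-1 A for cycle matrix A.
A = np.array([[1.0, 1.0, 1.0]])
Omega = A.T @ np.linalg.inv(A @ A.T) @ A
x = np.array([0.2, 0.5, 0.8])          # internal memory states
s = np.array([1.0, 0.0, 0.0])          # one voltage generator in series
for _ in range(200):
    x = ctdv_step(x, s, Omega, alpha=0.1, beta=1.0, chi=0.9, dt=0.01)
```

The point of the sketch is structural: the devices are coupled only through the projector built from the circuit's cycle space, so the topology alone determines how one memristor's state change feeds back on the others.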
Extended systems.
Some researchers have raised the question of the scientific legitimacy of HP's memristor models in explaining the behavior of ReRAM, and have suggested extended memristive models to remedy perceived deficiencies.
One example attempts to extend the memristive systems framework by including dynamic systems incorporating higher-order derivatives of the input signal "u"("t") as a series expansion
formula_39
where "m" is a positive integer, "u"("t") is an input signal, "y"("t") is an output signal, the vector x represents a set of "n" state variables describing the device, and the functions "g" and "f" are continuous functions. This equation produces the same zero-crossing hysteresis curves as memristive systems but with a different frequency response than that predicted by memristive systems.
Another example suggests including an offset value formula_40 to account for an observed nanobattery effect which violates the predicted zero-crossing pinched hysteresis effect.
formula_41
Implementation of hysteretic current-voltage memristors.
There exist implementations of memristors with a hysteretic current-voltage curve or with both hysteretic current-voltage curve and hysteretic flux-charge curve [arXiv:2403.20051]. Memristors with hysteretic current-voltage curve use a resistance dependent on the history of the current and voltage, and bode well for the future of memory technology due to their simple structure, high energy efficiency, and high integration density [DOI: 10.1002/aisy.202200053].
Titanium dioxide memristor.
Interest in the memristor revived when an experimental solid-state version was reported by R. Stanley Williams of Hewlett Packard in 2007. The article was the first to demonstrate that a solid-state device could have the characteristics of a memristor based on the behavior of nanoscale thin films. The device neither uses magnetic flux as the theoretical memristor suggested, nor stores charge as a capacitor does, but instead achieves a resistance dependent on the history of current.
Although not cited in HP's initial reports on their TiO2 memristor, the resistance switching characteristics of titanium dioxide were originally described in the 1960s.
The HP device is composed of a thin (50 nm) titanium dioxide film between two 5 nm thick electrodes, one titanium, the other platinum. Initially, there are two layers to the titanium dioxide film, one of which has a slight depletion of oxygen atoms. The oxygen vacancies act as charge carriers, meaning that the depleted layer has a much lower resistance than the non-depleted layer. When an electric field is applied, the oxygen vacancies drift (see "Fast-ion conductor"), changing the boundary between the high-resistance and low-resistance layers. Thus the resistance of the film as a whole is dependent on how much charge has been passed through it in a particular direction, which is reversible by changing the direction of current. Since the HP device displays fast-ion conduction at nanoscale, it is considered a nanoionic device.
Memristance is displayed only when both the doped layer and depleted layer contribute to resistance. When enough charge has passed through the memristor that the ions can no longer move, the device enters hysteresis. It ceases to integrate "q"=∫"I" d"t", but rather keeps "q" at an upper bound and "M" fixed, thus acting as a constant resistor until current is reversed.
Memory applications of thin-film oxides had been an area of active investigation for some time. IBM published an article in 2000 regarding structures similar to that described by Williams. Samsung has a U.S. patent for oxide-vacancy based switches similar to that described by Williams.
In April 2010, HP labs announced that they had practical memristors working at 1 ns (~1 GHz) switching times and 3 nm by 3 nm sizes, which bodes well for the future of the technology. At these densities it could easily rival the current sub-25 nm flash memory technology.
Silicon dioxide memristor.
Memristance appears to have been reported in nanoscale thin films of silicon dioxide as early as the 1960s. However, hysteretic conductance in silicon was associated with memristive effects only in 2009.
More recently, beginning in 2012, Tony Kenyon, Adnan Mehonic and their group demonstrated that the resistive switching in silicon oxide thin films is due to the formation of oxygen-vacancy filaments in defect-engineered silicon dioxide, directly probing the movement of oxygen under electrical bias and imaging the resultant conductive filaments using conductive atomic force microscopy.
Polymeric memristor.
In 2004, Krieger and Spitzer described dynamic doping of polymer and inorganic dielectric-like materials that improved the switching characteristics and retention required to create functioning nonvolatile memory cells. They used a passive layer between electrode and active thin films, which enhanced the extraction of ions from the electrode. It is possible to use fast-ion conductor as this passive layer, which allows a significant reduction of the ionic extraction field.
In July 2008, Erokhin and Fontana claimed to have developed a polymeric memristor before the more recently announced titanium dioxide memristor.
In 2010, Alibart, Gamrat, Vuillaume et al. introduced a new hybrid organic/nanoparticle device (the NOMFET: Nanoparticle Organic Memory Field-Effect Transistor), which behaves as a memristor and exhibits the main behavior of a biological spiking synapse. This device, also called a synapstor (synapse transistor), was used to demonstrate a neuro-inspired circuit (an associative memory showing Pavlovian learning).
In 2012, Crupi, Pradhan and Tozer described a proof of concept design to create neural synaptic memory circuits using organic ion-based memristors. The synapse circuit demonstrated long-term potentiation for learning as well as inactivity based forgetting. Using a grid of circuits, a pattern of light was stored and later recalled. This mimics the behavior of the V1 neurons in the primary visual cortex that act as spatiotemporal filters that process visual signals such as edges and moving lines.
In 2012, Erokhin and co-authors demonstrated a stochastic three-dimensional matrix with capabilities for learning and adapting, based on polymeric memristors.
Layered memristor.
In 2014, Bessonov et al. reported a flexible memristive device comprising a MoOx/MoS2 heterostructure sandwiched between silver electrodes on a plastic foil. The fabrication method is entirely based on printing and solution-processing technologies using two-dimensional layered transition metal dichalcogenides (TMDs). The memristors are mechanically flexible, optically transparent and produced at low cost. The memristive behaviour of switches was found to be accompanied by a prominent memcapacitive effect. High switching performance, demonstrated synaptic plasticity and tolerance of mechanical deformation promise to emulate the appealing characteristics of biological neural systems in novel computing technologies.
Atomristor.
An atomristor is an electrical device showing memristive behavior in atomically thin nanomaterials or atomic sheets. In 2018, Ge and Wu et al. in the Akinwande group at the University of Texas first reported a universal memristive effect in single-layer TMD (MX2, M = Mo, W; X = S, Se) atomic sheets based on a vertical metal-insulator-metal (MIM) device structure. The work was later extended to monolayer hexagonal boron nitride, which, at around 0.33 nm, is the thinnest memory material. These atomristors offer forming-free switching and both unipolar and bipolar operation. The switching behavior is found in single-crystalline and poly-crystalline films, with various conducting electrodes (gold, silver and graphene). Atomically thin TMD sheets are prepared via CVD/MOCVD, enabling low-cost fabrication. Afterwards, taking advantage of the low "on" resistance and large on/off ratio, a high-performance zero-power RF switch was demonstrated based on MoS2 or h-BN atomristors, indicating a new application of memristors for 5G, 6G and THz communication and connectivity systems. In 2020, an atomistic understanding of the conductive virtual point mechanism was presented in an article in Nature Nanotechnology.
Ferroelectric memristor.
The ferroelectric memristor is based on a thin ferroelectric barrier sandwiched between two metallic electrodes. Switching the polarization of the ferroelectric material by applying a positive or negative voltage across the junction can lead to a resistance variation of two orders of magnitude: ROFF ≫ RON (an effect called tunnel electroresistance). In general, the polarization does not switch abruptly. The reversal occurs gradually through the nucleation and growth of ferroelectric domains with opposite polarization. During this process, the resistance is neither RON nor ROFF, but in between. When the voltage is cycled, the ferroelectric domain configuration evolves, allowing a fine tuning of the resistance value. The ferroelectric memristor's main advantages are that ferroelectric domain dynamics can be tuned, offering a way to engineer the memristor response, and that the resistance variations are due to purely electronic phenomena, aiding device reliability, as no deep change to the material structure is involved.
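The gradual nucleation-and-growth picture lends itself to a simple numerical sketch. The model below is illustrative only — the rate law, threshold and resistance values are assumptions, not measured device parameters: the fraction `s` of reversed domains grows or shrinks under over-threshold bias, and the junction resistance is a parallel mixture of switched and unswitched regions.

```python
# Toy model of a ferroelectric memristor (illustrative assumptions, not a
# fitted device model): the fraction s of domains with reversed polarization
# grows under positive bias and shrinks under negative bias, and the junction
# conducts like two tunnel resistances in parallel weighted by s.

R_ON, R_OFF = 1e3, 1e5     # assumed limiting resistances, R_OFF >> R_ON
V_C = 1.0                  # assumed coercive-like threshold voltage
RATE = 50.0                # assumed domain nucleation/growth rate constant
DT = 1e-4                  # integration time step

def resistance(s):
    """Parallel mixture of switched (R_ON) and unswitched (R_OFF) domains."""
    return 1.0 / (s / R_ON + (1.0 - s) / R_OFF)

def step(s, v):
    """Advance the switched-domain fraction s for one time step under bias v."""
    if v > V_C:                       # grow reversed domains
        s += RATE * (v - V_C) * (1.0 - s) * DT
    elif v < -V_C:                    # shrink them again
        s += RATE * (v + V_C) * s * DT
    return min(max(s, 0.0), 1.0)

s = 0.0
for _ in range(2000):                 # hold +2 V: resistance falls gradually
    s = step(s, 2.0)
print(resistance(s))
```

Because `s` evolves continuously, intermediate resistance values between RON and ROFF are reachable by shorter or weaker pulses, which is the fine-tuning behavior described above.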
Carbon nanotube memristor.
In 2013, Ageev, Blinov et al. reported observing a memristive effect in structures based on vertically aligned carbon nanotubes while studying bundles of CNTs with a scanning tunneling microscope.
Later it was found that CNT memristive switching is observed when a nanotube has a non-uniform elastic strain Δ"L"0. It was shown that the memristive switching mechanism of a strained CNT is based on the formation and subsequent redistribution of non-uniform elastic strain and a piezoelectric field "Edef" in the nanotube under the influence of an external electric field "E"("x","t").
Biomolecular memristor.
Biomaterials have been evaluated for use in artificial synapses and have shown potential for application in neuromorphic systems. In particular, the feasibility of using a collagen-based biomemristor as an artificial synaptic device has been investigated; a synaptic device based on lignin demonstrated rising or falling current with consecutive voltage sweeps depending on the sign of the voltage; and natural silk fibroin has demonstrated memristive properties. Spin-memristive systems based on biomolecules are also being studied.
In 2012, Sandro Carrara and co-authors proposed the first biomolecular memristor, with the aim of realizing highly sensitive biosensors. Since then, several memristive sensors have been demonstrated.
Spin memristive systems.
Spintronic memristor.
Chen and Wang, researchers at disk-drive manufacturer Seagate Technology described three examples of possible magnetic memristors. In one device resistance occurs when the spin of electrons in one section of the device points in a different direction from those in another section, creating a "domain wall", a boundary between the two sections. Electrons flowing into the device have a certain spin, which alters the device's magnetization state. Changing the magnetization, in turn, moves the domain wall and changes the resistance. The work's significance led to an interview by IEEE Spectrum. A first experimental proof of the spintronic memristor based on domain wall motion by spin currents in a magnetic tunnel junction was given in 2011.
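A minimal numerical sketch of the domain-wall picture just described, under assumed linear laws — wall velocity proportional to current, and resistance interpolating linearly with wall position. Both laws and all parameter values are simplifications invented for illustration:

```python
# Toy spintronic memristor: a spin-polarized current moves the domain wall
# (fractional position w along the element), and the resistance interpolates
# between the two magnetic sections. All values are illustrative assumptions.

R_LOW, R_HIGH = 50.0, 150.0  # assumed resistances of the two magnetic sections
V_W = 1e6                    # assumed wall velocity per ampere (m/s per A)
LENGTH = 1e-6                # assumed element length (m)
I, DT = 1e-6, 1e-9           # drive current (A) and time step (s)

def resistance(w):
    """Series mixture: fraction w of the element is in the high-R orientation."""
    return R_LOW + (R_HIGH - R_LOW) * w

w = 0.2                      # initial fractional wall position
for _ in range(500):         # current pushes the wall forward, raising R
    w = min(max(w + V_W * I * DT / LENGTH, 0.0), 1.0)
print(resistance(w))
```

The key memristive property is visible here: the resistance depends on how much charge has flowed (current integrated over time), not on the instantaneous current alone.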
Memristance in a magnetic tunnel junction.
The magnetic tunnel junction has been proposed to act as a memristor through several potentially complementary mechanisms, both extrinsic (redox reactions, charge trapping/detrapping and electromigration within the barrier) and intrinsic (spin-transfer torque).
Extrinsic mechanism.
Based on research performed between 1999 and 2003, Bowen et al. published experiments in 2006 on a magnetic tunnel junction (MTJ) endowed with bi-stable spin-dependent states (resistive switching).
The MTJ consists of a SrTiO3 (STO) tunnel barrier that separates a half-metallic oxide LSMO electrode from a ferromagnetic metal CoCr electrode. The MTJ's usual two device resistance states, characterized by a parallel or antiparallel alignment of electrode magnetization, are altered by applying an electric field. When the electric field is applied from the CoCr to the LSMO electrode, the tunnel magnetoresistance (TMR) ratio is positive. When the direction of the electric field is reversed, the TMR is negative. In both cases, large amplitudes of TMR on the order of 30% are found. Since a fully spin-polarized current flows from the half-metallic LSMO electrode, within the Julliere model this sign change suggests a sign change in the effective spin polarization of the STO/CoCr interface. The origin of this multistate effect lies in the observed migration of Cr into the barrier and its state of oxidation. The sign change of TMR can originate from modifications to the STO/CoCr interface density of states, as well as from changes to the tunneling landscape at the STO/CoCr interface induced by CrOx redox reactions.
Reports on memristive switching within MgO-based MTJs appeared starting in 2008 and 2009. While the drift of oxygen vacancies within the insulating MgO layer has been proposed to describe the observed memristive effects, another explanation could be charge trapping/detrapping on the localized states of oxygen vacancies and its impact on spintronics. This highlights the importance of understanding what role oxygen vacancies play in the memristive operation of devices that deploy complex oxides with an intrinsic property such as ferroelectricity or multiferroicity.
Intrinsic mechanism.
The magnetization state of an MTJ can be controlled by spin-transfer torque, and can thus, through this intrinsic physical mechanism, exhibit memristive behavior. This spin torque is induced by current flowing through the junction, and leads to an efficient means of achieving an MRAM. However, the length of time the current flows through the junction determines the amount of current needed, i.e., charge is the key variable.
The combination of intrinsic (spin-transfer torque) and extrinsic (resistive switching) mechanisms naturally leads to a second-order memristive system described by the state vector x = ("x"1,"x"2), where "x"1 describes the magnetic state of the electrodes and "x"2 denotes the resistive state of the MgO barrier. In this case the change of "x"1 is current-controlled (spin torque is due to a high current density) whereas the change of "x"2 is voltage-controlled (the drift of oxygen vacancies is due to high electric fields). The presence of both effects in a memristive magnetic tunnel junction led to the idea of a nanoscopic synapse-neuron system.
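The separation into a current-controlled state variable and a voltage-controlled one can be sketched numerically. The rate laws, thresholds and parameter values below are assumptions for illustration, not taken from the cited work:

```python
# Second-order memristive MTJ sketch (assumed rate laws): x1 is the magnetic
# state, driven only by currents above the spin-torque threshold; x2 is the
# barrier's resistive state, driven only by voltages above the drift threshold.

I_TH, V_TH = 1e-3, 0.5     # assumed current and voltage thresholds
K1, K2 = 1e3, 10.0         # assumed rate constants for x1 and x2
DT = 1e-5                  # integration time step

def clamp(x):
    return min(max(x, 0.0), 1.0)

def step(x1, x2, i, v):
    # x1 (magnetic state) responds only to over-threshold current
    if abs(i) > I_TH:
        x1 = clamp(x1 + K1 * (i - (I_TH if i > 0 else -I_TH)) * DT)
    # x2 (barrier state) responds only to over-threshold voltage
    if abs(v) > V_TH:
        x2 = clamp(x2 + K2 * (v - (V_TH if v > 0 else -V_TH)) * DT)
    return x1, x2

x1, x2 = 0.0, 0.0
for _ in range(1000):
    # high current density at sub-threshold voltage: only x1 moves
    x1, x2 = step(x1, x2, i=2e-3, v=0.3)
print(x1, x2)
```

Driving the same model with a sub-threshold current and an over-threshold voltage would move only `x2`, which is the independence of the two mechanisms that motivates the synapse-neuron analogy.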
Spin memristive system.
A fundamentally different mechanism for memristive behavior has been proposed by Pershin and Di Ventra. The authors show that certain types of semiconductor spintronic structures belong to a broad class of memristive systems as defined by Chua and Kang. The mechanism of memristive behavior in such structures is based entirely on the electron spin degree of freedom, which allows for more convenient control than ionic transport in nanostructures. When an external control parameter (such as voltage) is changed, the adjustment of electron spin polarization is delayed because of diffusion and relaxation processes, causing hysteresis. This result was anticipated in the study of spin extraction at semiconductor/ferromagnet interfaces, but was not described in terms of memristive behavior. On a short time scale, these structures behave almost as an ideal memristor. This result broadens the possible range of applications of semiconductor spintronics and represents a step toward future practical applications.
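A toy simulation of this delayed-polarization picture (the relaxation law, conductance law and all parameters are illustrative assumptions): the polarization P relaxes toward a voltage-dependent equilibrium with time constant TAU, so under a triangular voltage sweep P lags the drive and the current sampled at the same voltage differs between the up and down branches — a hysteresis loop.

```python
# Sketch of spin-memristive hysteresis: conductance depends on the spin
# polarization P, which relaxes toward a voltage-dependent equilibrium with a
# finite time constant and therefore lags a swept voltage.
import math

TAU = 1e-3          # assumed spin relaxation/diffusion time constant (s)
G0, G1 = 1.0, 0.5   # assumed conductance law G(P) = G0 + G1 * P
DT = 1e-5           # time step (s)
N = 400             # steps per voltage ramp

def p_eq(v):
    """Assumed equilibrium spin polarization, saturating with voltage."""
    return math.tanh(v)

p = 0.0
i_up = i_down = None
for k in range(N + 1):                 # ramp the voltage up, 0 V -> 1 V
    v = k / N
    p += (p_eq(v) - p) * DT / TAU      # P relaxes toward its equilibrium
    if k == N // 2:                    # sample the current at v = 0.5 V
        i_up = (G0 + G1 * p) * v
for k in range(N + 1):                 # ramp back down, 1 V -> 0 V
    v = 1 - k / N
    p += (p_eq(v) - p) * DT / TAU
    if k == N // 2:                    # sample again at v = 0.5 V
        i_down = (G0 + G1 * p) * v
print(i_up, i_down)
```

The two sampled currents at the same voltage differ because P carries the history of the sweep; slowing the sweep relative to TAU shrinks the loop, consistent with the "almost ideal memristor on short time scales" remark above.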
Self-directed channel memristor.
In 2017, Kris Campbell formally introduced the self-directed channel (SDC) memristor.
The SDC device is the first memristive device available commercially to researchers, students and electronics enthusiasts worldwide.
The SDC device is operational immediately after fabrication. In the Ge2Se3 active layer, Ge-Ge homopolar bonds are found and switching occurs. The three layers consisting of Ge2Se3/Ag/Ge2Se3, directly below the top tungsten electrode, mix together during deposition and jointly form the silver-source layer. A layer of SnSe is between these two layers ensuring that the silver-source layer is not in direct contact with the active layer. Since silver does not migrate into the active layer at high temperatures, and the active layer maintains a high glass transition temperature of about , the device has significantly higher processing and operating temperatures at and at least , respectively. These processing and operating temperatures are higher than most ion-conducting chalcogenide device types, including the S-based glasses (e.g. GeS) that need to be photodoped or thermally annealed. These factors allow the SDC device to operate over a wide range of temperatures, including long-term continuous operation at .
Implementation of hysteretic flux-charge memristors.
Implementations exist of memristors with both a hysteretic current-voltage curve and a hysteretic flux-charge curve [arXiv:2403.20051]. Such memristors use a memristance that depends on the history of the flux and the charge, and can merge the functionality of the arithmetic logic unit and of the memory unit without data transfer [DOI: 10.1002/adfm.201303365].
Time-integrated Formingfree memristor.
Time-integrated Formingfree (TiF) memristors exhibit a hysteretic flux-charge curve and a hysteretic current-voltage curve, each with two distinguishable branches in the positive bias range and two in the negative bias range. The memristance state of a TiF memristor can be controlled by both the flux and the charge [DOI: 10.1063/1.4775718]. A TiF memristor was first demonstrated by Heidemarie Schmidt and her team in 2011 [DOI: 10.1063/1.3601113]. This TiF memristor is composed of a BiFeO3 thin film between metallically conducting electrodes, one gold, the other platinum. The hysteretic flux-charge curve of the TiF memristor changes its slope continuously in one branch in the positive and one branch in the negative bias range (write branches) and has a constant slope in one branch in the positive and one branch in the negative bias range (read branches) [arXiv:2403.20051]. According to Leon O. Chua [Reference 1: "10.1.1.189.3614"], the slope of the flux-charge curve corresponds to the memristance of a memristor or to its internal state variables. TiF memristors can thus be considered memristors with a constant memristance in the two read branches and a reconfigurable memristance in the two write branches. The physical memristor model which describes the hysteretic current-voltage curves of the TiF memristor implements static and dynamic internal state variables in the two read branches and in the two write branches [arXiv:2402.10358].
The static and dynamic internal state variables of non-linear memristors can be used to implement operations on non-linear memristors representing linear, non-linear, and even transcendental (e.g. exponential or logarithmic) input-output functions.
The transport characteristics of the TiF memristor in the small-current, small-voltage range are non-linear. This non-linearity compares well with the non-linear small-current, small-voltage characteristics of the basic former and present building blocks of the arithmetic logic unit in von Neumann computers, i.e. vacuum tubes and transistors. In contrast to vacuum tubes and transistors, the signal output of hysteretic flux-charge memristors, i.e. of TiF memristors, is not lost when the operating power is switched off before the signal output is stored to memory. Therefore, hysteretic flux-charge memristors are said to merge the functionality of the arithmetic logic unit and of the memory unit without data transfer [DOI: 10.1002/adfm.201303365]. The transport characteristics of hysteretic current-voltage memristors in the small-current, small-voltage range are linear. This explains why hysteretic current-voltage memristors are well-established memory units and why they cannot merge the functionality of the arithmetic logic unit and of the memory unit without data transfer [arXiv:2403.20051].
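Chua's definition of memristance as the slope of the flux-charge curve, M(q) = dΦm/dq, makes the read-branch/write-branch distinction concrete. The sketch below uses a hypothetical flux-charge characteristic (invented for illustration, not measured TiF data): one "read" branch with constant slope, and one "write" branch whose slope changes with charge.

```python
# Reading memristance off a flux-charge curve as M(q) = dPhi/dq.
# Both curves are hypothetical illustrations, not measured device data.

def phi_read(q):
    """'Read' branch: constant slope, i.e. a fixed memristance of 100 ohms."""
    return 100.0 * q

def phi_write(q):
    """'Write' branch: slope 100 + 80*q ohms, i.e. memristance changes with q."""
    return 100.0 * q + 40.0 * q**2

def memristance(phi, q, dq=1e-6):
    """Central-difference estimate of M(q) = dPhi/dq."""
    return (phi(q + dq) - phi(q - dq)) / (2 * dq)

print(memristance(phi_read, 0.1))   # constant, independent of q
print(memristance(phi_write, 0.1))  # grows with q: a reconfigurable state
```

On the constant-slope branch the memristance is the same at every charge (a non-destructive read), while on the curved branch the accumulated charge reconfigures the memristance (a write).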
Potential applications.
Memristors remain a laboratory curiosity, as yet made in insufficient numbers to gain any commercial applications.
A potential application of memristors is in analog memories for superconducting quantum computers.
Memristors can potentially be fashioned into non-volatile solid-state memory, which could allow greater data density than hard drives with access times similar to DRAM, replacing both components. HP prototyped a crossbar latch memory that can fit 100 gigabits in a square centimeter, and proposed a scalable 3D design (consisting of up to 1000 layers or 1 petabit per cm3). In May 2008 HP reported that its device currently reaches about one-tenth the speed of DRAM. The devices' resistance would be read with alternating current so that the stored value would not be affected. In May 2012, it was reported that the access time had been improved to 90 nanoseconds, nearly one hundred times faster than contemporaneous flash memory, while consuming just one percent of the energy of flash memory.
Memristors have applications in programmable logic, signal processing, super-resolution imaging, physical neural networks, control systems, reconfigurable computing, in-memory computing, brain–computer interfaces and RFID. Memristive devices are potentially usable for stateful logic implication, allowing a replacement for CMOS-based logic computation. Several early works have been reported in this direction.
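Stateful implication (IMPLY) logic can be illustrated with an idealized two-memristor circuit sharing a load resistor: logic 1 is the low-resistance state, and one programming pulse leaves q holding p IMPLY q. The resistance and voltage values below are assumptions, and the single-node analysis is a deliberate simplification of a real IMPLY gate:

```python
# Idealized stateful IMPLY gate: memristors P and Q share one terminal tied
# to ground through a load resistor; P is pulsed at V_COND, Q at V_SET.
# Logic 1 = low resistance. All component values are illustrative assumptions.

R_ON, R_OFF, R_G = 1e3, 1e6, 1e4     # assumed device states and load resistor
V_SET, V_COND, V_TH = 2.0, 1.0, 1.5  # assumed programming/threshold voltages

def imply(p, q):
    """Return the new state of q after one IMPLY pulse (p is unchanged)."""
    r_p = R_ON if p else R_OFF
    r_q = R_ON if q else R_OFF
    # Nodal analysis of the shared terminal: if p is 1 (low R), it pulls the
    # node up, shrinking the voltage across Q so Q cannot switch on.
    v_node = (V_COND / r_p + V_SET / r_q) / (1 / r_p + 1 / r_q + 1 / R_G)
    # Q switches on only if the voltage across it exceeds its set threshold.
    return 1 if q or (V_SET - v_node) > V_TH else 0

for p in (0, 1):
    for q in (0, 1):
        print(p, q, imply(p, q))     # material-implication truth table
```

Because the result is stored as Q's resistance state, the computation and its memory coincide — the property that makes stateful logic a candidate CMOS replacement.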
In 2009, a simple electronic circuit consisting of an LC network and a memristor was used to model experiments on adaptive behavior of unicellular organisms. It was shown that, subjected to a train of periodic pulses, the circuit learns and anticipates the next pulse, similar to the behavior of the slime mold "Physarum polycephalum", where the viscosity of channels in the cytoplasm responds to periodic environmental changes. Applications of such circuits may include, e.g., pattern recognition. The DARPA SyNAPSE project funded HP Labs, in collaboration with the Boston University Neuromorphics Lab, to develop neuromorphic architectures that may be based on memristive systems. In 2010, Versace and Chandler described the MoNETA (Modular Neural Exploring Traveling Agent) model. MoNETA is the first large-scale neural network model to implement whole-brain circuits to power a virtual and robotic agent using memristive hardware. Application of the memristor crossbar structure in the construction of an analog soft computing system was demonstrated by Merrikh-Bayat and Shouraki. In 2011, they showed how memristor crossbars can be combined with fuzzy logic to create an analog memristive neuro-fuzzy computing system with fuzzy input and output terminals. Learning is based on the creation of fuzzy relations inspired by the Hebbian learning rule.
In 2013, Leon Chua published a tutorial underlining the broad span of complex phenomena and applications that memristors encompass, and how they can be used as non-volatile analog memories and can mimic classic habituation and learning phenomena.
Derivative devices.
Memistor and memtransistor.
The memistor and memtransistor are transistor-based devices which include memristor function.
Memcapacitors and meminductors.
In 2009, Di Ventra, Pershin, and Chua extended the notion of memristive systems to capacitive and inductive elements in the form of memcapacitors and meminductors, whose properties depend on the state and history of the system, further extended in 2013 by Di Ventra and Pershin.
Memfractance and memfractor, 2nd- and 3rd-order memristor, memcapacitor and meminductor.
In September 2014, Mohamed-Salah Abdelouahab, Rene Lozi, and Leon Chua published a general theory of 1st-, 2nd-, 3rd-, and nth-order memristive elements using fractional derivatives.
History.
Precursors.
Sir Humphry Davy is said by some to have performed the first experiments which can be explained by memristor effects as long ago as 1808. However, the first device of a related nature to be constructed was the memistor (i.e. memory resistor), a term coined in 1960 by Bernard Widrow to describe a circuit element of an early artificial neural network called ADALINE. A few years later, in 1968, Argall published an article showing the resistance switching effects of TiO2, which was later claimed by researchers from Hewlett Packard to be evidence of a memristor.
Theoretical description.
Leon Chua postulated his new two-terminal circuit element in 1971. It was characterized by a relationship between charge and flux linkage as a fourth fundamental circuit element. Five years later he and his student Sung Mo Kang generalized the theory of memristors and memristive systems including a property of zero crossing in the Lissajous curve characterizing current vs. voltage behavior.
Twenty-first century.
On May 1, 2008, Strukov, Snider, Stewart, and Williams published an article in "Nature" identifying a link between the two-terminal resistance switching behavior found in nanoscale systems and memristors.
On 23 January 2009, Di Ventra, Pershin, and Chua extended the notion of memristive systems to capacitive and inductive elements, namely capacitors and inductors, whose properties depend on the state and history of the system.
In July 2014, the MeMOSat/LabOSat group (composed of researchers from Universidad Nacional de General San Martín (Argentina), INTI, CNEA, and CONICET) put memory devices into low Earth orbit. Since then, seven missions with different devices have been performing experiments in low orbits, onboard Satellogic's Ñu-Sat satellites.
On 7 July 2015, Knowm Inc announced Self Directed Channel (SDC) memristors commercially.
These devices remain available in small numbers.
On 13 July 2018, MemSat (Memristor Satellite) was launched to fly a memristor evaluation payload.
In 2021, Jennifer Rupp and Martin Bazant of MIT started a "Lithionics" research programme to investigate applications of lithium beyond their use in battery electrodes, including lithium oxide-based memristors in neuromorphic computing.
In May 2023, TECHiFAB GmbH announced TiF memristors commercially. [arXiv: 2403.20051, arXiv: 2402.10358] These TiF memristors remain available in small and medium numbers.
In the September 2023 issue of "Science Magazine", Chinese scientists Wenbin Zhang "et al." described the development and testing of a memristor-based integrated circuit, designed to dramatically increase the speed and efficiency of Machine Learning and Artificial Intelligence tasks, optimized for Edge Computing applications.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "f(\\mathrm \\Phi_\\mathrm m(t),q(t))=0"
},
{
"math_id": 1,
"text": "M(q)=\\frac{\\mathrm d\\Phi_{\\rm m}}{\\mathrm dq} "
},
{
"math_id": 2,
"text": "M(q(t)) = \\cfrac{\\mathrm d\\Phi_{\\rm m}/\\mathrm dt}{\\mathrm dq/\\mathrm dt}=\\frac{V(t)}{I(t)}"
},
{
"math_id": 3,
"text": "V(t) =\\ M(q(t)) I(t)"
},
{
"math_id": 4,
"text": "W(\\phi(t))"
},
{
"math_id": 5,
"text": "i(t)=W(\\phi(t))v(t)"
},
{
"math_id": 6,
"text": "P(t) =\\ I(t)V(t) =\\ I^2(t) M(q(t))"
},
{
"math_id": 7,
"text": "\\begin{align}\n i_M(v) &= \\epsilon\\cos(\\phi_0)v \\\\\n&=W(\\phi_0)v\n\\end{align}"
},
{
"math_id": 8,
"text": "\\epsilon"
},
{
"math_id": 9,
"text": "v"
},
{
"math_id": 10,
"text": "i_M"
},
{
"math_id": 11,
"text": "n_e"
},
{
"math_id": 12,
"text": "\\begin{align}\nv_M&=R(n_e)i_M\\\\\n\\frac{dn_e}{dt}&=\\beta n_e+\\alpha R(n_e)i^2_M\n\\end{align}"
},
{
"math_id": 13,
"text": "v_M"
},
{
"math_id": 14,
"text": "R(n_e)=\\frac{F}{n_e}"
},
{
"math_id": 15,
"text": "\\alpha,\\beta"
},
{
"math_id": 16,
"text": "F"
},
{
"math_id": 17,
"text": "v-i"
},
{
"math_id": 18,
"text": "\\begin{align}\nv&=R_0(T_0)\\exp\\left[\\beta\\left(\\frac{1}{T}-\\frac{1}{T_0}\\right)\\right]i \\\\\n&\\equiv R(T)i \\\\\n\\frac{dT}{dt}&=\\frac{1}{C}\\left[-\\delta\\cdot(T-T_0)+R(T)i^2\\right]\n\\end{align}\n"
},
{
"math_id": 19,
"text": "\\beta"
},
{
"math_id": 20,
"text": "T"
},
{
"math_id": 21,
"text": "T_0"
},
{
"math_id": 22,
"text": "R_0(T_0)"
},
{
"math_id": 23,
"text": "T=T_0"
},
{
"math_id": 24,
"text": "C"
},
{
"math_id": 25,
"text": "\\delta"
},
{
"math_id": 26,
"text": "\\begin{align}\n y(t) &= g(\\mathbf{x},u,t) u(t), \\\\\n \\dot{\\mathbf{x}} &= f(\\mathbf{x},u,t),\n\\end{align}"
},
{
"math_id": 27,
"text": "\\dot{\\mathbf{x}} = f(\\mathbf{x}, u(t) + \\xi(t), t),"
},
{
"math_id": 28,
"text": "M(q(t)) = R_\\mathrm{OFF} \\cdot \\left(1-\\frac{\\mu_{v}R_\\mathrm{ON}}{D^2} q(t)\\right)"
},
{
"math_id": 29,
"text": "E_{\\mathrm{switch}}\n=\\ V^2\\int_{T_\\mathrm{off}}^{T_\\mathrm{on}} \\frac{\\mathrm dt}{M(q(t))}\n=\\ V^2\\int_{Q_\\mathrm{off}}^{Q_\\mathrm{on}}\\frac{\\mathrm dq}{I(q)M(q)}\n=\\ V^2\\int_{Q_\\mathrm{off}}^{Q_\\mathrm{on}}\\frac{\\mathrm dq}{V(q)} =\\ V\\Delta Q "
},
{
"math_id": 30,
"text": "\\begin{align}\n y(t) &= g(\\textbf{x},u,t)u(t), \\\\\n \\dot{\\textbf{x}} &= f(\\textbf{x},u,t)\n\\end{align}"
},
{
"math_id": 31,
"text": "R(x)=R_{off} (1-x)+R_{on} x"
},
{
"math_id": 32,
"text": "dx/dt=I/\\beta-\\alpha x"
},
{
"math_id": 33,
"text": " \\frac{d\\vec{x}}{dt} = -\\alpha \\vec{x}+\\frac{1}{\\beta} (I-\\chi \\Omega X)^{-1} \\Omega \\vec S "
},
{
"math_id": 34,
"text": "X"
},
{
"math_id": 35,
"text": "x_i"
},
{
"math_id": 36,
"text": "\\alpha,\\beta,\\chi"
},
{
"math_id": 37,
"text": "\\vec S "
},
{
"math_id": 38,
"text": "\\Omega^2=\\Omega"
},
{
"math_id": 39,
"text": "\\begin{align}\n y(t) &= g_0(\\textbf{x}, u)u(t) + g_1(\\textbf{x}, u){\\operatorname{d}^2u\\over\\operatorname{d}t^2} +\n g_2(\\textbf{x}, u){\\operatorname{d}^4u\\over\\operatorname{d}t^4} + \\ldots +\n g_m(\\textbf{x}, u){\\operatorname{d}^{2m}u\\over\\operatorname{d}t^{2m}}, \\\\\n \\dot{\\textbf{x}} &= f(\\textbf{x}, u)\n\\end{align}"
},
{
"math_id": 40,
"text": "a"
},
{
"math_id": 41,
"text": "\\begin{align}\n y(t) &= g_0(\\textbf{x},u)(u(t)-a), \\\\\n \\dot{\\textbf{x}} &= f(\\textbf{x},u)\n\\end{align}"
}
] |
https://en.wikipedia.org/wiki?curid=14246162
|
14246404
|
Holocytochrome-c synthase
|
The enzyme holocytochrome-"c" synthase (EC 4.4.1.17) catalyzes the chemical reaction
holocytochrome "c" formula_0 apocytochrome "c" + heme
This enzyme belongs to the family of lyases, specifically the class of carbon-sulfur lyases. The systematic name of this enzyme class is holocytochrome-"c" apocytochrome-"c"-lyase (heme-forming). Other names in common use include cytochrome "c" heme-lyase, holocytochrome "c" synthetase, and holocytochrome-"c" apocytochrome-"c"-lyase. This enzyme participates in porphyrin and chlorophyll metabolism.
Cytochrome "c" heme-lyase (CCHL) and cytochrome "c"1 heme-lyase (CC1HL) are mitochondrial enzymes that catalyze the covalent attachment of a heme group to two cysteine residues of cytochromes "c" and "c"1. These two enzymes are functionally and evolutionarily related. There are two conserved regions: the first is located in the central section and the second in the "C"-terminal section. Both patterns contain conserved histidine, tryptophan and acidic residues which could be important for the interaction of the enzymes with the apoproteins and/or the heme group.
The human enzyme, HCCS, processes both cytochromes "c" and "c1".
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14246404
|
14246449
|
L-2-amino-4-chloropent-4-enoate dehydrochlorinase
|
The enzyme L-2-amino-4-chloropent-4-enoate dehydrochlorinase (EC 4.5.1.4) catalyzes the reaction
L-2-amino-4-chloropent-4-enoate + H2O formula_0 2-oxopent-4-enoate + chloride + NH3
This enzyme belongs to the family of lyases, specifically the class of carbon-halide lyases. The systematic name of this enzyme class is L-2-amino-4-chloropent-4-enoate chloride-lyase (adding water; deaminating; 2-oxopent-4-enoate-forming). Other names in common use include L-2-amino-4-chloro-4-pentenoate dehalogenase, and L-2-amino-4-chloropent-4-enoate chloride-lyase (deaminating).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14246449
|
14246468
|
L-3-cyanoalanine synthase
|
The enzyme L-3-cyanoalanine synthase (EC 4.4.1.9) catalyzes the chemical reaction
L-cysteine + hydrogen cyanide formula_0 L-3-cyanoalanine + hydrogen sulfide
This enzyme belongs to the family of lyases, specifically the class of carbon-sulfur lyases. The systematic name of this enzyme class is L-cysteine hydrogen-sulfide-lyase (adding hydrogen cyanide; L-3-cyanoalanine-forming). Other names in common use include β-cyanoalanine synthase, β-cyanoalanine synthetase, β-cyano-L-alanine synthase, and L-cysteine hydrogen-sulfide-lyase (adding HCN). This enzyme participates in cyanoamino acid metabolism.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14246468
|
14246526
|
Leukotriene-C4 synthase
|
The enzyme leukotriene-C4 synthase (EC 4.4.1.20) catalyzes the reaction
leukotriene C4 formula_0 leukotriene A4 + glutathione
This enzyme belongs to the family of lyases, specifically the class of carbon-sulfur lyases. The systematic name of this enzyme class is leukotriene-C4 glutathione-lyase (leukotriene-A4-forming). Other names in common use include leukotriene C4 synthetase, LTC4 synthase, LTC4 synthetase, leukotriene A4:glutathione "S"-leukotrienyltransferase, (7"E",9"E",11"Z",14"Z")-(5S,6R)-5,6-epoxyicosa-7,9,11,14-tetraenoate:glutathione leukotriene-transferase, (epoxide-ring-opening), (7"E",9"E",11"Z",14"Z")-(5"S",6"R")-6-(glutathion-"S"-yl)-5-hydroxyicosa-7,9,11,14-tetraenoate glutathione-lyase (epoxide-forming). This enzyme participates in arachidonic acid metabolism.
Structural studies.
As of late 2007, 3 structures have been solved for this class of enzymes, with PDB accession codes 2PNO, 2UUH, and 2UUI.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14246526
|
14246563
|
Methionine gamma-lyase
|
The enzyme methionine γ-lyase (EC 4.4.1.11, MGL) is in the γ-family of PLP-dependent enzymes. It degrades sulfur-containing amino acids to α-keto acids, ammonia, and thiols:
L-methionine + H2O = methanethiol + NH3 + 2-oxobutanoate (overall reaction)
(1a) L-methionine = methanethiol + 2-aminobut-2-enoate
(1b) 2-aminobut-2-enoate = 2-iminobutanoate (spontaneous)
(1c) 2-iminobutanoate + H2O = 2-oxobutanoate + NH3 (spontaneous)
Because sulfur-containing amino acids play a role in multiple biological processes, the regulation of these amino acids is essential. Additionally, it is crucial to maintain low homocysteine levels for the proper functioning of various pathways and for preventing the toxic effects of the cysteine homologue. Methionine γ-lyase has been found in several bacteria "(Clostridiums porogenes, Pseudomonas ovalis, Pseudomonas putida, Aeromonas sp., Citrobacter intermedius, Brevibacterium linens, Citrobacter freundii, Porphyromonas gingivalis, Treponema denticola)", parasitic protozoa "(Trichomonas vaginalis, Entamoeba histolytica)", and plants "(Arabidopsis thaliana)".
This enzyme belongs to the family of lyases, specifically the class of carbon-sulfur lyases. The systematic name of this enzyme class is L-methionine methanethiol-lyase (deaminating; 2-oxobutanoate-forming). Other names in common use include L-methioninase, methionine lyase, methioninase, methionine dethiomethylase, L-methionine γ-lyase, and L-methionine methanethiol-lyase (deaminating). This enzyme participates in selenoamino acid metabolism. It employs one cofactor, pyridoxal phosphate.
Structure.
The enzyme is made up of 389-441 amino acids and forms four identical subunits. The active molecule is composed of two tightly associated dimers, at whose interface lies the active site. Each of the dimers has a pyridoxal 5’-phosphate (PLP) cofactor. Six amino acids located near the active site are involved in the reaction, namely Tyr59, Arg61, Tyr114, Cys116, Lys240, and Asp241. Unlike the other amino acids, Cys116 is not typically found in PLP γ-family enzymes, which instead have glycine or proline. Although there is no direct contact between Cys116 and either MGL or the methionine substrate, studies show that the amino acid is involved in retaining substrate specificity.
Reaction mechanism.
In enzymology, a methionine gamma-lyase (EC 4.4.1.11) is an enzyme that catalyzes the chemical reaction
L-methionine + H2O formula_0 methanethiol + NH3 + 2-oxobutanoate
Thus, the two substrates of this enzyme are L-methionine and H2O, whereas its 3 products are methanethiol, NH3, and 2-oxobutanoate.
MGL also catalyzes the α,β-elimination of L-cysteine, the degradation of O-substituted serine or homoserine, β- or γ-replacement reactions, as well as the deamination and γ-addition of L-vinylglycine. The reaction mechanism initially consists of the amino group of the substrate connected by a Schiff-base linkage to PLP. When a lysine residue replaces the amino group, an external aldimine is formed and hydrogens from the substrate are shifted to PLP. A neighboring tyrosine amino acid acts as an acid catalyst and attacks the substrate, consequently eliminating the thiol group from the substrate. Lastly, the α-keto acid and ammonia are released from PLP.
Function.
Because MGL has differing substrate specificity among organisms, the enzyme also has varying physiological roles among organisms. In anaerobic bacteria and parasitic protozoa, MGL generates 2-oxobutyrate from methionine. 2-oxobutyrate is ultimately decomposed by acetate-CoA ligase and produces ATP, thus contributing to ATP metabolism. MGL also plays a role in the pathogenicity of periodontal bacteria such as "P. gingivalis". A study found a correlation between the presence of MGL and an increase in mouse survival after subcutaneous injection of the bacteria. In "B. linens", a cheese-ripening bacterium, MGL activity is tightly linked with carbohydrate metabolism.
In plants, MGL mRNA is found in dry seeds although the protein itself is not. However, the enzyme is highly expressed in wet seeds, suggesting that MGL is a vital part of early germination. MGL may also be involved in the formation of volatile sulfur compounds such as methanethiol on damaged plant leaves to defend against insects. However, it is undetermined whether MGL is present in guava, which was first discovered to have this protection mechanism, and whether other plants use a similar technique.
Isozymes of MGL are only found in the parasitic protists "E. histolytica" and "T. vaginalis". The isozymes differ in their ability to efficiently degrade methionine, homocysteine, and cysteine. "E. histolytica" MGL is derived from archaea MGL whereas "T. vaginalis" MGL share more similarities with bacterial MGL. Therefore, the inclusion of MGL in the genome of these two species occurred independently.
Drug development.
Trifluoromethionine (TFM) is a fluorinated methionine prodrug, which only presents its toxicity after degradation by MGL. Studies show that TFM is toxic to and slows the growth of anaerobic microorganisms "(Mycobacterium smegmatis, Mycobacterium phlei, Candida lipolytica)", periodontal bacteria "(P. gingivalis, F. nucleatum)", and parasitic protists "(E. histolytica, T. vaginalis)". Studies have shown that TFM is also efficacious in vivo. Furthermore, TFM has limited toxicity to mammalian cells, which do not have MGL. Therefore, TFM only exhibits toxic effects on pathogens that contain MGL.
Cancer therapy.
Some tumors, such as glioblastomas, medulloblastomas, and neuroblastomas, are much more sensitive to methionine starvation than normal tissues. Methionine depletion therefore arises as a relevant therapeutic approach to treat cancer. For that reason, MGL has been studied as a way to decrease methionine levels in the blood serum, slowing tumor growth and killing malignant cells by starvation.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14246563
|
14246585
|
Methylaspartate ammonia-lyase
|
The enzyme methylaspartate ammonia-lyase (EC 4.3.1.2) catalyzes the chemical reaction
L-"threo"-3-methylaspartate formula_0 mesaconate + NH3
This enzyme belongs to the family of lyases, specifically ammonia lyases, which cleave carbon-nitrogen bonds. The systematic name of this enzyme class is L-"threo"-3-methylaspartate ammonia-lyase (mesaconate-forming). Other names in common use include β-methylaspartase, 3-methylaspartase, and L-"threo"-3-methylaspartate ammonia-lyase. This enzyme participates in C5-branched dibasic acid metabolism and nitrogen metabolism. It employs one cofactor, cobamide.
Structural studies.
Several structures of this enzyme have been deposited in the Protein Data Bank (linked in the infobox) which show it possesses a TIM barrel domain.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14246585
|
14246600
|
Ornithine cyclodeaminase
|
The enzyme ornithine cyclodeaminase (EC 4.3.1.12) catalyzes the chemical reaction
L-ornithine formula_0 L-proline + NH4+
This enzyme belongs to the family of lyases, specifically ammonia lyases, which cleave carbon-nitrogen bonds. The systematic name of this enzyme class is ornithine ammonia-lyase (cyclizing; L-proline-forming). Other names in common use include ornithine cyclase, ornithine cyclase (deaminating), and L-ornithine ammonia-lyase (cyclizing). This enzyme participates in arginine and proline biosynthesis. It employs one cofactor, NAD+.
Structural studies.
As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 1U7H and 1X7D.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14246600
|
14246654
|
Phosphosulfolactate synthase
|
The enzyme phosphosulfolactate synthase (EC 4.4.1.19) catalyzes the reaction
(2"R")-2-"O"-phospho-3-sulfolactate formula_0 phospho"enol"pyruvate + sulfite
This enzyme belongs to the family of lyases, specifically the class of carbon-sulfur lyases. The systematic name of this enzyme class is (2"R")-2-"O"-phospho-3-sulfolactate hydrogen-sulfite-lyase (phospho"enol"pyruvate-forming). Other names in common use include (2"R")-phospho-3-sulfolactate synthase, and (2"R")-"O"-phospho-3-sulfolactate sulfo-lyase.
Structural studies.
As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 1U83.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14246654
|
14246674
|
Purine imidazole-ring cyclase
|
The enzyme purine imidazole-ring cyclase (EC 4.3.2.4) catalyzes the chemical reaction
DNA 4,6-diamino-5-formamidopyrimidine formula_0 DNA adenine + H2O
This enzyme belongs to the family of lyases, specifically amidine lyases. The systematic name of this enzyme class is DNA-4,6-diamino-5-formamidopyrimidine C8-N9-lyase (cyclizing DNA-adenine-forming). Other names in common use include DNA-4,6-diamino-5-formamidopyrimidine 8-"C",9-"N"-lyase (cyclizing) and DNA-4,6-diamino-5-formamidopyrimidine 8-"C",9-"N"-lyase (cyclizing, DNA-adenine-forming).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14246674
|
14246697
|
S-alkylcysteine lyase
|
In enzymology, an S-alkylcysteine lyase (EC 4.4.1.6) is an enzyme that catalyzes the chemical reaction
an S-alkyl-L-cysteine + water formula_0 an alkyl thiol + ammonia + pyruvate
Thus, the two substrates of this enzyme are S-alkyl-L-cysteine and water, whereas its three products are alkyl thiol, ammonia, and pyruvate.
This enzyme belongs to the family of lyases, specifically the class of carbon-sulfur lyases. The systematic name of this enzyme class is S-alkyl-L-cysteine alkyl-thiol-lyase (deaminating; pyruvate-forming). Other names in common use include S-alkylcysteinase, alkylcysteine lyase, S-alkyl-L-cysteine sulfoxide lyase, S-alkyl-L-cysteine lyase, S-alkyl-L-cysteinase, alkyl cysteine lyase, and S-alkyl-L-cysteine alkylthiol-lyase (deaminating). It employs one cofactor, pyridoxal phosphate.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14246697
|
14246724
|
S-carboxymethylcysteine synthase
|
The enzyme "S"-carboxymethylcysteine synthase (EC 4.5.1.5) catalyzes the reaction
3-chloro-L-alanine + thioglycolate formula_0 "S"-carboxymethyl-L-cysteine + chloride
This enzyme belongs to the family of lyases, specifically the class of carbon-halide lyases. The systematic name of this enzyme class is 3-chloro-L-alanine chloride-lyase (adding thioglycolate; "S"-carboxymethyl-L-cysteine-forming). This enzyme is also called "S"-carboxymethyl-L-cysteine synthase. It employs one cofactor, pyridoxal phosphate.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14246724
|
14246758
|
Selenocysteine lyase
|
The enzyme selenocysteine lyase (SCL) (EC 4.4.1.16) catalyzes the chemical reaction
L-selenocysteine + reduced acceptor formula_0 selenide + L-alanine + acceptor
Nomenclature.
This enzyme belongs to the family of lyases, specifically the class of carbon-sulfur lyases. The systematic name of this enzyme class is L-selenocysteine selenide-lyase (L-alanine-forming). Other names in common use include selenocysteine reductase and selenocysteine β-lyase.
Function.
This enzyme participates in selenoamino acid metabolism by recycling Se from selenocysteine during the degradation of selenoproteins, providing an alternate source of Se for selenocysteine biosynthesis.
Structure and mechanism.
Mammalian SCL forms a homodimer while bacterial SCL is monomeric. In mammals, the highest SCL activity is found in the liver and kidney.
While selenocysteine lyases generally catalyze the removal of either selenium or sulfur from selenocysteine or cysteine, respectively, human selenocysteine lyase is specific for selenocysteine. Asp146 has been identified as the key residue that preserves this specificity in human SCL.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14246758
|
14246773
|
Serine-sulfate ammonia-lyase
|
The enzyme Serine-sulfate ammonia-lyase (EC 4.3.1.10) catalyzes the chemical reaction
L-serine "O"-sulfate + H2O formula_0 pyruvate + NH3 + sulfate
This enzyme belongs to the family of lyases, specifically ammonia lyases, which cleave carbon-nitrogen bonds. The systematic name of this enzyme class is L-serine-"O"-sulfate ammonia-lyase (pyruvate-forming). It is also called (L-SOS)lyase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14246773
|
14246787
|
S-(hydroxymethyl)glutathione synthase
|
The enzyme "S"-(hydroxymethyl)glutathione synthase (EC 4.4.1.22) catalyzes the reaction
"S"-(hydroxymethyl)glutathione formula_0 glutathione + formaldehyde
This enzyme belongs to the family of lyases, specifically the class of carbon-sulfur lyases. The systematic name of this enzyme class is "S"-(hydroxymethyl)glutathione formaldehyde-lyase (glutathione-forming). Other names in common use include glutathione-dependent formaldehyde-activating enzyme, Gfa, and "S"-(hydroxymethyl)glutathione formaldehyde-lyase. This enzyme participates in methane metabolism.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14246787
|
14246843
|
Sulfolactate sulfo-lyase
|
The enzyme (2"R")-sulfolactate sulfo-lyase (EC 4.4.1.24) catalyzes the reaction
(2"R")-3-sulfolactate formula_0 pyruvate + hydrogensulfite
This enzyme belongs to the family of lyases, specifically the class of carbon-sulfur lyases. The systematic name of this enzyme class is (2"R")-3-sulfolactate hydrogensulfite-lyase (pyruvate-forming). Other names in common use include Suy, SuyAB, and 3-sulfolactate bisulfite-lyase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14246843
|
14246857
|
Threo-3-hydroxyaspartate ammonia-lyase
|
The enzyme "threo"-3-hydroxyaspartate ammonia-lyase (EC 4.3.1.16) is an enzyme that catalyzes the chemical reaction
"threo"-3-hydroxy-L-aspartate formula_0 oxaloacetate + NH3
Nomenclature.
This enzyme belongs to the family of lyases, specifically ammonia lyases, which cleave carbon-nitrogen bonds. The systematic name of this enzyme class is "threo"-3-hydroxy-L-aspartate ammonia-lyase (oxaloacetate-forming). Other names in common use include "threo"-3-hydroxyaspartate dehydratase, L-"threo"-3-hydroxyaspartate dehydratase, and "threo"-3-hydroxy-L-aspartate ammonia-lyase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14246857
|
14246922
|
Ureidoglycolate lyase
|
Class of enzymes
The enzyme ureidoglycolate lyase (EC 4.3.2.3) catalyzes the chemical reaction
("S")-ureidoglycolate formula_0 glyoxylate + urea
This enzyme belongs to the family of lyases, specifically amidine lyases. The systematic name of this enzyme class is ("S")-ureidoglycolate urea-lyase (glyoxylate-forming). Other names in common use include ureidoglycolatase, ureidoglycolase, ureidoglycolate hydrolase, and ("S")-ureidoglycolate urea-lyase. This enzyme participates in purine metabolism.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14246922
|
1424857
|
Følner sequence
|
In mathematics, a Følner sequence for a group is a sequence of sets satisfying a particular condition. If a group has a Følner sequence with respect to its action on itself, the group is amenable. A more general notion of Følner nets can be defined analogously, and is suited for the study of uncountable groups. Følner sequences are named for Erling Følner.
Definition.
Given a group formula_0 that acts on a countable set formula_1, a Følner sequence for the action is a sequence of finite subsets formula_2 of formula_1 which exhaust formula_1 and which "don't move too much" when acted on by any group element. Precisely,
For every formula_3, there exists some formula_4 such that formula_5 for all formula_6, and
formula_7 for all group elements formula_8 in formula_0.
Explanation of the notation used above:
formula_9 is the result of acting on formula_10 on the left by formula_8; it consists of elements of the form formula_11 where formula_12 is an element of formula_10.
formula_14 is the symmetric difference operator, i.e., formula_15 is the set of elements in exactly one of the sets formula_16 and formula_17.
formula_18 is the cardinality of a set formula_16.
Thus, what this definition says is that for any group element formula_8, the proportion of elements of formula_10 that are moved away by formula_8 goes to 0 as formula_4 gets large.
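For the additive group of integers acting on itself by addition, this condition can be checked numerically. The sketch below (in Python, with illustrative names) measures the proportion of a symmetric interval moved off itself by a translation:

```python
def folner_ratio(g, i):
    """Proportion |gF_i symdiff F_i| / |F_i| for F_i = {-i, ..., i}
    under translation by the integer g."""
    F = set(range(-i, i + 1))
    gF = {g + x for x in F}
    return len(gF ^ F) / len(F)

# For a fixed g, the ratio equals 2|g|/(2i+1), which tends to 0 as i
# grows, so these intervals form a Følner sequence for the integers.
ratios = [folner_ratio(3, i) for i in (10, 100, 1000)]
```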
In the setting of a locally compact group acting on a measure space formula_19 there is a more general definition. Instead of being finite, the sets are required to have finite, non-zero measure, and so the Følner requirement will be that
formula_20
analogously to the discrete case. The standard case is that of the group acting on itself by left translation, in which case the measure in question is normally assumed to be the Haar measure.
Examples.
Any finite group formula_0 trivially has a Følner sequence formula_21 for each formula_4.
Consider the group of integers, acting on itself by addition. Let formula_13 consist of the integers between formula_22 and formula_4. Then formula_23 consists of the integers between formula_24 and formula_25. For large formula_4, the symmetric difference has size formula_26, while formula_10 has size formula_27. The resulting ratio is formula_28, which goes to 0, so formula_13 is a Følner sequence.
Proof of amenability.
We have a group formula_0 and a Følner sequence formula_13, and we need to define a measure formula_29 on formula_0, which philosophically speaking says how much of formula_0 any subset formula_16 takes up. The natural definition that uses our Følner sequence would be
formula_30
Of course, this limit doesn't necessarily exist. To overcome this technicality, we take an ultrafilter formula_31 on the natural numbers that contains intervals formula_32. Then we use an ultralimit instead of the regular limit:
formula_33
It turns out ultralimits have all the properties we need. Namely,
formula_29 is a probability measure: formula_34, since the ultralimit coincides with the regular limit when it exists.
formula_29 is finitely additive, because cardinality is additive on disjoint sets and ultralimits are additive.
formula_29 is left-invariant: for any subset formula_16 and group element formula_8,
formula_35
formula_36
by the Følner sequence definition.
|
[
{
"math_id": 0,
"text": "G"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "F_1, F_2, \\dots"
},
{
"math_id": 3,
"text": "x\\in X"
},
{
"math_id": 4,
"text": "i"
},
{
"math_id": 5,
"text": "x \\in F_j"
},
{
"math_id": 6,
"text": " j > i"
},
{
"math_id": 7,
"text": "\\lim_{i\\to\\infty}\\frac{|gF_i\\mathbin\\triangle F_i|}{|F_i|} = 0"
},
{
"math_id": 8,
"text": "g"
},
{
"math_id": 9,
"text": "gF_i\\ "
},
{
"math_id": 10,
"text": "F_i\\ "
},
{
"math_id": 11,
"text": "gf"
},
{
"math_id": 12,
"text": "f"
},
{
"math_id": 13,
"text": "F_i"
},
{
"math_id": 14,
"text": "\\triangle"
},
{
"math_id": 15,
"text": "A\\mathbin\\triangle B"
},
{
"math_id": 16,
"text": "A"
},
{
"math_id": 17,
"text": "B"
},
{
"math_id": 18,
"text": "|A|"
},
{
"math_id": 19,
"text": "(X,\\mu)"
},
{
"math_id": 20,
"text": "\\lim_{i\\to\\infty}\\frac{\\mu(gF_i\\mathbin\\triangle F_i)}{\\mu(F_i)} = 0"
},
{
"math_id": 21,
"text": "F_i=G"
},
{
"math_id": 22,
"text": "-i"
},
{
"math_id": 23,
"text": "gF_i"
},
{
"math_id": 24,
"text": "g-i"
},
{
"math_id": 25,
"text": "g+i"
},
{
"math_id": 26,
"text": "2g"
},
{
"math_id": 27,
"text": "2i+1"
},
{
"math_id": 28,
"text": "2g/(2i+1)"
},
{
"math_id": 29,
"text": "\\mu"
},
{
"math_id": 30,
"text": "\\mu(A)=\\lim_{i\\to\\infty}{|A\\cap F_i|\\over|F_i|}."
},
{
"math_id": 31,
"text": "U"
},
{
"math_id": 32,
"text": "[n,\\infty)"
},
{
"math_id": 33,
"text": "\\mu(A)=U - \\lim{|A\\cap F_i|\\over|F_i|}."
},
{
"math_id": 34,
"text": "\\mu(G)=U - \\lim1=1"
},
{
"math_id": 35,
"text": "\\left|{|gA\\cap F_i|\\over|F_i|}-{|A\\cap F_i|\\over|F_i|}\\right| = \\left|{|A\\cap g^{-1}F_i|\\over|F_i|}-{|A\\cap F_i|\\over|F_i|}\\right|"
},
{
"math_id": 36,
"text": "\\leq{|A\\cap(g^{-1}F_i\\mathbin\\triangle F_i)|\\over|F_i|}\\to0"
}
] |
https://en.wikipedia.org/wiki?curid=1424857
|
142488
|
Harmonic series (mathematics)
|
Divergent sum of all positive unit fractions
In mathematics, the harmonic series is the infinite series formed by summing all positive unit fractions:
formula_0
The first formula_1 terms of the series sum to approximately formula_2, where formula_3 is the natural logarithm and formula_4 is the Euler–Mascheroni constant. Because the logarithm has arbitrarily large values, the harmonic series does not have a finite limit: it is a divergent series. Its divergence was proven in the 14th century by Nicole Oresme using a precursor to the Cauchy condensation test for the convergence of infinite series. It can also be proven to diverge by comparing the sum to an integral, according to the integral test for convergence.
Applications of the harmonic series and its partial sums include Euler's proof that there are infinitely many prime numbers, the analysis of the coupon collector's problem on how many random trials are needed to provide a complete range of responses, the connected components of random graphs, the block-stacking problem on how far over the edge of a table a stack of blocks can be cantilevered, and the average case analysis of the quicksort algorithm.
History.
The name of the harmonic series derives from the concept of overtones or harmonics in music: the wavelengths of the overtones of a vibrating string are formula_5, formula_6, formula_7, etc., of the string's fundamental wavelength. Every term of the harmonic series after the first is the harmonic mean of the neighboring terms, so the terms form a harmonic progression; the phrases "harmonic mean" and "harmonic progression" likewise derive from music.
Beyond music, harmonic sequences have also had a certain popularity with architects. This was so particularly in the Baroque period, when architects used them to establish the proportions of floor plans, of elevations, and to establish harmonic relationships between both interior and exterior architectural details of churches and palaces.
The divergence of the harmonic series was first proven in 1350 by Nicole Oresme. Oresme's work, and the contemporaneous work of Richard Swineshead on a different series, marked the first appearance of infinite series other than the geometric series in mathematics. However, this achievement fell into obscurity. Additional proofs were published in the 17th century by Pietro Mengoli and by Jacob Bernoulli. Bernoulli credited his brother Johann Bernoulli for finding the proof, and it was later included in Johann Bernoulli's collected works.
The partial sums of the harmonic series were named harmonic numbers, and given their usual notation formula_8, in 1968 by Donald Knuth.
Definition and divergence.
The harmonic series is the infinite series
formula_9
in which the terms are all of the positive unit fractions. It is a divergent series: as more terms of the series are included in partial sums of the series, the values of these partial sums grow arbitrarily large, beyond any finite limit. Because it is a divergent series, it should be interpreted as a formal sum, an abstract mathematical expression combining the unit fractions, rather than as something that can be evaluated to a numeric value. There are many different proofs of the divergence of the harmonic series, surveyed in a 2006 paper by S. J. Kifowit and T. A. Stamps.
Two of the best-known are listed below.
Comparison test.
One way to prove divergence is to compare the harmonic series with another divergent series, where each denominator is replaced with the next-largest power of two:
formula_10
Grouping equal terms shows that the second series diverges: any grouping of a convergent series remains convergent, but the grouped series below clearly grows without bound:
formula_11
Because each term of the harmonic series is greater than or equal to the corresponding term of the second series (and the terms are all positive), and since the second series diverges, it follows (by the comparison test) that the harmonic series diverges as well. The same argument proves more strongly that, for every positive integer formula_12,
formula_13
This is the original proof given by Nicole Oresme in around 1350. The Cauchy condensation test is a generalization of this argument.
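Oresme's bound is easy to verify numerically. A short sketch in Python (names illustrative):

```python
def harmonic_partial(n):
    """Sum of the first n terms of the harmonic series."""
    return sum(1 / m for m in range(1, n + 1))

# Oresme's bound: the first 2**k terms already sum to at least 1 + k/2,
# so the partial sums grow beyond any finite limit.
for k in range(1, 15):
    assert harmonic_partial(2 ** k) >= 1 + k / 2
```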
Integral test.
It is possible to prove that the harmonic series diverges by comparing its sum with an improper integral. Specifically, consider the arrangement of rectangles shown in the figure to the right. Each rectangle is 1 unit wide and formula_14 units high, so if the harmonic series converged then the total area of the rectangles would be the sum of the harmonic series. The curve formula_15 stays entirely below the upper boundary of the rectangles, so the area under the curve (in the range of formula_16 from one to infinity that is covered by rectangles) would be less than the area of the union of the rectangles. However, the area under the curve is given by a divergent improper integral,
formula_17
Because this integral does not converge, the sum cannot converge either.
In the figure to the right, shifting each rectangle to the left by 1 unit would produce a sequence of rectangles whose boundary lies below the curve rather than above it.
This shows that the partial sums of the harmonic series differ from the integral by an amount that is bounded above and below by the unit area of the first rectangle:
formula_18
Generalizing this argument, any infinite sum of values of a monotone decreasing positive function of formula_1 (like the harmonic series) has partial sums that are within a bounded distance of the values of the corresponding integrals. Therefore, the sum converges if and only if the integral over the same range of the same function converges. When this equivalence is used to check the convergence of a sum by replacing it with an easier integral, it is known as the integral test for convergence.
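The bounded distance between the partial sums and the integral can be checked directly; the following sketch traps each partial sum between two logarithms:

```python
import math

def harmonic_partial(n):
    """Sum of the first n terms of the harmonic series."""
    return sum(1 / m for m in range(1, n + 1))

# The integral comparison gives ln(n + 1) <= H_n <= 1 + ln(n):
# the lower bound from rectangles above the curve, the upper bound
# from the shifted rectangles below it.
for n in (1, 10, 100, 10_000):
    h = harmonic_partial(n)
    assert math.log(n + 1) <= h <= 1 + math.log(n)
```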
Partial sums.
Adding the first formula_1 terms of the harmonic series produces a partial sum, called a harmonic number and denoted formula_8:
formula_19
Growth rate.
These numbers grow very slowly, with logarithmic growth, as can be seen from the integral test. More precisely, by the Euler–Maclaurin formula,
formula_20
where formula_21 is the Euler–Mascheroni constant and formula_22, which approaches 0 as formula_1 goes to infinity.
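A numerical sketch of this growth rate, using the standard expansion in which the leading correction to formula_2 is one over twice the number of terms:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler–Mascheroni constant, double precision

def harmonic_partial(n):
    return sum(1 / m for m in range(1, n + 1))

# By the Euler–Maclaurin formula, H_n - (ln n + gamma) shrinks like
# 1/(2n); the next correction is of order 1/n**2.
for n in (10, 100, 1000):
    err = harmonic_partial(n) - (math.log(n) + EULER_GAMMA)
    assert abs(err - 1 / (2 * n)) < 1 / (8 * n * n)
```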
Divisibility.
No harmonic numbers are integers, except for formula_23. One way to prove that formula_8 is not an integer is to consider the highest power of two formula_24 in the range from 1 to formula_1. If formula_25 is the least common multiple of the numbers from 1 to formula_1, then
formula_26 can be rewritten as a sum of fractions with equal denominators formula_27
in which only one of the numerators, formula_28, is odd and the rest are even, and (when formula_29) formula_25 is itself even. Therefore, the result is a fraction with an odd numerator and an even denominator, which cannot be an integer. More strongly, any sequence of consecutive integers has a unique member divisible by a greater power of two than all the other sequence members, from which it follows by the same argument that no two harmonic numbers differ by an integer.
Another proof that the harmonic numbers are not integers observes that the denominator of formula_8 must be divisible by
all prime numbers greater than formula_30, and uses Bertrand's postulate to prove that this set of primes is non-empty. The same argument implies more strongly that, except for formula_23, formula_31, and formula_32, no harmonic number can have a terminating decimal representation. It has been conjectured that every prime number divides the numerators of only a finite subset of the harmonic numbers, but this remains unproven.
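The odd-numerator, even-denominator pattern behind this argument can be verified with exact rational arithmetic; a small sketch using Python's standard `fractions` module:

```python
from fractions import Fraction

def harmonic_exact(n):
    """The n-th harmonic number H_n as an exact rational in lowest terms."""
    return sum((Fraction(1, m) for m in range(1, n + 1)), Fraction(0))

# Except for H_1 = 1, every harmonic number in lowest terms has an odd
# numerator and an even denominator, so it cannot be an integer.
for n in range(2, 60):
    h = harmonic_exact(n)
    assert h.denominator % 2 == 0 and h.numerator % 2 == 1
```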
Interpolation.
The digamma function is defined as the logarithmic derivative of the gamma function
formula_33
Just as the gamma function provides a continuous interpolation of the factorials, the digamma function provides a continuous interpolation of the harmonic numbers, in the sense that formula_34.
This equation can be used to extend the definition to harmonic numbers with rational indices.
Applications.
Many well-known mathematical problems have solutions involving the harmonic series and its partial sums.
Crossing a desert.
The jeep problem or desert-crossing problem is included in a 9th-century problem collection by Alcuin, "Propositiones ad Acuendos Juvenes" (formulated in terms of camels rather than jeeps), but with an incorrect solution. The problem asks how far into the desert a jeep can travel and return, starting from a base with formula_1 loads of fuel, by carrying some of the fuel into the desert and leaving it in depots. The optimal solution involves placing depots spaced at distances formula_36 from the starting point and each other, where formula_37 is the range of distance that the jeep can travel with a single load of fuel. On each trip out and back from the base, the jeep places one more depot, refueling at the other depots along the way, and placing as much fuel as it can in the newly placed depot while still leaving enough for itself to return to the previous depots and the base. Therefore, the total distance reached on the formula_1th trip is formula_38
where formula_8 is the formula_1th harmonic number. The divergence of the harmonic series implies that crossings of any length are possible with enough fuel.
For instance, for Alcuin's version of the problem, formula_39: a camel can carry 30 measures of grain and can travel one leuca while eating a single measure, where a leuca is a unit of distance. The problem has formula_35: there are 90 measures of grain, enough to supply three trips. For the standard formulation of the desert-crossing problem, it would be possible for the camel to travel formula_40 leucas and return, by placing a grain storage depot 5 leucas from the base on the first trip and 12.5 leucas from the base on the second trip. However, Alcuin instead asks a slightly different question, how much grain can be transported a distance of 30 leucas without a final return trip, and either strands some camels in the desert or fails to account for the amount of grain consumed by a camel on its return trips.
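A sketch of these round-trip distances, assuming the standard jeep-problem result that the farthest point reachable with return, using n loads, is half the range times the n-th harmonic number:

```python
def jeep_reach(n, r):
    """Farthest point reachable, with return to base, using n loads of
    fuel and range r per load: r * H_n / 2 (standard round-trip result)."""
    return r / 2 * sum(1 / k for k in range(1, n + 1))

# Alcuin's camel: 90 measures of grain = 3 loads, range 30 leucas per
# load. Depots go at 5 and 12.5 leucas from the base.
reach = jeep_reach(3, 30)
```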
Stacking blocks.
In the block-stacking problem, one must place a pile of formula_1 identical rectangular blocks, one per layer, so that they hang as far as possible over the edge of a table without falling. The top block can be placed with formula_5 of its length extending beyond the next lower block. If it is placed in this way, the next block down needs to be placed with at most formula_41 of its length extending beyond the next lower block, so that the center of mass of the top two blocks is supported and they do not topple. The third block needs to be placed with at most formula_42 of its length extending beyond the next lower block, and so on. In this way, it is possible to place the formula_1 blocks in such a way that they extend formula_43 lengths beyond the table, where formula_8 is the formula_1th harmonic number. The divergence of the harmonic series implies that there is no limit on how far beyond the table the block stack can extend. For stacks with one block per layer, no better solution is possible, but significantly more overhang can be achieved using stacks with more than one block per layer.
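The half-harmonic overhang can be computed directly; a small sketch (names illustrative):

```python
def max_overhang(n):
    """Maximum overhang, in block lengths, for n blocks stacked one per
    layer: half of the n-th harmonic number."""
    return sum(1 / (2 * k) for k in range(1, n + 1))

# Four blocks suffice to overhang more than one full block length, and
# because the harmonic series diverges, any overhang is eventually
# achievable; two block lengths first require 31 blocks.
blocks_for_two_lengths = next(n for n in range(1, 100) if max_overhang(n) > 2)
```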
Counting primes and divisors.
In 1737, Leonhard Euler observed that, as a formal sum, the harmonic series is equal to an Euler product in which each term comes from a prime number: formula_44 where formula_45 denotes the set of prime numbers. The left equality comes from applying the distributive law to the product and recognizing the resulting terms as the prime factorizations of the terms in the harmonic series, and the right equality uses the standard formula for a geometric series. The product is divergent, just like the sum, but if it converged one could take logarithms and obtain formula_46
Here, each logarithm is replaced by its Taylor series, and the constant formula_47 on the right is the evaluation of the convergent series of terms with exponent greater than one. It follows from these manipulations that the sum of reciprocals of primes, on the right hand of this equality, must diverge, for if it converged these steps could be reversed to show that the harmonic series also converges, which it does not. An immediate corollary is that there are infinitely many prime numbers, because a finite sum cannot diverge. Although Euler's work is not considered adequately rigorous by the standards of modern mathematics, it can be made rigorous by taking more care with limits and error bounds. Euler's conclusion that the partial sums of reciprocals of primes grow as a double logarithm of the number of terms has been confirmed by later mathematicians as one of Mertens' theorems, and can be seen as a precursor to the prime number theorem.
Another problem in number theory closely related to the harmonic series concerns the average number of divisors of the numbers in a range from 1 to formula_1, formalized as the average order of the divisor function,
formula_48
The operation of rounding each term in the harmonic series to the next smaller integer multiple of formula_14 causes this average to differ from the harmonic numbers by a small constant, and Peter Gustav Lejeune Dirichlet showed more precisely that the average number of divisors is formula_49 (expressed in big O notation). Bounding the final error term more precisely remains an open problem, known as Dirichlet's divisor problem.
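Dirichlet's estimate can be checked numerically. The sketch below counts divisors in bulk by counting, for each possible divisor, its multiples in the range:

```python
import math

def average_divisors(n):
    """Average number of divisors of the integers 1..n: the sum of
    floor(n/d) over d counts each k <= n once per divisor of k."""
    return sum(n // d for d in range(1, n + 1)) / n

# Dirichlet: the average is close to ln n + 2*gamma - 1, with an error
# term that shrinks as n grows.
EULER_GAMMA = 0.5772156649015329
n = 10_000
estimate = math.log(n) + 2 * EULER_GAMMA - 1
```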
Collecting coupons.
Several common games or recreations involve repeating a random selection from a set of items until all possible choices have been selected; these include the collection of trading cards and the completion of parkrun bingo, in which the goal is to obtain all 60 possible numbers of seconds in the times from a sequence of running events. More serious applications of this problem include sampling all variations of a manufactured product for its quality control, and the connectivity of random graphs. In situations of this form, once there are formula_12 items remaining to be collected out of a total of formula_1 equally-likely items, the probability of collecting a new item in a single random choice is formula_50 and the expected number of random choices needed until a new item is collected is formula_51. Summing over all values of formula_12 from formula_1 down to 1 shows that the total expected number of random choices needed to collect all items is formula_52, where formula_8 is the formula_1th harmonic number.
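The expected total of roughly n times the n-th harmonic number agrees well with simulation; a sketch (names illustrative):

```python
import random

def expected_draws(n):
    """Expected number of draws to collect all n equally likely items:
    n * H_n, from summing n/k over the k items still missing."""
    return n * sum(1 / k for k in range(1, n + 1))

def draws_to_collect(n, rng):
    """Simulate one collection run and count the draws used."""
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        draws += 1
    return draws

rng = random.Random(0)  # fixed seed for reproducibility
average = sum(draws_to_collect(50, rng) for _ in range(2000)) / 2000
```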
Analyzing algorithms.
The quicksort algorithm for sorting a set of items can be analyzed using the harmonic numbers. The algorithm operates by choosing one item as a "pivot", comparing it to all the others, and recursively sorting the two subsets of items whose comparison places them before the pivot and after the pivot. In either its average-case complexity (with the assumption that all input permutations are equally likely) or in its expected time analysis of worst-case inputs with a random choice of pivot, all of the items are equally likely to be chosen as the pivot. For such cases, one can compute the probability that two items are ever compared with each other, throughout the recursion, as a function of the number of other items that separate them in the final sorted order. If items formula_16 and formula_53 are separated by formula_12 other items, then the algorithm will make a comparison between formula_16 and formula_53 only when, as the recursion progresses, it picks formula_16 or formula_53 as a pivot before picking any of the other formula_12 items between them. Because each of these formula_54 items is equally likely to be chosen first, this happens with probability formula_55. The total expected number of comparisons, which controls the total running time of the algorithm, can then be calculated by summing these probabilities over all pairs, giving formula_56
The divergence of the harmonic series corresponds in this application to the fact that, in the comparison model of sorting used for quicksort, it is not possible to sort in linear time.
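Summing these pairwise probabilities reproduces the familiar closed form in terms of harmonic numbers; a sketch checking the two against each other:

```python
def expected_comparisons(n):
    """Expected comparisons of randomized quicksort on n distinct items,
    summing over pairs the probability 2/(k+2) that they are compared,
    where k is the number of items between them in sorted order."""
    return sum(2 / (j - i + 1) for i in range(n) for j in range(i + 1, n))

def closed_form(n):
    """The same quantity via harmonic numbers: 2*(n+1)*H_n - 4*n."""
    h = sum(1 / m for m in range(1, n + 1))
    return 2 * (n + 1) * h - 4 * n
```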
Related series.
Alternating harmonic series.
The series
formula_57
is known as the alternating harmonic series. It is conditionally convergent by the alternating series test, but not absolutely convergent. Its sum is the natural logarithm of 2.
Explicitly, the asymptotic expansion of the series is
formula_58
Using alternating signs with only odd unit fractions produces a related series, the Leibniz formula for π
formula_59
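Both alternating series converge slowly, with the error after n terms bounded by the first omitted term; a numerical sketch:

```python
import math

def alternating_harmonic(n):
    """Partial sum 1 - 1/2 + 1/3 - ... with n terms; tends to ln 2."""
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

def leibniz_pi_over_4(n):
    """Partial sum 1 - 1/3 + 1/5 - ... with n terms; tends to pi/4."""
    return sum((-1) ** k / (2 * k + 1) for k in range(n))

# Alternating series test: the error is at most the first omitted term.
assert abs(alternating_harmonic(1000) - math.log(2)) < 1 / 1001
assert abs(leibniz_pi_over_4(1000) - math.pi / 4) < 1 / 2001
```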
Riemann zeta function.
The Riemann zeta function is defined for real formula_60 by the convergent series
formula_61
which for formula_62 would be the harmonic series. It can be extended by analytic continuation to a holomorphic function on all complex numbers except formula_62, where the extended function has a simple pole. Other important values of the zeta function include formula_63, the solution to the Basel problem, Apéry's constant formula_64, proved by Roger Apéry to be an irrational number, and the "critical line" of complex numbers with real part formula_5, conjectured by the Riemann hypothesis to be the only values other than negative integers where the function can be zero.
Random harmonic series.
The random harmonic series is
formula_65
where the values formula_66 are independent and identically distributed random variables that take the two values formula_67 and formula_68 with equal probability formula_5. It converges with probability 1, as can be seen by using the Kolmogorov three-series theorem or the closely related Kolmogorov maximal inequality. The sum of the series is a random variable whose probability density function is close to formula_7 for values between formula_68 and formula_69, and decreases to near-zero for values greater than formula_70 or less than formula_71. Intermediate between these ranges, at the values formula_72, the probability density is formula_73 for a nonzero but very small value formula_74.
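In contrast to the divergent harmonic series itself, sampled partial sums of the random version settle into a bounded range; a simulation sketch (names illustrative):

```python
import random

def random_harmonic_sum(terms, rng):
    """Partial sum of s_n / n over n = 1..terms, with each sign s_n
    chosen independently as +1 or -1 by a fair coin flip."""
    return sum(rng.choice((-1, 1)) / n for n in range(1, terms + 1))

# The series converges with probability 1; its variance is the sum of
# 1/n**2, about 1.64, so samples cluster within a few units of zero.
rng = random.Random(1)  # fixed seed for reproducibility
samples = [random_harmonic_sum(10_000, rng) for _ in range(100)]
```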
Depleted harmonic series.
The depleted harmonic series where all of the terms in which the digit 9 appears anywhere in the denominator are removed can be shown to converge, to approximately 22.92067. In fact, when all the terms containing any particular string of digits (in any base) are removed, the series converges.
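The digit filter and the reason for convergence can both be sketched in Python (illustrative; function names invented here). Among the d-digit denominators, at most 8 · 9^(d−1) avoid the digit 9, and each contributes at most 10^(1−d), so the whole series is bounded by the geometric series Σ 8 · (9/10)^(d−1) = 80:

```python
def kempner_partial(max_n):
    """Partial sum of 1/n over n <= max_n whose decimal digits avoid 9."""
    return sum(1 / n for n in range(1, max_n + 1) if '9' not in str(n))

def kempner_bound(max_digits):
    # d-digit survivors number at most 8 * 9**(d-1), each term at most
    # 10**(1-d); the geometric series over all d sums to 80, which
    # proves the depleted series converges.
    return sum(8 * 9 ** (d - 1) / 10 ** (d - 1) for d in range(1, max_digits + 1))
```

Note that the partial sums approach the limit extremely slowly, so the geometric bound, not brute-force summation, is the practical route to proving convergence.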
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\sum_{n=1}^\\infty\\frac{1}{n} = 1 + \\frac{1}{2} + \\frac{1}{3} + \\frac{1}{4} + \\frac{1}{5} + \\cdots."
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "\\ln n + \\gamma"
},
{
"math_id": 3,
"text": "\\ln"
},
{
"math_id": 4,
"text": "\\gamma\\approx0.577"
},
{
"math_id": 5,
"text": "\\tfrac12"
},
{
"math_id": 6,
"text": "\\tfrac13"
},
{
"math_id": 7,
"text": "\\tfrac14"
},
{
"math_id": 8,
"text": "H_n"
},
{
"math_id": 9,
"text": "\\sum_{n=1}^\\infty\\frac{1}{n} = 1 + \\frac{1}{2} + \\frac{1}{3} + \\frac{1}{4} + \\frac{1}{5} + \\cdots"
},
{
"math_id": 10,
"text": "\\begin{alignat}{8}\n 1 & + \\frac{1}{2} && + \\frac{1}{3} && + \\frac{1}{4} && + \\frac{1}{5} && + \\frac{1}{6} && + \\frac{1}{7} && + \\frac{1}{8} && + \\frac{1}{9} && + \\cdots \\\\[5pt]\n\n {} \\geq 1 & + \\frac{1}{2} && + \\frac{1}{\\color{red}{\\mathbf{4}}} && + \\frac{1}{4} && + \\frac{1}{\\color{red}{\\mathbf{8}}} && + \\frac{1}{\\color{red}{\\mathbf{8}}} && + \\frac{1}{\\color{red}{\\mathbf{8}}} && + \\frac{1}{8} && + \\frac{1}{\\color{red}{\\mathbf{16}}} && + \\cdots \\\\[5pt]\n \\end{alignat}"
},
{
"math_id": 11,
"text": "\\begin{align}\n & 1 + \\left(\\frac{1}{2}\\right) + \\left(\\frac{1}{4} + \\frac{1}{4}\\right) + \\left(\\frac{1}{8} + \\frac{1}{8} + \\frac{1}{8} + \\frac{1}{8}\\right) + \\left(\\frac{1}{16} + \\cdots + \\frac{1}{16}\\right) + \\cdots \\\\[5pt]\n {} = {} & 1 + \\frac{1}{2} + \\frac{1}{2} + \\frac{1}{2} + \\frac{1}{2} + \\cdots.\n \\end{align}"
},
{
"math_id": 12,
"text": "k"
},
{
"math_id": 13,
"text": "\\sum_{n=1}^{2^k} \\frac{1}{n} \\geq 1 + \\frac{k}{2}"
},
{
"math_id": 14,
"text": "\\tfrac1n"
},
{
"math_id": 15,
"text": "y=\\tfrac1x"
},
{
"math_id": 16,
"text": "x"
},
{
"math_id": 17,
"text": "\\int_1^\\infty\\frac{1}{x}\\,dx = \\infty."
},
{
"math_id": 18,
"text": "\\int_1^{N+1}\\frac1x\\,dx<\\sum_{i=1}^N\\frac1i<\\int_1^{N}\\frac1x\\,dx+1."
},
{
"math_id": 19,
"text": "H_n = \\sum_{k = 1}^n \\frac{1}{k}."
},
{
"math_id": 20,
"text": "H_n = \\ln n + \\gamma + \\frac{1}{2n} - \\varepsilon_n"
},
{
"math_id": 21,
"text": "\\gamma\\approx 0.5772"
},
{
"math_id": 22,
"text": "0\\le\\varepsilon_n\\le 1/8n^2"
},
{
"math_id": 23,
"text": "H_1=1"
},
{
"math_id": 24,
"text": "2^k"
},
{
"math_id": 25,
"text": "M"
},
{
"math_id": 26,
"text": "H_k"
},
{
"math_id": 27,
"text": "H_n=\\sum_{i=1}^n \\tfrac{M/i}{M}"
},
{
"math_id": 28,
"text": "M/2^k"
},
{
"math_id": 29,
"text": "n>1"
},
{
"math_id": 30,
"text": "n/2"
},
{
"math_id": 31,
"text": "H_2=1.5"
},
{
"math_id": 32,
"text": "H_6=2.45"
},
{
"math_id": 33,
"text": "\\psi(x)=\\frac{d}{dx}\\ln\\big(\\Gamma(x)\\big)=\\frac{\\Gamma'(x)}{\\Gamma(x)}."
},
{
"math_id": 34,
"text": "\\psi(n)=H_{n-1}-\\gamma"
},
{
"math_id": 35,
"text": "n=3"
},
{
"math_id": 36,
"text": "\\tfrac{r}{2n}, \\tfrac{r}{2(n-1)}, \\tfrac{r}{2(n-2)}, \\dots"
},
{
"math_id": 37,
"text": "r"
},
{
"math_id": 38,
"text": "\\frac{r}{2n}+\\frac{r}{2(n-1)}+\\frac{r}{2(n-2)}+\\cdots=\\frac{r}{2} H_n,"
},
{
"math_id": 39,
"text": "r=30"
},
{
"math_id": 40,
"text": "\\tfrac{30}{2}\\bigl(\\tfrac13+\\tfrac12+\\tfrac11)=27.5"
},
{
"math_id": 41,
"text": "\\tfrac12\\cdot\\tfrac12"
},
{
"math_id": 42,
"text": "\\tfrac12\\cdot\\tfrac13"
},
{
"math_id": 43,
"text": "\\tfrac12 H_n"
},
{
"math_id": 44,
"text": "\\sum_{i=1}^{\\infty}\\frac{1}{i}=\\prod_{p\\in\\mathbb{P}}\\left(1+\\frac1p+\\frac1{p^2}+\\cdots\\right)=\\prod_{p\\in\\mathbb{P}} \\frac{1}{1-1/p},"
},
{
"math_id": 45,
"text": "\\mathbb{P}"
},
{
"math_id": 46,
"text": "\\ln \\prod_{p\\in\\mathbb{P}} \\frac{1}{1-1/p}=\\sum_{p\\in\\mathbb{P}}\\ln\\frac{1}{1-1/p}=\\sum_{p\\in\\mathbb{P}}\\left(\\frac1p+\\frac1{2p^2}+\\frac1{3p^3}+\\cdots\\right)=\\sum_{p\\in\\mathbb{P}}\\frac1p+K."
},
{
"math_id": 47,
"text": "K"
},
{
"math_id": 48,
"text": "\\frac1n\\sum_{i=1}^n\\left\\lfloor\\frac{n}i\\right\\rfloor\\le\\frac1n\\sum_{i=1}^n\\frac{n}i=H_n."
},
{
"math_id": 49,
"text": "\\ln n+2\\gamma-1+O(1/\\sqrt{n})"
},
{
"math_id": 50,
"text": "k/n"
},
{
"math_id": 51,
"text": "n/k"
},
{
"math_id": 52,
"text": "nH_n"
},
{
"math_id": 53,
"text": "y"
},
{
"math_id": 54,
"text": "k+2"
},
{
"math_id": 55,
"text": "\\tfrac2{k+2}"
},
{
"math_id": 56,
"text": "\\sum_{i=2}^n\\sum_{k=0}^{i-2}\\frac2{k+2}=\\sum_{i=1}^{n-1}2H_i=O(n\\log n)."
},
{
"math_id": 57,
"text": "\\sum_{n = 1}^\\infty \\frac{(-1)^{n + 1}}{n} = 1 - \\frac{1}{2} + \\frac{1}{3} - \\frac{1}{4} + \\frac{1}{5} - \\cdots"
},
{
"math_id": 58,
"text": "\\frac{1}{1} - \\frac{1}{2} +\\cdots + \\frac{1}{2n-1} - \\frac{1}{2n} = H_{2n} - H_n = \\ln 2 - \\frac{1}{2n} + O(n^{-2})"
},
{
"math_id": 59,
"text": "\\sum_{n = 0}^\\infty \\frac{(-1)^{n}}{2n+1} = 1 - \\frac{1}{3} + \\frac{1}{5} - \\frac{1}{7} + \\cdots = \\frac{\\pi}{4}."
},
{
"math_id": 60,
"text": "x>1"
},
{
"math_id": 61,
"text": "\\zeta(x)=\\sum_{n=1}^{\\infty}\\frac{1}{n^x}=\\frac1{1^x}+\\frac1{2^x}+\\frac1{3^x}+\\cdots,"
},
{
"math_id": 62,
"text": "x=1"
},
{
"math_id": 63,
"text": "\\zeta(2)=\\pi^2/6"
},
{
"math_id": 64,
"text": "\\zeta(3)"
},
{
"math_id": 65,
"text": "\\sum_{n=1}^{\\infty}\\frac{s_{n}}{n},"
},
{
"math_id": 66,
"text": "s_n"
},
{
"math_id": 67,
"text": "+1"
},
{
"math_id": 68,
"text": "-1"
},
{
"math_id": 69,
"text": "1"
},
{
"math_id": 70,
"text": "3"
},
{
"math_id": 71,
"text": "-3"
},
{
"math_id": 72,
"text": "\\pm 2"
},
{
"math_id": 73,
"text": "\\tfrac18-\\varepsilon"
},
{
"math_id": 74,
"text": "\\varepsilon< 10^{-42}"
}
] |
https://en.wikipedia.org/wiki?curid=142488
|
1424913
|
Vis viva
|
Historical term
Vis viva (from the Latin for "living force") is a historical term used to describe a quantity similar to kinetic energy in an early formulation of the principle of conservation of energy.
Overview.
Proposed by Gottfried Leibniz over the period 1676–1689, the theory was controversial as it seemed to oppose the theory of conservation of quantity of motion advocated by René Descartes. Descartes' quantity of motion was different from momentum, but Newton defined the quantity of motion as the conjunction of the quantity of matter and velocity in Definition II of his Principia. In Definition III, he defined the force that resists a change in motion as the "vis inertia" of Descartes. Newton’s Third Law of Motion (for every action there is an equal and opposite reaction) is also equivalent to the principle of conservation of momentum. Leibniz accepted the principle of conservation of momentum, but rejected the Cartesian version of it. The difference between these ideas was whether the quantity of motion was simply related to a body's resistance to a change in velocity (vis inertia) or whether a body's amount of force due to its motion (vis viva) was related to the square of its velocity.
The theory was eventually absorbed into the modern theory of energy, though the term still survives in the context of celestial mechanics through the "vis viva" equation. The English equivalent "living force" was also used, for example by George William Hill.
The term is due to the German philosopher Gottfried Wilhelm Leibniz, who was the first to attempt a mathematical formulation from 1676 to 1689. Leibniz noticed that in many mechanical systems (of several masses, "mi" each with velocity "vi") the quantity
<templatestyles src="Block indent/styles.css"/>formula_0
was conserved. He called this quantity the "vis viva" or "living force" of the system. The principle represented an accurate statement of the conservation of kinetic energy in elastic collisions that was independent of the conservation of momentum.
However, many physicists at the time were unaware of this fact and, instead, were influenced by the prestige of Sir Isaac Newton in England and of René Descartes in France, both of whom advanced the conservation of momentum as a guiding principle. Thus the momentum:
<templatestyles src="Block indent/styles.css"/>formula_1
was held by the rival camp to be the conserved "vis viva". It was largely engineers such as John Smeaton, Peter Ewart, Karl Holtzmann, Gustave-Adolphe Hirn and Marc Seguin who objected that conservation of momentum alone was not adequate for practical calculation and who made use of Leibniz's principle. The principle was also championed by some chemists such as William Hyde Wollaston.
The French mathematician Émilie du Châtelet, who had a sound grasp of Newtonian mechanics, developed Leibniz's concept and, combining it with the observations of Willem 's Gravesande, showed that "vis viva" was dependent on the square of the velocities.
Members of the academic establishment such as John Playfair were quick to point out that kinetic energy is clearly not conserved. This is obvious to a modern analysis based on the second law of thermodynamics, but in the 18th and 19th centuries, the fate of the lost energy was still unknown. Gradually it came to be suspected that the heat inevitably generated by motion was another form of "vis viva". In 1783, Antoine Lavoisier and Pierre-Simon Laplace reviewed the two competing theories of "vis viva" and caloric theory. Count Rumford's 1798 observations of heat generation during the boring of cannons added more weight to the view that mechanical motion could be converted into heat. "Vis viva" began to be known as "energy" after Thomas Young first used the term in 1807.
The recalibration of "vis viva" to include the coefficient of a half, namely:
<templatestyles src="Block indent/styles.css"/>formula_2
was largely the result of the work of Gaspard-Gustave Coriolis and Jean-Victor Poncelet over the period 1819–1839, although the present-day definition can occasionally be found earlier (e.g., in Daniel Bernoulli's texts). The former called it the "quantité de travail" (quantity of work) and the latter, "travail mécanique" (mechanical work) and both championed its use in engineering calculation.
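The historical dispute is easy to probe numerically: in a one-dimensional elastic collision, both candidate quantities, the quantity of motion Σmᵢvᵢ and Leibniz's vis viva Σmᵢvᵢ², are conserved. The Python sketch below (illustrative only; the masses and velocities are arbitrary) applies the standard elastic-collision formulas and compares the two bookkeepings:

```python
def elastic_1d(m1, v1, m2, v2):
    """Post-collision velocities for a 1D elastic collision (textbook formulas)."""
    v1p = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2p = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1p, v2p

m1, v1, m2, v2 = 2.0, 3.0, 1.0, -1.0
v1p, v2p = elastic_1d(m1, v1, m2, v2)

p_before = m1 * v1 + m2 * v2           # quantity of motion (Descartes/Newton)
p_after = m1 * v1p + m2 * v2p
vv_before = m1 * v1**2 + m2 * v2**2    # vis viva (Leibniz; no factor of 1/2)
vv_after = m1 * v1p**2 + m2 * v2p**2
```

In the elastic case both quantities are conserved, so the two camps' principles agree; they part ways for inelastic collisions, where momentum survives but vis viva is lost to heat.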
|
[
{
"math_id": 0,
"text": "\\sum_{i} m_i v_i^2"
},
{
"math_id": 1,
"text": "\\,\\!\\sum_{i} m_i \\mathbf{v}_i"
},
{
"math_id": 2,
"text": "E = \\frac {1} {2}\\sum_{i} m_i v_i^2"
}
] |
https://en.wikipedia.org/wiki?curid=1424913
|
14250649
|
3-hydroxymethylcephem carbamoyltransferase
|
Class of enzymes
In enzymology, a 3-hydroxymethylcephem carbamoyltransferase (EC 2.1.3.7) is an enzyme that catalyzes the chemical reaction
carbamoyl phosphate + a 3-hydroxymethylceph-3-em-4-carboxylate formula_0 phosphate + a 3-carbamoyloxymethylcephem
Thus, the two substrates of this enzyme are carbamoyl phosphate and 3-hydroxymethylceph-3-em-4-carboxylate, whereas its two products are phosphate and 3-carbamoyloxymethylcephem.
This enzyme belongs to the family of transferases that transfer one-carbon groups, specifically the carboxy- and carbamoyltransferases.
The systematic name of this enzyme class is carbamoyl-phosphate:3-hydroxymethylceph-3-em-4-carboxylate carbamoyltransferase.
This enzyme has at least one effector, ATP.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14250649
|
14250670
|
3-methyl-2-oxobutanoate hydroxymethyltransferase
|
Class of enzymes
In enzymology, a 3-methyl-2-oxobutanoate hydroxymethyltransferase (EC 2.1.2.11) is an enzyme that catalyzes the chemical reaction
5,10-methylenetetrahydrofolate + 3-methyl-2-oxobutanoate + H2O formula_0 tetrahydrofolate + 2-dehydropantoate
The 3 substrates of this enzyme are 5,10-methylenetetrahydrofolate, 3-methyl-2-oxobutanoate, and H2O, whereas its two products are tetrahydrofolate and 2-dehydropantoate.
This enzyme belongs to the family of transferases that transfer one-carbon groups, specifically the hydroxymethyl-, formyl- and related transferases. The systematic name of this enzyme class is 5,10-methylenetetrahydrofolate:3-methyl-2-oxobutanoate hydroxymethyltransferase. Other names in common use include alpha-ketoisovalerate hydroxymethyltransferase, dehydropantoate hydroxymethyltransferase, ketopantoate hydroxymethyltransferase, oxopantoate hydroxymethyltransferase, and 5,10-methylene tetrahydrofolate:alpha-ketoisovalerate hydroxymethyltransferase. This enzyme participates in pantothenate and CoA biosynthesis.
Structural studies.
As of late 2007, 4 structures have been solved for this class of enzymes, with PDB accession codes 1M3U, 1O66, 1O68, and 1OY0.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14250670
|
14250688
|
D-alanine 2-hydroxymethyltransferase
|
In enzymology, a D-alanine 2-hydroxymethyltransferase (EC 2.1.2.7) is an enzyme that catalyzes the chemical reaction
5,10-methylenetetrahydrofolate + D-alanine + H2O formula_0 tetrahydrofolate + 2-methylserine
The 3 substrates of this enzyme are 5,10-methylenetetrahydrofolate, D-alanine, and H2O, whereas its two products are tetrahydrofolate and 2-methylserine.
This enzyme belongs to the family of transferases that transfer one-carbon groups, specifically the hydroxymethyl-, formyl- and related transferases. The systematic name of this enzyme class is 5,10-methylenetetrahydrofolate:D-alanine 2-hydroxymethyltransferase. This enzyme is also called 2-methylserine hydroxymethyltransferase. This enzyme participates in one carbon pool by folate.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14250688
|
14250702
|
Deoxycytidylate 5-hydroxymethyltransferase
|
In enzymology, a deoxycytidylate 5-hydroxymethyltransferase (EC 2.1.2.8) is an enzyme that catalyzes the chemical reaction
5,10-methylenetetrahydrofolate + H2O + deoxycytidylate formula_0 tetrahydrofolate + 5-hydroxymethyldeoxycytidylate
The 3 substrates of this enzyme are 5,10-methylenetetrahydrofolate, H2O, and deoxycytidylate, whereas its two products are tetrahydrofolate and 5-hydroxymethyldeoxycytidylate.
This enzyme belongs to the family of transferases that transfer one-carbon groups, specifically the hydroxymethyl-, formyl- and related transferases. The systematic name of this enzyme class is 5,10-methylenetetrahydrofolate:deoxycytidylate 5-hydroxymethyltransferase. Other names in common use include dCMP hydroxymethylase, d-cytidine 5'-monophosphate hydroxymethylase, deoxyCMP hydroxymethylase, deoxycytidylate hydroxymethylase, and deoxycytidylic hydroxymethylase. This enzyme participates in pyrimidine metabolism and one carbon pool by folate.
Structural studies.
As of late 2007, 3 structures have been solved for this class of enzymes, with PDB accession codes 1B49, 1B5D, and 1B5E.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14250702
|
14250722
|
Glycine formimidoyltransferase
|
In enzymology, a glycine formimidoyltransferase (EC 2.1.2.4) is an enzyme that catalyzes the chemical reaction
5-formimidoyltetrahydrofolate + glycine formula_0 tetrahydrofolate + N-formimidoylglycine
Thus, the two substrates of this enzyme are 5-formimidoyltetrahydrofolate and glycine, whereas its two products are tetrahydrofolate and N-formimidoylglycine.
This enzyme belongs to the family of transferases that transfer one-carbon groups, specifically the hydroxymethyl-, formyl- and related transferases. The systematic name of this enzyme class is 5-formimidoyltetrahydrofolate:glycine N-formimidoyltransferase. Other names in common use include formiminoglycine formiminotransferase, FIG formiminotransferase, and glycine formiminotransferase. This enzyme participates in purine metabolism and one carbon pool by folate.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14250722
|
14250751
|
Lysine carbamoyltransferase
|
In enzymology, a lysine carbamoyltransferase (EC 2.1.3.8) is an enzyme that catalyzes the chemical reaction
carbamoyl phosphate + L-lysine formula_0 phosphate + L-homocitrulline
Thus, the two substrates of this enzyme are carbamoyl phosphate and L-lysine, whereas its two products are phosphate and L-homocitrulline.
This enzyme belongs to the family of transferases that transfer one-carbon groups, specifically the carboxy- and carbamoyltransferases. The systematic name of this enzyme class is carbamoyl-phosphate:L-lysine carbamoyltransferase. This enzyme is also called lysine transcarbamylase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14250751
|
14250769
|
Methionyl-tRNA formyltransferase
|
In enzymology, a methionyl-tRNA formyltransferase (EC 2.1.2.9) is an enzyme that catalyzes the chemical reaction
10-formyltetrahydrofolate + L-methionyl-tRNAfMet + H2O formula_0 tetrahydrofolate + "N"-formylmethionyl-tRNAfMet
This enzyme belongs to the family of transferases that transfer one-carbon groups, specifically the hydroxymethyl-, formyl- and related transferases. The systematic name of this enzyme class is 10-formyltetrahydrofolate:L-methionyl-tRNA N-formyltransferase. This enzyme participates in three metabolic pathways: methionine metabolism, one carbon pool by folate, and aminoacyl-tRNA biosynthesis.
Structural studies.
As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 1FMT and 2FMT.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14250769
|
14250789
|
Methylmalonyl-CoA carboxytransferase
|
In enzymology, a methylmalonyl-CoA carboxytransferase (EC 2.1.3.1) is an enzyme that catalyzes the chemical reaction
(S)-methylmalonyl-CoA + pyruvate formula_0 propanoyl-CoA + oxaloacetate
Thus, the two substrates of this enzyme are (S)-methylmalonyl-CoA and pyruvate, whereas its two products are propanoyl-CoA and oxaloacetate.
This enzyme belongs to the family of transferases that transfer one-carbon groups, specifically the carboxy- and carbamoyltransferases. The systematic name of this enzyme class is (S)-methylmalonyl-CoA:pyruvate carboxytransferase. Other names in common use include transcarboxylase, methylmalonyl coenzyme A carboxyltransferase, methylmalonyl-CoA transcarboxylase, oxalacetic transcarboxylase, methylmalonyl-CoA carboxyltransferase, (S)-2-methyl-3-oxopropanoyl-CoA:pyruvate carboxyltransferase, (S)-2-methyl-3-oxopropanoyl-CoA:pyruvate carboxytransferase, and carboxytransferase [incorrect]. This enzyme participates in propanoate metabolism. It has three cofactors: zinc, biotin, and cobalt.
Structural studies.
As of late 2007, 12 structures have been solved for this class of enzymes, with PDB accession codes 1DCZ, 1DD2, 1ON3, 1ON9, 1RQB, 1RQE, 1RQH, 1RR2, 1S3H, 1U5J, 2D5D, and 2EVB.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14250789
|
14250807
|
N-acetylornithine carbamoyltransferase
|
Enzyme
In enzymology, a N-acetylornithine carbamoyltransferase (EC 2.1.3.9) is an enzyme that catalyzes the chemical reaction
carbamoyl phosphate + N2-acetyl-L-ornithine formula_0 phosphate + N-acetyl-L-citrulline
Thus, the two substrates of this enzyme are carbamoyl phosphate and N2-acetyl-L-ornithine, whereas its two products are phosphate and N-acetyl-L-citrulline.
This enzyme belongs to the family of transferases that transfer one-carbon groups, specifically the carboxy- and carbamoyltransferases. The systematic name of this enzyme class is carbamoyl-phosphate:N2-acetyl-L-ornithine carbamoyltransferase. Other names in common use include acetylornithine transcarbamylase, N-acetylornithine transcarbamylase, AOTC, and carbamoyl-phosphate:2-N-acetyl-L-ornithine carbamoyltransferase.
Structural studies.
As of late 2007, 4 structures have been solved for this class of enzymes, with PDB accession codes 2G65, 2G68, 2G6A, and 2G6C.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14250807
|
14250821
|
Oxamate carbamoyltransferase
|
In enzymology, an oxamate carbamoyltransferase (EC 2.1.3.5) is an enzyme that catalyzes the chemical reaction
carbamoyl phosphate + oxamate formula_0 phosphate + oxalureate
Thus, the two substrates of this enzyme are carbamoyl phosphate and oxamate, whereas its two products are phosphate and oxalureate.
This enzyme belongs to the family of transferases that transfer one-carbon groups, specifically the carboxy- and carbamoyltransferases. The systematic name of this enzyme class is carbamoyl-phosphate:oxamate carbamoyltransferase. This enzyme is also called oxamic transcarbamylase. This enzyme participates in purine metabolism.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14250821
|
14250839
|
Phosphoribosylaminoimidazolecarboxamide formyltransferase
|
In enzymology, a phosphoribosylaminoimidazolecarboxamide formyltransferase (EC 2.1.2.3), also known by the shorter name AICAR transformylase, is an enzyme that catalyzes the chemical reaction
10-formyltetrahydrofolate + AICAR formula_0 tetrahydrofolate + FAICAR
Thus, the two substrates of this enzyme are 10-formyltetrahydrofolate and AICAR, whereas its two products are tetrahydrofolate and FAICAR.
This enzyme participates in purine metabolism and one carbon pool by folate.
Nomenclature.
This enzyme belongs to the family of transferases that transfer one-carbon groups, specifically the hydroxymethyl-, formyl- and related transferases. The systematic name of this enzyme class is 10-formyltetrahydrofolate:5-phosphoribosyl-5-amino-4-imidazole-carboxamide N-formyltransferase. Other names in common use include:
<templatestyles src="Div col/styles.css"/>
Structural studies.
As of late 2007, 11 structures have been solved for this class of enzymes, with PDB accession codes 1G8M, 1M9N, 1OZ0, 1P4R, 1PKX, 1PL0, 1THZ, 2B1G, 2B1I, 2IU0, and 2IU3.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14250839
|
14250859
|
Putrescine carbamoyltransferase
|
Enzyme
In enzymology, a putrescine carbamoyltransferase (EC 2.1.3.6) is an enzyme that catalyzes the chemical reaction
carbamoyl phosphate + putrescine formula_0 phosphate + N-carbamoylputrescine
Thus, the two substrates of this enzyme are carbamoyl phosphate and putrescine, whereas its two products are phosphate and N-carbamoylputrescine.
This enzyme belongs to the family of transferases that transfer one-carbon groups, specifically the carboxy- and carbamoyltransferases. The systematic name of this enzyme class is carbamoyl-phosphate:putrescine carbamoyltransferase. Other names in common use include PTCase, putrescine synthase, and putrescine transcarbamylase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14250859
|
14250872
|
Scyllo-inosamine-4-phosphate amidinotransferase
|
Enzyme
In enzymology, a scyllo-inosamine-4-phosphate amidinotransferase (EC 2.1.4.2) is an enzyme that catalyzes the chemical reaction
L-arginine + 1-amino-1-deoxy-scyllo-inositol 4-phosphate formula_0 L-ornithine + 1-guanidino-1-deoxy-scyllo-inositol 4-phosphate
Thus, the two substrates of this enzyme are L-arginine and 1-amino-1-deoxy-scyllo-inositol 4-phosphate, whereas its two products are L-ornithine and 1-guanidino-1-deoxy-scyllo-inositol 4-phosphate.
This enzyme belongs to the family of transferases that transfer one-carbon groups, specifically the amidinotransferases. The systematic name of this enzyme class is L-arginine:1-amino-1-deoxy-scyllo-inositol-4-phosphate amidinotransferase. Other names in common use include L-arginine:inosamine-P-amidinotransferase, inosamine-P amidinotransferase, L-arginine:inosamine phosphate amidinotransferase, and inosamine-phosphate amidinotransferase. This enzyme participates in streptomycin biosynthesis.
Structural studies.
As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 1BWD.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14250872
|
14250961
|
Aliphatic aldoxime dehydratase
|
In enzymology, an aliphatic aldoxime dehydratase (EC 4.99.1.5) is an enzyme that catalyzes the chemical reaction
an aliphatic aldoxime formula_0 an aliphatic nitrile + H2O
This dehydratase converts an aldoxime group on an aliphatic substrate to a nitrile, releasing water as a byproduct.
This enzyme belongs to the family of lyases, specifically the "catch-all" class of lyases that do not fit into any other sub-class. The systematic name of this enzyme class is aliphatic aldoxime hydro-lyase (aliphatic-nitrile-forming). Other names in common use include OxdA, and aliphatic aldoxime hydro-lyase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14250961
|
14250988
|
Alkylmercury lyase
|
The enzyme alkylmercury lyase (EC 4.99.1.2) catalyzes the reaction
an alkylmercury + H+ formula_0 an alkane + Hg2+
This enzyme belongs to the family of lyases, specifically the "catch-all" class of lyases that do not fit into any other sub-class. The systematic name of this enzyme class is alkylmercury mercury(II)-lyase (alkane-forming). Other names in common use include organomercury lyase, organomercurial lyase, and alkylmercury mercuric-lyase.
The enzyme cleaves the carbon–mercury bond of compounds such as methylmercury, releasing the much less toxic inorganic form of the metal.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14250988
|
14251008
|
Indoleacetaldoxime dehydratase
|
In enzymology, an indoleacetaldoxime dehydratase (EC 4.99.1.6) is an enzyme that catalyzes the chemical reaction
(indol-3-yl)acetaldehyde oxime formula_0 (indol-3-yl)acetonitrile + H2O
Hence, this enzyme has one substrate, (indol-3-yl)acetaldehyde oxime, and two products, (indol-3-yl)acetonitrile and H2O.
This enzyme belongs to the family of lyases, specifically the "catch-all" class of lyases that do not fit into any other sub-class. The systematic name of this enzyme class is (indol-3-yl)acetaldehyde-oxime hydro-lyase [(indol-3-yl)acetonitrile-forming]. Other names in common use include indoleacetaldoxime hydro-lyase, 3-indoleacetaldoxime hydro-lyase, indole-3-acetaldoxime hydro-lyase, indole-3-acetaldehyde-oxime hydro-lyase, and (indol-3-yl)acetaldehyde-oxime hydro-lyase. This enzyme participates in cyanoamino acid metabolism.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14251008
|
14251029
|
Phenylacetaldoxime dehydratase
|
Class of enzymes
In enzymology, a phenylacetaldoxime dehydratase (EC 4.99.1.7) is an enzyme that catalyzes the chemical reaction
(Z)-phenylacetaldehyde oxime formula_0 phenylacetonitrile + H2O
Hence, this enzyme has one substrate, (Z)-phenylacetaldehyde oxime, and two products, phenylacetonitrile and H2O.
This enzyme belongs to the family of lyases, specifically the "catch-all" class of lyases that do not fit into any other sub-class. The systematic name of this enzyme class is (Z)-phenylacetaldehyde-oxime hydro-lyase (phenylacetonitrile-forming). Other names in common use include PAOx dehydratase, arylacetaldoxime dehydratase, OxdB, and (Z)-phenylacetaldehyde-oxime hydro-lyase. This enzyme participates in styrene degradation.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14251029
|
14251066
|
Sirohydrochlorin ferrochelatase
|
Enzyme
The enzyme sirohydrochlorin ferrochelatase (EC 4.99.1.4) catalyzes the following reaction:
siroheme + 2H+ formula_0 sirohydrochlorin + Fe2+
This enzyme belongs to the family of lyases, specifically the "catch-all" class of lyases that do not fit into any other sub-class. The systematic name of this enzyme class is siroheme ferro-lyase (sirohydrochlorin-forming). The enzyme, also known as SirB, is present in all plants and in nitrate- and sulfate-assimilating/dissimilating bacteria. Siroheme is a cofactor of both assimilatory and dissimilatory nitrite and sulfite reductases. Siroheme is synthesized from the central tetrapyrrole molecule uroporphyrinogen III, which forms the first branch-point of the tetrapyrrole biosynthetic pathway, the other branch being the heme/chlorophyll branch. The siroheme branch consists of three steps: methylation, dehydrogenation, and ferrochelation, with the last step carried out by sirohydrochlorin ferrochelatase.
Sirohydrochlorin ferrochelatase is a class II chelatase, i.e. it does not require ATP for its activity, unlike class I chelatases such as Mg-chelatase. In "E. coli", all three steps of siroheme biosynthesis are carried out by a single multifunctional enzyme called CysG, while in the yeast "Saccharomyces cerevisiae" the last two steps are carried out by a bifunctional enzyme called Met8p. CysG and Met8p share common folds but are unrelated to SirB, and together they constitute the so-called class III chelatases. SirB belongs to the CbiX protein family; the plant SirB is half the length of bacterial SirB and aligns with its N- and C-terminal halves, suggesting that the longer form evolved from gene duplication and fusion of the shorter form.
Sirohydrochlorin ferrochelatase in all land plants and certain green algae, but not in bacteria or other algae, contains an iron–sulfur cluster, which can switch between [2Fe-2S] and [4Fe-4S] forms depending on the redox status of the cellular milieu. Although the role of this cluster switching has not been clearly determined, it is postulated to be involved in a critical redox regulation of siroheme biosynthesis.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14251066
|
14251275
|
3-oxoacid CoA-transferase
|
Enzyme family
In enzymology, a 3-oxoacid CoA-transferase (EC 2.8.3.5) is an enzyme that catalyzes the chemical reaction
succinyl-CoA + a 3-oxo acid formula_0 succinate + a 3-oxoacyl-CoA
Thus, the two substrates of this enzyme are succinyl-CoA and 3-oxo acid, whereas its two products are succinate and 3-oxoacyl-CoA.
This enzyme belongs to the family of transferases, specifically the CoA-transferases. The systematic name of this enzyme class is succinyl-CoA:3-oxo-acid CoA-transferase. Other names in common use include succinyl-CoA-3-ketoacid-CoA transferase, 3-oxoacid coenzyme A-transferase, 3-ketoacid CoA-transferase, 3-ketoacid coenzyme A transferase, 3-oxo-CoA transferase, 3-oxoacid CoA dehydrogenase, acetoacetate succinyl-CoA transferase, acetoacetyl coenzyme A-succinic thiophorase, succinyl coenzyme A-acetoacetyl coenzyme A-transferase, and succinyl-CoA transferase. This enzyme participates in 3 metabolic pathways: synthesis and degradation of ketone bodies, valine, leucine and isoleucine degradation, and butanoate metabolism.
This protein may use the morpheein model of allosteric regulation.
Structural studies.
As of late 2007, 7 structures have been solved for this class of enzymes, with PDB accession codes 1M3E, 1O9L, 1OOY, 1OOZ, 1OPE, 2NRB, and 2NRC.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14251275
|
14251288
|
3-oxoadipate CoA-transferase
|
Class of enzymes
In enzymology, a 3-oxoadipate CoA-transferase (EC 2.8.3.6) is an enzyme that catalyzes the chemical reaction
succinyl-CoA + 3-oxoadipate formula_0 succinate + 3-oxoadipyl-CoA
Thus, the two substrates of this enzyme are succinyl-CoA and 3-oxoadipate, whereas its two products are succinate and 3-oxoadipyl-CoA.
This enzyme belongs to the family of transferases, specifically the CoA-transferases. The systematic name of this enzyme class is succinyl-CoA:3-oxoadipate CoA-transferase. Other names in common use include 3-oxoadipate coenzyme A-transferase, and 3-oxoadipate succinyl-CoA transferase. This enzyme participates in benzoate degradation via hydroxylation.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14251288
|
14251305
|
5-hydroxypentanoate CoA-transferase
|
Class of enzymes
In enzymology, a 5-hydroxypentanoate CoA-transferase (EC 2.8.3.14) is an enzyme that catalyzes the chemical reaction
acetyl-CoA + 5-hydroxypentanoate formula_0 acetate + 5-hydroxypentanoyl-CoA
Thus, the two substrates of this enzyme are acetyl-CoA and 5-hydroxypentanoate, whereas its two products are acetate and 5-hydroxypentanoyl-CoA.
This enzyme belongs to the family of transferases, specifically the CoA-transferases. The systematic name of this enzyme class is acetyl-CoA:5-hydroxypentanoate CoA-transferase. Other names in common use include 5-hydroxyvalerate CoA-transferase, and 5-hydroxyvalerate coenzyme A transferase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14251305
|
14251319
|
Acetate CoA-transferase
|
Class of enzymes
In enzymology, an acetate CoA-transferase (EC 2.8.3.8) is an enzyme that catalyzes the chemical reaction
acyl-CoA + acetate formula_0 a fatty acid anion + acetyl-CoA
Thus, the two substrates of this enzyme are acyl-CoA and acetate, whereas its two products are a fatty acid anion and acetyl-CoA.
This enzyme belongs to the family of transferases, specifically the CoA-transferases. The systematic name of this enzyme class is acyl-CoA:acetate CoA-transferase. Other names in common use include acetate coenzyme A-transferase, butyryl CoA:acetate CoA transferase, butyryl coenzyme A transferase, and succinyl-CoA:acetate CoA transferase.
This enzyme participates in four metabolic pathways.
Structural studies.
As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 1K6D.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14251319
|
14251338
|
Amine sulfotransferase
|
Class of enzymes
In enzymology, an amine sulfotransferase (EC 2.8.2.3) is an enzyme that catalyzes the chemical reaction
3'-phosphoadenylyl sulfate + an amine formula_0 adenosine 3',5'-bisphosphate + a sulfamate
Thus, the two substrates of this enzyme are 3'-phosphoadenylyl sulfate and amine, whereas its two products are adenosine 3',5'-bisphosphate and sulfamate.
This enzyme belongs to the family of transferases, specifically the sulfotransferases, which transfer sulfur-containing groups. The systematic name of this enzyme class is 3'-phosphoadenylyl-sulfate:amine N-sulfotransferase. Other names in common use include arylamine sulfotransferase, and amine N-sulfotransferase. This enzyme participates in sulfur metabolism.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14251338
|
14251354
|
Aryl-sulfate sulfotransferase
|
Class of enzymes
In enzymology, an aryl-sulfate sulfotransferase (EC 2.8.2.22) is an enzyme that catalyzes the chemical reaction
an aryl sulfate + a phenol formula_0 a phenol + an aryl sulfate
Thus, the two substrates of this enzyme are an aryl sulfate and a phenol, and its two products are likewise a phenol and an aryl sulfate: the enzyme transfers the sulfate group from the aryl sulfate donor to the phenol acceptor.
This enzyme belongs to the family of transferases, specifically the sulfotransferases, which transfer sulfur-containing groups. The systematic name of this enzyme class is aryl-sulfate:phenol sulfotransferase. Other names in common use include arylsulfate-phenol sulfotransferase, arylsulfotransferase, ASST, arylsulfate sulfotransferase, and arylsulfate:phenol sulfotransferase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14251354
|
14251373
|
Aryl sulfotransferase
|
Class of enzymes
An aryl sulfotransferase (EC 2.8.2.1) is an enzyme that
transfers a sulfate group from phenolic sulfate esters to a phenolic acceptor substrate.
3'-phosphoadenylyl sulfate + a phenol formula_0 adenosine 3',5'-bisphosphate + an aryl sulfate
Thus, the two substrates of this enzyme are 3'-phosphoadenylyl sulfate and phenol, whereas its two products are adenosine 3',5'-bisphosphate and aryl sulfate.
These enzymes are transferases, specifically the sulfotransferases, which transfer sulfur-containing groups. The systematic name of this enzyme class is 3'-phosphoadenylyl-sulfate:phenol sulfotransferase. Other names in common use include phenol sulfotransferase, sulfokinase, 1-naphthol phenol sulfotransferase, 2-naphtholsulfotransferase, 4-nitrocatechol sulfokinase, arylsulfotransferase, dopamine sulfotransferase, p-nitrophenol sulfotransferase, phenol sulfokinase, ritodrine sulfotransferase, and PST. This enzyme participates in sulfur metabolism.
Structural studies.
As of late 2007, 5 structures have been solved for this class of enzymes, with PDB accession codes 1LS6, 1Z28, 1Z29, 2A3R, and 2D06.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14251373
|
14251390
|
Biotin synthase
|
Enzyme
Biotin synthase (BioB) (EC 2.8.1.6) is an enzyme that catalyzes the conversion of dethiobiotin (DTB) to biotin; this is the final step in the biotin biosynthetic pathway. Biotin, also known as vitamin B7, is a cofactor used in carboxylation, decarboxylation, and transcarboxylation reactions in many organisms including humans. Biotin synthase is an S-Adenosylmethionine (SAM) dependent enzyme that employs a radical mechanism to thiolate dethiobiotin, thus converting it to biotin.
This radical SAM enzyme belongs to the family of transferases, specifically the sulfurtransferases, which transfer sulfur-containing groups. The systematic name of this enzyme class is dethiobiotin:sulfur sulfurtransferase. This enzyme participates in biotin metabolism. It employs one cofactor, iron-sulfur.
Structure.
In 2004, the crystal structure of biotin synthase in complex with SAM and dethiobiotin was determined to 3.4 angstrom resolution. The PDB accession code for this structure is 1R30. The protein is a homodimer, meaning it is composed of two identical amino acid chains that fold together to form biotin synthase. Each monomer contains a TIM barrel with a [4Fe-4S]2+ cluster, SAM, and a [2Fe-2S]2+ cluster.
The [4Fe-4S]2+ cluster is used as a catalytic cofactor, directly coordinating SAM. Orbital overlap between SAM and a unique Fe atom on the [4Fe-4S]2+ cluster has been observed. The predicted role of the [4Fe-4S]2+ cofactor is to transfer an electron onto SAM through an inner-sphere mechanism, forcing it into an unstable high-energy state that ultimately leads to the formation of the 5'-deoxyadenosyl radical.
The [2Fe-2S]2+ cluster is thought to provide the source of sulfur with which to thiolate DTB. Isotopic labelling and spectroscopic studies show that destruction of the [2Fe-2S]2+ cluster accompanies BioB turnover, indicating that it is likely the sulfur from the [2Fe-2S]2+ cluster that is incorporated into DTB to form biotin.
Mechanism.
The reaction catalyzed by biotin synthase can be summarized as follows:
dethiobiotin + sulfur + 2 S-adenosyl-L-methionine formula_0 biotin + 2 L-methionine + 2 5'-deoxyadenosine
The proposed mechanism for biotin synthase begins with an inner-sphere electron transfer from the sulfur on SAM, reducing the [4Fe-4S]2+ cluster. This results in a spontaneous C-S bond cleavage, generating a 5'-deoxyadenosyl radical (5'-dA). This carbon radical abstracts a hydrogen from dethiobiotin, forming a dethiobiotinyl C9 carbon radical, which is immediately quenched by bonding to a sulfur atom on the [2Fe-2S]2+ cluster. This reduces one of the iron atoms from FeIII to FeII. At this point, the 5'-deoxyadenosine and methionine formed earlier are exchanged for a second equivalent of SAM. Reductive cleavage generates another 5'-deoxyadenosyl radical, which abstracts a hydrogen from C6 of dethiobiotin. This radical attacks the sulfur attached to C9 and forms the thiophane ring of biotin, leaving behind an unstable diferrous cluster that likely dissociates.
The use of an inorganic sulfur source is quite unusual for biosynthetic reactions involving sulfur. However, dethiobiotin contains nonpolar, unactivated carbon atoms at the locations of desired C-S bond formation. The formation of the 5’-dA radical allows for hydrogen abstraction of the unactivated carbons on DTB, leaving behind activated carbon radicals ready to be functionalized. By nature, radical chemistry allows for chain reactions because radicals are easily quenched through C-H bond formation, resulting in another radical on the atom the hydrogen came from. We can consider the possibility of a free sulfide, alkane thiol, or alkane persulfide being used as the sulfur donor for DTB. At physiological pH, these would all be protonated, and the carbon radical would likely be quenched by hydrogen atom transfer rather than by C-S bond formation.
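The stoichiometry of the net reaction above can be checked with a small bookkeeping sketch. This is purely illustrative and not from the article; the counters simply restate the reaction summary.

```python
# Illustrative bookkeeping of the net BioB reaction:
# dethiobiotin + S + 2 SAM -> biotin + 2 L-methionine + 2 5'-deoxyadenosine
from collections import Counter

substrates = Counter({"dethiobiotin": 1, "sulfur": 1, "SAM": 2})
products = Counter({"biotin": 1, "L-methionine": 2, "5'-deoxyadenosine": 2})

# Each SAM equivalent yields one methionine and one 5'-deoxyadenosyl radical,
# which is quenched to 5'-deoxyadenosine after hydrogen abstraction.
assert substrates["SAM"] == products["L-methionine"] == products["5'-deoxyadenosine"]
# The single sulfur atom (donated by the [2Fe-2S] cluster) ends up in biotin.
assert substrates["sulfur"] == products["biotin"] == 1
print("stoichiometry consistent")
```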
Relevance to humans.
Biotin synthase is not found in humans. Since biotin is an important cofactor for many enzymes, humans must consume biotin through their diet from microbial and plant sources. However, the human gut microbiome has been shown to contain "Escherichia coli" that do contain biotin synthase, providing another source of biotin for catalytic use. The amount of "E. coli" that produce biotin is significantly higher in adults than in babies, indicating that the gut microbiome and developmental stage should be taken into account when assessing a person's nutritional needs.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14251390
|
14251410
|
Butyrate—acetoacetate CoA-transferase
|
Class of enzymes
In enzymology, a butyrate-acetoacetate CoA-transferase (EC 2.8.3.9) is an enzyme that catalyzes the chemical reaction
butanoyl-CoA + acetoacetate formula_0 butanoate + acetoacetyl-CoA
Thus, the two substrates of this enzyme are butanoyl-CoA and acetoacetate, whereas its two products are butanoate and acetoacetyl-CoA.
This enzyme belongs to the family of transferases, specifically the CoA-transferases. The systematic name of this enzyme class is butanoyl-CoA:acetoacetate CoA-transferase. Other names in common use include butyryl coenzyme A-acetoacetate coenzyme A-transferase, and butyryl-CoA-acetoacetate CoA-transferase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14251410
|
14251435
|
Choline sulfotransferase
|
Class of enzymes
In enzymology, a choline sulfotransferase (EC 2.8.2.6) is an enzyme that catalyzes the chemical reaction
3'-phosphoadenylyl sulfate + choline formula_0 adenosine 3',5'-bisphosphate + choline sulfate
Thus, the two substrates of this enzyme are 3'-phosphoadenylyl sulfate and choline, whereas its two products are adenosine 3',5'-bisphosphate and choline sulfate.
This enzyme belongs to the family of transferases, specifically the sulfotransferases, which transfer sulfur-containing groups. The systematic name of this enzyme class is 3'-phosphoadenylyl-sulfate:choline sulfotransferase. This enzyme is also called choline sulphokinase. This enzyme participates in sulfur metabolism.
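Every PAPS-dependent sulfotransferase described on this page follows the same scheme, PAPS + acceptor formula_0 PAP + sulfated acceptor, which can be sketched as follows. This is an illustrative sketch only; `sulfotransfer` is a hypothetical helper, and the simple "+ sulfate" naming holds for acceptors like choline but not for every substrate class.

```python
# Hypothetical helper: the shared PAPS sulfotransferase scheme,
# PAPS + acceptor -> PAP + sulfated acceptor.
PAPS = "3'-phosphoadenylyl sulfate"
PAP = "adenosine 3',5'-bisphosphate"

def sulfotransfer(acceptor):
    """Return the product pair for PAPS + acceptor."""
    return PAP, acceptor + " sulfate"

# choline sulfotransferase: PAPS + choline -> PAP + choline sulfate
print(sulfotransfer("choline"))
```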
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14251435
|
14251457
|
Chondroitin 4-sulfotransferase
|
Class of enzymes
In enzymology, a chondroitin 4-sulfotransferase (EC 2.8.2.5) is an enzyme that catalyzes the chemical reaction
3'-phosphoadenosine-5'-phosphosulfate + chondroitin formula_0 adenosine 3',5'-bisphosphate + chondroitin 4'-sulfate
Thus, the two substrates of this enzyme are 3'-phosphoadenylyl sulfate and chondroitin, whereas its two products are adenosine 3',5'-bisphosphate and chondroitin 4'-sulfate.
This enzyme belongs to the family of transferases, to be specific, the sulfotransferases, which transfer sulfur-containing groups. The systematic name of this enzyme class is 3'-phosphoadenylyl-sulfate:chondroitin 4'-sulfotransferase. This enzyme is also called chondroitin sulfotransferase. This enzyme participates in 3 metabolic pathways: chondroitin sulfate biosynthesis, sulfur metabolism, and the biosynthesis of glycan structures.
References.
<templatestyles src="Refbegin/styles.css" />
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14251457
|
14251475
|
Chondroitin 6-sulfotransferase
|
Enzyme family
In enzymology, a chondroitin 6-sulfotransferase (EC 2.8.2.17) is an enzyme that catalyzes the chemical reaction
3'-phosphoadenylyl sulfate + chondroitin formula_0 adenosine 3',5'-bisphosphate + chondroitin 6'-sulfate
Thus, the two substrates of this enzyme are 3'-phosphoadenylyl sulfate and chondroitin, whereas its two products are adenosine 3',5'-bisphosphate and chondroitin 6-sulfate.
This enzyme belongs to the family of transferases, specifically the sulfotransferases, which transfer sulfur-containing groups. The systematic name of this enzyme class is 3'-phosphoadenylyl-sulfate:chondroitin 6'-sulfotransferase. Other names in common use include chondroitin 6-O-sulfotransferase, 3'-phosphoadenosine 5'-phosphosulfate (PAPS):chondroitin sulfate, sulfotransferase, and terminal 6-sulfotransferase. This enzyme participates in chondroitin sulfate biosynthesis and glycan structures - biosynthesis 1.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14251475
|
1425149
|
Holonomic
|
Holonomic (introduced by Heinrich Hertz in 1894 from the Greek ὅλος meaning "whole", "entire" and νόμος meaning "law") may refer to:
See also.
<templatestyles src="Dmbox/styles.css" />
Topics referred to by the same term. This page lists mathematics articles associated with the same title.
|
[
{
"math_id": 0,
"text": "e_k = {\\partial \\over \\partial x^k}"
},
{
"math_id": 1,
"text": "x_j\\,\\!"
},
{
"math_id": 2,
"text": "t\\,\\!"
}
] |
https://en.wikipedia.org/wiki?curid=1425149
|
14251510
|
Citramalate CoA-transferase
|
Class of enzymes
In enzymology, a citramalate CoA-transferase (EC 2.8.3.11) is an enzyme that catalyzes the chemical reaction
acetyl-CoA + citramalate formula_0 acetate + (3S)-citramalyl-CoA
Thus, the two substrates of this enzyme are acetyl-CoA and citramalate, whereas its two products are acetate and (3S)-citramalyl-CoA.
This enzyme belongs to the family of transferases, specifically the CoA-transferases. The systematic name of this enzyme class is acetyl-CoA:citramalate CoA-transferase. This enzyme participates in C5-branched dibasic acid metabolism.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14251510
|
14251525
|
Citrate CoA-transferase
|
Class of enzymes
In enzymology, a citrate CoA-transferase (EC 2.8.3.10) is an enzyme that catalyzes the following chemical reaction:
acetyl-CoA + citrate formula_0 acetate + (3S)-citryl-CoA
Thus, the two substrates of this enzyme are acetyl-CoA and citrate, whereas its two products are acetate and (3S)-citryl-CoA.
This enzyme belongs to the family of transferases, specifically the CoA-transferases. The systematic name of this enzyme class is acetyl-CoA:citrate CoA-transferase. This enzyme participates in the citrate cycle (TCA cycle).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14251525
|
14251560
|
Cortisol sulfotransferase
|
Enzyme
In enzymology, a cortisol sulfotransferase (EC 2.8.2.18) is an enzyme that catalyzes the chemical reaction
3'-phosphoadenylyl sulfate + cortisol formula_0 adenosine 3',5'-bisphosphate + cortisol 21-sulfate
Thus, the two substrates of this enzyme are 3'-phosphoadenylyl sulfate and cortisol, whereas its two products are adenosine 3',5'-bisphosphate and cortisol 21-sulfate.
This enzyme belongs to the family of transferases, specifically the sulfotransferases, which transfer sulfur-containing groups. The systematic name of this enzyme class is 3'-phosphoadenylyl-sulfate:cortisol 21-sulfotransferase. Other names in common use include glucocorticosteroid sulfotransferase, and glucocorticoid sulfotransferase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14251560
|
14251572
|
Cysteine desulfurase
|
Class of enzymes
In enzymology, a cysteine desulfurase (EC 2.8.1.7) is an enzyme that catalyzes the chemical reaction
L-cysteine + [enzyme]-cysteine formula_0 L-alanine + [enzyme]-S-sulfanylcysteine
Thus, the two substrates of this enzyme are L-cysteine and [enzyme]-cysteine, whereas its two products are L-alanine and [enzyme]-S-sulfanylcysteine.
This enzyme belongs to the family of transferases, specifically the sulfurtransferases, which transfer sulfur-containing groups. The systematic name of this enzyme class is L-cysteine:[enzyme cysteine] sulfurtransferase. Other names in common use include IscS, NIFS, NifS, SufS, and cysteine desulfurylase. This enzyme participates in thiamine metabolism.
Structural studies.
As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 1T3I.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14251572
|
14251584
|
Desulfoglucosinolate sulfotransferase
|
Class of enzymes
In enzymology, a desulfoglucosinolate sulfotransferase (EC 2.8.2.24) is an enzyme that catalyzes the chemical reaction
3'-phosphoadenylyl sulfate + desulfoglucotropeolin formula_0 adenosine 3',5'-bisphosphate + glucotropeolin
Thus, the two substrates of this enzyme are 3'-phosphoadenylyl sulfate and desulfoglucotropeolin, whereas its two products are adenosine 3',5'-bisphosphate and glucotropeolin.
This enzyme belongs to the family of transferases, specifically the sulfotransferases, which transfer sulfur-containing groups. The systematic name of this enzyme class is 3'-phosphoadenylyl-sulfate:desulfoglucosinolate sulfotransferase. Other names in common use include PAPS-desulfoglucosinolate sulfotransferase, 3'-phosphoadenosine-5'-phosphosulfate:desulfoglucosinolate, and sulfotransferase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14251584
|
14251596
|
Estrone sulfotransferase
|
Enzyme
Estrone sulfotransferase (EST) (EC 2.8.2.4), also known as estrogen sulfotransferase, is an enzyme that catalyzes the transformation of an unconjugated estrogen like estrone into a sulfated estrogen like estrone sulfate. It is a steroid sulfotransferase and belongs to the family of transferases, to be specific, the sulfotransferases, which transfer sulfur-containing groups. This enzyme participates in androgen and estrogen metabolism and sulfur metabolism.
Steroid sulfatase is an enzyme that catalyzes the reverse reaction, the hydrolysis of a sulfated estrogen back to the unconjugated estrogen.
Reaction.
In enzymology, an EST is an enzyme that catalyzes the following chemical reaction:
3'-phosphoadenylyl sulfate + estrone formula_0 adenosine 3',5'-bisphosphate + estrone 3-sulfate
Thus, the two substrates of this enzyme are 3'-phosphoadenylyl sulfate and estrone, whereas its two products are adenosine 3',5'-bisphosphate and estrone 3-sulfate.
The enzyme also catalyzes the same reaction for estradiol, with estradiol sulfate as the product.
Types.
Two enzymes have been identified that together are thought to represent estrone sulfotransferase (EST):
Structure.
As of late 2007, 5 structures have been solved for this class of enzymes, with PDB accession codes 1AQU, 1AQY, 1BO6, 1G3M, and 1HY3.
Names.
The systematic name of this enzyme class is 3'-phosphoadenylyl-sulfate:estrone 3-sulfotransferase. Other names in common use include 3'-phosphoadenylyl sulfate-estrone 3-sulfotransferase, estrogen sulfotransferase, estrogen sulphotransferase, oestrogen sulphotransferase, and 3'-phosphoadenylylsulfate:oestrone sulfotransferase.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14251596
|
14251609
|
Flavonol 3-sulfotransferase
|
Class of enzymes
In enzymology, a flavonol 3-sulfotransferase (EC 2.8.2.25) is an enzyme that catalyzes the chemical reaction
3'-phosphoadenylyl sulfate + flavonol formula_0 adenosine 3',5'-bisphosphate + flavonol 3-sulfate
Thus, the two substrates of this enzyme are 3'-phosphoadenylyl sulfate and a flavonol, whereas its two products are adenosine 3',5'-bisphosphate and flavonol 3-sulfate. A specific example of a flavonol that can act as a substrate is quercetin.
This enzyme belongs to the family of transferases termed the sulfotransferases, which transfer sulfate groups. The systematic name of this enzyme class is 3'-phosphoadenylyl-sulfate:flavonol 3-sulfotransferase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14251609
|
14251633
|
Formyl-CoA transferase
|
InterPro Family
In enzymology, a formyl-CoA transferase (EC 2.8.3.16) is an enzyme that catalyzes the chemical reaction
formyl-CoA + oxalate formula_0 formate + oxalyl-CoA
Thus, the two substrates of this enzyme are formyl-CoA and oxalate, whereas its two products are formate and oxalyl-CoA.
This enzyme belongs to the family of transferases, specifically the CoA-transferases. The systematic name of this enzyme class is formyl-CoA:oxalate CoA-transferase. Other names in common use include formyl-coenzyme A transferase, and formyl-CoA oxalate CoA-transferase.
Structural studies.
As of late 2007, 4 structures have been solved for this class of enzymes, with PDB accession codes 1T3Z, 1T4C, 1VGQ, and 1VGR.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14251633
|
14251649
|
Galactosylceramide sulfotransferase
|
Class of enzymes
In enzymology, a galactosylceramide sulfotransferase (EC 2.8.2.11) is an enzyme that catalyzes the chemical reaction
3'-phosphoadenylyl sulfate + a galactosylceramide formula_0 adenosine 3',5'-bisphosphate + a galactosylceramidesulfate
Thus, its two substrates are 3'-phosphoadenylyl sulfate and galactosylceramide, and its two products are adenosine 3',5'-bisphosphate and galactosylceramidesulfate.
It belongs to the family of transferases, specifically the sulfotransferases, which transfer sulfur-containing groups. The systematic name of this enzyme class is 3'-phosphoadenylyl-sulfate:galactosylceramide 3'-sulfotransferase. Other names in common use include GSase, 3'-phosphoadenosine-5'-phosphosulfate-cerebroside sulfotransferase, galactocerebroside sulfotransferase, galactolipid sulfotransferase, glycolipid sulfotransferase, and glycosphingolipid sulfotransferase. This enzyme participates in sphingolipid metabolism.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14251649
|
14251665
|
Glutaconate CoA-transferase
|
Class of enzymes
In enzymology, a glutaconate CoA-transferase (EC 2.8.3.12) is an enzyme that catalyzes the chemical reaction
acetyl-CoA + (E)-glutaconate formula_0 acetate + glutaconyl-1-CoA
Thus, the two substrates of this enzyme are acetyl-CoA and (E)-glutaconate, whereas its two products are acetate and glutaconyl-1-CoA.
This enzyme belongs to the family of transferases, specifically the CoA-transferases. The systematic name of this enzyme class is acetyl-CoA:(E)-glutaconate CoA-transferase. This enzyme participates in styrene degradation and butanoate metabolism.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14251665
|
14251685
|
Glycochenodeoxycholate sulfotransferase
|
Class of enzymes
In enzymology, a glycochenodeoxycholate sulfotransferase (EC 2.8.2.34) is an enzyme that catalyzes the chemical reaction
3'-phosphoadenylyl sulfate + glycochenodeoxycholate formula_0 adenosine 3',5'-bisphosphate + glycochenodeoxycholate 7-sulfate
Thus, the two substrates of this enzyme are 3'-phosphoadenylyl sulfate and glycochenodeoxycholate, whereas its two products are adenosine 3',5'-bisphosphate and glycochenodeoxycholate 7-sulfate.
Nomenclature.
This enzyme belongs to the family of transferases, specifically the sulfotransferases, which transfer sulfur-containing groups. The systematic name of this enzyme class is 3'-phosphoadenylyl-sulfate:glycochenodeoxycholate 7-sulfotransferase. Other names in common use include bile acid:3'-phosphoadenosine-5'-phosphosulfate sulfotransferase, bile acid:PAPS:sulfotransferase, and BAST.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14251685
|
14251697
|
(heparan sulfate)-glucosamine 3-sulfotransferase 1
|
Class of enzymes
In enzymology, a [heparan sulfate]-glucosamine 3-sulfotransferase 1 (EC 2.8.2.23) is an enzyme that catalyzes the chemical reaction
3'-phosphoadenylyl sulfate + [heparan sulfate]-glucosamine formula_0 adenosine 3',5'-bisphosphate + [heparan sulfate]-glucosamine 3-sulfate
Thus, the two substrates of this enzyme are 3'-phosphoadenylyl sulfate and heparan sulfate-glucosamine, whereas its two products are adenosine 3',5'-bisphosphate and heparan sulfate-glucosamine 3-sulfate.
This enzyme belongs to the family of transferases, specifically the sulfotransferases, which transfer sulfur-containing groups. The systematic name of this enzyme class is 3'-phosphoadenylyl-sulfate:[heparan sulfate]-glucosamine 3-sulfotransferase. Other names in common use include heparin-glucosamine 3-O-sulfotransferase, 3'-phosphoadenylyl-sulfate:heparin-glucosamine 3-O-sulfotransferase, glucosaminyl 3-O-sulfotransferase, heparan sulfate D-glucosaminyl 3-O-sulfotransferase, and isoform/isozyme 1 (3-OST-1, HS3ST1). This enzyme participates in heparan sulfate biosynthesis and glycan structures - biosynthesis 1.
Structural studies.
As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 1VKJ and 1ZRH.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14251697
|
14251718
|
(heparan sulfate)-glucosamine 3-sulfotransferase 2
|
Class of enzymes
In enzymology, a [heparan sulfate]-glucosamine 3-sulfotransferase 2 (EC 2.8.2.29) is an enzyme that catalyzes the chemical reaction
3'-phosphoadenylyl sulfate + [heparan sulfate]-glucosamine formula_0 adenosine 3',5'-bisphosphate + [heparan sulfate]-glucosamine 3-sulfate
Thus, the two substrates of this enzyme are 3'-phosphoadenylyl sulfate and heparan sulfate-glucosamine, whereas its two products are adenosine 3',5'-bisphosphate and heparan sulfate-glucosamine 3-sulfate.
This enzyme belongs to the family of transferases, specifically the sulfotransferases, which transfer sulfur-containing groups. The systematic name of this enzyme class is 3'-phosphoadenylyl-sulfate:[heparan sulfate]-glucosamine 3-sulfotransferase. Other names in common use include glucosaminyl 3-O-sulfotransferase, heparan sulfate D-glucosaminyl 3-O-sulfotransferase, and isoform/isozyme 2 (3-OST-2, HS3ST2). This enzyme participates in heparan sulfate biosynthesis and glycan structures - biosynthesis 1.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14251718
|
14251736
|
(heparan sulfate)-glucosamine 3-sulfotransferase 3
|
Class of enzymes
In enzymology, a [heparan sulfate]-glucosamine 3-sulfotransferase 3 (EC 2.8.2.30) is an enzyme that catalyzes the chemical reaction
3'-phosphoadenylyl sulfate + [heparan sulfate]-glucosamine formula_0 adenosine 3',5'-bisphosphate + [heparan sulfate]-glucosamine 3-sulfate
Thus, the two substrates of this enzyme are 3'-phosphoadenylyl sulfate and heparan sulfate-glucosamine, whereas its two products are adenosine 3',5'-bisphosphate and heparan sulfate-glucosamine 3-sulfate.
This enzyme belongs to the family of transferases, specifically the sulfotransferases, which transfer sulfur-containing groups. The systematic name of this enzyme class is 3'-phosphoadenylyl-sulfate:[heparan sulfate]-glucosamine 3-sulfotransferase. This enzyme participates in heparan sulfate biosynthesis and glycan structures - biosynthesis 1.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14251736
|
14251748
|
(heparan sulfate)-glucosamine N-sulfotransferase
|
Class of enzymes
In enzymology, a [heparan sulfate]-glucosamine N-sulfotransferase (EC 2.8.2.8) is an enzyme that catalyzes the chemical reaction
3'-phosphoadenylyl sulfate + [heparan sulfate]-glucosamine formula_0 adenosine 3',5'-bisphosphate + [heparan sulfate]-N-sulfoglucosamine
Thus, the two substrates of this enzyme are 3'-phosphoadenylyl sulfate and heparan sulfate-glucosamine, whereas its two products are adenosine 3',5'-bisphosphate and heparan sulfate-N-sulfoglucosamine.
This enzyme belongs to the family of transferases, specifically the sulfotransferases, which transfer sulfur-containing groups. The systematic name of this enzyme class is 3'-phosphoadenylyl-sulfate:[heparan sulfate]-glucosamine N-sulfotransferase. Other names in common use include heparin N-sulfotransferase, 3'-phosphoadenylylsulfate:N-desulfoheparin sulfotransferase, PAPS:N-desulfoheparin sulfotransferase, PAPS:DSH sulfotransferase, N-HSST, N-heparan sulfate sulfotransferase, heparan sulfate N-deacetylase/N-sulfotransferase, heparan sulfate 2-N-sulfotransferase, heparan sulfate N-sulfotransferase, heparan sulfate sulfotransferase, N-desulfoheparin sulfotransferase, desulfoheparin sulfotransferase, 3'-phosphoadenylyl-sulfate:N-desulfoheparin N-sulfotransferase, heparitin sulfotransferase, and 3'-phosphoadenylyl-sulfate:heparitin N-sulfotransferase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14251748
|
14251781
|
Keratan sulfotransferase
|
Class of enzymes
In enzymology, a keratan sulfotransferase (EC 2.8.2.21) is an enzyme that catalyzes the chemical reaction
3'-phosphoadenylyl sulfate + keratan formula_0 adenosine 3',5'-bisphosphate + keratan 6'-sulfate
Thus, the two substrates of this enzyme are 3'-phosphoadenylyl sulfate and keratan, whereas its two products are adenosine 3',5'-bisphosphate and keratan 6'-sulfate.
This enzyme belongs to the family of transferases, specifically the sulfotransferases, which transfer sulfur-containing groups. The systematic name of this enzyme class is 3'-phosphoadenylyl-sulfate:keratan 6'-sulfotransferase. Other names in common use include 3'-phosphoadenylyl keratan sulfotransferase, keratan sulfate sulfotransferase, and 3'-phosphoadenylylsulfate:keratan sulfotransferase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14251781
|
14251824
|
L-seryl-tRNASec selenium transferase
|
Selenocysteine biosynthesis enzyme
In enzymology, a L-seryl-tRNASec selenium transferase (EC 2.9.1.1) is an enzyme that catalyzes the chemical reaction
L-seryl-tRNASec + selenophosphate formula_0 L-selenocysteinyl-tRNASec + phosphate
Thus, the two substrates of this enzyme are L-seryl-tRNASec and selenophosphate, whereas its two products are L-selenocysteinyl-tRNASec and phosphate.
This enzyme belongs to the family of transferases, specifically those transferring selenium-containing groups (selenotransferases). The systematic name of this enzyme class is selenophosphate:L-seryl-tRNASec selenium transferase. Other names in common use include L-selenocysteinyl-tRNASel synthase, L-selenocysteinyl-tRNASec synthase, selenocysteine synthase, and cysteinyl-tRNASec-selenium transferase. This enzyme participates in selenoamino acid metabolism. It employs one cofactor, pyridoxal phosphate.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14251824
|
1425185
|
Spray (mathematics)
|
Vector field on tangent bundle
In differential geometry, a spray is a vector field "H" on the tangent bundle "TM" that encodes a quasilinear second order system of ordinary differential equations on the base manifold "M". Usually a spray is required to be homogeneous in the sense that its integral curves "t"→ΦHt(ξ)∈"TM" obey the rule ΦHt(λξ)=ΦHλt(ξ) under positive re-parameterizations. If this requirement is dropped, "H" is called a semi-spray.
Sprays arise naturally in Riemannian and Finsler geometry as the geodesic sprays whose integral curves are precisely the tangent curves of locally length minimizing curves.
Semisprays arise naturally as the extremal curves of action integrals in Lagrangian mechanics. Generalizing all these examples, any (possibly nonlinear) connection on "M" induces a semispray "H", and conversely, any semispray "H" induces a torsion-free nonlinear connection on "M". If the original connection is torsion-free it coincides with the connection induced by "H", and homogeneous torsion-free connections are in one-to-one correspondence with full sprays.
Formal definitions.
Let "M" be a differentiable manifold and ("TM",π"TM","M") its tangent bundle. Then a vector field "H" on "TM" (that is, a section of the double tangent bundle "TTM") is a semi-spray on "M", if any of the three following equivalent conditions holds:
A semispray "H" on "M" is a (full) spray if any of the following equivalent conditions hold:
Let formula_0 be the local coordinates on formula_1 associated with the local coordinates formula_2) on formula_3 using the coordinate basis on each tangent space. Then formula_4 is a semi-spray on formula_3 if it has a local representation of the form
formula_5
on each associated coordinate system on "TM". The semispray "H" is a (full) spray, if and only if the spray coefficients "G""i" satisfy
formula_6
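For a concrete check of this condition, the sketch below (an illustration not from the article, using SymPy and a hypothetical coefficient function Gamma) verifies that a spray coefficient quadratic in the fibre variable satisfies the homogeneity condition, while adding a term linear in the fibre variable yields only a semi-spray:

```python
import sympy as sp

x, xi, lam = sp.symbols('x xi lam', positive=True)

# A geodesic-type spray coefficient is quadratic in the fibre variable xi,
# e.g. G(x, xi) = Gamma(x) * xi**2 / 2 for some coefficient function Gamma.
Gamma = sp.Function('Gamma')
G = Gamma(x) * xi**2 / 2

# Full-spray condition: G(x, lam*xi) = lam**2 * G(x, xi) for lam > 0
assert sp.simplify(G.subs(xi, lam * xi) - lam**2 * G) == 0

# A term linear in xi breaks the condition, giving only a semi-spray:
G_semi = G + xi
assert sp.simplify(G_semi.subs(xi, lam * xi) - lam**2 * G_semi) != 0
```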
Semi-sprays in Lagrangian mechanics.
A physical system is modeled in Lagrangian mechanics by a Lagrangian function "L":"TM"→R on the tangent bundle of some configuration space "M". The dynamical law is obtained from Hamilton's principle, which states that the time evolution γ:["a","b"]→"M" of the state of the system is stationary for the action integral
formula_7.
In the associated coordinates on "TM" the first variation of the action integral reads as
formula_8
where "X":["a","b"]→R is the variation vector field associated with the variation γ"s":["a","b"]→"M" around γ("t") = γ0("t"). This first variation formula can be recast in a more informative form by introducing the following concepts:
If the Legendre condition is satisfied, then "d"α∈Ω2("TM") is a symplectic form, and there exists a unique Hamiltonian vector field "H" on "TM" corresponding to the Hamiltonian function "E" such that
formula_20.
Let ("X""i","Y""i") be the components of the Hamiltonian vector field "H" in the associated coordinates on "TM". Then
formula_21
and
formula_22
so we see that the Hamiltonian vector field "H" is a semi-spray on the configuration space "M" with the spray coefficients
formula_23
Now the first variational formula can be rewritten as
formula_24
and we see that γ:["a","b"]→"M" is stationary for the action integral with fixed end points if and only if its tangent curve γ':["a","b"]→"TM" is an integral curve for the Hamiltonian vector field "H". Hence the dynamics of mechanical systems are described by semisprays arising from action integrals.
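As a minimal worked example (an illustration, not from the article), the spray coefficient formula above can be evaluated symbolically for a one-dimensional Lagrangian L = ½g(x)ξ² with an assumed metric g(x) = e^{2x}; the result agrees with the geodesic equation obtained from the Christoffel symbol Γ = g′/(2g):

```python
import sympy as sp

x, xi = sp.symbols('x xi')

# 1-D Riemannian-type Lagrangian L = (1/2) g(x) xi^2 with g(x) = exp(2x)
g = sp.exp(2 * x)
L = g * xi**2 / 2

# Spray coefficient G = (g^{-1}/2) * (d^2L/(dxi dx) * xi - dL/dx)
G = sp.simplify((1 / g) / 2 * (sp.diff(L, xi, x) * xi - sp.diff(L, x)))
# -> G = xi**2 / 2 for this metric

# Cross-check: for this metric the Christoffel symbol is Gamma = g'/(2g),
# and the geodesic equation x'' + Gamma * x'^2 = 0 must match x'' + 2G = 0.
Gamma = sp.diff(g, x) / (2 * g)
assert sp.simplify(2 * G - Gamma * xi**2) == 0
```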
Geodesic spray.
The locally length minimizing curves of Riemannian and Finsler manifolds are called geodesics. Using the framework of Lagrangian mechanics one can describe these curves with spray structures. Define a Lagrangian function on "TM" by
formula_25
where "F":"TM"→R is the Finsler function. In the Riemannian case one uses "F"2("x",ξ) = "g""ij"("x")ξ"i"ξ"j". Now introduce the concepts from the section above. In the Riemannian case it turns out that the fundamental tensor "g""ij"("x",ξ) is simply the Riemannian metric "g""ij"("x"). In the general case the homogeneity condition
formula_26
of the Finsler-function implies the following formulae:
formula_27
In terms of classical mechanics, the last equation states that all the energy in the system ("M","L") is in the kinetic form. Furthermore, one obtains the homogeneity properties
formula_28
of which the last one says that the Hamiltonian vector field "H" for this mechanical system is a full spray. The constant speed geodesics of the underlying Finsler (or Riemannian) manifold are described by this spray for the following reasons:
formula_31
Therefore, a curve formula_30 is stationary to the action integral if and only if it is of constant speed and stationary to the length functional. The Hamiltonian vector field "H" is called the "geodesic spray" of the Finsler manifold ("M","F") and the corresponding flow Φ"H"t(ξ) is called the "geodesic flow".
Correspondence with nonlinear connections.
A semi-spray formula_4 on a smooth manifold formula_3 defines an Ehresmann connection formula_32 on the slit tangent bundle through its horizontal and vertical projections
formula_33
formula_34
This connection on "TM"\0 always has a vanishing torsion tensor, which is defined as the Frölicher-Nijenhuis bracket "T"=["J","v"]. In more elementary terms, the torsion can be defined as
formula_35
Introducing the canonical vector field "V" on "TM"\0 and the adjoint structure Θ of the induced connection the horizontal part of the semi-spray can be written as "hH"=Θ"V". The vertical part ε="vH" of the semispray is known as the first spray invariant, and the semispray "H" itself decomposes into
formula_36
The first spray invariant is related to the tension
formula_37
of the induced non-linear connection through the ordinary differential equation
formula_38
Therefore, the first spray invariant ε (and hence the whole semi-spray "H") can be recovered from the non-linear connection by
formula_39
From this relation one also sees that the induced connection is homogeneous if and only if "H" is a full spray.
Jacobi fields of sprays and semi-sprays.
A good source for Jacobi fields of semisprays is Section 4.4, "Jacobi equations of a semi-spray" of the publicly available book "Finsler-Lagrange Geometry" by Bucătaru and Miron. Of particular note is their concept of a dynamic covariant derivative. In another paper Bucătaru, Constantinescu and Dahl relate this concept to that of the Kosambi bi-derivative operator.
For a good introduction to Kosambi's methods, see the article "What is Kosambi-Cartan-Chern theory?".
|
[
{
"math_id": 0,
"text": "(x^i,\\xi^i)"
},
{
"math_id": 1,
"text": "TM"
},
{
"math_id": 2,
"text": "(x^i"
},
{
"math_id": 3,
"text": "M"
},
{
"math_id": 4,
"text": "H"
},
{
"math_id": 5,
"text": " H_\\xi = \\xi^i\\frac{\\partial}{\\partial x^i}\\Big|_{(x,\\xi)} - 2G^i(x,\\xi)\\frac{\\partial}{\\partial \\xi^i}\\Big|_{(x,\\xi)}."
},
{
"math_id": 6,
"text": "G^i(x,\\lambda\\xi) = \\lambda^2G^i(x,\\xi),\\quad \\lambda>0.\\,"
},
{
"math_id": 7,
"text": "\\mathcal S(\\gamma) := \\int_a^b L(\\gamma(t),\\dot\\gamma(t))dt"
},
{
"math_id": 8,
"text": "\\frac{d}{ds}\\Big|_{s=0}\\mathcal S(\\gamma_s)\n= \\Big|_a^b \\frac{\\partial L}{\\partial\\xi^i}X^i - \\int_a^b \\Big(\\frac{\\partial^2 L}{\\partial \\xi^j\\partial \\xi^i} \\ddot\\gamma^j\n+ \\frac{\\partial^2 L}{\\partial x^j\\partial\\xi^i} \\dot\\gamma^j - \\frac{\\partial L}{\\partial x^i} \\Big) X^i dt,\n"
},
{
"math_id": 9,
"text": "\\alpha_\\xi = \\alpha_i(x,\\xi) dx^i|_x\\in T_x^*M"
},
{
"math_id": 10,
"text": "\\alpha_i(x,\\xi) = \\tfrac{\\partial L}{\\partial \\xi^i}(x,\\xi)"
},
{
"math_id": 11,
"text": "\\xi \\in T_xM "
},
{
"math_id": 12,
"text": "\\alpha\\in\\Omega^1(TM)"
},
{
"math_id": 13,
"text": "\\alpha_\\xi = \\alpha_i(x,\\xi) dx^i|_{(x,\\xi)}\\in T^*_\\xi TM"
},
{
"math_id": 14,
"text": "g_\\xi = g_{ij}(x,\\xi)(dx^i\\otimes dx^j)|_x"
},
{
"math_id": 15,
"text": "g_{ij}(x,\\xi) = \\tfrac{\\partial^2 L}{\\partial \\xi^i \\partial \\xi^j}(x,\\xi)"
},
{
"math_id": 16,
"text": "\\displaystyle g_\\xi"
},
{
"math_id": 17,
"text": "\\displaystyle g_{ij}(x,\\xi)"
},
{
"math_id": 18,
"text": "\\displaystyle g^{ij}(x,\\xi)"
},
{
"math_id": 19,
"text": "\\displaystyle E(\\xi) = \\alpha_\\xi(\\xi) - L(\\xi)"
},
{
"math_id": 20,
"text": "\\displaystyle dE = - \\iota_H d\\alpha"
},
{
"math_id": 21,
"text": " \\iota_H d\\alpha = Y^i \\frac{\\partial^2 L}{\\partial\\xi^i\\partial x^j} dx^j - X^i \\frac{\\partial^2 L}{\\partial\\xi^i\\partial x^j} d\\xi^j "
},
{
"math_id": 22,
"text": " dE = \\Big(\\frac{\\partial^2 L}{\\partial x^i \\partial \\xi^j}\\xi^j - \\frac{\\partial L}{\\partial x^i}\\Big)dx^i +\n\\xi^j \\frac{\\partial^2 L}{\\partial\\xi^i\\partial x^j} d\\xi^i "
},
{
"math_id": 23,
"text": "G^k(x,\\xi) = \\frac{g^{ki}}{2}\\Big(\\frac{\\partial^2 L}{\\partial\\xi^i\\partial x^j}\\xi^j - \\frac{\\partial L}{\\partial x^i}\\Big). "
},
{
"math_id": 24,
"text": "\\frac{d}{ds}\\Big|_{s=0}\\mathcal S(\\gamma_s)\n= \\Big|_a^b \\alpha_i X^i - \\int_a^b g_{ik}(\\ddot\\gamma^k+2G^k)X^i dt,\n"
},
{
"math_id": 25,
"text": "L(x,\\xi) = \\tfrac{1}{2}F^2(x,\\xi),"
},
{
"math_id": 26,
"text": "F(x,\\lambda\\xi) = \\lambda F(x,\\xi), \\quad \\lambda>0"
},
{
"math_id": 27,
"text": " \\alpha_i=g_{ij}\\xi^i, \\quad F^2=g_{ij}\\xi^i\\xi^j, \\quad E = \\alpha_i\\xi^i - L = \\tfrac{1}{2}F^2. "
},
{
"math_id": 28,
"text": " g_{ij}(\\lambda\\xi) = g_{ij}(\\xi), \\quad \\alpha_i(x,\\lambda\\xi) = \\lambda \\alpha_i(x,\\xi), \\quad\nG^i(x,\\lambda\\xi) = \\lambda^2 G^i(x,\\xi), "
},
{
"math_id": 29,
"text": "F(\\gamma(t),\\dot\\gamma(t))=\\lambda"
},
{
"math_id": 30,
"text": "\\gamma:[a,b]\\to M"
},
{
"math_id": 31,
"text": " \\mathcal S(\\gamma) = \\frac{(b-a)\\lambda^2}{2} = \\frac{\\ell(\\gamma)^2}{2(b-a)}. "
},
{
"math_id": 32,
"text": "T(TM\\setminus 0) = H(TM\\setminus 0) \\oplus V(TM\\setminus 0)"
},
{
"math_id": 33,
"text": " h:T(TM\\setminus 0)\\to T(TM\\setminus 0) \\quad ; \\quad h = \\tfrac{1}{2}\\big( I - \\mathcal L_H J \\big),"
},
{
"math_id": 34,
"text": " v:T(TM\\setminus 0)\\to T(TM\\setminus 0) \\quad ; \\quad v = \\tfrac{1}{2}\\big( I + \\mathcal L_H J \\big)."
},
{
"math_id": 35,
"text": "\\displaystyle T(X,Y) = J[hX,hY] - v[JX,hY] - v[hX,JY]. "
},
{
"math_id": 36,
"text": "\\displaystyle H = \\Theta V + \\epsilon. "
},
{
"math_id": 37,
"text": " \\tau = \\mathcal L_Vv = \\tfrac{1}{2}\\mathcal L_{[V,H]-H} J"
},
{
"math_id": 38,
"text": " \\mathcal L_V\\epsilon+\\epsilon = \\tau\\Theta V. "
},
{
"math_id": 39,
"text": "\n\\epsilon|_\\xi = \\int\\limits_{-\\infty}^0 e^{-s}(\\Phi_V^{-s})_*(\\tau\\Theta V)|_{\\Phi_V^s(\\xi)} ds.\n"
}
] |
https://en.wikipedia.org/wiki?curid=1425185
|
1425324
|
Niven's laws
|
Author Larry Niven's rules about how the universe works
Niven's laws were named after science fiction author Larry Niven, who has periodically published them as "how the Universe works" as far as he can tell. These were most recently rewritten on January 29, 2002 (and published in "Analog" magazine in the November 2002 issue). Among the rules are:
Others.
Niven's Law (Time travel).
A different law is given this name in Niven's essay "The Theory and Practice of Time Travel":
If the universe of discourse permits the possibility of time travel and of changing the past, then no time machine will be invented in that universe.
Hans Moravec glosses this version of Niven's Law as follows:
There is a spookier possibility: Suppose it is easy to send messages to the past, but that forward causality also holds (i.e. past events determine the future). In one way of reasoning about it, a message sent to the past will "alter" the entire history following its receipt, including the event that sent it, and thus the message itself. Thus altered, the message will change the past in a different way, and so on, until some "equilibrium" is reached – the simplest being the situation where no message at all is sent. Time travel may thus act to erase itself (an idea Larry Niven fans will recognize as "Niven's Law").
Ryan North examines this law in Dinosaur Comics #1818.
This proposition is also extensively examined in James P. Hogan's "Thrice Upon a Time".
Niven's Law (re: Clarke's Third Law).
Niven's Law is also a term given to the converse of Clarke's third law, so Niven's Law reads: "Any sufficiently advanced magic is indistinguishable from technology." However, it has also been credited as being from Terry Pratchett. "Keystone Folklore" identifies it as a "fan-composed corollary slogan" of Arthur C. Clarke fans. Gregory Benford, in his January 30, 2013 "Variations on Clarke's Third Law", identifies it as a corollary to Clarke's third law.
Both Clarke's Third Law and Niven's Law are referenced in part 2 of the serial "Battlefield" from season 26 of "Doctor Who", first aired September 13, 1989. In this episode, the Doctor and his companion Ace have entered a trans-dimensional spaceship. While discussing the ship itself, the Doctor asks his companion if she knows Clarke's Law, which she then recites: "Any advanced form of technology is indistinguishable from magic." The Doctor replies that the reverse is true and Ace voices this, working through the inverse, "any advanced form of magic is indistinguishable from technology."
"Niven's Laws" (stories).
Niven's Laws is also the title of a 1984 collection of Niven's short stories.
Included in the 1989 collection "N-Space" are six laws titled "Niven's Laws for Writers". They are:
In the acknowledgments of his 2003 novel "Conquistador", S.M. Stirling wrote:
And a special acknowledgment to the author of Niven's Law: "There is a technical, literary term for those who mistake the opinions and beliefs of characters in a novel for those of the author. The term is 'idiot'."
"Niven's Laws" (from "Known Space").
Drawn from "Known Space: The Future Worlds of Larry Niven"
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " F \\times S = k ~."
}
] |
https://en.wikipedia.org/wiki?curid=1425324
|
1425430
|
Hull speed
|
Speed at which the wavelength of a vessel's bow wave is equal to the waterline length
Hull speed or displacement speed is the speed at which the wavelength of a vessel's bow wave is equal to the waterline length of the vessel. As boat speed increases from rest, the wavelength of the bow wave increases, and usually its crest-to-trough dimension (height) increases as well. When hull speed is exceeded, a vessel in displacement mode will appear to be climbing up the back of its bow wave.
From a technical perspective, at hull speed the bow and stern waves interfere constructively, creating relatively large waves, and thus a relatively large value of wave drag. Ship drag for a displacement hull increases smoothly with speed as hull speed is approached and exceeded, often with no noticeable inflection at hull speed.
The concept of hull speed is not used in modern naval architecture, where the speed/length ratio or the Froude number is considered more helpful.
Background.
As a ship moves in the water, it creates standing waves that oppose its movement. This effect increases dramatically in full-formed hulls at a Froude number of about 0.35 (which corresponds to a speed/length ratio (see below for definition) of slightly less than 1.20 knot·ft−½) because of the rapid increase of resistance from the transverse wave train. When the Froude number grows to ~0.40 (speed/length ratio ~1.35), the wave-making resistance increases further from the divergent wave train. This trend of increase in wave-making resistance continues up to a Froude number of ~0.45 (speed/length ratio ~1.50), and peaks at a Froude number of ~0.50 (speed/length ratio ~1.70).
This very sharp rise in resistance at speed/length ratio around 1.3 to 1.5 probably seemed insurmountable in early sailing ships and so became an apparent barrier. This led to the concept of hull speed.
Empirical calculation and speed/length ratio.
Hull speed can be calculated by the following formula:
formula_0
where
formula_1 is the length of the waterline in feet, and
formula_2 is the hull speed of the vessel in knots
If the length of waterline is given in metres and desired hull speed in knots, the coefficient is 2.43 kn·m−½. The constant may be given as 1.34 to 1.51 knot·ft−½ in imperial units (depending on the source), or 4.50 to 5.07 km·h−1·m−½ in metric units, or 1.25 to 1.41 m·s−1·m−½ in SI units.
The ratio of speed to formula_3 is often called the "speed/length ratio", even though it is a ratio of speed to the square root of length.
First principles calculation.
Because the hull speed is related to the length of the boat and the wavelength of the wave it produces as it moves through water, there is another formula that arrives at the same values for hull speed based on the waterline length.
formula_4
where
formula_1 is the length of the waterline in meters,
formula_2 is the hull speed of the vessel in meters per second, and
formula_5 is the acceleration due to gravity in meters per second squared.
This equation is the same as the equation used to calculate the speed of surface water waves in deep water. It dramatically simplifies the units on the constant before the radical in the empirical equation, while giving a deeper understanding of the principles at play.
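The agreement between the empirical rule and the deep-water wave formula can be checked numerically; the following sketch (illustrative values, not from the article) evaluates both for an assumed 36 ft waterline:

```python
import math

KNOT_IN_M_PER_S = 0.514444   # 1 knot expressed in m/s
FT_IN_M = 0.3048

def hull_speed_empirical_kn(lwl_ft):
    """Empirical rule: v = 1.34 * sqrt(LWL), LWL in feet, result in knots."""
    return 1.34 * math.sqrt(lwl_ft)

def hull_speed_wave_kn(lwl_m, g=9.80665):
    """Deep-water wave speed for wavelength = LWL: v = sqrt(g*LWL/(2*pi))."""
    return math.sqrt(g * lwl_m / (2 * math.pi)) / KNOT_IN_M_PER_S

lwl_ft = 36.0                                  # an assumed 36 ft waterline
v_emp = hull_speed_empirical_kn(lwl_ft)        # ~8.04 kn
v_wave = hull_speed_wave_kn(lwl_ft * FT_IN_M)  # ~8.04 kn
# The two formulas agree to well under 1%, since 1.34 ≈ sqrt(g/(2*pi))
# expressed in knots per square root of a foot.
```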
Hull design implications.
Wave-making resistance depends on the proportions and shape of the hull: many modern displacement designs can exceed their hull speed even without planing. These include hulls with very fine ends, long hulls with relatively narrow beam and wave-piercing designs. Such hull forms are commonly used by canoes, competitive rowing boats, catamarans, and fast ferries. For example, racing kayaks can exceed hull speed by more than 100% even though they do not plane.
Heavy boats with hulls designed for planing generally cannot exceed hull speed without planing.
Ultra light displacement boats are designed to plane and thereby circumvent the limitations of hull speed.
Semi-displacement hulls are usually intermediate between these two extremes.
|
[
{
"math_id": 0,
"text": "v_{hull} \\approx 1.34 \\times \\sqrt{L_{WL}}"
},
{
"math_id": 1,
"text": "L_{WL}"
},
{
"math_id": 2,
"text": "v_{hull}"
},
{
"math_id": 3,
"text": "\\sqrt{L_{WL}}"
},
{
"math_id": 4,
"text": "v_{hull} = \\sqrt{L_{WL}\\cdot g \\over 2 \\pi}"
},
{
"math_id": 5,
"text": "g"
}
] |
https://en.wikipedia.org/wiki?curid=1425430
|
1425449
|
Coupling (computer programming)
|
Degree of interdependence between software modules
In software engineering, coupling is the degree of interdependence between software modules; a measure of how closely connected two routines or modules are; the strength of the relationships between modules. Coupling is not binary; it is multi-dimensional.
Coupling is usually contrasted with cohesion. Low coupling often correlates with high cohesion, and vice versa. Low coupling is often thought to be a sign of a well-structured computer system and a good design, and when combined with high cohesion, supports the general goals of high readability and maintainability.
History.
The software quality metrics of coupling and cohesion were invented by Larry Constantine in the late 1960s as part of structured design, based on characteristics of "good" programming practices that reduced maintenance and modification costs. Structured design, including cohesion and coupling, was published in the article "Stevens, Myers & Constantine" (1974) and the book "Yourdon & Constantine" (1979); the terms subsequently became standard.
Types of coupling.
Coupling can be "low" (also "loose" and "weak") or "high" (also "tight" and "strong"). Some types of coupling, in order of highest to lowest coupling, are as follows:
Procedural programming.
A module here refers to a subroutine of any kind, i.e. a set of one or more statements having a name and preferably its own set of variable names.
In this situation, a modification in a field that a module does not need may lead to changing the way the module reads the record.
Object-oriented programming.
In recent work various other coupling concepts have been investigated and used as indicators for different modularization principles used in practice.
Dynamic coupling.
The goal of defining and measuring this type of coupling is to provide a run-time evaluation of a software system. It has been argued that static coupling metrics lose precision when dealing with an intensive use of dynamic binding or inheritance. In the attempt to solve this issue, dynamic coupling measures have been taken into account.
Semantic coupling.
This kind of a coupling metric considers the conceptual similarities between software entities using, for example, comments and identifiers and relying on techniques such as latent semantic indexing (LSI).
Logical coupling.
Logical coupling (or evolutionary coupling or change coupling) analysis exploits the release history of a software system to find change patterns among modules or classes: e.g., entities that are likely to be changed together or sequences of changes (a change in a class A is always followed by a change in a class B).
Dimensions of coupling.
According to Gregor Hohpe, coupling is multi-dimensional:
Disadvantages of tight coupling.
Tightly coupled systems tend to exhibit the following developmental characteristics, which are often seen as disadvantages:
Performance issues.
Whether loosely or tightly coupled, a system's performance is often reduced by message and parameter creation, transmission, translation (e.g. marshaling) and message interpretation. A simple message (which might be a reference to a string, array or data structure) requires less overhead than a complicated message such as a SOAP message. Longer messages require more CPU and memory to produce. To optimize runtime performance, message length must be minimized and message meaning must be maximized.
Solutions.
One approach to decreasing coupling is functional design, which seeks to limit the responsibilities of modules along functionality. Coupling increases between two classes A and B if:
Low coupling refers to a relationship in which one module interacts with another module through a simple and stable interface and does not need to be concerned with the other module's internal implementation (see Information Hiding).
Systems such as CORBA or COM allow objects to communicate with each other without having to know anything about the other object's implementation. Both of these systems even allow for objects to communicate with objects written in other languages.
Coupling versus cohesion.
Coupling and cohesion are terms which occur together very frequently. Coupling refers to the interdependencies between modules, while cohesion describes how related the functions within a single module are. Low cohesion implies that a given module performs tasks which are not very related to each other and hence can create problems as the module becomes large.
Module coupling.
Coupling in Software Engineering describes a version of metrics associated with this concept.
For data and control flow coupling:
For global coupling:
For environmental coupling:
formula_0
This metric makes the value larger the more coupled the module is; it ranges from approximately 0.67 (low coupling) to 1.0 (highly coupled).
For example, if a module has only a single input and output data parameter
formula_1
If a module has 5 input and output data parameters, an equal number of control parameters, and accesses 10 items of global data, with a fan-in of 3 and a fan-out of 4,
formula_2
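The worked values above can be reproduced with a short function (a sketch with hypothetical parameter names, following the formula and the two examples):

```python
def module_coupling(di, ci, do, co, gd, gc, w, r):
    """Module coupling metric C = 1 - 1/denominator.

    di, do: input/output data parameters; ci, co: input/output control
    parameters; gd, gc: global variables used as data/control;
    w: fan-out (modules called); r: fan-in (modules calling this one).
    """
    return 1 - 1 / (di + 2*ci + do + 2*co + gd + 2*gc + w + r)

# One input and one output data parameter, fan-out 1:
low = module_coupling(di=1, ci=0, do=1, co=0, gd=0, gc=0, w=1, r=0)    # ~0.67

# 5 data parameters each way, 5 control each way, 10 global data items,
# fan-in 3 and fan-out 4:
high = module_coupling(di=5, ci=5, do=5, co=5, gd=10, gc=0, w=4, r=3)  # ~0.98
```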
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathrm{Coupling}(C) = 1 - \\frac{1}{d_{i} + 2\\times c_{i} + d_{o} + 2\\times c_{o} + g_{d} + 2\\times g_{c} + w + r}"
},
{
"math_id": 1,
"text": "C = 1 - \\frac{1}{1+0+1+0+0+0+1+0} = 1 - \\frac{1}{3} = 0.67"
},
{
"math_id": 2,
"text": "C = 1 - \\frac{1}{5 + 2\\times 5 + 5 + 2\\times 5 + 10 + 0 + 3 + 4} = 0.98"
}
] |
https://en.wikipedia.org/wiki?curid=1425449
|
1425534
|
Multipole expansion
|
Mathematical series
A multipole expansion is a mathematical series representing a function that depends on angles—usually the two angles used in the spherical coordinate system (the polar and azimuthal angles) for three-dimensional Euclidean space, formula_0. Similarly to Taylor series, multipole expansions are useful because oftentimes only the first few terms are needed to provide a good approximation of the original function. The function being expanded may be real- or complex-valued and is defined either on formula_0, or less often on formula_1 for some other formula_2.
Multipole expansions are used frequently in the study of electromagnetic and gravitational fields, where the fields at distant points are given in terms of sources in a small region. The multipole expansion with angles is often combined with an expansion in radius. Such a combination gives an expansion describing a function throughout three-dimensional space.
The multipole expansion is expressed as a sum of terms with progressively finer angular features (moments). The first (the zeroth-order) term is called the monopole moment, the second (the first-order) term is called the dipole moment, the third (the second-order) the quadrupole moment, the fourth (third-order) term is called the octupole moment, and so on. Given the limitation of Greek numeral prefixes, terms of higher order are conventionally named by adding "-pole" to the number of poles—e.g., 32-pole (rarely dotriacontapole or triacontadipole) and 64-pole (rarely tetrahexacontapole or hexacontatetrapole). A multipole moment usually involves powers (or inverse powers) of the distance to the origin, as well as some angular dependence.
In principle, a multipole expansion provides an exact description of the potential, and generally converges under two conditions: (1) if the sources (e.g. charges) are localized close to the origin and the point at which the potential is observed is far from the origin; or (2) the reverse, i.e., if the sources are located far from the origin and the potential is observed close to the origin. In the first (more common) case, the coefficients of the series expansion are called "exterior multipole moments" or simply "multipole moments" whereas, in the second case, they are called "interior multipole moments".
Expansion in spherical harmonics.
Most commonly, the series is written as a sum of spherical harmonics. Thus, we might write a function formula_3 as the sum
formula_4
where formula_5 are the standard spherical harmonics, and formula_6 are constant coefficients which depend on the function. The term formula_7 represents the monopole; formula_8 represent the dipole; and so on. Equivalently, the series is also frequently written as
formula_9
where the formula_10 represent the components of a unit vector in the direction given by the angles formula_11 and formula_12, and indices are implicitly summed. Here, the term formula_13 is the monopole; formula_14 is a set of three numbers representing the dipole; and so on.
In the above expansions, the coefficients may be real or complex. If the function being expressed as a multipole expansion is real, however, the coefficients must satisfy certain properties. In the spherical harmonic expansion, we must have
formula_15
In the multi-vector expansion, each coefficient must be real:
formula_16
While expansions of scalar functions are by far the most common application of multipole expansions, they may also be generalized to describe tensors of arbitrary rank. This finds use in multipole expansions of the vector potential in electromagnetism, or the metric perturbation in the description of gravitational waves.
For describing functions of three dimensions, away from the coordinate origin, the coefficients of the multipole expansion can be written as functions of the distance to the origin, formula_17—most frequently, as a Laurent series in powers of formula_17. For example, to describe the electromagnetic potential, formula_18, from a source in a small region near the origin, the coefficients may be written as:
formula_19
Applications.
Multipole expansions are widely used in problems involving gravitational fields of systems of masses, electric and magnetic fields of charge and current distributions, and the propagation of electromagnetic waves. A classic example is the calculation of the "exterior" multipole moments of atomic nuclei from their interaction energies with the "interior" multipoles of the electronic orbitals. The multipole moments of the nuclei report on the distribution of charges within the nucleus and, thus, on the shape of the nucleus. Truncation of the multipole expansion to its first non-zero term is often useful for theoretical calculations.
Multipole expansions are also useful in numerical simulations, and form the basis of the fast multipole method of Greengard and Rokhlin, a general technique for efficient computation of energies and forces in systems of interacting particles. The basic idea is to decompose the particles into groups; particles within a group interact normally (i.e., by the full potential), whereas the energies and forces between groups of particles are calculated from their multipole moments. The efficiency of the fast multipole method is generally similar to that of Ewald summation, but is superior if the particles are clustered, i.e. the system has large density fluctuations.
Multipole expansion of a potential outside an electrostatic charge distribution.
Consider a discrete charge distribution consisting of N point charges "q""i" with position vectors r"i". We assume the charges to be clustered around the origin, so that for all "i": "r""i" < "r"max, where "r"max has some finite value. The potential "V"(R), due to the charge distribution, at a point R outside the charge distribution, i.e., "R" > "r"max, can be expanded in powers of 1/"R". Two ways of making this expansion can be found in the literature: The first is a Taylor series in the Cartesian coordinates "x", "y", and "z", while the second is in terms of spherical harmonics which depend on spherical polar coordinates. The Cartesian approach has the advantage that no prior knowledge of Legendre functions, spherical harmonics, etc., is required. Its disadvantage is that the derivations are fairly cumbersome (in fact a large part of it is the implicit rederivation of the Legendre expansion of 1/|r − R|, which was done once and for all by Legendre in the 1780s). Also it is difficult to give a closed expression for a general term of the multipole expansion—usually only the first few terms are given followed by an ellipsis.
Expansion in Cartesian coordinates.
Assume "v"(r) = "v"(−r) for convenience. The Taylor expansion of "v"(r − R) around the origin r = 0 can be written as
formula_20
with Taylor coefficients
formula_21
If "v"(r − R) satisfies the Laplace equation, then by the above expansion we have
formula_22
and the expansion can be rewritten in terms of the components of a traceless Cartesian second rank tensor:
formula_23
where "δ""αβ" is the Kronecker delta and "r"2 ≡ |r|2. Removing the trace is common, because it takes the rotationally invariant "r"2 out of the second rank tensor.
Example.
Consider now the following form of "v"(r − R):
formula_24
Then by direct differentiation it follows that
formula_25
Define a monopole, dipole, and (traceless) quadrupole by, respectively,
formula_26
and we obtain finally the first few terms of the multipole expansion of the total potential, which is the sum of the Coulomb potentials of the separate charges:
formula_27
This expansion of the potential of a discrete charge distribution is very similar to the one in real solid harmonics given below. The main difference is that the present one is in terms of linearly dependent quantities, for
formula_28
Note:
If the charge distribution consists of two charges of opposite sign which are an infinitesimal distance d apart, so that "d"/"R" ≫ ("d"/"R")2, it is easily shown that the dominant term in the expansion is
formula_29
the electric dipolar potential field.
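As a numerical sanity check of the truncated expansion above, the following Python sketch builds the monopole, dipole and traceless quadrupole defined above and compares the truncated potential with the exact Coulomb sum. The charge values and the observation point are arbitrary illustrative choices, in units where 4πε0 = 1:

```python
import math

# Illustrative point charges q_i at positions r_i clustered near the origin
# (arbitrary values; units chosen so that 4*pi*eps0 = 1).
charges = [(+1.0, (0.1, 0.0, -0.2)),
           (-0.5, (-0.3, 0.2, 0.1)),
           (+0.2, (0.0, -0.1, 0.3))]

def exact_potential(R):
    """Exact sum of Coulomb potentials, sum_i q_i / |r_i - R|."""
    return sum(q / math.dist(r, R) for q, r in charges)

def multipole_potential(R):
    """Monopole + dipole + traceless-quadrupole terms of the expansion."""
    q_tot = sum(q for q, _ in charges)
    P = [sum(q * r[a] for q, r in charges) for a in range(3)]
    Q = [[sum(q * (3.0 * r[a] * r[b] - (a == b) * math.dist(r, (0, 0, 0)) ** 2)
              for q, r in charges)
          for b in range(3)] for a in range(3)]
    Rn = math.hypot(*R)
    return (q_tot / Rn
            + sum(P[a] * R[a] for a in range(3)) / Rn**3
            + sum(Q[a][b] * R[a] * R[b]
                  for a in range(3) for b in range(3)) / (2.0 * Rn**5))

R = (10.0, 4.0, -7.0)  # observation point well outside the charge cluster
print(exact_potential(R), multipole_potential(R))
```

For an observation point well outside the cluster the two values agree to several significant digits, and the agreement improves rapidly with growing "R", since the first neglected (octupole) term falls off as the fourth power of 1/"R".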
Spherical form.
The potential "V"(R) at a point R outside the charge distribution, i.e. "R" > "r"max, can be expanded by the Laplace expansion:
formula_30
where formula_31 is an irregular solid harmonic (defined below as a spherical harmonic function divided by formula_32) and formula_33 is a regular solid harmonic (a spherical harmonic times r"ℓ"). We define the "spherical multipole moment" of the charge distribution as follows
formula_34
Note that a multipole moment is solely determined by the charge distribution (the positions and magnitudes of the "N" charges).
A spherical harmonic depends on the unit vector formula_35. (A unit vector is determined by two spherical polar angles.) Thus, by definition, the irregular solid harmonics can be written as
formula_36
so that the "multipole expansion" of the field "V"(R) at the point R outside the charge distribution is given by
formula_37
This expansion is completely general in that it gives a closed form for all terms, not just for the first few. It shows that the spherical multipole moments appear as coefficients in the 1/"R" expansion of the potential.
It is of interest to consider the first few terms in real form, which are the only terms commonly found in undergraduate textbooks.
Since the summand of the "m" summation is invariant under a unitary transformation of both factors simultaneously and since transformation of complex spherical harmonics to real form is by a unitary transformation, we can simply substitute real irregular solid harmonics and real multipole moments. The "ℓ" = 0 term becomes
formula_38
This is in fact Coulomb's law again. For the "ℓ" = 1 term we introduce
formula_39
Then
formula_40
This term is identical to the one found in Cartesian form.
In order to write the "ℓ" = 2 term, we have to introduce shorthand notations for the five real components of the quadrupole moment and the real spherical harmonics. Notations of the type
formula_41
can be found in the literature. Clearly the real notation becomes awkward very soon, exhibiting the usefulness of the complex notation.
Interaction of two non-overlapping charge distributions.
Consider two sets of point charges, one set {"q""i"} clustered around a point A and one set {"q""j"} clustered around a point B. Think for example of two molecules, and recall that a molecule by definition consists of electrons (negative point charges) and nuclei (positive point charges). The total electrostatic interaction energy "U""AB" between the two distributions is
formula_42
This energy can be expanded in a power series in the inverse distance of A and B.
This expansion is known as the multipole expansion of "U""AB".
In order to derive this multipole expansion, we write r"XY" = r"Y" − r"X", which is a vector pointing from X towards Y. Note that
formula_43
We assume that the two distributions do not overlap:
formula_44
Under this condition we may apply the Laplace expansion in the following form
formula_45
where formula_46 and formula_47 are irregular and regular solid harmonics, respectively. The translation of the regular solid harmonic gives a finite expansion,
formula_48
where the quantity between pointed brackets is a Clebsch–Gordan coefficient. Further we used
formula_49
Use of the definition of spherical multipoles "Q" and covering the summation ranges in a somewhat different order (which is only allowed for an infinite range of L) gives finally
formula_50
This is the multipole expansion of the interaction energy of two non-overlapping charge distributions which are a distance "R""AB" apart. Since
formula_51
this expansion is manifestly in powers of 1/"R""AB". The function "Y""m""ℓ" is a normalized spherical harmonic.
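To see the leading behaviour of this series concretely: the ℓ"A" = ℓ"B" = 0 term is just the monopole-monopole energy "q""A""q""B"/(4πε0"R""AB"). The following Python sketch compares it with the exact double sum for two small, well-separated clusters; the charges, positions, and the nominal separation of 20 are arbitrary illustrative values, in units where 4πε0 = 1:

```python
import math

# Two illustrative clusters of point charges: set A near the origin,
# set B near (20, 0, 0); units chosen so that 4*pi*eps0 = 1.
A = [(+1.0, (0.1, 0.0, 0.0)), (+0.5, (-0.1, 0.2, 0.0))]
B = [(-1.0, (20.1, 0.0, 0.1)), (-0.3, (19.8, -0.2, 0.0))]

# Exact interaction energy: double sum over all pairs (i in A, j in B).
U_exact = sum(qi * qj / math.dist(ri, rj) for qi, ri in A for qj, rj in B)

# Leading (monopole-monopole) term of the multipole expansion,
# using the nominal separation R_AB = 20 of the two cluster centres.
qA = sum(q for q, _ in A)
qB = sum(q for q, _ in B)
U_monopole = qA * qB / 20.0

print(U_exact, U_monopole)
```

Because both clusters carry a net charge, the monopole term already captures the interaction energy to within a fraction of a percent at this separation; the higher multipole terms of the series supply the remaining corrections.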
Molecular moments.
All atoms and molecules (except "S"-state atoms) have one or more non-vanishing permanent multipole moments. Different definitions can be found in the literature, but the following definition in spherical form has the advantage that it is contained in one general equation. Because it is in complex form, it has the further advantage that it is easier to manipulate in calculations than its real counterpart.
We consider a molecule consisting of "N" particles (electrons and nuclei) with charges "eZ""i". (Electrons have a "Z"-value of −1, while for nuclei it is the atomic number). Particle "i" has spherical polar coordinates "r""i", "θ""i", and φ"i" and Cartesian coordinates "x""i", "y""i", and "z""i".
The (complex) electrostatic multipole operator is
formula_52
where formula_53 is a regular solid harmonic function in Racah's normalization (also known as Schmidt's semi-normalization).
If the molecule has total normalized wave function Ψ (depending on the coordinates of electrons and nuclei), then the multipole moment of order formula_54 of the molecule is given by the expectation (expected) value:
formula_55
If the molecule has certain point group symmetry, then this is reflected in the wave function: Ψ transforms according to a certain irreducible representation λ of the group ("Ψ has symmetry type λ"). This has the consequence that selection rules hold for the expectation value of the multipole operator, or in other words, that the expectation value may vanish because of symmetry. A well-known example of this is the fact that molecules with an inversion center do not carry a dipole (the expectation values of formula_56 vanish for "m" = −1, 0, 1). For a molecule without symmetry, no selection rules are operative and such a molecule will have non-vanishing multipoles of any order (it will carry a dipole and simultaneously a quadrupole, octupole, hexadecapole, etc.).
The lowest explicit forms of the regular solid harmonics (with the Condon-Shortley phase) give:
formula_57
(the total charge of the molecule). The (complex) dipole components are:
formula_58
formula_59
Note that by a simple linear combination one can transform the complex multipole operators to real ones. The real multipole operators are of cosine type
formula_60 or sine type formula_61. A few of the lowest ones are:
formula_62
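The stated linear combinations can be checked directly. In the Python sketch below the particles are arbitrary illustrative classical charges (with "e" = 1), for which the quantum expectation values reduce to plain sums; it verifies that the real cosine and sine dipole operators are recovered from the complex dipole components:

```python
# Arbitrary classical particles (Z_i, (x_i, y_i, z_i)), with e = 1;
# for a classical distribution the expectation values reduce to plain sums.
particles = [(1, (0.3, -0.1, 0.2)), (-1, (-0.2, 0.4, 0.1)), (2, (0.0, 0.1, -0.3))]

s2 = 2 ** 0.5
M_p1 = -sum(Z * complex(x, +y) for Z, (x, y, z) in particles) / s2  # M^1_1
M_m1 = +sum(Z * complex(x, -y) for Z, (x, y, z) in particles) / s2  # M^{-1}_1
M_0 = sum(Z * z for Z, (x, y, z) in particles)                      # M^0_1

C11 = sum(Z * x for Z, (x, y, z) in particles)  # real cosine operator C^1_1
S11 = sum(Z * y for Z, (x, y, z) in particles)  # real sine operator  S^1_1

# The real operators are simple linear combinations of the complex ones:
assert abs((M_m1 - M_p1) / s2 - C11) < 1e-12
assert abs(1j * (M_m1 + M_p1) / s2 - S11) < 1e-12
```

The assertions confirm C11 = (M(−1) − M(+1))/√2 and S11 = i(M(−1) + M(+1))/√2, and the components also obey the conjugation relation M(−1) = −M(+1)* expected of a Hermitian (real-valued) observable.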
Note on conventions.
The definition of the complex molecular multipole moment given above is the complex conjugate of the definition given in this article, which follows the definition of the standard textbook on classical electrodynamics by Jackson, except for the normalization. Moreover, in the classical definition of Jackson the equivalent of the "N"-particle quantum mechanical expectation value is an integral over a one-particle charge distribution. Remember that in the case of a one-particle quantum mechanical system the expectation value is nothing but an integral over the charge distribution (modulus of wavefunction squared), so that the definition of this article is a quantum mechanical "N"-particle generalization of Jackson's definition.
The definition in this article agrees with, among others, the one of Fano and Racah and Brink and Satchler.
Examples.
There are many types of multipole moments, since there are many types of potentials and many ways of approximating a potential by a series expansion, depending on the coordinates and the symmetry of the charge distribution. The most common expansions include:
Examples of 1/"R" potentials include the electric potential, the magnetic potential and the gravitational potential of point sources. An example of a ln "R" potential is the electric potential of an infinite line charge.
General mathematical properties.
Multipole moments in mathematics and mathematical physics form an orthogonal basis for the decomposition of a function, based on the response of a field to point sources that are brought infinitely close to each other. These can be thought of as arranged in various geometrical shapes, or, in the sense of distribution theory, as directional derivatives.
Multipole expansions are related to the underlying rotational symmetry of the physical laws and their associated differential equations. Even though the source terms (such as the masses, charges, or currents) may not be symmetrical, one can expand them in terms of irreducible representations of the rotational symmetry group, which leads to spherical harmonics and related sets of orthogonal functions. One uses the technique of separation of variables to extract the corresponding solutions for the radial dependencies.
In practice, many fields can be well approximated with a finite number of multipole moments (although an infinite number may be required to reconstruct a field exactly). A typical application is to approximate the field of a localized charge distribution by its monopole and dipole terms. Problems solved once for a given order of multipole moment may be linearly combined to create a final approximate solution for a given source.
|
[
{
"math_id": 0,
"text": "\\R^3"
},
{
"math_id": 1,
"text": "\\R^n"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "f(\\theta,\\varphi)"
},
{
"math_id": 4,
"text": "f(\\theta,\\varphi) = \\sum_{\\ell=0}^\\infty\\, \\sum_{m=-\\ell}^\\ell\\, C^m_\\ell\\, Y^m_\\ell(\\theta,\\varphi)"
},
{
"math_id": 5,
"text": "Y^m_\\ell(\\theta,\\varphi)"
},
{
"math_id": 6,
"text": "C^m_\\ell"
},
{
"math_id": 7,
"text": "C^0_0"
},
{
"math_id": 8,
"text": "C^{-1}_1,C^0_1,C^1_1"
},
{
"math_id": 9,
"text": "f(\\theta,\\varphi) = C + C_i n^i + C_{ij}n^i n^j + C_{ijk}n^i n^j n^k + C_{ijk\\ell}n^i n^j n^k n^\\ell + \\cdots"
},
{
"math_id": 10,
"text": "n^i"
},
{
"math_id": 11,
"text": "\\theta"
},
{
"math_id": 12,
"text": "\\varphi"
},
{
"math_id": 13,
"text": "C"
},
{
"math_id": 14,
"text": "C_i"
},
{
"math_id": 15,
"text": "C_\\ell^{-m} = (-1)^m C^{m\\ast}_\\ell \\, ."
},
{
"math_id": 16,
"text": "C = C^\\ast;\\ C_i = C_i^\\ast;\\ C_{ij} = C_{ij}^\\ast;\\ C_{ijk} = C_{ijk}^\\ast;\\ \\ldots"
},
{
"math_id": 17,
"text": "r"
},
{
"math_id": 18,
"text": "V"
},
{
"math_id": 19,
"text": "V(r,\\theta,\\varphi) = \\sum_{\\ell=0}^\\infty\\, \\sum_{m=-\\ell}^\\ell C^m_\\ell(r)\\, Y^m_\\ell(\\theta,\\varphi)= \\sum_{j=1}^\\infty\\, \\sum_{\\ell=0}^\\infty\\, \\sum_{m=-\\ell}^\\ell \\frac{D^m_{\\ell,j}}{r^j}\\, Y^m_\\ell(\\theta,\\varphi) ."
},
{
"math_id": 20,
"text": "v(\\mathbf{r}- \\mathbf{R}) = v(\\mathbf{R}) - \\sum_{\\alpha=x,y,z} r_\\alpha v_\\alpha(\\mathbf{R}) +\\frac{1}{2} \\sum_{\\alpha=x,y,z}\\sum_{\\beta=x,y,z} r_\\alpha r_\\beta v_{\\alpha\\beta}(\\mathbf{R})\n- \\cdots + \\cdots"
},
{
"math_id": 21,
"text": "v_\\alpha(\\mathbf{R}) \\equiv\\left( \\frac{\\partial v(\\mathbf{r}-\\mathbf{R}) }{\\partial r_\\alpha}\\right)_{\\mathbf{r} = \\mathbf 0} \\quad\\text{and} \\quad\nv_{\\alpha\\beta}(\\mathbf{R}) \\equiv\\left( \\frac{\\partial^2 v(\\mathbf{r}-\\mathbf{R}) }{\\partial r_{\\alpha}\\partial r_{\\beta}}\\right)_{\\mathbf{r}= \\mathbf0} ."
},
{
"math_id": 22,
"text": "\\left(\\nabla^2 v(\\mathbf{r}- \\mathbf{R})\\right)_{\\mathbf{r}=\\mathbf0} = \\sum_{\\alpha=x,y,z} v_{\\alpha\\alpha}(\\mathbf{R}) = 0,"
},
{
"math_id": 23,
"text": "\\sum_{\\alpha=x,y,z}\\sum_{\\beta=x,y,z} r_\\alpha r_\\beta v_{\\alpha\\beta}(\\mathbf{R})\n= \\frac{1}{3} \\sum_{\\alpha=x,y,z}\\sum_{\\beta=x,y,z} \\left(3r_\\alpha r_\\beta - \\delta_{\\alpha\\beta} r^2\\right) v_{\\alpha\\beta}(\\mathbf{R}) ,"
},
{
"math_id": 24,
"text": "v(\\mathbf{r}- \\mathbf{R}) \\equiv \\frac{1}{|\\mathbf{r}- \\mathbf{R}|} ."
},
{
"math_id": 25,
"text": "v(\\mathbf{R}) = \\frac{1}{R},\\quad v_\\alpha(\\mathbf{R})= -\\frac{R_\\alpha}{R^3},\\quad \\hbox{and}\\quad v_{\\alpha\\beta}(\\mathbf{R}) = \\frac{3R_\\alpha R_\\beta- \\delta_{\\alpha\\beta}R^2}{R^5} ."
},
{
"math_id": 26,
"text": "q_\\mathrm{tot} \\equiv \\sum_{i=1}^N q_i , \\quad P_\\alpha \\equiv\\sum_{i=1}^N q_i r_{i\\alpha} , \\quad \\text{and}\\quad Q_{\\alpha\\beta} \\equiv \\sum_{i=1}^N q_i (3r_{i\\alpha} r_{i\\beta} - \\delta_{\\alpha\\beta} r_i^2) ,"
},
{
"math_id": 27,
"text": "\\begin{align}\n4\\pi\\varepsilon_0 V(\\mathbf{R}) &\\equiv \\sum_{i=1}^N q_i v(\\mathbf{r}_i-\\mathbf{R}) \\\\\n&= \\frac{q_\\mathrm{tot}}{R} + \\frac{1}{R^3}\\sum_{\\alpha=x,y,z} P_\\alpha R_\\alpha +\n\\frac{1}{2 R^5}\\sum_{\\alpha,\\beta=x,y,z} Q_{\\alpha\\beta} R_\\alpha R_\\beta + \\cdots\n\\end{align}"
},
{
"math_id": 28,
"text": "\\sum_{\\alpha} v_{\\alpha\\alpha} = 0 \\quad \\hbox{and} \\quad \\sum_{\\alpha} Q_{\\alpha\\alpha} = 0 ."
},
{
"math_id": 29,
"text": "V(\\mathbf{R}) = \\frac{1}{4\\pi \\varepsilon_0 R^3} (\\mathbf{P}\\cdot\\mathbf{R}) ,"
},
{
"math_id": 30,
"text": "V(\\mathbf{R}) \\equiv \\sum_{i=1}^N \\frac{q_i}{4\\pi \\varepsilon_0 |\\mathbf{r}_i - \\mathbf{R}|}\n=\\frac{1}{4\\pi \\varepsilon_0} \\sum_{\\ell=0}^\\infty \\sum_{m=-\\ell}^{\\ell}\n(-1)^m I^{-m}_\\ell(\\mathbf{R}) \\sum_{i=1}^N q_i R^m_\\ell(\\mathbf{r}_i),"
},
{
"math_id": 31,
"text": "I^{-m}_{\\ell}(\\mathbf{R})"
},
{
"math_id": 32,
"text": "R^{\\ell+1}"
},
{
"math_id": 33,
"text": "R^m_{\\ell}(\\mathbf{r})"
},
{
"math_id": 34,
"text": "Q^m_\\ell \\equiv \\sum_{i=1}^N q_i R^m_\\ell(\\mathbf{r}_i),\\quad\\ -\\ell \\le m \\le \\ell."
},
{
"math_id": 35,
"text": "\\hat{R}"
},
{
"math_id": 36,
"text": "I^m_{\\ell}(\\mathbf{R}) \\equiv \\sqrt{\\frac{4\\pi}{2\\ell+1}} \\frac{Y^m_{\\ell}(\\hat{R})}{R^{\\ell+1}}"
},
{
"math_id": 37,
"text": "\\begin{align}\nV(\\mathbf{R})\n& = \\frac{1}{4\\pi\\varepsilon_{0}}\\sum_{\\ell=0}^{\\infty}\n\\sum_{m=-\\ell}^{\\ell}(-1)^{m} I^{-m}_{\\ell}(\\mathbf{R}) Q^{m}_{\\ell}\\\\\n& = \\frac{1}{4\\pi\\varepsilon_{0}}\\sum_{\\ell=0}^{\\infty}\\left[\\frac{4\\pi}{2\\ell + 1}\\right]^{1/2}\\;\\frac{1}{R^{\\ell + 1}}\n\\sum_{m=-\\ell}^{\\ell}(-1)^{m} Y^{-m}_{\\ell}(\\hat{R}) Q^{m}_{\\ell}, \\qquad R > r_{\\mathrm{max}}\n\\end{align}"
},
{
"math_id": 38,
"text": "V_{\\ell=0}(\\mathbf{R}) =\n\\frac{q_\\mathrm{tot}}{4\\pi \\varepsilon_0 R} \\quad\\hbox{with}\\quad q_\\mathrm{tot}\\equiv\\sum_{i=1}^N q_i."
},
{
"math_id": 39,
"text": "\\mathbf{R} = (R_x, R_y, R_z),\\quad \\mathbf{P} = (P_x, P_y, P_z)\\quad\n\\hbox{with}\\quad P_\\alpha \\equiv \\sum_{i=1}^N q_i r_{i\\alpha}, \\quad \\alpha=x,y,z."
},
{
"math_id": 40,
"text": "V_{\\ell=1}(\\mathbf{R}) =\n\\frac{1}{4\\pi \\varepsilon_0 R^3} (R_x P_x +R_y P_y + R_z P_z) = \\frac{\\mathbf{R} \\cdot \\mathbf{P} }{4\\pi \\varepsilon_0 R^3} =\n\\frac{\\hat\\mathbf{R} \\cdot \\mathbf{P} }{4\\pi \\varepsilon_0 R^2}."
},
{
"math_id": 41,
"text": "Q_{z^2} \\equiv \\sum_{i=1}^N q_i\\; \\frac{1}{2}(3z_i^2 - r_i^2),"
},
{
"math_id": 42,
"text": "U_{AB} = \\sum_{i\\in A} \\sum_{j\\in B} \\frac{q_i q_j}{4\\pi\\varepsilon_0 r_{ij}}."
},
{
"math_id": 43,
"text": "\\mathbf{R}_{AB}+\\mathbf{r}_{Bj}+\\mathbf{r}_{ji}+\\mathbf{r}_{iA} = 0\n\\quad \\iff \\quad\n\\mathbf{r}_{ij} = \\mathbf{R}_{AB}-\\mathbf{r}_{Ai}+\\mathbf{r}_{Bj} ."
},
{
"math_id": 44,
"text": " |\\mathbf{R}_{AB}| > |\\mathbf{r}_{Bj}-\\mathbf{r}_{Ai}| \\text{ for all } i,j."
},
{
"math_id": 45,
"text": "\\frac{1}{|\\mathbf{r}_{j}-\\mathbf{r}_i|} = \\frac{1}{|\\mathbf{R}_{AB} - (\\mathbf{r}_{Ai}- \\mathbf{r}_{Bj})| } =\n\\sum_{L=0}^\\infty \\sum_{M=-L}^L \\, (-1)^M I_L^{-M}(\\mathbf{R}_{AB})\\;\nR^M_L( \\mathbf{r}_{Ai} - \\mathbf{r}_{Bj}),"
},
{
"math_id": 46,
"text": "I^M_L"
},
{
"math_id": 47,
"text": "R^M_L"
},
{
"math_id": 48,
"text": "R^M_L(\\mathbf{r}_{Ai}-\\mathbf{r}_{Bj}) = \\sum_{\\ell_A=0}^L (-1)^{L-\\ell_A} \\binom{2L}{2\\ell_A}^{1/2}\n\\times \\sum_{m_A=-\\ell_A}^{\\ell_A} R^{m_A}_{\\ell_A}(\\mathbf{r}_{Ai})\nR^{M-m_A}_{L-\\ell_A}(\\mathbf{r}_{Bj})\\;\n\\langle \\ell_A, m_A; L-\\ell_A, M-m_A\\mid L M \\rangle,\n"
},
{
"math_id": 49,
"text": "R^{m}_{\\ell}(-\\mathbf{r}) = (-1)^{\\ell} R^{m}_{\\ell}(\\mathbf{r}) ."
},
{
"math_id": 50,
"text": "\\begin{align}\nU_{AB} = {} & \\frac{1}{4\\pi\\varepsilon_0} \\sum_{\\ell_A=0}^\\infty \\sum_{\\ell_B=0}^\\infty (-1)^{\\ell_B} \\binom{2\\ell_A+2\\ell_B}{2\\ell_A}^{1/2} \\\\[5pt]\n& \\times \\sum_{m_A=-\\ell_A}^{\\ell_A} \\sum_{m_B=-\\ell_B}^{\\ell_B}(-1)^{m_A+m_B} I_{\\ell_A+\\ell_B}^{-m_A-m_B}(\\mathbf{R}_{AB})\\;\nQ^{m_A}_{\\ell_A} Q^{m_B}_{\\ell_B}\\;\n\\langle \\ell_A, m_A; \\ell_B, m_B\\mid \\ell_A+\\ell_B, m_A+m_B \\rangle.\n\\end{align}"
},
{
"math_id": 51,
"text": "I_{\\ell_A+\\ell_B}^{-(m_A+m_B)}(\\mathbf{R}_{AB}) \\equiv \\left[\\frac{4\\pi}{2\\ell_A+2\\ell_B+1}\\right]^{1/2}\\;\n\\frac{Y^{-(m_A+m_B)}_{\\ell_A+\\ell_B}\\left(\\widehat{\\mathbf{R}}_{AB}\\right)}{R^{\\ell_A+\\ell_B+1}_{AB}},"
},
{
"math_id": 52,
"text": "Q^m_\\ell \\equiv \\sum_{i=1}^N e Z_i \\; R^m_{\\ell}(\\mathbf{r}_i),"
},
{
"math_id": 53,
"text": "R^m_{\\ell}(\\mathbf{r}_i)"
},
{
"math_id": 54,
"text": "\\ell"
},
{
"math_id": 55,
"text": "M^m_\\ell \\equiv \\langle \\Psi \\mid Q^m_\\ell \\mid \\Psi \\rangle."
},
{
"math_id": 56,
"text": " Q^m_1 "
},
{
"math_id": 57,
"text": " M^0_0 = \\sum_{i=1}^N e Z_i, "
},
{
"math_id": 58,
"text": " M^1_1 = - \\tfrac{1}{\\sqrt 2} \\sum_{i=1}^N e Z_i \\langle \\Psi | x_i+iy_i | \\Psi \\rangle\\quad \\hbox{and} \\quad\nM^{-1}_{1} = \\tfrac{1}{\\sqrt 2} \\sum_{i=1}^N e Z_i \\langle \\Psi | x_i - iy_i | \\Psi \\rangle. "
},
{
"math_id": 59,
"text": " M^0_1 = \\sum_{i=1}^N e Z_i \\langle \\Psi | z_i | \\Psi \\rangle."
},
{
"math_id": 60,
"text": " C^m_\\ell"
},
{
"math_id": 61,
"text": "S^m_\\ell"
},
{
"math_id": 62,
"text": "\\begin{align}\nC^0_1 &= \\sum_{i=1}^N eZ_i \\; z_i \\\\\nC^1_1 &= \\sum_{i=1}^N eZ_i \\;x_i \\\\\nS^1_1 &= \\sum_{i=1}^N eZ_i \\;y_i \\\\\nC^0_2 &= \\frac{1}{2}\\sum_{i=1}^N eZ_i\\; (3z_i^2-r_i^2)\\\\\nC^1_2 &= \\sqrt{3}\\sum_{i=1}^N eZ_i\\; z_i x_i \\\\\nC^2_2 &= \\frac{1}{3}\\sqrt{3}\\sum_{i=1}^N eZ_i\\; (x_i^2-y_i^2) \\\\\nS^1_2 &= \\sqrt{3}\\sum_{i=1}^N eZ_i\\; z_i y_i \\\\\nS^2_2 &= \\frac{2}{3}\\sqrt{3}\\sum_{i=1}^N eZ_i\\; x_iy_i\n\\end{align}"
}
] |
https://en.wikipedia.org/wiki?curid=1425534
|
14259066
|
Dual EC DRBG
|
Controversial pseudorandom number generator
Dual_EC_DRBG (Dual Elliptic Curve Deterministic Random Bit Generator) is an algorithm that was presented as a cryptographically secure pseudorandom number generator (CSPRNG) using methods in elliptic curve cryptography. Despite wide public criticism, including the public identification of the possibility that the National Security Agency put a backdoor into a recommended implementation, it was, for seven years, one of four CSPRNGs standardized in NIST SP 800-90A as originally published circa June 2006, until it was withdrawn in 2014.
Weakness: a potential backdoor.
Weaknesses in the cryptographic security of the algorithm were known and publicly criticised well before the algorithm became part of a formal standard endorsed by ANSI, ISO, and formerly by the National Institute of Standards and Technology (NIST). One of the weaknesses publicly identified was the potential of the algorithm to harbour a cryptographic backdoor advantageous to those who know about it—the United States government's National Security Agency (NSA)—and no one else. In 2013, "The New York Times" reported that documents in their possession but never released to the public "appear to confirm" that the backdoor was real, and had been deliberately inserted by the NSA as part of its Bullrun decryption program. In December 2013, a Reuters news article alleged that in 2004, before NIST standardized Dual_EC_DRBG, NSA paid RSA Security $10 million in a secret deal to use Dual_EC_DRBG as the default in the RSA BSAFE cryptography library, which resulted in RSA Security becoming the most important distributor of the insecure algorithm. RSA responded that they "categorically deny" that they had ever knowingly colluded with the NSA to adopt an algorithm that was known to be flawed, but also stated "we have never kept [our] relationship [with the NSA] a secret".
Sometime before its first known publication in 2004, a possible kleptographic backdoor was discovered in Dual_EC_DRBG's design, which had the unusual property that it was theoretically impossible for anyone but Dual_EC_DRBG's designers (NSA) to confirm the backdoor's existence. Bruce Schneier concluded shortly after standardization that the "rather obvious" backdoor (along with other deficiencies) would mean that nobody would use Dual_EC_DRBG. The backdoor would allow NSA to decrypt, for example, SSL/TLS encryption which used Dual_EC_DRBG as a CSPRNG.
Members of the ANSI standard group to which Dual_EC_DRBG was first submitted were aware of the exact mechanism of the potential backdoor and how to disable it, but did not elect to disable or publicize it. The general cryptographic community was initially unaware of the potential backdoor until Dan Shumow and Niels Ferguson's 2007 publication, and likewise unaware of Certicom's Daniel R. L. Brown and Scott Vanstone's 2005 patent application describing the backdoor mechanism.
In September 2013, "The New York Times" reported that internal NSA memos leaked by Edward Snowden indicated that the NSA had worked during the standardization process to eventually become the sole editor of the Dual_EC_DRBG standard, and concluded that the Dual_EC_DRBG standard did indeed contain a backdoor for the NSA. In response, NIST stated that "NIST would not deliberately weaken a cryptographic standard", but according to the "New York Times" story, the NSA had been spending $250 million per year to insert backdoors in software and hardware as part of the Bullrun program. A Presidential advisory committee subsequently set up to examine NSA's conduct recommended among other things that the US government "fully support and not undermine efforts to create encryption standards".
On April 21, 2014, NIST withdrew Dual_EC_DRBG from its draft guidance on random number generators, recommending that "current users of Dual_EC_DRBG transition to one of the three remaining approved algorithms as quickly as possible."
Description.
Overview.
The algorithm uses a single integer s as state. Whenever a new random number is requested, this integer is updated. The k-th state is given by
formula_0
The returned random integer r is a function of the state. The k-th random number is
formula_1
The function formula_2 depends on the fixed elliptic curve point P. formula_3 is similar except that it uses the point Q. The points P and Q stay constant for a particular implementation of the algorithm.
Details.
The algorithm allows for different constants, variable output length and other customization. For simplicity, the one described here will use the constants from curve P-256 (one of the 3 sets of constants available) and have fixed output length. The algorithm operates exclusively over a prime finite field formula_4 (formula_5) where p is prime. The state, the seed and the random numbers are all elements of this field. Field size is
formula_6
An elliptic curve over formula_4 is given by
formula_7
where the constant b is
formula_8
The points on the curve are formula_9. Two of these points are given as the fixed points P and Q
formula_10
Their coordinates are
formula_11
A function to extract the x-coordinate is used. It "converts" from elliptic curve points to elements of the field.
formula_12
Output integers are truncated before being output
formula_13
The functions formula_14 and formula_15 raise the fixed points to a power. "Raising to a power" in this context means using the special operation (repeated point addition) defined for points on elliptic curves.
formula_16
formula_17
The generator is seeded with an element from formula_4
formula_18
The k-th state and random number
formula_0
formula_1
The random numbers
formula_19
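For illustration, the recursion above can be run with a toy implementation. The sketch below is emphatically not the standardized generator: it uses a tiny textbook curve, "y"2 = "x"3 + 2"x" + 2 over GF(17) with base point (5, 1) of order 19, rather than the P-256 constants, picks its own Q as a multiple of P, and omits the output truncation. Only the structure of the state update and output is the same:

```python
# Toy illustration only: a tiny curve y^2 = x^3 + 2x + 2 over GF(17)
# with a base point of order 19, instead of the standardized P-256
# parameters, and with no output truncation.
p, a, b = 17, 2, 2

def ec_add(A, B):
    """Affine point addition; None represents the point at infinity."""
    if A is None:
        return B
    if B is None:
        return A
    (x1, y1), (x2, y2) = A, B
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if A == B:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, A):
    """Scalar multiplication by double-and-add ("raising to a power")."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, A)
        A = ec_add(A, A)
        k >>= 1
    return R

P = (5, 1)        # fixed point P (generates the 19-element group)
Q = ec_mul(7, P)  # second fixed point; chosen here as a known multiple of P

def dual_ec_toy(seed, n):
    """s_{k+1} = x(s_k * P);  r_k = x(s_k * Q)."""
    s, out = seed, []
    for _ in range(n):
        s = ec_mul(s, P)[0]
        out.append(ec_mul(s, Q)[0])
    return out

print(dual_ec_toy(8, 4))
```

On such a tiny group the state space is minuscule and the orbit quickly hits degenerate values, so the run length is kept short; the real generator works in a field of roughly 2^256 elements, where this is not a concern.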
Security.
The stated purpose of including the Dual_EC_DRBG in NIST SP 800-90A is that its security is based on computational hardness assumptions from number theory. A mathematical security reduction proof can then prove that as long as the number theoretical problems are hard, the random number generator itself is secure. However, the makers of Dual_EC_DRBG did not publish a security reduction for Dual_EC_DRBG, and it was shown soon after the NIST draft was published that Dual_EC_DRBG was indeed not secure, because it output too many bits per round. The output of too many bits (along with carefully chosen elliptic curve points "P" and "Q") is what makes the NSA backdoor possible, because it enables the attacker to reverse the truncation by brute-force guessing. The output of too many bits was not corrected in the final published standard, leaving Dual_EC_DRBG both insecure and backdoored.
In many other standards, constants that are meant to be arbitrary are chosen by the "nothing up my sleeve number" principle, where they are derived from pi or similar mathematical constants in a way that leaves little room for adjustment. However, Dual_EC_DRBG did not specify how the default "P" and "Q" constants were chosen, possibly because they were constructed by NSA to be backdoored. Because the standard committee was aware of the potential for a backdoor, a way for an implementer to choose their own secure "P" and "Q" was included. But the exact formulation in the standard was written such that use of the alleged backdoored "P" and "Q" was required for FIPS 140-2 validation, so the OpenSSL project chose to implement the backdoored "P" and "Q", even though they were aware of the potential backdoor and would have preferred generating their own secure "P" and "Q". The "New York Times" would later write that NSA had worked during the standardization process to eventually become the sole editor of the standard.
A security proof was later published for Dual_EC_DRBG by Daniel R.L. Brown and Kristian Gjøsteen, showing that the generated elliptic curve points would be indistinguishable from uniformly random elliptic curve points, and that if fewer bits were output in the final output truncation, and if the two elliptic curve points "P" and "Q" were independent, then Dual_EC_DRBG is secure. The proof relied on the assumption that three problems were hard: the "decisional Diffie–Hellman assumption" (which is generally accepted to be hard), and two newer less-known problems which are not generally accepted to be hard: the "truncated point problem", and the "x-logarithm problem". Dual_EC_DRBG was quite slow compared to many alternative CSPRNGs (which don't have security reductions), but Daniel R.L. Brown argues that the security reduction makes the slow Dual_EC_DRBG a valid alternative (assuming implementors disable the obvious backdoor). Note that Daniel R.L. Brown works for Certicom, the main owner of elliptic curve cryptography patents, so there may be a conflict of interest in promoting an EC CSPRNG.
The alleged NSA backdoor would allow the attacker to determine the internal state of the random number generator from looking at the output from a single round (32 bytes); all future output of the random number generator can then easily be calculated, until the CSPRNG is reseeded with an external source of randomness. This makes for example SSL/TLS vulnerable, since the setup of a TLS connection includes the sending of a randomly generated cryptographic nonce in the clear. NSA's alleged backdoor would depend on their knowing the single "e" such that formula_20. Finding such an "e" is a hard discrete-logarithm problem if "P" and "Q" are fixed independently ahead of time, but easy if "Q" is deliberately constructed from "P" using a chosen "e". "e" is a secret key presumably known only by NSA, and the alleged backdoor is a kleptographic asymmetric hidden backdoor. Matthew Green's blog post "The Many Flaws of Dual_EC_DRBG" has a simplified explanation of how the alleged NSA backdoor works by employing the discrete-log kleptogram introduced in Crypto 1997.
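The attack can be reproduced end-to-end at toy scale. The following self-contained Python sketch uses a tiny textbook curve over GF(17) instead of P-256, omits the output truncation, and deliberately constructs Q = "e"·P with a known "e"; an observer holding the trapdoor turns a single output into the next internal state and predicts the following output:

```python
# Self-contained toy demo of the alleged backdoor, on the tiny curve
# y^2 = x^3 + 2x + 2 over GF(17) with a base point of order 19 (NOT the
# real P-256 parameters), and with no output truncation.
p, a, b, order = 17, 2, 2, 19

def ec_add(A, B):
    """Affine point addition; None represents the point at infinity."""
    if A is None:
        return B
    if B is None:
        return A
    (x1, y1), (x2, y2) = A, B
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if A == B:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, A):
    """Scalar multiplication by double-and-add."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, A)
        A = ec_add(A, A)
        k >>= 1
    return R

P = (5, 1)
e = 7                  # the designer's secret: Q = e * P
Q = ec_mul(e, P)
d = pow(e, -1, order)  # trapdoor satisfying P = d * Q

# Run the generator for two rounds: s_{k+1} = x(s_k * P), r_k = x(s_k * Q).
s = 8
s = ec_mul(s, P)[0]; r1 = ec_mul(s, Q)[0]
s = ec_mul(s, P)[0]; r2 = ec_mul(s, Q)[0]

# The attacker sees only r1.  Lift it back to a curve point R = (r1, y) ...
rhs = (r1 ** 3 + a * r1 + b) % p
y = next(v for v in range(p) if v * v % p == rhs)
R = (r1, y)            # R = +/- s_1*Q; either sign of y works below

# ... then d*R = s_1*(d*Q) = s_1*P, whose x-coordinate is the next state.
s_next = ec_mul(d, R)[0]
predicted_r2 = ec_mul(s_next, Q)[0]
assert predicted_r2 == r2  # the attacker predicts the next output
```

On the real parameters the attacker must additionally guess the bits removed by truncation (about 2^16 candidate points per output for Dual_EC_DRBG over P-256), which is exactly why the unusually small truncation matters.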
Standardization and implementations.
NSA first introduced Dual_EC_DRBG in the ANSI X9.82 DRBG in the early 2000s, including the same parameters which created the alleged backdoor, and Dual_EC_DRBG was published in a draft ANSI standard. Dual_EC_DRBG also exists in the ISO 18031 standard.
According to John Kelsey (who together with Elaine Barker was listed as author of NIST SP 800-90A), the possibility of the backdoor by carefully chosen "P" and "Q" was brought up at an ANSI X9F1 Tool Standards and Guidelines Group meeting. When Kelsey asked Don Johnson of Cygnacom about the origin of "Q", Johnson answered in a 27 October 2004 email to Kelsey that NSA had prohibited the public discussion of generation of an alternative "Q" to the NSA-supplied one.
At least two members of the ANSI X9F1 Tool Standards and Guidelines Group which wrote ANSI X9.82, Daniel R. L. Brown and Scott Vanstone from Certicom, were aware of the exact circumstances and mechanism in which a backdoor could occur, since they filed a patent application in January 2005 on exactly how to insert or prevent the backdoor in DUAL_EC_DRBG. The working of the "trap door" mentioned in the patent is identical to the one later confirmed in Dual_EC_DRBG. Writing about the patent in 2014, commentator Matthew Green describes the patent as a "passive aggressive" way of spiting NSA by publicizing the backdoor, while still criticizing everybody on the committee for not actually disabling the backdoor they obviously were aware of. Brown and Vanstone's patent lists two necessary conditions for the backdoor to exist:
1) Chosen "Q"
<templatestyles src="Template:Blockquote/styles.css" />An elliptic curve random number generator avoids escrow keys by choosing a point "Q" on the elliptic curve as verifiably random. Intentional use of escrow keys can provide for back up functionality. The relationship between "P" and "Q" is used as an escrow key and stored by for a security domain. The administrator logs the output of the generator to reconstruct the random number with the escrow key.
2) Small output truncation
<templatestyles src="Template:Blockquote/styles.css" />[0041] Another alternative method for preventing a key escrow attack on the output of an ECRNG, shown in Figures 3 and 4 is to add a truncation function to ECRNG to truncate the ECRNG output to approximately half the length of a compressed elliptic curve point. Preferably, this operation is done in addition to the preferred method of Figure 1 and 2, however, it will be appreciated that it may be performed as a primary measure for preventing a key escrow attack. The benefit of truncation is that the list of R values associated with a single ECRNG output r is typically infeasible to search. For example, for a 160-bit elliptic curve group, the number of potential points R in the list is about 2^80, and searching the list would be about as hard as solving the discrete logarithm problem. The cost of this method is that the ECRNG is made half as efficient, because the output length is effectively halved.
According to John Kelsey, the option in the standard to choose a verifiably random "Q" was added in response to the suspected backdoor, though in such a way that FIPS 140-2 validation could only be attained by using the possibly backdoored "Q". Steve Marquess (who helped implement NIST SP 800-90A for OpenSSL) speculated that this requirement to use the potentially backdoored points could be evidence of NIST complicity. It is not clear why the standard did not specify the default "Q" as a verifiably generated nothing up my sleeve number, or why the standard did not use greater truncation, which Brown's patent said could be used as the "primary measure for preventing a key escrow attack". The small truncation was unusual compared to previous EC PRGs, which according to Matthew Green had only output 1/2 to 2/3 of the bits in the output function. The low truncation was shown in 2006 by Gjøsteen to make the RNG predictable and therefore unusable as a CSPRNG, even if "Q" had not been chosen to contain a backdoor. The standard says that implementations "should" use the small max_outlen provided, but gives the option of outputting a multiple of 8 fewer bits. Appendix C of the standard gives a loose argument that outputting fewer bits will make the output less uniformly distributed. Brown's 2006 security proof relies on outlen being much smaller than the default max_outlen value in the standard.
The ANSI X9F1 Tool Standards and Guidelines Group which discussed the backdoor also included three employees from the prominent security company RSA Security. In 2004, as a result of a secret $10 million deal with the NSA, RSA Security made Dual_EC_DRBG, which contained the NSA backdoor, the default CSPRNG in their RSA BSAFE library. In 2013, after the New York Times reported that Dual_EC_DRBG contained a backdoor by the NSA, RSA Security said they had not been aware of any backdoor when they made the deal with the NSA, and told their customers to switch CSPRNG. In the 2014 RSA Conference keynote, RSA Security Executive Chairman Art Coviello explained that RSA had seen declining revenue from encryption, and had decided to stop being "drivers" of independent encryption research, and instead to "put their trust behind" the standards and guidance from standards organizations such as NIST.
A draft of NIST SP 800-90A including the Dual_EC_DRBG was published in December 2005. The final NIST SP 800-90A including Dual_EC_DRBG was published in June 2006. Documents leaked by Snowden have been interpreted as suggesting that the NSA backdoored Dual_EC_DRBG, with those making the allegation citing the NSA's work during the standardization process to eventually become the sole editor of the standard. The early usage of Dual_EC_DRBG by RSA Security (for which NSA was later reported to have secretly paid $10 million) was cited by the NSA as an argument for Dual_EC_DRBG's acceptance into the NIST SP 800-90A standard. RSA Security subsequently cited Dual_EC_DRBG's acceptance into the NIST standard as a reason they used Dual_EC_DRBG.
Daniel R. L. Brown's March 2006 paper on the security reduction of Dual_EC_DRBG mentions the need for more output truncation and a randomly chosen "Q", but mostly in passing, and does not mention the conclusion from his patent that these two defects in Dual_EC_DRBG together can be used as a backdoor. Brown writes in the conclusion: "Therefore, the ECRNG should be a serious consideration, and its high efficiency makes it suitable even for constrained environments." Note that others have criticised Dual_EC_DRBG as being extremely slow, with Bruce Schneier concluding "It's too slow for anyone to willingly use it", and Matthew Green saying Dual_EC_DRBG is "Up to a thousand times slower" than the alternatives. The potential for a backdoor in Dual_EC_DRBG was not widely publicised outside of internal standard group meetings. It was only after Dan Shumow and Niels Ferguson's 2007 presentation that the potential for a backdoor became widely known. Shumow and Ferguson had been tasked with implementing Dual_EC_DRBG for Microsoft, and at least Ferguson had discussed the possible backdoor in a 2005 X9 meeting. Bruce Schneier wrote in a 2007 Wired article that the Dual_EC_DRBG's flaws were so obvious that nobody would use it: "It makes no sense as a trap door: It's public, and rather obvious. It makes no sense from an engineering perspective: It's too slow for anyone to willingly use it." Schneier was apparently unaware that RSA Security had used Dual_EC_DRBG as the default in BSAFE since 2004.
OpenSSL implemented all of NIST SP 800-90A including Dual_EC_DRBG at the request of a client. The OpenSSL developers were aware of the potential backdoor because of Shumow and Ferguson's presentation, and wanted to use the method included in the standard to choose a guaranteed non-backdoored "P" and "Q", but were told that to get FIPS 140-2 validation they would have to use the default "P" and "Q". OpenSSL chose to implement Dual_EC_DRBG despite its dubious reputation for the sake of completeness, noting that OpenSSL tried to be complete and implements many other insecure algorithms. OpenSSL did not use Dual_EC_DRBG as the default CSPRNG, and it was discovered in 2013 that a bug made the OpenSSL implementation of Dual_EC_DRBG non-functional, meaning that no one could have been using it.
Bruce Schneier reported in December 2007 that Microsoft added Dual_EC_DRBG support to Windows Vista, though not enabled by default, and Schneier warned against the known potential backdoor. Windows 10 and later will silently replace calls to Dual_EC_DRBG with calls to CTR_DRBG based on AES.
On September 9, 2013, following the Snowden leak and the "New York Times" report on the backdoor in Dual_EC_DRBG, the National Institute of Standards and Technology (NIST) ITL announced that in light of community security concerns, it was reissuing SP 800-90A as a draft standard, and re-opening SP 800-90B/C for public comment. NIST now "strongly recommends" against the use of Dual_EC_DRBG, as specified in the January 2012 version of SP 800-90A. The discovery of a backdoor in a NIST standard has been a major embarrassment for NIST.
RSA Security had kept Dual_EC_DRBG as the default CSPRNG in BSAFE even after the wider cryptographic community became aware of the potential backdoor in 2007, but there does not seem to have been a general awareness of BSAFE's usage of Dual_EC_DRBG as a user option in the community. Only after widespread concern about the backdoor was there an effort to find software which used Dual_EC_DRBG, of which BSAFE was by far the most prominent found. After the 2013 revelations, RSA Security Chief of Technology Sam Curry provided Ars Technica with a rationale for originally choosing the flawed Dual_EC_DRBG standard as default over the alternative random number generators. The technical accuracy of the statement was widely criticized by cryptographers, including Matthew Green and Matt Blaze. On December 20, 2013, Reuters reported that RSA had accepted a secret payment of $10 million from the NSA to set the Dual_EC_DRBG random number generator as the default in two of its encryption products. On December 22, 2013, RSA posted a statement to its corporate blog "categorically" denying a secret deal with the NSA to insert a "known flawed random number generator" into its BSAFE toolkit.
Following the New York Times story asserting that Dual_EC_DRBG contained a backdoor, Brown (who had applied for the backdoor patent and published the security reduction) wrote an email to an IETF mailing list defending the Dual_EC_DRBG standard process:
1. Dual_EC_DRBG, as specified in NIST SP 800-90A and ANSI X9.82-3, allows an alternative choice of constants "P" and "Q". As far as I know, the alternatives do not admit a known feasible backdoor. In my view, it is incorrect to imply that Dual_EC_DRBG always has a backdoor, though I admit a wording to qualify the affected cases may be awkward.
2. Many things are obvious in hindsight. I'm not sure if this was obvious.
8. All considered, I don't see how the ANSI and NIST standards for Dual_EC_DRBG can be viewed as a subverted standard, per se. But maybe that's just because I'm biased or naive.
Software and hardware which contained the possible backdoor.
Implementations which used Dual_EC_DRBG would usually have gotten it via a library. At least RSA Security (BSAFE library), OpenSSL, Microsoft, and Cisco have libraries which included Dual_EC_DRBG, but only BSAFE used it by default. According to the Reuters article which revealed the secret $10 million deal between RSA Security and NSA, RSA Security's BSAFE was the most important distributor of the algorithm. There was a flaw in OpenSSL's implementation of Dual_EC_DRBG that made it non-functional outside test mode, from which OpenSSL's Steve Marquess concluded that nobody used OpenSSL's Dual_EC_DRBG implementation.
A list of products which have had their CSPRNG-implementation FIPS 140-2 validated is available at the NIST. The validated CSPRNGs are listed in the Description/Notes field. Note that even if Dual_EC_DRBG is listed as validated, it may not have been enabled by default. Many implementations come from a renamed copy of a library implementation.
The BlackBerry software is an example of non-default use. It includes support for Dual_EC_DRBG, but not as the default. BlackBerry Ltd has, however, not issued an advisory to any of its customers who may have used it, because they do not consider the probable backdoor a vulnerability. Jeffrey Carr quotes a letter from BlackBerry:
The Dual EC DRBG algorithm is only available to third party developers via the Cryptographic APIs on the [Blackberry] platform. In the case of the Cryptographic API, it is available if a 3rd party developer wished to use the functionality and explicitly designed and developed a system that requested the use of the API.
Bruce Schneier has pointed out that even if not enabled by default, having a backdoored CSPRNG implemented as an option can make it easier for NSA to spy on targets which have a software-controlled command-line switch to select the encryption algorithm, or a "registry" system, like most Microsoft products, such as Windows Vista:
A Trojan is really, really big. You can’t say that was a mistake. It’s a massive piece of code collecting keystrokes. But changing a bit-one to a bit-two [in the registry to change the default random number generator on the machine] is probably going to be undetected. It is a low conspiracy, highly deniable way of getting a backdoor. So there’s a benefit to getting it into the library and into the product.
In December 2013, a proof of concept backdoor was published that uses the leaked internal state to predict subsequent random numbers, an attack viable until the next reseed.
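The state-recovery attack behind such a proof of concept can be illustrated with a toy model. The sketch below is purely illustrative: it replaces the elliptic-curve group with a multiplicative group modulo a small prime, omits the x-coordinate map and the output truncation, and uses made-up constants. A real attack on the truncated outputs must additionally enumerate the missing bits of each output.

```python
# Toy sketch of the Dual_EC_DRBG kleptographic backdoor. Assumptions: a
# multiplicative group mod a small prime stands in for the elliptic-curve
# group, the x-coordinate map and truncation are omitted, and all numbers
# below are illustrative placeholders.
p = 1019                 # small prime; we work with the units mod p
Q = 2                    # public constant Q
d = 77                   # secret trapdoor relation: P = Q^d (attacker-only)
P = pow(Q, d, p)         # public constant P, published alongside Q

def next_state(s):       # s_k = g_P(s_{k-1}) = P^(s_{k-1}) mod p
    return pow(P, s, p)

def output(s):           # r_k = g_Q(s_k) = Q^(s_k) mod p (no truncation here)
    return pow(Q, s, p)

# Honest generator: seed the state, then emit two consecutive outputs.
s1 = next_state(123)
r1 = output(s1)
s2 = next_state(s1)
r2 = output(s2)

# Attacker observes only r1. Since r1 = Q^(s1) and P = Q^d:
#   r1^d = Q^(d*s1) = P^(s1) = s2, the generator's next internal state.
s2_guess = pow(r1, d, p)
assert s2_guess == s2
assert output(s2_guess) == r2   # all later outputs are now predictable
```

Once the internal state is recovered, every subsequent output follows deterministically until the generator is reseeded, which is exactly the attack window described above.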
In December 2015, Juniper Networks announced that some revisions of their ScreenOS firmware used Dual_EC_DRBG with the suspect "P" and "Q" points, creating a backdoor in their firewall. Originally it was supposed to use a Q point chosen by Juniper, which may or may not have been generated in a provably safe way. Dual_EC_DRBG was then used to seed an ANSI X9.17 PRNG. This would have obfuscated the Dual_EC_DRBG output, thus closing the backdoor. However, a "bug" in the code exposed the raw output of the Dual_EC_DRBG, hence compromising the security of the system. This backdoor was then backdoored itself by an unknown party which changed the Q point and some test vectors. Allegations that the NSA had persistent backdoor access through Juniper firewalls had already been published in 2013 by "Der Spiegel". The kleptographic backdoor is an example of the NSA's NOBUS policy of having security holes that only they can exploit.
References.
|
[
{
"math_id": 0,
"text": "\ns_{k} = g_P(s_{k-1})\n"
},
{
"math_id": 1,
"text": "\nr_k = g_Q(s_{k})\n"
},
{
"math_id": 2,
"text": "g_P(x)"
},
{
"math_id": 3,
"text": "g_Q(x)"
},
{
"math_id": 4,
"text": "F_p"
},
{
"math_id": 5,
"text": "\\mathbb{Z}/p\\mathbb{Z}"
},
{
"math_id": 6,
"text": "p = \\mathtt{ffffffff00000000ffffffffffffffffbce6faada7179e84f3b9cac2fc632551}_{16}"
},
{
"math_id": 7,
"text": "\ny^2= x^3- 3x + b\n"
},
{
"math_id": 8,
"text": "b = \\mathtt{5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b}_{16}\n"
},
{
"math_id": 9,
"text": "E({\\displaystyle F_{p}})"
},
{
"math_id": 10,
"text": "P, Q \\in E(F_p)"
},
{
"math_id": 11,
"text": "\\begin{align}\nP_x &= \\mathtt{6b17d1f2\\ e12c4247\\ f8bce6e5\\ 63a440f2\\ 77037d81\\ 2deb33a0\\ f4a13945\\ d898c296}_{~16} \\\\\nP_y &= \\mathtt{4fe342e2\\ fe1a7f9b\\ 8ee7eb4a\\ 7c0f9e16\\ 2bce3357\\ 6b315ece\\ cbb64068\\ 37bf51f5}_{~16} \\\\\nQ_x &= \\mathtt{c97445f4\\ 5cdef9f0\\ d3e05e1e\\ 585fc297\\ 235b82b5\\ be8ff3ef\\ ca67c598\\ 52018192}_{~16} \\\\\nQ_y &= \\mathtt{b28ef557\\ ba31dfcb\\ dd21ac46\\ e2a91e3c\\ 304f44cb\\ 87058ada\\ 2cb81515\\ 1e610046}_{~16}\n\\end{align}"
},
{
"math_id": 12,
"text": "X(x,y) = x"
},
{
"math_id": 13,
"text": "t(x) = x\\ \\text{mod} \\ \\frac{p}{2^{16}}"
},
{
"math_id": 14,
"text": "g_P"
},
{
"math_id": 15,
"text": "g_Q"
},
{
"math_id": 16,
"text": "\ng_P(x) = X(P^x)\n"
},
{
"math_id": 17,
"text": "\ng_Q(x) = t(X(Q^x))\n"
},
{
"math_id": 18,
"text": "\ns_1 = g_P(seed)\n"
},
{
"math_id": 19,
"text": "\nr_1, r_2, \\ldots\n"
},
{
"math_id": 20,
"text": "eQ=P"
}
] |
https://en.wikipedia.org/wiki?curid=14259066
|
1425916
|
Fulkerson Prize
|
Award for advancements in discrete mathematics
The Fulkerson Prize for outstanding papers in the area of discrete mathematics is sponsored jointly by the Mathematical Optimization Society (MOS) and the American Mathematical Society (AMS). Up to three awards of $1,500 each are presented at each (triennial) International Symposium of the MOS. Originally, the prizes were paid out of a memorial fund administered by the AMS that was established by friends of the late Delbert Ray Fulkerson to encourage mathematical excellence in the fields of research exemplified by his work. The prizes are now funded by an endowment administered by the MOS.
References.
|
[
{
"math_id": 0,
"text": "O(\\log n)"
},
{
"math_id": 1,
"text": "O(\\sqrt{\\log n})"
},
{
"math_id": 2,
"text": "O^*(n^3)"
}
] |
https://en.wikipedia.org/wiki?curid=1425916
|
14259252
|
Deformation mechanism
|
Microscopic processes responsible for changes in a material's structure, shape and volume
In geology, a deformation mechanism is a process occurring at a microscopic scale that is responsible for changes in a material's internal structure, shape and volume. The process involves planar discontinuity and/or displacement of atoms from their original position within a crystal lattice structure. These small changes are preserved in various microstructures of materials such as rocks, metals and plastics, and can be studied in depth using optical or digital microscopy.
Processes.
Deformation mechanisms are commonly characterized as brittle, ductile, and brittle-ductile. The driving mechanism responsible is an interplay between internal (e.g. composition, grain size and lattice-preferred orientation) and external (e.g. temperature and fluid pressure) factors. These mechanisms produce a range of microstructures studied in rocks to constrain the conditions, rheology, dynamics, and motions of tectonic events. More than one mechanism may be active under a given set of conditions and some mechanisms can develop independently. Detailed microstructure analysis can be used to define the conditions and timing under which individual deformation mechanisms dominate for some materials. Common deformation mechanisms include fracturing, cataclastic flow, grain boundary sliding, diffusive mass transfer, dislocation creep, and dynamic recrystallization (with recovery); each is described below.
Fracturing.
Fracturing is a brittle deformation process that creates permanent linear breaks that are not accompanied by displacement within materials. These linear breaks or openings can be independent or interconnected. For fracturing to occur, the ultimate strength of the material needs to be exceeded to the point where it ruptures. Rupturing is aided by the accumulation of high differential stress (the difference between the maximum and minimum stress acting on the object). Most fractures grow into faults. However, the term fault is only used when the fracture plane accommodates some degree of movement. Fracturing can happen across all scales, from microfractures to macroscopic fractures and joints in rocks.
Cataclastic flow.
Cataclasis, or comminution, is a non-elastic brittle mechanism that operates under low to moderate homologous temperatures, low confining pressure and relatively high strain rates. It occurs only above a certain differential stress level, which is dependent on fluid pressure and temperature. Cataclasis accommodates the fracture and crushing of grains, causing grain size reduction, along with frictional sliding on grain boundaries and rigid-body grain rotation. Intense cataclasis occurs in thin zones along slip or fault surfaces where extreme grain size reduction occurs. In rocks, cataclasis forms a cohesive and fine-grained fault rock called cataclasite. Cataclastic flow occurs during shearing when a rock deforms by microfracturing and frictional sliding, where tiny fractures (microcracks) and associated rock fragments move past each other. Cataclastic flow usually occurs at diagenetic to low-grade metamorphic conditions. However, this depends on the mineralogy of the material and the extent of pore fluid pressure. Cataclastic flow is generally unstable and will terminate by the localization of deformation into slip on fault planes.
Grain boundary sliding.
Grain boundary sliding is a plastic deformation mechanism where crystals can slide past each other without friction and without creating significant voids as a result of diffusion. The deformation process associated with this mechanism is referred to as granular flow. The absence of voids results from solid-state diffusive mass transfer, locally enhanced crystal plastic deformation, or solution and precipitation of a grain boundary fluid. This mechanism operates at a low strain rate produced by neighbor switching. Grain boundary sliding is grain size- and temperature-dependent. It is favored by high temperatures and the presence of very fine-grained aggregates where diffusion paths are relatively short. Large strains operating in this mechanism do not result in the development of a lattice preferred orientation or any appreciable internal deformation of the grains, except at the grain boundary to accommodate the grain sliding; this process is called superplastic deformation.
Diffusive mass transfer.
In this group of mechanisms, strain is accommodated by the migration of vacancies in the crystallographic lattice. This results in a change in crystal shape involving the transfer of mass by diffusion. These migrations are oriented towards sites of maximum stress and are limited by the grain boundaries, which conditions a crystallographic shape fabric or strain. The result is a more perfect crystal. This process is grain-size sensitive and occurs at low strain rates or very high temperatures, and is accommodated by the migration of lattice defects from areas of low to those of high compressive stress. The main mechanisms of diffusive mass transfer are Nabarro–Herring creep, Coble creep, and pressure solution.
Nabarro–Herring creep, or volume diffusion, acts at high homologous temperatures and is grain size dependent, with the strain rate inversely proportional to the square of the grain size (creep rate decreases as the grain size increases). During Nabarro–Herring creep, the diffusion of vacancies occurs through the crystal lattice (microtectonics), which causes grains to elongate along the stress axis. Nabarro–Herring creep has a weak stress dependence.
Coble creep, or grain-boundary diffusion, involves the diffusion of vacancies along grain boundaries to elongate the grains along the stress axis. Coble creep has a stronger grain-size dependence than Nabarro–Herring creep and occurs at lower temperatures while remaining temperature dependent. It plays a more important role than Nabarro–Herring creep and is more important in the deformation of the plastic crust.
Dislocation creep.
Dislocation creep is a non-linear (plastic) deformation mechanism in which dislocations in the crystal glide and climb past obstruction sites within the crystal lattice. These migrations within the crystal lattice can occur in one or more directions and are triggered by the effects of increased differential stress. It occurs at lower temperatures relative to diffusion creep. The mechanical process presented in dislocation creep is called slip. The principal directions in which dislocation takes place are defined by a combination of slip planes and weak crystallographic orientations resulting from vacancies and imperfections in the atomic structure. Each dislocation causes a part of the crystal to shift by one lattice point along the slip plane, relative to the rest of the crystal. Each crystalline material has different distances between atoms or ions in the crystal lattice, resulting in different lengths of displacement. The vector that characterizes the length and orientation of the displacement is called the Burgers vector. The development of strong lattice preferred orientation can be interpreted as evidence for dislocation creep, as dislocations move only in specific lattice planes.
Dislocation glide cannot act on its own to produce large strains due to the effects of strain hardening, where a dislocation ‘tangle’ can inhibit the movement of other dislocations, which then pile up behind the blocked ones, causing the crystal to become difficult to deform. Diffusion and dislocation creep can occur simultaneously. The effective viscosity of a stressed material under given conditions of temperature, pressure, and strain rate will be determined by the mechanism that delivers the smallest viscosity. Some form of recovery process, such as dislocation climb or grain-boundary migration, must also be active. Slipping of the dislocation results in a more stable state for the crystal as the pre-existing imperfection is removed. It requires much lower differential stress than that required for brittle fracturing. This mechanism does not damage the mineral or reduce the internal strength of crystals.
Dynamic recrystallization.
Dynamic recrystallization is the process of removing the internal strain that remains in grains during deformation. This happens by the reorganization of a material with a change in grain size, shape, and orientation within the same mineral. When recrystallization occurs after deformation has come to an end, and particularly at high temperatures, the process is called static recrystallization or annealing. Dynamic recrystallization results in grain-size reduction, while static recrystallization results in the formation of larger equant grains.
Dynamic recrystallization can occur under a wide range of metamorphic conditions, and can strongly influence the mechanical properties of the deforming material. Dynamic recrystallization is the result of two end-member processes: (1) The formation and rotation of subgrains (rotation recrystallization) and (2) grain-boundary migration (migration recrystallization).
Deformation mechanism map.
A "deformation mechanism map" is a way of representing the dominant deformation mechanism in a material loaded under a given set of conditions. The technique is applicable to all crystalline materials, metallurgical as well as geological. Additionally, work has been conducted regarding the use of deformation maps to nanostructured or very fine grain materials. Deformation mechanism maps usually consist of some kind of stress plotted against some kind of temperature axis, typically stress normalized using the shear modulus versus homologous temperature with contours of strain rate. The normalized shear stress is plotted on a log scale. While plots of normalized shear stress vs. homologous temperature are most common, other forms of deformation mechanism maps include shear strain rate vs. normalized shear stress and shear strain rate vs. homologous temperature. Thus deformation maps can be constructed using any two of stress (normalized), temperature (normalized), and strain rate, with contours of the third variable. A stress/strain rate plot is useful because power-law mechanisms then have contours of temperature which are straight lines.
For a given set of operating conditions, calculations are conducted and experiments performed to determine the predominant mechanism operative for a given material. Constitutive equations for the type of mechanism have been developed for each deformation mechanism and are used in the construction of the maps. The theoretical shear strength of the material is independent of temperature and located along the top of the map, with the regimes of plastic deformation mechanisms below it. Constant strain rate contours can be constructed on the maps using the constitutive equations of the deformation mechanisms which makes the maps extremely useful.
Process maps.
The same technique has been used to construct "process maps" for sintering, diffusion bonding, hot isostatic pressing, and indentation.
Construction.
Repeated experiments are performed to characterize the mechanism by which the material deforms. The dominant mechanism is the one that contributes most of the continuous deformation rate (strain rate); however, at any given level of stress and temperature, more than one of the creep and plasticity mechanisms may be active. The boundaries between the fields are determined from the constitutive equations of the deformation mechanisms by solving for stress as a function of temperature. Along these boundaries, the deformation rates for the two neighboring mechanisms are equal. The programming code used for many of the published maps is open source, and an archive of its development is online. Many researchers have also written their own codes to make these maps.
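The boundary condition described above (two neighboring mechanisms deforming at equal rates along a field boundary) can be sketched numerically. In this hedged example, a generic power-law creep rate is equated with a generic linear diffusional creep rate at fixed temperature, and the boundary stress is found by bisection in log space; the two rate laws and every constant are illustrative placeholders, not fits to any real material.

```python
import math

# Two illustrative constitutive laws (assumed forms, not real material fits).
def powerlaw_rate(s, T, A=1e13, n=5, Q=3.0e5):
    """Dislocation (power-law) creep: rate ~ exp(-Q/RT) * s^n, s = sigma/mu."""
    return A * math.exp(-Q / (8.314 * T)) * s**n

def diffusional_rate(s, T, B=1e2, Q=1.7e5):
    """Diffusional creep: rate linear in the normalized stress s."""
    return B * math.exp(-Q / (8.314 * T)) * s

def boundary_stress(T, lo=1e-8, hi=1e-1):
    """Normalized stress at which the two mechanisms deform at equal rates."""
    f = lambda s: math.log(powerlaw_rate(s, T) / diffusional_rate(s, T))
    for _ in range(100):
        mid = math.sqrt(lo * hi)      # bisect in log-stress space
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return math.sqrt(lo * hi)
```

Because the power-law rate rises much faster with stress than the linear rate, the boundary moves to lower normalized stress as temperature increases, which is how the diffusional-flow field widens near the top of the temperature axis.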
The main regions in a typical deformation mechanism map and their constitutive equations are shown in the following subsections.
Plasticity region.
The plasticity region is at the top of deformation map (at the highest normalized stresses), and is below the boundary set by the ideal strength. In this region the strain rate involves an exponential term. This equation is shown below, where formula_0 is the applied shear stress, formula_1 is the shear modulus, formula_2 is the energy barrier to dislocation glide, "k" is the Boltzmann constant, and formula_3 is the "athermal flow strength" which is a function of the obstacles to dislocation glide.
formula_4
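A minimal numeric sketch of the rate law above, with the proportionality constant set to 1 and all material parameters chosen as illustrative placeholders:

```python
import math

# Hedged sketch of the plasticity-region rate law; the proportionality
# constant is set to 1 and every parameter below is an illustrative
# placeholder, not data for a real material.
k_B = 1.380649e-23  # Boltzmann constant, J/K

def plasticity_rate(sigma_s, mu=4e10, dE=1e-19, tau_hat=4e8, T=300.0):
    """Rate ~ (sigma/mu)^2 * exp(-(dE/kT) * (1 - sigma/tau_hat))."""
    return (sigma_s / mu)**2 * math.exp(-dE / (k_B * T) * (1 - sigma_s / tau_hat))
```

The exponential term suppresses the rate strongly below the athermal flow strength; as the applied shear stress approaches formula_3, the barrier term vanishes and glide becomes easy, which is why this regime sits at the highest normalized stresses on the map.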
Power-law creep region.
In this region, the dominant deformation mechanism is power law creep, such that the strain rate goes as the stress raised to a stress exponent "n." This region is dominated by dislocation creep. The value of this stress exponent is dependent upon the material and the microstructure. If deformation is occurring by slip, "n"=1-8, and for grain boundary sliding "n"=2 or 4.
The general equation for power law creep is as follows, where formula_5 is a dimensionless constant relating shear strain rate and stress, "μ" is the shear modulus, "b" is the Burgers vector, "k" is the Boltzmann constant, "T" is the temperature, "n" is the stress exponent, formula_0 is the applied shear stress, and formula_6 is the effective diffusion constant.
formula_7
Within the power law creep region, there are two subsections corresponding to low-temperature power law creep, which is dominated by core-controlled dislocation motion, and high-temperature power law creep, which is controlled by diffusion in the lattice. Low-temperature core diffusion, sometimes called pipe diffusion, occurs because vacancies diffuse more quickly along the pipe-like core of a dislocation. The effective diffusion coefficient in the strain rate equation depends on whether the system is dominated by core diffusion or lattice diffusion, and can be generalized as follows, where formula_8 is the volumetric lattice diffusion constant, formula_9 is the area corresponding to the dislocation core, formula_10 is the diffusion coefficient for the core, and "b" is the Burgers vector.
formula_11
In the high temperature region, the effective diffusion constant is simply the volumetric lattice diffusion constant, whereas at low temperatures the diffusion constant is given by the expression formula_12. Thus in the high temperature power law creep region, the strain rate goes as formula_13, and in the low temperature power law creep region the strain rate goes as formula_14.
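The two regimes can be sketched numerically. In this hedged example the Arrhenius prefactors, activation energies, Burgers vector, and core cross-section are all illustrative placeholders (not data for any real metal or mineral); the point is only the structure of the effective diffusivity, whose core term carries the extra stress-squared factor.

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
R = 8.314               # gas constant, J/(mol K)
b = 2.5e-10             # Burgers vector magnitude, m (assumed)
a_c = 10 * b**2         # dislocation-core cross-section, m^2 (assumed)

def D_arrhenius(D0, Q, T):
    """Arrhenius diffusivity D = D0 * exp(-Q/(R*T)); Q in J/mol (assumed)."""
    return D0 * math.exp(-Q / (R * T))

def D_eff(sigma_s, mu, T):
    """Effective diffusivity: lattice term plus stress-weighted core term."""
    D_v = D_arrhenius(1e-4, 3.0e5, T)   # lattice (volume) diffusion (assumed)
    D_c = D_arrhenius(1e-5, 1.7e5, T)   # core ("pipe") diffusion (assumed)
    return D_v + 10 * (sigma_s / mu)**2 * a_c * D_c / b**2

def strain_rate(sigma_s, mu, T, n=5, A2=1e6):
    """Power-law creep rate from the general equation above."""
    return A2 * D_eff(sigma_s, mu, T) * mu * b / (k_B * T) * (sigma_s / mu)**n
```

At low temperature the core term dominates, so the creep rate picks up the extra stress-squared factor (effective exponent n+2); at high temperature lattice diffusion takes over and the rate scales with exponent n.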
Diffusional flow region.
Diffusional flow is a regime typically below dislocation creep and occurs at high temperatures due to the diffusion of point defects in the material. Diffusional flow can be further broken down into more specific mechanisms: Nabarro–Herring creep, Coble creep, and Harper–Dorn creep.
While most materials will exhibit Nabarro-Herring creep and Coble creep, Harper-Dorn creep is quite rare, having only been reported in a select few materials at low stresses including aluminium, lead, and tin.
The equation for Nabarro-Herring creep is dominated by vacancy diffusion within the lattice, whereas Coble creep is dominated by vacancy diffusion within the grain boundaries. The equation for these mechanisms is shown below, where formula_0 is the applied shear stress, Ω is the atomic volume, "k" is the Boltzmann constant, "d" is the grain size, "T" is the temperature, and formula_6 is the effective diffusion coefficient.
formula_15
The effective diffusion coefficient is formula_6 = formula_8 (the volumetric diffusion constant) for Nabarro-Herring creep, which dominates at high temperatures, and formula_16 (where formula_17 is the grain boundary width and formula_18 is the diffusion coefficient in the boundary) for Coble creep, which dominates at low temperatures.
From these equations it becomes clear that the boundary between boundary diffusion and lattice diffusion is heavily dependent on grain size. For systems with larger grains, the Nabarro-Herring lattice diffusion region of the deformation mechanism map will be larger than in maps with very small grains. Additionally, the larger the grains, the less diffusional creep and thus the power-law creep region of the map will be larger for large grained materials. Grain boundary engineering is thus an effective strategy to manipulate creep rates.
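The grain-size competition described above can be sketched numerically. In this hedged example both rates follow the diffusional-flow equation, with the effective diffusivity taken as the lattice value for Nabarro-Herring creep and as the boundary term for Coble creep; every material constant is an illustrative placeholder, not real data.

```python
import math

R = 8.314               # gas constant, J/(mol K)
k_B = 1.380649e-23      # Boltzmann constant, J/K
omega = 1.2e-29         # atomic volume, m^3 (assumed)
alpha = 14.0            # dimensionless geometric factor (assumed)

def nh_rate(sigma, d, T):
    """Nabarro-Herring creep: lattice diffusion, rate ~ 1/d^2."""
    D_v = 1e-4 * math.exp(-3.0e5 / (R * T))   # lattice diffusivity (assumed)
    return alpha * D_v / d**2 * omega * sigma / (k_B * T)

def coble_rate(sigma, d, T):
    """Coble creep: grain-boundary diffusion, rate ~ 1/d^3."""
    D_b = 1e-5 * math.exp(-1.7e5 / (R * T))   # boundary diffusivity (assumed)
    delta = 1e-9                              # boundary width, m (assumed)
    D_eff = math.pi * delta / d * D_b         # effective diffusivity
    return alpha * D_eff / d**2 * omega * sigma / (k_B * T)
```

Because the Coble effective diffusivity carries an extra 1/d factor, its rate scales as 1/d^3 versus 1/d^2 for Nabarro-Herring creep, so refining the grain size (or lowering the temperature) shifts the map boundary in favor of Coble creep.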
Reading.
For a given stress profile and temperature, the point lies in a particular "deformation field". If the values place the point near the center of a field, the primary mechanism by which the material will deform can be identified with reasonable confidence, along with the type and rate of failure expected: grain boundary diffusion, plasticity, Nabarro–Herring creep, etc. If, however, the stress and temperature conditions place the point near the boundary between two deformation mechanism regions, then the dominating mechanism is less clear. Near the boundary of the regimes there can be a combination of mechanisms of deformation occurring simultaneously. Deformation mechanism maps are only as accurate as the number of experiments and calculations undertaken in their creation.
For a given stress and temperature, the strain rate and deformation mechanism of a material is given by a point on the map. By comparing maps of various materials, crystal structures, bonds, grain sizes, etc., studies of these materials properties on plastic flow can be conducted and a more complete understanding of deformation in materials is obtained.
Examples.
Above the theoretical shear strength of the material, a type of defect-less flow can still occur, shearing the material. Dislocation motion through glide (any temperature) or dislocation creep (at high temperatures) is a typical mechanism found at high stresses in deformation maps.
Deformation Mechanisms in Polymers.
Polymer melts exhibit different deformation mechanisms when subjected to shear or tensile stresses. For example, a polymer melt's ductility can increase when a stimulus, such as light, causes fragmentation of the polymer chains through bond breaking. This process is known as chain scission. In the low-temperature regime (T < Tg), where the polymer is glassy, crazing or shear banding can occur. The former mechanism resembles crack formation, but actually involves the formation of fibrils separated by porous domains or voids. The latter mechanism (shear banding) involves the formation of localized regions of plastic deformation, which typically arise near the position of the maximal shear point in the polymer. Both crazing and shear banding are thus deformation mechanisms observed in glassy polymers.
For crystalline polymers, the deformation mechanism is best described by a stress-strain curve for a crystalline polymer, such as nylon. The stress-strain behavior exhibits four characteristic regions. The first region is the linear-elastic regime, where the stress-strain behavior is elastic with no plastic deformation. The characteristic deformation mechanism in the second region is yielding, where plastic deformation can occur in the form of phenomena such as twinning. The third region shows the formation of a neck, and the fourth region is characterized by a steep increase in stress due to viscous flow. Additionally, region four corresponds to alignment and elongation of the polymer backbone from its coiled or folded state, eventually leading to fracture.
|
[
{
"math_id": 0,
"text": "\\sigma_s"
},
{
"math_id": 1,
"text": "\\mu "
},
{
"math_id": 2,
"text": "\\Delta E"
},
{
"math_id": 3,
"text": "\\widehat{\\tau}"
},
{
"math_id": 4,
"text": "\\dot{\\gamma}\\propto (\\frac{\\sigma_s}{\\mu})^2 \\exp[-\\frac{\\Delta E}{kT}(1-\\frac{\\sigma_s}{\\widehat{\\tau}})]"
},
{
"math_id": 5,
"text": "A_2\n"
},
{
"math_id": 6,
"text": "D_{eff}"
},
{
"math_id": 7,
"text": "\\dot{\\gamma}= \\frac{A_2D_{eff}\\mu b}{kT}(\\frac{\\sigma_s}{\\mu})^n"
},
{
"math_id": 8,
"text": "D_v"
},
{
"math_id": 9,
"text": "a_c"
},
{
"math_id": 10,
"text": "D_c"
},
{
"math_id": 11,
"text": "D_{eff}= D_v +10(\\frac{\\sigma_s}{\\mu})^2\\frac{a_cD_c}{b^2}"
},
{
"math_id": 12,
"text": "10(\\frac{\\sigma_s}{\\mu})^2\\frac{a_cD_c}{b^2}"
},
{
"math_id": 13,
"text": "(\\frac{\\sigma_s}{\\mu})^n"
},
{
"math_id": 14,
"text": "(\\frac{\\sigma_s}{\\mu})^{n+2}"
},
{
"math_id": 15,
"text": "\\dot{\\gamma}= \\alpha'' \\frac{D_{eff}}{d^2}\\frac{\\Omega\\sigma}{kT}"
},
{
"math_id": 16,
"text": "D_{eff}=\\frac{\\pi\\delta}{d}D_b"
},
{
"math_id": 17,
"text": "\\delta"
},
{
"math_id": 18,
"text": "D_b"
}
] |
https://en.wikipedia.org/wiki?curid=14259252
|
14260056
|
Malonate CoA-transferase
|
Class of enzymes
In enzymology, a malonate CoA-transferase (EC 2.8.3.3) is an enzyme that catalyzes the chemical reaction
acetyl-CoA + malonate formula_0 acetate + malonyl-CoA
Thus, the two substrates of this enzyme are acetyl-CoA and malonate, whereas its two products are acetate and malonyl-CoA.
This enzyme belongs to the family of transferases, specifically the CoA-transferases. The systematic name of this enzyme class is acetyl-CoA:malonate CoA-transferase. This enzyme is also called malonate coenzyme A-transferase. This enzyme participates in beta-alanine metabolism and propanoate metabolism.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14260056
|
14260066
|
Oxalate CoA-transferase
|
Class of enzymes
In enzymology, an oxalate CoA-transferase (EC 2.8.3.2) is an enzyme that catalyzes the chemical reaction
succinyl-CoA + oxalate formula_0 succinate + oxalyl-CoA
Thus, the two substrates of this enzyme are succinyl-CoA and oxalate, whereas its two products are succinate and oxalyl-CoA.
This enzyme belongs to the family of transferases, specifically the CoA-transferases. The systematic name of this enzyme class is succinyl-CoA:oxalate CoA-transferase. Other names in common use include succinyl-beta-ketoacyl-CoA transferase, and oxalate coenzyme A-transferase. This enzyme participates in glyoxylate and dicarboxylate metabolism.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14260066
|
14260078
|
Petromyzonol sulfotransferase
|
Class of enzymes
In enzymology, a petromyzonol sulfotransferase (EC 2.8.2.31) is an enzyme that catalyzes the chemical reaction
3'-phosphoadenylyl sulfate + 5alpha-cholan-3alpha,7alpha,12alpha,24-tetrol formula_0 adenosine 3',5'-bisphosphate + 5alpha-cholan-3alpha,7alpha,12alpha-triol 24-sulfate
Thus, the two substrates of this enzyme are 3'-phosphoadenylyl sulfate and 5alpha-cholan-3alpha,7alpha,12alpha,24-tetrol, whereas its two products are adenosine 3',5'-bisphosphate and 5alpha-cholan-3alpha,7alpha,12alpha-triol 24-sulfate.
This enzyme belongs to the family of transferases, specifically the sulfotransferases, which transfer sulfur-containing groups. The systematic name of this enzyme class is 3'-phosphoadenylyl-sulfate:5alpha-cholan-3alpha,7alpha,12alpha,24-tetrol sulfotransferase. This enzyme is also called PZ-SULT.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14260078
|
14260093
|
Propionate CoA-transferase
|
Class of enzymes
In enzymology, a propionate CoA-transferase (EC 2.8.3.1) is an enzyme that catalyzes the chemical reaction
acetyl-CoA + propanoate formula_0 acetate + propanoyl-CoA
Thus, the two substrates of this enzyme are acetyl-CoA and propanoate, whereas its two products are acetate and propanoyl-CoA.
This enzyme belongs to the family of transferases, specifically the CoA-transferases. The systematic name of this enzyme class is acetyl-CoA:propanoate CoA-transferase. Other names in common use include propionate coenzyme A-transferase, propionate-CoA:lactoyl-CoA transferase, propionyl CoA:acetate CoA transferase, and propionyl-CoA transferase. This enzyme participates in 3 metabolic pathways: pyruvate metabolism, propanoate metabolism, and styrene degradation.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14260093
|
14260104
|
Psychosine sulfotransferase
|
Type of enzyme
In enzymology, a psychosine sulfotransferase (EC 2.8.2.13) is an enzyme that catalyzes the chemical reaction:
3'-phosphoadenylyl sulfate + galactosylsphingosine formula_0 adenosine 3',5'-bisphosphate + psychosine sulfate
Thus, the two substrates of this enzyme are 3'-phosphoadenylyl sulfate and galactosylsphingosine, whereas its two products are adenosine 3',5'-bisphosphate and psychosine sulfate.
This enzyme belongs to the family of transferases, specifically the sulfotransferases, which transfer sulfur-containing groups. The systematic name of this enzyme class is 3'-phosphoadenylyl-sulfate:galactosylsphingosine sulfotransferase. Other names in common use include PAPS:psychosine sulphotransferase, and 3'-phosphoadenosine 5'-phosphosulfate-psychosine sulphotransferase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14260104
|
14260118
|
Quercetin-3,3'-bissulfate 7-sulfotransferase
|
Class of enzymes
In enzymology, a quercetin-3,3'-bissulfate 7-sulfotransferase (EC 2.8.2.28) is an enzyme that catalyzes the chemical reaction
3'-phosphoadenylyl sulfate + quercetin 3,3'-bissulfate formula_0 adenosine 3',5'-bisphosphate + quercetin 3,7,3'-trisulfate
Thus, the two substrates of this enzyme are 3'-phosphoadenylyl sulfate and quercetin 3,3'-bissulfate, whereas its two products are adenosine 3',5'-bisphosphate and quercetin 3,7,3'-trisulfate.
This enzyme belongs to the family of transferases, specifically the sulfotransferases, which transfer sulfur-containing groups. The systematic name of this enzyme class is 3'-phosphoadenylyl-sulfate:quercetin-3,3'-bissulfate 7-sulfotransferase. Other names in common use include flavonol 7-sulfotransferase, 7-sulfotransferase, and PAPS:flavonol 3,3'/3,4'-disulfate 7-sulfotransferase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14260118
|
14260556
|
Scymnol sulfotransferase
|
Class of enzymes
In enzymology, a scymnol sulfotransferase (EC 2.8.2.32) is an enzyme that catalyzes the chemical reaction
3'-Phosphoadenosine-5'-phosphosulfate + 5beta-scymnol formula_0 adenosine 3',5'-bisphosphate + 5beta-scymnol sulfate
Thus, the two substrates of this enzyme are 3'-phosphoadenosine-5'-phosphosulfate and 5beta-scymnol, whereas its two products are adenosine 3',5'-bisphosphate and 5beta-scymnol sulfate.
This enzyme belongs to the family of transferases, specifically the sulfotransferases, which transfer sulfur-containing groups. The systematic name of this enzyme class is 3'-Phosphoadenosine-5'-phosphosulfate:5beta-scymnol sulfotransferase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14260556
|
14260678
|
Strain rate
|
Rate of change in the linear deformation of a material with respect to time
In mechanics and materials science, strain rate is the time derivative of strain of a material. Strain rate has dimension of inverse time and SI units of inverse second, s−1 (or its multiples).
The strain rate at some point within the material measures the rate at which the distances of adjacent parcels of the material change with time in the neighborhood of that point. It comprises both the rate at which the material is expanding or shrinking (expansion rate), and also the rate at which it is being deformed by progressive shearing without changing its volume (shear rate). It is zero if these distances do not change, as happens when all particles in some region are moving with the same velocity (same speed and direction) and/or rotating with the same angular velocity, as if that part of the medium were a rigid body.
The strain rate is a concept of materials science and continuum mechanics that plays an essential role in the physics of fluids and deformable solids. In an isotropic Newtonian fluid, in particular, the viscous stress is a linear function of the rate of strain, defined by two coefficients, one relating to the expansion rate (the bulk viscosity coefficient) and one relating to the shear rate (the "ordinary" viscosity coefficient). In solids, higher strain rates can often cause normally ductile materials to fail in a brittle manner.
Definition.
In physics, the strain rate is generally defined as the derivative of the strain with respect to time: the rate at which strain occurs. Its precise definition depends on how strain is measured.
The strain is the ratio of two lengths, so it is a dimensionless quantity (a number that does not depend on the choice of measurement units). Thus, strain rate has dimension of inverse time and units of inverse second, s−1 (or its multiples).
Simple deformations.
In simple contexts, a single number may suffice to describe the strain, and therefore the strain rate. For example, when a long and uniform rubber band is gradually stretched by pulling at the ends, the strain can be defined as the ratio formula_0 between the amount of stretching and the original length of the band:
formula_1
where formula_2 is the original length and formula_3 its length at each time formula_4. Then the strain rate will be
formula_5
where formula_6 is the speed at which the ends are moving away from each other.
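The stretched-band case above reduces to two one-line formulas, sketched here in Python (the band length and pulling speed in the usage are illustrative):

```python
def strain(L, L0):
    """Engineering strain of a band stretched from original length L0 to L."""
    return (L - L0) / L0

def strain_rate(v, L0):
    """Strain rate when the ends separate at speed v:
    d/dt[(L(t) - L0)/L0] = v(t)/L0."""
    return v / L0
```

For example, a 0.10 m band stretched to 0.12 m has strain 0.2, and if its ends move apart at 0.005 m/s its strain rate is 0.05 s−1.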
The strain rate can also be expressed by a single number when the material is being subjected to parallel shear without change of volume; namely, when the deformation can be described as a set of infinitesimally thin parallel layers sliding against each other as if they were rigid sheets, in the same direction, without changing their spacing. This description fits the laminar flow of a fluid between two solid plates that slide parallel to each other (a Couette flow) or inside a circular pipe of constant cross-section (a Poiseuille flow). In those cases, the state of the material at some time formula_4 can be described by the displacement formula_7 of each layer, since an arbitrary starting time, as a function of its distance formula_8 from the fixed wall. Then the strain in each layer can be expressed as the limit of the ratio between the current relative displacement formula_9 of a nearby layer, divided by the spacing formula_10 between the layers:
formula_11
Therefore, the strain rate is
formula_12
where formula_13 is the current linear speed of the material at distance formula_8 from the wall.
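For a Couette flow with a linear velocity profile, the shear strain rate formula_12 is the constant slope of that profile. A short numerical sketch (gap width and plate speed below are illustrative assumptions):

```python
import numpy as np

# Couette flow between a fixed wall (y = 0) and a plate at y = h moving
# at speed V_wall: V(y) = V_wall * y / h, so dV/dy = V_wall / h everywhere.
h, V_wall = 1.0e-3, 0.1            # gap (m) and plate speed (m/s), assumed
y = np.linspace(0.0, h, 101)       # positions across the gap
V = V_wall * y / h                 # linear velocity profile
shear_rate = np.gradient(V, y)     # numerical dV/dy
```

Because the profile is linear, the finite-difference derivative recovers the exact constant shear rate V_wall/h at every point.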
The strain-rate tensor.
In more general situations, when the material is being deformed in various directions at different rates, the strain (and therefore the strain rate) around a point within a material cannot be expressed by a single number, or even by a single vector. In such cases, the rate of deformation must be expressed by a tensor, a linear map between vectors, that expresses how the relative velocity of the medium changes when one moves by a small distance away from the point in a given direction. This strain rate tensor can be defined as the time derivative of the strain tensor, or as the symmetric part of the gradient (derivative with respect to position) of the velocity of the material.
With a chosen coordinate system, the strain rate tensor can be represented by a symmetric 3×3 matrix of real numbers. The strain rate tensor typically varies with position and time within the material, and is therefore a (time-varying) tensor field. It only describes the local rate of deformation to first order; but that is generally sufficient for most purposes, even when the viscosity of the material is highly non-linear.
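Given the velocity gradient at a point, the strain-rate tensor is just its symmetric part, which is a one-line computation. A minimal sketch (the simple-shear gradient in the usage is an illustrative example):

```python
import numpy as np

def strain_rate_tensor(grad_v):
    """Strain-rate tensor: the symmetric part of the velocity gradient
    L_ij = dv_i/dx_j, i.e. D = (L + L^T) / 2."""
    L = np.asarray(grad_v, dtype=float)
    return 0.5 * (L + L.T)
```

For simple shear v = (g·y, 0, 0), the only nonzero gradient entry is dv_x/dy = g, and the resulting tensor has off-diagonal entries g/2 and zero trace, i.e. pure shearing with no expansion rate.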
Strain rate testing.
Materials can be tested using the so-called epsilon dot (formula_14) method which can be used to derive viscoelastic parameters through lumped parameter analysis.
Sliding rate or shear strain rate.
Similarly, the sliding rate, also called the deviatoric strain rate or shear strain rate is the derivative with respect to time of the shear strain. Engineering sliding strain can be defined as the angular displacement created by an applied shear stress, formula_15.
formula_16
Therefore the unidirectional sliding strain rate can be defined as:
formula_17
|
[
{
"math_id": 0,
"text": "\\epsilon"
},
{
"math_id": 1,
"text": "\\epsilon(t) = \\frac{L(t) - L_0}{L_0}"
},
{
"math_id": 2,
"text": "L_0"
},
{
"math_id": 3,
"text": "L(t)"
},
{
"math_id": 4,
"text": "t"
},
{
"math_id": 5,
"text": " \\dot {\\epsilon}(t) = \\frac {d \\epsilon} {dt} = \\frac {d}{dt} \\left ( \\frac{L(t) - L_0}{L_0} \\right ) = \\frac{1}{L_0} \\frac{dL(t)}{dt} = \\frac{v(t)}{L_0}"
},
{
"math_id": 6,
"text": "v(t)"
},
{
"math_id": 7,
"text": "X(y,t)"
},
{
"math_id": 8,
"text": "y"
},
{
"math_id": 9,
"text": "X(y+d,t) - X(y,t)"
},
{
"math_id": 10,
"text": "d"
},
{
"math_id": 11,
"text": "\\epsilon(y,t) = \\lim_{d\\rightarrow 0} \\frac{X(y+d,t) - X(y,t)}{d} = \\frac{\\partial X}{\\partial y}(y,t)"
},
{
"math_id": 12,
"text": "\\dot \\epsilon(y,t) = \\left(\\frac{\\partial}{\\partial t}\\frac{\\partial X}{\\partial y}\\right)(y,t) = \\left(\\frac{\\partial}{\\partial y}\\frac{\\partial X}{\\partial t}\\right)(y,t) = \\frac{\\partial V}{\\partial y}(y,t) "
},
{
"math_id": 13,
"text": "V(y,t)"
},
{
"math_id": 14,
"text": "\\dot{\\varepsilon}"
},
{
"math_id": 15,
"text": "\\tau"
},
{
"math_id": 16,
"text": "\\gamma = \\frac{w}{l} = \\tan(\\theta)"
},
{
"math_id": 17,
"text": "\\dot{\\gamma}=\\frac{d\\gamma}{dt}"
}
] |
https://en.wikipedia.org/wiki?curid=14260678
|
14261888
|
Studentized range distribution
|
In probability and statistics, the studentized range distribution is the continuous probability distribution of the studentized range of an i.i.d. sample from a normally distributed population.
Suppose that we take a sample of size "n" from each of "k" populations with the same normal distribution "N"("μ", "σ"2) and suppose that formula_0 is the smallest of these sample means and formula_1 is the largest of these sample means, and suppose "s"² is the pooled sample variance from these samples. Then the following statistic has a Studentized range distribution.
formula_2
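The statistic formula_2 can be computed directly from the sample data. A minimal Python sketch for k equal-size samples (the pooled variance is taken as the mean of the k unbiased sample variances, which is valid when the sample sizes are equal; the data in the usage are illustrative):

```python
import numpy as np

def studentized_range_stat(samples):
    """q = (largest sample mean - smallest sample mean) / (s / sqrt(n))
    for k samples of common size n, with s**2 the pooled sample variance."""
    samples = [np.asarray(s, dtype=float) for s in samples]
    n = len(samples[0])
    means = [s.mean() for s in samples]
    s2 = np.mean([s.var(ddof=1) for s in samples])  # pooled variance
    return (max(means) - min(means)) / np.sqrt(s2 / n)
```

For instance, the three samples [0, 2], [4, 6], [1, 3] have means 1, 5, 2 and a common variance of 2, giving q = (5 − 1)/√(2/2) = 4.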
Definition.
Probability density function.
Differentiating the cumulative distribution function with respect to "q" gives the probability density function.
formula_3
Note that in the outer part of the integral, the equation
formula_4
was used to replace an exponential factor.
Cumulative distribution function.
The cumulative distribution function is given by
formula_5
Special cases.
If "k" is 2 or 3, the studentized range probability distribution function can be directly evaluated, where formula_6 is the standard normal probability density function and formula_7 is the standard normal cumulative distribution function.
formula_8
formula_9
As the number of degrees of freedom approaches infinity, the studentized range cumulative distribution can be calculated for any "k" using the standard normal distribution.
formula_10
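The infinite-degrees-of-freedom cdf formula_10 can be evaluated by straightforward numerical quadrature, and for k = 2 it can be checked against a known closed form: the range of two independent standard normals is |X1 − X2| with X1 − X2 ~ N(0, 2), so F(q) = 2Φ(q/√2) − 1. A minimal sketch (integration limits and step count are pragmatic choices, not part of the definition):

```python
import math

def phi(z):
    """Standard normal pdf."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi(z):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def F_range(q, k, lo=-8.0, hi=8.0, steps=4000):
    """F_R(q; k) for infinite degrees of freedom, via trapezoidal
    quadrature of k * integral of phi(z)*(Phi(z+q)-Phi(z))**(k-1) dz."""
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        z = lo + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * phi(z) * (Phi(z + q) - Phi(z)) ** (k - 1)
    return k * h * total
```

Because the integrand decays rapidly, truncating the integral at ±8 and using a fine trapezoidal rule already agrees with the k = 2 closed form to high accuracy.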
Applications.
Critical values of the studentized range distribution are used in Tukey's range test.
The studentized range is used to calculate significance levels for results obtained by data mining, where one selectively seeks extreme differences in sample data, rather than only sampling randomly.
The Studentized range distribution has applications to hypothesis testing and multiple comparisons procedures. For example, Tukey's range test and Duncan's new multiple range test (MRT), in which the sample "x"1, ..., "x""n" is a sample of means and "q" is the basic test-statistic, can be used as post-hoc analysis to test between which two groups' means there is a significant difference (pairwise comparisons) after rejecting the null hypothesis that all groups are from the same population (i.e. all means are equal) by the standard analysis of variance.
Related distributions.
When only the equality of the two groups means is in question (i.e. whether "μ"1 = "μ"2), the studentized range distribution is similar to the Student's t distribution, differing only in that the first takes into account the number of means under consideration, and the critical value is adjusted accordingly. The more means under consideration, the larger the critical value is. This makes sense since the more means there are, the greater the probability that at least some differences between pairs of means will be significantly large due to chance alone.
Derivation.
The studentized range distribution function arises from re-scaling the sample range "R" by the sample standard deviation "s", since the studentized range is customarily tabulated in units of standard deviations, with the variable "q" = "R"⁄"s". The derivation begins with a perfectly general form of the distribution function of the sample range, which applies to any sample data distribution.
In order to obtain the distribution in terms of the "studentized" range "q", we will change variable from "R" to "s" and "q". Assuming the sample data is normally distributed, the standard deviation "s" will be "χ" distributed. By further integrating over "s" we can remove "s" as a parameter and obtain the re-scaled distribution in terms of "q" alone.
General form.
For any probability density function "f""X", the range probability density "f""R" is:
formula_11
What this means is that we are adding up the probabilities that, given "k" draws from a distribution, two of them differ by "r", and the remaining "k" − 2 draws all fall between the two extreme values.
If we change variables to "u" where formula_12 is the low-end of the range, and define "F""X" as the cumulative distribution function of "f""X", then the equation can be simplified:
formula_13
We introduce a similar integral, and notice that differentiating under the integral-sign gives
formula_14
which recovers the integral above, so that last relation confirms
formula_15
because for any continuous cdf
formula_16
Special form for normal data.
The range distribution is most often used for confidence intervals around sample averages, which are asymptotically normally distributed by the central limit theorem.
In order to create the studentized range distribution for normal data, we first switch from the generic "f"X and "F"X to the distribution functions "φ" and Φ for the standard normal distribution, and change the variable "r" to "s·q", where "q" is a fixed factor that re-scales "r" by scaling factor "s":
formula_17
Choose the scaling factor "s" to be the sample standard deviation, so that "q" becomes the number of standard deviations wide that the range is. For normal data "s" is chi distributed and the distribution function "f""S" of the chi distribution is given by:
formula_18
Multiplying the distributions "f""R" and "f""S" and integrating to remove the dependence on the standard deviation "s" gives the studentized range distribution function for normal data:
formula_19
where
"q" is the width of the data range measured in standard deviations,
"ν" is the number of degrees of freedom for determining the sample standard deviation, and
"k" is the number of separate averages that form the points within the range.
The equation for the pdf shown in the sections above comes from using
formula_20
to replace the exponential expression in the outer integral.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\bar{y}_{\\min}"
},
{
"math_id": 1,
"text": "\\bar{y}_{\\max}"
},
{
"math_id": 2,
"text": "q = \\frac{\\overline{y}_{\\max} - \\overline{y}_{\\min}}{s/\\sqrt{n\\,}}"
},
{
"math_id": 3,
"text": "f_\\text{R}(q;k,\\nu) = \\frac{\\sqrt{2\\pi\\,}\\,k\\,(k-1)\\,\\nu^{\\nu/2}}{\\Gamma(\\nu /2)\\,2^{\\left(\\nu/2-1\\right)}}\\int_0^\\infty s^\\nu \\, \\varphi(\\sqrt{\\nu\\,} \\,s)\\,\\left[\\int_{-\\infty}^\\infty \\varphi(z+q\\,s)\\,\\varphi(z)\\, \\left[\\Phi(z+q\\,s)-\\Phi(z)\\right]^{k-2} \\, \\mathrm{d}z\\right] \\, \\mathrm{d}s"
},
{
"math_id": 4,
"text": "\\varphi(\\sqrt{\\nu\\,}\\,s) \\, \\sqrt{2\\pi\\,} = e^{-\\left(\\nu\\, s^2/2\\right)} "
},
{
"math_id": 5,
"text": "F_\\text{R}(q;k,\\nu) = \\frac{\\sqrt{2\\pi\\,}\\,k\\,\\nu^{\\nu/2}}{\\,\\Gamma(\\nu/2)\\,2^{(\\nu/2-1)}\\,} \\int_0^\\infty s^{\\nu-1} \\varphi(\\sqrt{\\nu\\,}\\,s) \\left[\\int_{-\\infty}^\\infty \\varphi(z) \\left[\\Phi(z+q\\,s)-\\Phi(z)\\right]^{k-1} \\, \\mathrm{d}z \\right] \\, \\mathrm{d}s"
},
{
"math_id": 6,
"text": "\\varphi(z)"
},
{
"math_id": 7,
"text": "\\Phi(z)"
},
{
"math_id": 8,
"text": "f_R(q;k=2) = \\sqrt{2\\,}\\,\\varphi\\left(\\,q/\\sqrt{2\\,}\\right)"
},
{
"math_id": 9,
"text": "f_R(q;k=3) = 6 \\sqrt{2\\,}\\, \\varphi\\left(\\,q/\\sqrt{2\\,}\\right)\\left[\\Phi\\left( q / \\sqrt{6\\,} \\right)-\\tfrac{1}{2} \\right]"
},
{
"math_id": 10,
"text": "F_R(q;k) = k\\, \\int_{-\\infty}^\\infty \\varphi(z)\\,\\Bigl[\\Phi(z+q)-\\Phi(z)\\Bigr]^{k-1} \\, \\mathrm{d}z = k\\, \\int_{-\\infty}^\\infty \\,\\Bigl[\\Phi(z+q)-\\Phi(z)\\Bigr]^{k-1} \\, \\mathrm{d}\\Phi(z)"
},
{
"math_id": 11,
"text": "f_R(r;k) = k\\,(k-1)\\int_{-\\infty}^\\infty f_X\\left(t+\\tfrac{1}{2} r\\right)f_X \\left(t - \\tfrac{1}{2} r\\right) \\left[\\int_{t-\\tfrac{1}{2}r}^{t+\\tfrac{1}{2} r} f_X(x) \\, \\mathrm{d}x\\right]^{k-2} \\, \\mathrm{d}\\,t"
},
{
"math_id": 12,
"text": "u=t-\\tfrac{1}{2} r"
},
{
"math_id": 13,
"text": "f_R(r;k) = k\\,(k-1)\\int_{-\\infty}^\\infty f_X(u+r)\\, f_X(u)\\, \\left[\\, F_X(u+r)-F_X(u)\\, \\right]^{k-2} \\, \\mathrm{d}\\,u"
},
{
"math_id": 14,
"text": "\n\\begin{align}\n\\frac{\\partial}{\\partial r} & \\left[ k\\,\\int_{-\\infty}^\\infty f_X(u)\\, \\Bigl[\\, F_X(u+r)-F_X(u)\\, \\Bigr]^{k-1} \\, \\mathrm{d}\\,u \\right] \\\\[5pt]\n= {} & k\\,(k-1)\\int_{-\\infty}^\\infty f_X(u+r)\\, f_X(u)\\, \\Bigl[\\, F_X(u+r)-F_X(u)\\, \\Bigr]^{k-2} \\, \\mathrm{d}\\,u\n\\end{align}\n"
},
{
"math_id": 15,
"text": "\n\\begin{align}\nF_R(r;k)\n & = k \\int_{-\\infty}^\\infty f_X(u) \\Bigl[\\, F_X(u+r)-F_X(u)\\, \\Bigr]^{k-1} \\, \\mathrm{d}\\,u \\\\\n & = k \\int_{-\\infty}^\\infty \\Bigl[\\, F_X(u+r)-F_X(u)\\, \\Bigr]^{k-1} \\, \\mathrm{d}\\,F_X(u)\n\\end{align}\n"
},
{
"math_id": 16,
"text": "\\frac{\\partial F_R(r;k)}{\\partial r} = f_R(r;k)"
},
{
"math_id": 17,
"text": "f_R(q;k) = s\\,k\\,(k-1)\\int_{-\\infty}^\\infty \\varphi(u+sq) \\varphi(u)\\, \\left[\\, \\Phi(u+sq) - \\Phi(u) \\right]^{k-2} \\, \\mathrm{d}u"
},
{
"math_id": 18,
"text": "\nf_S(s;\\nu)\\,\\mathrm{d}s\n= \\begin{cases}\n\\dfrac{\\nu^{\\nu/2}\\,s^{\\nu-1} e^{-\\nu\\,s^2/2}\\,}{2^{\\left(\\nu/2 - 1\\right)} \\Gamma(\\nu/2)} \\, \\mathrm{d}s & \\text{for }\\, 0 < s < \\infty, \\\\[4pt] 0 & \\text{otherwise}.\n\\end{cases}\n"
},
{
"math_id": 19,
"text": "f_R(q;k,\\nu) = \\frac{\\nu^{\\nu/2}\\,k\\,(k-1)}{2^{\\left( \\nu/2 - 1\\right)} \\Gamma(\\nu/2)} \\int_0^\\infty s^\\nu e^{-\\nu s^2/2} \\int_{-\\infty}^\\infty \\varphi(u+sq)\\, \\varphi(u)\\, \\left[\\, \\Phi(u+sq) - \\Phi(u) \\right]^{k-2} \\, \\mathrm{d}u \\,\\mathrm{d}s"
},
{
"math_id": 20,
"text": "e^{-\\nu\\,s^2/2} = \\sqrt{2\\pi\\,}\\,\\varphi(\\sqrt{\\nu\\,}\\,s)"
}
] |
https://en.wikipedia.org/wiki?curid=14261888
|
142622
|
Enriched category
|
Category whose hom sets have algebraic structure
In category theory, a branch of mathematics, an enriched category generalizes the idea of a category by replacing hom-sets with objects from a general monoidal category. It is motivated by the observation that, in many practical applications, the hom-set often has additional structure that should be respected, e.g., that of being a vector space of morphisms, or a topological space of morphisms. In an enriched category, the set of morphisms (the hom-set) associated with every pair of objects is replaced by an object in some fixed monoidal category of "hom-objects". In order to emulate the (associative) composition of morphisms in an ordinary category, the hom-category must have a means of composing hom-objects in an associative manner: that is, there must be a binary operation on objects giving us at least the structure of a monoidal category, though in some contexts the operation may also need to be commutative and perhaps also to have a right adjoint (i.e., making the category symmetric monoidal or even symmetric closed monoidal, respectively).
Enriched category theory thus encompasses within the same framework a wide variety of structures including
In the case where the hom-object category happens to be the category of sets with the usual cartesian product, the definitions of enriched category, enriched functor, etc. reduce to the original definitions from ordinary category theory.
An enriched category with hom-objects from monoidal category M is said to be an enriched category over M or an enriched category in M, or simply an M-category. Due to Mac Lane's preference for the letter V in referring to the monoidal category, enriched categories are also sometimes referred to generally as V-categories.
Definition.
Let (M, ⊗, "I", "α", "λ", "ρ") be a monoidal category. Then an "enriched category" C (alternatively, in situations where the choice of monoidal category needs to be explicit, a "category enriched over" M, or M-"category"), consists of
The first diagram expresses the associativity of composition:
That is, the associativity requirement is now taken over by the associator of the monoidal category M.
For the case that M is the category of sets and (⊗, "I", "α", "λ", "ρ") is the monoidal structure (×, {•}, ...) given by the cartesian product, the terminal single-point set, and the canonical isomorphisms they induce, then each "C"("a", "b") is a set whose elements may be thought of as "individual morphisms" of C, while °, now a function, defines how consecutive morphisms compose. In this case, each path leading to "C"("a", "d") in the first diagram corresponds to one of the two ways of composing three consecutive individual morphisms "a" → "b" → "c" → "d", i.e. elements from "C"("a", "b"), "C"("b", "c") and "C"("c", "d"). Commutativity of the diagram is then merely the statement that both orders of composition give the same result, exactly as required for ordinary categories.
What is new here is that the above expresses the requirement for associativity without any explicit reference to individual morphisms in the enriched category C — again, these diagrams are for morphisms in monoidal category M, and not in C — thus making the concept of associativity of composition meaningful in the general case where the hom-objects "C"("a", "b") are abstract, and C itself need not even "have" any notion of individual morphism.
The notion that an ordinary category must have identity morphisms is replaced by the second and third diagrams, which express identity in terms of left and right unitors:
and
Returning to the case where M is the category of sets with cartesian product, the morphisms id"a": "I" → "C"("a", "a") become functions from the one-point set "I" and must then, for any given object "a", identify a particular element of each set "C"("a", "a"), something we can then think of as the "identity morphism for "a" in C". Commutativity of the latter two diagrams is then the statement that compositions (as defined by the functions °) involving these distinguished individual "identity morphisms in C" behave exactly as per the identity rules for ordinary categories.
Note that there are several distinct notions of "identity" being referenced here:
"b" ≤ "c" and "a" ≤ "b" ⇒ "a" ≤ "c" (transitivity)
"TRUE" ⇒ "a" ≤ "a" (reflexivity)
which are none other than the axioms for ≤ being a preorder. And since all diagrams in 2 commute, this is the "sole" content of the enriched category axioms for categories enriched over 2.
d("b", "c") + d("a", "b") ≥ d("a", "c") (triangle inequality)
0 ≥ d("a", "a")
Relationship with monoidal functors.
If there is a monoidal functor from a monoidal category M to a monoidal category N, then any category enriched over M can be reinterpreted as a category enriched over N. Every monoidal category M has a monoidal functor M("I", –) to the category of sets, so any enriched category has an underlying ordinary category. In many examples (such as those above) this functor is faithful, so a category enriched over M can be described as an ordinary category with certain additional structure or properties.
Enriched functors.
An enriched functor is the appropriate generalization of the notion of a functor to enriched categories. Enriched functors are then maps between enriched categories which respect the enriched structure.
If "C" and "D" are M-categories (that is, categories enriched over monoidal category M), an M-enriched functor "T": "C" → "D" is a map which assigns to each object of "C" an object of "D" and for each pair of objects "a" and "b" in "C" provides a morphism in M "Tab" : "C"("a", "b") → "D"("T"("a"), "T"("b")) between the hom-objects of "C" and "D" (which are objects in M), satisfying enriched versions of the axioms of a functor, viz preservation of identity and composition.
Because the hom-objects need not be sets in an enriched category, one cannot speak of a particular morphism. There is no longer any notion of an identity morphism, nor of a particular composition of two morphisms. Instead, morphisms from the unit to a hom-object should be thought of as selecting an identity, and morphisms from the monoidal product should be thought of as composition. The usual functorial axioms are replaced with corresponding commutative diagrams involving these morphisms.
In detail, one has that the diagram
commutes, which amounts to the equation
formula_5
where "I" is the unit object of M. This is analogous to the rule "F"(id"a") = id"F"("a") for ordinary functors. Additionally, one demands that the diagram
commute, which is analogous to the rule "F"("fg")="F"("f")"F"("g") for ordinary functors.
References.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "f: a \\rightarrow b"
},
{
"math_id": 1,
"text": "f:I\\rightarrow C(a,b)"
},
{
"math_id": 2,
"text": "f:a\\rightarrow b"
},
{
"math_id": 3,
"text": "g:b\\rightarrow c"
},
{
"math_id": 4,
"text": "g \\circ_{\\textbf{C}} f = {^\\circ}_{abc}(g\\otimes f)"
},
{
"math_id": 5,
"text": "T_{aa}\\circ \\operatorname{id}_a=\\operatorname{id}_{T(a)},"
}
] |
https://en.wikipedia.org/wiki?curid=142622
|
14263
|
Horner's method
|
Algorithm for polynomial evaluation
In mathematics and computer science, Horner's method (or Horner's scheme) is an algorithm for polynomial evaluation. Although named after William George Horner, this method is much older, as it has been attributed to Joseph-Louis Lagrange by Horner himself, and can be traced back many hundreds of years to Chinese and Persian mathematicians. After the introduction of computers, this algorithm became fundamental for computing efficiently with polynomials.
The algorithm is based on Horner's rule, in which a polynomial is written in "nested form":
formula_0
This allows the evaluation of a polynomial of degree n with only formula_1 multiplications and formula_1 additions. This is optimal, since there are polynomials of degree n that cannot be evaluated with fewer arithmetic operations.
Alternatively, Horner's method also refers to a method for approximating the roots of polynomials, described by Horner in 1819. It is a variant of the Newton–Raphson method made more efficient for hand calculation by the application of Horner's rule. It was widely used until computers came into general use around 1970.
Polynomial evaluation and long division.
Given the polynomial
formula_2
where formula_3 are constant coefficients, the problem is to evaluate the polynomial at a specific value formula_4 of formula_5
For this, a new sequence of constants is defined recursively as follows: "b""n" := "a""n", and "b""i" := "a""i" + "b""i"+1 · "x"0 for "i" = "n" − 1, "n" − 2, ..., 0.
Then formula_6 is the value of formula_7.
To see why this works, the polynomial can be written in the form
formula_8
Thus, by iteratively substituting the formula_9 into the expression,
formula_10
Now, it can be proven that
"p"("x") = ("b""n""x""n"−1 + "b""n"−1"x""n"−2 + ⋯ + "b"1)("x" − "x"0) + "b"0.
This expression constitutes Horner's practical application, as it offers a very quick way of determining the outcome of
formula_11
with formula_6 (which is equal to formula_7) being the division's remainder, as demonstrated by the examples below. If formula_4 is a root of formula_12, then formula_13 (meaning the remainder is formula_14), so formula_15 is a factor of formula_12.
To find the consecutive formula_16-values, start by determining formula_17, which is simply equal to formula_18. Then work recursively using the formula
formula_19
until you arrive at formula_6.
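The recursion above translates directly into code. The following Python sketch (the function name and the highest-degree-first coefficient ordering are choices made here, not fixed by the text) evaluates a polynomial with one multiplication and one addition per coefficient:

```python
def horner(coeffs, x0):
    """Evaluate a polynomial at x0 using Horner's rule.

    coeffs holds a_n, a_(n-1), ..., a_0 (highest degree first),
    matching the recursion b_n = a_n, b_i = a_i + b_(i+1) * x0.
    """
    result = 0
    for a in coeffs:
        result = result * x0 + a  # one multiplication, one addition per step
    return result

# Example from the text: 2x^3 - 6x^2 + 2x - 1 at x = 3 gives 5.
print(horner([2, -6, 2, -1], 3))  # → 5
```

The running value passes through the successive "b""i"; after the last coefficient it equals formula_6, the value of formula_7.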
Examples.
Evaluate formula_20 for formula_21.
We use synthetic division as follows:
"x"0│ "x"3 "x"2 "x"1 "x"0
3 │ 2 −6 2 −1
│ 6 0 6
2 0 2 5
The entries in the third row are the sum of those in the first two. Each entry in the second row is the product of the "x"-value (3 in this example) with the third-row entry immediately to the left. The entries in the first row are the coefficients of the polynomial to be evaluated. Then the remainder of formula_22 on division by formula_23 is 5.
But by the polynomial remainder theorem, we know that the remainder is formula_24. Thus, formula_25.
In this example, if formula_26 we can see that formula_27, the entries in the third row. So, synthetic division (which was actually invented and published by Ruffini 10 years before Horner's publication) is easier to use; it can be shown to be equivalent to Horner's method.
As a consequence of the polynomial remainder theorem, the entries in the third row are the coefficients of the second-degree polynomial, the quotient of formula_22 on division by formula_28.
The remainder is 5. This makes Horner's method useful for polynomial long division.
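Because the third-row entries are the quotient's coefficients and the final entry is the remainder, the same loop performs long division by a linear factor. A Python sketch (the function name and return convention are choices made here):

```python
def synthetic_division(coeffs, x0):
    """Divide a polynomial by (x - x0) via Horner's scheme.

    coeffs holds a_n, ..., a_0; returns (quotient_coeffs, remainder),
    where the quotient coefficients are b_n, ..., b_1 and the
    remainder is b_0 = p(x0).
    """
    b = [coeffs[0]]
    for a in coeffs[1:]:
        b.append(a + b[-1] * x0)
    return b[:-1], b[-1]

# Example from the text: 2x^3 - 6x^2 + 2x - 1 divided by (x - 3).
quot, rem = synthetic_division([2, -6, 2, -1], 3)
print(quot, rem)  # → [2, 0, 2] 5
```

For a linear divisor whose leading coefficient is not 1 (such as 2"x" − 1 in the third example), one additionally divides through by that coefficient, as the worked table shows.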
Divide formula_29 by formula_30:
2 │ 1 −6 11 −6
│ 2 −8 6
1 −4 3 0
The quotient is formula_31.
Let formula_32 and formula_33. Divide formula_34 by formula_35 using Horner's method.
0.5 │ 4 −6 0 3 −5
│ 2 −2 −1 1
2 −2 −1 1 −4
The third row is the sum of the first two rows, divided by 2. Each entry in the second row is the product of 1 with the third-row entry to the left. The answer is
formula_36
Efficiency.
Evaluation using the monomial form of a degree formula_1 polynomial requires at most formula_1 additions and formula_37 multiplications, if powers are calculated by repeated multiplication and each monomial is evaluated individually. The cost can be reduced to formula_1 additions and formula_38 multiplications by evaluating the powers of formula_39 by iteration.
If numerical data are represented in terms of digits (or bits), then the naive algorithm also entails storing approximately formula_40 times the number of bits of formula_39: the evaluated polynomial has approximate magnitude formula_41, and one must also store formula_41 itself. By contrast, Horner's method requires only formula_1 additions and formula_1 multiplications, and its storage requirements are only formula_1 times the number of bits of formula_39. Alternatively, Horner's method can be computed with formula_1 fused multiply–adds. Horner's method can also be extended to evaluate the first formula_42 derivatives of the polynomial with formula_43 additions and multiplications.
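The single-derivative case of the extension mentioned above can be sketched in Python. The essential point is the update order: the derivative accumulator is advanced first, using the old polynomial value (the function name is a choice made here):

```python
def horner_with_derivative(coeffs, x0):
    """Evaluate p(x0) and p'(x0) in a single pass.

    coeffs holds a_n, ..., a_0. Alongside the Horner value p, a
    second accumulator d follows the recurrence d <- d*x0 + p,
    which carries the derivative.
    """
    p, d = 0, 0
    for a in coeffs:
        d = d * x0 + p   # update derivative first, using the old p
        p = p * x0 + a
    return p, d

# p(x) = x^3 - 6x^2 + 11x - 6: p(2) = 0 and p'(2) = 12 - 24 + 11 = -1.
print(horner_with_derivative([1, -6, 11, -6], 2))  # → (0, -1)
```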
Horner's method is optimal, in the sense that any algorithm to evaluate an arbitrary polynomial must use at least as many operations. Alexander Ostrowski proved in 1954 that the number of additions required is minimal. Victor Pan proved in 1966 that the number of multiplications is minimal. However, when formula_39 is a matrix, Horner's method is not optimal.
This assumes that the polynomial is evaluated in monomial form and no preconditioning of the representation is allowed, which makes sense if the polynomial is evaluated only once. However, if preconditioning is allowed and the polynomial is to be evaluated many times, then faster algorithms are possible. They involve a transformation of the representation of the polynomial. In general, a degree-formula_1 polynomial can be evaluated using only ⌊"n"/2⌋+2 multiplications and formula_1 additions.
Parallel evaluation.
A disadvantage of Horner's rule is that all of the operations are sequentially dependent, so it is not possible to take advantage of instruction level parallelism on modern computers. In most applications where the efficiency of polynomial evaluation matters, many low-order polynomials are evaluated simultaneously (for each pixel or polygon in computer graphics, or for each grid square in a numerical simulation), so it is not necessary to find parallelism within a single polynomial evaluation.
If, however, one is evaluating a single polynomial of very high order, it may be useful to break it up as follows:
formula_44
More generally, the summation can be broken into "k" parts:
formula_45
where the inner summations may be evaluated using separate parallel instances of Horner's method. This requires slightly more operations than the basic Horner's method, but allows "k"-way SIMD execution of most of them. Modern compilers generally evaluate polynomials this way when advantageous, although for floating-point calculations this requires enabling (unsafe) reassociative math.
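The even/odd split can be illustrated in Python as a two-way instance of the "k"-part decomposition (helper names are choices made here; real SIMD dispatch is left to the compiler):

```python
def horner(coeffs, x):
    """Plain Horner evaluation; coeffs holds a_n, ..., a_0."""
    r = 0
    for a in coeffs:
        r = r * x + a
    return r

def horner_even_odd(coeffs, x):
    """Evaluate p(x) = p0(x^2) + x * p1(x^2) by splitting into
    even- and odd-degree parts, which can run independently."""
    a = coeffs[::-1]          # a_0, a_1, ..., a_n
    even = a[0::2][::-1]      # coefficients of p0, highest degree first
    odd = a[1::2][::-1]       # coefficients of p1, highest degree first
    x2 = x * x
    return horner(even, x2) + x * horner(odd, x2)

# Both evaluations agree on x^4 + 2x^3 + 3x^2 + 4x + 5 at x = 2.
p = [1, 2, 3, 4, 5]
print(horner(p, 2), horner_even_odd(p, 2))  # → 57 57
```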
Application to floating-point multiplication and division.
Horner's method is a fast, code-efficient method for multiplication and division of binary numbers on a microcontroller with no hardware multiplier. One of the binary numbers to be multiplied is represented as a trivial polynomial, where (using the above notation) formula_46, and formula_47. Then, "x" (or "x" to some power) is repeatedly factored out. In this binary numeral system (base 2), formula_47, so powers of 2 are repeatedly factored out.
Example.
For example, to find the product of two numbers (0.15625) and "m":
formula_48
Method.
To find the product of two binary numbers "d" and "m":
Derivation.
In general, for a binary number with bit values (formula_49) the product is
formula_50
At this stage in the algorithm, terms with zero-valued coefficients must be dropped, so that only binary coefficients equal to one are counted; thus the problem of multiplication or division by zero is not an issue, despite the apparent implication in the factored equation:
formula_51
The denominators all equal one (or the term is absent), so this reduces to
formula_52
or equivalently (as consistent with the "method" described above)
formula_53
In binary (base-2) math, multiplication by a power of 2 is merely a register shift operation. Thus, multiplying by 2 is calculated in base-2 by an arithmetic shift. A factor of (2−1) is a right arithmetic shift, a factor of (20) results in no operation (since 20 = 1 is the multiplicative identity element), and a factor of (21) results in a left arithmetic shift.
The multiplication product can now be quickly calculated using only arithmetic shift operations, addition and subtraction.
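As an illustrative sketch of the shift-and-add idea (using non-negative integers rather than the fractional example above; the function name is a choice made here):

```python
def shift_add_multiply(d, m):
    """Multiply m by the non-negative integer d using only shifts
    and additions, scanning d's bits from most significant down,
    Horner-style: acc = (acc << 1) + (m if bit else 0).
    """
    acc = 0
    for bit in bin(d)[2:]:              # bits of d, MSB first
        acc = (acc << 1) + (m if bit == '1' else 0)
    return acc

print(shift_add_multiply(0b1011, 13))  # → 143, i.e. 11 * 13
```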
The method is particularly fast on processors supporting a single-instruction shift-and-addition-accumulate. Compared to a C floating-point library, Horner's method sacrifices some accuracy; however, it is nominally 13 times faster (16 times faster when the "canonical signed digit" (CSD) form is used) and uses only 20% of the code space.
Other applications.
Horner's method can be used to convert between different positional numeral systems – in which case "x" is the base of the number system, and the "a""i" coefficients are the digits of the base-"x" representation of a given number – and can also be used if "x" is a matrix, in which case the gain in computational efficiency is even greater. However, for such cases faster methods are known.
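Base conversion by Horner's rule can be sketched as follows (the function name is a choice made here):

```python
def digits_to_int(digits, base):
    """Convert a digit sequence (most significant first) in the given
    base to an integer by treating the base as Horner's x.
    """
    value = 0
    for d in digits:
        value = value * base + d
    return value

print(digits_to_int([1, 0, 1, 1], 2))   # → 11
print(digits_to_int([7, 2, 9], 10))     # → 729
```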
Polynomial root finding.
Using the long division algorithm in combination with Newton's method, it is possible to approximate the real roots of a polynomial. The algorithm works as follows. Given a polynomial formula_54 of degree formula_1 with zeros formula_55 make some initial guess formula_56 such that formula_57. Now iterate the following two steps:
1. Using Newton's method, find the largest zero formula_58 of formula_54 using the guess formula_56.
2. Use Horner's method to divide out formula_59 to obtain the reduced polynomial formula_60. Return to step 1, using formula_60 and the zero just found as the new initial guess.
These two steps are repeated until all real zeros are found for the polynomial. If the approximated zeros are not precise enough, the obtained values can be used as initial guesses for Newton's method but using the full polynomial rather than the reduced polynomials.
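A minimal Python sketch of this Newton-plus-deflation loop, applied to the example polynomial of the next subsection (all names and tolerances are choices made here):

```python
def newton_root(coeffs, x0, tol=1e-12, max_iter=100):
    """Find a root near x0 by Newton's method, evaluating both p and
    p' with Horner's rule in a single pass."""
    for _ in range(max_iter):
        p, dp = 0, 0
        for a in coeffs:
            dp = dp * x0 + p
            p = p * x0 + a
        if abs(p) < tol:
            break
        x0 -= p / dp
    return x0

def deflate(coeffs, root):
    """Divide out (x - root) by synthetic division; the remainder
    (which is nearly zero) is discarded."""
    b = [coeffs[0]]
    for a in coeffs[1:]:
        b.append(a + b[-1] * root)
    return b[:-1]

# x^6 + 4x^5 - 72x^4 - 214x^3 + 1127x^2 + 1602x - 5040, guess 8.
p = [1, 4, -72, -214, 1127, 1602, -5040]
roots, guess = [], 8.0
while len(p) > 1:
    r = newton_root(p, guess)
    roots.append(round(r, 6))
    p = deflate(p, r)
    guess = r
print(sorted(roots))  # → [-8.0, -5.0, -3.0, 2.0, 3.0, 7.0]
```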
Example.
Consider the polynomial
formula_61
which can be expanded to
formula_62
From the above we know that the largest root of this polynomial is 7, so we are able to make an initial guess of 8. Using Newton's method, the first zero, 7, is found as shown in black in the figure to the right. Next, formula_12 is divided by formula_63 to obtain
formula_64
which is drawn in red in the figure to the right. Newton's method is used to find the largest zero of this polynomial with an initial guess of 7. The largest zero of this polynomial which corresponds to the second largest zero of the original polynomial is found at 3 and is circled in red. The degree 5 polynomial is now divided by formula_65 to obtain
formula_66
which is shown in yellow. The zero for this polynomial is found at 2 again using Newton's method and is circled in yellow. Horner's method is now used to obtain
formula_67
which is shown in green and found to have a zero at −3. This polynomial is further reduced to
formula_68
which is shown in blue and yields a zero of −5. The final root of the original polynomial may be found by either using the final zero as an initial guess for Newton's method, or by reducing formula_69 and solving the linear equation. As can be seen, the expected roots of −8, −5, −3, 2, 3, and 7 were found.
Divided difference of a polynomial.
Horner's method can be modified to compute the divided difference formula_70 Given the polynomial (as before)
formula_2
proceed as follows
formula_71
At completion, we have
formula_72
This computation of the divided difference is subject to less round-off error than evaluating formula_12 and formula_73 separately, particularly when formula_74. Substituting formula_75 in this method gives formula_76, the derivative of formula_12.
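The paired recurrences above can be sketched directly in Python (coefficients ordered highest degree first; the function name is a choice made here):

```python
def divided_difference(coeffs, x, y):
    """Compute p(x) and (p(y) - p(x)) / (y - x) with the modified
    Horner recurrences: b_i evaluated at x, d_i at y.

    coeffs holds a_n, ..., a_0.
    """
    b = coeffs[0]           # b_n = a_n
    d = b                   # d_n = b_n
    for a in coeffs[1:-1]:
        b = a + b * x       # b_i = a_i + b_(i+1) x
        d = b + d * y       # d_i = b_i + d_(i+1) y
    b = coeffs[-1] + b * x  # b_0 = a_0 + b_1 x
    return b, d             # p(x) = b_0, divided difference = d_1

# p(x) = x^2 - 1: p(2) = 3 and (p(3) - p(2)) / (3 - 2) = 8 - 3 = 5.
print(divided_difference([1, 0, -1], 2, 3))  # → (3, 5)
```

Setting "y" = "x" in the call reproduces the derivative, matching the remark above.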
History.
Horner's paper, titled "A new method of solving numerical equations of all orders, by continuous approximation", was read before the Royal Society of London, at its meeting on July 1, 1819, with a sequel in 1823. Horner's paper in Part II of "Philosophical Transactions of the Royal Society of London" for 1819 was warmly and expansively welcomed by a reviewer in the issue of "The Monthly Review: or, Literary Journal" for April, 1820; in comparison, a technical paper by Charles Babbage is dismissed curtly in this review. The sequence of reviews in "The Monthly Review" for September, 1821, concludes that Holdred was the first person to discover a direct and general practical solution of numerical equations. Fuller showed that the method in Horner's 1819 paper differs from what afterwards became known as "Horner's method" and that in consequence the priority for this method should go to Holdred (1820).
Unlike his English contemporaries, Horner drew on the Continental literature, notably the work of Arbogast. Horner is also known to have made a close reading of John Bonneycastle's book on algebra, though he neglected the work of Paolo Ruffini.
Although Horner is credited with making the method accessible and practical, it was known long before Horner. In reverse chronological order, Horner's method was already known to:
Qin Jiushao, in his "Shu Shu Jiu Zhang" ("Mathematical Treatise in Nine Sections"; 1247), presents a portfolio of methods of Horner-type for solving polynomial equations, which was based on earlier works of the 11th century Song dynasty mathematician Jia Xian; for example, one method is specifically suited to bi-quintics, of which Qin gives an instance, in keeping with the then Chinese custom of case studies. Yoshio Mikami in "Development of Mathematics in China and Japan" (Leipzig 1913) wrote:<templatestyles src="Template:Blockquote/styles.css" />"... who can deny the fact of Horner's illustrious process being used in China at least nearly six long centuries earlier than in Europe ... We of course don't intend in any way to ascribe Horner's invention to a Chinese origin, but the lapse of time sufficiently makes it not altogether impossible that the Europeans could have known of the Chinese method in a direct or indirect way."
Ulrich Libbrecht concluded: "It is obvious that this procedure is a Chinese invention ... the method was not known in India". He said Fibonacci probably learned of it from the Arabs, who perhaps borrowed it from the Chinese. The extraction of square and cube roots along similar lines is already discussed by Liu Hui in connection with Problems IV.16 and 22 in "Jiu Zhang Suan Shu", while Wang Xiaotong in the 7th century supposes his readers can solve cubics by an approximation method described in his book Jigu Suanjing.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\begin{align}\n &a_0 + a_1x + a_2x^2 + a_3x^3 + \\cdots + a_nx^n \\\\\n ={} &a_0 + x \\bigg(a_1 + x \\Big(a_2 + x \\big(a_3 + \\cdots + x(a_{n-1} + x \\, a_n) \\cdots \\big) \\Big) \\bigg).\n\\end{align}"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "p(x) = \\sum_{i=0}^n a_i x^i = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \\cdots + a_n x^n,"
},
{
"math_id": 3,
"text": "a_0, \\ldots, a_n"
},
{
"math_id": 4,
"text": "x_0"
},
{
"math_id": 5,
"text": "x."
},
{
"math_id": 6,
"text": "b_0"
},
{
"math_id": 7,
"text": "p(x_0)"
},
{
"math_id": 8,
"text": "p(x) = a_0 + x \\bigg(a_1 + x \\Big(a_2 + x \\big(a_3 + \\cdots + x(a_{n-1} + x \\, a_n) \\cdots \\big) \\Big) \\bigg) \\ ."
},
{
"math_id": 9,
"text": "b_i"
},
{
"math_id": 10,
"text": "\\begin{align}\np(x_0) & = a_0 + x_0\\Big(a_1 + x_0\\big(a_2 + \\cdots + x_0(a_{n-1} + b_n x_0) \\cdots \\big)\\Big) \\\\\n& = a_0 + x_0\\Big(a_1 + x_0\\big(a_2 + \\cdots + x_0 b_{n-1}\\big)\\Big) \\\\\n& ~~ \\vdots \\\\\n& = a_0 + x_0 b_1 \\\\\n& = b_0.\n\\end{align}"
},
{
"math_id": 11,
"text": "p(x) / (x-x_0) "
},
{
"math_id": 12,
"text": "p(x)"
},
{
"math_id": 13,
"text": "b_0 = 0"
},
{
"math_id": 14,
"text": "0"
},
{
"math_id": 15,
"text": "x-x_0"
},
{
"math_id": 16,
"text": "b"
},
{
"math_id": 17,
"text": "b_n"
},
{
"math_id": 18,
"text": "a_n"
},
{
"math_id": 19,
"text": " b_{n-1} = a_{n-1} + b_{n}x_0 "
},
{
"math_id": 20,
"text": "f(x)=2x^3-6x^2+2x-1"
},
{
"math_id": 21,
"text": "x=3"
},
{
"math_id": 22,
"text": "f(x)"
},
{
"math_id": 23,
"text": "x-3"
},
{
"math_id": 24,
"text": "f(3) "
},
{
"math_id": 25,
"text": "f(3) = 5"
},
{
"math_id": 26,
"text": "a_3 = 2, a_2 = -6, a_1 = 2, a_0 = -1"
},
{
"math_id": 27,
"text": "b_3 = 2, b_2 = 0, b_1 = 2, b_0 = 5 "
},
{
"math_id": 28,
"text": " x-3 "
},
{
"math_id": 29,
"text": "x^3-6x^2+11x-6"
},
{
"math_id": 30,
"text": "x-2"
},
{
"math_id": 31,
"text": "x^2-4x+3"
},
{
"math_id": 32,
"text": "f_1(x)=4x^4-6x^3+3x-5"
},
{
"math_id": 33,
"text": "f_2(x)=2x-1"
},
{
"math_id": 34,
"text": "f_1(x)"
},
{
"math_id": 35,
"text": "f_2\\,(x)"
},
{
"math_id": 36,
"text": "\\frac{f_1(x)}{f_2(x)}=2x^3-2x^2-x+1-\\frac{4}{2x-1}."
},
{
"math_id": 37,
"text": "(n^2+n)/2"
},
{
"math_id": 38,
"text": "2n-1"
},
{
"math_id": 39,
"text": "x"
},
{
"math_id": 40,
"text": "2n"
},
{
"math_id": 41,
"text": "x^n"
},
{
"math_id": 42,
"text": "k"
},
{
"math_id": 43,
"text": "kn"
},
{
"math_id": 44,
"text": "\\begin{align}\np(x)\n& = \\sum_{i=0}^n a_i x^i \\\\[1ex]\n& = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \\cdots + a_n x^n \\\\[1ex]\n& = \\left( a_0 + a_2 x^2 + a_4 x^4 + \\cdots\\right) + \\left(a_1 x + a_3 x^3 + a_5 x^5 + \\cdots \\right) \\\\[1ex]\n& = \\left( a_0 + a_2 x^2 + a_4 x^4 + \\cdots\\right) + x \\left(a_1 + a_3 x^2 + a_5 x^4 + \\cdots \\right) \\\\[1ex]\n& = \\sum_{i=0}^{\\lfloor n/2 \\rfloor} a_{2i} x^{2i} + x \\sum_{i=0}^{\\lfloor n/2 \\rfloor} a_{2i+1} x^{2i} \\\\[1ex]\n& = p_0(x^2) + x p_1(x^2).\n\\end{align}"
},
{
"math_id": 45,
"text": "p(x)\n= \\sum_{i=0}^n a_i x^i\n= \\sum_{j=0}^{k-1} x^j \\sum_{i=0}^{\\lfloor n/k \\rfloor} a_{ki+j} x^{ki}\n= \\sum_{j=0}^{k-1} x^j p_j(x^k)"
},
{
"math_id": 46,
"text": "a_i = 1"
},
{
"math_id": 47,
"text": "x = 2"
},
{
"math_id": 48,
"text": "\\begin{align}\n (0.15625) m & = (0.00101_b) m = \\left( 2^{-3} + 2^{-5} \\right) m = \\left( 2^{-3})m + (2^{-5} \\right)m \\\\\n & = 2^{-3} \\left(m + \\left(2^{-2}\\right)m\\right) = 2^{-3} \\left(m + 2^{-2} (m)\\right).\n\\end{align}"
},
{
"math_id": 49,
"text": " d_3 d_2 d_1 d_0 "
},
{
"math_id": 50,
"text": " (d_3 2^3 + d_2 2^2 + d_1 2^1 + d_0 2^0)m = d_3 2^3 m + d_2 2^2 m + d_1 2^1 m + d_0 2^0 m. "
},
{
"math_id": 51,
"text": " = d_0\\left(m + 2 \\frac{d_1}{d_0} \\left(m + 2 \\frac{d_2}{d_1} \\left(m + 2 \\frac{d_3}{d_2} (m)\\right)\\right)\\right). "
},
{
"math_id": 52,
"text": " = d_0(m + 2 {d_1} (m + 2 {d_2} (m + 2 {d_3} (m)))),"
},
{
"math_id": 53,
"text": " = d_3(m + 2^{-1} {d_2} (m + 2^{-1}{d_1} (m + {d_0} (m)))). "
},
{
"math_id": 54,
"text": "p_n(x)"
},
{
"math_id": 55,
"text": " z_n < z_{n-1} < \\cdots < z_1,"
},
{
"math_id": 56,
"text": " x_0 "
},
{
"math_id": 57,
"text": " z_1 < x_0 "
},
{
"math_id": 58,
"text": "z_1"
},
{
"math_id": 59,
"text": "(x-z_1)"
},
{
"math_id": 60,
"text": "p_{n-1}"
},
{
"math_id": 61,
"text": "p_6(x) = (x+8)(x+5)(x+3)(x-2)(x-3)(x-7)"
},
{
"math_id": 62,
"text": "p_6(x) = x^6 + 4x^5 - 72x^4 -214x^3 + 1127x^2 + 1602x -5040."
},
{
"math_id": 63,
"text": "(x-7)"
},
{
"math_id": 64,
"text": "p_5(x) = x^5 + 11x^4 + 5x^3 - 179x^2 - 126x + 720"
},
{
"math_id": 65,
"text": "(x-3)"
},
{
"math_id": 66,
"text": "p_4(x) = x^4 + 14x^3 + 47x^2 - 38x - 240"
},
{
"math_id": 67,
"text": "p_3(x) = x^3 + 16x^2 + 79x + 120"
},
{
"math_id": 68,
"text": "p_2(x) = x^2 + 13x + 40"
},
{
"math_id": 69,
"text": "p_2(x)"
},
{
"math_id": 70,
"text": "(p(y) - p(x))/(y - x)."
},
{
"math_id": 71,
"text": "\\begin{align}\nb_n & = a_n, &\\quad d_n &= b_n, \\\\\nb_{n-1} & = a_{n-1} + b_n x, &\\quad d_{n-1} &= b_{n-1} + d_n y, \\\\\n& {}\\ \\ \\vdots &\\quad & {}\\ \\ \\vdots\\\\\nb_1 & = a_1 + b_2 x, &\\quad d_1 &= b_1 + d_2 y,\\\\\nb_0 & = a_0 + b_1 x.\n\\end{align}"
},
{
"math_id": 72,
"text": "\\begin{align}\np(x) &= b_0, \\\\\n\\frac{p(y) - p(x)}{y - x} &= d_1, \\\\\np(y) &= b_0 + (y - x) d_1.\n\\end{align}"
},
{
"math_id": 73,
"text": "p(y)"
},
{
"math_id": 74,
"text": " x \\approx y"
},
{
"math_id": 75,
"text": "y = x"
},
{
"math_id": 76,
"text": "d_1 = p'(x)"
}
] |
https://en.wikipedia.org/wiki?curid=14263
|
14263013
|
Spacetime topology
|
Spacetime topology is the topological structure of spacetime, a topic studied primarily in general relativity. This physical theory models gravitation as the curvature of a four dimensional Lorentzian manifold (a spacetime) and the concepts of topology thus become important in analysing local as well as global aspects of spacetime. The study of spacetime topology is especially important in physical cosmology.
Types of topology.
There are two main types of topology for a spacetime "M".
Manifold topology.
As with any manifold, a spacetime possesses a natural manifold topology. Here the open sets are the images of open sets in formula_0.
Path or Zeeman topology.
"Definition": The topology formula_1 in which a subset formula_2 is open if for every timelike curve formula_3 there is a set formula_4 in the manifold topology such that formula_5.
It is the finest topology which induces the same topology as formula_6 does on timelike curves.
Properties.
It is strictly finer than the manifold topology, and is therefore Hausdorff and separable, but it is not locally compact.
A base for the topology is sets of the form formula_7 for some point formula_8 and some convex normal neighbourhood formula_9.
(formula_10 denote the chronological past and future).
Alexandrov topology.
The Alexandrov topology on spacetime is the coarsest topology such that both formula_11 and formula_12 are open for all subsets formula_2.
Here the base of open sets for the topology are sets of the form formula_13 for some points formula_14.
This topology coincides with the manifold topology if and only if the manifold is strongly causal but it is coarser in general.
Note that in mathematics, an Alexandrov topology on a partial order is usually taken to be the coarsest topology in which only the upper sets formula_11 are required to be open. This topology goes back to Pavel Alexandrov.
Nowadays, the correct mathematical term for the Alexandrov topology on spacetime (which goes back to Alexandr D. Alexandrov) would be the interval topology, but when Kronheimer and Penrose introduced the term this difference in nomenclature was not as clear, and in physics the term Alexandrov topology remains in use.
Planar spacetime.
Events connected by light have zero separation. The plenum of spacetime in the plane is split into four quadrants, each of which has the topology of R2. The dividing lines are the trajectory of inbound and outbound photons at (0,0). The planar-cosmology topological segmentation is the future F, the past P, space left L, and space right D. The homeomorphism of F with R2 amounts to polar decomposition of split-complex numbers:
formula_15 so that
formula_16 is the split-complex logarithm and the required homeomorphism F → R2. Note that "b" is the rapidity parameter for relative motion in F.
F is in bijective correspondence with each of P, L, and D under the mappings "z" → −"z", "z" → j"z", and "z" → −j"z", so each acquires the same topology. The union U = F ∪ P ∪ L ∪ D then has a topology nearly covering the plane, leaving out only the null cone on (0,0). Hyperbolic rotation of the plane does not mingle the quadrants; in fact, each one is an invariant set under the unit hyperbola group.
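A small Python sketch of the homeomorphism F → R2 described above. The coordinates ("t", "x") for an event and the inversion formulas below are assumptions made here for illustration; the text gives only the polar decomposition "z" = exp("a" + j"b"):

```python
import math

def future_chart(t, x):
    """Map an event (t, x) in the future quadrant F (t > |x|) to the
    (a, b) plane by inverting z = exp(a + j b), i.e.
    t = e^a cosh b, x = e^a sinh b (illustrative coordinates).
    """
    a = 0.5 * math.log(t * t - x * x)  # since e^(2a) = t^2 - x^2
    b = math.atanh(x / t)              # since tanh b = x / t (the rapidity)
    return a, b

# Round-trip check: start from a = 1, b = 0.3 and recover them.
a, b = future_chart(math.e * math.cosh(0.3), math.e * math.sinh(0.3))
print(round(a, 6), round(b, 6))  # → 1.0 0.3
```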
|
[
{
"math_id": 0,
"text": "\\mathbb{R}^4"
},
{
"math_id": 1,
"text": "\\rho"
},
{
"math_id": 2,
"text": "E \\subset M"
},
{
"math_id": 3,
"text": "c"
},
{
"math_id": 4,
"text": "O"
},
{
"math_id": 5,
"text": "E \\cap c = O \\cap c"
},
{
"math_id": 6,
"text": "M"
},
{
"math_id": 7,
"text": "Y^+(p,U) \\cup Y^-(p,U) \\cup p"
},
{
"math_id": 8,
"text": "p \\in M"
},
{
"math_id": 9,
"text": "U \\subset M"
},
{
"math_id": 10,
"text": "Y^\\pm"
},
{
"math_id": 11,
"text": "Y^+(E)"
},
{
"math_id": 12,
"text": "Y^-(E)"
},
{
"math_id": 13,
"text": "Y^+(x) \\cap Y^-(y)"
},
{
"math_id": 14,
"text": "\\,x,y \\in M"
},
{
"math_id": 15,
"text": "z = \\exp(a + j b) = e^a (\\cosh b + j \\sinh b) \\to (a, b),"
},
{
"math_id": 16,
"text": "z \\to (a, b)"
}
] |
https://en.wikipedia.org/wiki?curid=14263013
|
14266410
|
Quercetin-3-sulfate 3'-sulfotransferase
|
Class of enzymes
In enzymology, a quercetin-3-sulfate 3'-sulfotransferase (EC 2.8.2.26) is an enzyme that catalyzes the chemical reaction
3'-phosphoadenylyl sulfate + quercetin 3-sulfate formula_0 adenosine 3',5'-bisphosphate + quercetin 3,3'-bissulfate
Thus, the two substrates of this enzyme are 3'-phosphoadenylyl sulfate and quercetin 3-sulfate, whereas its two products are adenosine 3',5'-bisphosphate and quercetin 3,3'-bissulfate.
This enzyme belongs to the family of transferases, specifically the sulfotransferases, which transfer sulfur-containing groups. The systematic name of this enzyme class is 3'-phosphoadenylyl-sulfate:quercetin-3-sulfate 3'-sulfotransferase. Other names in common use include flavonol 3'-sulfotransferase, 3'-Sulfotransferase, and PAPS:flavonol 3-sulfate 3'-sulfotransferase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14266410
|
14266420
|
Quercetin-3-sulfate 4'-sulfotransferase
|
Class of enzymes
In enzymology, a quercetin-3-sulfate 4'-sulfotransferase (EC 2.8.2.27) is an enzyme that catalyzes the chemical reaction
3'-phosphoadenylyl sulfate + quercetin 3-sulfate formula_0 adenosine 3',5'-bisphosphate + quercetin 3,4'-bissulfate
Thus, the two substrates of this enzyme are 3'-phosphoadenylyl sulfate and quercetin 3-sulfate, whereas its two products are adenosine 3',5'-bisphosphate and quercetin 3,4'-bissulfate.
This enzyme belongs to the family of transferases, specifically the sulfotransferases, which transfer sulfur-containing groups. The systematic name of this enzyme class is 3'-phosphoadenylyl-sulfate:quercetin-3-sulfate 4'-sulfotransferase. Other names in common use include flavonol 4'-sulfotransferase, and PAPS:flavonol 3-sulfate 4'-sulfotransferase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14266420
|
14266437
|
Renilla-luciferin sulfotransferase
|
Class of enzymes
In enzymology, a Renilla-luciferin sulfotransferase (EC 2.8.2.10) is an enzyme that catalyzes the chemical reaction
3'-phosphoadenylyl sulfate + Renilla luciferin formula_0 adenosine 3',5'-bisphosphate + luciferyl sulfate
Thus, the two substrates of this enzyme are 3'-phosphoadenylyl sulfate and Renilla luciferin, whereas its two products are adenosine 3',5'-bisphosphate and luciferyl sulfate.
This enzyme belongs to the family of transferases, specifically the sulfotransferases, which transfer sulfur-containing groups. The systematic name of this enzyme class is 3'-phosphoadenylyl-sulfate:Renilla luciferin sulfotransferase. Other names in common use include luciferin sulfotransferase, luciferin sulfokinase, and luciferin sulfokinase (3'-phosphoadenylyl sulfate:luciferin sulfotransferase).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14266437
|
14266451
|
Steroid sulfotransferase
|
Class of enzymes
In enzymology, a steroid sulfotransferase (EC 2.8.2.15) is an enzyme that catalyzes the chemical reaction
3'-phosphoadenylyl sulfate + a phenolic steroid formula_0 adenosine 3',5'-bisphosphate + steroid O-sulfate
Thus, the two substrates of this enzyme are 3'-phosphoadenylyl sulfate and phenolic steroid, whereas its two products are adenosine 3',5'-bisphosphate and steroid O-sulfate.
This enzyme belongs to the family of transferases, specifically the sulfotransferases, which transfer sulfur-containing groups. The systematic name of this enzyme class is 3'-phosphoadenylyl-sulfate:phenolic-steroid sulfotransferase. This enzyme is also called steroid alcohol sulfotransferase. This enzyme participates in steroid metabolism.
Genes.
Of 62 sulfotransferase genes in the human genome, 16 represent cytoplasmic sulfotransferases, and of these 16 cytoplasmic sulfotransferases, five have been found to act as steroid sulfotransferases. These five sulfotransferase genes are SULT1A1, SULT1E1, and SULT2A1, as well as the two isoforms of SULT2B1, SULT2B1a and SULT2B1b. Their substrate specificity is as follows:
Traditionally, steroid sulfotransferases have been named according to their preferred substrate, for instance estrogen sulfotransferase (SULT1E1) and DHEA sulfotransferase (SULT2A1). However, cytosolic steroid sulfotransferases show broad substrate specificity, and SULT1E1 and SULT2A1 are not the only steroid sulfotransferases that sulfate estrogens and DHEA, respectively.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14266451
|
14266460
|
Succinate—citramalate CoA-transferase
|
Enzyme family
In enzymology, a succinate-citramalate CoA-transferase (EC 2.8.3.7) is an enzyme that catalyzes the chemical reaction
succinyl-CoA + citramalate formula_0 succinate + citramalyl-CoA
Thus, the two substrates of this enzyme are succinyl-CoA and citramalate, whereas its two products are succinate and citramalyl-CoA.
This enzyme belongs to the family of transferases, specifically the CoA-transferases. The systematic name of this enzyme class is succinyl-CoA:citramalate CoA-transferase. Other names in common use include itaconate CoA-transferase, citramalate CoA-transferase, citramalate coenzyme A-transferase, and succinyl coenzyme A-citramalyl coenzyme A transferase. This enzyme participates in c5-branched dibasic acid metabolism.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14266460
|
14266471
|
Succinate—hydroxymethylglutarate CoA-transferase
|
Class of enzymes
In enzymology, a succinate-hydroxymethylglutarate CoA-transferase (EC 2.8.3.13) is an enzyme that catalyzes the chemical reaction
succinyl-CoA + 3-hydroxy-3-methylglutarate formula_0 succinate + (S)-3-hydroxy-3-methylglutaryl-CoA
Thus, the two substrates of this enzyme are succinyl-CoA and 3-hydroxy-3-methylglutarate, whereas its two products are succinate and (S)-3-hydroxy-3-methylglutaryl-CoA.
This enzyme belongs to the family of transferases, specifically the CoA-transferases. The systematic name of this enzyme class is succinyl-CoA:3-hydroxy-3-methylglutarate CoA-transferase. Other names in common use include hydroxymethylglutarate coenzyme A-transferase, and dicarboxyl-CoA:dicarboxylic acid coenzyme A transferase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14266471
|
14266505
|
Thiol sulfotransferase
|
Class of enzymes
In enzymology, a thiol sulfotransferase (EC 2.8.2.16) is an enzyme that catalyzes the chemical reaction
3'-phosphoadenylyl sulfate + a thiol formula_0 adenosine 3',5'-bisphosphate + an S-alkyl thiosulfate
Thus, the two substrates of this enzyme are 3'-phosphoadenylyl sulfate and thiol, whereas its two products are adenosine 3',5'-bisphosphate and S-alkyl thiosulfate.
This enzyme belongs to the family of transferases, specifically the sulfotransferases, which transfer sulfur-containing groups. The systematic name of this enzyme class is 3'-phosphoadenylyl-sulfate:thiol S-sulfotransferase. Other names in common use include phosphoadenylylsulfate-thiol sulfotransferase, PAPS sulfotransferase, and adenosine 3'-phosphate 5'-sulphatophosphate sulfotransferase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14266505
|
14266521
|
Thiosulfate—dithiol sulfurtransferase
|
Class of enzymes
In enzymology, a thiosulfate-dithiol sulfurtransferase (EC 2.8.1.5) is an enzyme that catalyzes the chemical reaction
thiosulfate + dithioerythritol formula_0 sulfite + 4,5-cis-dihydroxy-1,2-dithiacyclohexane (i.e. oxidized dithioerythritol) + sulfide
Thus, the two substrates of this enzyme are thiosulfate and dithioerythritol, whereas its 3 products are sulfite, 4,5-cis-dihydroxy-1,2-dithiacyclohexane, and sulfide.
This enzyme belongs to the family of transferases, specifically the sulfurtransferases, which transfer sulfur-containing groups. The systematic name of this enzyme class is thiosulfate:dithioerythritol sulfurtransferase. Other names in common use include thiosulfate reductase, and TSR. This enzyme participates in sulfur metabolism.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14266521
|
14266556
|
Thiosulfate—thiol sulfurtransferase
|
Class of enzymes
In enzymology, a thiosulfate-thiol sulfurtransferase (EC 2.8.1.3) is an enzyme that catalyzes the chemical reaction
thiosulfate + 2 glutathione formula_0 sulfite + glutathione disulfide + sulfide
Thus, the two substrates of this enzyme are thiosulfate and glutathione, whereas its 3 products are sulfite, glutathione disulfide, and sulfide.
This enzyme belongs to the family of transferases, specifically the sulfurtransferases, which transfer sulfur-containing groups. The systematic name of this enzyme class is thiosulfate:thiol sulfurtransferase. Other names in common use include glutathione-dependent thiosulfate reductase, sulfane reductase, and sulfane sulfurtransferase. This enzyme participates in glutathione metabolism.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14266556
|