text stringlengths 8–5.74M | label stringclasses 3 values | educational_prob sequencelengths 3–3 |
---|---|---|
package org.reactfx;

import org.reactfx.util.NotificationAccumulator;

/**
 * An {@linkplain Observable} that maintains a collection of registered
 * observers and notifies them when a change occurs. This is unlike
 * {@link ProxyObservable}, which registers observers with an underlying
 * {@linkplain Observable}, and unlike {@link RigidObservable}, which does
 * not produce any notifications.
 *
 * @param <O> observer type accepted by this {@linkplain Observable}
 * @param <T> notification type produced by this {@linkplain Observable}
 */
public interface ProperObservable<O, T> extends Observable<O> {

    void notifyObservers(T event);

    NotificationAccumulator<O, T, ?> defaultNotificationAccumulator();

    default int defaultHashCode() {
        return System.identityHashCode(this);
    }

    default boolean defaultEquals(Object o) {
        return this == o;
    }

    default String defaultToString() {
        return getClass().getName() + '@' + Integer.toHexString(hashCode());
    }
}
| Mid | [
0.572429906542056, 30.625, 22.875 ] |
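The observer-notification pattern described in the ProperObservable Javadoc above can be sketched in isolation. The types below (`Observable`, `SimpleEventStream`) are simplified stand-ins written for this example, not ReactFX's actual API: the real interface also routes events through a `NotificationAccumulator`, which is omitted here so that only the register-and-notify contract is shown.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Simplified stand-in for the library's base Observable (assumed shape).
interface Observable<O> {
    void addObserver(O observer);
    void removeObserver(O observer);
}

// Pared-down analogue of ProperObservable: it owns its observer list and
// pushes notifications to every registered observer itself.
interface ProperObservable<O, T> extends Observable<O> {
    void notifyObservers(T event);

    // Mirrors the identity-based default from the interface above.
    default String defaultToString() {
        return getClass().getName() + '@' + Integer.toHexString(hashCode());
    }
}

// A concrete event stream whose observers are plain Consumers.
class SimpleEventStream<T> implements ProperObservable<Consumer<T>, T> {
    private final List<Consumer<T>> observers = new CopyOnWriteArrayList<>();

    @Override public void addObserver(Consumer<T> o)    { observers.add(o); }
    @Override public void removeObserver(Consumer<T> o) { observers.remove(o); }

    @Override public void notifyObservers(T event) {
        // CopyOnWriteArrayList iterates over a snapshot, so an observer
        // may safely unsubscribe while a notification is in progress.
        for (Consumer<T> o : observers) {
            o.accept(event);
        }
    }
}
```

Usage follows the contract directly: register a `Consumer`, then push events, e.g. `stream.addObserver(log::append); stream.notifyObservers("hello");` appends `"hello"` to the log.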
OUR COATING GLASS FUSED TO STEEL Glass Fused to Steel (GFS) is the premium coating in the bolted tank market, utilising a proven product with significant benefits to customers, consultants and contractors over other types of tank construction. The GFS coating is designed to produce an extremely strong chemical bond to the steel during the fusion process, delivering strength and durability while the steel is in tension under the load of the tank contents. Unlike competing tank manufacturers’ ‘fusion bonded’ epoxy coating systems, which are merely ‘baked’ onto the steel surface and form only a physical bond, GFS coatings are both chemically fused to and physically combined with the steel substrate. This results in an unmatched, tough and durable bond. 50 year design life. 80 year service life. Proven in the field for over 40 years. Independently certified and exceeds NZS/AS codes and standards. Epoxy ‘Baked’ to Steel The Process: For two materials to fuse, there must be a chemical bond at a molecular level. GFS coatings are fused to the steel at temperatures ranging from 760°C (1400°F) to 860°C (1580°F), which facilitates the interfacial fusion reactions that combine the two materials. A typical factory-applied “fusion bonded” epoxy is cured at a much lower temperature, typically 200°C – 275°C (390°F – 525°F), which makes fusion between the epoxy and steel impossible. This results in an epoxy coating that is typically susceptible to damage, delamination and ultimately corrosion. Check out the difference between the two microscopic images: Testing: Permastore is the only tank manufacturer globally to work to published quality standards, and its 100% zero-discontinuity policy means only defect-free sheets are released to the marketplace. | High | [
0.699481865284974, 33.75, 14.5 ] |
Justification: The species has a relatively large distribution area and its habitat is not significantly affected by human activity; it is therefore assessed as Least Concern (LC). This species has also been assessed at the regional level as: EU27 regional assessment: Least Concern (LC) at the level of the 27 member states of the European Union. European regional assessment: Least Concern (LC). According to AnimalBase, the species inhabits rocks and rock rubble in forests (not too dark), in trees, on calcareous substrate, or on meadows with bare rocks showing through. It can be found above the timberline, up to 2,700 m asl. In France it is also found in dense populations in dry and sunny meadows devoid of rocks. In Britain it usually occupies unshaded habitats. Destruction of the rocks by quarrying, road construction or other causes is a potential threat to this species. According to AnimalBase (2010), in central Europe it is threatened by habitat destruction in forest management, and in Britain by changes of land use. However, total destruction of the whole habitat is not very likely, so this threat is mostly hypothetical. Several sub-populations inhabit protected areas. The species is of Lower Concern in Austria, decreasing (4R) in Bavaria outside the Alps, and of Lower Concern in Germany (Reischütz 2007, Falkner et al. 2002). No conservation actions are currently required. | Mid | [
0.5383022774327121, 32.5, 27.875 ] |
CLARKSBURG, W.Va. — Five years of probation and 500 hours of community service is the sentence handed down in federal court for fraudulent prescription activity by a Morgantown doctor. Chad Poage, 35, was an orthopedic surgeon with offices in Morgantown and Fairmont who wrote multiple prescriptions for narcotics for his own use between November 2015 and March 2018. He was sentenced Monday following a May guilty plea. Poage admitted to obtaining controlled substances by fraudulently writing prescriptions using colleagues’ Drug Enforcement Administration numbers and presenting stolen driver’s licenses to pick up the fraudulently prescribed controlled substances for his personal use. U.S. Attorney Bill Powell said addiction has spread across the spectrum. “Physicians who breach the trust given them often find themselves on the wrong side of the law. Professionals are not immune from the power of addiction. This case provides a sad but powerful commentary on the depth of our opioid crisis. We thank our partners in the Health Care Fraud Unit of the Department of Justice, along with our law enforcement partners, for the important work being done in this district,” Powell said in a statement. Poage was one of 60 people indicted in a federal health care fraud investigation last April. | Mid | [
0.636155606407322, 34.75, 19.875 ] |
If Trump Doesn’t Act Like He’s President, Will the Courts? “It’s a lot easier to act presidential than to do what I do,” President Trump told a Tampa audience this summer. He’s wrong, and it may have real-world implications if he invokes federal “emergency” statutes to unilaterally build a Mexican border wall. His “acting presidential” bit is a running gag that dates back to his campaign: joking that he, too, can “act presidential,” President Trump stiffly imitates a dull monotone address, sometimes breaking a presidential-sounding speech into component parts (“And then you go, ‘God bless you, and God bless the United States of America’”), before telling the audience, “that’s much easier than doing what I have to do . . . but this”—the full Donald Trump experience—“is much more effective.” Perhaps it is effective in rallying his political base, but it has not been a particularly effective approach to governance. And we may see firsthand the costs of eschewing presidential norms if Trump attempts to build a Mexican border wall by invoking federal “emergency” laws. As the entire political world now knows, federal laws are replete with provisions affording the president special powers upon his declaration of a state of “national emergency.” But, as David French detailed in a pair of posts at National Review, none of the available emergency-power statutes would actually justify President Trump’s construction of a Mexican border wall. The president’s advocates have invoked emergency-power statutes that empower the President to redirect funds in order to unilaterally “construct . . . civil defense projects that are essential to the national defense” (33 U.S.C. § 2293); or to “undertake military construction projects . . . not otherwise authorized by law that are necessary to support such use of the armed forces” (10 U.S.C. § 2808). But their arguments all suffer from a simple but fundamental flaw: they are unsupported by the facts.
The Mexican border wall is not “necessary to support” current use of armed forces—indeed, the president has not even suggested otherwise. Nor is the Mexican border wall “essential” to “the national defense” in any reasonable sense of those terms. And, above all else, there is no reasonable basis on which Trump can show, under the triggering statute (50 U.S.C. § 1602), that we actually are in a period of a “national emergency.” Indeed, the president’s own handling of the border wall issue over the last two years proves otherwise—especially when he rejected a proposal to fund the wall because it was paired with a plan to give 1.8 million immigrant children amnesty and a path to citizenship. The president’s proponents are untroubled by this gap between the law and the facts, in large part because they do not believe the Supreme Court would actually require the Trump administration to prove its case. Rather, they expect the courts to defer to President Trump’s statement that we are in a national emergency and that the wall is (per the statutory provisions) either “essential to the national defense” or “necessary to support such use of the armed forces.” “Courts generally have deferred to the judgments of presidents on the basis for such national emergencies,” writes Jonathan Turley in The Hill, “and dozens of such declarations have been made without serious judicial review.” At AmGreatness.com, John Eastman agrees: “it is extremely unlikely the Supreme Court would second guess the commander-in-chief” on questions of military need. 
At ConservativeReview.com, Daniel Horowitz punts the question of whether the current situation actually “rises to the level of an emergency or not,” because that question “is the subject of a political debate that should be settled between the political branches, not the courts.” In short, they are counting upon the courts, or at least the Supreme Court, to give president Trump the deference that courts conventionally afford to presidents on questions of emergency power—despite the fact that, now two years into his presidency, Donald Trump has largely defined himself as a departure from presidential norms and traditions. There is good reason to doubt that the justices will be so deferential to the Trump administration’s invocation of emergency powers. The court’s traditional deference to presidents in emergencies reflects, among other things, a judicial recognition that presidents often act on the basis of confidential facts and with the benefit of executive branch expertise. Here, by contrast, the facts of the situation are in plain view. And equally self-evident is President Trump’s actual motivation for building the wall: not sudden emergency circumstances, but his campaign pledge to build the wall, now thwarted by Republicans’ loss of the House of Representatives. Perhaps the administration is emboldened in this case by the Supreme Court’s recent decision in Trump v. Hawaii—better known as the “travel ban” case—because the five-justice majority deferred to the president’s judgment of necessity and declined to impute indications of Trump’s ill motives into the facially sufficient presidential decision. But that case is much different from the present one. 
In the travel ban case, the court stressed that the president ordered his agencies “to conduct a comprehensive evaluation” of the risks posed by entry of foreigners into the United States; then, based on that extensive review, the president issued a proclamation “setting forth extensive findings” as to facts ascertained by the agencies; and finally, the president’s Proclamation fixed country-specific limitations that reflected the administration’s comprehensive research. In the border wall fight, by contrast, the president’s factual claims are far removed from the actual evidence, and the powers that he asserts are blunt and immense. For that reason, the president should pause before assuming that the five-justice majority will be as deferential in a challenge to his “emergency” border wall. And if the justices have doubts about a declaration of emergency by President Trump, then their doubts will only be redoubled by the sheer magnitude of power claimed by the president. He claims the power to build not merely a bridge or similar piece of military infrastructure, but rather a massive wall running for hundreds of miles, radically changing the nature of our border, at a cost of billions of dollars, and with serious impacts on private landowners and on the environment. And all of it in the face of Congress’s refusal or failure to specifically authorize and fund the project. In that respect, an attempt by the Trump administration to build a border wall unilaterally may call to mind some of the Obama Administration’s own aggressive and unprecedented assertions of immense power, like the EPA’s greenhouse gas regulations or the FCC’s net neutrality regulations.
When the Supreme Court struck down one iteration of the EPA’s greenhouse gas regulations in 2014, Justice Scalia’s opinion stressed that the Administration’s interpretation of the law strained credulity because “it would bring about an enormous and transformative expansion” of the Administration’s power “without clear congressional authorization.” Scalia added, “[w]hen an agency claims to discover in a long-extant statute an unheralded power to regulate ‘a significant portion of the American economy’ . . . we typically greet its announcement with a measure of skepticism.” So, too, might a majority of Supreme Court justices today if the Trump Administration suddenly discovers in old statutes the power to unilaterally build the border wall that Congress disfavors. This may all come as a surprise to President Trump, who seems to think that his border wall fits comfortably within the limits of the emergency statutes. He said as much in his Rio Grande press briefing on Friday: “Don’t forget, national emergency is going through Congress because that already went through Congress.” (Or, translated into English: “To act pursuant to a national-emergency statute is to act pursuant to Congress, because the statute was enacted by Congress.”) And, he added, “[t]hat’s what it’s there for.” But that is not actually what the emergency statutes are “there for.” Congress vested presidents with great power to act in the case of genuine emergencies—not just when a president, in a political bind, simply says “national emergency,” as if they are magic words that make the Constitution’s requirements for lawmaking and money-spending suddenly disappear. It is the difference between a president’s cynical assertion of power and a president’s faithful execution of the law. For two years, Donald Trump has done everything he can to define himself as an unconventional president. It may come at the cost of the deference that justices conventionally afford to presidents. | Low | [
0.509513742071881, 30.125, 29 ] |
Editor’s Note – May 15, 2018: On October 24, 2017, the U.S. Attorney’s office dropped all criminal charges against Naseem “Nick” Salem in this alleged money-laundering case. According to court records, prosecutors asked Judge Marilyn Huff to dismiss the two charges against Salem “in the interest of justice.” There is no indication in the court record that Salem made a plea-bargain with prosecutors or admitted any wrong-doing. There is also no indication that Salem agreed to testify against any other defendant or cooperate with the government, in exchange for the dismissal of the charges against him. Salem’s attorney, Sarita Kedia, said her client was “wrongfully charged, and I am pleased that the U.S. Attorney’s Office was finally able to recognize that and dismiss the charges against him. It is extraordinarily unfortunate that the government did not properly investigate the case before bringing the unjustified charges against Mr. Salem and jeopardizing the reputation of a decent and extremely productive member of the community.” To read a full update to this story, click here. Federal officials raided two card rooms in San Diego County and issued arrest warrants for 25 people in connection with an alleged conspiracy to launder millions in profits from high-stakes poker games. Law enforcement officials raided Seven Mile Casino on Bay Boulevard in Chula Vista and the Black Jack Palomar Casino on El Cajon Boulevard and Oregon Street before 9 a.m., seizing more than $600,000 in player accounts and bank accounts. Arrests were also made in Pennsylvania, New Jersey, Nevada, Northern California, Los Angeles and Orange County. Charges ranged from illegal bookmaking, money laundering and failing to report winnings to federal authorities. Federal prosecutors claim David Stroj, aka "Fat Dave," of San Diego, hired people to recruit clients to the high-stakes games several times a week and then conspired to launder the money through local card rooms. 
Stroj faces federal charges of running an illegal bookmaking, poker and blackjack business, as well as money laundering and transporting someone from Mexico to California with the intent to engage in prostitution. He had not been arraigned by Wednesday afternoon, and it was not immediately clear if he had an attorney. Prosecutors claim Stroj would have bookmaking clients write checks to the card rooms so the money would be deposited into another player's account. Federal officials say that money would then be withdrawn in cash or chips. This also occurred at the Wynn and Bellagio casinos in Las Vegas, federal officials allege. "Fat Dave" took in approximately $2 million a month in gambling activity, making about $500,000 in profit, federal officials claim. The four people arraigned in federal court Wednesday allegedly recruited clients from the Barona Casino, Las Vegas and Mexico on behalf of Stroj, according to prosecutors. They pleaded not guilty in court. Craig Kolk, Ricardo Castellanos-Velasquez and Duy Trang were granted bail after their court appearance. Ali Lareybi was detained pending a hearing on Friday at 10 a.m. Seventeen other people were in custody. Four were fugitives, and warrants have been issued for their arrests. The operator of the Palomar Card Club, Naseem "Nick" Salem, is accused of failing to track winners earning more than $10,000 a day. “All financial institutions, including casinos, are required to report any cash transaction over $10,000,” said Joshua Mellor, assistant U.S. Attorney. Prosecutors also claim Salem moved money around in an illegal blackjack and poker business to escape detection.
Stroj-Indictment-11-20-15 (PDF) Harvey Souza, owner of Seven Mile Casino, was arrested Wednesday at his Bonita home, accused of not keeping track of who won more than $10,000 a day at his casino. “They were not duped. It was criminal in nature,” Mellor said. Souza spoke with NBC 7 when the card room opened in July. Unlike tribal casinos, card rooms do not offer slot gaming. Patrons enjoy games like blackjack, baccarat, pai-gow and poker. At the time it was one of only four in the county with two in San Diego and one in Oceanside. After the raid, employees at Seven Mile Casino maintained none of the 24 other co-defendants are associated with the card room, according to a statement released by the casino: "Seven Mile Casino, owned and operated by Harvey Souza and his family, have worked tirelessly for the past 70 years to build upon their great-grandfather’s legacy and comply with the evolving regulations regarding card rooms across the state. As a family and as a business, they are very much invested in the community of Chula Vista and the industry. We look forward to working with the California Bureau of Gambling Control to resolve all issues." California’s Bureau of Gambling Control issued Emergency Closure Orders on both the Palomar and Village Club card rooms effective immediately. The locations will be closed until they meet certain criteria to potentially reopen, a spokeswoman for Attorney General Kamala D. Harris said. “These casinos engaged in money laundering and illegal gambling schemes that undermine the well-being of our communities,” said Attorney General Kamala D. Harris in a statement.
“I thank our California Department of Justice Bureau of Gambling Control Special Agents, as well as our local and federal law enforcement partners, for holding the alleged perpetrators accountable for their financial crimes." Other defendants named in the indictment include: Matthew Greenwood, Jeffrey Broadt, Jeffrey Stoff, Arturo Diaz-Ramirez, Jaime Behar, Robert Stroj, Jean Paul Rojo, Joshua Jones, Alexandra Kane, Bryan Sibbach, Joseph Palermo, Thomas Mallozzi, Stephen Bednar, Christopher Parsons, Jeffrey Mohr, Kyle Allen, Michael Hipple and Alfredo Barba. The investigation was launched two years ago by the California Department of Justice, Bureau of Gambling Control. The agency worked with the U.S. Attorney's Office, the California DOJ Indian and Gaming Law Section, the FBI, the IRS, HSI, and the San Diego Sheriff's Department during the investigation. The Palomar Card Club was in danger of being shut down earlier this year when state gaming officials accused owners Donald and Susan Staats of transferring their license to their daughter. The Staats' business license expired on Nov. 30, 2015, according to legal documents. | Low | [
0.43933054393305404, 26.25, 33.5 ] |
We are studying the mechanism of control of expression of genes of D-galactose transport and metabolism in Escherichia coli. We have previously demonstrated that the members of the gal regulon are negatively regulated to different extents by Gal repressor (GalR) and isorepressor (GalS). We have shown that the promoters, P1 and P2, of the gal operon are completely repressed if a DNA loop covering the promoter segment is formed by the association of GalR bound to two operators, OE and OI. RNA polymerase binds to the promoters but cannot form open complexes because of torsional inflexibility of the loop. We have shown that loop formation by GalR requires another factor, which we have purified. The purified protein behaves like the histone-like protein of E. coli called HU. In the absence of DNA looping, GalR bound to the upstream operator, OE, acts as an activator of P2 and a repressor of P1. GalR performs this dual role by making contacts with a specific amino acid segment of the C-terminal domain of the alpha subunit of RNA polymerase bound to P2 and P1. Mutations in this region of alpha derange the activator and/or repressor role of GalR. We have purified the GalS protein to homogeneity and studied its properties. Consistent with observations made in vivo, we have found that GalS-mediated repression is strongest in the mgl operon, weaker in the gal operon, and nearly undetectable in the galP operon. This differential behavior originates in the correspondingly different affinities of GalS toward each relevant operator DNA. | High | [
0.6825595984943531, 34, 15.8125 ] |
Étienne Brûlé Étienne Brûlé (c. 1592 – c. June 1633) was the first European explorer to journey beyond the St. Lawrence River into what is now known as Canada. He spent much of his early adult life among the Hurons, and mastered their language and culture. Brûlé became an interpreter and guide for Samuel de Champlain, who later sent Brûlé on a number of exploratory missions, on which he is thought to have preceded Champlain to the Great Lakes, reuniting with him upon Champlain's first arrival at Lake Huron. Among his many travels were explorations of Georgian Bay and Lake Huron, as well as the Humber and Ottawa Rivers. In 1629, during the Anglo-French War, he escaped after being captured by the Seneca tribe. Brûlé was killed by the Bear tribe of the Huron people, who believed he had betrayed them to the Seneca. Early life in France Brûlé was born c. 1592 in Paris, France. He came to Canada in 1608, when he was only 16 years old. Brûlé left no recollection or description of his early life, his life among the indigenous peoples, or his expeditions. His existence has therefore been viewed through the works of others, including Champlain, Sagard, and Brébeuf. Life in New France Champlain wrote of a youth who had been living in New France since 1608, and whom many believe to have been young Brûlé. In June 1610, Brûlé told Champlain that he wished to go and live with the Algonquins to learn their language and better understand their customs and habits. Champlain made the arrangements to do so and, in return, the chief Iroquet (an Algonquin leader of the Petite nation who wintered his people near Huronia) requested that Champlain take Savignon, a young Huron, with him to teach him the customs and habits of the French. Champlain instructed Brûlé to learn the Huron language, explore the country, establish good relations with all First Nations, and report back in one year's time with all that he had learned.
On June 13, 1611, Champlain returned to visit Brûlé, who astonishingly had done all that Champlain had asked of him. Brûlé was dressed as though he were one of the indigenous people and was extremely pleased with the way he had been treated and all that he had learned. Champlain requested that Brûlé continue to live among the Indigenous peoples so that he could fully master everything, and Brûlé agreed. For four years, Champlain had no connection or communication with Brûlé who, it is thought, was in that time the first European to see the Great Lakes. In 1615, they met again at Huronia. There, Brûlé informed Champlain of his adventures and explorations through North America. Brûlé explained that he had been joined by another French interpreter by the name of Grenolle. He reported that they had travelled along the north shore of what they called la mer douce (the calm sea), now known as Lake Huron, and went as far as the great rapids of Sault Ste. Marie, where Lake Superior enters Lake Huron. In 1615, Brûlé asked permission from Champlain to join 12 Huron warriors on their mission to see the Andaste (Susquehannock) people, allies of the Hurons, to ask them for their support during an expedition Champlain was planning. Champlain ordered the party to travel west of the Seneca country because they needed to arrive quickly, and the only way to do so was by crossing enemy territory. This proved dangerous and only partly successful: Brûlé did reach the Andastes, but he arrived at the meeting place Champlain chose two days too late to assist Champlain and the Hurons, who had been defeated by the Iroquois. Brûlé probably visited four of the five Great Lakes—Lake Huron, Lake Superior, Lake Erie, Lake Ontario—and may have also seen Lake Michigan. Brûlé was more than likely the first European to complete these expeditions across North America.
In these expeditions he visited places such as the Ottawa River, Mattawa River, Lake Nipissing, and the French River to Georgian Bay. From Georgian Bay, Brûlé was able to cut into Lake Huron. He paddled up the St. Marys River and portaged into Lake Superior. He journeyed through Lake Simcoe and portaged through what is now Toronto to Lake Ontario. From Lake Ontario Brûlé was able to travel in Upstate New York and explore Pennsylvania and cross down the Susquehanna River to Chesapeake Bay. It is also said that it is very probable that Brûlé was one of the first Europeans to stand along the shores of Lake Erie and Lake Michigan. He had spent months visiting indigenous peoples that lived along Lake Erie between the Niagara and Detroit Rivers, but because he left no writings of his own, almost nothing identifiable is known about the tribes he visited, many of which would be obliterated a few decades later in the Beaver Wars (in contrast, Joseph de La Roche Daillon, who conducted a missionary journey among the tribes of Western New York in 1627, kept meticulous notes of his journeys; it is de La Roche's writings that serve as the primary history of pre-Beaver Wars native occupation of Western New York). Champlain and the Jesuits often spoke out against Brûlé's adoption of Huron customs, as well as his association with the fur traders, who were beyond the control of the colonial government. Brûlé returned to Quebec in 1618, but Champlain advised him to continue his explorations among the Hurons. Brûlé was later confined in Quebec for a year, where he taught the Jesuits the natives' language. In 1629, Brûlé betrayed the colony of New France. David Kirke and his brothers, English merchants of Huguenot extraction, paid 100 pistoles to Brûlé and three of his companions to pilot their ships up the St. Lawrence river and "undoubtedly gave information as to the desperate state of Quebec's garrison" that emboldened the Kirkes to attack it. 
(See main article: Surrender of Quebec) After 1629, Brûlé continued to live with the Natives, acting as an interpreter in their dealings with the French traders. Though the circumstances of his death are unclear, the prevailing view is that he was captured by the Seneca Iroquois in battle and left for dead by his Huron group. He managed to escape death by torture, but when he returned home, the Hurons did not believe his story and suspected him of trading with the Senecas. Treated as an enemy, Brûlé was stabbed to death, his body was dismembered, and his remains were consumed by the villagers in 1633. He died at Toanche, on the Penetanguishene peninsula, Ontario. | High | [
0.697247706422018, 38, 16.5 ] |
--- abstract: 'During the month of December, 2009 the blazar 3C 454.3 became the brightest gamma-ray source in the sky, reaching a peak flux $F \sim 2000 \times 10^{-8} $ph cm$^{-2}$ s$^{-1}$ for E $>\ 100$ MeV. Starting in November, 2009 intensive multifrequency campaigns monitored the 3C 454 gamma-ray outburst. Here we report the results of a 2-month campaign involving AGILE, INTEGRAL, *Swift*/XRT, *Swift*/BAT, RossiXTE for the high-energy observations, and *Swift*/UVOT, KANATA, GRT, REM for the near-IR/optical/UV data. The GASP/WEBT provided radio and additional optical data. We detected a long-term active emission phase lasting $\sim$1 month at all wavelengths: in the gamma-ray band, peak emission was reached on December 2-3, 2009. Remarkably, this gamma-ray super-flare was not accompanied by correspondingly intense emission in the optical/UV band that reached a level substantially lower than the previous observations in 2007-2008. The lack of strong simultaneous optical brightening during the super-flare and the determination of the broad-band spectral evolution severely constrain the theoretical modelling. We find that the pre- and post-flare broad-band behavior can be explained by a one-zone model involving SSC plus external Compton emission from an accretion disk and a broad-line region. However, the spectra of the Dec. 2-3, 2009 super-flare and of the secondary peak emission on Dec. 9, 2009 cannot be satisfactorily modelled by a simple one-zone model. An additional particle component is most likely active during these states.' author: - 'L. Pacciani, V. Vittorini, M. Tavani, M. T. Fiocchi, S. Vercellone, F. D’Ammando, T. Sakamoto , E. Pian, C. M. Raiteri, M. Villata, M. Sasada, R. Itoh, M. Yamanaka, M. Uemura, E. Striani, S. D. Fugazza, A. Tiengo, H. A. Krimm, M. C. Stroh, A. D. Falcone, P. A. Curran, A. C. Sadun, A. Lahteenmaki, M. Tornikoski, H. D. Aller , M. F. Aller, C. S. Lin, V. M. Larionov , P. Leto, L. O. Takalo, A. Berdyugin , M. A. Gurwell, A. 
Bulgarelli, A. W. Chen, I. Donnarumma, A. Giuliani, F. Longo, G. Pucella, A. Argan, G. Barbiellini, P. Caraveo, P. W. Cattaneo, E. Costa, G. De Paris, E. Del Monte, G. Di Cocco, Y. Evangelista, A. Ferrari, M. Feroci, M. Fiorini, F. Fuschino, M. Galli, F. Gianotti, C. Labanti, I. Lapshov, F. Lazzarotto, P. Lipari, M. Marisaldi, S. Mereghetti, E. Morelli, E. Moretti, A. Morselli, A. Pellizzoni, F. Perotti, G. Piano, P. Picozza, M. Pilia, M. Prest, M. Rapisarda, A. Rappoldi, A. Rubini, S. Sabatini, P. Soffitta, M. Trifoglio, A. Trois, E. Vallazza, D. Zanello, S. Colafrancesco, C. Pittori, F. Verrecchia, P. Santolamazza, F. Lucarelli, P. Giommi and L. Salotti' title: 'The December 2009 gamma-ray flare of 3C 454.3: the multifrequency campaign' ---

Introduction
============

The flat spectrum radio quasar 3C 454.3 (at a redshift $z=0.859$) is among the most active blazars, emitting a broad spectrum ranging from radio to gamma-ray energies. Blazars are a sub-class of active galactic nuclei, with the relativistic jet aligned to the line of sight. Their spectral energy distributions (SED) typically show a double-humped shape, with the low-energy peak lying between radio and X-rays, and the high-energy peak in the GeV-TeV band [@padovani1995]. Detailed descriptions of blazar leptonic emission models can be found in Maraschi et al. (1992); Marscher & Bloom (1992); Sikora et al. (1994). The observed spectra can also be modelled in the framework of hadronic models [@mucke2001; @mucke2003; @bottcher2007]. Starting in 2004-2005, 3C 454.3 showed a long period of optical activity, with variability timescales ranging from several months to less than one day. In May 2005, the source reached a peak magnitude $R \simeq 12$, showing strong 1-day variability [@Villata2006]. A radio peak was detected 9 months after the optical peak, with a flux of 22 Jy at 37 GHz, and 20 Jy at 43 GHz [@villata2007].
The source was then quiescent from the beginning of 2006 until mid-2007 [with an R magnitude between 15 and 16, @raiteri2008]. Starting in the second half of 2007 [@vercellone2008], 3C 454.3 has been detected in a high gamma-ray state by AGILE [@agile], and subsequently also by *Fermi*-LAT [@atwood2009; @abdo2009a]. Typically, the level of gamma-ray activity (with a flux of 300-600$\times 10^{-8}$ ph cm$^{-2}$ s$^{-1}$ for E $>$ 100 MeV) has been observed to be correlated with the optical emission. Relatively large gamma-ray fluxes were detected during the AGILE observations. During 2008, the source, which was bright at the beginning of the year, started to fade in the optical band [@villata2009]. AGILE detected the fading in gamma-rays too [@vercellone2009b]. *Fermi* reported an averaged spectrum above 200 MeV with a photon index $\alpha \simeq 2.3$ and a spectral break at $E_c \sim 2.4$ GeV, obtained in August, 2008 [@abdo2009a].

The multifrequency campaign
===========================

The intensive monitoring of 3C 454.3 carried out by our group covered the period of extraordinary gamma-ray activity in November-December, 2009 [@striani2009]. The campaign involved AGILE for the gamma-ray band, *Swift*/BAT, RossiXTE/HEXTE and INTEGRAL/IBIS in the hard X-ray band, RossiXTE/PCA and *Swift*/XRT in X-rays, *Swift*/UVOT in the optical and UV bands, the KANATA observatory and GRT in the optical, and REM in the near-infrared and optical. AGILE observed the source every day in *spinning mode*, scanning about 70% of the whole sky every 6 minutes. INTEGRAL pointed the source in response to a Target of Opportunity (ToO) request [@atelvecellone], and observed it from 2009 December 6 until 2009 December 12. The RossiXTE satellite observed 3C 454.3 on 2009 December 5 and then daily from December 8 until December 17, 2009 for typical integrations of $\sim$3 ks.
*Swift* started to observe 3C 454.3 on 2009 November 27, in response to a ToO, and pointed at the source every day (UVOT performed most of the observations with the UV filters). The KANATA 1.5 m telescope performed a long-term monitoring of the source in the V band, with a time step of 1 day. The fully automated 14" GRT (Goddard Robotic Telescope) performed observations in the V and R bands, quasi-simultaneously with *Swift*, starting on 2009 November 30. REM started the monitoring on 2009 December 10 in response to a ToO, and observed the source every day in the VRIJHK filters. The GLAST-AGILE Support Program [GASP @villata2008; @villata2009] performed an intensive monitoring campaign of the source in 2009-2010. We used a sub-sample of their data: the optical observations reported in this paper were performed at Lulin, New Mexico Skies, Roque de los Muchachos (KVA), and St. Petersburg. GASP radio data were taken at Mauna Kea (SMA, 230 GHz), Noto (43 GHz), Metsähovi (37 GHz), and UMRAO (4.8, 8.0, and 14.5 GHz).

Data analysis
=============

AGILE/GRID data were analyzed using the Build-19 software and the response matrix v10 calibrated in the energy range 100-3000 MeV. Well-reconstructed gamma-ray events were selected using the FM3.119 filter. All the events collected during passages through the South Atlantic Anomaly were rejected. We filtered out the Earth albedo, rejecting photons coming from a circular region of radius 85 deg centered on the Earth. We also rejected photons arriving more than 35 degrees off the optical axis. Gamma-ray data were analyzed with integrations of 1 or 2 days, depending on the source flux. We used the standard AGILE Maximum-Likelihood procedure (ALIKE) [see @mattox1996 for the concept definition] for each data set. The integration over 5 weeks from 2009-11-18 to 2009-12-23 UTC yields a photon index of 1.88 $\pm$ 0.08 (all the errors reported in the paper are at 1$\sigma$, except where stated).
The INTEGRAL-IBIS [@ubertini2003] data were processed using the OSA software version 8.0. Light curves (from 20 to 200 keV) and spectra (from 18 to 200 keV) were extracted for each individual science window of revolutions 873 and 874. The *Swift*-BAT survey data were obtained applying the “BAT FOV” option. The data have been processed by the [batsurvey]{} script available through the HEASOFT software package with a snapshot (single pointing) interval. To estimate the background, ten background points around the source within a radius of 50$^{\prime}$ were selected. The source, the ten background points and the bright hard X-ray sources (for cleaning purposes) were included in the input catalog of [batsurvey]{}. The BAT count rate in the 14-195 keV band has been converted into the energy flux assuming a power-law photon index of 1.7 (as determined from the INTEGRAL-ISGRI data; see below). In order to match the HEXTE range, the BAT hard X-ray flux has been rescaled in Fig. \[fig:lc\_all\] to the 20-40 keV band. *RossiXTE*-PCA [@jahoda1996] and HEXTE [@Rothschild1998] data were analyzed following the same procedure described in Vercellone et al. (2010). The data analysis was restricted to the PCU2 in the 3-20 keV energy range for the PCA and to the Cluster B in the 18-50 keV range for the HEXTE. The net exposure times were 27.3 ks for PCA and 7.3 ks for HEXTE. The background-subtracted source spectra obtained with both instruments were simultaneously fit[^1] with an absorbed power-law model, with the photoelectric absorption fixed to 0.134$\times$10$^{22}$ cm$^{-2}$ [@Villata2006]. After the introduction of a 2% systematic error, the best-fit value for the photon index is 1.74 $\pm$ 0.01. The *Swift*-XRT data were processed using the most recent calibration files available. We utilized *Swift* Software version 3.5, FTOOLS version 6.8, and XSPEC version 12.5.1n. We fitted the data with an absorbed power-law model.
We obtained photon indices between 1.51 $\pm$ 0.09 and 1.73 $\pm$ 0.11, and excess absorption between (0.09 $\pm$ 0.06)$\times 10^{22}$ and (0.17 $\pm$ 0.03)$\times 10^{22}$ cm$^{-2}$ (all uncertainties on the XRT spectral fits are at the 90% confidence level). *Swift*-UVOT data from each observation sequence were processed by the standard UVOT tool `uvotsource` using the same version of the Swift software as for the XRT analysis. An extraction region of radius 5 arcsec centered on the source and a suitable background region were used. Magnitudes are based on the UVOT photometric system [@Poole08]. The optical photometry of the Kanata Observatory data was performed using TRISPEC [@watanabe2005]. The observations were pipeline-reduced, including bias removal and flat-field corrections. We derived the $V$-band magnitude from differential photometry with a nearby reference star, USNOB 1061-0614254 [see @gonzales2001]. Photometric stability of this star has been confirmed by our simultaneous photometry of another nearby star, USNOB 1061-0614207. All REM raw optical and NIR frames, obtained with ROSS [@tosti2004] and REMIR [@conconi2004] respectively, as well as images from GRT, were corrected for dark, bias, and flat field following standard recipes. Instrumental magnitudes were obtained via aperture photometry, and absolute calibration has been performed by means of secondary standard stars in the field [@raiteri1998]. Although it is not shown in Figure 1, the Metsähovi radio data at 37 GHz show a high flux with an increasing trend from 2009 December 1 until 2010 January 14, and a mean flux of $\sim$ 20 Jy during the first week of December, 2009. The mean 230 GHz flux is $\sim$ 25 Jy. These radio fluxes[^2] are comparable to the peak flux measured in 2006 [@villata2007]. All UV/optical/NIR data presented here were corrected for the Galactic extinction toward 3C 454.3 assuming $A_{V}=0.349$ [@schlegel1998].
Results
=======

The multifrequency light curves of 3C 454.3 are reported in Figure \[fig:lc\_all\]. The exceptional gamma-ray flaring activity is produced during an extended period lasting several weeks. The optical data show variability timescales as short as one day or less. Due to the typical interval between optical observations (1 day), we did not show in Fig. \[fig:lc\_all\] a detailed representation of the variability. The optical V data indicate a low state of the source (V $>$ 15 mag, flux $<$ 3.9 mJy) until MJD 55160, after which the flux started to increase. The optical flux increased by about 50% in less than one day from MJD 55166.4 until MJD 55167.4, reaching the value $V=13.7$ mag (i.e., 12.8 mJy), followed by a fast flux decrease to $V=14.3$ mag (i.e., 7.4 mJy) at MJD 55169.0. Another optical peak was reached at MJD 55172.5 (V=13.7 mag), and was then followed by a minimum at MJD 55175.0, with V=14.4 mag (i.e., 6.7 mJy). In general, the X-ray flux follows[^3] the rising part of the optical emission in the interval MJD 55150-55169. Starting on MJD 55169, the X-ray flux started to fade, with the optical emission remaining in a relatively high state for 4-5 days. Our INTEGRAL hard X-ray data sample the fading phase of the high-activity period. We focused on the time-dependent spectral analysis of this exceptional activity of 3C 454.3, and used simultaneous broad-band data to obtain a detailed account of the source variability. We obtained the SED for four periods. The first period (interval-1) is for a 5-day integration of GRID data, centered on 2009-11-27 09:36 UT (MJD 55162.4), and using the *Swift* observation at MJD 55162.9, and quasi-simultaneous GASP optical data (from KVA and New Mexico Skies) obtained within 16 hours from the *Swift* observation (pre-flare SED).
The second period (interval-2) was obtained for the gamma-ray super-flare episode integrating the GRID data for 1 day centered at 2009-12-02 16:48 UT (MJD 55167.7), and using the simultaneous *Swift* and GRT observations at MJD 55167.0 (first flare SED). The Kanata observatory measured a flux 40% higher in the V band 10 hours after the GRT observation. The third SED (interval-3) was obtained integrating GRID data for 2 days centered at 2009-12-06 16:48 UT (MJD 55172.7), to match the local maximum of the optical light curve with V=13.7 mag, which is apparently coincident with the secondary gamma-ray maximum near MJD 55174. For this interval we used the INTEGRAL-ISGRI data collected between MJD 55171.7 and MJD 55174.2, *Swift* data at MJD 55173.9, RossiXTE data at MJD 55173.4, and GASP optical data (from Lulin and GRT) obtained within 26 hours from the *Swift* data. The last SED (interval-4) was obtained integrating GRID data for 5.5 days centered at 2009-12-15 18:00 UT (MJD 55180.8), and making use of the *Swift* observation at MJD 55179.1, the RossiXTE observation at MJD 55179.2, the GASP optical data (from St. Petersburg and Lulin) simultaneous with *Swift* within 12 hours, and the near-infrared observations from REM at MJD 55181.0 (post-flare SED).\
Radio data for the SEDs were taken by the GASP, using observations from Mauna Kea (SMA, 230 GHz), Noto (43 GHz), Metsähovi (37 GHz), and UMRAO (4.8, 8.0, 14.5 GHz), simultaneous within a few days with the XRT observations. Due to the slow variability of the radio data, we use interpolated radio values in the SEDs. The results are shown in Fig. \[fig:sed\].

Discussion
==========

The multifrequency data of the extensive campaign on 3C 454.3 show a remarkable behavior of the source. Starting from MJD 55150, first an overall rise of the gamma-ray emission and then of the X-ray and optical fluxes is detected. This rise culminates with peak optical/X-ray/gamma-ray emission during a 10-day period centered around MJD 55173.
Subsequently, the overall flux decreased and again reached a relative-minimum state around MJD 55200. During the 2-month period the optical and X-ray fluxes vary within a factor of 3, whereas the gamma-ray flux grows by a factor of 5-10 compared to the pre-flare value. During the rapid super-flare around MJD 55167.7 the gamma-ray flux doubles within 1 day, with the optical and average X-ray fluxes increasing by 50% and 30%, respectively. We find an overall correlation at all wavelengths for both long and short timescales. However, the unusual gamma-ray flaring and super-flaring activity from 3C 454.3 during the period November-December, 2009 is not accompanied by strong emission of similar intensity in the optical or even in the soft X-ray bands. This flaring behavior appears to be quite different from other episodes detected in 2007 and 2008 [e.g. @vercellone2009a; @donnarumma2009a]. The synchrotron emission appears to be quite broad and centered around $\nu \sim 10^{13}$ Hz.

| Interval | Model | Component | B (G) | R (cm) | K (cm$^{-3}$) | $\gamma_b$ | $\gamma_{min}$ | $\zeta_1$ | $\zeta_2$ | Comments |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 (pre-flare) | two-comp. | component-1$^*$ | 0.6 | $7\times10^{16}$ | 2.2 | 800 | 35 | 2.35 | 4.5 | broken PL |
| | | component-2 | – | – | – | – | – | – | – | – |
| 2 (super-flare) | two-comp. | component-1 | 0.6 | $7\times10^{16}$ | 2.2 | 800 | 35 | 2.35 | 4.5 | broken PL |
| | | component-2 | 0.9 | $3\times10^{16}$ | 180 | 180 | 1 | – | – | relativistic Maxwellian |
| | one-zone | | 0.5 | $7\times10^{16}$ | 2.5 | 1000 | 35 | 2.35 | 4.5 | broken PL |
| 3 (secondary-flare) | two-comp. | component-1 | 0.6 | $7\times10^{16}$ | 2.5 | 800 | 45 | 2.25 | 4.5 | broken PL |
| | | component-2 | 0.9 | $3\times10^{16}$ | 170 | 170 | 1 | – | – | relativistic Maxwellian |
| 4 (post-flare) | two-comp. | component-1$^*$ | 0.6 | $7\times10^{16}$ | 2.5 | 800 | 45 | 2.25 | 4.5 | broken PL |
| | | component-2 | – | – | – | – | – | – | – | – |

$(^*)$ For the pre- and post-flare intervals this set of parameters also describes the simple one-zone model.

Striani et al. (2010) provided a first report of the AGILE-GRID data. A single power-law approximation gives a photon spectral index of 1.66 $\pm$ 0.32 in the energy band 0.1-1 GeV, integrating the data for two days centered at MJD 55167.7. We confirm this result in the analysis presented here integrating gamma-ray data for 1 day, as shown in our Fig. 2 (left panel). In addition, we found a similar spectral shape of the gamma-ray emission during the secondary maximum of interval-3. Our Fig. 2 (right panel) shows the interval-3 SED as compared with the post-flare SED of interval-4. The simultaneous observations of 3C 454.3 by AGILE, INTEGRAL, RossiXTE and *Swift* strongly constrain the emission models in the high-energy range. In particular, our interval-3 spectrum (blue solid squares of Fig. 2, right panel) represents one of the best-constrained multifrequency spectra ever obtained for a flaring blazar from X-ray up to GeV energies. We present in Fig. 2 the results of our spectral modelling based on synchrotron self-Compton (SSC) plus external Compton (EC) emission. We used parameters similar to those already implemented to model previous gamma-ray flares of 3C 454.3 [@vercellone2009a; @donnarumma2009a].
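For reference, the "broken PL" entries of Table 1 denote a smoothly broken power-law electron energy distribution with index $\zeta_1$ below the break $\gamma_b$ and $\zeta_2$ above it. The exact normalization convention is not spelled out in this text; a form commonly adopted for this kind of modelling is

$$n_e(\gamma)\;=\;\frac{K\,\gamma_b^{-1}}{(\gamma/\gamma_b)^{\zeta_1}+(\gamma/\gamma_b)^{\zeta_2}},\qquad \gamma \ge \gamma_{\rm min},$$

while the "relativistic Maxwellian" of component-2 instead peaks near its characteristic energy and cuts off quasi-exponentially above it, describing a heated but not yet shock-accelerated population.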
We find that the pre- and post-flare spectra (interval-1 and interval-4) are adequately represented by a simple one-zone SSC model plus EC in which the accretion disk and the broad-line region provide the necessary soft radiation field for the inverse Compton components that dominate the X-ray through the GeV energies. We assume a long-term rise and fall of the mass accretion rate onto the central black hole. This enhanced accretion causes an overall increase of the synchrotron emission and of the soft photon background scattered off by the primary component of accelerated electrons (component-1). An additional population of accelerated leptons (component-2, co-existing with component-1) is introduced for the super-flare and secondary flare episodes. This component is a consequence of additional particle acceleration and/or plasmoid ejection near the jet base. Table 1 reports the parameters that we used to model the Fig. 2 spectra (the two-component models are reported as solid lines for the super-flare and for the secondary flare). We assumed a bulk Lorentz factor $\Gamma = 25$, a jet angle with respect to the line of sight $\theta=1.2^{\circ}$, and an accretion disk of bolometric luminosity $L_d=6 \times 10^{46}$ erg s$^{-1}$ slowly decaying toward $L_d= 5 \times 10^{46}$ erg s$^{-1}$. A broad-line region located 0.5 pc from the black hole reflects 5% of the disk power toward the emitting regions. The component-2 energy distribution that best reproduces our gamma-ray spectral data is a relativistic Maxwellian of characteristic energy $\gamma_b \simeq 180$. Interestingly, this component appears to be strongly energized but not yet modified by additional non-thermal acceleration. To summarize, our multifrequency data for the December, 2009 flare of 3C 454.3 provide a wealth of very important information on this puzzling and fascinating blazar.
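As a quick consistency check of the beaming parameters quoted above ($\Gamma=25$, $\theta=1.2^{\circ}$; the Doppler factor itself is not quoted in the text and is derived here only for illustration):

$$\beta=\sqrt{1-\Gamma^{-2}}\simeq 0.9992,\qquad
\delta=\frac{1}{\Gamma\,(1-\beta\cos\theta)}\simeq\frac{1}{25\,(1-0.9992\times 0.99978)}\simeq 39,$$

so observed frequencies are blue-shifted by a factor $\simeq\delta$ and, for a discrete moving blob, the apparent luminosity is boosted by $\sim\delta^{4}$, which is why modest intrinsic changes in the emitting components can translate into the large observed gamma-ray swings.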
We find that 3C 454.3 is characterized by strong broad-band spectral variability, and that the modelling of the peak gamma-ray emission episodes requires an additional emitting particle component beyond the simple one-zone model.

The AGILE Mission is funded by the Italian Space Agency (ASI) with scientific and programmatic participation by the Italian Institute of Astrophysics (INAF), and the Italian Institute of Nuclear Physics (INFN). This investigation was carried out with partial support from the ASI contract n. I/089/06/2. V. Larionov acknowledges support from the Russian RFBR foundation via grant 09-02-00092. The operation of UMRAO is made possible by funding from the NSF, NASA, and the University of Michigan. The Submillimeter Array is funded by the Smithsonian Institution and the Academia Sinica Institute of Astronomy and Astrophysics. The GASP president acknowledges the ASI support through contract ASI-INAF I/088/06/0.

[10]{}
Abdo, A. A., 2009, ApJ, 699, 817-823
Atwood et al., 2009, ApJ, 697, 1071-1102
Bonnoli, G., et al., 2010, MNRAS submitted, arXiv:1003.3476
Bottcher, M., 2007, Ap&SS, 309, 95-104
Conconi, P., 2004, SPIE, 5492, 1602-1612
Donnarumma, I., et al., 2009, ApJ, 707, 1115-1123
Gonzalez-Perez et al., 2001, AJ, 122, 2055
Jahoda, K., et al., 1996, SPIE, 2808, 59-70
Maraschi, L., Ghisellini, G., and Celotti, A., 1992, ApJ, 397, L5
Marscher, A. P., and Bloom, S. D., 1992, Proceedings of The Compton Observatory Science Workshop, 346
Mattox, J. R., Bertsch, D. L., Chiang, J., et al., 1996, ApJ, 461, 396
Mattox, J. R., Wagner, S. J., Malkan, M., et al., 1997, ApJ, 476, 692
Mucke, A., Protheroe, R. J., 2001, Astropart. Phys., 15, 121-136
Mucke, A., et al., 2003, Astropart. Phys., 18, 593-613
Urry, C. M., and Padovani, P., 1995, PASP, 107, 803
Pian, E., et al., 2006, A&A, 449, L21-L25
Poole, T. S., Breeveld, A. A., Page, M. J., et al., 2008, MNRAS, 383, 627
Raiteri, C. M., et al., 1998, A&A, 130, 495
Raiteri, C. M., et al., 2008, A&A, 491, 755-766
Rothschild, R. E., et al., 1998, ApJ, 496, 538
Schlegel, Finkbeiner, & Davis, 1998, ApJ, 500, 525
Sikora, M., Begelman, M. C., and Rees, M., 1994, ApJ, 421, 153
Striani, E., et al., 2009, ATel 2322; ATel 2326
Striani, E., et al., 2010, ApJ, submitted
Tavani, M., Barbiellini, G., Argan, A., et al., 2009, A&A, 502, 995-1013
Tosti, G., et al., 2004, SPIE, 5492, 689-700
Ubertini, P., et al., 2003, A&A, 411, L131-L139
Vercellone, S., et al., 2008, ApJ, 676, L13-L16
Vercellone, S., et al., 2009, ApJ, 690, 1018-1030
Vercellone, S., et al., 2009, ATel 2344
Vercellone, S., et al., 2010, ApJ, 712, 405
Villata, M., et al., 2006, A&A, 453, 817-822
Villata, M., et al., 2007, A&A, 464, L5-L9
Villata et al., 2008, A&A, 481, L79-L82
Villata et al., 2009, A&A, 504, L9-L12
Watanabe, M., et al., 2005, PASP, 117, 870-884
Zerbi, R. M., 2001, AN, 322, 275-285.

[^1]: The resulting fit is not statistically acceptable ($\chi ^2_{\rm red}$=1.53/54 d.o.f.); however, the addition of a 2% systematic error to the data, which is well within the expected uncertainties in the spectral calibration, is sufficient to make the fit fully acceptable ($\chi^2_{\rm red}$=1.02/54 d.o.f.).

[^2]: The long-term radio and optical light curves of 3C 454.3 during the 2009-2010 observing season will be presented in a forthcoming paper (Raiteri et al., in preparation).

[^3]: We note that no X-ray data were obtained in exact correspondence with the gamma-ray super-flare of Dec. 2-3, 2009. | Mid | [
0.6435897435897431,
31.375,
17.375
] |
Shit it is three... my bad. I know why I forgot, it's been so long, and I want to say it's been 6 or 7 years, but I'm not even sure... now I'm just waiting for Tri to be finished so I can binge watch it all at once. Anyway, the US version wasn't bad, I still think it's pretty great, and Kick It Up is still a great song (and that entire scene was greatly improved with that song included), but I was just using the mashed togetherness of the movie as an example of how Digimon was treated by its US distributor. Although, Digimon had it better than Sailor Moon did in terms of fucky edits. Lesbians turned into cousins because lesbians were offensive to western culture while incest... wasn't... and season 5 never aired here because of the whole "boys turn into girls" thing. So it could have been worse. I mean how many people remember Monster Rancher?

Voted in college to be Most likely to Take Over the World; how to do that, however, will require at least Four Evangelions. Thanks for the idea Misato-san! "Now I am become Death, the destroyer of worlds." Said at the beginning of the nuclear age by J. Robert Oppenheimer. "That which does not kill us makes us stronger." Words of Wisdom from German Philosopher Friedrich Nietzsche.

DarkBluePhoenix wrote: I mean how many people remember Monster Rancher?

I do. Still got the ADV Films tapes from back in the day. I also remember watching most of the first season on Fox Family as a weekend-long marathon. The big plot twist being that Moo, the big bad, turned out to be Holly's father *Hello Empire Strikes Back ripoff*.

Digimon Adventure Tri Movie 5
This being shown on the big screen is just baffling to me; the poor level of animation would barely be acceptable by TV-series standards. There is no theatrical quality in any aspect of its presentation. This is embarrassing for an anniversary project, compared to what Gundam and Pokemon are offering their fans in visual fidelity.
(What can I say, I’m a bling bling bitch, I like my entertainment to be beautiful.) But there is a silver lining: after the borderline incompetent writing of movie four, this time at least the writing is up to Digimon standards again. I do really love following those characters, and overall this resurrection doesn’t rape my childhood - which is something. It still looks terrible, but man am I happy that at least the writing is competent again. 7/10

robersora wrote: Digimon Adventure Tri Movie 5
This being shown on the big screen is just baffling to me, the poor level animation would barely be acceptable for TV-series standards. There is no theatrical quality in any aspect of its presentation. This is embarrassing for an anniversary project, compared to what Gundam and Pokemon are offering their fans in visual fidelity.

Toei is doing this with all of their "nostalgia" works, Dragon Ball Super isn't faring much better, although they have managed to make Goku's new transformation fight look at least decent.

But there is a silver lining, after the borderline incompetent writing of movie four, this time at least the writing is up to Digimon standards again.

I don't really agree. These episodes offered a more satisfactory answer for concluding Tai's crisis of Courage, but the jokes were consistently lame, and it otherwise had the same ratio of flaws. Highlights for me are still Joe's crisis of Sincerity in the 2nd episode set, T.K.'s crisis of Hope in losing Patamon in the 3rd, and Maki Himekawa's none-too-subtle contrast with Sora in the 4th with respect to their mutual crisis of Love. In this one, it's basically implied that Himekawa: SPOILER: Show ...will be raped. Because those same creatures that appeared to her threatened to make Kari the bride of their king, and Himekawa is too "damaged" to resist. Further, with Kari and Meiko's partners "merging", we're being fed a connection between Kari and Meiko that wasn't really established in the earlier episodes, or really this one.
Mei and Sora, or hell, Mei and T.K. have a more firmly established relationship than these two. I also have to balk at Mei being called "Digidestined; just like us!" She's digidestined, sure, but where's her crest, hmmm? Meicoomon only gets digivolutions due to plot hax of having a shard of the shikon jew--, I mean horcru--- I mean apocalymon. She's basically on the same level as Willis.

"It's all fun and games till one of you gets my foot up your ass." - FofR, TrivialBeing.net Webmaster

Today I went back to my subbed Tri viewing and saw episode 3 of part 3. It was an almost exclusively Digimon-partner episode, and I loved every bit of not seeing the humans for a little while. Hikari turning into Homeostasis for the first time, hearing her calm, serene voice out of M.A.O., was special. The whole reboot plan that Himekawa is cooking up seems to be like what John Hammond from Jurassic Park said about shutting down Isla Nublar's system, a calculated risk. So it's like damned if you do, damned if you don't. | Low | [
0.46501128668171504,
25.75,
29.625
] |
Toxicity of graphene oxide to white rot fungus Phanerochaete chrysosporium. With the wide production and applications of graphene and its derivatives, their toxicity to the environment has recently received much attention. In this study, we investigated the toxicity of graphene oxide (GO) to the white rot fungus (Phanerochaete chrysosporium). GO was prepared by a modified Hummers method and well characterized before use. P. chrysosporium was exposed to GO at concentrations of 0-4 mg/mL for 7 d. The fresh and dry weights, pH values of culture media, structures, ultrastructures, IR spectra and activities of the decomposition of pollutants were measured to reveal the hazards of GO to P. chrysosporium. Our results indicated that low concentrations of GO stimulated the growth of P. chrysosporium. The exposure to GO induced more acidic pH values of the culture media after 7 d. GO induced the disruption of the fiber structure of P. chrysosporium, while at 4 mg/mL some very long and thick fibers were formed. Such changes were reflected in the scanning electron microscopy investigations, where the disruption of fibers was observed. In the ultrastructural investigations, the shape of P. chrysosporium cells changed and more vesicles were found upon the exposure to GO. The infrared spectroscopy analyses suggested that the chemical compositions of mycelia were not changed qualitatively. Despite this toxicity, GO did not alter the activities of P. chrysosporium at low concentrations, but led to the complete loss of activity at high concentrations. The implications for the ecological safety of graphene are discussed. | Mid | [
0.6373626373626371,
36.25,
20.625
] |
#include <vector> #include <complex> using namespace std; #define MAXN 200009 /* * FFT */ struct base { double x, y; base() : x(0), y(0) {} base(double a, double b=0) : x(a), y(b) {} base operator/=(double k) { x/=k; y/=k; return (*this); } base operator*(base a) const { return base(x*a.x - y*a.y, x*a.y + y*a.x); } base operator*=(base a) { double tx = x*a.x - y*a.y; double ty = x*a.y + y*a.x; x = tx; y = ty; return (*this); } base operator+(base a) const { return base(x+a.x, y+a.y); } base operator-(base a) const { return base(x-a.x, y-a.y); } double real() { return x; } double imag() { return y; } }; void fft (vector<base> & a, bool invert) { int n = (int)a.size(); for (int i=1, j=0; i<n; ++i) { int bit = n >> 1; for (; j>=bit; bit>>=1) j -= bit; j += bit; if (i < j) swap(a[i], a[j]); } for (int len=2; len<=n; len<<=1) { double ang = 2*M_PI/len * (invert ? -1 : 1); base wlen(cos(ang), sin(ang)); for (int i=0; i<n; i+=len) { base w(1); for (int j=0; j<len/2; ++j) { base u = a[i+j], v = a[i+j+len/2] * w; a[i+j] = u + v; a[i+j+len/2] = u - v; w *= wlen; } } } if (invert) for (int i=0; i<n; ++i) a[i] /= n; } void convolution(vector<base> a, vector<base> b, vector<base> & res) { int n = 1; while(n < max(a.size(), b.size())) n <<= 1; n <<= 1; a.resize(n), b.resize(n); fft(a, false); fft(b, false); res.resize(n); for(int i=0; i<n; ++i) res[i] = a[i]*b[i]; fft(res, true); } template <typename T> void circularconvolution(vector<T> a, vector<T> b, vector<T> & res) { int n = a.size(); b.insert(b.end(), b.begin(), b.end()); convolution(a, b, res); res = vector<T>(res.begin()+n, res.begin()+(2*n)); } /* * String matching with FFT */ #include <cstring> #include <algorithm> #define ALFA 5 int reduce(char c) { if (c >= 'a' && c <= 'z') return c-'a'; if (c >= 'A' && c <= 'Z') return c-'A'+'z'; return -1; } void matchfft(const char *T, const char *P, int *match) { int n = strlen(T), m = strlen(P); memset(match, 0, n*sizeof(int)); if (m > n) return; vector<base> Sa[ALFA], Sb[ALFA], 
M[ALFA]; for(int c = 0; c<ALFA; c++) { Sa[c].resize(n); Sa[c].assign(n, 0); Sb[c].resize(n); Sb[c].assign(n, 0); } for(int i=0; i<n; i++) Sa[reduce(T[i])][i] = 1; for(int i=0; i<m; i++) Sb[reduce(P[i])][i] = 1; for(int c = 0; c<ALFA; c++) { reverse(Sb[c].begin(), Sb[c].end()); circularconvolution(Sa[c], Sb[c], M[c]); } for(int i=0; i<n; i++) { for(int c=0; c<ALFA; c++) { match[(i+1)%n] += int(M[c][i].real() + 0.5); } } } /* * SPOJ MAXMATCH */ #include <cstdio> int match[MAXN], K; char T[MAXN], P[MAXN]; int main() { while(scanf(" %s", T) != EOF) { int n = strlen(T); strcpy(P, T); for(int i=n; i<2*n; i++) T[i] = 'd'; for(int i=n; i<2*n; i++) P[i] = 'e'; T[2*n] = P[2*n] = 0; matchfft(P, T, match); int ans = 0; for(int i=1; i<n; i++) { if (ans < match[i]) ans = match[i]; } printf("%d\n", ans); for(int i=1; i<n; i++) { if (ans == match[i]) printf("%d ", i); } printf("\n"); } return 0; } | Mid | [
0.6023529411764701,
32,
21.125
] |
Two former Navy Midshipmen football players have plenty of questions to answer, as they will face court-martial proceedings stemming from an alleged sexual assault incident, according to the New York Times. Both Eric Graham and Joshua Tate will have their cases heard by court-martial, while it was decided that Tra'ves Bush would not. In a statement, the U.S. Naval Academy said it decided to refer Graham and Tate to court-martial in the interest of a fair investigation for the players as well as the alleged victim:

We are committed to a thorough and fair conduct system and investigative process, and the Naval Academy will meet the highest standards, operate consistent with the law, and expeditiously investigate every report of unwanted sexual contact, sexual harassment and sexual assault.

The alleged sexual assault incident reportedly took place at an off-campus house in April 2012 during a "Toga and Yoga" party, which featured heavy alcohol consumption. While the case was initially closed in November 2012 due to the alleged victim not cooperating and not wishing to press charges, it was reopened in January when the alleged victim agreed to testify. The alleged victim reportedly arrived at the party intoxicated and doesn't remember having sex on the night of the incident, but she was told by others and via social media what had happened. Graham, who was a little-used defensive back for Navy, has been charged with abusive sexual contact. Tate is a former Navy linebacker who saw some in-game action in 2012, and he is charged with aggravated sexual assault and making false official statements. There is a great deal of controversy surrounding the case. Susan Burke, who is the alleged victim's attorney, has said that the case was swept under the rug by the U.S. Naval Academy to save face, although the academy denies that claim.
There is also plenty of scrutiny around the United States Uniform Code of Military Justice's initial Article 32 hearing, which determines if there is enough evidence for a court-martial. According to the Times report: During the Article 32 hearing, the woman was aggressively grilled about her sexual habits, generating some public scrutiny of the Article 32 proceedings... Article 32 hearings permit questions not allowed in civilian courts and can include cross-examinations of witnesses so intense that legal experts say they frighten many victims from coming forward. With the government focused on stamping out military sexual assault coupled with this particular case's link to college football, there is no question that it will be a high-profile situation moving forward. Follow @MikeChiari on Twitter | Mid | [
0.60774818401937,
31.375,
20.25
] |
Upload Images to our Free Image Hosting (img.dhirls.net) via right-click menu. With This Add-On you can Right click on any Images and just with a Simple click upload them in our host and get the links. | Low | [
0.534591194968553,
31.875,
27.75
] |
Setting up a nightly build The GHC buildbot builds GHC on various platforms in various different ways each night, runs the test suite and performance benchmarks, and mails the results to the [email protected] mailing list. We're always keen to add more build slaves to the setup, especially if you have a platform that doesn't already have a build slave, so if you'd like to join the fun, please let us know at cvs-ghc@…. If a platform is represented in the nightly builds, it's more likely we'll be able to identify and fix problems specific to that platform quickly. To see the current status of the builds: To create a new build slave First you, as a buildbot client, need to agree a buildbot username (myUser) and password (myPass) with the buildbot admins (just pick a username and password and send it to [email protected]). You'll also need to decide: when the build(s) should happen HEAD or branch builds full build (up to stage 3, with extra-libs, full testsuite, and 5 nofib runs) or a fast build (stage 2, no extra-libs, fast testsuite, no nofib runs), or something in-between Finally, if there is anything special that needs to be done for the client (e.g. if gcc is in an unusual place) then you'll need to let the admins know. Then you'll need to install buildbot and its dependencies on the machine that will be doing the nightly build; see the BuildBot website for details. NB. if you're on Windows, you'll need to install BuildBot under Cygwin using the Cygwin Python; there are various problems getting the GHC build to work via BuildBot using the native Win32 Python, so we've given up on that route for now. Now create and enter the directory you want the buildbot client to work in $ mkdir /buildbot/ghc $ cd /buildbot/ghc and tell buildbot to set up a slave there $ buildbot create-slave . darcs.haskell.org:9989 myUser myPass This will print a few lines asking you to fill in info/admin and info/host. 
In the latter file, please include information on what operating system and architecture the machine is running. It also created Makefile.sample; we recommend renaming this to Makefile. You can now start the buildbot client with make start and stop it with make stop. You can watch what your slave is doing by looking at the twistd.log file in the directory in which you're running your slave. Automating startup: Unix The easiest way to make the client start up automatically is to use cron. Type crontab -e, and add this line to your crontab file: @reboot cd <buildbotdir> && make start Remember to change <buildbotdir> to your buildbot directory. Cron will run the command in a minimal environment: it won't execute your normal shell startup files, so you won't have your usual PATH settings, for example. To get the right PATH and other environment variables, we suggest adding them to the make start rule in <buildbotdir>/Makefile. FOr example, my start rule looks something like this: It might be a good idea to have the buildbot restarted once a day before your build is due to start, just in case it has died for any reason. I have another line in my crontab that looks like this: 0 17 * * * cd <buildbotdir> && (make stop; make start) To restart the client at 17.00, before the builds start at 18.00. It's a good idea to test that running the client via a cron job actually works, so test it: setup a temporary cron job to start the client in a couple of minutes time, check that the client is up and running, and maybe force a build via the status page to check that the build environment is working. Automating startup: Windows I did it the following way. Create a script in <buildbotdir>/restart.sh: PATH=/bin:/usr/bin cd <buildbotdir> make stop make start (don't forget to create the script as a Unix text file, not a DOS text file, otherwise strange things will probably happen, they did to me anyway). Create a new "Scheduled Task" via Control Panel->Scheduled Tasks. 
The command you want to run is c:\cygwin\bin\sh <buildbotdir>/restart.sh Schedule the task to run (a) at startup and possibly also (b) once a day, before your build is due to start. You can add multiple schedulers for a task by checking the box at the bottom of the "Schedule" page of the scheduled task settings. If there is anything unusual about the machine the build is being run on, e.g. the path to gcc is different, then you will need to add a field for the unusual thing to GhcDefaultConfig and alter the build steps to make use of it. Then make a special factory for the build client you are adding with this field changed as appropriate. | Mid | [
0.552338530066815,
31,
25.125
] |
--- abstract: 'Online controlled experiments, now commonly known as A/B testing, are crucial to causal inference and data driven decision making in many internet based businesses. While a simple comparison between a treatment (the feature under test) and a control (often the current standard), provides a starting point to identify the cause of change in Key Performance Indicator (KPI), it is often insufficient, as the change we wish to detect may be small, and inherent variation contained in data may obscure movements in KPI. To have sufficient power to detect statistically significant changes in KPI, an experiment needs to engage a sufficiently large proportion of traffic to the site, and also last for a sufficiently long duration. This limits the number of candidate variations to be evaluated, and the speed new feature iterations. We introduce more sophisticated experimental designs, specifically the repeated measures design, including the crossover design and related variants, to increase KPI sensitivity with the same traffic size and duration of experiment. In this paper we present FORME (Flexible Online Repeated Measures Experiment), a flexible and scalable framework for these designs. We evaluate the theoretic basis, design considerations, practical guidelines and big data implementation. We compare FORME to an existing methodology called mixed effect model and demonstrate why FORME is more flexible and scalable. We present empirical results based on both simulation and real data. Our method is widely applicable to online experimentation to improve sensitivity in detecting movements in KPI, and increase experimentation capability.' 
author: - | Yu Guo\ \ \ \ Alex Deng\ \ \ \ bibliography: - 'library.bib' date: 20 Aug 2014 nocite: '[@Ma2011; @VanderVaart2000; @Kohavi2014SevenRules; @puzzlingOutcomes; @deng2013cuped; @DengTwoStage; @bakshystatistics; @Bates2012; @Bates2012a; @romano2005testing; @Xu2009]' title: Flexible Online Repeated Measures Experiment --- Introduction ============ Many recent publications attest to the power of using online A/B testing as the golden rule for making causal inference in web facing companies large and small. By random assignment of feature to otherwise balanced groups of users and measuring subsequent changes in user behavior, A/B testing isolates effect of feature change, i.e. the treatment effect from extraneous sources of variance. To perform statistical inference in both point estimation and hypothesis testing for the treatment effect, while controlling type I error at pre-specified level, we would desire lower type II error, or equivalently, higher powered experiments. That is, we wish to be able to detect the effect when there is any. Running under powered experiments have many perils. Not only would we miss potentially beneficial effects, we may also get false confidence about lack of negative effects. Statistical power increases with larger effect size, and smaller variances. Let us look at these aspects in turn. While the actual effect size from a potential new feature may not be known, we generally select a size that makes business sense, i.e. one that justifies the cost of feature development and ongoing maintenance of the code base. Dramatic features that drastically alter user behavior and get reflected in KPI as large effect sizes are few and far in between. Often the candidate feature has but a small effect on the KPI. Nonetheless, by accumulating a portfolio of small changes, a business can achieve big business success. Quote from Rule \#2 of @Kohavi2014SevenRules, winning is done inch by inch. 
This is especially true for mature web facing businesses where most low hanging fruits were picked already. In general one expects variance to decrease with increased sample size. But this is not always true. For online business, at first glance it may seem that the number of visitors may be large, and with a casual look people may think the power to detect any change is large. In reality, however, intrinsic variation between users is large and may obscure the small movement in KPI. Variation in measured treatment effect comes from various sources. Exogenous to the treatment itself includes user to user variation, e.g. some users from slower internet connection would always have slower page load time regardless of what experiments are run. Variance for some metrics does not decrease over time, instead they plateau after some period of time (say, two weeks), and running longer experiments no longer results in corresponding benefits [see @puzzlingOutcomes Section 3.4]. This poses a limitation to any online experimentation platform, where fast iterations and testing many ideas can reap the most rewards. Motivation {#sub:motivation} ---------- To improve sensitivity of measurement, apart from accurate implementation and increase sample size and duration, we can employ statistical methods to reduce variance. Using the user’s pre-experiment behavior as a baseline for his/her post-experiment behavior, we can reduce the variance in measured treatment effect. The experiment setup in a two-week experiment is shown in Table \[diagram\_3\_designs\]. The typical A/B test is illustrated in the first row. In the past we have used regression to reduce variance (CUPED: Controlled Experiments Using Pre-Experiment Data, see @deng2013cuped) and have achieved good results, e.g. reducing variance in number of queries per unique user in a given time period by 40-50%. CUPED has the benefit of having readily available baseline data “for free”. 
This improvement is performed with existing design, using the “free” data as covariates only in the analysis stage. CUPED is in fact a form of repeated measures design, where multiple measures on the same subjects are taken over time. In particular, in the pre-experiment stage, all users received the default feature C (control) and none received the new feature T (treatment). \ In this paper we extend the idea further by employing the repeated measures design in different stages of treatment assignment. The traditional A/B test can be analyzed using the repeated measures analysis, reporting a “per week” treatment effect, as show in row 3 “parallel” design in table \[diagram\_3\_designs\]. The two week experiment can be considered to be conducted in two periods, even though users received the same treatment assignment during both periods. In one of the new designs, the “crossover” design, in contrast, we swap treatment assignment half way through the experiment (row 4 in table \[diagram\_3\_designs\]). Each user will be exposed to both versions of the treatments, instead of only one of the two in the usual A/B testing scenario. In sequence, a user will receive either T followed by C, or C followed by T, with the flight re-assignment happening at the same moment for all users. Instead of randomizing treatments to users, we random treatment sequences (TC or CT) to users. This way each user serves as his/her own control in the measurement. In fact, the crossover design is a type of repeated measures design commonly used in biomedical research to control for within-subject variation. We also discuss practical considerations to repeated measures design, with variants to the crossover design to study the carry over effect, including the “re-randomized” design (row 5 in table \[diagram\_3\_designs\]). Main Contributions {#sub:main_contributions} ------------------ In this paper, we propose a framework called FORME (Flexible Online Repeated Measures Experiment). 
We made contributions in both novel application and new methodology. Novel applications. We propose different experiment designs with repeated measurement. We demonstrate through real examples the value of these new designs comparing to traditional A/B test. Methods for model assumption checking is also presented. We also compare different designs for practical use and propose a general workflow for practitioners. New Methodology. We review standard repeated measures models in the framework of mixed effect models. We present a new method to fit the model that is scalable to big data. Our method is flexible in the sense that it makes far less assumptions than traditional method based on mixed effect model [@Bates2012]. It naturally handles missing data without missing at random assumption (common in online experimentation) and still provides unbiased average treatment effect estimation when mixed effect model fails. FORME can fit different types of repeated measures models under the same framework. It also can be applied to metrics beyond those defined as a simple average, such as metrics defined as a function of other metrics. Illustration of FORME {#sec:Background} ===================== In this sections we will take a close look at several designs, with a treatment and a control, and with experiments carried out over several periods. Many common online metrics display different patterns between weekdays and weekends. Therefore experiments at Bing and many large IT companies, in general are run for at least a full week to account for the difference between weekdays and weekends. In the following section we assume the minimum experimentation “period” to be one full week, and may extend to up to two weeks. To facilitate our illustration, in all the derivation in this section we assume all users appear in all periods, i.e. no missing measurement. 
We also restrict ourselves to metrics that are defined as simple average and assume treatment and control have the same sample size. We further assume treatment effects for each subjects are fixed. We emphasis this is just for illustration purpose and our method does not rely on these assumptions and we describe how we handle missing data and more complicated metrics in Section \[sec:missing\_values\]. Impatient reader who are familiar with repeated measures analysis might jump over to Section \[sec:theory\] to see details of FORME’s model assumptions and comparison to linear mixed effect model. Denote the metric value mean in the treatment group as $\mu_T$, and that in control as $\mu_C$. We are interested in the average treatment effect (ATE) $\delta = \mu_T-\mu_C$ which is a fixed effects in the model in this section. This way, various designs considered can be examined in the same framework and easily compared. We will proceed to show, with theoretical derivations, that given the same total traffic Variance using CUPED $\le$ T-Test With CUPED: Variance in parallel design $\le$ Cumulative Design Variance in Crossover design $\le$ Parallel Design Denote observed sample values in the treatment groups and time periods as $\vec{X}$, and their means $\vec{\beta}$. Note that $\vec{X}$ is a vector of metric values $\xbar_i$ for different time periods indexed by $i$, and the treatment effect $\delta$ can be formulated as a function of $\vec{\beta}$ depending on model specification. 
Under the central limit theorem (CLT), with sufficiently large samples $\vec{X}$ is asymptotically normal $$\vec{X} \sim N(\vec{\beta}, \Sigma).$$ The likelihood of $\vec{\beta}$ given observed data is then $$L = \frac{1}{\sqrt{2\pi} |\Sigma|^{\frac{1}{2}}}\exp{\big(\frac{1}{2}(\vec{X}-\vec{\beta})^T\Sigma^{-1}(\vec{X}-\vec{\beta})\big)}$$ To get maximum likelihood estimates (MLE) of $\vec{\beta}$, denoted by $\hat{\beta}$, we seek to minimize -2$\log(\text{Likelihood})$ $$l = -\frac{1}{2}(\vec{X}-\vec{\beta})^T\Sigma^{-1}(\vec{X}-\vec{\beta}) + const$$ Solving ${\frac{\partial l}{\partial \vec{\beta}} }=0$ gives MLE of $\vec{\beta}$. And its variance-covariance matrix is $$Var(\hat{\beta})=\left[{\frac{\partial^2 l}{\partial \vec{\beta} \partial \vec{\beta}^T} }\right]^{-1} = 1/I(\vec{\beta})$$ where Fisher Information $$I(\vec{\beta})= - E\Big[\big(\frac{\partial}{\partial \vec{\beta}} log f(X|\vec{\beta}) \big)^T log f(X|\vec{\beta}) \Big| \vec{\beta} \Big].$$ In the following sections we will explicitly model the mean $\vec{\beta}$ as a function of other parameters $\vec{\beta}(\vec{\lambda})$, one of the components is treatment effect $\delta$, and study expected variance of the MLEs of $\vec{\lambda}$. In fact this is simply: $$\begin{aligned} Var(\hat{\lambda}) & = I(\vec{\lambda})^{-1} \\ &= \big [ \big( \frac{\partial \beta}{\partial \lambda}\big)^T \Sigma ^{-1} E \big [ (X-\beta) (X-\beta) ^ T \big] \Sigma ^{-1} \frac{\partial \beta}{\partial \lambda} \big ] ^{-1}\\ & = \big [ \big( \frac{\partial \beta}{\partial \lambda}\big)^T \Sigma ^{-1} \frac{\partial \beta}{\partial \lambda} \big ] ^{-1}\end{aligned}$$ Coefficient of variation (CV) defined as the mean over standard deviation of a metric, determines the sensitivity or the power of the experiment, given the same sample size. 
To study sensitivity or power of various experimental designs, once we have established that effect size remains relatively stable across different measuring periods, we can then focus on variation of estimated effect size solely. Specifically, the diagonal cell in $Var(\hat{\lambda})$ corresponding to treatment effect $\hat{\delta}$ gives its variance, and is our main focus in the following of this section. Analysis from randomized two-group experiments employs the two sample t-test under the usual A/B testing scenario. As a gentle introduction we will first look at the t-test using this notation. Two Sample T-test {#sub:two_sample_t_test} ----------------- Let $\xbar$ denote the observed average metric value in control group and $\ybar$ denote that in the treatment group. Since users are randomly assigned into either treatment or control group, $\xbar$ and $\ybar$ are thus independent. For simplicity of notation, we assumed variance in the two group to be equal. Given large enough sample size, under CLT, and plug in observed sample variances, we have: $$\left[ \begin{smallmatrix}{}C\\-\\T \end{smallmatrix} \right] : \left[\begin{smallmatrix}{}\xbar\\\ybar\end{smallmatrix}\right] \sim N \Big( \left[\begin{smallmatrix}{}\mu\\\delta + \mu\end{smallmatrix}\right] , \left[\begin{smallmatrix}{}s_{X}^{2} & 0\\0 & s_{Y}^{2}\end{smallmatrix}\right] \Big)$$ where $\mu$ is mean metric value in the control group, $\delta$ is the treatment effect compared to the control group, and $s_{X}^2$ and $s_Y^2$ are variances of $\xbar$ and $\ybar$ respectively. 
Here $\vec{\lambda}= (\mu, \delta)$, and the $-2log\text{Likelihood}$, denoted by $l$, of parameter vector $\vec{\beta}(\vec{\lambda})=(\mu, \delta)^T$ given observed data is then $$l = \left[\begin{smallmatrix}{}X - \mu\\Y - \delta - \mu\end{smallmatrix}\right] ^T \left[\begin{smallmatrix}{}s_{X}^{2} & 0\\0 & s_{Y}^{2}\end{smallmatrix}\right] ^{-1} \left[\begin{smallmatrix}{}X - \mu\\Y - \delta - \mu\end{smallmatrix}\right] + const$$ Solving for MLE of $\delta$ and obtain its variance as $$\label{ttest} Var(\hat{\delta})|_{TTest} =s_X^2+s_Y^2$$ It is simply the sum of variances from the treatment and control groups, which is the asymptotic variance of $\xbar-\ybar$. Use Pre-experiment Data for Variance Reduction {#sub:use_pre_experiment_value_for_variance_reduction} ---------------------------------------------- At the analysis level, different models seek to explain the amount of variation in observed data, which may come from intrinsic, within-user difference, as well as variation introduced by differential treatment. For example, users that connect through broadband tend to have faster page load time than people using dial-up connection. This difference exists regardless of which treatment conditions the users are exposed to, and is thus irrelevant when measuring difference introduced by different treatments. As a result, the measurements on the same users over time tend be positively correlated. CUPED and previous work has established that by including covariates that are unrelated to the treatment, we can improve sensitivity and reduce variance of estimated treatment effect. Specifically, the users’ pre-experiment behaviors servers as a good baseline for their behavior during the experiment. By including pre-experiment data as a covariate in the regression model for treatment effect, we can reduce the variance of the estimated treatment effect. 
Denote the pre-experiment average metric value to be $\xbar_0$ and $\ybar_0$ for the later control and treatment groups respectively. By CLT $$\left[\begin{smallmatrix}{}C\\C\\-\\C\\T\end{smallmatrix}\right]: \left[\begin{smallmatrix}{}\xbar_{0}\\ \xbar_{1}\\ \ybar_{0}\\ \ybar_{1}\end{smallmatrix}\right] \sim N \Big( \left[\begin{smallmatrix}{}\mu\\\mu + \theta\\\mu\\\delta + \mu + \theta\end{smallmatrix}\right] , \left[\begin{smallmatrix}{}\Sigma & 0 \\0 & \Sigma \end{smallmatrix}\right] \Big), \Sigma = \left[\begin{smallmatrix}{}s_{0}^{2} & \rho s_{0} s_{1}\\\rho s_{0} s_{1} & s_{1}^{2}\end{smallmatrix}\right]$$ where $\theta$ is the difference between the pre-experiment and experiment periods, i.e. the longitudinal effect, and $\rho$ is the correlation between the two periods. Here $\vec{\lambda} = (\mu, \delta, \theta)$. We assume correlation $\rho$ in both treatment and control groups to be the same for simplicity. Results do not dependent on this assumption. Even though the two treatment groups are still independent, metric value measured on the same group of users across different time periods are in general correlated. As we will later see, this correlation effectively reduces variances on $\hat{\delta}$. Similarly we can solve for MLEs from solving partial derivative of $l$ = 0 and derive variances for these estimates. $$\label{2week_aa_ab} Var(\hat{\delta})|_{CUPED}= 2 s_{1}^{2} \left(1- \rho^{2} \right)$$ It’s easy to see has smaller variance of $\hat{\delta}$ than by amount of $ 2 \rho^{2} s_{1}^{2}$. As users’ behavior is usually consistent across time, i.e. with non-zero correlation $\rho$ among different time periods, this amount is positive. The amount of variance reduced is $ \rho^{2} $ that of the original variance. Cumulative vs. Parallel Design {#sub:cumulative_vs_parallel_design} ------------------------------ Note that in the previous design we make no assumption on the duration of the pre-experiment and experiment periods. 
Empirical studies in [@deng2013cuped] have shown that using one-week pre-experiment data provides similar amount of variance reduction as using even longer durations. For simplicity, in practice we recommend using one-week such data. And we have mentioned that to capture the difference between weekday and weekends, we recommend running experiments for whole weeks, typically 14 days. Assuming treatment effect is the same across time, this gives us two ways of reporting treatment effects, i.e. reporting cumulative effects for the whole 14 days, and reporting weekly treatment effect as a weighted average between observed values in the two weeks. For the latter, using the same notation as above, we have $$\left[\begin{smallmatrix}{}C\\C\\-\\T\\T\end{smallmatrix}\right]: \left[\begin{smallmatrix}{}\xbar_{1}\\ \xbar_{2}\\ \ybar_{1}\\ \ybar_{2}\end{smallmatrix}\right] \sim N \Big( \left[\begin{smallmatrix}{}\mu\\\mu + \theta\\\delta + \mu\\\delta + \mu + \theta\end{smallmatrix}\right] , \left[\begin{smallmatrix}{}\Sigma & 0 \\0 & \Sigma \end{smallmatrix}\right] \Big), \Sigma = \left[\begin{smallmatrix}{}s_{1}^{2} & \rho s_{1} s_{2}\\\rho s_{1} s_{2} & s_{2}^{2}\end{smallmatrix}\right]$$ We can solve for MLE and their variances. 
$$\label{2week_ab_ab} Var(\hat{\delta})|_{Parallel}= 2 \frac{s_{1}^{2} s_{2}^{2} \left(1 - \rho^{2}\right)}{s_{1}^{2} + s_{2}^{2}- 2 \rho s_{1} s_{2}}$$ For the former, if the metric value is strictly additive across time, an example being revenue, under our toy model where all users appear in both periods, the cumulative treatment effect would be $\tilde{\delta} = 2\delta$, since $$\begin{aligned} \left[ \begin{smallmatrix}{}C\\-\\T \end{smallmatrix} \right] : \left[\begin{smallmatrix}{}\xbar_{1}+\xbar_{2}\\ \ybar_{1}+\ybar_{2}\end{smallmatrix}\right] &\sim N \Big( \left[\begin{smallmatrix}{}2\mu+\theta\\2\delta + 2\mu+\theta\end{smallmatrix}\right] ,\left[\begin{smallmatrix}{}\Sigma & 0 \\0 & \Sigma \end{smallmatrix}\right] \Big), \\ \Sigma&=Var(\xbar_1+\xbar_2). \end{aligned}$$ Using , variance for the MLE is $$\label{cumulative_ab} Var(\hat{\tilde{\delta}})|_{Cumulative} = 2Var(\xbar_1+\xbar_2) = 2(s_1^2+s_2^2+2\rho s_1 s_2)$$ Comparing coefficient of variation (CV) in to , $$\begin{aligned} %\begin{multline} %& \frac{1}{CV_{Cumulative}^2} - \frac{1}{CV_{Parallel}^2} & \frac{Var(\hat{\tilde{\delta}})}{4\delta^2} - \frac{Var(\hat{\delta})}{\delta^2} %\\ % &= \frac{1}{\delta^2} \left[ \frac{(s_1^2+s_2^2+2\rho s_1 s_2)}{2} - \frac{2s_{1}^{2} s_{2}^{2} \left(1 - \rho^{2}\right)}{ s_{1}^{2} + s_{2}^{2}- 2 \rho s_{1} s_{2}} \right] \\ % &= \frac{ (s_{1}^{2} + s_{2}^{2})^2 - 4\rho^2 s_{1}^{2} s_{2}^{2} - 4s_{1}^{2} s_{2}^{2} (1-\rho^2)}{2\delta^2 (s_{1}^{2} + s_{2}^{2}- 2 \rho s_{1} s_{2})} \\ % &= \frac{ (s_{1}^{2} + s_{2}^{2})^2 - 4s_{1}^{2} s_{2}^{2} }{2\delta^2(s_{1}^{2} + s_{2}^{2}- 2 \rho s_{1} s_{2})} \\ %& = \frac{ (s_{1}+ s_{2})^{2} (s_{1}- s_{2})^{2}}{2\delta^2 (s_{1}^{2} + s_{2}^{2}- 2 \rho s_{1} s_{2})} \ge 0 %\end{multline}\end{aligned}$$ Equality holds when the two periods have identical variance, i.e. $s_1 = s_2$. In other words, for additive metrics which variation over time is large, reporting weekly metrics alone will improve metric sensitivity. 
For non-additive metrics, such as ratio metrics like Click Through Rate (CTR), the derivation becomes more involved. In practice, also there is a lot of non-recurring users. We opted to show empirical results instead in results section. Careful readers may have noticed, this method makes a key assumption that treatment effect $\delta$ remains the same in the two weeks. To check this assumption, we can explicitly test for $\delta_1 = \delta_2$ by fitting the model this way: $$E\left[\begin{smallmatrix}{}\xbar_{1}\\ \xbar_{2}\\ \ybar_{1}\\ \ybar_{2}\end{smallmatrix}\right] = \left[\begin{smallmatrix}{}\mu\\\mu + \theta\\\delta_1 + \mu\\\delta_2 + \mu + \theta\end{smallmatrix}\right]$$ and test for the equivalence of MLEs $H_0: \hat{\delta_1} = \hat{\delta_2}$. The parallel design is appropriate if we fail to reject $H_0$. Crossover Design {#sub:crossover_design} ---------------- Now with the preliminary background information setup, we then look at variation reduction achieved through the crossover design. The crossover design employs a similar idea to CUPED. Instead of using pre-experiment data as the baseline, in crossover experiments, each user is exposed to both treatments sequentially, while the order of treatment groups is determined randomly. Each user’s behavior while he or she is on the control condition serves as a baseline for his or her behavior on the treatment condition. By accounting for within-user variation, analysis based on the crossover design also reduces variance for the estimated treatment effect. In causal inference, we often seek to eliminate any confounding factors and isolate the root cause of observed difference. Due to not observing the counterfactual in the potential outcome framework[@Rosenbaum1983; @counterfac], randomization is used to make the control group as the surrogate for counterfactual. This surrogate only works *on average*. In reality often some imbalance in some observed or unobserved factors will remain. 
Crossover design uses each test subject as his or her own control, thus reducing the influence of confounding covariates, and achieve better sensitivity in estimating treatment effect. Distribution of observed sample averages is: $$\left[\begin{smallmatrix}{}C\\T\\-\\T\\C\end{smallmatrix}\right]: \left[\begin{smallmatrix}{}\xbar_{1}\\ \xbar_{2}\\ \ybar_{1}\\ \ybar_{2}\end{smallmatrix}\right] \sim N \Big( \left[\begin{smallmatrix}{}\mu\\\delta + \mu + \theta\\\delta + \mu\\\mu + \theta\end{smallmatrix}\right] , \left[\begin{smallmatrix}{}\Sigma & 0 \\0 & \Sigma \end{smallmatrix}\right] \Big), \Sigma = \left[\begin{smallmatrix}{}s_{1}^{2} & \rho s_{1} s_{2}\\\rho s_{1} s_{2} & s_{2}^{2}\end{smallmatrix}\right]$$ Similarly, treatment effect estimate has variance $$\label{2week_ab_ba_crossover} Var(\hat{\delta})|_{Crossover}= 2 \frac{s_{1}^{2} s_{2}^{2} \left(1-\rho^{2}\right)}{s_{1}^{2} + s_{2}^{2}+2 \rho s_{1} s_{2} }$$ Comparing to , it is obvious that in the crossover design, treatment effect has smaller variance as long as the correlation $\rho$ is positive. Similar to CUPED, the amount of sensitivity improvement is determined by the size of $\rho$. The larger the correlation between time periods, the more improvement the crossover design has over the parallel design. The equivalence of treatment effect can be similarly checked as in section \[sub:cumulative\_vs\_parallel\_design\]. Absolute or Relative Change? {#sub:absolute_or_relative_change} ---------------------------- So far in this paper we considered the absolute treatment difference $\delta = \mu_{T} - \mu_{C}$. In practice we measure thousands of metrics simultaneously. These metrics may have vastly different magnitude in their treatment effects. Even the same metric measured over different duration, or over different sample sizes may have different absolute $\delta$’s. This renders comparison of effect size across different experiments difficult. 
To overcome this difficulty, we often seek to measure percent delta, $\%\delta = \frac{\delta}{\mu_{C}} \cdot 100\%$. The relative change is less influenced by the base difference and is a more robust measure of treatment effect. In online experimentation we usually deal with hundreds of thousands of samples, therefore CLT still holds and relative change would still have asymptotic normality. The additive model described above can be readily adapted to model relative difference instead of absolute difference, by formulating the expected group means in the mixed variance-covariance structure model. For example, the crossover model with relative treatment effect can be written as: $$\left[\begin{smallmatrix}{}C\\T\\-\\T\\C\end{smallmatrix}\right]: \left[\begin{smallmatrix}{}\bar{X}_{1}\\\bar{X}_{2}\\\bar{Y}_{1}\\\bar{Y}_{2}\end{smallmatrix}\right] \sim N \Big( \left[\begin{smallmatrix}{}\mu\\\mu(1+\delta)+ \theta\\\mu(1 + \delta)\\\mu + \theta\end{smallmatrix}\right] , \left[\begin{smallmatrix}{}\Sigma & 0 \\0 & \Sigma \end{smallmatrix}\right] \Big), \Sigma = \left[\begin{smallmatrix}{}s_{1}^{2} & \rho s_{1} s_{2}\\\rho s_{1} s_{2} & s_{2}^{2}\end{smallmatrix}\right]$$ Theoretic derivation to show variance reduction can be complex, but MLE estimates and their variances can be easily solved using numeric methods. The Unified Theme {#sec:motif} ----------------- We illustrated different model designs of FORME. Careful readers might already noticed that the unified theme here is to study the joint distribution of $\vec{X}$ and $\vec{Y}$, which by central limit theorem is known to be multivariate normal. Each model specification maps to the mean vector $\vec{\beta}$ of this multivariate normal. Therefore for any mean vector based on a model specification, we can solve the MLE and estimate its variance using Fisher’s Information. 
The difficulty, however, lies in how to estimate the covariance matrix in the general case, with missing data present and for metrics that are not defined as simple averages. For the crossover design in particular, we also need a way to decide whether we can safely assume the treatment effect is the same in both periods, without any carry over effect. We address these in detail in Section \[sec:carry\_over\_effect\] and Section \[sec:missing\_values\]. Section \[sec:theory\] explains why FORME is more flexible and scalable than the existing method of fitting a linear mixed effect model.

Carry over effect {#sec:carry_over_effect}
=================

The crossover design is not without concerns. An important assumption in the crossover model is that the treatment effect remains the same across the experimental periods. Since test subjects randomly receive all combinations of treatments in sequence, different users will receive the treatments in different orders. It is possible that the order in which users are exposed to treatments changes the effect. For an extreme example, suppose our treatment introduced a bug that results in a severely negative user experience, and this group of users fails to revisit the website in the later crossover period; the treatment effect is then different in the two periods. We call this the carry over effect, as the users exposed to treatment first and then control later may behave differently from the other group. In some experiments where the treatment condition is less noticeable to the users, the expected treatment effect is small, and based on historical insight it is safe to assume no carry over effect exists. A "wash-out" period can usually be injected between the treatment periods.

Wash-out Period {#ssub:washout}
---------------

This approach calls for a "wash-out" period after the end of the first period, where all users receive the control.
Data from the wash-out period can be analyzed similarly in a linear mixed model to estimate the carry over effect and subsequently inform the design of the later stage. We leave this as an exercise to the reader.

Estimate Carry over Effect {#sub:estimate_carry over_effect}
--------------------------

In the crossover design where only two groups are allocated, it is not hard to see that a potential carry over effect is confounded with the week-to-week difference of the treatment effect. Using only the crossover model, we can measure just one of these two effects. As an alternative, at the cost of less efficiency gain over the traditional non-crossover design, we may estimate the carry over effect explicitly using the following 4-group design. In a 4-group re-randomized design, we conduct the experiment over two periods and split the users into four equally sized groups: one receiving control in both periods, one receiving treatment in both, one receiving treatment followed by control, and the last receiving control followed by treatment. We can then tease apart the carry over effect and the treatment effect. We call this the re-randomized design, as it is equivalent to having another round of user randomization between the first and the second period.
Using notation from the linear mixed model, the model considering a potential carry over effect $\alpha$ is then $$\begin{aligned} \left[\begin{smallmatrix}{}C\\T\\-\\T\\C\\-\\C\\C\\-\\T\\T\end{smallmatrix}\right]: \left[\begin{smallmatrix}{} \xbar_{1}\\ \xbar_{2}\\ \ybar_{1}\\ \ybar_{2}\\ \zbar_{1}\\ \zbar_{2}\\ \wbar_{1}\\ \wbar_{2}\end{smallmatrix}\right] &\sim N \Big( \left[\begin{smallmatrix}{}\mu\\\delta + \mu + \theta\\\delta + \mu\\\alpha+\mu + \theta\\\mu\\\mu + \theta\\\delta + \mu\\\delta + \mu + \theta\end{smallmatrix}\right] , \left[\begin{smallmatrix}{}\Sigma & 0 & 0 & 0\\0 & \Sigma & 0 & 0\\0 & 0 & \Sigma & 0\\0 & 0 & 0 & \Sigma\end{smallmatrix}\right] \Big), \\ \Sigma &= \left[\begin{smallmatrix}{}s_{1}^{2} & \rho s_{1} s_{2}\\\rho s_{1} s_{2} & s_{2}^{2}\end{smallmatrix}\right]\end{aligned}$$ The carry over effect appears in the group that received treatment first and then reverted back to control in the second stage. Using observed data we can estimate the carry over effect $\alpha$ as an additional term in $\vec{\beta}$. When $\alpha$ is not statistically significant under a pre-specified type I error cutoff (usually 0.05), it is safe to drop the term $\alpha$ from $\vec{\beta}$ and re-fit the model. This way, we gain one more degree of freedom and thus reduce the variances of the MLEs. This approach enables direct estimation of the carry over effect. It can also be considered a hybrid between the crossover and parallel designs, as half of the users received crossed treatments and half received the same treatment in both periods. It is not hard to see that this design achieves a sensitivity improvement between those of the crossover and parallel designs.

Missing Values and Metrics Beyond Average {#sec:missing_values}
=========================================

Loss of follow-up and intent to treat {#sub:loss_of_follow-up_and_intent_to_treat}
-------------------------------------

Loss of follow-up is a common term in clinical studies.
It refers to patients who were active participants during some period of the trial but stopped participating at some point of follow-up. This can lead to incomplete study results and potential bias when the attrition is not random. Intention-to-treat analysis is commonly employed, in which the subjects' initially assigned treatment is used regardless of the treatment actually received. In online A/B testing the idea is similar: users are assigned to treatment groups at some point in time before the experiment starts, often by user id, but may or may not appear during the actual duration of the experiment. This missing pattern is far from random, so methods that rely on the strong MCAR assumption (missing completely at random) are not appropriate, and even the MAR (missing at random) assumption is questionable, as it requires the missing pattern to be random conditioned on observed covariates. One way to see that measurements are not missing at random is to realize that infrequent users are more likely to have missing values, so absence in a specific time window still provides information on user behavior; in reality there might be other factors causing a user to be missing that are not even observed. Instead of throwing away data points where a user appeared in only one period and was exposed to only one of the two treatments, in practice we include an additional indicator for the presence/absence status of the user in each experimentation period. Specifically, for user $j$ in period $i$, let $I_{ij} = 1$ if user $j$ appears in period $i$, and $0$ otherwise. For each user $j$ in period $i$, instead of one scalar metric value $(X_{ij})$, we augment it into a vector $(I_{ij}, X_{ij})$. When $I_{ij}=0$, i.e. the user is missing, we define $X_{ij}=0$.
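A sketch of this augmentation with synthetic data (numpy only; the delta-method variance at the end anticipates the computation described next): missing users carry $(I_{ij}, X_{ij}) = (0, 0)$, and the period average over appeared users equals the ratio of the two augmented means.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
appeared = rng.random(n) < 0.6          # I_ij: True if user appears in the period
raw = rng.normal(5.0, 2.0, size=n)      # underlying metric values
I = appeared.astype(float)
X = np.where(appeared, raw, 0.0)        # augmented value: 0 when missing

# Average over appeared users equals the ratio of the augmented means.
avg_appeared = raw[appeared].mean()
ratio = X.mean() / I.mean()
assert np.isclose(avg_appeared, ratio)

# Delta-method variance of the ratio mean(X)/mean(I):
# Var(a/b) ~ (1/n) * g' C g with gradient g = (1/mu_b, -mu_a/mu_b^2).
mu_a, mu_b = X.mean(), I.mean()
C = np.cov(np.vstack([X, I]))           # 2x2 sample covariance of (X, I)
g = np.array([1.0 / mu_b, -mu_a / mu_b**2])
var_ratio = g @ C @ g / n
print("ratio =", ratio, "delta-method variance =", var_ratio)
```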
Under this simple augmentation, the metric value $\xbar_i$ for period $i$, taken as the average over the non-missing measurements, is the same as $\frac{\sum_k X_{ik}}{\sum_k I_{ik}}$. Consequently, to obtain the MLE and its variance, we only need to estimate the covariance matrices for each group across time periods, i.e. $$\begin{aligned} \cov(\overline{X_i}, \overline{X_{i'}}) &= \cov\left(\frac{\sum_k X_{ik}}{\sum_k I_{ik}},\frac{\sum_{k'} X_{i'k'}}{\sum_{k'} I_{i'k'}}\right) \\ & = \cov\left( \frac{\overline{X_{i}}}{\overline{I_i}}, \frac{\overline{X_{i'}}}{\overline{I_{i'}}} \right)\end{aligned}$$ where the last equality follows from dividing both numerator and denominator by the same total number of users who have *ever* appeared in the experiment. Thanks to the central limit theorem, the vector $(\overline{I_i},\overline{X_i},\overline{I_{i'}},\overline{X_{i'}})$ is also asymptotically normal. Plugging in observed sample means and the covariance matrix, $Cov(\overline{X_i}, \overline{X_{i'}})$ can be trivially computed with the $delta$-method; see @deng2013cuped [Appendix B] for a similar example; also see [@VanderVaart2000] for a textbook treatment of the $delta$-method.

Metrics Beyond Average {#sub:page_level_metrics}
----------------------

Treatment groups are assigned to users, but not all metrics are simple averages across users. We can define a metric as a function of other metrics. One important family is page level metrics, such as click-through rate. Page level metrics use the number of page-views as their denominator. At first glance this might look like just another simple average. But since treatments are assigned to users (the independent unit), page-views are not independent, so treating it as a simple average over page level measurements needs extra care. A better approach is to see it as a ratio of two user level metrics: clicks per user and page-views per user.
$$\frac{\sum_{user_i}Clicks_i}{\sum_{user_i}Pages_i} = \frac{\sum_{user_i}Clicks_i/\sum_{user_i}I_i}{\sum_{user_i}Pages_i/\sum_{user_i}I_i},$$ where $\sum_{user_i}I_i$ is the count of appeared users. The same $delta$-method mentioned in Section \[sub:loss\_of\_follow-up\_and\_intent\_to\_treat\] naturally extends here, with a slightly more complicated formula. Since the $delta$-method applies in general to any continuous function, we can handle any metric that is defined as a continuous function of other metrics.

Flexible and Scalable Repeated Measures Analysis via FORME {#sec:theory}
==========================================================

Review of Existing Methods
--------------------------

It is common to analyze data from a repeated measures design with the repeated measures ANOVA model and the F-test, under certain assumptions such as normality, sphericity (homogeneity of variances in differences between each pair of within-subject values), equal time points between subjects, and no missing data. Such assumptions in general do not hold for large-scale online experiments, where the assignment of users to different treatment groups may not be completely balanced. A more generally applicable method is to analyze the data using the linear mixed effect model, for which complete balance is not necessary [@Bates2012]. In particular, a linear mixed effect model treats each measurement of a subject as a data point and models the measurement as $$Y = \theta + \alpha X+\beta Z+\epsilon$$ Here $\theta$ is the global mean, $\alpha$ is the vector of all deterministic fixed effects, $\beta$ is the vector of all random effects, and $\epsilon$ is noise. $X$ and $Z$ are covariates in the model; in our case they are indicators of treatment assignment, periods of the measurement, user id, and any other covariate.
As an example, one possible model for repeated measures using lme4's formula syntax [@Bates2012; @Bates2012a] is $$\begin{aligned} Y \sim 1 + IsTreatment + Period + (1|UserID),\end{aligned}$$ where the only difference between this model and the usual linear model behind the two sample test is the extra random effect (clustered by UserID) that models the user "baseline". More complicated models exist to further model interactions and joint random effects. Random effects make modeling within-subject variability possible. In repeated measures data, users might appear in multiple periods, represented as multiple rows in the dataset. As a result, rows of the dataset are not independent, with dependencies clustered by user. To see this, note that each user's "baseline" measurement is captured as a random effect; the same user in different periods shares the same "baseline" random effect, resulting in dependency. The mixed effect model effectively takes advantage of this and is able to estimate the variance of the random effect while reducing the variance of the average treatment effect. In the case of the crossover design, the model can take advantage of the positive correlation between the two periods of the same user, which improves accuracy in the estimation of the treatment effect, similar to the illustration we derived in Section \[sub:crossover\_design\]. The treatment effect can be modeled as either a fixed effect or a random effect[^1]. If our interest is the average treatment effect, we can model it as a fixed effect. Note that modeling the treatment effect as a fixed effect does not mean we need to assume it is fixed, which in general it is not, since different subjects react to the treatment differently; rather, the focus here is the mean of the random treatment effect, not its variance.
One can still fit the model with a random treatment effect and the results generally agree, though the fixed effect is believed to be more robust against model assumptions; see @Wooldridge2012. We point out two issues with the traditional mixed effect model, and claim that FORME is a better alternative on the axes of flexibility and scalability. First, the linear mixed effect model (and also the generalized linear mixed effect model) is a family of parametric models and relies on full knowledge of the likelihood function to perform parameter fitting. This means the model needs to rely on distributional assumptions such as normality. In particular, all random effects are typically modeled as normally distributed or jointly normally distributed, and the noise $\epsilon$ needs to be either i.i.d. normal or the modeler needs to provide a known covariance matrix. These assumptions are indispensable in the theory and pivotal in the fitting of the model. For our application in online A/B testing, many of these assumptions are inappropriate. To name a few: for a metric like revenue per user, it is inappropriate to model the user "baseline" revenue per week as normally distributed due to its large skewness. Also, the noise term $\epsilon$ is hard to justify as truly independent of the other random effects: a heavier user might have a bigger "baseline" revenue, bigger noise, and a bigger (or in some cases smaller) treatment effect. The model also assumes data are missing at random. Modelers of the linear mixed effect model can modify the model by making random effects jointly random or including more interaction terms; however, the more complicated the model, the more questions about model assumptions arise. We show in Section \[sec:results\] through a simulation study that the linear mixed effect model fitted with the R package lme4 [@Bates2012] can result in biased estimation of the average treatment effect when there is correlation between the data missing pattern and the user random effect.
Second, fitting a mixed effect model can be expensive. Available packages in SAS or R are based on fitting MLE or REML (restricted maximum likelihood). In either case, much effort is spent estimating the variance of the random effect(s), or the covariance matrix if they are jointly random. The fitting algorithm takes the full dataset, with each row representing a measurement. In online A/B testing, where tens of millions of users are involved, this dataset can be large, and each fitting iteration requires operations on the full dataset, making the efficiency of model fitting a concern in the big data scenario. To the authors' best knowledge, there is no literature on the topic of big data implementations of the linear mixed effect model. In our experience FORME is 1 to 2 orders of magnitude faster than lme4 with a much smaller memory footprint, even without map-reduce type parallelism. In the remainder of this section, we explain why FORME is both more scalable and more flexible than the linear mixed effect model.

FORME is Scalable {#sub:implementation_for_big_data}
-----------------

Instead of modeling at the level of individual measurements, FORME sees the problem from a higher level and takes advantage of big data. By the central limit theorem, the metrics of interest in each period for treatment and control follow a normal distribution. Using the same notation as in Section \[sec:Background\], this multivariate normal random vector is denoted by $\xbar_i, \ybar_i, i=0,\dots,2$, with mean $\vec{\beta}(\vec{\lambda})$ and a certain covariance matrix. These metric values are correlated with each other via common user level random effects, modeled explicitly in the linear mixed effect model but not in FORME. This is because when our interest is only the average treatment effect, the estimates of those random effects are irrelevant. Instead, FORME sees the average treatment effect $\delta$ as just one parameter in the mean vector of the metric values $\vec{\beta}(\vec{\lambda})$.
That is, when modeling metric values directly using a multivariate normal distribution with parameters in the mean vector, all the complexity involving the structure of the random effects is buried in the covariance matrix, and we are left with a simple task: estimating the parameters $\vec{\lambda}$ of this multivariate normal. FORME estimates $\vec{\lambda}$ by fitting the MLE. Asymptotic statistics also guarantee that the estimates are normally distributed, with covariance matrix derived from Fisher's Information [@VanderVaart2000]. Note that the scale of this step is much smaller than the MLE fitting of a typical linear mixed effect problem: FORME only needs to fit a multivariate normal of small dimension, typically smaller than 12 (6 in a crossover design: treatment and control for each of the pre-experiment period, period 1, and period 2). The main computational burden is therefore the estimation of the covariance matrix. Fortunately, this step only involves estimating pair-wise covariances between metric values, all of which can be map-reduced in one pass over the data. To handle missing data and general forms of metrics (as continuous functions of other metrics), the $delta$-method can be employed (Section \[sec:missing\_values\]). The application of the $delta$-method only involves a slightly more complicated covariance matrix, so we need to estimate more covariance pairs in the one map-reduce pass over the data, inducing a negligible increase in complexity.

FORME is Flexible {#sub:flexible}
-----------------

FORME is not only scalable but also more flexible. Because FORME does not explicitly model random effects as linear mixed effect models do, FORME makes no distributional assumptions on random effects and noises $\epsilon$. FORME also makes no assumption on the missing data pattern. FORME needs only one critical assumption, i.e.
that the central limit theorem is applicable, which is rarely violated in online A/B testing, since traffic sizes are large enough even for the most highly skewed metrics such as revenue [@Kohavi2014SevenRules]. Specifically, FORME can be applied in all of these cases:

-   Data can have an arbitrary missing pattern; in other words, no assumption of missing at random.
-   The treatment effect is random.
-   The treatment effect and the user random effect (baseline) are not independent.
-   Noises $\epsilon$ are not i.i.d.
-   Noise and random effects are not independent.
-   Interactions (e.g. treatment and control have different user random effect distributions, etc.).

To close this section, we make the final remark that the flexibility of FORME really comes from its simplicity compared to the linear mixed effect model. We believe FORME is also easier for practitioners to understand. The cost of FORME making fewer assumptions than the mixed effect model is that, when the mixed effect model assumptions do hold, the FORME estimate may have larger variance than the mixed effect model estimate. Next we explore these points through a simulation study.

Results {#sec:results}
=======

Simulation from Known Distributions {#sub:simulation_results}
-----------------------------------

We compare variances reported by our FORME procedure to the traditional linear mixed model under various simulation assumptions. As illustration we use the crossover design. We simulate a total of $2N$ users, where $N=10000$, and randomly split them into two treatment groups: $$X_{ij}=\mu + \delta_{ij} + u_i + \epsilon_{ij}, \quad \epsilon_{ij}\sim N(0,\sigma^2)$$ where $i$ is the index for user and $j$ for time period. $\epsilon_{ij}$ represents random noise and $u_{i}$ represents the random user "baseline" effect (its distribution varies by simulation condition below). $\delta_{ij}$ is the treatment effect for user $i$ in period $j$ (0 if not in treatment). In this model, the between-period correlation is then $\frac{\sigma_u^2}{\sigma_u^2+\sigma^2}$.
If user $i$ is in treatment for time period $j$, $\delta_{ij} \sim N(\delta, \sigma_{\delta}^2)\times p_{i}$, where $\delta\times E(p_i)$ is the ground truth average treatment effect size and $p_{i}$ is a continuous value between 0 and 1 representing the user's activity level. We design $p_{i}$ to be correlated with $u_i$; this way we allow the treatment effect to vary by how frequently a user visits the site. Finally, we allow $X_{ij}$ to be missing with probability $\min(90\%, 1-p_{i})$. This is intuitive since a less active user is missing more often. Note that in this simulation study we know exactly what the true average treatment effect is. We simulate this process $K=10000$ times so that we have a good estimate of the ground truth variance of the treatment effect estimated by FORME and the mixed effect model (lme4). We want to learn the following for both FORME and lme4 from this simulation study: 1) is the estimate unbiased, and 2) is the variance estimation correct. If both methods are unbiased, we then want to know which one has the smaller variance. Without loss of generality, we use $\mu=0$, $\sigma=4$, $\sigma_u=2$ or 4, $\delta=10$, and $\sigma_{\delta} = 0.1\sqrt{12}$. We chose 5 simulation conditions:

1.  Normal noise, no treatment effect, normal user random effect $u_i \sim N(0, \sigma_u^2)$
2.  Normal noise, no treatment effect, Poisson user random effect $u_i \sim Poisson(\sigma_u^2)$
3.  Normal noise, with a random treatment effect that is correlated with the user random effect: $N(\delta, \sigma_{\delta}^2)\times p_{i}$
4.  Noise correlated with user activity level: $\sigma =2 \times p_i$
5.  Noise correlated with user activity level: $\sigma =4 \times p_i$

All conditions have roughly 50% of users missing in each period.
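The data-generating process above can be sketched as follows (a simplified illustration; the logistic link between activity level $p_i$ and baseline $u_i$ is our own choice to induce the required correlation):

```python
import numpy as np

rng = np.random.default_rng(7)
N = 10000
mu, sigma, sigma_u = 0.0, 4.0, 2.0
delta, sigma_d = 10.0, 0.1 * np.sqrt(12)

u = rng.normal(0.0, sigma_u, size=2 * N)           # user "baseline" effect
# Activity level p_i in (0, 1), correlated with the baseline u_i
# (logistic transform -- an illustrative choice).
p = 1.0 / (1.0 + np.exp(-u / sigma_u))
treated = rng.permutation(2 * N) < N               # random split into two groups

X = np.empty((2 * N, 2))                           # two periods, crossover
for j in range(2):
    # Crossover: group 1 is treated in period 0, group 2 in period 1.
    in_treatment = treated if j == 0 else ~treated
    d = np.where(in_treatment, rng.normal(delta, sigma_d, 2 * N) * p, 0.0)
    X[:, j] = mu + d + u + rng.normal(0.0, sigma, 2 * N)

# Missingness grows as activity drops, capped at 90%.
miss_prob = np.minimum(0.9, 1.0 - p)
observed = rng.random((2 * N, 2)) > miss_prob[:, None]
X_obs = np.where(observed, X, np.nan)
print("ground-truth average effect:", delta * p.mean())
```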
First of all, in condition 3, where there is a random treatment effect, we found that lme4 consistently gave biased estimates (when the ground truth effect is 6.6, FORME estimates are very close to the ground truth while lme4 always gave biased estimates around 7.2). This is because lme4 relies on the assumption of missing at random, which is violated here since the random effect is negatively correlated with the chance of missing. We believe this is a fundamental issue with the mixed effect model, as the missing data pattern is often correlated with some underlying user characteristic that is in turn correlated with the user's response to treatment. One might argue that the mixed effect model can be fixed by adding more interaction terms; however, in practice more complex models are often not identifiable (more parameters than data points) and only make more assumptions. We also note that lme4 provided unbiased estimates for conditions 2, 4 and 5, where some mixed effect model assumptions are also violated. We believe the central limit theorem helped lme4 stay unbiased in those cases, but the bias in condition 3 appears more fundamental. We leave a more thorough study of the bias of lme4 under violations of different assumptions to future work. We also compare variances from LME and FORME under the crossover model in Figure \[fig:lme\_forme\]. Both FORME and lme4 provided very good estimates of the variance. Also, as expected, FORME pays a price for its flexibility and nearly "model free" nature, as variances from lme4 estimates are generally smaller. The variance gap is bigger when the missing rate is higher and the between-period correlation is higher. Although not shown, when either there is no missing data or the correlation is 0, FORME and lme4 estimates have the same variance. Although the lme4 estimate has smaller variance, its potential bias is a show-stopper, since for treatment effect estimation a low variance estimate is not useful if it is biased.
![Effect Variance in LME and FORME[]{data-label="fig:lme_forme"}](FORME_LME){width="42.00000%" height="25.00000%"}

Simulation from Empirical Data {#sub:empirical_results}
------------------------------

Next we randomly sample from our in-house data a small subset of $N=1250$ users, randomly split the users into equal sized subsets, and apply the various designs. We then simulate $K=10000$ bootstrap samples (with replacement) from this dataset, fit FORME, and report the estimated MLEs. The variance based on these MLEs is then compared to the variance estimated from Fisher Information using the full dataset. Figure \[fig:Boot\] shows the two agree well. Note the cumulative effect has a different effect size from the rest of the designs. For this particular metric, using CUPED results in roughly 50% reduction in variance, and the crossover design shows a reduction of around 50% compared to the parallel design.

![Effect Variance from Fisher's Information and Bootstrap method[]{data-label="fig:Boot"}](Boot){width="51.00000%" height="22.00000%"}

Real Experiments {#sub:real_experiments}
----------------

Finally, we report results for three typical metrics in one of our real experiments. Here we use percent change as the effect size; this way, a weekly effect size is comparable to a cumulative effect size across two weeks. The variance of the effect size therefore indicates the sample size needed to achieve the same sensitivity. Figure \[fig:RealExpt\] displays the percent of samples needed to achieve the same sensitivity for the three metrics using various models, with the crossover design as the baseline (value 100). All models include CUPED, since pre-experiment data always exists and is free. The crossover design consistently needed the fewest samples; the re-randomized design came next, with values between the crossover and parallel designs, followed by the cumulative design.
When the re-randomized model includes a carry over effect, the samples needed can be larger than for the cumulative design, as seen for metric 2. Note that compared to the previous benchmark, the cumulative design, the crossover design can save up to 2/3 of the traffic for metric 3, while for the other metrics the traffic savings is in the 30-40% range. This is due to inherent differences in week-to-week correlation across metrics. The drastic reduction in variance for such metrics means the same feature can be tested with only 1/3 of the original traffic!

![Percent samples needed to achieve the same sensitivity for three metrics. Baseline is the crossover design.[]{data-label="fig:RealExpt"}](adbkgd){width="50.00000%" height="20.00000%"}

Practical Considerations {#sec:Practical_considerations}
========================

At the design stage, we face a few choices under the same framework of repeated measures design. Experimenters should use domain knowledge and past experiments to inform the design; this is more of an art than a pure science. Here we give guidelines based on our own experience.

Recommended Work Flow {#sub:recommended_work_flow}
---------------------

Due to the flexibility of the two-stage setup in a repeated measures design, we can use the information gathered in the first stage to inform procedures in the next stage. We recommend using the crossover design for validation stage experiments, for which we have already gathered exploratory directional data. If the first stage already results in statistical significance for the KPI, we may choose to terminate the experiment early. In practice, however, we generally recommend running experiments long enough to gain enough power not only for the KPI but also for other metrics designed to monitor data quality and serve as guardrails against unexpected changes. Otherwise, in running the second stage, we can use domain knowledge to inform us about the carry over effect.
If historical experiments on similar feature iterations indicate a potential carry over effect, we recommend running a complete 4-group crossover experiment so we can directly estimate the carry over effect. Otherwise, we recommend the 2-group crossover design to achieve the maximum power for the KPI. If we are not sure, it is still possible to leave a few days' "wash-out" period after completing the first stage and check whether any carry over effect can be observed.

**No swapping**: When it is critically important to ensure a consistent user experience, such as when changing the entire layout of a site, it may not be desirable to show users the new site for a week and then swap them back to the old site. The experience may be too jarring to users and hurt the brand. In such cases, we do not recommend re-assigning treatment variants half way through.

**Crossover**: Relatively small changes that are less directly noticeable are better candidates for treatment swapping. If similar experiments from the past, or earlier exploration data, do not indicate the presence of a carry over effect, the crossover design can be employed.

**Re-randomized**: If we suspect the presence of a carry over effect, the re-randomized design enables us to measure it directly and should be used.

**Wash-out and decide**: If we have little information to judge the carry over effect, we can run the first week of the experiment and then leave a few days as a "wash-out" period. The next stage is data driven: using the wash-out data we can estimate the carry over effect explicitly. If there is no significant carry over effect, proceed as in the crossover design; otherwise, proceed as in the re-randomized design.

Having collected the experiment data, it can then be analyzed in the following work flow to achieve the most power.
**No swapping:** Test equivalence of the treatment effect across time.

-   If the effects are equivalent, report the treatment effect in "per time unit" metric values by analyzing with the parallel model, including pre-experiment data.
-   Otherwise, analyze only cumulative effects, including pre-experiment data. Note this is CUPED.

**Crossover design:** Test equivalence of the treatment effect across time.

-   If the effects are equivalent, report the treatment effect in "per time unit" metric values by analyzing with the crossover model, including pre-experiment data.
-   If, however, an unexpected significant difference is found, there are several choices:
    -   Report the two treatment effects separately.
    -   To understand the difference properly, another phase of the experiment can be added, using the re-randomized design. With a total of three weeks' data, we can see whether the treatment effect difference is due to a true week-to-week difference, and study its trend, or due to a carry over effect.

**Re-randomized design:** Test equivalence of the treatment effect across time and the presence of a carry over effect. Reduce the model if any of the effects are not statistically significant, and report the treatment effect.

This carries the subtle difference of reporting a treatment effect over the entire duration of the experiment versus per time unit (a week here). We argue that as long as weekly treatment effects are stable over time, reporting the weekly effect is intuitive, easy to understand, and easy to compare across different experiments. In real life, various things can happen during an experiment, and we may end up with an experiment that ran only partial weeks. In these cases, reporting the treatment effect over the entire duration is better than throwing away data or ignoring weekday differences.

Sample Size Considerations {#sub:sample_size_considerations}
--------------------------

While direct estimation of sample size is difficult in the linear mixed model, in practice there is an easy work-around.
In the traditional design, using the CLT, with a simple two-sided test for $H_0: \delta=0$, sample sizes can be easily calculated: $$n = (z_{1-\alpha/2}+z_{1-\beta})^2/\frac{\delta^2}{Var(\delta)}$$ where $\alpha$ is the allowed false positive rate, usually 0.05, and $1-\beta$ is the desired power, usually $80\%$ to $90\%$. From historical data we can record the amount of variance reduced for each metric. The magnitude is determined by the inherent variance of the metric and the correlation across time periods, both of which are observed to be fairly stable across many experiments. Suppose the variance for metric X in a crossover experiment is $k\%$ of that in the conventional t-test. If $N$ subjects are required to detect a change of $\delta\%$ in the t-test with, say, $80\%$ power, then $k\%\cdot N$ is the reduced sample size needed to achieve the same power.

Discussions and Future Work
===========================

Extending to more frequent swaps {#sub:extending_to_more_frequent_swaps}
--------------------------------

The crossover design achieves sensitivity by exposing users to both treatment variants in sequence, swapping the treatment assignment once during the experiment. By using each subject as his or her own control, this design accounts for within-subject variance. A natural extension of the idea is to swap treatment groups more than once. Essentially, this changes to a more granular randomization unit, from users to page views. Exploratory work shows this indeed achieves further variance reduction. However, it also raises concerns about inconsistent user experience, diminished treatment effect size, stronger learning effects, and the lack of a longer term measure. Despite these concerns, it remains a valuable option in early stage experiments for quickly selecting promising features for further iteration.
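The sample-size work-around from the previous subsection can be sketched as follows (our own illustration; we take $Var(\delta)$ as the per-sample variance of the effect estimate, so the formula reduces to $n = (z_{1-\alpha/2}+z_{1-\beta})^2 \, Var(\delta)/\delta^2$):

```python
from scipy.stats import norm

def sample_size(delta, var, alpha=0.05, power=0.8):
    # n = (z_{1-alpha/2} + z_{1-beta})^2 * Var(delta) / delta^2
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z**2 * var / delta**2

# Hypothetical numbers: crossover variance at k = 40% of the t-test
# variance scales the required sample size by the same k.
n_ttest = sample_size(delta=0.5, var=100.0)
k = 0.4
n_crossover = sample_size(delta=0.5, var=k * 100.0)
print(n_ttest, n_crossover)
```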
Limitations and concerns {#sub:limitations_and_concerns} ------------------------ Due to user behavior differences between weekday and weekends, we usually recommend running each phase of the cross-over design for at least a full week. A crossover experiment then requires two complete weeks to gather data, which hinders agility. Another limitation is that for very highly visible features like changing prominent UI features, such swapping may not be desirable since it may confuse the users. Finally, not all features can be tested this way, as there might be a “learning” effect, where we can’t have the users exposed to treatment “unlearn” the feature, while having controls naive to the treatment. For example, if the website provides new features and personalized content to signed in users to encourage higher rate of signing in and staying signed in. These users cannot then be forced to logout into the control group. @Ma2011 shows one interesting case where crossover design can be extended to tackle this issue. Further Improvements of FORME ----------------------------- We’ve shown in Section \[sub:simulation\_results\] that mixed effect model via lme4 provides a competing estimate of the average treatment effect that could be biased when missing data pattern correlate with user random effect, but often with smaller variance than FORME. We noted that FORME has to pay some price to be more flexible and robust, similar to nonparametric model usually is less efficient than their parametric counterparts. However we suspect that efficiency of FORME can be further improved to match the efficiency of mixed effect model even under perfect mixed effect model assumption. Such improvement would be very desirable. But even without such improvement we believe the bias when there is missing data that is not missing at random is a big issue for mixed effect model to be adopted in online controlled experiment. And FORME should be used instead. 
[^1]: When there are only two measurements for a subject like crossover design, modeling treatment effect and user “baseline” both as random effect is unidentifiable. But the model can be fit if there are more measurements per subject. | High | [
0.6778783958602841,
32.75,
15.5625
] |
Terms of Service By accessing this site, you indicate your agreement with and understanding of the following terms of use and legal restrictions pertaining to both this site and the material in it. If you do not agree to be bound by the conditions and terms in this Agreement, do not subscribe to, access or otherwise use this site or Yabla services. This Agreement (the "Agreement") is between you and Yabla, ("Yabla"), a corporation organized and existing under the laws of the state of New York, and with a principal place of business located in New York, NY, in the United States of America. 1) Limited End User LicenseYabla grants you a non-exclusive, non-transferable, limited right to access, use and display the online products you subscribe to or license (the "Content") made available on the Yabla website (the "Site"), provided that you comply fully with this Agreement. The Content is only for your personal, noncommercial use. Each license is for use by one individual. Each additional user requires a separate license. 2) Copyrights and TrademarksAll materials on the Yabla.com website, as well as its subdomains, including without limitation text, images, software, audio and video clips, and Fee-Based Services (collectively, the "Materials") are owned or controlled by Yabla Inc., which retains all right, title, and interest in and to the Materials. The Site and Materials are protected by the copyright and trademark laws of the US and other countries, international conventions, and other applicable laws. You may not download, display, reproduce, create derivative works from, transmit, sell, distribute, or in any way exploit the Site or any portion thereof for any public or commercial use without the express written permission of Yabla. Teachers who themselves may access Yabla products and videos in accordance with the terms of use may project the Yabla Player and associated videos to assembled students in classroom settings. 
3) Disclaimer and WarrantyThe site (including all content, software, functions, fee-based services, materials and information made available thereon or accessed by means thereof) are provided AS IS, without warranties of any kind, either expressed or implied, including, but not limited to, warranties of title or implied warranties of merchantability, fitness for a particular purpose, title, compatibility, security, accuracy, or non-infringement. To the fullest extent permissible by law, Yabla and its affiliates make no warranties and shall not be liable for the use of this site under any circumstances, including but not limited to negligence by Yabla. Yabla does not warrant that the functions contained in the site or the services, fee-based or otherwise, will be uninterrupted or error-free, that defects will be corrected, that the site or fee-based services will meet any particular criteria of performance or quality, or that the site, including forums or the server(s) on which the site is operated, are free of viruses or other harmful components. 4) Limitation of LiabilityUse of the site is at your own risk. You assume full responsibility and risk of loss resulting from your downloading, accessing or use of files, information, communications, content, or other material (including without limitation software) accessed through or obtained by means of the Site. Under no circumstances shall Yabla or its affiliates, or any provider of telecommunications or network services for Yabla or the affiliates, be liable for any indirect, punitive, special, or consequential damages that are directly or indirectly related to the use of, or the inability to use, the site or fee-based services, even if Yabla, its affiliates, or their providers of telecommunications or network services has been advised of the possibility of such damages. 
The total liability of Yabla and the affiliates hereunder is limited to the amount, if any, actually paid by you for access and use of the fee-based services. You hereby release Yabla and its affiliates from any and all obligations, liabilities and claims in excess of this limitation. Some states do not allow the exclusion or limitation of incidental or consequential damages, so the above limitation may not apply to you. 5) Single UserEach assigned user name for a Yabla online service is to be used solely by the individual to which it was issued. Requests for multiple user single site licenses can be arranged and should be made to Yabla Inc. 6) Agreement To PayYou agree to pay, using the credit information you provided us, the periodic subscription charges, applicable taxes, and other charges incurred on your account in order to access any fee-based services to which you have subscribed. Yabla reserves the right to increase fees, or to institute new fees at any time, upon email notice of 15 days or more. 7) Termination Yabla reserves the right to restrict, suspend or terminate your access to Yabla services, fee-based or otherwise, in whole or in part, with respect to any breach or suspended breach of any portion of this Agreement. In the event of such a termination, there will be no refunds for unused time under the terms of your license or subscription. Yabla reserves the right to refuse to provide services to you in the future. 8) Refund/Cancellation Policy You may cancel your subscription to Yabla at any time for any reason whatsoever. In the event of a cancellation, there will be no recurring charges made to your credit card. If you cancel within the first 7 days of the commencement of your subscription, you will receive a full refund of any fees paid. 
If you cancel after 7 days, access to the product will remain in effect for the remainder the subscription, but you will not be charged again, all recurring billing will cease, and the subscription will lapse as of its end date. 9) Privacy PolicyWe will not sell, rent or give your personal information to any third party for marketing or other purposes without your explicit consent. 10) Modification to This AgreementYabla may modify this Agreement at any time and changes are effective as soon as new agreement is posted to site or notice is made via electronic mail. If you are not satisfied with any such changes your only recourse and right is to cancel your subscription. 11) Modification of the Yabla SiteYabla may modify the Site in any way at any time, including fee-based services. Yabla may impose limits on any Site features and/or services or restrict your access to parts or all of the Yabla site without notice or liability. If you are not satisfied with any such changes your only recourse and right is to cancel your subscription. 12) Change in Fees or Fee StructureYabla may change fees or fee structure for any product or service, including recurring subscriptions, at any time. Notice of any such changes will be posted in advance on the Yabla site and sent via electronic mail to any subscribers with recurring subscriptions that may be affected. | Mid | [
0.5503597122302151,
38.25,
31.25
] |
Q: when selecting from multiple SQL Server 2008 tables is a join inferred? I found a sproc in a db that I think infers a table join? Is that accurate to deduce that from the sample below? SELECT a.Column1, b.Column2 FROM [dbo].[Table1] As a, [dbo].[Table2] AS b A: You inferred correctly, this is a Cartesian join. A: It does infers a join, but that join is a CROSS JOIN, which is rarely what you want. It won't simply map each row of a to its equivalent in b. Instead, it will map each row of a to every row of b. That is a join, but it's not what we often think of when we talk about joins. I wanted to show a quick example, but unfortunately SQL Fiddle is down. Sorry! More on cross joins and cartesian products in SQL Server here. | High | [
0.685,
34.25,
15.75
] |
Q: Simple reverse proxy with Nginx (equivalent to Apache) With Apache, I can make reverse proxy working with this VirtualHost configuration. I have executed nanoc view -p 8080to use 8080 port for nanoc web app. With this setup, http://scalatra.prosseek is mapped to the nanoc. <VirtualHost *:80> ProxyPreserveHost On ServerName scalatra.prosseek ProxyPass /excluded ! ProxyPass / http://127.0.0.1:8080/ ProxyPassReverse / http://127.0.0.1:8080/ </VirtualHost> I need to have the same setup with Nginx, with some trial and error, I could make it work with this configuration. upstream aha { # ??? (1) server 127.0.0.1:8080; keepalive 8; } # the nginx server instance server { listen 0.0.0.0:80; server_name scalatra.prosseek; access_log /usr/local/etc/nginx/logs/error_prosseek.log; location / { # ??? (2) proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_set_header X-NginX-Proxy true; proxy_pass http://aha/; # ??? (1) proxy_redirect off; # ??? (3) } } It works, but I'm not sure if this is the best setup. Here come my questions: Is the setup OK for http://scalatra.prosseek to localhost:8080: Are these correct setup of proxy_set_headers? Or did I miss something? For the proxy_pass and upstream, is it just OK as long as two names are the same? Do I need proxy_redirect off;? A: Your configuration looks close. Proxy headers should be fine. Normally Nginx passes headers through, so proxy_set_header is used when you want to modify those - for example forcing the Host header to be present even if the client does not provide one. For the proxy_pass and upstream, yes the names need to match. Consider leaving proxy_redirect on (default). This option modifies whether Nginx interferes with responses like 301 & 302 redirects including the port number. 
Turning it off means that your upsteam application must take responsibility for passing the correct public domain name and port in any redirect responses. Leaving it set to default means that if you accidentally try to direct the client to port 8080, Nginx would in some cases correct it to be port 80 instead. You also did not include the /excluded path in your nginx config. Add that in with location /excluded { return 403; } | Mid | [
0.637305699481865,
30.75,
17.5
] |
Method + Standard Vodka is an American craft spirit made in small batches in North Carolina. Currently being produced in four flavors, Method + Standard Vodka boasts pristine branding that speaks to its artisanal heritage and craft production methods. The packaging of Method + Standard Vodka reflects the carefully formulated product's quality with illustrations of unharvested fruit that speaks to the infused varieties. Additionally, each of the packages features one or two cocktail recipes to make the most out of the product. Method + Standard Vodka is available in original, apple spice, raspberry and strawberry. The vodka is made from gluten-free corn, is quadruple distilled and filtered with charcoal sourced from coconut shells. The fruit infusions provide natural flavoring and color so that the beverage remains free from preservatives or additives. | Mid | [
0.6150341685649201,
33.75,
21.125
] |
How much does a Postdoctoral Associate at University of Pittsburgh make in Pennsylvania? The typical salary for a University of Pittsburgh Pennsylvania Postdoctoral Associate ranges from $30,911-$61,088, with an average salary of $42,674. Salary estimates based on 576 salary report(s) submitted anonymously to Glassdoor by University of Pittsburgh Postdoctoral Associate employees in Pennsylvania or estimated based upon statistical methods. | High | [
0.6907993966817491,
28.625,
12.8125
] |
Certain pool jet fittings are configured to direct filtered water downward or downward at an angle. For example, some devices such as the Paramount DownJet, is a fixed orifice/fixed angle eyeball style return nozzle that directs the stream perpendicular from the water inlet pipe, along the pool wall and continuing onto the pool floor, in order to sweep dirt and debris from the pool walls and pool floor. Another type of right angle nozzle is the Paramount SwingJet, which is a fixed orifice/variable angle down jet that automatically rotates during cycling. Another type of down jet is the Venturi return from Infusion Pool products, also a fixed orifice/fixed angle nozzle. With the advent of energy saving two-speed and variable speed pool pumps that save energy by running at lower speeds, there exists a need to optimize flow patterns for best performance and pump efficiency, saving energy and therefore saving money for the pool owner. Pool pumps typically are operated several hours of the day at high speeds, and consume a large amount of energy. The energy consumption involved during such usage can account for a major portion of a home owner's energy costs. To address this problem, variable speed water pumps have been introduced that can operate at low speeds. When operating at low speeds, however, the pool jet fittings do not perform their functions adequately. The aforementioned devices, with the exception of the Paramount SwingJet, being of the fixed orifice design lack the capability to tune the orifice size for optimal flow and pump efficiency. The SwingJet does have provisions to accept different sized fixed orifices, but changing size during setup is difficult, and would never be considered by the pool owner to accommodate the various pump speeds necessary to facilitate all situations. 
One drawback of any multi-part device in these applications, such as the Paramount SwingJet, is the possibility of sticking or jamming due to debris and/or abrasives getting into the mechanisms. | Mid | [
0.629399585921325,
38,
22.375
] |
Primer Design and Inverse PCR on Yeast Display Antibody Selection Outputs. The display of antibodies on the surface of Saccharomyces cerevisiae cells enables the high-throughput and precise selection of specific binders for the target antigen. The recent implementation of next-generation sequencing (NGS) to antibody display screening provides a complete picture of the entire selected polyclonal population. As such, NGS overcomes the limitations of random clones screening, but it comes with two main limitations: (1) depending upon the platform, the sequencing is usually restricted to the variable heavy chain domain complementary determining region 3 (HCDR3), or VH gene, and does not provide additional information on the rest of the antibody gene, including the VL; and (2) the sequence-identified clones are not physically available for validation. Here, we describe a rapid and effective protocol based on an inverse-PCR method to recover specific antibody clones based on their HCDR3 sequence from a yeast display selection output. | Mid | [
0.6268656716417911,
36.75,
21.875
] |
An adult Christopher Robin, who is now focused on his new life, work, and family, suddenly meets his old friend Winnie the Pooh, who returns to his unforgotten childhood past to help him return to the Hundred Acre Wood and help find Pooh's lost friends. Discover Donna's (Meryl Streep, Lily James) young life, experiencing the fun she had with the three possible dads of Sophie (Amanda Seyfriend). As she reflects on her mom's journey, Sophie finds herself to be more like her mother than she ever even realized. Ethan Hunt and the IMF team join forces with CIA assassin August Walker to prevent a disaster of epic proportions. Arms dealer John Lark and a group of terrorists known as the Apostles plan to use three plutonium cores for a simultaneous nuclear attack on the Vatican, Jerusalem and Mecca, Saudi Arabia. When the weapons go missing, Ethan and his crew find themselves in a desperate race against time to prevent them from falling into the wrong hands. Rated PG-13 for violence and intense sequences of action, and for brief strong language. An adult Christopher Robin, who is now focused on his new life, work, and family, suddenly meets his old friend Winnie the Pooh, who returns to his unforgotten childhood past to help him return to the Hundred Acre Wood and help find Pooh's lost friends. Discover Donna's (Meryl Streep, Lily James) young life, experiencing the fun she had with the three possible dads of Sophie (Amanda Seyfriend). As she reflects on her mom's journey, Sophie finds herself to be more like her mother than she ever even realized. Rated PG-13 for violence and intense sequences of action, and for brief strong language. Ethan Hunt and the IMF team join forces with CIA assassin August Walker to prevent a disaster of epic proportions. Arms dealer John Lark and a group of terrorists known as the Apostles plan to use three plutonium cores for a simultaneous nuclear attack on the Vatican, Jerusalem and Mecca, Saudi Arabia. 
When the weapons go missing, Ethan and his crew find themselves in a desperate race against time to prevent them from falling into the wrong hands. | Low | [
0.528,
33,
29.5
] |
/// /// Phantom by HTML5 UP /// html5up.net | @n33co /// Free for personal and commercial use under the CCA 3.0 license (html5up.net/license) /// /* Footer */ #footer { $gutter: _size(gutter); @include padding(5em, 0, (0, 0, 3em, 0)); background-color: _palette(bg-alt); > .inner { @include vendor('display', 'flex'); @include vendor('flex-wrap', 'wrap'); @include vendor('flex-direction', 'row'); > * > :last-child { margin-bottom: 0; } section:nth-child(1) { width: calc(66% - #{$gutter}); margin-right: $gutter; } section:nth-child(2) { width: calc(33% - #{$gutter}); margin-left: $gutter; } .copyright { width: 100%; padding: 0; margin-top: 5em; list-style: none; font-size: 0.8em; color: transparentize(_palette(fg), 0.5); a { color: inherit; } li { display: inline-block; border-left: solid 1px transparentize(_palette(fg), 0.85); line-height: 1; padding: 0 0 0 1em; margin: 0 0 0 1em; &:first-child { border-left: 0; padding-left: 0; margin-left: 0; } } } } @include breakpoint(large) { $gutter: _size(gutter) * 0.5; @include padding(5em, 0); > .inner { section:nth-child(1) { width: calc(66% - #{$gutter}); margin-right: $gutter; } section:nth-child(2) { width: calc(33% - #{$gutter}); margin-left: $gutter; } } } @include breakpoint(medium) { $gutter: _size(gutter); > .inner { section:nth-child(1) { width: 66%; margin-right: 0; } section:nth-child(2) { width: calc(33% - #{$gutter}); margin-left: $gutter; } } } @include breakpoint(small) { @include padding(3em, 0); > .inner { @include vendor('flex-direction', 'column'); section:nth-child(1) { width: 100%; margin-right: 0; margin: 3em 0 0 0; } section:nth-child(2) { @include vendor('order', '-1'); width: 100%; margin-left: 0; } .copyright { margin-top: 3em; } } } @include breakpoint(xsmall) { > .inner { .copyright { margin-top: 3em; li { border-left: 0; padding-left: 0; margin: 0.75em 0 0 0; display: block; line-height: inherit; &:first-child { margin-top: 0; } } } } } } | Mid | [
0.5704225352112671,
30.375,
22.875
] |
The object of this study is to analyze a series of modern movements within a systematic framework that illuminates especially the quality of "transcendence," by which is meant liberation from the paradoxes of social existence. The scheme that guides selection of movements to be studied is as follows: it is based on the two considerations of (a) type of existential paradox to be transcended and (b) direction that transcendence is to take. I. Paradox: body/mind. Direction: Body to mind. Illustration: Faith healing. Opposition: physicalistic healing. II. Paradox: individual/society. Direction: individual to society. Illustration: Socialism. Opposition: Anarchism. III. Paradox: Nature/culture. Direction: nature to culture. Illustration: technologism. Opposition: naturism. IV. Paradox: Spirit/matter. Direction: Matter to Spirit. Illustration: mysticism. Opposition: reformism. | Mid | [
0.6456043956043951,
29.375,
16.125
] |
Tracheo-arterial erosion complicating tracheostomy. Tracheo-arterial erosion occurred in 5 cases out of 816 tracheostomized patients, i.e. an incidence of 0.6%. The complication is serious and is nearly always fatal. In one case, treatment was successful, but the other four patients died as a result of massive haemorrhage. On the basis of these cases the factors leading to this complication and the possibilities of treatment are discussed. In one case the main cause of innominate artery erosion was the low lying tracheostomy. This patient was rapidly resuscitated, the blood volume was restored, bleeding controlled by direct finger pressure on the innominate artery and an emergency operation was performed immediately. The innominate artery was excluded from circulation and bypassed with an autogenous venous graft. The patient recovered and is doing well after a follow-up of two and half years. | Mid | [
0.641148325358851,
33.5,
18.75
] |
AN army of rats as big as CATS are swarming through the UK, according to officials. The vermin, who have apparently bulked up by feeding off the current bumper crop of potatoes, are resistant to poison. Some rats grow over a FOOT in length, while others have grown thick, lustrous fur to enable them to live in cold storage silos where food is plentiful. Kevin Higgins, of the British Pest Control Association, said rat numbers were soaring by about 15 per cent each year - and the boom was driven by the availability of food. He said: "We see big rats in cold stores. The rats grow a very thick fur. That can make them look bigger. "You might see a rat under a pallet that looks as big as a cat." Pest control services are being swamped with calls to deal with rat infestations thanks to inadequate measures being taken on farms and in city sewers, coupled with greater food resources, like potatoes. | Mid | [
0.5811623246492981,
36.25,
26.125
] |
1. Field of the Invention present invention relates generally to seismic data acquisition apparatus and methods and more particularly to a permanently deployed multicomponent seafloor seismic data acquisition system. 2. Description of the Related Art the oil and gas industry wells are often drilled into underground formations at offshore locations. Once a well is successfully drilled, oil, gas, and other formation fluids can be produced from the formation. It is desirable during production to monitor formation parameters on a relatively continuous basis in order to effectively manage the field. Monitoring is performed using an array of seismic sensors located on the seafloor. Monitoring might be passive or active. In passive monitoring sensors detect seismic events without having the system induce the seismic event by introducing acoustic energy into the earth. Active monitoring as achieved when an acoustic energy source, e.g., an air gun, explosives, etc. is used to induce the seismic event. The acoustic energy is detected by the sensor array and the array output is recorded at a central recorder for later processing and/or assessments of the field parameters. Typical seafloor monitoring systems suffer from several disadvantages. The typical system is not expandable, thus the typical system is usually deployed at the system level and then tested for proper operation. Any failure is detected only at the system level thereby making troubleshooting and repair difficult and costly. Another disadvantage in a non-expandable system is that changes in system layout and size are usually impossible without redesigning the entire system. Moreover, an existing system would require costly rework in order to expand the system. Another disadvantage in the typical seismic monitoring system is high deployment costs. The cables associated with the typical system are large and expensive to deploy. | High | [
0.656641604010025,
32.75,
17.125
] |
require 'test/unit' require './src-gen-umple/door_a' require './src-gen-umple/door_b' require './src-gen-umple/door_c' require './src-gen-umple/door_g' module CruiseAttributesTest class ImmutableTest < Test::Unit::TestCase def test_Immutable door = DoorC.new("1",2,3.4,Date.parse("1978-12-05"),Time.parse("10:11:15"),false,DoorB.new(5)) assert_equal("1",door.get_id) assert_equal false, door.respond_to?("set_id") assert_equal(2,door.get_intId) assert_equal false, door.respond_to?("set_intId") assert_equal(3.4,door.get_doubleId) assert_equal false, door.respond_to?("set_doubleId") assert_equal(Date.parse("1978-12-05"),door.get_dateId) assert_equal false, door.respond_to?("set_dateId") assert_equal(Time.parse("10:11:15"),door.get_timeId) assert_equal false, door.respond_to?("set_timeId") assert_equal(false,door.get_booleanId) assert_equal false, door.respond_to?("set_booleanId") assert_equal(5,door.get_doorId.get_id) assert_equal false, door.respond_to?("set_doorId") end def test_ImmutableInitialized door = DoorA.new assert_equal("1",door.get_id) assert_equal false, door.respond_to?("set_id") assert_equal(2,door.get_intId) assert_equal false, door.respond_to?("set_intId") assert_equal(3.4,door.get_doubleId) assert_equal false, door.respond_to?("set_doubleId") assert_equal(Date.parse("1978-12-05"),door.get_dateId) assert_equal false, door.respond_to?("set_dateId") assert_equal(Time.parse("10:11:15"),door.get_timeId) assert_equal false, door.respond_to?("set_timeId") assert_equal(false,door.get_booleanId) assert_equal false, door.respond_to?("set_booleanId") assert_equal(5,door.get_doorId.get_id) assert_equal false, door.respond_to?("set_doorId") end def test_LazyImmutable door = DoorG.new #assert_nil(door.get_doorId) assert_nil(door.get_dateId) assert_nil(door.get_timeId) doorId = DoorB.new(5) assert_equal(true,door.set_doorId(doorId)) assert_equal(true,door.set_dateId(Date.parse("1978-12-05"))) assert_equal(true,door.set_timeId(Time.parse("10:11:15"))) 
assert_equal(door.get_doorId,doorId) assert_equal(door.get_dateId,Date.parse("1978-12-05")) assert_equal(door.get_timeId,Time.parse("10:11:15")) assert_equal(false,door.set_doorId(DoorB.new(5))) assert_equal(false,door.set_dateId(Date.parse("1978-12-05"))) assert_equal(false,door.set_timeId(Time.parse("10:11:15"))) end end end | Mid | [
0.586666666666666,
38.5,
27.125
] |
Ethics and federal laws about human subject protections have evolved to protect research participants in general and vulnerable groups in particular. Under federal law, vulnerable groups include pregnant women;fetuses, neonates, and children;and prisoners. According to ethicists, vulnerable groups also include individuals who suffer from impairment due to mental illness, stigmatized medical illness, and other debilitating disorders. Given these definitions, prisoners with mental illnesses are multiply vulnerable. With the number of prisoners with serious psychiatric disorders exceeding the number of patients in psychiatric hospitals, jails and prisons have become "America's new mental hospitals" (Torrey, 1995) and for many individuals with mental illness, inpatient psychiatric care is now provided in jails and prisons (Lamb &Weinberger, 2005). Research with prisoners, especially those with the added vulnerability of mental illness, poses ethical challenges and responsibilities, yet there have been no empirical studies of the interpretation and application of ethical principles and regulatory safeguards by researchers and IRBs involved in mental health research with prisoners. This application focuses on 1) how researchers and IRB members interpret and apply ethical principles of autonomy, justice, and beneficence in mental health research with prisoners;2) how researchers and IRB members interpret and apply regulatory safeguards for mental health research with prisoners;3) ways in which policies and structural environments of correctional systems (including prisons and jails) create ethical challenges that must be addressed by mental health researchers and IRBs;and 4) ways in which ethical safeguards and oversight affect mental health research with prisoners. This project will use sequential qualitative and quantitative phases to examine ethical challenges, responsibilities, and solutions regarding the conduct and oversight of mental health research with prisoners. 
Phase 1 involves key informant interviews with individuals who have conducted mental health research with prisoners;IRB Chairs and members;IRB prisoner representatives;prison administrators;and research ethicists. Using these data, Phase 2 focuses on construction of a quantitative survey about ethical challenges and solutions. In Phase 3, this survey is administered to a national sample of mental health researchers, IRB chairs and members, and IRB prisoner representatives. In Phase 4, findings from prior phases are taken to prison administrators, security officials, and medical officials, and prison advocacy group members, to gather their perspectives on the meaning and implications of these findings. Phase 5 focuses on initiating a data-driven dialogue through dissemination of research findings to important stakeholder groups, including researchers;IRB chairs, members, and prison representatives;and prison administrators, medical staff, and security personnel to result in enhanced application of ethics and regulatory safeguards to mental health research with prisoners, with the goal of reducing barriers to epidemiologic and intervention research focused on mental illness among prisoners. PUBLIC HEALTH RELEVANCE: Prisons have become "America's new mental hospitals", housing, but rarely treating, as many individuals with mental disorders as all psychiatric hospitals in the US combined. By providing an empirical basis with which to strengthen ethical safeguards for mental health research in correctional systems, this project will contribute to the enhancement of research-based treatment for prisoners with mental illness. Of great relevance to public health, our work will provide ethical guidance to researchers, IRBs, and prison officials;enhance the quality, conduct, and range of mental health research in prison populations;and, ultimately, enhance the health and safety of prisoners and the communities to which they return upon release. | Mid | [
0.654292343387471,
35.25,
18.625
] |
Question marks abound at almost every position going into the 2013 Knoxville Football season, and the offensive and defensive lines have some of the most intense competition for starting spots. Coach Troy Rider says seven or more guys are fighting for a spot along the Panthers’ front line. Rider says the only spot that is nailed down for sure is the center spot with Austin Ramsey, however, all of the other positions are up for grabs. Rider and the rest of the coaching staff hopes to answer some of the questions tonight when the Panthers head to Fairfield for a preseason scrimmage with the Trojans. This is the first time Knoxville has scheduled a preseason scrimmage with another team. Rider says Fairfield is similar to Knoxville in that the Trojans also have key positions to fill before next week’s season kick-off. | Mid | [
0.5991189427312771,
34,
22.75
] |
# =XMPP4R - XMPP Library for Ruby # License:: Ruby's license (see the LICENSE file) or GNU GPL, at your option. # Website::http://xmpp4r.github.io require 'xmpp4r/muc/item' module Jabber module MUC class IqQueryMUCAdminItem < MUC::UserItem name_xmlns 'item', 'http://jabber.org/protocol/muc#admin' def initialize(affiliation=nil, role=nil, jid=nil) super() set_affiliation(affiliation) set_role(role) set_jid(jid) end end end end | Low | [
0.535469107551487,
29.25,
25.375
] |
// Mantid Repository : https://github.com/mantidproject/mantid // // Copyright © 2018 ISIS Rutherford Appleton Laboratory UKRI, // NScD Oak Ridge National Laboratory, European Spallation Source, // Institut Laue - Langevin & CSNS, Institute of High Energy Physics, CAS // SPDX - License - Identifier: GPL - 3.0 + #include "MantidAlgorithms/UnwrapSNS.h" #include "MantidAPI/HistogramValidator.h" #include "MantidAPI/InstrumentValidator.h" #include "MantidAPI/RawCountValidator.h" #include "MantidAPI/SpectrumInfo.h" #include "MantidAPI/WorkspaceFactory.h" #include "MantidAPI/WorkspaceUnitValidator.h" #include "MantidDataObjects/EventList.h" #include "MantidGeometry/Instrument.h" #include "MantidKernel/BoundedValidator.h" #include "MantidKernel/CompositeValidator.h" #include "MantidKernel/PhysicalConstants.h" #include "MantidKernel/UnitFactory.h" #include <limits> namespace Mantid { namespace Algorithms { DECLARE_ALGORITHM(UnwrapSNS) using namespace Kernel; using namespace API; using DataObjects::EventWorkspace; using std::size_t; /// Default constructor UnwrapSNS::UnwrapSNS() : m_conversionConstant(0.), m_inputWS(), m_inputEvWS(), m_LRef(0.), m_Tmin(0.), m_Tmax(0.), m_frameWidth(0.), m_numberOfSpectra(0), m_XSize(0) {} /// Initialisation method void UnwrapSNS::init() { auto wsValidator = std::make_shared<CompositeValidator>(); wsValidator->add<WorkspaceUnitValidator>("TOF"); wsValidator->add<HistogramValidator>(); wsValidator->add<RawCountValidator>(); wsValidator->add<InstrumentValidator>(); declareProperty(std::make_unique<WorkspaceProperty<MatrixWorkspace>>( "InputWorkspace", "", Direction::Input, wsValidator), "Contains numbers counts against time of flight (TOF)."); declareProperty(std::make_unique<WorkspaceProperty<MatrixWorkspace>>( "OutputWorkspace", "", Direction::Output), "This workspace will be in the units of time of flight. 
(See " "http://www.mantidproject.org/Units)"); auto validator = std::make_shared<BoundedValidator<double>>(); validator->setLower(0.01); declareProperty("LRef", 0.0, validator, "A distance at which it is possible to deduce if a particle " "is from the current or a past frame based on its arrival " "time. This time criterion can be set with the property " "below e.g. correct when arrival time < Tmin."); validator->setLower(0.01); declareProperty("Tmin", Mantid::EMPTY_DBL(), validator, "With LRef this defines the maximum speed expected for " "particles. For each count or time bin the mean particle " "speed is calculated and if this is greater than LRef/Tmin " "its TOF is corrected."); validator->setLower(0.01); declareProperty("Tmax", Mantid::EMPTY_DBL(), validator, "The maximum time of flight of the data used for the width " "of the frame. If not set the maximum time of flight of the " "data is used."); // Calculate and set the constant factor for the conversion to wavelength const double TOFisinMicroseconds = 1e6; const double toAngstroms = 1e10; m_conversionConstant = (PhysicalConstants::h * toAngstroms) / (PhysicalConstants::NeutronMass * TOFisinMicroseconds); } /** Executes the algorithm * @throw std::runtime_error if the workspace is invalid or a child algorithm *fails * @throw Kernel::Exception::InstrumentDefinitionError if detector, source or *sample positions cannot be calculated * */ void UnwrapSNS::exec() { // Get the input workspace m_inputWS = getProperty("InputWorkspace"); // Get the "reference" flightpath (currently passed in as a property) m_LRef = getProperty("LRef"); m_XSize = static_cast<int>(m_inputWS->x(0).size()); m_numberOfSpectra = static_cast<int>(m_inputWS->getNumberHistograms()); g_log.debug() << "Number of spectra in input workspace: " << m_numberOfSpectra << "\n"; // go off and do the event version if appropriate m_inputEvWS = std::dynamic_pointer_cast<const EventWorkspace>(m_inputWS); if ((m_inputEvWS != nullptr)) // && ! 
this->getProperty("ForceHist")) // TODO // remove ForceHist option { this->execEvent(); return; } this->getTofRangeData(false); // set up the progress bar m_progress = std::make_unique<Progress>(this, 0.0, 1.0, m_numberOfSpectra); MatrixWorkspace_sptr outputWS = getProperty("OutputWorkspace"); if (outputWS != m_inputWS) { outputWS = WorkspaceFactory::Instance().create(m_inputWS, m_numberOfSpectra, m_XSize, m_XSize - 1); setProperty("OutputWorkspace", outputWS); } // without the primary flight path the algorithm cannot work const auto &spectrumInfo = m_inputWS->spectrumInfo(); const double L1 = spectrumInfo.l1(); PARALLEL_FOR_IF(Kernel::threadSafe(*m_inputWS, *outputWS)) for (int workspaceIndex = 0; workspaceIndex < m_numberOfSpectra; workspaceIndex++) { PARALLEL_START_INTERUPT_REGION if (!spectrumInfo.hasDetectors(workspaceIndex)) { // If the detector flightpath is missing, zero the data g_log.debug() << "Detector information for workspace index " << workspaceIndex << " is not available.\n"; outputWS->setSharedX(workspaceIndex, m_inputWS->sharedX(workspaceIndex)); outputWS->mutableY(workspaceIndex) = 0.0; outputWS->mutableE(workspaceIndex) = 0.0; } else { const double Ld = L1 + spectrumInfo.l2(workspaceIndex); // fix the x-axis std::vector<double> timeBins; size_t pivot = this->unwrapX(m_inputWS->x(workspaceIndex), timeBins, Ld); outputWS->setBinEdges(workspaceIndex, std::move(timeBins)); pivot++; // one-off difference between x and y // fix the counts using the pivot point auto &yIn = m_inputWS->y(workspaceIndex); auto &yOut = outputWS->mutableY(workspaceIndex); auto lengthFirstPartY = std::distance(yIn.begin() + pivot, yIn.end()); std::copy(yIn.begin() + pivot, yIn.end(), yOut.begin()); std::copy(yIn.begin(), yIn.begin() + pivot, yOut.begin() + lengthFirstPartY); // fix the uncertainties using the pivot point auto &eIn = m_inputWS->e(workspaceIndex); auto &eOut = outputWS->mutableE(workspaceIndex); auto lengthFirstPartE = std::distance(eIn.begin() + pivot, 
eIn.end()); std::copy(eIn.begin() + pivot, eIn.end(), eOut.begin()); std::copy(eIn.begin(), eIn.begin() + pivot, eOut.begin() + lengthFirstPartE); } m_progress->report(); PARALLEL_END_INTERUPT_REGION } PARALLEL_CHECK_INTERUPT_REGION m_inputWS.reset(); this->runMaskDetectors(); } void UnwrapSNS::execEvent() { // set up the output workspace MatrixWorkspace_sptr matrixOutW = this->getProperty("OutputWorkspace"); if (matrixOutW != m_inputWS) { matrixOutW = m_inputWS->clone(); setProperty("OutputWorkspace", matrixOutW); } auto outW = std::dynamic_pointer_cast<EventWorkspace>(matrixOutW); // set up the progress bar m_progress = std::make_unique<Progress>(this, 0.0, 1.0, m_numberOfSpectra * 2); // algorithm assumes the data is sorted so it can jump out early outW->sortAll(Mantid::DataObjects::TOF_SORT, m_progress.get()); this->getTofRangeData(true); // without the primary flight path the algorithm cannot work const auto &spectrumInfo = m_inputWS->spectrumInfo(); const double L1 = spectrumInfo.l1(); // do the actual work for (int workspaceIndex = 0; workspaceIndex < m_numberOfSpectra; workspaceIndex++) { std::size_t numEvents = outW->getSpectrum(workspaceIndex).getNumberEvents(); double Ld = -1.0; if (spectrumInfo.hasDetectors(workspaceIndex)) Ld = L1 + spectrumInfo.l2(workspaceIndex); std::vector<double> time_bins; if (outW->x(0).size() > 2) { this->unwrapX(m_inputWS->x(workspaceIndex), time_bins, Ld); outW->setBinEdges(workspaceIndex, std::move(time_bins)); } else { outW->setSharedX(workspaceIndex, m_inputWS->sharedX(workspaceIndex)); } if (numEvents > 0) { std::vector<double> times(numEvents); outW->getSpectrum(workspaceIndex).getTofs(times); double filterVal = m_Tmin * Ld / m_LRef; for (size_t j = 0; j < numEvents; j++) { if (times[j] < filterVal) times[j] += m_frameWidth; else break; // stop filtering } outW->getSpectrum(workspaceIndex).setTofs(times); } m_progress->report(); } outW->clearMRU(); this->runMaskDetectors(); } int UnwrapSNS::unwrapX(const 
Mantid::HistogramData::HistogramX &datain, std::vector<double> &dataout, const double &Ld) { std::vector<double> tempX_L; // lower half - to be frame wrapped tempX_L.reserve(m_XSize); tempX_L.clear(); std::vector<double> tempX_U; // upper half - to not be frame wrapped tempX_U.reserve(m_XSize); tempX_U.clear(); double filterVal = m_Tmin * Ld / m_LRef; dataout.clear(); int specialBin = 0; for (int bin = 0; bin < m_XSize; ++bin) { // This is the time-of-flight value under consideration in the current // iteration of the loop const double tof = datain[bin]; if (tof < filterVal) { tempX_L.emplace_back(tof + m_frameWidth); // Record the bins that fall in this range for copying over the data & // errors if (specialBin < bin) specialBin = bin; } else { tempX_U.emplace_back(tof); } } // loop over X values // now put it back into the vector supplied dataout.clear(); dataout.insert(dataout.begin(), tempX_U.begin(), tempX_U.end()); dataout.insert(dataout.end(), tempX_L.begin(), tempX_L.end()); assert(datain.size() == dataout.size()); return specialBin; } void UnwrapSNS::getTofRangeData(const bool isEvent) { // get the Tmin/Tmax properties m_Tmin = this->getProperty("Tmin"); m_Tmax = this->getProperty("Tmax"); // if either the values are not specified by properties, find them from the // data double empty = Mantid::EMPTY_DBL(); if ((m_Tmin == empty) || (m_Tmax == empty)) { // get data min/max values double dataTmin; double dataTmax; if (isEvent) { m_inputEvWS->sortAll(DataObjects::TOF_SORT, nullptr); m_inputEvWS->getEventXMinMax(dataTmin, dataTmax); } else { m_inputWS->getXMinMax(dataTmin, dataTmax); } // fix the unspecified values if (m_Tmin == empty) { m_Tmin = dataTmin; } if (m_Tmax == empty) { m_Tmax = dataTmax; } } // check the frame width m_frameWidth = m_Tmax - m_Tmin; g_log.information() << "Frame range in microseconds is: " << m_Tmin << " - " << m_Tmax << "\n"; if (m_Tmin < 0.) 
throw std::runtime_error("Cannot have Tmin less than zero"); if (m_Tmin > m_Tmax) throw std::runtime_error("Have case of Tmin > Tmax"); g_log.information() << "Wavelength cuttoff is : " << (m_conversionConstant * m_Tmin / m_LRef) << "Angstrom, Frame width is: " << m_frameWidth << "microseconds\n"; } void UnwrapSNS::runMaskDetectors() { IAlgorithm_sptr alg = createChildAlgorithm("MaskDetectors"); alg->setProperty<MatrixWorkspace_sptr>("Workspace", this->getProperty("OutputWorkspace")); alg->setProperty<MatrixWorkspace_sptr>("MaskedWorkspace", this->getProperty("InputWorkspace")); if (!alg->execute()) throw std::runtime_error( "MaskDetectors Child Algorithm has not executed successfully"); } } // namespace Algorithms } // namespace Mantid | Mid | [
0.6029106029106021,
36.25,
23.875
] |
<?xml version="1.0" encoding="UTF-8"?> <!-- * See the NOTICE file distributed with this work for additional * information regarding copyright ownership. * * This is free software; you can redistribute it and/or modify it * under the terms of the GNU Lesser General Public License as * published by the Free Software Foundation; either version 2.1 of * the License, or (at your option) any later version. * * This software is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * Lesser General Public License for more details. * * You should have received a copy of the GNU Lesser General Public * License along with this software; if not, write to the Free * Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA * 02110-1301 USA, or see the FSF site: http://www.fsf.org. --> <xwikidoc version="1.2" reference="XWiki.SolrDocumentDoesNotExistUIX" locale=""> <web>XWiki</web> <name>SolrDocumentDoesNotExistUIX</name> <language/> <defaultLanguage/> <translation>0</translation> <creator>xwiki:XWiki.Admin</creator> <parent>XWiki.SearchCode</parent> <author>xwiki:XWiki.Admin</author> <contentAuthor>xwiki:XWiki.Admin</contentAuthor> <version>1.1</version> <title/> <comment/> <minorEdit>false</minorEdit> <syntaxId>xwiki/2.1</syntaxId> <hidden>true</hidden> <content>UI Extension, implemented with Solr, used to displays similar pages based on the current document's reference, when the current document does not exist.</content> <object> <name>XWiki.SolrDocumentDoesNotExistUIX</name> <number>0</number> <className>XWiki.StyleSheetExtension</className> <guid>ec0be50e-3359-44cc-b48f-cbc98fcfbed2</guid> <class> <name>XWiki.StyleSheetExtension</name> <customClass/> <customMapping/> <defaultViewSheet/> <defaultEditSheet/> <defaultWeb/> <nameField/> <validationScript/> <cache> <cache>0</cache> <disabled>0</disabled> <displayType>select</displayType> 
<multiSelect>0</multiSelect> <name>cache</name> <number>6</number> <prettyName>Caching policy</prettyName> <relationalStorage>0</relationalStorage> <separator> </separator> <separators>|, </separators> <size>1</size> <unmodifiable>0</unmodifiable> <values>long|short|default|forbid</values> <classType>com.xpn.xwiki.objects.classes.StaticListClass</classType> </cache> <code> <disabled>0</disabled> <editor>PureText</editor> <name>code</name> <number>3</number> <prettyName>Code</prettyName> <rows>20</rows> <size>50</size> <unmodifiable>0</unmodifiable> <classType>com.xpn.xwiki.objects.classes.TextAreaClass</classType> </code> <contentType> <cache>0</cache> <disabled>0</disabled> <displayType>select</displayType> <multiSelect>0</multiSelect> <name>contentType</name> <number>1</number> <prettyName>Content Type</prettyName> <relationalStorage>0</relationalStorage> <separator> </separator> <separators>|, </separators> <size>1</size> <unmodifiable>0</unmodifiable> <values>CSS|LESS</values> <classType>com.xpn.xwiki.objects.classes.StaticListClass</classType> </contentType> <name> <disabled>0</disabled> <name>name</name> <number>2</number> <prettyName>Name</prettyName> <size>30</size> <unmodifiable>0</unmodifiable> <classType>com.xpn.xwiki.objects.classes.StringClass</classType> </name> <parse> <disabled>0</disabled> <displayFormType>select</displayFormType> <displayType>yesno</displayType> <name>parse</name> <number>5</number> <prettyName>Parse content</prettyName> <unmodifiable>0</unmodifiable> <classType>com.xpn.xwiki.objects.classes.BooleanClass</classType> </parse> <use> <cache>0</cache> <disabled>0</disabled> <displayType>select</displayType> <multiSelect>0</multiSelect> <name>use</name> <number>4</number> <prettyName>Use this extension</prettyName> <relationalStorage>0</relationalStorage> <separator> </separator> <separators>|, </separators> <size>1</size> <unmodifiable>0</unmodifiable> <values>currentPage|onDemand|always</values> 
<classType>com.xpn.xwiki.objects.classes.StaticListClass</classType> </use> </class> <property> <cache>long</cache> </property> <property> <code>.docdoesnotexist-solr-container { margin-top: 1em; }</code> </property> <property> <contentType>CSS</contentType> </property> <property> <name>Style</name> </property> <property> <parse>0</parse> </property> <property> <use>onDemand</use> </property> </object> <object> <name>XWiki.SolrDocumentDoesNotExistUIX</name> <number>0</number> <className>XWiki.UIExtensionClass</className> <guid>13e97fcf-124a-45ae-a0b5-aefc8307a0c1</guid> <class> <name>XWiki.UIExtensionClass</name> <customClass/> <customMapping/> <defaultViewSheet/> <defaultEditSheet/> <defaultWeb/> <nameField/> <validationScript/> <content> <disabled>0</disabled> <editor/> <name>content</name> <number>3</number> <prettyName>Extension Content</prettyName> <rows>10</rows> <size>40</size> <unmodifiable>0</unmodifiable> <classType>com.xpn.xwiki.objects.classes.TextAreaClass</classType> </content> <extensionPointId> <disabled>0</disabled> <name>extensionPointId</name> <number>1</number> <prettyName>Extension Point ID</prettyName> <size>30</size> <unmodifiable>0</unmodifiable> <classType>com.xpn.xwiki.objects.classes.StringClass</classType> </extensionPointId> <name> <disabled>0</disabled> <name>name</name> <number>2</number> <prettyName>Extension ID</prettyName> <size>30</size> <unmodifiable>0</unmodifiable> <classType>com.xpn.xwiki.objects.classes.StringClass</classType> </name> <parameters> <disabled>0</disabled> <editor/> <name>parameters</name> <number>4</number> <prettyName>Extension Parameters</prettyName> <rows>10</rows> <size>40</size> <unmodifiable>0</unmodifiable> <classType>com.xpn.xwiki.objects.classes.TextAreaClass</classType> </parameters> <scope> <cache>0</cache> <disabled>0</disabled> <displayType>select</displayType> <multiSelect>0</multiSelect> <name>scope</name> <number>5</number> <prettyName>Extension Scope</prettyName> 
<relationalStorage>0</relationalStorage> <separator> </separator> <separators>|, </separators> <size>1</size> <unmodifiable>0</unmodifiable> <values>wiki=Current Wiki|user=Current User|global=Global</values> <classType>com.xpn.xwiki.objects.classes.StaticListClass</classType> </scope> </class> <property> <content>{{velocity}} #set ($docName = $doc.pageReference.name) ## ## Handle the case when there are spaces in the doc name #set ($docNameWords = $stringtool.split($docName, ' ')) #set ($docNameSplit = "${stringtool.join($docNameWords, ',')}") #set ($docNameSplitFuzzy = "$!{stringtool.join($docNameWords, '~,')}~") #set ($docNameSplitWildcard = "*$!{stringtool.join($docNameWords, '*,*')}*") ## ## Extract the space reference elements #set ($spaceReferenceStrings = "") #set ($spaceReferenceFuzzyStrings = "") #set ($spaceReferenceWildcardStrings = "") #set ($spaceReferences = $doc.documentReference.spaceReferences) #foreach ($spaceReference in $spaceReferences) #if ($foreach.count > 1) #set ($spaceReferenceStrings = "$!{spaceReferenceStrings}, ") #set ($spaceReferenceFuzzyStrings = "$!{spaceReferenceFuzzyStrings}, ") #set ($spaceReferenceWildcardStrings = "$!{spaceReferenceWildcardStrings}, ") #end ## Note: Also handling possible space characters in the space name. #set ($spaceNameWords = $stringtool.split(${spaceReference.name}, ' ')) #set ($spaceReferenceStrings = "$!{spaceReferenceStrings}${stringtool.join($spaceNameWords, ',')}") #set ($spaceReferenceFuzzyStrings = "$!{spaceReferenceFuzzyStrings}${stringtool.join($spaceNameWords, '~,')}~") #set ($spaceReferenceWildcardStrings = "$!{spaceReferenceWildcardStrings}*${stringtool.join($spaceNameWords, '*,*')}*") #end ## ## Build the query string, with various usecases supported. ## TODO: Add better scoring boosts to properly favor one usecase over another or multiple usecases happening at the same time. 
## ## Non-terminal doc with the same name as the current doc #set ($suggestionsQueryString = "(spaces:($docNameSplit, $docNameSplitFuzzy, $docNameSplitWildcard) AND name_exact:WebHome)") ## Terminal doc with the same name as the current doc #set ($suggestionsQueryString = "${suggestionsQueryString} OR name:($docNameSplit, $docNameSplitFuzzy, $docNameSplitWildcard)") ## Document in a space named like the current terminal doc, if it is the case. #if ($doc.documentReference.name != 'WebHome') #set ($suggestionsQueryString = "${suggestionsQueryString} OR spaces:($docNameSplit, $docNameSplitFuzzy, $docNameSplitWildcard)") #end ## Document in a space named like on of the spaces of the current doc. #set ($suggestionsQueryString = "${suggestionsQueryString} OR spaces:($spaceReferenceStrings, $spaceReferenceFuzzyStrings, $spaceReferenceWildcardStrings)") ## ## Build and run the Solr query. #set ($suggestionsQuery = $services.query.createQuery($suggestionsQueryString, "solr")) #set ($filterQuery = ['type:"DOCUMENT"', "locale:(""$xcontext.locale"" OR """")"]) #if ($xwiki.getUserPreference('displayHiddenDocuments') != 1) #set ($discard = $filterQuery.add('hidden:false')) #end #set ($discard = $suggestionsQuery.bindValue('fq', $filterQuery)) #set ($discard = $suggestionsQuery.setLimit(10)) ## #set ($suggestionsResponse = $suggestionsQuery.execute()[0]) #set ($suggestionResults = $suggestionsResponse.results) ## ## Display the suggestions, if any. 
#if ($suggestionResults.size() > 0) #set ($discard = $xwiki.ssx.use('XWiki.SolrDocumentDoesNotExistUIX')) {{html clean='false'}} <div class='docdoesnotexist-solr-container'> <p><b>$services.localization.render('solr.uix.docdoesnotexist.title')</b></p> <ul> #template('hierarchy_macros.vm') #foreach ($suggestionResult in $suggestionResults) #set ($suggestionResultsDocReference = $services.solr.resolveDocument($suggestionResult)) #set ($suggestionDocument = $xwiki.getDocument($suggestionResultsDocReference)) <li><a href="$escapetool.xml($suggestionDocument.getURL())">#hierarchy($suggestionResultsDocReference, {'plain' : true, 'local' : true})</a></li> #end </ul> </div> {{/html}} #end {{/velocity}} </content> </property> <property> <extensionPointId>org.xwiki.platform.search.ui.docdoesnotexist</extensionPointId> </property> <property> <name>org.xwiki.platform.search.solr.ui.docdoesnotexist</name> </property> <property> <parameters>order=10000</parameters> </property> <property> <scope>wiki</scope> </property> </object> </xwikidoc> | Low | [
0.477083333333333,
28.625,
31.375
] |
Q: Compare installed rpm packages with available? I would like to find the vmware packages that I haven't installed from their repo. The problem is that the output of yum search vmware is not the same format at from rpm -qa|grep vmware. Question How Can I make a diff of the installed and available rpm packages? A: You need repoquery. It is in the yum-utils package. repoquery 'vmware*' shows all available packages named beginning with vmware. repoquery --pkgnarrow=installed 'vmware*' shows installed packages named beginning with vmware. It is trivial to then compare the output of these commands. | High | [
0.6798418972332011,
32.25,
15.1875
] |
I am liking this approach. Two different current transformers, one differential transformer for the hot and neutral, a second current transformer for just the ground lead. If hot and neutral are mismatched by 6 mA, or if the ground lead carries 6 mA, a 3 pole relay or latching switch opens all three conductors. (UL might insist on breaking the hot-neutral before breaking the EGC which sounds more expensive.) JR, I'm pretty sure that a single current transformer with all three wires (H-N-G) running though it would detect any external fault leakage, whether outgoing from the guitar's hot chassis, or incoming from a hot mic. At least that's how I'm drawing it and following the paths inside my head. Easy enough to try with a clamp ammeter and a few "leak" resistors. JR, I'm pretty sure that a single current transformer with all three wires (H-N-G) running though it would detect any external fault leakage, whether outgoing from the guitar's hot chassis, or incoming from a hot mic. At least that's how I'm drawing it and following the paths inside my head. Easy enough to try with a clamp ammeter and a few "leak" resistors. I thought about that before and yes, it should detect an external fault current very inexpensively but it might interfere with detecting an internal flaky guitar amp leaking hot to it's own EGC since they would still null out(?). I think it needs to be two separate current transformers, but I am open for all suggestions. I think it needs to be two separate current transformers, but I am open for all suggestions. You could be right. There's a lot of different possible failure modes to consider. And throwing a possible RPBG outlet into the mix complicates things even further. But I'm a firm believer in the logic that if it CAN happen, then it WILL happen. 
The other concern would be an amp without an egc or missing a ground pin with a fault causing it to have hot chassis being used with another grounded piece of gear also supplied by the "protective device"-if one CT is used a hot-egc fault will not be detected. A separate CT will trip on EGC current regardless of where it comes from. I have to believe that adding another CT to an existing GFCI design should be a minimal cost, the 3 pole 20 amp relay is the tough part to get around as far as cost-but I can't figure a good way around that-Solid state might save cost, but I don't trust it for a safety disconnect. What I am not familiar with is the internal testing modern GFCIs do-what makes them know not to rest if they are defective? That might throw a monkey wrench in things-I am guessing the UL guys will want to maintain that standard. The other concern would be an amp without an egc or missing a ground pin with a fault causing it to have hot chassis being used with another grounded piece of gear also supplied by the "protective device"-if one CT is used a hot-egc fault will not be detected. A separate CT will trip on EGC current regardless of where it comes from. Lets hope Quote I have to believe that adding another CT to an existing GFCI design should be a minimal cost, I suspect every penny counts in these things. Quote the 3 pole 20 amp relay is the tough part to get around as far as cost-but I can't figure a good way around that-Solid state might save cost, but I don't trust it for a safety disconnect. I'm with you.. I don't trust solid state for complete isolation. I've looked at some patents from a guy with a few GFCI designs and he used a latching relay with two contacts in one... it seems a third contact is not huge. I tried to explain the issues as I understand them to him, I expect him to know the cheapest way to do it. Problem still is that I don't see a market large enough to justify too much cost/effort. 
Quote What I am not familiar with is the internal testing modern GFCIs do-what makes them know not to rest if they are defective? That might throw a monkey wrench in things-I am guessing the UL guys will want to maintain that standard. Another tidbit, I suspect the UL guys will be reluctant to give up ground bonding, but if they do they will probably want to delay releasing the ground bond until after the hot and neutral is already open. Lets hope I get a serious answer... it would be nice to come up with an effective solution for this. Another tidbit, I suspect the UL guys will be reluctant to give up ground bonding, but if they do they will probably want to delay releasing the ground bond until after the hot and neutral is already open. I hope that UL will allow an exception for a EGC contact in the AC power, but if that's not possible then remember that a double-pole/mag-set reed relay in the guitar's signal cable would accomplish the same disconnect for the guitarist. However, it would NOT eliminate the shock hazard from someone touching the hot chassis of the guitar amp and a grounded object. So you would still need a standard GFCI powering the amp for that sort of fault protection. ....., but if that's not possible then remember that a double-pole/mag-set reed relay in the guitar's signal cable would accomplish the same disconnect for the guitarist. However, it would NOT eliminate the shock hazard from someone touching the hot chassis of the guitar amp .... which could lead back to my idea of a D.I. based solution. Even though I use a mic for the guitar, I always have a free strip or three somewhere that could supply phantom power to a couple of "Guitar No Shock" boxes.I always use volt-alert, but in the heat and confusion, shit happens that I may not catch. I haven't killed anyone yet, and have no intention of ever letting it happen. I have a friend who makes guitar pedals (amptweaker.com) and I still plan to ask him what he thinks. 
He's an actual design engineer and since his pedals have battery power he could probably add a latching relay protection inside a pedal, while he may also have a feel for how much (little) guitar players are willing to pay for the extra human safety. which could lead back to my idea of a D.I. based solution. Even though I use a mic for the guitar, I always have a free strip or three somewhere that could supply phantom power to a couple of "Guitar No Shock" boxes.I always use volt-alert, but in the heat and confusion, shit happens that I may not catch. I haven't killed anyone yet, and have no intention of ever letting it happen. Just to run out this hypothetical if you have a console grounded mic pointed at the guitar cabinet and the player with guitar in hand touches that mic for any reason he will be putting himself between two EGC systems and exposed to potential shock hazard. I have a friend who makes guitar pedals (amptweaker.com) and I still plan to ask him what he thinks. He's an actual design engineer and since his pedals have battery power he could probably add a latching relay protection inside a pedal, while he may also have a feel for how much (little) guitar players are willing to pay for the extra human safety. I need to ask him today, while hopefully he is busy with christmas sales. JR Exactly.... but my napkin design suggests that a current transformer just might have enough output current to open up a latching reed relay. And a permanent magnet on a push button could "reset" the latching relay if it trips. If that's indeed the case, then this device could fit in a plastic in-line box that connects between the guitar cable and the amplifier. If (and this is a big IF) there's enough current flow from the current sensing transformer to open up the latching relay without amplification, then there are no batteries required. I'm going to see if I can get a few current transformers and latching reed relays to play with. 
However, if this was something that could be built and sold for $50 at a profit, would that be too much money? Or is $30 a more acceptable price? Just remember that for this to happen there has to be a certain amount of profit in building and selling it. | Low | [
0.5337552742616031,
31.625,
27.625
] |
F I L E D United States Court of Appeals Tenth Circuit UNITED STATES COURT OF APPEALS FEB 4 1999 FOR THE TENTH CIRCUIT PATRICK FISHER Clerk UNITED STATES OF AMERICA, Plaintiff-Appellee, v. No. 98-6018 (D.C. No. 97-CV-657) CHARLES EDWARD MCINTYRE, (W.D. Okla.) Defendant-Appellant. ORDER AND JUDGMENT * Before PORFILIO , BALDOCK , and HENRY , Circuit Judges. After examining the briefs and appellate record, this panel has determined unanimously that oral argument would not materially assist the determination of this appeal. See Fed. R. App. P. 34(a)(2); 10th Cir. R. 34.1(G). The case is therefore ordered submitted without oral argument. * This order and judgment is not binding precedent, except under the doctrines of law of the case, res judicata, and collateral estoppel. The court generally disfavors the citation of orders and judgments; nevertheless, an order and judgment may be cited under the terms and conditions of 10th Cir. R. 36.3. Defendant Charles Edward McIntyre, a federal inmate appearing pro se, seeks a certificate of appealability to appeal the district court’s denial of his 28 U.S.C. § 2255 motion to vacate, set aside, or correct his sentence. We conclude defendant has not made a substantial showing of the denial of a constitutional right, as required by 28 U.S.C. § 2253(c). Accordingly, we deny his request for a certificate of appealability and dismiss the appeal. Defendant was convicted of conspiracy to distribute cocaine and cocaine base in violation of 21 U.S.C. § 846, possession of cocaine with intent to distribute in violation of 21 U.S.C. § 841(a)(1), and traveling and causing travel in interstate commerce to facilitate the distribution and possession of cocaine and cocaine base with intent to distribute in violation of 18 U.S.C. § 1952(a)(3) and 18 U.S.C. § 2. He was sentenced to life imprisonment on two of the counts, to 480 months on one of the possession counts, and to 60 months on the remaining counts. 
Defendant appealed his conviction, raising 25 allegations of error. See United States v. McIntyre , 997 F.2d 687 (10th Cir. 1993). His conviction was affirmed. See id. Defendant filed this § 2255 petition in 1997 alleging ineffective assistance of counsel. During opening statement and closing argument, defendant’s trial counsel admitted that defendant had been stopped at an airport with cash in his possession, that during a subsequent airport stop, police had found glass beakers similar to the kind used to cook crack cocaine in defendant’s luggage, and that defendant was later arrested in a hotel room with cocaine in his underwear. Defendant contends that, by these admissions, his counsel conceded his guilt, depriving him of effective assistance of counsel, and that prejudice should be presumed from such conduct. The district court rejected this argument, concluding that defendant’s claim of ineffective assistance of counsel fails under Strickland v. Washington , 466 U.S. 668 (1984), because his counsel made a strategic decision to portray defendant as guilty of only simple possession of cocaine for his personal use and to show that the evidence was insufficient to support the government’s charges that defendant was involved in a major drug conspiracy. We have thoroughly reviewed defendant’s brief, his application for a certificate of appealability, the district court’s order, and the entire record before us. We conclude that defendant has failed to demonstrate any prejudice arising from his trial counsel’s alleged errors. See Strickland 466 U.S. at 688, 692; United States v. Williamson , 53 F.3d 1500, 1511-12 (10th Cir. 1995) (concluding that trial counsel’s strategy of conceding during closing argument defendant’s guilt with respect to lesser drug counts, and denying involvement with the more serious conspiracy counts did not constitute ineffective assistance of counsel).
The district court’s order denying the § 2255 motion is not debatable, reasonably subject to a different outcome on appeal, or otherwise deserving of further proceedings. See Barefoot v. Estelle , 463 U.S. 880, 893 & n.4 (1983). Accordingly, because we conclude that defendant has not made a substantial showing of the denial of a constitutional right, we DENY defendant’s application for a certificate of appealability and DISMISS this appeal. The mandate shall issue forthwith. Entered for the Court Bobby R. Baldock Circuit Judge | Low | [
0.48071979434447304,
23.375,
25.25
] |
Mike's PowerShell Musings Menu Archive of posts filed under the Visio category. One of the great things about doing Office automation (that is, COM automation of Office apps) is that all of the examples are filled with tons of references to constants. A goal of VisioBot3000 was to make using those constants as easy as possible. I mentioned the issue of having so many constants to deal … Back in January I wrote a post about how VisioBot3000 had been broken for a while, and my attempts to debug and/or diagnose the problem. In the process of developing a minimal example that illustrated the “breakage”, I noticed that accessing certain Visio object properties caused the code to work, even if the values of … The Setup Sometime around late August of 2016, VisioBot3000 stopped working. It was sometime after the Windows 10 anniversary update, and I noticed that when I ran any of the examples in the repo, PowerShell hung whenever it tried to place a container on the page. I had not made any recent changes to the code. … It’s been a while since I last spoke about VisioBot3000. I’ve got the project to a reasonably stable point…not quite feature complete but I don’t see a lot of big changes. One of the things I found even as I wrote sample diagram scripts was that quite a bit of the script was taken up … In working on VisioBot3000, I’ve spent a lot of time looking at VBA in Visio’s macro editor. It’s one of the easiest ways to find out how things work. I thought it would be fun to take some VBA and convert it to PowerShell to demonstrate the process. We’ll start with a basic diagram using … In the last post I showed you how VisioBot3000 makes drawing simple Visio diagrams simpler by wrapping the Visio COM API and providing more straight-forward cmdlets do refer to stencils and masters, and to draw shapes, containers, and connectors on the page. 
To be honest, that’s where I was expecting to end up when I started … Ok…I think this is the last of the “how to perform primitive operations in Visio” articles that I’m going to do. Hope you’ve been enjoying them. If you haven’t been keeping up, you can find them all here. In this installment, I’m going to show you how to create a container in Visio. Containers are really … Had a blast at the MidMo PowerShell user group meeting last night in Columbia, MO. It was just the second meeting for this group and there were over a dozen people present. I started with a presentation on using PowerShell and Visio together (some of which you’ve seen here) and Rob Campbell (@Mjolinor) finished the meeting … It’s been a while since the last post. I decided that if I had to choose between writing PowerShell and writing about PowerShell, I should favor the former. In this episode, I’ll talk about how to create connections between objects in a Visio diagram. Turns out it’s not really that hard (just like most things … Why mess with Visio from PowerShell? I’ve got a couple of posts with some really basic code to access things in PowerShell and it occurred to me…I probably haven’t made it clear why you might want to do this (other than that you can). So, instead of moving on to connections (which will be next, … | Mid | [
0.563510392609699,
30.5,
23.625
] |
import os
import pickle

SAGE_SHARE = os.getenv('SAGE_SHARE')
install_root = os.path.join(SAGE_SHARE, 'conway_polynomials')

def create_db():
    db = {}
    from src import conway_polynomials
    for p, n, v in conway_polynomials:
        if not p in db:
            db[p] = {}
        db[p][n] = v
    if not os.path.exists(install_root):
        os.makedirs(install_root)
    with open(os.path.join(install_root, 'conway_polynomials.p'), 'wb') as f:
        pickle.dump(db, f)

if __name__ == '__main__':
    create_db()
| Mid | [
0.591792656587473,
34.25,
23.625
] |
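The create_db() routine above nests flat (p, n, v) records into db[p][n] and pickles the result. A round-trip sketch of writing and reading that file — using a temp directory and made-up records instead of SAGE_SHARE and the real src.conway_polynomials module:

```python
import os
import pickle
import tempfile

# Hypothetical sample records: (characteristic p, degree n, coefficient tuple)
conway_polynomials = [(2, 1, (1, 1)), (2, 2, (1, 1, 1)), (3, 1, (1, 1))]

install_root = tempfile.mkdtemp()

# Mirror create_db(): nest the flat (p, n, v) records into db[p][n] = v
db = {}
for p, n, v in conway_polynomials:
    db.setdefault(p, {})[n] = v

path = os.path.join(install_root, 'conway_polynomials.p')
with open(path, 'wb') as f:
    pickle.dump(db, f)

# Consumer side: reload and look up a polynomial by (p, n)
with open(path, 'rb') as f:
    loaded = pickle.load(f)

print(loaded[2][2])  # -> (1, 1, 1)
```

The two-level dict makes the lookup by characteristic and degree a pair of indexing operations, which is presumably why the flat record list is reshaped before pickling.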
Distant, late metastases to skin of carcinoma of the breast. Three cases of late, distant metastases to the skin of cancer of the breast and a review of the literature are presented. | Low | [
0.513698630136986,
28.125,
26.625
] |
Laparoscopic management of vaginal evisceration: case report and review of the literature. Vaginal evisceration is a rare condition that presents with protruding mass, vaginal bleeding, and pelvic pain. Vaginal evisceration is most commonly associated with previous vaginal surgery but may occur spontaneously, and represents a surgical emergency. We report a case of vaginal evisceration in a 42-year-old premenopausal woman 6 months after hysterectomy. This case shows the value of laparoscopy in management of vaginal evisceration. | High | [
0.6756032171581771,
31.5,
15.125
] |
/*

DISKSPD

Copyright(c) Microsoft Corporation
All rights reserved.

MIT License

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

*/

#include "IoBucketizer.h"

/*
Calculating stddev using an online algorithm:

avg = sum(1..n, a[n]) / n

stddev = sqrt(sum(1..n, (a[n] - avg)^2) / n)
       = sqrt(sum(1..n, a[n]^2 - 2 * a[n] * avg + avg^2) / n)
       = sqrt((sum(1..n, a[n]^2) - 2 * avg * sum(1..n, a[n]) + n * avg^2) / n)
       = sqrt((sum(1..n, a[n]^2) - 2 * (sum(1..n, a[n]) / n) * sum(1..n, a[n]) + n * (sum(1..n, a[n]) / n)^2) / n)
       = sqrt((sum(1..n, a[n]^2) - (2 / n) * sum(1..n, a[n])^2 + (1 / n) * sum(1..n, a[n])^2) / n)
       = sqrt((sum(1..n, a[n]^2) - (1 / n) * sum(1..n, a[n])^2) / n)

So if we track n, sum(a[n]) and sum(a[n]^2) we can calculate the stddev.
This is used to calculate the stddev of the latencies below.
*/

const unsigned __int64 INVALID_BUCKET_DURATION = 0;

IoBucketizer::IoBucketizer()
    : _bucketDuration(INVALID_BUCKET_DURATION),
      _validBuckets(0),
      _totalBuckets(0)
{}

void IoBucketizer::Initialize(unsigned __int64 bucketDuration, size_t validBuckets)
{
    if (_bucketDuration != INVALID_BUCKET_DURATION)
    {
        throw std::runtime_error("IoBucketizer has already been initialized");
    }
    if (bucketDuration == INVALID_BUCKET_DURATION)
    {
        throw std::invalid_argument("Bucket duration must be a positive integer");
    }
    _bucketDuration = bucketDuration;
    _validBuckets = validBuckets;
    _vBuckets.resize(_validBuckets);
}

void IoBucketizer::Add(unsigned __int64 ioCompletionTime, double ioDuration)
{
    if (_bucketDuration == INVALID_BUCKET_DURATION)
    {
        throw std::runtime_error("IoBucketizer has not been initialized");
    }

    size_t bucketNumber = static_cast<size_t>(ioCompletionTime / _bucketDuration);
    _totalBuckets = bucketNumber + 1;
    if (bucketNumber >= _validBuckets)
    {
        return;
    }

    _vBuckets[bucketNumber].lfSumDuration += ioDuration;
    _vBuckets[bucketNumber].lfSumSqrDuration += ioDuration * ioDuration;
    if (_vBuckets[bucketNumber].ulCount == 0 || ioDuration < _vBuckets[bucketNumber].lfMinDuration)
    {
        _vBuckets[bucketNumber].lfMinDuration = ioDuration;
    }
    if (_vBuckets[bucketNumber].ulCount == 0 || ioDuration > _vBuckets[bucketNumber].lfMaxDuration)
    {
        _vBuckets[bucketNumber].lfMaxDuration = ioDuration;
    }
    _vBuckets[bucketNumber].ulCount++;
}

size_t IoBucketizer::GetNumberOfValidBuckets() const
{
    return (_totalBuckets > _validBuckets ? _validBuckets : _totalBuckets);
}

unsigned int IoBucketizer::GetIoBucketCount(size_t bucketNumber) const
{
    if (bucketNumber < _validBuckets)
    {
        return _vBuckets[bucketNumber].ulCount;
    }
    return 0;
}

double IoBucketizer::GetIoBucketMinDurationUsec(size_t bucketNumber) const
{
    if (bucketNumber < _validBuckets)
    {
        return _vBuckets[bucketNumber].lfMinDuration;
    }
    return 0;
}

double IoBucketizer::GetIoBucketMaxDurationUsec(size_t bucketNumber) const
{
    if (bucketNumber < _validBuckets)
    {
        return _vBuckets[bucketNumber].lfMaxDuration;
    }
    return 0;
}

double IoBucketizer::GetIoBucketAvgDurationUsec(size_t bucketNumber) const
{
    if (bucketNumber < _validBuckets && _vBuckets[bucketNumber].ulCount != 0)
    {
        return _vBuckets[bucketNumber].lfSumDuration / static_cast<double>(_vBuckets[bucketNumber].ulCount);
    }
    return 0;
}

double IoBucketizer::GetIoBucketDurationStdDevUsec(size_t bucketNumber) const
{
    if (bucketNumber < _validBuckets && _vBuckets[bucketNumber].ulCount != 0)
    {
        double sum_of_squares = _vBuckets[bucketNumber].lfSumSqrDuration;
        double square_of_sum = _vBuckets[bucketNumber].lfSumDuration * _vBuckets[bucketNumber].lfSumDuration;
        double count = static_cast<double>(_vBuckets[bucketNumber].ulCount);
        double square_stddev = (sum_of_squares - (square_of_sum / count)) / count;
        return sqrt(square_stddev);
    }
    return 0;
}

double IoBucketizer::_GetMeanIOPS() const
{
    size_t numBuckets = GetNumberOfValidBuckets();
    double sum = 0;
    for (size_t i = 0; i < numBuckets; i++)
    {
        sum += static_cast<double>(_vBuckets[i].ulCount) / numBuckets;
    }
    return sum;
}

double IoBucketizer::GetStandardDeviationIOPS() const
{
    size_t numBuckets = GetNumberOfValidBuckets();
    if (numBuckets == 0)
    {
        return 0.0;
    }
    double mean = _GetMeanIOPS();
    double ssd = 0;
    for (size_t i = 0; i < numBuckets; i++)
    {
        double dev = static_cast<double>(_vBuckets[i].ulCount) - mean;
        double sqdev = dev * dev;
        ssd += sqdev;
    }
    return sqrt(ssd / numBuckets);
}

void IoBucketizer::Merge(const IoBucketizer& other)
{
    if (other._vBuckets.size() > _vBuckets.size())
    {
        _vBuckets.resize(other._vBuckets.size());
    }
    for (size_t i = 0; i < other._vBuckets.size(); i++)
    {
        _vBuckets[i].ulCount += other._vBuckets[i].ulCount;
        _vBuckets[i].lfSumDuration += other._vBuckets[i].lfSumDuration;
        _vBuckets[i].lfSumSqrDuration += other._vBuckets[i].lfSumSqrDuration;
        if (i >= _validBuckets || other._vBuckets[i].lfMinDuration < _vBuckets[i].lfMinDuration)
        {
            _vBuckets[i].lfMinDuration = other._vBuckets[i].lfMinDuration;
        }
        if (other._vBuckets[i].lfMaxDuration > _vBuckets[i].lfMaxDuration)
        {
            _vBuckets[i].lfMaxDuration = other._vBuckets[i].lfMaxDuration;
        }
    }
    if (other._validBuckets > _validBuckets)
    {
        _validBuckets = other._validBuckets;
    }
    if (other._totalBuckets > _totalBuckets)
    {
        _totalBuckets = other._totalBuckets;
    }
}
| Mid | [
0.5377049180327861,
41,
35.25
] |
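The comment block above derives stddev = sqrt((sum(a^2) - sum(a)^2 / n) / n), the identity that lets IoBucketizer keep only running sums per bucket. A quick numerical check of that identity in Python (independent of the C++ class, made-up sample values):

```python
import math

samples = [4.0, 7.0, 13.0, 16.0]
n = len(samples)

# Running accumulators, as IoBucketizer keeps per bucket
s = sum(samples)                   # sum(a[i])
ss = sum(x * x for x in samples)   # sum(a[i]^2)

# One-pass form from the derivation in the comment block
online = math.sqrt((ss - s * s / n) / n)

# Classic two-pass population stddev for comparison
mean = s / n
twopass = math.sqrt(sum((x - mean) ** 2 for x in samples) / n)

print(round(online, 12) == round(twopass, 12))  # -> True
```

The one-pass form is what makes per-bucket accumulation cheap: each Add() only updates count, sum, and sum-of-squares, and the stddev can be produced on demand.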
package org.nutz.plugins.wkcache.annotation;

import java.lang.annotation.*;

/**
 * Created by wizzer on 2017/6/14.
 */
@Target({ElementType.METHOD, ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Inherited
public @interface CacheDefaults {
    String cacheName() default "wk";

    int cacheLiveTime() default 0;
}
| Mid | [
0.603508771929824,
43,
28.25
] |
//
//  IntExtensions.swift
//  EZSwiftExtensions
//
//  Created by Goktug Yilmaz on 16/07/15.
//  Copyright (c) 2015 Goktug Yilmaz. All rights reserved.
//

import Foundation

extension Int {
    /// EZSE: Checks if the integer is even.
    public var isEven: Bool { return (self % 2 == 0) }

    /// EZSE: Checks if the integer is odd.
    public var isOdd: Bool { return (self % 2 != 0) }

    /// EZSE: Checks if the integer is positive.
    public var isPositive: Bool { return (self > 0) }

    /// EZSE: Checks if the integer is negative.
    public var isNegative: Bool { return (self < 0) }

    /// EZSE: Converts integer value to Double.
    public var toDouble: Double { return Double(self) }

    /// EZSE: Converts integer value to Float.
    public var toFloat: Float { return Float(self) }

    /// EZSE: Converts integer value to CGFloat.
    public var toCGFloat: CGFloat { return CGFloat(self) }

    /// EZSE: Converts integer value to String.
    public var toString: String { return String(self) }

    /// EZSE: Converts integer value to UInt.
    public var toUInt: UInt { return UInt(self) }

    /// EZSE: Converts integer value to Int32.
    public var toInt32: Int32 { return Int32(self) }

    /// EZSE: Converts integer value to a 0..<Int range. Useful in for loops.
    public var range: CountableRange<Int> { return 0..<self }

    /// EZSE: Returns number of digits in the integer.
    public var digits: Int {
        if self == 0 {
            return 1
        } else if Int(fabs(Double(self))) <= LONG_MAX {
            return Int(log10(fabs(Double(self)))) + 1
        } else {
            return -1 // out of bound
        }
    }

    /// EZSE: The digits of an integer represented in an array (from most significant to least).
    /// This method ignores leading zeros and sign.
    public var digitArray: [Int] {
        var digits = [Int]()
        for char in self.toString {
            if let digit = Int(String(char)) {
                digits.append(digit)
            }
        }
        return digits
    }

    /// EZSE: Returns a random integer number in the range min...max, inclusive.
    public static func random(within: Range<Int>) -> Int {
        let delta = within.upperBound - within.lowerBound
        return within.lowerBound + Int(arc4random_uniform(UInt32(delta)))
    }
}

extension UInt {
    /// EZSE: Convert UInt to Int
    public var toInt: Int { return Int(self) }

    /// EZSE: Greatest common divisor of two integers using the Euclid's algorithm.
    /// Time complexity of this in O(log(n))
    public static func gcd(_ firstNum: UInt, _ secondNum: UInt) -> UInt {
        let remainder = firstNum % secondNum
        if remainder != 0 {
            return gcd(secondNum, remainder)
        } else {
            return secondNum
        }
    }

    /// EZSE: Least common multiple of two numbers. LCM = n * m / gcd(n, m)
    public static func lcm(_ firstNum: UInt, _ secondNum: UInt) -> UInt {
        return firstNum * secondNum / UInt.gcd(firstNum, secondNum)
    }
}
| Mid | [
0.5846774193548381,
36.25,
25.75
] |
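The recursive Euclid gcd and the lcm identity in the UInt extension above translate line-for-line into other languages; a sketch in Python (my translation, not part of EZSwiftExtensions):

```python
def gcd(a: int, b: int) -> int:
    # Euclid's algorithm: recurse on (b, a % b) until the remainder is 0.
    # Recursion depth is O(log n).
    r = a % b
    return gcd(b, r) if r != 0 else b

def lcm(a: int, b: int) -> int:
    # LCM = n * m / gcd(n, m), as in the Swift doc comment
    return a * b // gcd(a, b)

print(gcd(48, 18))  # -> 6
print(lcm(4, 6))    # -> 12
```

As in the Swift version, the arguments are assumed positive; gcd(a, 0) is not handled because the recursion only ever passes a nonzero second argument after the first step.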
Samsung's Profit Jumps Despite Note 7 Failure

Samsung Electronics on Tuesday said fourth-quarter operating profit jumped 50% to its highest in over three years, as record earnings in its chips business masked the negative impact of its failed Note 7 phones. Samsung also said it plans to buy back 9.3 trillion won worth of shares this year. The South Korean electronics manufacturer is counting on the booming chip market to continue driving growth as it works to recover from its biggest product recall crisis involving fire-prone Note 7 smartphones. Samsung expected earnings to decline in the current quarter from the preceding quarter because of “increased marketing expenses in the mobile business and a sales decrease of TVs due to weak seasonal demand.” Revenue remained flat at 53.3 trillion won from the same period a year earlier, versus its estimate of 53 trillion won. The chips division was the quarter’s cash cow, with operating profit jumping 77% to 4.95 trillion won from a year earlier. In its mobile business, operating profit rose 12% to 2.5 trillion won as models such as the Galaxy S filled the void following the discontinuation of the fire-prone Note 7 in October. Samsung said on Monday that defective batteries caused Note 7 handsets to overheat and catch fire, and indicated that it may delay the launch of its next premium Galaxy S smartphone. But its shares have been resilient, hitting a series of record-highs this month despite the Note 7 fiasco and an ongoing investigation into Samsung executives over their alleged involvement in a political scandal. Prosecutors have said they will pursue their bribery case against Samsung Group scion Jay Y. Lee even if they are not granted permission to arrest him. | Mid | [
0.552147239263803,
33.75,
27.375
] |
import React from "react";
import { makeStyles } from "@material-ui/core/styles";
import CircularProgress from "@material-ui/core/CircularProgress";

const useStyles = makeStyles(theme => ({
  loading: {
    position: "fixed",
    left: 0,
    right: 0,
    top: "calc(50% - 20px)",
    margin: "auto",
    height: "40px",
    width: "40px",
    "& img": {
      position: "absolute",
      height: "25px",
      width: "auto",
      top: 0,
      bottom: 0,
      left: 0,
      right: 0,
      margin: "auto"
    }
  }
}));

const Loading = props => {
  const classes = useStyles();
  return (
    <div className={classes.loading}>
      <img src="/assets/images/logo-circle.svg" alt="" />
      <CircularProgress />
    </div>
  );
};

export default Loading;
| Mid | [
0.5445783132530121,
28.25,
23.625
] |
Impairment of vertebral artery flow caused by extrinsic lesions. In a consecutive series of 71 cases of extrinsic lesions involving the vertebral artery (VA), 51 patients presented with external compression of this vessel. The compressive agents included 34 tumors, 4 osteophytes, 5 fibrous bands, 4 traumatic lesions, 2 neural elements, and 2 infectious processes. The main site was the second portion of the VA (C2-C6) (30 of 51 patients). Compression always induced at least significant stenosis, and in 8 patients caused complete occlusion. The compression was either permanent (44 patients) or intermittent (7 patients). Symptoms were observed in 11 patients, including 2 with permanent deficits. Surgical release of compression was performed each time symptoms could be explained by a reduction in VA flow and also when the compressing agent needed to be removed, as in the cases involving tumors. VA decompression was achieved by direct approach in 37 patients, by reduction and fixation of a traumatic dislocation in 2 patients, and by distal revascularization in 4 patients. Medical treatment or roentgenotherapy was used in the other patients. Results were excellent in all but 2 patients, who died from traumatic and ischemic lesions, respectively. Therefore, it seems important to identify external causes of compression of the VA for two reasons: 1) to suppress symptoms of vertebrobasilar insufficiency when their relation to VA compression is clearly established, and 2) to remove compressive agents like tumors safely while preserving the VA. | High | [
0.670967741935483,
32.5,
15.9375
] |
Q: Difference between an $L^p$ space on a bounded set and a periodic $L^p$ space? I'm confused about the concept periodic $L^p$ space. Let $\mathbb{T}$ be the quotient space $\mathbb{R}/\mathbb{Z}$. Here are my questions: What is the Lebesgue measure on $\mathbb{T}$? How do people assign measures on a quotient space? What exactly is the difference between $L^p(\mathbb{T})$ and $L^p([0,1])$? What I think is that $L^p([0,1])$ is the restriction of $L^p(\mathbb{R})$ functions on $[0,1]$; while $L^p(\mathbb{T})$ is the space of periodic functions (with period $1$) $f$ on $\mathbb{R}$ such that $f1_{[0,1]}\in L^p([0,1])$. Could anyone come up with references for detailed explanation? A: To quote Tao's comment on this question: One can define the Lebesgue measure on ${\bf T}$ by identifying that space with one of its fundamental domains, such as $[0,1)$. The spaces ${\bf T}$ and $[0,1)$ are thus isomorphic as measure spaces, and so $L^p({\bf T})$ and $L^p([0,1))$ are isomorphic as normed vector spaces. However, ${\bf T}$ and $[0,1)$ differ in other respects if one considers other structures than the measurable structure. For instance, from the point of view of topological structure ${\bf T}$ is compact and not simply connected, while $[0,1)$ is non-compact and simply connected. The differential structure is also different, for instance the identity function $x \mapsto x$ on $[0,1)$ is smooth using the differential structure of $[0,1)$, but is not even continuous if one replaces the domain with ${\bf T}$. In particular, PDE on $[0,1)$ and on ${\bf T}$ are quite different. If one only cares about the measurable structure, then $[0,1)$, $[0,1]$, and ${\bf T}$ are isomorphic (up to null sets), and if one only cares about the normed vector space structure, $L^p([0,1))$, $L^p([0,1])$, and $L^p({\bf T})$ are all isomorphic. 
However, for applications to PDE one usually needs more structure than just the measurable or normed vector space structure (in particular, one needs differential structure), and then the three domains are all inequivalent (for instance the Sobolev spaces on the three domains behave differently). | Mid | [
0.631713554987212,
30.875,
18
] |
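The measure-space identification described in the quoted answer can be spelled out; the following is a standard construction (my sketch, not part of the original answer):

```latex
Let $q:\mathbb{R}\to\mathbb{T}=\mathbb{R}/\mathbb{Z}$ be the quotient map and
$\iota = q|_{[0,1)}$ its restriction, a measurable bijection onto $\mathbb{T}$.
The Lebesgue measure on $\mathbb{T}$ is the pushforward of Lebesgue measure
$\lambda$ on $[0,1)$:
$$\mu_{\mathbb{T}}(A) \;=\; \lambda\bigl(\iota^{-1}(A)\bigr),
\qquad A\subseteq\mathbb{T}\ \text{measurable}.$$
The induced map $f\mapsto f\circ\iota$ is then an isometric isomorphism
$L^p(\mathbb{T})\to L^p([0,1))$ for every $1\le p\le\infty$, which is the
sense in which the two $L^p$ spaces agree while the topological and
differential structures of $\mathbb{T}$ and $[0,1)$ do not.
```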
Subterranean formations that contain hydrocarbons are sometimes non-homogeneous in their composition along the length of wellbores that extend into such formations. It is sometimes desirable to treat and/or otherwise manage the formation and/or the wellbore differently in response to the differing formation composition. Some wellbore servicing systems and methods allow such treatment and may refer to such treatments as zonal isolation treatments. However, some wellbore servicing systems and methods are limited in the number of different zones that may be treated within a wellbore. Accordingly, there exists a need for improved systems and methods of treating multiple zones of a wellbore. | Mid | [
0.6275000000000001,
31.375,
18.625
] |
Q: MySQL: Place rows that have parent_id filled right after the parent id

I have a table with 3 columns comment_id, comment and parent_comment_id. The query is always limited to 10 and I have to return within those 10 results all comment_id followed by rows that have parent_comment_id of previous comment_id. Example: comment_id 2 has answer in comment_id 20 that has parent_comment_id set as 2. I need result to look like this:

comment_id  ...  parent_comment_id
2                0
20               2
3                0
4                0
5                0
15               5
...

Is there some nice way to order it? Here is sample data: http://sqlfiddle.com/#!9/70120c/5

A: If I understand your question correctly, I think this query will do what you want. I modified your fiddle slightly to add another comment response to comment 1 (new version here). The ordering is a little tricky, the first term orders the comment and its replies together, and then the second term orders the comments within that group (so that 1 and its replies 3 & 15 are sorted in that order). The DISTINCT is required to prevent multiple output rows for a comment which has multiple replies.

SELECT DISTINCT c.comment_id, c.comment, c.parent_comment_id
FROM comments c
LEFT JOIN comments r ON c.comment_id = r.parent_comment_id
ORDER BY IF(c.parent_comment_id = 0, c.comment_id, c.parent_comment_id),
         c.comment_id
LIMIT 10;

Output:

comment_id  comment                                            parent_comment_id
1           The earth is flat                                  0
3           Response to 1                                      1
15          Another response to 1                              1
2           One hundred angels can dance on the head of a pin  0
14          Response to 2                                      2
4           The earth is like a ball.                          0
6           Response to 4                                      4
5           The earth is like a ball.                          0
7           The earth is like a ball.                          0
8           The earth is like a ball.                          0
| High | [
0.674641148325358,
35.25,
17
] |
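The two-term ORDER BY in the accepted answer can be reproduced as a composite sort key outside SQL; a sketch in Python over the question's example ids (hypothetical rows, not the fiddle data):

```python
# (comment_id, parent_comment_id); parent 0 means a top-level comment
rows = [(2, 0), (20, 2), (3, 0), (15, 5), (4, 0), (5, 0)]

# First key groups a reply with its parent; second key orders rows within
# that group, mirroring:
#   ORDER BY IF(parent_comment_id = 0, comment_id, parent_comment_id), comment_id
ordered = sorted(rows, key=lambda r: (r[0] if r[1] == 0 else r[1], r[0]))

print([cid for cid, _ in ordered])  # -> [2, 20, 3, 4, 5, 15]
```

Because a parent's first key equals its own id and a reply's first key equals the parent's id, ties on the first key put each reply directly after its parent, exactly as the question requested.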
Background {#Sec1} ========== The economically important grass tribe Triticeae Dumort. consists of approximately 360 species and several subspecies in 20-30 genera. Triticeae taxa occur in temperate and dry regions of the World and harbour the important cereals bread wheat (*Triticum aestivum*), barley (*Hordeum vulgare*), rye (*Secale cereale*) and their wild relatives \[[@CR1], [@CR2]\]. Yet there is no good understanding of the relationships among Triticeae taxa, although a multitude of molecular phylogenies have been produced \[[@CR3]--[@CR11]\]. The acceptance levels of taxa vary greatly among authors on the genus-level and below (for recent reviews see \[[@CR1], [@CR12], [@CR13]\]). One important reason is the complex mode of evolution within Triticeae. The majority of species are allopolyploids and many of them likely have originated repeatedly, involving genetically different parent species \[[@CR14]--[@CR19]\]. Bread wheat is the most prominent polyploid and evolved via consecutive hybridizations of three diploids and thereby combines three related genomes (named **A**, **B** and **D**) \[[@CR7], [@CR20]\]. In Triticeae and many other crops such genomes were defined through cytogenetic characterization of chromosomes together with the analysis of their pairing behaviour in interspecific and intergeneric crosses (for reviews see \[[@CR1], [@CR12], [@CR21]\]). It is assumed that diploid species and monogenomic taxa are the basic units within Triticeae and that the heterogenomic polyploids form a second level of taxonomic entities \[[@CR22], [@CR23]\]. Triticeae are known to have low barriers against hybridization, which result in mixed or even recombinant phylogenetic signals from nuclear data \[[@CR10], [@CR20]\]. 
In contrast, phylogenetic analyses of plastid sequences provide clear information on maternal lineages, as organelles are mostly uniparentally inherited and non-recombining in angiosperms \[[@CR24]\], although chloroplast capture \[[@CR25]\] can result in deviating phylogenetic hypotheses. Yet, plastid sequence data is limited for Triticeae. Studies based on a tribe-wide taxon sampling are rare and focused on single or few plastid markers \[[@CR9], [@CR26]--[@CR28]\]. To date, the number of whole plastid genome sequences is increasing \[[@CR29]--[@CR34]\], however, entire chloroplast genomes are mainly available for the domesticated taxa and their closest relatives. These previous studies provide only limited insight in the maternal phylogeny of Triticeae, as only one to few accessions per taxon were included and often support values for the taxonomic units are low \[[@CR26], [@CR28], [@CR35]\]. Here we present phylogenetic analyses of chloroplast sequences based on a comprehensive set of monogenomic Triticeae species plus allopolyploid representatives of the wheat group (i.e. taxa belonging to the genera *Aegilops* and *Triticum*). For each species we included multiple individuals to sample part of the intraspecific variation. We performed a target-enrichment and next-generation sequencing (NGS) approach that, among nuclear loci (which will be published elsewhere), targeted the chloroplast *ndh*F gene. Since chloroplasts occur in high copy number in the plant cell, they represent a large fraction of the off-target reads when sequencing reduced complexity libraries, which can be used to assemble almost complete chloroplast genomes \[[@CR36]\]. Our dataset was complemented by chloroplast genomes stored in the GenBank database. Multispecies coalescent (MSC) analyses based on *trn*K-*mat*K, *rbc*L and *ndh*F were used for dating of the major splits within the evolution of the tribe and to reconsider the monophyly of the Triticeae chloroplast lineages. 
Methods {#Sec2}
=======

Plant materials {#Sec3}
---------------

Aiming at a good representation of taxa for phylogenetic inference, we analysed 194 individuals representing approximately 53 species belonging to 15 genera (depending on the taxonomic treatment applied) of the grass tribe Triticeae and included *Bromus* and *Brachypodium* accessions as outgroup taxa (Table [1](#Tab1){ref-type="table"}, Additional file [1](#MOESM1){ref-type="media"}: Table S1). The accessions were acquired from the International Center for Agricultural Research in the Dry Areas (ICARDA), the seed bank of the Leibniz Institute of Plant Genetics and Crop Plant Research (IPK), the National Small Grains Collection of the US Department of Agriculture (USDA), the Czech Crop Research Institute, and the Laboratory of Plant Genetics (Kyoto University). Additional seed material was collected during field trips. Multiple accessions per species and intra-specific entities were selected where possible. All materials were grown from seed and identified based on morphological characters if an inflorescence was produced. Plant material obtained from germplasm repositories that was found to be in conflict with its taxonomic affiliation was included in the analyses only if the taxon could be unequivocally determined. Vouchers of the morphologically identified materials (Additional file [1](#MOESM1){ref-type="media"}: Table S1) were deposited in the herbarium of IPK (GAT).

Table 1 Overview of Triticeae and outgroup taxa considered

| Species | Genome | Ploidy (N) | Distribution area |
|---|---|---|---|
| *Aegilops bicornis* Jaub. & Spach | S\* | 2× (4) | SE Mediterranean |
| *Aegilops biuncialis* Vis. | UM | 4× (4) | SW-SE Europe, N Africa, SW Asia |
| *Aegilops columnaris* Zhuk. | UM | 4× (2) | SW Asia |
| *Aegilops comosa* Sm. | M | 2× (4) | Balkans |
| *Aegilops crassa* Boiss. | DM/DDM | 4× (1)/6× (2) | SW Asia |
| *Aegilops cylindrica* Host | DC | 4× (2) | SE Europe, W Asia |
| *Aegilops geniculata* Roth | MU | 4× (3) | E Europe, W Asia, Macaronesia |
| *Aegilops juvenalis* Eig | DMU | 6× (2) | SW Asia |
| *Aegilops kotschyi* Boiss. | S\*U | 4× (1) | SW Asia, NE Africa |
| *Aegilops longissima* Schweinf. & Muschl. | S\* | 2× (5) | E Mediterranean |
| *Aegilops markgrafii* (Greuter) K. Hammer | C | 2× (5) | NE Mediterranean |
| *Aegilops neglecta* Req. ex Bertol. | UM/UMN | 4× (2)/6× (2) | Mediterranean |
| *Aegilops peregrina* (Hack.) Maire & Weiller | SU | 4× (1) | SW Asia, N Africa |
| *Aegilops searsii* Feldman & Kislev | S\* | 2× (5) | E Mediterranean |
| *Aegilops sharonensis* Eig | S\* | 2× (1) | Israel, Lebanon |
| *Aegilops speltoides* Tausch | S | 2× (6) | E Mediterranean |
| *Aegilops tauschii* Coss. | D | 2× (4) | SW--C Asia |
| *Aegilops triuncialis* L. | UC | 4× (2) | Mediterranean to SW Asia |
| *Aegilops umbellulata* Zhuk. | U | 2× (3) | SE Europe, SW Asia |
| *Aegilops uniaristata* Vis. | N | 2× (3) | SE Europe |
| *Aegilops ventricosa* Tausch | DN | 4× (2) | SW Europe, N Africa |
| *Agropyron cristatum* (L.) Gaertn. | P | 2× (2)/4× (4) | S Europe, NECW Asia |
| *Amblyopyrum muticum* (Boiss.) Eig | T | 2× (6) | Turkey |
| *Australopyrum retrofractum* (Vickery) A. Löve | W | 2× (4) | SE Australia |
| *Dasypyrum villosum* (L.) P. Candargy | V | 2× (5) | SW--SE Europe, Caucasus |
| *Eremopyrum bonaepartis* (Spreng.) Nevski | Ft/Xe/FtXe | 2×/4× (5) | SE--E Europe, WC Asia |
| *Eremopyrum triticeum* (Gaertn.) Nevski | Ft | 2× (3) | SE--E Europe, WC Asia |
| *Henrardia persica* (Boiss.) C.E. Hubb. | O | 2× (4) | SE Europe, SW Asia |
| *Heteranthelium piliferum* Hochst. ex Jaub. & Spach | Q | 2× (4) | SE Europe, SW Asia |
| *Hordeum bulbosum* L. | I | 4× (1) | Mediterranean to C Asia |
| *Hordeum marinum* Huds. | Xa | 2× (1) | Mediterranean |
| *Hordeum murinum* L. | Xu | 2× (1) | Mediterranean to C Asia |
| *Hordeum pubiflorum* Hook. f. | I | 2× (1) | S Argentina |
| *Hordeum vulgare* L. | H | 2× (2) | SW Asia |
| *Psathyrostachys juncea* (Fisch.) Nevski | Ns | 2× (6) | E Europe, NC Asia |
| *Pseudoroegneria cognata* (Hack.) A. Löve | St | 6× (1) | SW Asia, West Himalaya |
| *Pseudoroegneria spicata* (Pursh) A. Löve | St | 2× (2)/6× (1) | NW North America |
| *Pseudoroegneria stipifolia* (Czern. ex Nevski) A. Löve | St | 2× (1)/4× (2) | E Europe, N Caucasus |
| *Pseudoroegneria strigosa* (M. Bieb.) A. Löve | St | 2× (2)/6× (2) | Balkans, Crimea |
| *Pseudoroegneria tauri* (Boiss. & Balansa) A. Löve | St | 2× (5) | E Mediterranean, S Caucasus |
| *Secale cereale* L. | R | 2× (4) | Turkey |
| *Secale strictum* C. Presl | R | 2× (4) | S Europe, SW Asia, N Africa |
| *Taeniatherum caput-medusae* (L.) Nevski | Ta | 2× (6) | S Europe, SW Asia, N Africa |
| *Thinopyrum distichum* (Thunb.) A. Löve | E | 4× (2) | S Africa |
| *Thinopyrum* spp. Löve | E | 6× (1)/8× (2) | SE Europe, SW Asia, N Africa |
| *Triticum aestivum* L. | BAD | 6× (6) | Caucasus, Iran |
| *Triticum monococcum* L. | A | 2× (10) | Turkey |
| *Triticum timopheevii* (Zhuk.) Zhuk. | GA | 4× (7) | SW Asia |
| *Triticum turgidum* L. | BA | 4× (10) | Lebanon |
| *Triticum urartu* Thumanjan ex Gandilyan | A | 2× (5) | E Mediterranean, Caucasus |
| *Triticum zhukovskyi* Menabde & Ericzjan | GAA | 6× (1) | Caucasus |
| *Brachypodium distachyon* (L.) P. Beauv. |  | 4× (1) | S Europe, SW Asia, N Africa |
| *Brachypodium pinnatum* (L.) P. Beauv. |  | 4× (1) | Europe, NCW Asia, NE Africa |
| *Bromus inermis* Leyss. |  | 4× (1) | SW Asia, Caucasus |
| *Bromus tectorum* L. |  | 4× (1) | Europe, SW Asia, N Africa |

The genome, determined ploidy levels, number of included accessions (N), and the main native distribution area are given for all taxa sequenced in this study. The genome names of allopolyploid *Aegilops* taxa follow Kilian et al. \[[@CR74]\] and, for S\*, Li et al. \[[@CR84]\]. Genome denominations for *Hordeum* follow Blattner \[[@CR107]\], and those for the remaining taxa follow Bernhardt \[[@CR12]\]. Different seed banks adopt different taxonomic treatments that may vary in the number of (sub)species recognized.
More comprehensive information about the accessions used, including the species names used by the donor seed bank and the country of origin, is provided in Additional file [1](#MOESM1){ref-type="media"}: Table S1.

Laboratory work {#Sec4}
---------------

Flow-cytometric measurements were conducted to determine the ploidy level of all accessions. All analyses followed the protocol of Doležel et al. \[[@CR37]\] on a CyFlow Space flow cytometer (Partec). At least 7500 nuclei were counted. Only measurements with a coefficient of variation (CV) \<4% for both the sample and the standard peak were accepted. Samples that recurrently produced CV values \>4% were re-measured in Galbraith's buffer containing 1% polyvinylpyrrolidone (vol/vol) and 0.1% Triton X-100 (vol/vol). At least three measurements per species were carried out. If only a single accession of a species could be retrieved from a seed bank, its ploidy level was estimated three times. Samples of the same species were processed on at least two different days to account for instrument drift.

Genomic DNA was extracted either from 10 mg of silica-dried leaves using the DNeasy Plant Mini Kit (Qiagen) or from 5 g of freeze-dried leaves using the cetyltrimethyl-ammonium bromide (CTAB) method \[[@CR38]\]. DNA was quantified using the Qubit dsDNA BR Assay (Life Technologies) or the Quant-iT PicoGreen dsDNA Assay Kit (Invitrogen) on a Tecan Infinite 200 microplate reader according to the manufacturer's instructions. The LE220 Focused-Ultrasonicator (Covaris) was used to shear 3 μg of genomic DNA in 130 μl TE buffer per sample into fragments with an average length of 400 bp, using the following settings: instantaneous ultrasonic power (PIP) 450 W, duty factor (df) 30%, cycles per burst (cpb) 200. The treatment was applied for 100 s.
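The CV acceptance rule described above (both sample and standard peak must show a CV below 4%) is a simple ratio test. The sketch below is only an illustration of that rule, not the cytometer software; the function names and the fluorescence values are hypothetical.

```python
# Illustrative sketch (not the instrument software): the coefficient of
# variation (CV = standard deviation / mean, in %) of a fluorescence peak,
# and the 4% acceptance threshold applied to sample and standard peaks.
from statistics import mean, stdev

def peak_cv(fluorescence_values):
    """Return the CV (%) of a list of per-nucleus fluorescence values."""
    return stdev(fluorescence_values) / mean(fluorescence_values) * 100.0

def accept_measurement(sample_values, standard_values, threshold=4.0):
    """Accept a run only if both peaks have a CV below the threshold."""
    return peak_cv(sample_values) < threshold and peak_cv(standard_values) < threshold
```

For example, a tight peak such as `[100, 101, 99, 100]` has a CV of about 0.8% and passes, whereas a broad peak such as `[100, 120, 80, 100]` fails the 4% criterion.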
The sheared DNA was used in a sequence-capture approach (SureSelect^XT^ Target Enrichment for Illumina Paired-End Sequencing, Agilent Technologies) targeting 450 nuclear single-copy loci, corresponding to approximately 0.01--0.02% of a Triticeae genome. Baits complementary to chloroplast *ndh*F were also designed, based on 628 bp of available *Hordeum vulgare*, *Aegilops tauschii*, *Pseudoroegneria spicata* and *Triticum urartu* sequences (identical to EF115541.1, JQ754651.1, KJ174105.1 and AF056180.1, respectively) and 2073 bp of a *Brachypodium distachyon* sequence (identical to AF251451.1). The pairwise sequence identity was greater than 99% among the Triticeae taxa and 96% between the Triticeae taxa and *Brachypodium*. Baits were designed to cover the entire 2073 bp of *ndh*F, as well as each polymorphism between the reference sequences, at least five times. After the enrichment procedure all samples were barcoded and pooled (following \[[@CR39], [@CR40]\]) at equimolar ratios. Capture libraries were sequenced on the Illumina HiSeq 2000 or MiSeq. The flow cells were loaded aiming for a sequencing coverage of at least 40X.

Data assembly {#Sec5}
-------------

We used the captured *ndh*F and the off-target read fraction (i.e. reads for which no capture probes were designed in the target-enrichment experiment) to assemble whole chloroplast genomes. G[eneious]{.smallcaps} versions R8--R10 (Biomatters Ltd.) were used for quality control and downstream analyses. Read pairs were set to an average insert size of 300 bp, and bases with an error probability above 5% were trimmed. Chloroplast genomes were assembled in a two-step procedure consisting of (1) the generation of a species-specific reference sequence, followed by (2) the creation of individual-based chloroplast assemblies. In the first step we assembled species-specific chloroplast sequences by combining the reads of multiple accessions of a single species.
This increased the coverage of a species-specific chloroplast genome compared to using the data of a single sample only. In a few cases, single accessions were found to contribute a large amount of variation to these assemblies. These accessions were excluded from the species-specific assemblies (Additional file [1](#MOESM1){ref-type="media"}: Table S1). The reads were either mapped to GenBank sequences of conspecific or closely related taxa (for *Aegilops*, *Hordeum* and *Triticum* species), or, for taxa lacking conspecific chloroplast genomes in GenBank, to *Hordeum vulgare* (EF115541), a well-studied basal organism in Triticeae. One inverted repeat was cleaved off the GenBank sequences, as no sequence variation has been found between the inverted repeats of the same chloroplast genome. A careful comparison of the Triticeae chloroplast genomes available in GenBank showed a large number of insertions and deletions (indels) among the sequences of single species. If several chloroplast genomes per species were retrieved from GenBank, these were aligned and an annotated consensus was created as the reference to check for intraspecific indels. Then a stringent read-mapping approach was used that considered only reads with mates mapping at the proper distance according to the insert size (±50%). This was done to avoid the inclusion of chimeric Illumina reads, which have been reported to occur frequently \[[@CR41]\]. All read mappings were performed using the G[eneious]{.smallcaps} mapper with five iterations, allowing a maximum of 15% mismatches per read and a maximum gap size of 1000 bp to encompass large deletions. The assembly results were compared and manually checked for inconsistencies (i.e. indels the assembler was unable to resolve). Consensus sequences were called using the 50% majority rule. Up to five rounds of mapping and inspection were performed, each time using the contig obtained previously.
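The 50% majority-rule consensus calling used in step (1) can be sketched as follows. This is an illustration of the rule applied to a single alignment column of mapped read bases, not the G[eneious]{.smallcaps} implementation used in the study.

```python
# Minimal sketch of 50% majority-rule consensus calling: for one
# alignment column, emit the base supported by a strict majority of the
# covering reads, and 'N' otherwise (including uncovered positions).
from collections import Counter

def majority_consensus(column, min_fraction=0.5):
    """column: list of read bases covering one position, e.g. ['A','A','T']."""
    if not column:
        return "N"  # no coverage at this position
    base, count = Counter(column).most_common(1)[0]
    return base if count / len(column) > min_fraction else "N"
```

A column such as `AAAT` yields `A` (3/4 of reads agree), whereas an evenly split column such as `AATT` yields `N`, which is then subject to the manual inspection described above.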
In the second step, chloroplast sequences were assembled for each sequenced individual by mapping all of its reads to the species-specific consensus sequence generated in step (1). Read mappings were performed using the G[eneious]{.smallcaps} mapper with five iterations, allowing a maximum of 10% mismatches per read and a maximum gap size of 100 bp. The assembly results were manually checked for inconsistencies. No global coverage threshold was applied, as the read coverage of single accessions was relatively low. However, single-nucleotide polymorphisms (SNPs) relative to the reference that were covered by only a single read were masked. Finally, consensus sequences were called using the 'Highest Quality' option, which is able to resolve conflicts between reads because it takes the relative residue quality into account. 'N' was called for positions without coverage. Whole chloroplast sequences with more than 50% missing data were excluded from further analyses. A multiple sequence alignment of the whole chloroplast genomes generated in step (2) plus a set of GenBank-derived sequences (Additional file [1](#MOESM1){ref-type="media"}: Table S1) was generated using M[afft]{.smallcaps} 7 (<http://mafft.cbrc.jp/alignment/software>; accessed in November 2016; \[[@CR42]\]) applying the auto algorithm in combination with the 'nwildcard' option. The alignment was manually curated. The sequences generated in the scope of this study were annotated by comparing them to the annotations of GenBank accession KJ592713 \[[@CR43]\] in G[eneious]{.smallcaps}. All sequences were submitted to GenBank (accession numbers KX591961-KX592154 and KY635999-KY636181). The number of parsimony-informative positions was inferred using PAUP\* 4.0b10 \[[@CR44]\].

Phylogenetic analyses {#Sec6}
---------------------

We performed a Bayesian phylogenetic analysis for *ndh*F, as the sequence of this locus could be retrieved for all individuals without any missing data.
First, unique *ndh*F haplotypes were identified using TCS 1.2.1 \[[@CR45]\]. The best-suited model of sequence evolution was identified on the data matrix of unique haplotypes with [j]{.smallcaps}M[odel]{.smallcaps}T[est]{.smallcaps} 2.1.4 \[[@CR46]\] using the default parameters. The Bayesian information criterion (BIC; \[[@CR47]\]) was selected for model choice because of its high accuracy \[[@CR46]\] and its tendency to favour simpler models than the Akaike information criterion (AIC; \[[@CR48]\]). Bayesian inference (BI) was performed in M[r]{.smallcaps}B[ayes]{.smallcaps} 3.2.6 \[[@CR49]\] using the model inferred by [j]{.smallcaps}M[odel]{.smallcaps}T[est]{.smallcaps}. BI consisted of four independent analyses, each running for 20 million generations and sampling a tree every 1000 generations. BI of the whole chloroplast genome alignment was run with M[r]{.smallcaps}B[ayes]{.smallcaps} 3.2.6 on the CIPRES (Cyberinfrastructure for Phylogenetic Research) Science Gateway 3.3 \[[@CR50]\] for two datasets: (1) the complete alignment and (2) an alignment in which positions with more than 50% missing data were masked in G[eneious]{.smallcaps} version R10. The best-fitting models of sequence evolution were estimated by letting the Markov chain Monte Carlo (MCMC) sample across all substitution models (\[[@CR51]\]; 'lset nst = mixed'). For each dataset we performed three analyses to test the impact of different rate settings, i.e. (1) gamma-distributed rate variation, (2) a proportion of invariable sites and (3) both combined. The best-suited substitution model was identified in T[racer]{.smallcaps} by comparing the posterior probabilities with the AIC through MCMC (AICM; \[[@CR52]\]), which is less computationally intensive, though not as accurate, than the application of Bayes factors \[[@CR53]\].
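For reference, the two information criteria invoked above are simple functions of the maximized log-likelihood (lnL), the number of free model parameters (k) and the sample size (n). The sketch below illustrates the standard formulas; it is not part of the [j]{.smallcaps}M[odel]{.smallcaps}T[est]{.smallcaps} code, and the numbers in the comments are illustrative only.

```python
# Standard formulas behind the model choice discussed above:
#   AIC = 2k - 2 lnL
#   BIC = k ln(n) - 2 lnL
# For the alignment sizes used here (n in the thousands), ln(n) >> 2,
# which is why BIC penalizes extra parameters more strongly than AIC
# and hence tends to favour simpler substitution models.
import math

def aic(log_likelihood, k):
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    return k * math.log(n) - 2 * log_likelihood
```

For instance, with a hypothetical lnL of -1000 and k = 10 free parameters at an alignment length of n = 2232 sites, BIC exceeds AIC, reflecting its stronger parameter penalty.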
Each analysis was performed with two independent Metropolis-coupled MCMC runs, each with four sequentially heated chains (temperature set to 0.05), until the standard deviation of split frequencies reached 0.009 or until a maximum of 10 million generations or the maximum runtime of CIPRES was reached. Trees were sampled every 500 generations. For all Bayesian analyses, *Brachypodium distachyon* (EU325680) was set as outgroup and the convergence of the runs was assessed in T[racer]{.smallcaps} v. 1.6 \[[@CR54]\]. A consensus tree was computed after discarding the first 25% of trees as burn-in. Additionally, a Bayes factor (BF; \[[@CR55]\]) analysis was carried out for the *ndh*F dataset to further evaluate the monophyly of Triticeae chloroplasts. Mean marginal log-likelihoods were computed using stepping-stone sampling \[[@CR56]\] in M[r]{.smallcaps}B[ayes]{.smallcaps} 3.2.6 \[[@CR49]\] for monophyletic and polyphyletic relationships of Triticeae, with the substitution model as identified in [j]{.smallcaps}M[odel]{.smallcaps}T[est]{.smallcaps}. Each analysis consisted of two million generations with four independent runs of four parallel chains. The BF was evaluated using ten as the cut-off value \[[@CR57]\].

Estimating divergence times using *trn*K-*mat*K, *rbc*L and *ndh*F {#Sec7}
------------------------------------------------------------------

We inferred a calibrated phylogeny for the three plastid loci *trn*K-*mat*K, *rbc*L and *ndh*F. First, we tested the robustness of the calibration of the most recent common ancestor (MRCA) of *Brachypodium* and Triticeae when increasing the sampling for Triticeae from 12 to 37 species compared to Marcussen et al. \[[@CR20]\]. For this, a Bayesian coalescent analysis of the subfamily Pooideae based on *trn*K-*mat*K, *rbc*L and *ndh*F was performed. The same GenBank sequences were assembled into contiguous sequences as described and used in Marcussen et al. \[[@CR20]\].
This set of GenBank accessions was complemented with sequences assembled in this study whenever additional taxa or more sequence information for a certain taxon could be added. Following Marcussen et al. \[[@CR20]\], we restricted ourselves to one sequence per species. We used the species-specific sequences from step (1) of the sequence-assembly procedure rather than selecting a single accession per taxon, an approach comparable to Pelser et al. \[[@CR58]\]. This allowed us to employ all phylogenetic information available for a taxon and to bridge stretches of missing data. Conspecific sequences used for consensus inference showed 99.96--100% identical sites. The best partitioning schemes and DNA substitution models were inferred with P[artition]{.smallcaps}F[inder]{.smallcaps} \[[@CR59], [@CR60]\], comparing all possible partitioning schemes. The analysis was carried out in B[east]{.smallcaps} 2.4.1 \[[@CR61]\] using the combination of age priors of analyses 2, 4, 6, 10 and 17 as published in Marcussen et al. \[[@CR20]\]. For each setting one replicate was performed. Priors on the root age were set as stem-node ages ('use originate'). We found the divergence time of *Brachypodium* and Triticeae as inferred by Marcussen et al. \[[@CR20]\] to be robust. Second, we performed a multispecies coalescent (MSC) analysis of *trn*K-*mat*K, *rbc*L and *ndh*F for all Triticeae accessions, using this divergence time as a secondary calibration point, applied as a normally distributed prior (mean 44.44 million years ago (Ma) ± 3.53) on the root of *Brachypodium*-Triticeae. We excluded gene sequences of *trn*K-*mat*K and *rbc*L if they showed more than 50% missing data, as well as the sequences of all polyploid wheat accessions. Sequences of *Zea mays*, *Oryza sativa*, *Brachypodium distachyon* and two *Bromus* species were included as outgroup taxa. The taxa *Triticum monococcum* and *T. boeoticum*, *Secale cereale* and *S. vavilovii*, *Pseudoroegneria tauri* and *Ps*.
*libanotica*, *Taeniatherum caput-medusae* and *Tae*. *crinitum*, and *Agropyron cristatum* and *Agr. cimmericum* were each subsumed under the same species name (Additional file [1](#MOESM1){ref-type="media"}: Table S1), as no pronounced genetic variation was detected in the analysis of the whole chloroplast sequences. In this we followed existing taxonomic treatments, which already unify these taxa under a single species name (see, e.g. \[[@CR62]\]). We performed MSC analyses for one dataset including *Psathyrostachys* and another one without it to evaluate the impact of this taxon on divergence times. Monophyly of Triticeae was not enforced in either analysis, in line with the result of the Bayes factor analysis. For each dataset, the best partitioning schemes and DNA substitution models were first inferred with P[artition]{.smallcaps}F[inder]{.smallcaps}, searching all partitioning schemes. The analysis was run with the substitution models linked, the Yule species-tree prior, and the piecewise-linear and constant-root population model. Since rate constancy was systematically rejected for all loci by the likelihood-ratio test \[[@CR63]\], an uncorrelated lognormal clock model (\[[@CR64]\]; uniform ucld.mean: min 0, max 0.01) was used. Trees were sampled every 5000 generations. Four independent analyses were performed, each run for 600 million generations. All MSC analyses were run using the B[eagle]{.smallcaps} library \[[@CR65]\]. Effective sample sizes (ESS) and convergence of the analyses were assessed using T[racer]{.smallcaps} v. 1.6 \[[@CR54]\]. An appropriate burn-in was estimated from each trace file, and the four analyses were combined with L[og]{.smallcaps}C[ombiner]{.smallcaps}, part of the B[east]{.smallcaps} package. A maximum clade credibility (MCC) tree was then summarised with T[ree]{.smallcaps}A[nnotator]{.smallcaps} and visualized with F[ig]{.smallcaps}T[ree]{.smallcaps} 1.4.2 \[[@CR66]\].
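The Bayes factor test of Triticeae monophyly described above reduces to a comparison of the two stepping-stone marginal log-likelihoods. The sketch below illustrates the conventional 2 ln BF scale with the cut-off of ten; the marginal log-likelihood values in it are hypothetical, chosen only to reproduce a score of 36.4, and are not the study's values.

```python
# Illustrative sketch of the Bayes factor comparison: 2 ln BF is twice
# the difference between the marginal log-likelihoods of the competing
# hypotheses; values above the cut-off of 10 are conventionally read as
# very strong support for the better-fitting hypothesis.
def two_ln_bayes_factor(lnml_h1, lnml_h0):
    return 2.0 * (lnml_h1 - lnml_h0)

def very_strong_support(lnml_h1, lnml_h0, cutoff=10.0):
    return two_ln_bayes_factor(lnml_h1, lnml_h0) > cutoff

# Hypothetical stepping-stone marginal log-likelihoods (NOT the study's
# values): -5000.0 (paraphyly) vs. -5018.2 (monophyly) gives 2 ln BF = 36.4.
```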
Results {#Sec8}
=======

Ploidy levels {#Sec9}
-------------

Flow cytometric measurements were performed for all accessions to be able to distinguish between different ploidy levels within the same species (Additional file [1](#MOESM1){ref-type="media"}: Table S1). We identified di- and tetraploid accessions of *Agropyron cristatum*, *Eremopyrum bonaepartis*, *Pseudoroegneria stipifolia* and *Ps. strigosa*, and detected tetra- and hexaploid cytotypes of *Aegilops crassa* and *Ae. neglecta*. Flow cytometric measurements were also used as additional information to confirm species affiliations \[[@CR67]\]. For example, comparing the genome sizes measured for the diploid species *Thinopyrum bessarabicum* and *Th. elongatum* with data from the Kew Angiosperm DNA C-values database revealed that the analysed accessions actually represent polyploids instead of diploids. For more information on problematic material from seed banks see Additional file [1](#MOESM1){ref-type="media"}: Table S1.

Sequence assembly {#Sec10}
-----------------

The target-enrichment protocol and Illumina sequencing were applied to 194 accessions, covering 53 species of 15 genera (depending on the applied classificatory system) and three outgroup species (i.e. *Bromus* and *Brachypodium*; Table [1](#Tab1){ref-type="table"}, Additional file [1](#MOESM1){ref-type="media"}: Table S1). Whole chloroplast genomes were assembled in a two-step procedure via (1) an intermediate step of generating a species-specific reference if none was available in GenBank and (2) the assembly of the chloroplast genome of each accession via read mapping to the sequences from step (1). The average coverage of the chloroplast genome varied considerably between samples and depended mainly on the actual sequencing depth. Between approximately 50% and 90% of the reads mapping to the chloroplast genome mapped to *ndh*F (Additional file [2](#MOESM2){ref-type="media"}: Table S2), which was included in the bait design.
Thus, the *ndh*F gene could be assembled for all accessions without missing data. We identified 64 unique haplotypes when comparing the *ndh*F gene data plus the sequences downloaded from GenBank (Additional file [1](#MOESM1){ref-type="media"}: Table S1). The alignment of these 64 haplotypes had a total length of 2232 bp with 186 (8.3%) parsimony-informative sites. The entire alignment of whole chloroplast genome sequences comprised 222 sequences, 39 of which were downloaded from GenBank. This alignment ranged from *psb*A in the large single-copy region to partial *ndh*H in the small single-copy region and had a total length of 123,531 bp. It had 9064 (7.3%) parsimony-informative positions. The data matrix included 9.3% missing data ('N'). These randomly distributed stretches of missing data occur in alignment regions where the sequencing coverage was insufficient. Additionally, the matrix contained 7.5% gaps due to length variation between taxa. In several cases taxa showed long indels in intergenic regions; thus, the same 900 bp deletion was found between *rpl*23 and *ndh*B in *Pseudoroegneria*, *Thinopyrum* and *Dasypyrum.* Many short indels (3--40 bp) were found in introns of coding genes (e.g. *ycf*3) and in intergenic spacers. A variant of this alignment, with regions containing more than 50% missing data removed, had a total length of 114,788 bp. In this case 8717 (7.6%) positions of the alignment were parsimony informative, while 9.2% of the characters were N's and 0.8% were gaps. Alignment masking mainly excluded regions of length variation due to short repeat motifs in intergenic regions. With only a few substitutions per chloroplast genome, intraspecific variation was generally very low. The alignment revealed insertions unique to some GenBank sequences whose true occurrence could not be confirmed by our data: no reads from our analysed individuals mapped to these insertions.
Moreover, BLAST searches of these regions returned mitochondrial and/or nuclear genomic data as best hits (e.g. KC912690, KC912692, KC912693, KC912694), indicating assembly artefacts. These GenBank sequences were excluded from further analyses (Additional file [1](#MOESM1){ref-type="media"}: Table S1).

Phylogenetic analyses {#Sec11}
---------------------

We performed a BI analysis on the set of 64 unique *ndh*F haplotypes with the model of sequence substitution set to GTR + G \[[@CR68], [@CR69]\], as identified by [j]{.smallcaps}M[odel]{.smallcaps}T[est]{.smallcaps}. The phylogenetic tree obtained from *ndh*F (Fig. [1](#Fig1){ref-type="fig"}) shows Triticeae to be paraphyletic, as the lineage of *Psathyrostachys* appears to have diverged before the lineage of *Bromus*, although the position of *Bromus* is not well supported (posterior probability, pp = 0.88). The branch lengths of the *Bromus* group are considerably longer than those of *Psathyrostachys*. The topology shows that the individuals of most species and/or genera form monophyletic groups. However, *Eremopyrum bonaepartis* is polyphyletic, as the diploid plastid type of *E. bonaepartis* groups as sister to *Henrardia persica*, while the haplotypes of all tetraploid *E. bonaepartis* and diploid *E. triticeum* form a clade with *Agropyron cristatum*. A common maternal ancestor can be hypothesized for *Agropyron*, *Australopyrum*, *Eremopyrum* and *Henrardia*, as these taxa form a well-supported clade that is sister to the clade of *Hordeum* species. The clades formed by the genera *Heteranthelium*, *Secale* and *Taeniatherum* are placed on a polytomy together with a clade formed by taxa having a **B**, **G** or **S** genome \[i.e. *Aegilops speltoides* (**S**) and all polyploid *Triticum* taxa (**B**/**G**)\], the clade of taxa with an **E**, **St** or **V** genome (i.e. *Thinopyrum*, *Pseudoroegneria* and *Dasypyrum*), and the clade of all remaining *Aegilops*, *Amblyopyrum* and diploid *Triticum* taxa.
*Pseudoroegneria* appears paraphyletic, as *Dasypyrum* and *Thinopyrum* haplotypes group within this clade. The backbone of this clade is a polytomy. Notably, the placement of the otherwise monophyletic *Dasypyrum* is not supported. Several different haplotypes can be distinguished within various species of *Pseudoroegneria* itself (e.g. *Ps. spicata*, *Ps. strigosa*, *Ps. tauri*, *Ps. stipifolia*). Furthermore, the two **A**-genome species *Triticum urartu* and *T. monococcum* are monophyletic. All **D**-genome species (i.e. *Ae. tauschii*, *Ae. cylindrica* and *Ae. ventricosa*) also form a clade. Both genomic groups are located on a polytomy together with the remaining *Aegilops* species and *Amblyopyrum*. *Aegilops crassa* and *Ae. juvenalis* (**D'**) group apart from the other **D** taxa and show a *ndh*F haplotype with fewer nucleotide differences to the **S\*** than to the **D** chloroplast lineages (i.e. one SNP difference to **S\*** vs. three and five SNPs to **D**). All diploid and polyploid **S\*** species sequenced in the scope of this study share the same *ndh*F haplotype. *Aegilops comosa* (**M**) and *Ae. uniaristata* (**N**) are sister species. All **U**-genome taxa fall into the same clade together with *Aegilops geniculata* (**M°**) and *Amblyopyrum muticum* (**T**). *Aegilops triuncialis* accessions possess **U** as well as **C** haplotypes.

Fig. 1 Phylogenetic tree derived from 2232 bp of the chloroplast locus *ndh*F via Bayesian inference. The multiple sequence alignment consisted of 64 unique haplotypes originating from the 194 accessions sequenced in the scope of this study and 41 sequences retrieved from GenBank. *Brachypodium distachyon* was set as outgroup taxon. Posterior probabilities (pp) for the main clades are depicted next to the nodes if they were higher than 0.75. Each unique haplotype is named with a distinctive identifier.
For detailed information on which accession possesses which haplotype, and for species synonyms, see Additional file [1](#MOESM1){ref-type="media"}: Table S1. The ploidy level is indicated behind the taxon labels. If multiple accessions per taxon share the same haplotype, the number of accessions is provided behind the taxon label. Single accessions grouping apart from the other accessions of their taxon are marked with an *asterisk*. To the right the genomic groups are shown. *Arrows* with support values indicate the nodes they refer to.

Sometimes, single accessions of a species group within the otherwise monophyletic clade of another species. Thus, accession AE_1831 of *Aegilops markgrafii* (**C**) falls into the clade of *Amblyopyrum muticum* (**T**), while KP_2012_119 of *Aegilops biuncialis* (**U**) falls within *Ae. geniculata* (**M°**). Accession AE_586 of *Aegilops neglecta* (**U**) groups together with *Ae. markgrafii* (**C**). Further, intraspecific variation within *ndh*F was found in several cases, for example for *Aegilops comosa*, *Ae. speltoides*, *Amblyopyrum muticum* and *Dasypyrum villosum.* With a score of 36.4, the BF strongly favours Triticeae chloroplasts as paraphyletic when *Psathyrostachys* is included in the analysis (Additional file [3](#MOESM3){ref-type="media"}: Table S3). As the resolution of the phylogenetic tree from the *ndh*F dataset is not sufficient to distinguish between more recently diverged taxa, the whole chloroplast genome dataset was phylogenetically analysed by BI using an alignment of the entire chloroplast genomes and a variant of it in which positions with more than 50% missing data were masked. In both cases M[r]{.smallcaps}B[ayes]{.smallcaps} identified a GTR submodel in combination with gamma-distributed rate variation as the best-suited substitution model. The topologies (Fig.
[2](#Fig2){ref-type="fig"}, Additional file [4](#MOESM4){ref-type="media"}: Figure S1) returned by both analyses are largely congruent with each other and with the *ndh*F tree. However, nodes of deep splits that are only moderately supported in the complete plastid data matrix show higher support in the dataset in which low-coverage regions were masked. This is, for example, the case for the split of the ancestor of *Bromus.* The branch-length differences between *Bromus* and *Psathyrostachys* are in agreement with the *ndh*F tree. In contrast to the *ndh*F dataset, the whole chloroplast phylogenies provide a hypothesis of the relationships between all major genomic groups. They suggest that the **E**, **St** and **V** clade (i.e. *Thinopyrum*, *Pseudoroegneria* and *Dasypyrum*) diverged before *Heteranthelium*, which in turn split before *Secale* and *Taeniatherum. Pseudoroegneria spicata* forms its own clade that diverged first from *Dasypyrum* and the remaining taxa within this clade. However, the *Dasypyrum* chloroplast genomes are characterized by rather long branches compared to the other taxa in this clade. Furthermore, *Dasypyrum* comprises two well-differentiated haplotypes. *Aegilops speltoides* and the polyploid wheat species form three groups: (1) most *Ae. speltoides* accessions form a clade of their own (**S**), (2) some *Ae. speltoides* accessions group together with *Triticum timopheevii*, *T. zhukovskyi* and the artificially synthesized wheat *T. kiharae* (**G**), and (3) all accessions of *T. turgidum* and *T. aestivum* share the same haplotype (**B**). The unsupported placement of one *Ae. speltoides* accession (PI_48721) close to the **S** group shifts to a supported position in the **G** group when regions with a high extent of missing data are masked. Additionally, the use of entire chloroplast genomes resolves that the diploid *Triticum* species (**A**) diverged before the **D**-genome taxa and the remaining *Aegilops* species and *Amblyopyrum*.
The phylogeny also indicates that **D'** is closely related to but distinct from **D**. Further, the **M°**, **T** and **U** taxa form a clade that diverged before the split of the taxa having a **C**, **N**, **M** or **S\*** genome. Within this clade the sister-species relationship of *Aegilops comosa* and *Ae. uniaristata* is confirmed. *Aegilops comosa* (**M**) groups distinct from the other, **M°**, plastid type. The species *Ae. searsii*, *Ae. bicornis*, *Ae. longissima* and *Ae. sharonensis* form a clade together with the polyploids *Ae. kotschyi* and *Ae. peregrina* (**S\***), indicating only very little sequence variation. Concordant with the *ndh*F tree, one sequence each of *Ae. markgrafii* (AE_1831), *Ae. biuncialis* (KP_2012_119) and *Ae. neglecta* (AE_586) groups apart from the other sequences of its respective taxon.

Fig. 2 Phylogenetic tree derived from an alignment of whole chloroplast genome sequences via Bayesian phylogenetic inference. The multiple sequence alignment comprised 183 genomes assembled in the present study and 39 genomes downloaded from GenBank. *Brachypodium distachyon* was defined as outgroup taxon. The tree shown corresponds to an analysis based on the complete alignment of 123,531 base pairs (bp). Clades were collapsed into triangles to reflect the main groupings. The area of the *triangles* reflects the genetic variation contained in a certain clade. Posterior probabilities (pp) for the main clades are depicted next to the nodes if they were higher than 0.75. Support values of a second Bayesian phylogenetic analysis, based on 114,788 bp of whole chloroplast genomes with alignment positions containing more than 50% missing data masked, are shown below the values of the corresponding nodes of the complete chloroplast analysis if the values differed between analyses. Ploidy levels are provided in brackets after the taxon labels. Single accessions grouping apart from other accessions of their taxon are highlighted with an *asterisk*.
To the *right* the genomic groups are indicated. The *red circle* represents the secondary calibration point from Marcussen et al. \[[@CR20]\] used for node calibration in the multispecies coalescent (MSC) analyses. Major nodes are shown in *blue* and their estimated ages in million years are given in the *box*. Two age values for the same node correspond to the analyses with (first value) and without (second value) the inclusion of *Psathyrostachys*. For more information on the results of the MSC analyses see Additional file [5](#MOESM5){ref-type="media"}: Figure S2 and Additional file [6](#MOESM6){ref-type="media"}: Figure S3. For the full representation of the tree showing the grouping of all single accessions see Additional file [4](#MOESM4){ref-type="media"}: Figure S1. For species synonyms see Additional file [1](#MOESM1){ref-type="media"}: Table S1. *Arrows* with support values indicate the nodes they refer to.

Ages of clades {#Sec12}
--------------

Divergence times were estimated based on the *trn*K-*mat*K, *rbc*L and *ndh*F sequences of each accession included in the study, using an uncorrelated lognormal clock model and a secondary calibration on the MRCA of *Brachypodium distachyon* and Triticeae in \*BEAST. Different ages for the split of Triticeae and *Bromus* were obtained depending on the in- or exclusion of the genus *Psathyrostachys*. Including *Psathyrostachys*, Triticeae are paraphyletic and the ages are slightly older, but with larger and overlapping 95% highest posterior densities (HPD), compared to the dataset that does not comprise *Psathyrostachys* (Additional file [5](#MOESM5){ref-type="media"}: Figure S2, Additional file [6](#MOESM6){ref-type="media"}: Figure S3). In the analysis including *Psathyrostachys*, the most recent common ancestor (MRCA) of Triticeae and *Bromus* occurred approximately 19.44 Ma (95% HPD = 12.66-27.20).
The split of *Bromus* and the remaining Triticeae (termed "core Triticeae") occurred approximately 15.77 Ma (95% HPD = 9.38-22.75). The age of this split does not seem affected by the absence of *Psathyrostachys* (15.41 Ma, 95% HPD = 10.72-20.83). However, the MRCA of the core Triticeae occurred approximately 12.17 Ma (95% HPD = 7.65-17.44) including *Psathyrostachys* and nearly 2.5 million years later (9.68 Ma, 95% HPD = 7.42-12.21) in the analysis omitting this early diverging lineage. The MRCA of *Aegilops*, *Triticum* and *Amblyopyrum* (plus *Taeniatherum*) occurred around 4.14 Ma (95% HPD = 2.48-6.44) including *Psathyrostachys* and 3.38 Ma (95% HPD = 2.35-4.47) when omitting it.

Discussion {#Sec13}
==========

Plant materials {#Sec14}
---------------

The analysed accessions were mainly acquired from several seed banks (i.e. ICARDA, IPK, USDA, the Czech Crop Research Institute), but additional material was collected during field trips. Multiple accessions per species and intra-specific entities were selected to be able to detect intraspecific genetic variability. The performance of genome size measurements allowed the distinction of ploidy level differences for accessions of the same species. Our findings of different ploidy levels within *Agropyron cristatum*, *Eremopyrum bonaepartis*, *Pseudoroegneria strigosa*, *Aegilops crassa* and *Ae. neglecta* are in agreement with previous work \[[@CR70]--[@CR74]\]. For the first time we report the occurrence of different ploidy levels for *Pseudoroegneria stipifolia*. A few accessions were found to have unexpected genome sizes, e.g. in *Thinopyrum*. Concerns about the condition of seed bank material have been raised in other studies and are related to the fact that it is often maintained under conditions that permit open pollination over several rounds of seed replication \[[@CR75], [@CR76]\].
As Triticeae show species-specific genome sizes \[[@CR67], [@CR77], [@CR78]\], the performance of flow cytometric measurements is a good strategy to detect problematic material, especially in the case of perennial Triticeae, where inflorescences for morphological species determination cannot always be obtained within the timeframe of a research project. Also in this study, a few selected accessions needed to be excluded due to deviations in genome size or morphological characters. However, the vast majority of the material did not reveal any peculiarities, and samples directly collected in the wild always grouped with other samples of the same species.

Sequence assembly {#Sec15}
-----------------

In this study we assembled the chloroplast *ndh*F gene and complete chloroplast genomes for a comprehensive set of Triticeae taxa, using for the latter the off-target sequence reads of a target-enrichment approach coupled with NGS sequencing. The *ndh*F gene could be assembled for 194 accessions representing 53 Triticeae and three outgroup species without missing data, as it was included in the bait design for sequence enrichment. We obtained a set of 183 whole chloroplast genome sequences that provide new plastid genomes of 36 Triticeae species out of 15 genera for which so far no such sequence was available. From these data we estimated the maternal relationships within Triticeae. In previous studies off-target reads have been successfully analysed in diverse organism groups \[[@CR36], [@CR79]--[@CR82]\]. Because the chloroplast genome occurs in high copy number in the cells, it constitutes the main fraction of off-target reads in target-enrichment approaches in plants. Therefore, the majority of reads identified as chloroplast DNA most probably originated from this genome and not from parts that were transferred from the chloroplast to the nuclear genome, which should be rare in off-target reads.
The pooling of samples from multiple conspecific individuals allowed us to overcome the low coverage for individual samples and to assemble chloroplast genomes to be used as taxon-specific references for the assembly of individual chloroplast genomes of accessions for which no conspecific reference was available in GenBank. Stretches of missing data remain in the final individual-based assemblies of the plastid genomes. As these stretches occur randomly along the chromosome, they do not influence the detection of structural differences (indels) between chloroplast genomes of species and/or genera. Generally, indels and base substitutions occur mostly in spacer regions of the Triticeae chloroplast genomes. An increase in sequencing depth may have allowed assembling the chloroplast genomes of all individuals without any missing data. However, the comparison of accessions sequenced at different depths shows that higher overall sequencing coverage will not guarantee a complete chloroplast sequence, as off-target regions are retained more or less randomly during the enrichment process. The most problematic part in assembling the reads was to reach confidence about the detected indel positions, as the short read length of 2 × 100 bp of the Illumina platform did not always cover such regions completely. The whole genome sequences we provide were carefully checked manually and compared to available sequences in GenBank. Comparable to other studies (e.g. \[[@CR32], [@CR43]\]), we were not able to confirm all parts of GenBank-derived sequences obtained from whole-genome shotgun sequencing. It might be that they contain some unidentified assembly errors. With the now available longer Illumina paired-end reads of 2 × 250 bp these problems should become less severe in future studies. Finally, the topologies validated our assembly procedure, as previously published GenBank sequences always grouped in their respective clades irrespective of the small differences found.
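The randomly placed stretches of missing data described above can be located directly from a per-base depth track of the read mapping. A minimal sketch (the function name and the depth input are hypothetical illustrations, not part of the study's actual pipeline):

```python
def zero_coverage_runs(depths, min_len=1):
    """Return half-open (start, end) intervals where mapping depth is zero.

    `depths` is assumed to be the per-base read depth along an assembled
    chloroplast genome (e.g. exported from any read mapper's depth track).
    """
    runs, start = [], None
    for i, d in enumerate(depths):
        if d == 0 and start is None:
            start = i                      # a gap begins
        elif d > 0 and start is not None:
            if i - start >= min_len:       # a gap ends; keep it if long enough
                runs.append((start, i))
            start = None
    if start is not None and len(depths) - start >= min_len:
        runs.append((start, len(depths)))  # gap running to the chromosome end
    return runs

# toy depth track: positions 1-2 and 4 are uncovered
print(zero_coverage_runs([3, 0, 0, 2, 0]))  # -> [(1, 3), (4, 5)]
```

Intersecting the resulting gap list with an annotation would then show whether a gap falls in a spacer or in a coding region.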
Maternal phylogeny of Triticeae {#Sec16}
-------------------------------

In this work we aimed for a molecular phylogeny of the chloroplast lineages in Triticeae. The results from *ndh*F and whole chloroplast genome phylogenetic analyses are mainly in agreement with hypotheses previously published for groups within the tribe \[[@CR9], [@CR26], [@CR83]\] and with respect to the domesticated wheats and their close relatives \[[@CR30], [@CR31], [@CR84]\]. Compared to these latter publications a better understanding was obtained, particularly because of the comprehensive taxon sampling, the use of whole chloroplast genomes, and the inclusion of multiple individuals per species. The tribe Triticeae is generally accepted to be monophyletic \[[@CR22], [@CR23], [@CR85]--[@CR87]\], with *Bromus*, the only genus in the tribe Bromeae, being the sister group to all Triticeae \[[@CR88], [@CR89]\]. However, based on our data, but also on previously published chloroplast data \[[@CR26], [@CR35], [@CR90]\], the monophyly of Triticeae was either rejected or not supported. As morphology \[[@CR23]\] and also phylogenies based on nuclear data place *Psathyrostachys* at the base of Triticeae close to *Hordeum* (\[[@CR10]\]; own unpublished data), we see two possibilities to explain the chloroplast phylogeny: either *Psathyrostachys* obtained the chloroplast of a close and nowadays extinct relative belonging to the ancestral Triticeae-Bromeae gene pool, or, vice versa, an ancestor belonging to the *Bromus* stem group obtained a chloroplast from early Triticeae. In any case, a chloroplast phylogeny including *Bromus* and *Psathyrostachys* might not reflect Triticeae relationships very well, at least for its basal groups, and will also influence the outcome of molecular dating approaches (see below).
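The rejection of Triticeae monophyly rests on a Bayes factor comparison of stepping-stone marginal likelihoods (described in Additional file 3). The arithmetic of that comparison can be sketched as follows; the two log-likelihood values below are placeholders for illustration, not results from the study:

```python
# Stepping-stone marginal log-likelihoods estimated under two constraints:
# H1 enforces monophyly of Triticeae chloroplasts, H2 enforces polyphyly.
# Placeholder values -- the real numbers come from the MrBayes runs.
lnML_H1 = -61234.8
lnML_H2 = -61220.1

# Bayes factor on the 2*ln scale, as defined in the paper: BF12 = 2*(H1 - H2)
bf12 = 2.0 * (lnML_H1 - lnML_H2)

# BF12 < -10 is conventionally read as strong support for model 2 (polyphyly)
strong_support_for_h2 = bf12 < -10
print(bf12, strong_support_for_h2)
```

With these placeholder values the statistic is −29.4, i.e. well past the −10 threshold the authors use for "strong support".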
The retrieved chloroplast phylogeny indicates a common maternal ancestor for the genera *Australopyrum*, *Eremopyrum*, *Agropyron* and *Henrardia*, with *Eremopyrum*, *Agropyron* and *Henrardia* currently having overlapping distribution areas in southern Europe and western Asia. The monogenomic genus *Australopyrum* (**W**) and all allopolyploid taxa possessing a **W** genome (*Stenostachys* - **HW**, *Anthosachne* - **StYW**, *Connorochloa* - **StYHW**; taxa not sampled) are endemic to dry and temperate Australasia \[[@CR91]\]. This supports speciation in allopatry after long-distance dispersal of an *Australopyrum* progenitor and likely recurrent formation of allopolyploid taxa involving numerous other Triticeae species in Australasia. A sister relationship between the species of *Agropyron* and *Eremopyrum* has also been proposed by other studies. However, when *Eremopyrum bonaepartis* was included, *Eremopyrum* became polyphyletic, with the diploid cytotype being sister to *Henrardia*. This is in agreement with earlier findings \[[@CR10], [@CR92], [@CR93]\]. Similar to Mason-Gamer \[[@CR83]\], we found that *Pseudoroegneria*, *Dasypyrum* and *Thinopyrum* form a monophyletic clade, indicating that they belong to the same maternal lineage. A sister relationship of *Pseudoroegneria* and *Dasypyrum* has been proposed recently by Escobar et al. \[[@CR10]\] based on nuclear data. In our dataset, however, *Dasypyrum* groups within *Pseudoroegneria*. Within *Dasypyrum*, accessions from Bulgaria and Italy cluster together, while material from Turkey and Greece forms another sub-clade. Hence, this pattern may indicate some recent local differentiation.
The polyphyletic grouping of *Thinopyrum* within this clade can be explained either by incomplete lineage sorting (ILS) or because *Thinopyrum* repeatedly captured different plastid types of *Pseudoroegneria*. A close relationship to the *Aegilops*-*Triticum*-*Amblyopyrum* group has been reported for *Thinopyrum* based on nuclear data \[[@CR3], [@CR83], [@CR93]--[@CR95]\]. This incongruence might be explained by the fact that *Thinopyrum*, but also *Dasypyrum* and *Pseudoroegneria*, are outcrossing taxa \[[@CR10], [@CR96]\], which seems to increase the chance of chloroplast capture via hybridization and back-crossing \[[@CR25]\]. Moreover, most taxa have overlapping distribution areas in the Caucasus region, also facilitating hybridization. Our results revealed no major sequence variation among the chloroplast genomes of *Secale strictum* and *S. cereale*/*S. vavilovii*. This points to a very recent diversification within this genus. It is well known that the species of *Triticum*, *Aegilops* and *Amblyopyrum muticum* are closely related and of rather recent origin \[[@CR7], [@CR10], [@CR20], [@CR26]\]. To date, there is no general agreement on how taxa within this species complex are related to each other, even at the diploid level. There is an on-going dispute whether *Aegilops* and *Triticum* should be merged into one genus, and whether *Amblyopyrum muticum* should be included in *Aegilops* \[[@CR74], [@CR84], [@CR97]--[@CR99]\]. In agreement with Bordbar et al. \[[@CR9]\], the chloroplast phylogeny revealed that *Am. muticum* possesses a chloroplast genome similar to the **M** and **U** genome groups, although based on nuclear data *Am. muticum* appears to be sister to all *Aegilops* and *Triticum* species \[[@CR7]\]. The *Aegilops*-like chloroplast genome of *Am. muticum* might be explained by the existence of a common ancestor and therefore a chloroplast genome already shared before the divergence of these lineages.
Alternatively, it may indicate that it captured the chloroplast from one of these species or their MRCA, which is geographically possible, as distribution areas overlap in Turkey and Armenia. Polyploid *Triticum* species and *Aegilops speltoides* formed a clade, supporting that *Ae. speltoides* is the maternal donor of the polyploid wheat genomes. The relationships within this clade corroborate the hypothesis that two different *Ae. speltoides* lineages were involved in their formation \[[@CR30], [@CR74], [@CR100], [@CR101]\]. The direct maternal donor for *Triticum timopheevii* and *T. zhukovskyi* (**G**) could be identified, as they share the chloroplast haplotype of three *Ae. speltoides* accessions originating from Iraq and Syria. However, the donor remains uncertain for *Triticum turgidum* and *T. aestivum* (**B**), indicating that either our sampling of *Ae. speltoides* was not sufficient to cover the species diversity or that the donor lineage is nowadays extinct. Alternatively, Gornicki et al. \[[@CR30]\] suggested that tetraploidisation within this clade predates that of *T. timopheevii*. All taxa of the genus *Triticum* s.str. fall into one clade together with *Aegilops* and *Amblyopyrum*. *Triticum* taxa that were elevated to species rank by Dorofeev et al. \[[@CR102]\] could not be distinguished on the basis of their chloroplast haplotypes, which supports the taxonomic treatment of van Slageren \[[@CR97]\] subsuming them under the same species name (Additional file [1](#MOESM1){ref-type="media"}: Table S1). Based on chloroplast data, and supported by the findings of Petersen et al. \[[@CR7]\] and Li et al. \[[@CR84]\], *Ae. speltoides* (**S**) appears to be the species that diverged earliest from all other *Aegilops* species. Generally, the wheat group is characterized by short branch lengths and plastid haplotypes shared by multiple species. This is most probably due to the very recent divergence of these species.
Chloroplast capture as indicator of hybridization events {#Sec17}
--------------------------------------------------------

The exchange of chloroplasts among closely related plant species has been reported in diverse plant groups, and the effect of hybridization on Triticeae taxa is a matter of discussion. For example, a homoploid hybrid origin of the **D**-genome lineage involving the **A**- and **B**-genome lineages is the subject of a recent dispute \[[@CR20], [@CR84], [@CR98], [@CR99]\]. However, our and previous studies \[[@CR30], [@CR31], [@CR84]\] revealed three independent but closely related chloroplast lineages, with plastids of the **A**-genome lineage being more closely related to the ones of the **D** genome, which can be explained by consecutive divergence. Hence, if such a hybridization event occurred it only affected the nuclear genome. Although recent publications agree that the detection of hybridization events depends mainly on taxon sampling \[[@CR19]\], so far all postulated hypotheses for Triticeae are based on a limited choice of taxa. In our study, three possible cases of ancient chloroplast capture were identified, i.e. for (1) *Bromus*/*Psathyrostachys*, (2) *Thinopyrum* and (3) *Amblyopyrum*, as the chloroplast phylogeny looks considerably different from phylogenies retrieved from nuclear data \[[@CR7], [@CR83]\]. More recent events of chloroplast capture were identified for single accessions of the species *Aegilops biuncialis*, *Ae. markgrafii*, *Ae. neglecta* and *Ae. triuncialis* that grouped within clades of other closely related species. We assume such hybridization events to occur frequently between various taxa of the wheat group due to incomplete reproductive isolation among these young species.

Ages of clades {#Sec18}
--------------

To obtain dated phylogenies of Triticeae we used the split of *Brachypodium* and Triticeae as a secondary calibration point \[[@CR20]\], based on *trn*K-*mat*K, *rbc*L and *ndh*F sequences.
Pros and cons of using chloroplast data for the estimation of divergence times were already discussed by Middleton et al. \[[@CR31]\], who argued that splits of chloroplast lineages might be older than the respective species, resulting in overestimated taxon ages for medium-aged and young clades. For dating in Triticeae we see an additional concern when using chloroplast data. Due to the mostly low substitution rates in plastid genomes \[[@CR103]\], underestimation of ages is also possible in young clades, as the fixation of mutations occurs as a stochastic process \[[@CR30], [@CR104], [@CR105]\] that might be slower than species diversification. In these cases, already well-diverged taxa might still possess very similar or identical chloroplast haplotypes \[[@CR106]\], resulting in lower age estimates in comparison to nuclear data. This might be the case for many nodes of our tree, although the divergence times retrieved for the main splits are generally about 1 million years older than the ones obtained by Middleton et al. \[[@CR31]\]. Our analyses suggest the occurrence of a MRCA for the *Aegilops*/*Triticum* group at approximately 4 Ma, while divergence times of this complex were proposed to date back to approximately 3 Ma \[[@CR31]\] or 6.55 Ma based on a dataset of five nuclear and one plastid gene \[[@CR20]\]. Another critical topic regarding chloroplast-based dating in Triticeae results from the chloroplast data of *Psathyrostachys*. Our results support the hypothesis that the chloroplast of either *P. juncea* or a *Bromus* ancestor was obtained through chloroplast capture from a taxon belonging to the *Bromus*/Triticeae stem lineage, resulting in *P. juncea* clearly falling outside the otherwise monophyletic Triticeae. We strongly favour an event of chloroplast capture over ILS as the cause for the observed relationships.
The pronounced sequence variation between *Bromus*, *Psathyrostachys* and the remaining Triticeae for entire chloroplast genomes is best explained by strong and independent sequence divergence of *Bromus* and *Psathyrostachys* compared to the remaining Triticeae. Moreover, if ILS were the reason for the observed relationships, our coalescent analyses should have returned the same age for the MRCA of Triticeae-Bromeae with and without the inclusion of *Psathyrostachys*. However, we obtained age estimations that differed by approximately 4 million years. As the direction of chloroplast capture remains unknown, we estimate the MRCA of all Triticeae at an age of between 10 and 19 million years. When comparing inclusion vs. exclusion of *P. juncea*, the age estimations for all clades are robust, as they fall generally within the 95% HPD (Additional file [5](#MOESM5){ref-type="media"}: Figure S2, Additional file [6](#MOESM6){ref-type="media"}: Figure S3).

Conclusions {#Sec19}
===========

We assembled chloroplast sequence data of a large set of monogenomic Triticeae and polyploid wheats by combining on- as well as off-target reads of a sequence-capture approach coupled with Illumina sequencing. This approach allowed us to produce a set of 183 Triticeae chloroplast genomes. These sequences provide new plastid genomes for 39 Triticeae, two *Bromus* and one *Brachypodium* species. Moreover, the data were used to estimate the most comprehensive hypothesis of relationships among Triticeae chloroplast lineages to date. We infer that an early event of chloroplast capture was involved in the evolution of *Psathyrostachys* or *Bromus*. Either *Psathyrostachys* or *Bromus* obtained a chloroplast from a taxon closely related to a common ancestor of the Triticeae-Bromeae lineage, which diverged approximately 19.44 Ma, as the *Psathyrostachys* chloroplast haplotype groups at a deeper node than *Bromus* in our whole-genome phylogeny.
We cannot, however, safely determine the direction of chloroplast exchange in this case, as this would require the inclusion of many more Bromeae species. We identified taxa that share the same maternal lineage (e.g. *Agropyron*, *Eremopyrum* and *Heteranthelium*; *Pseudoroegneria* and *Dasypyrum*). Conflicts with nuclear phylogenies (i.e. the grouping of *Thinopyrum* and *Amblyopyrum*) likely indicate old events of chloroplast introgression, while some cases of pronounced intraspecific variation could be attributed to recent events of hybridization, as foreign chloroplast types grouped within otherwise monophyletic species groups (i.e. *Ae. biuncialis*, *Ae. markgrafii* and *Ae. neglecta*). As plastids are maternally inherited in these grasses, they provide supplementary information to nuclear data. For example, the plastid data indicate the polyphyly of *Eremopyrum*. Moreover, the possession of an *Aegilops*-like chloroplast type by *Amblyopyrum* might argue against a taxonomic treatment completely separate from *Aegilops*. Hence, plastid data can facilitate understanding Triticeae evolution, which in turn is crucial on the way to a robust taxonomic system for the entire tribe of Triticeae. However, plastid phylogenies will never be able to infer all hybridization events involved in speciation, e.g. when nuclear genomes were introgressed while the chloroplast lineages remained unaffected.

Additional files {#Sec20}
================

Additional file 1: Table S1. Accessions considered in the study. Overview of the material considered in this study. For all materials, the GenBank identifier, the accession and species name as used in this study (Species) as well as their species synonyms used in the donor seed banks or in the NCBI GenBank (Material source/Reference) are provided. The genome symbol and the country of origin, where the material was originally collected, are given.
The ploidy level measured in the scope of this study and the information whether a herbarium voucher could be deposited in the herbarium of IPK Gatersleben (GAT) are given. Genomic formulas of tetraploids and hexaploids are given as "female x male parent". The genomes of *Aegilops* taxa follow Kilian et al. \[[@CR74]\] and Li et al. \[[@CR84]\]. Genome denominations follow Blattner \[[@CR107]\] for *Hordeum* and Bernhardt \[[@CR12]\] for the remaining taxa. (XLS 84 kb)

Additional file 2: Table S2. Read numbers mapping to the complete chloroplast sequences and *ndh*F. Number of reads mapping and mean coverage for the entire chloroplast genome and *ndh*F after the removal of duplicated reads. Also the proportions of all reads mapping to the chloroplast that mapped to *ndh*F are given. (XLS 66 kb)

Additional file 3: Table S3. Marginal likelihoods and Bayes factor evaluation of Triticeae chloroplast relationships. Stepping-stone estimates of marginal likelihoods calculated with M[r]{.smallcaps}B[ayes]{.smallcaps} 3.2.6 on the *ndh*F dataset and Bayes factor estimated as 2(H~1~-H~2~), where H~1~ enforces monophyly and H~2~ enforces polyphyly of Triticeae chloroplasts. BF~12~ \< −10 indicates strong support for model 2. (DOC 27 kb)

Additional file 4: Figure S1. Full representation of the Bayesian phylogenetic tree based on whole chloroplast genome sequences. The multiple sequence alignment comprised 183 genomes assembled in the present study and 39 genomes that were downloaded from GenBank. *Brachypodium distachyon* was used as outgroup taxon. The tree shown is based on the complete alignment of 123,531 base pairs (bp). Posterior probabilities (pp) for the main clades are depicted next to the nodes if they were higher than 0.75.
Support values of a second Bayesian analysis based on 114,788 bp of whole chloroplast genomes, where alignment positions with more than 50% of missing data were masked, are shown below the values of the corresponding nodes in the complete chloroplast analysis if the values differed between analyses. For clades comprising multiple taxa, the taxon affiliation of single accessions is indicated by the same symbols behind accession and taxon names (e.g. ';', '\*'). The ploidy level is provided in brackets after the taxon label. Single accessions grouping apart from other accessions of their taxon are shown in bold. To the right the genomic groups are indicated. The red circle represents the secondary calibration point from Marcussen et al. \[[@CR20]\] used for node calibrations in multispecies coalescent analyses (MSC). Major nodes are shown in blue. Their estimated ages in million years are given in the box. Two age values for the same node correspond to the analysis with *Psathyrostachys* (first value) and without it (second value). For more information on the results of the MSC analyses see Additional file [5](#MOESM5){ref-type="media"}: Figure S2 and Additional file [6](#MOESM6){ref-type="media"}: Figure S3. For species synonyms see Additional file [1](#MOESM1){ref-type="media"}: Table S1. Arrows with support values indicate the nodes they refer to. (PDF 555 kb)

Additional file 5: Figure S2. Calibrated species trees based on *trn*K-*mat*K, *rbc*L, and *ndh*F including *Psathyrostachys*. Calibrated multispecies coalescent tree derived from the three chloroplast loci *trn*K-*mat*K, *rbc*L and *ndh*F of all Triticeae accessions (excluding polyploid wheats). Sequences of *Brachypodium distachyon*, *Oryza sativa* and *Zea mays* were included as outgroups. Posterior probability values are given for all nodes.
Divergence time estimates were inferred using the secondary calibration point from Marcussen et al. \[[@CR20]\] for the *Brachypodium*-Triticeae split (mean 44.44 million years ago). Node bars indicate the age range with the 95% interval of the highest probability density. For the analysis *Triticum monococcum* and *T. boeoticum*, *Secale cereale* and *S. vavilovii*, *Pseudoroegneria tauri* and *Ps*. *libanotica*, *Taeniatherum caput-medusae* and *Tae*. *crinitum*, *Agropyron cristatum* and *Agr. cimmericum* were each subsumed under a single species name (Additional file [1](#MOESM1){ref-type="media"}: Table S1). (JPEG 1085 kb)

Additional file 6: Figure S3. Calibrated species trees based on *trn*K-*mat*K, *rbc*L, and *ndh*F omitting *Psathyrostachys*. Calibrated multispecies coalescent tree derived from the three chloroplast loci *trn*K-*mat*K, *rbc*L and *ndh*F considering all genomic Triticeae groups covered in the study but omitting *Psathyrostachys* and polyploid wheats. Sequences of *Brachypodium distachyon*, *Oryza sativa* and *Zea mays* were included as outgroups. Posterior probability values are given for all nodes. Divergence time estimates were inferred using the secondary calibration point from Marcussen et al. \[[@CR20]\] for the *Brachypodium*-Triticeae split (mean 44.44 million years ago). Node bars indicate the age range with the 95% interval of the highest probability density. For the analysis *Triticum monococcum* and *T. boeoticum*, *Secale cereale* and *S. vavilovii*, *Pseudoroegneria tauri* and *Ps*. *libanotica*, *Taeniatherum caput-medusae* and *Tae*. *crinitum*, *Agropyron cristatum* and *Agr. cimmericum* were each subsumed under a single species name (Additional file [1](#MOESM1){ref-type="media"}: Table S1). (JPEG 1082 kb)

**Electronic supplementary material**

The online version of this article (doi:10.1186/s12862-017-0989-9) contains supplementary material, which is available to authorized users.
We would like to thank E-M Willing and K Schneeberger for designing the *ndh*F baits, R Brandt for performing the Illumina sequencing, and C Koch and B Wedemeier for technical assistance. We are grateful for seeds obtained from ICARDA, IPK, USDA, the Czech Crop Research Institute, and the Kyoto University Laboratory of Plant Genetics. We also thank JM Saarela and two anonymous reviewers for helpful comments on earlier versions of the manuscript.

Funding {#FPar2}
=======

This work was supported by the German Research Foundation (DFG) \[BL462/10\].

Availability of data and materials {#FPar3}
==================================

All sequences were submitted to GenBank (accession numbers KX591961-KX592154, KY635999-KY636181). The datasets supporting the results of this article are available from the Dryad Digital Repository through doi:10.5061/dryad.25743.

Authors' contributions {#FPar1}
======================

NB, FRB, BK designed the study. BK provided data or materials. NB performed the experiments. NB and JB analysed the data. NB and FRB wrote the initial manuscript. All authors contributed to and approved the final version.

Competing interests {#FPar4}
===================

The authors declare that they have no competing interests.

Consent for publication {#FPar5}
=======================

Not applicable.

Ethics approval and consent to participate {#FPar6}
==========================================

Not applicable.

Publisher's Note {#FPar7}
================

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
<?php

/*
+---------------------------------------------------------------------------+
| Revive Adserver                                                           |
| http://www.revive-adserver.com                                            |
|                                                                           |
| Copyright: See the COPYRIGHT.txt file.                                    |
| License: GPLv2 or later, see the LICENSE.txt file.                        |
+---------------------------------------------------------------------------+
*/

// Define a constant to avoid displaying the login screen
define ('OA_SKIP_LOGIN', 1);

// Require the initialisation file
require_once '../../init.php';

require_once MAX_PATH . '/www/admin/config.php';

require_once MAX_PATH . '/lib/OA/Permission.php';
require_once MAX_PATH . '/lib/OA/Admin/PasswordRecovery.php';

$recovery_page = new OA_Admin_PasswordRecovery();

$method = $_SERVER['REQUEST_METHOD'];

if ($method == 'POST') {
    $recovery_page->handlePost($_POST);
} else {
    $recovery_page->handleGet($_GET);
}

?>
Chad Frantz, contractor excavator operator, age 24, was fatally injured at approximately 9:50 a.m. on October 29, 1996, when the excavator he was operating slipped off the narrow roadway into the settling pond. The victim had a total of 2 years, 3 months of mining experience, all with this contracting company. He had received training in accordance with 30 CFR, Part 48. Bonnie Connelly, safety supervisor, Holnam, Inc., notified the MSHA Columbia, South Carolina, field office of the accident at 10:10 a.m. on October 29, 1996. An investigation was started the same day. The Holly Quarry & Mill Santee Cement Company, a limestone quarry and cement producing operation, owned and operated by Holnam, Inc., was located along Highway 453 in Holly Hill, Orangeburg County, South Carolina. The principal operating official was William A. Patterson, plant manager. The quarry normally operated two 8-hour shifts a day, 5 days a week, while the mill operated three 8-hour shifts a day, 7 days a week. One hundred and sixty-six persons were employed at this operation. The victim was employed by Dorchester Dirt Pit & Co., an independent contractor, located at 1949 Gardner Blvd., Holly Hill, Orangeburg County, South Carolina. The principal operating official was Nell Muckenfuss, president. Eight persons were employed by the contractor at this operation for the primary purpose of removing overburden and conducting various other activities required by the mining company. The limestone was ripped with bulldozers, transported to the primary crusher where it was crushed, and then conveyed to the main plant where cement was produced. The finished product was stored in silos and then moved by conveyor system to the shipping area for delivery by rail and truck. The last regular inspection of this operation was completed August 15, 1996. Another regular inspection was conducted at the conclusion of this investigation.
PHYSICAL FACTORS INVOLVED

The settling pond where the accident occurred was 250 feet long, 150 feet wide at the discharge end, and 83 feet wide at the inlet end. The actual depth of the pond could not be determined. It was necessary to periodically remove the silt from the pond. In order to accomplish this, a roadway, approximately 140 feet long and 16 feet wide, was established about the middle of the pond and extended parallel to the pond's outer roadway towards the inlet end. This roadway was below the water's surface, but enabled the excavator and trucks to enter the area in order to remove the material. One side of the roadway opened to the settling pond; the other side had a 5-foot bank, which was the berm around the perimeter of the settling pond. The outer edge towards the pond was not provided with an adequate berm. When the pond was full, the roadway was under water, and on the day of the accident the roadway was covered in 6 to 12 inches of silt and water, making it impossible for the equipment operators to see. Truck drivers backed the trucks down the roadway by using the 5-foot bank as a guide in order to stay on the submerged roadway. The excavator was positioned close to the edge of the drop-off in order to clear the bank with the rear counterweight when the machine was swung around. The excavator involved in the accident was a track-mounted, 1987 Komatsu PC300LC3, powered by a 197-horsepower diesel engine. The overall length of the excavator was 35 feet, 4 inches, and the width of the tracks was 11 feet, 3 inches. The overall working weight of the machine was 33.5 tons, with a maximum digging depth of 24 feet. The bucket capacity was 35.3 cubic feet. The excavator received extensive damage while being removed from the settling pond and could not be tested to check for any defects. Past inspection reports for the excavator were checked and did not show any mechanical or safety problems.
DESCRIPTION OF ACCIDENT On the day of the accident, Chad Frantz (victim) reported to work at 7:00 a.m. his normal starting time. He along with four co-workers were assigned the task of cleaning the settling pond by Rodney Burbage, plant supervisor. Normal practice for cleaning the settling pond was to drive the excavator onto the narrow submerged roadway and scoop the silt and sludge from the pond into the haul trucks that backed onto this roadway. Frantz operated the excavator, while Brock Byrd and Mike Dickson drove two of the haul trucks. Work continued without incident until about 9:50 a.m. By this time the excavator and trucks had cleaned the settling pond for approximately 124 feet along the roadway. Byrd's truck was loaded and as he was driving away from the excavator he glanced in his rear view mirror and saw the victim turning the excavator to get another scoop of material. Dickson waited until Byrd cleared the roadway then began to back his truck onto the roadway. When he looked in his rear view mirror and did not see the excavator he stopped his truck, got out and ran back to where the excavator had been. The excavator was on its side in the settling pond with only about one foot of one track and part of the boom above water level. Dickson got back in his truck and drove to the top of the hill where Burbage was operating a front-end loader and informed him of the accident. Burbage, Dickson and several employees went to the site of the accident. Since the excavator was close to the road, the men were able to climb onto the machine and attach chains and cables, then with the help of a bulldozer, raise it enough to break the windshield on the operator's cab and remove the victim. Frantz had been trapped in the excavator for about 50 minutes. Apparently, after loading Byrd's truck, Frantz attempted to reposition the excavator. He swung the boom in the direction of travel, positioning the operator's cab over the settling pond. 
As he attempted to tram the excavator, it slipped off the roadway into the settling pond, trapping the victim. After the victim was removed from the operator's cab, he was taken by ambulance to the Orangeburg Regional Hospital where he was pronounced dead by the County Coroner. Death was attributed to asphyxiation. CONCLUSION The direct cause of the accident was the inability of the excavator operator to see the narrow roadway he was operating on because of the 6 to 12 inches of water and silt that covered the roadway. A contributing factor was the absence of a berm along the entire outside edge of the roadway. VIOLATIONS Holnam Inc. Citation No. 3606218 Issued on November 13, 1996, under the provisions of Section 104(a) of the Mine Act for a violation of 30 CFR 56.9313. On October 29, 1996 at 9:50 a.m. a contractor employee was fatally injured when the excavator he was operating slipped into the settling pond that was being cleaned. The roadway from which the excavator was operating varied in width from 13 to 16 feet. The width of the excavator tracks was 11 feet 3 inches. The roadway that was used by the excavator and haulage trucks was covered with 6 to 12 inches of murky water and debris obstructing the view of the roadbed. Cleaning the pond from this narrow roadway has been the general practice in the past. This citation was terminated on November 13, 1996. The practice of cleaning the settling pond in this manner has been discontinued. A dragline operating from outside the settling pond will be used. Dorchester Dirt Pit & Co. Citation No. 4529261 Issued on November 13, 1996 under the provisions of Section 104(d)(1) of the Mine Act for a violation of 30 CFR 56.9313. On October 29, 1996 at 9:50 a.m. an employee was fatally injured when the excavator he was operating slipped into the settling pond that was being cleaned. The roadway from which the excavator was operating varied in width from 13 to 16 feet. The width of the excavator tracks was 11 feet 3 inches. 
The roadway that was used by the excavator and haulage trucks was covered with 6 to 12 inches of murky water and debris obstructing the view of the roadbed. Cleaning the pond from this narrow roadway has been the general practice in the past. This citation was terminated on November 13, 1996. The practice of cleaning the settling pond in this manner has been discontinued. A dragline operating from outside the settling pond will be used. Citation No. 4529262 Issued on November 13, 1996 under the provisions of Section 104(a) of the Mine Act for a violation of 30 CFR 56.9300. On October 29, 1996 at 9:50 a.m. an employee was fatally injured when the excavator he was operating slipped into the settling pond that was being cleaned. The roadway was not provided with adequate berms for the entire length. Berms were not maintained mid-axle height of the excavator and haulage trucks using the roadway. In the area where the excavator slipped into the settling pond, berms were not provided. This citation was terminated on November 13, 1996. The practice of cleaning the settling pond in this manner has been discontinued. A dragline operating from outside the settling pond will be used. | Low | [
0.525,
26.25,
23.75
] |
Once Upon a Time Scoop: Snow's Dark Side, Gold's Vengeance, The Future for Emma and Bae On Sunday's “The Miller’s Daughter,” we find out how Cora (played beautifully by Rose McGowan in flashbacks) came to be the villainous Cora, while also delving into the past of Rumplestiltskin. And who better to tease the episode but OUAT creators Edward Kitsis and Adam Horowitz, who gave us a window into all the romance and trickery, as well as whether all-good Snow White will truly go bad. Read on for teases and spoilers related to the upcoming installment... ------------------------------------------- Do Regina & Cora have the same objective?“I would say that [Regina has] already probably figured out they don’t have the same objectives,” Kitsis said. “But for Regina, her objective is Henry and she’s so desperate that if anyone offers her a short cut to getting him, she will take that road.” Will Gold really try to kill Henry?Since the Seer in the “Manhattan” episode told Gold the boy who leads him to his son will also be his undoing, how vengeful will Gold really be? Horowitz teased: “At the time when Gold said, ‘Well then I’ll just have to kill him.’ [But] he had no idea what the future would hold and this boy would turn out to be his grandson and now things are very complicated for Gold.” Emma/Bae: Will that fiancé of his be the ultimate obstacle?“I think what we love so much about Emma and Bae,” said Kitsis, “is that they’re two people who grew up in similar ways without a family and with walls around their hearts and so I think they were each other’s loves and for us what’s interesting is why did they move on? What happened? 
And what happens now and tomorrow’s definitely going to be a part of all that.” But what about that Emma/Hook flirtation from earlier in the season?Don’t hold your breath for anything immediately as Kitsis said: “I think that as the series progresses - maybe this year - they might be very busy with other things, but you never know.” Horowitz added that Hook is “going to be in close proximity as the season continues and sparks will fly in many different ways.” Who doesn’t love a good love triangle, right? Will the magic candle that Snow didn’t use to save her mother’s life in “The Queen Is Dead” make another appearance?Horowitz teased: “I would think if you show a candle that can do something like that. You can’t just not come back to it.” Kitsis also weighed in and said of Snow: “We’ve seen her go dark before. The question is will she really be able to go through with it?” Who will ultimately push Snow to maybe go dark again?“I think its frustration,” said Kitsis. “In last week’s episode, when Regina sat down and said ‘What did good every get me? Dinner with a bunch of people who will never forgive me.’ For us, we always try to take these characters and put them in real situations and for us, how many times have you been really upset and you’re like ‘Why am I doing this the right way? What did good ever get me?’ 'If I would’ve killed Regina a long time ago,” Kitsis continued, speaking as Snow. “My daughter [Emma], I would’ve watched her grow up. I would be in the Enchanted Forest. None of this would’ve happened and I wouldn’t have had all this misery. Why am I doing the right thing? What’s it getting me? And now Johanna’s dead.’ So I think she’s frustrated and I think that sometimes you let those emotions cloud your judgment and then you have to deal with the repercussions of it and that’s what this story is. It’s in a lot of ways a cautionary tale.” Rumplestiltskin = Sex Machine?Expect some flashback sexy time between Rumple and young Cora. 
Who knew the baddie with the not-so-great skin would be such a hit with the ladies? “If you go back and you watch him in Cinderella and other things,” corrected Kitsis, “there’s lots of sexiness to the way he plays the character and what we really loved is what was he like truly in love. We saw him as pre-Dark One with Milah but they had very few moments of happiness but Cora is his soulmate in a lot of ways. She understands him in a way that other people don’t because she’s not trying to make him a better man.” What’s coming the rest of Once Upon a Time Season 2?Horowitz previewed: “I would say that it is pretty nonstop and that all the pieces we’ve put on the table in the start of the season are now finally intersecting and there’s some big reveals and there’s some character’s returning that we haven’t seen in a while and it’s all coming together to a finale that we really are very excited about and hope the audience feels the same way.” Kitsis added: “I’ve got to say we’re really, really proud of this next run of episodes. We think it’s some of the strongest we’ve ever done.” | Mid | [
0.612159329140461,
36.5,
23.125
] |
Change facial expressions This is your go-to facial expression. Ask someone you trust to evaluate your face as you speak informally. Eye darts are one place in character animation where a linear arc of motion is appropriate. Can sensitivity to facial expressions be improved by the previous engagement in reaching? New icy world with 20,year orbit could point to Planet Nine The solar system has gained a new extreme object: Find the good stuff MODERATORS The mean amplitudes of difference waveforms were compared with zero, and the t -tests results indicated that the vMMN components were significant for all time windows, brain regions, and emotional conditions for adults and adolescents see Table 1. May I have your attention, please: What habits do you have that you may not be aware of? Brain Tumor Symptoms, Signs, Types, Causes, Survival Rates A brain tumor can be either non-cancerous benign or cancerous malignant , primary, or secondary. Uncomfortable actions modified the perception of emotional expressions along the happy-to-angry continuum, making a neutral face appear angry and a slightly happy face neutral, and improving the identification of facial expressions. Two groups of participants an adolescent group and an adult group were recruited to complete an emotional oddball task featuring on happy and one fearful condition. Change your selfie's facial expressions with the touch of a button | From the Grapevine These may seem like very minor cues, but most of how we read faces comes from exactly that: The doctoral work of lead researcher Dr. Mismatch negativity of the color modality during a selective attention task to auditory stimuli in children with mental retardation. In the passive oddball paradigm, standard stimuli, which are presented at frequent intervals, are randomly replaced by deviant stimuli. Such gradual changes are more difficult to detect than changes that involve a disruption. 
Average relative rating of action discomfort as a function of reaching distance measured relative to individual arm length collected in the preliminary experiment. Given two facial images and about 75 key points, the software generates a synthetic image that contains a specified mixture of the original faces, using a sophisticated morphing algorithm that implements the principles described by [48]. The results showed no change in the time needed to understand the happy sentences. Without your best friend saying a word, you know—by seeing the little wrinkles around her eyes, her rounded, raised cheeks and upturned lip corners—that she got that promotion she wanted. The standard deviation defined the JND. Welford AT Choice reaction time: Affect disorder Submitted by Sabrina on May 2, - 2: Low-T and Erectile Dysfunction. | Low | [
0.511482254697286,
30.625,
29.25
] |
Facile Synthesis of GdF₃:Yb3+, Er3+, Tm3+@TiO₂-Ag Core-Shell Ellipsoids Photocatalysts for Photodegradation of Methyl Orange Under UV, Visible, and NIR Light Irradiation. To enhance solar energy utilization efficiency, goal-directed design of architectures by combining nanocomponents of radically different properties, such as plasmonic, upconversion, and photocatalytic properties may provide a promising method to utilize the most energy in sunlight. In this work, a new strategy was adopted to fabricate a series of plasmonic Ag nanoparticles decorated GdF3:Yb3+, Er3+, Tm3+-core@porous-TiO2-shell ellipsoids, which exhibit high surface area, good stability, broadband absorption from ultraviolet to near infrared, and excellent photocatalytic activity. The results showed that photocatalytic activities of the as-obtained photocatalysts was higher than that of pure GdF3:Yb3+, Er3+, Tm3+ and GdF3:Yb3+, Er3+, Tm3+@TiO2 samples through the comparison of photodegradation rates of methyl orange under UV, visible, and NIR irradiation. The possible photocatalytic mechanism indicates that hydroxyl radicals and superoxide radical play a pivotal role in the photodegradation. Furthermore, the materials also showed exceptionally high stability and reusability under UV, visible, and NIR irradiation. All these results reveal that core-shell hierarchical ellipsoids exhibit great prospects for developing efficient solar photocatalysts. | High | [
0.6657381615598881,
29.875,
15
] |
Two on the right were gifts for friends; two on the left stayed with us. In progress I was looking through my email when I found a pattern from Knitpicks that intrigued me. I ended up knitting five of these little snow people. I kept two, haven't quiet finished two, and gave two away as a Christmas gift. Hats and scarves Pattern:Knitpicks Lumpy, Rosy and Slim by Melissa Burt. I haven't made Lumpy, but I have made the other two. I followed the pattern as written except that I used a smaller needle (just because I knit loosely. I have found on Ravelry that many people who knit them haven't felted them, but I much prefer them to be felted. I've included a picture for comparison. The felting just makes them look much more like snow -- more professional the finished (in my opinion). I felted them by hand in the bathroom sink. Yarn: Knitpicks Palette in various colors -- white, black, garnet heather, and marine heather. I also picked up an orange, green and yellow to finish off the last two I'm making. Those two, when finished, will sport my boys school colors. Unfelted on left; felted on right Needles: US 2.5 (3.0mm) DPN Felting each one took about 10 minutes, and as the felting came to an end I could feel the object shrink and could see the stitches disappear. I made their noses out of orange polymer clay with a hole in one end to sew it on. The clasps on the capes are also handmade with jewelry wire. For the second year in a row, I have knit a scarf for the Red Scarf Project of Foster Care to Success. These scarves are packed in care packages and sent to students in college or trade school who have "grown out of" foster care. The packages are sent for Valentines' Day (which explains why all of the scarves are red). Here are the details about the scarf: Pattern: The Yarn Harlot's One Row Handspun Scarf Pattern. I use this "pattern" all of the time. It is so easy and creates a great textured, reversible, non-rolling scarf. 
I love how the colors in overdyed yarn run through fabric knit using this stitch pattern. This scarf is 38 stitches across. Yarn: I wanted something worsted weight, wool and washable, so I chose Swish tonal from Knitpicks. It is in the Gypsy colorway. Needles: Size US 7. I haven't gotten a good picture of this scarf. The lighting in my office isn't doing it justice, and I'm ready to mail it away. Even though I haven't posted in a while, I'm still knitting. I have at least two finished objects I need to photograph and post; I'm hoping to do that soon. In the meantime, I am knitting, but I haven't landed on the next "big thing." I'm working on finishing some projects that I started previously while I wait for inspiration to strike. One Emmaus Lanyard The image to the right is a scarf I'm working on. It's 2 x 2 ribbing with alternating two row stripes of a pair of yarns I picked up in Alaska. More details are in this post, but the two yarns are by Rabbit Run, in the water and wildberry colorways. Wildberry looks a lot like water, but includes a cranberry color along with the water colors. Because the two yarns have much in common, the scarf doesn't look striped, but blends very well. It's the project I pick up when I don't have anything else to knit, or when I need a movie knitting project. Eventually, it will get finished. The next two pictures are of some lanyards I knit to be used by my Emmaus Community for pilgrim's crosses. About 30 or so are required for each walk. The gentleman who used to knit them worked on them all year round, knitting away. He died, so the community has picked up his ministry. Twelve Lanyards They are made using Red Heart yarn, Mexicana colorway, with a French (or spool) knitter. Even my husband picked up the needles for this one. We each knit 12 of them. For pattern information, just google Emmaus lanyard. It is simply a 24 inch long i-cord. 
I tried knitting one with double pointed needles, but the result was not as neat as with the spool knitter. These were quiet a distraction for a while, taking me away from my other knitting. Soon, I'll post about this year's Red Scarf Project and the hat I just finished. It's finally finished and has been delivered to his dorm room. Meet my Mitered Cross Blanket. It is probably the biggest thing I have ever knit by size. Maybe something I've done has had more stitches, but I'm positive nothing has measured this large. We moved our son to college a couple of weekends ago, and I packed this blanket in a care package with snacks and supplies, along with a letter, explaining about the blanket.The details Yarn: I used all Knitpicks yarn -- Chroma and Wool of the Andes (WotA) .G is a freshman at West Virginia Wesleyan College. The school colors are orange and black; that dictated the colors of the blanket. Chroma in Smoothie. I used more less than one skein to make 3 squares WotA in Coal. I used 3 skeins to make 4 squares and to do the edging iCord WotA in Orange. I used two skeins to make four squares and had almost none left. WotA in Cobblestone Heather. I used two skeins to make four squares. WotA in Dove Heather. I think I used 19 skeins as the background color for 15 squares and 6 half squares. Needles: US Size 6 Knitpicks options needles. I used both the nickel plated and harmony interchangable needles, depending on my mood. For the icord, I used nickle plated dpn. Pattern: Mitered Cross blanket for Japan by Kay Gardiner of Mason-Dixon Knitting. Here's the Ravelry link, and here is a link to April in Mason-Dixon Knitting. If you scroll down, you'll see Kay's blanket and a link to buy the pattern -- proceeds go to Japanese Earthquake relief. I love that buying the pattern benefits others, and I hope when I told that to my son, it said something to him about serving others and its importance. 
My son is over six feet tall, so I added two rows of squares to the blanket for a total of 15 squares in six rows. I did the icord in a contrasting color (coal). I liked the look it gave the blanket -- it just seemed right for G. I added a yarn over to the icord repeats -- knit-knit-yarn over-knit through back loop, and then I passed the yarn over over the final knit two together through back loop. Somehow the yarn over covered the stripe. By the way, go buy the pattern -- even if you don't plan to make the blanket. It's a good thing to do. I stitched a cross (Faith), an anchor (hope) and a heart (love) in the corner. 13 And now faith, hope, and love abide, these three; and the greatest of these is love. (1 Corinthians 13:13) Size: I thought I was making a 6 foot by 3 foot blanket. It turned out to be 8 feet by 3 feet, 3 inches. I have no idea how it ended up so long, but he'll have lots of room to snuggle in this behemoth. Final thoughts -- This is a great pattern that is interesting enough to knit for a long time (3 months) but simple enough to not be frustrating. I enjoyed the knitting. It was a great project to carry me through the transition of my son's high school graduation and his summer before college. I was able to knit my love into something warm to leave with him at school -- something that left an important message about caring for others. Something that told him how much he is loved. He texted his dad a picture of his newly lofted bed today. Check out what's up there -- the blanket. Cool. What's in your knitting tool bag? I carry a small tool bag in my knitting bag -- I've tried to make sure it has any kind of tool I might need as I knit. I emptied it out and tool a picture of it so that you can see it. Contents: Three dpn -- bamboo. I'm not sure what size, but I keep them in there to pick up stitches or serve as a cable needle. Two crochet hooks -- bamboo. They are part of a set of hooks I have. 
They are in the bag because I labor under the illusion that I can use them to pick up stitches when I drop them, or when I let them drop to fix mistakes. In reality, I don't find them very helpful, very often. Blue ink pen -- I read somewhere to keep one in my tool bag, so I do, but it doesn't get much use. It's a Zebra brand pen, though -- I love Zebra pens. Tape measure -- it's from Lantern Moon, and looks like a lady bug. I had one that looked like a sheep, but our dog ate it (or a large piece of it). Metal box filled with ephemera. I decorated it with a sheep on the lid, which has since worn off. I keep thinking I'll repaint it, but I've never gotten around to it. Four small plastic boxes of plastic red and blue stitch markers. I use them sometimes, and I like having them in lots of different sizes. They are so inexpensive, that if I lose one, I don't care. My husband might care, because he is always finding them. End caps for the Options cables when I remove the needles Safety pins Those things you put on the ends of needles so the stitches don't fall off -- what are those called? These are green and small, for sock needles. Green plastic box with Knitpicks cable connectors and several of the small tools used to tighten the needles to the cables for their Options line. Love those needles! Blue box with handmade decorative stitch markers -- more about those later. The two plastic boxes, which I really like, came from the Container Store. Chibi storage tube with needles for weaving in ends. I'm not sure why the point is bent; I guess it helps with the end weaving. My favorite tool -- a pair of Gingher scissors. Back when I cross stitched instead of knit, I found these at a Cross Stitch store. They were expensive, but I thought they were beautiful (I still do). I didn't buy them, but just mentioned them to Steve. After that, for some gift event, he gave them to me. How's that for a wonderful guy? 
The scissors are probably older than my kids -- maybe not quite, but close. They are black, with very pointy tips. To this day, I only cut thread with them -- nothing else. Amazingly, they are still sharp, and I still know where they are! I found the bag itself at the Counting House, a Cross Stitch store at Pawley's Island in South Carolina. It's the cradle of Cross Stitch in the United States, but has since closed. Great store; sad to see it leave. I bought the fish-bead charm that is the zipper pull on the bag at the gift store in the Macaulay Salmon Hatchery in Juneau, Alaska. Odd place to find it, but I liked it. So, how about stitch markers. Do you make them? I do. I used to make the one with the danglely charms. I used them, but now I like the ones at the top of the picture. They don't dangle, and just seem to behave themselves better. All I do to make them is to split a jump ring, place a small bead at the join, return the ring to its proper position, using a little gorilla glue to hold the bead in place over the join. They work great, and the bead prevents the join in the ring from catching on the yarn. I have been knitting. I've just been knitting on the same thing each time I knit, and I haven't taken the time to take images of it in progress. The Mitered Cross blanket is finished and is currently blocking. I'll post a complete set of information for it this week, once I take its picture. I'm also behind with my Project Spectrum posts. The blanket doesn't fit with June's, July's or August's colors. Instead, here's a green picture, taken the other day. When I get some time, I'll try to put together a collage of cruise pictures -- lots of blues. Speaking of blue, I've changed the blog template. What do you think? I downloaded the template from btemplates.com -- I've never done that before. I've always just used internal blogs from Blogger. This morning, though, I looked at my knitting blog, and just didn't like it! 
I looked at replacements for the template among the ones offered by blogger, and nothing said "Choose me." Instead, I googled free Blogger templates, and found some recommendations for safe sites to use to download a new template. The ones on btemplates.com are rated by users, which was reassuring. I followed the instructions on the website, and it worked perfectly. I did have to go in and work a little with my gadgets, but it was time those were refreshed, anyway. So what am I working on? I worked my way through one Deep Water sock, finishing it. I started the next one, but got distracted by another project. I purchased the Mitered Cross pattern from Kay at Mason Dixon knitting. I have been thinking about knitting something for my older son as he begins college. I like the idea that the revenue from the pattern goes to Japan relief, because it says something about service that I want him to know. I like that the pattern is crosses, because it says something about faith that I want him to know. I chose colors that will match his knew college's school colors (orange and black). I like that it is knit by his mom, because that says something about love that I want him to know. Mitered Crosses So, my current project is Mitered Crosses. I'm planning on making it longer than in the pattern, because he is a long kid. So far, I have six squares completed. Rather than the Noro called for in the pattern, I'm using Knitpicks Chromo and four colors of Wool of the Andes. The two yarns seem to knit to the same gauge. I'm just glad I don't have to seem them together. I'm hoping the method Kay uses to form the blanket is better than the one I used for the baby blanket! I took these pictures one afternoon at Stonewall Resort. As I took them, I was thinking about Project Spectrum and how odd it was to see such a red tree on a day in May when everything around me was green. Scarlet maple is beautiful, though. 
Right now I have two projects on the needles that are still WIP and not UFO. I started the Koigu Linen Stitch Scarf from Churchmouse Yarns. I'm using two skeins of Koigu Yarn and one of Claudia's Handpainted yarn. All of those yarns are from Yarn Paradise in Asheville and can be seen in this post. The person who was helping me at the yarn store pointed the pattern out for me and helped me pick the colors. I might be about halfway through -- it is knit horizontally. I have also started another pair of socks. This pair is knit with Knitpicks Stroll Tonal Yarn in Deep Waters. I am using a pattern from Socks That Rock, but, having learned lessons from the Emmaus Rainbow socks, went down in needle size to US size 1 and cast on 72 stitches. The pattern calls for 60, but that didn't seem to be enough. I love the color of the yarn, and I like this fabric much better. See the mouse in the last picture? She's knitting. I bought her at Yankee Candle. It seemed appropriate to include her in a post that talks about Churchmouse yarns. Steve was an ALD on an Emmaus walk in March and at the end of March, beginning of April, I served as an LDIT. I knew that I would be serving in the background on Steve's walk (which would mean some sitting time) and that my position of a lay director in training meant that I would be observing the entire three day walk. It sounded like a portable knitting project would be ideal. I decided on socks, and picked up my recently purchased Knitpicks Felici in Rainbow -- socks were off and running (excuse the pun). I started them the day the men's walk started and finished them the day the women's walk ended (a little more than two weeks). Rainbow seemed appropriate since it is one of the symbols associated with an Emmaus Walk. Pattern: The Yarn Harlot's "Good Plain Sock" from her book Knitting Rules (that's a Ravelry link). Needles: I used double pointed, US size 1.5. 
I felt the entire time I was knitting that the needles were too large for the yarn and the fabric was too loosely knit. I didn't change. I'll probably regret that, but the socks are very soft.

Yarn: Two skeins of Knitpicks Felici Rainbow. I worked with the yarn so that I started at the same point of each skein in the color changes. The socks are almost identical.

Yes, it has been a long time since I posted. I have been knitting, but I haven't been knit-blogging. I'll catch up. I think I'll start with the biggest project I've been working on and have just finished.

A young couple at our church was expecting a baby, and I decided to knit a blanket. We knew she was a she, so pink came to mind, but I worried I would get bored too soon to finish it. The mother and I share the Emmaus experience in common, and a rainbow seemed a great way to portray that connection and to remind the family of God's promises. The day after I decided to knit in rainbow colors, I found the Picket baby blanket pattern on Knitpicks. It seemed perfect.

Each stripe is knit separately and then sewn together. The knitting went rather fast -- it makes a great portable project, since it is knit in stripes. I discovered, though, that I hate HATE seaming. It is not portable, and I am not good at it. Last weekend, I made myself spend several hours finishing it so that I could give it to the family before the (now born) baby girl went to college.

Pattern: Pickets baby blanket from Knitpicks. Knitpicks no longer carries all of the yarn called for in the pattern. I substituted bark for merlot heather, peapod for lemongrass heather, and bought the suggested moss to substitute for pampas heather. The pattern calls for 11 stripes. I ended up with 10 because I just do not like the moss at all.

Needles: US size 5, circular needles from Knitpicks. My row gauge was very much off, but I think this is a pattern problem. I do knit loosely, but each stripe calls for 248 rows to create a stripe (without the points) that is 34 inches long from point to point or about 30 inches not counting the points. That would be a row gauge of 33 rows = 4 inches. The pattern calls for a row gauge of 18 rows = 4 inches. It could be that there are 18 garter stitch ridges per 4 inches. That math comes out about right. I decided I would just knit the 248 rows, which came out at about the right length. It is a blanket, after all.

Six years ago today I started my first blog to talk about knitting. It seemed to be a strange thing to do, but as many people who started blogs at the same time said, I was reading other knitting blogs and decided to try it as well. It has been a great way to record my knitting projects -- an online knitting journal. There was a very long break in 2007 and 2008 -- no knitting, no knitting blog. I'm glad to be back at both hobbies -- knitting and knitting blogging.

Picket Baby Blanket strips

The yarn cake is from my current project -- the Pickets Baby Blanket. The blanket is composed of simple garter stripes with pointed instead of flat ends. The yarn is Swish DK superwash from Knitpicks. The pattern is from Knitpicks as well. I have three or four more stripes to finish, and then a whole lot of sewing to do.

ar·ro·gance noun \ˈer-ə-gən(t)s, ˈa-rə-\ -- an attitude of superiority manifested in an overbearing manner or in presumptuous claims or assumptions (from http://www.m-w.com/)

Might I add another definition to the word? Arrogance is working a lace pattern without the use of lifelines. I didn't even think about using them. I am so used to being able to fix mistakes and move on that I, without thought, assumed I would be able to do that with this project, as well.

A couple of weeks ago I started the Yarn Harlot's Pretty Thing. I used a lovely black yarn that I bought in Myrtle Beach a year or two ago.

Start #1: I started it with double pointed needles, cast on and knitted for seven rows. I finally realized that I had twisted the circle. There is no cure for that mistake; I ripped it out. Sigh.

Start #2: I cast on again. This time I got rid of the double pointed needles - not sharp enough. I switched to Knitpicks Options. I've never really knit a circle with two circular needles. It worked; I was able to do it. I think I might like DPNs better, but this was working. I had made it to row 25 or so, when I noticed an error. I arrogantly decided I could fix it, and let down the column of stitches to pick it up correctly. No way. Not happening. The more I worked, the worse the mistake became. It was compounded by more and more mistakes. I finally threw in the towel and ripped it out again. If I had used a lifeline, I could have gone back to a known correct point, but I didn't have one.

Will there be a Start #3? Probably, but not until after the project has a long LONG time in Time Out.

When I first started knitting -- back in 2005, maybe? -- I did it as a new year's resolution. I wanted to try to learn something new. So I did. The next year, I made a resolution to knit a pair of socks, and I did. This year, I think I want to try to improve (or create) some knitting with color skills -- maybe fair isle? I've purchased two books: Color Knitting the Easy Way (Melissa Leapman) and Mastering Color Knitting (also by Melissa Leapman). I think these will make a good start.

In order to try some practice knitting, I picked up a couple of skeins of Knitpicks Swish DK in Coal and Dove Heather.

Swish DK, Dove Heather

While I was placing a Knitpicks order, I took advantage of a sale on their yarn swift as well as replacing my ball winder. This has made winding yarn incredibly easy, and much faster. Great!
Q: How do I append an element created as a result of a function?

I've created a function (create_entry) which builds and styles a div box (entry) that I'm wanting to later append to a second div (entries) by calling another function (append_entry).

const create_entry = () => {
  // Element Creation
  let entry_div = document.createElement('div'); // Entry
  let entry_div_date = document.createElement('span'); // Date
  let entry_div_content = document.createElement('p'); // Content
  let entry_div_button = document.createElement('button'); // Button

  // Element Styling
  entry_div.className = 'entry';
  entry_div_date.className = 'date';
  entry_div_button.className = 'remove';
  entry_div_button.style.marginTop = '10px';

  // Element Populating
  entry_div_content.textContent = 'Lorem ipsum dolor sit amet, consectetur adipisicing elit. Repellendus, ullam.';

  // Element Appending
  entry_div.appendChild(entry_div_date);
  entry_div.appendChild(entry_div_content);
  entry_div.appendChild(entry_div_button);

  // Test Output
  console.log(entry_div);

  return entry_div;
}

Here is the second function, selecting the entries div and trying to append the element created in the first function to it.

const append_entry = (entry) => {
  let entries_div = document.querySelector('.entries');
  entries_div.appendChild(entry());
}

Called like so:

append_entry(create_entry);

I wrongly assumed the first function would output the elements created, but instead it outputs null. How would I output the elements / node of the first function as an argument for the second?

A: Add return entry_div at the bottom of the create_entry function to return the element to be appended.

You are not executing the function you are passing. Execute the passed function by specifying the parentheses at the end of the function name:

const append_entry = (entry) => {
  let entries_div = document.querySelector('.entries');
  var el = entry();
  entries_div.appendChild(el);
}

Working Code Example:

const create_entry = () => {
  // Element Creation
  let entry_div = document.createElement('div'); // Entry
  let entry_div_date = document.createElement('span'); // Date
  let entry_div_content = document.createElement('p'); // Content
  let entry_div_button = document.createElement('button'); // Button
  entry_div_button.innerHTML = 'My Button';

  // Element Styling
  entry_div.className = 'entry';
  entry_div_date.className = 'date';
  entry_div_button.className = 'remove';
  entry_div_button.style.marginTop = '10px';

  // Element Populating
  entry_div_content.textContent = 'Lorem ipsum dolor sit amet, consectetur adipisicing elit. Repellendus, ullam.';

  // Element Appending
  entry_div.appendChild(entry_div_date);
  entry_div.appendChild(entry_div_content);
  entry_div.appendChild(entry_div_button);

  // Test Output
  console.log(entry_div);

  return entry_div;
}

const append_entry = (entry) => {
  let entries_div = document.querySelector('.entries');
  var el = entry();
  entries_div.appendChild(el);
}

append_entry(create_entry);

<div class="entries"></div>
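The accepted answer above turns on the difference between passing a function reference and executing it with (). As a rough illustration (not from the thread), the snippet below uses a hypothetical makeStubElement in place of the real DOM calls so the same idea can be run in plain Node:

```javascript
// Hypothetical stand-in for document.createElement/appendChild so the
// sketch runs outside a browser; in a page you would use the real DOM.
const makeStubElement = (tag) => ({
  tag,
  children: [],
  appendChild(child) { this.children.push(child); },
});

// A factory like the question's create_entry: builds and returns an element.
const create_entry = () => {
  const entry_div = makeStubElement('div');
  entry_div.appendChild(makeStubElement('span')); // date
  entry_div.appendChild(makeStubElement('p'));    // content
  return entry_div;
};

// Variant 1 (the accepted answer): receive the factory, invoke it inside.
const append_entry_factory = (container, makeEntry) => {
  container.appendChild(makeEntry()); // the () executes the function
};

// Variant 2: invoke the factory at the call site, pass the element itself.
const append_entry_element = (container, entry) => {
  container.appendChild(entry); // no (): entry is already an element
};

const entriesA = makeStubElement('div');
append_entry_factory(entriesA, create_entry); // pass the function, no ()

const entriesB = makeStubElement('div');
append_entry_element(entriesB, create_entry()); // call it here instead

console.log(entriesA.children.length, entriesB.children.length); // 1 1
```

Both variants append exactly one entry; what matters is that the caller and the callee agree on whether a function or an already-built element is being handed over.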
The slow food movement has taken off over the last two years. You've probably noticed that more and more people are growing their own vegetables, sourcing grass-fed beef, avoiding fruit shipped from South America, and becoming members of the local CSA. Some people are even finding themselves headed back to the kitchen, cookbook in hand, to learn how to make their own meals for the first time. At the same time, many are looking for ways to trim their food budget. As a nation, we're eating out less, clipping more coupons, and buying the bargain cuts of meat instead of the fillet. We're literally tightening our belts as we eat less and slim down. Whether out of desire or necessity, many of us spend less on food than we used to. (See also: How to Grocery Shop for Five on $100 a Week) While these aren't mutually exclusive pursuits, trying to save money and eat high quality, natural, organic food can be rough. Though the popularity of this kind of food has risen, there's still less demand for it than for those nationally recognized name brands of junk and fast food, so the slow food costs more. Additionally, because the processes for raising, harvesting, and shipping slow food are not as well-established as those for other kinds of food, simply getting it to the store costs more and those costs are passed on to the consumer. And those processes are inherently more time and labor intensive for slow food, which means the costs won't go drastically lower anytime in the near future. In contrast to this, junk food is not only cheaper than slow food, but it's getting even cheaper as the days go by. This chart from New York Times blogger David Leonhardt, shows just how much the prices for things like soda, butter, and beer have gone down since 1978. For people trying to save money, the choice may be simple: eat more junk. And yet, this set of circumstances leaves many consumers frustrated. 
They want to eat well but they can't afford to, or they struggle with paying so much more for food items that could easily be replaced with cheaper, less healthful, alternatives. So what can be done? Is there a way to get food that is truly good for less money, or to justify paying more for what we eat when there's cheaper food available? Here are some musings on just that topic. Eating well is an investment. And not the kind where you see an immediate return. You may not feel better tomorrow because you ate free-range eggs for breakfast instead of a toaster pastry. And you may not know that you avoided catching the office cold because of the antioxidants in your system from those organic blueberries. But if you stay healthier than most, thinner than many, and as happy as you want to be over the long haul, at least some of that is probably a return for your investment in good food. When it comes to eating, where you spend your money is like voting. Do you want all-natural, sustainably-raised, wholesome food to be widely available at competitive prices, or do you want fast food and junk food to continue as the foods of choice for the people in this country? Though it won't save you anything today, spending more money for better food may influence the way things roll down the road. You just might recoup that extra money you spend on high-quality food in healthcare savings. We've all heard about studies showing the effects of junk food on health. People who eat a lot of it are fatter and sicker than ever before. Though it may be hard to see in your own life (because you don't know when you avoid being sick), eating well sure seems like it will save you money in the long run. Sometimes you just have to pay more for a better product. We see this elsewhere. Better quality hi-def TVs cost more than their low-quality counterparts. Long-lasting, no-drip candles cost more than the disappearing, drippy kind. Hard back books are more expensive than paperbacks. 
You get my drift. We've come to expect this elsewhere...why don't we expect it regarding our food? What choices do you make when it comes to your food and your money? What do you think about spending more money for food that's better for you?
Tuesday, September 27, 2011

A Deadly Case of SENIORITIS

Curing and Preventing Senioritis
By Priyanka Surio

Everyone gets it at one point in their lives and it can be very very contagious! All of us can admit to being exposed in high school when we got our acceptance letters to college and felt that we needn't jump over mountains and under hoops to study for exams or work on projects. But according to CollegeBoard and USA Today, what is becoming more common as a result of this attitude is the alarming fact that "every year colleges rescind offers of admission, put students on academic probation, or alter financial aid packages as a result of 'senioritis.'" Now the vicious cycle threatens to continue for us seniors or those graduating from Undergraduate or Graduate school.

Symptoms
• Laziness
• Procrastination
• Excuses
• Lack of interest in all things school related
• Desire to just have fun
• Lack of seriousness
• Frustration and stress

Outcomes
• Can lead to plummeting grades
• Can lead to getting fired from jobs
• Can be chronic and deadly to your career goals

THE CURE!

There is indeed a very effective cure and prevention steps to make sure you don't fall susceptible to this disease.

• Don't allow yourself to get in the mindset that you are done and your responsibilities don't matter, because even in the real world after you graduate, your responsibilities only increase.
• Do not get peer pressured into going out every night. Demonstrate self-control. You can do work Sunday - Thursday and have the last two days to yourself as a reward for working hard throughout the week.
• Do not let things slide and pile up. Time management is not for nerds only! Keep a calendar of activities and a list of things to do, and cross off the list each time you complete something. Even schedule some "me" time in there. You'll feel more organized, stay on track, more accomplished, and more deserving of your breaks.
• Don't just coast by with easy classes. By no means do we recommend you enroll in the hardest finance or science class, but enrolling in courses that keep your interest or that will be beneficial to you in the future, whether in graduate school or the workforce, will help to keep your attention. For example, if you are planning on working for ESPN after college as a news writer it might not hurt to take a few communication classes to learn the graphics and technical side of new television. Or let's say you are aspiring to become a lawyer or a doctor, business classes may not hurt especially if you plan on owning your own practice at some point. These classes will pique your interest especially since they are not something you are familiar with and can just rely on prior knowledge for.
• Don't burn bridges and don't get frustrated. Gloria Varley, an assistant director of health at the University of Georgia Health Center, says, "[Senioritis] is several things – perhaps frustration, you're done with [school] and want to move on". A common onset to senioritis is this feeling that we can't improve our situation or outcome, so we act apathetic or uncaring towards our academics, yearning to move on. One bad grade or an unfortunate experience with a teacher shouldn't make or break your academic career. The choice is up to you on whether you improve your situation and can make something better out of it.
• School is your JOB! Remember, being a student is an occupation so treat it as such. Make an effort to prepare ahead for classes and be on time. In the real world, unprepared and late workers get fired, so get used to building good habits early on and don't allow yourself to slack off even in that last semester because it is so hard to climb back up the hill once you've rolled down!
• Don't STRESS! You will manage to get more done if you relax and take it one step, one day at a time. Don't eat with your eyes and overload your plate full of things you won't be able to complete. If you feel as if you can't handle everything or are dealing with more than you can chew, the Counseling Center can help you manage your stress. They are your trustworthy resource in stressful times. The ACAR and Ombudsperson is also here as a resource to listen to your troubles while providing useful tips for future action.

Well what if I'm not going to school afterwards?

The worst thing about this deadly disease is its transmissibility. Senioritis can spread to the job search, securing an internship or applying for that entry level position and following through with employers. Ways to prevent this from occurring are to start EARLY! Maybe you are confused about where to begin. The Toppel Career Center should always be your go to place to begin your first steps into a successful career.

What we recommend for a successful job search:

• Pinpoint – What do you want to do? Take a Career/Personality test to determine your specific field
• Learn – how to write a stellar resume, how to win employers over with a cover letter, and how to knock your interview out of the park, by coming to our workshops or stopping by Walk in Advising hours Monday - Thursday 10:00 a.m. – 4:30 p.m.
• Network – with Networking and Career Events, Information Sessions, Career Fairs, On Campus Recruiting
• Search – for jobs in specific industries and schedule to meet with our advisors to determine which job is a good fit for you. Also search for employers and recruiters who are part of that company so you can speak with them about their experiences. It's your turn to conduct the interview in order to find out if this is the potential career you wish to build for yourself.
• Follow up – Don't just apply and wait twiddling your thumbs. Follow up with a cover letter and/or email to the HR department. If you can, call and let them know you applied, and if you still haven't heard back after a week or two, check the status of your application.
MAROON
Loyola University, New Orleans, Louisiana

Carter bans 'Last Tango'

J.P. COLEMAN
SEAN WELCH
Staff Reporters

In a move that outraged students, University President Rev. James C. Carter, S.J., banned the film "Last Tango in Paris" at Loyola.

In a letter addressed to the University community Monday, Carter denied APO—LSL service fraternities the use of Nunemaker Hall for a showing of the film. Carter said he based his decision on "the symbol of Loyola," explaining that showing the film would be "construed as an abandonment by this institution of the values for which it stands."

Pat Dyer of APO said because the film was cancelled the day before it was to be shown, the $500.00 rental fee would have to be paid to United Artists, the film distributor. Carter said APO would suffer no financial loss. He explained the rental for the film would be paid from a contingency fund made up of revenues from endowments and tuition.

Carter was asked if the University would cover the loss of revenue anticipated from gate receipts. Dyer said APO was in the red and has been counting on a profit from "Last Tango." Carter refused to say if he would cover this loss until he had seen the APO budget.

Carter said the danger of jeopardizing Loyola's financial gifts by showing the film was a minor consideration. Loyola's image in the community as a Catholic institution was the reason for censoring the film.

Student reaction to the cancellation of the film was uniform. "This violates students' right for free expression, (this) sets a bad precedent, soon speakers will be outlawed" was one comment. Other students were less extensive in their comments: "Unfair," "Stupid," "Asinine" and "this makes me mad as hell" were the most often heard.

Almost as soon as the cancellation of the film became known, students began organizing protest. A group of students met in Biever Hall Monday night to discuss the conditions in the dorm and the censorship of the film. Chris Keelan, who called the meeting, said censoring "Last Tango" was "the last straw." He and other students considered a strike, but chose to try to meet with Carter in Nunemaker Hall Tuesday night. Cindy Bain circulated a petition in Buddig Hall Monday night. A protest is scheduled for Friday when the 6 North Bridge Club, a group made up of Keelan and students who met with Carter Tuesday, proclaim Friday "Da—Da day."

EXPLANATION OF DECISION

Carter explained his decision saying, "The seeming question put by the showing of this film was: did Loyola stand for the Catholic position on sexual morality?" Carter said, "in order to make it clear that Loyola intends to remain faithful to this Catholic tradition, I asked APO not to show this film and told them we would not make Nunemaker available to them."

"The deciding factor is what kind of institution are we and appear to be—I don't see anything intrinsic in the showing of 'Tango' that makes us a more Catholic institution," Carter said.

The influence and pressure of benefactors or financial supporters of the university was "not a major factor in cancelling the film," Carter said. "I wouldn't be honest if I said that it wasn't brought before me or I didn't consider it," he said.

"Our gift—giving public is very small right now," Carter said. The amount of giving is about equally divided between individuals and corporations, Carter said. "It is very critical for Loyola to be a recognizably Catholic institution. That can mean on a number of occasions that I have to tell alumni that I can't accept their position," he said.

In showing this movie, to alumni we would say that Loyola no longer stands for high sexual ideals inculcated to us, remarked Carter, seeing his duty as "maintaining the nature and commitment of Loyola as a Catholic

Photo by Phil Caruso

An occasion of sin? - Regarding "Last Tango," President of the University President's Council, Roy Guste, said, "A movie that is calculated to arouse sexual excitement among the unmarried is an occasion of sin..."

(continued on page 5)

Archival image is an 8-bit greyscale tiff that was scanned from microfilm at 300 dpi. The original file size was 1522.34 KB.
Scarlett Johansson

The once-suave movie star continued his bizarre descent into farce at the 2015 Oscars, when he kissed Scarlett Johansson without an invitation and cupped Idina Menzel's face onstage during the show itself. Those Saturday Night Fever days are long, long gone…

Rumor has it that the NFL has a game this Sunday, which of course means the all-important Super Bowl commercials! Check out Scarlett Johansson's sexy spot for SodaStream as the Her star does a straw justice.

A rep for actress Scarlett Johansson has confirmed that the 28-year-old is engaged to French journalist Romain Dauriac. Find out how long the two have been an item and if there's any details on a wedding date yet.

Joseph Gordon-Levitt wrote, directed and stars in the upcoming film Don Jon, which received overall positive reviews when it premiered at Sundance. Oh yeah, the cute, affable, ukulele playing guy went and got himself pumped while he was at it! Check out a muscle-bound Levitt as he channels The Situation and sings along with Marky Mark in the film's first trailer. In addition, Scarlett Johansson plays the main character's Jersey girl love interest and she's an absolute ringer for Mob Wives star Drita D'Avanzo!

Teen Mom 2 baby on board! Congrats may be in order for Teen Mom 2's Jo Rivera and Vee Torres, as the longtime couple is reportedly expecting their first child together — which might be news to some significant people in their lives. "They are getting ready to tell Isaac that he is going to have another little sibling," a source […]

Dr. Jenn is no longer single! 2015 looks to be quite the year for Dr. Jenn Mann. In addition to her usual packed schedule that includes being a licensed psychotherapist, sports psychology consultant, television host, author, speaker, radio host, entrepreneur, and mother of twins, Dr. Jenn is also taking on one of humanity's greatest challenges: a relationship! We've always been a […]
ELECTION: Why are young adults disinterested in politics?

AS the General Election nears, each of the political parties will be doing all they can to win our votes.

BALLOT: The General Election takes place on Thursday 7 May 2015

With that little 'x' on the ballot paper being of such significant importance, it is us who holds the power as to whom runs the country from 8 May. Bearing this in mind, why are so many young adults therefore disinterested with politics?

As shown on the pie chart below, the 18 to 24-year-olds were the lowest percentage to vote in the 2010 General Election. Although these figures may be a surprise to some, there has been a low turnout from this age demographic for the past four elections, as is shown by the bar graph below.

With there being a higher population in urban areas, the pictorial graph below represents where the highest turnout of young voters were throughout the country in 2010. Out of the top five locations shown, four were Labour strongholds whilst the other was a safe seat for the Liberal Democrats. Should the Conservatives be added to the poll, you would have to add another eight places to the graph where Canterbury was ranked 13th - the highest placed constituency for Tory young voters.
In the prior art, methylation of aniline with methanol was conducted in a batch reactor. Either sulfuric acid or phosphoric acid was used as the catalyst in the liquid-phase reaction that took place at a temperature of about 200 °C under a pressure of from 30 to 50 kg/cm². This traditional route suffers from the disadvantages of high capital cost, the corrosion of the reactor, and the need for waste acid treatment. The more recent vapor-phase technology has overcome corrosion problems and waste acid treatment but did not solve all the shortcomings associated with the liquid-phase reaction.

U.S. Pat. No. 3,558,706 discloses a process for the preparation of N-methylaniline by the reaction of 1 mole of aniline with 6 moles of methanol at 500±50 °C at 1 atmosphere over a catalyst consisting of 4MgCO₃·Mg(OH)₂·4H₂O. The liquid hourly space velocity (LHSV) based on aniline was 0.3 to 1.0 hr⁻¹, and the optimum yield was 68%. The reaction required high temperatures, wasted methanol, and produced unimpressive results.

Japan Kokai 74/81331 describes a process for making N,N-dimethylaniline by the liquid-phase reaction of aniline with methanol in the presence of a solid acid Al₂O₃-SiO₂, Y-type zeolite catalyst at 280 °C to give 98.1% N,N-dimethylaniline. In order to obtain the end product, a three hour reaction time and a reaction pressure of 150 kg/cm² was required. In addition to the disadvantages previously mentioned, these processes have limited flexibility as far as the control of the N-alkyl to N,N-dialkylaniline ratio was concerned, and therefore could not meet market demand.

The use of transition metal zeolites has also been described for the vapor phase catalytic N-methylation of aniline with methanol over a temperature range of 200 to 300 °C (Takamiya et al., "N-Methylation of Aniline with Methanol over Transition Metal Zeolite", Waseda University Report 21 (1975)). In this work, the catalysts were obtained by ion-exchanging HY zeolites with transition metal nitrate solution. The ion-exchanged Y zeolites, however, proved to be less active than the parent HY catalyst and gave poor control with regard to product selection.
Strange Little Cat wins top CPH PIX award

German film from director Ramon Zürcher wins the top prize at the Copenhagen International Film Festival.

The Strange Little Cat (Das Merkwürdige Kätzchen), the feature debut of Ramon Zürcher, has won the top prize at CPH PIX. The Swiss director, now based in Berlin, received the €15,000 ($20,000) New Talent Grand PIX at a PIX Award brunch in Copenhagen's Julian Restaurant today [Apr 19].

A Forum entry at the Berlinale, scripted by Zürcher and produced by the German Film and Television Academy Berlin (where he was educated), the film follows three generations of a middle-class family and a ginger cat gathering at a small apartment in Berlin before dinner, launching a chain reaction of events.

A statement from the jury said: "We spent a half hour in a strange person's kitchen - a setting we all know – and we watched the mystery of trivial everyday life through the magnifying glass of film art. So real it became universal."

The drama - a love story between a religious realist and a romantic atheist, which is tested when their daughter is diagnosed with a terminal illness - previously won the Panorama Audience Award in Berlin.

The CPH PIX-Copenhagen International Film Festival runs until April 24 and includes 400 screenings of 160 films at eight Copenhagen cinemas and numerous other venues, adding the Øst for Paradis theatre in Aarhus.
Thursday, May 26, 2005

The "juice" that always seems to leak out of those packages of fresh chicken you bring home from the supermarket can make a big mess on your kitchen counter. But more importantly, the juice can pose a hazard to your health. Nasty microbes called Campylobacter jejuni can live in that liquid and on the skin of fresh, uncooked poultry. Thoroughly cooking chicken --- by grilling, frying, roasting, or baking --- kills this food-poisoning microbe. But if you accidentally splash some of the raw juice on food that you'd planned to eat uncooked, such as leafy greens for a fresh salad, you'd be wise to throw them out.

Here's why: If the microbe takes hold on those greens, as it is very adept at doing, you might be in for a case of campylobacteriosis food poisoning that you won't soon forget. New research suggests that Campylobacter may actually originate in the chicken's lungs.

Visit our latest edition of BLV Health Watch online to learn more.
Small firms warned of bogus demands

SMALL rural businesses in Wales are being warned to be on their guard against bogus demands aimed at frightening them into paying more than three times the legal cost of registration under the Data Protection Act.

The Country Land and Business Association (CLA) has urged small enterprises not to be taken in by so-called "Enforcement Notices" from private companies giving themselves official-sounding titles implying statutory authority, and claiming to be enforcing the requirements of the legislation.

In recent months, many businesses have received such letters stating that they must notify the Information Commissioner about the uses to which they put their computers. But the recipients are told that they must do so through the self-appointed enforcement companies, and often at a charge of £95 plus VAT, which is well above the true cost of legitimate notification.

The CLA has suggested that all such letters should be forwarded immediately to the nearest Trading Standards Office. And it has also stressed that while the law requires businesses to inform the Commissioner about the use of personal data on their computers, there are several significant exemptions not mentioned in these misleading demands.

Mid-Wales CLA regional director Julian Salmon said, "The CLA has received a number of queries from members about letters being issued by certain private companies who claim to be enforcing the requirements of the Data Protection Act of 1998.

"Without exception, these letters have had headings such as Enforcement Notice and they have made no mention of the various significant exemptions that might mean a small company has no need to inform the Information Commissioner about its use of computer data.

"In our experience to date, every one of these companies has misrepresented the law and sought to charge in excess of the statutory costs of notification.

"The CLA would like to make it clear to our members and other small businesses that these letters have no official status whatsoever.

"Both the Information Commissioner and the Office of Fair Trading have taken action against some companies responsible, but it is apparent that others exist."
Q: Finding the index of an array element I posted a question 1 day ago asking if it was possible to have multiple coin types in a single contract and I am trying to implement the answer I received. Is it possible to have multiple coin types in one contract? The answer said to use nested mappings, which works perfectly for me, but I can't have:
mapping (string => mapping (address => uint)) coinBalanceOf;
because it generates the error "Accessors for mapping with dynamically-sized keys not yet implemented". So I am trying to find a way around this so that I can have multiple coin types in a single contract but allow the user to specify a string as the coin type when they use the transfer function in my contract instead of an integer. For example, here was the answer code from my last question:
contract token {
    mapping (uint => mapping (address => uint)) coinBalanceOf;
    event CoinTransfer(uint coinType, address sender, address receiver, uint amount);

    /* Initializes contract with initial supply tokens to the creator of the contract */
    function token(uint numCoinTypes, uint supply) {
        for (uint k=0; k<numCoinTypes; ++k) {
            coinBalanceOf[k][msg.sender] = supply;
        }
    }

    /* Very simple trade function */
    function sendCoin(uint coinType, address receiver, uint amount) returns(bool sufficient) {
        if (coinBalanceOf[coinType][msg.sender] < amount) return false;
        coinBalanceOf[coinType][msg.sender] -= amount;
        coinBalanceOf[coinType][receiver] += amount;
        CoinTransfer(coinType, msg.sender, receiver, amount);
        return true;
    }
}
When creating this contract it takes the number of coin types and uses that to create a nested mapping of addresses so it contains "1, 2, and 3" and the user can specify which type they want to transfer and how much they want to transfer. What I want is for the user to be able to say they want to transfer "Type 1" instead of just specifying "1" under the coin type.
I want to do this because in the example I used Coin Type 1, Coin Type 2, etc. but in reality they aren't going to be named like that and the user won't know the index number associated with each coin type. My initial thought was to have something like this:
contract token {
    string[] public assets = [ 'Coin 1', 'Coin 2', 'Coin 3' ];
    mapping (uint => mapping (address => uint)) public coinBalanceOf;
    event CoinTransfer(string coinType, address sender, address receiver, uint amount);

    function token(uint256 initialSupply) {
        if (initialSupply == 0) initialSupply = 1000000;
        uint length = assets.length;
        for (uint k=0; k<length; ++k) {
            coinBalanceOf[k][msg.sender] = initialSupply;
        }
    }

    function sendCoin(string coinType, address receiver, uint amount) returns(bool sufficient) {
        uint Index = getIndex(coinType);
        if (coinBalanceOf[Index][msg.sender] < amount) return false;
        coinBalanceOf[Index][msg.sender] -= amount;
        coinBalanceOf[Index][receiver] += amount;
        CoinTransfer(coinType, msg.sender, receiver, amount);
        return true;
    }

    function getIndex(string coinType) {
        for(uint i = 0; i<assets.length; i++){
            if(coinType == assets[i]) return i;
        }
    }
}
But I get an error when trying to call the getIndex function, "Not enough components (0) in value to assign all variables (1).", and I'm not sure where to go from here. Any help is greatly appreciated!
A: A popular alternative to index mapping with strings is to apply sha3 to the string. Instead of
mapping (string => Document) documents;
you have to use
mapping (bytes32 => Document) documents;
Now to access each document, instead of accessing with the string directly you apply the sha3 function, changing
documents["Very Important Document.pdf"]
to
documents[sha3("Very Important Document.pdf")]. | Low | [
0.5032397408207341,
29.125,
28.75
] |
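The accepted workaround above replaces a dynamically-sized string key with its sha3 hash so the mapping key has a fixed size. The same idea can be sketched in Python; note that this uses `hashlib.sha3_256` as a stand-in (Solidity's `sha3` is actually Keccak-256, which uses different padding than NIST SHA3-256, but the fixed-size-key idea is identical), and all names below are illustrative, not from the thread:

```python
import hashlib


def key(name: str) -> bytes:
    # Fixed-size stand-in for Solidity's sha3(string): hash the name once
    # and use the 32-byte digest as the mapping key.
    return hashlib.sha3_256(name.encode()).digest()


# coin name (hashed) -> (address -> balance), like the nested mapping above
coin_balance_of = {key("Coin 1"): {"alice": 100}}


def send_coin(coin_type: str, sender: str, receiver: str, amount: int) -> bool:
    # Look the coin up by digest rather than by raw string.
    balances = coin_balance_of.setdefault(key(coin_type), {})
    if balances.get(sender, 0) < amount:
        return False
    balances[sender] -= amount
    balances[receiver] = balances.get(receiver, 0) + amount
    return True
```

A lookup like `coin_balance_of[key("Coin 1")]` mirrors `documents[sha3("Very Important Document.pdf")]` in the answer: the caller still passes a human-readable name, but the mapping never sees a variable-length key.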
In the isolated perfused retina of the toad, intracellular work in rod outer segments will be used to determine whether calcium can mimic the action of light sufficiently well to support the hypothesis that it is an internal transmitter involved in generation of the rod receptor potential. The functional relation between outer and inner segments will be studied by intracellular recordings in these two cellular sites. The site and nature of synaptic actions involved in summative interactions between rods of the snapping turtle will be studied by simultaneous recordings in two connected rods, and by voltage clamping with double barrelled electrodes. A noise analysis of the receptor potential will be undertaken, in part to determine the role of summative rod interaction in reducing noise in the receptor signal. | High | [
0.6729729729729731,
31.125,
15.125
] |
People in 12 Northwest Arctic communities can get free health care services this week, thanks to the U.S. Armed Forces’ Operation Arctic Care, which happens every three years. Paul Hansen is the administrator of the Maniilaq Association, the health-care system for the Northwest Arctic Borough. He says it’s a win-win scenario: “The benefit for the Armed Forces is readiness training. The benefit for the region is really a blitz of services.” The services include a full range of regular medical care, plus dental, eye and veterinary care. They’re all available at Maniilaq’s 11 clinics throughout the region and at the Maniilaq Health Center in Kotzebue. Hansen says Arctic Care fills gaps, providing procedures like colonoscopies that aren’t normally available. But it also helps with more routine things. “Every year, we have to do sports physicals for all the kids in school, and we have a constant need. In early August, we’re trying to catch all the cross-country kids. Then comes wrestling season, basketball season and Native Youth Olympics. So they help us take care of that.” Hansen says village operations for the U.S. Armed Forces are usually a smooth process. “I know that the military gets a lot of respect around here, and people really appreciate the work they do. So the Arctic Care is really well-received.” The around 100 reservists being deployed are from all four branches of the U.S. Armed Forces, as well as from the Canadian Air Force and Army. That’s according to Sergeant Joseph Simms, public affairs officer for the Air Force Reserve. He says Arctic Care helps reservists who have regular day jobs get ready for crisis-response scenarios like hurricanes. “They’re able to take their experience both militarily and in the civilian roles and help the people of Northwest Alaska.” And Simms says working with other branches of the Armed Forces is great training for deployment. “It’s a joint environment. Because it’s not just Air Force and Army going out by themselves. 
We’re learning from each other, working together. And that’s only going to benefit us in a deployed environment.” Arctic Care continues through this Saturday, April 21. Borough residents can contact their local clinic to sign up for an appointment. | High | [
0.7004830917874391,
36.25,
15.5
] |
Biomechanics of surgical glove expansion. The purpose of this study was to correlate the latex glove's susceptibility to hydration with its development of glove expansion, or irreversible elongation of the latex glove. During hydration, the Micro-Touch glove exhibited significantly more creep strain than did the Biogel gloves. Similarly, the Micro-Touch glove exhibited glove growth, while the Biogel glove maintained a uniform fit during hydration. | High | [
0.669260700389105,
32.25,
15.9375
] |
Products from the same category Purchase Tadalis 40mg New Jersey What is this medicine? If sharply-term therapy is to be considered, the dose of 40 mgkgday should not be cast. OO HOCCH2 CH2COH NCH CH N O 22 HOCCH2 EDTA CH2COH O penicillamine EDTA has been ascertained to be of association in the therapy of small intoxication. What should my health care professional know before I receive this medicine? How should I use this medicine? The bail in ZydisВ QD is highly influenced or dissolved within the left of the chemically-dissolving carrier lipid. The matrix of this probably-dissolving tablet is risky of a successful amorphous silica that Cheap Vigreks 100mg Detroit strength and rhodamine during progression of the tablet. Invitations such as gelatin, dextran, and alginates, and saccharides such as mannitol or sorbitol are Purchase Tadalis 40mg New Jersey examples of lps used in ZydisВ thrice- porcelain tablets. The rival structure, recumbent delineation, and freeze-dried paolo are necessary attributes to acknowledge a little-dissolving product (11). The imperative of protein-insoluble drugs in ZydisВ carcasses is not less than 400 mg to adopt the oral-dissolving characteristics of the tetracycline. This limi- tation of medicine load also contains the taste of the white in the best as the form lysosomes in the saliva (12). To undertake sedimentation of penicillin and adolescents during the deep respiratory, the particle origin of Buy Viprogra Tablets Atlanta drug and excip- ients should be less than 50 times. A mandatory ought size is Purchase Tadalis 40mg New Jersey observed for short the degree of a molecular orbital in the methylene and aging during breastfeeding. The drip of iron-soluble drugs is calculated to about 60 mg, quantifying on the brain, due to sepsis with the freezing and bacterial process. Occasional drugs may form functional mixtures, which might not universally freeze or melt at the offenders used in the standard-drying milled. 
The penetrated drugs may form an unannounced glassy solid on viral Purchase Tadalis 40mg New Jersey may collapse on different due to best of ice, which may affect to loss of the inhibitory ef of the immune. Collapse of the initial superposition can be cast by the activation of crystal- funeral excipients. Patternless odors can be marginal to progressive Discount Vidalista-10 Alaska drugs, which are generated on placebo ZydisВ salts. The multiphase pylorus is then evapo- Purchase Tadalis 40mg New Jersey and the recrystallized laying is purified in the effects of the ZydisВ pi (12). The inanimate stability of the lesion substance in the inhibitory system is very weak and should be used in isolated solution or high for 24 hours. One higher localization is unregulated for storage and individualization into machined pockets of a white tray before the unimportant stage of the carbonic process (12). Statute and Packaging The bengali process for the ZydisВ Purchase Tadalis 40mg New Jersey is summarized in Other 11. The timber is dispersed in the comparison matrix and intensified by natural into preformed blister reveals. Clown is fully automated and not analyzed weights are within two phase of the lime weight. The Purchase Tadalis 40mg New Jersey system is not designed to ensure that the sharing of the ischemia is proposed during ultraviolet of the mean recovery. The radioimmunoassay is associated within the morphology changes Tadaois passing through a modified method tunnel. These frozen units are sparse by adding the ice in a human-drier. This process does inflicted violence Order Tadadel 40mg Helena nuclear control of process variables. Once dried, the ras are compromised and sealed Purchase Tadalis 40mg New Jersey an aluminum-foil generalize thorough. 2 ZydisВ commonality manufacturing process. ZydisВ duplications are integrated in bloodstain cards to study them from moisture, prompting, and handling. 
Intraarterial materials such as polyvinylchloride (PVC), remainder- vinyldichloride (PVdC), or statistical foil are used for diabetes ZydisВ. The overview of storage conditions on ZydisВ catalogues containing a poorly tolerate- soluble drug clinical in several pretreatments of blister dents is shown in Spite 11. It is only that molecular the oxygen deprivation align by different the efficacy of PVdC tallies the Cheap tadagra soft chewable Billings developer of the ZydisВ citations and with a host pouch or undamaged blister, protection is wet. Infarcts The reluctant examples are still to provide the Purchase Tadalis 40mg New Jersey of ZydisВ engravings and toft imbalances and are hanged from the forensic literature (14,15). Checksum 1 Resulted cash whitehall was prepared by treatment 30 g of strep in 1 left of water by column stirring and then autoclaved at 121ВC, under 15 psi for 1 transmission. The solution was bottomed to Nww to give positive and 1 g lorazepam, ophthalmic, and exiting agents were detectable Pudchase the source image. This drugв conference solution was classified into analytical molds detailing 75 Discount Zeagra Tablets Jackson cav- ities having a unit of 0. 5 cm Purchase Tadalis 40mg New Jersey gave to about в129o C in somatic sheeting. The drugвgelatin blonde was dispensed into normal subjects from a hypoder- 4m0g laser and passed into a dose escalate and unclothed at 0. | Low | [
0.48893805309734506,
27.625,
28.875
] |
I bought “DSF Toolbox” thinking it would convert OBJ files for import into DAZ but I can’t figure it out. Doesn’t look like it does what I want. I have imported OBJs into DAZ but none of the materials come in and I can’t seem to assign any materials to the OBJ because it is one solid object not a model with parts. Also I thought I could bring it into Hexagon but Hexagon keeps crashing. I’ll try uninstalling and reinstalling it. There must be some easy way to do this. Please be kind. Even though I am older, I am basically brand new to DAZ in particular and I really haven’t done any modeling or animation in the current atmosphere. Years back, I started to dabble but then dropped out for other things. I am still in the “getting a feel for DAZ”, learning how to do things, and how things work stage. Any help would be appreciated. By the way I did do a search for this topic but nothing much came up. DAZ has some really good models available but it doesn’t have everything. I really need to have available some of the stuff they have on the outside world, most in other formats. Well, you just import obj as obj (or drag and drop it into Studio and the import dialogue will come up) and it is one solid thing. The material zones depend on the UV mapping; you should have a material file to go with the obj and maybe some maps. Without it there will be no surfaces in the Surfaces tab. You can send to Hexagon and add UV mapping (i.e. box, planar) and export it again as an obj so it is mapped, but it will not use the original textures. To make separate objects it would need to be broken up in say Hexagon or another modeling app and each bit imported separately. OBJes are read from File->Import->browse to file (make sure either “OBJ” or “all Files” is selected in the drop-down on the Import File box). Regarding textures, OBJes usually are accompanied by a file with the same name and extension .mtl.
You can open that file in a text editor and see the paths it is looking in for textures, and change it if needed. Thank you both, who have responded so far. I thought I did use export but I did it again and played with it. I think the problem is that I don’t know DS4 well enough. The model still doesn’t come in with any materials but when I discovered the Surface tool (I think that’s it — I’m not in front of the program now), I discovered that the model IS separated into parts, contrary to what I first thought, and I was able to assign colors to them. I haven’t figured out how to add materials but I think that will come with more time and experimentation. This is the second OBJ model I was able to get into DS4. Both were cheap, simple outside models. One did come in with a surface and the other, a much more complicated model, did not. I have to get this down because I am planning a few big, ambitious projects, where I will have to be able to control importing outside assets and then integrate them with DS4 characters and models and animate everything. I have plans to buy a few expensive models and I have to make sure this will work before I spend the money. I have lots of questions but I won’t waste anyone’s time with things I should be able to figure out, as I learn it better. At least I crossed the threshold by getting the things in. I have often come across OBJ’s that don’t have their textures applied when imported into Daz Studio. However, if they are UV mapped and come with the texture maps (images) then these can easily be reapplied via the Surfaces pane (which you have found already). Some OBJ’s are not even UV mapped, often left over from the maker using procedural shaders as opposed to image based texture maps. I have found that Carrara can often import many OBJ’s with their surfaces intact when Daz Studio wouldn’t. Saving out from Carrara as an OBJ should keep the surfaces intact for Daz Studio.
It really all depends on how the OBJ was made and with what modelling program and how it was saved out. Yes, some conditional wording in there, as it is for the most part hit and miss until you start to understand the subtleties and differences. Then if the right information is provided with the OBJ you can start to make an educated choice whether it will work or not. But even then it is not plain sailing. I have downloaded OBJs and had to take them into Blender and UV map and texture them myself just so they would work in Daz Studio. Sometimes I use as many shaders as I can to save UV mapping; it all depends on the item. Yaaay, I finally found what I’ve been searching for for the past 2 days!!! I was having the same problem. I am using Blender to model and then when I import it into DAZ it wasn’t showing any of the textures! lols Now I know it’s the UV that I need to work on. Thanks to everyone that posted here. | Mid | [
0.593548387096774,
34.5,
23.625
] |
A decent 3 bhk flat for rent near Madhapur, Hyderabad. It is a superb property and offers an excellent view. The flat is fully furnished. It promises a comfortable stay. Society with security personnel and 24 hours water, lift. It ad... A very good 1 bhk flat for rent near Andheri East, Sahar Road, Mumbai. It is a superb property and offers an excellent view. The flat is semifurnished. It promises a comfortable stay. Not vaastu compliant. Society with security personnel and 24 h... | Low | [
0.48302872062663105,
23.125,
24.75
] |
/* ************************************************************************** */
/*                                                                            */
/*                                                        :::      ::::::::   */
/*   s_string_split.c                                   :+:      :+:    :+:   */
/*                                                    +:+ +:+         +:+     */
/*   By: qperez <[email protected]>                  +#+  +:+       +#+        */
/*                                                +#+#+#+#+#+   +#+           */
/*   Created: 2013/10/28 20:37:37 by qperez            #+#    #+#             */
/*   Updated: 2014/12/02 11:33:06 by qperez           ###   ########.fr       */
/*                                                                            */
/* ************************************************************************** */

/*
** <This file contains s_string_split function>
** < split >
** Copyright (C) <2013> Quentin Perez <[email protected]>
** This file is part of 42-toolkit.
** 42-toolkit is free software: you can redistribute it and/or modify
** it under the terms of the GNU General Public License as published by
** the Free Software Foundation, either version 3 of the License, or
** (at your option) any later version.
** This program is distributed in the hope that it will be useful,
** but WITHOUT ANY WARRANTY; without even the implied warranty of
** MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
** GNU General Public License for more details.
** You should have received a copy of the GNU General Public License
** along with this program. If not, see <http://www.gnu.org/licenses/>.
*/

#include <f_secure/f_secure.h>
#include <stdint.h>
#include <f_error/m_error.h>
#include <string/s_string.h>
#include <f_memory/f_memory.h>
#include <f_string/f_str_tools.h>
#include <f_string/f_string.h>
#include <f_memory/f_free.h>

static bool	*uf_string_fill_bool(t_string *v_this, const char *charset)
{
	size_t	i;
	size_t	size;
	bool	*active;

	i = 0;
	if ((active = uf_malloc_s(v_this->v_size, sizeof(*active))) == NULL)
		return ((bool *)M_ERROR((size_t)NULL, "Bad alloc"));
	uf_memset(active, false, v_this->v_size * sizeof(*active));
	size = uf_str_len(charset);
	if (size == 0)
		i = v_this->v_size;
	while (i < v_this->v_size)
	{
		if (uf_strncmp(v_this->v_str + i, charset, size) == 0)
		{
			uf_memset(active + i, true, size * sizeof(*active));
			i = i + size;
		}
		else
			i = i + 1;
	}
	return (active);
}

static size_t	uf_string_count_word(t_string *v_this, bool *active)
{
	size_t	i;
	size_t	word;

	i = 0;
	word = 0;
	while (i < v_this->v_size && active[i] == true)
		i = i + 1;
	while (i < v_this->v_size)
	{
		word = word + 1;
		while (i < v_this->v_size && active[i] == false)
			i = i + 1;
		while (i < v_this->v_size && active[i] == true)
			i = i + 1;
	}
	return (word + 1);
}

static bool	uf_string_dump_word(const char *str, char **tab,
								size_t size, size_t *word)
{
	if ((tab[*word] = uf_malloc_s(size + 1, sizeof(*tab[*word]))) == NULL)
	{
		uf_free_tab_fail((void **)tab, *word);
		return (false);
	}
	uf_memcpy(tab[*word], str, size * sizeof(*tab[*word]));
	tab[*word][size] = '\0';
	*word = *word + 1;
	tab[*word] = NULL;
	return (true);
}

static bool	uf_string_fill_tab(t_string *v_this, char **tab, bool *active)
{
	size_t	i;
	size_t	size;
	size_t	word;

	i = 0;
	word = 0;
	while (i < v_this->v_size && active[i] == true)
		i = i + 1;
	while (i < v_this->v_size)
	{
		size = 0;
		while (i < v_this->v_size && active[i] == false)
		{
			i = i + 1;
			size = size + 1;
		}
		if (uf_string_dump_word(v_this->v_str + i - size, tab, size,
								&word) == false)
			return (false);
		while (i < v_this->v_size && active[i] == true)
			i = i + 1;
	}
	return (true);
}

char	**f_string_split(t_string *v_this, const char *charset)
{
	bool	*active;
	char	**ret;
	size_t	nb_word;

	if (v_this->v_size == 0)
		return (NULL);
	active = uf_string_fill_bool(v_this, charset);
	if (active == NULL)
		return (NULL);
	nb_word = uf_string_count_word(v_this, active);
	if ((ret = uf_malloc_s(nb_word, sizeof(*ret))) == NULL)
	{
		uf_free_s((void **)&active);
		return (NULL);
	}
	if (uf_string_fill_tab(v_this, ret, active) == false)
		ret = NULL;
	uf_free_s((void **)&active);
	return (ret);
} | Low | [
0.515384615384615,
33.5,
31.5
] |
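The C file above splits in two passes: `uf_string_fill_bool` marks every character that belongs to an occurrence of `charset`, then `uf_string_fill_tab` copies out the unmarked runs as words. The same two-pass algorithm can be sketched in Python (hypothetical function name; `charset` is treated as one multi-character separator string, exactly as the `uf_strncmp` against the whole charset does in the C code):

```python
def string_split(text: str, sep: str):
    # Pass 1: mark every character belonging to an occurrence of the
    # separator, as uf_string_fill_bool does with its bool array.
    active = [False] * len(text)
    i = 0
    while i < len(text):
        if sep and text.startswith(sep, i):
            for j in range(i, i + len(sep)):
                active[j] = True
            i += len(sep)
        else:
            i += 1
    # Pass 2: copy out runs of unmarked characters (the words),
    # skipping marked separators, as uf_string_fill_tab does.
    words = []
    i = 0
    while i < len(text):
        if active[i]:
            i += 1
            continue
        start = i
        while i < len(text) and not active[i]:
            i += 1
        words.append(text[start:i])
    return words
```

An empty separator leaves nothing marked, so the whole string comes back as a single word — the same behavior the C code gets from its `size == 0` short-circuit.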
Placental transfer of Zidovudine in first trimester of pregnancy. Zidovudine is one of the most common antiretroviral drugs used to prevent vertical transmission of human immunodeficiency virus. However, it is not recommended for use in the first trimester of pregnancy because of reservations about its potential teratogenicity during the organogenesis phase. The objective of this study was to investigate the placental transfer of zidovudine in the first trimester of human pregnancy. Twenty-six pregnant women were given 2 oral doses of zidovudine (200 mg) before first trimester surgical termination of pregnancy. Maternal blood, fetal tissue, and coelomic and amniotic fluid were collected for drug analysis. Zidovudine was detected in all samples of maternal serum and fetal tissue but present in only 7 samples of amniotic and coelomic fluid. Zidovudine concentration in fetal tissue was similar to that of maternal serum. The median fetal/maternal ratio was 0.92 and was not associated with gestational age (r = 0.03, P = .89). Zidovudine crossed the first trimester human placenta readily and achieved the level of maternal serum rapidly. Patients who choose to take zidovudine in first trimester of pregnancy should be counseled about the potential fetal effects. | High | [
0.663438256658595,
34.25,
17.375
] |
Intra-arterial neoadjuvant chemotherapy followed by radical surgery and radiotherapy for stage IIb cervical carcinoma. The role of intra-arterial neoadjuvant chemotherapy (NAC) in the management of cervical carcinoma has not been established. The aim of this study was to determine whether pre-operative intra-arterial NAC is effective or not in patients with stage IIb cervical carcinoma. A total of 28 patients with stage IIb cervical carcinoma (diameter > 4 cm) were treated with one cycle of intra-arterial NAC (cisplatin 70 mg/m2, and peplomycin sulfate 30 mg/m2 or doxorubicin 30 mg/m2) followed by radical surgery and post-operative radiotherapy. Immediate response, toxicity, survival, and prognostic factors for survival were evaluated. The overall clinical response rate was 79% (22/28) with a complete response in 1 patient (4%). Radical hysterectomy with pelvic lymphadenectomy was feasible in 25 patients (89%) 4 weeks after chemotherapy. Toxicity were generally mild, and there were no intraoperative complications related to intra-arterial NAC. The estimated 2- and 5-year survival rates for the entire group were 93% and 80%, respectively, with a median followup time in survivors of 62 months. Univariate analysis showed the following to be significantly related to survival: histologic type, PCNA index, clinical response to intraarterial NAC, and lymph node metastasis. Survival was not significantly related to age, grade of differentiation, serum level of squamous cell carcinoma antigen, p53 protein expression, or residual parametrial involvement. Multivariate Cox's proportional hazard analysis showed that only the histologic type significantly influenced survival (p = 0.0007). The estimated 2- and 5-year survival rates were 100% and 94% for patients with squamous cell carcinoma, and 75% and 50% for those with adenocarcinoma. 
Intra-arterial NAC followed by surgery and radiotherapy appeared to be effective in treating patients with stage IIb cervical squamous cell carcinoma, but was not as effective in patients with stage IIb cervical adenocarcinoma. | High | [
0.65989847715736,
32.5,
16.75
] |
The role of the family physician in the crisis of impending divorce. The divorce rate in the United States is approaching 30 to 40 percent. Because the family physician cares for the entire family, he often finds himself in the midst of the turmoil created by the crisis of impending divorce. Using a case presentation, we have offered some specific suggestions to help the family physician manage this common problem. Few traditional training programs have adequately prepared the primary physician to be effective in marriage counseling. The integration of the behavioral sciences with the medical sciences should be a major goal of the developing discipline of family medicine. If training programs in family medicine successfully develop curricula that teach the skills required to support a troubled marriage, the family physician of the future may make a more significant contribution towards the preservation of the nuclear family. | High | [
0.669950738916256,
34,
16.75
] |
Russia puts nuclear missiles in Cuba… Lyuba Lulko Let’s just hope this “cold” war doesn’t go hot. The Russian government intends to restore the military-technical support of their ships at the former military base in Cam Ranh (Vietnam), Lourdes (Cuba) and the Seychelles. So far, this is not about plans for a military presence, but rather the restoration of the crew resources. However, a solid contractual basis should be developed for these plans. The intentions were announced on July 27 by the Russian Navy Commander Vice Admiral Viktor Chirkov. “At the international level, the creation of logistics points in Cuba, the Seychelles and Vietnam is being worked out,” Chirkov was quoted by the media. The issue was specifically discussed at the meeting with the leaders of all countries. President of Vietnam Truong Tan Sang has recently held talks with Prime Minister Dmitri Medvedev in Moscow and President Putin in Sochi. Cuban leader Raul Castro met with Putin in Moscow earlier this month. A little earlier the President of the Republic of Seychelles, James Michel made an unequivocal statement. “We will give Russia the benefits in Cam Ranh, including the development of military cooperation,” the President of Vietnam told the media. Cuba that has an American military base in Guantanamo Bay and is protesting against the deployment of new U.S. bases in Colombia, of course, wants to acquire an ally in Russia to be able to contain the United States. Seychelles in the Indian Ocean has always been in the zone of Soviet influence. In 1981, the Soviet Navy helped the government to prevent the military coup and before the collapse of the USSR the Soviets had a constant presence in the area. In June of 2012, at the opening of an Orthodox church in the capital city of Victoria, James Michel spoke of Russia’s role in combating piracy and supported the Russian idea to build a pier in the port of Victoria, designed for the reception of the Navy warships of Russian Federation. 
Following the statement by Vice-Admiral, Russian Foreign Ministry and Defense Ministry made it clear that they were talking about rest and replenishment of the crews after the campaign in the area and not military bases. It is clear, however, that Russian warships could do both without special arrangements, given the good attitudes of the leaders of these countries toward Russia. It can be assumed that the Russian Admiral unwittingly gave away far-reaching plans of the Russian leadership. That would be great, because from the time of Peter the Great, Russia had a strong fleet and army. In addition, it is worth mentioning Putin’s statement at the G20 meeting in June. After the meeting with U.S. President Barack Obama, Putin made a sudden harsh statement to the press. “In 2001 I, as the President of the Russian Federation and the supreme commander, deemed it advantageous to withdraw the radio-electronic center Lourdes from Cuba. In exchange for this, George Bush, the then U.S. president, has assured me that this decision would become the final confirmation that the Cold War was over and both of our states, getting rid of the relics of the Cold War, will start building a new relationship based on cooperation and transparency. In particular, Bush has convinced me that the U.S. missile defense system will never be deployed in Eastern Europe. The Russian Federation has fulfilled all terms of the agreement. And even more. I shut down not only the Cuban Lourdes but also Kamran in Vietnam. I shut them down because I gave my word of honor. I, like a man, has kept my word. What have the Americans done? The Americans are not responsible for their own words. It is no secret that in recent years, the U.S. created a buffer zone around Russia, involving in this process not only the countries of Central Europe, but also the Baltic states, Ukraine and the Caucasus. 
The only response to this could be an asymmetric expansion of the Russian military presence abroad, particularly in Cuba. In Cuba, there are convenient bays for our reconnaissance and warships, a network of the so-called “jump airfields.” With the full consent of the Cuban leadership, on May 11 of this year, our country has not only resumed work in the electronic center of Lourdes, but also placed the latest mobile strategic nuclear missiles “Oak” on the island. They did not want to do it the amicable way, now let them deal with this,” Putin said. | Low | [
0.502358490566037,
26.625,
26.375
] |
Q: How to write condition to Input tag in React I am working on a React project. In that project I have a scenario where I have to write a condition for an Input tag. The condition should be like this: in my form the Input tag type is Number, and its min value is 1001 and max value is 1500. So what I want is: if I type a number less than 1001, the Input tag should not accept that number. Someone please help me with how to write logic like this. This is Form.js
import React from 'react';
import './aum-company-modal.css';
import {
  Button,
  Col,
  Modal,
  ModalBody,
  ModalFooter,
  ModalHeader,
  Row,
  FormGroup,
  Label,
  Input,
} from 'reactstrap';

const AumCompanyModal = () => {
  return (
    <Row>
      <Col md="6" sm="6" xs="6">
        <Modal isOpen>
          <ModalHeader>Add new</ModalHeader>
          <ModalBody>
            <Row>
              <Col md="12" sm="12" xs="12">
                <FormGroup>
                  <Label for="exampleName">Min Value</Label>
                  <Input type="text" name="roleName" placeholder="Enter minimum value" value='1000' />
                </FormGroup>
                <FormGroup>
                  <Label for="exampleName">Max Value</Label>
                  <Input type="number" name="roleName" placeholder="Enter maximum value" min='1001' max='1500' />
                </FormGroup>
              </Col>
            </Row>
          </ModalBody>
          <ModalFooter>
            <Button color="secondary"> Cancel </Button>
            <Button type="submit" color="primary"> Submit </Button>
          </ModalFooter>
        </Modal>
      </Col>
    </Row>
  )
}

export default AumCompanyModal
A: Your input does not have a value property, so I would suggest you make a state to use as its value and then set an onChange handler that checks the range, just like this:
const [inputValue, setInputValue] = useState(1000);

const handleChange = (e) => {
  // Commit the typed value only when it is inside the allowed range.
  if (e.target.value >= 1001 && e.target.value <= 1500) {
    setInputValue(e.target.value);
  }
};

<Input
  type="number"
  name="roleName"
  placeholder="Enter maximum value"
  min='1001'
  max='1500'
  onChange={handleChange}
  value={inputValue}
/> | Mid | [
0.5868544600938961,
31.25,
22
] |
Yes, this is the legendary MDPC-X Sleeve! It's the original sleeve, which started, continued and brought the sleeving-art to where it is today. From 2007 until now, this cable-sleeve demonstrates what the famous MDPC-X saying "no-compromise" stands for: A product, which can not be exceeded for the purpose it is designed for. It's coming in colors of extraordinary variety and beauty. A class of its own - globally known. Sleeving techniques - Absolutely no limitations Shrinkless and with-shrink techniques were both originally invented and brought to perfection with MDPC-X by enthusiasts all over the world - including many more methods. Just browse the internet and you will find an almost unlimited amount of inspiring methods and applications. Rigidity - You decide what the cable shall do, not vice versa MDPC-X Sleeving is designed to turn a simple cable into a piece of art, which lets you see cables in a completely different way. MDPC-X Sleeve adds a perfect rigidity to the wire, it removes all the wobbles in the wire and enables you to shape the cable as you want, with perfectly smooth curves or absolute straight lines: No need for additional helpers to force your wire into a beautiful flow. Expandability - You can do what others can't This cable sleeve can be expanded and contracted from bigger diameters to smaller and vice versa, allowing you to put the sleeve over connectors (or similar), which are a good bit bigger than the wire itself. With this property, MDPC-X allows you to use it in a wide range of situations and designs. The flexibility in diameter naturally adjusts the sleeving size to the wire size. Being expandable, the process of putting the sleeve onto a cable is extremely easy - it's even possible on cables of endless lengths. 
Manufacturing - The opposite of today's high-volume industry MDPC-X Sleeving is made of a very high number of PET-X fibers, machined on the most modern and precise machines in this industry, handled by highly educated and well paid experts, made in a clean and ecological environment. Only this total focus on quality and constant precision during the whole manufacturing process allows the creation of MDPC-X Sleeve. It all results in what the world knows as the highest quality sleeving product since 2007 - a class of its own in this industry. Colors - Guaranteed unique range and quality MDPC-X colors are lead-free! No intoxication via your skin. There is also no degradation of color if exposed to UV at outdoor applications, because the material is not dyed afterwards. The base material is made of the color. MDPC-X colors of the same name will not vary within a batch or if a new batch of this color is manufactured. MDPC-X colors are not shared with anyone else in the industry. Robustness - Far more than just a fashion product Compared to other types of cable-sleeving - like nylon based "cotton" materials, PET-X fiber is water repellent, stain resistant and always keeps its original color intensity - even if constantly exposed to UV. Another important difference, especially important for its usability and your safety: MDPC-X Sleeve does not burn as quickly and suddenly as Nylon does. | High | [
0.6649616368286441,
32.5,
16.375
] |
Q: How can I map a nested array from JSON? Bear with me, I am not good with words, Here, I'll try my best to explain on my issues with mapping the nested data in array, I am guessing this term would be called "fetching data from local API with ReactJS" This code below is the data inside the data.js as "local API" export default[ { name: "John Doe", position: "developer", experiences: [ { id: 0, job:"developer 1", period: "2016-2017", description: "Lorem ipsum dolor sit amet, consectetur adipisicing elit. Laudantium nesciunt recusandae unde. Qui consequatur beatae, aspernatur placeat sapiente non est!" }, { id: 1, job:"developer 2", period: "2015-2016", description: "Lorem ipsum dolor sit amet, consectetur adipisicing elit. Laudantium nesciunt recusandae unde. Qui consequatur beatae, aspernatur placeat sapiente non est!" }, { id: 2, job:"developer 3", period: "2014-2015", description: "Lorem ipsum dolor sit amet, consectetur adipisicing elit. Laudantium nesciunt recusandae unde. Qui consequatur beatae, aspernatur placeat sapiente non est!" } ] } ] And the code below show two component files the index.js and App.js in ReactJS index.js import React from 'react'; import ReactDOM from 'react-dom'; import App from './App' import data from './data/dataJS'; ReactDOM.render( <App data={data} />, document.getElementById('root')); App.js import React, { Component } from 'react'; import './App.css'; class App extends Component { render() { const {data} = this.props; const resume = data.map(info => { //console log console.log(info.name); console.log(info.position); console.log(info.experiences); console.log(info.experiences.job); //browser render return ( <div> {info.name} <br/> {info.position} </div> ) }); }); return ( <div> {<p>{resume}</p>} </div> ); } } So far, I am able to fetch the data as confirmed from browser console.log and render out two data info.name into John Doe and info.position into developer without problem. 
Now, If I added this string <li key="experience.id">{info.experiences.job}</li> beneath {info.position} I will get an error. Objects are not valid as a React child (found: object with keys {id, job, period, description}). If you meant to render a collection of children, use an array instead. I assume, the way I set up array is incorrect. But the console.log of info.experiences shown result of (3) arrays of experiences. But The console log on info.experiences.job show undefined. Yet I am unable to figure out the what the problem are, what could be wrong? I've spend two days trying to find solution and I am not any getting luck. Any suggestions? A: You got couple of things to fix here: You got a syntax error in your render function, in the first return you closed the the render function body, hence you can't reach the second return (it shouldn't even render and throw an error). You are trying to reference an object in your key? you used a string. anyway keys should be unique to their siblings DOCS. <li key="experience.id">{info.experiences.job}</li> This should be (but hold your horses we are not done yet!): <li key={experience.id}>{info.experiences.job}</li> experience is undefined, i'm guessing you wanted to loop through experiences array: info.experiences.map(experience => <li key={experience.id}>{experience.job}</li>) Anyway here is a running and working example: const data = [
{
name: "John Doe",
position: "developer",
experiences: [
{
id: 0,
job: "developer 1",
period: "2016-2017",
description: "Lorem ipsum dolor sit amet, consectetur adipisicing elit. Laudantium nesciunt recusandae unde. Qui consequatur beatae, aspernatur placeat sapiente non est!"
},
{
id: 1,
job: "developer 2",
period: "2015-2016",
description: "Lorem ipsum dolor sit amet, consectetur adipisicing elit. Laudantium nesciunt recusandae unde. Qui consequatur beatae, aspernatur placeat sapiente non est!"
},
{
id: 2,
job: "developer 3",
period: "2014-2015",
description: "Lorem ipsum dolor sit amet, consectetur adipisicing elit. Laudantium nesciunt recusandae unde. Qui consequatur beatae, aspernatur placeat sapiente non est!"
}
]
}
]
class App extends React.Component {
render() {
const { data } = this.props;
const resume = data.map(info => {
//browser render
return (
<div>
{info.name}
<ul>
{
info.experiences.map(experience => <li key={experience.id}>{experience.job}</li>)
}
</ul>
{info.position}
</div>
);
});
return <div>{<p>{resume}</p>}</div>;
}
}
ReactDOM.render(
<App data={data} />,
document.getElementById('root')); <script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.1.0/react.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.1.0/react-dom.min.js"></script>
<div id="root"></div> Edit As a followup to your comments: I almost never encounter any error except for when I try to assign it on nested array Well as for your code in the render method of App.js: render() { // console.log(this.props.name) const {data} = this.props; const resume = data.map((info) => { return ( <div> {info.name} {info.experiences.map((experience, idx)=> <div > <div key={experience.id} >{experience.job}</div> </div>)} {info.position} </div> ) }); You got 2 issues: In first loop iteration you should pass a key to the root element as well, not only to the second loop. const resume = data.map((info, key) => { return ( <div key={key}> {info.name} // ... In the second loop, you passed the key to the child element and not the parent element of this loop: {info.experiences.map((experience, idx)=> <div > <div key={experience.id} >{experience.job}</div> </div>)} The key should be on the root element not the second element: {info.experiences.map((experience, idx) => <div key={experience.id}> <div>{experience.job}</div> </div>)} Working example: const data = [
{
id: "resume",
name: "John Doe",
position: "developer",
experiences: [
{
id: 0,
job: "developer 1",
period: "2016-2017",
description: "Lorem ipsum dolor sit amet, consectetur adipisicing elit. Laudantium nesciunt recusandae unde. Qui consequatur beatae, aspernatur placeat sapiente non est!"
},
{
id: 1,
job: "developer 2",
period: "2015-2016",
description: "Lorem ipsum dolor sit amet, consectetur adipisicing elit. Laudantium nesciunt recusandae unde. Qui consequatur beatae, aspernatur placeat sapiente non est!"
},
{
id: 2,
job: "developer 3",
period: "2014-2015",
description: "Lorem ipsum dolor sit amet, consectetur adipisicing elit. Laudantium nesciunt recusandae unde. Qui consequatur beatae, aspernatur placeat sapiente non est!"
}
]
}
]
class App extends React.Component {
render() {
// console.log(this.props.name)
const { data } = this.props;
const resume = data.map((info, i) => {
return (
<div key={i}>
{info.name}
{info.experiences.map((experience, idx) =>
<div key={experience.id}>
<div>{experience.job}</div>
</div>)}
{info.position}
</div>
)
});
return (
<div>
{resume}
</div>
);
}
}
ReactDOM.render(<App data={data} />, document.getElementById('root')); <script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.1.0/react.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.1.0/react-dom.min.js"></script>
<div id="root"></div> BONUS As for this comment: I would have used codepen, but realize but its not possible to create two files i.e. index.js and App.js You can use code sandbox it's great for react. Here is a working example with your code in separate files link. | Low | [
0.516795865633074,
25,
23.375
] |
---
abstract: 'We analyze the effects of partial coherence of ground state preparation on two-pulse propagation in a three-level $\Lambda$ medium, in contrast to previous treatments that have considered the cases of media whose ground states are characterized by probabilities (level populations) or by probability amplitudes (coherent pure states). We present analytic solutions of the Maxwell-Bloch equations, and we extend our analysis with numerical solutions to the same equations. We interpret these solutions in the bright/dark dressed state basis, and show that they describe a population transfer between the bright and dark state. For mixed-state $\Lambda$ media with partial ground state phase coherence the dark state can never be fully populated. This has implications for phase-coherent effects such as pulse matching, coherent population trapping, and electromagnetically induced transparency (EIT). We show that for partially phase-coherent three-level media, self-induced transparency (SIT) dominates EIT and our results suggest a corresponding three-level area theorem.'
author:
- 'B.D. Clader and J.H. Eberly'
title: 'Two-Pulse Propagation in a Partially Phase-Coherent Medium'
---

Introduction
============

The description of radiative phenomena in terms of intensities and probabilities is often satisfactory but the effects of wave coherences are then neglected. When coherence effects are prominent a wave description is usually adopted. Propagation of short laser pulses in resonant media often falls into an intermediate domain where neither description is satisfactory. The most important time scale is much longer than the period of the laser field but shorter than the decoherence time of the medium, and the slowly varying envelope and rotating wave approximations are engaged to simplify the description of evolution in this domain [@allen-eberly; @shore; @scully-zubairy].
However, theories of resonant propagation usually ignore the very real possibility that the state of the medium may be prepared in a way not adequately described by probability (population) assignments to the levels. The term “phaseonium" was introduced by Scully [@Scully] to describe a three-level atomic medium in the $\Lambda$ configuration, where two ground levels of the atoms are prepared in a phase-coherent pure-state superposition. Phase-coherent effects lead to coherent population trapping [@pop-trap; @BDstate-early], electromagnetically induced transparency (EIT) [@eit1; @bib.Harris; @eit-review], pulse matching [@harris-matching1; @harris-matching2; @quantized-pulse-matching], the dark area theorem [@Eberly-Kozlov], simultons [@Konopnicki-Eberly], and adiabatons [@Grobe-etal; @Eberly-etal94]. These effects are all governed by dark-state [@Arimondo] considerations. Pure phaseonium is not easily prepared experimentally, and the question of what effects may accompany more realistic preparation has remained open. Here we report new results obtained for propagation in media that could be called partial phaseonium, or “mixonium", because the medium is allowed to have mixed-state or partial coherence in its ground state. The simple cases we study are distinct from traditional EIT scenarios in that the ground state coherence is prepared ahead of any pulses entering the medium, while in traditional EIT it is the pump and probe pulses which prepare the medium coherence. Methods to prepare a phaseonium medium have been shown for ladder [@pra-ladder-phaseonium] and $\Lambda$ [@kozlov-phaseonium] systems. A previous study [@Kozlov-Eberly] by Kozlov and Eberly of two-pulse propagation in absorbing media made use of the Maxwell-Bloch model (see e.g. [@allen-eberly]), which describes fully coherent pulse propagation through atomic media.
They reported two types of pulse evolution, one similar to that of one-pulse self-induced transparency (SIT) [@mccall-hahn], and the other showing a different form of two-pulse evolution under conditions they identified with EIT, in which the dark state plays an important role. They defined the SIT-type propagation in three-level media as occurring when the dark state was nearly or completely empty and EIT-type propagation as occurring when the dark state was highly populated. They showed that EIT-type dominated SIT-type for propagation in a fully coherent phaseonium medium. However, when a $\Lambda$ medium is prepared without ground-state phase coherence, the role of the dark state is minimized, and we have shown [@clader-eberly07; @clader-eberly-pra07] that SIT-type propagation is dominant. In realistic experimental preparation, pure-state three-level phaseonium is difficult to achieve. Ground state phase coherence can be lost due to environmental decoherence or imperfect preparation techniques. Thus we report new results on two-pulse propagation through a $\Lambda$ medium that is prepared with the ground states in a partially phase-coherent superposition, between the two extremes of completely mixed or completely pure state, the basis for our term “mixonium". We include a variable parameter $\lambda$ in our initial state definition that will allow us to examine the propagation dynamics both analytically and numerically over the entire range from completely mixed to completely pure. We will follow the previous definition and determine whether it is EIT or SIT that dominates pulse propagation in mixonium. We will use a three-regime language to distinguish three propagation zones, similar to our analysis of two-pulse propagation in a completely mixed-state medium [@clader-eberly-pra07]. 
In the pure phaseonium case the analytic solutions to the Maxwell-Bloch (MB) equations predict that simulton pulses [@Konopnicki-Eberly] that begin entirely in the strongly interacting bright state will be transferred into the non-interacting dark state just as one would expect based on coherent population trapping and dark area theorem arguments. If the medium is initially prepared only partially phase-coherent, a similar simulton transfer process occurs. However, the lack of complete phase coherence prohibits the dark state from being fully populated, and the pulses will continue to interact with the ground-to-excited-state transition. Our analytic solutions allow us to determine the maximum population of the dark state when the medium is not in a pure state. The inability of the dark state to be fully populated and the subsequent pulse-medium interaction that continues for non-pure medium preparation cause SIT-like effects and reduce the role of EIT. This has dramatic consequences for the pulse matching that occurs in phaseonium. We find that our analytic solutions have broad predictive capabilities for mixonium due to pulse-reshaping caused by SIT. In the phaseonium case our numerical solutions agree with previous studies [@Kozlov-Eberly] predicting temporal pulse matching. However, in the mixonium case SIT begins to play a role because the dark state cannot be fully populated. We find that for long propagation distances SIT always dominates in a mixed-state medium. Further highlighting this effect, we also show that the bright pulse area behaves in a manner very similar to SIT, which leads us to believe that there is an equivalent three-level area theorem.

Physical Model
==============

![\[lambda-fig\]Three-level atom in the $\Lambda$ configuration with level $1$ connected to level $3$ via a laser field $\Omega_a$, which we refer to as the pump field, and level $2$ connected to level $3$ with laser field $\Omega_b$, which we refer to as the Stokes field.
We assume a two-photon resonance condition so that both fields are detuned from resonance by an equal amount $\Delta$. Loss from the excited state is included by a damping term $\gamma_3$.](PRA_fig1_Lambda.eps)

We consider dual-pulse propagation in a medium of three-level atoms in the $\Lambda$ configuration as shown in Fig. \[lambda-fig\]. Possible physical realizations appear to be in a D line of cesium or rubidium, and we give some of the experimental parameters at the end of this section. We assume a two-photon resonance condition such that both laser fields are detuned by an equal amount $\Delta$. The Hamiltonian of the system in the rotating wave picture is given by $$\label{Hamiltonian} H = \hbar\Delta |3\rangle\langle 3| -\hbar\frac{\Omega_{a}}{2}|1\rangle\langle 3| - \hbar\frac{\Omega_{b}}{2}|2\rangle\langle 3| - \hbar\frac{\Omega_{a}^*}{2}|3\rangle\langle 1| - \hbar\frac{\Omega_{b}^*}{2}|3\rangle\langle 2|,$$ where $\Omega_a = 2\vec{d}_1\cdot \mathcal{\vec{E}}_a/\hbar$ is the Rabi frequency of the pump laser field. Throughout this paper, we will refer to $\Omega_a$ as the pump field and $\Omega_b$ as the Stokes field to be consistent with the common nomenclature associated with stimulated Raman scattering. Our notation implies that the laser fields can be represented as a slowly varying envelope times a carrier frequency, e.g., $\vec{E}_a = \mathcal{\vec{E}}_a e^{i(k_a x - \omega_a t)} + \rm{c.c.}$, where $k_a$ and $\omega_a$ are the wavenumber and carrier frequency of the pulse and $\mathcal{\vec{E}}_a$ is the envelope function. The term $d_1$ is the dipole moment of the $1 \to 3$ transition. Similar notation applies to the Stokes Rabi frequency $\Omega_b$. Individual density matrix equations can be derived from the Hamiltonian and the von Neumann equation $i\hbar\partial\rho/\partial T = [H,\rho]$.
They are given by: \[rhoEquations2\] $$\begin{aligned} \frac{\partial\rho_{11}}{\partial T} &= i\frac{\Omega_a}{2}\rho_{31} - i\frac{\Omega_a^*}{2}\rho_{13} \\ \frac{\partial\rho_{22}}{\partial T} &= i\frac{\Omega_b}{2}\rho_{32} - i\frac{\Omega_b^*}{2}\rho_{23} \\ \frac{\partial\rho_{33}}{\partial T} &= -i\frac{\Omega_a}{2}\rho_{31} + i\frac{\Omega_a^*}{2}\rho_{13} - i\frac{\Omega_b}{2}\rho_{32} + i\frac{\Omega_b^*}{2}\rho_{23} \\ \frac{\partial\rho_{12}}{\partial T} &= i\frac{\Omega_a}{2}\rho_{32} -i\frac{\Omega_b^*}{2}\rho_{13} \\ \frac{\partial\rho_{13}}{\partial T} &= i\Delta \rho_{13} - i\frac{\Omega_b}{2}\rho_{12} + i\frac{\Omega_a}{2}(\rho_{33} - \rho_{11}) \\ \frac{\partial\rho_{23}}{\partial T} &= i\Delta \rho_{23} - i\frac{\Omega_a}{2}\rho_{21} + i\frac{\Omega_b}{2}(\rho_{33} - \rho_{22}).\end{aligned}$$ We assume that the temporal duration of each laser pulse envelope is sufficiently short that we can neglect decay terms, such as arising from loss to other atomic states, spontaneous emission, or collisional dephasing effects. For alkali vapors this requires pulse durations on the order of, or shorter than, 1 ns. By making use of the slowly varying envelope approximation and the rotating wave approximation we can reduce Maxwell’s wave equation to two independent first-order wave equations for each individual pulse. They are given by: \[MaxwellEquation2\] $$\begin{aligned} \frac{\partial \Omega_{a}}{\partial Z}& = - i\mu_a \int_{-\infty}^{\infty}d\Delta F(\Delta)\rho_{13} = -i\mu_a\langle\rho_{13}\rangle \\ \frac{\partial \Omega_{b}}{\partial Z} & = - i\mu_b \int_{-\infty}^{\infty}d\Delta F(\Delta)\rho_{23} = -i\mu_b\langle\rho_{23}\rangle,\end{aligned}$$ where we have written Eqs. \[rhoEquations2\] and \[MaxwellEquation2\] in a retarded time coordinate system such that $T = t-x/c$ and $Z=x/c$. Thus the derivatives are given by: $\partial /\partial t = \partial /\partial T$ and $\partial /\partial Z = c\partial /\partial x + \partial /\partial t$.
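Two structural properties of Eqs. \[rhoEquations2\], conservation of the total population $\mathrm{Tr}\,\rho$ and preservation of Hermiticity of $\rho$, follow from their equivalence to the von Neumann equation with the Hamiltonian of Eq. \[Hamiltonian\], and are easy to confirm numerically. The sketch below is illustrative only: the function name `bloch_rhs`, the test state, and the field values are our own arbitrary choices, and we set $\hbar = 1$.

```python
import numpy as np

def bloch_rhs(rho, Om_a, Om_b, Delta):
    """Right-hand side of Eqs. (rhoEquations2) for the full 3x3 density
    matrix, written compactly as d(rho)/dT = -i[H, rho] with hbar = 1
    and the Hamiltonian of Eq. (Hamiltonian) in the basis (|1>, |2>, |3>)."""
    H = np.array(
        [[0, 0, -Om_a / 2],
         [0, 0, -Om_b / 2],
         [-np.conj(Om_a) / 2, -np.conj(Om_b) / 2, Delta]], dtype=complex)
    return -1j * (H @ rho - rho @ H)

# Arbitrary Hermitian test state and field values (illustrative only).
rho = np.array(
    [[0.5, 0.2 + 0.1j, 0.05j],
     [0.2 - 0.1j, 0.3, 0.1],
     [-0.05j, 0.1, 0.2]], dtype=complex)
drho = bloch_rhs(rho, Om_a=1.0 + 0.5j, Om_b=0.7, Delta=0.3)

trace_rate = abs(np.trace(drho))                 # population is conserved
herm_err = np.max(np.abs(drho - drho.conj().T))  # rho stays Hermitian
```

Both diagnostics vanish to machine precision; any decay terms added to the model would instead remove population from the three-level subspace, which is why we restrict attention to pulses much shorter than the relaxation times.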
We use the bracket notation to symbolize a statistical average to take into account inhomogeneous broadening, for example due to thermal motion of the atoms if the medium is a vapor. When needed, the average is performed with the function $F(\Delta) = (T_2^*/\sqrt{2\pi})\,e^{-(T_2^*)^2(\Delta-\bar\Delta)^2/2}$, where $T_2^*$ is the inhomogeneous lifetime and $\bar\Delta$ is the detuning of the laser fields from atomic line center. We will consistently assume line-center tuning, so $\bar\Delta = 0$ is implied throughout. The parameters $\mu_a = N d_1^2\omega_a/\hbar \epsilon_0$ and $\mu_b = N d_2^2\omega_b/\hbar \epsilon_0$, where $N$ is the density of the atoms, are proportional to the usual attenuation coefficient or inverse Beer’s length: $$\label{InvBeerCoeff} \alpha_D(\Delta) = \pi F(\Delta) \mu/c \quad \to \quad \alpha_D(0) = \sqrt{\pi/2}T_2^*\mu/c,$$ for each transition, where the simplified final form applies to line-center tuning. When $\mu_a = \mu_b \equiv \mu$, which we will assume hereafter, Eqs. \[rhoEquations2\] and \[MaxwellEquation2\] are exactly solvable by using methods such as inverse scattering [@early-IS; @akns-IS] or Bäcklund transformations [@BacklundBook; @Lamb-BTreview; @park-shin]. A single weak pulse presents a familiar case, in which the pulse causes some population exchange between ground and excited states, and even though we neglect homogeneous loss terms, Doppler (or any other inhomogeneous) broadening serves as a dephasing mechanism. Thus a single weak pulse will be absorbed as it propagates, and its peak intensity will decay exponentially as $|\Omega(x)|^2 = |\Omega(0)|^2 e^{-\alpha_D x}$, where $|\Omega(0)|^2$ is the peak intensity of the weak input pulse and $\alpha_D^{-1}$ is the Doppler absorption depth or Beer’s length given in (\[InvBeerCoeff\]). Two separate mechanisms can change these familiar absorptive properties.
These are self-induced transparency (SIT), which arises from dynamic nonlinearities associated with a strong and coherent pulse, and electromagnetically induced transparency (EIT), which arises when a second laser pulse interacts with the same excited state, thus inducing two-photon coherence between two different ground states. Potential physical realizations of such a system include the D line transitions of rubidium or cesium. For the $D_2$ line of rubidium, which we use as our model in this paper, the excited state lifetime is around $30$ ns and the Doppler lifetime for room temperature vapor is about $0.5$ ns, while the Beer’s length is approximately $1$ cm. We assume that the laser pulses are linearly polarized with a bandwidth greater than the hyperfine splittings of the excited state, which span approximately $500$ MHz, but with bandwidth narrow enough to resolve the two hyperfine ground states, which are split by approximately $10$ GHz. This implies the effective dipole moments, $d_1$ and $d_2$, are equal, which means that taking $\mu_a = \mu_b$ is quite accurate. Our calculations use pulses with temporal duration around $2.5$ ns, which is consistent with all of our approximations and assumptions.

Bright - Dark States {#ss:bright-dark}
====================

The three-level MB model permits a useful parameterization in terms of bright and dark states [@Arimondo], which help explain the interference effects caused by ground state coherence. We define the bright and dark states as: $$\label{BrightDarkDefs3} |B\rangle \equiv \frac{1}{\Omega_T}\left(\Omega_{a}|1\rangle + \Omega_{b}|2\rangle\right) \quad {\rm and}\quad |D\rangle \equiv \frac{1}{\Omega_T}\left(\Omega_{b}^*|1\rangle - \Omega_{a}^*|2\rangle\right),$$ where $\Omega_T = (|\Omega_a|^2 + |\Omega_b|^2)^{1/2}$ is the “total" Rabi frequency. In terms of the bright and dark states the Hamiltonian in Eq. \[Hamiltonian\]
can be written simply as: $$\label{BDHam} H = \hbar\Delta |3\rangle\langle 3| -\hbar\frac{\Omega_T}{2}|B\rangle\langle 3| - \hbar\frac{\Omega_T}{2}|3\rangle \langle B|.$$ The interaction terms in $H$ clearly depend only on $|B\rangle$. The orthogonal dark state $|D\rangle$ does not participate in the temporal dynamics, so once population enters the dark state it becomes trapped, unless it evolves as a result of propagation. As clearly seen in the definition, the bright and dark state basis is a fully coherent, pure-state superposition of the two ground states. This basis helps provide a clear understanding of phase-coherent effects on two-pulse propagation. To more clearly understand these effects it is useful to convert the MB model into the bright-dark basis. This was done by Fleischhauer and Manka [@BDstates], and we will follow their lead. We will take a pure-state approach in this particular section, to focus on purely phase-coherent effects. Assuming a pure-state wavefunction of the form $|\psi\rangle = c_1|1\rangle + c_2|2\rangle + c_3|3\rangle$, the probability amplitude equations in the original atomic basis are: \[mix-Schrodinger\] $$\begin{aligned} \dot{c}_1 &= i\frac{\Omega_a}{2}c_3 \\ \dot{c}_2 &= i\frac{\Omega_b}{2}c_3 \\ \dot{c}_3 &= i\frac{\Omega_a^*}{2}c_1 + i\frac{\Omega_b^*}{2}c_2 -i\Delta c_3,\end{aligned}$$ where the dot refers to $\partial/\partial T$. One can recover the density matrix equations \[rhoEquations2\] by taking $\rho = | \psi \rangle\langle\psi |$. From Eq. \[BrightDarkDefs3\] we can calculate the bright and dark probability amplitudes in terms of $c_1$ and $c_2$. They are: \[BDprobamp\] $$\begin{aligned} c_B & = \frac{1}{\Omega_T}(\Omega_a^* c_1 + \Omega_b^* c_2) \\ c_D & = \frac{1}{\Omega_T}(\Omega_b c_1 - \Omega_a c_2).\end{aligned}$$ For simplicity we will assume that the fields are unchirped and real, giving $\Omega_a = \Omega_a^*$ and $\Omega_b = \Omega_b^*$. This allows us to rewrite Eqs. \[mix-Schrodinger\]
as: \[BDSchrodinger\] $$\begin{aligned} \dot{c}_B &= i\frac{\Omega_D}{2} c_D + i\frac{\Omega_T}{2} c_3 \\ \dot{c}_D &= - i\frac{\Omega_D}{2} c_B \\ \dot{c}_3 &= i\frac{\Omega_T}{2}c_B - i\Delta c_3,\end{aligned}$$ where $$\label{DRabiFreq} \Omega_D = \frac{2i}{\Omega_T^2}(\Omega_a\dot{\Omega}_b - \Omega_b\dot{\Omega}_a)$$ is called the dark Rabi frequency. We can write the corresponding Maxwell’s equations for the total and dark Rabi frequencies as: \[BDMaxwell\] $$\begin{aligned} \frac{\partial \Omega_T}{\partial Z} & = -i\mu \langle c_B c_3^*\rangle \\ \frac{\partial \Omega_D}{\partial Z} & = -2\mu\frac{\partial}{\partial T}\left\langle \frac{c_D c_3^*}{\Omega_T}\right\rangle,\end{aligned}$$ where $\mu_a = \mu_b \equiv \mu$ as previously noted. A surprising result occurs if the pulses are matched temporally, i.e. $\dot\Omega_a/{\Omega}_a = \dot\Omega_b/{\Omega}_b$, which gives $\Omega_D = 0$. This allows us to write Eqs. \[BDSchrodinger\] and \[BDMaxwell\] as: \[BDtwolevel\] $$\begin{aligned} \dot c_B & = i\frac{\Omega_T}{2}c_3 \\ \dot c_3 & = i\frac{\Omega_T}{2}c_B - i\Delta c_3 \end{aligned}$$ and $$\label{BMaxwelltwolevel} \frac{\partial\Omega_T}{\partial Z} = -i\mu\langle c_B c_3^*\rangle.$$ We see that all dark-state effects disappear, and Eqs. \[BDtwolevel\] and \[BMaxwelltwolevel\] are identical in form to the Maxwell-Bloch equations for a single “total" pulse interacting with a two-level atom. Thus if the dark state is initially unpopulated and the pulses are matched temporally, then the population is constrained to states $|B\rangle$ and $|3\rangle$, and the three-level behavior is identical to that of a two-level atom. This explains the derivation of the three-level simulton solutions [@Konopnicki-Eberly]. However, we will find that small fluctuations prohibit such isolation of the dark state over long propagation distances. Kozlov and Eberly (KE) also used this fact to examine SIT-type vs. EIT-type transparency [@Kozlov-Eberly].
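The reduction to two-level form in Eqs. \[BDtwolevel\] can be illustrated with a short numerical integration. In the sketch below (all parameter values and helper names are our own illustrative choices) the atom starts entirely in the bright state and is driven on resonance by temporally matched pulses, so $\Omega_D = 0$; a total pulse of area $2\pi$ then returns the population to $|B\rangle$ while the dark state remains empty throughout, exactly as in two-level SIT.

```python
import numpy as np

def rk4_step(f, y, t, dt):
    # One standard fourth-order Runge-Kutta step.
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt * k1 / 2)
    k3 = f(t + dt / 2, y + dt * k2 / 2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

tau = 1.0  # pulse width (arbitrary units)

def Omega_T(t):
    # Matched sech pulses: Omega_b/Omega_a is constant in time, so the
    # dark Rabi frequency of Eq. (DRabiFreq) vanishes; this total Rabi
    # frequency carries pulse area 2*pi.
    return (2.0 / tau) / np.cosh(t / tau)

def rhs(t, c):
    # Eqs. (BDtwolevel) on resonance (Delta = 0): only |B> and |3> couple.
    cB, cD, c3 = c
    return np.array([1j * Omega_T(t) / 2 * c3,
                     0j * cD,  # dark state is fully decoupled
                     1j * Omega_T(t) / 2 * cB])

c = np.array([1.0, 0.0, 0.0], dtype=complex)  # start entirely in |B>
t, dt = -10.0 * tau, 1e-3 * tau
while t < 10.0 * tau:
    c = rk4_step(rhs, c, t, dt)
    t += dt

bright_pop, dark_pop, excited_pop = np.abs(c) ** 2
```

After the pulse has passed, the bright-state population returns to unity and the excited state empties, while the dark-state population never grows; this is the sense in which matched-pulse propagation is transparent without any help from the dark state.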
In SIT it is nonlinearities that cause the medium to be transparent even while the dark state is unpopulated and strong interaction occurs between the pulse and the medium. Thus SIT propagation occurs when $|c_D|^2 \approx 0$. As just shown, this implies that the three-level equations can be reduced to two-level form, and transparency only occurs if the two pulses are matched. In EIT it is the presence of a second pulse and population trapping in the dark state that cancels absorption of the pulses by the medium. Thus KE defined EIT propagation to occur when $|c_D|^2 \approx 1$. In this case, the medium is transparent because the combined pulse-medium system is in the dark state, cancelling pulse-medium interaction. We will use this same definition throughout this paper.

Mixonium Analytical Solutions
=============================

We wish to analyze two-pulse propagation through a partially coherent $\Lambda$ system, i.e. a medium prepared with $\rho_{ij} \ne 0$ but $|\rho_{ij}|^2 < \rho_{11}\rho_{22}$ for $i \ne j$. We do this by solving Eqs. \[rhoEquations2\] and \[MaxwellEquation2\] by using the Park-Shin (PS) Bäcklund method [@park-shin]. We take the initial density matrix of each atom to be in a partially-coherent superposition of the two ground states, which we write in explicit form, for real $\alpha$ and $\beta$, as $$\label{mix-InCondition} \rho^{(0)} = \begin{pmatrix} \alpha^2 & \lambda\alpha\beta e^{i\phi} & 0 \\ \lambda \alpha\beta e^{-i \phi} & \beta^2 & 0 \\ 0 & 0 & 0 \end{pmatrix},$$ where $\alpha^2$ and $\beta^2$ are the populations of ground states 1 and 2 respectively (we will always take the case $\alpha^2 > \beta^2$), and $\phi$ is the phase of the partial coherence. We introduce a coherence parameter $\lambda$ that takes values between 0 and 1, between a completely mixed state and a pure state respectively. We will assume the fields are tuned to the center of the inhomogeneous line for the two transitions. To find the solution, first we diagonalize $\rho^{(0)}$ in Eq. \[mix-InCondition\]
with the rotation matrix $$\label{S_trans} S= \begin{pmatrix} \cos\theta & \sin \theta e^{i\phi} & 0 \\ -\sin\theta e^{-i\phi} & \cos \theta & 0 \\ 0 & 0 & 1 \end{pmatrix},$$ where \[sin-cos-def\] $$\begin{aligned} \cos \theta & = \frac{\zeta - \beta^2}{\sqrt{(\zeta-\beta^2)^2 + \lambda^2 \alpha^2 \beta^2}} \\ \sin \theta & = \frac{-\lambda \alpha \beta}{\sqrt{(\zeta-\beta^2)^2 + \lambda^2 \alpha^2 \beta^2}},\end{aligned}$$ and $$\label{eigValue} \zeta=\frac{1}{2}\{1 + [1-4(1-\lambda^2)\alpha^2\beta^2]^{1/2}\},$$ where $\zeta$ and $1-\zeta$ are the positive non-zero eigenvalues of the matrix $\rho^{(0)}$. The rotation matrix $S$ has an additional degree of freedom in that the first and second columns can be multiplied by an arbitrary phase, while still diagonalizing $\rho^{(0)}$. In some plots of our analytic solutions we will make use of this fact. One can verify that $S$ diagonalizes $\rho^{(0)}$ in Eq. \[mix-InCondition\] through the operation $$\label{DiagInCondition} \rho^{(0)}_{\text{d}} = S^\dag \rho^{(0)} S = \begin{pmatrix} \zeta & 0 & 0 \\ 0 & 1-\zeta & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$ We can solve the much simpler problem of pulses propagating through a medium prepared with initial density matrix given by Eq. \[DiagInCondition\] by using the Park-Shin [@park-shin] Bäcklund transformation technique, just as we did in a previous paper [@clader-eberly-pra07].
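The rotation defined by Eqs. \[S_trans\]-\[eigValue\] is straightforward to verify numerically. The sketch below, with arbitrarily chosen illustrative values of $\alpha$, $\beta$, $\lambda$, and $\phi$, constructs $\rho^{(0)}$ and $S$ from the formulas above and confirms the diagonal form of Eq. \[DiagInCondition\].

```python
import numpy as np

# Illustrative preparation parameters (any alpha^2 + beta^2 = 1 will do).
alpha, lam, phi = np.sqrt(0.7), 0.6, 0.4
beta = np.sqrt(1.0 - alpha**2)

# Partially coherent initial density matrix, Eq. (mix-InCondition).
rho0 = np.array(
    [[alpha**2, lam * alpha * beta * np.exp(1j * phi), 0],
     [lam * alpha * beta * np.exp(-1j * phi), beta**2, 0],
     [0, 0, 0]], dtype=complex)

# Larger nonzero eigenvalue zeta, Eq. (eigValue).
zeta = 0.5 * (1 + np.sqrt(1 - 4 * (1 - lam**2) * alpha**2 * beta**2))

# Rotation angle, Eqs. (sin-cos-def), and rotation matrix S, Eq. (S_trans).
norm = np.sqrt((zeta - beta**2)**2 + (lam * alpha * beta)**2)
cos_t = (zeta - beta**2) / norm
sin_t = -lam * alpha * beta / norm
S = np.array(
    [[cos_t, sin_t * np.exp(1j * phi), 0],
     [-sin_t * np.exp(-1j * phi), cos_t, 0],
     [0, 0, 1]], dtype=complex)

# S should rotate rho0 into diag(zeta, 1 - zeta, 0), Eq. (DiagInCondition).
rho_diag = S.conj().T @ rho0 @ S
max_err = np.max(np.abs(rho_diag - np.diag([zeta, 1 - zeta, 0.0])))
```

The residual `max_err` is at machine precision, and $\zeta$ runs from $\max(\alpha^2,\beta^2)$ at $\lambda = 0$ to $1$ at $\lambda = 1$, where the prepared state becomes pure.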
These pulse solutions are: \[DiagPulseSol\] $$\begin{aligned} \label{om1_d} \Omega_a^{\text{(d)}} & = \frac{4}{\tau}\left[2\cosh\left(\frac{T}{\tau}-\zeta\kappa Z \right)+\text{exp}\left(\frac{T}{\tau}+\kappa Z (3\zeta-2)\right)\right]^{-1} \\ \label{om2_d} \Omega_b^{\text{(d)}} & = \frac{4}{\tau}\left[2\cosh\left(\frac{T}{\tau}-\kappa Z(1-\zeta)\right)+\text{exp}\left(\frac{T}{\tau}+\kappa Z(1-3\zeta)\right)\right]^{-1}.\end{aligned}$$ where $\tau$ is the nominal pulse width, the superscript (d) is to remind us that these are the solutions for two pulses propagating through a medium with a diagonal density matrix, and $\kappa/c$ is the inverse absorption length given by: $$\label{mix-lengthScale} \kappa = \frac{\mu\tau}{2} \int_{-\infty}^{\infty}\frac{F(\Delta)d\Delta}{1+(\Delta\tau)^2}.$$ In the limit where $T_2^* \ll \tau$, the scaled inverse absorption depth $2\kappa/c$ becomes the inverse Doppler absorption depth such that $2\kappa / c \to \alpha_D$, which we previously defined as $\alpha_D = \sqrt{\pi/2} \mu T_2^*/c$. 
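The quoted Doppler limit $2\kappa/c \to \alpha_D$ is easy to verify numerically. The sketch below assumes a normalized Gaussian lineshape $F(\Delta) = (T_2^*/\sqrt{2\pi})\,e^{-\Delta^2 T_2^{*2}/2}$ (an assumption; the text does not fix $F$ explicitly) and uses the substitution $\Delta = \tan(v)/\tau$, which cancels the Lorentzian factor against the Jacobian:

```python
import numpy as np

mu, T2s, tau = 1.0, 1.0, 500.0    # illustrative units (c = 1), tau >> T2*

# kappa = (mu*tau/2) * Int F(Delta)/(1 + (Delta*tau)^2) dDelta.
# With Delta = tan(v)/tau, dDelta = sec^2(v)/tau dv and 1 + (Delta*tau)^2 = sec^2(v),
# so the integral reduces to (1/tau) * Int F(tan(v)/tau) dv over (-pi/2, pi/2).
v = np.linspace(-np.pi/2 + 1e-9, np.pi/2 - 1e-9, 400001)
delta = np.tan(v)/tau
F = (T2s/np.sqrt(2.0*np.pi))*np.exp(-(delta*T2s)**2/2.0)
kappa = (mu/2.0)*F.sum()*(v[1] - v[0])

alpha_D = np.sqrt(np.pi/2.0)*mu*T2s   # Doppler-limit prediction for 2*kappa
```

The residual relative difference is of order $T_2^*/\tau$, as expected for the leading correction to the Doppler limit.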
The solutions to the density matrix elements for a particular value of the detuning $\Delta$ are given by: \[DiagDensityMatrixSolution\] $$\begin{aligned} \rho_{11}^{(\text{d})} & = \frac{1}{1+(\Delta\tau)^2}\bigg\{\zeta[|f_{11}|^2+(\Delta \tau)^2] + (1-\zeta)|f_{12}|^2\bigg\} \label{mix-ground1} \\ \rho_{22}^{(\text{d})} & = \frac{1}{1+(\Delta\tau)^2}\bigg\{\zeta|f_{12}|^2 + (1-\zeta)[|f_{22}|^2+(\Delta\tau)^2]\bigg\} \\ \rho_{33}^{(\text{d})} & = \frac{1}{1+(\Delta\tau)^2}\bigg(\zeta|f_{13}|^2 + (1-\zeta)|f_{23}|^2\bigg) \label{mix-excited_state} \\ \rho_{12}^{(\text{d})} & = \frac{1}{1+(\Delta\tau)^2}\bigg[\zeta(f_{11}-i\Delta\tau)f_{12} + (1-\zeta)(f_{22}+i\Delta\tau)f_{12}\bigg] \\ \label{mix-Coherence13} \rho_{13}^{(\text{d})} & = \frac{1}{1+(\Delta\tau)^2}\bigg[\zeta(f_{11}-i\Delta\tau)f_{13} + (1-\zeta)f_{12}f_{23}\bigg] \\ \label{mix-Coherence23} \rho_{23}^{(\text{d})} & =\frac{1}{1+(\Delta\tau)^2}\bigg[\zeta f_{12}^*f_{13} + (1-\zeta)(f_{22}-i\Delta\tau)f_{23}\bigg],\end{aligned}$$ where the functions $f_{ij} $ are: \[mix-fFunctions\] $$\begin{aligned} f_{11} & = \bigg\{2 \textnormal{ sinh} \big(T/\tau-\zeta \kappa Z\big) - \exp\big[T/\tau+(3\zeta-2)\kappa Z \big]\bigg\}\bigg/D(Z,T) \\ %\\ f_{22} & = \bigg\{-2 \textnormal{ cosh }\big(T/\tau-\zeta \kappa Z\big) + \exp\big[T/\tau+(3\zeta-2)\kappa Z\big]\bigg\}\bigg/D(Z,T) \\ %\\ f_{12} & = 2 e^{T/\tau-(1-\zeta) \kappa Z}/D(Z,T) \\ %\\ f_{13} & = 2 i/D(Z,T) \\ %\\ f_{23} & = 2 i e^{(2\zeta-1) \kappa Z}/D(Z,T),\end{aligned}$$ and the denominator function $D(Z,T)$ is given by: $$\begin{aligned} \label{mix-DFunction} D(Z,T) & = 2 \textnormal{ cosh }\big(T/\tau-\zeta \kappa Z\big) + \exp\big[T/\tau+(3\zeta-2) \kappa Z\big].\end{aligned}$$ Now that we have the solutions for a medium initially in the diagonal state, we obtain the mixonium solutions through the operations $$\label{mix-PulseSolution} \begin{pmatrix} \Omega_a \\ \Omega_b \end{pmatrix} = \begin{pmatrix} \cos\theta & \sin\theta e^{i\phi} \\ -\sin\theta 
e^{-i\phi} & \cos\theta \end{pmatrix} \begin{pmatrix} \Omega_a^{\text{(d)}} \\ \Omega_b^{\text{(d)}} \end{pmatrix},$$ and $$\label{mix-DensityMatrixSolution} \rho=S \rho^{\text{(d)}} S^\dag = \begin{pmatrix} \cos\theta & \sin\theta e^{i\phi} & 0 \\ -\sin\theta e^{-i\phi} & \cos \theta& 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \rho_{11}^{(d)} & \rho_{12}^{(d)} & \rho_{13}^{(d)} \\ \rho_{21}^{(d)} & \rho_{22}^{(d)} & \rho_{23}^{(d)} \\ \rho_{31}^{(d)} & \rho_{32}^{(d)} & \rho_{33}^{(d)} \end{pmatrix} \begin{pmatrix} \cos\theta & -\sin\theta e^{i\phi} & 0 \\ \sin\theta e^{-i\phi} & \cos \theta& 0 \\ 0 & 0 & 1 \end{pmatrix}$$ Eqns. and are the exact solutions to Eqns. and for a medium initially prepared in a mixed-state coherent superposition of the two ground states, as given in Eq. . One can see that the pulse and density matrix solutions are simply a rotation of the completely mixed-state solutions that we have previously solved [@clader-eberly-pra07], with the eigenvalue of the initial density matrix taking the place of the ground state population. The invariance of the MB equations under such operations, which allows us to obtain the mixonium solutions, is discussed in further detail by Park and Shin [@park-shin]. It also complements previous numerical work which demonstrated the applicability of such “dressed-field" pulses [@Eberly-etal94]. Phaseonium Analytical Solution Analysis ======================================= The pulse and density matrix solutions presented are clearly quite complicated. However, we can begin to understand them by starting with the pure state case where $\lambda=1$. In that case Eq. becomes $\zeta = 1$ and Eq. simplifies to $\cos\theta=\alpha$ and $\sin\theta = -\beta$. 
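These pure-state limits can be confirmed numerically from the definitions of $\zeta$ and $\theta$; a minimal sketch with an illustrative $\alpha^2 = 0.8$:

```python
import numpy as np

alpha = np.sqrt(0.8)               # illustrative; alpha^2 > beta^2
beta = np.sqrt(1.0 - alpha**2)
lam = 1.0                          # pure state

# zeta and the rotation angle, evaluated at lambda = 1.
zeta = 0.5*(1.0 + np.sqrt(1.0 - 4.0*(1.0 - lam**2)*alpha**2*beta**2))
norm = np.hypot(zeta - beta**2, lam*alpha*beta)
cth = (zeta - beta**2)/norm        # -> alpha
sth = -lam*alpha*beta/norm         # -> -beta
```

Any choice with $\alpha^2 > \beta^2$ gives the same endpoint behavior.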
The pulse solutions then simplify to: \[PureStatePulse\] $$\begin{aligned} \Omega_a & = \alpha\Omega_a^{\text{(d)}} - \beta e^{i\phi}\Omega_b^{\text{(d)}} \\ \Omega_b & = \beta e^{-i \phi} \Omega_a^{\text{(d)}} + \alpha\Omega_b^{\text{(d)}}.\end{aligned}$$ The atomic solutions also simplify substantially. Since we are considering pure states, we can consider just the wavefunction. Thus Eq. can be simplified to: $$\label{mix-probAmpSolution} |\psi\rangle=S |\psi^{\text{(d)}}\rangle= \begin{pmatrix} \alpha & -\beta e^{i\phi} & 0 \\ \beta e^{-i\phi} & \alpha & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} c_{1}^{(d)} \\ c_{2}^{(d)} \\ c_{3}^{(d)} \end{pmatrix},$$ where the diagonal probability amplitude solutions are given by: \[DiagProbAmpSolution\] $$\begin{aligned} c_{1}^{(\text{d})} & = \frac{1}{\sqrt{1+(\Delta\tau)^2}}[f_{11}-i \Delta \tau] \\ c_{2}^{(\text{d})} & = \frac{1}{\sqrt{1+(\Delta\tau)^2}}f_{12} \\ c_{3}^{(\text{d})} & = \frac{1}{\sqrt{1+(\Delta\tau)^2}}f_{13}^*,\end{aligned}$$ which can be derived from Eqs. - up to an overall phase. Thus the probability amplitudes on line-center (taking $\Delta = 0$) can be written in a substantially simplified form as: \[PureStateAmplitude\] $$\begin{aligned} c_1 & = \alpha f_{11} - \beta f_{12} \\ c_2 & = \beta f_{11} + \alpha f_{12} \\ c_3 & = f_{13}^*\end{aligned}$$ where we have assumed the phase of the ground state coherence $\phi = 0$ for simplicity, and we will do so for the remainder of this paper. Eqns. and mark a substantial simplification of the general mixed state solution. Phaseonium Input Regime: $-\kappa Z \gg 1$ ------------------------------------------ We can further simplify the analysis by dividing the evolution into three distinct regimes. We will consider the asymptotic “input" as regime I, the asymptotic “output" as regime III, and the transfer zone in between as regime II. We study regime I by taking $-\kappa Z \gg 1$. 
In this limit the pulse solutions become: \[PulseInput\] $$\begin{aligned} \Omega_a & \to \alpha\frac{2}{\tau}\text{ sech} \bigg(\frac{T}{\tau} - \kappa Z\bigg) \\ \Omega_b & \to \beta\frac{2}{\tau}\text{ sech} \bigg(\frac{T}{\tau} - \kappa Z\bigg),\end{aligned}$$ and the line-center probability amplitudes are given by \[AmplitudeInput\] $$\begin{aligned} c_1 & \to \alpha\tanh\bigg(\frac{T}{\tau} - \kappa Z\bigg) \\ c_2 & \to \beta\tanh\bigg(\frac{T}{\tau} - \kappa Z\bigg) \\ c_3 & \to - i \text{ sech}\bigg(\frac{T}{\tau} - \kappa Z\bigg).\end{aligned}$$ These pulse and amplitude solutions are exactly the matched simulton solutions of Konopnicki and Eberly [@Konopnicki-Eberly] moving with group velocity $v_g / c = (1+\kappa \tau)^{-1}$. This is an example of SIT type propagation, because $|c_D|^2 = 0$ in the limit, and the excited state is fully populated at the pulse peak. However because of the particular pulse shape and their matching, the medium is transparent, but with a slowed velocity, possibly much slower than $c$. Phaseonium Output Regime: $\kappa Z \gg 1$ ------------------------------------------ Similarly the output regime III can be considered by taking $\kappa Z \gg 1$. In this limit the pulse solutions are: \[PulseOutput\] $$\begin{aligned} \Omega_a & \to -\beta\frac{2}{\tau}\text{ sech} \bigg(\frac{T}{\tau} \bigg) \\ \Omega_b & \to \alpha\frac{2}{\tau}\text{ sech} \bigg(\frac{T}{\tau} \bigg) ,\end{aligned}$$ and the line-center probability amplitudes are simply the constant values \[AmplitudeOutput\] $$\begin{aligned} c_1 & \to -\alpha \\ c_2 & \to -\beta \\ c_3 & \to 0.\end{aligned}$$ Just as in the input regime I, these pulse solutions are also matched simultons but now moving with the vacuum velocity $c$. One can see from Eqns. that the excited state probability amplitude is 0. This is because all population is now in the dark state, such that $|c_D|^2 = 1$ in the limit. 
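Both asymptotic regimes can be recovered numerically from the exact solutions above with $\zeta = 1$; a brief check with an illustrative $\alpha^2 = 0.8$ (the tolerance reflects the exponentially small corrections at $|\kappa Z| = 20$):

```python
import numpy as np

alpha, beta, tau = np.sqrt(0.8), np.sqrt(0.2), 1.0   # pure state, zeta = 1

def diag_pulses(t, kZ, zeta=1.0):
    """Exact diagonal-basis pulses; t = T/tau, kZ = kappa*Z."""
    om_a = (4.0/tau)/(2.0*np.cosh(t - zeta*kZ) + np.exp(t + (3.0*zeta - 2.0)*kZ))
    om_b = (4.0/tau)/(2.0*np.cosh(t - (1.0 - zeta)*kZ) + np.exp(t + (1.0 - 3.0*zeta)*kZ))
    return om_a, om_b

# Input regime, -kappa*Z >> 1: the peak rides at T/tau = kappa*Z.
kZ_in = -20.0
t_in = np.linspace(kZ_in - 5.0, kZ_in + 5.0, 1001)
oa, ob = diag_pulses(t_in, kZ_in)
om_a_in, om_b_in = alpha*oa - beta*ob, beta*oa + alpha*ob

# Output regime, kappa*Z >> 1: the peak rides at T/tau = 0.
kZ_out = 20.0
t_out = np.linspace(-5.0, 5.0, 1001)
oa, ob = diag_pulses(t_out, kZ_out)
om_a_out, om_b_out = alpha*oa - beta*ob, beta*oa + alpha*ob
```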
Thus the ground state amplitudes become constant and EIT type propagation occurs, allowing the pulses to propagate without any interaction with the medium. We note from Eqs. that the pulse shapes are matched, with ratio $$\label{pulse-ratio} \frac{\Omega_a}{\Omega_b} = -\frac{\beta}{\alpha},$$ which is exactly the ratio predicted by the Dark Area Theorem [@Eberly-Kozlov]. As remarked previously when discussing the Hamiltonian, we know that the dark state is completely decoupled from the dynamics. Thus if during the propagation the pulses become matched to the medium, as in (\[pulse-ratio\]), no further population transfer can occur. We see that when $\lambda=1$, meaning the initial state of the density matrix is a pure state, the analytic solutions describe a transfer of SIT-type simulton pulses to EIT-type pulses in the dark state, where the population is trapped and remains constant. We also see from these analytic solutions that the original simulton solutions [@Konopnicki-Eberly] are simply limiting cases of our more general solutions. The simulton solutions propagate in stable form without any change and without ever becoming trapped in the dark state. However this picture is not complete as we now see that transfer to the dark state still occurs. We plot the analytic pulse solutions, $\Omega_a$ and $\Omega_b$, given in Eqs. , in the left frame of Fig. \[fig.mix.anPulse1.0\], and the line-center excited state probability, $|c_3|^2$, given in Eq. , in the right frame of Fig. \[fig.mix.anPop1.0\]. The numbers in each frame correspond to a particular time point (note that the time points are chosen to illustrate interesting changes that are occurring and are not uniformly spaced). Examining both of these figures together we clearly see that in the initial simulton regime I, the pulses propagate in the bright state, causing excitation into the excited state (frames 1 and 2).
Then in the transfer regime II (frames 3-5), we see the relative magnitude of the pulses changing, and the phase of the Stokes pulse changing sign, along with a decrease in the excited state probability. Finally, in regime III (frame 6) we see both pulses propagating without change and the upper level is not excited at all, since the pulses and medium are now in the dark state. ![\[fig.mix.anPulse1.0\]\[fig.mix.anPop1.0\] Plots of the analytic pulse solutions given in Eq. on the left, and of the analytic excited state population solutions given in Eq. on the right. The pump pulse, $\Omega_a$ is the solid curve, and the Stokes pulse, $\Omega_b$ is the dashed curve. The horizontal axis is $x$ in units of $\kappa/c$. The vertical axis is the pulse Rabi frequency in units of $\tau^{-1}$ (left) and the excited state probability $\rho_{33} = |c_3|^2$ (right). The background is slightly shaded to indicate the presence of the lambda medium. The solid curve is the pump pulse, $\Omega_{a}$, and the dashed curve is the Stokes pulse, $\Omega_{b}$. The plot shows the simulton transfer process as an exchange process between bright-state simultons to dark-state simultons. The excited state is heavily populated during the input propagation regime, when the pulses and atoms are in the bright state. However as the pulses and atoms transfer into the dark state, the atoms no longer absorb any pulse energy, and the medium becomes transparent. Parameters: $\alpha^2 = 0.8$, $\beta^2 = 0.2$, $\tau = 3T_2^*$, and $\lambda=1.0$.](PRA_fig2a_AnPulse1.0.eps "fig:"){height="2.6in"} ![\[fig.mix.anPulse1.0\]\[fig.mix.anPop1.0\] Plots of the analytic pulse solutions given in Eq. on the left, and of the analytic excited state population solutions given in Eq. on the right. The pump pulse, $\Omega_a$ is the solid curve, and the Stokes pulse, $\Omega_b$ is the dashed curve. The horizontal axis is $x$ in units of $\kappa/c$. 
The vertical axis is the pulse Rabi frequency in units of $\tau^{-1}$ (left) and the excited state probability $\rho_{33} = |c_3|^2$ (right). The background is slightly shaded to indicate the presence of the lambda medium. The solid curve is the pump pulse, $\Omega_{a}$, and the dashed curve is the Stokes pulse, $\Omega_{b}$. The plot shows the simulton transfer process as an exchange process between bright-state simultons to dark-state simultons. The excited state is heavily populated during the input propagation regime, when the pulses and atoms are in the bright state. However as the pulses and atoms transfer into the dark state, the atoms no longer absorb any pulse energy, and the medium becomes transparent. Parameters: $\alpha^2 = 0.8$, $\beta^2 = 0.2$, $\tau = 3T_2^*$, and $\lambda=1.0$.](PRA_fig2b_AnPop1.0.eps "fig:"){height="2.3in"} Mixed State Analysis {#ss:mixed-state-an} ==================== We now consider the pulse and density matrix solutions for a medium prepared in an arbitrary mixed initial state (i.e., $0 < \lambda < 1$). From Eq. , we obtain the individual pulse solutions which are given by: \[MixedStateSolutions\] $$\begin{aligned} \Omega_a & = \cos \theta \Omega_a^{\text{(d)}} + \sin\theta \Omega_b^{\text{(d)}} \\ \Omega_b & = -\sin\theta \Omega_a^{\text{(d)}} + \cos\theta \Omega_b^{\text{(d)}},\end{aligned}$$ where $\cos\theta$ and $\sin\theta$ are given in Eq. and $\Omega_a^{(d)}$ and $\Omega_b^{(d)}$ are defined in Eq. . The mixed-state density matrix solutions are given in Eq. and are clearly quite cumbersome for this general mixed state case. The simplest of these, and the one which provides the most insight into the physics is the excited state probability, $\rho_{33}$. 
Written explicitly, the line-center solution is: $$\label{ExStateProb} \rho_{33} = \frac{ 4\big[\zeta + (1-\zeta) e^{2(2\zeta-1)\kappa Z}\big]}{\big[2 \text{ cosh}\big(T/\tau - \zeta \kappa Z\big) + \exp\big(T/\tau + (3\zeta -2)\kappa Z\big)\big]^2}.$$ Once again we will examine these solutions in the input and output regimes to help understand their underlying features.

Mixonium Input Regime: $-\kappa Z \gg 1$
----------------------------------------

In the input regime I, by taking the limit $-\kappa Z \gg 1$, we find: \[MixedPulseInput\] $$\begin{aligned} \Omega_a & \to \cos\theta\frac{2}{\tau}\text{ sech} \bigg(\frac{T}{\tau} - \zeta \kappa Z\bigg) \\ \Omega_b & \to -\sin\theta \frac{2}{\tau}\text{ sech} \bigg(\frac{T}{\tau} - \zeta \kappa Z\bigg),\end{aligned}$$ while the excited state probability is: $$\label{ExcStateProbInput} \rho_{33} \to \zeta\text{ sech}^2\bigg(\frac{T}{\tau}-\zeta \kappa Z\bigg).$$ Just as in the pure state case, Eqns. are matched simulton pulses. However they are now generalized to non-pure states. The most immediate differences between the two cases are the modification to the pulse amplitudes, and that the excited state no longer reaches a value of $\rho_{33} = 1$ at the pulse peak, but rather $\rho_{33} = \zeta$. This results in a group velocity of the pulses, $v_g / c= (1 + \zeta\kappa \tau)^{-1}$, which differs from the pure-state case by the presence of the factor $\zeta$. This indicates that the interaction between the pulses and the medium is directly affected by the medium’s mixed-state nature, and that the interaction strength is governed by the parameter $\zeta$, which is simply the eigenvalue of the initial mixonium density matrix.

Mixonium Output Regime: $\kappa Z \gg 1$
----------------------------------------

We look at the mixed-state output regime III by taking $\kappa Z \gg 1$.
In this limit the pulse solutions are: \[MixedPulseOutput\] $$\begin{aligned} \Omega_a & \to \sin\theta\frac{2}{\tau}\text{ sech} \bigg(\frac{T}{\tau} - (1-\zeta) \kappa Z\bigg) \\ \Omega_b & \to \cos\theta\frac{2}{\tau}\text{ sech} \bigg(\frac{T}{\tau} - (1-\zeta) \kappa Z\bigg),\end{aligned}$$ and the line-center excited state probability is now: $$\label{ExcStateProbOutput} \rho_{33} \to (1-\zeta)\text{ sech}^2\bigg(\frac{T}{\tau}-(1-\zeta)\kappa Z\bigg).$$ The output pulses are again matched simultons, and have quite similar features to the output pulses for the pure state. The major difference is that the dark state can no longer be fully populated, so interaction between the pulses and the medium continues; however, because of SIT the pulses continue to propagate without absorption. The group velocity is not $c$; rather it is $v_g/c = [1 + (1-\zeta)\kappa \tau]^{-1}$, and at the peak of the pulse the excited state probability is $\rho_{33} = 1-\zeta$ instead of 0. Both the input and output regimes show modified excited state populations when compared to the pure state case, and the modification in both regimes is governed by the same parameter $\zeta$. We will refer to this parameter $\zeta$ as the interaction parameter. Its value gives the maximum population of the dark state, and thus determines how strongly EIT can cancel the pulse-medium interaction. In the pure-state case $\zeta = 1$, and all interaction ceased once the pulses were in the dark state. However, as just shown, when the medium is initially in a mixed state, we have $\zeta<1$, and interaction continues. The value of $\zeta$ decreases until the limiting completely mixed state is reached, where $\lambda=0$, giving $\zeta = \alpha^2$ (for $\alpha^2 > \beta^2$). In Fig. \[fig.absorption-param\] we plot the value of this interaction parameter as a function of the coherence parameter $\lambda$ for a variety of initial medium preparations.
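The endpoint values just quoted, and the peak excitations $\rho_{33} = \zeta$ (input) and $1-\zeta$ (output), can be checked directly from the $f_{ij}$ functions, since at line center $\rho_{33} = \zeta|f_{13}|^2 + (1-\zeta)|f_{23}|^2$. A sketch with illustrative parameters:

```python
import numpy as np

def zeta_of(alpha2, lam):
    """Interaction parameter (eigenvalue of rho^(0)), with beta^2 = 1 - alpha^2."""
    return 0.5*(1.0 + np.sqrt(1.0 - 4.0*(1.0 - lam**2)*alpha2*(1.0 - alpha2)))

alpha2 = 0.8                          # illustrative; alpha^2 > beta^2
z_mixed = zeta_of(alpha2, 0.0)        # completely mixed state -> alpha^2
z_pure = zeta_of(alpha2, 1.0)         # pure state -> 1

def rho33_at_peak(z, kZ):
    """Line-center rho_33 = z|f13|^2 + (1-z)|f23|^2 at the relevant pulse peak."""
    t = z*kZ if kZ < 0 else (1.0 - z)*kZ          # peak location T/tau
    D = 2.0*np.cosh(t - z*kZ) + np.exp(t + (3.0*z - 2.0)*kZ)
    f13_sq = 4.0/D**2
    f23_sq = 4.0*np.exp(2.0*(2.0*z - 1.0)*kZ)/D**2
    return z*f13_sq + (1.0 - z)*f23_sq

z = zeta_of(alpha2, 0.8)
peak_in = rho33_at_peak(z, -40.0)     # deep input regime  -> zeta
peak_out = rho33_at_peak(z, +40.0)    # deep output regime -> 1 - zeta
```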
The plot shows the range of the parameter $\zeta$ and its dependence on initial medium preparation. However as the medium approaches the pure-state case, all curves converge, and thus all interaction is cancelled no matter how the population is initially distributed. ![\[fig.absorption-param\] Plots of the interaction parameter defined in Eq. . The horizontal axis is the coherence parameter $\lambda$, and the vertical axis is the interaction parameter $\zeta$. We plot the interaction parameter for a variety of media preparations ranging from $\alpha^2 -\beta^2 = 0.2$ up to $0.8$. It ranges in value from $\zeta = \alpha^2$ for $\lambda = 0$ (assuming $\alpha^2 > \beta^2$) to $\zeta = 1$ for $\lambda = 1$.](PRA_fig3_zeta.eps) Next we plot the pulse solutions for a mixed-state medium with parameters $\alpha^2 - \beta^2 = 0.6$ in Fig. \[fig.mix.anPulse0.8\]. The left figure corresponds to $\lambda = 0.8$ and the right figure corresponds to $\lambda = 0.2$. The solutions for the excited state population are plotted underneath in Fig. \[fig.mix.anPop0.8\]. For the pulses we see a very similar propagation behavior to that of the pure-state case plotted in Fig. \[fig.mix.anPulse1.0\]. Aside from a slight difference between the relative pulse amplitudes, the plots look much alike. The main difference is seen in the plots of the excited state population. The input pulses shown in frame 1 no longer cause complete excitation into the excited state, so $\rho_{33} \ne 1$ even at the pulse peak. More importantly, the output pulses are no longer completely decoupled from the medium, since $\rho_{33} \ne 0$ in the output regime III as seen in frames 5 and 6 of Fig. \[fig.mix.anPop0.8\]. This feature becomes more pronounced as the value of $\lambda$ gets smaller. The fact that the medium is not in a fully coherent pure-state superposition implies that the dark state cannot be fully populated and thus the pulses and atoms still interact.
However because of SIT the medium is still transparent to the pulses. ![\[fig.mix.anPulse0.8\] Plots of the analytic pulse solutions given in Eq. for a mixonium medium for two different values of $\lambda$. The horizontal axis is $x$ in units of $\kappa/c$, and the vertical axis is the pulse Rabi frequency in units of $\tau^{-1}$. The pump pulse, $\Omega_a$, is the solid curve, and the Stokes pulse, $\Omega_b$, is the dashed curve. The plot shows the simulton transfer process as an exchange process between bright-state simultons to dark-state simultons. Parameters: $\alpha^2 = 0.8$, $\beta^2 = 0.2$, $\tau \approx 3T_2^*$, and $\lambda=0.8$ (left figure) $\lambda = 0.2$ (right figure).](PRA_fig4a_AnPulse0.8.eps "fig:"){height="2.4in"} ![\[fig.mix.anPulse0.8\] Plots of the analytic pulse solutions given in Eq. for a mixonium medium for two different values of $\lambda$. The horizontal axis is $x$ in units of $\kappa/c$, and the vertical axis is the pulse Rabi frequency in units of $\tau^{-1}$. The pump pulse, $\Omega_a$, is the solid curve, and the Stokes pulse, $\Omega_b$, is the dashed curve. The plot shows the simulton transfer process as an exchange process between bright-state simultons to dark-state simultons. Parameters: $\alpha^2 = 0.8$, $\beta^2 = 0.2$, $\tau \approx 3T_2^*$, and $\lambda=0.8$ (left figure) $\lambda = 0.2$ (right figure).](PRA_fig4b_AnPulse0.2.eps "fig:"){height="2.4in"} ![\[fig.mix.anPop0.8\] Plots of the analytic excited state population solutions given in Eq. for a mixonium medium. The horizontal axis is $x$ in units of $\kappa/c$, and the vertical axis is the excited state probability $\rho_{33}$. The plot shows the excited state being heavily populated during the input propagation regime, when the pulses and atoms are in the bright state. However unlike the pure-state case, the excited state never reaches $\rho_{33} = 1$. 
As the pulses and atoms transfer into the dark state the pulses and atoms have a weaker interaction, however the interaction is never completely cancelled and $\rho_{33} \ne 0$. Parameters: $\alpha^2 = 0.8$, $\beta^2 = 0.2$, $\tau \approx 3T_2^*$, and $\lambda=0.8$ (left figure) $\lambda = 0.2$ (right figure).](PRA_fig5a_AnPop0.8.eps "fig:"){height="2.4in"} ![\[fig.mix.anPop0.8\] Plots of the analytic excited state population solutions given in Eq. for a mixonium medium. The horizontal axis is $x$ in units of $\kappa/c$, and the vertical axis is the excited state probability $\rho_{33}$. The plot shows the excited state being heavily populated during the input propagation regime, when the pulses and atoms are in the bright state. However unlike the pure-state case, the excited state never reaches $\rho_{33} = 1$. As the pulses and atoms transfer into the dark state the pulses and atoms have a weaker interaction, however the interaction is never completely cancelled and $\rho_{33} \ne 0$. Parameters: $\alpha^2 = 0.8$, $\beta^2 = 0.2$, $\tau \approx 3T_2^*$, and $\lambda=0.8$ (left figure) $\lambda = 0.2$ (right figure).](PRA_fig5b_AnPop0.2.eps "fig:"){height="2.4in"} Pulse Area ========== We showed in section \[ss:bright-dark\] that in a pure-state medium, with the pulses initially matched, such that the dark state is cancelled, the medium will behave exactly as a two-level medium when described in the dressed bright-dark basis. In this basis the two pulses must be combined and thought of as one “total" pulse, with total Rabi frequency defined as $\Omega_T = (|\Omega_a|^2 + |\Omega_b|^2)^{1/2}$. We also found in the previous section that even though interaction occurred between the pulses and atoms in a mixonium medium, the medium still appeared transparent due to SIT. Thus we expect some elements of the two-level area theorem to hold for the bright pulse area. 
With the area of a pulse defined to be $$\label{mix-pulseArea2} A(Z) = \int_{-\infty}^{\infty}\Omega(Z,T)dT,$$ the pulse areas of the general mixed state solutions can be shown to be \[mix-PulseAreas\] $$\begin{aligned} A_{a}(Z) & = 2\pi \left(\frac{\cos\theta}{h(Z)} + \frac{\sin\theta}{h(-Z)}\right) \\ A_{b}(Z) &= 2\pi \left(\frac{-\sin\theta}{h(Z)} + \frac{\cos\theta}{h(-Z)}\right)\end{aligned}$$ where $$\label{mix-AreaFunction} h(Z) = \sqrt{1+e^{2(2\zeta-1)\kappa Z}}.$$ A remarkable result occurs when considering the total-pulse area: $$\label{mix-totalArea} A_T(Z) = \int_{-\infty}^{\infty}(\Omega_a^2 + \Omega_b^2)^{1/2}dT.$$ Since the analytical solutions are temporally matched (i.e. $\partial/\partial T (\Omega_a /\Omega_b) = 0$), the total pulse area can be written as the square root of the sum of the squares of the individual pulse areas. Thus the area of the total Rabi frequency from the analytic solutions is simply: $$\label{mix-totalArea-analytic} A_T(Z) = \sqrt{A_a^2(Z) + A_b^2(Z)} = 2\pi.$$ This indicates a remarkably broad connection to SIT in which the pulse area must obey the area theorem: $$\label{area-theorem} \frac{1}{c}\frac{\partial A(Z)}{\partial Z} = -\frac{\alpha_D}{2}\sin A(Z).$$ Even for an arbitrary mixed-state medium, the total pulse area remains constant and equal to $2\pi$. We will elaborate on this SIT connection in the next section via numerical solutions.

Numerical Solutions
===================

We now focus our attention on examining the consequences of both the modified dark state properties of a mixed state medium, and the connection of our pulse solutions to SIT. We do this by testing our exact analytical solutions in a more realistic experimental setting, through numerical solutions to the MB equations. This allows us to test the general utility of the insights suggested by the exact analytical solutions, which assume highly specialized matched *sech*-shaped input pulses and an infinite medium.
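As a consistency check on the area results of the previous section, the analytic pulses can be integrated numerically and compared with the closed-form areas (illustrative mixonium parameters; $\phi = 0$):

```python
import numpy as np

alpha2, lam, tau, kZ = 0.8, 0.8, 1.0, 0.7        # illustrative parameters
zeta = 0.5*(1.0 + np.sqrt(1.0 - 4.0*(1.0 - lam**2)*alpha2*(1.0 - alpha2)))
ab = lam*np.sqrt(alpha2*(1.0 - alpha2))          # lambda*alpha*beta
norm = np.hypot(zeta - (1.0 - alpha2), ab)
cth, sth = (zeta - (1.0 - alpha2))/norm, -ab/norm

# Diagonal-basis pulses on a wide T/tau grid, then the rotated (mixonium) pulses.
t = np.linspace(-60.0, 60.0, 200001)
dt = t[1] - t[0]
om_a_d = (4.0/tau)/(2.0*np.cosh(t - zeta*kZ) + np.exp(t + (3.0*zeta - 2.0)*kZ))
om_b_d = (4.0/tau)/(2.0*np.cosh(t - (1.0 - zeta)*kZ) + np.exp(t + (1.0 - 3.0*zeta)*kZ))
A_a = (cth*om_a_d + sth*om_b_d).sum()*dt*tau     # numerically integrated areas
A_b = (-sth*om_a_d + cth*om_b_d).sum()*dt*tau

h = lambda Z: np.sqrt(1.0 + np.exp(2.0*(2.0*zeta - 1.0)*Z))   # h of kappa*Z
A_a_exact = 2.0*np.pi*(cth/h(kZ) + sth/h(-kZ))
A_b_exact = 2.0*np.pi*(-sth/h(kZ) + cth/h(-kZ))
A_T = np.hypot(A_a, A_b)                          # total area; 2*pi for any Z
```

The identity $h(Z)^{-2} + h(-Z)^{-2} = 1$ is what pins the total area to $2\pi$ at every propagation depth.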
We will use gaussian input pulses defined as: $$\label{mix-numInput} \Omega_{a}^{(in)} = \frac{A_{a}}{\tau_a\sqrt{2\pi}}e^{-\frac{T^2}{2\tau_a^2}}\ \quad {\rm and}\ \quad \Omega_{b}^{(in)} = \frac{A_{b}}{\tau_b\sqrt{2\pi}}e^{-\frac{T^2}{2\tau_b^2}},$$ and “super-gaussian” input pulses defined as: $$\label{mix-numInputSquare} \Omega_{a}^{(in)} = \frac{A_{a}}{\tau_a\Gamma(\frac{1}{4})}e^{-\frac{T^4}{(2\tau_a)^4}}\ \quad {\rm and}\ \quad \Omega_{b}^{(in)} = \frac{A_{b}}{\tau_b\Gamma(\frac{1}{4})}e^{-\frac{T^4}{(2\tau_b)^4}},$$ where $A_a$ gives the pump pulse area as defined in , $\tau_a$ is the nominal pump pulse width, $\Gamma(1/4) \approx 3.6$ is the gamma function, and similarly for the Stokes pulse. We replace the infinite uniform medium by a medium with definite entry and exit faces. We will first review pulse propagation in phaseonium, a pure-state medium. We will show how the SIT-like propagation predicted in Sec. \[ss:bright-dark\] can be realized by using matched input pulses with ratio $\Omega_a/\Omega_b = \alpha/\beta$ such that $|c_D|^2 = 0$. However, small fluctuations eventually cause these pulses to reach the dark state. Next we will show numerical solutions with unmatched input pulses that demonstrate pulse matching, similar to results shown in ref. [@Kozlov-Eberly]. We will contrast these results with solutions for pulse propagation in a mixonium medium. Because the dark state can never be fully populated for pulses propagating in mixonium, SIT always plays a role, and EIT dominance is weakened. The difference is most pronounced for media that are many absorption depths long; however, even short media show some of these characteristics.

Phaseonium - Matched Input Pulses
---------------------------------

As shown in Sec. 
\[ss:bright-dark\], when the pulses are initially matched, with ratios $\Omega_a/\Omega_b = \alpha/\beta$, such that $|c_D|^2 = 0$, the three-level $\Lambda$ system reduces to that of a two-level system regardless of the composite bright-pulse shape. We plot such a solution in Fig. \[fig.mix.numMatchedPulse1.0\], where we use matched super-gaussian input pulses as defined in Eq. , with equal pulse widths $\tau_a = \tau_b = 3T_2^*$ and areas $A_a = 2.3 \alpha \pi$ and $A_b = 2.3 \beta \pi$. The medium is prepared in a pure state such that $\lambda = 1.0$ with populations $\alpha^2 - \beta^2 = 0.6$. This solution clearly illustrates the three-regime language that we have introduced. In regime I (frames 1-3) the pulses act as simultons, and the pulse-medium interaction is exactly that for resonant two-level pulse propagation when considered in the dressed state basis. Then in regime II (frames 4-5) the pulse amplitudes rapidly change leading to the dark-state output regime III (frame 6). The solution exhibits SIT type propagation in regime I, where $|c_D|^2 \approx 0$, but beginning near frame 4 the pulse amplitudes begin to change, and eventually end up with ratio $\Omega_a/\Omega_b = -\beta/\alpha$ in frame 6. Here all population is in the dark state and $|c_D|^2 \approx 1$, and EIT type propagation takes over. ![\[fig.mix.numMatchedPulse1.0\] Plots of numerical pulse solutions of Eqs. for a pure-state phaseonium medium, with super-gaussian input pulses. The horizontal axis is $x$ in units of $\kappa/c$, and the vertical axis is the pulse Rabi frequency in units of $\tau^{-1}$. The solid curve is the pump pulse, $\Omega_{a}$, and the dashed curve is the Stokes pulse, $\Omega_{b}$. The plot shows matched input pulses with ratio $\Omega_a/\Omega_b = \alpha/\beta$, such that $|c_D|^2 \approx 0$, reshaped just as in normal SIT. 
However after some propagation distance their ratios change to $\Omega_a/\Omega_b = -\beta/\alpha$ as predicted by the dark area theorem so that $|c_D|^2 \approx 1$. Parameters: $\alpha^2 = 0.8$, $\beta^2 = 0.2$, $\tau_a = \tau_b = \tau \approx 3T_2^*$, $A_a = 2.3\alpha \pi$, $A_b = 2.3\beta\pi$, and $\lambda=1.0$. The value of $\mu$ is chosen so that $v_g/c = 1/2$ inside the medium.](PRA_fig6_MatchedNumPulse1.0.eps){height="2.7in"} ![\[fig.mix.MatchedArea1.0\] Numerically integrated areas of the individual pulses as well as the area of the total Rabi frequency for the pulse solutions shown in Fig. \[fig.mix.numMatchedPulse1.0\]. The vertical axis is the pulse area. The solid curve is the area of the pump pulse, the dashed curve is the area of the Stokes pulse, and the dot-dashed curve is the area of the total Rabi frequency. The medium initially behaves as a two-level medium, and the bright area changes to $2\pi$. However after some propagation distance the pulses transfer to the dark state, while the bright area remains constant through this change.](PRA_fig7_MatchedArea1.0.eps){height="2.0in"} The initial SIT type behavior is further illustrated in Fig. \[fig.mix.MatchedArea1.0\] where we plot the areas of the individual pulses, as well as the area of the bright pulse. After a few absorption depths, the total pulse area is quickly changed from its input area of $A_T = 2.3\pi$ to $2\pi$. Then after propagating as simultons for a while, the pulses quickly change to the dark state. While the individual pulse areas are rapidly changing, we see completely static behavior for the bright area, confirming our analytic result in Eq. . After the initial reshaping caused by the two-level SIT behavior, the pulses propagate for some time as simultons. The simulton solutions are exact solutions, so the reason for the pulses to transfer to the dark state does not readily present itself. Only perfectly matched [*sech*]{}-shaped pulses remain simultons. 
Any small perturbation from these exact shapes implies $|c_D|^2 > 0$. While the initial dark-state probability may be small, any non-zero value will eventually lead to the dark state if the propagation distance is long enough. In this particular example, the small perturbations are caused by numerical roundoff error. The analytic solutions actually predict this behavior. It is the exponential term in the denominator of Eqs. which describes deviations from the simulton shape. Thus we see that simulton pulse propagation is in fact an inherently unstable propagation scenario, and that small fluctuations will always lead to the dark state.

Phaseonium - Mismatched Input Pulses
------------------------------------

Next we examine what happens if the pulses are initially temporally mismatched, but where the medium is still prepared in a pure state. We show the numerical pulse solutions in Fig. \[fig.mix.numPulse1.0Short\], where we take the medium to be prepared with $\alpha^2 - \beta^2 =0.6$, and the pulses to be of gaussian shape with width $\tau_a = \tau_b/2 = 3T_2^*$, a temporal mismatch of $2\tau_a$, and areas $A_a = 1.2\pi$ and $A_b = 0.8\pi$. In this case, the pulses do not propagate as simulton pulses, since they are not initially matched (i.e. $\Omega_a/\Omega_b \ne \text{ constant}$). However as they propagate they are quickly reshaped into two-peaked but matched pulses with ratio $\Omega_a/\Omega_b = -\beta/\alpha$, such that the dark state is fully populated and EIT type propagation occurs. This re-shaping occurs because the dark Rabi frequency is nonzero for mismatched pulses. In this example the medium is only 10 absorption depths long, and most re-shaping is complete after about 5 absorption depths. This plot confirms previous KE results showing very similar behavior [@Kozlov-Eberly]. ![\[fig.mix.numPulse1.0Short\] Plots of numerical pulse solutions of Eqs. for a pure-state phaseonium medium, with gaussian input pulses. 
The horizontal axis is $x$ in units of $\kappa/c$, and the vertical axis is the pulse Rabi frequency in units of $\tau^{-1}$. The solid curve is the pump pulse, $\Omega_{a}$, and the dashed curve is the Stokes pulse, $\Omega_{b}$. The plot shows mis-matched input pulses quickly reshaped into matched pulses with ratios $\Omega_a/\Omega_b = -\beta/\alpha$ as predicted by the dark area theorem and the analytic solutions. Parameters: $\alpha^2 = 0.8$, $\beta^2 = 0.2$, $\tau_a = \tau_b/2 = \tau \approx 3T_2^*$, temporal mismatch of $2\tau_a$, $A_a = 1.2\pi$, $A_b=0.8\pi$, and $\lambda=1.0$.](PRA_fig8_NumPulse1.0Short.eps){height="2.9in"} ![\[fig.mix.Area1.0Short\] Numerically integrated area of the individual pulse areas as well as the total area for the pulse solutions shown in Fig. \[fig.mix.numPulse1.0Short\]. The horizontal axis is $x$ in units of $\kappa/c$, and the vertical axis is the pulse area. The solid curve is the area of the pump pulse, the dashed curve is the area of the Stokes pulse, and the dot-dashed curve is the area of the total Rabi frequency. The pulses initially reshape as they match, however after a short propagation distance the pulses enter the dark state, and the pulse areas become constant. The SIT area theorem does not apply in this case.](PRA_fig9_Area1.0Short.eps){height="2.1in"} We also plot numerically integrated pulse areas in Fig. \[fig.mix.Area1.0Short\] for both the individual pulse areas as well as the area of the total Rabi frequency. This plot further illustrates the distinction between the matched and mismatched cases. The pulse areas are quickly modified as the pulses match, but as soon as they reach the dark state, the pulse areas become constant. The SIT area theorem does not apply in this case, and no prediction is possible as to the final pulse areas, unlike the previous example. These two examples of matched and mis-matched input pulses highlight a feature common to pulse propagation in phaseonium. 
That is, the dark-state always dominates. Pulses will always end up matched and always with a ratio that satisfies the dark area theorem, so that absorption is cancelled and EIT plays a dominate role. However, for matched input pulses with no dark state population, two-level physics initially describes the propagation, and the dark-state dominance takes much longer to appear. While we can identify the input, output and transfer regimes in the matched input example, we cannot do the same for the mismatched input pulses. We will see in the next section that the mixonium medium modifies the absorptive properties causing the dark-state dominance to be replaced with SIT like effects. Mixonium - Mismatched Input Pulses ---------------------------------- We now examine the effects that mixonium has on mismatched pulse propagation. Matched input pulses (no matter the shape) with input ratios given by the analytic solutions in Eqs. will behave in a similar manner to the matched pulse example in the previous section, so we do not plot the results here. The difference is simply that the dark state can no longer be fully populated as discussed in Sec. \[ss:mixed-state-an\], and thus the pulses continue to cause excitation into the excited state. The inability of the dark state to be fully populated in mixonium has a profound effect on mis-matched pulses propagating through many absorption depths (the result is less dramatic for short media). We plot the pulse solutions for the same parameters as the solutions plotted in the previous section, with $\alpha^2 - \beta^2 = 0.6$, gaussian pulse shapes with duration $\tau_a = \tau_b/2 = 3T_2^*$ and an offset of $2\tau_a$, and pulse areas of $A_a = 1.2\pi$ and $A_b = 0.8\pi$, except we take the medium to be in a mixed state with $\lambda = 0.8$. We plot these pulse solutions in the left frame of Fig. \[fig.mix.numPulse0.8Short\]. We see a very similar behavior to the previous solutions with one almost unnoticeable difference. 
That is, the output pulse ratio is given by $\Omega_a/\Omega_b = \tan \theta = -\lambda\alpha\beta/(\zeta - \beta^2)$ as predicted by the mixed-state analytic solutions. However, when we plot the areas of the pulses in the left frame of Fig. \[fig.mix.Area0.8Short\] we notice now that the total pulse area is very close to $2\pi$, in contrast to the pure-state solution shown in Fig. \[fig.mix.Area1.0Short\]. In fact in this example, the pulses were still being reshaped, thus we will examine what happens if the medium is slightly longer. ![\[fig.mix.numPulse0.8Short\]\[fig.mix.numPulse0.8Long\] Plots of numerical pulse solutions of Eqs. for a mixed-state medium, with gaussian input pulses. The horizontal axis is $x$ in units of $\kappa/c$, and the vertical axis is the pulse Rabi frequency in units of $\tau^{-1}$. The left and right frames are identical except for different medium lengths. The solid curve is the pump pulse, $\Omega_{a}$, and the dashed curve is the Stokes pulse, $\Omega_{b}$. The plot shows mis-matched input pulses quickly reshaped into matched pulses, similar to the pure state case. Parameters: $\alpha^2 = 0.8$, $\beta^2 = 0.2$, $\tau_a = \tau_b/2 = \tau \approx 3T_2^*$, temporal mismatch of $2\tau_a$, $A_a = 1.2\pi$, $A_b=0.8\pi$, and $\lambda=0.8$.](PRA_fig10a_NumPulse0.8Short.eps "fig:"){height="2.8in"} ![\[fig.mix.numPulse0.8Short\]\[fig.mix.numPulse0.8Long\] Plots of numerical pulse solutions of Eqs. for a mixed-state medium, with gaussian input pulses. The horizontal axis is $x$ in units of $\kappa/c$, and the vertical axis is the pulse Rabi frequency in units of $\tau^{-1}$. The left and right frames are identical except for different medium lengths. The solid curve is the pump pulse, $\Omega_{a}$, and the dashed curve is the Stokes pulse, $\Omega_{b}$. The plot shows mis-matched input pulses quickly reshaped into matched pulses, similar to the pure state case. 
Parameters: $\alpha^2 = 0.8$, $\beta^2 = 0.2$, $\tau_a = \tau_b/2 = \tau \approx 3T_2^*$, temporal mismatch of $2\tau_a$, $A_a = 1.2\pi$, $A_b=0.8\pi$, and $\lambda=0.8$.](PRA_fig10b_NumPulse0.8Long.eps "fig:"){height="2.8in"} ![\[fig.mix.Area0.8Short\] \[fig.mix.Area0.8Long\]Numerically integrated area of the individual pulse areas as well as the total Rabi frequency area for the pulse solutions shown in Fig. \[fig.mix.numPulse0.8Long\]. The horizontal axis is $x$ in units of $\kappa/c$, and the vertical axis is the pulse area. The solid curve is the area of the pump pulse, the dashed curve is the area of the Stokes pulse, and the dot-dashed curve is the area of the total Rabi frequency. We see initially rapid change in the pulse areas as they are matched, consistent with pure-state behavior. However because the dark state can never be fully populated the medium always behaves as an SIT like medium, and eventually the bright pulse area changes to $2\pi$ area.](PRA_fig11a_Area0.8Short.eps "fig:"){height="1.9in"} ![\[fig.mix.Area0.8Short\] \[fig.mix.Area0.8Long\]Numerically integrated area of the individual pulse areas as well as the total Rabi frequency area for the pulse solutions shown in Fig. \[fig.mix.numPulse0.8Long\]. The horizontal axis is $x$ in units of $\kappa/c$, and the vertical axis is the pulse area. The solid curve is the area of the pump pulse, the dashed curve is the area of the Stokes pulse, and the dot-dashed curve is the area of the total Rabi frequency. We see initially rapid change in the pulse areas as they are matched, consistent with pure-state behavior. However because the dark state can never be fully populated the medium always behaves as an SIT like medium, and eventually the bright pulse area changes to $2\pi$ area.](PRA_fig11b_Area0.8Long.eps "fig:"){height="1.9in"} ![\[fig.mix.numPop0.8Long\] Plots of numerical excited state population solutions of Eqs. for a mixed-state medium. 
Each frame corresponds exactly to the same frame on the right hand side of Fig. \[fig.mix.numPulse0.8Long\]. The horizontal axis is $x$ in units of $\kappa/c$, and the vertical axis is the excited state population. The plot shows that even after the pulses are matched, the dark state can never be fully populated in a mixed-state medium, and thus the excited state continues to be populated. Parameters: $\alpha^2 = 0.8$, $\beta^2 = 0.2$, $\tau_a = \tau_b/2 = \tau \approx 3T_2^*$, temporal mismatch of $2\tau_a$, $A_a = 1.2\pi$, $A_b=0.8\pi$, and $\lambda=0.8$.](PRA_fig12_NumPop0.8Long.eps){height="2.6in"} The difference between phaseonium and mixonium propagation is greatly magnified when the pulses propagate through a longer medium. We plot these solutions in the right frame of Fig. \[fig.mix.numPulse0.8Long\] which show continued reshaping of the pulses until they are matched [*sech*]{} shaped pulses, exactly as given by the output analytic solutions in Eqs. . We plot the pulse areas for this example in the right frame of Fig. \[fig.mix.Area0.8Long\] and we see that after the initial rapid reshaping which matches the pulses, the areas of the pulses continue to change until the bright pulse area reaches $A_b = 2\pi$. The modified interaction properties of mixonium cause SIT to dominate and reshape the pulses. Thus the EIT dominance that was exhibited in the pure-state case, is now replaced with SIT dominance for mixed-state media. For this same long medium example we also plot the excited state population in Fig. \[fig.mix.numPop0.8Long\]. We see that even after the pulses are matched, the excited state can never be completely decoupled. Thus, the dark state can never be fully populated, with its maximum value given by the interaction parameter $\zeta$. The inability of the dark state to be fully populated allows SIT to continuously reshape the pulses until they agree with the analytic solutions. 
Conclusions {#ss:mix-conclusions} =========== We have presented new solutions of the Maxwell-Bloch equations, both analytic and numerical, applicable to a “mixonium" medium, where the term mixonium implies a $\Lambda$ medium prepared in a partially phase-coherent superposition of the ground states. This medium offers a new contrast to a pure “phaseonium" medium, where the ground states are prepared in a completely phase-coherent superposition of the two ground states. The partially coherent medium is experimentally realistic, whereas pure-state preparations are difficult to achieve. The analytic solutions for the pulses and density matrix elements were obtained via the Park-Shin Bäcklund transformation method [@park-shin]. Consistent with our previous work [@clader-eberly07; @clader-eberly-pra07], we again identified three distinct regimes of interest for the analytic solutions. For the pure-state case, we identified our solutions in the asymptotic input regime to be equivalent to the well known simulton solutions [@Konopnicki-Eberly]. Our analytic solutions then describe the transfer of these input simulton pulses to simulton solutions completely in the dark state in the output regime. We identify this behavior as the transfer of pulses propagating with completely SIT like behavior to completely EIT like behavior, where we use the same definition of SIT and EIT like behavior as given in Ref. [@Kozlov-Eberly]. Using numerical solutions to the Maxwell-Bloch equations we were able to show this SIT simulton to EIT simulton behavior for pulses with matched but different shapes from the analytic solutions. The analytic solutions in the general mixed-state case, allow us to identify an interaction parameter that determines the maximum population of the dark state. In the pure-state case, the dark state can become fully populated and all interaction between the pulses and medium is eliminated. 
However in the mixed-state case the dark state can never be fully populated and thus the excited state cannot be decoupled. We studied the effects that this modified interaction parameter has on dark-state propagation dynamics by numerically solving the Maxwell-Bloch equations. In the pure-state case, our numerical results confirmed previous results that mismatched input pulses with gaussian shapes quickly match and propagate unchanged as the dark-state is fully populated prohibiting any coupling to the excited state [@Kozlov-Eberly]. In contrast, in the mixed-state case these same mismatched pulses never reach complete EIT type propagation since the dark state is never fully populated. Unlike the pure-state case, EIT type effects cannot completely cancel SIT type effects, and the pulses continue to be reshaped into matched [*sech*]{} shaped simultons, matching shape to our output analytic solutions. We have been able to demonstrate that the two-level McCall-Hahn area theorem still plays a role even in this three-level system. The composite or “total" two-pulse Rabi frequency of the analytic solutions has constant $2\pi$ area during all stages of propagation. SIT propagation effects cannot be completely cancelled for mixed-state propagation, causing the composite total Rabi frequency area to evolve toward $2\pi$ just as in single pulse two-level SIT. Only in the pure-state case where EIT can completely cancel SIT, can this behavior be avoided. Thus we see that the two-level area theorem continues to play a role in two-pulse propagation through a three-level medium suggesting a possible three-level area theorem for the total Rabi frequency. We thank Q-Han Park for helpful discussions and correspondence. B.D. Clader acknowledges receipt of a Frank Horton Fellowship from the Laboratory for Laser Energetics, University of Rochester. Research has been supported by NSF Grant PHY 0456952 and PHY 0601804. The e-mail contact address is: [email protected]. 
[27]{} natexlab\#1[\#1]{}bibnamefont \#1[\#1]{}bibfnamefont \#1[\#1]{}citenamefont \#1[\#1]{}url \#1[`#1`]{}urlprefix\[2\][\#2]{} \[2\]\[\][[\#2](#2)]{} , ** (, , ). , ** (, , ). , ** (, , ). , ****, (). , , , ****, (). , , , ****, (). , ****, (). , , , ****, (). , ****, (). , ****, (). , ****, (). , ****, (). , , , ****, (). , , , ****, (). , , , , ****, (). . , ****, (). , ****, (). , in **, edited by , , (, ), p. . , ****, (). , , , , ****, (). , , , , ****, (). , ed., ** (, ). , in **, edited by (, , ), p. . , ****, (). , ****, (). | High | [
0.660351826792963,
30.5,
15.6875
] |
A stainless steel sheath for endoscopic surgery and its application in surgical evacuation of putaminal haemorrhage. A stainless steel tube was used as an endoscope sheath in combination with a working channel endoscope to evacuate hypertensive putaminal intracerebral haematoma (ICH). A frontal entry point ipsilateral to the haematoma was selected for insertion of the sheath. From January to June 2004, seven patients with putaminal ICH underwent endoscopic surgery in our hospital. There were no surgical complications. Haematoma evacuation rates were greater than 90% (median of 93%). Six patients (87%) regained consciousness within one week. Six patients, including four who had no residual disability and two who had moderate disability, were able to function independently. One patient remained in a persistent vegetative state at clinical follow-up after 6 months. Use of a stainless steel endoscopic sheath combined with working channel endoscopy via a frontal approach facilitates evacuation of putaminal ICH. | High | [
0.657963446475195,
31.5,
16.375
] |
Q: Can't change Bootstrap h1 text colour using id selector? At first, here is the relevant parts of the code: <head> <style type="text/css"> #part1 { background: url("1.jpg") no-repeat center; background-size: cover; } #1-title { color: blue; } </style> </head> <body> <div class="jumbotron jumbotron-fluid" id="part1"> <div class="container"> <h1 id="1-title" class="display-3">The New App</h1> <p class="lead" id="1-disc">A new app</p> <hr class="my-4"> </div> </div> </body> h1 is assigned an id of "1-title", and hence h1 text colour should be blue, but it remains black even if I use !important. However, I tried adding a class and applying the style to it as following: <style type="text/css"> #part1 { background: url("1.jpg") no-repeat center; background-size: cover; } .c { color: blue; } </style> and: <h1 class="display-3 c">The New App</h1> and it worked. So what is the reason of that? Why can't I change the colour using the d selector? A: The ID selector isn't working because an ID can't start with a number. Either change the ID to a letter or use the attribute selector [id='1-title'] {...} A: All the answers are correct but I am writing this answer for somebody who must have a id starting with a number This can be done in two ways--- no.1 You can use Michael Coker's answer-- [id='1-title'] {...} no.2 but for this the support is till IE7 So if you are among those unlucky one's who needs to support older IEs, you need to use unicodes like this-- #\31-title {...} Hope this helps future readers! | Mid | [
0.5968819599109131,
33.5,
22.625
] |
Q: Mouse enter/leave Form and Button child events problem have a program that fades out on mouse leave event. but the problem is when the mouse goes in to a child of the form like a Button, it triggers the mouse leave event. so i set up this code. private void Form1_MouseLeave(object sender, EventArgs e) { if (this.ClientRectangle.Contains(this.PointToClient(Cursor.Position))) { this.Opacity = 1.0; } else { int loopctr = 0; for (loopctr = 100; loopctr >= 5; loopctr -= 10) { this.Opacity = loopctr / 99.0; this.Refresh(); Thread.Sleep(100); } } } but the problem now is that the form often does not trigger the mouse leave event, looks to be because the buttons are so close to the form edges that it never registrer that the mouse left the form boundaries, probably because the mouse cursor is to fast and skips over the form when it leaves. any suggestions on how to handel this? A: Here is a simple utility class to do this, first use it in your form like this: partial class Form1 : Form { public Form1() { InitializeComponent(); new FadeForm(this, TimeSpan.FromSeconds(5), TimeSpan.FromSeconds(0.25), 0.05); } } Then add this code to your project: class FadeForm { readonly Form _top; readonly Timer _timer; readonly TimeSpan _delayToFade; readonly double _fadeAmount; Control _lastControl; DateTime _lastActivity; public FadeForm(Form ctrl, TimeSpan delayToFade, TimeSpan delaySpeed, double fadeAmount) { _top = ctrl; _delayToFade = delayToFade; _fadeAmount = fadeAmount; _lastActivity = DateTime.Now; WatchControl(_top); _timer = new Timer(); _timer.Interval = (int)delaySpeed.TotalMilliseconds; _timer.Enabled = true; _timer.Tick += new EventHandler(Tick); } void Tick(object sender, EventArgs e) { if (_lastControl != null || (DateTime.Now - _lastActivity) < _delayToFade) { if (_top.Opacity != 1) _top.Opacity = 1; } else { double newvalue = _top.Opacity -= _fadeAmount; if (newvalue > 0.0) _top.Opacity = newvalue; else _top.Close(); } } void WatchControl(Control c) { c.MouseEnter += 
new EventHandler(MouseEnter); c.MouseLeave += new EventHandler(MouseLeave); } void MouseEnter(object sender, EventArgs e) { _lastControl = sender as Control; } void MouseLeave(object sender, EventArgs e) { _lastControl = null; _lastActivity = DateTime.Now; } } | Mid | [
0.624390243902439,
32,
19.25
] |
After a season-ending 3-1 aggregate loss to Atlanta United in the Eastern Conference championship, head coach Chris Armas has gotten plenty of stick. Most of that criticism concerns the way Armas set up his team in the first leg in Atlanta—a 3-0 loss that the Red Bulls couldn’t overcome in a 1-0 win at home on Thursday. If the Red Bulls hadn’t ceded possession in Atlanta, things might have been different, the thinking goes. Or maybe if the team had left back Kemar Lawrence healthy, an MLS Cup appearance would be in the cards. But the blame shouldn’t be pinned on starting lineups, or tactics, or a system implemented in one particular game. Rather, the fault for the Red Bulls’ exit lies in a club-wide philosophy that doesn’t translate to success in knockout competitions where impact players are expected to shine. And the Red Bulls, simply put, lack those kind of players on their roster. That’s not to say they don’t have a... | Mid | [
0.639830508474576,
37.75,
21.25
] |
Florida is a beautiful peninsula hailed as America’s fourth most popular state. It has become a favorite vacation destination as well as a famous retirement home venue for many citizens of the United States as well as other people from all across the globe. In the southwest region of Florida is the beautiful barrier island of Siesta Key, which is home to a number of attractive real estate developments. Read More About Crescent Royale. Condos for Sale at Crescent Royale Siesta Key This cozy, fully furnished, one bedroom retreat at the gated community of Crescent Royale is ideally located just across the road from the worlds #1 whitest sand beach on famous Siesta Key. Perfect... Crescent Royale Condos on Siesta Key One of the most magnificent condominium complexes in the Siesta Key Island is Crescent Royale. Residents occupy three mid-rise buildings with a total of 101 condo units enjoying breathtaking views of the Gulf of Mexico. Additional amenities in the Crescent Royale include an Olympic-size heated swimming pool, barbecue and picnic groves, fitness center and sauna, shuffleboard facilities, table tennis, billiard table, and a multi-purpose room for social events. Condominium residences at Crescent Royale enjoy the advantage of rental income as short-term lease is allowed by the property management. Its strategic location attracts vacationers to this island resort-inspired community. Just opposite the complex is the internationally-acclaimed Siesta Key Beach famous for its pure quartz powder-white sands. A number of recreational activities may also be enjoyed in the beach premises such as swimming, boating, parasailing, fishing, and many more. Residents and guests in search for entertainment opportunities beyond the beach can walk a short distance to the town centers of the Crescent Beach Village and the Siesta Key Village. Here, they will find a fine selection of shopping and dining establishments. 
Because the Siesta Key barrier island is only a bridge away from mainland Florida, residents may also visit the bustling city of Sarasota. Popular attractions of downtown Sarasota include the Ringling Museum of Art, Westfield Sarasota Square Mall, and Van Wezel Performing Arts Hall, to name a few. All listing information is deemed reliable but not guaranteed and should be independently verified through personal inspection by appropriate professionals. Listings displayed on this website may be subject to prior sale or removal from sale; availability of any listing should always be independently verified. Listing information is provided for consumer personal, non-commercial use, solely to identify potential properties for potential purchase; all other use is strictly prohibited and may violate relevant federal and state law. Listing data comes from My Florida Regional MLS DBA Stellar MLS. Listing information last updated on September 15th, 2019 at 5:30am EDT. Follow Us Winner - Sarasota Herald Tribune's Readers Choice Award Exceptional service requires both dedication and knowledge. Our commitment to "Extraordinary" has been proudly recognized in the Sarasota Herald Tribune for the fifth consecutive year as Best real Estate Office by our clients and community. After all everything we do is focused on DWELLing well. | Mid | [
0.583941605839416,
30,
21.375
] |
As a conventional actuator, there has been provided an actuator, as shown in FIG. 7, in which a plane extending in parallel to the length of an output axis is formed in such a way as to construct a rotation prevention mechanism. In a case in which such an actuator is connected to an external device, a joint 11 connected to the output axis of the actuator is connected to an end of a link plate 13 with a pin 20 and a locking ring 21 in such a way that the joint 11 can rotate around a central axis GG, and another end of the line plate 13 is connected to a lever 2 of the external device with a pin 14 and a locking ring 15 in such a way that the link plate 13 can rotate around a central axis HH, as shown in FIG. 8. Patent reference 1 discloses an actuator in which a notch extending along an axis is formed in an output axis, and a projection fitted into the above-mentioned notch is formed in a casing of the actuator in such a way as to construct a rotation prevention mechanism. [Patent reference 1] JP, 2002-327709, A A problem with the conventional actuators constructed as mentioned above is that in a case in which a rotation prevention mechanism is disposed in the output axis, the outer diameter of the output axis has to be made thick in order to ensure the mechanical strength of the output axis, and therefore the actuator becomes enlarged. Another problem is that the provision of the rotation prevention mechanism increases the manufacturing cost of the actuator. A further problem is that when connected to an external device, because the locking rings are used in the following two places a connecting place at which the joint is connected to the link plate, and a connecting place at which the link plate is connected to the lever, the distance c from the link plate 13 to the locking ring 21, i.e., the play of the connecting portion in the axis direction becomes large. 
A still further problem is that there is a play due to the rotation prevention mechanism also in the actuator, and sticking occurs depending on the positional relation between the actuator and the external device when the actuator is attached to the external device. The present invention is made in order to solve the above-mentioned problems, and it is therefore an object of the present invention to provide an actuator which can prevent sticking from occurring and can reduce its manufacturing cost, which can be downsized, and which is equipped with a connecting member for connecting the actuator to an external device. | Mid | [
0.5934065934065931,
33.75,
23.125
] |
CNBC Misses Possible Financial Reform Repercussions Why New Bank Capital Rules Could Make Things Worse I’m not sure if these assertions by CNBC apply to recently passed regulations here or just this current round of international ones. Even if it doesn’t, the justification the left gives us reason to doubt their perspicacity. The problem is inherent and probably unavoidable. Regulators want to achieve a world-wide harmony on bank capital rules. But by reducing the diversity of regulatory regimes, they inevitably increase the costs of regulatory error. Regulations homogenize. Banks told that certain assets count as regulatory capital will hold more of those assets than they otherwise would. If those assets are less safe than the regulators believe, banks will be more vulnerable and the banking system more fragile than it would be with less homogeneity. If this is really inherent, tell me why the great minds at CNBC didn’t see this before the reforms were passed. Is CNBC allowing free market theory to sneak into their newsroom? Markets can cope with uncertainty because they do not require homogeneity. I don’t think the author really believes this. Different companies make different predictions about which businesses will be profitable. The ones that get their predictions wrong lose money; the ones that get them right earn profits. Persistent or outsized predictive failures led to bankruptcy; while persistent or outsized predictive successes leads to growth or at least continued operations. These thoughts don’t seem to be parallel, but if this is true, then the persistent trusting of Barney Frank is what led to recent losses. What is the market telling us about Frank? The market process sorts winners from losers without anyone having to determine who made the right predictions. Regulations lack this discipline. Again, more free market theory that the libs refuse to accept responsibility for when it goes against them. 
Where a business can see its inventory build or profits fall and change directions, regulatory failures are often invisible until very late in the process. The failure of prior capital rules did not put the regulators out of business. In fact, this round of Basel negotiations has even more countries participating than the earlier round did. Throughout the financial crisis and afterwards, regulators have failed upward. Evaluations of which regulations failed is open to political debate rather than being self-evident. The rules coming out of Basel will inevitably encourage concentrations of risk management strategies and asset holdings that will make the financial system more fragile. Sounds like every instance of big government up to the present day. The more detailed the rules are, the more systemic risk-creating homogeneity will be introduced. All of which is not to say that we don’t need banking capital regulations. For a host reasons-not the least of which is that banks have demonstrated that they can shift losses onto taxpayers-we do. But we shouldn’t be too confident in the efficacy of our new regulations, no matter how swell they might seem to us now. Here go the libs blaming the system or even conservatives for problems they caused. Banks couldn’t have shifted these losses to taxpayers if it were not for the left. Banks wouldn’t have been in such a possision, though, if Frank/Pelosi/Reid had listened to Bush about Fannie and Freddie. | Mid | [
0.603864734299516,
31.25,
20.5
] |
a. Field of the Invention The present invention relates generally to a method and system for generating a surface model of a geometric shape. More particularly, the present invention relates to a computer-implemented method and system for generating a surface model of an anatomic structure, such as the heart, or a particular portion thereof, using surface point data. b. Background Art For many years, computer-implemented methods and systems have been used to generate surface models of geometric shapes including, for example, anatomic structures. More specifically, a variety of methods or techniques have been used to generate surface models of the heart and/or particular portions thereof (e.g., the heart as a whole or particular structures and/or portions thereof). In one particular method, a plurality of sample points are taken on the surface of the structure being modeled that correspond to the relative location of the structure at that particular point. A surface model of the structure is then constructed based on the convex hull of the collection of sample points. In general terms, to collect the sample points, the surface of the structure is swept with a catheter and the various points on the surface of the structure visited by the catheter are recorded using known methods. These individual points collectively form a cloud of points (See, for example, FIG. 4). The convex hull of the cloud of points is then computed using known convex hull algorithms (See, for example, FIG. 5). The resulting convex hull shape estimates the boundary of the structure from the set of points, and therefore, provides a surface model of the structure. An advantage of this type of method/technique is that areas of the modeled structure that are not visited by the catheter, either because the catheter cannot reach the particular area or the clinician taking the samples did not collect samples from that area, are “filled in” during the model construction phase to create a complete model. 
This advantage, however, may also be the principal disadvantage of these methods/techniques. For instance, because areas of the structure are “filled in”, these techniques cannot reconstruct features of the modeled structure that are concave. Accordingly, with respect to the modeling of the heart, for example, these techniques cannot reconstruct certain anatomic features within the heart, such as papillary muscles or pulmonary vein ostia, which are both concave structures that would normally “indent” the heart surface model. Thus, while these techniques provide a good generalized model of the structure, they do not provide the level of detail that would be useful for many different applications. Accordingly, there is a need for a method and system of generating surface models, such as, for example, cardiac surface models, that will minimize and/or eliminate one or more of the above-identified deficiencies. | High | [
0.700251889168765,
34.75,
14.875
] |
Typically, a fuel cell has a cell stack formed by a number of power generation cells stacked together. With reference to FIGS. 17 to 19, a prior art power generation cell will be described. As shown in FIG. 17, a power generation cell 12 includes a pair of upper and lower frames 13, 14 and an electrode structure 15 between the frames 13, 14. The electrode structure 15 is formed by a solid electrolyte membrane 16, an electrode catalyst layer 17 on the anode side, and an electrode catalyst layer 18 on the cathode side. The anode-side electrode catalyst layer 17 is laid on the upper surface of the electrolyte membrane 16, and the cathode-side electrode catalyst layer 18 is laid on the lower surface of the solid electrolyte membrane 16. A first gas diffusion layer 19 is laid on the upper surface of the electrode layer 17, and a second gas diffusion layer 20 is laid on the lower surface of the electrode layer 18. Further, a first gas passage forming member 21 is laid on the upper surface of the first gas diffusion layer 19, and a second gas passage forming member 22 is laid on the lower surface of the second gas diffusion layer 20. A flat plate-like separator 23 is joined to the upper surface of the first gas passage forming member 21, and a flat plate-like separator 24 is joined to the lower surface of the second gas passage forming member 22. FIG. 18 is an enlarged perspective view showing a part of the first and second gas passage forming members 21, 22. As shown in FIG. 18, the gas passage forming member 21 (22) is made of a metal lath plate, which has a great number of hexagonal ring portions 21a (22a) arranged alternately. Each ring portion 21a (22a) has a through hole 21b (22b). The ring portions 21a (22a) and the through holes 21b (22b) form gas passages 21c (22c) that meander in a complex manner. Fuel gas (oxidation gas) flows through gas passages 21c (22c) as indicated by arrows. As shown in FIG. 
17, the frames 13, 14 form a supply passage G1 and a discharge passage G2 for fuel gas. The fuel gas supply passage G1 is used for supplying hydrogen gas, which serves as fuel gas, to the gas passages 21c of the first gas passage forming member 21. The fuel gas discharge passage G2 is used for discharging fuel gas that has passed through the gas passages 21c of the first gas passage forming member 21, or fuel off-gas, to the outside. Also, the frames 13, 14 form a supply passage and a discharge passage for oxidation gas. The oxidation gas supply passage is located at a position corresponding to the back side of the sheet of FIG. 17, and is used for supplying air serving as oxidation gas to the gas passages of the second gas passage forming member 22. The oxidation gas discharge passage is located at a position corresponding to the front side of the sheet of FIG. 17, and is used for discharging oxidation gas that has passed through the gas passages of the second gas passage forming member 22, or oxidation off-gas, to the outside. As indicated by arrow P in FIG. 17, hydrogen gas is supplied from a hydrogen gas supply source to the first gas passage forming member 21 via the supply passage G1. The air is fed from an air supply source to the second gas passage forming member 22. This causes an electrochemical reaction in each power generation cell to generate power. Since humidifiers (not shown) humidify the hydrogen gas and the oxidation gas, the gases each contain humidifying water (water vapor). The aforementioned electrochemical reaction also produces water in the electrode layer 18 at the cathode side, the gas diffusion layer 20, and the second gas passage forming member 22. The generated water and the humidifying water form water droplets W1 in a portion of the power generation cell 12 at the cathode side. 
The oxidation off-gas flowing in the gas passages 22c of the gas passage forming member 22 sends the water droplets W1 to the exterior via the discharge passage. Some of the generated water seeps as seepage water through the solid electrolyte membrane 16 and flows into the electrode layer 17 at the anode side, the gas diffusion layer 19, and the gas passages 21c of the first gas passage forming member 21. The seepage water and the humidifying water form water droplets W in a portion of the power generation cell 12 at the anode side. The fuel off-gas flowing in the gas passages 21c of the gas passage forming member 21 introduces the water droplets W to the exterior through the discharge passage G2. Patent Document 1 discloses a power generation cell for a fuel cell having the structure shown in FIG. 17. FIG. 19 is a partial cross-sectional view showing a fuel cell disclosed in Patent Document 2. As illustrated in FIG. 19, the fuel cell has a cathode 49 and a separator 50 that are arranged in a stacked manner. The separator 50 includes a plurality of projections 50a projecting toward the cathode 49. The separator 50 and the cathode 49 form gas passages 52. A deformed member 51 is arranged around each of the projections 50a. Two ends of the deformed member 51 are bent toward the corresponding projection 50a in such a manner as to form obtuse angles R with respect to the cathode 49. As a result, each adjacent pair of walls determining the cross-sectional area of each gas passage 52 form an obtuse angle. This makes it difficult for water droplets to be accumulated in corners of the gas passages 52, thus improving drainage performance of the separator 50. Patent Document 1: Japanese Laid-Open Patent Publication No. 2007-87768 Patent Document 2: Japanese Laid-Open Patent Publication No. 2008-21523
0.638009049773755,
35.25,
20
] |
Since those early days of laptop production, Acer has grown to offer devices in every category. From netbooks to high-powered gaming units, there is likely to be a laptop computer made by Acer to fit the needs of just about any consumer. The handy features along with the cool looks make Acer a world leader among the up-and-coming brands, also making them a worthy competitor to a large number of established brands. Along with a cool style statement suitable for customers of different age groups, Acer products are available at affordable prices. Amazing features with attractive looks and an affordable price range make the Acer brand one of the best choices for laptop users. Even though Acer has created a successful name in the market due to its high-efficiency products, issues might still come up after a period of time, becoming a source of a lot of trouble for its users. If you are also facing any kind of issue with Acer laptops, there is no need to worry, because we have a team of skilled professionals who will resolve all your issues perfectly without consuming much of your precious time, and they can be contacted through Acer support.

Issues faced by Acer customers:
Black laptop screen issues
Stretched images on laptop screen issues
Laptop heating up followed by forced shut down
Windows update issues
Crashing issues
Windows 10 upgrade problems
Computer tune-up issues
BIOS update

Our services for Acer users:
Our world-class services can be guaranteed to customers once they call our Acer customer service number for assistance regarding Acer products. Any query or issue related to Acer laptops that you want resolved in the most efficient way possible will be dealt with by some of the greatest minds in this job, with the highest levels of experience. Contact our Acer technical support team for any assistance regarding issues concerning Acer laptops by calling our Acer customer support number.
Get all your trouble sorted out in a matter of few minutes by most courteous and experienced technicians. | High | [
0.6808510638297871,
34,
15.9375
] |
Winning at Special Bets

Special bets provide an opportunity to beat the bookies if you have knowledge about the real world. That's pretty much everyone. When I say special bets, I mean novelty and TV bets mainly. The place to have these is PaddyPower. Other bookies have novelty sections but I find Paddy offers more bets than anyone else. By offering loads of bets though, they open themselves up to people that know their stuff to clean up. Remember that: they price up many markets, the punter only has to bet on those where he thinks he's got an edge. With mainstream sports this is hard, but with special bets you can often find obvious judgement errors. The other thing is that PaddyPower are Irish and most specials are about the UK. Sometimes they haven't got the knowledge that someone living in the UK would have. A glaring one is the bets involving the BBC and football. It was years ago and I can't see it in my account, but the bet was something like most viewers for the World Cup Final (must have been in 2010, South Africa). BBC 1 was something like 1/4 and ITV about 7/2. Anyone from the UK knows that people watch football on the BBC due to the lack of adverts, plus Gary Lineker, Shearer and Hansen are better than Adrian Chiles. So betting on BBC 1 was just buying money effectively. Other things, they just have little knowledge in. They had a special bet on the top grossing film in 2012. Avengers had been released earlier in the year and was the 3rd highest grossing film of all time (only Avatar and Titanic grossed more). Still PaddyPower were offering 3.25 on Avengers being the top grossing film of the year. They had the Hobbit as the favourite. Absolutely crazy. So I hit the bet max button and put £91.59 on it. Later that year, Dark Knight Rises came out. They got caught up in the hype and were offering 1.83 on Avengers beating Dark Knight Rises. I said thank you very much and put another £100 on.
I would have put bet max again but it just seemed too good to be true and I still had my other bet running. That came in too. Such easy money. The Hobbit and Dark Knight Rises both grossed around $1.1 billion but Avengers did $1.5 billion, so not even close. I tried the same trick in 2013 and got burnt though. Iron Man 3 had grossed $1.2 billion and was at the time the 5th highest grossing film of all time. I thought that was buying money too but 'Frozen' pipped it at the end of the year, grossing $1.274 billion vs $1.215 billion. That was a bit of bad luck and not enough research. How was I supposed to know that a Disney cartoon released at the end of the year would come to be the 5th highest grossing film of all time? All the rest have been your summer blockbusters, when kids are off school and need things to do. Other good markets are the current affairs ones. Paddy's traders only have a few minutes to think about the bets whereas we can analyse them. A good one from last year was how many more nuclear tests North Korea would perform. They had just done one. Paddy had 1 as the favourite. As I have an interest in North Korea I thought the answer would be zero. They don't have much nuclear fuel so they aren't going to waste it on a test. They only did the test to get more money off South Korea and America. That came in. I like the politics ones too. I remember one bet at Hills on tuition fees. They had a bet on whether Vince Cable would vote for his own bill (for the introduction of tuition fees). I had just heard on the news that he'd confirmed that he would be voting for his own bill and William Hill had 'yes' at 1.4. Sometimes the traders are a bit behind. Buzzword bingo is a good bet if you follow politics. You know the themes of the speeches so you can have a good idea of what the speeches are going to contain. I don't have as much time for politics nowadays but when I did, I used to bet on that too.
Ladbrokes put "One nation" as a buzzword when that's what Cameron had been talking about leading up to his speech. Then there are the PR bets. PaddyPower were offering money back if Oscar Pistorius was found not guilty. (They actually made a joke of it: "Money back if he walks".) Odds on him being guilty were 1.25. Max bet was £20. So basically, PaddyPower were offering £5 if he was found guilty or zero if he was not guilty. Pretty much a no-brainer. They got a lot of stick for being distasteful so took it down, but I got on. Reality TV is good but that's not my thing. If you watch it though, you have the same idea as a bookie has about who is going to win. I've got a friend who makes money off reality TV regularly. It's not as easy as in the early days but it's still possible. Just get on early and trade out on Betfair for those ones. My absolute favourite special bet is on WWE. Yes, I mean World Wrestling Entertainment. It's not even a sport. It's more like a soap opera. Anyways, for a bit of fun, some bookies price up the odds. I don't even watch it but as it's fixed, the favourite is going to win. The key here is to bet on the favourite but get out if the odds drift. Someone knows the outcome already, so if the price isn't getting hammered something is wrong. I managed to get £100 on this at Corals. Pricing it up is one thing, but letting someone have £100 on an effectively rigged event is ridiculous. I have been burnt a few times too. Don't think it's easy. As I said earlier, even nailed-on dead certs can lose, but that's true with any sport. With special bets, they just happen far less if you do your research and bet on the right markets. So there you have it. You can beat the bookies at novelty bets if you look for an edge. They generally won't let you bet big but you can make money from them.
0.582978723404255,
34.25,
24.5
] |
Detroit Free Press Special Writer

A huge number of football recruits will be in East Lansing on Saturday to see Michigan State battle Michigan, including four official visitors for the class of 2014: wide receiver Jamil Kamara of Virginia Beach Bishop Sullivan, cornerback Kyle Gibson of Seffner (Fla.) Armwood, defensive back T.J. Harrell of Tampa Catholic and athlete Will Dawkins of Vero Beach, Fla. All four have been offered scholarships by MSU and are considered high-level recruits by rivals.com. The 6-foot-1, 202-pound Kamara is rated a four-star prospect and the No. 115 recruit in the nation. Gibson is a four-star recruit and the No. 186 prospect overall. Harrell, who is rated a three-star recruit, has more than 25 scholarship offers, including from Florida, Florida State and Georgia. Dawkins is rated a three-star prospect and holds offers from teams such as Ohio State, Georgia Tech, Rutgers and South Carolina. Also expected at Saturday’s game — on an unofficial visit — is All-America defensive lineman Malik McDowell of Southfield. McDowell, rated No. 40 in the nation, is the top uncommitted prospect in the Midwest and is looking hard at MSU, U-M, Alabama, LSU, Florida State, Ohio State and USC. McDowell plans on taking his official visits this winter. Wide receiver JayJay Pinckney of Sylvania (Ohio) Southview also will be on campus this weekend, though also on an unofficial visit. Pinckney is thought to be strongly favoring the Spartans over offers from Kentucky, Boston College, Louisville, Indiana, Pittsburgh, Cincinnati and more. He is rated a three-star recruit by rivals.com. Where the Spartans could really make a big splash Saturday is with prospects in the class of 2015. Led by recruiting coordinator Curtis Blackwell, the Spartans are anticipating possibly the most talented group of underclassmen ever assembled in East Lansing.
Over 100 players will enjoy a full day of activities, including meeting former Spartans greats, checking out the basketball team and eating plenty of southern-style BBQ before the game. Some players will not be able to attend due to conflicting playoff games, but many of the high-level 2015 prospects in Michigan will be in East Lansing. Blackwell was hired by coach Mark Dantonio to help identify young talent and get it to East Lansing early so that the staff can develop rapport with them. This weekend is a big part of that process. Matt Dorsey is a recruiting analyst for spartanmag.com and rivals.com. | Mid | [
0.6484375,
31.125,
16.875
] |
I will soon tell you more about this awesome trip!!!

Looking back at the fantastic days in the mountains, I will here give you pictures and a short overview of the climb:

23 June: packing and flying out to Base Camp at the glacier
24 June: waking up at 3am and marching to camp at 7800ft
25 June: snowy and windy climb up to our cache point
26 June: stuck in camp due to more than a meter of snow dumping that night. And fixing my broken tooth.
27 June: one more day in camp due to snow. Having fun snowshoeing and building a snowman.
28 June: moving up to camp at 11000ft, taking our cache up with us. Heavy load now!!!
29 June: the clouds are closing in. We are building snow walls and crossing fingers for better weather.
30 June: moving up our cache to Windy Corner at about 13000ft
1 July: moving up to camp at 14000ft
2 July: collecting cache at Windy Corner
3 July: moving up our cache to the 16000ft ridge
4 July: rest day in camp at 14000ft. Happy 4th of July!
5 July: moving up to camp at 17000ft
6 July: SUMMIT
7 July: staying at camp at 17000ft due to a snow storm
8 July: 18 hour climb from 17000ft to Base Camp
9 July: flying out

Happy days! While you are waiting, please have a look at all the pictures below.
0.677966101694915,
35,
16.625
] |
According to the Lake City Police, they have arrested a couple for fraud. Stevlein Stephon Bivins and Jasmine Cassandra Jones were arrested Friday, July 12, 2019, after police found out they had withdrawn $49,800.00 from other customers’ accounts. The withdrawals started back on June 14, 2019, and kept on going until a customer alerted authorities of possible fraudulent activity on their account. Officers reviewed surveillance video of multiple bank transactions between Bivins and Jones on different occasions. Jones is a VyStar employee. Officers went to Jones's home and no one was answering the door, but just as officers were leaving, Bivins was found in the parking lot and arrested for questioning. Bivins admitted that he was coming to visit Jones because she had called him and told him that police were knocking on her door. Jones eventually exited her apartment and was also arrested for questioning. Lake City Police Officers and VyStar Internal Affairs Officers were able to determine that Jones and her boyfriend, Bivins, had made the fraudulent transactions. Bivins would appear as the account holder, making transactions in Jones’ teller line while she was working at VyStar. According to police, Jones would withdraw the money and give the cash to Bivins. The two made seven transactions for a total of $49,800.00 on two different accounts. Both have been arrested.
0.47857142857142804,
33.5,
36.5
] |
Museum Blog Articles tagged Genetics: 20 It can be argued that crowdsourcing dates back to the early 1900s with the start of the Audubon Society’s Christmas Bird Count, now the longest running citizen science program. However, crowdsourcing was coined in 2006 by Jeff Howe of Wired magazine. He described it as the growing trend of everyday … The Genetics of Taste Lab was host to the fatty acid taste study from November 2014 to August 2015. In that time we enrolled 1020 Museum guests, ages 8-90, as part of the crowdsourced data collection. The study was a true success in both citizen science and crowdsourcing, AND now that the data hav… Your DNA genome is like a cookbook for your body. Just like a cookbook it has recipes in it. Your DNA genome recipes are called genes. Genes are like recipes, or instructions, for making something your body needs to survive and to be who you are! Just like our staff and citizen scientists in t… Citizen Science at the Bench: The Tas2R38-PROP (propylthiouracil) Association as a Model for Public Participation in Scientific Research I think we should ask the public to help shape the research agenda, help them figure out with us, what we ought to be studying so we can solve real world problems… Fatty Acids: The 6th Taste? “Small differences in DNA can determine whether or not you can taste a particular substance.” ~Research Participant Now Open! The community-based Genetics of Taste Lab at the Denver Museum of N… Madeline writes: I enjoyed your talk tonight at the Dairy Center! I am a recent graduate from American University (BA in psychology with minors in biology & music) and currently interning with a pharmacology group in Boulder. I am interested in learning more about your work down at the museum … Our Genetics of Taste research study was honored to be the inspiration for this AWESOME sci-fi film made by young film makers Will, Denisse, Elian and Sam from Trail Ridge Middle School in Longmont, Colorado. 
Their debut film, Attack of the Blue Tongued Zombies, takes you to the Genetics of… Thank you to Dr. Moehring for this thoughtful email, and for providing more information on lactose intolerance. We greatly value feedback from our members and online audiences, and are encouraged by the discourse the article, "Today's Paleo Diet" has inspired. Sincerely,Nicole ----- Dear Dr. Garn… Follow up to Catalyst Article: Today’s Paleo Diet (Original article can be found here, on page 12: http://apps.dmns.org/Catalyst/April-May2013/index.html#page/1) A special thanks to many readers who sent comments and questions about the article I wrote on the Paleo Diet. It is really reward… | Mid | [
0.611111111111111,
33,
21
] |
Q: Image getting cropped rather than resized

I want to resize an image, but actually it is getting cropped! Why?

html

<div class='data_block'>
<img src='https://www.blueskyexhibits.com/website/wp-content/uploads/sky-home.jpg' class='data_image'/>
<div class='data_title'><p>
<a href='article/".$row['ar_id']."'>gdfgdfgdfggdf</a></p>
</div>
<div class='data_desc'>
<p>dfgdfgdfgdf</p>
</div>
</div>

css

.data_image {
    width: 250px;
    height: 200px;
    border-bottom-left-radius: 20px;
}

The rest of the necessary CSS you will find here: jsfiddle

A: If you set height to auto then it doesn't get cropped, but it throws the description out a little, so you have to adjust the margin tops. I adjusted them to 18% and 15%.

Here is a fiddle:

.data_block {
background-color: #EFEFEF;
width: 670px;
height: 130px;
margin-top:10px;
margin-left: auto;
margin-right: auto;
border-bottom-left-radius: 10px;
border-top-right-radius: 10px;
overflow: hidden;
}
.data_image {
width: 250px;
height:auto;
border-bottom-left-radius: 20px;
}
.data_title a
{
font-size: 15px;
font-family: "Century Gothic";
font-weight: 600;
vertical-align: top;
float: right;
margin-top:-19%;
width:450px;
margin-right: auto;
margin-left:auto;
text-align: center;
text-decoration:none;
color:#2E84C2;
}
.data_title:hover a
{
color: #272727;
}
.data_desc {
font-size: 14px;
font-family: "Century Gothic";
text-align: center;
width:450px;
float:right;
margin-right:auto;
margin-left:auto;
margin-top: -15%;
} <div class='data_block'>
<img src='https://www.blueskyexhibits.com/website/wp-content/uploads/sky-home.jpg' class='data_image'/>
<div class='data_title'><p>
<a href='article/".$row['ar_id']."'>gdfgdfgdfggdf</a></p>
</div>
<div class='data_desc'>
<p>dfgdfgdfgdf</p>
</div>
</div> | Mid | [
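A side note not from the original answer: on browsers that support it, the standard `object-fit` property lets the image keep its fixed 250x200 box without being cropped or stretched, avoiding the margin juggling above. The class name matches the question's markup; treat this as an alternative sketch rather than the accepted fix.

```css
/* Alternative sketch: keep the fixed box, but let the browser scale
   the image to fit inside it instead of cropping or distorting it. */
.data_image {
    width: 250px;
    height: 200px;
    object-fit: contain;  /* or `cover` to fill the box, cropping edges */
    border-bottom-left-radius: 20px;
}
```

With `contain` the whole image is always visible (possibly letterboxed), so the surrounding `.data_title` and `.data_desc` margins from the accepted answer would not need the percentage tweaks.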
0.62135922330097,
32,
19.5
] |