The Underground Railroad was the network used by enslaved black Americans to obtain their freedom in the 30 years before the Civil War (1860-1865). When did the Underground Railroad begin and end? The Underground Railroad was formed in the early 19th century and reached its height between 1850 and 1860. What year does Underground Railroad take place? The Underground Railroad takes place around 1850, the year of the Fugitive Slave Act’s passage. It makes explicit mention of the draconian legislation, which sought to ensnare runaways who’d settled in free states and inflict harsh punishments on those who assisted escapees. How many years did the Underground Railroad operate? More recently the New York History Net has published a very interesing list of “all” persons and places connected with the Underground Railroad in New York. For the 240 years from the first African slave until 1860, slaves ran and some escaped to freedom. Does the Underground Railroad still exist? It includes four buildings, two of which were used by Harriet Tubman. Ashtabula County had over thirty known Underground Railroad stations, or safehouses, and many more conductors. Nearly two-thirds of those sites still stand today. Did the Underground Railroad really exist? ( Actual underground railroads did not exist until 1863.) According to John Rankin, “It was so called because they who took passage on it disappeared from public view as really as if they had gone into the ground. After the fugitive slaves entered a depot on that road no trace of them could be found. Who founded the Underground Railroad? In the early 1800s, Quaker abolitionist Isaac T. Hopper set up a network in Philadelphia that helped enslaved people on the run. How many episodes were there of the Underground Railroad? Colson Whitehead’s 2016 novel, The Underground Railroad, won a Pulitzer Prize and the National Book Award. Now, it’s a limited series directed by Academy Award-winner Barry Jenkins (Moonlight, If Beale Street Could Talk). In ten episodes, The Underground Railroad chronicles Cora Randall’s journey to escape slavery. Were there tunnels in the Underground Railroad? Contrary to popular belief, the Underground Railroad was not a series of underground tunnels. While some people did have secret rooms in their houses or carriages, the vast majority of the Underground Railroad involved people secretly helping people running away from slavery however they could. How many slaves were saved by the Underground Railroad? According to some estimates, between 1810 and 1850, the Underground Railroad helped to guide one hundred thousand enslaved people to freedom. Underground Railroad was a network of people, both black and white, who helped escaped enslaved persons from the southern United States by providing them with refuge and assistance. It came forth as a result of the convergence of numerous separate covert initiatives. Although the exact dates of its inception are unknown, it was active from the late 18th century until the Civil War, after which its attempts to weaken the Confederacy were carried out in a less-secretive manner until the Civil War ended. The Society of Friends (Quakers) is often regarded as the first organized group to actively assist escaped enslaved persons. In 1786, George Washington expressed dissatisfaction with Quakers for attempting to “liberate” one of his enslaved servants. Abolitionist and Quaker Isaac T. 
Hopper established a network in Philadelphia in the early 1800s to assist enslaved persons who were on the run from slavery. Abolitionist organisations founded by Quakers in North Carolina lay the basis for escape routes and safe havens for fugitive slaves during the same time period. What Was the Underground Railroad? According to historical records, the Quakers were the first organized organization to actively assist fugitive slaves. When Quakers attempted to “liberate” one of Washington’s enslaved employees in 1786, George Washington took exception to it. Abolitionist and Quaker Isaac T. Hopper established a network in Philadelphia in the early 1800s to assist enslaved persons who were fleeing their masters’ hands. Abolitionist societies founded by Quakers in North Carolina lay the basis for escape routes and safe havens for fugitives at the same time. How the Underground Railroad Worked The majority of enslaved persons aided by the Underground Railroad were able to flee to neighboring states like as Kentucky, Virginia, and Maryland. The Fugitive Slave Act of 1793 made catching fugitive enslaved persons a lucrative industry in the deep South, and there were fewer hiding places for them as a result of the Act. The majority of fugitive enslaved people were on their own until they reached specific places farther north. The escaping enslaved people were escorted by individuals known as “conductors.” Private residences, churches, and schools were also used as hiding places throughout the war. The personnel in charge of running them were referred to as “stationmasters.” There were several well-traveled roads that ran west through Ohio and into Indiana and Iowa. While some traveled north via Pennsylvania and into New England, or through Detroit on their route to Canada, others chose to travel south. The Little-Known Underground Railroad That Ran South to Mexico. Fugitive Slave Acts The Fugitive Slave Acts were a major cause for many fugitive slaves to flee to Canada. This legislation, which was passed in 1793, authorized local governments to catch and extradite fugitive enslaved individuals from inside the borders of free states back to their places of origin, as well as to penalize anybody who assisted the fleeing enslaved people. Personal Liberty Laws were introduced in certain northern states to fight this, but they were overturned by the Supreme Court in 1842. The Fugitive Slave Act of 1850 was intended to reinforce the preceding legislation, which was perceived by southern states to be insufficiently enforced at the time of passage. The northern states were still considered a danger zone for fugitives who had managed to flee. Some Underground Railroad operators chose to station themselves in Canada and sought to assist fugitives who were arriving to settle in the country. Harriet Tubman was the most well-known conductor of the Underground Railroad during its heyday. When she and two of her brothers fled from a farm in Maryland in 1849, she was given the name Harriet (her married name was Tubman). She was born Araminta Ross, and she was raised as Harriet Tubman. They returned a couple of weeks later, but Tubman fled on her own again shortly after, this time making her way to the state of Pennsylvania. In following years, Tubman returned to the plantation on a number of occasions to rescue family members and other individuals. 
Tubman was distraught until she had a vision of God, which led her to join the Underground Railroad and begin escorting other fugitive slaves to the Maryland state capital. In his house in Rochester, New York, former enslaved person and celebrated author Frederick Douglasshid fugitives who were assisting 400 escapees in their journey to freedom in Canada. Reverend Jermain Loguen, a former fugitive who lived in the adjacent city of Syracuse, assisted 1,500 escapees on their journey north. The Vigilance Committee was established in Philadelphia in 1838 by Robert Purvis, an escaped enslaved person who later became a trader. Josiah Henson, a former enslaved person and railroad operator, founded the Dawn Institute in Ontario in 1842 to assist fugitive slaves who made their way to Canada in learning the necessary skills to find work. Agent,” according to the document. John Parker was a free Black man living in Ohio who worked as a foundry owner and who used his rowboat to ferry fugitives over the Ohio River. William Still was a notable Philadelphia citizen who was born in New Jersey to runaway slaves parents who fled to Philadelphia as children. Who Ran the Underground Railroad? The vast majority of Underground Railroad operators were regular individuals, including farmers and business owners, as well as preachers and religious leaders. Some affluent individuals were active, including Gerrit Smith, a billionaire who stood for president on two separate occasions. Smith acquired a full family of enslaved people from Kentucky in 1841 and freed them from their captivity. Levi Coffin, a Quaker from North Carolina, is credited with being one of the first recorded individuals to assist escaped enslaved persons. Coffin stated that he had discovered their hiding spots and had sought them out in order to assist them in moving forward. Coffin eventually relocated to Indiana and then Ohio, where he continued to assist fugitive enslaved individuals no matter where he was. Abolitionist John Brown worked as a conductor on the Underground Railroad, and it was at this time that he founded the League of Gileadites, which was dedicated to assisting fleeing enslaved individuals in their journey to Canada. Abolitionist John Brown would go on to play a variety of roles during his life. His most well-known duty was conducting an assault on Harper’s Ferry in order to raise an armed army that would march into the deep south and free enslaved people at gunpoint. Ultimately, Brown’s forces were beaten, and he was executed for treason in 1859. - The year 1844, he formed a partnership with Vermont schoolteacher Delia Webster, and the two were jailed for assisting an escaped enslaved lady and her young daughter. - Charles Torrey was sentenced to six years in jail in Maryland for assisting an enslaved family in their attempt to flee through Virginia. - After being apprehended in 1844 while transporting a boatload of freed slaves from the Caribbean to the United States, Massachusetts sea captain Jonathan Walker was sentenced to prison for life. - John Fairfield of Virginia turned down the opportunity to assist in the rescue of enslaved individuals who had been left behind by their families as they made their way north. - He managed to elude capture twice. End of the Line Operation of the Underground Railroad came to an end in 1863, during the American Civil War. In actuality, its work was shifted aboveground as part of the Union’s overall campaign against the Confederate States of America. 
Once again, Harriet Tubman made a crucial contribution, organizing intelligence operations and fulfilling a leadership role in Union Army missions to rescue emancipated enslaved people. MORE INFORMATION CAN BE FOUND AT: Harriet Tubman Led a Daring Civil War Raid After the Underground Railroad Was Shut Down; Bound for Canaan: The Epic Story of the Underground Railroad, by Fergus Bordewich; Harriet Tubman: The Road to Freedom, by Catherine Clinton; Who Exactly Was in Charge of the Underground Railroad?, by Henry Louis Gates Jr.; The Little-Known History of the Underground Railroad in New York, from the Smithsonian Institution’s magazine; and The Dangerous Allure of the Underground Railroad. The Underground Railroad | The Underground Railroad, a vast network of people who helped fugitive slaves escape to the North and to Canada, was not run by any single organization or person. Rather, it consisted of many individuals – many whites but predominantly black – who knew only of the local efforts to aid fugitives and not of the overall operation. Still, it effectively moved hundreds of slaves northward each year – according to one estimate, the South lost 100,000 slaves between 1810 and 1850. An organized system to assist runaway slaves seems to have begun towards the end of the 18th century. In 1786 George Washington complained about how one of his runaway slaves was helped by a “society of Quakers, formed for such purposes.” The system grew, and around 1831 it was dubbed “The Underground Railroad,” after the then-emerging steam railroads. The system even borrowed terms from railroading: the homes and businesses where fugitives would rest and eat were called “stations” and “depots” and were run by “stationmasters,” those who contributed money or goods were “stockholders,” and the “conductor” was responsible for moving fugitives from one station to the next. For the slave, running away to the North was anything but easy. The first step was to escape from the slaveholder. For many slaves, this meant relying on his or her own resources. Sometimes a “conductor,” posing as a slave, would enter a plantation and then guide the runaways northward. The fugitives would move at night. They would generally travel between 10 and 20 miles to the next station, where they would rest and eat, hiding in barns and other out-of-the-way places. While they waited, a message would be sent to the next station to alert its stationmaster. The fugitives would also travel by train and boat – conveyances that sometimes had to be paid for. Money was also needed to improve the appearance of the runaways – a black man, woman, or child in tattered clothes would invariably attract suspicious eyes. This money was donated by individuals and also raised by various groups, including vigilance committees. Vigilance committees sprang up in the larger towns and cities of the North, most prominently in New York, Philadelphia, and Boston.
In addition to soliciting money, the organizations provided food, lodging and funds, and helped the fugitives settle into a community by helping them find jobs and providing letters of recommendation. The Underground Railroad had many notable participants, including John Fairfield in Ohio, the son of a slaveholding family, who made many daring rescues; Levi Coffin, a Quaker who assisted more than 3,000 slaves; and Harriet Tubman, who made 19 trips into the South and escorted over 300 slaves to freedom. | The term Underground Railroad describes the network of meeting spots, hidden routes, passages and safehouses used by slaves in the United States to escape slave-holding states and seek refuge in the northern states and Canada. The Underground Railroad, which was established in the early 1800s and sponsored by people active in the abolitionist movement, assisted thousands of slaves in their attempts to escape bondage. Between 1810 and 1850, it is estimated that 100,000 slaves escaped from bondage in the southern United States. Facts, information and articles about the Underground Railroad: it emerged around 1780 and operated until roughly 1862, early in the American Civil War; estimates of the number of people it helped range between 6,000 and 10,000; notable figures include Harriet Tubman, William Still, Levi Coffin and John Fairfield. The Story of How Canada Became the Final Station on the Underground Railroad. Harriet Tubman’s Legacy as a Freedom Fighter and a Spy. The Beginnings of the Underground Railroad Even before the nineteenth century, it appears that a mechanism to assist runaways existed. In 1786, George Washington expressed dissatisfaction with the assistance provided to one of his escaped slaves by “a society of Quakers, formed for such purposes.” The Religious Society of Friends, more commonly known as the Quakers, was among the first organized groups to embrace abolitionism, and their influence may have played a role in Pennsylvania, home to a large number of Quakers, becoming the first state to abolish slavery. In recognition of his contributions, Levi Coffin is often referred to as the “president of the Underground Railroad.” In Fountain City, Indiana, near the Ohio border, the eight-room home the Coffins bought and used as a “station” before they moved to Cincinnati has been preserved and is now a National Historic Landmark. “Eliza” was one of the slaves who hid within it, and her story served as the inspiration for the character of the same name in Harriet Beecher Stowe’s abolitionist classic Uncle Tom’s Cabin. The Underground Railroad Gets Its Name Owen Brown, the father of radical abolitionist John Brown, was involved in the Underground Railroad in the state of New York. An unconfirmed account holds that “Mammy Sally” marked the house where Abraham Lincoln’s future wife, Mary Todd Lincoln, grew up as a safe house where fugitives could receive food, but the story is doubtful. Routes of the Underground Railroad It was not until the early 1830s that the phrase “Underground Railroad” was first used. Fugitives traveling by water or on genuine trains were occasionally provided with clothing so that they wouldn’t give themselves away by wearing their worn-out work attire. Many of them continued on to Canada, where they could not be lawfully reclaimed by their owners.
The slave or slaves were forced to flee from their masters, which was frequently done at night. Conductors On The Railroad Abolitionist John Brown’s father, Owen Brown, was involved in the Underground Railroad movement in New York State during the abolitionist movement. An unconfirmed narrative suggests that “Mammy Sally” designated the house where Abraham Lincoln’s future wife, Mary Todd Lincoln, grew up and served as a safe haven where fugitives could obtain food, but the account is untrustworthy. Railway routes that run beneath the surface of the land. It was in the early 1830s when the name “Underground Railroad” first appeared. They were transported from one station to another by “conductors.” Money or products were donated to the Underground Railroad by its “stockholders.” Fugitives going by sea or on genuine trains were occasionally provided with clothing so that they wouldn’t be recognized if they were wearing their old job attire. Many of them continued on to Canada, where they could not be lawfully reclaimed by their families. To escape from their owners, the slave or slaves had to do it at night, which they did most of the time. It was imperative that the runaways maintain their eyes on the North Star at all times; by doing so, they were able to determine that they were heading north. The Civil War On The Horizon Events such as the Missouri Compromise and the Dred Scott decision compelled more anti-slavery activists to take an active part in the effort to liberate slaves in the United States. After Abraham Lincoln was elected president, Southern states began to secede in December 1860, putting an end to the Union’s hopes of achieving independence from the United States. Abolitionist newspapers and even some loud abolitionists warned against giving the remaining Southern states an excuse to separate. Lucia Bagbe (later known as Sara Lucy Bagby Johnson) is considered to be the final slave who was returned to bondage as a result of the Fugitive Slave Law. Her owner hunted her down and arrested her in December 1860. Even the Cleveland Leader, a Republican weekly that was traditionally anti-slavery and pro-the Fugitive Slave Legislation, warned its readers that allowing the law to run its course “may be oil thrown upon the seas of our nation’s difficulties,” according to the newspaper. In her honor, a Grand Jubilee was celebrated on May 6, 1863, in the city of Cleveland. The Reverse Underground Railroad A “reverse Underground Railroad” arose in the northern states surrounding the Ohio River during the Civil War. The black men and women of those states, whether or not they had previously been slaves, were occasionally kidnapped and concealed in homes, barns, and other structures until they could be transported to the South and sold as slaves. The True History Behind Amazon Prime’s ‘Underground Railroad’ If you want to know what this country is all about, I always say, you have to ride the rails,” the train’s conductor tells Cora, the fictitious protagonist of Colson Whitehead’s 2016 novelThe Underground Railroad, as she walks into a boxcar destined for the North. As you race through, take a look about you to see the genuine face of America.” Cora’s vision is limited to “just blackness, mile after mile,” according to Whitehead, as she peers through the carriage’s slats. In the course of her traumatic escape from servitude, the adolescent eventually understands that the conductor’s remark was “a joke. 
- Cora and Caesar, a young man enslaved on the same Georgia plantation as her, are on their way to liberation when they encounter a dark other world in which they use the railroad to go to freedom. - ” The Underground Railroad,” a ten-part limited series premiering this week on Amazon Prime Video, is directed by Moonlight filmmaker Barry Jenkins and is based on the renowned novel by Alfred North Whitehead. - When it comes to portraying slavery, Jenkins takes a similar approach to Whitehead’s in the series’ source material. - “And as a result, I believe their individuality has been preserved,” Jenkins says Felix. The consequences of their actions are being inflicted upon them.” Here’s all you need to know about the historical backdrop that informs both the novel and the streaming adaptation of “The Underground Railroad,” which will premiere on May 14th. (There will be spoilers for the novel ahead.) Did Colson Whitehead baseThe Underground Railroadon a true story? “The reality of things,” in Whitehead’s own words, is what he aims to portray in his work, not “the facts.” His characters are entirely made up, and the story of the book, while based on historical facts, is told in an episodic style, as is the case with most episodic fiction. This book traces Cora’s trek to freedom, describing her lengthy trip from Georgia to the Carolinas, Tennessee and Indiana.) Each step of the journey presents a fresh set of hazards that are beyond Cora’s control, and many of the people she meets suffer horrible ends.) What distinguishes The Underground Railroad from previous works on the subject is its presentation of the titular network as a physical rather than a figurative transportation mechanism. According to Whitehead, who spoke to NPR in 2016, this alteration was prompted by his “childhood belief” that the Underground Railroad was a “literal tunnel beneath the earth”—a misperception that is surprisingly widespread. Webber Public domain image courtesy of Wikimedia Commons While the Underground Railroad was composed of “local networks of anti-slavery people,” both Black and white, according to Pulitzer Prize–winning historianEric Foner, the Underground Railroad actually consisted of “local networks of anti-slavery people, both Black and white, who assisted fugitives in various ways,” from raising funds for the abolitionist cause to taking cases to court to concealing runaways in safe houses. Although the actual origins of the name are unknown, it was in widespread usage by the early 1840s. Manisha Sinha, author of The Slave’s Cause: A History of Abolition, argues that the Underground Railroad should be referred to as the “Abolitionist Underground” rather than the “Underground Railroad” because the people who ran it “were not just ordinary, well-meaning Northern white citizens, activists, particularly in the free Black community,” she says. As Foner points out, however, “the majority of the initiative, and the most of the danger, fell on the shoulders of African-Americans who were fleeing.” a portrait taken in 1894 of Harriet Jacobs, who managed to hide in an attic for nearly seven years after fleeing from slavery. Public domain image courtesy of Wikimedia Commons “Recognizable historical events and patterns,” according to Foner, are used by Whitehead in a way that is akin to that of the late Toni Morrison. According to Sinha, these effects may be seen throughout Cora’s journey. 
According to Foner, author of the 2015 bookGateway to Freedom: The Hidden History of the Underground Railroad, “the more you know about this history, the more you can appreciate what Whitehead is doing in fusing the past and the present, or perhaps fusing the history of slavery with what happened after the end of slavery.” What time period doesThe Underground Railroadcover? Caesar (Aaron Pierre) and Cora (Thuso Mbedu) believe they’ve discovered a safe haven in South Carolina, but their new companions’ behaviors are based on a belief in white supremacy, as seen by their deeds. Kyle Kaplan is a producer at Amazon Studios. The Underground Railroad takes place around the year 1850, which coincides with the adoption of the Fugitive Slave Act. Runaways who had landed in free states were targeted by severe regulations, and those who supported them were subjected to heavy punishments. In spite of the fact that it was intended to hinder the Underground Railroad, according to Foner and Sinha, the legislation actually galvanized—and radicalized—the abolitionist cause. “Every time the individual switches to a different condition, the novel restarts,” the author explains in his introduction. ” Cora’s journey to freedom is replete with allusions to pivotal moments in post-emancipation history, ranging from the Tuskegee Syphilis Study in the mid-20th century to white mob attacks on prosperous Black communities in places like Wilmington, North Carolina (targeted in 1898), and Tulsa, Oklahoma (targeted in 1898). According to Spencer Crew, former president of the National Underground Railroad Freedom Center and emeritus director of the Smithsonian’s National Museum of African American History and Culture, this “chronological jumble” serves as a reminder that “the abolition of slavery does not herald the abolition of racism and racial attacks.” This problem has survived in many forms, with similar effects on the African American community,” says the author. What real-life events doesThe Underground Railroaddramatize? In Whitehead’s envisioned South Carolina, abolitionists provide newly liberated people with education and work opportunities, at least on the surface of things. However, as Cora and Caesar quickly discover, their new companions’ conviction in white superiority is in stark contrast to their kind words. (Eugenicists and proponents of scientific racism frequently articulated opinions that were similar to those espoused by these fictitious characters in twentieth-century America.) An inebriated doctor, while conversing with a white barkeep who moonlights as an Underground Railroad conductor, discloses a plan for his African-American patients: I believe that with targeted sterilization, initially for the women, then later for both sexes, we might liberate them from their bonds without worry that they would slaughter us in our sleep. - “Controlled sterilization, research into communicable diseases, the perfecting of new surgical techniques on the socially unfit—was it any wonder that the best medical talents in the country were flocking to South Carolina?” the doctor continues. - The state joined the Union in 1859 and ended slavery inside its borders, but it specifically incorporated the exclusion of Black people from its borders into its state constitution, which was finally repealed in the 1920s. - In this image from the mid-20th century, a Tuskegee patient is getting his blood taken. 
- There is a ban on black people entering the state, and any who do so—including the numerous former slaves who lack the financial means to flee—are murdered in weekly public rituals. - The plot of land, which is owned by a free Black man called John Valentine, is home to a thriving community of runaways and free Black people who appear to coexist harmoniously with white residents on the property. - An enraged mob of white strangers destroys the farm on the eve of a final debate between the two sides, destroying it and slaughtering innocent onlookers. - There is a region of blackness in this new condition.” Approximately 300 people were killed when white Tulsans demolished the thriving Black enclave of Greenwood in 1921. - Public domain image courtesy of Wikimedia Commons According to an article published earlier this year by Tim Madigan for Smithsonianmagazine, a similar series of events took place in the Greenwood district of Tulsa, which was known locally as “Black Wall Street,” in June 1921. - Madigan pointed out that the slaughter was far from an isolated incident: “In the years preceding up to 1921, white mobs murdered African Americans on hundreds of instances in cities such as Chicago, Atlanta, Duluth, Charleston, and other places,” according to the article. In addition, Foner explains that “he’s presenting you the variety of options,” including “what freedom may actually entail, or are the constraints on freedom coming after slavery?” “It’s about. the legacy of slavery, and the way slavery has twisted the entire civilization,” says Foner of the film. How doesThe Underground Railroadreflect the lived experience of slavery? “How can I construct a psychologically plausible plantation?” Whitehead is said to have pondered himself while writing on the novel. According to theGuardian, the author decided to think about “people who have been tortured, brutalized, and dehumanized their whole lives” rather than depicting “a pop culture plantation where there’s one Uncle Tom and everyone is just incredibly nice to each other.” For the remainder of Whitehead’s statement, “Everyone will be battling for the one additional mouthful of food in the morning, fighting for the tiniest piece of property.” According to me, this makes sense: “If you put individuals together who have been raped and tortured, this is how they would behave.” Despite the fact that she was abandoned as a child by her mother, who appears to be the only enslaved person to successfully escape Ridgeway’s clutches, Cora lives in the Hob, a derelict building reserved for outcasts—”those who had been crippled by the overseers’ punishments,. who had been broken by the labor in ways you could see and in ways you couldn’t see, who had lost their wits,” as Whitehead describes Cora is played by Mbedu (center). With permission from Amazon Studios’ Atsushi Nishijima While attending a rare birthday party for an older enslaved man, Cora comes to the aid of an orphaned youngster who mistakenly spills some wine down the sleeve of their captor, prompting him to flee. Cora agrees to accompany Caesar on his journey to freedom a few weeks later, having been driven beyond the threshold of endurance by her punishment and the bleakness of her ongoing life as a slave. 
As a result, those who managed to flee faced the potential of severe punishment, he continues, “making it a perilous and risky option that individuals must choose with care.” By making Cora the central character of his novel, Whitehead addresses themes that especially plagued enslaved women, such as the fear of rape and the agony of carrying a child just to have the infant sold into captivity elsewhere. The account of Cora’s sexual assault in the novel is heartbreakingly concise, with the words “The Hob ladies stitched her up” serving as the final word. Although not every enslaved women was sexually assaulted or harassed, they were continuously under fear of being raped, mistreated, or harassed, according to the report. With permission from Amazon Studios’ Atsushi Nishijima The novelist’s account of the Underground Railroad, according to Sinha, “gets to the core of how this venture was both tremendously courageous and terribly perilous.” She believes that conductors and runaways “may be deceived at any time, in situations that they had little control over.” Cora, on the other hand, succinctly captures the liminal state of escapees. - “What a world it is. - “Was she free of bondage or still caught in its web?” “Being free had nothing to do with shackles or how much room you had,” Cora says. - The location seemed enormous despite its diminutive size. - In his words, “If you have to talk about the penalty, I’d prefer to see it off-screen.” “It’s possible that I’ve been reading this for far too long, and as a result, I’m deeply wounded by it. - view of it is that it feels a little bit superfluous to me. - In his own words, “I recognized that my job was going to be coupling the brutality with its psychological effects—not shying away from the visual representation of these things, but focusing on what it meant to the people.” “Can you tell me how they’re fighting back? History of the United States Based on a true story, this film Books Fiction about the American Civil War Racism SlaveryTelevision Videos That Should Be Watched What is the Underground Railroad? – Underground Railroad (U.S. National Park Service) Harvey Lindsley captured a shot of Harriet Tubman. THE CONGRESSIONAL LIBRARY I was the conductor of the Underground Railroad for eight years, and I can say what most conductors can’t say—I neverran my train off the track and I never lost a passenger. Photo by Harvey Lindsley of Harriet Tubman, 1860. CONGRESSIONAL LIBRARY The Secret History of the Underground Railroad Diseases and Peculiarities of the Negro Race was the title of a series published by De Bow’s Review, a leading Southern periodical, a decade before the Civil War. The series was deemed necessary by the editors because it had “direct and practical bearing” on 3 million people whose worth as property totaled approximately $2 billion. When it comes to African Americans’ supposed laziness (“deficiency of red blood in the pulmonary and arterial systems”), love of dancing (“profuse distribution of nervous matter to the stomach, liver, and genital organs”), and extreme aversion to being whipped (“skin. - However, it was Cartwright’s discovery of a previously undiscovered medical illness, which he coined “Drapetomania, or the sickness that causes Negroes to flee,” that grabbed the most attention from readers. 
- Despite the fact that only a few thousand individuals, at most, fled slavery each year—nearly all of them from states bordering the free North—their migration was seen by many Southern whites as a portent of a greater calamity. - How long do you think it will take until the entire cloth begins to unravel? - Rather, it was intentionally supported and helped by a well-organized network that was both large and diabolical in scope. - The word “Underground Railroad” brings up pictures of trapdoors, flickering lamps, and moonlit routes through the woods in the minds of most people today, just as it did in the minds of most Americans in the 1840s and 1850s. - At least until recently, scholars paid relatively little attention to the story, which is remarkable considering how prominent it is in the national consciousness. - The Underground Railroad was widely believed to be a statewide conspiracy with “conductors,” “agents,” and “depots,” but was it really a fiction of popular imagination conjured up from a succession of isolated, unconnected escapes? - Which historians you trust in will determine the solutions. One historian (white) questioned surviving abolitionists (most of whom were also white) a decade after the Civil War and documented a “great and complicated network” of agents, 3,211 of whom he identified by name, as well as a “great and intricate network” of agents (nearly all of them white). - “I escaped without the assistance. - “I have freed myself in the manner of a man.” In many cases, the Underground Railroad was not concealed at all. - The journal of a white New Yorker who assisted hundreds of runaway slaves in the 1850s was found by an undergraduate student in Foner’s department at Columbia University while working on her final thesis some years ago, and this discovery served as the inspiration for his current book. - One of the book’s most surprising revelations is that, according to the book’s subtitle, the Underground Railroad was not always secret at all. - The New York State Vigilance Committee, established in 1850, the year of the infamous Fugitive Slave Act, officially declared its objective to “welcome, with open arms, the panting fugitive.” Local newspapers published stories about Jermain W. Bazaars with the slogan “Buy for the sake of the slave” provided donated luxury items and handcrafted knickknacks just before the winter holidays, and bake sales in support of the Underground Railroad, no matter how unlikely it may seem, became popular fund-raisers in Northern towns and cities. - Political leaders, especially those who had taken vows to protect the Constitution — including the section ordering the return of runaways to their proper masters — blatantly failed to carry out their obligations. - Judge William Jay, a son of the first chief justice of the United States Supreme Court, made the decision to disregard fugitive slave laws and contributed money to aid runaway slaves who managed to flee. - One overlooked historical irony is that, up until the eve of Southern secession in 1860, states’ rights were cited as frequently by Northern abolitionists as they were by Southern slaveholders, a fact that is worth noting. - It was not recognized for its abolitionist passion, in contrast to places like as Boston and Philadelphia, which had deep-rooted reformer traditions—as well as communities in upstate New York such as Buffalo and Syracuse. 
Even before the city’s final bondsmen were released, in 1827, its economy had become deeply intertwined with that of the South, as evidenced by a gloating editorial in the De Bow newspaper, published shortly before the Civil War, claiming the city was “nearly as reliant on Southern slavery as Charleston.” New York banks lent money to plantation owners to acquire slaves, while New York merchants made their fortunes off the sale of slave-grown cotton and sugar. - Besides properly recapturing escapees, slave catchers prowled the streets of Manhattan, and they frequently illegally kidnapped free blacks—particularly children—in order to sell them into Southern bondage. - The story begins in 1846, when a man called George Kirk slipped away aboard a ship sailing from Savannah to New York, only to be discovered by the captain and shackled while awaiting return to his owner. - The successful fugitive was escorted out of court by a phalanx of local African Americans who were on the lookout for him. - In this case, the same court found other legal grounds on which to free Kirk, who rolled out triumphantly in a carriage and made his way to the safety of Boston in short order this time. - In addition to being descended from prominent Puritans, Sydney Howard Gay married a wealthy (and radical) Quaker heiress. - Co-conspirator Louis Napoleon, who is thought to be the freeborn son of a Jewish New Yorker and an African American slave, was employed as an office porter in Gay’s office. - Gay was the one who, between 1855 and 1856, maintained the “Record of Fugitives,” which the undergraduate discovered in the Columbia University archives and which chronicled more than 200 escapes. One first-person narrative starts, “I ate one meal a day for eight years.” “It has been sold three times, and it is expected to be sold a fourth time. Undoubtedly, a countrywide network existed, with its actions sometimes shrouded in secrecy. Its routes and timetables were continually changing as well. As with Gay and Napoleon’s collaboration, its operations frequently brought together people from all walks of life, including the affluent and the poor, black and white. Among others who decamped to Savannah were a light-skinned guy who set himself up in a first-class hotel, went around town in a magnificent new suit of clothes, and insouciantly purchased a steamship ticket to New York from Savannah. At the height of the Civil War, the number of such fugitives was still a small proportion of the overall population. It not only played a role in precipitating the political crisis of the 1850s, but it also galvanized millions of sympathetic white Northerners to join a noble fight against Southern slaveholders, whether they had personally assisted fugitive slaves, shopped at abolitionist bake sales, or simply enjoyed reading about slave escapes in books and newspapers. More than anything else, it trained millions of enslaved Americans to gain their freedom at a moment’s notice if necessary. Within a few months, a large number of Union soldiers and sailors successfully transformed themselves into Underground Railroad operatives in the heart of the South, sheltering fugitives who rushed in large numbers to the Yankees’ encampments to escape capture. Cartwright’s most horrific nightmares. On one of the Union’s railway lines, an abolitionist discovered that the volume of wartime traffic was at an all-time high—except on one of them. 
The number of solo travelers is quite limited.” And it’s possible that New Yorkers were surprised to open their eyes in early 1864. The accompanying essay, on the other hand, soon put their worries at ease. It proposed a plan to construct Manhattan’s first subway line, which would travel northward up Broadway from the Battery to Central Park. It was never built.
Snowflakes start off as ice crystals formed in clouds high above the Earth’s surface. If the temperature of a cloud is below freezing, its water droplets can freeze to form tiny ice crystals. As the crystals move through the air, they grow by the condensation of water on their surface and by collision with water droplets. They may also join with other ice crystals to form snowflakes. It is often said that no two snowflakes are identical. There are, however, four common types of snow crystals:
- Stellar crystals are star-shaped, flat and very intricate in detail. They form in clouds with temperatures between about -13°C and -18°C and can grow as big as a dime.
- Hexagonal plates form in clouds with temperatures between about -10°C and -13°C, and between about -18°C and -20°C.
- Hexagonal columns form in clouds with temperatures between -7°C and -10°C, and below -20°C.
- Needle crystals are long, slender and cylindrical. They usually form in clouds with high moisture content and temperatures warmer than -7°C.
Sometimes you will see snowflakes with irregular shapes. This happens when a crystal passes through clouds with different temperatures and moisture content: a plate-type crystal falling through clouds with temperature and moisture suited to dendrites, for example, will develop star-like extensions. Strong winds and/or turbulence, which cause crystals to collide and break, can also produce irregular crystals. The best way to ‘capture’ snowflakes is on a dark background such as construction paper, a dark snow jacket or a dark blanket. The key is to make sure that your chosen background is cold, or else the snowflakes will melt!
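To make the temperature ranges above easier to compare, here is a minimal sketch in Python that maps a cloud temperature to the crystal type the article associates with it. The function name and structure are illustrative only, the boundary values are taken directly from the ranges quoted above, and real crystal habit also depends on moisture content, which this sketch ignores.

```python
# Illustrative only: classify a cloud temperature (in Celsius) using the
# temperature ranges quoted in the article. Real crystal habit also depends
# on moisture/supersaturation, which is not modelled here.

def likely_crystal_type(cloud_temp_c: float) -> str:
    """Return the crystal type the article associates with this temperature."""
    if cloud_temp_c > -7:
        return "needle (long, slender, cylindrical)"
    elif cloud_temp_c >= -10:
        return "hexagonal column"
    elif cloud_temp_c >= -13:
        return "hexagonal plate"
    elif cloud_temp_c >= -18:
        return "stellar (star-shaped, flat)"
    elif cloud_temp_c >= -20:
        return "hexagonal plate"
    else:
        return "hexagonal column"  # below about -20 °C

if __name__ == "__main__":
    for t in (-5, -8, -12, -15, -19, -25):
        print(f"{t:>4} °C -> {likely_crystal_type(t)}")
```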
Giardia infection - including symptoms, treatment and prevention
Giardia infection is an infection of the bowel caused by the parasite Giardia duodenalis, also known as Giardia lamblia or Giardia intestinalis. This parasite is a single-celled organism and is found worldwide. Although it occurs in many animals including dogs, cats, sheep and cattle, there is still some uncertainty about the extent of disease transmission between people and animals.
How Giardia is spread
Spread takes place when hands, objects or food become contaminated with faeces of infected people or animals, or by drinking contaminated water. The parasites must be taken in by mouth to cause infection. In institutions and preschool centres, person-to-person transmission may be a significant means of spreading the illness. Transmission can occur with some sexual practices where there is contact with faecal matter. Re-infection can occur.
Signs and symptoms
- stomach cramps
- excessive gas or bloating
- diarrhoea, which may be watery, usually lasting 1 to several weeks
- frequent loose or pale, greasy faeces which may float in the toilet bowl
- weight loss
- lactose intolerance may occur in 20 to 40% of cases and last several weeks.
Fever and bloody diarrhoea are not usually seen with Giardia infections. Many infected people have no symptoms. The infection is diagnosed by examining the faeces under a microscope or by detecting Giardia in a faecal specimen using a PCR (polymerase chain reaction) test in a pathology laboratory.
Incubation period (time between becoming infected and developing symptoms)
3 to 25 days or longer (usually seven to 10 days).
Infectious period (time during which an infected person can infect others)
For as long as the organism is present in the faeces (often months), whether or not the person is ill. A person with diarrhoea is more likely to spread infection than a well person, but a person without symptoms is still potentially infectious to others. Treatment of an ill person with appropriate antibiotic medication relieves symptoms and usually makes the person non-infectious within a few days.
Prevention
- Exclude people with Giardia infection from childcare, preschool, school and work until there has been no diarrhoea for 24 hours. If working as a food handler in a food business, the exclusion period should be until there has been no diarrhoea or vomiting for 48 hours.
- Infants, children and adults with giardia infection should not swim until there has been no diarrhoea for 24 hours.
- Follow good hand washing procedures.
- Water suspected of contamination should be boiled before drinking.
- Babies and small children without diarrhoea who are not toilet trained should wear tight fitting waterproof pants or swimming nappies in swimming pools and be changed regularly in the change room. When faecal accidents occur, swimming pools should be properly disinfected.
- Treatment of infected people reduces spread.
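The exclusion rules above amount to a simple waiting-period calculation. The sketch below, in Python, is only an illustration of the stated periods (24 hours symptom-free in general, 48 hours for food handlers); every name in it is made up for the example, and actual return-to-work decisions follow local public health guidance.

```python
# Minimal sketch of the exclusion periods stated above; names are illustrative.
from datetime import datetime, timedelta

def earliest_return(last_symptom_time: datetime, food_handler: bool = False) -> datetime:
    """Earliest return to childcare, preschool, school or work:
    24 h after the last diarrhoea in general, or 48 h with no diarrhoea or
    vomiting for food handlers in a food business."""
    wait = timedelta(hours=48 if food_handler else 24)
    return last_symptom_time + wait

if __name__ == "__main__":
    last = datetime(2024, 3, 1, 18, 0)               # hypothetical last episode
    print(earliest_return(last))                     # 2024-03-02 18:00:00
    print(earliest_return(last, food_handler=True))  # 2024-03-03 18:00:00
```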
Three teams have developed artificial muscles that can lift 1000 times their own weight. They hope the new fibres could be used in prosthetic limbs, robots, exoskeletons, and even in clothing. All three teams have developed their muscles according to a similar principle: that a coiled-up substance can stretch like a muscle. The idea was developed by Ray Baughman and his colleagues at the University of Texas at Dallas, who found that twisting up even a simple material like sewing thread or fishing line can create a muscle-like structure that, for its size, can lift weights 100 times heavier than human muscle can manage. Now, Baughman’s team have developed stronger fibres, using similarly inexpensive materials. Bamboo or silk, for example, are twisted into a coil and coated with a sheath that can respond to heat or electrochemical changes, which can trigger the resulting muscle to contract and move. The team hope that their materials can be used in smart clothing that responds to the weather. In one experiment, they knitted the fibres into a textile that, as a result, responds to moisture by becoming more porous. “You could imagine such a textile could be more open or more insulating,” says Sameh Tawfick at the University of Illinois at Urbana-Champaign. Jinkai Yuan at the University of Bordeaux and his colleagues created their fibres using a polymer and graphene – a material stronger than diamond. Mehmet Kanik at Massachusetts Institute of Technology took a different approach. His team developed a material that coils spontaneously, like the tendrils of a cucumber plant. They tested the muscle in a miniature artificial bicep based on a human arm, which lifts a weight when heat is applied. Engineers still have some way to go to make artificial muscles as efficient as human ones. Currently, only around 3 per cent of the energy put into artificial muscles is used by the fibres, while the rest is lost as heat, says Tawfick. Once this problem has been cracked, he hopes that these artificial muscles, and others like them, could provide cheap and slimline alternatives to the bulky electric motors used to power many devices today. Journal reference: Science, DOI: 10.1126/science.aaw3722, 10.1126/science.aaw2403, 10.1126/science.aaw2502 Article amended on 17 July 2019 We have corrected Ray Baughman’s affiliation. More on these topics:
About this book
Historical Remarks Bearing on the Discovery of Pan paniscus
Whether by accident or by design, it was most fortunate that Robert M. Yerkes, the dean of American primatologists, should have been the first scientist to describe the characteristics of a pygmy chimpanzee, which he acquired in August 1923, when he purchased him and a young female companion from a dealer in New York. The chimpanzees came from somewhere in the eastern region of the Belgian Congo and Yerkes estimated the male's age at about 4 years. He called this young male Prince Chim (and named his female, common chimpanzee counterpart Panzee) (Fig. I). In his popular book, Almost Human, Yerkes (1925) states that in all his experiences as a student of animal behavior, "I have never met an animal the equal of this young chimp . . . in approach to physical perfection, alertness, adaptability, and agreeableness of disposition" (Yerkes, 1925, p. 244). Moreover, it would not be easy to find two infants more markedly different in bodily traits, temperament, intelligence, vocalization and their varied expressions in action, than Chim and Panzee. Here are just a few points of contrast. His eyes were black and in his dark face lacked contrast and seemed beady, cold, expressionless. Hers were brown, soft, and full of emotional value, chiefly because of their color and the contrast with her light complexion.
Keywords: adaptation, adaptive radiation, animal behavior, biology, classification, development, ecology, evolution, molecular biology, morphology, primates, systematics, tissue
Howard Reeves, USGS scientist and lead author on the assessment said, "While there is an abundance of water in the region, we may see local shortages or conflicts because water is not distributed evenly. In some areas, the physical quantity of water may be limiting, and water availability in most of the Great Lakes Basin will be determined by social decisions about impacts of new uses on existing users and the environment." Water availability in the Great Lakes Basin is a balance between storage of surface water and groundwater in the system, flows of water through the system, and existing, sometimes competing, human and ecological uses of water. Water use has a relatively minor effect on regional water availability, because of the large volume of water in storage, large annual flows and abundant, high quality groundwater. Development in the Great Lakes region also has had relatively little effect on basin-wide water availability, though surface-water diversions and pumping of groundwater have affected some flow patterns over large areas of the basin. Tim Eder, Great Lakes Commission Executive Director said, "This Great Lakes Basin study on water availability and use provides important information for restoration and protection of regional water resources and for guiding appropriate economic development of these resources. USGS information on consumptive water use also will be useful to the Great Lakes states and provinces to understand and estimate the cumulative impact of water use on regional water resources." Understanding the impact of climate variation on water use, lake levels, streamflow and groundwater levels was part of this five-year investigation. Results of the study will improve the ability to forecast the balance between water supply and demand for future economic and environmental uses. Reeves said, "The Great Lakes are a dynamic system responding primarily to short- and long-term variations in climate. Understanding the potential for local shortages or conflicts within this dynamic system is important for sound decision making. USGS water availability studies like this one examine water flow and storage in surface-water and groundwater systems and compile water-use information for the region. Studies are designed to quantify the effects of past development and examine the effects of future growth on flows and storage in the system. This type of comprehensive analysis shows how competing uses and demands interact over time across a region. Because most water-management decisions are made at the local level, this information is valuable for managers at state and local levels in making informed decisions regarding the potential effects of future water use on existing water users, aquatic ecosystems and the public. Access a release from USGS and link to a podcast, reports associated with the project, additional information on USGS water availability studies in the Great Lakes Basin, and the USGS Groundwater Resources Program (click here).
THE NCHE CIVIL RIGHTS CONFERENCE The Desegregation of South Carolina Colleges and Schools June 20-21, 2013 Join NCHE and South Carolina State University as we look back on the historic events of the Civil Rights struggle and look forward to imparting those lessons to future generations. The post-World War II movement for civil rights was a multifaceted, multi-directed movement. Civil rights organizations specifically targeted the right to vote, desegregation of public facilities, desegregation of public education, equal protection under the law, an end to job discrimination, fair and equal housing and an end to violence towards African Americans. Each of the objectives or a combination of them was the focus of governmental policies, judicial decisions, new legislation or direct action on the part of civil rights organizations. Since many persons believed that improved educational opportunity was the most important means to improving the status of African Americans, leaders placed greater emphasis on it. The disparities between black and white education were glaring. In addition to having fewer schools, black schools in general were poorly funded, physically dilapidated, paid teachers less and did not update antiquated resources. Many civil rights activists believed that no equity could be achieved in a segregated environment, so desegregation of public schools became a major facet of the Civil Rights Movement. In the 1954 Brown v Board of Education decision, the Supreme Court ruled that segregated schools were unconstitutional. In its subsequent Brown II decision, it mandated that desegregation must proceed at “all deliberate speed”. Reaction to the decision was met with some sense of jubilation by proponents of desegregation, while opponents throughout the South pledged to fight the decision. Throughout the South, opposition to desegregation took the forms of administrative delays, court challenges, private school movements, violence and death. While South Carolinians opposed school desegregation, resistance in the state was relative mild in comparison to her sister southern states.
According to recent research, algal blooms in the world’s lakes could jump 20% over the next century due to warming waters. Scientists explained that this could lead to more dead zones within lakes, caused by a lack of oxygen, and put local ecosystems in peril. Researchers found that lakes are among the environments seeing the most extreme temperature increases. Of 235 surveyed lakes, over 50 percent saw average temperatures increase 0.61 degrees F (0.34 degrees C) every ten years. Although the change may seem minor, the study authors noted that it is significantly bigger than the changes observed in the air or the world’s oceans. This is why the recently reported changes in lakes could have long-term consequences for animals, for human drinking supplies and for food stocks. Animals could find it hard to survive in oxygen-deprived lakes as more dead zones emerge. Catherine O’Reilly, lead author of the study and a researcher at Illinois State University, explained that the findings suggest the world’s lakes are under a lot of stress. She is concerned that problems observed in just some locations could soon become widespread. The study, which was sponsored by the National Science Foundation and NASA, was recently unveiled at the annual meeting of the American Geophysical Union. Researchers based their study on satellite and ground temperature data collected over the last two decades. The research revealed a warming trend in the world’s major lakes, including Lake Baikal in Russia, Lake Tahoe, and the Dead Sea. But the most sudden changes were observed in cold, deep lakes. For instance, Lake Superior, the coolest and deepest of the Great Lakes, saw its average temperature rise about three times faster than the global average in recent years. Air temperature is not the only factor warming lakes. Many lakes, especially in cold regions, are losing winter ice faster, while others are no longer blanketed by clouds and so are directly exposed to solar heat. Warm water is a welcoming environment for algae. This may explain why the study tied warming lakes to a 20 percent jump in algal blooms and a 5 percent rise in toxic algal blooms. Toxic algae are dangerous not only to fish but to humans, too. For example, more than a year ago one such bloom deprived 400,000 Ohio residents of tap water for a couple of days. Nevertheless, there are other reasons why algae choke lakes and waterways. An oversupply of nutrients caused by nitrogen and phosphorus pollution and waste water also helps them thrive.
There are two mysterious words used over and over by experienced pipers and drummers: pointed and round. Each of the five styles we play is either pointed or round--some are really pointed, others are more round depending on the wants and desires of the individual player or pipe band leadership.
- Marches and some reels: Played fairly pointed (except in certain spots where they should be played with "extreme pointy-ness")
- Round Reels/Hornpipes: Played round--no exceptions
- Jigs: Played round--no exceptions
- 6/8 and 9/8 Marches: Played pointed and, in some cases, with "extreme pointy-ness"
- Strathspeys: Played very pointed and, in some cases, extremely so
So, what do these two words mean? To understand "round" and "pointed" one must first understand the concept of the beat. The beat is not a single moment in time. It is not a foot tap or a click of the metronome. Instead, it is helpful to think of the beat as a "box of time". Notes are placed into this box of time, and the location/placement of those notes determines the rhythms that must be played. Here's a visual to help with the whole "box of time" idea. This is a graphic representation of one bar of 4/4 time. There are four beats in 4/4 and therefore four "boxes" of time, with a beat number at the beginning of each box. Think of a Trumpet... Most people reading this blog post are drummers, but let's start our explanation of round vs pointed with an example of how wind instruments deal with beats. A wind instrument produces a sustained sound. Think of a trumpet; a trumpet can play a short note or a long note. Let's examine what these beat boxes would look like if a trumpet filled them with two notes of equal length. The large stretched-out ovals represent the sounds of the trumpet notes, and the blue arrows indicate the two halves of the beat and point to the counting underneath. The trumpet notes are of equal length, and this is what is known in the pipe band world as round. Notes become pointed when the first note is held longer, thereby forcing the second note to become shorter. Now the time intervals between notes are uneven (long short long short) as in the example below: Because there is limited space in the "beat box", as the first note expands in length, the second note must shrink. If the second note is not cut short the correct amount, it can spill over into the next beat and the tempo of the bar will be affected (this should not happen!). Never Mind the Trumpet, What About Drums? As drummers, the only sustained sound we can produce is a roll. The majority of our music, however, contains short "taps" that only last a fraction of a second. For this reason, our sense of time and note length must be excellent to prevent the overall tempo of the music from being negatively affected. Here's an example of what the notes of a drum look like inside the "beat box": The example above shows "round" notes. Notice that the notes occur at regular intervals and the space between each note is the same. If we want to "point" these notes we need to increase the amount of space between the first and second notes of each beat. Similarly, we will have to decrease the amount of space between the second and third notes. This "reorganizing of the space" between notes should continue throughout the entire drum score. As the second note in each "beat box" gets pushed over, the appearance of the notes in the "beat box" changes.
In these examples, the blue arrows show the location of the notes relative to the counting below: These notes are now considered to be pointed. But wait! Sometimes a request comes in from the pipe major to make the notes even MORE pointed! All this means is that the second note in each "beat box" gets pushed even more toward the next box. The space between the first and second notes increases and an "extreme pointy-ness" is achieved. This extreme pointing of notes can happen occasionally in the march style but much more often in the 6/8 march and strathspey styles. Put in the context of the beat box, it would look something like this: The degree to which the second notes are shoved from round to pointed depends solely on the musical taste and discretion of the musical leaders of the pipe band. The best grade one bands can achieve extreme pointing without affecting the tempo--a skill that takes many years of practice. The most important thing is for a lead drummer to collaborate and rehearse with the pipe major so that the degree of pointing is agreed upon.
A Note About "Swing"
Some people use the term "swing" to describe round styles: "We've got to get that jig to swing!" In musical circles outside the pipe band world, "swing" refers to degrees of what we know of as "pointing". Extremely pointed music is said to "swing hard" in the jazz idiom, and round playing is said to be "straight". The only styles that should "swing" in the pipe band idiom are marches, reels, hornpipes, strathspeys and 6/8 marches. When I have heard the word "swing" used in reference to a round style, I take it to mean "groove" or "pocket playing" that results from offbeat syncopation.
Round vs Pointed in the Strathspey Style
The following is a video of one part of a strathspey. In the first example, the notes are played pointed, but the pointing only goes as far as a triplet feel--this would be considered too round a musical performance for most judges. The strathspey style is one where extreme pointing is encouraged, and in the second example you can hear how the sound of the drum score changes when it is played with a more pointed dotted eighth-note/sixteenth-note feel. All of a sudden there is more life in the score, more bounce and more energy. Don't get discouraged if you can't hear the difference between pointed and round playing at first. Developing the facility to play several degrees of pointing takes many years and a relentless attention to detail. The best thing I ever did to improve my knowledge on the subject was to bring in clinicians who are masters of this concept. I asked many questions and practiced playing different degrees of pointedness on my own. Ask questions, work hard and you'll get there!
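For readers who think in numbers, here is a minimal sketch (not part of the original post) that treats each "beat box" as one unit of time and prints where the two notes land for a round split, a triplet feel, and a dotted, more pointed split. The ratio values are illustrative assumptions, not prescribed pipe-band values.

```python
# Model each "beat box" as one unit of time holding two notes.
# "split" is the fraction of the box given to the first note:
# 0.50 -> round, 2/3 -> triplet feel, 0.75 -> dotted (more pointed).

def beat_onsets(num_beats, split):
    """Return (onset, duration) pairs for two notes per beat."""
    events = []
    for beat in range(num_beats):
        first_len = split
        second_len = 1.0 - split                       # second note shrinks as the first grows
        events.append((beat, first_len))               # first note starts on the beat
        events.append((beat + first_len, second_len))  # second note is pushed later in the box
    return events

for label, split in [("round", 0.50), ("triplet feel", 2 / 3), ("pointed (dotted)", 0.75)]:
    onsets = [round(t, 2) for t, _ in beat_onsets(2, split)]
    print(f"{label:18} onsets: {onsets}")
```

Note that the two durations always add up to exactly one box, which is the point made above: if the second note is not shortened by the same amount the first is lengthened, it spills into the next beat and the tempo drifts.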
“Living Green” means finding ways to reduce your negative impact on the environment and increase your positive impact — helping to conserve resources and preserve our world for future generations. It also means looking for ways to share the world with everything, including plants and animals, and being a better neighbor in the natural world. It’s not easy being green, but it is possible!
Purchase Sustainable Seafood
Easy Conservation Tips YOU Can Do at Home
- Close your curtains during the day to keep heat out.
- Adjust your thermostat. By increasing the thermostat by one degree in the summer and decreasing it one degree in the winter, you can cut ten percent off of your electricity bill.
- Replace light bulbs with compact fluorescent lights. They use up to 75 percent less electricity.
- Turn off computers, printers, copiers, etc. when not in use.
- Put lids on pans while cooking.
- Only wash full loads in the washing machine and dishwasher and avoid using the dry cycle on the dishwasher.
- Use a slow cooker – it doesn’t add heat to the house while cooking.
- Newer models of washing machines, and most detergents, can effectively clean clothes in cold water.
- Use an outdoor barbeque grill instead of the oven during warm months.
- Turn your water heater thermostat down to 120 degrees.
- Purchase products that you know are made by green companies.
- Try solar power. The panels pay for themselves in 10 – 15 years and have an operating life of 25 years or more. Regulatory and financial incentives may also be available, depending on the region.
Reduce and Reuse
- Bring reusable bags to the grocery store instead of using new plastic or paper bags.
- When you can, purchase things in bulk and with minimal packaging.
- Find ways to reuse containers (e.g. storage, art projects).
- Reduce water and electricity usage in your home.
- Use reusable containers to pack lunches instead of plastic baggies.
- Shop locally. Buy food and products in your community to reduce pollution caused by shipping products by truck, ship or plane. This also helps to build your local economy.
Recycle
- Find out what your city will accept.
- Donate old clothing, furniture and toys to charity.
- Compost the materials you can.
- Drop off your old cell phone and used printer cartridges in the Zoo’s Guest Services Lobby and the Zoo will receive money to go toward our conservation efforts.
- Most recyclable items used on a daily basis, including paper, glass, aluminum and cardboard, can be added to the recycle bin instead of the trash can.
Reach out to Your Community
- Organize a clean-up day.
- If you live in a neighborhood that isn’t required to recycle, make sure your neighbors know how and where to recycle.
- Contact your local Waste Management Facility Hotline to locate recyclable drop-off sites.
- Set a positive example for your friends and family.
- Ride Share – start carpools with co-workers or parents for school and activities.
Purchase Ink Cartridges
Did you know you can purchase inkjet cartridges and benefit the non-profit Phoenix Zoo? Save up to 70 percent on printer cartridges at zoocartridges.com and make a donation to the Zoo in the process. The Zoo receives $2 – $5 per cartridge. Cartridges are shipped FREE for orders totaling more than $50. For more information contact [email protected] The Zoo also accepts used inkjet cartridges and will recycle them for you. Simply bring them to the Guest Service Lobby located at the Zoo entrance and we will do the rest.
On this day, Union troops secure a crucial pass during the Atlanta campaign. In the spring and summer of 1864, Union General William T. Sherman and Confederate General Joseph Johnston conducted a slow and methodical campaign to seize control of Atlanta. Pushing southeast from Chattanooga, Tennessee, toward Atlanta, Sherman continually tried to flank Johnston, but Johnston countered each move. On May 3, 1864, two of Sherman’s corps moved against Confederate defenses at Dalton, Georgia, while another Yankee force under James McPherson swung wide to the south and west of Dalton in an attempt to approach Johnston from the rear. It was along this path that McPherson captured Snake Creek Gap, a crucial opening in a long elevation called Rocky Face Ridge. Seizure of the strategic pass was a brilliant Union move, as Rocky Face Ridge served as a key geographic feature for Johnston and his army. It was a barrier against Sherman’s army that could neutralize the superior numbers of Federal troops. When the Yankees captured the gap, Johnston had to pull his men much further south, where the terrain did not offer such advantages. However, securing Snake Creek Gap also resulted in the Union missing another opportunity. McPherson had a chance to cut directly into the Confederate rear but encountered what he judged to be strong Rebel defenses at Resaca. Union troops reached the Western and Atlantic Railroad, Johnston’s supply line, but they did not have adequate numbers to hold the railroad, and did not have enough time to cut the line. McPherson halted his advance on Resaca and fell back to the mouth of Snake Creek Gap, causing Sherman to complain for years afterward that McPherson was timid and had lost the chance to rout the Confederates. The campaign would eventually be successful, but the failure to secure or destroy the Confederate supply line prolonged the campaign, possibly by months.
from The American Heritage® Dictionary of the English Language, 4th Edition
- n. A simplified form of speech that is usually a mixture of two or more languages, has a rudimentary grammar and vocabulary, is used for communication between groups speaking different languages, and is not spoken as a first or native language. Also called contact language.
from Wiktionary, Creative Commons Attribution/Share-Alike License
- n. an amalgamation of two disparate languages, used by two populations having no common language as a lingua franca to communicate with each other, lacking formalized grammar and having a small, utilitarian vocabulary and no native speakers.
from The Century Dictionary and Cyclopedia
- n. Business; affair; thing.
from WordNet 3.0 Copyright 2006 by Princeton University. All rights reserved.
- n. an artificial language used for trade between speakers of different languages
Examples:
- "no got pidgin," and _pidgin English_ simply means a workable knowledge of colloquial English as picked up by tradesmen, servants and coolies, in contradistinction to English as taught in the schools.
- My Mr. told me that the key to pidgin is to pick out the English words and try to make sense of them.
- We gave them worm tablets and would ask them politely, in pidgin English, to collect their fecal matter in buckets for us.
- The story of a Korean uprising told in pidgin poetry.
- A pidgin is what you get when you throw people together who have no common language and gramatically its kind of a mess.
- It is difficult to defend an American president who speaks in pidgin-English.
- Anglo used to enroll them in informal classes to learn Fanagolo, a century-old, 200-word pidgin language that was created decades ago so that mining bosses could order illiterate miners to perform basic tasks.
- A pidgin is a simplified language used to help two groups with distinct languages communicate with each other.
- A pidgin is a mashup of two languages A and B used when a speaker of A and a speaker of B want/need to communicate.
- A pidgin is a simplified language that develops when groups of adult speakers without a common language come into prolonged contact.
What Is a Percentage? (Mathematics Lesson)
What Is a Percentage? A percentage is a part of a whole. It expresses a part of a whole number as parts out of 100. A percentage is shown by the symbol % (said as "percent").
Dictionary Definition: The Merriam-Webster dictionary defines a percentage as "a part of a whole expressed in hundredths."
A Real Example of a Percentage: A percentage is written as a number followed by the percentage symbol %. This is 20% (said as "20 percent"). It means 20 parts out of 100.
A Real Example of What a Percentage Means: Imagine you were given 20% in a mathematics exam. You would have got 20 out of every 100 marks.
- If there were 100 marks, you would have got 20 marks.
- If there were 200 marks, you would have got 40 marks.
- If there were 50 marks, you would have got 10 marks.
Visualizing Percentages: A percentage shows how many parts out of 100. A grid with 100 squares is a useful way of visualizing percentages. In the grid below, 20 out of 100 squares are colored blue. 20% of the squares are blue.
What's in a Name? "Per cent" comes from the Latin "per centum", meaning "by the hundred". The word "cent" appears in many contexts. In currencies, a hundredth of a US dollar is a cent. A hundredth of a Euro is also a cent. A hundred years is called a century, and a hundred-year anniversary is a centennial.
Other Types of Fractional Numbers: Percentages express a part of a whole. There are other ways of expressing parts of a whole:
- A fraction: 1/2
- A decimal: 0.5
- A negative exponent: 2^-1
- A percentage: 50%
- A ratio: 1:2
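The exam example can also be checked with a couple of lines of code; the short sketch below (added here for illustration) simply applies "parts per hundred" to the mark totals used above.

```python
# A percentage is "parts per hundred", so p% of any total scales linearly.

def percentage_of(percent, whole):
    """Return percent% of whole, e.g. 20% of 200 marks."""
    return percent / 100 * whole

for total_marks in (100, 200, 50):
    print(f"20% of {total_marks} marks = {percentage_of(20, total_marks):g} marks")
# 20% of 100 marks = 20 marks
# 20% of 200 marks = 40 marks
# 20% of 50 marks = 10 marks
```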
What is Mimicry? Mimicry is one of several anti-predatory devices found in nature. Specifically, it is a situation in which one species, called the mimic, resembles in color, form, and/or behavior another species, called the model. In doing so, the mimic acquires some survival advantage.
Terms to Know
Mimic: the species that takes on the appearance of another species.
Model: the species that is mimicked.
Palatable: sufficiently agreeable in flavor to be eaten.
Unpalatable: not suitable for food.
Camouflage: to conceal by the use of a disguise that blends in with the surrounding environment.
Warning Coloration: obvious, recognizable coloration or markings of an animal that serve to warn off potential predators.
Types of Mimicry
There are two basic forms of mimicry:
1. Batesian - the mimic (palatable) resembles the model (unpalatable) and only the mimic benefits.
2. Mullerian - both the mimic and the model are unpalatable and both benefit.
Batesian mimicry is most effective when the mimic is rare and its emergence follows that of the model. In Mullerian mimicry, as density increases so does the adaptive value. Since mimicry provides potential survival value, the mimic with an adaptation that increases the likelihood of surviving is selected. Natural selection of these favorable variations has led to the coevolution of many species. The distinction among camouflage, warning coloration, and mimicry is not always clear. Mimicry, as opposed to camouflage and warning coloration, is specifically the resemblance between two organisms. The same techniques of deception are sometimes utilized in all three anti-predatory devices. These include variations in color, pattern, and structure.
Examples of Mimicry
The harmless robber fly (right) resembles the bumblebee (left) even though the two are not closely related. The robber fly is a dipteran, with only a single pair of wings, while the bumblebee is a hymenopteran with two pairs. The viceroy butterfly (bottom) contains no toxic substances in its body and presumably is quite palatable (one entomologist declared it tastes like dried toast). If so, the viceroy's striking resemblance to the monarch (top) enables it to capitalize on the monarch's unpalatability. (Photos courtesy of Tom Eisner.)
A sandbox allows software to be executed with less risk to the operating system. Sandboxes are often used to execute untested code of dubious origin. We talked about Advanced Persistent Threats, or APTs, in an older article, and we mentioned that we use sandbox mode to check them. The term sandbox is also used in a broader sense to refer to a test environment for software or websites; here we are not talking about every kind of sandbox environment, only those related to security.
Sandbox in Computer Security: Features
In the area of decision support, beyond testing software, a sandbox can also be used to test data, in order to assess its quality and potential uses before integrating it into the production warehouse, and to impose various operating constraints. A sandbox typically provides a set of resources within a controlled environment in which to execute code (e.g. temporary storage on the environment's hard drive). Access to networks, the ability to inspect the host system, and the use of devices are usually disabled or severely restricted. In this context, a sandbox is a particular example of virtualization. A sandbox also provides a dedicated space for learning and innovation around decision-making. It is common to multiply these private spaces, tailored to the end user or computer, to test data, loading tools, reporting tools, or prototype applications and services. However, from an infrastructure point of view, and in order to avoid problems with the proliferation of data attacks, it is recommended to deploy the sandbox on the same platform as the production data warehouse, in a specially isolated area of the database.
Examples of Sandbox in Computer Security
- Applets are programs that run on a virtual machine or through an interpreter for a scripting language that provides sandboxing. This technique is common in web browsers, which run applets embedded in potentially hostile web pages.
- Virtual machines emulate a host on which a complete operating system can run. This operating system is in a sandbox, in the sense that it does not run natively on the host machine and cannot affect it except through the emulator or shared resources (such as disk space).
- Capability systems can be seen as sandboxing mechanisms in which programs are given the ability to perform specific tasks based on the privileges they hold.
- Isolation is a particular type of resource-usage limitation applied to programs in case of a problem, such as a bug or malicious activity.
- A sandbox can also offer the possibility of integrating new data, in addition to the data managed by the existing decision-support system, in order to enable analytical approaches from simple to complex across the company's permanent or temporary decision-support systems, and to facilitate the prototyping of BI applications to test design choices.
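As a concrete illustration of the resource-restriction idea mentioned above, here is a minimal, hedged sketch using only the Python standard library. It limits the CPU time and memory of a child process on a Unix system; it is not a complete sandbox (it provides no filesystem, network, or device isolation), and the script name "untrusted.py" is a placeholder.

```python
# Minimal sketch of one sandboxing ingredient: resource limits on a child process.
# Unix-only. This restricts CPU time and address space but is NOT a full sandbox.
import resource
import subprocess
import sys

def limit_resources():
    # At most 2 seconds of CPU time and roughly 256 MB of address space.
    resource.setrlimit(resource.RLIMIT_CPU, (2, 2))
    resource.setrlimit(resource.RLIMIT_AS, (256 * 1024 * 1024, 256 * 1024 * 1024))

try:
    result = subprocess.run(
        [sys.executable, "untrusted.py"],  # placeholder for the untrusted program
        preexec_fn=limit_resources,        # applied in the child before it starts
        capture_output=True,
        text=True,
        timeout=10,                        # wall-clock safety net
    )
    print("exit code:", result.returncode)
except subprocess.TimeoutExpired:
    print("child exceeded the wall-clock limit and was killed")
```

A real deployment would add isolation of the filesystem, network, and devices, which is exactly what the virtual-machine and capability approaches listed above provide.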
Describes the features of arrays and the ways in which they are implemented. All arrays have an array buffer allocated from the heap. The implementation of this buffer and the way it is used depend on the specific type of array. In arrays of elements which all have the same length, the elements are contained within the array buffer itself. In arrays of elements with varying length, each element is contained within its own heap cell and the array buffer contains pointers to the elements. In packed arrays, the elements are contained within the array buffer. A packed array is an array of elements of varying length where the length information for an element precedes that element within the array buffer. Logically, an array buffer is linear but, physically, it can be organised either as a flat buffer or a segmented buffer. In general, you can choose between array classes which use a flat buffer and array classes which use a segmented buffer. The choice depends on how the array is to be used. A segmented array buffer is implemented using a CBufSeg object. A flat array buffer is implemented in one of two ways. The first type is a simple and efficient implementation but has some restrictions on the size of an array element and is limited to holding elements which have the same length. The second is a more general implementation and is only limited by the available memory. Copyright ©2010 Nokia Corporation and/or its subsidiary(-ies). All rights reserved. Unless otherwise stated, these materials are provided under the terms of the Eclipse Public License v1.0.
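The trade-off between the two buffer organisations can be sketched in a few lines of code. The classes below are illustrative only; they are not the CBufSeg or flat-buffer implementations described above, but they show why a flat buffer gives simple offset arithmetic at the cost of copying when it grows, while a segmented buffer grows by adding segments without moving existing data, at the cost of walking its segments on access.

```python
# Conceptual sketch only: a flat buffer (one contiguous block) versus a
# segmented buffer (chain of fixed-size segments).

class FlatBuffer:
    """All bytes live in one contiguous block; growth may move the whole block."""
    def __init__(self):
        self.data = bytearray()
    def append(self, chunk: bytes):
        self.data += chunk                           # may reallocate and copy everything
    def read(self, pos, length):
        return bytes(self.data[pos:pos + length])    # simple offset arithmetic

class SegmentedBuffer:
    """Bytes live in fixed-size segments; growth adds segments and never copies old ones."""
    def __init__(self, segment_size=4):
        self.segment_size = segment_size
        self.segments = []
    def append(self, chunk: bytes):
        for b in chunk:
            if not self.segments or len(self.segments[-1]) == self.segment_size:
                self.segments.append(bytearray())    # grow by adding a new segment
            self.segments[-1].append(b)
    def read(self, pos, length):
        flat = b"".join(self.segments)               # reads must walk/join the segments
        return flat[pos:pos + length]

buf = SegmentedBuffer()
buf.append(b"hello world")
print(buf.read(6, 5))   # b'world'
```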
New evidence gathered by NASA's MESSENGER spacecraft at Mercury solves an apparent enigma about Mercury's evolution. The data indicate the tiny planet closest to the sun, only slightly larger than Earth's moon, has shrunk by up to 7 kilometers in radius over the past 4 billion years, much more than earlier estimates. Older images of surface features indicated that, despite cooling over its lifetime, the rocky planet had barely shrunk at all. But modeling of the planet's formation and aging could not explain that finding. Paul K. Byrne and Christian Klimczak at the Carnegie Institution of Washington have led a team that used MESSENGER's detailed images and topographic data to build a comprehensive map of tectonic features. That map suggests Mercury shrank substantially as it cooled, as rock and metal that comprise its interior are expected to.
MESSENGER's Wide Angle Camera (WAC), part of the Mercury Dual Imaging System (MDIS), is equipped with 11 narrow-band color filters. As the spacecraft receded from Mercury after making its closest approach on 14 January 2008, the WAC recorded a 3x3 mosaic covering part of the planet not previously seen by spacecraft. The color image shown here was generated by combining the mosaics taken through the WAC filters that transmit light at wavelengths of 1000 nm (infrared), 700 nm (far red), and 430 nm (violet). These three images were placed in the red, green, and blue channels, respectively, to create the visualization presented here. The human eye is sensitive only across the wavelength range from about 400 to 700 nm. Creating a false-color image in this way accentuates color differences on Mercury's surface that cannot be seen in black-and-white (single-color) images. Color differences on Mercury are subtle, but they reveal important information about the nature of the planet's surface material. A number of bright spots with a bluish tinge are visible in this image. These are relatively recent impact craters. Some of the bright craters have bright streaks (called "rays" by planetary scientists) emanating from them. Bright features such as these are caused by the presence of freshly crushed rock material that was excavated and deposited during the highly energetic collision of a meteoroid with Mercury to form an impact crater. The large circular light-colored area in the upper right of the image is the interior of the Caloris basin. Mariner 10 viewed only the eastern (right) portion of this enormous impact basin, under lighting conditions that emphasized shadows and elevation differences rather than brightness and color differences. MESSENGER has revealed that Caloris is filled with smooth plains that are brighter than the surrounding terrain, hinting at a compositional contrast between these geologic units. The interior of Caloris also harbors several unusual dark-rimmed craters, which are visible in this image. The diameter of Mercury is about 4880 km (3030 miles). The image spatial resolution is about 2.5 km per pixel (1.6 miles/pixel). The WAC departure mosaic sequence was executed by the spacecraft from approximately 19:45 to 19:56 UTC on 14 January 2008, when the spacecraft was moving from a distance of roughly 12,800 to 16,700 km (7954 to 10377 miles) from the surface of Mercury.
Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Carnegie Institution of Washington "With MESSENGER, we have now obtained images of the entire planet at high resolution and, crucially, at different angles to the sun that show features Mariner 10 could not in the 1970s," said Steven A. Hauck, II, a professor of planetary sciences at Case Western Reserve University and the paper's co-author. Mariner 10, the first spacecraft sent to explore Mercury, gathered images and data over just 45% of the surface during three flybys in 1974 and 1975. MESSENGER, which launched in 2004 and was inserted into orbit in 2011, continues collecting scientific data, completing its 2,900th orbit of Mercury later this month. Mercury's surface differs from Earth's in that its outer shell, called the lithosphere, is made up of one tectonic plate instead of multiple plates. To help gauge how the planet may have shrunk, the researchers looked at tectonic features, called lobate scarps and wrinkle ridges, which result from interior cooling and surface compression. The features resemble long ribbons from above, ranging from 5 to more than 550 miles long. Lobate scarps are cliffs caused by thrust faults that have broken the surface and reach up to nearly 2 miles high. Wrinkle ridges are caused by faults that don't extend as deep and tend to have lower relief. Surface materials from one side of the fault ramp up and fold over, forming a ridge. The scientists mapped a total of 5,934 of the tectonic features. The scarps and ridges have much the same effect as a tailor making a series of tucks to take in the waist of a pair of pants. With the new data, the researchers were able to see a greater number of these faults and estimate the shortening across broad sections of the surface and thus estimate the decrease in the planet's radius. They estimate the planet has contracted between 4.6 and 7 kilometers in radius. "This is significantly greater than the 1 to maybe 2 kilometers reported earlier on the basis of Mariner 10 data," Hauck said. And, importantly, he said, models built on the main heat-producing elements in planetary interiors, as detected by MESSENGER, support contraction in the range now documented. One striking aspect of the form and distribution of surface tectonic features on Mercury is that they are largely consistent with some early explanations about the features of Earth's surface, before the theory of plate tectonics made them obsolete—at least for Earth, Hauck said. So far, Earth is the only planet known to have tectonic plates instead of a single, outer shell. The findings, therefore, can provide limits and a framework to understand how planets cool—their thermal, tectonic and volcanic history. So, by looking at Mercury, scientists learn not just about planets in our solar system, but about the increasing number of rocky planets being found around other stars.
In this guitar lesson we are going to learn a bit about how the Locrian mode is made, as well as a common scale shape for this mode. You should be familiar with how the major scale is made before going through this lesson. If you aren’t quite sure how the major scale works, you can go check out the lesson Understanding the major scale. We have supplied you with a scale diagram for the Locrian shape that we will be using in this lesson. In order to get a good understanding of how the Locrian mode is made, let’s start with an A major scale and alter some notes in it to make it into an A Locrian scale. The A major scale is spelled 1A 2B 3C# 4D 5E 6F# 7G#. In order to make any major scale into a Locrian scale, you need to lower the 2nd, 3rd, 5th, 6th, and 7th scale degrees one half step each. Lower these notes in the A major scale and you end up with an A Locrian scale, spelled 1A 2Bb 3C 4D 5Eb 6F 7G. There is another way to think about coming up with the notes in a Locrian scale. Take any major scale and go to the 7th scale degree. Let’s use a Bb major scale and start with the 7th scale degree, which is an A note. The Locrian mode is based off of the 7th scale degree of any major scale. With that in mind, if you start a Bb major scale on the 7th scale degree, the A note, you would be playing an A Locrian scale. Either way you think about it, the notes are still the same. The important part is to get the sound of this scale in your head and start experimenting with it for yourself. Record an A minor flat 5 chord and play each note of the Locrian scale over it. Try to remember what each note sounds like over the chord. It will probably be a good idea to go ahead and learn your minor flat 5 arpeggios at this point to go along with your new Locrian scale shape. You will probably hear this scale used in jazz or fusion styles of music more than any other styles. The Locrian mode is unique because it is the only mode whose 5th scale degree is lowered. This gives it kind of a lurking and ethereal sound that is not found in the other modes.
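If it helps to see the two derivations side by side, the short sketch below (not part of the original lesson) computes the A Locrian notes both ways: by flattening the 2nd, 3rd, 5th, 6th, and 7th degrees of A major, and by starting the major-scale step pattern on its 7th degree. It works with pitch classes rather than proper note spelling, so enharmonic names are simplified.

```python
# Pitch classes are numbers 0-11 starting from A, so enharmonic spellings are ignored.
NOTE_NAMES = ["A", "Bb", "B", "C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab"]
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]          # whole/half-step pattern of a major scale

def scale(root, steps):
    notes, pitch = [root], root
    for step in steps[:-1]:
        pitch = (pitch + step) % 12
        notes.append(pitch)
    return notes

def names(notes):
    return [NOTE_NAMES[n] for n in notes]

A = 0
a_major = scale(A, MAJOR_STEPS)

# Method 1: flatten degrees 2, 3, 5, 6 and 7 of A major.
a_locrian = [(n - 1) % 12 if i in (1, 2, 4, 5, 6) else n for i, n in enumerate(a_major)]

# Method 2: rotate the major-scale step pattern to start on its 7th degree (Bb major from A).
locrian_steps = MAJOR_STEPS[6:] + MAJOR_STEPS[:6]
a_locrian_2 = scale(A, locrian_steps)

print(names(a_locrian))    # ['A', 'Bb', 'C', 'D', 'Eb', 'F', 'G']
print(names(a_locrian_2))  # same notes either way
```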
How to Brush your Teeth?
- Place the bristles of your toothbrush at the margin of the gums, establishing a 45 degree angle.
- Using gentle vibratory pressure and short back-and-forth motions, without dislodging the tips of the bristles, make about 20 strokes in the same position.
- Do the same thing around the arch, brushing around three teeth at a time.
- Now, move onto the inner surfaces of the teeth. To help reach the inner surfaces of the front teeth, insert the brush vertically.
- Press the bristles firmly onto the chewing surfaces of the teeth and brush with about 20 back-and-forth strokes.
Powered toothbrushes can achieve better cleaning efficiency and plaque control. While anybody can use them, they are ideal for:
- Small children, handicapped or hospitalized patients who need to have their teeth cleaned by someone else
- Individuals lacking fine motor skills
- Patients with orthodontic appliances
Flossing is a process that removes plaque that forms between adjacent tooth surfaces.
How to Floss?
- Wind 12″ to 18″ of floss around the two middle fingers of each hand.
- Gently guide the floss between teeth.
- To remove plaque and debris, gently move the floss up and down against the tooth.
- As you move from tooth to tooth, use fresh sections of floss each time.
Mouth rinses or mouthwashes serve a variety of purposes:
- They mask bad breath
- Fight cavities
- Prevent the build-up of plaque, the sticky material that contains germs and can lead to oral diseases.
Therapeutic mouth rinses are of two types according to their use:
- Anti-plaque / anti-gingivitis rinses
- Anti-cavity fluoride rinses
How to rinse your mouth using a mouthwash?
- Brush and floss your teeth before using a mouthwash.
- Measure the proper quantity of rinse recommended on the container or by a dentist.
- Swish the liquid around the mouth, keeping lips closed and teeth slightly apart.
- Thirty seconds is the recommended rinsing time.
- Then spit the liquid from your mouth thoroughly.
- Do not rinse, eat, or smoke for the next thirty minutes after using a mouthwash. Doing so will reduce the effects of the mouthwash.
There are many causes of bad breath (halitosis). It may be the result of odor-causing foods, tooth decay, periodontal disease, mouth dryness, use of tobacco products, sinus or respiratory tract infections, some systemic disorders, inadequate oral hygiene or some medications. Your dentist can help recognize the cause and, if it’s due to an oral condition, can plan a treatment to get rid of this common source of embarrassment.
On the definition of memory, the Oxford English Dictionary (OED) outlines the various functions of this complex and varied noun. It offers the cognitive function of remembering, the physical site of retention and custody of sensory experiences of the past. The theme of corporeality continues in a discussion of memory as the capability of an organism to manifest the previous effects or state in another setting, to retain an impression from a past experience. Similarly, the OED discusses memory as subject-specific, by which I mean outlined as a personal repository of experiences. It posits memory as a middle ground for the unconscious, the place that acts as a medium of recovery from the inaccessibility of the unconscious. Implicit in this is the accessibility of memory, the interaction of the will on the brain to recover, to preserve sensory experiences. Singular instances, impressions, also are signified by the term memory. The OED offers such moments as recollections, acts or instances of remembering, specific persons or things remembered, or the fact or condition of being remembered. Similarly, the word can connote a loss or an absence, as in the sense of a memorial or for a person or state no longer present. Objects can also function as such, in the sense of a physical, symbolic replacement for something lost or gone: a memento, monument, or memorial would be an example. Implicit in the uses defined above is that memories are simply an impression of a past experience, importantly that they are no longer present, but rather are a retrieval of a particular previous moment. In this manner, memory functions as a medium of storage, as an intervening substance between temporally past sensory impressions and present consciousness. Memories are a temporal channel, a time machine of sensory traces of the past. The Encyclopedia of Philosophy offers that memory need not refer exclusively to the past, that one could remember an event that is presently occurring or will occur sometime in the future, but argues that memory most often refers to a past experience. "Despite this variety of uses, philosophers writing on memory have tended, until recently, to concentrate on those uses of "remember" in which it takes as its object an expression referring to a particular past event or action." The OED also points to the importance of temporality for the concept of memory, showing temporality as an element of recollection, specifically the span of time in which a reminiscence passes. Movement along the temporal echoes the process of sensory experience, and links the issues of memory to the Hegelian problematization of sense-certainty. Frances Yates chronicles the historical usages of memory in rhetoric in her article Three Latin Sources for the Classical Art of Memory. Utilizing a spatial conceptualization of mnemonic processes, Roman rhetors were capable of recounting lengthy orations with little difficulty or error. By a process of visualization of the space of memory, one could 'place' elements in a linear movement throughout the conceptualized architecture and recapture them through 'moving' again through this image. Yates also likens the mnemonic process to linguistic structures, stating "(t)he art of memory is like an inner writing". By imagining a spatial inscription or attribution, the orator would be able to revisit the symbolic recollection and summon up the information. Similarly, Walter Benjamin reflects on the social uses of memory in his essay entitled The Storyteller.
He contrasts storytelling, a communicative form relying solely on memory, to information, which he defines as the communications of modernity. He sees memory functioning as a medium between generations and varied experiences. "Memory creates the chain of tradition which passes a happening on from generation to generation...in the first place among these is the one practiced by the storyteller. It starts the web which all stories together form in the end." Memory, and more specifically, collective memory, serves to unite and make links between the generations, and to provide a sense of shared heritage. Shared remembrances, as distinct from individual memories, sit often in a contested place. I would define collective memory as the communal narratives regarding a past event that enjoy relative acceptance or consensus by a group. There remains, however, the possibility of conflict between professional historical methodology and collective knowledges. In the telling of previously marginalized or ignored histories, conflicting narratives raise the question of validity in memory. The accuracy of memory as a medium of information regarding the past becomes politicized in these discrepancies. Culture wars occur increasingly around issues of representation and the past. Public History organizations and exhibits present an arena for such disputes. One need only look to the debate regarding the Smithsonian's National Air and Space Museum's 1994 exhibit on the WWII plane the Enola Gay for an example of contestations over the past. The controversy surrounding the proposed exhibit and ensuing backlash from veterans claiming historical revisionism in the name of political correctness cogently portrays the potential contestation between professional history and collective memory, as well as between different collectives. In his chapter "Narrative, Memory, and Slavery," W.J.T. Mitchell problematizes the notion of memory as a direct representation of the past. He argues "representation... not only 'mediates' our knowledge...but obstructs, fragments, and negates that knowledge...(memory) provide(s) something more like a site of cultural labor, a body of textual formations that has to be worked through interminably". For Mitchell, memory is not interesting for what it tells us, but rather what it hides from us. Calling memory a "medium", he posits it as a process of meaning creation that is both selective and akin to a facade. In describing memory as "a technology for gaining freedom of movement in and mastery over the subjective temporality of consciousness and the objective temporality of discursive performance", Mitchell politicizes the function of memory. Rather than a recalling of a sensory input of the past, memory is a process by which a subject narrates the past, explains the experiences and gains power over the world he inhabits. We can see how all the uses of memory as outlined above suggest a system of storage and a medium of recovery. Whether referencing the human mental capacity for storing past sensory traces or an artificial system or technology of retrieval, a temporal transmission and mediation is a necessary component of memory. The possible tension in discrepancies between academic history and collective memories demonstrates the politics of remembering and forgetting.
Geysers. What makes them work? Many who have seen a geyser in action know only that it spouts hot water into the air. Many others have never seen one. Chapter 1, Geysers of the World, delineates their distinguishing features, locates the geyser regions of the world, and places investigations by world travelers and scientists in historic perspective. One of the quickest ways to become acquainted with a geyser is to observe it. The descriptions of several well known geysers, some based on past observations by others, but frequently by me, do not necessarily portray current behavior. They do, however, represent general features. Geysers exist as a result of a delicate and unique interplay among the heat, the water, and the rocks of the earth. In essence, heat and water must be available, transported, distributed, stored, and finally released. Chapter 2, The Geologic, Thermal, and Hydrologic State of the Earth, especially that close to its surface, sets the stage for Chapter 3, Fundamentals of Geyser Operation. The geyser is treated here as a simple system consisting of three major interacting elements: a source of water, a source of heat, and a reservoir for storing water. The discussion centers around the actions occurring within idealized columnar and pool geysers, and more complex systems. Some of the more workable geyser theories are evaluated.
Publisher: Springer-Verlag New York Inc.
Number of pages: 223
Weight: 373 g
Dimensions: 235 x 155 x 13 mm
Edition: Softcover reprint of the original 1st ed. 198
Using technology with your digital learning
When properly integrated, technology can make digital learning more accessible for the trainer and learner alike. It encourages dialogue between the two. New technologies can allow conversations to continue without the constraints of time. This creates a sense of community previously lacking from digital learning, benefitting both the trainer and learner.
What tools are useful for digital learning?
1. Social networks
Tools developed by social media give digital learning a space where information can be shared. These tools encourage learners to share their views and facilitate dialogue with the trainer. They allow you to create pages and discussion groups on specific topics and maintain your interest in the subject being studied. They give digital learning the social characteristics of classroom-based learning.
2. Blogs and wikis
Blogs and wikis have collaborative and participative functionality. They are a means of creating new resources that can be reused and saved in a protected space. From time to time learners may be asked to contribute to the blog, writing on a set theme. This task benefits all participants. In the same way, a wiki creates learning resources that are available 24/7 from any device. Students write and validate an article together using digital tools.
3. Photo sharing
The trainer can encourage learners to take and post pictures of their achievements or works in progress. This creates a clear parallel, illustrating the real situations being experienced by those who are being trained. And it could potentially guide them when they have doubts or encounter difficulties.
4. Video sharing
Similarly, video sharing by the instructor or learner adds a dimension to training, strengthening the skills of the whole group. Photos and videos can be viewed in class or digitally, and reused from one year to the next. They provide an opportunity to identify the most advanced elements and can be used to assist other learners.
5. Podcasts
Podcasts prepared by trainers or recorded in class are another way to reach learners outside the classroom.
6. Webinars
A webinar offers the same benefits of classroom-based learning without the associated costs. Many multimedia tools are available, such as whiteboards, PowerPoint presentations, demonstration videos, file sharing, etc. The webinar can be complemented with a chat function, meaning questions can be asked in real time without interrupting the training.
7. Flipped classroom
The concept of the flipped classroom consists of a course delivered via elearning before a class. Students then participate in the classroom, during which practical work is undertaken in groups. This can be done through m-learning.
For more tips on using social technologies in your elearning programme, contact the Dokeos team.
School Ratings: An Overview
School ratings — be they labels ranging from “excellent” to “in need of improvement,” one to five stars, or A to F grades — are one of the most powerful tools we have for communicating expectations for school performance, for prompting action whenever those expectations are not met, and for helping parents and others understand how their child’s school is doing. By defining what it takes to earn an “excellent” or “good” rating, rating criteria can make it clear that to be considered a high-performing school, a school has to be serving all — not just some — of its students well. And by identifying schools that are not meeting expectations for one or more groups of students, ratings can prompt action and help districts and states better target resources and supports. Of course, ratings alone are insufficient; states should also provide parents, community members, and the public with more detailed reports (sometimes called data dashboards) that clearly present a range of information on school quality — including how schools are doing for each group of students on all the indicators that go into the rating. Without school ratings, however, parents, educators, and all others are left to sift through pages and pages of numbers with no guidance about whether their schools’ results are up to par. And most important for all of us who are committed to raising achievement for all students — including low-income students, students of color, English learners, and students with disabilities — is that in the absence of ratings built around the performance of all student groups, it is all too easy for schools and districts to sweep these students’ outcomes under the rug.
Resources to Support Your Advocacy
The purpose of these fact sheets is to provide advocates with information they need to help make sure that their state leaders put in place school rating systems that truly reflect how schools are serving all groups of students. This overview fact sheet lays out key requirements related to school ratings in the Every Student Succeeds Act and identifies key parameters for an equity-focused school rating system. The fact sheet on setting goals discusses how states could approach setting ambitious and attainable goals for schools. The fact sheet on ensuring that all groups of students matter suggests ways of making sure that how schools are serving each student group counts in its rating. That’s why, even though Congress left a lot of discretion to states in crafting accountability provisions under the Every Student Succeeds Act (ESSA), it was clear that ratings must be purposefully designed to reflect how schools are doing for all groups of students.1 Otherwise, schools’ average all-student results will, by default, become their ratings, removing the incentive for schools to tackle inequities in opportunity and achievement.
Why advocates must pay attention
The way ratings are designed — meaning, which criteria schools have to meet to get a certain rating — really matters. An accountability system that gives high ratings only to schools that demonstrate high performance or fast improvement for all groups of students that they serve sends one signal. A system that gives high ratings to schools that are doing well on average, but have low results for their African American students, for example, sends another, very different message.
In recent years, many states have chosen to rate their schools based mostly or entirely on overall results, often ignoring the performance of individual student groups. As a result, schools in these states have been able to receive high marks despite low outcomes and little to no progress for some groups of students. ESSA requires that school ratings be based on the results of each individual student group. But states will face a great deal of pressure to give as many schools as possible high marks and to minimize the extent to which outcomes for historically underserved groups of students — including low-income students, students of color, students with disabilities, and English learners — count. As advocates, we must push states to ensure that ratings truly reflect how schools are serving all of their students.
What does ESSA require in regard to school ratings?
Under ESSA2, states have to set goals for improving student performance on state assessments and graduation rates for all students and for each student group. These goals must require bigger gains for groups of students who are further behind. States must also set goals for progress toward English proficiency for English learners. The law then requires states to annually rate schools based on their performance for all students and for each student group on the following measures:
- Academic achievement: A measure of how schools’ proficiency rates in reading/language arts and math for all students and each student group compare with state-set goals. For high schools, states can also include student growth as part of this indicator.
- Another academic indicator:
- For high schools, a measure of how graduation rates for all students and each student group compare with state-set goals.
- For elementary and middle schools, this measure may be individual student growth or another statewide, valid, and reliable indicator of student learning (such as science assessment results).
- English-language proficiency: A measure of the progress that a school’s English learners are making toward English proficiency.
- Additional indicator of school quality or student success: Another valid, reliable, and statewide indicator of school quality.
For more on deciding which indicators to include in a school rating, please see https://edtrust.org/students-cant-wait/.
How much does each of these indicators have to count?
States decide exactly how much different indicators count in a school rating. But the law specifies that the first three indicators — academic achievement, another academic indicator, and progress toward English-language proficiency — must each have substantial weight. Together, these indicators have to carry much more weight (i.e., count much more) than the additional indicator of school quality or student success.
How much does the performance of each group of students have to count?
ESSA is clear that the school ratings have to reflect how schools are doing for all students and for each student group on each of the indicators (except progress toward English-language proficiency, which is only measured for English learners). In addition, the law requires that if a school is consistently underperforming for any group of students, its rating has to reflect that fact. States, however, decide exactly how much group performance has to count, as well as how to define what it means to be consistently underperforming.
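To make the weighting idea concrete, here is a hedged sketch of how a state might combine indicator scores into a single rating. ESSA does not prescribe any of these numbers: the weights, the example scores, the 15-point cap, and the A-D cut-offs below are all invented for illustration. The only ideas taken from the law are that the academic indicators must carry much more weight than the school-quality indicator and that a consistently underperforming group must pull the rating down.

```python
# Illustrative sketch only: weights, scores, cap, and cut-offs are invented,
# not taken from ESSA or from any state's actual accountability plan.

WEIGHTS = {                      # hypothetical weights; academic indicators dominate
    "achievement": 0.40,
    "graduation": 0.40,
    "school_quality": 0.20,
}

def school_rating(scores_by_group):
    """scores_by_group maps a group name to {indicator: score on a 0-100 scale}."""
    totals = {
        group: sum(WEIGHTS[ind] * value for ind, value in scores.items())
        for group, scores in scores_by_group.items()
    }
    # Equity guardrail: the final rating cannot sit more than 15 points above the
    # lowest-performing group, so a strong average cannot hide a struggling group.
    capped = min(totals["all_students"], min(totals.values()) + 15)
    label = "A" if capped >= 85 else "B" if capped >= 70 else "C" if capped >= 55 else "D"
    return label, round(capped, 1), {g: round(t, 1) for g, t in totals.items()}

print(school_rating({
    "all_students": {"achievement": 82, "graduation": 90, "school_quality": 88},
    "low_income":   {"achievement": 50, "graduation": 70, "school_quality": 85},
}))
# ('B', 80.0, {'all_students': 86.4, 'low_income': 65.0})
```

The design choice to highlight is the cap: without it, the school above would earn an A on its all-students average even though its low-income students score far lower on the academic indicators.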
Key parameters for an equity-focused school rating system
States will need to make lots of decisions when setting their school rating criteria. We discuss some of those decisions — such as what goals to set for their schools and how to include results for individual student groups — in more depth in the accompanying fact sheets. At the end of the day, though, to truly be focused on raising achievement for all students, school rating criteria must meet at least the following key parameters:
- Ratings must reflect how schools are serving all groups of students, including low-income students, students from each major ethnic/racial group, English learners, and students with disabilities. In recent years, many states put in place school rating criteria that look only at how schools are doing on average and, sometimes, for a “super subgroup” (such as high-need students, a group that includes any student who is low-income, an English learner, or a student with a disability). Under these rating systems, schools can (and do) receive high ratings even when they are consistently failing to serve some student groups. ESSA makes clear that this needs to change. The law requires that ratings be based on how schools are doing for each group of students that they serve, and if a school is consistently underperforming for any group, its rating must reflect this fact. For more on what it means to meaningfully include subgroup performance in school ratings, see the Ensuring that All Groups of Students Matter in School Ratings fact sheet. Importantly, rating criteria have to work hand-in-hand with action requirements. Schools that are consistently underperforming for any group of students (which, under ESSA, have to take action to improve) should not be able to receive a good rating. And the state must make clear what steps schools receiving each rating have to take and what kinds of support and resources they are eligible to receive.
- Goals for improving student outcomes need to be both ambitious and attainable. The ultimate goal for all schools and districts is to prepare every single student for success in college and/or a well-paying, meaningful career. But simply stating that this is the goal won’t change the reality of our deeply unequal and inequitable school system. Accountability goals enable states to communicate clear expectations for improvement — including demanding more improvement for groups of students who are too often underserved — so that they can monitor whether schools are on the right track. In order for goals to serve this function, however, they must be both ambitious (meaning, require bigger gains than schools are currently making, especially for groups of students who are behind) and attainable (meaning, that there needs to be evidence that some schools are making the kind of progress that the state is demanding from all schools). For more on setting ambitious and attainable goals, see the Setting Goals for Accountability fact sheet.
- Ratings must be based predominantly on measures of student achievement and graduation rates. Although states can and should include additional indicators — such as chronic absenteeism or discipline rates — in their school ratings, rating criteria must make it clear that students’ academic success (including progress toward English-language proficiency for English learners) and graduation rates hold the most sway.
In other words, rating criteria should ensure that schools with continued low performance for any group of children aren’t excused from addressing that problem just because their school quality survey suggested parent satisfaction went up a point or chronic absenteeism went down a point. Importantly, a rating must be accompanied by a detailed dashboard that clearly presents schools’ results on each of the indicators for each group of students. This dashboard can and should include additional indicators that aren’t part of the rating to provide parents with as complete a picture as possible of their child’s school.
- Ratings must take into account both current performance and progress, including whether schools are on track to meet state goals. A school that is low-performing but making big gains is not the same as one that is low-performing and not improving. Similarly, a small amount of progress for an already high-performing school tells us something different from that same amount of progress at a low-performing school. School rating criteria should include both current performance and progress for all students and each student group, including whether a school is on track to meet the state’s goals.
- Rating criteria — and the ratings themselves — must be as straightforward and transparent as possible. Rating criteria can be a powerful tool for communicating expectations for school performance. But they can only serve this function if educators — and parents and community members — can understand what is expected of them. The general rule is that school leaders — and parents and the public — should be able to know at the beginning of each school year how much the school needs to improve to get a better rating. This means that states should avoid hard-to-understand calculations, such as z-scores, or other statistical manipulations. They should also avoid changing rating criteria every year. And importantly, it means that states must provide both educators and the public with clear, understandable materials that explain the rating criteria. The ratings themselves must also send a clear signal about school performance. For example, a label of “in need of improvement” is easier to understand than an index value of “330 out of 800 points.”
1 Under ESSA, school ratings must be based on results for all students, as well as for each of the following student groups: students from each major racial/ethnic group (e.g., Black, Latino, Native American, Asian, and White), students with disabilities, low-income students, and English learners. ESSA also requires states to publicly report results for all of these groups, as well as additional student groups — including male and female students, students who are in foster care, migrant students, and homeless students.
2 This section summarizes requirements that are in the Every Student Succeeds Act. The U.S. Department of Education may clarify or add more detail to some of these requirements through its regulations, which were being finalized at the time this document was prepared.
Below is an overview of capitalization rules. If you are unsure whether a word should be capitalized, you can consult a dictionary.
- You should always capitalize proper nouns and words formed from them; do not capitalize common nouns. The following are types of words that you should usually capitalize:
- Names for the deity, religions, religious followers, sacred books – God, Buddha, Allah, Christianity, Muslims, Bible, Torah
- Words of family relationships used as names – Aunt Rose, Uncle Henry, Grandma Reed
- Names of countries, states, and cities – France, England, United States of America, New York, New Orleans
- Nationalities and their languages, races, tribes – English, African, Sudanese, Spanish, Cherokee
- Educational institutions, degrees, particular courses – University of Maryland, Bachelor of Science, English 101
- Government departments, organizations, political parties – Federal Bureau of Investigation, the Supreme Court, Congress, Sierra Club, the Democratic Party
- Historical movements, periods, events, documents – the Enlightenment, the Declaration of Independence, the Constitution
- Specific electronic sources – the Internet, the Net, the World Wide Web
- Trade/brand names – Kleenex
Months (January, February) and days of the week (Sunday, Monday) are also treated as proper nouns. Seasons and the numbers of the days of the months are not. Also, names of school subjects (math, algebra, geology, psychology) are not capitalized, with the exception of the names of languages (French, English). Names of courses are capitalized (Algebra 201, Math 001).
- You should capitalize titles of people when used as part of their proper name.
- Professor Smith but not “the professor”
- District Attorney Rodriquez but not “the new district attorney”
- Capitalize the first, last, and all major words of titles and subtitles of works such as books, online documents, songs, and articles. Major words include nouns, verbs, pronouns, adverbs, and adjectives. Do not capitalize minor words such as articles, prepositions, and coordinating conjunctions (and, or, the, in), unless one of these minor words comes first or last in the title.
- The Cat in the Hat (book title)
- I Want to Hold Your Hand (song title)
- Capitalize the first word of a sentence.
- She went to the store to purchase a new computer.
- Capitalize the first word of a quoted sentence but not a quoted phrase.
- Professor Smith cautioned students, “Tomorrow is the test, so be sure to study chapters eight through twelve and the study notes.”
- Professor Smith says we should study “chapters eight through twelve and the study notes.”
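If you want to apply the title-case rule above programmatically, here is a minimal sketch in C#. The list of minor words and the helper names (TitleCase, MinorWords) are illustrative assumptions, not part of any official style guide.

using System;
using System.Collections.Generic;

class TitleCaseDemo
{
    // Illustrative (not exhaustive) set of minor words: articles, short prepositions,
    // and coordinating conjunctions that stay lowercase unless first or last.
    static readonly HashSet<string> MinorWords = new HashSet<string>(
        new[] { "a", "an", "the", "and", "but", "or", "nor", "for", "in", "on", "at", "to", "of" },
        StringComparer.OrdinalIgnoreCase);

    static string Capitalize(string word) =>
        char.ToUpper(word[0]) + word.Substring(1).ToLower();

    static string TitleCase(string title)
    {
        string[] words = title.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
        for (int i = 0; i < words.Length; i++)
        {
            bool isFirstOrLast = i == 0 || i == words.Length - 1;   // edges are always capitalized
            bool isMinor = MinorWords.Contains(words[i]);
            words[i] = (isMinor && !isFirstOrLast) ? words[i].ToLower() : Capitalize(words[i]);
        }
        return string.Join(" ", words);
    }

    static void Main()
    {
        Console.WriteLine(TitleCase("the cat in the hat"));       // The Cat in the Hat
        Console.WriteLine(TitleCase("i want to hold your hand")); // I Want to Hold Your Hand
    }
}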
Helping the Overweight Child
- Weight management goals for the overweight child
Your job is to offer nutritious food choices at meals and snack times. You decide what, where, and when your family eats. Your child's job is to choose how much he or she will eat of the foods you serve. Your child even gets to decide whether to eat.
Do not restrict food. Food restriction causes children to ignore their internal hunger gauges. Children who have their food restricted often end up heavier, because they become anxious about food and eating. Anxiety about not getting enough to eat will often lead a child to overeat whenever he or she gets a chance. This causes the child to become less in touch with how hungry or full he or she is, and the child becomes more likely to eat more than his or her body needs. This can also happen when children or teens follow weight-loss diets. It doesn't work to put a child on a diet; you get the opposite effect.
Instead, pay attention to behaviors that may be adding to weight gain, and then work to correct them. Then trust that your child will end up at the weight that is right for him or her. If you are concerned about your child's weight, talk to your child's doctor. He or she can tell you if your child is gaining weight too quickly and can give you steps to take to help your child reach a healthy weight.
- Healthy Eating: Helping Your Child Learn Healthy Eating Habits
First Lines is a strategy in which students read the beginning sentences from assigned readings and make predictions about the content of what they're about to read. This pre-reading technique helps students focus their attention on what they can tell from the first lines of a story, play, poem, or other text. As students read the text in its entirety, they discuss, revisit, and/or revise their original predictions.
The First Lines strategy is a versatile and simple technique for improving students' reading comprehension. It requires students to 1) anticipate what the text is about before they begin reading, and 2) activate prior knowledge. First Lines helps students become active participants in learning and can include writing as a way of organizing predictions and/or thoughts generated from discussions. Monitoring each student's predictions provides teachers with information about how much the students already know about the topic. This allows teachers to tailor instruction accordingly.
Create and use the strategy
Choose the assigned reading and introduce the text to the students. Then describe the purpose of the strategy and provide guidelines for discussions about predictions. Explain that students will be looking at the first sentences from texts that they will be reading during the class or unit. You may wish to copy these first lines separately and give them to each student. As with all strategy instruction, you should model the procedure to ensure that students understand how to use the strategy. Monitor and support students as they work.
To use the First Lines strategy, teachers should:
- Ask students to begin reading the first line of the assigned text.
- Ask students to make predictions for the reading based on what they see in the first sentence.
- Explain that students should be ready to share the reasoning behind their predictions.
- Remind students that there is not a "right" or "wrong" way to make predictions about a text, but emphasize that readers should be able to support their predictions with the information in the sentence.
- Engage the class in discussion about each student's predictions.
- Ask students to review their predictions and to note any changes or additions to their predictions in a journal or on recording sheets before reading the text. Students might work in groups or individually.
- Encourage students to return to their original predictions after reading the text, assessing their original predictions and building evidence to support those predictions that are accurate. Students can create new predictions as well.
Beers, K. (2003). When Kids Can't Read--What Teachers Can Do: A Guide for Teachers 6-12. Portsmouth, NH: Heinemann.
Churchyard lecanactis (Lecanactis hemisphaerica)
Size: diameter of fruiting body 0.4-1.2 mm (9)
Classified as Near Threatened in Great Britain and protected under Schedule 8 of the Wildlife and Countryside Act 1981 (2).
Churchyard lecanactis is a rare lichen that grows in crust-like formations (2). The name of the genus Lecanactis means 'shining small bowl' and refers to the reproductive fruiting body, which contains a bag-like structure that holds the spores (7). It is known from 44 locations in south-east England (8), including sites in Somerset, Sussex, Suffolk, Kent, Dorset and Norfolk (2). Outside of the UK it only occurs in Italy (2).
Inhabits external church walls that face to the north or east (2), and are sheltered from both rain and light (6). It tends to occur in coastal areas and typically grows on plaster or mortar (2).
Lichens are remarkable organisms; they are stable combinations of an alga and/or a cyanobacterium with a fungus, living together in a symbiotic association (7). The fungus causes the alga to release sugars, which allow the fungus to grow, reproduce and generally survive. The fungus provides protection for the alga, and enables it to live in environments in which it could not survive without the fungal partner (7). A general rule is that the fungal component of a lichen is unable to live independently, but the alga may live without the fungus as a distinct species (3). Many lichens are known to be very sensitive to environmental pollution, and they have been used as 'indicators' of pollution (4). Churchyard lecanactis has an extremely slow rate of growth (6).
Possible threats include the deterioration of walls on which the species occurs and repair of the walls using unsuitable materials (2). This lichen is prevented from spreading as suitable external walls are in short supply (2).
The churchyard lecanactis is a UK Biodiversity Action Plan Priority Species; the Species Action Plan, which is led by the wild plant charity Plantlife, aims to maintain the existing populations and to create three new colonies by 2005 (2). In addition, Plantlife has included the churchyard lecanactis on its Back From the Brink programme (4) and has produced a leaflet 'Churchyard Lecanactis: old walls can harbour secrets', available on request from Plantlife ([email protected]) (8). In 1990 the British Lichen Society set up the Churchyards Project, which is concerned with research, conservation and education on lichens of churchyards (5). Regular survey work is carried out, and leaflets containing conservation guidelines have been produced (5).
For more on churchyard lichens see the British Lichen Society's article available on-line at http://www.thebls.org.uk/content/chlich.html and the Plantlife leaflet 'Churchyard Lecanactis: old walls can harbour secrets', available from Plantlife.
For more on British lichens see: Dobson, F. (2000). Lichens. An illustrated guide to the British species. The Richmond Publishing Co. Ltd., Slough.
Information authenticated by Plantlife, the wild plant conservation charity.
- Alga: a collection of taxonomically unrelated groups that share some common features but are grouped together for historical reasons and for convenience. They are of simple construction, and are mainly photoautotrophic, obtaining all their energy from light and carbon dioxide, and possess the photosynthetic pigment, chlorophyll A. They range in complexity from microscopic single cells to very complex plant-like forms, such as kelps. Algal groups include blue-green algae (cyanobacteria), red algae (rhodophyta), green algae (chlorophyta), brown algae and diatoms (chromista) as well as euglenophyta.
- Cyanobacteria: a group of bacteria that are able to photosynthesise and contain the pigment chlorophyll. They used to be known as ‘blue-green algae’. They are thought to have been the first organisms to produce oxygen; fossil cyanobacteria have been found in rocks 3,000 million years old. As they are responsible for the oxygen in the atmosphere, they have played an essential role in influencing the course of evolution on this planet.
- Fungus: fungi are one of the taxonomic kingdoms, separate from plants and animals. They obtain nutrients by absorbing organic compounds from the surrounding environment.
- Spores: microscopic particles involved in both dispersal and reproduction. They comprise a single or group of unspecialised cells and do not contain an embryo, as do seeds.
- Symbiotic relationship: relationship in which two organisms form a close association; the term is now usually used only for associations that benefit both organisms (a mutualism).
National Biodiversity Network Species Dictionary (November 2002)
- Purvis, O.W., Coppins, B.J., Hawksworth, D.L., James, P.W., & Moore, D.M. (1992) The lichen flora of Great Britain and Ireland. The British Lichen Society, London.
UK BAP Species Action Plan (Nov 2002):
- Dobson, F. (2000) Lichens. An illustrated guide to the British species. The Richmond Publishing Co. Ltd., Slough.
- Duckworth, J. (2002) Pers. comm.
NFU (Nov 2002):
- Church, J.M., Coppins, B.J., Gilbert, O.L., James, P.W. & Stewart, N.F. (1996). Red Data Book of Britain and Ireland: lichens. Volume 1: Britain. The Joint Nature Conservation Committee, Peterborough.
Plantlife (Nov 2002):
British Lichen Society: Lichen churchyard project (Nov 2002):
A Comment holds annotation markers, which specify the range of document elements to which it refers. Every Comment has a corresponding CommentRangeStart and CommentRangeEnd, which are inline elements. These two elements specify the comment's location as follows.
- CommentRangeStart: Specifies the start of a comment annotation.
- CommentRangeEnd: Specifies the end of a comment annotation.
Example 1 shows how to create a Comment and add its CommentRangeStart and CommentRangeEnd elements in a paragraph.
[C#] Example 1: Add a comment to a paragraph
Comment comment = document.Comments.AddComment();
paragraph.Inlines.Add(comment.CommentRangeStart);
paragraph.Inlines.AddRun("text");
paragraph.Inlines.Add(comment.CommentRangeEnd);
The AddComment() method of the Comments collection of a document creates a new comment and returns it. The location of the comment is around a run with the text "text". Note that the paragraph should belong to the same document as the one passed to the constructor of the Comment; otherwise an exception will be thrown.
Example 2 shows how you can insert a previously created Comment object in a document by using RadFlowDocumentEditor. The InsertComment() method will insert the comment's start and end elements.
[C#] Example 2: Insert a previously created comment
RadFlowDocumentEditor editor = new RadFlowDocumentEditor(new RadFlowDocument());
editor.InsertComment(comment);
Example 3 demonstrates how you can use another overload of RadFlowDocumentEditor's InsertComment() method. In this case, a string representing the text of the Comment and two inline elements are passed. The two inline elements specify the element before which the CommentRangeStart should be added and the element after which the CommentRangeEnd should be added.
[C#] Example 3: Insert a comment around a run
RadFlowDocumentEditor editor = new RadFlowDocumentEditor(new RadFlowDocument());
Run run = editor.InsertText("text");
editor.InsertComment("My sample comment.", run, run);
The Comment class exposes several properties that allow you to customize its metadata:
- Author: Property of type string specifying the author of the comment.
- Initials: Property of type string specifying the author's initials.
- Date: DateTime property showing the moment the comment was created.
Example 4 shows how you can add block elements, such as a Table, to a Comment.
[C#] Example 4: Add blocks to a comment
Paragraph paragraph = comment.Blocks.AddParagraph();
Table table = comment.Blocks.AddTable();
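The snippets above can be combined into a single flow. The sketch below is an illustrative end-to-end example that uses only the members shown in Examples 1-4 (AddComment(), InsertText(), InsertComment(), the comment range elements, and the Author, Initials, and Date properties); the using directives and the sample strings are assumptions and may need adjusting to your version of the Telerik Document Processing libraries.
[C#] Illustrative sketch: create, customize, and place comments
// The namespaces below are assumed; adjust them to your installed version.
using System;
using Telerik.Windows.Documents.Flow.Model;
using Telerik.Windows.Documents.Flow.Model.Editing;
class CommentSketch
{
    static void Main()
    {
        RadFlowDocument document = new RadFlowDocument();
        RadFlowDocumentEditor editor = new RadFlowDocumentEditor(document);
        // Insert a run and wrap it in a comment, using the overload from Example 3.
        Run run = editor.InsertText("This statement needs a source.");
        editor.InsertComment("Please add a citation here.", run, run);
        // Build a second comment by hand, customize its metadata, give it body content,
        // and let the editor place its range markers (the pattern from Example 2).
        Comment comment = document.Comments.AddComment();
        comment.Author = "Reviewer";   // hypothetical author name
        comment.Initials = "RV";
        comment.Date = DateTime.Now;
        comment.Blocks.AddParagraph().Inlines.AddRun("See the style guide for citation format.");
        editor.InsertText("A second annotated passage.");
        editor.InsertComment(comment);
    }
}
Passing the same run as both anchors, as in Example 3, keeps the annotation tight around a single inline element; handing the editor a prebuilt Comment is convenient when the metadata and body content are prepared ahead of time.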
If you came to class this week, you had the opportunity to practice description, perhaps the most valuable of all skills when learning a language. Why is it so valuable, you ask? Because when you're in a real-life situation, chances are you won't have a dictionary and the other person may not speak English. So what do you do when you don't have the necessary vocabulary? You work your way around it: you describe the words you do not know. This week's tip is to practice this skill in English. I know that may sound crazy, considering that you're learning Spanish after all, but the point is to develop the skill, and not everyone is instantly good at this in their own language. By practicing on your friends and family in English, you will find out how well you do this or what it is that you need to work on. And then you can practice doing it in Spanish…
1 – Describe what it is in basic terms (a thing, a place, an animal, a color, a concept, etc.)
2 – Give details (what does it look like, when is it done/used, what size is it, similar to/opposite of)
3 – Give examples
Fitness: Maintaining optimum strength on weak muscles
Muscular strength should be maintained at the level which supports the daily activities and allows for emergency physical activities and occasional prolonged periods when adequate nutrition and hours of rest are reduced. Most occupations involve some muscular strain. Sitting for hours at a desk places a continuous strain on the small muscles supporting the shoulders and head. If these small muscles are not allowed to rest or if their circulation is not improved by massage or exercise, they will become fatigued and distractingly painful. The strenuousness of muscular activity is relative to the strength of the muscles involved. An activity which is strenuous for a weak-muscled individual is less strenuous for another individual with stronger muscles. If a sufficient reserve of muscular strength is maintained, daily tasks are performed with greater ease and efficiency, in greater quantities and with less fatigue. As the athlete trains for his event by strengthening himself through increasing loads of work, so the worker and the executive can train themselves the better to withstand their physical stresses through extra loads of physical activity. The athlete requires daily periods of hard work to maintain a high state of training, but those who perform sedentary or moderate work need less frequent and less strenuous periods of extra physical activity. The exercise periods can be made very pleasurable if the work is accomplished in the form of golf, bowling, tennis or other sport. If there is sufficient leisure time, desirable levels of muscular strength can be maintained by such activities as gardening, home workshop activities, fishing, hunting, and camping.
Strengthening Weak Muscles
Muscular weakness may be corrected by working the muscles against heavy loads. The loads should be adapted to the strength of the muscles and increased as muscle strength is improved. The rate of improvement will generally be in proportion to the amount of work performed by the muscles. Rapid improvement requires long periods of work. If the load of work is too heavy or the movement too rapid, or if insufficient rest is allowed between the bouts of work, exhaustion will occur and the total amount of work which can be accomplished during the exercise period is diminished. A properly planned weight lifting program using dumbbells and barbells will give rapid increase in strength of weak muscles. The amount of work can be accurately controlled and the exercise can be adapted to the muscle groups needing the greatest development. Wrestling and gymnastics are also useful for improving muscular strength. In wrestling, however, a weak person usually exhausts himself before he has performed enough work to bring about the desired rate of improvement. Gymnastics tend to develop only the special parts of the body which are used in the exercises. Both wrestling and gymnastics have a greater value in the later stages of a strength building program. A special problem arises in exercises designed to strengthen abdominal muscles. Leg-lifting and trunk-flexing exercises can be performed most easily by contractions of the strong hip flexor muscles: the sartorius, rectus femoris, psoas major, iliacus, and the adductors. Abdominal muscles are brought strongly into play only when the performer contracts them voluntarily during exercise.
Assistance can be given by palpation of the abdominal muscles and encouragement of the performer to use his abdominal muscles strongly in the exercise. Autogenous auditory facilitation by means of electrical amplification of the performer’s own muscle sounds assists in increasing the work output and endurance when muscular exercise is difficult.
What Is Arthrogryposis?
A child with contractures that bend their fingers, hand and wrist.
Arthrogryposis (arth-ro-grip-OH-sis) means a child is born with joint contractures. This means some of their joints don't move as much as normal and may even be stuck in one position. Often the muscles around these joints are thin, weak, stiff or missing. Extra tissue may have formed around the joints, holding them in place. Most contractures happen in the arms and the legs. They can also happen in the jaw and the spine.
Arthrogryposis does not occur on its own. It is a feature of many other conditions, most often amyoplasia. Children with arthrogryposis may have other health problems, such as problems with their nervous system, muscles, heart, kidneys or other organs, or differences in how their limbs, skull or face formed.
This condition is also called arthrogryposis multiplex congenita. "Arthrogryposis" means the joints are curved or crooked. "Multiplex" means it affects more than one joint. "Congenita" means the condition is present at birth.
Arthrogryposis in Children
About 1 baby in 3,000 is born with arthrogryposis. Each child with arthrogryposis is different. In some children, the condition is mild. It affects only a few joints, and these joints have almost as much movement as normal. In other children, the condition is more serious. It affects more joints and restricts their movement more. In extreme cases, arthrogryposis affects nearly every joint.
What to expect
Arthrogryposis does not get worse over time. For most children, treatment can lead to big improvements in how they can move and what they can do. Most children with arthrogryposis have typical cognitive and language skills. Most have a normal life span. Most lead independent, fulfilling lives as adults. However, some need lifelong help with daily activities. Some walk, and others use a wheelchair.
The main cause of arthrogryposis is fetal akinesia. This means the baby does not move around inside the womb as much as normal. Starting in early pregnancy, moving helps a baby's joints, muscles and tendons develop. If a baby doesn't move much, these parts may not develop well, and extra tissue may form in the joints, making movement harder. There are many reasons why fetal akinesia might happen, including:
- Nerve signals don't reach the baby's muscles because of problems with the baby's central nervous system (CNS).
- There isn't enough room inside the womb for the baby to move. This may happen if the womb is not the typical shape or if amniotic fluid leaks out of the womb (oligohydramnios).
- The baby's muscles don't form normally and are weak, or their tendons, bones or joints don't form normally.
Fetal akinesia usually has nothing to do with what the mother did or did not do while she was pregnant. Most families who have a child with arthrogryposis are not at greater risk for having another child with it. In about one-third of children with this condition, doctors do find a cause. The families of these children may be at greater risk. Your child's doctor can explain what this means for your family.
Arthrogryposis at Seattle Children's
The Seattle Children's Arthrogryposis Clinic includes experts in caring for children with this condition. We work as a team to evaluate your child's abilities and recommend a plan to help your child become as active as possible. Treatment is tailored to your child and family. Our team members use the latest methods to increase your child's range of motion, muscle strength and skills. As your child grows, we adapt their treatment to meet their changing needs.
Along the way, we consider all aspects of what your child and family need to thrive - from learning practical skills for daily life to coping with feelings. Every quarter, Children's Rehabilitation staff host a midday lunch for families who have children with arthrogryposis. These lunches happen during our quarterly Arthrogryposis Clinic dates. They are a time to meet other families, share a meal and learn from staff and each other. The Arthrogryposis Clinic team also provides prenatal consultations if an ultrasound before birth shows that your baby may have arthrogryposis.
What is Fiscal Policy Fiscal policy refers to the use of government spending and tax policies to influence macroeconomic conditions, including aggregate demand, employment, inflation and economic growth. BREAKING DOWN Fiscal Policy Fiscal policy is largely based on the ideas of the British economist John Maynard Keynes (1883-1946), who argued that governments could stabilize the business cycle and regulate economic output by adjusting spending and tax policies. His theories were developed in response to the Great Depression, which defied classical economics' assumptions that economic swings were self-correcting. Keynes' ideas were highly influential and led to the New Deal in the U.S., which involved massive spending on public works projects and social welfare programs. To illustrate how the government could try to use fiscal policy to affect the economy, consider an economy that's experiencing recession. The government might lower tax rates to increase aggregate demand and fuel economic growth; this is known as expansionary fiscal policy. The logic behind this approach is that if people are paying lower taxes, they have more money to spend or invest, which fuels higher demand. That demand in turn leads firms to hire more – decreasing unemployment – and compete for labor, raising wages and providing consumers with more income to spend and invest: a virtuous cycle. Rather than lowering taxes, the government might decide to increase spending. By building more highways, for example, it could increase employment, pushing up demand and growth as described above. Expansionary fiscal policy is usually characterized by deficit spending, when government expenditures exceed receipts from taxes and other sources. In practice, deficit spending tends to result from a combination of tax cuts and higher spending. Economic expansion can get out of hand, however, as rising wages lead to inflation and asset bubbles begin to form. In this case a government might pursue contractionary fiscal policy – similar in practice to austerity – perhaps even forcing a brief recession in order to restore balance to the economic cycle. The government can do this by reducing public spending and cutting public sector pay or jobs. Contractionary fiscal policy is usually characterized by budget surpluses. It is rarely used, however, as the preferred tool for reining in unsustainable growth is monetary policy. When fiscal policy is neither expansionary nor contractionary, it is neutral. Aside from spending and tax policy, governments can employ seigniorage – the profits derived from printing of money – and sales of assets to effect changes in fiscal policy. Many economists dispute the effectiveness of expansionary fiscal policies, arguing that government spending crowds out investment by the private sector. Fiscal stimulus, meanwhile, is politically difficult to reverse; whether it has the desired macroeconomic effects or not, voters like low taxes and public spending. The mounting deficits that result can weigh on growth and create the need for austerity.
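To put rough numbers on the deficit/surplus distinction above, the budget balance is simply government receipts minus outlays; the symbols and figures below are a generic worked illustration, not something drawn from this article.
\[
\text{budget balance} = T - G
\]
Here $T$ stands for government receipts (taxes and other revenue) and $G$ for total government outlays. With $T = \$3.5$ trillion and $G = \$4.0$ trillion, the balance is $-\$0.5$ trillion, a deficit, the usual footprint of an expansionary stance; when the sign flips, the resulting surplus is the usual footprint of a contractionary stance.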
Research and Experiments
On each mission, when not subjecting themselves to medical and psychological testing, space station crews perform hundreds of scientific experiments. All experiments are selected by a panel of NASA scientists from thousands of suggestions. Each is then carefully planned and all needed hardware is assembled. Prior to liftoff, each crew rehearses the steps required for each experiment to minimize failures. Much is at stake: multimillion-dollar projects can be rendered useless if an experiment is botched. Scientists representing nearly all major branches of knowledge have jockeyed to gain permission to conduct experiments on Salyut, Skylab, Mir, and the ISS. Everyone has recognized that their unique environments, far beyond Earth's atmosphere and floating in weightlessness, hold extraordinary potential for new discoveries in many fields. Those fields given the highest priority have been astronomy, earth environmental study, material development, botany, combustion and fluid physics, and military reconnaissance. The point of research in space, in the view of most scientists, is principally to improve human life on Earth. From this research, they believe, will come knowledge and discoveries that will change and improve everyone's lives on Earth, from the foods that people eat, the cars they drive, and the computers they use to the medical procedures used by physicians.
A Giant Leap for Astronomy
First with Salyut and Skylab, then with Mir, and today with the ISS, one of the key focuses of scientific exploration has been furthering human understanding of the cosmos. All space stations have carried instrumentation of various types on their missions miles above Earth to provide astronomers with clearer images of planets, stars, and galaxies than even the largest telescopes on Earth can offer. The principal reason astronomers are interested in mounting their instruments on space stations is that they operate far above Earth's atmosphere, which obscures astronomers' views due to dust particles, changing temperatures, and moisture in the form of clouds, rain, and fog. In addition, light from large cities interferes by scattering throughout the dust and moisture droplets often found in the lower atmosphere. On space stations, however, far above Earth's murky atmosphere, many distant objects can be clearly seen and photographed. One of the earliest attempts at placing a telescope on a space station occurred on Salyut for the purpose of investigating the Sun. Skylab followed with the Apollo telescope mount (ATM), a canister attached to the space station and containing a conventional telescope with lenses that could zoom in on a solar event such as sunspots. It also carried ultraviolet cameras that, thanks to sophisticated mounts, could be aimed steadily and precisely at any point on the Sun regardless of disturbances, such as those caused by crew movement. The instruments provided astronomers with thousands of remarkably detailed photographs of the Sun's surface and of solar flares. With the launch of Mir, which carried state-of-the-art instrumentation, photographs deeper into space became possible. Soviet cosmonauts conducted a photographic survey of galaxies and star groups using the Glazar telescope.
Because the telescope was pointed hundreds of millions of miles into deep space, far beyond the solar system, the amount of light being captured was so small that exposure times of up to eight minutes were required to capture enough light for a single photograph. Under such circumstances, even the slightest vibrations from astronaut movements could shake the space station and ruin the photograph. As a result, all astronauts were required to sit, strapped into chairs, during these long exposures. Of greatest excitement to astronomers today is a new generation of telescope, already built, tested, and secured on the ISS. This telescope, called the Submillimetron, is unique in three significant ways. First, as its name suggests, it detects and photographs light at submillimetre wavelengths, between infrared light and microwaves. This faint radiation was emitted billions of years ago, when the universe was first formed. Astronomers believe that these images, then, are of cosmic bodies formed close to the beginning of the universe. Second, such a unique and precise instrument is designed to operate at supercold temperatures, using liquid helium to chill the sky-scanning equipment, thereby increasing the sensitivity of the Submillimetron's telescopic gear by slowing the motion of the molecules. A third unique feature allows for normal crew activity at all times, despite the extreme sensitivity of the equipment and the extreme distances it photographs. The Submillimetron undocks from the ISS before it is used and then redocks for necessary maintenance. Astrophysicists interested in both the origin and ultimate fate of the universe are particularly interested in the Submillimetron's capabilities.
Investigating Environmental Hot Spots
Environmentalists and biologists recognize the value of space stations as a unique means to gain the broadest possible view of Earth as well as detailed views of particular environmental hot spots. When Earth is viewed from space through a variety of infrared and high-resolution cameras, natural resources can be identified, crops can be surveyed, and changes in the atmosphere and climate can be measured. Events on the surface, such as floods, oil spills, landslides, earthquakes, droughts, storms, forest fires, volcanic eruptions, and avalanches, can be accurately located, measured, and monitored. One of the earliest and most successful environmental projects carried out aboard a space station was the use of a scatterometer on Skylab. A scatterometer is a remote-sensing instrument capable of measuring wind speed and direction on Earth under all weather conditions. When it was activated on Skylab, the scatterometer captured wind speed and direction data once a second and transmitted the data back to Earth. Engineers analyzed the data and used it to forecast weather, warn ships at sea of approaching heavy storms, assist in oil spill cleanup efforts by accurately predicting the direction and speed the oil slick was taking, and notify manufacturers of hazardous chemicals of the safest times to ship their products. Mir also proved its value to environmental science. One of Mir's modules, called "Priroda," a Russian word meaning "nature," was launched in April 1996. Priroda carried equipment to study the atmosphere and oceans, with an emphasis on pollution and other forms of human impact on Earth. It also was capable of conducting surveys to locate mineral resources and underground water reserves as well as studies of the effects of erosion on crops and forests.
To accomplish these ambitious objectives, environmental engineers loaded Priroda with active, passive, and infrared sensors for detecting and measuring natural resources. It carried several types of spectrometers used for measuring ozone and fluorocarbon (the chemical found in many aerosols) concentrations in the atmosphere. At the same time, equipment monitored the spread of industrial pollutants, mapped variations in water temperatures across oceans, and measured the height of ocean waves, the vertical structure of clouds, and wind direction and speed. When the ISS went into space in 1998, environmental studies were high on the list of projects for the astronauts to work on. From the ISS orbit, 85 percent of Earth's surface can be observed. Continuously monitoring and investigating Earth from space with an impressive array of high-tech instrumentation, the ISS has facilitated the identification of many environmental problems.
The ISS Window
Designers of the ISS wished to add a special portal on one of the modules through which astronauts could gaze at and photograph Earth and neighboring planets. Gazing out into space was not new, but previous windows were made of glass that easily scratched, clouded, and discolored. In an effort to correct these defects, optical engineers created the Nadir window, named after the astronomical term describing the lowest point in the heavens directly below an observer. Mounted in the U.S. laboratory module element of the space station, the twenty-inch diameter Nadir window provides a view of more than 75 percent of Earth's surface, containing 95 percent of the world's population. Designed by Dr. Karen Scott of the Aerospace Corporation, the high-tech five-inch-thick window is actually a composite of four laminated panes consisting of a thin exterior "debris" pane that protects it from micrometeorites, primary and secondary internal pressure panes, and an interior "scratch" pane to absorb accidental interior impacts. Each has different optical characteristics. Scott headed a team of thirty optical engineers that used a five-hundred-thousand-dollar optical instrument to make fine calibration measurements on the window to ensure precise clarity free of distortion before installing it in the lab module. Tests conducted on the multiple layers of the window ensured that they would not distort under the varying pressure and temperatures common on the space station. After five days of extreme testing, the unique window was determined to have the characteristics that would allow it to support a wide variety of research applications, including such things as coral reef monitoring, the development of new remote-sensing instruments, and monitoring of Earth's upper atmosphere.
In 2001 the commander of the ISS, Frank Culbertson, shared with the British Broadcasting Corporation the many observations he and other astronauts had made after studying Earth's environment for four months. High above Earth, Culbertson made some startling observations: We see storms, we see droughts, we saw a dust storm a couple of days ago, in Turkey I think it was, and we have seen hurricanes. It is a cause for concern. Since my first flight in 1990 and this flight, I have seen changes in what comes out of some of the rivers, in land usage. We see areas of the world that are being burned to clear land, so we are losing lots of trees.
There is smoke and dust in wider spread areas than we have seen before, particularly as areas like Africa dry up in certain regions.26 Cutting-Edge Cell Research Since 2000, NASA has been conducting cellular research on board the ISS to take advantage of the weightless environment to study cell growth and the intricate and mysterious subcellular functions within cells. Traditionally, biologists study cells by slicing living tissue into sections of single-cell thickness. The drawback to this process, for as long as it has been practiced, is that the prepared specimens begin to die within a few hours as the cells begin to lose their ability to function normally. At best, researchers on Earth have only one day to scrutinize under microscopes the workings of minute structures within cells. The problem that occurs when single cells are removed from a living organ for examination is that microscopic structures crucial to the life of the cell collapse, causing the cell to cease functioning. This research has primarily focused on the functioning of cells in the human liver, the organ that regulates most chemical levels in the blood and breaks down the nutrients into forms that are easier for the rest of the body to use. In a weightless environment slices of liver one-cell thick remain healthy and active for up to seven days, a significant advantage for researchers in space over those working on Earth. According to Dr. Fisk Johnson, a specialist in liver disease under contract with NASA, "Space is the gold-standard environment for this cutting-edge cell research. Only in space, a true microgravity environment, will we be able to isolate and study each of the individual factors impacting cell function."27 Once this advantage was discovered, the question then arose of how medical researchers on Earth could gain the same advantage. That question was answered by medical laboratories working with NASA that developed a device called a rotating bioreactor, which is capable of simulating a weightless environment on Earth. The rotating bioreactor works by gently spinning a fluid medium filled with cells. The spinning motion neutralizes most of gravity's effects, creating a near-weightless environment that allows single cells to function normally rather than collapse as they would otherwise do. Utilizing the rotating bioreactor on Earth in the year 2002 scientists successfully accomplished long-term culturing of liver cells, which allows the cells to maintain normal functions for six days. One of the advantages of studying healthy cells for a long time is the ability to identify and match cellular characteristics to drugs that might cure particular diseases. According to Dr. Paul Silber, a liver specialist, "Our recent discoveries could lead to better, earlier drug-candidate screening, which would speed up drug development by pharmaceutical companies, and importantly, to a longer life for the 25,000 people every year waiting for a life-saving liver transplant."28 Creating Materials in a Weightless Environment The weightless environment on space stations was of as much interest to materials scientists as to any others. Scientists are interested in a variety of physical properties of materials, such as melting points, molding characteristics, and the combining or separating of raw materials into useful products. Before the first space stations, materials scientists performed simple experiments of very short duration aboard plummeting airplanes and from tall drop towers. 
Through these studies, scientists discovered that gravity plays a role in introducing defects in crystals, in the combination of materials, and in other processing activities requiring the application of heat. Until the advent of space stations, however, they were incapable of sustaining a weightless environment long enough to thoroughly study these phenomena. The advent of space stations allowed the study of new alloys, protein crystals for drug research, and silicon crystals for use in electronics and semiconductors. Materials scientists theorized that improvements in processing in weightlessness could lead to the development of valuable drugs; high-strength, temperature-resistant ceramics and alloys; and faster computer chips. One of the Mir components, the Kristall module, was partially dedicated to experiments in materials processing. One objective was to use a sophisticated electrical furnace in a weightless environment for producing perfect crystals of gallium arsenide and zinc oxide to create absolutely pure computer chips capable of faster speeds and fewer errors. Although they failed to create absolutely pure chips, the chips produced in orbit were purer than those they could create within Earth's gravitational field. More recently, fiber-optic cables are also being improved in weightlessness. Fiber-optic cables, vital for high-speed data transmission, microsurgery, certain lasers, optical power transmission, and fiber-optic gyroscopes, are made of a complex blend of zirconium, barium, lanthanum, aluminum, and sodium. When this blending is performed in a weightless environment, materials scientists are finding the resulting fibers to be more than one hundred times more efficient than fibers created on Earth. In 2002 the ISS began the most complex studies of impurities in materials and ways to eliminate them in a microgravity environment. One of the more interesting causes of impurities, for example, is bubbles. On Earth, when metals are melted and blended, bubbles form. According to materials scientist Dr. Richard Grugel, "When bubbles are trapped in solid samples, they show up as internal cracks that diminish a material's strength and usefulness."29 In a weightless situation, however, although bubbles still form, they move very slightly, and this reduces internal cracks. Secondarily, their slow movement allows researchers to study the effect of bubbles on alloys more easily and precisely. According to Dr. Donald Gillies, NASA's leader for materials science, the studies of bubbles and other mysteries of materials production hold promise for new materials: We can thank advances in materials science for everything from cell phones to airplanes to computers to the next space ship in the making. To improve materials needed in our high-tech economy and help industry create the hot new products of the future, NASA scientists are using low gravity to examine and understand the role processing plays in creating materials.30 For centuries, physicists and chemists have been experimenting on a variety of elements and metals to discover new compounds and to improve existing alloys. They have also been aware that their experimental results are often affected by the containers they use and by the instruments that measure those results. Such contamination often invalidates experiments. Even worse, containers can sometimes dampen vibrations in a material or cool the sample too rapidly, throwing the validity of the experiment into doubt.
In some cases, a metal is reactive enough to destroy its container, meaning that some materials simply cannot be studied on Earth. When the first space stations went into orbit, physicists and chemists seized on the opportunity to conduct experiments within a weightless environment. If materials could be suspended in space during experiments, without the need for containers and eliminating the variables that the containers themselves imposed, far more accurate results would be allowable. Initial results of such experiments answered many questions that could not have been resolved on Earth. Of particular interest was the property of metals in a liquid state that causes them to resist solidifying, even at temperatures where they would be expected to do so. This phenomenon is called nucleation. According to Dr. Kenneth Kelton, a physics professor at Washington University in St. Louis, "Nucleation is the major way physical systems change from one phase to another. The better we understand it, the better we can tailor the properties of materials to meet specific needs."31 Encouraged by the results of experiments carried out in space, engineers developed an apparatus on Earth that could duplicate a weightless environment for further research. NASA, joined by several private research companies, developed the electrostatic levitator (ESL), which is capable of suspending liquid metals without the sample touching the container and without the technicians handling equipment in ways that might alter results. Two practical applications using the ESL are the production of exceedingly smooth surfaces for computer and optical instrumentation and exceedingly pure metal for wires, making them capable of transmitting large volumes of data. Greenhouses in Space While materials scientists look to space station experiments in hopes of improving industrial processes on Earth, others are focused on investigating processes that might someday happen on a large scale in space. For example, botanists are studying the feasibility of crop cultivation on space stations in the belief that grains and vegetables may someday be needed in quantities large enough to supply deep space expeditions or even space colonies. To these ends, many experiments have been performed testing different gases, soils, nutrients, and seeds. One of them, called seed-to-seed cycling in a weightless environment, produced remarkably optimistic results. According to biologist Mary E. Musgrave: By giving space biologists a look at developmental events beyond the seedling stage, this experiment was an important contribution not only to gravitational biology, but also to the study of space life support systems. Data from this experiment on gas exchange, dry matter production and seed production provided essential information on providing a plant-based food supply for humans on long-duration space flights.32 Many of the botanical experiments in orbit have focused on the effects of weightlessness on plant growth and seed germination. Botanists had known for many years that seedlings on Earth display geotropism—that is, they respond to gravity by sending their roots down into the soil and stalks up above the ground. In addition, gravity affects the diffusion of gases given off by the plant, the drainage of water through soil, and the movement of water, nutrients, and other substances within the plant. Early experiments aboard Skylab were not encouraging for those who hoped to grow plants in space. 
For example, the experiments confirmed researchers' speculations that without gravity, the roots and stalks of plants could not correctly orient themselves. Some seedlings sent their roots above the soil and their stalks deep into the soil, with the result that they withered and died. And even those that did properly orient their roots and stalks often failed to produce seeds, a critical failure unanticipated by researchers. In the mid-1980s, botanists performed an experiment to understand how seeds might survive weightlessness. Scientists sent 12.5 million tomato seeds into space and kept them there aboard Mir for four years. In 1990 the seeds were planted by botanists; many were also given to schoolchildren so they could make science projects of germinating them. Botanists discovered that a slightly higher percentage of seeds from space germinated than did seeds that had been kept on Earth and that almost all produced normal plants. These results were achieved even though the seeds had been exposed to radiation while in space. A second significant experiment on the ISS sought to determine whether second-generation space plants would be as healthy as second-generation plants on Earth. Scientists analyzing the data concluded that the quality of second-generation seeds produced in orbit was lower than that of seeds produced on Earth, resulting in a smaller second-generation plant size. This diminished seed quality is believed to be caused by the different ripening mechanics inside the seed pod in weightlessness. With so much evidence pointing to weightlessness as a hostile environment for plant production, botanists are a bit uncertain of the future of agriculture in space. One potential solution being investigated on the ISS is to grow plants without soil, a process known as hydroponics. In this process, the plants grow in a nutrient-rich solution instead of soil. Whether hydroponics can solve the problem of large-scale horticulture, though, is still uncertain.
In addition to their promise for scientists, space stations from the very beginning were seen as having military value. During the Cold War, when the United States and the Soviet Union jockeyed for political and military advantage on Earth, each country also looked to space stations to give them battlefield superiority. Although neither nation actually placed offensive weapons on board their space stations, both sought to exploit space stations' potential for reconnaissance. All space stations have carried equipment capable of photographing objects 250 miles below. Photographs are detailed enough, for example, to allow analysts to determine the types and numbers of aircraft on aircraft carriers and to track troop movements on land. Yet military officials admit that so far, at least, space outposts can do little more than support more conventional military operations. At a meeting of the American Institute of Aeronautics held in Albuquerque, New Mexico, in August 2001, Colonel Steve Davis, an officer at Kirtland Air Force Base, said, "We're [the Air Force] still looking for that definitive mission in space; force enhancement is primarily what we're doing today." Davis added that there is increasing reliance on using space for military needs: "Space control is becoming more important as we have very high value assets in orbit. We depend on these assets and are interested in protecting them." Davis added that one of the Soviet Union's early piloted orbital stations had a rapid-fire cannon installed. The military outpost was armed, Davis said, "so they could defend themselves from any hostile intercepts."33 Even the ISS is seen by some participating nations as having military value. An intergovernmental agreement on the ISS was first put in place in 1988, resulting in an exchange of letters between participating countries involved in the megaproject. Those letters state that each partner in the project determines what a "peaceful purpose" is for its own element. According to Marcia Smith, a space policy expert at the Congressional Research Service, a research arm of the U.S. Congress, "The 1988 U.S. letter clearly states that the United States has the right to use its elements . . . for national security purposes, as we define them."34
When NASA and the Russian Space Agency negotiated the initial agreement for the construction, deployment, and utilization of the ISS, no one gave consideration to using it as a tourist destination. From the inception of the project, all countries involved considered the ISS to be an orbiting laboratory dedicated to the study of a variety of scientific experiments and observations. This somewhat parochial view was shaken in 2001 when the multimillionaire American businessman Dennis Tito expressed an interest in paying for a short vacation on the ISS to satisfy his own personal fascination with space. When NASA was notified of his interest and willingness to pay for a short visit to the spacecraft, his request was rejected on the grounds that the multibillion-dollar craft was for scientific purposes only. Recognizing that the Russians were short of money needed to continue their construction and launch costs, Tito approached them with an offer of $20 million. Brushing aside NASA's objections, the Russians required Tito first to complete the standard training program before launching on what most called the most expensive vacation ever. In May 2001, when Tito docked at the ISS, several important milestones were achieved. These included the fact that a middle-aged civilian astronaut could easily survive space travel, that a space-tourism market did indeed exist, and that there was no longer a valid reason to discount the notion of space tourism. Despite NASA's long-running opposition to his flight, which included preventing him from training with his Russian crewmates at the Johnson Space Center, a move that triggered a minor international incident, Tito said he enjoyed his eight days in space and hoped that NASA would be more supportive in the future.
One of the more perceptive observations made when the first space stations flew into orbit was the potential that these floating laboratories might provide for investigating and solving a multitude of scientific questions. To a great degree, those making these observations were correct. Nearly every branch of science jumped on the space station bandwagon with proposals to investigate a host of questions. As the twenty-first century pushes forward, many problems of living in space have been solved while others remain elusive. The question being asked more frequently than ever is whether the costs of the many space stations and their experiments have returned enough benefits to taxpayers to continue the space station program.
Source: "Research and Experiments." Lucent Library of Science and Technology: Space Stations. Encyclopedia.com. Retrieved November 14, 2018, from https://www.encyclopedia.com/science/technology-magazines/research-and-experiments
Did you know beeswax is made from honey? Bees collect nectar and pollen to make honey to feed the hive. As they eat honey, their bodies make wax. Chewing this wax with a little more honey, the bees build combs. When the time is just right, beekeepers open these “honey pantries” to collect the extra honey — and we collect combs. These combs are turned into pellets, which we use in the design of our Beeswax Candle Blend.
Bees Making Wax
It all begins on a flower in a field. Bees collect nectar from flowers and bring it to the hive where it becomes either beeswax or honey. A bee’s diet consists primarily of honey, and any honey not consumed by the bees or in the raising of brood is stored as surplus and is ultimately consumed in the winter months when no flowers are available. However, it is honey’s other use that interests us: its conversion into beeswax.
The production of beeswax is essential to the bee colony. It is used to construct the combs in which the bees raise their brood and into which they store pollen and surplus honey for the winter. Worker bees, which live only around 35 days in the summer, develop special wax-producing glands on their abdomens (inner sides of the sternites of abdominal segments 4 to 7) and are most efficient at wax production during the 10th through the 16th days of their lives. From about day 18 until the end of its life, a bee’s wax glands steadily decline. Bees consume honey (6-8 pounds of honey are consumed to produce a pound of wax), causing the special wax-producing glands to convert the sugar into wax, which is extruded through small pores. The wax appears as small flakes on the bees’ abdomens. At this point the flakes are essentially transparent and only become white after being chewed. It is in the mastication process that salivary secretions are added to the wax to help soften it. This also accounts for its change in color.
The exact process of how a bee transfers the wax scales from its abdomen to its mandibles was a mystery for years. It’s now understood to happen in either of two ways. Most of the activities in the hive are cooperative, so it should be no surprise that other worker bees are willing to remove the wax scales from their neighbors and then chew them. The other method is for the same bee extruding the wax to process her own wax scales. This is done using one hind leg to move a wax scale to the first pair of legs (forelegs). A foreleg then makes the final transfer to the mandibles, where the scale is masticated and then applied to the comb being constructed or repaired.
Beeswax becomes soft and very pliable if the temperature is too high (beeswax melts around 149 degrees Fahrenheit). Likewise, it becomes brittle and difficult to manage if the temperature is too low. However, honeybees maintain their hive at a temperature of around 95 degrees Fahrenheit, which is perfect for the manipulation of beeswax.
A honeycomb constructed from beeswax is a triumph of engineering. It consists of hexagonal, six-sided cells that fit naturally side by side. It has been proven that the hexagon is the most efficient shape for using the smallest possible amount of wax to contain the highest volume of honey. It has also been shown to be one of the strongest possible shapes while using the least amount of material. The color of beeswax comprising a comb is at first white and then darkens with age and use. This is especially true if it is used to raise brood.
Pigmentation in the wax can result in colors ranging from white through shades of yellow, orange, and red to darker tones all the way to brownish black. The color has no significance as to the quality of the wax (other than its aesthetic appeal). Formerly, wax was bleached using ionization, sulphuric acid, or hydrogen peroxide, which resulted in the inclusion of toxic compounds. Bleaching has now been abandoned by reputable candle manufacturers and other suppliers of beeswax. If beeswax has a medicinal smell, chances are that it has been chemically altered or bleached.
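The honeycomb-efficiency claim above (hexagonal cells hold the most honey for the least wax) can be sanity-checked with a little geometry. The snippet below is only an illustrative sketch, not anything from the source: it compares the wall length needed to enclose one unit of area for the three regular shapes that can tile a flat comb, and the function name and constants are our own.

```python
import math

def perimeter_for_unit_area(n_sides: int) -> float:
    """Perimeter of a regular n-sided polygon whose area is 1.

    Area of a regular polygon with side s: A = n * s^2 / (4 * tan(pi / n)).
    Solving for s with A = 1 and multiplying by n gives the perimeter.
    """
    side = math.sqrt(4 * math.tan(math.pi / n_sides) / n_sides)
    return n_sides * side

# Only triangles, squares, and hexagons can tile a flat comb without gaps.
for name, n in [("triangle", 3), ("square", 4), ("hexagon", 6)]:
    print(f"{name:8s}: perimeter per unit area = {perimeter_for_unit_area(n):.3f}")

# Approximate output:
# triangle: 4.559
# square  : 4.000
# hexagon : 3.722   <- least wall material per cell of storage
```

For the same storage area, the hexagon needs roughly 7% less wall than the square and about 18% less than the triangle, which is the sense in which the comb is wax-efficient.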
Soulful Creatures: Animal Mummies in Ancient Egypt In the ancient burial ground at Saqqara, Egypt, one animal cemetery alone has yielded over four million individual ibis mummies. And the nearby dog cemetery contained over seven million mummies, while additional sites throughout Egypt held the remains of countless cats, snakes, and various other creatures. This unusual aspect of ancient Egyptian culture and religion—the mummification of animals—has remained largely a mystery. This exhibition explores the religious purpose of these mummies, how they were made, and why there are so many. Animals were central to the ancient Egyptian worldview. In a rarity among ancient cultures, Egyptians believed that animals possessed souls. Since most species were thought to have a connection to a particular deity, after death the soul of a mummified animal could carry an individual’s message to a god. Yet not all animal mummies are what they seem. Scientific investigation proves that the corrupt burial practices alleged by some ancient texts were all too real. CT scans displayed in the exhibition uncover the empty wrappings, double mummies, and misleading packaging among the mummies that some priests sold to worshippers perhaps for profit. Drawn entirely from the Brooklyn Museum’s renowned collection, Soulful Creatures combines the tools of art history, archaeology, and forensic science to reveal the religion, commerce, and biology of animal mummies. Animals in Ancient Egyptian Culture Animals were integral to ancient Egyptian life, whether as beasts of burden, food sources, pets, or feared wild predators of the desert or marshes. As a result, they were ever-present in Egyptian religious, political, and artistic symbolism for over four thousand years. Individual species came to symbolize specific traits in gods or humans, such as the power of the lion, and seemed to embody both the dangerous and the beneficial forces of nature. Carrying this weight of meaning, animal images could be used in writing, magic, or healing. And the bodies of animals, mummified after death, could serve religious purposes. Though foreign contemporaries admired Egyptian culture in general, nonetheless, Hebrew, Greek, Roman, and early Christian writings condemned and ridiculed the Egyptian view of animals. But today, modern scholars have found that the Egyptian attitude toward the animal world was in fact more complex and nuanced than their neighbors acknowledged, as the objects shown in this section of the exhibition demonstrate. Types of Animal Mummies The Egyptians preserved animals as mummies for several different purposes. Rarely, mummies of pets were buried with their owners; the best-known examples are from within the royal family. More commonly, mummified farm or game animals served as food offerings for the deceased and were included in tombs. Also, particular sacred animals, which were considered a god incarnate, were mummified at death and buried much like royalty. The vast majority of animal mummies, however, were votive: animals prepared for burial so that their souls would be set free to deliver messages to the gods. Kings and Mummies The king and queen sponsored cults of sacred animals as one of their basic royal responsibilities and ensured that a divine animal was mummified. The royal government also undertook to guarantee the integrity of votive mummy manufacture and burial. 
Royal regulations controlled the large institutions that housed the living animals, converted the slain animals to mummies, and buried them in the large animal cemeteries throughout Egypt. Making Votive Mummies Votive animal mummies were made not only from domesticated animals, such as cats and dogs, but also from wild animals, including crocodiles, snakes, and birds. Some animals were specifically raised for mummification in temples. Royal regulations concerning these votive mummies specified that one body per package was ideal. However, there are sometimes multiple animals, parts of animals, or no animal at all in the package. The corpse was dried using natron, a naturally occurring salt, and wrapped in linen. Priests could deposit the linen bundle directly in niches carved into the cemetery’s limestone walls, or they could place it first in a coffin. The coffin could resemble the shape of the animal or assume other symbolic shapes, such as an obelisk or cartouche. Coffins of pottery, wood, or bronze added to the economic value, and perhaps to the efficacy, of the mummy. Votive Animal Mummies: Messengers to the Gods In Egyptian religion, many species of animals each had a relationship with one or more deities. This relationship allowed the soul of the animal, upon release from the earthly body, to act as a messenger sent by a human to a god. These messages could be written on papyrus or linen, recited orally, or perhaps a combination of the two. Complaints are the most common features of letters addressed to the gods. Writers complain about a variety of subjects, most often a crime committed against him or her. But the complaint can also address sickness, a deplorable state of affairs at work, an injustice within a family, perjury in court, or libel against the writer. In redress of their grievances, worshippers therefore ask the god to intervene and provide long life, improved health, better working conditions, enriched relations with parents, swift return of stolen goods, and immediate protection from evil spirits. Clearly, they expected to see results. Scientific Study of Animal Mummies In preparing for this exhibition, the Conservation Laboratory at the Brooklyn Museum analyzed a number of the animal mummies to determine their composition, content, and age. Certain testing methods allowed scientists to see inside the mummies, such as X-radiography and computed tomography (CT), which revealed previously hidden contents and uncovered how the mummies were made. The use of stereomicroscopes and ultraviolet radiation let scientists examine materials on the surface more closely. X-ray diffraction (XRD) and gas chromatography (GC) were employed to identify more accurately the embalming resins and other substances found on the animal mummies. Radiocarbon or carbon 14 dating was done on several samples of the linen wrappings, to compare the physical age of the linen with the supposed age of the mummies. With these tools, modern scientific testing has refined and enhanced art-historical and archaeological observation and greatly added to our understanding of ancient animal mummies. In certain instances, testing has shown that there are no remains at all within an animal mummy’s wrappings. In other cases, the presence of a different animal from the one advertised, or fragments of several different animals, are now confirmed by scientific methods. 
For example, of the four ibis-shaped mummies displayed nearby, two contain complete ibis mummies while the third contains snakes and the fourth contains shrews. Though we cannot be sure that this does not have another religious explanation, what we have learned through advanced imaging of these mummies suggests that despite the standard accounting practices established by Egyptian kings, some corrupt priests may have cheated worshippers and the institutions that prepared and buried the animal mummies. April 12, 2017: Soulful Creatures: Animal Mummies in Ancient Egypt, opening September 29, 2017, is the first major exhibition on the topic. Soulful Creatures: Animal Mummies in Ancient Egypt is the first major exhibition to focus on one of the most fascinating and mysterious aspects of ancient Egyptian culture and religion—the mummification of animals. Organized by the Brooklyn Museum and drawn from its renowned Egyptian collection, the exhibition clarifies the role animals, and images of animals, played in the Egyptian natural and supernatural world through 30 mummified birds, cats, dogs, snakes, and other animals and more than 65 objects related to the ritual use of animal mummies. Soulful Creatures is on view from September 29, 2017, through January 21, 2018. Excavated in the nineteenth and twentieth centuries, from at least thirty-one different cemeteries, the animal mummies on display cover Egyptian history from as early as 3000 B.C.E. until the Roman period at the end of the second century C.E. “While the exact significance of animal mummies has largely remained a mystery, they are the most numerous type of artifact preserved from ancient Egypt,” stated Edward Bleiberg, Senior Curator, Egyptian, Classical, and Ancient Near Eastern Art. “For example, over four million individual ibis mummies have been found at an ancient burial ground in Saqqara, and a nearby dog cemetery yielded over seven million mummies. Soulful Creatures explores the purpose of these mummies, how they were made, and why there are so many.” The four kinds of animal mummies that ancient Egyptians produced include pet, victual (mummified food placed in the tomb for the afterlife), divine, and votive (sacred offerings that communicated directly with a deity). Animals were central to the ancient Egyptian worldview and most had connections to a particular deity. The ibis and the dog are two sacred votive examples that acted as messengers to the Egyptian gods Thoth and Anubis, respectively. After death, mummified animals served a variety of religious purposes, allowing the animals’ souls to carry messages to the gods. These messages were often sent through accompanying handwritten letters that frequently requested good health for a sick relative or help with problems at work. The exhibition even includes an example from a child who complained to a particular god about their parent’s behavior. “These mummies were made in a carefully controlled process that resembled human mummification. It just shows how Egyptians thought of animals, on some basic level, as being very similar to human beings. They regarded animals as creatures created by the gods and believed they possessed a soul, which was unusual for an ancient culture,” said Bleiberg. One of the most elaborate and expensive animal mummies in the exhibition is the Ibis Mummy from the Early Roman Period.
The extraordinary artifact is lavishly wrapped in linen strips that are dyed and woven together to form a herring bone pattern and includes a wooden beak and crown to make the ibis recognizable in its wrappings. By drawing on archaeology, cultural history, and modern medical imaging—done by the Brooklyn Museum with Dr. Anthony Fischetti of the New York Animal Medical Center—Soulful Creatures also reveals that many animal mummies are not what they seem. Scientific investigation of the mummies produced some surprising results and confirmed corruption in animal cemeteries that some contemporaneous texts allege. CT scans displayed in the exhibition analyze how animal mummies were made and what they contain, uncovering the empty wrappings, double mummies, and misleading packaging among some of the mummies that the priests sold to worshippers. Soulful Creatures illuminates recent scientific tests that have uncovered key information about animal mummification and illustrates how Egyptologists today investigate the many provocative theories proposed—like corruption in animal cemeteries—to explain the practice, origins, techniques, and rituals of animal mummification. Soulful Creatures: Animal Mummies in Ancient Egypt is organized by Edward Bleiberg, Senior Curator, Egyptian, Classical, and Ancient Near Eastern Art, and Yekaterina Barbash, Associate Curator of Egyptian Art, Brooklyn Museum. The accompanying book is published by the Brooklyn Museum in association with D. Giles Ltd, London. The exhibition was organized by the Brooklyn Museum and debuted at the Bowers Museum in Santa Ana, California, in March 2014 and then toured to the Memphis Brooks Museum of Art in Memphis, Tennessee, in October 2014. The Brooklyn Museum is the final venue of the tour.
AUSTIN (KXAN) — NASA announced this week that the Hubble Space Telescope detected what might be a wandering ‘black hole’ nearly 5,000 light-years away in the Milky Way Galaxy. The discovery led NASA to believe that the nearest black hole may be only 80 light-years away. The closest star to our own, Proxima Centauri, is about 4 light-years from Earth. The wandering object was detected in the Carina-Sagittarius spiral arm of the Milky Way galaxy. Earth is located in the Orion spiral arm. It is moving at around 100,000 mph. NASA says at that speed, the object could travel from the Earth to the Moon in around three hours. It took three days for humans to travel that same distance aboard Apollo 11. Is it a black hole or… something else? Two teams worked together to locate the object: one led by Kailash Sahu of the Space Telescope Science Institute in Baltimore, Maryland, and another led by Casey Lam of the University of California, Berkeley. They disagree on what it may be: a black hole or maybe a star. Black holes cannot be seen with a traditional telescope. However, Hubble was able to detect the gravity-warping effects caused by the object. We detect these effects when an object passes in front of a star, because it literally bends the light of the star. Based on how the star’s light is altered, we can determine the size of the object moving in front of it. If the light is altered significantly, then it is likely a black hole. If it is only altered slightly and the color of the star changes, then it is likely another star. Lam’s team believes the object is likely a star, while Sahu’s team believes it is a black hole. The debate surrounds the teams’ methodology. Lam’s team used Hubble while the other team didn’t. How are black holes created? Black holes are born from destruction. They are created when a massive star dies. These stars are huge; each one is around 20 times the mass of our sun. As they die, they explode in what is called a supernova. According to NASA, what’s left after that explosion is then crushed under its own gravity. That gravity is so intense it then sucks everything in around it: even light and time itself. Wild, right?!? When that supernova happens, the kickback from the explosion can then hurl the black hole into space. Are we in danger of being sucked up by a black hole? Likely, no. While the black hole is moving pretty fast, it’s not moving fast enough to reach our solar system anytime soon. The likelihood of Earth being hit by a black hole, one of the study’s authors told Newsweek, is relatively low.
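The “anytime soon” reassurance is easy to check from the figures quoted in the article (a distance of nearly 5,000 light-years and a speed of around 100,000 mph). The sketch below is our own back-of-the-envelope arithmetic, not NASA’s calculation; the constants are standard approximations.

```python
MILES_PER_LIGHT_YEAR = 5.879e12   # approximate
HOURS_PER_YEAR = 24 * 365.25

distance_ly = 5_000        # rough distance quoted for the wandering object
speed_mph = 100_000        # speed quoted by NASA

distance_miles = distance_ly * MILES_PER_LIGHT_YEAR
travel_years = distance_miles / speed_mph / HOURS_PER_YEAR
print(f"Time to cover {distance_ly:,} light-years: ~{travel_years:,.0f} years")
# Roughly 33 million years -- "anytime soon" is safe.

# The Earth-to-Moon comparison checks out the same way:
moon_distance_miles = 238_855
print(f"Earth to Moon at that speed: ~{moon_distance_miles / speed_mph:.1f} hours")
# About 2.4 hours, which the article rounds to "around three hours".
```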
Astronomers have made the most precise observations of a pulsar’s mass, size and surface yet. The observations of a pulsar almost 1,100 light-years away in the constellation Pisces were made using NASA’s Neutron Star Interior Composition Explorer (NICER) X-ray telescope aboard the ISS. Pulsars are neutron stars that emit pulsating high-energy radiation from their magnetic poles as they rotate at ultra-fast speeds. The particles on the surface are accelerated in the magnetic field and create hotspots on the surface. These hotspots are brighter than the rest of the surface. In their observations of the pulsar J0030+0451 (J0030), researchers found three hotspots in the southern hemisphere of the dead star. Researchers used two methods to map the hotspots on J0030, both of which gave similar measurements of mass and size. In the first method, researchers found that the pulsar weighs 1.3 solar masses with a diameter of about 25.4 kilometers; by the second method, it weighs 1.4 solar masses and is about 26 kilometers wide. As NASA Goddard put it in a December 14, 2019 tweet: “Scientists reached a new frontier in understanding pulsars, the dense, whirling remains of exploded stars. The NICER X-ray instrument on @space_station produced the 1st dependable measurements of both the mass and size of a pulsar, including a surface map.” The researchers were able to make accurate measurements because of the 20 times better precision provided by the NICER X-ray telescope. Cole Miller from the University of Maryland explained, “NICER’s unparalleled X-ray measurements allowed us to make the most precise and reliable calculations of a pulsar’s size to date, with an uncertainty of less than 10%.” Through computer simulations, researchers again used two different processes to locate the hotspots. One team of researchers considered circular hotspots while the other team considered oval hotspots. They found two circular hotspots, but observed three oval hotspots, two of which were in the same place as the circular counterparts, with an additional smaller and cooler hotspot. The science lead of NICER, Zaven Arzoumanian from NASA’s Goddard Space Flight Center, explained, “It’s remarkable, and also very reassuring, that the two teams achieved such similar sizes, masses and hot spot patterns for J0030 using different modeling approaches.”
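One reason a simultaneous mass-and-size measurement is prized is that together they fix the star’s compactness, the quantity that interior (equation-of-state) models are tested against. The snippet below is purely illustrative arithmetic of our own, not the NICER teams’ analysis: it converts the two reported solutions into the dimensionless compactness 2GM/(Rc^2).

```python
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
SOLAR_MASS = 1.989e30  # kg

def compactness(mass_solar: float, diameter_km: float) -> float:
    """Dimensionless compactness 2GM / (R c^2); 1.0 would be a black hole."""
    mass_kg = mass_solar * SOLAR_MASS
    radius_m = diameter_km * 1_000 / 2
    return 2 * G * mass_kg / (radius_m * C**2)

# The two J0030 solutions reported in the article.
for mass, diameter in [(1.3, 25.4), (1.4, 26.0)]:
    print(f"M = {mass} Msun, D = {diameter} km -> 2GM/Rc^2 = "
          f"{compactness(mass, diameter):.2f}")
# Both come out near 0.3; a black hole's event horizon would sit at 1.0.
```

Both solutions land near 0.3, about a third of the way to the value that would mark an event horizon, which is why even percent-level changes in the measured radius matter so much for interior models.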
based on the writings of Harav Kook In 1751, the Pennsylvania Assembly ordered a special bell be cast, commemorating the 50th anniversary of William Penn’s ‘Charter of Privileges.’ The Speaker of the Assembly was entrusted with finding an appropriate inscription for what later became famous as the Liberty Bell. The best expression of freedom and equality that the speaker could find was the Biblical verse describing the Jubilee year: “You will blow the shofar on the tenth day of the seventh month; on Yom Kippur you will blow the shofar in all your land. You shall sanctify the fiftieth year, proclaiming freedom to all its inhabitants.” (Lev. 25:9–10) The triumphant announcement of the Jubilee year, with blasts of the shofar, takes place on the tenth of Tishrei. This date is Yom Kippur, the Day of Atonement. Yet, this is a curious date to announce the new year. The Jubilee year, like any other year, begins on the first of Tishrei, on Rosh Hashanah. Why was the formal proclamation of the Jubilee year postponed until Yom Kippur, ten days later? National Sabbath Rest The Jubilee year is a super-Sabbatical year. Like the seventh year, agricultural labor is prohibited, and landowners forego all claims on produce grown during that year. The Jubilee also contains two additional aspects of social justice: the emancipation of slaves and the restoration of land to its original owner. Just as the Sabbath day allows the individual to rest, so too the Sabbatical and Jubilee years provide rest for the nation. The entire nation is able to take a break from competition and economic struggle. The Sages noted that the phrase “Sabbath to God” appears both in the context of the weekly Sabbath and the Sabbatical year. Both are designed to direct us towards spiritual growth: the Sabbath on the individual level, and the Sabbatical year on the national level. Healing Rifts in Society The Talmud in Rosh Hashanah 8b relates that during the first ten days of the Jubilee year, the slaves were not sent home. Nor did they work. They would feast and drink, celebrating their freedom ‘with crowns upon their heads.’ Only after the court blew the shofar on Yom Kippur would the newly freed slaves return home. The freeing of slaves in the Jubilee year serves as an important safeguard for social order. Societies that rely on slave labor usually suffer from slave revolts and violent acts of vengeance by the underclass.1 Instead of attaining social justice through bloody revolt and violent upheaval, the Jubilee emancipation allows for peaceful and harmonious social change. The restoration of rights for the poor and disadvantaged becomes an inherent part of the societal and economic order. Most significantly, during their final days of servitude, the freed slaves celebrate together with their former masters. The Torah also obligates the master to send off his servants with generous presents (ha’anakah). These conciliatory acts help heal the social and psychological wounds caused by socio-economic divisions and class estrangement. The national reconciliation reaches its peak on Yom Kippur, when the shofar exuberantly proclaimed freedom and equality. Atonement for the Nation Thus, the formal announcement of the Jubilee year is integrally connected to Yom Kippur. On that year, the Day of Atonement becomes a time of forgiveness and absolution, not only for the sins of the individual, but also for the sins of society. (Gold from the Land of Israel, pp. 213-215. Adapted from the Forward to Shabbat HaAretz, p. 9.)
Through a wide range of topics, students are able to develop a number of key historical skills, such as research, analysis and evaluation. History is taught through a range of resources and techniques, providing interesting, engaging and challenging lessons for students. We also aim to provide opportunities for students to become interested in History beyond the classroom through a number of trips. We hope to inspire students to become interested in the past and to see how past events have shaped the world today. Years 7, 8 and 9 In Years 7 and 8, students develop a solid understanding of British History. Year 7 begins with a short topic on the Romans, aimed at developing historical skills. Students learn how to analyse sources and develop their chronological skills. For the remainder of Year 7, students develop these key skills through exploring the topic of Medieval England. In Year 8, students build on these historical skills and investigate key areas of British History, such as the Tudors, the English Civil War and the British Empire. Students are assessed at the end of each half term, either through a test or a piece of extended writing on the topic they have just learnt. Students will also have an end-of-year assessment, covering content from all the topics studied throughout the year. Each term students will complete an extended project for homework. During Year 9, students develop a range of historical skills that they learnt in previous years and build upon these skills in order to prepare them for GCSE. Students develop these skills through investigating key events of the Twentieth Century. These events include the First World War, the Russian Revolution, the Second World War and the Cold War. Students are assessed at the end of each half term, either through a test or a piece of extended writing on the topic they have just learnt. Students will also have an end-of-year assessment, covering content from all the topics studied throughout the year. Students receive regular pieces of homework, building upon content and skills covered in lessons. In Year 9, students have the opportunity to learn beyond the classroom by attending the Imperial War Museum trip. Years 10 & 11 At GCSE, students must choose to study History or Geography, with many opting to study both. GCSE History covers a broad range of topics and time periods. Students study four key topics: Crime and Punishment in Britain, The Cold War and Superpower Relations, Elizabeth I, and the USA (the Civil Rights movement and the war in Vietnam). Students develop historical skills such as analysing sources, forming an argument, interrogating evidence and understanding interpretations. Lessons in Years 10 and 11 are structured carefully to fully prepare students for the vast content, skills and revision techniques needed for success at this level. Revision sessions and intervention sessions are run in the build-up to exams. Students follow the Edexcel specification. Students will be examined by end-of-topic tests and mock exams throughout the two years of GCSE study. Students will receive regular homework tasks building upon content and skills covered in lessons, many of which will be exam questions. Careers & Future Study GCSE History prepares students for future careers or further study in a number of ways. It develops skills such as organisation, research, analysis, evaluation and communication.
These skills are vital in many areas, such as law, policing, social work, teaching and business, to name just a few. History at A Level is a popular choice, with many students choosing to study it further at university. Students study four units: Communist China, Communist Russia and the Changing Nature of British Warfare, plus a coursework investigation on the Holocaust. Students broaden their historical skills: they learn to debate, reach a sustained argument and question interpretations. A Level History can prepare students for work in many fields including media, journalism, law and the civil service. Students follow the Edexcel specification. Subject Leader: Mrs E Seymour. Miss G Cairns, Miss L English, Mr M Pembroke, Mr K Price.
NEWS FLASHSubject: Social Studies Grade Levels: 6 through 12 Sunshine State Standards: View all Sunshine State Standards - Grades 6-8 - SS.A.2.3.2, 3.3.4 - Grades 9-12 - SS.A.1.4.4, 3.4.9 The world was a very different place in 1933. The United States was in the midst of the Great Depression and Europe was at the beginning of a terrible conflict that would ultimately encompass the whole planet. Select a handful of students to research each year from 1933 to 1945. During each day of class, have the students make a newspaper headline from a specific year to hang up for the rest of the students to see and discuss. Each headline should go in chronological order starting with 1933. If the researched material comes from actual newsprint of that period (like the New York Times) show your class the front page and talk about the other news that may be on it. Your students may be surprised to learn that the suffering and horror that the Jewish people were going through was not getting the press that it should have. The camp at Terezin was a prime example of the way the Nazis tried to hide the Jewish tragedy from outside eyes. Ask the students what they could do to be better informed about world affairs. - The New York Times, Page One 1896-1996. Commemorative ed. New York: Galahad Books, 1996. A Teacher's Guide to the Holocaust Produced by the Florida Center for Instructional Technology, College of Education, University of South Florida © 1997-2013.
It's right there in our name: The Planetary Society. But what is a planet? This seemingly simple question is the subject of much debate. We at The Planetary Society love all worlds and advocate for their exploration, whether they’re big or small, hot or cold, traveling alone or orbiting another world. As with all words, the meaning of “planet” has evolved over time, and will continue to change in the future. What “planet” means partially depends on who is talking and what definition they find most useful. Before we discuss definitions of the word, let’s consider the diversity of worlds that have been called planets in the past. A short history of planets “Planet” is a word used by the ancient Greeks to describe stars, visible to the naked eye, that moved in relation to the fixed, background stars. The word "planet" comes from the Greek word "planetes," which means "wanderer," and likely has more ancient origins. We’ll never know when humans first noticed that some stars moved while most did not, nor what name they first called those wandering stars. The ancient Greeks believed that Earth was at the center of the universe, and the planets—which included the Sun and Moon—revolved around us on fixed, concentric spheres. Over time, philosophers and scientists from Copernicus to Kant to Hubble modified this perspective until Earth was viewed as just one of many planets, orbiting an average star that, itself, orbited the distant center of the Milky Way, which was one of many galaxies. While Mercury, Venus, Mars, Jupiter, and Saturn have been known since antiquity, Uranus wasn’t discovered until 1781, orbiting 20 astronomical units (AU) from the Sun, doubling the size of the then-known solar system. Between 1801 and 1808, astronomers found 4 new worlds much closer to home: Ceres, Pallas, Juno, and Vesta, all orbiting the Sun between Mars and Jupiter, at distances ranging from 2 to 4 AU. By the time a fifth world, Astraea, was discovered in 1845, astronomers referred to them as “asteroids,” “small planets,” or “minor planets,” and considered them to be a subset of planets, just like rodents are a subset of mammals. The planet Neptune was discovered in 1846, expanding the size of the known solar system again, this time to 30 AU. Pluto was also named a planet when it was discovered in 1930. Pluto’s orbit was highly unusual: very elongated, markedly tilted, and in a dance with Neptune such that Pluto orbits the Sun twice for every 3 times Neptune does. The rhythmic motions occasionally bring Pluto even closer to the Sun than Neptune (such as from 1979-1999), but the 2 planets are never near each other in space, nor will they ever be. At first, Pluto’s mass was estimated to be similar to Earth’s, but those estimates shrank over time. We now know it is only 0.2% as massive as Earth. By the 1950s, scientists began to agree that asteroids formed differently and were intrinsically different than the rest of the planets. Improvements in the understanding of the origin of planetary systems led scientists to realize that Jupiter’s powerful gravity had so perturbed this region of space that no worlds larger than Ceres have been able to survive intact there. Because of this distinction, usage of the term "small planets" and "minor planets" to name asteroids plummeted. Asteroids were no longer considered a subset of planets, and most people alive today grew up learning there were 9 planets: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune, and Pluto. 
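The “twice for every 3 times” rhythm described above follows directly from Kepler’s third law, which ties a body’s orbital period to its average distance from the Sun. Here is a quick, hedged check; the semi-major-axis values are standard textbook approximations rather than figures taken from this article.

```python
def orbital_period_years(semi_major_axis_au: float) -> float:
    """Kepler's third law for bodies orbiting the Sun: P^2 = a^3
    (P in years, a in astronomical units)."""
    return semi_major_axis_au ** 1.5

neptune = orbital_period_years(30.1)   # ~165 years
pluto = orbital_period_years(39.5)     # ~248 years

print(f"Neptune: {neptune:.0f} yr, Pluto: {pluto:.0f} yr")
print(f"Period ratio Pluto/Neptune: {pluto / neptune:.2f}")  # ~1.50, i.e. 3:2
```

A ratio of about 1.5 means Neptune completes 3 orbits in the time Pluto completes 2, which is exactly the resonance that keeps the two worlds from ever meeting.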
Robotic spacecraft reveal a diversity of worlds Starting in 1959, we began visiting other planets, moons, asteroids, and comets with spacecraft. Space missions turned these worlds from points of light into places. Suddenly, it wasn’t just astronomers studying other worlds: geologists, atmospheric scientists, physicists, and other scientists could look at the landforms, atmospheres, and interiors of other worlds, and compare them to each other and to Earth. Mercury, Venus, the Moon, and Mars all had clear similarities to Earth: they are rocky worlds possessing volcanoes, fractured crusts, and (for Mars and Venus) atmospheres with weather and climate. Asteroids proved to be lumpy worlds that appeared very different from planets, as predicted. The giant planets—Jupiter, Saturn, Uranus, and Neptune—were clearly distinct from everything else, with their gassy envelopes, ring systems, and families of moons. And to many scientists’ surprise, these giant planets’ moons were quite planetlike: Jupiter’s Io had active volcanoes, Europa and Ganymede had fresh icy surfaces crisscrossed by grooves, and Saturn’s Titan had a thick atmosphere. These satellites all joined Earth’s Moon as targets of interest to people who called themselves planetary scientists. Suddenly, it was possible to have a career as a planetary geologist. (A few people, noting that “geo-” signifies Earth, advocated for calling the study of Moon and Mars rocks “selenology” and “areology.” However, the idea of naming the same science differently for each world that had rocks never caught on, so “geo-” is used in the general sense.) Discovery of the third zone of the solar system In 1992, humanity’s view of the solar system began to expand again when astronomers found another object in Pluto’s neighborhood, 1992 QB1, now named Albion. The next year, 5 more such worlds were discovered. By the turn of the millennium, more than 200 worlds were known to travel within the Kuiper belt—the region of our solar system beyond Neptune that stretches from Neptune’s orbit at 30 AU to a sharp edge at 50 AU. (Why is the edge sharp? We don’t yet know.) Some of these newly found worlds were quite large, bigger than Ceres. All early-discovered Kuiper belt objects were fainter and therefore smaller than Pluto, but in 2005 a team of scientists announced the discovery of a world we now call Eris that was brighter than Pluto. For many years it seemed that Eris was larger than Pluto, but we now understand that although Eris is more massive than Pluto, it’s just slightly smaller. Other trans-Neptunian worlds that are definitely larger than Ceres are Makemake, Haumea, Gonggong, Pluto’s moon Charon, and probably Quaoar and Sedna. Was Eris a planet too? If it was, what about other large Kuiper belt worlds like Makemake and Haumea? What about Sedna, which lies beyond the Kuiper belt and hints at the possibility of more undiscovered, very distant worlds, one or more of which could turn out to be even larger than Pluto? If Eris was not a planet, then how could Pluto still be called a planet? Or perhaps Eris and all the other newly discovered round worlds should be considered planets? We could classify the worlds of the solar system in many different ways: by size (mass or diameter), composition (metal, rocky, icy, gassy, or combinations thereof), location (orbiting near or far from the Sun, alone or circling another world), whether or not it has an atmosphere, magnetic field, oceans, weather, present-day geologic activity, and more. 
What combination of qualities makes something a planet? If a scientist is interested in impact craters or subsurface oceans, does it matter if a world orbits the Sun or another planet? An Astronomer’s Definition of “Planet” The International Astronomical Union (IAU) is an international organization with more than 10,000 members that fosters collaboration among astronomers. Among the IAU’s activities is the organization of committees of its members that assign names to celestial bodies, providing a consistent terminology used across scientific publications by most of the world’s astronomers who publish in English. Following the discovery of Eris in 2005, the IAU attempted to clear up the confusion surrounding planethood by voting on a new definition for planets at its General Assembly in 2006. At the end of the General Assembly, a majority of the remaining 500 members still present for the vote passed the IAU planet definition resolution. The resolution said that a planet must: - Orbit the Sun. - Be round or nearly round due to gravity. - "Clear the neighborhood” around its orbit. Furthermore, the IAU-adopted definition stated that if an object meets the first two criteria but not the last, and is also not a satellite of another planet, it is a “dwarf planet” and that “planets and dwarf planets are two distinct classes of objects.” That is, dwarf planets are not planets. The first 2 criteria (orbit the Sun and be round) are easy to understand, but the third has caused confusion. The IAU resolution did not include a definition of what it meant to “clear the neighborhood.” Most astronomers interpret the phrase to mean that a planet is gravitationally dominant in its region of space. Another way to put this is: a planet is a world that has dramatically more mass than anything else that orbits near it. Indeed, if you graph the mass of the solar system’s worlds against their distance from the Sun, there are 8 worlds in our solar system that stand dramatically above everything else. For example, although many trans-Neptunian objects (including Pluto) have orbits that cross Neptune’s, Neptune is so massive and has so much greater gravity than any other nearby object that it controls the positions and periods of these smaller worlds’ orbits. According to this astronomical definition, Neptune is a planet and these other worlds are not planets (though they may be dwarf planets). Likewise, Earth’s orbit is constantly crossed by asteroids and meteoroids—we are hit by meteoroids every day—but Earth is so comparatively large that no asteroid that strikes Earth will affect Earth’s orbit in a noticeable way, making Earth a planet and the Earth-crossing asteroids not planets. There are two other aspects of the IAU definition that are worth mentioning: - The IAU definition specifically applies only to our own solar system. It does not apply to the worlds that orbit other stars, known as exoplanets. - IAU planethood depends on a world’s closeness to the Sun: smaller worlds close to the Sun, like Mercury, are able to sweep their orbits effectively, but if Mercury were located far from the Sun, it might not clear its neighborhood and would thus be a dwarf planet. Many people, including those who felt passionately that Pluto should be considered a planet because of its special place in popular culture, disputed the IAU’s 2006 definition of planethood. Others pointed out some complications of applying it. 
These arguments are well summarized in an article written by David Grinspoon in 2015, explaining the logic behind a simpler definition, which would be the same as the IAU definition, minus the “clear the neighborhood” requirement: “A planet is a gravitationally rounded object that is orbiting a star” and is not itself a star. The Geoscientist's Perspective on Planets Not only astronomers study planets, of course. Now that spacecraft routinely explore our solar system, other kinds of scientists can study planets, too. Planetary scientists are interested in a world’s physical properties and history. A planetary volcanologist studies volcanoes whether they’re on Earth, Venus, or Io; an atmospheric scientist might study polar weather on Mars, Titan, Saturn, or Pluto. All these types of science can be described with the umbrella term “geoscience”—the science of worlds. It doesn’t necessarily matter to a geoscientist interested in planetary processes like volcanism and weather, if a world orbits the Sun or a planet. You’re a planetary scientist even if you mostly study moons. So a geoscientist might want a different definition of “planet” than one that’s useful to astronomers. Geoscientists care about the intrinsic properties of a world like its surface landforms, mass distribution, and composition more than the location of the world. Mass is the most useful predictor of the kinds of physical processes that can happen on a world, regardless of where it is in the solar system. Composition, location, and history are also important, but you can predict a lot about a world if you only know its mass. At the large end of the mass scale, pretty much everyone agrees that there’s a sharp boundary between planets and stars. Stars are objects that are (or used to be) capable of nuclear fusion. Planets are too small ever to have fused atoms. But what is the boundary between planets and not-planets at the small end of the size range? Put another way, what’s the geophysical difference between a lumpy asteroid like Bennu and a small planet like Mercury? To figure out where that boundary lies, and how geoscientists might classify planets, it helps to know what planetary scientists have learned about the variety of worlds in our solar system, and which ones experience planetary processes like volcanism, tectonics, and weather. Small-mass objects like most asteroids can’t hang on to atmospheres and don’t have internally driven geology. They are often loosely-assembled piles of rubble with large, empty void spaces between their component blocks. They don’t make their own geology; change on small asteroids is caused by external forces like impacts and solar radiation. They are not considered to be planets by anyone. With enough mass, a world’s self-gravity can crush itself under its own weight, closing up the pore spaces found in rubble-pile asteroids. Gravity acts to pull materials from high places to low ones, smoothing out highs and filling in lows. The more mass an object has, the more its gravity is able to reshape it. Larger worlds tend toward a spherical shape, perhaps with a bulge at the equator due to its rotation. This transition from lumpy to round happens between about 400 and 600 kilometers, depending on a world’s composition. An object that is mostly made of ice will crush itself and become round at a smaller diameter than a world mostly made of rock, because ice is weaker than rock. 
Saturn’s moon Mimas (about 400 kilometers across) is the smallest icy world we have visited that is round, though Neptune’s slightly larger icy moon Proteus (420 km across) is non-spherical. Asteroid 2 Pallas (about 510 kilometers) is the smallest rocky world that appears to be nearly round. Unfortunately, there are no worlds closer than the Kuiper belt with diameters between those of Pallas and Ceres (a mixed rock-ice dwarf planet about 950 kilometers in diameter), so we can’t see how the lumpy-to-round transition plays out across different types of icy and rocky worlds without much better knowledge of the shapes of very distant Kuiper belt objects. Worlds in the size range from Saturn’s moon Mimas (about 400 kilometers) to Earth (12,700 kilometers) are made mostly of iron, silicate rock, and water ice, but the relative proportions of these components depend on where in the solar system the world formed. Worlds that formed close to the Sun are mostly rock and metal, while those that formed far from the Sun are mostly rock and ice. Round worlds experience geology driven by internal heat. The heat drives geologic activity like volcanism and tectonics. Larger worlds can maintain geologic activity for billions of years. Worlds with an internal molten layer may have an internally generated magnetic field. Worlds with atmospheres can have climate and weather. For worlds with more mass than Earth, another planetary transition happens. The four largest worlds of our solar system—Neptune, Uranus, Saturn, and Jupiter—were massive enough to collect large amounts of ices and gases as they formed. Their materials are squeezed to unimaginable pressures at very high temperatures. These worlds may have no solid surfaces at all, moving from gas to liquid to more exotic, high-pressure forms of matter like superionic water and metallic hydrogen deep inside them. Their materials create powerful magnetic fields. Our solar system doesn’t contain any bodies with masses between those of Jupiter and the Sun. But we know from studying stars that, at a mass of 10 or 15 times that of Jupiter, the pressures and temperatures inside a world become so large that they can begin to fuse atoms. Things that fuse atoms are usually considered to be stars, not planets. The very smallest atom-fusing entities, called brown dwarfs, are not quite stars, and run out of their fuel fairly quickly. We have none of these almost-stars in our solar system, but brown dwarfs are common in our galaxy. It takes 75 times the mass of Jupiter for a star to ignite long-lasting fusion. Our own star, the Sun, is medium-sized, about 1000 times the mass of Jupiter. What Are the Geophysical Planets? Therefore, the word “planet” as used in planetary geology implies: a planet is a world too small to be a star (meaning it never produced nuclear fusion) and is big enough to be round due to its self-gravity. The transition from not-round to round is gradual, unlike the sharp transition from planet to star. From the Sun out to Neptune, there are about 30 worlds that satisfy this definition of “planet”: - The 8 big ones: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune. - The 18 or 19 round moons: Earth’s Moon; Jupiter’s Io, Europa, Ganymede, and Callisto; Saturn’s Mimas, Enceladus, Tethys, Dione, Rhea, Titan, and Iapetus; Uranus’ Miranda, Ariel, Umbriel, Titania, and Oberon; Neptune’s Triton and possibly Proteus. - A few of the 4 largest asteroids: Ceres and possibly Pallas and Hygeia, but possibly not Vesta. 
Beyond Neptune, it’s harder to count because it’s difficult to measure diameters of worlds that are so small and far away. Here is a partial list of trans-Neptunian planets that have been discovered as of February 2020 that are probably large enough to be round, though in many cases their diameters and roundness are uncertain. There are definitely at least 30, and possibly more than 100—with lots more that remain undiscovered. - 5 that are empirically proven to be round: Pluto, Charon, Eris, Makemake, and Haumea. - 6 that are known to be large enough that their roundness is virtually certain: Gonggong, Quaoar, Sedna, Orcus, 2002MS4, and Salacia. - 17 that are probably larger than 600 kilometers in diameter and so are very likely to be round, only a few of which have been formally named: Varuna, Varda, Ixion, G!kunll'homdima, Chaos, 2002AW197, 2003AZ84, 2013FY27, 2002UX25, 2004GV9, 2005RN43, 2014UZ224, 2005UQ513, 2018VH18, 2014 EX51, 2015RR245, and 2010RF43. - More than 100 that are estimated to be in the size range of 400 to 600 kilometers, many but not all of which might be round. Types of Geophysical Planets Geoscientists most often lump worlds into classifications according to composition, which influences what kinds of processes happen there. Our solar system contains many different kinds of planets: - Gas giants (Jupiter and Saturn) have similar compositions to the Sun: mostly hydrogen and helium. They have extensive systems of rings and moons. Deep inside, the hydrogen is under such intense heat and pressure that it becomes metallic and conducts electricity. The movement of this metallic hydrogen is what helps gas giants generate enormous, strong magnetic fields. Metallic hydrogen may also have dissolved the original rocky cores of gas giants; there may be no distinct rocky or metal center. - Ice giants (Neptune and Uranus) are smaller and have less hydrogen and helium than the gas giants. They, too, have extensive systems of rings and moons. They contain hydrogen and helium but are made mostly of water, methane, and ammonia. These substances, which planetary scientists call “ices,” are in gaseous form near the visible surfaces of the ice giants but further down are compressed into an exotic liquid–solid substance called superionic water. Ice giants are not massive enough to create the pressures necessary for metallic hydrogen to exist, so they probably have large rock and metal cores. - Terrestrial planets (Earth, Venus, Mars, Mercury, the Moon, Io, and possibly Pallas and Vesta) are made mostly of metal and rock and have (or had) volcanoes that erupt liquid rock. They have very little hydrogen, helium, or ices, because most of these worlds formed too close to the Sun for those materials to solidify on their surfaces. In the case of the inner planets like Earth, comets and asteroids have delivered icy materials like water to their surfaces. Europa formed with some ice; Io probably did, too, but has lost it due to its internal heat and Jupiter’s magnetic field stripping it all away. - Dwarf and Satellite planets, or “icy planets” (Ceres, possibly Hygiea, all the other giant-planet moons, and trans-Neptunian objects) are made mostly of a mix of rock and ices. Some have distinct metal cores; most don’t. Some are differentiated and some are not. They have solid icy surfaces, have (or used to have) internal saltwater oceans, and have (or used to have) volcanoes that erupt liquid water. 
A few have atmospheres and weather; on such cold worlds, it’s possible for clouds, rain, rivers, and ice to be made of methane or nitrogen. There is a lot of overlap between terrestrial and icy worlds. Some that are usually thought of as icy worlds, like Eris and Europa, are actually mostly rock and metal with only a thin veneer of ice and/or water on top. Others, like Ceres, have their ice and rock all mixed together, not in distinct layers. Many icy worlds have internal saltwater oceans. Most metal-rich worlds, including Europa and Ganymede, have solid-metal inner cores and liquid-metal outer cores just like Earth, but some, like Io and Mercury, have nearly fully molten cores. The diversity is impressive. There are two “missing” sizes of planets that our solar system does not contain, but which are common in exoplanetary systems. Super-Earths and sub-Neptunes (terms coined and widely used by astronomers and planetary scientists) occupy a range of masses from 2 to 10 times that of Earth. They have an enormous variety of densities and probably an enormous variety of compositions. There might be such planets that are mostly rocky, rocky with gas layers, rocky with deep oceans, and more. Proposed Geophysical Planet Definitions Immediately after the IAU took their vote on their planet definition, planetary scientists like Mark Sykes pointed out that the definition did not take into account “the intrinsic nature of planets that sets them apart from other categories of objects,” in other words, their physics. Since then, 2 proposed geophysical planet classifications have appeared in the conference literature, though neither have received formal peer review: - In 2013, David Russell defined a planetary classification taxonomy relying primarily on composition (rock, ice, and/or gas), designed to apply both to solar system planets and exoplanets. His taxonomy could be supplemented by applying dynamical classes (that is, by grouping things according to whether they orbit the Sun or planets, inside or outside a belt of other objects) in the case of the worlds of the solar system. He also proposed the idea of supplemental classes for those planets that include biology, and those that do not. - In 2017, Kirby Runyon and coauthors defined a geophysical planet definition: “A planet is a sub-stellar mass body that has never undergone nuclear fusion and that has sufficient self-gravitation to assume a spheroidal shape adequately described by a triaxial ellipsoid [be round] regardless of its orbital parameters.” They argued to professionals that the proposed definition is better for planetary science education and took their argument to the public, including in an article published in Astronomy magazine in 2018. More to Explore Many of the most compelling questions that we have about the science and exploration of our solar system—Where did we come from? Are we alone in the universe? Can we prevent dangerous asteroids from impacting Earth?— may be answered by visiting worlds that have never been named “planets.” Large or small, with or without a solid surface, active or quiescent, stormy or hazy, orbiting a star or another world, all these places are worth exploring. 
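To make the contrast between the IAU’s dynamical definition and the geophysical definitions described above concrete, here is a minimal sketch of the two classification rules. It is an illustration of the logic only, not an official implementation; the field names and example data are our own simplifications.

```python
from dataclasses import dataclass

@dataclass
class World:
    name: str
    orbits_sun: bool           # directly orbits the Sun (not a moon)
    is_round: bool             # pulled into a spheroid by its own gravity
    clears_neighborhood: bool  # gravitationally dominant in its orbital zone
    fuses_atoms: bool = False  # true for stars and brown dwarfs

def iau_class(w: World) -> str:
    """2006 IAU definition: planet, dwarf planet, or something else."""
    if w.orbits_sun and w.is_round and w.clears_neighborhood:
        return "planet"
    if w.orbits_sun and w.is_round:
        return "dwarf planet"
    return "small solar system body / satellite"

def geophysical_class(w: World) -> str:
    """Runyon-style geophysical rule: round + never fused = planet,
    regardless of what it orbits."""
    return "planet" if w.is_round and not w.fuses_atoms else "not a planet"

for w in [
    World("Earth", True, True, True),
    World("Pluto", True, True, False),
    World("Titan", False, True, False),   # orbits Saturn, not the Sun
    World("Bennu", True, False, False),
]:
    print(f"{w.name:6s}  IAU: {iau_class(w):38s}  geophysical: {geophysical_class(w)}")
```

Under the IAU rules Pluto and Titan drop out for different reasons (neighborhood-clearing and orbiting a planet, respectively), while the geophysical rule keeps both and excludes only bodies, like Bennu, that are too small to pull themselves round.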
- Harry McSween and coauthors published an excellent introductory textbook in 2019 titled Planetary Geoscience that is suitable for advanced high school or introductory college level courses, or for space enthusiasts - Mike Brown’s regularly updated list of trans-Neptunian object diameters indicates which are certainly, likely, or just possibly round and are thus dwarf planets - The JPL small-body database browser can be used to visualize the orbits of trans-Neptunian objects as well as asteroids, centaurs, and periodic comets, and the page for each object includes physical data like mass and diameter (if known) - The JPL Horizons database hosts lists of physical parameters of planets, satellites, and certain smaller bodies - Phil Metzger’s: “The Reclassification of Asteroids from Planets to Non-Planets” examines the history of the word “planet” in scientific discourse - Kirby Runyon and coworkers presented their planet definition at the Lunar and Planetary Science Conference in 2017 and again at the Pluto System After New Horizons meeting in 2019. Here is their 2017 poster presentation - David Russell’s geophysical planet classification is an unpublished manuscript, but he did publish a peer-reviewed article pointing out that Earth’s Moon meets all requirements of the IAU’s planet definition - Ron Ekers presented to the IAU’s centenary symposium in 2018 about how some obscure rule changes for IAU meetings helped to cause considerable behind-the-scenes drama surrounding the status of Pluto and the planet definition This page was first published on 21 April 2020. It was written by Emily Lakdawalla and other Planetary Society staff members. We thank Jim Bell, Kirby Runyon, Paul Byrne, and David Grinspoon for their helpful reviews.
How does Shakespeare present the characters in 'The Merchant of Venice?' In the literature exam, you may be asked to write about how a particular character is presented by Shakespeare in an extract from 'The Merchant of Venice'. You will also need to know the play well enough to be able to write about a character and how it is presented by Shakespeare in the wider play. To achieve a higher mark, you will need to show that you are analysing the character. To show this, you will need to: - comment on some of the choices of language/structure/dramatic devices used by Shakespeare to present the character - comment on how the presentation of the character is used to reflect the context in which Shakespeare was writing You should always refer to your own text when working through these examples. These quotations are for reference only.
A stream bed or streambed is the bottom of a stream or river (bathymetry) or the physical confine of the normal water flow (channel). The lateral confines or channel margins are known as the stream banks or river banks during all but flood stage. Under certain conditions a river can branch from one stream bed to multiple stream beds. A flood occurs when a stream overflows its banks and flows onto its flood plain. As a general rule, the bed is the part of the channel up to the normal water line, and the banks are that part above the normal water line. However, because water flow varies, this differentiation is subject to local interpretation. Usually, the bed is kept clear of terrestrial vegetation, whereas the banks are subjected to water flow only during unusual or perhaps infrequent high water stages and therefore might support vegetation some or much of the time. The nature of any stream bed is always a function of the flow dynamics and the local geologic materials influenced by that flow. With small streams in mesophytic regions, the nature of the stream bed is strongly responsive to conditions of precipitation runoff. Where natural conditions of either grassland or forest ameliorate peak flows, stream beds are stable, possibly rich with organic matter, and exhibit minimal scour. These streams support a rich biota. Where conditions produce unnatural levels of runoff, such as occurs below roads, the stream beds will exhibit a greater amount of scour, often down to bedrock, and banks may be undercut. This process greatly increases watershed erosion and results in thinner soils upslope from the stream bed as the channel adjusts to the increase in flow. The stream bed is very complex in terms of erosion. Sediment is transported, eroded and deposited on the stream bed. The majority of sediment washed out in floods is "near-threshold" sediment that has been deposited during normal flow and only needs a slightly higher flow to become mobile again. This shows that the stream bed is left mostly unchanged in size and shape. Beds are usually what is left once a stream is no longer in existence, and they are often well preserved even if they are buried, because the walls and canyons cut by the stream are usually hard, while soft sand and debris fill the bed. Dry stream beds can also become underground water pockets (in the case of buried stream beds), can be flooded by heavy rains or by water rising from the ground, and may sometimes be part of the rejuvenation of the stream.
creepy crawly creatures Read, write, and identify words beginning with the /cr/ blend. - Creepy crawly creatures (graphics/small plastic toys) - Creepy Crawly Creature Worksheet State and Model the Objective Tell the children that today they are going to find some creepy crawly creatures around the room as they practice the /cr/ blend. Play a variation of a hot and cold game - Choose one child to be “it”, and have him or her wait in the hall. - Choose another child to hide a small plastic toy or a picture of a creepy crawly creature in a crack or a crevice, and make sure all the children know where it is hidden. - Invite the first child back into the room to find the picture. - Have the entire class help the first child find the picture or toy by saying /kr/, /kr/, /kr/ loudly if he or she is close to it or softly if the child is far away from it. - Optional variation: Have the child finding the picture creep or crawl as they look for the hidden picture or toy. - Repeat as often as desired. Read target words - Write the following text on the board: - Finding creepy crawly creatures is fun. - I can creep and crawl. - Read the text together as a class, emphasizing each /cr/ blend. - Choose different children to circle each /cr/ blend in the text. Write about the activity - Help the children create a word wall of words that begin with the /cr/ blend. - Give each child a Creepy Crawly Creature Worksheet. - Instruct the children to write a sentence in each box of the worksheet using at least 1 /cr/ word per sentence. 1. CCSS.ELA-LITERACY.RF.1.2.B: Orally produce single-syllable words by blending sounds (phonemes), including consonant blends. Creepy Crawly Creatures
We are learning about Information Reports. We are learning to be communicators when we organise our thoughts into simple, complex and compound sentences. We are learning to synthesise our information into sentences. Open the example and watch it as many times as you need to in order to understand what to do. Then open the student response sheet. Then use the words to create your own sentences. How many different types of sentences can you make with the words given? Once you have made a few sentences, use the button to annotate your sentence while thinking about verbs, nouns, pronouns, proper nouns, common nouns, adjectives, conjunctions, subordinate clauses and main clauses. Then tap the button to read your sentences, ensuring that you're paying attention to punctuation. When you have finished, tap the button to save your work.
Obsessive-Compulsive Disorder (OCD) and Related Disorders in Children and Adolescents
Obsessions often involve worry or fear of being harmed or of loved ones being harmed (for example, by illness, contamination, or death). Compulsions are excessive, repetitive, purposeful behaviors that children feel they must do to manage their doubts (for example, by repeatedly checking to make sure a door is locked), to prevent something bad from happening, or to reduce the anxiety caused by their obsessions. Behavioral therapy and drugs are often used in treatment. (See also Overview of Anxiety Disorders in Children and Adolescents and Obsessive-Compulsive Disorder in adults.)
On average, obsessive-compulsive disorder (OCD) begins at about age 19 to 20 years, but about 25% of cases begin before age 14. The disorder often lessens after children reach adulthood.
Obsessive-compulsive disorder includes several related disorders:
- Body dysmorphic disorder: Children become preoccupied with an imagined defect in appearance, such as the size of their nose or ears, or become excessively concerned with a slight abnormality, such as a wart.
- Hoarding: Children have a strong need to save items regardless of their value and cannot tolerate parting with the items.
- Trichotillomania (hair pulling)
Some children, particularly boys, also have a tic disorder.
Genes and environmental factors are thought to cause OCD. Studies to identify the genes are being done. There is some evidence that infections may be involved in a few cases of OCD that begin suddenly (overnight). If streptococci are involved, the disorder is called pediatric autoimmune neuropsychiatric disorder associated with streptococcal infections (PANDAS). If other infections (such as Mycoplasma pneumoniae infection) are involved, the disorder is called pediatric acute-onset neuropsychiatric syndrome (PANS). Researchers continue to study the connection between infections and OCD.
Typically, symptoms of obsessive-compulsive disorder develop gradually, and most children can hide their symptoms at first. Children are often obsessed with worries or fears of being harmed—for example, of contracting a deadly disease or of injuring themselves or others. They feel compelled to do something to balance or neutralize their worries and fears. For example, they may repeatedly do the following:
- Check to make sure they turned off their alarm or locked a door
- Wash their hands excessively, resulting in raw, chapped hands
- Count various things (such as steps)
- Sit down and get up from a chair
- Constantly clean and arrange certain objects
- Make many corrections in schoolwork
- Chew food a certain number of times
- Avoid touching certain things
- Make frequent requests for reassurance, sometimes dozens or even hundreds of times per day
Some obsessions and compulsions have a logical connection. For example, children who are obsessed with not getting sick may wash their hands very frequently. However, some are totally unrelated. For example, children may count to 50 over and over to prevent a grandparent from having a heart attack. If they resist the compulsions or are prevented from carrying them out, they become extremely anxious and concerned. Most children have some idea that their obsessions and compulsions are abnormal and are often embarrassed by them and try to hide them. However, some children strongly believe that their obsessions and compulsions are valid. OCD resolves after a few years in about 5% of children and by early adulthood in about 40%.
In other children, the disorder tends to be chronic, but with continuing treatment, most children can function normally. About 5% of children do not respond to treatment and remain greatly impaired. Doctors base the diagnosis of OCD on symptoms. Several visits may be needed before children with OCD trust a doctor enough to tell the doctor their obsessions and compulsions. For OCD to be diagnosed, the obsessions and compulsions must cause great distress and interfere with the child's ability to function. If doctors suspect that an infection (such as PANDAS or PANS) may be involved, they usually consult with a specialist in these disorders. Cognitive-behavioral therapy, if available, may be all that is needed if children are highly motivated. If needed, a combination of cognitive-behavioral therapy and a type of antidepressant called a selective serotonin reuptake inhibitor (SSRI) is usually effective for OCD. This combination enables most children to function normally. If SSRIs are ineffective, doctors may prescribe clomipramine, another type of antidepressant. However, it can have serious side effects. If treatment is ineffective, children may need to be treated as inpatients in a facility where intensive behavioral therapy can be done and drugs can be managed. If streptococcal infection (PANDAS) or another infection (PANS) is involved, antibiotics are usually used. If needed, cognitive-behavioral therapy and the drugs typically used to treat OCD are also used.
Using an ultrafast, ultraprecise laser, researchers have taken a step towards a fuller understanding of how the body triggers the complex process of healing wounds. In a sharp and pointy world, wound healing is a critical and marvelous process. Despite a tremendous amount of scientific study, however, many outstanding mysteries still surround the way in which cells in living tissue respond to and repair physical damage. One prominent mystery—one that the new research may begin to solve—is exactly what triggers wound-healing. A better understanding of this process is essential for developing new and improved methods for treating wounds of all types.
Previous research had determined that calcium ions play a key role in wound response. That is not surprising, because calcium signaling has an impact on nearly every aspect of cellular life. So, the researchers targeted cells on the back of fruit fly pupae that expressed a protein that fluoresces in the presence of calcium ions. This allowed them to track changes in calcium ion concentrations in the cells around wounds in living tissue (as opposed to the cell cultures used in many previous wound response studies) and to do so with unprecedented millisecond precision.
The team created microscopic wounds in the pupae's epithelial layer using a laser that can be focused down to a point small enough to punch microscopic holes in individual cells (less than a millionth of a meter). The laser's precision allowed them to create repeatable and controllable wounds. They found that even the briefest of pulses in the nanosecond to femtosecond range produced a microscopic explosion called a cavitation bubble powerful enough to damage nearby cells. "As a result, the damage the laser pulses produce is quite similar to a puncture wound surrounded by a crush wound—blunt force trauma in forensic terms—so our observations should apply to most common wounds," says first author Erica Shannon, a doctoral student in developmental biology at Vanderbilt University.
The researchers were testing two prevailing hypotheses for the wound-response trigger. One is that damaged and dying cells release proteins into the extracellular fluid, which surrounding cells sense, causing them to boost their internal calcium levels. This increased calcium concentration, in turn, triggers their transformation from a static to a mobile form, allowing them to begin sealing off the wound. The second hypothesis proposes that the trigger signal spreads from cell to cell through gap junctions, specialized intercellular connections that directly link two cells at points where they touch. These are microscopic gates that allow neighboring cells to exchange ions, molecules, and electrical impulses quickly and directly. "What is extremely exciting is that we found evidence that cells use both mechanisms," says Shannon. "It turns out cells have a number of different ways to signal injury. This may allow them to differentiate between different kinds of wounds."
The experiments revealed that the creation of a wound generates a complex series of calcium signals in the surrounding tissue:
- First comes a rapid influx of calcium into the cells immediately around the wound. This matches the footprint of the cavitation bubble. Calcium levels in the extracellular liquid are much higher than they are within the cells.
Because of the rapidity with which it occurs (less than a tenth of a second), the researchers argue that this influx is caused by micro-tears in cell membranes ripped open by the force of the micro-explosion.
- Next, a short-lived, short-ranged wave spreads through healthy neighboring cells. The bigger the wound, the faster the wave spreads. The speed with which the wave moves suggests that it travels through gap junctions and is made up either of calcium ions or some other small signaling molecule.
- About 45 seconds after wounding, a second wave appears. This wave moves much more slowly than the first wave but spreads considerably farther. The researchers interpret this to mean that it is being spread by larger molecules, most likely special signaling proteins, that diffuse more slowly than ions. They caution, however, that further experiments are required to confirm this supposition. The second wave only occurs when cells are killed, not when they are just damaged, suggesting that it is dependent on the extent of the damage.
- The first two waves spread relatively symmetrically through the tissue. After the second wave, however, the area of high calcium concentration begins sending out "flares"—directional streams of calcium uptake that spread farther into the surrounding tissue. Each flare lasts for tens of seconds and new flares continue starting for more than 30 minutes after the injury.
"Once we understand these trigger mechanisms, it should be possible to find ways to stimulate the wound healing process in people with conditions, like diabetes, that slow down the process or even to speed up normal wound healing," says Shane Hutson, a professor of physics and biological sciences at the university. The researchers report their findings in a paper in the Biophysical Journal. Grants from the National Institutes of Health and the National Science Foundation supported the research. Source: Vanderbilt University
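The researchers' inference about what carries each wave rests on how quickly differently sized molecules can spread. As a rough illustration only (this sketch is not from the study, and the diffusion coefficients are order-of-magnitude assumptions), simple one-dimensional diffusion shows how much farther a small ion travels than a large signaling protein over the same time:

```python
import math

# Rough, assumed diffusion coefficients in cm^2/s (order-of-magnitude values,
# not measurements from the study); real values vary by molecule and tissue.
D_ION = 1e-5       # small ion such as Ca2+ in a cytoplasm-like medium
D_PROTEIN = 1e-7   # large signaling protein

def rms_spread_um(diffusion_coefficient, seconds):
    """Root-mean-square distance (micrometers) covered by 1D diffusion after `seconds`."""
    return math.sqrt(2 * diffusion_coefficient * seconds) * 1e4  # cm -> micrometers

for t in (0.1, 1.0, 45.0):
    ion = rms_spread_um(D_ION, t)
    protein = rms_spread_um(D_PROTEIN, t)
    print(f"t = {t:5.1f} s: ion ~ {ion:6.1f} um, protein ~ {protein:5.1f} um")
```

On these assumed values, an ion spreads roughly ten times farther than a protein over the same interval, which is why a wave carried by ions or small molecules through gap junctions is expected to move much faster than one carried by diffusing proteins; the range of each wave also depends on how long the signal persists, which simple diffusion alone does not capture.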
Hodgkin disease, or Hodgkin Lymphoma, is a type of lymphoma, a cancer of the immune cells of the blood system.1 Certain white blood cells called lymphocytes are normal parts of the immune system of the body, the system responsible for recognizing and mounting a defense against germs (bacteria, viruses, fungi, etc.). Lymphocytes are white blood cells that circulate in the blood stream as well as migrate to areas of the body where germs enter, such as the lining of the mouth, nares, throat, intestinal tract and skin. In addition, lymphocytes collect in small, bean-shaped structures called lymph nodes located throughout the body, as well as in the spleen, bone marrow and a special organ in the chest called the thymus. Lymphocytes are a major source of the defense system against germs that enter the body.
Hodgkin Lymphoma is a rare cancer that occurs in ~2.7 people per 100,000/year in the US. This means there are about 8,260 cases per year in the United States (a short back-of-the-envelope version of this calculation appears after the staging list below). Incidence rates vary around the world but are similar to the US in most countries, ranging from 1-4/100,000.
Hodgkin Lymphoma is believed to develop from a lymphocyte that has had an error in the DNA program of the cell, an error that leads to an advantage in survival and abnormal growth. This cell, called a Reed-Sternberg cell, is the cancer cell of Hodgkin Lymphoma. Reed-Sternberg cells also produce substances called cytokines, which further promote the growth of Reed-Sternberg cells. Scientists are not certain of the cause that leads a normal lymphocyte to become a malignant Reed-Sternberg cell. More on this topic can be found at the American Cancer Society website at https://www.cancer.org/cancer/Hodgkin-lymphoma.html.
There are two kinds of Hodgkin Lymphoma: Classical Hodgkin Lymphoma and Lymphocyte Predominant Hodgkin Lymphoma. Both types are cancer (malignant), which means the cancer cells can spread to other parts of the body. Hodgkin Disease usually starts in one of the lymphatic regions and then, over time, spreads progressively to other sites. Symptoms are usually enlargement of one or many regional areas of lymphatic tissue (enlarged lymph nodes, enlarged spleen, etc.). Enlargement is usually one-sided at first and then spreads to other areas. Other symptoms, called "B" symptoms, can occur, which include fevers that persist or come and go, drenching night sweats, and weight loss (>10% of body weight). Other symptoms such as fatigue, generalized itching, and decreased appetite may occur but are not specific.
A diagnosis of Hodgkin Lymphoma results from a surgical sampling of the enlarged lymph tissue and examination by a tissue expert called a pathologist. Staging (determination of the extent of spread in the body) is determined with radiology testing, including a CT scan and a PET (Positron Emission Tomography) scan. Treatment is chemotherapy, and regimens are selected by a medical oncologist (cancer specialist) in collaboration with the patient based on stage, symptoms and type (classical vs. Lymphocyte Predominant). Radiation treatments may also be part of the treatment regimen.
Hodgkin Lymphoma disease extent in the body is categorized by a system called staging. A common staging system used is called the Ann Arbor Staging system. The principal stage is determined by the location of the tumor2:
- Stage I indicates that the cancer is located in a single region, usually one lymph node and the surrounding area. Stage I often will not have outward symptoms.
- Stage II indicates that the cancer is located in two separate regions, an affected lymph node or organ and a second affected area, and that both affected areas are confined to one side of the diaphragm—that is, both are above the diaphragm, or both are below the diaphragm.
- Stage III indicates that the cancer has spread to both sides of the diaphragm, including one organ or area near the lymph nodes or the spleen.
- Stage IV indicates diffuse or disseminated involvement of one or more extralymphatic organs, including any involvement of the liver, bone marrow, or nodular involvement of the lungs.
Treatment for Hodgkin Lymphoma is based upon staging and the absence ("A") or presence ("B") of specific B symptoms. More advanced disease (Stages III and IV, and disease with B symptoms) is treated with more aggressive chemotherapy than disease that is more localized (Stages I-II). For more information on treatment regimens, please refer to the American Cancer Society site https://www.cancer.org/cancer/hodgkin-lymphoma.html. Prognosis and outcome are determined by stage (extent of cancer spread), symptoms, and treatment regimen. Patients with more advanced-stage cancer will require more aggressive therapy. However, even in more advanced-stage Hodgkin Lymphoma, the chances for a good outcome and cure are very good. Treatment success by stage can also be found on the American Cancer Society Website.
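As referenced above, the quoted annual case count follows from the incidence rate once a population figure is assumed. The sketch below is purely illustrative; the population value is an assumption of mine (roughly the US population at the time such statistics were compiled), not a number given in the text.

```python
# Back-of-the-envelope check of the quoted figure of ~8,260 US cases per year.
incidence_per_100k = 2.7             # new Hodgkin lymphoma cases per 100,000 people per year (from the text)
assumed_us_population = 306_000_000  # assumed population figure; not stated in the text

cases_per_year = incidence_per_100k / 100_000 * assumed_us_population
print(f"Estimated cases per year: {cases_per_year:,.0f}")  # roughly 8,260
```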
Bringing Characters to Life
Learn how to create 3-dimensional characters and bring them to life in the reader's imagination. Having the routine illustrated and easy to see will help your second graders remember how to start each day independently.
Finish the Story Writing Worksheets
In these writing practice worksheets, students practice both reading and writing. Revision is a necessary skill for writers; students who expect to revise their work will develop the habit of proofreading. Writer's Workshop is a teaching technique that invites students to write by making the process a meaningful part of the classroom curriculum. Seuss, owls, orange and teal, minions, and superheroes. Writes simple sentences: "There was one fish." In the Second Grade this teaching technique allows students the opportunity to develop expression, revision strategy and skill in writing. One student writes the beginning of a story and then passes it on to a friend who writes only the middle. My parents love getting notifications that their child was recognized for something they were doing right!
Fall Stationery - This file includes two color and two black and white decorated papers, lined and unlined for drawing.
Stationery and Writing Paper
Friendly Spider Paper - This file includes three styles of writing paper adorned with cute spiders.
On these worksheets, students learn to improve their writing by finishing the story, responding to questions, writing in practical situations, arguing a position, and writing creatively. Teachers may use the editing process to individually encourage students to revise further and attempt more and more challenging writing and to guide students to develop the plot and focus of the story. Make your excuses as original and wild as possible. If you can make people care about your characters, they'll care about your stories.
Autumn Acrostic Poem - Write a poem about this season using the letters in the word autumn.
So, these worksheets are intended to be completed and then reviewed by a competent educator. I would recommend the course to anyone. They will have the opportunity to practice their handwriting and grammar as well as learn to be grateful when others think of them with a gift or through a kind act. Then, they try to finish it using their own words. Write an advert selling a boa constrictor as a family pet.
More Writing Ideas - Web Resources - References
Introduction - Grade 2
Depending on your class situation and available time, Writer's Workshop activities can be a useful and meaningful extension to TeachersFirst's online instructional units. If you want to write a non-fiction book, write a letter to your future self. Switch Persona: Write a mini-story in the first person. Our hope is that these activities will create a workshop-like environment that fosters feedback and collaboration in your writing classroom. Writes nouns, verbs in simple past tense, prepositions and plurals. But all of those ten minute efforts will add up. This course is amazing. The child receives a new page after he or she has met with a peer, written text with possibly a basic idea web, illustrated if illustrations are part of the story, and reread the previous first draft page to the teacher. The ECD-IV Grade 2 classroom was selected because it served as a single entity to examine how the teacher was teaching creative writing in a natural setting.
Second Grade Writing Worksheets & Printables
In second grade, young writers begin to develop complex writing abilities, building on growing vocabularies, spelling knowledge, and comprehension. Our themed writing prompts and exercises will help kids enrich their language skills and imaginations.
HOW TO TEACH CREATIVE WRITING (Source - http://cheri197.com)
General creative writing skills. Once learned, the activities serve as tools that your students can keep using in primary-grade mini-lessons. Start a writing club to bring together students who already enjoy writing.
Here are ten of the best creative writing exercises to inspire you to start (and finish) that book. 1. 7x7x7: Find the 7th book from your bookshelf (or digital library). Open it up to page 7. Look at the 7th sentence on the page. Begin a paragraph that begins with that sentence and limit the length to 7 lines. Repeat.
Writing standards for second grade define the knowledge and skills needed for writing proficiency at this grade level. By understanding 2nd grade writing standards, parents can be more effective in helping their children meet grade level expectations.
NASA is set to launch a new satellite, the Ionospheric Connection Explorer (ICON), into orbit Wednesday night on the Northrop Grumman Pegasus XL rocket. The agency's hope is to better understand the ways in which the far outer atmosphere is affected by space weather and Earth-based turbulence. ICON is planned for a two-year mission circling 360 miles above the Earth.
What is the ionosphere?
The ionosphere is the outermost layer of the atmosphere, stretching from 30 miles to 600 miles above the surface. This part of the atmosphere is important because it is what makes radio communications and GPS navigation possible. As stated by NASA: "Pressure differences created by weather near Earth's surface can propagate into the very highest reaches of the upper atmosphere and influence the winds in this region. The exact role these winds—and by extension, terrestrial weather—play in shaping the ionosphere remains an outstanding question, and one that scientists hope ICON will answer." (Exploring the Ionosphere, Earth's Interface to Space | NASA)
Why is it important?
Unpredictable changes in the ionosphere can interfere with communication and navigation here at the surface. Changes in the electric current in the ionosphere can put strain on surface-based technologies and, in some cases, cause outages. For more information, see NASA's ICON mission page.
ESL (English as a Second Language)
50 Essential Resources for ESL Students
A website on the Open Education Database that offers links to a wealth of resources, including information on the following: grammar and usage; spelling and pronunciation; vocabulary and writing; quizzes and worksheets; podcasts; and YouTube channels.
Activities for ESL Students
Quizzes, tests, exercises and puzzles to help you learn English as a Second Language (ESL). The website is a project of The Internet TESL Journal (iteslj.org), which contains thousands of contributions by many teachers.
All you need is your PCPL library card and an email address to register to access award-winning online language courses, including English, Spanish, French, Italian, etc.
Strategies and Resources for Supporting English-Language Learners
Edutopia is a comprehensive website and online community that increases knowledge, sharing, and adoption of what works in K-12 education. It is supported by The George Lucas Educational Foundation, a nonprofit operating foundation founded by filmmaker George Lucas in 1991 to focus on schools' untapped potential to truly engage students and inspire them to become active, lifelong learners. Lucas decided to invest in making a difference and created the Foundation to identify and spread innovative, replicable and evidence-based approaches to helping K-12 students learn better.
Opener: As students enter the room, they will immediately pick up and begin working on the opener. Please see my instructional strategy clip for how openers work in my classroom (Instructional Strategy - Process for openers). This method of working and going over the opener lends itself to allowing students to construct viable arguments and critique the reasoning of others, which is mathematical practice 3.
Learning Target: After completion of the opener, I will address the day's learning targets to the students. In today's lesson, the intended target is, "I can find the mean, median, and mode. I can calculate the mean absolute deviation. I can calculate the probability of an event." Students will jot the learning target down in their agendas (our version of a student planner; there is a place to write the learning target for every day).
Recap: See Video!
Sample Test Questions: I am going to present this portion of the lesson as a table challenge. For this table challenge, I am going to give the students a copy of the problems first and give them 15 minutes to work them out with their tables. At the end of the 15 minutes, I am going to draw cards to determine which table will work out which problem. In order to keep all tables on task throughout all problems, I do not reward correct tables until the end of the activity. Though it is not a practice I enjoy, part of student success on a state exam is being able to break down questions in an effort to figure out exactly what is being asked. It is important that students are fluent with solving questions that are worded and presented like those on the state exam so that the only thing on their minds during the exam is the content – not the presentation of the content. As with all table challenges, students will be asked to persevere with problems and work them out together, which is mathematical practice 1. I am more than willing to help students, but they have to really try first! Additionally, the types of problems I have chosen require that students reason abstractly and quantitatively (mathematical practice 2), making sense of words by writing equations or drawing figures. Also, the problems they are solving model real world applications of the topics, which is mathematical practice 4.
Whole Group Question: To summarize today's lesson, I am going to ask that students raise their hand when they can tell me the three measures of center and how to calculate them. For whatever reason, the phrase "measures of center" continually throws students off – so I want to be sure that they understand that phrase and what it is asking for. I am hoping (expecting) lots of hands at this point!
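Because today's learning target is purely computational, a short script is a convenient way to double-check the table-challenge answers. This is an illustrative sketch of my own (the data set is made up; it is not part of the lesson materials):

```python
from statistics import mean, median, mode

# Hypothetical data set: quiz scores for one table group
scores = [4, 7, 7, 8, 9, 10, 11]

m = mean(scores)      # measure of center: mean
med = median(scores)  # measure of center: median
mo = mode(scores)     # measure of center: mode

# Mean absolute deviation: average distance of each value from the mean
mad = mean(abs(x - m) for x in scores)

# Probability of an event: P(score is at least 8) = favorable outcomes / total outcomes
p_at_least_8 = sum(1 for x in scores if x >= 8) / len(scores)

print(f"mean = {m}, median = {med}, mode = {mo}")
print(f"mean absolute deviation = {mad:.2f}")
print(f"P(score >= 8) = {p_at_least_8:.2f}")
```

For this sample, the mean and median are both 8, the mode is 7, the mean absolute deviation is about 1.71, and the probability of a score of at least 8 is 4/7.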
Cecropia (genus Cecropia), several species of tropical tree of the family Cecropiaceae common to the understory layer of disturbed forest habitats of Central and South America. It is easily recognized by its thin, white-ringed trunk and umbrella-like arrangement of large leaves at the branch tips. These extremely fast-growing trees are colonizers of forest gaps or clearings. They usually live about 30 years and grow to less than 18 metres (60 feet), producing a very soft wood in the process. Trees are either male or female, with the female producing nearly one million seeds every time it fruits. Flowers are very small and borne on elongated, hanging structures called catkins.
The cecropia's interaction with Azteca ants is a classic case of defense mutualism in the tropics. The tree provides the ants with a nest consisting of multiple chambers within the stems as the ants burrow through the soft internal tissue. Food is also provided to the ants in the form of glycogen-containing structures that the tree produces at the base of its leaves. The food bodies are produced in the greatest quantity under young leaves. Ants patrol these areas and prevent insects from damaging this foliage. Some ant species also benefit the tree by actively cutting vines that grow onto the tree. In spite of such an elaborate defense, cecropias attract a wide variety of birds and other animals that feed on fruit, flowers, or leaves. Sloths even prefer to feed on cecropia trees, as the ants do not seem concerned with the main leaf surfaces or external wood surfaces. (See rainforest ecosystem sidebar, "A Moving Habitat.")
Arranging Objects Questions
This style of Spatial Reasoning question involves assembling blocks or shapes in a particular way in order to create the target shape with the correct pattern. In this case we are using 4 cuboids to create a single cube. We have included a demonstration below to illustrate how the shapes would be arranged in this example. There are many ways of arranging the shapes to create a regular cube; see how many you can think of (hint: it's a lot more than 24). Our users find this question type the easiest to answer. Now that you have seen how to complete this type of question, try it yourself with the question below.
Try it Yourself
Consider the 4 building blocks below. Click the cube below that can be obtained by arranging the 4 blocks above.
The Italian Renaissance
The art of the Italian Renaissance was influential throughout Europe for centuries.
- The Florence school of painting became the dominant style during the Renaissance. Renaissance artworks depicted more secular subject matter than previous artistic movements.
- Michelangelo, da Vinci, and Raphael are among the best known painters of the High Renaissance.
- The High Renaissance was followed by the Mannerist movement, known for elongated figures.
- fresco: A type of wall painting in which color pigments are mixed with water and applied to wet plaster. As the plaster and pigments dry, they fuse together and the painting becomes a part of the wall itself.
- Mannerism: A style of art developed at the end of the High Renaissance, characterized by the deliberate distortion and exaggeration of perspective, especially the elongation of figures.
The Renaissance began during the 14th century and remained the dominant style in Italy, and in much of Europe, until the 16th century. The term "renaissance" was developed during the 19th century in order to describe this period of time and its accompanying artistic style. However, people who were living during the Renaissance did see themselves as different from their Medieval predecessors. Through a variety of texts that survive, we know that people living during the Renaissance saw themselves as different largely because they were deliberately trying to imitate the Ancients in art and architecture.
Florence and the Renaissance
When you hear the term "Renaissance" and picture a style of art, you are probably picturing the Renaissance style that was developed in Florence, which became the dominant style of art during the Renaissance. During the Middle Ages and the Renaissance, Italy was divided into a number of different city states. Each city state had its own government, culture, economy, and artistic style. There were many different styles of art and architecture that were developed in Italy during the Renaissance. Siena, which was a political ally of France, for example, retained a Gothic element to its art for much of the Renaissance. Certain conditions aided the development of the Renaissance style in Florence during this time period. In the 15th century, Florence became a major mercantile center. The production of cloth drove their economy and a merchant class emerged. Humanism, which had developed during the 14th century, remained an important intellectual movement that impacted art production as well.
During the Early Renaissance, artists began to reject the Byzantine style of religious painting and strove to create realism in their depiction of the human form and space. This aim toward realism began with Cimabue and Giotto, and reached its peak in the art of the "Perfect" artists, such as Andrea Mantegna and Paolo Uccello, who created works that employed one-point perspective and played with perspective for their educated, art-knowledgeable viewer. During the Early Renaissance we also see important developments in subject matter, in addition to style. While religion was an important element in the daily life of people living during the Renaissance, and remained a driving factor behind artistic production, we also see a new avenue open to painting—mythological subject matter. Many scholars point to Botticelli's Birth of Venus as the very first panel painting of a mythological scene.
While the tradition itself likely arose from cassone painting, which typically featured scenes from mythology and romantic texts, the development of mythological panel painting would open a world for artistic patronage, production, and themes.
The period known as the High Renaissance represents the culmination of the goals of the Early Renaissance, namely the realistic representation of figures in space rendered with credible motion and in an appropriately decorous style. The most well known artists from this phase are Leonardo da Vinci, Raphael, Titian, and Michelangelo. Their paintings and frescoes are among the most widely known works of art in the world. Da Vinci's Last Supper, Raphael's The School of Athens and Michelangelo's Sistine Chapel Ceiling paintings are the masterpieces of this period and embody the elements of the High Renaissance. High Renaissance painting evolved into Mannerism in Florence. Mannerist artists, who consciously rebelled against the principles of the High Renaissance, tended to represent elongated figures in illogical spaces. Modern scholarship has recognized the capacity of Mannerist art to convey strong, often religious, emotion where the High Renaissance failed to do so. Some of the main artists of this period are Pontormo, Bronzino, Rosso Fiorentino, Parmigianino and Raphael's pupil, Giulio Romano.
Art and Patronage
The Medici family used their vast fortune to control the Florentine political system and sponsor a series of artistic accomplishments. Discuss the relationship between art, patronage, and politics during the Renaissance
- Although the Renaissance was underway before the Medici family came to power in Florence, their patronage and political support of the arts helped catalyze the Renaissance into a fully fledged cultural movement.
- The Medici wealth and influence initially derived from the textile trade guided by the guild of the Arte della Lana; through financial superiority, the Medici dominated their city's government.
- Medici patronage was responsible for the majority of Florentine art during their reign, as artists generally only made their works when they received commissions in advance.
- Although none of the Medici themselves were scientists, the family is well known to have been the patrons of the famous Galileo Galilei, who tutored multiple generations of Medici children.
- Lorenzo de' Medici: An Italian statesman and de facto ruler of the Florentine Republic, who was one of the most powerful and enthusiastic patrons of the Renaissance.
- patronage: The support, encouragement, privilege, or financial aid that an organization or individual bestows on another, especially in the arts.
It has long been a matter of debate why the Renaissance began in Florence, and not elsewhere in Italy. Scholars have noted several features unique to Florentine cultural life that may have caused such a cultural movement. Many have emphasized the role played by the Medici, a banking family and later ducal ruling house, in patronizing and stimulating the arts. Lorenzo de' Medici (1449–1492) was the catalyst for an enormous amount of arts patronage, encouraging his countrymen to commission works from the leading artists of Florence, including Leonardo da Vinci, Sandro Botticelli, and Michelangelo Buonarroti. Works by Neri di Bicci, Botticelli, da Vinci, and Filippino Lippi had been commissioned additionally by the convent di San Donato agli Scopeti of the Augustinian order in Florence.
The Medici House Patronage The House of Medici was an Italian banking family, political dynasty, and later royal house that first began to gather prominence under Cosimo de’ Medici in the Republic of Florence during the first half of the 15th century. Their wealth and influence initially derived from the textile trade guided by the guild of the Arte della Lana. Like other signore families, they dominated their city’s government, they were able to bring Florence under their family’s power, and they created an environment where art and Humanism could flourish. They, along with other families of Italy, such as the Visconti and Sforza of Milan, the Este of Ferrara, and the Gonzaga of Mantua, fostered and inspired the birth of the Italian Renaissance. The biggest accomplishments of the Medici were in the sponsorship of art and architecture, mainly early and High Renaissance art and architecture. The Medici were responsible for the majority of Florentine art during their reign. Their money was significant because during this period, artists generally only made their works when they received commissions in advance. Giovanni di Bicci de’ Medici, the first patron of the arts in the family, aided Masaccio and commissioned Brunelleschi for the reconstruction of the Basilica of San Lorenzo, Florence, in 1419. Cosimo the Elder’s notable artistic associates were Donatello and Fra Angelico. The most significant addition to the list over the years was Michelangelo Buonarroti (1475–1564), who produced work for a number of Medici, beginning with Lorenzo the Magnificent, who was said to be extremely fond of the young Michelangelo, inviting him to study the family collection of antique sculpture. Lorenzo also served as patron of Leonardo da Vinci (1452–1519) for seven years. Indeed, Lorenzo was an artist in his own right, and an author of poetry and song; his support of the arts and letters is seen as a high point in Medici patronage. In architecture, the Medici are responsible for some notable features of Florence, including the Uffizi Gallery, the Boboli Gardens, the Belvedere, the Medici Chapel, and the Palazzo Medici. Later, in Rome, the Medici Popes continued in the family tradition by patronizing artists in Rome. Pope Leo X would chiefly commission works from Raphael. Pope Clement VII commissioned Michelangelo to paint the altar wall of the Sistine Chapel just before the pontiff’s death in 1534. Eleanor of Toledo, princess of Spain and wife of Cosimo I the Great, purchased the Pitti Palace from Buonaccorso Pitti in 1550. Cosimo in turn patronized Vasari, who erected the Uffizi Gallery in 1560 and founded the Accademia delle Arti del Disegno (“Academy of the Arts of Drawing”) in 1563. Marie de’ Medici, widow of Henry IV of France and mother of Louis XIII, is the subject of a commissioned cycle of paintings known as the Marie de’ Medici cycle, painted for the Luxembourg Palace by court painter Peter Paul Rubens in 1622–1623. Although none of the Medici themselves were scientists, the family is well known to have been the patrons of the famous Galileo Galilei, who tutored multiple generations of Medici children and was an important figurehead for his patron’s quest for power. Galileo’s patronage was eventually abandoned by Ferdinando II when the Inquisition accused Galileo of heresy. However, the Medici family did afford the scientist a safe haven for many years. 
Galileo named the four largest moons of Jupiter after four Medici children he tutored, although the names Galileo used are not the names currently used.
Leonardo da Vinci
While Leonardo da Vinci is admired as a scientist, an academic, and an inventor, he is most famous for his achievements as the painter of several Renaissance masterpieces. Describe the works of Leonardo da Vinci that demonstrate his most innovative techniques as an artist
- Among the qualities that make da Vinci's work unique are the innovative techniques that he used in laying on the paint, his detailed knowledge of anatomy, his innovative use of the human form in figurative composition, and his use of sfumato.
- Among the most famous works created by da Vinci is the small portrait titled the Mona Lisa, known for the elusive smile on the woman's face, brought about by the fact that da Vinci subtly shadowed the corners of the mouth and eyes so that the exact nature of the smile cannot be determined.
- Despite his famous paintings, da Vinci was not a prolific painter; he was a prolific draftsman, keeping journals full of small sketches and detailed drawings recording all manner of things that interested him.
- sfumato: In painting, the application of subtle layers of translucent paint so that there is no visible transition between colors, tones, and often objects.
While Leonardo da Vinci is greatly admired as a scientist, an academic, and an inventor, he is most famous for his achievements as the painter of several Renaissance masterpieces. His paintings were groundbreaking for a variety of reasons and his works have been imitated by students and discussed at great length by connoisseurs and critics. Among the qualities that make da Vinci's work unique are the innovative techniques that he used in laying on the paint, his detailed knowledge of anatomy, his use of the human form in figurative composition, and his use of sfumato. All of these qualities are present in his most celebrated works, the Mona Lisa, The Last Supper, and the Virgin of the Rocks.
The Last Supper
Da Vinci's most celebrated painting of the 1490s is The Last Supper, which was painted for the refectory of the Convent of Santa Maria della Grazie in Milan. The painting depicts the last meal shared by Jesus and the 12 Apostles, where he announces that one of them will betray him. When finished, the painting was acclaimed as a masterpiece of design. This work demonstrates something that da Vinci did very well: taking a very traditional subject matter, such as the Last Supper, and completely re-inventing it. Prior to this moment in art history, every representation of the Last Supper followed the same visual tradition: Jesus and the Apostles seated at a table. Judas is placed on the opposite side of the table from everyone else and is effortlessly identified by the viewer. When da Vinci painted The Last Supper, he placed Judas on the same side of the table as Christ and the Apostles, who are shown reacting to Jesus as he announces that one of them will betray him. They are depicted as alarmed, upset, and trying to determine who will commit the act. The viewer also has to determine which figure is Judas, who will betray Christ. By depicting the scene in this manner, da Vinci has infused psychology into the work. Unfortunately, this masterpiece of the Renaissance began to deteriorate immediately after da Vinci finished painting, due largely to the painting technique that he had chosen.
Instead of using the technique of fresco, da Vinci had used tempera over a ground that was mainly gesso in an attempt to bring the subtle effects of oil paint to fresco. His new technique was not successful, and resulted in a surface that was subject to mold and flaking.
Among the works created by da Vinci in the 16th century is the small portrait known as the Mona Lisa, or La Gioconda, "the laughing one." In the present era it is arguably the most famous painting in the world. Its fame rests, in particular, on the elusive smile on the woman's face—its mysterious quality brought about perhaps by the fact that the artist has subtly shadowed the corners of the mouth and eyes so that the exact nature of the smile cannot be determined. The shadowy quality for which the work is renowned came to be called sfumato, the application of subtle layers of translucent paint so that there is no visible transition between colors, tones, and often objects. Other characteristics found in this work are the unadorned dress, in which the eyes and hands have no competition from other details; the dramatic landscape background, in which the world seems to be in a state of flux; the subdued coloring; and the extremely smooth nature of the painterly technique, employing oils, but applied much like tempera and blended on the surface so that the brushstrokes are indistinguishable. And again, da Vinci is innovating upon a type of painting here. Portraits were very common in the Renaissance. However, portraits of women were always in profile, which was seen as proper and modest. Here, da Vinci presents a portrait of a woman who not only faces the viewer but follows them with her eyes.
Virgin and Child with St. Anne
In the painting Virgin and Child with St. Anne, da Vinci's composition again picks up the theme of figures in a landscape. What makes this painting unusual is that there are two obliquely set figures superimposed. Mary is seated on the knee of her mother, St. Anne. She leans forward to restrain the Christ Child as he plays roughly with a lamb, the sign of his own impending sacrifice. This painting influenced many contemporaries, including Michelangelo, Raphael, and Andrea del Sarto. The trends in its composition were adopted in particular by the Venetian painters Tintoretto and Veronese.
Michelangelo was a 16th century Florentine artist renowned for his masterpieces in sculpture, painting, and architectural design. Discuss Michelangelo's achievements in sculpture, painting, and architecture
- Michelangelo created his colossal marble statue, the David, out of a single block of marble, which established his prominence as a sculptor of extraordinary technical skill and strength of symbolic imagination.
- In painting, Michelangelo is renowned for the ceiling and The Last Judgement of the Sistine Chapel, where he depicted a complex scheme representing Creation, the Downfall of Man, the Salvation of Man, and the Genealogy of Christ.
- Michelangelo's chief contribution to Saint Peter's Basilica was the use of a Greek Cross form and an external masonry of massive proportions, with every corner filled in by a stairwell or small vestry. The effect is a continuous wall-surface that appears fractured or folded at different angles.
- contrapposto: The standing position of a human figure where most of the weight is placed on one foot, and the other leg is relaxed. The effect of contrapposto in art makes figures look very naturalistic.
- Sistine Chapel: The best-known chapel in the Apostolic Palace.
Michelangelo was a 16th century Florentine artist renowned for his masterpieces in sculpture, painting, and architectural design. His most well known works are the David, the Last Judgment, and the Basilica of Saint Peter's in the Vatican.
In 1504, Michelangelo was commissioned to create a colossal marble statue portraying David as a symbol of Florentine freedom. The subsequent masterpiece, David, established the artist's prominence as a sculptor of extraordinary technical skill and strength of symbolic imagination. David was created out of a single marble block, and stands larger than life, as it was originally intended to adorn the Florence Cathedral. The work differs from previous representations in that the Biblical hero is not depicted with the head of the slain Goliath, as he is in Donatello's and Verrocchio's statues; both had represented the hero standing victorious over the head of Goliath. No earlier Florentine artist had omitted the giant altogether. Instead of appearing victorious over a foe, David's face looks tense and ready for combat. The tendons in his neck stand out tautly, his brow is furrowed, and his eyes seem to focus intently on something in the distance. Veins bulge out of his lowered right hand, but his body is in a relaxed contrapposto pose, and he carries his sling casually thrown over his left shoulder. In the Renaissance, contrapposto poses were thought of as a distinctive feature of antique sculpture. The sculpture was intended to be placed on the exterior of the Duomo, and has become one of the most recognized works of Renaissance sculpture.
Painting: The Last Judgement
In painting, Michelangelo is renowned for his work in the Sistine Chapel. He was originally commissioned to paint trompe-l'oeil coffers after the original ceiling developed a crack. Michelangelo lobbied for a different and more complex scheme, representing Creation, the Downfall of Man, the Promise of Salvation through the prophets, and the Genealogy of Christ. The work is part of a larger scheme of decoration within the chapel that represents much of the doctrine of the Catholic Church. The composition eventually contained over 300 figures, and had at its center nine episodes from the Book of Genesis, divided into three groups: God's Creation of the Earth, God's Creation of Humankind and their fall from God's grace, and lastly the state of Humanity as represented by Noah and his family. Twelve men and women who prophesied the coming of Jesus are painted on the pendentives supporting the ceiling. Among the most famous paintings on the ceiling are The Creation of Adam, Adam and Eve in the Garden of Eden, the Great Flood, the Prophet Isaiah and the Cumaean Sibyl. The ancestors of Christ are painted around the windows. The fresco of The Last Judgment on the altar wall of the Sistine Chapel was commissioned by Pope Clement VII, and Michelangelo labored on the project from 1536–1541. The work is located on the altar wall of the Sistine Chapel, which is not a traditional placement for the subject. Typically, last judgement scenes were placed on the exit wall of churches as a way to remind the viewer of eternal punishments as they left worship. The Last Judgment is a depiction of the second coming of Christ and the apocalypse, where the souls of humanity rise and are assigned to their various fates, as judged by Christ, surrounded by the Saints.
In contrast to the earlier figures Michelangelo painted on the ceiling, the figures in The Last Judgement are heavily muscled and are in much more artificial poses, demonstrating how this work is in the Mannerist style. In this work Michelangelo has rejected the orderly depiction of the last judgement as established by Medieval tradition in favor of a swirling scene of chaos as each soul is judged. When the painting was revealed, it was heavily criticized for its inclusion of classical imagery as well as for the number of nude figures in somewhat suggestive poses. The ill reception that the work received may be tied to the Counter-Reformation and the Council of Trent, which led to a preference for more conservative religious art devoid of classical references. Although a number of figures were made more modest with the addition of drapery, the changes were not made until after the death of Michelangelo, demonstrating the respect and admiration that was afforded to him during his lifetime.
Architecture: St. Peter's Basilica
Finally, although other architects were involved, Michelangelo is given credit for designing St. Peter's Basilica. Michelangelo's chief contribution was the use of a symmetrical plan of a Greek Cross form and an external masonry of massive proportions, with every corner filled in by a stairwell or small vestry. The effect is of a continuous wall surface that is folded or fractured at different angles, lacking the right angles that usually define change of direction at the corners of a building. This exterior is surrounded by a giant order of Corinthian pilasters all set at slightly different angles to each other, in keeping with the ever-changing angles of the wall's surface. Above them the huge cornice ripples in a continuous band, giving the appearance of keeping the whole building in a state of compression.
Mannerist artists began to reject the harmony and ideal proportions of the Renaissance in favor of irrational settings, artificial colors, unclear subject matters, and elongated forms. Describe the Mannerist style, how it differs from the Renaissance, and reasons why it emerged.
- Mannerism came after the High Renaissance and before the Baroque.
- The artists who came a generation after Raphael and Michelangelo had a dilemma. They could not surpass the great works that had already been created by Leonardo da Vinci, Raphael, and Michelangelo. This is when we start to see Mannerism emerge.
- Jacopo da Pontormo (1494–1557) represents the shift from the Renaissance to the Mannerist style.
- Mannerism: Style of art in Europe from c. 1520–1600.
Mannerism is the name given to a style of art in Europe from c. 1520–1600. Mannerism came after the High Renaissance and before the Baroque. Not every artist painting during this period is considered a Mannerist artist, however, and there is much debate among scholars over whether Mannerism should be considered a separate movement from the High Renaissance, or a stylistic phase of the High Renaissance. Mannerism will be treated as a separate art movement here as there are many differences between the High Renaissance and the Mannerist styles. What makes a work of art Mannerist? First we must understand the ideals and goals of the Renaissance. During the Renaissance artists were engaging with classical antiquity in a new way.
In addition, they developed theories on perspective, and in all ways strove to create works of art that were perfect and harmonious and that showed ideal depictions of the natural world. Leonardo da Vinci, Raphael, and Michelangelo are considered the artists who reached the greatest achievements in art during the Renaissance. The Renaissance stressed harmony and beauty, and no one could create more beautiful works than the three great artists listed above. The artists who came a generation after had a dilemma; they could not surpass the great works that had already been created by da Vinci, Raphael, and Michelangelo. This is when we start to see Mannerism emerge. Younger artists trying to do something new and different began to reject harmony and ideal proportions in favor of irrational settings, artificial colors, unclear subject matters, and elongated forms.
Jacopo da Pontormo
Jacopo da Pontormo (1494–1557) represents the shift from the Renaissance to the Mannerist style. Take, for example, his Deposition from the Cross, an altarpiece that was painted for a chapel in the Church of Santa Felicita, Florence. The figures of Mary and Jesus appear to be a direct reference to Michelangelo's Pieta. Although the work is called a "Deposition," there is no cross. Scholars also refer to this work as the "Entombment," but there is no tomb. This lack of clarity on subject matter is a hallmark of Mannerist painting. In addition, the setting is irrational, almost as if it is not in this world, and the colors are far from naturalistic. This work could not have been produced by a Renaissance artist. The Mannerist movement stressed different goals, and this work of art by Pontormo demonstrates this new and different style.
Astronomy is the study of celestial objects (such as stars, galaxies, planets, moons, asteroids, comets and nebulae); the physics, chemistry, and evolution of such objects; and phenomena that originate outside the atmosphere of Earth, including supernova explosions, gamma-ray bursts, and cosmic microwave background radiation. Astronomy is one of the oldest sciences. The early civilizations in recorded history, such as the Babylonians, Greeks, Indians, Egyptians, Nubians, Iranians, Chinese, and Maya, performed methodical observations of the night sky. However, the invention of the telescope was required before astronomy was able to develop into a modern science. Astronomy is one of the few sciences where amateurs can play an active role, especially in the discovery and observation of one-off events, and amateur astronomers have made many important astronomical discoveries.
Butterflies are one of the most beautiful and charismatic insects around us. They belong to the order Lepidoptera. Butterflies often have brightly coloured wings with unique patterns made up of tiny scales, and they have taste receptors on their feet. They are also one of the most studied groups of insects, yet much remains to be explored about them, such as the life cycles of particular species and their feeding habits at different stages. A butterfly's life cycle is made up of four stages: egg, larva, pupa and adult. Butterflies can live in the adult stage for anywhere between a week and a year, depending on the species. At the larval stage, butterflies feed on tender leaves, buds, flowers and fruits; different species feed on different plant parts and on a variety of plants. At the adult stage, butterflies feed on nectar, on minerals from damp patches or mud (a behaviour called mud-puddling), on tree or pod sap, and on animal scat or dead animals, but nectar is their most important food source. Many butterflies have developed interesting ways of defending themselves from predators. One method is camouflage, or "cryptic coloration", in which the butterfly looks like a leaf or blends into the bark of a tree to hide from predators. Some butterflies have tail-like projections that resemble antennae and eyespots that resemble real eyes, which easily confuse predators. Another method is chemical defence, in which the butterfly has evolved to carry toxic chemicals in its body; the milkweed butterflies are a well-known example. These species are often brightly coloured, and predators have learned over time to associate the bright colour with the bad taste of the chemicals. The greatest threats to butterflies are habitat loss due to residential, commercial and agricultural development, together with the use of pesticides and de-weeding. Climate change also threatens many butterfly species. For butterfly conservation, we can maintain home gardens or community parks with larval and nectar food plants. In addition, awareness should be created about organic gardening methods and minimal use of pesticides and fertilisers. In city areas, hills, open grasslands and scrub areas act as the lungs of the city. These areas harbour and support urban biodiversity by offering food and suitable habitat to butterflies, other insects, birds and more. Conserving these hills and open areas will help the long-term conservation of butterflies.
The kidneys can produce a range of urine osmolalities depending on the levels of ADH. The production of hypo-osmotic urine is an understandable process: the tubules (particularly the thick ascending limb of Henle's loop) reabsorb relatively more solute than water, and the dilute fluid that remains in the lumen is excreted. The production of hyperosmotic urine is also straightforward in that reabsorption of water from the lumen into a hyperosmotic interstitium concentrates the luminal fluid, leaving concentrated urine to be excreted.
The Mechanism to Generate the Medullary Osmotic Gradient
There is a gradient of osmolality in the medullary interstitium, increasing from a nearly iso-osmotic value at the corticomedullary border to a maximum of greater than 1000 mOsm/kg at the papilla. The peak osmolality varies with hydration status: it is highest during periods of dehydration and lowest (approximately half of the dehydrated value) during excess hydration. In the steady state there must be mass balance; that is, every substance that enters the medulla via tubule or blood vessel must leave the medulla via tubule or blood vessel. However, during development of the gradient there are transient accumulations of solute, and during washout of the gradient there are losses. To develop the osmotic gradient in the medullary interstitium, there must be deposition of solute in excess of water. It is reabsorption of sodium and chloride by the thick ascending limb, in excess of the water reabsorbed in the thin descending limbs, that accomplishes this task. At the junction between the inner and outer medulla, the ascending limbs of all loops of Henle, whether long or short, turn into thick regions and remain thick all the way back until they reach the original Bowman's capsules. As they reabsorb solute without water and dilute the luminal fluid, they simultaneously add solute without water to the surrounding interstitium. This action of the thick ascending limb is absolutely essential and is the key to everything else that happens. If transport in the thick ascending limb is inhibited, the lumen is not diluted and the interstitium is not concentrated, and the urine becomes iso-osmotic.
- For thick ascending limbs in the cortex, reabsorbed solute is taken up by the abundant cortical blood flow, so interstitial osmolality in the cortex remains approximately equal to that of plasma.
- The high sodium concentration in the outer medullary interstitium drives sodium diffusion into the descending and ascending vasa recta (DVR and AVR).
- Hyperosmotic sodium in the AVR can diffuse into nearby DVR – the countercurrent exchange.
For those portions of the thick ascending limbs in the cortex, the reabsorbed solute simply mixes with material reabsorbed by the nearby proximal convoluted tubules. Because the cortex contains abundant peritubular capillaries and a high blood flow, the reabsorbed material immediately moves into the vasculature and returns to the general circulation. However, in the medulla, the vascular anatomy is arranged differently and total blood flow is much lower. Solute that is reabsorbed and deposited in the outer medullary interstitium during the establishment of the osmotic gradient is not immediately removed; that is, it accumulates. The degree of accumulation before a steady state is reached is a function of the arrangement of the vasa recta, their permeability properties and the volume of blood flowing within them. Imagine first a hypothetical situation of no blood flow.
Sodium would accumulate in the outer medulla without limit, because there would be no way to remove it. But, of course, the outer medulla is perfused with blood, as are all tissues. Blood enters and leaves the outer medulla through parallel bundles of descending and ascending vasa recta (DVR and AVR). These vessels are permeable to sodium. Therefore sodium enters the vasa recta, driven by the rise in concentration in the surrounding interstitium. Sodium entering the ascending vessels returns to the general circulation, but sodium in the descending vessels is distributed deeper into the medulla, where it diffuses out across the endothelia of the vasa recta and the interbundle capillaries that they feed, thereby raising the sodium content throughout the medulla. Later, the interbundle capillaries drain into ascending vasa recta that lie near descending vasa recta. The walls of the ascending vasa recta are fenestrated, allowing movement of water and small solutes between plasma and interstitium. As the sodium concentration of the medullary interstitium rises, blood in the ascending vessels also takes on an increasingly higher sodium concentration. However, blood entering the medulla always has a normal sodium concentration (approximately 140 mEq/L). Accordingly, some of the sodium begins to re-circulate, diffusing out of ascending vessels and re-entering nearby descending vessels that contain less sodium (countercurrent exchange). So sodium enters the descending vasa recta from two sources – re-circulated sodium from the ascending vasa recta, and new sodium from the thick ascending limbs. Over time, everything reaches a steady state in which the amount of new sodium entering the interstitium from thick ascending limbs matches the amount of sodium leaving the interstitium in ascending vasa recta. At its peak, the concentration of sodium in the medulla may reach 300 mEq/L, more than double its value in the general circulation. Since sodium is accompanied by an anion, mostly chloride, the contribution of salt to the medullary osmolality is approximately 600 mOsm/kg.
- While solute can accumulate without a major effect on renal volume, the amount of water in the medullary interstitium must remain nearly constant; otherwise the medulla would undergo significant swelling or shrinking.
- Because water is always being reabsorbed from the medullary tubules into the interstitium (from descending thin limbs and medullary collecting ducts), that water movement must be matched by equal water movement from the interstitium to the vasculature.
- Blood entering the medulla has passed through glomeruli, thereby concentrating the plasma proteins. While the overall osmotic content (osmolality) of this blood is essentially iso-osmotic with systemic plasma, its oncotic pressure is considerably higher.
The challenge for the kidneys is to prevent dilution of the hyperosmotic interstitium by water reabsorbed from the tubules and by water diffusing out of the iso-osmotic blood entering the medulla. The endothelial cells of the descending vasa recta contain aquaporins, so water is drawn osmotically into the outer medullary interstitium by the high salt content, in a manner similar to water being drawn out of tubular elements. At first glance it seems that this allows the undesired diluting effect to actually take place. But, of course, solute is also constantly being added from the nearby thick ascending limbs.
The loss of water from descending vasa recta in the outer medulla serves the useful purpose of raising the osmolality of blood penetrating the inner medulla and decreasing its volume, thereby reducing the tendency to dilute the inner medullary interstitium. The ascending vasa recta have a fenestrated endothelium, allowing free movement of water and small solutes. Since their oncotic pressure is high, water entering the interstitium of the outer medulla from descending vasa recta is taken up by ascending vasa recta and removed from the medulla. In addition, water reabsorbed from tubular elements (descending thin limbs and collecting ducts) is also taken up by ascending vasa recta and removed, thereby preserving the constancy of total medullary water content. The magnitude of blood flow in the vasa recta is a crucial variable. The peak osmolality in the interstitium depends on the ratio of sodium pumping by the thick ascending limbs to blood flow in the vasa recta. If this ratio is high (meaning low blood flow), water from the iso-osmotic plasma entering the medulla in descending vasa recta does not dilute the hyperosmotic interstitium; in effect the "salt wins" and osmolality remains at a maximum. But in conditions of water excess, this ratio is very low (high blood flow) and the diluting effect of water diffusing out of descending vasa recta is considerable. In part, the tendency to dilute is controlled by ADH, through its vasoconstrictor effect, which limits blood flow in the descending vasa recta. The peak osmolality in the renal papilla reaches over 1000 mOsm/kg. Approximately half of this is accounted for by sodium and chloride, and most of the rest (500-600 mOsm/kg) is accounted for by urea. Urea is a very special substance for the kidney. It is an end product of protein metabolism, a waste to be excreted, and also an important component in the regulation of water excretion. The renal handling of urea has two key features: 1. Urea has no membrane transport mechanisms in the proximal tubule; instead, it easily permeates the tight junctions of the proximal tubule, where it is reabsorbed paracellularly. 2. Tubular elements beyond the proximal tubule express urea transporters and handle urea in a complex, regulated manner. The gist of the renal handling of urea is the following: it is freely filtered. About half is reabsorbed passively in the proximal tubule. Then an amount equal to that reabsorbed is secreted back into the loop of Henle. Finally, about half is reabsorbed a second time in the medullary collecting duct. The net result is that about half the filtered load is excreted. Urea does not permeate lipid bilayers because of its highly polar nature, but a set of uniporters transports urea in various places beyond the proximal tubule and at other sites within the body. Because urea is freely filtered, the filtrate contains urea at a concentration identical to that in plasma. In the proximal tubule, as water is reabsorbed, the urea concentration rises well above the plasma urea concentration, driving diffusion through the leaky tight junctions. Roughly half the filtered load is reabsorbed in the proximal tubule by the paracellular route. As the tubular fluid enters the loop of Henle, about half the filtered urea remains, but the urea concentration has increased somewhat above its level in the filtrate because, proportionally, more water than urea was reabsorbed. At this point the process becomes fairly complicated. The interstitium of the medulla has a considerably higher urea concentration than does plasma.
The concentration increases from the outer to the inner medulla. Since the medullary interstitial urea concentration is greater than that in the tubular fluid entering the loop of Henle, there is a concentration gradient favoring secretion into the lumen. The tight junctions in the loop of Henle are no longer permeable, but the epithelial membranes of the thin regions of Henle's loops express urea uniporters, members of the UT family. This permits secretion of urea into the tubule. In fact, the urea secreted from the medullary interstitium into the thin regions of the loop of Henle replaces the urea previously reabsorbed in the proximal tubule. Thus, when tubular fluid enters the thick ascending limb, the amount of urea in the lumen is at least as large as the filtered load. However, because about 80% of the filtered water has now been reabsorbed, the luminal urea concentration is now several times greater than in the plasma. Beginning with the thick ascending limb and continuing all the way to the inner medullary collecting ducts (through the distal tubule and cortical collecting ducts), the apical membrane urea permeability (and the tight junction permeability) is essentially zero. Therefore, an amount of urea roughly equal to the filtered load remains within the tubular lumen and flows from the cortical into the medullary collecting ducts. During the transit through the cortical collecting ducts, variable amounts of water are reabsorbed, significantly concentrating the urea. We indicated earlier that the urea concentration in the medullary interstitium is much greater than in plasma, but the luminal concentration in the medullary collecting ducts is even higher (up to 50 times its plasma value), so in the inner medulla the gradient now favors reabsorption, and urea is reabsorbed a second time via another isoform of the UT urea uniporter. It is this urea reabsorbed in the inner medulla that produces the high medullary interstitial concentration, driving urea secretion into the thin regions of the loop of Henle.
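As a rough illustration of the bookkeeping described above, the following Python sketch tracks a filtered load of urea through the fractions quoted in the text (about half reabsorbed proximally, an equal amount secreted back into the loop, about half reabsorbed again in the inner medullary collecting duct) and adds up the stated contributions to peak papillary osmolality. The numbers are the approximate figures from the text, not measured data, and the unit of urea is arbitrary.

```python
# Illustrative bookkeeping only; fractions and concentrations are the rough
# values quoted in the text above, not experimental measurements.

filtered_load = 100.0                               # urea entering the nephron (arbitrary units)

proximal_reabsorption = 0.5 * filtered_load         # ~half reabsorbed paracellularly
entering_thin_limbs = filtered_load - proximal_reabsorption
secreted_in_thin_limbs = proximal_reabsorption      # an equal amount secreted back into the loop
entering_thick_limb = entering_thin_limbs + secreted_in_thin_limbs   # roughly the filtered load again
reabsorbed_in_imcd = 0.5 * entering_thick_limb      # ~half reabsorbed a second time (inner medulla)
excreted = entering_thick_limb - reabsorbed_in_imcd # net: about half the filtered load is excreted

print(f"Urea excreted: {excreted:.0f} of {filtered_load:.0f} filtered units (~half)")

# Peak papillary osmolality: sodium plus its accompanying chloride, plus urea.
sodium_mEq_per_L = 300                      # roughly double the systemic value of ~140 mEq/L
nacl_contribution = 2 * sodium_mEq_per_L    # Na+ and Cl- together: ~600 mOsm/kg
urea_contribution = 550                     # ~500-600 mOsm/kg from urea
print(f"Approximate papillary osmolality: {nacl_contribution + urea_contribution} mOsm/kg")
```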
Your kidneys play an important part in keeping your body healthy. They are about as big as a fist and weigh about five or six ounces. They are located to the left and right of the spine, just beneath the rib cage. The kidneys are powerful chemical factories that perform the following functions:
- Remove waste products from the body
- Balance the body's fluids
- Produce an active form of vitamin D, which supports healthy bone growth
- Control the production of red blood cells
- Release hormones which regulate blood pressure
The functions listed above are carried out by roughly a million tiny functional units, called nephrons, in each kidney. A nephron consists of a filtering unit of tiny blood vessels, called a glomerulus, attached to a tubule. Blood is filtered in the glomerulus, and the filtered fluid then passes along the tubule. In the tubule, chemicals and water are either added to or removed from this filtered fluid according to the body's needs, the final product being the urine we excrete. Kidney disease comes in two main forms: acute and chronic. Acute kidney diseases occur suddenly and can be caused by bacterial or viral infections, injuries, or medications. These are less common than the more potent threat: chronic kidney disease (CKD). Chronic kidney disease is caused by sustained, long-term damage to the kidneys. It can be caused by high blood pressure, unmanaged diabetes, lack of exercise, and hereditary factors. Currently, more than 26 million adults in the United States have been diagnosed with CKD. Other risk factors, such as age, gender, and race, can also contribute to the development of this disease. These diseases limit the functionality of the kidneys and reduce renal function. If both kidneys cannot function, waste products and water build up in the body; this is called uremia. Serious health problems can arise if a person has less than 20 percent of their renal function. If your renal function drops below 10 to 15 percent, you cannot live long without some form of renal replacement therapy, either dialysis or transplantation; this is the point at which an individual enters kidney failure. Acute renal failure is a sudden, severe loss of kidney function. Some causes of acute renal failure are accidents, medicines, surgery, and low blood pressure from shock or serious infections. In acute renal failure, the kidneys often start working again within one to four weeks with medical treatment. Chronic renal failure is a decrease of kidney function in both kidneys over a period of time. The most common reasons for this are:
• Kidney disease
• Damage to the kidney from diabetes, heart disease, drug abuse or high blood pressure
• Kidney infections
• Kidney stones or a blockage present from birth
As stated above, when a person's renal function drops below 10 to 15 percent, they cannot live long without some form of renal replacement therapy, either dialysis or a kidney transplant.
Treatment of Kidney Disease and Kidney Failure
Treatment of kidney disease and failure differs from person to person. Many treatment options exist for kidney disease before it has progressed to kidney failure. These include:
• Seeing your physician or doctor regularly
• Maintaining a healthy body weight
• Getting daily physical exercise
• Eating a healthy diet low in sodium, fat, and sugar
• Managing existing conditions such as diabetes, high blood pressure, and heart disease
Kidney failure can be treated with a special diet, medicines, regular dialysis treatments and, possibly, a kidney transplant.
Your treatment is based on your specific needs. Your age, the type of kidney disease, your current state of health, and your lifestyle are a few of the things that your doctor considers when selecting a treatment option. The two most common options for those with kidney failure are dialysis and transplantation of a new kidney. Dialysis is a treatment used to mimic the functions of the kidneys. This can be done either at home or at a treatment clinic. There are two main forms of dialysis: peritoneal dialysis and the more common hemodialysis. Sometimes, when a person is receiving dialysis or has kidney failure, a transplant of a new kidney is needed. A doctor should be consulted on all matters dealing with a kidney transplant. The kidney is the most needed organ for transplant in the United States. Currently, there are over 93,000 people on the kidney transplant waiting list. The wait for a deceased donor could be 5 years, and in some states it is closer to 10 years. Patients are prioritized by how long they have been on the waiting list, their blood type, immune system activity and other factors. 80% of the people on the waiting list are currently receiving kidney dialysis, and the need grows every day for these patients. To register as an organ donor, visit: Donate Life Ohio | Living Kidney Donors Network
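As a small, hypothetical illustration of the thresholds mentioned above (20 percent and 10 to 15 percent of normal renal function), the Python sketch below maps a rough percentage of remaining function to the categories the article describes. The function name and exact cutoffs are assumptions made for illustration; this is not a clinical tool.

```python
# Hypothetical helper encoding only the rough thresholds quoted in the text
# above; an illustration, not medical guidance.

def describe_renal_function(percent_of_normal: float) -> str:
    """Map an approximate percentage of remaining renal function to the article's categories."""
    if percent_of_normal < 15:
        return "Kidney failure: renal replacement therapy (dialysis or transplant) is required"
    if percent_of_normal < 20:
        return "Severely reduced function: serious health problems can arise"
    return "Reduced but compensated function: manage risk factors and monitor regularly"

print(describe_renal_function(12))   # falls in the kidney-failure range described in the text
```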
Clarifying the term "orphan"
The term "orphan" is used to designate children who have lost one or both parents. Confusion over this usage has led to misunderstanding of the needs of children who have been identified as orphans. UNICEF has issued a statement that raises awareness of the confusion and calls for its clarification. UNICEF and global partners define an orphan as a child who has lost one or both parents. By this definition there were over 132 million orphans in sub-Saharan Africa, Asia, Latin America and the Caribbean in 2005. This large figure represents not only children who have lost both parents, but also those who have lost a father but have a surviving mother, or have lost their mother but have a surviving father. (Many thanks to Ethica for sending out the link.) Of the more than 132 million children classified as orphans, only 13 million have lost both parents. Evidence clearly shows that the vast majority of orphans are living with a surviving parent, grandparent, or other family member, and 95% of all orphans are over the age of 5. This definition contrasts with concepts of orphan in many industrialized countries, where a child must have lost both parents to qualify as an orphan. UNICEF and numerous international organizations adopted the broader definition of orphan in the mid-1990s as the AIDS pandemic began leading to the death of millions of parents worldwide, leaving an ever increasing number of children growing up without one or more parents. So the terminology of a 'single orphan' – the loss of one parent – and a 'double orphan' – the loss of both parents – was born to convey this growing crisis. However, this difference in terminology can have concrete implications for policies and programming for children. For example, UNICEF's 'orphan' statistic might be interpreted to mean that globally there are 132 million children in need of a new family, shelter, or care. This misunderstanding may then lead to responses that focus on providing care for individual children rather than supporting the families and communities that care for orphans and are in need of support. There is growing consensus on the need to revisit the use of the term 'orphan' and how it is applied to help overcome this confusion.
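To make the proportions above concrete, here is a quick check of what the quoted UNICEF figures imply; the two input numbers are taken from the text, and the percentage is simple arithmetic rather than an additional statistic.

```python
# Simple arithmetic on the figures quoted above (2005, UNICEF definition).
total_orphans = 132_000_000    # children who lost one or both parents
double_orphans = 13_000_000    # children who lost both parents

single_orphans = total_orphans - double_orphans
print(f"Orphans with a surviving parent: {single_orphans / total_orphans:.0%}")  # roughly 90%
```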
If you toss a coin many times and the share of heads stays far from one half, you have evidence that the coin is NOT fair.
Flip a coin 1000 times, counting the number of 'heads' that occur. The relative frequency probability of 'heads' for that coin (also known as the empirical probability) would be the count of heads divided by 1000.
Since it is a fair coin, the probability is 0.5.
The probability of getting heads on all 10 tosses is 0.09766%. Each toss has a ½ chance to be heads. To combine probabilities, multiply them. So the probability of getting two heads out of two tosses is ½ × ½, and three heads out of three tosses is ½ × ½ × ½. So the exact answer is 0.5^10.
The probability of flipping a coin 3 times and getting 3 heads is 1/8.
The probability of 'heads' on any flip is 50%.
The probability is 6 in 12, or 1 in 2.
The mathematical probability of getting heads is 0.5. 70 heads out of 100 tosses represents an experimental probability of 0.7, which is 40% larger.
The probability of getting heads-heads-heads if you toss a coin three times is 1 out of 8.
The probability is 0.5 regardless of how many times you toss the coin. (Some answers claim the probability is not quite even – that a real coin might land heads only about 495 times out of 1000 because the heads side is slightly heavier – but for practical purposes the bias of an ordinary coin is negligible.)
The probability of getting heads once is 1/2, as the coin is fair. The probability of getting heads twice is 1/2 × 1/2, and three times is 1/2 × 1/2 × 1/2. The probability of getting tails once is 1/2, and tails 5 times is (1/2)^5. So the probability of one particular sequence of 3 heads and 5 tails when the coin is tossed 8 times is (1/2)^3 × (1/2)^5 = (1/2)^8 = 1/256. If you read carefully you'll understand that any specific sequence of 3 heads and 5 tails has the same probability as any other specific sequence: 1/256. As the coin is fair, each side has the same probability of appearing, so one particular arrangement of 3 heads and 5 tails is exactly as likely as, for instance, 8 heads, 8 tails, or 1 tail and 7 heads, and so on. (Getting exactly 3 heads in any order is more likely, because there are 56 such sequences: 56/256 ≈ 0.22.)
The probability of a heads is 1/2. The expected value of independent events is the number of trials times the probability of the desired result, so 100 × (1/2) = 50 heads.
For 10 tosses, the probability of getting 3 or more heads in a row, one or more times, is 520/1024 ≈ 0.508. Of these, the probability of getting exactly 3 heads in a row, exactly once, is 244/1024 ≈ 0.238.
Probability of not getting 8 heads = 1 − probability of 8 heads. The probability of 8 heads is 0.5^8 = 0.003906, so the probability of not getting 8 heads is 1 − 0.003906 = 0.996094.
The probability of heads is 0.5 each time. The probability of heads four times in a row is 0.5 × 0.5 × 0.5 × 0.5 = 0.0625 = 1/16 = 6.25%.
There are 8 possible outcomes when a coin is tossed 3 times. Here they are: 1. Heads, Heads, Tails. 2. Heads, Tails, Heads. 3. Tails, Heads, Heads. 4. Heads, Heads, Heads. 5. Tails, Tails, Heads. 6. Tails, Heads, Tails. 7. Heads, Tails, Tails. 8. Tails, Tails, Tails. Only one outcome is heads, heads, heads, so the probability of three heads coming up in three coin tosses is 1 in 8, or 0.125.
There is a 50% chance that it will land on heads each toss. You need to clarify the question: do you mean the probability that it will land on heads at least once, exactly once, or all five times?
Experimental probability is the number of times some particular outcome occurred divided by the number of trials conducted. For instance, if you threw a coin ten times and got heads seven times, you could say that the experimental probability of heads was 0.7. Contrast this with theoretical probability, which is the (infinitely) long-term probability that something will happen a certain way.
The theoretical probability of throwing heads on a fair coin, for instance, is 0.5, but the experimental probability will only come close to that if you conduct a large number of trials.
The answer depends on how many times the coin is tossed. The probability is zero if the coin is tossed only once! Making some assumptions and rewording your question as "If I toss a fair coin twice, what is the probability it comes up heads both times?", the probability of it being heads on any given toss is 0.5, and the probability of it being heads on both tosses is 0.5 × 0.5 = 0.25. If you toss it three times and want to know the probability of it being heads exactly twice, the calculation is more complicated, but it comes out to 0.375.
If you have tossed a fair, balanced coin 100 times and it has landed on HEADS 100 consecutive times, the probability of tossing HEADS on the next toss is still 50%.
The probability is 5/16.
The correct answer is 1/2. The first two flips do not affect the likelihood that the third flip will be heads (that is, the coin has no "memory" of the previous flips). If you flipped it 100 times and it came up heads each time, the probability of heads on the 101st try would still be 1/2. (Although, if you flipped it 100 times and it came up heads all 100 times – the probability of which is 1 in 2^100, or roughly 1 in 1.27 × 10^30 – you should begin to wonder whether it's a fair coin!) If you were instead asking "What is the probability of flipping a coin three times and having it land on heads all three times?", then the answer is 1/8.
Theoretical probability = 0.5. Experimental probability = 20% more = 0.6. In 50 tosses, that would imply 30 heads.
The opposite of getting at most two heads is getting three heads. The probability of getting three heads is (1/2)^3, which is 1/8. The probability of getting at most two heads is then 1 − 1/8, which is 7/8.
p(heads) = 0.5, so p(heads)^4 = 0.0625.
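For readers who want to check answers like these themselves, here is a short Python sketch (standard library only, fair coin assumed) that computes the exact binomial probability of getting a given number of heads and estimates the experimental probability by simulation.

```python
# Exact and simulated fair-coin probabilities for questions like those above.
from math import comb
import random

def p_exactly_k_heads(n: int, k: int, p: float = 0.5) -> float:
    """Theoretical probability of exactly k heads in n independent tosses."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def experimental_p_heads(tosses: int = 1000) -> float:
    """Relative-frequency (experimental) probability of heads from simulated tosses."""
    heads = sum(random.random() < 0.5 for _ in range(tosses))
    return heads / tosses

print(p_exactly_k_heads(3, 3))   # 0.125   = 1/8: three heads in three tosses
print(p_exactly_k_heads(8, 3))   # 0.21875 = 56/256: exactly three heads in eight tosses
print(0.5 ** 8)                  # 0.00390625 = 1/256: one specific sequence of eight tosses
print(experimental_p_heads())    # close to 0.5, and closer as the number of tosses grows
```

The gap between the second and third printed values is exactly the distinction drawn above between getting a particular sequence and getting the same number of heads in any order.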
The Earth and the moon are always moving! Learn all about how the moon travels around Earth and why the moon looks different from night to night. The moon goes through different phases based on its position in relation to Earth and the sun. Vibrant images pair with easy-to-read text to keep students engaged from cover to cover. This reader also includes instructions for an engaging science activity and practice problems to further students' understanding of Earth and the moon in a creative way. A helpful glossary and index are also included for additional support. This 6-Pack includes six copies of this Level J title and a lesson plan that specifically supports guided reading instruction.
In the power plant, steam for generating power is produced directly in receiver tubes in the parabolic troughs. At the Plataforma Solar de Almería in southern Spain, researchers from the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt; DLR) have put a test facility for solar thermal power plants into operation. This avoids the need for intermediate stages using thermal transfer media and also allows for higher operating temperatures. The new technology enables parabolic trough power generators to produce power more efficiently and cost-effectively. Everything in one tube In the test facility in Almería, parabolic mirrors reflect the Sun’s rays onto receiver tubes. These tubes absorb the solar radiation, convert it into thermal energy, and pass it on. What is special about this test facility is that the tubes contain water, instead of the more usual oil; the water is directly heated and turned into steam. The superheated steam generated in this way can be used to drive a turbine in a power generator. Researchers refer to this as a ‘once-through concept’. “The main challenges with this type of direct steam generator are the high operating pressure – approximately 110 bar – in the receiver tubes and the control of the entire process. But the advantages outweigh these; with the ‘once-through concept’, there is no longer a need for heat exchangers and many other additional components such as oil treatment facilities,” says project leader Fabian Feldhoff from the DLR Institute of Solar Research, describing the benefits of the new technology. “This enables a reduction in the cost of solar power generators. Furthermore, power generators using this technology can operate at higher temperatures, making the generation process more efficient.” More efficient concentrated solar power plants Using a 1000-metre-long collector array with a thermal output of three megawatts, researchers in the DUKE research project (Durchlaufkonzept – Entwicklung und Erprobung; Once-through Concept – Development and Testing) are attempting to demonstrate the ‘once-through concept’ on an industrial scale. The new test facility offers one-of-a-kind opportunities for research as well as for continuing to develop this technology. Advantages of water as a thermal transfer medium Parabolic trough power generators are currently the most proven solar thermal power generators. Almost every commercial facility constructed to date uses synthetic thermal oil in the receiver tubes of the mirror array. The disadvantage of such thermal oils is that they can only be heated to a maximum of 400 degrees Celsius, which gives rise to limitations in the level of efficiency that can be attained. The facility now being tested by DLR can operate at temperatures of up to 500 degrees Celsius using a new receiver design. Using water as the heat transfer medium has the additional advantage of being low-cost, and it is neither flammable nor harmful to the environment. In facilities that use the ‘once-through concept’, the steam for the turbine is evaporated and superheated in one continuous process in the collector array. Previous commercially operated direct steam generation units work using the recirculation concept. With this method, the water flows through three areas in the solar array – the evaporation area where the steam is generated; the ‘steam drum’, where the liquid water and steam are separated; and the ‘superheating area’, where the steam is heated to even higher temperatures. 
Facilities of this type were preferred because they are easier to control. The ‘once-through concept’ now being developed and tested in Almería certainly presents greater challenges in terms of controlling the facility; however, the scientists believe that, overall, operation of the system will be more cost-effective while at the same time more efficient. Furthermore, facilities of this type are more easily scalable, as solar power generators can easily be expanded. This is particularly important for further cost reductions in the long term. DUKE project description The DUKE solar research project (Durchlaufkonzept – Entwicklung und Erprobung; Once-through Concept – Development and Testing) has been funded by the German Federal Ministry for the Environment, Nature Conservation and Nuclear Safety (Bundesministerium für Umwelt, Naturschutz und Reaktorsicherheit; BMU) and is carried out together with industrial partners. The test facility is being operated at the Plataforma Solar de Almería (PSA) under an already-successful cooperation with the Spanish organisation CIEMAT (Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas; Centre for Energy, Environment and Technology Research). The project is expected to run until April 2014. The Plataforma Solar de Almería is a test centre for high-temperature concentrating solar technology. Since the very beginning, DLR has played a major role in its planning and construction, and has been making use of the facility with on-site scientific staff to conduct its solar technology testing and development work in close collaboration with its Spanish partner, CIEMAT, which owns and operates the facility.
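A rough way to see why the jump from 400 to 500 degrees Celsius matters is to compare the ideal Carnot limit on heat-to-work conversion at the two receiver temperatures. The Python sketch below is only an illustration: the cold-side temperature is an assumption, and real plant efficiencies sit well below this thermodynamic bound.

```python
# Ideal Carnot efficiency at the two receiver temperatures mentioned above.
# The 30 degC cold-side temperature is an assumption for illustration; actual
# plant efficiencies are far lower than this thermodynamic limit.

def carnot_efficiency(t_hot_c: float, t_cold_c: float = 30.0) -> float:
    """Upper bound on heat-to-work conversion efficiency, temperatures in Celsius."""
    t_hot_k = t_hot_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return 1.0 - t_cold_k / t_hot_k

print(f"Thermal-oil limit (400 degC): {carnot_efficiency(400):.1%}")   # about 55%
print(f"Direct steam (500 degC):      {carnot_efficiency(500):.1%}")   # about 61%
```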
Anatomy worksheets are a fun and useful way to help students understand the anatomy of the human body. These worksheets come in many types and combinations, and they can be found in virtually every medical classroom, regardless of class size or the age of the students. They offer students an accessible way to explore the various organs and structures of the body. The worksheets are easy to read and, even better, they can be used to teach students at any level of education. Students need to be able to easily learn the names of body parts, the kinds of structures present within the body, and the functions that each of these structures performs. Students also need to be able to decide which part of the body a given organ belongs to. Anatomy worksheets are therefore excellent for teaching students about the relationships between organs in the whole body. They can also be used to identify which part of a patient's body is affected by certain conditions, diseases or injuries. Finally, worksheets can be used as reference guides as well as a way to create classroom games. For instance, one student could have a worksheet showing the various organs of the body; if he were to ask his teacher a question about them, the teacher could explain the information on the worksheet and then create a game in which each player is asked to use that information to diagnose the problem.
[By: Ellee Rogers] In a rapidly changing education system, it can often be difficult to pinpoint if society is progressing or deteriorating. Behind institutionalized racism lies long-term consequences that studies show adolescents may carry with them for the rest of their lives. One of the most prevailing of these includes the school-to-prison-pipeline, which is the process in which children are pushed out of schools and into prisons and other justice systems. There can be several factors surrounding the school-to-prison pipeline standpoint, specifically regarding race, ethnicity and cultural differences. Educational systems historically have always favored privileged groups, which include (but are not limited to): white, male, upper-class, Christian, able-bodied, Americans. This leaves marginalized groups to have fewer opportunities and a higher chance of being funneled into this national trend. Some of the detrimental impacts can follow an individual person for the rest of their lives, lessening their chances of success both academically and professionally. This can also include significantly lowered self-esteem, depression, and/or anxiety, according to the National Alliance on Mental Health. Institutionalized racism is also a key factor in the school-to-prison-pipeline, which can further these negative psychological impacts. Zero-tolerance policies in the education system are the discipline guidelines that mandate each student having the same, predetermined consequences for acting out, by deciding beforehand how the administration would react to any given infraction. These policies were adopted in hopes to reduce crime within the education system and set an example for future generations of students. These predetermined consequences are often viewed as harsh, as they do not take case-by-case instances into consideration. They also often criminalize students, particularly students of color and other minority groups, as they are viewed to be particularly vulnerable when facing punishment for minor infractions. This leads to minority students being disproportionately impacted. Black students are nearly four times more likely to be suspended than their white peers. Procedures following the integration of zero-tolerance policies can include suspension or expulsion. The education system’s duty is to make students’ safety and wellbeing their top priority, which is essential for an individual’s academic and overall success. However, zero-tolerance policies assume that removing students who engage in disruptive behavior will deter others from disruption. This goal is often not accomplished and may actually cause a divide that will eventually make schools more unsafe. Furthermore, these policies push students out of the classroom after being criminalized, which later leads to incarceration. Once a student has such crucial marks on their record, it can be increasingly difficult for them to obtain jobs, access other educational opportunities, or experience feelings of sympathy from the outside world. The impact of these actions can be destructive to the students who have been faced with them and can lead to future behavioral issues. In March 2019, Project on the History of Black Writing (HBW) held the Mass Incarceration Symposium, where we were able to collectively shed light and educate others on how severe and prevalent these injustices are. Jennifer Wilmot, a Ph.D. 
student in the Educational Leadership Program gave a powerful and insightful presentation and personal testimony focusing specifically on how girls of color are often criminalized within the education system. Her presentation focused on how harrowing institutionalized racism can be and how permanently it can impact an individual, while simultaneously empowering women of color to stand up for themselves and other women in the face of hatred. Overall, zero-tolerance policies systematically promote the mistreatment of minority groups, particularly people of color and those with disabilities and promote the heinous school-to-prison-pipeline. These factors contribute to the toxicity of implicit bias and institutionalized racism in school, which is inherently negative for a student’s mental and emotional wellbeing. As a future educator, it is of utmost importance to me that all children have a chance to perform to the best of their abilities, whether it be continuing on to higher education later on in life, or pursuing their talents elsewhere. All infringements should be taken individually so that every child has a chance to defend themselves and have their voices heard. More than this, it gives all students of color and those with disabilities a better chance to avoid the school-to-prison pipeline. Action can be made from with the education system itself, from the administration keeping an open conversation, to the students who can continue to speak up about these injustices. While I do strongly believe in being patient, understanding, and compassionate, I want to also make clear that prejudice of any sort will not be tolerated. I want to be honest and open with my students, as well as providing an environment where they feel comfortable to ask questions that may feel a little more uncomfortable to them. I want to make clear that this is a safe place to learn about tolerance and values, as well as encouraging students to embrace and celebrate their own culture. I want to educate students on the severe consequences behind making assumptions and how these can stop them from growing as individuals, as well as distracting them from the potential of another human being. Ellee Rogers is a sophomore in the School of Education where she is pursuing a degree in secondary English education. In her free time, she enjoys reading and writing about current events and journaling.
Baroreceptors and mechanoreceptors respond to changes in pressure or stretch in blood vessels within the aortic arch and carotid sinus. In part, they can respond to changes in pH and changes in specific metabolites in the blood. They help maintain mean arterial pressure, adjusting blood pressure based on physiological input and return to their baseline level of activity upon attaining homeostatic arterial pressure. They are a part of the afferent system, which senses pressure inputs and relaying these via cranial nerve signals to adjust blood pressure accordingly. Understanding the physiological principles of the baroreceptor mechanism is clinically significant in understanding the mechanism of carotid massage, carotid occlusion, and in the Cushing reflex. In this review, we will explore the structure, function, and clinical relevance of the baroreceptors within the aortic arch and carotid sinus. Peripheral baroreceptors reside in the aortic arch and carotid sinus. The baroreceptors of the aortic arch transmit signals via the vagus nerve, or cranial nerve ten, to the solitary nucleus of the medulla. The location of the baroreceptors of the carotid sinus is where the common carotids bifurcate and transmit signals via the glossopharyngeal nerve, or cranial nerve nine, to the solitary nucleus of the medulla. The aortic arch and carotid sinus have stretch fibers which send electrical signals to the brain based on how much stretch stimulus they receive. As blood pressure increases, there is an increase in the stretch of the fibers of the aortic arch and carotid sinus, and this increases signals to the brain. The nucleus solitarius is the portion of the brain that receives neural messages and makes changes to blood pressure based on afferent inputs from the vagus nerve via the aortic arch and glossopharyngeal nerve via the carotid sinus. When increased blood pressure is detected, there is increased stretch fiber stimulation along the aortic arch and carotid sinus. The vagus and glossopharyngeal nerves send afferent inputs to the solitary nucleus of the medulla; this then leads to the efferent signal that exits the brain to cause veins to dilate, serving as a storage basin for blood and fluid in the heart, and for arteries to dilate to decrease blood pressure. This response is because decreased efferent sympathetic outflow causes decreased vasoconstriction which leads to reduced total peripheral resistance. Efferent signals also cause the heart to lower heart rate and to decrease contractility, because there is an increase in parasympathetic efferent outflow to the sinoatrial node which will reduce the heart rate. There is also a decrease in efferent sympathetic outflow, which causes a reduction in contractility of the heart and decreased heart rate, leading to reduced cardiac output. The increase in blood pressure also causes the kidney to decrease salt or water retention, leading to a decrease in blood pressure. These efferent signals from the heart, veins, arteries, and kidneys are compensatory for the increased blood pressure detected. Baroreceptor activity returns to baseline level upon attaining homeostatic arterial pressure. Hemorrhage, on the other hand, causes a decrease in blood pressure. This decreased blood pressure leads to fewer signals to be sent to stretch fibers in the aortic arch via the vagus nerve and carotid sinus via the glossopharyngeal nerve. 
Thus there are decreased inputs sent to the solitary nucleus of the medulla, which then leads to the efferent signal that exits the brain to cause veins and arteries to constrict, such that more fluid gets pushed into the heart and arteries. Efferent signals also cause the heart to increase heart rate and to increase contractility. These responses occur because there is a decrease in parasympathetic efferent outflow, which leads to a compensatory increase in heart rate. There is also an increase in efferent sympathetic outflow, which causes an increase in cardiac contractility, heart rate, vasoconstriction, and arterial pressure. The decrease in blood pressure also causes the kidneys to increase salt and water retention, leading to an increase in blood pressure. These efferent signals from the heart, veins, arteries, and kidneys are compensatory for the decreased blood pressure detected. The baroreceptors are already active during development, where their activity supports synaptic plasticity through neurotrophic relationships involving brain-derived neurotrophic factor (BDNF). They are derived from the ectodermal sheet. Baroreceptors in the aortic arch and carotid sinus have considerable clinical significance. For example, carotid massage applies increased pressure to the carotid artery. Increased pressure on the carotid artery leads to increased stimulation of the stretch fibers, which causes increased electrical signaling by the baroreceptors. The carotid sinus relays this increased afferent firing via the glossopharyngeal nerve, leading to a misinterpretation of the input as hypertension. To compensate, efferent signals driven by the nucleus solitarius cause venous dilation, arterial dilation, a decreased heart rate with an increased atrioventricular node refractory period, and ultimately reduced blood pressure. This sudden decrease in blood pressure can cause syncope, which often presents in patients who have a history of syncope while shaving or buttoning their shirts, activities which increase pressure on the carotid artery. In contrast, during carotid occlusion there is no blood flow to the carotid sinus. Decreased blood flow to the carotid artery leads to decreased stimulation of the stretch fibers, which causes lower levels of electrical signaling by the baroreceptors from the reduced stretch. The carotid sinus relays this decreased afferent firing via the glossopharyngeal nerve, which is incorrectly interpreted as hypotension. To compensate, efferent signals driven by the nucleus solitarius cause venous constriction, arterial constriction, increased heart rate, and ultimately increased blood pressure. Carotid occlusive disease, or carotid stenosis, can present in patients with ischemic stroke. The Cushing reaction is another clinically significant application of the baroreceptor mechanism. This reaction is a triad of bradycardia, hypertension, and respiratory depression. In the Cushing reflex, increased intracranial pressure causes cerebral arterioles to constrict, which leads to cerebral ischemia; the resulting increase in pCO2 and decrease in pH lead to increased sympathetic outflow and an increase in perfusion pressure, or hypertension. This hypertension increases the afferent baroreceptor signal because of the increased total peripheral resistance and increased arterial pressure: the greater stretch of the arterial walls produces increased electrical signals of baroreceptor firing.
This increased peripheral baroreceptor activity then leads to compensatory reflex bradycardia. The perception of pain diminishes with the retention of breath following a deep inhalation; this event reflects the intervention of the baroreceptors. The activation of the baroreceptors during systole attenuates the nociceptive stimulus.
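To make the reflex arc concrete, here is a deliberately simplified negative-feedback model in Python. The set point, gain, and time constant are invented values for illustration only, not physiological measurements; the point is that a sudden pressure drop (such as a hemorrhage) is only partially corrected, with the residual error shrinking as the feedback gain grows.

```python
# Toy proportional-feedback model of the baroreflex (illustrative only;
# all parameter values are assumptions, not physiological data).

def simulate_map(set_point=93.0, gain=4.0, tau=5.0, dt=0.1, t_end=60.0,
                 bleed_at=20.0, bleed_drop=20.0):
    """Euler simulation of mean arterial pressure with a step 'hemorrhage' at bleed_at seconds."""
    t, pressure = 0.0, set_point
    trace = []
    while t < t_end:
        unregulated = set_point - (bleed_drop if t >= bleed_at else 0.0)
        reflex_correction = gain * (set_point - pressure)   # autonomic adjustment opposing the error
        target = unregulated + reflex_correction
        pressure += (target - pressure) / tau * dt
        trace.append((round(t, 1), round(pressure, 1)))
        t += dt
    return trace

# With feedback, pressure settles near set_point - bleed_drop / (1 + gain)
# rather than falling by the full bleed_drop.
print(simulate_map()[-1])
```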
Lake Elmenteita is a hypertrophic, saline lake of low water clarity. Saline lakes are usually shallow and their waters do not stratify vertically. The lack of vertical stratification means that they undergo large fluctuations in lake volume and surface area, as well as in salinity levels. However, as a saline lake, its water is of the alkaline carbonate type, rich in sodium, hydrogen carbonate (bicarbonate) and chloride ions; thus, it is considered an area of high biomass and productivity. This high biomass and productivity allows Lake Elmenteita to serve as a home and a feeding area for Lesser Flamingos and a breeding area for Great White Pelicans. Lake Elmenteita does indeed support a migratory population of flamingos, which varies according to lake levels and salinity. The fluctuations in lake volume and salinity of Lake Elmenteita have been linked to flamingo population crashes; research shows that these crashes have been associated with periods of extreme salinity. The saline conditions in Lake Elmenteita also lead to the high productivity of its major phytoplankton species, the cyanobacterium 'Spirulina' Arthrospira fusiformis, which also serves as the only food source of the Lesser Flamingo. Saline lakes have few species types, but contain large numbers of microorganisms. Most of the physical processes taking place in the lakes are controlled by wind-induced currents. Major water inflows of saline lakes are via groundwater flow. These factors ultimately mean that the environmental impact on groundwater is much higher for saline lakes than for freshwater lakes. The high nutrient content in saline lakes is important for the ecological processes occurring within the water column. Nutrient level changes will in turn result in changes in the species complement and population structure. Groundwater quality is the sum of all natural and anthropogenic influences. The processes that may result in the addition of nutrients into groundwater systems may be physical, geochemical, or biochemical. Some examples of groundwater contamination sources are unsewered domestic sanitation such as pit latrines, disposal of industrial waste, cultivation using agrochemicals, and mining activities. The formation of the Great Rift Valley profoundly affected the drainage of East Africa, resulting in the formation of a chain of lakes: Turkana, Baringo, Bogoria, Nakuru, Elmenteita, Naivasha, Magadi, Natron and Tanganyika. Some of these lakes are long, narrow and deep, while others are wide and shallow. The lakes are located in areas with high evaporation rates but low rainfall. Three of these lakes are fresh water while the rest are alkaline. Eruptions of fluid lava with a low viscosity have resulted in the formation of basalts that flowed over large distances, while rhyolites have been formed by more viscous lava that did not flow as far. Basaltic rocks are basic and contain relatively low amounts of silica, while rhyolites are acidic rocks and hence rich in silica. The parent rock of both types contains high levels of sodium, resulting in the soda lakes of the Rift Valley. Also visible are trachyte rocks, which have an intermediate character. The high levels of chlorides in Lake Elmenteita are indicative of a closed lake system with high evaporation rates and hot springs fed by a geothermal aquifer. These fault movements were succeeded by trachytes and phonolites, which were later covered by thin layers of lake sediments.
Grid faulting followed this and resulted in minor vulcanicity in the area, which has led to the formation of several large craters; in one area there is evidence of a very young lava flow that forms the "Ututu", or Bad Lands. Lava tubes are also a feature of this area. Research also shows large lacustrine sediments, including thick diatomite deposits of middle Pleistocene age, at Kariandusi and Soysambu.
Soils of the Elmenteita Watershed and Basin
Soils found in the Elmenteita basin can be classified into five main types. These include the soils found on the hills and minor scarps, those found in the more mountainous areas and on the major scarps, soils developed in volcanic highlands and on volcanic plains, and lastly those on the undifferentiated areas of the uplands:
- Hills and minor scarps (H)
- Mountains and major scarps (M)
- Soils developed on sediments, mainly from volcanic ashes
- Volcanic plains (Pv)
- Uplands, undifferentiated levels (Ux)
Climate and the Hydrological Cycle
The springs and streams that provide the water supply to Elmenteita originate in the highland areas of Kenya, within the forested belts of the Mau Escarpment and the mountain ranges of the Central Rift. At Elmenteita, there is a dynamic relationship between the processes of precipitation, interception, through-flow, overland flow and groundwater recharge, which defines the interaction between vegetation cover and the hydrological cycle. In areas where ground surfaces are exposed by grazing and desertification, run-off is increased; this increase results in a smaller base flow and accelerated soil erosion, producing higher sediment yields that affect water quality as they are washed down into the lakes. The interaction between these processes also results in the eventual decline of groundwater pools.
Past Climatic Conditions
Between 20,000 and 12,000 years ago, the vegetation in the Elmenteita region was dominated by open grasslands and arid conditions. Later, between 17,000 and 15,000 years ago, a moderately wetter climate developed, which resulted in a slight increase in both montane and lowland forest vegetation in the region. In the last 30,000 years, research has indicated unusually low lake levels, which suggests the existence of a prolonged period of aridity and low temperatures. The existence of a period of aridity and low temperatures is also supported by research indicating a reduced forest distribution and increased grasslands in the region. In the Central Rift Valley, a trend towards warmer and wetter conditions around 12,500 BP was recorded, which coincided with glaciation at higher altitudes. As a result, Lake Elmenteita has experienced low and intermittent lake water levels since that time.
Present Climatic Conditions
Several weather stations are located in the Elmenteita area; however, only two of those stations - Soysambu and Dundori - have reliable and continuous rainfall data from 1958 to the present. Continuous temperature data are not available for the Elmenteita area. The floor of the Central Rift Valley is currently mildly warm and dry, while the escarpments to the east and west are often cool and wet. Both rainfall and temperature typically correlate closely with altitude; yet, in the Central Rift Valley, this correlation is altered slightly due to the rain-shadow cast by the mountains located to the east.
In this precise location, temperature decreases with altitude at a rate much faster than the rate at which precipitation increases; thus, Soysambu Conservancy is often hot, dusty and dry. Elmenteita and part of its catchment area are classified in agroclimatic zone VI-6, which is characteristically humid to arid with dry woodland vegetation. The remaining sections of the Central Rift are classified as agroclimatic zone III-5, which is characterised by humid conditions with dry forest and woodland vegetation. In the region, the mean annual temperatures range from 12°C in the warm, humid upper catchment areas to 18°C in the hot, dry lowlands. Basin evaporation rates are high, at above 1,400 mm/year. In the fringing highlands of the upper catchment, mean rainfall was reported at around 1,066 mm/year between 1959 and 1985, whereas the mean was recorded at 733 mm/year between 1958 and 1987. Rainfall in the area tends to be bimodal, with two peaks in April and August. The average annual rainfall for Soysambu is about 790 mm, with two peaks in April (average 107 mm) and November (average 71 mm). The Central Rift Valley forms an important catchment for the drainage from the two forest stands on both margins of the rift. The Nyandarua Mountains to the East (3960 m) and the Mau Escarpment to the West (300 m) each drain into one of the three central rift lakes: Nakuru, Elmenteita and Naivasha. Due to its shallowness, Elmenteita fluctuates greatly not only in water level but also in alkalinity, and thus can only at times support fish. The lake is now a shallow, small, saline lake that is fed by inflows from the rivers Mbaruk, Chamuka and Meroroni. Historically, the main water source has been the Meroroni River, which initially flows parallel to the other rivers and then abruptly changes direction to flow south-east along an extremely straight line; this is due to Rift Valley faulting. In recent years, inflow from the Meroroni River has decreased significantly as a result of increased upstream water withdrawal. There is also some inflow from hot springs located on the south-eastern part of the lake, while subsurface flows from Lake Naivasha also add to water levels.
As the global headcount nears 8 billion, our thirst for kilowatts is growing by the minute. How will we keep the lights on without overheating the planet in fossil fuel exhaust? Alternative energy is the obvious choice, but scaling up is hard. It would take an area the size of Nevada covered in solar panels to get enough energy to power the planet, says Justin Lewis-Weber, "and to me, that's just not feasible." This past March, Lewis-Weber, a then-high school senior in California, came up with a radical plan: self-replicating solar panels—on the moon. Here's the gist: When solar panels are orbiting Earth, they enjoy 24 hours of unfiltered sunshine every day, upping their productivity. Once out there, they could convert that solar radiation into electricity (just as existing solar panels do) and then into microwave beams (using the same principle as your kitchen appliance). Those microwaves then get beamed back to Earth, where receivers convert them back into electricity to power the grid. Simple! Except that Lewis-Weber estimates that building and launching thousands of pounds of solar panels and other equipment into space will be outrageously expensive, in the range of hundreds of trillions of dollars. Instead, he suggested, why not make them on the moon? Land a single robot on the lunar surface, and then program it to mine raw materials, construct solar panels, and (here's the fun part) make a copy of itself. The process would repeat until an army of self-replicating lunar robot slaves has churned out thousands of solar panels for its power-hungry masters. You'd still have to get those panels to orbit Earth so a steady beam could reach the surface; that's where the moon's weak gravity and nonexistent atmosphere come in handy, requiring far less energy (and money) for the panels to escape. It might be a moonshot, but the technology to pull it off isn't far away.
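As a rough sanity check on that Nevada figure (this is not from the article, just back-of-envelope arithmetic with assumed round numbers for Nevada's area, average ground-level sunlight and panel efficiency), the claim comes out at roughly the right order of magnitude:

```python
# Back-of-envelope check of the "area the size of Nevada" claim.
# All inputs below are rough assumptions, not figures from the article.

NEVADA_AREA_M2 = 286_000 * 1e6      # ~286,000 km^2 in square metres
AVG_IRRADIANCE_W_M2 = 200           # rough 24-hour average at the ground
PANEL_EFFICIENCY = 0.20             # typical commercial panel
WORLD_AVG_POWER_W = 18e12           # ~18 TW average global power demand

output_w = NEVADA_AREA_M2 * AVG_IRRADIANCE_W_M2 * PANEL_EFFICIENCY
print(f"Nevada-sized array output: {output_w / 1e12:.1f} TW")
print(f"Fraction of world demand:  {output_w / WORLD_AVG_POWER_W:.0%}")
```

With those assumptions the array delivers about 11 TW, around two-thirds of average global demand — so "an area the size of Nevada" is a fair ballpark for the scale of the problem.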
The beauty of biotechnology is that the same techniques can be used whether you are studying humans, mice, fruit flies, or the ugly naked mole rat. A human gene can even be cloned into a plasmid and expressed in insects and bacteria. The same DNA sequencing techniques have been used to sequence the genomes of organisms from humans to yeast. PCR can be used to amplify DNA from any organism whose sequence is known. We can perform a Western blot to look for a protein from any organism. Well, as long as we have an antibody for that protein. We can do DNA microarray analysis on several different organisms, too. Gene chips have been created for many different organisms. Why can these techniques be used for any organism? We'll take universality of the genetic code for $200, Alex. As different as organisms are (compare a human being with a fruit fly), our DNA contains the same four building blocks or bases: A, G, C, and T. Base pairing rules are even the same across species. The techniques used to extract the DNA from different organisms may differ slightly, however. After all, organisms show structural variations. For example, the technique for bursting open the cells has to be a little different depending on whether the cell is surrounded by a membrane (animal cells) or a tough wall (plant cells). The structural variation is part of the diversity created by the four bases. Wait! What? We know we told you the four bases unite all organisms. They are also responsible for making organisms different. The sequence of the four bases is what creates diversity among organisms. The four bases are both the yin and the yang. The unity and the diversity.
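For what it's worth, those universal base-pairing rules are simple enough to write down in a few lines of code; this is just an illustrative sketch added here, not part of the original lesson:

```python
# The same A-T / G-C pairing table works for DNA from any organism.
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence."""
    return "".join(PAIRS[base] for base in reversed(seq.upper()))

print(reverse_complement("ATGCGT"))  # -> ACGCAT
```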
The separation of powers: doctrine and practice
by Graham Spindler, Education Programs, Parliament House
In fact, the doctrine is not exemplified in the constitutions of the Australian states. However, the practice is usually evident, and if the object of the separation of powers is to develop mechanisms to prevent power being overly concentrated in one arm of government, then state processes do eventually have that effect.
DEFINING THE DOCTRINE
The doctrine of the separation of powers divides the institutions of government into three branches: legislative, executive and judicial: the legislature makes the laws; the executive puts the laws into operation; and the judiciary interprets the laws. The powers and functions of each are separate and carried out by separate personnel. No single agency is able to exercise complete authority, each being interdependent with the others. Power thus divided should prevent absolutism (as in monarchies or dictatorships where all branches are concentrated in a single authority) or corruption arising from the opportunities that unchecked power offers. The doctrine can be extended to enable the three branches to act as checks and balances on each other. Each branch's independence helps keep the others from exceeding their power, thus ensuring the rule of law and protecting individual rights. Obviously, under the Westminster System - the parliamentary system of government Australia adopted and adapted from England - this separation does not fully exist. Certainly in Australia the three branches exist: the legislature in the form of parliaments; the executive in the form of the ministers and the government departments and agencies they are responsible for; and the judiciary, or the judges and courts. However, since the ministry (executive) is drawn from and responsible to the parliament (legislature), there is a great deal of interconnection in both personnel and actions. The separation of the judiciary is more distinct.
ORIGINS OF THE DOCTRINE
States throughout history have developed concepts and methods of separation of power. In England, parliament from its origins at least seven centuries ago was central to a struggle for power between the original executive (the monarch) and the councils of landowners, church leaders and commons. Similarly judges, originally representing the executive, developed increasing independence. Parliament was a significant force in an increasingly mixed form of government by the time of the Tudors and soon afterwards was directly challenging the doctrine of the divine right to power of the Stuart monarchs. The English Civil War (1642-60) between parliament and monarchy resulted in the monarchy continuing, but under an arrangement which established not only parliament's legislative authority but also opened the way to the development of cabinet government. In his Second Treatise of Civil Government, English philosopher John Locke (1632-1704) noted the temptations to corruption that exist where "... the same persons who have the powers of making laws to have also in their hands the power to execute them ...". Locke's views were part of a growing English radical tradition, but it was the French philosopher Baron de Montesquieu (1689-1755) who articulated the fundamentals of the separation doctrine, as a result of visiting England in 1729-31. In his The Spirit of Laws (1748), Montesquieu considered that English liberty was preserved by its institutional arrangements.
He saw not only separations of power between the three main branches of English government, but within them, such as the decision-sharing power of judges with juries, or the separation of the monarch and parliament within the legislative process. Locke's and Montesquieu's ideas found a practical expression in the American Revolution in the 1780s. Motivated by a desire to make impossible the abuses of power they saw as emerging from the England of George III, the framers of the Constitution of the United States adopted and expanded the separation of powers doctrine. To help ensure the preservation of liberty, the three branches of government were both separated and balanced. Each had separate personnel, and there were separate elections for executive and legislature. Each had specific powers and some form of veto over the other. The power of one branch to intervene in another through veto, ratification of appointments, impeachment, judicial review of legislation by the Supreme Court (its ability to strike down legislation or regulations deemed unconstitutional), and so forth, strengthened the separation of powers concept, though it inevitably involved each branch in the affairs of another and to some extent actually gave some of the powers of one branch to another. It was a high water mark in institutionalising individual liberty through the separation of powers, and one embedded even further by early judgments of the Supreme Court; but, as the struggles, inefficiencies and political gamesmanship illustrated by the recent Clinton impeachment attempt or by Congress's delaying of budgets show, it also made government harder. This had been partly the intention. Few subsequent democracies have fully adopted the American approach, but the concept is widely aspired to, though taking varying forms amidst the complex interplay of ideas, interests, institutions and Realpolitik that are part of each system of government.
THE DOCTRINE IN AUSTRALIA - THE COMMONWEALTH
While certainly not the American model, a form of the doctrine operates in the Australian versions of the Westminster model, most notably in the Federal Constitution. The writers of the Australian Constitution in the 1890s retained the Westminster cabinet system. Unlike the Americans of the 1780s, they had several working democratic federal constitutional models to examine, along with well-established democratic traditions of their own, and wanted to maintain strong ties with Britain, not create a revolution. Their interest in the U.S. Constitution was more in its mechanisms of federation, such as the Senate, than in the checks and balances between branches of government. Nevertheless some elements can be found. The Australian Constitution begins with separate chapters for each of the Parliament, Executive and Judiciary, but this does not constitute a separation of powers in itself. Executive power was nominally allocated to the Monarch, or her representative the Governor-General (Section 61), while in practice being allocated to the Ministry by the requirement that the Governor-General act on the Government's advice (subject, of course, to the Governor-General's controversial reserve powers). This was the Westminster model, and it relied on convention as much as the words of the Constitution. However, the specific requirement for Ministers of State to sit in Parliament (Section 64) clearly established the connection between Executive and Parliament and effectively prevented any American-style separate executive.
The situation with the judiciary, however, was different. The whole of Chapter III of the Constitution (Judicial Power of the Commonwealth), and Section 71 in particular, has been used by the courts to establish a strict separation of powers for Federal Courts from the ministry and parliament. In New South Wales v. Commonwealth 1, the High Court ruled that this part of the Constitution does embody the doctrine of separation of judicial powers. This also applies to tribunals and commissions set up by Federal Parliament which, unlike some of their equivalents in the states, can only recommend consequences. The Federal Parliament itself, however, has the rarely used privilege of being able to act as a court in some circumstances, primarily where it may regard a non-member as acting in contempt of parliament. However, the Courts have found that the separation that exists for the judiciary does not strictly apply to the relationship between executive and legislature 2. In Victorian Stevedoring and General Contracting Co. v. Dignan 3, it was found that legislative power may be delegated to the executive. The same case, however, reconfirmed the separation of judicial powers. Thus, while the courts are separate and the High Court can rule on legislative and constitutional questions, the executive is not only physically part of the legislature, but the legislature can also allocate it some of its powers, such as the making of regulations under an Act passed by Parliament. Similarly, the legislature could restrict or over-rule some powers held by the executive by passing new laws to that effect (though these could be subject to judicial review). The Constitution does provide for one form of physical separation of executive and legislature. Section 44, concerning the disqualifications applying to membership of Parliament, excludes from Parliament government employees (who hold an office of profit under the Crown) along with people in certain contractual arrangements with the Commonwealth. This was demonstrated in 1992 after Independent MP Phil Cleary had won the Victorian seat of Wills. Cleary, on leave without pay from the Victorian Education Department at the time of his election, was held to be holding an office of profit under the Crown and disqualified 4. The Court noted that Section 44's intention was to separate executive influence from the legislature. This requirement does not apply to state elections. Elections themselves, in recent years, have reflected voter concern with separation of powers-related issues. In 1995, NSW voters overwhelmingly endorsed a referendum proposal clarifying the independence of judges. In the 1999 Victorian election, voters appeared to reject a perceived concentration of power by the Premier, particularly in his gagging of fellow party members and changes to the role of the state Auditor-General. Even though the Australian Constitution says little about political parties, parties have an important impact on the relationship of powers between executive and legislature. The existence of varied political parties is a feature of the freedoms of opinion essential to a liberal democratic system, and the contest between them is a factor in controlling the potential excesses of any one group. However, the system can have other effects. Since, by convention, the party controlling the lower house forms the government, the ministry (being also the party leaders) also exerts authority over the lower house.
The exceptional strength of Australian party discipline ensures that, within the house, every member of the numerically larger party will almost always support the executive and its propositions on all issues. Despite debates and the best efforts of the Opposition and Independents (particularly in Question Time), this inevitably weakens the effective scrutiny of the executive by the legislature. Party domination in Australia thus further reduces the separation between executive and legislature, although Parliamentary processes do usually prevail. However, robust democratic systems have a capacity to self-correct, as has been demonstrated by the Senate. Because of the party system, the Senate failed ever to be the states' house originally intended by the Constitutional framers. However, the adoption of a proportional system of voting in 1949 created a new dynamic, and the Senate in recent decades has rarely been controlled by Governments. Minor parties have gained greater representation, and Senate majorities on votes come not from the discipline of a single party but from a coalition of groups on a particular issue. This happens in most democracies but in Australia is often regarded (particularly by supporters of the major parties) as an unnatural aberration. As a result, the role of the Parliament as scrutineer of executive government, immobilised to some extent in the Lower House by the party system, has been expanded by the Upper House.
THE DOCTRINE IN AUSTRALIA - THE STATES
In the case of the Australian states, where the basic governmental structures were in place before the Australian Constitution, separation of powers has little constitutional existence even though it is generally practiced. This has been shown in cases such as Clyne v. East 5 for NSW, and the doctrine has been extensively discussed in cases such as Kable v. The Director of Public Prosecutions 6. In these and other judgments it was noted that a general doctrine of separation of powers operates as accepted practice in the state through constitutional convention. That the position is similar in other states has been confirmed in cases in Victoria 7, Western Australia 8 and South Australia 9. In practice, there is far more crossing of responsibilities in the states than Federally. As with the Commonwealth, Ministers have powers to make regulations (in effect, legislating) and are, of course, Members of Parliament and responsible to it. Again, the rigid party system increases the domination of at least the lower house by the executive from the majority party, and there are often complaints that the executive is manipulating parliament or treating it with contempt. In some cases upper houses have increased their roles of scrutiny of the executive, though this varies according to the electoral systems used for upper houses where they exist. Parliamentary scrutiny of the executive, in particular by the NSW Upper House, was tested in 1996-99 when Treasurer Michael Egan, on behalf of cabinet, refused to table documents in the Legislative Council, of which he is a member. The documents related to several controversial issues, and the reasons given for this refusal included commercial confidentiality, public interest immunity, legal professional privilege and cabinet confidentiality. The Council, determined to exercise its scrutiny of the executive, pressed the issues and eventually adjudged the Treasurer to be in contempt, suspending him from the house twice.
The matters were disputed in three cases in the High Court and the Supreme Court of NSW 10-12. The results upheld that the Legislative Council did have the power to order the production of documents by a member of the House, including a minister, and could counter obstruction where it occurred. However, the question of the extent of the power as regards cabinet documents will be subject to continued court interpretation. In relation to the judiciary, traditionally the most separated and independent arm, the separation so clearly established in the Commonwealth does not exist in the state constitutions. Nevertheless, certain state courts, having had jurisdiction to deal with Federal laws conferred on them by the Commonwealth Parliament, have in effect a Federal Constitutional basis for separation of their powers. The general separation of state courts is practiced, but the issue of tribunals set up by state parliaments is different, since such bodies sometimes exercise both executive and judicial power through being able to impose fines or penalties. The Administrative Decisions Tribunal in NSW is one such example. In NSW, the issue of judicial independence was recently raised in a rare Australian instance of a legislature exercising scrutiny over a judge. The power of removal of a judge in NSW lies with the Governor on Parliamentary recommendation, the possible grounds being proved misbehaviour or incapacity. In 1998 the Judicial Commission recommended Parliament consider removal of a Supreme Court Judge on the grounds of incapacity. In the Court of Appeal 13, the Judge, Justice Bruce, argued that this contradicted the concept of the independence of the judiciary. The Supreme Court agreed that, despite the lack of any formal separation of powers in the NSW Constitution, the Commonwealth Constitution did significantly restrain Parliamentary interference with the judiciary. Nevertheless, the court held that nothing had occurred that would impinge on the integrity of the judicial system and that Parliament could consider the case. Justice Bruce appeared before the Legislative Council but removal was not recommended. While the doctrine of the separation of powers and its practice will not necessarily be the same thing, the purpose behind the doctrine can be seen to be embedded in democracies. In the Westminster system, as practiced in Australia, discussion of the doctrine is riddled with exceptions and variations. Certainly, in its classical form it exists here only partially at best; but in practice, mechanisms for avoiding the over-concentration of power exist in many ways: through constitutions and conventions; the bicameral system; multiple political parties; elections; the media; courts and tribunals; the federal system itself; and the active, ongoing participation of citizens. The doctrine is part of a simultaneously robust and delicate constant interplay between the arms of government. A tension between separation and concentration of powers will always exist, and the greatest danger will always lie with the executive arm - not judges or legislatures - because in the executive lies the greatest potential and practice for power and for its corruption. Preventing this in our system relies as much upon conventions as upon constitutions, and the alarm bells should ring loudly when government leaders dismiss or profess ignorance of the concept.
10 Bountiful Blue Whale Facts
"Whale" is the common name for a variety of marine mammals. The Latin word cetus gives the group its scientific name, Cetacea. Cetaceans come in two categories. The largest, the Mysticeti, are characterized by their baleen, sieve-like structures in the upper jaw that are used to filter food from the seawater. The other category, the Odontoceti, have teeth; they include sperm whales and orcas.
1. Blue Whales Are the Largest Animal to Have Lived on Earth – Ever
Blue whales are the largest animal ever known to have lived on Earth, growing to lengths of 30 meters (100 feet), or as long as three school buses. They can weigh as much as 200,000 kg (440,000 pounds), and several of their organs are the largest in the animal kingdom. Their hearts, for example, can weigh as much as a car, around 180 kg (400 pounds). However, blue whales have proportionally small brains, about 7 kg (15 pounds), only about 0.007% of their body weight. All whales are descended from land-living mammals and are thought to have begun their transition to seaborne life around 50 million years ago. The buoyant support of saltwater is why they were able to grow larger than land animals.
2. The Blue of the Blue Whale Comes Partially from the Sea
Although blue whales are thought to have a deep blue color, when they are at the surface of the water they actually appear grey. When they dive down again, the color of the water and the light from the sun make them look deeper blue than they really are. Blue whales are really a lightly mottled blue-grey, with light grey or yellow-white undersides. The yellow ventral coloring is due to the accumulation of diatoms (microscopic unicellular marine algae) in cold water.
3. Each Blue Whale Has Its Own Unique Marking
Before modern whaling began in the late 19th century, little was known about blue whales. Nowadays, however, researchers have applied photo-identification to learn that the mottled pigmentation pattern characteristic of the species is unique to each individual. The mottling is distinct enough that individuals can easily be recognized through clear photographs. The amount of detail in the markings is so great that exact duplicates are extremely unlikely. Over a long period of study, researchers have learned about blue whale lifespans and migration routes.
4. Blue Whales Have One of the Loudest Voices on the Planet
Although we cannot hear them underwater, blue whales have one of the loudest voices on Earth. Their call can be louder than a jet engine, and has been measured at 188 decibels. It is thought that in good conditions blue whales can hear each other across distances of up to 1,600 km (994 miles). They communicate by using loud, low-pitched moans and whines. During mating periods, around late autumn and until the end of winter, adult blue whales perform mating calls. These "mating songs" can likewise be heard over incredibly long distances.
5. Conscious Sleeping Means Maximum Blue Whale Efficiency
Although blue whales are mid-water hunters, they must come to the surface to breathe. Blue whales have twin blowholes shielded by a large splashguard. These blowholes are large enough for a young child to crawl through. When blue whales surface, they exhale air out of their blowholes in a vertical cloud of pressurized vapor rising up to 9 meters high (30 feet). When searching for food, blue whales sometimes dive very deep. The deepest confirmed dive was 506 meters (1,660 feet), with 35 minutes underwater. (Most dives last 10 – 20 minutes or less.)
Blue whales, like other whales and dolphins, are conscious breathers. They never fall asleep completely, resting only one half of their brain at a time. The other half stays awake to prevent drowning.
6. Big Blue Whales Survive on Small Things
Despite being Earth's largest animal, blue whales primarily eat krill, a small aquatic lifeform resembling shrimp. The Norwegian word "krill" means "young fry of fish." Blue whales can eat as many as 40 million krill per day, or around 3,630 kg (8,000 pounds) daily. In order to maintain their diet, blue whales are almost always found in areas with high concentrations of krill, such as the Arctic Ocean. Despite being such huge animals, blue whales lack the esophagus size to consume larger sources of food. They are unable to chew and break down food into smaller pieces. In fact, the blue whale esophagus is so small it would not be able to swallow an adult human.
7. The Migration Patterns of Blue Whales Are Highly Diverse
Many whales, especially baleen whales, tend to migrate long distances between cold-water feeding grounds and warm-water breeding grounds each year. These migration patterns are not thoroughly understood and are highly diverse. Some populations appear to be year-round residents of high-productivity habitats. Other whale groups migrate to cold water, like the Arctic and Antarctic, for feeding. As such, they are occasionally spotted on both Arctic cruises and Antarctic cruises. After the end of feeding season, they travel back to warmer water, where there are stable and secure places for birthing. On long migrations, blue whales are known to fast for up to four months, living off of stored body fat accumulated during their feeding season.
8. Even Baby Blue Whales Are Bigger Than Most Animals
Females breed only once every three years, and gestation lasts between 11 and 12 months. They usually have only one young. The calves are born more than 7 meters long (25 feet) and weigh up to 3,000 kg (6,600 pounds). They enter the world already ranking among the planet's largest creatures, and are suckled for up to a year before feeding independently. Baby blues gorge on nothing but mother's milk, gaining about 90 kg (200 pounds) per day during their first year.
9. A Built-In Thermal Insulator Keeps Blue Whales Warm
Heat loss in water is 27 times greater than on land, but blue whales have adapted to cold oceans: more than a quarter of a blue whale's body mass is blubber, which acts as a form of protection and thermal insulator. They have no skin glands for evaporation, instead using blubber thickness and blood flow to stay warm.
10. Blue Whales Were Once Numerous
Blue whales have a truly global distribution, occurring in all of the world's oceans. In pre-whaling eras, there may have been more than 250,000 blue whales worldwide. But intensive hunting in the 1900s reduced blue whale populations by more than 99 percent. From 1904 to 1967, more than 350,000 blue whales were killed in the Southern Hemisphere. Fortunately, the International Whaling Commission finally gave them protection in 1966. Blue whales are one of the rarest whales, numbering between 10,000 and 25,000 today. Most biologists consider them to be among the most endangered of the great whales.
costo-, cost-, costi- + (Latin: rib, ribs; side; coast) 2. To approach and to speak to someone boldly or aggressively, as with a demand or request. 3. Etymology: via French and ultimately from Latin accostare, "to adjoin"; from Latin costa, "rib, side" (source of English coast). The essential sense is "to be alongside". Costa is the Latin word for "rib", and therefore "side", and accost is formed from Latin ad-, "to" + costa, making the verb accostare, "to bring to the side of, to bring side by side". From this, or from the French derivative accoster, we have made English accost, which first meant "to lie alongside", then "to come alongside", "to approach and to greet"; and finally simply "to greet", "to speak to". 2. Someone, especially anyone who is not known, who has been approached or stopped and spoken to in a threatening way. 2. To move forward by momentum, without applying power, or to cause something to move in this way. 3. To progress with very little effort. 4. Etymology: from Old French coste, "shore, coast"; from Latin costa, "a rib, a side", developing a sense in Medieval Latin (Latin as written and spoken from about 700 to around 1500) of the shore as the "side" of the land. French also used this word for "hillside, slope", which led to the verb use of "sled downhill." 2. The two lower ribs on either side that are not attached anteriorly. 2. A painful affection of the tendinous attachments of the thoracic muscles, usually on one side only. 3. Pain in a rib or the intercostal spaces (e.g., intercostal neuralgia).
A particle P is moving on a straight line with S.H.M. of period pi/3 s. Its maximum speed is 5 m/s. Calculate the amplitude of the motion and the speed of P 0.2 s after passing through the centre of oscillation. T = 2*pi/w, so w = 6 rad/s. Then v_max^2 = w^2(a^2 - 0), i.e. 5^2 = 6^2 * a^2, so a = 5/6 m (matched with book). The answer given in the book for the speed is 1.81 m/s. Somebody help. What went wrong with this very simple sum?
Interesting. I've never heard of this before, but according to Wikipedia, simple harmonic motion is given by x(t) = A*cos(2*pi*f*t + phi), where x is displacement, t is time, A is amplitude, f is frequency, and phi is phase. We also note that period is given by T = 1/f. This means that f = 1/T = 3/pi. So: x(t) = A*cos(6t + phi). To find velocity, we take the derivative: v(t) = -6A*sin(6t + phi). To find the velocity extrema, we take the derivative of v(t) and set it equal to zero: -36A*cos(6t + phi) = 0, which happens when 6t + phi = pi/2. Let's plug that into our velocity function: v = -6A*sin(pi/2) = -6A, so |v_max| = 6A = 5. Since the amplitude must be a positive value, we can just say A = 5/6 m. Now, the centre of oscillation is another way of saying x = 0. So we need cos(6t + phi) = 0 at the moment the particle passes the centre. Let's let phi = -pi/2 and start the clock at that instant, so that x(t) = A*sin(6t). And we plug that into our velocity function: v(t) = 6A*cos(6t) = 5*cos(6t). Then plug in t = 0.2: v = 5*cos(1.2) ≈ 1.81 m/s. But of course speed is just the magnitude of velocity, and so the value is 1.81 m/s, which matches the book.
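A quick numeric check of the working above (my own addition, assuming SI units and radians throughout) reproduces the book's answer:

```python
import math

T = math.pi / 3          # period (s)
v_max = 5.0              # maximum speed (m/s)
t = 0.2                  # time after passing the centre (s)

omega = 2 * math.pi / T              # angular frequency = 6 rad/s
a = v_max / omega                    # amplitude = 5/6 m
speed = v_max * math.cos(omega * t)  # speed t seconds after the centre

print(f"omega = {omega:.1f} rad/s, amplitude = {a:.3f} m")
print(f"speed at t = 0.2 s: {abs(speed):.2f} m/s")   # ~1.81 m/s
```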
NASA's Kepler Space Telescope has helped discover a new Earth-sized planet whose year lasts less time than an average work day or a good night's sleep. Kepler 78b zips around its host star in a mere 8.5 hours — making this one of the shortest orbital periods ever detected. (See also: "Most Earthlike Planets Found Yet: A 'Breakthrough.'") Researchers at MIT are reporting that Kepler 78b sits about 700 light-years away from Earth and orbits about 40 times closer to its parent star than Mercury does. This scorched planet orbits so close that it sports temperatures reaching up to 5,000 degrees Fahrenheit. While this would seem to be the destination to celebrate a lot of birthdays, it's probably not the best place for a vacation, since scientists predict the surface is covered with molten lava. "We've gotten used to planets having orbits of a few days," said Josh Winn, co-author of both studies, in a press statement. "But we wondered, what about a few hours? Is that even possible? And sure enough, there are some out there." Winn and his team were able to detect the light given off by Kepler 78b by measuring the dips in starlight each time the planet periodically passed in front of its star. (Related: Kepler Spacecraft Disabled; "Exciting Discoveries" Still to Come) Looking forward, Winn will be working towards getting a handle on how much this planet may actually tug on its star, which will hopefully allow the team to measure the planet's mass, making it the first Earth-sized planet outside our own solar system whose mass is known. But the discovery of this hellish planet doesn't rule out the possibility that there may be other short-period alien worlds out there that are indeed habitable. Winn's team is now on the hunt for just these kinds of planets circling small brown dwarf stars. "If you're around one of those brown dwarfs, then you can get as close in as just a few days," explained Winn. "It would still be habitable, at the right temperature." The discovery of Kepler 78b appears in The Astrophysical Journal.
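The "40 times closer than Mercury" figure can be roughly checked with Kepler's third law; the snippet below is my own back-of-envelope estimate and assumes a host star of about one solar mass (Kepler-78 is actually somewhat smaller than the Sun, so the real ratio differs slightly):

```python
import math

G = 6.674e-11            # gravitational constant (SI)
M_star = 1.989e30        # assumed ~1 solar mass (kg)
T = 8.5 * 3600           # orbital period (s)
AU = 1.496e11            # metres
a_mercury = 0.387 * AU   # Mercury's semi-major axis

# Kepler's third law: a^3 = G * M * T^2 / (4 * pi^2)
a = (G * M_star * T**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"orbital radius: {a / AU:.4f} AU")             # ~0.01 AU
print(f"Mercury is ~{a_mercury / a:.0f}x farther out")
```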
Harvard University announced that its researchers have developed a way to print objects using sound. Called "acoustophoretic printing," the method "could enable the manufacture of many new biopharmaceuticals, cosmetics, and food, and expand the possibilities of optical and conductive materials," according to the press release dated August 31, 2018. Printing with liquid, such as ink, has become a way of life, thanks to the inkjet printing process. But what if you wanted to print living cells or other biological materials? What if you wanted to print liquid metal? With inkjets, the ability of a printer to pull a substance out of a nozzle grinds to a halt as the substance becomes thicker. But now, though it is still very early in the experimental phase of the process, the team of scientists at Harvard has announced significant progress in the creation of sound fields that can pull viscous substances, such as liquid metal, honey and even living cells, from the nozzle of a printer. It begins with gravity. Simple gravity is what causes liquid to drip. How fast or often it drips depends on its viscosity — its thickness and resistance to shearing and tensile stresses. Water, for example, is far less viscous than corn syrup. Corn syrup is far less viscous than honey. The more viscous a fluid is, the longer it takes for gravity to produce a droplet. Printing systems, such as inkjet printing, typically use a droplet method of transferring a liquid material to a medium, such as paper. The more viscous a material is, however, the more difficult it is to manipulate for printing. "Our goal was to take viscosity out of the picture by developing a printing system that is independent from the material properties of the fluid," said Daniele Foresti, a research associate in materials science and mechanical engineering at Harvard. This is where sound comes in. Foresti and his fellow researchers began experimenting with the pressures of sound waves on liquids in order to give gravity a boost. They built a "subwavelength acoustic resonator" designed to produce tightly controlled acoustic fields that effectively increase the relative gravity at the printing nozzle. According to the release, the researchers have been able to generate pulling forces of "100 times the normal gravitation forces (1G) of the printer nozzle," more than four times the gravity of the sun. The size of the droplet is simply determined by the amplitude of the soundwave — the higher the amplitude, the smaller the drop. "The idea is to generate an acoustic field that literally detaches tiny droplets from the nozzle, much like picking apples from a tree," said Foresti. A wide range of materials have been used to test this new printing method, including honey, stem-cell inks, biopolymers, optical resins and liquid metals. Because the sound waves don't pass through the printed material, using sound to create droplets won't harm the material itself, which is important for printing with living cells. Dr. Jennifer Lewis, professor of biologically inspired engineering at Harvard, stated, "Our technology should have immediate impact on the pharmaceutical industry. However, we believe that this will become an important platform for multiple industries."
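To get a feel for why boosting the effective gravity at the nozzle shrinks the droplets, here is a toy calculation based on the classic dripping-faucet balance (Tate's law); the fluid properties and nozzle size are assumed for illustration and are not figures from the Harvard work:

```python
import math

# Assumed, honey-like fluid and a hypothetical nozzle -- illustration only.
rho = 1400.0       # fluid density, kg/m^3
gamma = 0.07       # surface tension, N/m
d_nozzle = 0.5e-3  # nozzle diameter, m
g = 9.81           # m/s^2

def droplet_volume(g_eff: float) -> float:
    """Volume at detachment: weight balances surface tension (rho*V*g_eff = pi*d*gamma)."""
    return math.pi * d_nozzle * gamma / (rho * g_eff)

for boost in (1, 10, 100):
    v = droplet_volume(boost * g)
    dia = (6 * v / math.pi) ** (1 / 3)   # equivalent spherical diameter
    print(f"{boost:>3}x gravity -> droplet ~{dia * 1e3:.2f} mm across")
```

With these assumed numbers, a hundredfold increase in effective gravity cuts the detached droplet diameter from a few millimetres to roughly half a millimetre, which is the basic idea behind using acoustic forces instead of relying on gravity alone.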
Dr. Seuss's Sound Words: Playing with Phonics and Spelling
Grades: K – 2 | Lesson Plan Type: Standard Lesson | Estimated Time: Three 50-minute sessions
Through the contrast of short-vowel patterns and use of Dr. Seuss rhymes, students apply their knowledge of vowel sounds in reading and spelling new words.
Grades K – 2 | Calendar Activity | March 2: After sharing The Cat in the Hat and other patterned books with students, groups brainstorm sets of rhyming words and create a story using these words.
Grades 7 – 12 | Calendar Activity | May 24: Students discuss why certain contests get more publicity than others and what counts as "knowledge."
Grades 1 – 12 | Calendar Activity | July 12: Students make mental "snapshots" of a natural setting, then capture the details of their setting by writing and then creating a class booklet of the nature walk.
Grades K – 8 | Professional Library | Book: Laminack and Wood describe, in accessible and lively prose, both the theoretical foundations and the practical details of whole language classrooms.
Grades K – 8 | Professional Library | Journal: Explores (from the point of view of the writer, a children's author) one aspect of learning about language that is present in the picture books he writes: the relation between sound and sense.
ACTIVITIES & PROJECTS
Grades K – 2 | Activity & Project: Whether taking a sound hike at the mall, a near-by park, or on a family trip, ask children to notice the sounds they hear and then use sound words as they write their own books.
Module 12 – Phylum Arthropoda
Phylum Arthropoda – This is my favorite module and I was not able to lecture on it for more than one day. We basically covered only the general characteristics of arthropods and the characteristics of the classes Arachnida and Insecta. We could have easily stayed here for three weeks. We were only able to really cover the crayfish. Dissection of the Crayfish. This is the module in which the students begin to collect their insect specimens. The requirements are:
- 30 different insects. It can be more than one kind of beetle or more than one kind of butterfly, but they cannot have all beetles or all butterflies.
- They must be pinned on a foam board in a neat order.
- They must be labeled with phylum and class. Naming the insect would be great too.
- Due at the end of the year.
- Any different insects they collect after the 30 will be extra credit.
- How to collect insects at night.
- How to kill an insect in a killing jar.
- What the insect board should look like.
National Geographic had a fascinating write-up in this month's magazine about a group of archeologists who discovered an apparently untouched city in the remote jungles of Honduras. While the importance of learning about and recording these previously lost civilizations cannot be overstated, the method they used to find the ruins is fascinating in itself. The explorers used a process called lidar (light detection and ranging), which shoots hundreds of thousands of pulses of light at the ground and then maps the way they bounce back in three dimensions. The lidar operators are then able to clean up the image, so to speak, removing brush and trees from the equation to produce a 3-D map of the landscape underneath the jungle. This allows historians to analyze vast tracts of ground that would take weeks to map out by hand, if they could be mapped out at all. This technology is hugely important for looking for ruins in places not easily accessible by traditional means of exploration. Unfortunately, lidar is hugely expensive. According to the National Geographic story, "For NCALM [the National Center for Airborne Laser Mapping at the University of Houston] to scan just the 55 square miles…would cost a quarter of a million dollars." While 55 square miles might seem like a decent amount of ground, it's not when you consider the vastness of the Central and South American jungle — all possible locations for lost and forgotten civilizations. The story itself is fascinating. Although, according to The Guardian, other archeological experts are quick to point out that the expedition didn't necessarily find the treasured lost city that they had set out to uncover, what they did find is a testament to a people whose advancement and skill are (I think) often underestimated. The society that the city likely belonged to, the Mosquitia, had developed an impressive culture in their own right, although on a much smaller scale than their Mayan neighbors to the north. You can read about the expedition in the web version of the article, or in the October 2015 issue of National Geographic.
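The "removing brush and trees" step can be imagined, in very crude form, as keeping only the lowest lidar return within each small patch of ground; the sketch below is a toy illustration of that idea (real bare-earth filters are far more sophisticated, and the data here are random numbers standing in for a point cloud):

```python
import numpy as np

def bare_earth_grid(points: np.ndarray, cell_size: float) -> np.ndarray:
    """points: (N, 3) array of x, y, z returns. Returns a 2-D grid of the
    lowest elevation seen in each cell -- a crude bare-earth estimate."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    ix = ((x - x.min()) / cell_size).astype(int)
    iy = ((y - y.min()) / cell_size).astype(int)
    grid = np.full((ix.max() + 1, iy.max() + 1), np.nan)
    for i, j, elev in zip(ix, iy, z):
        if np.isnan(grid[i, j]) or elev < grid[i, j]:
            grid[i, j] = elev   # lowest return is most likely ground
    return grid

# Example: 10,000 synthetic returns over a 100 m x 100 m patch, 1 m cells.
pts = np.random.rand(10_000, 3) * [100, 100, 30]
dem = bare_earth_grid(pts, cell_size=1.0)
print(dem.shape)
```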
Each renewable technology has its merits, and as the price of carbon fuels increases and as the cost of pollution is factored into conventional generation, we can expect all renewables to become more viable. A mix of renewable technologies is beneficial, but some are perhaps better placed to play a leading role. Of the developed renewable technologies, concentrating solar power (CSP) is possibly the most adaptable. It can be built in a range of sizes, from a few MW up to several hundred MW. It can be configured with varying levels of storage to suit local weather conditions and to meet the requirements of the local grid operator – concentrating solar power's optional heat storage means power can be generated when the sun is not shining. And it can be used in co-firing arrangements where the 'back-end' steam turbines and electrical generation/transmission components are powered by burning gas. Build costs and ongoing operational costs for concentrating solar power are both being improved through optimised system design and better specification of materials. Larger scale manufacture, more modular manufacturing processes and better organised deployment to site are also forecast to drive down the cost significantly.
To store or not to store
Grid operators use a mix of different types of generator with which to balance supply against demand. To maintain stability, they can't include too much intermittent capacity in the mix. But by diversifying the types of renewables in the mix – solar PV, wind, concentrating solar power, geothermal, landfill gas, hydropower and marine – as well as by having a geographical spread, utilities can go a long way to full de-carbonisation and still be able to support a fluctuating demand. And here, energy storage can certainly help. Pumped hydro schemes, flow batteries and compressed air storage systems are proven, but as yet not widely adopted, technologies to help smooth the peaks of intermittent generation. Why? Because converting electrical energy into potential energy or chemical energy in this way, then back again, is inefficient. Concentrating solar power offers a more elegant and efficient storage mechanism – the high-grade heat captured by its solar collectors can be processed immediately into electrical power, or it can be stored as heat and converted at a later time. Equipped with storage, a concentrating solar power plant is more flexible, allowing power to be produced after dark. Storage is not new; it's always been one of concentrating solar power's differentiators, offering the utility "power shifting" and dispatchability to help balance their systems. It also allows parabolic trough projects to achieve capacity factors greater than 50%. The first Luz parabolic trough plant, SEGS I, in California included a direct two-tank thermal energy storage system with 3 hours of full-load storage capacity. The Solar Two experimental system, built in the 1990s by Sandia, routinely produced electricity during cloudy weather and at night. In one demonstration, it delivered power 24 hours per day for nearly 7 straight days before cloudy weather interrupted operation.
The finances of storage
Storage can actually make concentrating solar power electricity a more attractive financial proposition:
- Spreading the delivery of power – without storage, a concentrating solar power plant needs a turbine large enough to handle peak steam production during the hottest times of the day, or the solar collector needs to be backed off.
With heat storage, a plant can use a smaller, cheaper steam turbine that can be kept running steadily for more hours of the day, and thereby maximise the investment in the solar collector;
- Concentrating the delivery of power – with storage, the plant operator can hold back solar energy collected in the morning and dispatch it to the grid when wholesale prices are higher at times of peak demand, typically late afternoon and early evening in markets where air conditioning is prevalent. The power plant has a higher peak electrical output than the solar collector because stored solar energy and real-time solar energy can be fed to the turbine simultaneously.
The developers of the Andasol 1 plant in Spain say their electricity will cost 11% less to produce than that of a similar plant without storage, citing figures of €271/MWh instead of €303/MWh. The amount of storage required will vary according to capital availability and the needs of a given utility. "There is an optimal point that could be three hours of storage or 6 hours of storage, where the cents per kilowatt-hour is the lowest," says Fred Morse, senior advisor for US operations with Abengoa Solar. Its 280 MW plant in Arizona, scheduled to be in service in 2011, will have 6 hours of storage, while other recent projects are aiming for 7 or 8.
CSP storage technologies
The best-known variant is the indirect thermal energy storage technique – it uses molten potassium and sodium nitrate salt in a two-tank system. Salt from the 'cold tank' is heated by the heat transfer fluid (oil) coming out of the solar collector field, and is then transferred to the 'hot tank'. To recover the stored energy to create steam for the turbine, salt is pumped from the hot tank to the cold tank to reheat the oil. It's referred to as an indirect system because the fluid used as the storage medium is different from that circulated in the solar field. The Andasol 1 parabolic trough plant will use this technique to run its 50 MW steam turbine for up to 7.5 hours after dark. Tanks 14 metres high and 38.5 metres in diameter will store the 28,500 tonnes of molten salt which provides the necessary heat storage. Power towers currently have the advantage that it's possible to use the molten salt itself as the heat transfer fluid. Heating the salt directly, instead of using oil as an intermediate carrier, gives higher efficiency because the salts can be safely heated well beyond the 400°C limit of synthetic heat transfer oils. With a greater temperature difference between hot and cold (300°C instead of just 130°C), less salt is needed to store the same amount of energy. Expensive heat exchangers are not needed, also helping keep costs down. In Spain, Abengoa Solar and Sener are each testing solar thermal plants with this form of integrated molten-salt storage, and SolarReserve is developing similar systems based on Rocketdyne's molten-salt heat receivers used in the 10 MW power-tower Solar Two demo plant that operated in the 1990s. American and Italian researchers have worked on developing molten salt heat transfer fluids suitable for use in the solar field of parabolic trough plants. A key issue is to find a salt that does not freeze in the solar field piping during the night. If successful, it offers the potential for efficiency improvements and cost savings by avoiding heat exchangers. But for trough plants, some believe that a single-tank thermocline-type energy storage system could turn out to be the most cost-effective option.
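Before moving on to single-tank and solid-media designs, a rough consistency check of the Andasol-style two-tank figures quoted above is easy to do; the salt heat capacity, tank temperature difference and steam-cycle efficiency used here are assumed round numbers, not values from the project:

```python
# Does ~28,500 t of nitrate salt plausibly run a 50 MW turbine for 7.5 h?
salt_mass_kg = 28_500 * 1000      # 28,500 tonnes
cp = 1.5e3                        # J/(kg.K), nitrate salt (assumed)
delta_T = 100.0                   # K between hot and cold tanks (assumed)
turbine_mw = 50.0
hours = 7.5
cycle_eff = 0.37                  # thermal-to-electric efficiency (assumed)

stored_thermal_mwh = salt_mass_kg * cp * delta_T / 3.6e9
needed_thermal_mwh = turbine_mw * hours / cycle_eff
print(f"stored heat:  {stored_thermal_mwh:.0f} MWh thermal")
print(f"needed heat:  {needed_thermal_mwh:.0f} MWh thermal")
```

Both come out around a thousand megawatt-hours of heat, so the quoted tank size and run time are at least mutually consistent under these assumptions.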
In thermocline systems the hot storage fluid is held at the top of a tank with the cold fluid on the bottom. The zone between the hot and cold fluids is called the thermocline. This type of storage system has an additional advantage – much of the storage fluid can be replaced with a low-cost filler material. Sandia National Laboratories has demonstrated a 2.5 MWh thermocline storage system with a binary molten-salt fluid, and quartzite rock and sand as the filler material. Other non-salt storage techniques have been developed. The PS10 power tower near Seville in Spain has relatively small-scale and technically straightforward storage to keep the plant operational during cloudy periods. Its saturated water thermal storage system has a thermal capacity of 20 MWh and comprises four tanks that are sequentially operated in relation to their charge status. During full load operation of the plant, some of the steam produced by the saturated steam receiver at the top of the tower is used to load the thermal storage system. When energy is needed to cover a transient period, energy from the saturated water is recovered at 20 bar to run the turbine at a 50% partial load. A very different design has emerged from down under. Cloncurry in Queensland, Australia is to get a 10 MW concentrating solar power tower plant in early 2010, which will use graphite blocks at the focal point of the solar field. Steam is produced by running water through pipes embedded in the 540 tonnes of graphite. This steam (at very high temperatures) is then used to drive the turbine. The heat stored in the graphite will run the turbine at full capacity for 8 hours. Solid materials for heat storage are also under development for trough power plants. The German Aerospace Centre (DLR) is assessing the performance, durability and cost of using high-temperature concrete or castable ceramic materials as the thermal energy storage medium. A standard heat transfer fluid from the solar field passes through an array of pipes embedded in the solid medium to transfer the thermal energy. The primary advantage of this approach is the low cost of the solid medium compared with molten salt. A paper design from Professor Reuel Shinnar of New York's Clean Fuels Institute takes yet another approach. His proposal for ultra-high efficiency concentrating solar power plants is based on the principle that thermal efficiency rises, and the cost of power production falls, with increased operating temperature. He reasons that current CSP designs are inefficient because the heat transfer materials and the storage systems currently used can't make use of the high operating temperatures that can be achieved by a good solar collector in a region with high insolation. Shinnar's design uses pressurised gas (he suggests CO2) as the heat transfer medium, flowing in a closed circuit from the solar collectors either directly to the power plant or through heat storage. There is no maximum temperature imposed by the gas itself. His proposed heat storage system uses vessels filled with a heat-resistant solid filler, such as alumina pebbles, which can operate at temperatures up to 1650°C. Phase change materials (PCMs) are yet another option for storing heat from a concentrating solar power plant and are considered to be a good candidate for trough plants that use direct steam generation. Heat is absorbed or released when PCMs change from solid to liquid and vice versa.
A DLR/EU project known as DISTOR is working to optimise performance through the micro-encapsulation of PCM in a matrix of expanded graphite material. Solar power without the sun Storage gives concentrating solar power plants the power to produce electricity when sunshine is not available. Co-firing or hybridisation with gas or other fuels also allows the power plant side of the concentrating solar power site to be productive up to 24 hours a day. Plants can be configured so that conventional fuels like gas generate the majority of the power, as for example in the World-Bank-funded 150 MW plant at Hassi R'mel, south of Algiers. The plant is due to go into operation in 2009 and has a 25 MW solar energy capacity using a parabolic trough solar collector. A less conventional approach to co-firing was announced in June this year, when plans appeared for a California Central Valley 100 MW concentrating solar power plant co-fired with agricultural waste and manure. The EU research projects SOLGATE and HYPHIRE have shown that a gas turbine could be modified to allow dual operation from solar and gas, and that solar dish/Stirling engine technology could be adapted to use heat from solar and fossil fuel. The World Bank has suggested that investors and decision-makers will see hybrid solar-gas plants as being less risky than an all-solar plant, and therefore more likely to attract investment. In theory, as confidence in solar grows, more solar collectors could be added to existing hybrid plants. Getting to grid parity and beyond Across the system, measures need to address thermal efficiency, durability, ease of manufacture and on-site construction. In the solar field itself, developments are targeting the optical performance of mirrors, their longevity, the support structures, the durability of the heat collection elements used in trough systems, and the electrical/electronic systems used to direct heliostats. Instead of steel for framing its solar troughs, the Solargenix collector used at Nevada Solar One is made from extruded aluminium. The lower weight collector has a unique organic hubbing structure, initially developed for buildings and bridges. Manufacturing is simplified and no field alignment is needed. And while most parabolic troughs are made out of glass, SkyFuel's SkyTroughs are made from the company's own mylar-like ReflecTech film. SkyFuel claims it can bring down the cost of a solar system by 25% with this material. NREL's Advanced Materials programme is continuing to assess a range of solar reflector materials including thin glass, thick glass, aluminised reflectors, front-surface mirrors, and silvered polymer mirrors. But perhaps it is the choice of system design that will ultimately have the biggest impact on cost. Argument rages over the relative merits of the different concentrating solar power technologies. Power towers are not restricted by the temperature limits of oil-based heat transfer fluids typically found in trough systems, and can use molten salt or steam as the heat carrier. Advocates claim the increased thermodynamic efficiency will be key. Towers also avoid the miles of precision evacuated glass tubing needed in trough systems. But trough supporters point to the years of accumulated experience as evidence that theirs is a durable, dependable design. A recent entrant to the power-tower market is eSolar. Its strategy is to make prefabricated modular solar-thermal power plants (typically 33 MW) and locate them near towns and cities. 
Multiple modules could be configured together on one site to increase capacity. Their design uses direct steam generation with relatively short towers, to keep down the cost. Heliostats are small and low to the ground, reducing their wind profile and the company believes high volume manufacturing and reduced installation effort will drive costs further down. Ausra is another leader in design optimisation for cost. Its Linear Fresnel designs use lower cost mirrors than troughs (see ‘CSP: bright future for linear fresnel technology?' Sep/Oct, pages 48–51), and avoid the need for the expensive heliostats inherent in a power tower design. Solar dish systems using Stirling engines free the utility operator of the need to use large areas of level land, and to provide the concentrating solar power plant with a water supply. Israeli company HelioFocus' novel design using superheated air as the heat transfer mechanism also avoids the need for a water supply. Its solar receiver collects heat from a parabolic concentrator, and the hot air it produces drives a micro-turbine directly. Meanwhile the EU SOLAIR research project is studying how ceramic receiver technology using air as the heat transfer medium could be applied in power tower design. Is concentrating solar power a mature technology? The extent of the Government-funded research effort, and the number of new market entrants with fresh thinking suggests that there are still opportunities to enhance concentrating solar power technology and enhance its appeal to utilities. But concentrating solar power's underlying concept is not in doubt – the research and development work happening around the world is essentially evolutionary, and argues strongly that this is a maturing technology with a key role to play in the de-carbonisation of the world economy. |About the author |Robert Palgrave is a member of the TREC-UK group.
While lopsidedness is seen to be a ubiquitous phenomenon (Section 1), various methods are used in the literature to define the asymmetry in disk galaxies. We list and compare below the details, such as the definitions, the size of the sample studied, the tracer used (near-IR radiation from old stars, and 21 cm emission from the HI gas etc.) There is no consensus so far as to what is the definition for lopsidedness as well as what constitutes lopsidedness (or the threshold). Obviously this has to be done if say the percentage of galaxies showing lopsidedness as per the different criteria are to be compared. At the end of Section 2.1, we recommend the use of a standard criterion for lopsidedness as well as the threshold that could be adopted by future workers. The disk lopsidedness in spiral galaxies has been studied in two different tracers - HI, the atomic hydrogen gas studied in the outer parts of disks, and the near-IR radiation from the inner/optical disks which traces the old stellar component of the disks. The lopsidedness was historically first detected in the HI, which we now realize is due to a higher lopsided amplitude at large radii. Hence we follow the same, chronological order in the summary of observations given next. 2.1. Morphological lopsidedness 2.1.1. Morphological lopsidedness in HI gas The asymmetric distribution of HI in M101 was noted in the early HI observations (Beale & Davies 1969). This phenomenon was first highlighted by Baldwin et al. (1980), who studied galaxies with highly asymmetric spatial extent of atomic hydrogen gas distributions in the outer regions in the two halves of a galaxy, such as M101 and NGC 2841 (see Fig. 1 here). This paper mentions the asymmetric distribution of light and HI in spiral galaxies such as M101. Quantitatively, they looked at the HI distribution in the four prototypical galaxies, namely, M101, NGC 891, NGC 4565, NGC NGC 2841. They defined a galaxy to be "lopsided" in which the galaxy is more extended on one side than the other, and where the projected density of HI on the two sides of the galaxy is at least 2:1 , and in which the asymmetry persists over a large range in radius and position angle. All these lopsided galaxies were also noted to have large-scale velocity perturbations. For a quantitative measurement of lopsidedness the edge-on systems cannot be used. The cut-off in inclination used for the near-IR and HI studies is given in Sections 2.1.2 and 5.1 respectively. Figure 1. Galaxies showing an asymmetry in the spatial extent of 2:1 or more in the HI distribution : M101 (top left, where the HI intensity is plotted here as gray scale) and NGC NGC 2841 (top right: here the HI contours are superimposed on an optical image), These galaxies were termed "lopsided" galaxies by Baldwin, Lynden-Bell, & Sancisi (1980). These figures are from Braun (1995) and Bosma (1981) respectively. Other typical examples are NGC 4654 (lower left: where HI contours are superimposed on an optical image) and UGC 7989 (lower right, showing contours and grey scale of the HI intensity), from Phookun & Mundy (1995), and Noordermeer et al. (2005) respectively. There was no further work on this topic for over a decade. Further HI mapping of a few individual galaxies such as NGC 4254 was done by Phookun, Vogel, & Mundy (1993) which stressed the obvious spatial asymmetry but they did not measure the lopsidedness. 
This paper studied the special case of the not-so-common one-armed galaxies such as NGC 4254 where the phase varies with radius (see Section 2.3). Richter & Sancisi (1994) collected data from single-dish HI surveys done using different radio telescopes, for a large sample of about 1700 galaxies. They found that 50% of galaxies studied show strong lopsidedness in their global HI velocity profiles. This could be either due to spatial lopsided distribution and/or lopsided kinematics. But since a large fraction of their sample shows this effect, they concluded that it must reflect an asymmetry in the gas density, as is confirmed by the correlation between the spatial and global HI velocity asymmetries in some galaxies like NGC 891, see Fig. 2 here. They argued that the asymmetry in HI represents a large-scale structural property of the galaxies. The criteria they used to decide the asymmetry are: (1). significant flux difference between the two horns (> 20% or > 8 sigma) (2). Total flux difference (> 55 : 45%) between the low and the high velocity halves (3). Width differences in the two horns (> 4 velocity channels or 50 km s-1). One word of caution is that it is not clear from their paper if these three give consistent identification of a galaxy as being lopsided or not. Figure 2. Asymmetric HI surface density plot of NGC 891 (contours at the bottom left); position-velocity diagram of the same galaxy (contours at the bottom right); and the global velocity profile in NGC 891 (spectrum at the top), from Richter & Sancisi (1994). The global velocity tracer is likely to underestimate the asymmetry fraction as for example if the galaxy were to be viewed along the major axis as pointed out by Bournaud et al. (2005b), or for a face-on galaxy as noted by Kamphuis (1993). The comparison of asymmetry in the stars as detected in the near-IR and the HI asymmetry in surface density using the second criterion of Richter & Sancisi (1994) shows a similar behaviour in a sample of 76 galaxies (Fig. 6 of Bournaud et al. 2005b). However, the asymmetry is quantitatively larger and more frequent in HI than in stars. This result supports the conjecture by Richter & Sancisi (1994) that the asymmetry in the global velocity profiles is a good tracer of the disk mass asymmetry. While making this comparison, it should be noted, however, that the HI asymmetry is seen more in the outer radial range while the asymmetry in the near-IR is seen in the inner region of a galactic disk. Haynes et al. (1998) studied the global HI profiles of 103 fairly isolated spirals. 52 of the 78 or ~ 75% galaxies showed statistically significant global profile asymmetry of 1.05. Many show large asymmetry: 35 have asymmetry > 1.1, 20 have > 1.15, and 11 have > 1.2. The atomic hydrogen gas is an ideal tracer for a quantitative study of lopsidedness in galaxies since the HI gas extends farther out than the stars. The typical radial extent of HI is 2-3 times that of the stars (e.g. Giovanelli & Haynes 1988), and the amplitude of asymmetry increases with radius (Rix & Zaritsky 1995) as seen for the stars. However, surprisingly, a quantitative measurement of HI spatial lopsidedness has not been done until recently. In a first such study, the two-dimensional maps of the surface density of HI have been Fourier-analyzed recently to obtain the m = 1 Fourier amplitudes and phases for a sample of 18 galaxies in the Eridanus group (Angiras et al. 2006) - see Section 5.2 for details. 
Such analysis needs to be done for nearby, large-angular size galaxies using sensitive data which will allow a more detailed measurement of lopsidedness in nearby galaxies. A study along these lines is now underway using the data from WHISP, the Westerbork observations of neutral Hydrogen in Irregular and SPiral galaxies (Manthey et al. 2008). The molecular hydrogen gas also shows a lopsided distribution in some galaxies, with more spatial extent along one half of a galaxy as in NGC 4565 (Richmond & Knapp 1986), IC 342 (Sage & Solomon 1991), NGC 628 (Adler & Liszt 1989) and M51 (Lord & Young 1990). However, this effect is not common in most cases and that can be understood as follows. The lopsidedness appears to be more strongly seen in the outer parts of a galaxy and the amplitude increases with radius as seen in stars (Rix & Zaritsky 1995), and also in HI gas (Angiras et al. 2006). Theoretically it has been shown that the disk lopsidedness if arising due to a response to a distorted halo with a constant amplitude can only occur in regions beyond ~ 2 disk scalelengths (Jog 2000). In most galaxies the molecular gas in constrained to lie within two disk scalelengths or half the optical radius (Young 1990). Hence we can see that in most galaxies, there is no molecular gas in the regions where disk lopsidedness in stars or HI is seen. This is why the molecular gas being in the inner parts of the galactic disk does not display lopsidedness in most cases. 2.1.2. Morphological lopsidedness in old stars The near-IR traces the emission from the old stars since dust extinction does not significantly affect the emission in the near-IR. The systematic study of this topic was started in the 1990's when a few individual galaxies such as NGC 2997 and NGC 1637 were mapped in the near-IR by Block et al (1994). They noted the m = 1 asymmetry in these but did not study it quantitatively. The pioneering quantitative work in this field was done by Rix & Zaritsky (1995) who measured the asymmetry in the near-IR for a sample of 18 galaxies. In each galaxy, A1, the fractional amplitude for the m = 1 mode normalized by an azimuthal average (or m = 0), was given at the outermost point measured in the disk, i.e. at 2.5 exponential disk scalelengths. This distance is set by the noise due to the sky background in the near-IR. The mean value is 0.14 , and 30% have A1 values more than 0.20 which were defined by them to be lopsided galaxies. A typical example is shown in Fig. 3. Figure 3. The values of the various fractional Fourier amplitudes and phases vs. radius in units of the disk scalelengths for NGC 1325 (from Rix & Zaritsky 1995). The amplitude scale in the lower three panels has been expanded by a factor of 5/3 for clarity, since these higher m components have smaller amplitudes. The lopsided amplitude A1 increases with radius, and the phase is nearly constant with radius. This study was extended for a sample of 60 galaxies by Zaritsky & Rix (1997). They carried out the Fourier analysis of the near-IR surface brightness between the radial range of 1.5-2.5 Rexp, where Rexp is the characteristic scale of the exponential light distribution. The normalised m = 1 Fourier amplitude A1 is a more representative indicator of disk lopsidedness. It was shown that 30% of the galaxies have A1 > 0.2, which was taken to define the threshold lopsidedness as in the previous work. 
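A minimal sketch of the kind of azimuthal Fourier decomposition described above may help make the A1 measurement concrete. The function name, its arguments and the assumption that the image has already been deprojected to face-on are illustrative choices, not the actual pipeline used by Rix & Zaritsky or later authors; note that the same centre is used for every annulus, for the reason discussed below.

```python
import numpy as np

def m1_profile(image, x0, y0, r_edges):
    """A1(R) and phase(R) from an azimuthal m = 1 Fourier decomposition.
    image: 2-D surface-brightness array, assumed already deprojected to face-on;
    (x0, y0): adopted centre in pixels, kept fixed for all annuli;
    r_edges: annulus boundaries in pixels."""
    y, x = np.indices(image.shape)
    r = np.hypot(x - x0, y - y0)
    phi = np.arctan2(y - y0, x - x0)

    a1, phase = [], []
    for r_in, r_out in zip(r_edges[:-1], r_edges[1:]):
        ring = (r >= r_in) & (r < r_out)
        inten = image[ring]
        a0 = inten.mean()                                # m = 0 term (azimuthal average)
        c1 = (inten * np.exp(1j * phi[ring])).mean()     # complex m = 1 coefficient
        a1.append(2.0 * np.abs(c1) / a0)                 # fractional amplitude A1
        phase.append(np.angle(c1))                       # lopsided phase (radians)
    return np.array(a1), np.array(phase)
```

For a surface-brightness distribution of the form a0[1 + A1 cos(φ − φ1)] this returns A1 and φ1 directly, so the output can be compared with the published amplitudes and with the radial behaviour of the phase discussed later in this section.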
Rudnick & Rix (1998) studied 54 early-type galaxies (S0-Sab) in R- band and found that 20% show A1 values measured between the radial range of 1.5-2.5 Rexp to be > 0.19. A similar measurement has recently been done on a much larger sample of 149 galaxies from the OSU (Ohio State University) database in the near-IR (Bournaud et al. 2005b), see Fig. 4. This confirms the earlier studies but for a larger sample, and gives a smaller value of lopsided amplitude, namely ~ 30% show lopsideness amplitude of 0.1 or larger when measured over the radial range of 1.5-2.5 Rexp. The galaxies with inclination angles of < 70° were used for this study. Since this is a large, unbiased sample it can be taken to give definitive values, and in particular the mean amplitude of 0.1 can be taken as the threshold value for deciding if a galaxy is lopsided. The lopsidedness also shows an increasing value with radius, as seen in the Rix & Zaritsky (1995) study. Figure 4. The histogram showing the distribution of lopsidedness measured in 149 galaxies at inclination of < 70° from the OSU sample (Bournaud et al. 2005b). The typical normalized lopsided amplitude A1 measured over the radial range between 1.5 to 2.5 disk scalelengths is ~ 0.1. Thus most spiral galaxies show significant lopsidedness. In the Fourier decomposition studies the determination of the center is a tricky issue, and the same center has to be used for all the subsequent annular radial bins, otherwise during the optimization procedure, a different center could get chosen and will give a null measurement for the lopsidedness. This is applicable for the lopsidedness analysis for HI (Angiras et al. 2006, 2007), and also for centers of mergers (Jog & Maybhate 2006). These two cases are discussed respectively in Sections 5.1 and 5.3. The large number of galaxies used allows for a study of the variation with type. It is found that late-type galaxies are more prone to lopsidedness, in that a higher fraction of late-type galaxies are lopsided, and they show a higher value of the amplitude of lopsidedness (Bournaud et al. 2005b), see Fig. 5. This is similar to what was found earlier for the variation with galaxy type for the HI asymmetries (Matthews, van Driel, & Gallagher 1998). These samples largely consist of field galaxies, while the group galaxies show a reverse trend with galaxy type (Angiras et al. 2006, 2007) implying a different mechanism for the origin of lopsidedness in these two settings, see Section 5.1. Figure 5. The plot of the cumulative function of <A1> for three groups of Hubble types of spiral galaxies: the early-types (0 < T < 2.5), the intermediate types (2.5 < T < 5 ), and the late-types (5 < T < 7.5 ), where T denoted the Hubble type of a galaxy (taken from Bournaud et al. 2005b). The late-type galaxies are more lopsided than the early-type galaxies. While an axisymmetric galaxy disk gives rise only to radial gravity forces, and therefore no torque, any asymmetry in the disk, either m = 1, m = 2 or higher, gives rise to tangential forces in addition to radial forces, and then to a gravity torque. From the near-infrared images, representative of old stars, and thus of most of the mass, it is possible to compute the gravitational forces experienced in a galactic disk. The computation of the gravitational torque presents important complementary information, namely it gives the overall average strength of the perturbation potential as shown for m = 2 (Block et al. 2002), and for m = 1 (Bournaud et al. 
2005b), whereas the Fourier amplitudes give values which are weighted more locally. The gravitational potential is derived from the near-infrared image, see Bournaud et al. (2005b) for the details of this method. The m = 1 component of the gravitational torque, Q1 between 1.5-2.5 disk scalelengths is obtained, the histogram showing its distribution is plotted in Fig. 6 , which is similar to that for the lopsided amplitude A1 (Fig. 4) as expected. Figure 6. The histogram of Q1, the m = 1 Fourier amplitude in the gravitational potential, for the OSU sample of galaxies, from Bournaud et al. 2005b. An even larger sample based on the SDSS data has now been Fourier-analyzed by Reichard et al. (2008), and they also get a similar average value of lopsidedness in spiral galaxies, see Fig. 7. However, they use the surface density data between 50% and 90% light radii, so a clear one-to-one quantitative comparison of the values of lopsidedness from this work as compared to the previous papers in the literature discussed above is not possible. This work confirms that galaxies with low concentration, and low stellar mass density (or the late-type spirals) are more likely to show lopsidedness, as shown earlier for HI by Matthews et al. (1998) and for stars by Bournaud et al. (2005b). Figure 7. The histogram of number of galaxies vs. A1 values for the SDSS sample, from Reichard et al. (2008). The histogram gives similar values to the earlier studies by Rix & Zaritsky (1995) and Bournaud et al. (2005b) Another approach to measure the asymmetry involves the wedge method (Kornreich et al. 1998, 2002) where the flux within circular sectors or wedges arranged symmetrically with respect to the galactic disk centre are compared. While it is easier to measure this than the Fourier amplitudes, it gives only average values. An extreme limit of the wedge method is applied by Abraham et al. (1996) and Conselice et al. (2000). They define the rotation asymmetry parameter as the ratio of fractional difference between the two sides, so that 0 corresponds to a symmetric galaxy and 0.5 for the most asymmetric galaxy. This is a more global or average definition of the disk asymmetry and is suitable for studying highly disturbed galaxies, and not those showing a systematic variation as in a lopsided distribution. Such highly disturbed systems are seen at high redshifts, for which this criterion was used first. An interesting point to note is that spurious signs of asymmetry arising due to dust extinction (Rudnick & Rix 1998) and that arising due to the pointing error of single-dish telescope (Richter & Sancisi 1994) were checked and ruled out. Conversely, a galaxy could look more symmetric in the global velocity profile than it is, if the galaxy is seen face-on. In that case even though the morphology is asymmetric - as in HI in NGC 628, the global velocity profile is symmetric, and hence the galaxy would appear to be kinematically symmetric - see Kamphuis (1993). Based on the above discussion of the various methods, we recommend that the future users adopt the fractional Fourier amplitude A1 as the standard criterion for lopsidedness. This is because it gives a quantitative measurement, is well-defined, and can be measured easily as a function of radius in a galaxy, and thus allows an easy comparison of its value between galaxies and at different radii. The threshold value that could be adopted could be the average value of 0.1 seen in the field galaxies in the intermediate radial range of 1.5-2.5 Rexp (Bournaud et al. 
2005b), so that galaxies showing a higher value can be taken to be lopsided. A uniform criterion will enable the comparison of amplitudes of lopsidedness in different galaxies, and also allow a comparison of the fraction of galaxies deduced to be lopsided in different studies. 2.2. Kinematical lopsidedness The lopsidedness - an m = 1 azimuthal asymmetry - is often also observed in the kinematics of galaxies. This could be as obvious as the asymmetry between the rotation curves in the two halves of a galactic disk, as is shown in Fig. 8 (Swaters et al. 1999, Sofue & Rubin 2001), or more subtle, as in the asymmetry in the velocity fields (Schoenmakers et al. 1997). Often the optical centres are distinctly separated spatially from the kinematical centres, as in M33, M31, and especially in dwarf galaxies, as pointed out by Miller & Smith (1992). The rotation curve asymmetry is also seen as traced in the optical for stars (Sofue & Rubin 2001). The detailed 2-D velocity fields have so far mainly been observed in HI, as in the interferometric data (see e.g. Schoenmakers et al. 1997, Haynes et al. 1998). Now such information is beginning to be available for the bright stellar tracers, as in Hα emission from HII regions (Chemin et al. 2006, Andersen et al. 2006); however, since the filling factor of this hot, ionized gas is small, it is not an ideal tracer for a quantitative study of disk lopsidedness. Schoenmakers et al. (1997) use the kinematical observational data in HI on two galaxies - NGC 2403 and NGC 3198 - and deduce the upper limit on the asymmetry in the m = 2 potential to be less than a few percent. However, this method gives the result only up to the sine of the viewing angle. Kinematic asymmetry in individual galaxies such as NGC 7479 has been studied and modeled as a merger with a small satellite galaxy (Laine & Heller 1999). The rotation curve is asymmetric in the two halves of a galaxy, or on the two sides of the major axis, as shown for DDO 9 and NGC 4395 by Swaters et al. (1999), see Figure 8. However, they do not make a more detailed quantitative measurement of the asymmetry. Swaters (1999) in his study of dwarf galaxies showed that 50% of the galaxies studied show lopsidedness in their kinematics. Schoenmakers (2000) applied his calculations on kinematical lopsidedness in galactic disks to five galaxies in the Sculptor group and found that all five show significant lopsidedness. A similar result has been found for the 18 galaxies studied in the Ursa Major cluster (Angiras et al. 2007). The frequency of asymmetry and its magnitude are higher for galaxies in groups - see Section 5.2 for details. A galaxy which shows spatial asymmetry would naturally show kinematical asymmetry (e.g., Jog 1997), except in the rare cases of face-on galaxies, as discussed above, where the galaxy can show asymmetry in the morphology but not in the kinematics. However, the papers which study lopsidedness do not always mention it. On the contrary, in the past, several papers have made a distinction between the spatial or morphological lopsidedness and kinematical lopsidedness (e.g. Swaters et al. 1999, Noordermeer, Sparke & Levine 2001) and have even claimed (Kornreich et al. 2002) that the velocity asymmetry is not always correlated with the spatial asymmetry. However, in contrast, it has been argued that the two have to be causally connected in most cases (Jog 2002), especially if the lopsidedness arises due to the disk response to a tidal encounter.
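One simple way to put a number on the kind of rotation-curve asymmetry discussed above is to compare the approaching and receding sides sampled at matching radii. The sketch below is an illustrative metric of this general kind, not a statistic defined in any of the papers cited; the function name and the test values are invented for the example.

```python
import numpy as np

def rotation_curve_asymmetry(v_approaching, v_receding):
    """Mean fractional difference between the approaching and receding sides
    of a rotation curve, sampled at the same radii on both sides."""
    v_app = np.abs(np.asarray(v_approaching, dtype=float))
    v_rec = np.abs(np.asarray(v_receding, dtype=float))
    return float(np.mean(np.abs(v_app - v_rec) / (0.5 * (v_app + v_rec))))

# Illustrative, made-up velocities in km/s at four matching radii:
print(rotation_curve_asymmetry([90, 120, 130, 132], [100, 128, 140, 145]))
```

A symmetric disk gives a value near zero, while a strongly lopsided velocity field gives a value of several percent or more.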
An important point to remember is that the same tracer (stars or HI) should be considered to see if a galaxy showing spatial lopsidedness is also kinematically lopsided or not, and vice versa. This is because the HI asymmetry is higher and is seen in the outer parts of the galaxy while the asymmetry in the near-IR is more concentrated in the inner regions. This criterion is not always followed (see e.g., Kornreich et al 2002). Thus the often-seen assertion in the literature that the spatial asymmetry is not correlated with kinematic asymmetry is not meaningful, when the authors compare the spatial asymmetry in the optical with the kinematical asymmetry in the HI. 2.3. Phase of the disk lopsidedness The phase of the lopsided distribution provides an important clue to its physical origin, but surprisingly this has not been noted or used much in the literature. Interestingly, the phase is nearly constant with radius in the data of Rix & Zaritsky (1995), as noted by Jog (1997). This is also confirmed in the study of a larger sample of 60 galaxies by Zaritsky & Rix (1997), (Zaritsky 2005, personal communication), and also for the sample of 54 early-type galaxies studied by Rudnick & Rix (1998). A nearly constant phase with radius was later confirmed for a larger sample of 149 mostly field galaxies (Bournaud et al. 2005b), and also for the 18 galaxies in the Eridanus group (Angiras et al. 2006). The latter case is illustrated in Fig. 9. This points to the lopsidedness as a global m = 1 mode, and this idea has been used as a starting point to develop a theoretical model (Saha et al. 2007). There are a few galaxies which do show a systematic radial variation in phase, as in M51 (Rix & Rieke 1993), which therefore appear as one-armed spirals. Figure 9. The plot of the phase of the m = 1 Fourier component vs. radius (given in terms of the disk scalelength) for the HI surface density for two galaxies ESO 482- G 013 and NGC 1390 in the Eridanus group of galaxies, from Angiras et al. (2006). Note that the phase is nearly constant with radius indicating a global lopsided mode. In contrast, the central regions of mergers of galaxies, show highly fluctuating phase for the central lopsidedness (Jog & Maybhate 2006). This may indicate an unrelaxed state, which is not surprising given that the mergers represent very different systems than the individual spirals mainly discussed here. 2.4. Observations of off-centered nuclear disks A certain number of galaxies are observed to have an off-centered nuclear disk, and more generally an m = 1 perturbation affecting more particularly the nuclear region. Our own Galaxy is a good example, since the molecular gas observations have revealed that the molecular nuclear disk has three quarters of its mass at positive longitude, which is obvious in the central position-velocity diagram (the parallelogram, from Bally et al 1988). The asymmetry appears to be mainly a gas phenomenon, since the off-centreing is not obvious in the near-infrared images (e.g. Alard 2001, Rodriguez-Fernandez & Combes 2008). The gas is not only off-centered but also located in an inclined and warped plane (Liszt 2006, Marshall et al 2008). An m = 1 perturbation is superposed on the m = 2 bar instability. The most nearby giant spiral galaxy, M31, has also revealed an m = 1 perturbed nuclear disk in its stellar distribution (Lauer et al 1993, Bacon et al 1994). 
The spatial amplitude of the perturbation is quite small, a few parsecs, and this suggests that this nuclear lopsidedness could be quite frequent in galaxies. However, it is difficult to perceive it due to a lack of resolution in more distant objects. Since M31 is the prototype of the m = 1 nuclear disk, we will describe it in detail in the next section. Some other examples have been detected, like NGC 4486B in the Virgo cluster (Lauer et al 1996), but the perturbation must then be much more extended, and that phenomenon is rare. 2.4.1. The case of the M31 nuclear disk The first images of M31 to reveal the asymmetrical nucleus were the photographs at 0.2" resolution from Stratoscope II by Light et al (1974). They first resolved the nucleus, and measured a core radius of 0.48" (1.8 pc). The total size of the nucleus is 4 arcsec (15 pc). They showed that the nucleus is elongated, with a low-intensity extension outside the bright peak (cf Fig. 10), and they considered the possibility of a dust lane masking the true centre. Nieto et al (1986) confirmed this morphology in the near-UV and also evoked dust. Later, it became clear that dust could not be the explanation of this peculiar morphology, since the centre was still offset from the bulge in the near-infrared image (Mould et al 1989). As for the kinematics, it was already remarked by Lallemand et al. (1960) that the nucleus is rotating rapidly, showing a very compact velocity curve, falling back to zero at a radius of 2 arcsec. This was confirmed by Kormendy (1988) and Dressler & Richstone (1988), who concluded that a black hole of ~ 10^7 M☉ exists at the centre of M31, under the assumption of spherical symmetry. Lauer et al (1993, 1998) revealed with HST that the asymmetrical nucleus can be split into two components, like a double nucleus, with a bright peak (P1) offset by ~ 0.5" from a secondary fainter peak (P2), nearly coinciding with the bulge photometric centre and the proposed location of the black hole (e.g. Kormendy & Bender 1999). It is well established now from HST images from the far-UV to the near-IR (King et al. 1995, Davidge et al. 1997) that P1 has the same stellar population as the rest of the nucleus, and that a nearly point-like source produces a UV excess close to P2 (King et al. 1995). Figure 10. HST WFPC2 V-band image of M31. The surface brightness contributed by the UV cluster coinciding with the component P2 has been clipped out. The white dot indicates the position of the black hole. From Kormendy & Bender (1999). 2-D spectroscopy by Bacon et al (1994) revealed that the stellar velocity field is roughly centred on P2, but the peak in the velocity dispersion map is offset by ~ 0.7" on the anti-P1 side (Fig. 11). With HST spectroscopy the velocity dispersion peak reaches a value of 440 ± 70 km s-1 and the rotational velocity has a strong gradient (Statler et al. 1999). The black hole mass required to explain these observations ranges from 3 to 10 × 10^7 M☉. The position of the black hole is assumed to coincide with the centre of the UV peak, near P2, and possibly with the hard X-ray emission detected by the Chandra satellite (Garcia et al. 2000). Figure 11. Velocity profile (top) and velocity dispersion (bottom) in the nucleus of M31. The crosses are from STIS (HST) and the filled circles from OASIS (CFHT). The OASIS kinematics have been averaged over a 0.2" wide slit (PA = 39°) - taken from Bacon et al. (2001).
2.4.2. Other off-centered nuclei It has been known for a long time that the nearby late-type spiral M33 has a nucleus displaced from the young population mass centroid, by as much as 500 pc (de Vaucouleurs & Freeman 1970, Colin & Athanassoula 1981). This off-centering is also associated with a more large-scale lopsidedness, and can be explained kinematically by a displacement of the bulge with respect to the disk. This kind of off-centering is a basic and characteristic property of late-type Magellanic galaxies. In NGC 2110, an active elliptical galaxy, Wilson & Baldwin (1985) noticed a displacement of the nucleus with respect to the mass centre of 220 pc, both in the light and in the kinematics. Many other active nuclei in elliptical galaxies have been reported to be off-centered, but the presence of dust obscuration makes this difficult to establish (e.g. Gonzalez-Serrano & Carballo 2000, where 9 galaxies out of a sample of 72 ellipticals are off-centered). Quite clear is the case of the double nucleus in the barred spiral M83 (Thatte et al 2000): near-infrared imaging and spectroscopy reveal, in spite of the high extinction, that the nucleus is displaced by 65 pc from the barycentre of the galaxy, or that there are two independent nuclei. Molecular gas with high velocity is associated with the visible off-centre nucleus, and this could be the remnant of a small galaxy accreted by M83 (Sakamoto et al 2004). In some cases what appears to be a double nucleus could in fact be two regions of star formation in the centres of mergers of galaxies, as in Arp 220 (Downes & Solomon 1998). Recently, Lauer et al (2005) studied a sample of 77 early-type galaxies at HST/WFPC2 resolution, and concluded that all galaxies with inner power-law profiles have nuclear disks, which is not the case for galaxies with cores. They found 2 galaxies with central minima, likely to have a double nucleus (cf Lauer et al 2002), and 5 galaxies having an off-centered nucleus. This perturbation also appears as a strong feature in the Fourier analysis (the A1 term). Off-centering is also frequently observed in the central kinematics, where the peak of the velocity dispersion is displaced with respect to the light centre (Emsellem et al 2004, Batcheldor et al 2005). Decoupled nuclear disks and off-centered kinematics are now clearly revealed by 2D spectroscopy.
Cornea, dome-shaped transparent membrane about 12 mm (0.5 inch) in diameter that covers the front part of the eye. Except at its margins, the cornea contains no blood vessels, but it does contain many nerves and is very sensitive to pain or touch. It is nourished and provided with oxygen anteriorly by tears and is bathed posteriorly by aqueous humour. It protects the pupil, the iris, and the inside of the eye from penetration by foreign bodies and is the first and most powerful element in the eye’s focusing system. As light passes through the cornea, it is partially refracted before reaching the lens. The curvature of the cornea, which is spherical in infancy but changes with age, gives it its focusing power; when the curve becomes irregular, it causes a focusing defect called astigmatism, in which images appear elongated or distorted. The cornea itself is composed of multiple layers, including a surface epithelium, a central, thicker stroma, and an inner endothelium. The epithelium (outer surface covering) of the cornea is an important barrier to infection. A corneal abrasion, or scratch, most often causes a sensation of something being on the eye and is accompanied by intense tearing, pain, and light sensitivity. Fortunately, the corneal epithelium is able to heal quickly in most situations. The collagen fibres that make up the corneal stroma (middle layer) are arranged in a strictly regular, geometric fashion. This arrangement has been shown to be the essential factor resulting in the cornea’s transparency. When the cornea is damaged by infection or trauma, the collagen laid down in the repair processes is not regularly arranged, with the result that an opaque patch or scar may occur. If the clouded cornea is removed and replaced by a healthy one (i.e., by means of corneal transplant), usually taken from a deceased donor, normal vision can result. The innermost layer of the cornea, the endothelium, plays a critical role in keeping the cornea from becoming swollen with excess fluid. As endothelial cells are lost, new cells are not produced; rather, existing cells expand to fill in the space left behind. Once loss of a critical number of endothelial cells has occurred, however, the cornea can swell, causing decreased vision and, in severe cases, surface changes and pain. Endothelial cell loss can be accelerated by mechanical trauma or abnormal age-related endothelial cell death (called Fuchs endothelial dystrophy). Treatment may ultimately require corneal transplant.
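The claim above that the cornea is the most powerful element in the eye's focusing system can be illustrated with the standard single-surface refraction formula P = (n2 − n1) / R. The refractive index and radius of curvature below are typical textbook values assumed for the example, not figures from this article, and the calculation ignores the back corneal surface, which subtracts a few dioptres from the net power.

```python
# Rough single-surface estimate of corneal refracting power, P = (n2 - n1) / R.
# The index and radius are assumed, typical textbook values; the back surface
# of the cornea (ignored here) reduces the net power to roughly 43 D.

n_air = 1.000
n_cornea = 1.376          # assumed refractive index of corneal tissue
front_radius_m = 0.0078   # assumed anterior radius of curvature (7.8 mm)

power_dioptres = (n_cornea - n_air) / front_radius_m
print(f"Front-surface corneal power ~ {power_dioptres:.0f} D")   # ~48 D
```

Around two-thirds of the eye's total refractive power of roughly 60 dioptres comes from the cornea, which is why irregular corneal curvature (astigmatism) degrades the image so noticeably.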
In linguistics an accidental gap, also known as a gap, accidental lexical gap, lexical gap, lacuna, or hole in the pattern, is a word or other form that does not exist in some language but which would be permitted by the grammatical rules of the language. Accidental gaps differ from systematic gaps, those words or other forms which do not exist in a language due to the boundaries set by phonological, morphological, and other rules of that specific language. In English, for example, a word pronounced /pfnk/ cannot exist because it has no vowels and therefore does not obey the word-formation rules of English. This is a systematic gap. In contrast, a word pronounced /peɪ̯k/ would obey English word-formation rules, but this is not a word in English. Although theoretically such a word could exist, it does not; its absence is therefore an accidental gap. Various types of accidental gaps exist. Phonological gaps are either words allowed by the phonological system of a language which do not actually exist, or sound contrasts missing from one paradigm of the phonological system itself. Morphological gaps are non-existent words potentially allowed by the morphological system. A semantic gap refers to the non-existence of a word to describe a difference in meaning seen in other sets of words within the language. Often words that are allowed in the phonological system of a language are absent. For example, in English the consonant cluster /spr/ is allowed at the beginning of words such as spread or spring, and the syllable rime /ɪk/ occurs in words such as sick or flicker. Even so, there is no English word pronounced */sprɪk/. Although this potential word is phonologically well-formed according to English phonotactics, it happens not to exist. The term "phonological gap" is also used to refer to the absence of a phonemic contrast in part of the phonological system. For example, Thai has several sets of stop consonants that differ in terms of voicing (whether or not the vocal cords vibrate) and aspiration (whether a puff of air is released). Yet the language has no voiced velar consonant (/ɡ/). This lack of an expected distinction is commonly called a "hole in the pattern", as the table of Thai stops below illustrates:

| place | plain voiceless | aspirated voiceless | voiced |
| labial | /p/ | /pʰ/ | /b/ |
| alveolar | /t/ | /tʰ/ | /d/ |
| velar | /k/ | /kʰ/ | — (gap) |

A morphological gap is the absence of a word that could exist given the morphological rules of a language, including its affixes. For example, in English a deverbal noun can be formed by adding either the suffix -al or -(t)ion to certain verbs (typically words from Latin through Anglo-Norman French or Old French). Some verbs, such as recite, have two related nouns, recital and recitation. However, in many cases there is only one such noun, as illustrated in the chart below. Although in principle the morphological rules of English allow for other nouns, those words do not exist.

| verb | noun (-al) | noun (-ion) |
| recite | recital | recitation |
| arrive | arrival | *arrivation |
| derive | *derival | derivation |

Many potential words that could be made following the morphological rules of a language do not enter the lexicon. Homonymy blocking and synonymy blocking stop some potential words. A homonym of an existing word may be blocked. For example, the word liver meaning "someone who lives" is not used because the word liver (an internal organ) already exists. Likewise, a potential word can be blocked if it is a synonym of an existing word. An older, more common word blocks a potential synonym, known as token-blocking. For example, the word stealer ("someone who steals") remains only a potential word because the word thief already exists.
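A toy sketch may make the distinctions discussed above concrete. The mini-lexicon, the crude "no vowel-less words" check, and the function names below are all invented for illustration; real English phonotactics and blocking are of course far richer than this.

```python
# Toy illustration of systematic gaps, accidental gaps and token-blocking.
# The lexicon and the single phonotactic check are deliberately simplified.

LEXICON = {"sick", "spring", "spread", "flicker", "thief"}
VOWELS = set("aeiou")        # crude orthographic stand-in for vowel phonemes

def classify(form, lexicon=LEXICON):
    if form in lexicon:
        return "existing word"
    if not any(ch in VOWELS for ch in form):
        return "systematic gap (violates the no-vowel-less-word constraint)"
    return "accidental gap (well-formed but unused)"

for candidate in ["sprik", "pfnk", "sick"]:
    print(candidate, "->", classify(candidate))

# Token-blocking: a would-be derived word loses out to an existing synonym.
def agent_noun(verb, blockers={"steal": "thief"}):
    return blockers.get(verb, verb + "er")

print(agent_noun("steal"))   # 'thief' blocks the potential word 'stealer'
print(agent_noun("paint"))   # 'painter'
```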
Not only individual words, but entire word-formation processes may be blocked. For example, the suffix -ness is used to form nouns from adjectives. This productive word-formation pattern blocks many potential nouns that could be formed with -ity. Nouns such as *calmity (a potential synonym of calmness) and *darkity (cf. darkness) are unused potential words. This is known as type-blocking. A defective verb is a verb that lacks some grammatical conjugation. For example, several verbs in Russian do not have a first-person singular form in the non-past tense. Although most verbs have such a form (e.g. vožu "I lead"), about 100 verbs in the second conjugation pattern (e.g. *derz'u "I talk rudely"; the asterisk indicates ungrammaticality) do not appear in the first-person singular in the present-future tense. Morris Halle called this defective verb paradigm an example of an accidental gap. A gap in semantics occurs when a particular meaning distinction visible elsewhere in the lexicon is absent. For example, English words describing family members generally show gender distinctions. Yet the English word cousin can refer to either a male or female cousin. Similarly, while there are general terms for siblings and parents, there is no comparable common gender-neutral term for a parent's sibling or a sibling's child. The separate words predicted on the basis of this semantic contrast are absent from the language, or at least from many speakers' dialects.
See also
- Idiom (language structure)
- Lacuna model
- Pseudoword, a unit that appears to be a word in a language but has no meaning in its lexicon
- Semantic gap in computer programming languages and natural language processing
- Sniglet, described as "any word that doesn't appear in the dictionary, but should"
References
- Crystal, David (2003). A Dictionary of Linguistics and Phonetics. Malden: Wiley-Blackwell. ISBN 0-6312-2664-8.
- Trask, Robert Lawrence (1996). A Dictionary of Phonetics and Phonology. London: Routledge.
- Abramson, Arthur S. (1962). The Vowels and Tones of Standard Thai: Acoustical Measurements and Experiments. Bloomington: Indiana University Research Center in Anthropology, Folklore, and Linguistics.
- Kerstens, Johan; Ruys, Eddy; Zwarts, Joost, eds. (2001). "Accidental gap". Lexicon of Linguistics. Utrecht Institute of Linguistics OTS. Retrieved 2011-02-12.
- Aronoff, Mark (1983). "Potential words, actual words, productivity and frequency". Proceedings of the 13th International Congress of Linguists: 163–171.
- Fernández-Domínguez, Jesús (2009). Productivity in English Word-formation: An Approach to N+N Compounding, chap. 3. Bern: Peter Lang. pp. 71–74. ISBN 9783039118083.
- Halle, Morris (1973). "Prolegomena to a theory of word-formation". Linguistic Inquiry. 4: 451–464.
- Quinion, Michael (23 November 1996). "Unpaired words". World Wide Words. Retrieved 2012-07-31.
Topic: Silk production and use in arthropods Remarkably, fossil silk is known, especially from amber of Cretaceous age. Material includes both silk with trapped insects, possibly from an orb-web, and strands with the characteristic viscid droplets that are the key in trapping prey. Silk is one of the most remarkable of biomaterials known, with some varieties having mechanical properties that approach that of tensile steel, but achieved at a fraction of the weight. Silk is rampantly convergent, and has evolved many times. Although all the examples known are from the arthropods, there are two reasons to regard this as examples of independent evolution rather than ultimately stemming from a common ancestor. First, the distribution of silks in the arthropods is very disparate, notably in the arachnids (spiders and relatives) and the insects (e.g. silk-worm); the insects, for example, are closely related to the crustaceans which are not known to make silk. Second, silk structures (fibroins) are found in other groups, notably the byssus of the bivalve molluscs, where they evidently contribute to the extraordinary mechanical properties of these threads that serve to attach the bivalve to the substrate: securing mussels to rocks may be the best known example. Another example is the use of adhesive fibroin units in the ocites layer of the eggs of such fish as the carp, which are used to attach the eggs to a substrate of flowing water. Silk structure and composition Silk is a fibrous protein, and is secreted from glands located in many different areas of the arthropod. It is extruded as a liquid, typically via a nozzle-like arrangement (such as the spinnerets in spiders), and then forms a remarkable thread that is in part crystalline. There are a very wide variety of silks, and the different mechanical properties are largely controlled by the amino acid compositions. The amino acids glycine (G) and alanine (A) are particularly common, and very often sections of the silk are determined by characteristic motifs of repetitive G or A, and another amino acid (X) e.g. AAX, GXG, etc. Spiders produce the widest repertoire of silks, with some species producing five or more varieties (each from a specific gland) with specific properties in web constructure e.g. drag-line, capture threads, etc. Silk is probably best known in the construction of webs, and these are also convergent because equivalent aquatic varieties are constructed by trichopteran insects, the caddisflies. Silk is also widely employed in functions as diverse as other sorts of traps to webs (“flypapers”), cocoon formation (as in Bombyx, the silkworm), egg coverings, domiciles, hunting (as in spitting), ballooning, escape lines and draglines, weaving (notably the weaver ants), and nuptial gifts. Evolution of spider web silk Not surprisingly silk is geologically ancient. Spinnerets from the Middle Devonian evidently belong to a primitive spider, but in the absence of flying insects it is more likely that the silk produced was to line a burrow (or similar) rather than construct a web. Remarkably, fossil silk is also known, especially from amber of Cretaceous age. Material includes both silk with trapped insects, possibly from an orb-web, and strands with the characteristic viscid droplets that are the key in trapping prey (this example might be from a gum foot web). Spider silk has received extensive attention, not only because of its intrinsic interest but also on account of potential biotechnological applications. 
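Stepping back to the comparison with tensile steel made at the start of this section, the "fraction of the weight" point is really about specific strength (strength divided by density). The figures below are commonly quoted approximate values assumed for the illustration, not numbers taken from this page.

```python
# Back-of-envelope strength-to-weight comparison behind the remark above that
# silk approaches tensile steel at a fraction of the weight. The strength and
# density values are approximate, commonly quoted figures assumed here.

materials = {
    # name: (tensile strength in GPa, density in kg/m^3)
    "spider dragline silk": (1.1, 1300.0),
    "high-tensile steel":   (1.5, 7850.0),
}

for name, (strength_gpa, density) in materials.items():
    specific_strength = strength_gpa * 1e9 / density   # J/kg (equivalently N*m/kg)
    print(f"{name:22s} specific strength ~ {specific_strength / 1e3:.0f} kJ/kg")
```

On these assumed numbers the absolute strengths are comparable, but per unit mass the silk is several times stronger than the steel, which is the sense in which the comparison in the text is usually made.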
The more primitive arrangement of web construction is found in the so-called cribellate spiders, named for a spinning plate (the cribellum) located on the abdomen, across which the silk is drawn. In what appears to be its most primitive arrangement the silk is a simple strand, and its adhesion depends on the same mechanism as the adhesive pads of the gecko lizards (and indeed the scopula of spiders), that is the application of Van der Waals molecular forces. More often, however, the cribellate silk thread is expanded into characteristic “puffs” and these also draw upon hygroscopic forces dependent on moisture for adhesion. An extraordinary evolutionary breakthrough, however, led to a much more effective type of capture silk, both in terms of ease and economy of production. So successful is it that the vast majority of spider species employ it in so-called viscid thread. The trick lies in applying a layer of liquid on the outside of the silk, and this spontaneously collapses into a series of droplets (an instability first recognized by the great physicist Lord Rayleigh). The droplets are sticky and ingeniously have osmotic properties, widely adopted in other organisms, that prevent evaporation. Spider web construction and capture devices The web construction provides many other interesting evolutionary insights. It is possible that the orb-web construction is convergent. In particular, although molecular data suggest that the araneoids (which build a viscid orb-web) and the cribellate deinopoids are related in such a way that only one origin of an orb-web needs to be postulated, it needs to be remembered that orb-web building is a behaviour, and it seems possible that it emerged independently. Another fascinating area of web design has to do with the capture of moths. This is particularly problematic because the body and wings of the moth are covered with scales, and these readily detach on contact with the web, so allowing the moth to escape. At least two solutions to this dilemma have evolved. One is to construct a so-called ladder web (and this has probably evolved more than once) whereby above the orb itself there is an extraordinary “ladder” that can be 70 cm in length. Vertically arranged, it ensures that when a moth blunders into the ladder it falls successively downward, leaving a trail of scales, and eventually adhering to the web. Even more extraordinary are the bolas spiders, and these are worthy of attention because in addition to their unusual web construction they provide an excellent example of molecular convergence in the form of chemical mimicry of pheromones. Whilst not a true bolas in the form of releasing a rope of silk, the bolas spider makes a much reduced trapeze-like web from which it sits, suspending a line of silk with a sticky droplet at the end. The droplet has a complex structure, and the outer layer is evidently very liquid and so can flow between the moth scales, while the centre contains a folded and apparently adhesive thread. It would seem that, remarkable as this capturing device is, its chances of snaring a moth would be very low. So it would, except that the bolas spider manages an extraordinary chemical mimicry in the form of secreting the equivalent of the female moth's sex pheromone. The male moth, of course, is attracted to what is a very sticky end. Equally remarkably, the bolas spider can emit different pheromones to attract different species of moth, also adjusting concentrations according to what times of the night each species is most active.
Silk-producing chelicerates other than spiders The ability to make silk has evolved at least twice more in the arachnids. The aptly named spider-mites (e.g. Tetranychus) are one example. These are well-known agricultural pests, and their silken webbing is familiar to plant pathologists. Interestingly the silk appears to have keratin-like features, keratin being the characteristic protein of hairs, hooves and claws. Silk is also made by some pseudoscorpions, including the marine neobisiids. Typically they make silk chambers that have a variety of functions including brooding and hibernation. It too has a keratin-like structure. Silk is also made by a wide variety of insects, probably best known in the lepidopterans, notably the eponymous silk-worm. In addition to being used to make cocoons, silk-worm silk has other functions, including descent lines and also ballooning by larvae. In the dipterans we find that silk may have a wide range of functions. In the fungus gnats, which have convergently evolved bioluminescence, the silk forms traps, either hanging from a web as a sort of “flypaper” (as in Arachnocampa) or located on the ground (in Orfelia). Dipterans, however, employ silk for other purposes, including the remarkable hilarinid flies (e.g. Hilara maura), members of the Empididae, which in another group show a striking convergence, in the form of a raptorial fore-limb, with the praying mantis. The silk-glands are located in the front legs (e.g. the basitarsus), an arrangement that has independently evolved in the webspinners. In the hilarinids the silk is made by the male, and used as a nuptial gift to the swarming females, hence their alternative name of dance flies. In the aquatic chironomid (midge) larvae the silk is used as draglines, to assist with snagging the substrate. Silk is also made by ants, members of the Hymenoptera to which bees and wasps also belong. Silk production in ants is most familiar in the weaver ants, and evidently this ‘weaver’ ecology has evolved several times. As the name suggests, weaver ants use the silk to bring leaves together to form nests, and the technique depends on the adult holding the larva which produces the silk and shuttling it backwards and forwards. However, silk is also produced in other groups of ants by the adult. Here it is used in nest construction. Other hymenopterans also make silk, including the larvae of bees and also wasps. Most likely these uses of silk are independent. The name webspinner, that is the group known as the embiopterans, is a clear indication of silk use, and in this primitive group silk plays a central role in nest construction. Silk production is also known in the trichopterans, especially in the formation of aquatic feeding webs. Yet another group of insects that has learnt to make silk is a hemipteran, specifically the leafhopper, which constructs a waterproof tent.
It seems it’s the sperm that contains many more “opportunities” for DNA mutation than the egg. You can look at this in two ways: either males are the key to species evolution, or they’re the major reason that animals, including humans, have birth defects. From “Why Males Are Biology’s Riskier Sex”: But a key paper published in Nature by Hákon Jónsson and colleagues, including impressively large samples of both women and men, has dramatically confirmed mounting indications of major differences in mutation rate between the sexes — between sperms and eggs. Analysis of entire nuclear genome sequences from a large database for thousands of Icelanders clearly showed that mutations accumulate at significantly different rates in sperms and eggs. The bottom line from the findings reported by Jónsson and colleagues is that children inherit many more mutations from their dads than from their moms. These findings also have far wider implications that will resonate for some time. Take, for example, a long-standing puzzle with mitochondria. These tiny power houses of the cell are derived from once free-living bacteria that became residents in early organisms with a cell nucleus more than 1.5 billion years ago. Reflecting this origin, each mitochondrion carries a few copies of its own genome, a stripped-down circular strand of DNA. Both eggs and sperms have mitochondria, yet surprisingly those borne by sperms are eliminated after fertilization. This is seemingly counterproductive, as it removes a potential source of variability. Read more at NPR.com.
(Image: adult in breeding plumage.) The grey plover (Pluvialis squatarola), known as the black-bellied plover in North America, is a medium-sized plover breeding in Arctic regions. It is a long-distance migrant, with a nearly worldwide coastal distribution when not breeding. The genus name is Latin and means relating to rain, from pluvia, "rain". It was believed that golden plovers flocked when rain was imminent. The species name squatarola is a Latinised version of Sgatarola, a Venetian name for some kind of plover. They are 27–30 cm (11–12 in) long with a wingspan of 71–83 cm (28–33 in) and a weight of 190–280 g (6.7–9.9 oz) (up to 345 g (12.2 oz) in preparation for migration). In spring and summer (late April or May to August), the adults are spotted black and white on the back and wings. The face and neck are black with a white border; they have a black breast and belly and a white rump. The tail is white with black barring. The bill and legs are black. They moult to winter plumage in mid-August to early September and retain this until April; the winter plumage is a fairly plain grey above, with a grey-speckled breast and white belly. The juvenile and first-winter plumages, held by young birds from fledging until about one year old, are similar to the adult winter plumage but with the back feathers blacker with creamy white edging. In all plumages, the inner flanks and axillary feathers at the base of the underwing are black, a feature which readily distinguishes it from the other three Pluvialis species in flight. On the ground, it can also be told from the other Pluvialis species by its larger (24–34 mm, 0.94–1.34 in), heavier bill. Breeding and migration Their breeding habitat is Arctic islands and coastal areas across the northern coasts of Alaska, Canada, and Russia. They nest on the ground in a dry open tundra with good visibility; the nest is a shallow gravel scrape. Four eggs (sometimes only three) are laid in early June, with an incubation period of 26–27 days; the chicks fledge when 35–45 days old. They migrate to winter in coastal areas throughout the world. In the New World they winter from southwest British Columbia and Massachusetts south to Argentina and Chile, in the western Old World from Ireland and southwestern Norway south throughout coastal Africa to South Africa, and in the eastern Old World, from southern Japan south throughout coastal southern Asia and Australia, with a few reaching New Zealand. Most of the migrants to Australia are female. It makes regular non-stop transcontinental flights over Asia, Europe, and North America, but is mostly a rare vagrant on the ground in the interior of continents, only landing occasionally if forced down by severe weather, or to feed on the coast-like shores of very large lakes such as the Great Lakes, where it is a common passage migrant. They forage for food on beaches and tidal flats, usually by sight. The food consists of small molluscs, polychaete worms, crustaceans, and insects. It is less gregarious than the other Pluvialis species, not forming dense feeding flocks, instead feeding widely dispersed over beaches, with birds well spaced apart. They will however form dense flocks on high tide roosts. The grey plover is one of the species to which the Agreement on the Conservation of African-Eurasian Migratory Waterbirds (AEWA) applies.
Soviet Union under Nikita Khrushchev

Nikita Khrushchev came from a farm family in the Ukraine. His father left the farming area to work in a mine and Nikita joined the mining workforce as a metal fitter. He married early but his wife perished of starvation in the terrible famine of 1921. This must have left a lifelong concern for food and agriculture in Khrushchev's mind in his later career as a Communist Party politician. He probably was never aware that the source of the famine was the Bolshevik program of confiscation of grain from the farmers.

Khrushchev rose to the leadership of the Communist Party in part because of his advocacy of a program to increase food production. Superficially considered, the solution to increased food production seemed to be to increase the amount of land farmed. The Central Committee of the Communist Party issued a decree in March of 1954 that grain production was to be increased by sowing grain on idle and virgin land, land that had never been farmed before. The Party decree stated that about 33 million acres of new land were to be sown in the following crop year of 1954-55. This decree stated that the new land would produce 18 to 20 million metric tons of grain, 13 to 15 million tons of which would be marketed.

By August of 1954, 33.5 million acres had been plowed and there were great expectations that the plan for increased grain production would be fulfilled. The targets for increased future production were correspondingly increased.

However, there is a distinct tendency in socialist decision-making to confuse inputs with outputs. It is important not to presume that one statistic is necessarily equivalent to another. For example, in Kazakhstan, where most of the virgin lands were located, there were millions of acres which were plowed but not sown with seed. Later, also in Kazakhstan, there were millions of acres on which grain was produced but not harvested. At other times there were large amounts of grain that were cut but not threshed because the grain had gotten wet and could not be stored. And of course there was some grain that was stored wet and became unusable due to mildew and fungus. For some of the fiascos of mismanagement and misallocation of resources in the virgin lands program see Virgin Lands Operations.

In socialist decision-making, trade-offs were seldom considered. While the output of grain might be increased by bringing previously uncultivated land under cultivation, the question is whether the enormous funds devoted to the virgin lands could have brought greater increases in production from already cultivated land through increased fertilizer inputs.

The official record of areas sown and grain production in the areas where virgin and idle land were brought under cultivation is shown below. The figures for 1953 indicate how much of the land in those areas was already under cultivation. The nature of idle land is crucial. Idle land is land previously cultivated but not currently under cultivation. If the land is merely neglected or not cultivated for lack of adequate resources, then bringing it into cultivation would be beneficial. However, if the land is being kept fallow to let it recover its fertility, then bringing it back into cultivation too early could hurt future yields.

It is often more useful to view such statistical information in the form of graphs and charts. [Table and graph of areas sown and grain production in the virgin-lands regions not preserved in this copy.]
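The inputs-versus-outputs confusion can be made concrete with a little arithmetic. The sketch below is purely hypothetical (the starting acreage echoes the reported 33.5 million figure, but the loss shares and the yield are invented for illustration and are not official Soviet statistics); it simply traces how plowed acreage shrinks to usable grain when land goes unsown, crops go unharvested, and harvested grain spoils.

```python
# Hypothetical illustration only: the shares and yield below are invented,
# not official Soviet statistics.
acres_plowed = 33_500_000        # acres reported plowed in 1954
share_sown = 0.85                # fraction of plowed acres actually sown with seed
share_harvested = 0.90           # fraction of the sown crop actually harvested
share_stored_dry = 0.92          # fraction of the harvest threshed and stored dry
yield_tons_per_acre = 0.35       # assumed average yield, metric tons per acre

effective_acres = acres_plowed * share_sown * share_harvested * share_stored_dry
usable_grain_tons = effective_acres * yield_tons_per_acre

print(f"Acres plowed (the input statistic):     {acres_plowed:,.0f}")
print(f"Acre-equivalents yielding usable grain: {effective_acres:,.0f}")
print(f"Usable grain (the output statistic):    {usable_grain_tons:,.0f} tons")
# Whenever any of the shares falls below 1.0, the headline input (acres plowed)
# overstates the output (usable grain) -- the confusion described above.
```

Under these made-up assumptions roughly 30 percent of the plowed acreage never shows up as usable grain, which is why acres plowed, acres sown, grain produced and grain harvested have to be treated as distinct statistics.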
The story behind the figures for each year is given below:

The 1954 crop year illustrates the problems of initiating a new agricultural program. The crop planted on the virgin lands was larger than anticipated, and the authorities could not get it all harvested. Thus the amount of grain produced was significantly larger than the amount harvested. While production in the virgin lands area was larger, there were droughts in the Volga River Basin grain-growing region.

The 1955 crop was a disappointment. The plowed area of virgin lands increased, but droughts, especially in Kazakhstan, resulted in a 38 percent fall in the harvest compared to 1954 even though the plowed area had increased 100 percent. Kazakhstan in 1955 got only 10 percent of its normal precipitation. However, in contrast to 1954, the non-virgin land harvest more than offset the failure of the virgin land crop. The Ukrainian crop was more than double that of 1954. However, the lesson of the undependability of precipitation in the virgin lands was not heeded, and Khrushchev called for an expansion of the virgin lands program into East Siberia and the Soviet Far East.

The 1956 crop was a bumper crop and its success seemed to justify the risks taken in the virgin lands program. The virgin lands crop for 1956 nearly equalled the combined crop of the two previous years.

Droughts made the 1957 crop a great disappointment. The virgin lands crop was down 40 percent; Kazakhstan's crop was down 50 percent. A recovery of production came in 1958, but the crop did not equal the bumper crop of 1956.

The crop year of 1959 was down due to an early winter and freezes which killed some of the winter wheat crop. But the decline in production was relatively minor, being only 10 percent for the whole country and only 6 percent for the virgin lands area. This was despite a 60 percent decline in the Volga region production.

The crop for 1960 was good but did not exceed the 1956 crop. However, it did fall short of the planned harvest.

1961 provided a good average crop but it was a disappointment to Khrushchev. He called for a crash program. He felt that crop production was not keeping up with population increases, particularly the rise in urban population. The opportunities for bringing virgin land into production quickly were not available, so he called for the plowing of fallowed land and land which had been devoted to grass.

The weather for the 1962 crop was erratic. There were some favorable conditions and then some unfavorable conditions. Grain production on the old croplands was good and average on the virgin lands, so the overall production for the Soviet Union was good, exceeding the previous year by 10 million tons.

The authorities were anticipating a bumper crop in 1963 but instead faced a disaster. The production was less than what was considered the required grain needs of the Soviet population. It was an economic disaster that led to a regime change. In 1964 Nikita Khrushchev lost power in the Soviet Union. Ironically, 1964 produced the bumper crop that had not materialized in 1963. (To be continued.)

Wind erosion was a serious problem for the virgin lands program. In years in which there was a drought, the wind picked up unprotected soil in the virgin lands area. That dust would be deposited on areas with vegetation, destroying the crop. The wind storms were particularly bad in 1960 and 1962, and the general public became aware of the problem. Khrushchev gave little attention to the problem.
In 1948 Joseph Stalin had ordered the planting of trees as windbreaks in agricultural areas. Some of the trees died but the plan had merit. It was, however, neglected after Stalin's death in 1953. There were recommendations that the virgin land should be plowed without the moldboards that turned over the soil in a furrow and exposed it to the wind. This recommendation would require the manufacture of special plows, and therefore it was largely ignored.

Water erosion was just as big a problem. Authorities estimated that the Soviet Union was losing 500 million tons of topsoil annually and that the lost topsoil contained more nitrogen than was being put on the soil as fertilizer. Khrushchev's drive for production increases at any cost ultimately sabotaged itself by neglecting erosion. (To be continued.)

Although the matter of costs was crucial, only a limited amount of information is available on the costs of the virgin lands program. What is different about the program is that the costs of the housing, roads and other infrastructure need to be counted as part of the cost of the program, because otherwise those facilities would not have been built. (To be continued.)

In December of 1963 Nikita Khrushchev gave a speech to the Central Committee of the Communist Party in which he acknowledged that the solution to the objective of increasing grain production was not to be found in bringing new land under cultivation. The new lands required excessive capital investment in clearing the land, draining it and otherwise preparing it for cultivation. He said that the funds invested in virgin lands cultivation would have been better spent in producing more mineral fertilizer to use on the already cultivated land to increase the crop yield.

Martin McCauley, Khrushchev and the Development of Soviet Agriculture: The Virgin Lands Program 1953-1964, Holmes & Meier Publishers, Inc., New York, 1976.
We’ve come to expect every generation of iPhone to be smaller, lighter, faster. What we don’t realize, however, is that while our phones are indeed becoming smaller, lighter and faster, the device is profoundly different from its flip-open counterpart. What we’ve perceived as a natural progression actually required a huge leap in fundamental science over the past 15 years. This science goes beyond mixing chemicals in a lab to laying the theoretical framework for ideas made tangible. The result is technology so seamless, we don’t realize what we’re holding in our hands.

In the College of Liberal Arts and Sciences, we call this complex materials research. When the goal is smaller, lighter, faster, materials set the speed limit. Eventually, traditional building blocks reach their limit for developing new technology. By creating novel materials such as crystals, magnets, metals and others, researchers in LAS and the Department of Energy’s Ames Laboratory are constructing new building blocks to push technology further.

“The products of condensed matter physics are all around us,” Paul Canfield, a Distinguished Professor of physics and astronomy in LAS, said. “And one of the successes of condensed matter physics is the fact that people take it for granted. It’s invisible to many people.”

LAS is where discovery meets technology. Our fundamental science is why Apple can manufacture a phone that’s, well, smaller, lighter and faster. It’s why a surgeon’s tools can be made more durable. It’s why a company can harvest more solar power for its facilities.

Science for all

A cell phone contains hundreds of different types of materials that allow it to show you the time, weather and score, all while Snapchatting with a friend. And it fits in your pocket. Fifty years ago, the same technology would have taken up the entire floor of a building and would have been able to do far less, Canfield said. “The computers used for the Manhattan Project were something you could now fit into the size of a wristwatch.”

“New materials enable manufacturers to make electronic devices smaller, more powerful, more sophisticated and, in many cases, less expensive,” Beate Schmittmann, dean of the LAS college, said. This leads to tangible solutions to problems and inefficiencies in energy production, clean air and water, national security and healthcare. “There is no area of modern society and, in particular, modern technology that does not in some form or fashion rely heavily on complex materials,” she said. “And Iowa State University is known worldwide for its research in this field.”

Better science, better healthcare

“Magnesium diboride.” Ever heard of it? Probably not, but someday, if you need an MRI scan, you’ll be grateful for it. In 2001, Canfield – also a physicist at the Ames Laboratory and the Robert Allen Wright Professor in Physics and Astronomy – and his colleagues proved magnesium diboride was a cheap and useful superconducting material. Today, solenoids (a coil of wire that generates a uniform magnetic field) are being crafted out of wires spun from magnesium diboride. Those solenoids could allow MRI machines to be easier to access, cost less, and use less energy thanks to significantly lower cooling requirements.

LAS’ research in complex materials is leading to many other improvements in healthcare. In addition to a more affordable MRI scan, our scientists are discovering ways to make surgical tools more durable and medical imaging sharper – sharp enough to someday see inside a human cell.
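It can help to see why a superconducting wire matters for an MRI magnet. As a rough textbook aside (not something spelled out in the article itself), the field inside a long, ideal solenoid depends only on the current and the winding density:

\[ B = \mu_0 \, n \, I \]

where \(B\) is the field inside the coil, \(\mu_0\) is the vacuum permeability, \(n\) is the number of turns per unit length, and \(I\) is the current. Holding a strong field steady therefore means holding a large current steady. In a resistive coil that current continuously dissipates power as heat, while in a superconducting magnesium diboride coil it can circulate essentially without loss as long as the wire stays below its transition temperature (about 39 K for magnesium diboride, well above that of conventional niobium-based superconducting wire, which is where the lower cooling requirements come from).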
Pat Thiel’s research could lead to more durable coatings for everything from surgical tools to nonstick cookware. Thiel is a Distinguished Professor of chemistry in LAS and a scientist at the Ames Laboratory. Her research focuses on surface chemistry, specifically quasicrystals. That work is critical to complex materials because “to be able to create complex materials,” she said, “it all starts on the surface.”

Quasicrystals were discovered in 1982 by Israeli researcher Dan Shechtman. His research earned him the 2011 Nobel Prize in Chemistry, and he spends several months each year working at ISU and the Ames Laboratory. Quasicrystals have fundamental properties that include low friction, oxidation resistance, and hardness. They could lead to better metal coatings, stronger razor blades, and ultra-fine needles used in medical fields. Perhaps someday, “going to the doctor” won’t be all that bad.

Illuminating the future

There are many ways research in new materials is paving the way for a better life. For example, LAS scientists are studying the way light’s momentum can transfer energy to objects, leading to better solar energy harvesting. Researchers use metamaterials to study this light-to-energy transfer.

Costas Soukoulis, a Distinguished Professor of physics and astronomy and an Ames Laboratory physicist, has been studying metamaterials for years. Metamaterials are manmade structures that create optical and magnetic properties not found in nature. Soukoulis, the Frances M. Craig Professor of Physics and Astronomy, said because they have a negative index of refraction, scientists could create “super lenses,” which could resolve features smaller than a wavelength of light, beyond the diffraction limit of ordinary lenses. A super lens could one day be strong enough to see the details of a DNA molecule.

Soukoulis and his colleagues are working on a way to enhance the force of light on matter. Most of the time, the momentum of light and its associated forces are too small to notice. But at a nanoscale level, the effect can be quite large. Ames Laboratory scientists are using these forces to dynamically manipulate optical wavelengths at a nanoscale level. This is all done with metamaterials, which can absorb all light, or bend light backwards.

Soukoulis is also working with graphene, a material used for manipulating terahertz waves (which operate at frequencies between microwave and infrared). Graphene could one day be used to make metamaterial devices with a superior ability to tune electrical responses for real-world solutions in solar power, telecommunications and medical imaging. This technology could also lead to the design of mechanical devices activated entirely by light.

A myriad of opportunity

Complex materials are vital ingredients of the tiniest electronics, fastest computers, advanced batteries in hybrid cars, and giant wind turbines. They’ll create more durable tools for surgery, more reliable cooking surfaces, more efficient medical machines, and sharper medical images. Complex materials research at the Ames Laboratory and in the College of Liberal Arts and Sciences offers a promise of a better world, and our discoveries are leading to tangible applications at a crucial time.

“This is something that is going to be important for us in the foreseeable future,” Schmittmann said. “I see no end in sight. The sky is the limit.”

By Jess Guess
Continuing in this mini-series on sentence diagramming, I want to keep the focus on two things: (a) making it as simple and understandable as possible and (b) explaining why this is important to the disciple-making process. There are more technical ways of breaking down texts of Scripture, but I will leave that for your Greek syntax and exegesis class in seminary. 🙂 The goal behind this mini-series is to help disciple makers employ a very practical method for training believers to handle Scripture, consequently bringing greater confidence and consistency in applying it to their lives.

In the previous post, I explained the basic setup for sentence diagramming. In this post, I want to explain propositions and their relationship to one another. Remember, a proposition is simply a phrase that makes an assertion or point, and a verse may contain several propositions. Coordinate clauses are propositions of equal importance. Subordinate clauses are propositions that modify or explain the lead proposition. Knowing the difference between the two will determine how you diagram a sentence and learn the thought flow of the text. To be clear, we are not seeking to diagram the grammar of the text (relationship between words); rather, we are diagramming the concepts/ideas in the text (relationship between propositions).

Once the document is set up (see part 2), the fun begins.

- Start by putting the main clause/proposition in the upper left-hand corner of your document/paper.
- Indent all subordinate clauses.
- Line up all coordinate clauses.
- Connect related main clauses.
- Finally, explain the relation between clauses/propositions.

(For one way to picture this indent-and-line-up structure as a simple outline, see the short illustrative sketch at the end of this post.)

One point should be made here. You are going to have to make subjective calls on whether propositions are subordinate or coordinate clauses. The important thing is that you are consistent throughout your diagramming and focus on the flow of the text (there is no inerrant or perfect sentence diagramming!). The benefit of using a word processor is that you can make changes rather easily in the diagramming process.

Once the propositions are diagrammed by coordinate and subordinate clauses, the next step is to determine the relationship between them. Before we jump to learning the various types of coordinate and subordinate clauses, let’s revisit 1 John 1 from my last post and update the sentence diagramming. What you will see is how I determined coordinate and subordinate clauses. So that I don’t unload everything all at once, part 4 will focus on explaining the relationship between clauses now that we have the idea/thought flow diagrammed.
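To make the indent-and-line-up idea concrete, here is a minimal sketch in Python of one way a diagram like this could be recorded and printed as an indented outline. It is only an illustration, not part of the author's method, and the sample analysis of 1 John 1:9 below is just one possible reading, offered purely to show the mechanics.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Proposition:
    """A single clause/proposition with its subordinate children."""
    text: str
    relation: str = "main"              # e.g. "main", "condition", "purpose"
    children: List["Proposition"] = field(default_factory=list)

def show(prop: Proposition, depth: int = 0) -> None:
    """Print an indented outline: subordinate clauses are indented one level,
    and coordinate clauses (siblings) line up at the same depth."""
    print("    " * depth + f"[{prop.relation}] {prop.text}")
    for child in prop.children:
        show(child, depth + 1)

# One possible (illustrative) diagram of 1 John 1:9
diagram = Proposition(
    "he is faithful and just",
    children=[
        Proposition("If we confess our sins", relation="condition"),
        Proposition("to forgive us our sins", relation="purpose"),
        Proposition("to cleanse us from all unrighteousness", relation="purpose"),
    ],
)

show(diagram)
```

Printed out, the condition and the two purpose clauses each sit one indent level under the main clause, and the two purpose clauses, being coordinate with each other, line up at the same depth.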
intonation – the rising and falling sounds of the voice when speaking.

Language conveys very specific information, such as how to get somewhere or what someone is doing. It can also be used beyond the exact meaning of the words to indicate how the speaker feels about what he is saying, or how he personally feels at that moment. Generally speaking, if English is not your first language, this is where you start running into difficulty. Even if you pronounce each word clearly, if your intonation patterns are non-standard, your meaning will probably not be clear. Also, in terms of comprehension, you will lose a great deal of information if you are listening only for the actual words used.

This is the starting point of standard intonation. When we say that we need to stress the new information, it's logical to think, "Hmmm, this is the first time I'm saying this sentence, so it's all new information. I'd better stress every word." Well, not quite. In standard English, we consider that the nouns carry the weight of a sentence, when all else is equal. Although the verb carries important information, it does not receive the primary stress of a first-time noun.

Dogs eat bones. They eat them.

Check Your Understanding
Multiple Choice. Choose the best answer. Check your answers below.
1. Language can tell us...
a. how a speaker feels about what he is saying
b. how a speaker feels at the moment he is speaking
c. both of the above
2. If your intonation patterns are not standard...
a. everyone will
b. your meaning will probably not be clear
c. neither of the above
3. What is usually stressed in a sentence?
a. every word
4. After the nouns have been introduced and we begin using pronouns, which words are usually stressed?
Red Giants and Planet Formation This article will explore the potential for life to develop in the outer planetary systems of red giant stars. It will then discuss the death-throes of red giant stars, and whether the subsequent outward thrust of stellar material might provide another mechanism for free-floating planets in interstellar space. Exoplanets have already been found orbiting extremely old stars, one some 11 billion years old (1). This star, named Kepler-444, makes our own Sun, at a mere 4.6 billion years old, seem like an infant in comparison. The implication of this is that life could readily have got going early on in the history of the universe, long before the birth of our Sun. Furthermore, if these exoplanets were to benefit from a relatively stable stellar environment during that long timescale, then the chances of life evolving into higher forms are statistically more probable. Scale this up across trillions of stars, and the possibilities become clear. Our own Sun has a shorter lifespan than this. Its main sequence life is expected to last another 5 billion years, by which point it will have burned up all of its hydrogen fuel. Then it will swell into a red giant star, before collapsing down into a white dwarf. For Earth, this post-main sequence (post-MS) phase of the Sun’s life will be pretty disastrous. The Sun’s expansion to a red giant will swallow the Earth up. However, a less catastrophic outcome might be expected for planets in the outer solar system, beyond, say, Jupiter. In fact, their climates might significantly improve – for a while, at least. The habitable zone of the solar system will expand outwards, along with the expanding star. Saturn’s largest moon Titan, for instance, might benefit greatly from a far milder climate – as long as it can hang onto its balmy atmosphere in the red heat of the dying Sun. The expansion of habitable zones, as late main sequence stars become hydrogen-starved, offers the potential for life to make a new start in previously frigid environments. The burning question here is how long these outer planets have to get life going before the red giant then withdraws into its cold white shell. A study published last year by scientists at the Cornell University’s Carl Sagan Institute attempted to answer this question (2), choosing to examine yellow dwarf stars whose sizes range from half that of the Sun, to approximately twice its mass. They argue that the larger stars along this sequence could well have larger rocky terrestrial planets in their outer planetary systems than our Sun does (at least, insofar as we know it does!) This is because the density of materials in their initial proto-planetary disks should be that much greater for larger stars (3). Larger Earth-like planets in outer regions mean more potential for stable atmospheric conditions during the post-MS period under consideration. In other words, the growing red giant (which is shedding its mass pretty wildly at this point) would not necessarily blast away an outer planet’s atmosphere if that rocky planet had sufficient gravity to hold onto it. However, the larger stars enjoy much shorter post-MS phases than their cooler cousins. So the potential for larger Earth-like planets in the outer reaches of their surviving planetary systems is offset by the shorter time period required to allow life to get a foothold. That might not be as much of a problem as it first appears, however: “Life may become remotely detectable during the post main sequence lifetime of a star. 
First, life may be able to evolve quickly (i.e. within a few million to a hundred million years). Secondly, it is not necessary for life to evolve during the post-MS phase. Life may have started in an initially habitable environment and then moved subsurface, or stayed dormant until surface conditions allowed for it to move to the planet’s surface again, like in a star’s post-MS phase. Lastly, life could have evolved during early times on a cold planet located beyond the traditional habitable zone, remaining subsurface or under a layer of ice until emerging during the post-MS phase.” (3) That said, the cooler main sequence stars (those with a fraction of the Sun’s mass) enjoy a much, much longer post-MS period – a ‘retirement’ period which might last as long as 9 billion years! But, these same stars also enjoy very long main sequence lifetimes, meaning that some of the oldest, coolest stars have yet to reach the point where they might have burned up all of their hydrogen fuels within the actual lifetime of the universe: “None of the cool late K [orange dwarf] and M [red dwarf] stars have yet reached the post-MS phase, making the lifetime in the post-MS HZ for cool stars a prediction, not an observable quantity.” (3) So, we need not concern ourselves particularly with orange dwarfs and red dwarfs – the Sun’s smaller stellar cousins. They simply take too long to burn out. The bigger the star, the quicker it moves through its lifecycle (to complicate matters further, this is also dependent upon its inherent metallicity, which tends to be greater in stars as the universe ages). So, stars larger than the Sun move through their post-MS period much quicker than their cooler cousins (4). That period of time between the fully expanded red giant phase, and the essentially deceased white dwarf phase, contains mysteries which have yet to be unravelled. Whilst discussing the decreasing size of the famous red giant star Betelgeuse, Edward Wishnow, a research physicist at UC Berkeley’s Space Sciences Laboratory, was quoted making the following point about end stage red giants: “Considering all that we know about galaxies and the distant universe, there are still lots of things we don’t know about stars, including what happens as red giants near the ends of their lives.” (5) The red supergiant Betelgeuse is currently shedding huge amounts of itself into space, forming a spectacular planetary nebula (6). These nebulae are created around red giants as they move relatively rapidly through a sequence of internal changes. They are essentially throbbing and pulsating back and forth as the increasingly unstable red giant star expands and contracts in upon itself, as it feeds upon an increasingly heavier diet of internal nuclear fuels: “During the latter parts of the Red Giant stage of a star, the star begins to throb and pulsate. The helium-burning shell collapses into the core when its contents are fused into carbon. There is a brief shut-down of one form of nuclear fusion and the star shrinks slightly. Then a new shell of helium ignites and blows the star outward. This shrinking and expanding is called the Asymptotic Giant Branch lifestage of a star, and during this time, the star sheds much of its outer material into space in huge rings of gas and dust.” (7) And this is where things get interesting. The image above, of the red star surrounded by a encircling planetary nebula, is of the red giant V838 Monocerotis, in the constellation Monoceros. 
This red giant is blowing its outer layers into space without actually turning into a nova (8). It appears like a red star wrapped up in a dusty nebula. Similar, then, to the imagery I have described for a Dark Star in our own backyard. However, this is on a titanic scale, at the dying end point of a star’s life. But it creates an interesting precedent for the kind of structure I’ve been discussing for a much, much smaller red object, closer to home; itself perhaps wrapped up in a cloud of obscuring dust (9, 10, 11). Let’s explore this connection further. Non-conventional Planet Formation Could massive ‘clumps’ of the red giants’ planetary nebulae get interred into interstellar space in great numbers, and after drifting through the darkness of interstellar space, end up getting picked up by the gravitational fields of main sequence stars, like the Sun? I wonder whether the materials driven forth by the red giant could be a source of clumps of dark interstellar material sizeable enough to form massive gaseous planets within them, like Dark Stars and other sub-brown dwarfs. This stunning picture of the planetary nebula in Monoceros relied upon a chance flaring of red giant starlight illuminating the dusty nebula beyond, a phenomenon known as a ‘light echo’ (8). It may only have been blind luck that this image was even captured. As the red giant dies back, and in the absence of other illuminating sources, this nebula will become progressively darker. Arguably, then, dark nebulae may be emerging from these dying stars regularly, but never witnessed by astronomers. Back in January, I wrote about the ‘spaghettification’ of stars by black holes, and how this debris field of matter is flung out into the rest of the Milky Way by the supermassive black hole which lies at the galactic centre (12). Clumps of this strewn material, or ‘spitballs’, are thought to become sizeable free-floating planets (13), including sub-brown dwarfs. Coming back to end-stage red giants, might not these expanding planetary nebulae also provide a non-conventional environment for planetary formation, as matter clumps together in the eddies of this outwardly expanding rush of material? Chips off the old block, one might say. The more we come to appreciate how much material is ejected from young planet-forming star systems, binary star systems (14), star-crushing black holes and red giants, the more we need to accept that interstellar space is far from empty. On the contrary, like a swirling junk yard, it is a vast repository of broken stars, planets and dark nebulae, consisting of material which is either the unfinished detritus of creation, or from repetitive cosmic recycling. This non-uniform stream of material moves around the galaxy, like the stars – but is not sufficiently lit by them to be observable. The stars clear away this material from their own local environments through the action of their solar winds, their heliopause borders and the gravitational wells of the stars themselves. As we happen to exist within one of these solar bubbles, and our subjective viewpoint is taken from within them, we assume a great deal about the conditions beyond (11). Because we cannot see stuff out there, besides the obvious denser patches lit internally by stars (like star-forming nebulae, and giant molecular clouds), we conclude that interstellar space is largely empty. But then, where does all this junk end up? 
I suspect much of it aggregates into sub-stellar free-floating planets, forming dark mini-systems enveloped in shrouds of gas and residual matter. So, instead of an opaque fog of matter strewn across space, which we might readily observe, there are instead tiny rain drops of condensed matter invisible in the starlight. A galaxy full of dark stars and free-floating planets; felt, but unseen. Written by Andy Lloyd, 12th March 2017 1) Campante et al “An ancient extrasolar system with five sub-Earth-size planets” The Astrophysical Journal, 26th January 2015, https://arxiv.org/abs/1501.06227 2) Melissa Osgood “Hunting for hidden life on worlds orbiting old, red stars” Cornell University, 16th May 2016 http://mediarelations.cornell.edu/2016/05/16/hunting-for-hidden-life-on-worlds-orbiting-old-red-stars/ 3) Ramirez, R. & Kaltenegger, L. “Habitable Zones of Post-Main Sequence Stars,” The Astrophysical Journal 15th May 2016, volume 823:6, 14pp, https://arxiv.org/abs/1605.04924 4) The Astrophysics Spectator “Red Giant Evolution” Issue 3.19, 19th October 2006, http://www.astrophysicsspectator.com/topics/stars/RedGiantsEvolution.html 5) Robert Sanders “Red giant star Betelgeuse mysteriously shrinking” 09 June 2009 http://www.berkeley.edu/news/media/releases/2009/06/09_betelim.shtml 6) European Southern Observatory – Press Release 1121 “The Flames of Betelgeuse: New image reveals vast nebula around famous supergiant star” 23rd June 2011, https://www.eso.org/public/unitedkingdom/news/eso1121/ 7) Hopkins On-Line Astronomy “Red Giant Stars – Introduction” http://astro.hopkinsschools.org/course_documents/stars/big_picture/redgiants.htm 8) NASA “Light Echoes From a Red Supergiant” 23rd March 2008 https://www.nasa.gov/multimedia/imagegallery/image_feature_784.html 9) Andy Lloyd “Dust in the Winged” 23rd June 2016, http://andy-lloyd.com/dust-in-the-winged/ 10) Andy Lloyd “Interstellar Planet Formation” 8th-17th July 2016 http://www.andylloyd.org/darkstarblog40.htm 11) Andy Lloyd “The Cumulative Effect of Intermittent Interstellar Medium Inundation Upon Objects In The Outer Solar System” 02/2016, DOI: 10.13140/RG.2.1.5112.5526,https://www.academia.edu/21700220/The_Cumulative_Effect_of_Intermittent_Interstellar_Medium_Inundation_Upon_Objects_In_The_Outer_Solar_System 12) Andy Lloyd “The Galactic Core Spits out Dark Stars” 15th January 2017 http://www.andylloyd.org/darkstarblog46.htm 13) Harvard-Smithsonian Center for Astrophysics Press Release 2017-01 “Our Galaxy’s Black Hole is Spewing Out Planet-size “Spitballs”” 6th January 2017 https://www.cfa.harvard.edu/news/2017-01 14) Ramin Skibba “Binary stars shred up and shove off their newborn planets” 13th January 2017 https://www.newscientist.com/article/2117948-binary-stars-shred-up-and-shove-off-their-newborn-planets/
- suffix forming the possessive singular case of most Modern English nouns; its use gradually was extended in Middle English from Old English -es, the most common genitive inflection of masculine and neuter nouns (such as dæg "day," genitive dæges "day's"). Old English also had genitives in -e, -re, -an, as well as "mutation-genitives" (boc "book," plural bec), and the -es form never was used in plural (where -a, -ra, -na prevailed), thus avoiding the verbal ambiguity of words like kings'. In Middle English, both the possessive singular and the common plural forms were regularly spelled es, and when the e was dropped in pronunciation and from the written word, the habit grew up of writing an apostrophe in place of the lost e in the possessive singular to distinguish it from the plural. Later the apostrophe, which had come to be looked upon as the sign of the possessive, was carried over into the plural, but was written after the s to differentiate that form from the possessive singular. By a process of popular interpretation, the 's was supposed to be a contraction for his, and in some cases the his was actually "restored." [Samuel C. Earle, et al, "Sentences and their Elements," New York: Macmillan, 1911] As a suffix forming some adverbs, it represents the genitive singular ending of Old English masculine and neuter nouns and some adjectives.
US History/New Nation

Contents
- 1 The Articles of Confederation
- 2 The Northwest Ordinance
- 3 Problems with the Confederation
- 4 Shays' Rebellion
- 5 US Presidents before George Washington
- 6 Notes
- 7 References
- 8 Further reading
- 9 External links

The Articles of Confederation

(The following text is taken from Wikipedia)

The Articles of Confederation and Perpetual Union, also the Articles of Confederation, was the governing constitution of the alliance of thirteen independent and sovereign states styled "United States of America." The Articles' ratification (proposed in 1777) was completed in 1781, legally uniting the states by compact into the "United States of America" as a union with a confederation government. Under the Articles (and the succeeding Constitution) the states retained sovereignty over all governmental functions not specifically deputed to the confederation.

The final draft of the Articles was written in the summer of 1777 and adopted by the Second Continental Congress on November 15, 1777 in York, Pennsylvania after a year of debate. In practice the final draft of the Articles served as the de facto system of government used by the Congress ("the United States in Congress assembled") until it became de jure by final ratification on March 1, 1781, at which point Congress became the Congress of the Confederation.

The Articles set the rules for operations of the "United States" confederation. The confederation was capable of making war, negotiating diplomatic agreements, and resolving issues regarding the western territories; it could mint coins and borrow inside and outside the United States. An important element of the Articles was that Article XIII stipulated that "their provisions shall be inviolably observed by every state" and "the Union shall be perpetual." This article was put to the test in the American Civil War.

The Articles were created by the chosen representatives of the states in the Second Continental Congress out of a perceived need to have "a plan of confederacy for securing the freedom, sovereignty, and independence of the United States." Although the Articles served a crucial role in the attainment of nationhood for the thirteen states, a group of reformers, known as "federalists", felt that they lacked the necessary provisions for a sufficiently effective government. Fundamentally, a federation was sought to replace the confederation.

The key criticism by those who favored a more powerful central state (i.e. the federalists) was that the government (i.e. the Congress of the Confederation) lacked taxing authority; it had to request funds from the states. Another criticism of the Articles was that they did not strike the right balance between large and small states in the legislative decision-making process. Due to its one-state, one-vote plank, the larger states were expected to contribute more but had only one vote. The Articles were superseded by the United States Constitution, which was ratified by the requisite nine states on June 21, 1788.

The political push for the colonies to increase cooperation began in the French and Indian Wars in the mid-1750s. The opening of the American Revolutionary War in 1775 induced the various states to cooperate in seceding from the British Empire. The Second Continental Congress, starting in 1775, acted as the confederation organ that ran the war. Congress presented the Articles for enactment by the states in 1777, while prosecuting the American Revolutionary War against the Kingdom of Great Britain.
Congress began to move for ratification of the Articles in 1777: The articles can always be candidly reviewed under a sense of the difficulty of combining in one general system the various sentiments and interests of a continent divided into so many sovereign and independent communities, under a conviction of the absolute necessity of uniting all our councils and all our strength, to maintain and defend our common liberties... The document could not become officially effective until it was ratified by all of the thirteen colonies. The first state to ratify was Virginia on December 16, 1777. The process dragged on for several years, stalled by the refusal of some states to rescind their claims to land in the West. Maryland was the last holdout; it refused to go along until Virginia and New York agreed to cede their claims in the Ohio River valley. A little over three years passed before Maryland's ratification on March 1, 1781. Even though the Articles of Confederation and the Constitution were established by many of the same people, the two documents were very different. The original five-paged Articles contained thirteen articles, a conclusion, and a signatory section. The following list contains short summaries of each of the thirteen articles. - Establishes the name of the confederation as "The United States of America." - Asserts the precedence of the separate states over the confederation government, i.e. "Each state retains its sovereignty, freedom, and independence, and every power, jurisdiction, and right, which is not by this Confederation expressly delegated." - Establishes the United States as a league of states united ". . . for their common defense, the security of their liberties, and their mutual and general welfare, binding themselves to assist each other, against all force offered to, or attacks made upon them . . . ." - Establishes freedom of movement–anyone can pass freely between states, excluding "paupers, vagabonds, and fugitives from justice." All people are entitled to the rights established by the state into which he travels. If a crime is committed in one state and the perpetrator flees to another state, he will be extradited to and tried in the state in which the crime was committed. - Allocates one vote in the Congress of the Confederation (United States in Congress Assembled) to each state, which was entitled to a delegation of between two and seven members. Members of Congress were appointed by state legislatures; individuals could not serve more than three out of any six years. - Only the central government is allowed to conduct foreign relations and to declare war. No states may have navies or standing armies, or engage in war, without permission of Congress (although the state militias are encouraged). - When an army is raised for common defense, colonels and military ranks below colonel will be named by the state legislatures. - Expenditures by the United States will be paid by funds raised by state legislatures, and apportioned to the states based on the real property values of each. - Defines the powers of the central government: to declare war, to set weights and measures (including coins), and for Congress to serve as a final court for disputes between states. - Defines a Committee of the States to be a government when Congress is not in session. - Requires nine states to approve the admission of a new state into the confederacy; pre-approves Canada, if it applies for membership. 
- Reaffirms that the Confederation accepts war debt incurred by Congress before the Articles. - Declares that the Articles are perpetual, and can only be altered by approval of Congress with ratification by all the state legislatures. Still at war with the Kingdom of Great Britain, the colonists were reluctant to establish another powerful national government. Jealously guarding their new independence, members of the Continental Congress created a loosely-structured unicameral legislature that protected the liberty of the individual states. While calling on Congress to regulate military and monetary affairs, for example, the Articles of Confederation provided no mechanism to force the states to comply with requests for troops or revenue. At times, this left the military in a precarious position, as George Washington wrote in a 1781 letter to the governor of Massachusetts, John Hancock. The end of the war The Treaty of Paris (1783), which ended hostilities with Great Britain, languished in Congress for months because state representatives failed to attend sessions of the national legislature. Yet Congress had no power to enforce attendance. Writing to George Clinton in September 1783, George Washington complained: - Congress have come to no determination yet respecting the Peace Establishment nor am I able to say when they will. I have lately had a conference with a Committee on this subject, and have reiterated my former opinions, but it appears to me that there is not a sufficient representation to discuss Great National points. The Articles supported the Congressional direction of the Continental Army, and allowed the 13 states to present a unified front when dealing with the European powers. As a tool to build a centralized war-making government, they were largely a failure, but since guerrilla warfare was correct strategy in a war against the British Empire's army, this "failure" succeeded in winning independence. Under the articles, Congress could make decisions, but had no power to enforce them. There was a requirement for unanimous approval before any modifications could be made to the Articles. Because the majority of lawmaking rested with the states, the central government was also kept limited. Congress was denied the power of taxation: it could only request money from the states. The states did not generally comply with the requests in full, leaving the confederation chronically short of funds. Congress was also denied the power to regulate commerce, and as a result, the states fought over trade as well. The states and the national congress had both incurred debts during the war, and how to pay the debts became a major issue. Some states paid off their debts; however, the centralizers favored federal assumption of states' debts. Nevertheless, the Congress of the Confederation did take two actions with lasting impact. The Land Ordinance of 1785 established the general land survey and ownership provisions used throughout later American expansion. The Northwest Ordinance of 1787 noted the agreement of the original states to give up western land claims and cleared the way for the entry of new states. Once the war was won, the Continental Army was largely disbanded. A very small national force was maintained to man frontier forts and protect against Indian attacks. Meanwhile, each of the states had an army (or militia), and 11 of them had navies. The wartime promises of bounties and land grants to be paid for service were not being met. 
In 1783, Washington defused the Newburgh conspiracy, but riots by unpaid Pennsylvania veterans forced the Congress to leave Philadelphia temporarily. The Second Continental Congress approved the Articles for distribution to the states on November 15 1777. A copy was made for each state and one was kept by the Congress. The copies sent to the states for ratification were unsigned, and a cover letter had only the signatures of Henry Laurens and Charles Thomson, who were the President and Secretary to the Congress. But, the Articles at that time were unsigned, and the date was blank. Congress began the signing process by examining their copy of the Articles on June 27 1778. They ordered a final copy prepared (the one in the National Archives), and that delegates should inform the secretary of their authority for ratification. On July 9, 1778, the prepared copy was ready. They dated it, and began to sign. They also requested each of the remaining states to notify its delegation when ratification was completed. On that date, delegates present from New Hampshire, Massachusetts, Rhode Island, Connecticut, New York, Pennsylvania, Virginia and South Carolina signed the Articles to indicate that their states had ratified. New Jersey, Delaware and Maryland could not, since their states had not ratified. North Carolina and Georgia also didn't sign that day, since their delegations were absent. After the first signing, some delegates signed at the next meeting they attended. For example, John Wentworth of New Hampshire added his name on August 8. John Penn was the first of North Carolina's delegates to arrive (on July 10), and the delegation signed the Articles on July 21 1778. The other states had to wait until they ratified the Articles and notified their Congressional delegation. Georgia signed on July 24, New Jersey on November 26, and Delaware on February 12 1779. Maryland refused to ratify the Articles until every state had ceded its western land claims. On February 2, 1781, the much-awaited decision was taken by the Maryland General Assembly in Annapolis. As the last piece of business during the afternoon Session, "among engrossed Bills" was "signed and sealed by Governor Thomas Sim Lee in the Senate Chamber, in the presence of the members of both Houses… an Act to empower the delegates of this state in Congress to subscribe and ratify the articles of confederation" and perpetual union among the states. The Senate then adjourned "to the first Monday in August next." The decision of Maryland to ratify the Articles was reported to the Continental Congress on February 12. The formal signing of the Articles by the Maryland delegates took place in Philadelphia at noon time on March 1, 1781 and was celebrated in the afternoon. With these events, the Articles entered into force and the United States came into being as a united, sovereign and national state. Congress had debated the Articles for over a year and a half, and the ratification process had taken nearly three and a half years. Many participants in the original debates were no longer delegates, and some of the signers had only recently arrived. The Articles of Confederation and Perpetual Union were signed by a group of men who were never present in the Congress at the same time. The signers and the states they represented were: - New Hampshire: Josiah Bartlett and John Wentworth Jr. 
- Massachusetts Bay: John Hancock, Samuel Adams, Elbridge Gerry, Francis Dana, James Lovell, and Samuel Holten - Rhode Island and Providence Plantations: William Ellery, Henry Marchant, and John Collins - Connecticut: Roger Sherman¹, Samuel Huntington, Oliver Wolcott, Titus Hosmer, and Andrew Adams - New York: James Duane, Francis Lewis, William Duer, and Gouverneur Morris - New Jersey: John Witherspoon and Nathaniel Scudder - Pennsylvania:Robert Morris², Daniel Roberdeau, Jonathan Bayard Smith, William Clingan, and Joseph Reed - Delaware: Thomas McKean, John Dickinson³, and Nicholas Van Dyke - Maryland: John Hanson and Daniel Carroll³ - Virginia: Richard Henry Lee, John Banister, Thomas Adams, John Harvie, and Francis Lightfoot Lee - North Carolina: John Penn, Cornelius Harnett, and John Williams - South Carolina: Henry Laurens, William Henry Drayton, John Mathews, Richard Hutson, and Thomas Heyward Jr. - Georgia: John Walton, Edward Telfair, and Edward Langworthy - ¹ The only person to sign all four great state papers of the United States: the Articles of Association, the United States Declaration of Independence, the Articles of Confederation and the United States Constitution. - ² One of only 2 people to sign three of the great state papers of the United States: the United States Declaration of Independence, the Articles of Confederation and the United States Constitution. - ³ One of only 4 people to sign both the Articles of Confederation and the United States Constitution. Presidents of the Congress The following list is of those who led the Congress of the Confederation under the Articles of Confederation as the Presidents of the United States in Congress Assembled. Under the Articles, the president was the presiding officer of Congress, chaired the Cabinet (the Committee of the States) when Congress was in recess, and performed other administrative functions. He was not, however, a chief executive in the way the successor President of the United States is a chief executive, but all of the functions he executed were under the auspices and in service of the Congress. - Samuel Huntington (March 1, 1781 – July 9, 1781) - Thomas McKean (July 10, 1781 – November 4, 1781) - John Hanson (November 5, 1781 – November 3, 1782) - Elias Boudinot (November 4, 1782 – November 2, 1783) - Thomas Mifflin (November 3, 1783 – October 31, 1784) - Richard Henry Lee (November 30, 1784 – November 6, 1785) - John Hancock (November 23, 1785 – May 29, 1786) - Nathaniel Gorham (June 6, 1786 – November 5, 1786) - Arthur St. Clair (February 2, 1787 – November 4, 1787) - Cyrus Griffin (January 22, 1788 – November 2, 1788) For a full list of Presidents of the Congress Assembled and Presidents under the two Continental Congresses before the Articles, see President of the Continental Congress. Revision and replacement In May 1786, Charles Pinckney of South Carolina proposed that Congress revise the Articles of Confederation. Recommended changes included granting Congress power over foreign and domestic commerce, and providing means for Congress to collect money from state treasuries. Unanimous approval was necessary to make the alterations, however, and Congress failed to reach a consensus. In September, five states assembled in the Annapolis Convention to discuss adjustments that would improve commerce. Under their chairman, Alexander Hamilton, they invited state representatives to convene in Philadelphia to discuss improvements to the federal government. 
Although the states' representatives to the Constitutional Convention in Philadelphia were only authorized to amend the Articles, the representatives held secret, closed-door sessions and wrote a new constitution. The new Constitution gave much more power to the central government, but characterization of the result is disputed. Historian Forrest McDonald, using the ideas of James Madison from Federalist 39, describes the change this way: The constitutional reallocation of powers created a new form of government, unprecedented under the sun. Every previous national authority either had been centralized or else had been a confederation of sovereign states. The new American system was neither one nor the other; it was a mixture of both. Historian Ralph Ketcham comments on the opinions of Patrick Henry, George Mason, and other antifederalists who were not so eager to give up the local autonomy won by the revolution: Antifederalists feared what Patrick Henry termed the "consolidated government" proposed by the new Constitution. They saw in Federalist hopes for commercial growth and international prestige only the lust of ambitious men for a "splendid empire" that, in the time-honored way of empires, would oppress the people with taxes, conscription, and military campaigns. Uncertain that any government over so vast a domain as the United States could be controlled by the people, Antifederalists saw in the enlarged powers of the general government only the familiar threats to the rights and liberties of the people. According to their own terms for modification (Article XIII), the Articles would still have been in effect until 1790, the year in which the last of the 13 states ratified the new Constitution. The Congress under the Articles continued to sit until November 1788, overseeing the adoption of the new Constitution by the states, and setting elections. Historians have given many reasons for the perceived need to replace the articles in 1787. Jillson and Wilson (1994) point to the financial weakness as well as the norms, rules and institutional structures of the Congress, and the propensity to divide along sectional lines. Rakove (1988) identifies several factors that explain the collapse of the Confederation. The lack of compulsory direct taxation power was objectionable to those wanting a strong centralized state or expecting to benefit from such power. It could not collect customs after the war because tariffs were vetoed by Rhode Island. Rakove concludes that their failure to implement national measures "stemmed not from a heady sense of independence but rather from the enormous difficulties that all the states encountered in collecting taxes, mustering men, and gathering supplies from a war-weary populace." The second group of factors Rakove identified derived from the substantive nature of the problems the Continental Congress confronted after 1783, especially the inability to create a strong foreign policy. Finally, the Confederation's lack of coercive power reduced the likelihood for profit to be made by political means, thus potential rulers were uninspired to seek power. When the war ended in 1783, certain special interests had incentives to create a new "merchant state," much like the British state people had rebelled against. In particular, holders of war scrip and land speculators wanted a central government to pay off scrip at face value and to legalize western land holdings with disputed claims. 
Also, manufacturers wanted a high tariff as a barrier to foreign goods, but competition among states made this impossible without a central government.

The Articles are historically important for two major reasons: i) they were the first constitution or governing document for the United States of America and ii) they legally established a union of the thirteen founding states: a Perpetual Union. Early on, tensions developed surrounding the Union, not least because the US Constitution changed the basis of government from confederation to federation. Thomas Jefferson and John C. Calhoun were in their time leading proponents of guaranteeing the constitutional rights of states in federal legislation. Over time, a legal view developed that if the union violated the constitutional rights of states they might rightfully secede. A significant tension in the 19th century surrounded the expansion of slavery (which was generally supported in agricultural Southern states and opposed in industrial Northern states). As the secessionist view gained support in the South, the opposing view in the North was that since the U.S. Constitution declared itself to be "a more perfect union" than the Articles, it too must be perpetual, and also could not be broken without the consent of the other states. This view was promoted by Daniel Webster and Abraham Lincoln. In 1861, these constitutional contracts were cited by President Lincoln against any claims by the seceding states that unilaterally withdrawing from the Union and taking federal property within those states was legal.

The Northwest Ordinance

The Congress established the Northwest Territory around the Great Lakes between 1784 and 1787. In 1787, Congress passed the Northwest Ordinance banning slavery in the new Territory. Congressional legislation divided the Territory into townships six miles square and provided for the sale of land to settlers. The Northwest Territory would eventually become the states of Ohio, Wisconsin, Indiana, Illinois and Michigan.

Problems with the Confederation

The Confederation faced several difficulties in its early years. Firstly, Congress became extremely dependent on the states for income. Also, states refused to require their citizens to pay debts to British merchants, straining relations with Great Britain. Spain prohibited Americans from using the important port of New Orleans, crippling American trade down the Mississippi River.

Due to the post-revolution economic woes, aggravated by inflation, many worried about social instability. This was especially true for those in Massachusetts. The legislature's response to the shaky economy was to put emphasis on maintaining a sound currency by paying off the state debt through levying massive taxes. The tax burden hit those with moderate incomes dramatically. The average farmer paid a third of their annual income to these taxes from 1780 to 1786. Those who couldn't pay had their property foreclosed and were thrown into crowded prisons filled with other debtors. But in the summer of 1786, a Revolutionary War veteran named Daniel Shays began to organize western communities in Massachusetts to stop foreclosures, with force, by prohibiting the courts from holding their proceedings. Later that fall, Shays marched the newly formed "rebellion" into Springfield to stop the state supreme court from gathering. The state responded with troops sent to suppress the rebellion.
After a failed attempt by the rebels to attack the Springfield arsenal, and other small skirmishes, the rebels retreated and the uprising collapsed. Shays retreated to Vermont by 1787. While Daniel Shays was in hiding, the government condemned him to death on the charge of treason. Shays pleaded for his life in a petition which was finally granted by John Hancock on June 17, 1788. With the threat of treason behind him, Shays moved to New York and died on September 25, 1825. US Presidents before George Washington Who was the first president of the United States? Ask any school child and they will readily tell you "George Washington." And of course, they would be correct—at least technically. Washington was inaugurated on April 30, 1789, and yet, the United States continually had functioning governments from as early as September 5, 1774 and operated as a confederated nation from as early as July 4, 1776. During that nearly fifteen-year interval, Congress—first the Continental Congress and then later the Confederation Congress—was always moderated by a duly elected president. This officer was known as the "President of the Continental Congress", and later as the "President of the United States, in Congress Assembled". However, the office of President of the Continental Congress had very little relationship to the office of President of the United States beyond the name. The President of the United States is the head of the executive branch of government, while the President of the Continental Congress was merely the chair of a body that most resembled a legislature, although it possessed legislative, executive, and judicial powers. The following brief biographies profile these "forgotten presidents." Peyton Randolph of Virginia (1723-1775) When delegates gathered in Philadelphia for the first Continental Congress, they promptly elected the former King's Attorney of Virginia as the moderator and president of their convocation. He was a propitious choice. He was a legal prodigy—having studied at the Inner Temple in London, served as his native colony's Attorney General, and tutored many of the most able men of the South at William and Mary College—including the young Patrick Henry. His home in Williamsburg was the gathering place for Virginia's legal and political gentry—and it remains a popular attraction in the restored colonial capital. He had served as a delegate in the Virginia House of Burgesses, and had been a commander under William Byrd in the colonial militia. He was a scholar of some renown—having begun a self-guided reading of the classics when he was thirteen. Despite suffering poor health, he served the Continental Congress as president twice, in 1774 from September 5 to October 21, and then again for a few days in 1775 from May 10 to May 23. He never lived to see independence, yet was numbered among the nation's most revered founders. Henry Middleton (1717-1784) America's second elected president was one of the wealthiest planters in the South and the patriarch of one of the most powerful families anywhere in the nation. His public spirit was evident from an early age. He was a member of his state's Common House from 1744-1747. During the last two years he served as the Speaker. During 1755 he was the King's Commissioner of Indian Affairs. He was a member of the South Carolina Council from 1755-1770. His valor in the War with the Cherokees during 1760-1761 earned him wide recognition throughout the colonies—and demonstrated his leadership abilities while under pressure.
He was elected as a delegate to the first session of the Continental Congress, and when Peyton Randolph was forced to resign the presidency, his peers immediately turned to Middleton to complete the term. He served as the fledgling coalition's president from October 22, 1774 until Randolph was able to resume his duties briefly beginning on May 10, 1775. Afterward, he was a member of the Congressional Council of Safety and helped to establish the young nation's policy toward the encouragement and support of education. In February 1776 he resigned his political involvements in order to prepare his family and lands for what he believed was inevitable war—but he was replaced by his son Arthur who eventually became a signer of both the Declaration of Independence and the Articles of Confederation, served time as an English prisoner of war, and was twice elected Governor of his state. John Hancock (1737-1793) The third president was a patriot, rebel leader, and merchant who signed his name into immortality in giant strokes on the Declaration of Independence on July 4, 1776. The boldness of his signature has made it live in American minds as a perfect expression of the strength and freedom—and defiance—of the individual in the face of British tyranny. As President of the Continental Congress during two widely spaced terms—the first from May 24, 1775 to October 30, 1777 and the second from November 23, 1785 to June 5, 1786—Hancock was the presiding officer when the members approved the Declaration of Independence. Because of his position, it was his official duty to sign the document first—but not necessarily as dramatically as he did. Hancock figured prominently in another historic event—the battle at Lexington: British troops who fought there on April 19, 1775, had known Hancock and Samuel Adams were in Lexington and had come there to capture these rebel leaders. And the two would have been captured, if they had not been warned by Paul Revere. As early as 1768, Hancock defied the British by refusing to pay customs charges on the cargo of one of his ships. One of Boston's wealthiest merchants, he was recognized by the citizens, as well as by the British, as a rebel leader—and was elected President of the first Massachusetts Provincial Congress. After he was chosen President of the Continental Congress in 1775, Hancock became known beyond the borders of Massachusetts, and, having served as colonel of the Massachusetts Governor's Guards, he hoped to be named commander of the American forces—until John Adams nominated George Washington. In 1778 Hancock was commissioned Major General and took part in an unsuccessful campaign in Rhode Island. But it was as a political leader that his real distinction was earned—as the first Governor of Massachusetts, as President of Congress, and as President of the Massachusetts constitutional ratification convention. He helped win ratification in Massachusetts, gaining enough popular recognition to make him a contender for the newly created Presidency of the United States, but again he saw Washington gain the prize. Like his rival, George Washington, Hancock was a wealthy man who risked much for the cause of independence. He was the wealthiest New Englander supporting the patriotic cause, and, although he lacked the brilliance of John Adams or Samuel Adams's capacity to inspire, he became one of the foremost leaders of the new nation—perhaps, in part, because he was willing to commit so much at such risk to the cause of freedom.
Henry Laurens (1724-1792) The only American president ever to be held as a prisoner of war by a foreign power, Laurens was heralded after he was released as "the father of our country," by no less a personage than George Washington. He was of Huguenot extraction, his ancestors having come to America from France after the revocation of the Edict of Nantes made the Reformed faith illegal. Raised and educated for a life of mercantilism at his home in Charleston, he also had the opportunity to spend more than a year in continental travel. It was while in Europe that he began to write revolutionary pamphlets—gaining him renown as a patriot. He served as vice-president of South Carolina in 1776. He was then elected to the Continental Congress. He succeeded John Hancock as President of the newly independent but war-beleaguered United States on November 1, 1777. He served until December 9, 1778, at which time he was appointed Ambassador to the Netherlands. Unfortunately for the cause of the young nation, he was captured by an English warship during his cross-Atlantic voyage and was confined to the Tower of London until the end of the war. After the Battle of Yorktown, the American government regained his freedom in a dramatic prisoner exchange—President Laurens for Lord Cornwallis. Ever the patriot, Laurens continued to serve his nation as one of the three representatives selected to negotiate terms at the Paris Peace Conference in 1782. John Jay (1745-1829) America's first Secretary of State, first Chief Justice of the Supreme Court, one of its first ambassadors, and author of some of the celebrated Federalist Papers, Jay was a Founding Father who, by a quirk of fate, missed signing the Declaration of Independence—at the time of the vote for independence and the signing, he had temporarily left the Continental Congress to serve in New York's revolutionary legislature. Nevertheless, he was chosen by his peers to succeed Henry Laurens as President of the United States—serving a term from December 10, 1778 to September 27, 1779. A conservative New York lawyer who was at first against the idea of independence for the colonies, the aristocratic Jay in 1776 turned into a patriot who was willing to give the next twenty-five years of his life to help establish the new nation. During those years, he won the regard of his peers as a dedicated and accomplished statesman and a man of unwavering principle. In the Continental Congress Jay prepared addresses to the people of Canada and Great Britain. In New York he drafted the State constitution and served as Chief Justice during the war. He was President of the Continental Congress before he undertook the difficult assignment, as ambassador, of trying to gain support and funds from Spain. After helping Franklin, Jefferson, Adams, and Laurens complete peace negotiations in Paris in 1783, Jay returned to become the first Secretary of State, called "Secretary of Foreign Affairs" under the Articles of Confederation. He negotiated valuable commercial treaties with Russia and Morocco, and dealt with the continuing controversy with Britain and Spain over the southern and western boundaries of the United States. He proposed that America and Britain establish a joint commission to arbitrate disputes that remained after the war—a proposal which, though not adopted, influenced the government's use of arbitration and diplomacy in settling later international problems.
In this post Jay felt keenly the weakness of the Articles of Confederation and was one of the first to advocate a new governmental compact. He wrote five Federalist Papers supporting the Constitution, and he was a leader in the New York ratification convention. As first Chief Justice of the Supreme Court, Jay made the historic decision that a State could be sued by a citizen from another State, which led to the Eleventh Amendment to the Constitution. On a special mission to London he concluded the "Jay Treaty," which helped avert a renewal of hostilities with Britain but won little popular favor at home—and it is probably for this treaty that this Founding Father is best remembered. Samuel Huntington (1732-1796) An industrious youth who mastered his studies of the law without the advantage of a school, a tutor, or a master—borrowing books and snatching opportunities to read and research between odd jobs—he was one of the greatest self-made men among the Founders. He was also one of the greatest legal minds of the age—all the more remarkable for his lack of advantage as a youth. In 1764, in recognition of his obvious abilities and initiative, he was elected to the General Assembly of Connecticut. The next year he was chosen to serve on the Executive Council. In 1774 he was appointed Associate Judge of the Superior Court and, as a delegate to the Continental Congress, was acknowledged to be a legal scholar of some respect. He served in Congress for five consecutive terms, during the last of which he was elected President. He served in that office from September 28, 1779 until ill health forced him to resign on July 9, 1781. He returned to his home in Connecticut—and as he recuperated, he accepted more council and bench duties. He again took his seat in Congress in 1783, but left it to become Chief Justice of his state's Superior Court. He was elected Lieutenant Governor in 1785 and Governor in 1786. According to John Jay, he was "the most precisely trained Christian jurist ever to serve his country." Thomas McKean (1734-1817) During his astonishingly varied fifty-year career in public life he held almost every possible position—from deputy county attorney to President of the United States under the Confederation. Besides signing the Declaration of Independence, he contributed significantly to the development and establishment of constitutional government in both his home state of Delaware and the nation. At the Stamp Act Congress he proposed the voting procedure that Congress adopted: that each colony, regardless of size or population, would have one vote—the practice adopted by the Continental Congress and the Congress of the Confederation, and the principle of state equality manifest in the composition of the Senate. And as county judge in 1765, he defied the British by ordering his court to work only with documents that did not bear the hated stamps. In June 1776, at the Continental Congress, McKean joined with Caesar Rodney to register Delaware's approval of the Declaration of Independence, over the negative vote of the third Delaware delegate, George Read—permitting it to be "The unanimous declaration of the thirteen United States." And at a special Delaware convention, he drafted the constitution for that State. McKean also helped draft—and signed—the Articles of Confederation. It was during his tenure of service as President—from July 10, 1781 to November 4, 1782—that news arrived from General Washington in October 1781 that the British had surrendered following the Battle of Yorktown.
As Chief Justice of the Supreme Court of Pennsylvania, he contributed to the establishment of the legal system in that State, and, in 1787, he strongly supported the Constitution at the Pennsylvania Ratification Convention, declaring it "the best the world has yet seen." At sixty-five, after over forty years of public service, McKean resigned from his post as Chief Justice. A candidate on the Democratic-Republican ticket in 1799, McKean was elected Governor of Pennsylvania. As Governor, he followed such a strict policy of appointing only fellow Republicans to office that he became the father of the spoils system in America. He served three tempestuous terms as Governor, completing one of the longest continuous careers of public service of any of the Founding Fathers. John Hanson (1715-1783) He was the heir of one of the greatest family traditions in the colonies and became the patriarch of a long line of American patriots—his great-grandfather died at Lützen beside the great King Gustavus Adolphus of Sweden; his grandfather was one of the founders of New Sweden along the Delaware River; one of his nephews was the military secretary to George Washington; another was a signer of the Declaration; still another was a signer of the Constitution; yet another was Governor of Maryland during the Revolution; and still another was a member of the first Congress; two sons were killed in action with the Continental Army; a grandson served as a member of Congress under the new Constitution; and another grandson was a Maryland Senator. Thus, even if Hanson had not served as President himself, he would have greatly contributed to the life of the nation through his ancestry and progeny. As a youngster he began a self-guided reading of the classics and rather quickly became an acknowledged expert in the juridicalism of Anselm and the practical philosophy of Seneca—both of which were influential in the development of the political philosophy of the great leaders of the Reformation. It was based upon these legal and theological studies that the young planter—his farm, Mulberry Grove, was just across the Potomac from Mount Vernon—began to espouse the cause of the patriots. In 1775 he was elected to the Provincial Legislature of Maryland. Then in 1777, he became a member of Congress, where he distinguished himself as a brilliant administrator. Thus, he was elected President in 1781. He served in that office from November 5, 1781 until November 3, 1782. He was the first President to serve a full term after the full ratification of the Articles of Confederation—and like so many of the Southern and New England Founders, he was strongly opposed to the Constitution when it was first discussed. He remained a confirmed anti-federalist until his untimely death. Elias Boudinot (1741-1802) He did not sign the Declaration, the Articles, or the Constitution. He did not serve in the Continental Army with distinction. He was not renowned for his legal mind or his political skills. He was instead a man who spent his entire career in foreign diplomacy. He earned the respect of his fellow patriots during the dangerous days following the traitorous action of Benedict Arnold. His deft handling of relations with Canada also earned him great praise. After being elected to the Congress from his home state of New Jersey, he served as the new nation's Secretary for Foreign Affairs—managing the influx of aid from France, Spain, and Holland. Then in 1782 he was elected to the Presidency.
He served in that office from November 4, 1782 until November 2, 1783. Like so many of the other early presidents, he was a classically trained scholar, of the Reformed faith, and an anti-federalist in political matters. He was the father and grandfather of frontiersmen—and one of his namesakes eventually became a leader of the Cherokee nation in its bid for independence from the sprawling expansion of the United States. Thomas Mifflin (1744-1800) By an ironic sort of providence, Thomas Mifflin served as George Washington's first aide-de-camp at the beginning of the Revolutionary War, and, when the war was over, he was the man, as President of the United States, who accepted Washington's resignation of his commission. In the years between, Mifflin greatly served the cause of freedom—and, apparently, his own cause—while serving as the first Quartermaster General of the Continental Army. He obtained desperately needed supplies for the new army—and was suspected of making excessive profit himself. Although experienced in business and successful in obtaining supplies for the war, Mifflin preferred the front lines, and he distinguished himself in military actions on Long Island and near Philadelphia. Born and reared a Quaker, he was excluded from their meetings for his military activities. A controversial figure, Mifflin lost favor with Washington and was part of the Conway Cabal—a rather notorious plan to replace Washington with General Horatio Gates. And Mifflin narrowly missed court-martial action over his handling of funds by resigning his commission in 1778. In spite of these problems—and of repeated charges that he was a drunkard—Mifflin continued to be elected to positions of responsibility—as President and Governor of Pennsylvania, delegate to the Constitutional Convention, as well as the highest office in the land—where he served from November 3, 1783 to November 29, 1784. Most of Mifflin's significant contributions occurred in his earlier years—in the First and Second Continental Congresses he was firm in his stand for independence and for fighting for it, and he helped obtain both men and supplies for Washington's army in the early critical period. In 1784, as President, he signed the ratification of the treaty with Great Britain which ended the war. Although a delegate to the Constitutional Convention, he did not make a significant contribution—beyond signing the document. As Governor of Pennsylvania, although he was accused of negligence, he supported improvements of roads, and reformed the State penal and judicial systems. He had gradually become sympathetic to Jefferson's principles regarding States' rights; even so, he directed the Pennsylvania militia to support the Federal tax collectors in the Whiskey Rebellion. In spite of charges of corruption, the affable Mifflin remained a popular figure. A magnetic personality and an effective speaker, he managed to hold a variety of elective offices for almost thirty years of the critical Revolutionary period. Richard Henry Lee (1732-1794) His resolution "that these United Colonies are, and of right ought to be, free and independent States," approved by the Continental Congress July 2, 1776, was the first official act of the United Colonies that set them irrevocably on the road to independence. It was not surprising that it came from Lee's pen—as early as 1768 he proposed the idea of committees of correspondence among the colonies, and in 1774 he proposed that the colonies meet in what became the Continental Congress.
From the first, his eye was on independence. A wealthy Virginia planter whose ancestors had been granted extensive lands by King Charles II, Lee disdained the traditional aristocratic role and the aristocratic view. In the House of Burgesses he flatly denounced the practice of slavery. He saw independent America as "an asylum where the unhappy may find solace, and the persecuted repose." In 1764, when news of the proposed Stamp Act reached Virginia, Lee was a member of the committee of the House of Burgesses that drew up an address to the King, an official protest against such a tax. After the tax was established, Lee organized the citizens of his county into the Westmoreland Association, a group pledged to buy no British goods until the Stamp Act was repealed. At the First Continental Congress, Lee persuaded representatives from all the colonies to adopt this non-importation idea, leading to the formation of the Continental Association, which was one of the first steps toward union of the colonies. Lee also proposed to the First Continental Congress that a militia be organized and armed—the year before the first shots were fired at Lexington; but this and other proposals of his were considered too radical—at the time. Three days after Lee introduced his resolution, in June of 1776, he was appointed by Congress to the committee responsible for drafting a declaration of independence, but he was called home when his wife fell ill, and his place was taken by his young protégé, Thomas Jefferson. Thus Lee missed the chance to draft the document—though his influence greatly shaped it and he was able to return in time to sign it. He was elected President—serving from November 30, 1784 to November 22, 1785, when he was succeeded by the second administration of John Hancock. Elected to the Constitutional Convention, Lee refused to attend, but as a member of the Congress of the Confederation, he contributed to another great document, the Northwest Ordinance, which provided for the formation of new States from the Northwest Territory. When the completed Constitution was sent to the States for ratification, Lee opposed it as anti-democratic and anti-Christian. However, as one of Virginia's first Senators, he helped assure passage of the amendments that, he felt, corrected many of the document's gravest faults—the Bill of Rights. He was the great uncle of Robert E. Lee and the scion of a great family tradition. Nathaniel Gorham (1738-1796) Another self-made man, Gorham was one of the many successful Boston merchants who risked all he had for the cause of freedom. He was first elected to the Massachusetts General Court in 1771. His honesty and integrity won him acclaim, and he was thus among the first delegates chosen to serve in the Continental Congress. He remained in public service throughout the war and into the Constitutional period, though his greatest contribution was his call for a stronger central government. But even though he was an avid federalist, he did not believe that the union could—or even should—be maintained peaceably for more than a hundred years. He was convinced that eventually, in order to avoid civil or cultural war, smaller regional interests should pursue an independent course. His support of a new constitution was rooted more in pragmatism than ideology. When John Hancock was unable to complete his second term as President, Gorham was elected to succeed him—serving from June 6, 1786 to February 1, 1787.
It was during this time that the Congress actually entertained the idea of asking Prince Henry—the brother of Frederick II of Prussia—and Bonnie Prince Charlie—the leader of the ill-fated Scottish Jacobite Rising and heir of the Stuart royal line—to consider the possibility of establishing a constitutional monarchy in America. It was a plan that had much to recommend it, but eventually the advocates of republicanism held the day. During the final years of his life, Gorham was involved in several speculative land deals which nearly cost him his entire fortune. Arthur St. Clair (1734-1818) Born and educated in Edinburgh, Scotland during the tumultuous days of the final Jacobite Rising and the Tartan Suppression, St. Clair was the only president of the United States born and bred on foreign soil. Though most of his family and friends abandoned their devastated homeland in the years following the Battle of Culloden—after which nearly a third of the land was depopulated through emigration to America—he stayed behind to learn the ways of the hated Hanoverian English in the Royal Navy. His plan was to learn of the enemy's military might in order to fight another day. During the global conflict of the Seven Years War—generally known in America as the French and Indian War—he was stationed in the American theater. Afterward, he decided to settle in Pennsylvania, where many of his kin had established themselves. His civic-mindedness quickly became apparent: he helped to organize both the New Jersey and the Pennsylvania militias, led the Continental Army's Canadian expedition, and was elected to Congress. His long years of training in the enemy camp were finally paying off. He was elected President in 1787—and he served from February 2 of that year until January 21 of the next. Following his term of duty in the highest office in the land, he became the first Governor of the Northwest Territory. Though he briefly supported the idea of creating a constitutional monarchy under the Stuarts' Bonnie Prince Charlie, he was a strident Anti-Federalist—believing that the proposed federal constitution would eventually allow for the intrusion of government into virtually every sphere and aspect of life. He even predicted that under the vastly expanded centralized power of the state, the taxing powers of bureaucrats and other unelected officials would eventually confiscate as much as a quarter of the income of the citizens—a notion that seemed laughable at the time but that has proven to be ominously modest in light of our current governmental leviathan. St. Clair lived to see the hated English tyrants who destroyed his homeland defeated. But he despaired that his adopted home might actually create similar tyrannies and impose them upon itself. Cyrus Griffin (1736-1796) Like Peyton Randolph, he was trained in London's Inner Temple to be a lawyer—and thus was counted among his nation's legal elite. Like so many other Virginians, he was an anti-federalist, though he eventually accepted the new Constitution with the promise of the Bill of Rights as a hedge against the establishment of an American monarchy—an idea which still had a good deal of currency. The Articles of Confederation afforded such broad freedoms that he had become convinced that, even with the attendant loss of liberty, some new form of government would be required. A protégé of George Washington—having worked with him on several speculative land deals in the West—he was a reluctant supporter of the Constitutional ratifying process.
It was during his term in the office of the Presidency—the last before the new national compact went into effect—that ratification was formalized and finalized. He served as the nation's chief executive from January 22, 1788 until George Washington's inauguration on April 30, 1789.
- Monday, November 17, 1777, Journals of the Continental Congress, 1774–1789. A Century of Lawmaking, 1774-1873
- "Articles of Confederation, 1777-1781". U.S. Department of State. http://www.state.gov/r/pa/ho/time/ar/91719.htm. Retrieved 2008-01-26.
- Letter, George Washington to George Clinton, September 11, 1783. The George Washington Papers, 1741-1799
- "While Washington and Steuben were taking the army in an ever more European direction, Lee in captivity was moving the other way — pursuing his insights into a fullfledged and elaborated proposal for guerrilla warfare. He presented his plan to Congress, as a "Plan for the Formation of the American Army." Bitterly attacking Steuben's training of the army according to the "European Plan," Lee charged that fighting British regulars on their own terms was madness and courted crushing defeat: "If the Americans are servilely kept to the European Plan, they will … be laugh'd at as a bad army by their enemy, and defeated in every [encounter]… . [The idea] that a decisive action in fair ground may be risqued is talking nonsense." Instead, he declared that "a plan of defense, harassing and impeding can alone succeed," particularly if based on the rough terrain west of the Susquehannah River in Pennsylvania. He also urged the use of cavalry and of light infantry (in the manner of Dan Morgan), both forces highly mobile and eminently suitable for the guerrilla strategy. This strategic plan was ignored both by Congress and by Washington, all eagerly attuned to the new fashion of Prussianizing and to the attractions of a "real" army."
- Murray N. Rothbard, Generalissimo Washington: How He Crushed the Spirit of Liberty, excerpted from Conceived in Liberty, Volume IV, chapters 8 and 41.
- Henry Cabot Lodge, George Washington, Vol. I. http://www.fullbooks.com/George-Washington-Vol-I4.html.
- Friday, February 2, 1781, Laws of Maryland, 1781. An ACT to empower the delegates
- McDonald, pg. 276
- Ralph Ketcham, Roots of the Republic: American Founding Documents Interpreted, pg. 383
- Emory, Bobby (1993). "The Articles of Confederation". Libertarian Nation Foundation. http://libertariannation.org/a/f11e1.html#3. Retrieved 2008-01-26.
- "Religion and the Congress of the Confederation, 1774-89 (Religion and the Founding of the American Republic, Library of Congress Exhibition)". Library of Congress. 2003-10-27. http://www.loc.gov/exhibits/religion/rel04.html.
- "Records of the Continental and Confederation Congresses and the Constitutional Convention". U.S. National Archives and Records Administration. http://www.archives.gov/research/guide-fed-records/groups/360.html#360.2.
- Documents from the Continental Congress and the Constitutional Convention, 1774-1789
- To Form a More Perfect Union: The Work of the Continental Congress & the Constitutional Convention (American Memory from the Library of Congress)
- Rakove 1988, p. 230
- In his book Life of Webster, Sen. Henry Cabot Lodge writes, "It is safe to say that there was not a man in the country, from Washington and Hamilton to Clinton and Mason, who did not regard the new system as an experiment from which each and every State had a right to peaceably withdraw."
A textbook used at West Point before the Civil War, A View of the Constitution, written by Judge William Rawle, states, "The secession of a State depends on the will of the people of such a State."
- "First Inaugural Address of Abraham Lincoln, Monday, March 4, 1861". http://www.yale.edu/lawweb/avalon/presiden/inaug/lincoln1.htm. "...no State upon its own mere motion can lawfully get out of the Union; that resolves and ordinances to that effect are legally void, and that acts of violence within any State or States against the authority of the United States are insurrectionary or revolutionary, according to circumstances."
- R. B. Bernstein, "Parliamentary Principles, American Realities: The Continental and Confederation Congresses, 1774-1789," in Inventing Congress: Origins & Establishment of the First Federal Congress, ed. by Kenneth R. Bowling and Donald R. Kennon (1999), pp. 76-108
- Burnett, Edmund Cody. The Continental Congress: A Definitive History of the Continental Congress From Its Inception in 1774 to March, 1789 (1941)
- Barbara Feinberg, The Articles of Confederation (2002). [For middle school children.]
- Robert W. Hoffert, A Politics of Tensions: The Articles of Confederation and American Political Ideas (1992)
- Lucille E. Horgan, Forged in War: The Continental Congress and the Origin of Military Supply and Acquisition Policy (2002)
- Merrill Jensen, The Articles of Confederation: An Interpretation of the Social-Constitutional History of the American Revolution, 1774-1781 (1959)
- Merrill Jensen, "The Idea of a National Government During the American Revolution", Political Science Quarterly, 58 (1943), 356-79. Online at JSTOR
- Calvin Jillson and Rick K. Wilson, Congressional Dynamics: Structure, Coordination, and Choice in the First American Congress, 1774-1789 (1994)
- Forrest McDonald, Novus Ordo Seclorum: The Intellectual Origins of the Constitution (1985)
- Andrew C. McLaughlin, A Constitutional History of the United States (1935). Online version
- Pauline Maier, American Scripture: Making the Declaration of Independence (1998)
- Jackson T. Main, Political Parties before the Constitution. University of North Carolina Press, 1974
- Jack N. Rakove, The Beginnings of National Politics: An Interpretive History of the Continental Congress (1982)
- Jack N. Rakove, "The Collapse of the Articles of Confederation," in The American Founding: Essays on the Formation of the Constitution, ed. by J. Jackson Barlow, Leonard W. Levy and Ken Masugi. Greenwood Press, 1988, pp. 225-45. ISBN 0313256101
- Klos, Stanley L. (2004). President Who? Forgotten Founders. Pittsburgh, Pennsylvania: Evisum, Inc. 261 pp. ISBN 0-9752627-5-0
- Text Version of the Articles of Confederation
- Articles of Confederation and Perpetual Union
- Articles of Confederation and related resources, Library of Congress
- Today in History: November 15, Library of Congress
- United States Constitution Online - The Articles of Confederation
- Free Download of Articles of Confederation Audio
- Audio narration (mp3) of the Articles of Confederation at Americana Phonic
- The Articles of Confederation, Chapter 45 (see page 253) of Volume 4 of Conceived in Liberty by Murray Rothbard, in PDF format.
The beating heart is an energy source. The energy of the heartbeat exists and, as such, can be transformed. This is what researchers in the U.S. have succeeded in doing: using the heartbeat to power a patient's pacemaker. In an experimental study, they tested a device that converts the energy of the heartbeat into enough electricity to power a pacemaker. Patients could then power their own pacemakers, eliminating the need to replace batteries when they are exhausted, according to the results of this research, published in the Journal of the American Heart Association. In the study, which was presented at the Scientific Sessions of the American Heart Association, the scientists tested an energy-harvesting device that uses piezoelectricity (electric charge generated by movement). In their experiments, the harvester generated over ten times the power required by a modern pacemaker. The next step is the integration of the energy harvester, which is approximately half the size of the batteries currently used in pacemakers; the researchers hope to integrate this technology into commercial pacemakers. According to the study's lead author, Amin Karami, a researcher at the Department of Aerospace Engineering of the University of Michigan (USA), the device is a "promising" tool for pacemaker technology. Existing pacemakers must be replaced five to seven years after implantation, when their batteries run out, which is costly. This type of electricity could also be used in other implantable cardiac devices, such as defibrillators, which have minimal energy requirements.
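A rough sense of the power margin described above can be worked out with a few lines of arithmetic. The sketch below is illustrative only: the heart rate, the energy harvested per beat, and the pacemaker's power draw are assumed values for the sake of the calculation, not figures reported by the study.

```python
# Back-of-the-envelope check of a piezoelectric harvester against a pacemaker's power budget.
# All numbers are illustrative assumptions, not measurements from the study.

heart_rate_bpm = 60                  # assumed resting heart rate
energy_per_beat_j = 10e-6            # assumed harvested energy per beat: 10 microjoules
pacemaker_power_w = 1e-6             # assumed pacemaker draw: about 1 microwatt

beats_per_second = heart_rate_bpm / 60.0
harvested_power_w = energy_per_beat_j * beats_per_second   # average harvested power, watts

margin = harvested_power_w / pacemaker_power_w
print(f"Harvested: {harvested_power_w * 1e6:.1f} uW, "
      f"pacemaker needs: {pacemaker_power_w * 1e6:.1f} uW, "
      f"margin: {margin:.0f}x")
```

With these assumed numbers the harvester would supply about ten times the pacemaker's draw, which matches the kind of margin the article describes.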
Legislative Reorganization Act of 1946, August 2, 1946 The Legislative Reorganization Act of 1946 brought about some of the most significant organizational changes ever made to the U.S. Congress. The act improved legislative oversight of federal agencies after World War II and helped Congress match the growing power of the executive branch in shaping the national agenda. The reorganization drastically reduced the number of standing committees in both the House and the Senate. The act also expanded the Legislative Reference Service (today’s Congressional Research Service), which provides Congress with information on the increasingly wide-ranging, complex issues that come before it. General Records of the U.S. Government, National Archives and Records Administration The U.S. Constitution states that “The Congress shall have Power…To make all Laws.” The original laws enacted by Congress are preserved at the National Archives. This page highlights some of the most historically significant laws Congress has passed throughout the nation’s history.
Surely ever since the first fossils of obviously extinct animals were found, humankind has wondered: "Why did they die?" A poignant question, for it has relevance to us: if extinct animals were wiped out by some catastrophe, couldn't that just as easily happen to us? Could we be found as fossils someday, and would no one know why we died? History: Until recently, people simply knew that dinosaurs went extinct: their fossils were found throughout the Mesozoic era, but were not located in the rock layers (strata) of the Cenozoic era. So, we knew that dinosaurs went extinct some 64-66 million years ago, but that was all. Many wild ideas about how the dinosaurs were rendered extinct were presented over the years. 1980: Few satisfactory answers to the mystery behind the extinction of dinosaurs were offered until 1980, when a group of scientists at the University of California at Berkeley (Luis and Walter Alvarez, Frank Asaro, and Helen Michel) proposed a stunning and convincing mechanism for the "K-T extinction" (meaning the extinction of dinosaurs at the boundary between the Cretaceous period (K) and the Tertiary period (T)). This hypothesis is discussed later. Since the Alvarez hypothesis was first proposed, the search for the "perpetrator" of the K-T extinction has been a thriving area of scientific research. It incorporates scientists from many different fields, including astrophysics, astronomy, geology, paleontology, ecology, geochemistry, and so on. The mystery has drawn extensive media coverage over the last 15 years, as you may know; some paleontologists have since lost interest in the issue, preferring to study how the dinosaurs and their contemporaries lived rather than why they died. Mass Extinctions: But before we dive into the complex issue of the K-T extinction, we need essential background information to understand the basics of the controversy. The "great dying," as it is sometimes called, is an example of a mass extinction: an episode in evolutionary history where more than 50% of all known species living at that time went extinct in a short period of time (less than 2 million years or so). Other Mass Extinctions? We know of several mass extinctions in the history of life; the great dying is not nearly the largest! The largest would be the "Permo-Triassic" extinction, between the Permian and Triassic periods, of the Paleozoic and Mesozoic eras. In this obviously catastrophic event, life on Earth nearly was wiped out: an estimated 90% of all species living at that time were extinguished. We are fairly sure that the extinction was due to many changing global conditions at that time, but even that is not solved yet. The issue has not received much press because the dinosaurs were not involved, but another familiar group, the trilobites, was wiped out, among others. Who Died? How does the K-T extinction compare to this debacle? Well, about 60% of all species that are present below the K-T boundary are not present above the line that divides the "Age of Dinosaurs" and the "Age of Mammals." In fact, dinosaurs were not among the most numerous of the casualties: the worst-hit organisms were those in the oceans. Large groups of organisms, including some members of Foraminifera, Echinodermata, Mollusca, and the marine Diapsida, all were devastated by the K-T event. On land, the Dinosauria of course went extinct, along with the Pterosauria. Mammals and most non-dinosaurian reptiles seemed to be relatively unaffected.
The terrestrial plants suffered to a large extent, except for the ferns, which show an apparently dramatic increase in diversity at the K-T boundary, a phenomenon known as the fern spike. Now we're heading into the tough stuff: the reasons why we have no conclusive answer to the mystery of the K-T event. Several complications make work hard for the scientist-detectives trying to crack this case: The Fossil Record: It's not perfect, as you may know; that's why paleontologists keep finding new fossils: so much is hidden in the rocks! Most data on the K-T event comes from North America, which is one of the few areas known that has a somewhat continuous fossil record (remember, fossils are only formed under certain rare conditions, and are only found in sedimentary rocks). The infamous Hell Creek locality in Montana is one such continuous site enclosing the K-T boundary. UCMP researchers have led and continue to lead expeditions to Hell Creek, gathering fossils from the rich fossil beds. The secret to the K-T event may lie within our collections; who knows! Anyway, we don't know much about what was occurring in the rest of the world at the time of the K-T event. The marine fossil record gives us great hints about what was occurring within the sea, but how applicable is that to what went on in the terrestrial realm? The Nature of Extinction: Extinction is not a simple event; it is not simply the death of all representatives of a group. It is the cessation of the origination of new species that renders a group extinct; if species are constantly dying off and no new ones originate through the process of evolution, then that group will go extinct over time no matter what happens. New dinosaur species ceased to originate around the K-T boundary; the question is, were they killed off (implying causation, especially a catastrophe), or were they not evolving and simply fading away (perhaps implying gradual environmental change)? Time Resolution: Determining the age of rocks or fossils that are millions of years old is not easy; carbon dating only has a reasonable resolution when used with organic material that is less than about 50,000 years old, so it is useless with the 65-million-year-old K-T material. Other methods of age determination are often less accurate or less useful in certain situations. So we don't know exactly when the dinosaurs went extinct, and matching events precisely to give a picture of what was happening at a specific moment in the Mesozoic is not easy. Thus, the ultimate question of a gradual decline of dinosaurs vs. a sudden cataclysm is almost intractable without a wealth of good data. Reconstruction: To truly understand the situation of the dinosaurs around the K-T boundary, we need to understand the paleoecology of that time on Earth. Paleoecology is an extension of the discipline of ecology, attempting to understand the interactions of organisms with their environment, using geological evidence (the rocks tell you what the soil was like, and thus tell a lot about the abiotic (non-living) environment) and paleontological evidence (what plants and animals are found as fossils tell you a lot about the biotic (living) environment). With the problems of the fossil record and time resolution, it is difficult to understand the paleoecology of a region at a specific time in the past. The Signor-Lipps Effect: Proposed by Phil Signor and UCMP's own Jere Lipps, this concept helps us to understand the limitations of the fossil record.
The theory states that groups of organisms may seem to go extinct in the fossil record before they actually do; this is an artifact of the fickle nature of the fossil record rather than actual extinction. Thus, it is possible that some groups of organisms did not go extinct at the K-T boundary, and also possible that some organisms that seemed to have gone extinct earlier may have survived up to the boundary, and then gone extinct. This matter further complicates the important issue of the selectivity of the K-T extinction (discussed below). Falsifiability: Sad but true: many hypotheses about dinosaur extinction sound quite convincing and might even be correct, but, as you know, are not really science if they cannot be proven or disproved. Even with the best hypothesis, such as the impact hypothesis, it is very difficult to prove or disprove whether the dinosaurs were rendered extinct by an event that occurred around the K-T boundary, or whether they were just weakened (or unaffected) by the event. This is not to say that all extinction hypotheses are not science; many are excellent examples of good science, but a linkage of direct causation is a problem. "Why" questions, such as "Why did the dinosaurs die out?" or "Why did dinosaurs evolve?" are among the most difficult questions in paleontology. Ultimately, a time machine would be required to see exactly what killed the dinosaurs. Now that you have a background in the extinction issue, feel free to delve into the modern arena of scientific examination of the "Mystery of the Great Dying."
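The time-resolution point above, about why carbon dating cannot be applied to K-T material, follows from simple decay arithmetic. The short Python sketch below uses only the standard 5,730-year half-life of carbon-14; the ages tested are arbitrary examples chosen for illustration.

```python
# Why radiocarbon dating is useless for 65-million-year-old material:
# after that many half-lives essentially no carbon-14 remains.
C14_HALF_LIFE_YEARS = 5_730.0   # standard carbon-14 half-life

def remaining_fraction(age_years: float) -> float:
    """Fraction of the original carbon-14 still present after age_years."""
    return 0.5 ** (age_years / C14_HALF_LIFE_YEARS)

for age in (5_730, 50_000, 65_000_000):
    # The 65-million-year case underflows to 0.0 in floating point, which is the point.
    print(f"{age:>12,} years: {remaining_fraction(age):.3e} of original C-14 left")
```

At 50,000 years roughly a quarter of a percent of the original carbon-14 remains, which is already near the practical measurement limit; at 65 million years the remaining fraction is indistinguishable from zero.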
A patriline is a line of descent from a male ancestor to a descendant (of either sex) in which the individuals in all intervening generations are male. In a patrilineal descent system (= agnatic descent), an individual is considered to belong to the same descent group as his or her father. This is in contrast to the less common pattern of matrilineal descent. The agnatic ancestry of an individual is that person's pure male ancestry. An agnate is one's genetic relative exclusively through males: a kinsman with whom one has a common ancestor by descent in unbroken male line. Contrary to popular belief, one's agnate may be male or female, provided that the kinship is calculated patrilineally, i.e., only through male ancestors. Traditionally, this concept is applied in determining the names and membership of European dynasties. For instance, because Queen Victoria of the United Kingdom was married to a prince of Saxe-Coburg and Gotha, her son and successor, Edward VII, was a member of that dynasty, and is considered the first British king of the House of Saxe-Coburg-Gotha. (And so, technically, are his descendants in the male line; see Elizabeth II's ancestry.) But Victoria is reckoned to have belonged to her father's House of Hanover, despite her marriage and the fact that by marriage she legally became a member of the Saxon dynasty and acquired the name of that family (Wettin). Agnatically, she was a Hanover, and is considered the last member of that dynasty to reign over Britain. The fact that the Y chromosome (Y-DNA) is paternally inherited enables patrilines, and agnatic kinships, of men to be traced through genetic analysis. Y-chromosomal Adam (Y-MRCA) is the patrilineal human most recent common ancestor, from whom all Y-DNA in living men is descended. Y-chromosomal Adam probably lived between 60,000 and 90,000 years ago, judging from molecular clock and genetic marker studies. Early medical theories In ancient medicine there was a dispute between the one-seed theory, expounded by Aristotle, and the two-seed theory of Galen. By the one-seed theory, the germ of every embryo is contained entirely in the male seed, and the role of the mother is simply as an incubator and provider of food: on this view only a patrilineal relative is genetically related. By the two-seed theory, the embryo is not conceived unless the male and female seed meet: this implies a bilineal, or cognatic, theory of relationship. It may be significant that Galen lived at about the same time that Roman law changed from the agnate to the cognate system of relationships. Common to both theories was the mistaken belief that the female emits seed only when she comes to orgasm. Given that assumption, the evidence for the one-seed theory is the fact that a woman can conceive without coming to orgasm (though this was still a matter of dispute in the ancient world and the Middle Ages). The evidence for the two-seed theory is the fact that a person can look like his or her maternal relatives. These two facts could not be reconciled until the discovery of ovulation. The terms "agnate" (for patrilineal relatives) and "cognate" (for all relatives equally) are taken from Roman law. In Roman times, all citizens were divided by gens (clan) and familia (sept), determined on a purely patrilineal basis, in the same way as the modern inheritance of surnames. (The gens was the larger unit, and was divided into several familiae: a person called "Gaius Iulius Caesar" belonged to the Julian gens and the Caesar family.)
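Since patrilineal reckoning amounts to "follow the father link repeatedly," it can be expressed compactly in code. The sketch below is a toy illustration with an invented three-person pedigree; the names are hypothetical and are not drawn from any real genealogy.

```python
# A minimal sketch of patrilineal (agnatic) descent: starting from any person,
# repeatedly follow the father link. Everyone reached this way lies on the
# individual's patriline, which is why surnames and Y-DNA tend to travel together.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Person:
    name: str
    father: Optional["Person"] = None
    mother: Optional["Person"] = None

def patriline(person: Person) -> List[str]:
    """Return the unbroken male line from `person` back to the earliest recorded male ancestor."""
    line = []
    current: Optional[Person] = person
    while current is not None:
        line.append(current.name)
        current = current.father
    return line

# Hypothetical three-generation example.
grandfather = Person("Gaius (grandfather)")
father = Person("Lucius (father)", father=grandfather)
child = Person("Julia (child)", father=father)

print(" -> ".join(patriline(child)))
# Julia (child) -> Lucius (father) -> Gaius (grandfather)
```

Note that the starting individual can be of either sex, matching the point above that kinship counts as agnatic so long as every intervening generation is male.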
In the early Republic, inheritance could only occur within the family, and was therefore purely agnatic. In Imperial times, this was changed by the Praetorian edict, giving paternal and maternal relatives equal rights. In the Bible The line of descent for monarchs and other main personalities is traced almost exclusively through the principal male figures. Tribal descent, such as whether one is a kohen or a Levite, is still inherited patrilineally in Judaism, as is communal identity as a Sephardi or Ashkenazi Jew. This contrasts with the rule for inheritance of Jewish status in Orthodox Judaism, which is matrilineal. See Davidic line and Matrilineality in Judaism.
- Patrilineal descent of Elizabeth II - an example
- Agnatic seniority
- ^ Murphy, Michael Dean. "A Kinship Glossary: Symbols, Terms, and Concepts". http://www.as.ua.edu/ant/Faculty/murphy/436/kinship.htm. Retrieved 2006-10-05.
- ^ In some cultures a rapist could not be convicted if his victim had conceived, as this was taken as evidence that she had come to orgasm and therefore welcomed his attentions.
Head lice are the most common type of lice. They frequently affect children 3 to 10 years old, particularly during the school year. The most common symptom is itching in the areas where the lice are located. The lice lay eggs near the scalp, and the eggs are "glued" to hair shafts. A nymph hatches from an egg and grows into a mature adult. The eggs that remain once the nymphs have hatched are called nits. Adult lice are the size of sesame seeds, and they produce more eggs to continue the cycle. If lice are not treated, the cycle is repeated every 3 weeks. Seeing head lice can be challenging. You may need a flashlight and a magnifying glass. Using a lice comb may increase the chance of finding them. Lice and nits should not be mistaken for dandruff, however; dandruff is not "glued" to the hair shaft the way nits are. Lice also may be found on the base of the neck and near the ears. It is important to know how to prevent getting and spreading head lice, particularly among school-aged children. However, some myths about lice may affect the prevention and spread of lice and can lead to overuse of medications. Take this test, and see how much you know about the facts of lice! Fact or Fiction? 1. Lice can jump and fly from one person to another. Answer: Fiction. Lice do not have wings, so they cannot fly, and their legs are adapted for grasping hair shafts, not for jumping. Lice are spread from person-to-person contact or from objects that people share. For this reason, people should not share hairbrushes, combs, hats, towels, or headphones. When children have lice, soaking their combs, hairbrushes, and toys in hot water for at least 10 minutes is recommended. Their sheets and towels should be washed in soapy, hot water and dried on the hottest setting in the dryer. Carpet and upholstery should be vacuumed on a regular basis. Objects that cannot be washed should be sealed in a plastic bag for at least 2 weeks to stop the lice cycle. Use of a lice comb such as the LiceMeister Comb is recommended to remove lice and nits. 2. If someone in my family has lice, we all need to be treated. Answer: Fiction. Unless active lice are spotted on the scalp, the use of medications is not recommended. Overusing medications makes them less effective and can pose problems down the road when someone really needs treatment. 3. If someone in my family has lice, my pet or pets should be treated. Answer: Fiction. Lice require human blood to live. Therefore, human lice cannot survive on pets, and pets do not need to receive treatment. 4. My child cannot go to school or day care if he or she has head lice. Answer: It depends. Attending school or day care while lice are active generally is not a good idea, to avoid further spreading. Also, some schools have developed a "No-Nit Policy," which does not allow children to attend if they have nits present. Therefore, it is important to find out what your child's school or day-care center will allow. Several over-the-counter and prescription products are available to treat lice. Discuss these options with a pharmacist to find the best possible treatment. Products containing pyrethrins are extremely effective at treating head lice. They include RID, Pronto, R&C, A-200, and Clear Lice System. They should be applied to the scalp for 10 minutes, then rinsed off. A second application is required 7 to 10 days later to increase the chances of killing any remaining nits.
If you are allergic to chrysanthemums or ragweed, use of those pyrethrin-based products is not recommended. Products containing permethrin, such as Nix, can be applied to a dry scalp for 10 minutes, then rinsed off. Although this product will leave a residue on the scalp for up to 2 weeks, a second application may be necessary 7 to 10 days later if any active lice are still present. Prescription products are available, but they generally are not the best products to use first, because they may have side effects. If your child is less than 2 years old and has lice, contact your primary care physician so that he or she can follow the treatment closely. Although lice-killing products are likely to be recommended during pregnancy, it is important to let your pharmacist know if you are pregnant or breast-feeding so that you can select the safest and most effective product.
(last updated 18 May 2011) (Images: Rössing open pit mine, Namibia (Thomas Siepelmeyer 1987); Ranger open pit mine, Australia; former Lodève open pit mine, France, 1992) Uranium was initially mined mostly in open pit mines such as those pictured above; later, mining was continued in underground mines. After uranium prices on the world market declined in the 1980s, underground mines became too expensive for most deposits; therefore, many mines were shut down. New uranium deposits discovered in Canada have uranium grades of several percent. To keep groundwater out of the mine during operation, large amounts of contaminated water are pumped out and released to rivers and lakes. When the pumps are shut down after closure of the mine, there is a risk of groundwater contamination from the rising water level. (see also Uranium Ore Radiation Properties) The waste rock piles left at mine sites threaten people and the environment after shutdown of the mine due to their release of radon gas and of seepage water containing radioactive and toxic materials. (Image: The former waste rock "pyramids" of Ronneburg, Germany, 1990) Waste rock was often processed into gravel or cement and used for road and railroad construction. VEB Hartsteinwerke Oelsnitz in Saxony has processed 200,000 tonnes of material per year into gravel containing 50 g/t uranium. Thus, gravel containing elevated levels of radioactivity was dispersed over large areas. In situ leaching gains importance when uranium prices decrease. In the USA, in situ leaching is often used. In 1990, in Texas alone, in situ leaching facilities for uranium were operated at 32 sites. In Saxony, Germany, an underground mine converted to an underground in situ leaching mine was operated until the end of 1990 at Königstein near Dresden. In the Czech Republic, the in situ leaching technology was used at a large scale at Stráž pod Ralskem in Northern Bohemia. The advantages of this technology are discussed elsewhere (for details, see Impacts of Uranium In-Situ Leaching). (Images: Atlas Co. uranium mill tailings, Moab, Utah, USA - U.S. DOE Sep. 2010; Rio Algom Quirke tailings (water covered), aerial view - BHP Billiton Aug. 1999; Ranger uranium mill tailings pond, Australia; Olympic Dam tailings, Australia - Strahlendes Klima 2008) The amount of sludge produced is nearly the same as that of the ore milled: at a grade of 0.1% uranium, 99.9% of the material is left over as waste. Apart from the portion of the uranium removed, the sludge contains all the constituents of the ore. As long-lived decay products such as thorium-230 and radium-226 are not removed, the sludge contains 85% of the initial radioactivity of the ore. Due to technical limitations, not all of the uranium present in the ore can be extracted. Therefore, the sludge also contains 5% to 10% of the uranium initially present in the ore. In addition, the sludge contains heavy metals and other contaminants such as arsenic, as well as chemical reagents used during the milling process. Mining and milling removes hazardous constituents in the ore from their relatively safe underground location and converts them to a fine sand, then sludge, whereby the hazardous materials become more susceptible to dispersion in the environment. Moreover, the constituents inside the tailings pile are in a geochemical disequilibrium that results in various reactions causing additional hazards to the environment.
One such reaction: in dry areas, salts containing contaminants can migrate to the surface of the pile, where they are subject to erosion. If the ore contains the mineral pyrite (FeS2), sulfuric acid forms inside the deposit when it is reached by precipitation and oxygen, and this acid causes a continuous, automatic leaching of contaminants.

Radon-222 gas emanates from tailings piles. Its half-life of 3.8 days may seem short, but due to the continuous production of radon from the decay of radium-226, which has a half-life of 1,600 years, radon presents a long-term hazard. Further, because the parent of radium-226, thorium-230 (with a half-life of 80,000 years), is also present, there is continuous production of radium-226. (view Uranium decay series) After about 1 million years, the radioactivity of the tailings, and thus their radon emanation, will have decreased to the point where it is limited only by the residual uranium content, which continuously produces new thorium-230. If, for example, 90% of the uranium contained in an ore with 0.1% grade was extracted during the milling process, the radiation of the tailings stabilizes after 1 million years at a level 33 times that of uncontaminated material. Due to the 4.5 billion year half-life of uranium-238, there is only a minuscule further decrease. (see also Uranium Mill Tailings Radiation Properties)

The radium-226 in tailings continuously decays to the radioactive gas radon-222, whose decay products can cause lung cancer. Some of this radon escapes from the interior of the pile. Radon releases are a major hazard that continues after uranium mines are shut down. The U.S. Environmental Protection Agency (EPA) estimates the lifetime excess lung cancer risk of residents living near a bare tailings pile of 80 hectares at two cases per hundred. Since radon spreads quickly with the wind, many people receive small additional radiation doses. Although the excess risk for the individual is small, it cannot be neglected because of the large number of people concerned. The EPA estimates that the uranium tailings deposits existing in the United States in 1983 would cause 500 lung cancer deaths per century if no countermeasures are taken.

Tailings deposits are subject to many kinds of erosion. Due to the long half-lives of the radioactive constituents involved, the safety of the deposit has to be guaranteed for very long periods of time. After rainfall, erosion gullies can form; floods can destroy the whole deposit; and plants and burrowing animals can penetrate the deposit, dispersing the material, enhancing the radon emanation and making the deposit more susceptible to climatic erosion. When the surface of the pile dries out, the fine sands are blown by the wind over adjacent areas. Storms have repeatedly darkened the sky with radioactive dust over villages located in the immediate vicinity of Wismut's uranium mill tailings piles, and elevated levels of radium-226 and arsenic were subsequently found in dust samples from these villages.

Seepage from tailings piles is another major hazard, posing a risk of contamination of ground and surface water. Residents are also threatened by radium-226 and other hazardous substances such as arsenic in their drinking water supplies and in fish from the area. The seepage problem is particularly important with acidic tailings, as the radionuclides involved are more mobile under acidic conditions.
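The "33 times" figure quoted above can be checked with a short back-of-the-envelope calculation. The sketch below is not taken from the source: it assumes ordinary uncontaminated rock contains roughly 3 ppm uranium (a commonly cited crustal average) and that, once the thorium-230 inherited from the ore has largely decayed, the tailings' activity is supported only by their residual uranium.

```python
# Back-of-the-envelope check of the "33 times uncontaminated material" figure.
# Assumption not taken from the text: ordinary rock contains roughly 3 ppm
# uranium (a commonly cited crustal average). The long-term activity of the
# tailings is taken to be proportional to their residual uranium content.

ORE_GRADE_PPM = 1000            # 0.1% uranium = 1000 ppm
EXTRACTION_EFFICIENCY = 0.90    # 90% of the uranium removed during milling
BACKGROUND_URANIUM_PPM = 3      # assumed for uncontaminated material

residual_uranium_ppm = ORE_GRADE_PPM * (1 - EXTRACTION_EFFICIENCY)   # 100 ppm
long_term_ratio = residual_uranium_ppm / BACKGROUND_URANIUM_PPM      # about 33

# Why it takes on the order of a million years to get there: the radium-226 and
# radon-222 inventory inherited from the ore is supported by thorium-230
# (half-life about 80,000 years), so that part of the activity dies away with
# the Th-230 half-life, leaving only the residual-uranium-supported part.
TH230_HALF_LIFE_YEARS = 80_000
ELAPSED_YEARS = 1_000_000
inherited_th230_left = 0.5 ** (ELAPSED_YEARS / TH230_HALF_LIFE_YEARS)

print(f"Residual uranium in tailings: {residual_uranium_ppm:.0f} ppm")
print(f"Long-term activity relative to ordinary rock: ~{long_term_ratio:.0f}x")
print(f"Fraction of inherited Th-230 left after 1 million years: {inherited_th230_left:.1e}")
```

The result reproduces the factor of about 33 given in the text; the exact value depends on the background uranium concentration assumed for uncontaminated material.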
In tailings containing pyrite, acidic conditions develop automatically due to the inherent production of sulfuric acid, which increases the migration of contaminants into the environment. > View animation of modeled contaminant plume dispersion in groundwater (Split Rock uranium mill tailings site, Wyoming) > View extension of groundwater plumes (Church Rock uranium mill tailings site, New Mexico)

Tailings dams are often not of stable construction. In most cases, they were made by sedimentation of the coarse fraction of the tailings sludge. Some, including those of Culmitzsch and Trünzig in Thuringia, were built on geologic faults and are therefore at risk from earthquakes. As the Thuringian tailings deposits are located in the centre of an area of earthquake risk in the former GDR, they face a risk of dam failure. Moreover, strong rain or snow storms can also cause dam failures. (for details see: Safety of Tailings Dams) It is no surprise that dam failures have occurred again and again.

Occasionally, because of their fine sandy texture, dried tailings have been used for the construction of homes or for landfills. In homes built on or from such material, high levels of gamma radiation and radon were found. The U.S. Environmental Protection Agency (EPA) estimates the lifetime excess lung cancer risk of residents of such homes at 4 cases per 100.

The obvious idea of bringing the tailings back to where the ore was taken from does not, in most cases, lead to an acceptable solution for tailings disposal. Although most of the uranium was extracted from the material, it has not become less hazardous, quite the contrary: most of the contaminants (85% of the total radioactivity and all of the chemical contaminants) are still present, and mechanical and chemical processing has brought the material to a condition in which the contaminants are much more mobile and thus more susceptible to migration into the environment. Therefore, dumping the tailings in an underground mine is not an acceptable option in most cases; there, they would be in direct contact with groundwater once the pumps are halted. The situation is similar for the deposition of tailings in former open pit mines: here also, there is direct contact with groundwater, or seepage presents a risk of groundwater contamination. Only where proven impermeable geologic or man-made layers are present can the risk of groundwater contamination be prevented. An advantage of in-pit deposition is relatively good protection from erosion. (image: tailings disposal in the Bellezane open pit, France, 1992)

In France and Canada, on the other hand, the concept of dumping the tailings into groundwater in former open pits has been pursued or proposed at several sites in recent years. In this case, a highly permeable layer is installed around the tailings to allow free groundwater circulation around them. Since the permeability of the tailings themselves is lower, the proponents anticipate that nearly no exchange of contaminants between tailings and groundwater takes place. A similar method, called "pervious surround disposal", is being tested in Canada for the disposal of uranium mill tailings in lakes. Recent proposals even deny the necessity of an artificial permeable layer around the tailings, since the surrounding rock would provide high enough permeability.

In most cases, tailings have to be dumped on the surface for lack of other options.
Here, the protection requirements can more easily be met by appropriate methods, but additional measures have to be taken to assure protection from erosion.

The untenability of this situation was first recognized in U.S. legislation, which defined legal requirements for the reclamation of uranium mill tailings in 1978 (UMTRCA). On the basis of this law, regulations were promulgated by the Environmental Protection Agency (EPA: 40 CFR 192) and the Nuclear Regulatory Commission (NRC: 10 CFR 40). These regulations define not only maximum contaminant concentrations for soils and admissible contaminant releases (in particular for radon), but also the period of time for which the reclamation measures must remain effective: 200 to 1,000 years. Reclamation thus not only has to assure that the standards are met on completion of the work; for the first time, a long-term perspective is included in such regulations. A further requirement is that the measures taken must assure safe disposal for the prescribed period of time without active maintenance. If these conditions cannot be met at the present site, the tailings must be relocated to a more suitable place. Considering the actual period of time over which the hazards from uranium mining and milling wastes persist, these regulations are of course only a compromise, but they are a first step at least. Regulations for the protection of groundwater were not included in the initial legislation; they were only promulgated in January 1995. Last but not least, public involvement is given an important role in the planning and control of the reclamation action. Based on these regulations, various technologies for the safe and maintenance-free confinement of the contaminants were developed in the United States during subsequent years. The reclamation efforts also include the decontamination of homes in the vicinity that were built from contaminated material or on contaminated landfills.

In Canada, by contrast, authorities decide on a site-by-site basis which measures are to be taken for reclamation; there are no legal requirements. The Atomic Energy Control Board (AECB) has promulgated only rough guidelines, and it decides, together with the mine and mill operators, which measures are necessary. It is therefore no surprise that the Canadian approach results in a much lower level of protection. The proposals for the management of the uranium mill tailings in the Elliot Lake area, Ontario, for example, include no "protective barrier" other than a water cover. > View Rio Algom Quirke Tailings (water covered): Schematic profile · Aerial view Aug. 1999 (BHP Billiton) Water covers for uranium mill tailings dams are also used by Cogéma at Mounana (Gabon) and at St-Priest-la-Prugne (Loire, France).

Wherever tailings are disposed of, the site must be suitable in terms of geology and hydrology. In some circumstances, it may become necessary to move all of the material to an intermediate storage place to allow for the installation of a liner below the final deposit; an example of this procedure was the tailings deposit at Canonsburg, Pennsylvania. In some very unfortunate circumstances, it may even become necessary to move the whole material to a safer site for permanent disposal. This procedure was preferred at 11 sites in the U.S., involving a total of 14.36 million cubic meters of tailings. To prevent seepage of contaminated water, a liner must in many cases be installed below the deposit if no natural impermeable layer is present.
For this purpose, appropriate lining materials have to be selected; a multi-layer liner may be required. To increase mechanical stability, the following management options may be applied: dewatering of the sludge, smoothing of the slopes, and installation of erosion protection. On top of the pile, an appropriate cover has to be installed to protect against the release of gamma radiation and radon gas, infiltration of precipitation, intrusion of plants and animals, and erosion. In most cases this cover consists of several different layers to meet all requirements. Moreover, seepage water must be caught, collected and treated so that only purified water is released to surface waters; in the long term, however, water treatment should no longer be necessary. Finally, it has to be determined whether, and to what extent, contaminated material was used in the surrounding area for construction or landfill purposes. Such contaminated properties should be included in the reclamation program.

Former uranium mine and mill sites often have very poor properties for the isolation of contaminants. Detailed investigations by independent experts have to be performed at such sites before such disposal can be considered.

> See also: Environmental impacts of uranium mining and milling - Slide Talk
St. Louis, MO, November 4, 2009 - Americans continue to get heavier. Most weight control methods short of bariatric surgery are generally considered ineffective in preventing obesity or reducing weight. The term "energy gap" was coined to estimate the change in energy balance (intake and expenditure) behaviors required to achieve and sustain reduced body weight in individuals and populations. In a commentary published in the November 2009 issue of the Journal of the American Dietetic Association, researchers clarify the concept of the energy gap (or energy gaps) more precisely and discuss how it can properly be used as a tool to understand and address obesity.

Investigators from the University of Colorado Denver and the Procter & Gamble Company, Mason, OH, discuss the two key applications of the energy gap concept: prevention of excess weight gain and maintenance of achieved weight loss. The energy gap for prevention of excess weight gain is estimated at about 100 kcal/day in adults and 100-150 kcal/day in children and adolescents. Any combination of increased energy expenditure and decreased energy intake totaling 100 kcal/day in adults and 100-150 kcal/day in children and adolescents could theoretically prevent excess weight gain in 90% of the US population. This suggests that this "small changes" approach could be very effective for preventing excessive weight gain in adults and children.

The energy gap to maintain weight loss is generally much larger, amounting to about 200 kcal/day for a 100 kg person losing 10% of body weight, or 300 kcal/day for the same person losing 15% of body weight. According to James O. Hill, PhD, "This analysis indicates that to create and maintain substantial weight loss (ie, obesity treatment), large behavioral changes are needed. This is in stark contrast to primary obesity prevention in which small behavioral changes can eliminate the small energy imbalance that occurs before the body has gained substantial weight. Because the body has not previously stored this 'new' excess energy, it does not defend against the behavioral strategies as happens when the body loses weight."

The energy gap concept is useful for individualizing behavioral strategies for weight-loss maintenance. For example, if the energy gap for a given weight-loss maintenance is estimated to be 300 kcal/day, this can lead to a specific, individually tailored goal for changing diet and physical activity rather than generic advice to eat less and exercise more. This could be 300 kcal/day of additional physical activity, a reduction of 300 kcal/day from usual energy intake, or a combination of tactics such as adding 150 kcal/day of physical activity and reducing usual energy intake by 150 kcal/day.

The article is "Using the Energy Gap to Address Obesity: A Commentary" by James O. Hill, PhD; John C. Peters, PhD; and Holly R. Wyatt, MD. It appears in the Journal of the American Dietetic Association, Volume 109, Issue 11 (November 2009), published by Elsevier.
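The individual tailoring described above is simple arithmetic, and a short sketch can make it concrete. The example below is illustrative only and is not taken from the published commentary; it assumes the maintenance energy gap scales roughly linearly with the amount of weight lost, at about 20 kcal/day per kilogram, which is consistent with the two figures quoted (200 kcal/day for a 10% loss and 300 kcal/day for a 15% loss in a 100 kg person), and then splits the gap into an activity target and an intake target.

```python
# Illustration of the energy-gap arithmetic described above. The 20 kcal/day
# per kilogram figure is derived from the two numbers quoted in the commentary
# (200 kcal/day for a 10% loss and 300 kcal/day for a 15% loss in a 100 kg
# person); treating the relationship as linear is an assumption made here.

KCAL_PER_KG_LOST = 20.0

def maintenance_energy_gap(start_weight_kg: float, percent_lost: float) -> float:
    """Estimated kcal/day that must be offset to keep the lost weight off."""
    weight_lost_kg = start_weight_kg * percent_lost / 100.0
    return weight_lost_kg * KCAL_PER_KG_LOST

def tailored_plan(gap_kcal_per_day: float, activity_share: float = 0.5) -> dict:
    """Split the gap into an extra-activity target and an intake-reduction target."""
    return {
        "extra_activity_kcal_per_day": round(gap_kcal_per_day * activity_share),
        "intake_reduction_kcal_per_day": round(gap_kcal_per_day * (1 - activity_share)),
    }

if __name__ == "__main__":
    gap = maintenance_energy_gap(start_weight_kg=100, percent_lost=10)  # about 200 kcal/day
    print(f"Estimated maintenance energy gap: {gap:.0f} kcal/day")
    print(tailored_plan(gap))  # e.g. 100 kcal/day more activity + 100 kcal/day less intake
```

For a 100 kg person maintaining a 10% loss, this reproduces the 200 kcal/day gap quoted in the commentary and suggests, for example, 100 kcal/day of extra activity plus a 100 kcal/day reduction in intake.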
Present-day Mumbai was originally made up of seven islands. Artefacts found near Kandivali in northern Mumbai indicate that these islands had been inhabited since the Stone Age. In the 3rd century BCE, they were part of the Maurya empire, ruled by the Buddhist emperor Ashoka. The Hindu rulers of the Silhara dynasty later governed the islands until 1343, when they were annexed by the kingdom of Gujarat. Some of the oldest edifices of the archipelago, the Elephanta Caves and the Walkeshwar temple complex, date to this era.

In 1534, the Portuguese appropriated the islands from Bahadur Shah of Gujarat. They were ceded to Charles II of England in 1661 as part of the dowry, or more precisely the wedding gifts, of Catherine de Braganza. They in turn were leased to the British East India Company in 1668 for a sum of £10 per annum. The company found the deep harbour at Bombay eminently suitable, and the population rose from 10,000 in 1661 to 60,000 by 1675. In 1687, the East India Company transferred its headquarters from Surat to Bombay.

From 1817 the city was reshaped by large civil engineering projects aimed at merging the islands into a single amalgamated mass. This project, the Hornby Vellard, was completed by 1845 and resulted in the area swelling to 438 km². Eight years later, in 1853, India's first railway line was established, connecting Bombay to Thana. During the American Civil War (1861-1865) the city became the world's chief cotton market, resulting in a boom in the economy and subsequently in the city's stature. The opening of the Suez Canal in 1869 transformed Bombay into one of the largest Arabian Sea ports. The city grew into a major urban centre over the next thirty years, owing to improvements in infrastructure and the construction of many of the city's institutions. The population of the city swelled to one million by 1906, making it the second largest in India, after Calcutta. It later became a major base for the Indian independence movement, with the Quit India Movement called by Mahatma Gandhi in 1942 being its most notable event.

After independence, the city incorporated parts of the island of Salsette, expanding to its present-day limits in 1957. It became the capital of the new linguistic state of Maharashtra in 1960. In the late 1970s Bombay witnessed a construction boom, with a significant increase in population owing to the influx of migrants. By 1986 it had overtaken Calcutta as the most populated Indian city. The city's secular fabric was torn in 1992, when large-scale Hindu-Muslim riots caused extensive losses of life and property. A few months later, on March 12, 1993, simultaneous bombings of the city's establishments by the underworld killed around three hundred people. In 1995, the city was renamed Mumbai after the right-wing Shiv Sena party came to power in Maharashtra, in keeping with its policy of renaming colonial institutions with historic local appellations.