Helsinki – The low-frequency, inaudible sounds made by wind power stations are not damaging to human health despite widespread fears that they cause unpleasant symptoms, research published in Finland on Monday said.
A number of studies have already concluded that the audible noise from the energy-generating windmills does not cause health impacts beyond annoyance and sleep disturbance in people living close by.
However, the two-year Finnish project, commissioned by the government, examined the impact of low-frequency — or infrasound — emissions which cannot be picked up by the human ear.
People in many countries have blamed the infrasound waves for symptoms ranging from headaches and nausea to tinnitus and cardiovascular problems, researchers said.
Scientists used interviews, sound recordings and laboratory tests to explore possible health effects on people living within 20 kilometers (12 miles) of the generators.
Yet the findings "do not support the hypothesis that infrasound is the element in turbine sound that causes annoyance," researchers said, adding: "It is more likely that these symptoms are triggered by other factors such as symptom expectancy."
Tests also found no evidence that wind turbine sounds affected heart rates, the study said.
Wind power can be one of the cheapest forms of renewable energy and has spread widely in recent years, not least in China, the United States and Brazil.
Fifteen percent of the EU's electricity comes from wind power, according to 2019 research by industry body WindEurope, with Denmark, Ireland and Portugal the member states most reliant on it.
Opponents of the windmills, which can stand up to 140 meters (460 feet) high, argue they blight the landscape and have an adverse effect on the well-being of people living in the vicinity.
|
Grease traps are designed to intercept the grease and solid waste from a water drainage system before it enters the sewer system. Essentially, a grease trap works by separating the different waste products that have been washed down the drain and sorting them into different compartments based on their viscosity.
Most waste contains small amounts of oils and other chemicals, which enter a septic tank or treatment facility, where they are slowly broken down by microorganisms in the waste water. While smaller amounts of waste can be easily managed, larger amounts such as grease and waste by-products from commercial kitchens, hospitals and factories are a little different. If allowed to enter the sewer system, these will eventually cause a blocked drain or pipe, and can even contaminate another water supply. |
The Nara period (奈良時代, Nara jidai) of the history of Japan covers the years from AD 710 to 794. Empress Genmei established the capital of Heijō-kyō (present-day Nara). Except for a five-year period (740–745), when the capital was briefly moved again, it remained the capital of Japanese civilization until Emperor Kanmu established a new capital, Nagaoka-kyō, in 784, before moving to Heian-kyō, modern Kyoto, a decade later in 794.
Japanese society during this period was predominately agricultural and centered on village life. Most of the villagers followed Shintoism, a religion based on the worship of natural and ancestral spirits named kami.
The capital at Nara was modeled after Chang'an, the capital city of the Tang dynasty. In many other ways, the Japanese upper classes patterned themselves after the Chinese, including adopting the Chinese writing system, Chinese fashion, and a Chinese version of Buddhism.
Nara period literature
Concentrated efforts by the imperial court to record its history produced the first works of Japanese literature during the Nara period. Works such as the Kojiki and the Nihon Shoki were political, used to record and therefore justify and establish the supremacy of the rule of the emperors within Japan.
With the spread of written language, the writing of Japanese poetry, known in Japanese as waka, began. The largest and longest-surviving collection of Japanese poetry, the Man'yōshū, was compiled from poems mostly composed between 600 and 759 CE. This, and other Nara texts, used Chinese characters to express the sounds of Japanese, known as man'yōgana.
Economic, livelihood, and administrative developments
Before the Taihō Code was established, the capital was customarily moved after the death of an emperor because of the ancient belief that a place of death was polluted. Reforms and bureaucratization of government led to the establishment of a permanent imperial capital at Heijō-kyō, or Nara, in AD 710. The capital was briefly moved (for reasons described later in this section) to Kuni-kyō (present-day Kizugawa) in 740–744, to Naniwa-kyō (present-day Osaka) in 744–745, to Shigarakinomiya (紫香楽宮, present-day Shigaraki) in 745, and back to Nara in 745. Nara was Japan's first truly urban center. It soon had a population of 200,000 (representing nearly 7% of the country's population) and some 10,000 people worked in government jobs.
Economic and administrative activity increased during the Nara period. Roads linked Nara to provincial capitals, and taxes were collected more efficiently and routinely. Coins were minted, if not widely used. Outside the Nara area, however, there was little commercial activity, and in the provinces the old Shōtoku land reform systems declined. By the mid-eighth century, shōen (landed estates), one of the most important economic institutions in medieval Japan, began to rise as a result of the search for a more manageable form of landholding. Local administration gradually became more self-sufficient, while the breakdown of the old land distribution system and the rise of taxes led to the loss or abandonment of land by many people who became the "wave people" (furōsha). Some of these formerly "public people" were privately employed by large landholders, and "public lands" increasingly reverted to the shōen.
Factional fighting at the imperial court continued throughout the Nara period. Imperial family members, leading court families such as the Fujiwara, and Buddhist priests all contended for influence. Earlier during this period, Prince Nagaya seized power at the court after the death of Fujiwara no Fuhito. Fuhito was succeeded by four sons, Muchimaro, Umakai, Fusasaki, and Maro. They put Emperor Shōmu, the son of Fuhito's daughter, on the throne. In 729, they arrested Nagaya and regained control. However, a major outbreak of smallpox spread from Kyūshū in 735, and all four brothers died two years later, temporarily reducing Fujiwara dominance. In 740, a member of the Fujiwara clan, Hirotsugu, launched a rebellion from his base in Fukuoka, Kyushu. Although the rebellion was defeated, the Emperor was deeply shaken by these events, and he moved the palace three times in only five years from 740, until he eventually returned to Nara. In the late Nara period, financial burdens on the state increased, and the court began dismissing nonessential officials. In 792 universal conscription was abandoned, and district heads were allowed to establish private militia forces for local police work. Decentralization of authority became the rule despite the reforms of the Nara period. Eventually, to return control to imperial hands, the capital was moved in 784 to Nagaoka-kyō and in 794 to Heian-kyō (literally Capital of Peace and Tranquility), about twenty-six kilometers north of Nara. By the late eleventh century, the city was popularly called Kyoto (capital city), the name it has had ever since.
Cultural developments and the establishment of Buddhism
Some of Japan's literary monuments were written during the Nara period, including the Kojiki and Nihon Shoki, the first national histories, compiled in 712 and 720 respectively; the Man'yōshū, an anthology of poems; and the Kaifūsō, an anthology written in Chinese by Japanese emperors and princes.
Another major cultural development of the era was the permanent establishment of Buddhism. Buddhism was introduced by Baekje in the sixth century but had a mixed reception until the Nara period, when it was heartily embraced by Emperor Shōmu. Shōmu and his Fujiwara consort were fervent Buddhists and actively promoted the spread of Buddhism, making it the "guardian of the state" and a way of strengthening Japanese institutions.
During Shōmu's reign, the Tōdai-ji (literally Eastern Great Temple) was built. Within it was placed the Great Buddha Daibutsu: a 16-metre-high, gilt-bronze statue. This Buddha was identified with the Sun Goddess, and a gradual syncretism of Buddhism and Shinto ensued. Shōmu declared himself the "Servant of the Three Treasures" of Buddhism: the Buddha, the law or teachings of Buddhism, and the Buddhist community.
Although these efforts stopped short of making Buddhism the state religion, Nara Buddhism heightened the status of the imperial family. Buddhist influence at court increased under the two reigns of Shōmu's daughter. As Empress Kōken (r. 749–758) she brought many Buddhist priests into court. Kōken abdicated in 758 on the advice of her cousin, Fujiwara no Nakamaro. When the retired empress came to favor a Buddhist faith healer named Dōkyō, Nakamaro rose up in arms in 764 but was quickly crushed. Kōken charged the ruling emperor with colluding with Nakamaro and had him deposed. Kōken reascended the throne as Empress Shōtoku (r. 764–770).
The empress commissioned the printing of 1 million prayer charms — the Hyakumantō Darani — many examples of which survive. The small scrolls, dating from 770, are among the earliest printed works in the world. Shōtoku had the charms printed to placate the Buddhist clergy. She may even have wanted to make Dōkyō emperor, but she died before she could act. Her actions shocked Nara society and led to the exclusion of women from imperial succession and the removal of Buddhist priests from positions of political authority.
Many of the Japanese artworks and treasures imported from other countries during the era of Emperor Shōmu and Empress Shōtoku are preserved in the Shōsō-in of Tōdai-ji temple. They are called Shōsōin treasures and illustrate the cosmopolitan culture known as Tempyō culture. The imported treasures show the cultural influences of Silk Road regions, including China, Korea, India, and the Islamic world. The Shōsō-in also holds more than 10,000 paper documents, the so-called Shōsōin documents (正倉院文書). These are records written on the reverse sides of sutras or on the wrappings of imported items, which survived because discarded official documents were reused. The Shōsōin documents contribute greatly to research on the political and social systems of the Nara period, and they also trace the development of Japanese writing systems (such as katakana).
The first authentically Japanese gardens were built in the city of Nara at the end of the eighth century. Shorelines and stone settings were naturalistic, different from the heavier, earlier continental mode of constructing pond edges. Two such gardens have been found in excavations; both were used for poetry-writing festivities.
The Nara court aggressively imported knowledge of Chinese civilization from the Tang dynasty by sending diplomatic envoys, known as kentōshi, to the Tang court every twenty years. Many Japanese students, both lay people and Buddhist priests, studied in Chang'an and Luoyang. One student, Abe no Nakamaro, passed the Chinese civil service examination and was appointed to governmental posts in China. He served as Governor-General of Annam, or Chinese Vietnam, from 761 through 767. Many students who returned from China, such as Kibi no Makibi, were promoted to high government posts.
Tang China never sent official envoys to Japan, for Japanese kings, or emperors as they styled themselves, did not seek investiture from the Chinese emperor. A local Chinese government in the Lower Yangzi Valley sent a mission to Japan to return Japanese envoys who had entered China through Balhae. The Chinese local mission could not return home due to the An Lushan Rebellion and remained in Japan.
The Hayato people (隼人) in Southern Kyushu frequently resisted rule by the Yamato dynasty during the Nara period. They are believed to be of Austronesian origin and had a unique culture that was different from that of the Japanese people. However, they were eventually subjugated by the Ritsuryō state.
Relations with the Korean kingdom of Silla were initially peaceful, with regular diplomatic exchanges. However, the rise of Balhae north of Silla destabilized Japan-Silla relations. Balhae sent its first mission in 728 to Nara, which welcomed them as the successor state to Goguryeo, with which Japan had been allied until Silla unified the Three Kingdoms of Korea.
- 710: Japan's capital is moved from Fujiwara-kyō to Heijō-kyō, modeled after China's capital Chang'an
- 712: The collection of tales Kojiki
- 717: The Hōshi Ryokan is founded, and it survives to become Japan's (and the world's) second oldest known hotel in 2012. (The oldest was founded in 705.)
- 720: The collection of tales Nihon Shoki
- 735–737: A devastating smallpox epidemic spread from Kyushu to eastern Honshu and Nara, killing an estimated one-third of the Japanese population in these areas. The epidemic is said to have led to the construction of several prominent Buddhist structures during this time period as a form of appeasement.
- 743: Emperor Shōmu issues a rescript to build the Daibutsu (Great Buddha), later to be completed and placed in Tōdai-ji, Nara
- 752: The Great Buddha (Daibutsu) at Tōdai-ji was completed
- 759: The poetic anthology Man'yōshū
- 784: The emperor moves the capital to Nagaoka
- 788: The Buddhist monk Saichō founds the monastery of Mt Hiei, near Kyoto, which becomes a vast ensemble of temples
- Dolan, Ronald E. and Worden, Robert L., ed. (1994) "Nara and Heian Periods, A.D. 710–1185" Japan: A Country Study. Library of Congress, Federal Research Division.
- Ellington, Lucien (2009). Japan. Santa Barbara: ABC-CLIO. p. 28. ISBN 978-1-59884-162-6.
- Shuichi Kato; Don Sanderson (15 April 2013). A History of Japanese Literature: From the Manyoshu to Modern Times. Routledge. pp. 12–13. ISBN 978-1-136-61368-5.
- Shuichi Kato; Don Sanderson (15 April 2013). A History of Japanese Literature: From the Manyoshu to Modern Times. Routledge. p. 24. ISBN 978-1-136-61368-5.
- Bjarke Frellesvig (29 July 2010). A History of the Japanese Language. Cambridge University Press. pp. 14–15. ISBN 978-1-139-48880-8.
- See Wybe Kuitert, Two Early Japanese Gardens 1991
- Lockard, Craig A. (2009). Societies Networks And Transitions: Volume B From 600 To 1750. Wadsworth. pp. 290–291. ISBN 978-1-4390-8540-0.
- William George Aston says this in his note, see Nihongi: Chronicles of Japan from the Earliest Times to A.D. 697, translated from the original Chinese and Japanese by William George Aston. Book II, note 1, page 100. Tuttle Publishing. Tra edition (July 2005). First edition published 1972. ISBN 978-0-8048-3674-6
- Kakubayashi, Fumio. 1998. 隼人 : オーストロネシア系の古代日本部族' Hayato : An Austronesian speaking tribe in southern Japan.'. The bulletin of the Institute for Japanese Culture, Kyoto Sangyo University, 3, pp.15-31 ISSN 1341-7207.
- The Hayato dance appears repeatedly in the Kojiki, Nihon Shoki, and Shoku Nihongi, performed on the occasion of paying tribute to the court and for the benefit of foreign visitors.
- Suzuki, Akihito (July 2011). "Smallpox and the Epidemiological Heritage of Modern Japan: Towards a Total History". Medical History. 55 (3): 313–318. doi:10.1017/S0025727300005329. PMC 3143877. PMID 21792253.
- Farris, William Wayne (2017). The Historical Demography of Japan to 1700 (Routledge Handbook of Premodern Japanese History). Abingdon, United Kingdom: Routledge. pp. 252–253. ISBN 978-0415707022.
- Kohn, George C. (2002). Encyclopedia of Plague and Pestilence: From Ancient Times to the Present. Princeton, New Jersey: Checkmark Books. p. 213. ISBN 978-0816048939.
- Jannetta, Ann Bowman (2014). Epidemics and Mortality in Early Modern Japan. New York, New York: Princeton University Press. pp. 65–67. ISBN 978-0816048939.
- Brown, Delmer M. (1993). Cambridge History of Japan: Ancient Japan.
- Farris, William (1993). Japan's Medieval Population: Famine, Fertility, and Warfare in a Transformative Age. University of Hawai'i Press, Honolulu.
- Ooms, Herman (2009). Imperial Politics and Symbolics in Ancient Japan: The Tenmu Dynasty, 650–800.
- Sansom, George Bailey (1978). Cambridge History of Japan: Ancient Japan.
- Kornicki, Peter F. (2012). "The Hyakumantō darani and the origins of printing in eighth-century Japan". International Journal of Asian Studies. 9: 43–70. doi:10.1017/S1479591411000180.
- Bender, Ross (2012). Friday, Karl (ed.). "Emperor, Aristocracy, and the Ritsuryō State: Court Politics in Nara". Japan Emerging: Premodern History to 1850. Westview Press. Retrieved October 11, 2012.
- Kojima, Noriyuki (1994). Shin Nihon Koten Bungaku Zenshū: Nihon Shoki (vol. 1). Shōgakukan. ISBN 978-4-09-658002-8.
- This article incorporates public domain material from the Library of Congress Country Studies website http://lcweb2.loc.gov/frd/cs/. – Japan
|
The Father of American Psychology
William James was the first American psychologist. He has been quoted as saying that “the first psychology class I ever attended was the one I taught.” This course was offered at Harvard University in 1875, and it began the study of psychology in America. William strove to understand human consciousness as a whole; he did not compartmentalize the psyche. His vast interests included religious experiences, the nature of belief, free will and the human instincts, to name a few. In 1890 he published “The Principles of Psychology,” in which he developed a comprehensive theory of consciousness based upon laboratory findings and philosophical speculation. He was also one of the founders of the American Society for Psychical Research. William defined psychology as “the description and explanation of states of consciousness as such.”
Five aspects of William James’ theory of mental life are:
1. Experience is complete chaos without selective interest. Before something can be experienced it must be attended to.
2. Thoughts emerge from a stream of consciousness. An individual thought receives its force, focus and direction from the thoughts which precede it.
3. Pragmatism, which holds that truth is to be tested by the practical consequences of belief. It can be summed up with the phrase: “whatever works, is likely true.”
4. The self is really made up of many selves that exist in a fluctuating field.
5. Human beings have animal instincts, such as fear, that are inborn rather than taught.
William James was one of the most original thinkers in the field of depth psychology. His ideas continue to impact how we conceive ourselves and our world. For further reading I would recommend “The Varieties of Religious Experience” and “The Will to Believe.” |
- "Inheritance" by Hannah, age 14, East Boston, Mass.
Over my nine years of teaching visual art in schools, surrealism has been one of my favorite styles of art to explore with my classes and a consistent student favorite, as well. Surrealism originated as a cultural, literary and artistic movement in the early 1900s, and it aimed to reject the conventional — to break free from the limitations of reality and the conscious, rational mind. The name "surrealism" in and of itself expresses this fundamental goal, with the roots of the word — sur indicating "above" or "beyond," and real meaning "fact" or "reality" — converging to literally mean "beyond reality."
Artists who championed the surrealist movement embraced the bizarre, the strange and uncanny, the unconventional or impossible, and the unconscious. Surrealists like Salvador Dalí, Frida Kahlo, Max Ernst, Dorothea Tanning and Remedios Varo all put paint to canvas to create dreamlike — or sometimes nightmarish — surrealist scenes rife with symbolic meaning. Contemporary surrealist artist Vladimir Kush emulates the surrealist style in his work, as well.
For kids and teens, surrealism presents an opportunity to exercise their imaginations. When pushed to reject or transform the normal and rational, young artists are able to flex their creative muscles, often coming up with far more original and exciting artistic compositions than are possible with realistic landscapes or still-life art.
Surrealism also offers a particularly expressive mode for students, as surrealist artwork can contain symbolism unrestrained by the laws of reality. For example, students might choose to communicate the concept of time by incorporating an hourglass into their artwork; to represent the notion of family with a home or a human heart; or the idea of transformation with a caterpillar and butterfly.
- "Break Free and Bloom" by Hawa, age 13, Winooski
In one of my favorite surrealist art units, I have students begin by choosing a simple object — a teapot, a pair of headphones, a flower — and inventing a surrealist scene based around it. As they progress, students can build symbolism into the imagery they draw around it or forgo this thought process and simply let their unconscious mind guide them as they create as strange a scene as they can imagine.
Portraiture also lends itself well to surrealism, as young artists can easily transform a portrait to incorporate surrealist qualities by playing with the background or context of the portrait, or transforming aspects of the person portrayed.
Another personal favorite project — and a wonderful, accessible art-making opportunity for children who become frustrated with drawing and painting — is surrealist collage. By using scissors to carefully and closely cut out parts of magazine pictures or photos, then gluing those different pieces onto one background image, young artists can build incredible scenes.
Getting Started: Surrealist Drawing
- "New Life," by Elsa, age 13, East Boston, Mass.
- Have your young artist choose an object or person to draw.
- Encourage them to sketch out ideas for their surrealist drawing before beginning their final draft. Ask guiding questions.
- How might you change the scale (the size of one thing in relation to another) in your picture to make it surrealist? Could you make something normally small very big, and something normally big smaller? Example: Draw a person or a whole city inside of a glass bottle, or an ocean with a desert island inside of a tea cup. Draw a bottle of soda as tall as a building in a city skyline.
- What would be a very strange or impossible setting in which to find this object or person? Examples: Draw a person's face embedded in the trunk of a tree, or a person underwater. Draw a clock floating in outer space.
- Could you make something in this picture floating or flying that cannot float or fly in real life?
- Can you combine two things in your picture into one, or replace part of an object or person with something else? Example: Replace the sails on a boat with butterfly wings. Replace the leaves on a tree with pages from a book. Replace a person's hair with flowers or fire.
Getting Started: Surrealist Collage
- Gather magazines from which your young artist can choose images to incorporate into their collage.
- Have your young artist select a background image onto which the parts of other pictures will be glued.
- Depending on the child's age, you may wish to model carefully cutting around edges of images for them, or help them with the step of cutting out the parts of their chosen images.
- Have them use a glue stick to glue and attach the pieces of their surrealist collage onto their background images.
- Pro tip: Parts of images can be layered to look more like one surrealist scene. Example: Cutting out the opening of a window or doorway in an image can allow you to layer a different setting underneath. This same trick can also allow you to make it look like a different piece of the picture is entering through the window or door by tucking the cut-out piece underneath the edge of the doorway.
Please note that many of the most prominent surrealist artists (particularly Salvador Dalí and Frida Kahlo) often incorporated adult themes or imagery into their artwork. Be aware of this when searching online or exploring the topic of surrealist art together. |
Colonial America's oldest unsolved mystery involved remains that have been known only as "JR102C," or "JR" for short, but their owner's true name may have finally been uncovered. The bones were found, buried in a coffin, under an old roadbed in Jamestown in 1996, WTKR reports. Researchers knew the skeletal remains belonged to a 19-year-old man who may have been from Europe, who had probably been living in Jamestown for a few years, and who likely had the status of a gentleman (because of the coffin). His right leg bones were twisted and broken below the knee, and a lead musket ball and lead shot were found there; researchers determined that's what killed him. (The ammunition would have ruptured a major artery, NPR explains.)
Now, the Jamestown Rediscovery Project says, new research has uncovered a 1624 duel between George Harrison and Richard Stephens. The remains may belong to Harrison, who was shot in the leg and died from the wound. "This wound shows that the person was killed by getting hit in the side of the knee. So in a duel, you stand sideways and this would come through like that," says a director for the project. However, one mystery still remains: “That’s a combat round. It’s almost like a shotgun but it also has a main bullet. So you wouldn’t think unless somebody was cheating in the duel that they would have that kind of a load." It's not the only recent nefarious discovery at Jamestown—scientists found evidence of cannibalism, as well. |
The need to encourage cultural and racial diversity among sign language interpreters has been recognized by the RID; as of the 2017 annual report, RID’s national membership is 87% white/Caucasian and 96% hearing. Unfortunately, Deaf interpreter education is significantly sparser than that available to hearing interpreters. The vast majority of traditional interpreter education programs are not equipped to provide the unique curriculum needed to support Deaf interpreting students. In addition, we recognize that 40 hours of training is not nearly enough to equip even the most motivated of Deaf interpreting students with the skills needed to work professionally.
Mentoring has been proven to be an effective tool in interpreter education, so we developed the Deaf Interpreter Academy (DIA) program. This new program embodies three initiatives: DIA in Interpreter Training Programs; DIA POC/T Mentoring Program; and DIA Next Step Advanced Training for Deaf Interpreters.
The pilot DIA POC/T program invited six Deaf interpreter POC candidates and 12 mentors to work in collaboration. The 17-week program included weekly mentoring, observations, and practicum experiences. One unique factor of this program is that people of color provided the services to people of color in order to create a safe space in which to discuss shared experiences.
This workshop will provide participants with an overview of the initiative and the fundamentals of the program. Discussion will include power and privilege insights, the intersectionality paradigm shift, and ways to create collaborative opportunities in your own communities.
After a short panel discussion introducing some of the pilot program participants, and an overview of the program, this workshop will guide small group discussion brainstorming ways in which this program can be emulated in local communities. |
Santorini, officially known by its ancient Greek name Thera, is one of the Cyclades, small islands in the southern Aegean, each with its own long history. Thera-Santorini, born of gigantic volcanic eruptions, has two important archaeological sites: the prehistoric city at Akrotiri and the ruins of the ancient city-state Thera on the difficult-to-access Mesa Vouno.
It is only natural, therefore, that many books have been written and many important international conferences organized relating to issues of historical, archaeological and geophysical interest. By contrast, the bibliography on Santorini as an historical wineland, with its interesting singularities both in the system of training the vines and the processing of the grapes in the kánaves, as the winemaking installations of preindustrial Santorini are called, is poor.
These singularities were studied in the framework of a research project conducted in the late 1960s by the Wine Institute, research foundation of the then Ministry of Agriculture. The results with regard to the viniferous quality of certain grape varieties cultivated for centuries on the island surpassed all expectations and in 1972 the responsible committee recommended the recognition and protection of the place name Santorini as “Appellation of Origin of High Quality” for certain types of white wines of the island. The relevant ministerial decision defined the terms that the vineyards of the islands of Thera and Therasia must fulfil, as well as the conditions of processing the grapes, the musts and the wines, so that the wine produced is entitled to bear the name of the island on the label of the bottle.
Almost two decades rolled by until winemaking conditions in the kánaves of preindustrial Santorini began to adapt to the new era ushered in by the legislative specifications. In the meantime, two large modern wineries had begun operating: the first, of capacity 2,000 tonnes, was built and opened in 1980 by the Boutaris Company, while the second, of capacity 3,000 tonnes, was constructed on its own 2.9 ha plot by the Union of Santorini Cooperatives and was inaugurated in 1991.
In 1993, twenty-one years after the toponym Santorini had been established in the market as an Appellation of Origin, the Stylianos and Fany Boutaris Foundation decided to produce a collective volume on the natural and cultural wealth of vinicultural Santorini, whose place name denoted an island of particular beauty, as well as an offspring of this island, the wine santorini.
Another twenty years have elapsed and this book, published in 1994 in three languages (Greek, French, English), has long become a collector’s item, difficult to come by, which depicted the transitional period from preindustrial Santorini of the nineteenth century to Santorini of the last quarter of the twentieth century. During these last twenty years, the picture of the island has changed radically. The traditional way of training the vines continues, but the methods of modern technology which are now applied in all the wineries, large and small, have enhanced the viniferous wealth of native grape varieties with the production of a wide range of old and new wine types, the distinctive aromas and tastes of which bear witness to the volcanic origin of the soil that the vine roots penetrate. Because the Santorini vineyard, never infected by phylloxera, is one of the very few self-rooted vineyards in Europe.
Furthermore, in 2011 implementation commenced of a “Strategic Plan” to promote Greek wines, under the umbrella of a special European Union Programme to publicize the wines of producing countries in the foreign markets. Among the measures foreseen, four Greek “local varieties” were selected as ambassadors of Greek wine production, due to the quality of the wines produced from these indigenous grape varieties, adapted for centuries to the ecosystem of their region of cultivation. Among these four “local varieties”, the only white grape variety selected is the Asýrtiko of Santorini, the dominant cultivar, which covers 75% of the total vineyard of the island.
All these favourable developments led the Union of Santorini Cooperatives to propose, in agreement with all the winemaking agencies of the island, the publication of a new book on vinicultural Santorini. This aims to present to a wider reading public the Santorini we love, the “daughter of climactic wrath”, as the Nobel-laureate poet Odysseus Elytis dubbed the island born of the anger of the volcano, as well as the child of its vines, the “glory of crystal” of another Greek poet, Nikos Kavvadias.
The book is dedicated to the vinegrowers of the island, who have kept viticulture alive through the centuries on this rock of the Aegeis. |
Obesity causes number of heartburn sufferers to soar (and women are more likely to be affected)
Huge rise in people suffering from acid reflux, which causes heartburn, linked to obesity and fatty foods
Experts are concerned because reflux can trigger oesophageal cancer, which is also on the increase
Obesity is driving a 50 per cent rise in people suffering acid reflux over the last decade, according to new research.
Experts are concerned because reflux, one of the main causes of heartburn, can trigger oesophageal cancer, which is also on the increase.
The condition, where acid from the stomach leaks up into the gullet, or oesophagus, has been linked to obesity, diets high in fatty foods, alcohol and smoking.
Heartburn: The number of people suffering acid reflux has jumped almost 50 per cent in a decade, according to researchers
Obesity increases reflux because abdominal fat puts pressure on the ring of muscle at the bottom of the oesophagus – the 10-inch tube connecting the throat to the stomach – which normally prevents stomach acid from flowing back.
However, some people develop acid reflux for no known reason while others have a problem with the muscle itself.
Symptoms of the condition include heartburn, an unpleasant sour taste in the mouth caused by stomach acid coming back up the gullet, and difficulty swallowing.
It is treated with advice on lifestyle such as losing weight and acid-suppressing drugs.
The latest research found the proportion of people suffering reflux rose from 11.6 per cent in 1995-97 to 17.1 per cent in 2006-09, a jump of almost 50 per cent.
The research involving almost 30,000 people in Norway also found women are more at risk than men of developing the condition, known medically as gastro-oesophageal reflux disease (Gord).
Middle-aged people suffered the most severe symptoms, said the study published in the medical journal Gut.
Between 1995-97 and 2006-09 the prevalence of acid reflux symptoms rose 30 per cent, while that of severe symptoms rose by 24 per cent.
The prevalence of acid reflux symptoms experienced at least once a week rose by 47 per cent.
Women under 40 were the least likely to have any acid reflux, but were more likely to develop symptoms as they got older.
The prevalence was stable among men, regardless of their age.
Almost all of those with severe acid reflux experienced symptoms and/or used medication to treat them at least once a week, compared with around one in three of those with mild symptoms.
Acid reflux symptoms can spontaneously disappear without medication, but this happened to only one in 50 people with symptoms each year during the study.
“We need to identify earlier people at risk of what is an epidemic of oesophageal cancer, and one that has so recently killed the writer Christopher Hitchens”
The link with oesophageal cancer is caused by changes in cells in the gullet, possibly from cancer-causing agents contained within the stomach acid.
The researchers, from Norway, Sweden and King’s College London, said: ‘The increasing prevalence of acid reflux is alarming, because it will most likely contribute to the increasing incidence of cancer of the oesophagus in the western population.’
Professor Hugh Barr, secretary of the British Society of Gastroenterology’s oesophageal section, said occasional reflux affected as many as one in five people.
But severe reflux, where sufferers experience persistent episodes of heartburn, should be investigated because it may be causing changes to the lining of the gullet that could lead to cancer.
Professor Barr said: ‘Having indigestion after a curry isn’t necessarily a cause for concern, and about half of those with persistent reflux will not have any damage to their gullet.
‘But we need to identify earlier people at risk of what is an epidemic of this type of cancer, and one that has so recently killed the writer Christopher Hitchens.’
The UK has the highest rate of oesophageal cancer in Europe, particularly adenocarcinoma, the main type of oesophageal cancer that is on the up.
High levels of alcohol consumption – aggravated by smoking – are well-known risk factors for the disease.
Experts say that by not smoking or drinking alcohol, and by choosing a healthy diet and maintaining a healthy weight, most of the 8,000 oesophageal cancer cases that are diagnosed each year in the UK could be prevented.
Professor Barr said: ‘We have several national trials under way to find the patients who need regular endoscopies – where a telescope is put down the gullet to look for damage – and treatment for pre-cancerous changes.’
Dr Stuart Riley, chairman of the British Society of Gastroenterology’s oesophageal section, said: ‘This study reflects our concern that the incidence of oesophageal cancer is rising and we are seeking with the Department of Health to develop appropriate strategies to alert our population to the risk of persistent severe heartburn.’ |
by Linda Ravin Lodding, illustrated by Claire Fletcher
Little Bee Books, 2016
Painting Pepette is a fun way to introduce children to great Parisian artists.
Josette and her rabbit Pepette live in Paris, in a house filled with fine portraits of her relatives, including her dog. But one day Josette notices there is no painting of Pepette. So she goes to Montmartre, where the best artists in Paris paint, to find someone to paint Pepette. Several artists, like Picasso, Matisse and Dali, take a hand at painting Pepette. While the artists and many sidewalk critics love the renderings, they are not right for Josette. When Matisse points out, “But through art we can see the world any way we want,” Josette knows exactly what to do. A fulfilling story with loose, color-filled illustrations and a study in what makes a Picasso, Matisse, or Dali painting theirs. |
Information for Parents & Students
Students - Interested in Capture the Flag?
Do you like puzzles? Are you passionate about solving a mystery? Are you interested in pursuing a career in Cybersecurity? Want to be a super hero in the hacking world? If you answered yes to any of these questions, then a Capture the Flag Ethical Hacking Cybersecurity competition is for you.
MAGIC’s competitions are for entry-level high school, college, and nontraditional college students interested in pursuing a career in Cybersecurity.
So what skills do you need to get started in CTF’s?
1. Persistence. You cannot give up easily. There have been numerous times when just one more attack finally gets you the flag. Satisfaction guaranteed!
2. Google-fu. In the IT world, Google is your best friend. Chances are if you are thinking it, someone else has already encountered it, and come up with 10 different solutions for it. The real MAGIC is sifting through all the information to find your answer.
3. Willingness to learn. Information Technology changes every day. You, as a hacker must adapt as well. You should have a love of learning. Books, videos, manuals. Always strive to learn something new!
4. CLI (Command line interface). You need to be comfortable moving around the command line. There are dozens of tools that only run from the command line! Find the tool that works for you.
5. Python. Knowing a programming or scripting language is essential in the Security realm and becoming more and more requested in the IT field in general. You don’t need to be the Bill Gates of programming, but it helps to be able to read code. One of the easiest and most popular languages is Python! That’s all you need to get started in CTF’s. No really!
Everything else is learned through practice and hard work. In the hacker world there are so many tools, languages, OS’s, and programs out there that it can be overwhelming to figure out where to start. Check out some resources we put together for you to get started.
Kali Linux Resources
Kali Linux is a forensic and security-focused distribution based on Debian’s Testing branch. Kali Linux is designed with penetration testing, data recovery and threat detection in mind. This is the environment of choice for cybersecurity.
Python is an easy-to-learn programming language that is perfect for a beginner. Its ease of use makes it ideal for creating simple scripts and working your way up to full programs.
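As a small taste of what that looks like in a CTF setting, here is a minimal sketch of the kind of throwaway script a beginner might write: recovering a flag that has been hidden with Base64 and a single-byte XOR. The flag text, the key, and the flag{...} format are all invented for illustration.

```python
# A beginner-style CTF script: the "challenge" blob is built inside the script
# so the example is self-contained, then recovered by undoing the Base64
# encoding and brute-forcing the single-byte XOR key.
import base64

def xor_bytes(data: bytes, key: int) -> bytes:
    """XOR every byte of data with a single-byte key."""
    return bytes(b ^ key for b in data)

# Pretend this blob was pulled out of a challenge file (flag and key invented).
encoded = base64.b64encode(xor_bytes(b"flag{practice_makes_perfect}", 0x2A))

# Recovery: undo the Base64, then try every possible XOR key.
raw = base64.b64decode(encoded)
for key in range(256):
    candidate = xor_bytes(raw, key)
    if candidate.startswith(b"flag{"):
        print(f"key={key:#04x}  flag={candidate.decode()}")
        break
```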
Parents- Why promote CTF to your child?
According to national polls, 1.8 million cybersecurity jobs will go unfilled in the next 5 years. As the digital age expands, our lives become vulnerable to cyber attacks every day. If your child has an interest in technology, he/she might be the next “white hat” out there.
Cybersecurity careers are one of the fastest growing occupations today. By promoting these competitions, you are guiding your child to a responsible and exciting use of their computer talents. Cyber competitions encourage players to utilize real world methods and best practices to achieve their goals in a legal and responsible manner.
MAGIC’s Capture the Flag is one resource to help show your child that being a good guy in the hacking world is a satisfying achievement. Students just starting out need a reliable source for information on the subject of cybersecurity. MAGIC’s Capture the Flag competitions provide a safe, legal and real world environment for your child to experience ethical hacking up close and personal.
Don’t assume your child knows the difference between a “black hat” hacker and a “white hat” ethical hacker. A “black hat" is someone that hacks with the intent to steal information, while a white hat hacks with the intent to prevent information theft. MAGIC promotes the “white hat” approach and applies the principles of ethical hacking during our competitions. Your child will be challenged to work as a team to solve puzzles in an unstructured setting. This informal style of learning replicates situations they will face in the real world and leaves them eager to learn more!
So what is Ethical Hacking and how does it relate to cybersecurity? Ethical hacking is where a hacker known as a “white hat" tries to find vulnerabilities of systems and applications, or other security measures through various methods with the purpose of helping to secure those systems. You can look at it like a bank hiring someone to try and break into their safe so that they can find where their weaknesses are. Ethical hacking serves the purpose of putting security measures through a real world test but without the real world risk.
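To make the idea concrete, below is a minimal sketch of about the simplest possible reconnaissance step: checking which common TCP ports are open on a machine you own (here, your own computer at 127.0.0.1). The port list is just a handful of familiar examples, and, in keeping with the ethical-hacking rules above, it should only ever be pointed at systems you have permission to test.

```python
# A tiny port check against your own machine (127.0.0.1). Open ports are where
# services listen, so knowing what is exposed is the first step of any
# security assessment. Only scan hosts you own or are authorized to test.
import socket

COMMON_PORTS = {21: "ftp", 22: "ssh", 80: "http", 443: "https", 3306: "mysql"}

def port_is_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0

if __name__ == "__main__":
    host = "127.0.0.1"  # localhost only: never scan machines without permission
    for port, name in COMMON_PORTS.items():
        state = "open" if port_is_open(host, port) else "closed"
        print(f"{host}:{port} ({name}) is {state}")
```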
We have heard from several parents who ask, “My child’s interest in coding and hacking has me concerned. What can I do to keep them on the right path?” First, and foremost, support them. The world is growing by leaps and bounds in the technological sector. Your child’s interest will help to advance future technological discoveries. So give them resources to help nurture and foster a good learning environment. Buy them books, take them to conventions, sign them up for summer programs. Anything to get them engaged in the field! Next, challenge them! Give them tasks to complete and reward them when they do so. Ask them to make a program to keep track of chores, or to make a game. Just like real space, cyberspace has no limit. Lastly, get involved yourself! Ask them to teach you some programming or sit down and have them explain what they are doing and try it yourself. Who knows, soon you might be doing capture the flags right beside them!
Additional resources for parents:
Here are some great articles for both parents and students interested in ethical hacking:
Even if your child is still in High school, now is the time to start looking into college programs that will encourage and provide a solid experience in cybersecurity.
There are various government agencies that offer internships and scholarship programs for students with guaranteed jobs right after school. Apply! Apply! Apply! |
By Harry Jones and Jacob Herd, Expect Everything young correspondents from Ireland
17th and 18th century
Science was becoming wildly popular throughout the world at this time in history. People were questioning more and looking for answers. At that time women were viewed as intellectually inferior, so getting into science was much more difficult. Margaret Cavendish, a 17th-century aristocrat, was one of the first advocates for women in science. Her 1666 book “Observations Upon Experimental Philosophy” both encouraged women to get into the sciences and critiqued the science of Francis Bacon.
Laura Bassi was the first woman to earn a university chair in a scientific field. She was also the third woman to obtain a university degree in the western world. Towards the end of the 18th century she also became the world’s first female professor.
Charlotta Frölich was the first female historian in Sweden and was also the first woman to be published by the Royal Swedish Academy of Sciences. Seven years later, Eva Ekeblad became the first woman inducted into the organisation.
19th and 20th century
The latter part of the 19th century saw a rise in education for women. All around the UK schools for girls were set up. The Crimean war also gained Florence Nightingale high regard and with her public following she was allowed to set up a school for nursing.
In the early 20th century Marie Skłodowska-Curie became the first woman to win a Nobel prize in physics and then went on to become a double Nobel Prize winner in 1911 for chemistry. Only three other people in history have won two Nobel prizes.
In 1998 the L’Oréal-UNESCO Awards for Women in Science were set up. One award is given for each geographical region: Africa and the Arab States, Asia-Pacific, Europe, Latin America and the Caribbean, and North America. These awards were set up to reward and acknowledge innovative women in science and further encourage women to go into the sciences.
Here is also a link to a great video about some other important female scientists in history.
The first African-American woman to earn a PhD in chemistry
“Courage is like – it’s a habitus, a habit, a virtue: you get it by courageous acts. It’s like you learn to swim by swimming. You learn courage by couraging”.
Double Nobel Prize Winner
“Life is not easy for any of us. But what of that? We must have perseverance and above all confidence in ourselves. We must believe that we are gifted for something and that this thing must be attained.” |
Pupils from Hermitage Primary presented their technology skills at a local authority event today and showcased their creative skills by premiering not one, but two original animated films.
The pupils from Primary 3 and Primary 7, who together form the Hermitage Animation Team, wrote, illustrated, animated, directed and edited their short films to help promote two of the school’s Eco targets.
The pupils also wrote a presentation to explain the process, from original idea to the final cut, and ran hands-on workshops where they got to teach their animation skills to parents and pupils from other schools.
The event was a great success and the pupils had some wonderful feedback.
We would like to extend our thanks to the pupils for all of their hard work and for giving up their Saturday to present at the event. Also, a big thank you to the parents for giving up their weekend time and for supporting the event.
You can watch both of their short animated films below, as well as a short video explaining the creative process.
Invasion of the Energy Monster
Written, Animated and Directed by: Alice, Coral, Ella and Emma
An energy zapping fiend invades the school. Can he be stopped?
Revenge of the Trash Cyclops
Written, Animated and Directed by: Ciaran, Lilla, Sam and Sonja
An evil plan to bury the world in trash could be Bio Guy’s toughest test. |
One of the most contentious questions that come up in science-based medicine that we discuss on this blog is the issue of screening asymptomatic individuals for disease. The most common conditions screened for that we, at least, have discussed on this blog are cancers (e.g., mammography for breast cancer, prostate-specific antigen screening for prostate cancer, ultrasound screening for thyroid cancer), but screening goes beyond just cancer. In cancer, screening is a particularly-contentious issue. For example, by simply questioning whether mammography saves as many lives lost to breast cancer as advocates claim, one can find oneself coming under fire from some very powerful advocates of screening who view any questioning of mammography as an attempt to deny “life-saving” screening to women. That’s why I was very interested when I saw a blog post on The Gupta Guide that pointed me to a new systematic review by John Ioannidis and colleagues examining the value of screening as a general phenomenon, entitled “Does screening for disease save lives in asymptomatic adults? Systematic review of meta-analyses and randomized trials.”
Before I get into the study, let’s first review some of the key concepts behind screening asymptomatic individuals for disease. (If you’re familiar with these concepts, you can skip to the next section.) The act of screening for disease is based on a concept that makes intuitive sense to most people, including physicians, but might not be correct for many diseases. That concept is that early intervention is more likely to successfully prevent complications and death than later intervention. This concept is particularly strong in cancer, for obvious reasons. Compare, for example, a stage I breast cancer (less than 2 cm in diameter, no involvement of the lymph nodes under the arm, known as axillary lymph nodes) with a stage III cancer (e.g., a tumor measuring greater than 5 cm and/or having lots of axillary lymph nodes involved). Five year survival is much higher for treated stage I than for treated stage III, and, depending on the molecular characteristics, the stage I cancer might not even require chemotherapy and can be treated with breast conserving surgery (“lumpectomy” or partial mastectomy) far more frequently than the stage III cancer. So it seems intuitively true that it would be better to catch a breast cancer when it’s stage I rather than when it’s stage III.
Unfortunately, that’s not necessarily the case. The reasons are phenomena known as lead time bias, length bias, and overdiagnosis. Lead time bias has been explained multiple times (e.g., here and here), but perhaps the best explanation for a lay public I’ve ever found of lead time bias (although he doesn’t call it that) involves a hypothetical example of cancer of the thumb by Aaron Carroll. Given that cancer survival is measured from the time of diagnosis, if a tumor is diagnosed at an earlier time in its course through the use of a new advanced-screening detection test, the patient’s survival will appear to be longer, even if earlier detection has no real effect on the overall length of survival, as illustrated below:
Lead time bias, in other words, can give the appearance of longer survival even when treatment has no effect whatsoever on the progress of the disease. Patients are simply diagnosed earlier in the disease time course and only appear to live longer when in reality they simply carry the diagnosis screened for longer.
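For readers who like to see the effect in numbers, here is a tiny simulation of lead time bias. Every parameter (the ages, the survival times, the assumed two-year lead time) is invented purely for illustration; the point is that moving the diagnosis earlier inflates "survival from diagnosis" even though the age at death is identical in both groups.

```python
# Toy simulation of lead time bias: screening moves diagnosis earlier by a
# fixed "lead time" but does not change when anyone dies, yet measured
# survival-from-diagnosis still looks longer in the screened group.
import random

random.seed(0)
n = 100_000
lead_time = 2.0  # assumed years by which screening advances the diagnosis

clinical_survival, screened_survival, ages_at_death = [], [], []
for _ in range(n):
    age_at_clinical_dx = random.uniform(55, 75)   # when symptoms would appear
    survival_after_dx = random.uniform(1, 6)      # fixed by biology, not by screening
    age_at_death = age_at_clinical_dx + survival_after_dx

    clinical_survival.append(survival_after_dx)
    screened_survival.append(age_at_death - (age_at_clinical_dx - lead_time))
    ages_at_death.append(age_at_death)

print(f"mean survival from diagnosis, no screening:   {sum(clinical_survival) / n:.2f} years")
print(f"mean survival from diagnosis, with screening: {sum(screened_survival) / n:.2f} years")
print(f"mean age at death (identical either way):     {sum(ages_at_death) / n:.2f} years")
```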
The second concept is length bias. In general, we can’t continually screen for disease so an interval has to be chosen. This introduces a bias. Length bias refers to comparisons that are not adjusted for rate of progression of the disease. The probability of detecting a cancer before it becomes clinically detectable is directly proportional to the length of its preclinical phase, which is inversely proportional to its rate of progression. In other words, slower-progressing tumors have a longer preclinical phase and a better chance of being detected by a screening test before reaching clinical detectability, leading to the disproportionate identification of slowly-progressing tumors by screening with newer, more sensitive tests, with the faster-growing tumors becoming symptomatic “between screenings.” Thus, survival due to the early detection of cancer is a complicated issue.
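Length bias can be illustrated the same way. In this toy simulation (again with made-up numbers: a two-year screening interval and tumours whose preclinical "sojourn time" averages one year), tumours that linger longer in the preclinical phase are more likely to still be present and asymptomatic when the periodic screen happens, so the screen-detected group comes out enriched for slow-growing disease.

```python
# Toy simulation of length bias: with a periodic screen, the chance of being
# caught while still preclinical grows with the length of the preclinical
# phase, so screen-detected tumours are slower-growing on average.
import random

random.seed(1)
screen_interval = 2.0  # assumed years between screens
screen_detected, interval_cancers = [], []

for _ in range(200_000):
    sojourn = random.expovariate(1.0)           # preclinical phase, mean 1 year
    onset = random.uniform(0, screen_interval)  # onset time since the last screen
    if onset + sojourn >= screen_interval:      # still preclinical at the next screen
        screen_detected.append(sojourn)
    else:                                       # became symptomatic between screens
        interval_cancers.append(sojourn)

print(f"mean sojourn time, screen-detected tumours: {sum(screen_detected) / len(screen_detected):.2f} years")
print(f"mean sojourn time, interval cancers:        {sum(interval_cancers) / len(interval_cancers):.2f} years")
```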
Finally, there is overdiagnosis. This is a term that refers to disease detected that would likely never progress within the timeframe of the patient’s remaining lifetime to cause a problem. For cancer (for example) it seems completely counterintuitive that there could be overdiagnosis, but there is. As we have learned over the last several years, cancer overdiagnosis is definitely an issue, particularly for the prostate and breast, with perhaps as high as one in three mammography-detected breast cancers in asymptomatic women being overdiagnosed. Basically, this chart illustrates the concept well:
In this chart, which shows growth rates of four hypothetical tumors (A, B, C, and D), tumor D would not be detected because it was growing too fast, while tumor A is growing so slowly that it would likely be overdiagnosed when it reaches the threshold of detectability. Only tumors B and C could potentially benefit from being detected while still confined to the organ. This sort of graph explains why ever-more-sensitive tests that detect disease earlier and earlier have the potential to result in more overdiagnosis.
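Overdiagnosis can be made concrete with a similar toy calculation. In the sketch below, every distribution is an assumption chosen only for illustration (not an estimate for any real cancer): a perfectly sensitive screen finds every tumour, and we simply count how often the patient would have died of something else before the tumour ever caused symptoms.

```python
# Toy illustration of overdiagnosis: a screen-detected tumour counts as
# overdiagnosed if death from an unrelated cause would have occurred before
# the tumour became symptomatic. All parameters are invented.
import random

random.seed(2)
n = 100_000
overdiagnosed = 0

for _ in range(n):
    years_to_symptoms = random.lognormvariate(1.5, 1.0)  # when the tumour would have surfaced
    years_to_other_death = random.uniform(1, 30)          # competing, unrelated mortality
    if years_to_symptoms > years_to_other_death:
        overdiagnosed += 1  # found by the screen, but would never have mattered

print(f"share of screen-detected tumours that are overdiagnosed: {overdiagnosed / n:.1%}")
```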
Finally, overdiagnosis almost inevitably results in overtreatment. Once physicians have detected a disease, be it a cancer, an asymptomatic abdominal aortic aneurysm, or whatever, the onus is on them to treat it.
Now, on to Ioannidis’ review.
Screening, huh! What is it good for? (Probably not absolutely nothing.)
In Ioannidis’ systematic review of meta-analyses and randomized clinical trials, he and coauthors Nazmus and Juliann Saquib take a different approach than I’ve usually seen. Most such systematic reviews and meta-analyses examine screening for only one disease, while this one examines screening for a number of diseases. Also, the key questions to be asked included:
- Does the screening test result in a decrease in mortality due to the disease being screened for (known as disease-specific mortality)?
- Does the screening test result in a decrease in mortality due to all causes?
The background is explained right in the introduction:
Screening for disease is a key component of modern health care. The rationale is simple and attractive—to detect diseases early in asymptomatic individuals and to treat them in order to reduce morbidity, mortality and the associated costs. However, the role of screening often comes into question. Some high-profile controversies have appeared lately in this regard. For example, for breast cancer, the United States Preventive Services Task Force (USPSTF) currently recommends against routine mammographic screening for women aged 40–49 years after retracting its previous recommendation in favour of mammography, as the data failed to show that benefit outweighed harm. The decision against screening drew sharp criticism from various interest groups including patients who overestimate the benefit of screening. Similarly, USPSTF now recommends against screening for prostate cancer in healthy men because harms from prostate specific antigen (PSA) screening exceed the benefit, trials do not show improvement in long-term survival and screening carries a high risk of over-diagnosis with adverse consequences. Again, heated debates have been generated around this change of recommendation, both in the scientific and the popular press.
Some screening tests were entrenched in clinical and public health practice before randomized controlled trials (RCTs) became widely used. As the screening agenda encompasses a large number of tests, and new ones are continuously proposed, it is useful to reassess the evidence supporting their use. Our research question is whether recommended screening tests, among asymptomatic adults, have evidence from RCTs on mortality for diseases where death is a common outcome. In particular, is there evidence of mortality reduction, either disease-specific or all-cause, from screening? To this end, we have compiled and examined systematically the evidence from individual RCTs and meta-analyses thereof for screening tests that have been proposed for detecting major diseases in adults who have no symptoms.
None of this should come as a surprise to regular readers of this blog. But how to answer such a large question regarding the potential benefits and harms of routine screening? Basically, what Saquib et al. did was search the United States Preventive Services Task Force (USPSTF), Cochrane Database of Systematic Reviews, and PubMed, looking for recommendation status, category of evidence, and availability of randomized clinical trials (RCTs) on mortality for screening tests for diseases in asymptomatic adults (excluding pregnant women and children) from the USPSTF. They then identified the relevant RCTs. The chart below summarizes the existing state of evidence identified in the systematic review:
You can examine the data in Tables 1 and 2 yourselves, in which Saquib et al. examined meta-analyses and individual trials for 39 screening tests for 19 diseases in asymptomatic adults. What is very clear is that there is little strong RCT evidence of benefit for disease-specific mortality for many of these modalities. Indeed, The Gupta Guide notes that, for the six diseases/conditions for which the USPSTF recommends screening, only five of them have strong RCT evidence for a reduction in disease-specific mortality: breast, cervical, and colorectal cancer; abdominal aortic aneurysm (AAA); and type 2 diabetes. There were no randomized trials for screening for hypertension.
Basically, in the individual RCTs examined, the risk of disease-specific mortality was reduced in 16 out of 54 tests (30%), while all-cause mortality was reportedly reduced in 4 of 36 RCTs (11%). Examples of tests for which disease-specific mortality was reduced included ultrasound for AAA (42%-73% risk reduction), mammography for breast cancer (0% to 27% risk reduction), and screening for cervical cancer (11% to 48% risk reduction). For the meta-analyses examined, the risk of disease-specific mortality was reduced in analyses of four of eleven tests, but none for all-cause mortality. Examples of screening tests for which disease-specific mortality was reduced included ultrasound for AAA (again, risk reduction of 45%) and mammography (10% to 25% risk reduction). However, none of the meta-analyses showed an all-cause mortality benefit, while all-cause mortality was reduced only by 3%-13% in the RCTs examined. Basically, the findings were disappointing.
The authors note several possible reasons for their findings:
There are many potential underlying reasons for the overall poor performance of screening in reducing mortality: the screening test may lack sufficient sensitivity and specificity to capture the disease early in its process; there are no markedly effective treatment options for the disease; treatments are available but the risk-benefit ratio of the whole screening and treatment process is unfavourable; or competing causes of death do not allow us to see a net benefit. Often, these reasons may coexist. Whether screening saves lives can only be reliably proven with RCTs. However, even for newly proposed tests, we suspect that their adoption in practice may evade RCT testing. A very large number of tests continuously become available due to technological advancement. One may be tempted to claim a survival benefit of screening based on observational cohorts showing improved survival rates, but these are prone to lead-time and other types of bias. Even RCTs can be biased sometimes, as has been discussed and hotly debated in the controversy over mammography.
The authors also note that they did not examine evidence from other trial designs, such as cohort and case-control studies, which could be a potential weakness. Of course, they also make the obvious defense that these studies are generally less robust than RCTs and more prone to biases. The other concession they make is that it can be incredibly difficult to detect reductions in all-cause mortality for the simple reason that the disease being screened for almost always represents only a fraction of causes of death. This means that even a large drop in mortality due to screening for one disease would, even under ideal conditions, result in only a much smaller drop in all-cause mortality. Such a drop is very difficult to detect in an RCT because of the enormous numbers involved.
What is the proper metric to evaluate a screening test?
This point leads us naturally into the discussions of this study. Presented with the study were commentaries by Peter C. Gøtzsche of the Nordic Cochrane Center and Paul Taylor of the Institute of Health Informatics, University College London, entitled “Screening: a seductive paradigm that has generally failed us” and “Tempering expectations of screening: what is the most authoritative advice we can give, given the data that we have?“, respectively. Basically, Gøtzsche, as one might expect based on his previous criticisms of mammography, argues that total mortality should be the primary outcome in screening trials of mortality and that the main focus of screening trials should be to “quantify the harm.” Taylor, on the other hand, argues that in reality the results of Saquib et al. are not so bad, given that 30% of trials showed a disease-specific benefit and even proponents of screening would expect more trials to fail than not. He also points out the difficulties of using all-cause mortality as the primary outcome.
First, let’s see what Gøtzsche argues:
Screening proponents often say that disease-specific mortality is the right outcome, arguing that in order to show an effect on total mortality, trials would become unrealistically large. I believe this argument is invalid, for both scientific and ethical reasons. We do randomized trials in order to avoid bias, and our primary outcome should therefore not be a biased one. Drug interventions are usually more common in a screened group, and they tend to increase mortality for a variety of non-disease related reasons.
While it is true that overtreatment could potentially increase mortality for reasons other than disease, I believe that Gøtzsche is holding screening to an unrealistically high standard. Using his standard, it would be pointless to screen for virtually anything. Why? I like to use breast cancer as an example to illustrate the difficulties of using all-cause mortality as the be-all and end-all for screening. I know the numbers are rough and the analysis simplistic, but the magnitude is illustrative and close enough to give you an idea of the issues involved. This argument takes the form of a simple thought experiment. Consider first that approximately 40,000 women a year die of breast cancer in the US. However, there are approximately 2.5 million total deaths per year, which means that approximately 1.25 million women die every year (estimated to be 1.26 million in 2011). Thus, breast cancer is the cause of approximately 3.2% of female deaths every year. Consequently, if we could prevent 100% of breast cancer deaths (an unrealistic goal), at most we would expect to see a reduction in all-cause mortality of 3.2%.
Aha! I hear some of you saying. You’re counting all female deaths, even those of children, where breast cancer is so incredibly unlikely to be a cause that these deaths should be discounted. Fair enough. Let’s restrict ourselves to women aged 40 and over (40 being the age at which screening begins, making this the group of women for whom screening is instituted with the goal of preventing death from breast cancer). If we subtract the 63,125 deaths that occur per year in women before the age of 40, we’re left with 1,196,875 deaths, which leads us to estimate the number of breast cancer deaths as 3.3% of total deaths of women aged 40 and above—not much different. Of course, some women do die before age 40 of breast cancer. In the US, approximately 1,160 women under 40 die every year of the disease. That brings the proportion of deaths in women over the age of 40 due to breast cancer back down to 3.2%.
So let’s say mammography, as an upper estimate, results in a 27% decrease in breast cancer-specific mortality in women over 40. Under ideal circumstances, that would translate into less than a 0.9% decrease in all-cause mortality. The numbers of subjects and years of follow-up needed for an RCT to detect such a small difference would be prohibitively expensive. Obviously, for diseases that cause a higher percentage of overall deaths, it will be easier to detect a reduction in all-cause mortality, but for most diseases it’s very difficult indeed to tease out an all-cause mortality benefit except by using clinical trials so huge as to be impractical. Of course, this cuts both ways. If a reported decrease in overall mortality due to screening is bigger than the proportion of deaths expected to be caused by the disease in the age group studied, then something odd is going on, possibly bias.
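The arithmetic above is easy to check directly. Here is a minimal sketch in Python that reproduces it; the inputs are the rounded figures quoted in this post, not exact vital statistics.

```python
# Back-of-the-envelope check of the breast cancer screening numbers above.
# All inputs are the rounded figures quoted in the text, not exact statistics.

total_female_deaths = 1_260_000      # estimated US female deaths per year (2011)
breast_cancer_deaths = 40_000        # US breast cancer deaths per year
deaths_under_40 = 63_125             # female deaths per year before age 40
bc_deaths_under_40 = 1_160           # breast cancer deaths in women under 40

# Share of all female deaths due to breast cancer
print(breast_cancer_deaths / total_female_deaths)            # ~0.032 (3.2%)

# Restrict to deaths at age 40 and over
deaths_40_plus = total_female_deaths - deaths_under_40       # 1,196,875
bc_deaths_40_plus = breast_cancer_deaths - bc_deaths_under_40
print(bc_deaths_40_plus / deaths_40_plus)                     # ~0.032 (3.2%)

# Best-case effect on all-cause mortality if screening cuts
# breast-cancer-specific mortality by 27%
relative_reduction = 0.27
print(relative_reduction * bc_deaths_40_plus / deaths_40_plus)  # < 0.009 (0.9%)
```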
In any case, I realize my “back of the envelope” calculations are simplistic. They don’t, for example, rigorously consider the trial period and how many deaths would be expected over one, two, three, or four decades, nor do they adjust for age other than in the crudest manner. My point in doing them isn’t to give exact numbers, but rather to illustrate that the question of whether to use disease-specific mortality or all-cause mortality as a primary endpoint in trials screening for diseases that can result in death is not as straightforward as Prof. Gøtzsche argues, although he is correct to note that screening is not without harm and that the harms of screening for some diseases can outweigh the benefits. In an ideal world, he’d be correct about all-cause mortality as an endpoint, too, but our world is not ideal, and detecting such small differences in all-cause mortality is often beyond what is feasible and what our resources can support.
Taylor, in his response, also notes another confounder:
But there’s the rub. If breast cancer deaths are reduced, but all-cause mortality is unaffected, is this because detecting the latter requires that more statistical power be deployed? Or is it, as Gøtzsche has suggested, because the harms of screening increase deaths from other causes? The most serious cause of harm is overdiagnosis. The independent UK panel took the view that the best estimate of overdiagnosis could be provided by comparing the rates of cancer detection in the screened and the unscreened groups of randomized controlled trials. The problem is that when most trials ended, screening was offered to the women in the control groups, creating overdiagnosis in the follow-up period. The panel therefore restricted their attention to three trials in which no screening was offered to the control group during follow-up. This is a very limited set of data. Saquib, Saquib and Ioannidis ignore the question of harms presumably because there simply are not enough RCT data to review.
In other words, even in these studies, it’s hard to tease out the sorts of data we are interested in so many decades later. Gøtzsche concludes, without presenting concrete evidence to that effect, that the harms of screening negate the benefits, thus resulting in no detectable decline in all-cause mortality. Advocates assume, without proving, that it is a matter of lack of statistical power, as I discussed above, and that bigger trials with more power would detect all-cause mortality decreases. Chances are, it’s both, the relative proportion of each contribution varying according to the specific disease and screening test under study. One thing is certain, though. All screening results in some degree of overdiagnosis. As I’ve said time and time again, whenever you screen for a condition, you will always find a lot more of it. Always. Just consider the 16-fold increase in the incidence of ductal carcinoma in situ since the mammography era began.
There is no doubt in my mind that screening has, in general, been oversold, represented in some cases as a magic bullet that will save far more lives than it actually can. It’s been a relatively recent realization on the part of physicians that overdiagnosis is a real problem because it leads to overtreatment, which can cause harm. Also, in the case of cancer, improvements in treatment could well be blunting any benefits observed due to screening, as Taylor noted in his commentary. On the other hand, that screening has not lived up to its promise does not mean it is useless, as some critics have charged. It is not unreasonable, as Taylor described, to value other outcomes besides mortality. Unfortunately, we’re in the messy and contentious process of trying to determine which screening tests do save lives. As Saquib et al put it:
We argue that for diseases where short- and medium-term mortality are a relatively common outcomes, RCT should be the default evaluation tool and disease-specific and all-cause mortality should be routinely considered as main outcomes. Our overview suggests that even then, all-cause mortality may hardly ever be improved. One may argue that a reduction in disease-specific mortality may sometimes be beneficial even in the absence of a reduction in all-cause mortality. Such an inference would have to consider the relative perception of different types of death by patients (e.g. death by cancer vs death by other cause), and it may entail also some subjectivity. For diseases where mortality outcomes are potentially important but only in the very long term, one has to consider whether the use of other, intermediate outcomes and/or other quasi-experimental designs that may be performed relatively quickly with very large sample sizes (e.g. before and after the introduction of a test) are meaningful alternatives to very long-term RCTs or may add more bias and confusion in a field that has already seen many hot debates. Screening may still be highly effective (and thus justifiable) for a variety of other clinical outcomes, besides mortality. However, our overview suggests that expectations of major benefits in mortality from screening need to be cautiously tempered.
In other words, the science of screening is messy, and we need to be careful not to be too optimistic. Personally, I tend to agree with Taylor that better risk stratification will be necessary. Screening tends to benefit most those populations at high risk for the disease being screened for. Such stratification could allow for—dare I say it?—personalized screening based on individual risk factors.
One wonders how much of that sort of research will be funded in President Obama’s Precision Medicine Initiative, should it be funded. I might have to look into that for a future topic.
Every alias for a computer or server is associated with a specific CNAME in the DNS database. Consider, for example, a set of URLs (Uniform Resource Locators) that all belong to a single organization and that all direct online visitors to the same Web site. Each of these URLs is an alias for a single canonical name that is associated with an IP address in the DNS database.
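If it helps to see the alias relationship concretely, Python's standard library can resolve a name and report its canonical name and any aliases seen along the way; this is a minimal sketch, and the hostname used is only a placeholder.

```python
import socket

# Resolve a hostname and report its canonical name, aliases, and addresses.
# "www.example.org" is a placeholder; substitute any alias you want to inspect.
canonical_name, aliases, ip_addresses = socket.gethostbyname_ex("www.example.org")

print("Canonical name:", canonical_name)  # the name the alias ultimately resolves to
print("Aliases:", aliases)                # alias (CNAME) names seen during resolution
print("IP addresses:", ip_addresses)      # addresses of the canonical name
```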
In addition to facilitating the use of multiple URLs for a single Web site, CNAMEs can be convenient when a well-known organization changes its canonical name. The CNAME will then redirect people who enter an old URL to the correct Web site, even if the old URL is no longer official.
The National September 11 Memorial & Museum (known separately as the 9/11 Memorial and 9/11 Memorial Museum) commemorate the September 11, 2001 attacks, which killed 2,977 victims, and the World Trade Center bombing of 1993, which killed six. The memorial is located at the World Trade Center site, the former location of the Twin Towers, which were destroyed during the September 11 attacks.
The 70-foot-high columns, known as “tridents” because of their three-pronged tops, were salvaged from the wreckage of the North Tower.
Main hall of the museum, showing the last column standing at center, and the "bathtub" retaining wall around the foundation at left.
As the recovery at the World Trade Center site neared completion, the Last Column, a 58-ton, 36-foot-tall piece of welded plate steel, was removed from the site in a solemn ceremony on May 30, 2002. In the weeks that followed, recovery workers, first responders, volunteers and victims' relatives signed the column and affixed to it memorial messages, photographs, and other tributes.
This piece of steel, once part of the north façade of the North Tower, was located at the point of impact where hijacked Flight 11 pierced the building at the 93rd through the 99th floors.
North Tower Antenna
This 19.8-foot-long fragment was about one-twentieth of the 360-foot-tall transmission tower atop the North Tower. Six broadcast engineers affiliated with five television stations were working from offices on floors 104 and 110 of the North Tower on 9/11. None of the engineers survived. Transmissions for most stations failed shortly after hijacked Flight 11 pierced the North Tower. All transmissions ceased by 10:28 a.m., when the tower collapsed.
FDNY Ladder Company 3
Members of FDNY Ladder Company 3, located in Manhattan’s East Village, bravely responded to the World Trade Center on September 11, 2001. Led by decorated Captain Patrick “Paddy” John Brown, Ladder Company 3 asked a dispatcher to deploy its members to the disaster. Eleven of them, many of whom had just gone off duty after finishing their overnight shifts, entered the North Tower.
The rear mount aerial truck was parked on West Street near Vesey Street. When the North Tower collapsed, the truck was damaged beyond repair, with its entire front cab destroyed.
The 9/11 museum pavilion has a deconstructivist design, resembling a partially collapsed building (mirroring the attacks).
The Memorial’s twin reflecting pools are each nearly an acre in size and feature the largest manmade waterfalls in North America. The pools sit within the footprints where the Twin Towers once stood.
The names of every person who died in the 2001 and 1993 attacks are inscribed into bronze panels edging the Memorial pools, a powerful reminder of the largest loss of life resulting from a foreign attack on American soil and the greatest single loss of rescue personnel in American history.
One World Trade Center (also known as the Freedom Tower) is the main building of the rebuilt World Trade Center complex. The supertall structure has the same name as the North Tower of the original World Trade Center, which was completely destroyed in the terrorist attacks of September 11, 2001. The new skyscraper stands on the northwest corner of the 16-acre (6.5 ha) World Trade Center site, on the site of the original 6 World Trade Center. With a total height of 1,776 feet (541 m), its height in feet is a deliberate reference to the year when the United States Declaration of Independence was signed.
From early childhood, people tell their children to brush their teeth well. They follow the same routines throughout their lives to brush and floss their teeth at least twice daily. In the most basic form, this is what makes up the framework of oral hygiene and health maintenance. Children may not understand the benefits of following an oral hygiene routine as much as an adult, but they should be taught in such a way that they can understand it is important.
It often happens that as we grow up, we fail to maintain the level of cleanliness and care in our mouths that we did in childhood under parental supervision. Brushing and flossing become less carefully done and may even be skipped at times. And this is where the risk of suffering from dental diseases increases. It is important to follow a preventive oral health maintenance schedule to avoid the need for treatment of dental problems. It is a fact that no one likes to go through fillings, crowns, dentures, RCT, or any such dental procedure. But they are needed when oral health has suffered from neglect.
Flossing for Cleaning Plaque between the Teeth
The structure of the teeth in our mouths is such that toothbrushes cannot reach between them to effectively clean away plaque. Surface stains and food particles can be removed by brushing your teeth, but for debris and plaque between teeth, flossing is required. Often, people find out that they need dental fillings only after their tooth enamel has eroded from the effects of improper care. Just a few minutes of flossing can save you from tooth decay in those hard-to-reach areas.
Correct Technique of Brushing
Brushing your teeth will not prevent disease unless the correct technique is used. Some people brush their teeth from side to side, or scrub the brush too harshly over the teeth. As well, plaque collected on the molars cannot be cleaned with an incorrect brushing technique. So, it is important to clean the teeth slowly and steadily, brushing with the correct technique. To learn such a technique, and to have a complete dental examination, visit an expert dentist in Surrey.
An average adult loses approximately two litres of water daily. Once the body loses five per cent of its total water volume, symptoms of fatigue and general discomfort will be observed. If you’re not properly hydrated, your body can’t perform at its highest level. You may experience fatigue, muscle cramps, dizziness, or more serious symptoms.
Understanding the water you drink
Health practitioners advise us to drink enough water for a healthy lifestyle. Tap water is maintained at a neutral pH of 7.0 or slightly acidic, whereas ionized alkaline water has a higher pH value and contains more hydroxide (OH-) ions. The ions make pH Balancer’s water antioxidant-rich, which helps to improve the quality of one’s health. The free-radical scavenging antioxidants also help to protect cells and encourage wellness.
Restore Your Health To The Right Balance
Every drop of pH Balancer encapsulates the invigorating energy of the ocean. Smooth tasting and instantly refreshing, this water aids in neutralizing the body’s acidic waste, removing toxins and restoring a healthy balance inside your body. It is also perfect for athletes as the small-clustered water molecules are easily absorbed; helping to flush out metabolic waste and replenishing water and electrolytes lost during exercise.
Rehydrate, Restore and Rejuvenate
Smaller clusters of water molecules are more readily absorbed, allowing our bodies to rehydrate faster.
Our bodies’ ideal pH level is restored when acidic waste products have been removed.
The free-radical scavenging antioxidants will help to protect cells and assist in rejuvenation.
pH Balancer Ocean Alkaline Ion Water
pH Balancer is sourced from pristine sea areas. With ion membrane exchange technology, the concentration of seawater is increased six to seven times. After going through the distillation process, the sea salt crystallizes out. The condensate is then collected as the source of pH Balancer. Subsequently, it undergoes a series of filtrations, reverse osmosis, electrolysis and sterilization, followed by bottling and packaging.
When the Young Lieutenant Met the Wild Mustangs
From Texas Standard:
He was 22 years old, riding his horse south of Corpus Christi in the vicinity of what would one day be called the King Ranch. But that wouldn’t happen for another twenty years.
This vast stretch of sandy prairie was still known as “The Wild Horse Desert."
In some ways it was a spooky place – ghostly. You would see horse tracks everywhere, but no people. There were plenty of worn trails, but the population was merely equestrian.
Folks reckoned that these horses were the descendants of the ones that arrived with Cortez, when he came to conquer the Aztecs. Some had escaped, migrated north, and bred like rabbits (if you can say that about horses).
Our young man – actually a newly minted second lieutenant from West Point – was riding with a regiment of soldiers under the command of General Zachary Taylor. They were under orders to establish Fort Texas on the Rio Grande and enforce that river as the southern border of the U.S. Fort Texas would shortly become Fort Brown, the fort that Brownsville, Texas would take its name from.
The young lieutenant, who had excelled as a horseman at West Point, was so impressed with the seemingly infinite herds of wild horses in South Texas that he made a note of it in his journal. He said:
"A few days out from Corpus Christi, the immense herd of wild horses that ranged at that time between the Nueces and the Rio Grande was directly in front of us. I rode out a ways to see the extent of the herd. The country was a rolling prairie, and from the higher ground, the vision was obstructed only by the curvature of the earth. As far as the eye could reach to the right, the herd extended. To the left, it extended equally. There was no estimating the number of animals in it; I doubt that they could all have been corralled in the State of Rhode Island, or Delaware, at one time. If they had been, they would have been so thick that the pasture would have given out the first day."
Both General Taylor and his Second Lieutenant would distinguish themselves on that journey.
Zachary Taylor had no idea that this Wild Horse Desert would lead to him on to victory in Mexico and to political victory back home. He would become the 12th President of the United States.
His dashing second lieutenant would also ascend to the presidency, 20 years after him.
The young man on high ground, surveying the primordial scene of thousands of mustangs grazing before him, would become the hero of many battles in the years ahead. He would ultimately lead the Union forces to victory in the Civil War – and become the youngest U.S. president up to that time. His presidential memoirs would become a runaway bestseller – a book Mark Twain would publish and call "the most remarkable work of its kind since Caesar’s Commentaries." It is that book that gives us this story.
It was written by Hiram U. Grant. Well, that was his birth name. But when he entered West Point, due to a clerical error, the name Hiram was dropped and his middle name became his first name, the name you know him by: Ulysses. Ulysses S. Grant.
Listen to the full audio in the player above.
W.F. Strong is a Fulbright Scholar and professor of Culture and Communication at the University of Texas Rio Grande Valley. And at Public Radio 88 FM in Harlingen, Texas, he’s the resident expert on Texas literature, Texas legends, Blue Bell Ice Cream, Whataburger (with cheese) and mesquite smoked brisket.
You have an IDEA. How do you transform that Idea into a product that you can market and sell? How do you make the hardware? Create a schematic symbol and capture the design? What is PCB layout, and why do I need a Gerber file? Can I simulate the design before I build it? How long does it take to design a simple board? How much does it cost to fabricate a PCB? How much to assemble it with the components?
There are a lot of questions in the design process, and we’re here to help answer them. Below is an overview – there are entire books devoted to each of these concepts – that helps explain them. A few books covering different aspects of the design and development process that we suggest you take a look at include:
The Hardware Startup: Building Your Product, Business, and Brand (Amazon – $23.81)
Prototype to Product: A Practical Guide for Getting to Market (Amazon – $34.99)
Designing Embedded Hardware: Create New Computers and Devices (Amazon – $40.24)
Practical Electronics for Inventors, Fourth Edition (Amazon – $21.93)
The flowchart below shows the basic stages an Idea goes through to becoming a tangible reality. In this article, we focus on the Hardware part of product development. Please refer to the other articles in this series for explanations of Software, User Application and Testing.
The Hardware block is expanded below for easier reference. This is a high-level diagram of the basic flow to create electronics; Each of the nearly dozen stages indicated in the diagram could have multiple sub-stages, depending on the design complexity and requirements.
To illustrate this process, we’ll use the Idea of a smart garage door opener. The idea is that a homeowner wants to use a smart-phone to check if the home garage door is opened or closed. The homeowner would also like to be able to remotely open or close the door with the smart-phone. For example, to let a guest in or a child that forgot/lost a key.
Here is an example from Chamberlain:
Referring back to the flow chart diagram at the beginning of this article, the Idea needs to be conceptualized so that it can be shared with everyone who will be working on it. The Concept sets out what the idea is, what it does, who will use it and in what context. Even if it’s just you, it is still important to write out and document the Concept so you can focus on specifics. This is the starting point for the requirements of the idea.
The Concept is very similar to the idea, but with added details and refinement: A garage door opener that is powered from a standard US 120V, 10A circuit. It should have a wired push-button to open/close manually, an integrated light fixture and safety sensor inputs (e.g., it doesn’t close if something/someone is in the way). It should be able to handle single-wide and double-wide doors. It connects wirelessly with a smart-phone (Apple/iOS or Android; no Windows, Blackberry or others). As this is a fairly generalized description, it shouldn’t take more than 1 hour to write out a Concept of a well-thought-out Idea.
Next is to do an Analysis or Feasibility study. This is where you investigate all the possible (and even the impossible) ways you can realize the idea. It’s best to partition the concept in various blocks or modules to make the analysis more cohesive. Some major blocks in this Concept for a smart garage door opener are Motor, Lighting, Power, Communications, Enclosure, Controls.
It’s worth emphasizing that at this initial stage, all the various possibilities should be taken into account, researched and explored. Nothing should be dismissed. The Feasibility process is to document the findings, and then rank them; Pick the top candidate as the “recommendation” for going forward.
Let’s dive into the Communications: Should the device have wired or wireless capabilities? There are many wireless communications methods available including Cellular, Radio, WiFi, WiMax, Bluetooth, Zigbee and others. Wired solutions could include standard Ethernet or Ethernet-Over-Powerline. Each of these should be examined to determine if it would meet the requirements, what are the advantages and disadvantages for each, the availability and cost. Other things for consideration may include the size, weight, power usage, and regulatory compliances.
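One common way to document the ranking step described above is a weighted decision matrix. The sketch below, in Python, shows the idea; the criteria, weights, and scores are invented for illustration and would come from your own feasibility research.

```python
# Minimal weighted decision matrix for ranking communication options.
# Criteria, weights, and scores are illustrative placeholders only.

weights = {"range": 0.3, "cost": 0.25, "power": 0.2, "phone_support": 0.25}

# Scores from 1 (poor) to 5 (excellent) per criterion -- hypothetical values.
options = {
    "WiFi":      {"range": 4, "cost": 4, "power": 2, "phone_support": 5},
    "Bluetooth": {"range": 2, "cost": 5, "power": 4, "phone_support": 5},
    "Zigbee":    {"range": 3, "cost": 4, "power": 5, "phone_support": 1},
    "Cellular":  {"range": 5, "cost": 2, "power": 2, "phone_support": 5},
}

def weighted_score(scores):
    # Sum of (weight x score) over all criteria for one option.
    return sum(weights[criterion] * score for criterion, score in scores.items())

ranked = sorted(options.items(), key=lambda kv: weighted_score(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{name:10s} {weighted_score(scores):.2f}")
```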
For a smaller, not too complicated product (and these metrics of course depend on your background, experience with similar products, and many other factors) budget about 45 hours for the Feasibility study. This may seem like a large up-front time investment, but being thorough at the beginning pays off huge dividends as the project progresses. The cost for making changes at this stage is also very low; The cost increases exponentially with each stage.
As an example: Imagine the scenario where you launch the product – Yay! A happy homeowner buys your smart garage door opener and has just paid the installer to put it in. The homeowner launches the smart-phone app but can’t "find" the garage door opener. After a long talk with your (expensive) support tech, the problem turns out to be that the opener is "too far away" to communicate.
No longer a happy homeowner when it comes to your product (and you can be sure you’ll hear about it on Twitter, Facebook and Yelp). You have to accept a return of the opener and reimburse the owner for the cost of the installation/removal. Expensive! And you find out that as more customers buy and install the opener, more than half have the same problem. This means a recall and redesign, which could have been avoided by a thorough and detailed investigation during the Feasibility stage.
Having completed the Feasibility study and defined solid recommendations for how to meet the Concept requirements, the next phase involves defining the Hardware, Software and User Application.
Let’s start with Hardware. This is the physical, tangible “thing” that you are going to produce. In the case of the smart garage door opener, there is a motor, gears, a metal case and a clear plastic light cover. A three-prong power cord plugs into an outlet and connects to the power supply inside the opener. Cables and wires connect all the printed circuit boards (PCB) together. Each board has chips, connectors, heat-sinks and other components. There are nuts, bolts, grommets and fasteners holding everything together. All of this is “hardware”. You can see some of this in the picture below.
These items can be put into sub-groups under hardware: Board Specification, Mechanical Specification and System Specification. Now let’s look at each of these in a little more detail.
(BTW – If you are really interested in Chamberlain Smart Garage Door Openers, CNET has an excellent review. The image above is used with thanks from that article, by Tyler Lizenby)
The System Specification is where the “custom-off-the-shelf” (COTS) or “third-party” pieces are called out. In this case, the power supply (for converting the 120V AC from the house receptacle into 5V DC in the opener), the cable harnesses (connect the AC/DC power supply to each board), the gears, the screws and fasteners.
From this specification, a System Bill Of Materials (BOM) is created which lists every unique item: a description of what it is, the vendor, the vendor part number, an optional internal part number and a placement designator. The BOM is used by the purchasing department to buy the necessary amount of materials to build the unit, and by the manufacturing group to know how to assemble it. For some products there are literally no items that would be included in this specification, and for other products this is the only specification when they are built completely using COTS. I budget 15 hours on average for developing this specification.
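A System BOM can be kept as simple structured data that both purchasing and manufacturing can read. Here is a minimal sketch in Python of one possible representation, with a purchase roll-up for a build; the line items, vendors, and part numbers are illustrative placeholders, not a real opener BOM.

```python
# Minimal System BOM representation and purchase roll-up.
# Line items, vendors, and part numbers are illustrative placeholders.

bom = [
    {"description": "AC/DC power supply, 120V in / 5V out", "vendor": "VendorA",
     "vendor_pn": "PSU-0505", "internal_pn": "PWR-001", "designator": "PS1", "qty": 1},
    {"description": "Cable harness, PSU to control board", "vendor": "VendorB",
     "vendor_pn": "CH-12", "internal_pn": "CBL-003", "designator": "W1", "qty": 1},
    {"description": "M4 x 10mm screw", "vendor": "VendorC",
     "vendor_pn": "M4-10", "internal_pn": "FST-017", "designator": "S1-S8", "qty": 8},
]

units_to_build = 25
for item in bom:
    total = item["qty"] * units_to_build   # quantity to purchase for this build
    print(f'{item["internal_pn"]:8s} {item["vendor_pn"]:10s} order {total}')
```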
The Mechanical Specification defines the brackets, the metal case that houses everything, the clear plastic light cover, etc. This specification provides the detail drawings of each piece with all dimensions, holes, hole diameters, bend lines and stamp areas. The designer uses a CAD (Computer Aided Design) program to create all the documents. The CAD files are used by the metal and plastic fabricators to create the housing. The specification can be created fairly easily as the bulk of the work is in the actual CAD file creation. Figure on 3 hours for the specification; but the CAD work can take from a few hours to many tens of hours depending on what is being designed. For a garage door housing and brackets, 10 hours should be sufficient.
The CAD files are sent to the fabricator who cuts, bends, stamps, drills and otherwise works on the metal to create what is on the drawings. Similarly, a product may use a 3-D Printing (an additive process) or a CNC milling-machine (a subtractive process) to create pieces of the product. Rapid prototype companies can provide quick turn-around of low volumes in 1 to 3 days, but creating mechanical pieces can often take 1 to 3 weeks.
Hardware Board Specification
The Board Specification, also referred to as the Hardware Specification, contains all of the information about the various modules, chips, resistors, capacitors, antennae, heat sinks and other electronics, interconnect and passive components on a board. If there are multiple boards in a design, then there would typically be a specification for each board. This specification has the basic characteristics and implementation details for each component, and is a very condensed summary of the data sheet. This document focuses on the electrical parameters and connectivity of the components.
If you are not familiar with all the types of components available to you as a designer, there is a helpful 3-volume Encyclopedia of Electronic Components; the set includes key information on electronics parts for your projects—complete with photographs, schematics, and diagrams.
You’ll learn what each one does, how it works, why it’s useful, and what variants exist.
Volume 1: Resistors, Capacitors, Inductors, Switches, Encoders, Relays, Transistors (Amazon – $20.95)
Once you understand the components, here is an excellent reference that provides the essential information every circuit designer needs to produce a working circuit, as well as information on how to make a design that is robust, tolerant to noise and temperature, and able to operate in the system for which it is intended. It looks at best practices, design guidelines, and engineering knowledge gained from years of experience, and includes practical, real-world considerations for components and printed circuit boards (PCBs) as well as their manufacturability, reliability, and cost: The Circuit Designer’s Companion (Amazon – $69.33)
Of particular importance is the pin-out (e.g., what each pin or connection of a device actual does) for the components. Also included is the power usage and timing information. It’s also useful to include the device sizes (e.g., width, length, height and weight) for reference. How each device is used and how it connects with other devices on the board is described in this document. As such, the board specification is used as the guidelines for the schematics.
This is a rather detailed document; It can take 40 hours to read through data sheets and determine how devices work together. In the case of our garage door opener example, how does the processor on the board control the motor to turn the gears which opens/closes the door? How does the door status get read from a sensor and sent to the communications chip? All of the signals travel on wires / board traces to the pins of chips and the designer has to determine which pins and wires are connected.
This is a good time to order the components needed for making the first few boards (e.g., the prototypes). For purchasing individual or small quantities of parts, consult the following:
Arrow – http://www.arrownac.com/
Digi-Key – http://www.digikey.com/
Element 14 (Newark) – http://www.element14.com/community/welcome
Mouser – http://www.mouser.com/
For an illustrative example of what occurs in the next stages, let’s look at the PIC24FJ16GA004 microcontroller from Microchip, and a USB Micro-B connector from Assmann WSW Components. The microcontroller is an "active" component, whereas the USB connector is a "passive" component.
This is what the actual components looks like (not to scale), and what would be soldered to the Printed Circuit Board (PCB):
This is the pin-out diagram, which would be included in the Hardware Board Specification:
The schematic symbol created for the schematic library:
The physical dimensions of the device, along with the solder areas:
The layout symbol created for the PCB library:
Hardware Design / Schematic Capture
With a comprehensive board specification and the relevant datasheets, a schematic can be created. The first step is to make a diagram or symbol for each component. These are also called “library elements”. This is a representation that shows the electrical connection for the device. This also has Power and Ground pins, which are often not shown on the schematic as these are common to many devices (the exception is when there are specific, non-common power and/or ground connections, often the case with communications or precision components).
You’ll need some tools to get started at this stage. The "Eagle" tools from Cadsoft (a division of Autodesk) are excellent for beginners, hobbyists, entrepreneurs as well as full scale industry engineering firms. The software scales from simple designs to very complex ones. Another tool vendor to consider is Altium. Most of the vendors have fully functional free versions that are typically limited by the number of components, the number of schematic pages, the physical board dimensions, the number of board layers or some combination of all of these.
It’s also helpful to have a physical book to refer to as you are working with these tools. The online materials available in .pdf format are extensive, but printing them can be costly and reading them online can be a strain.
For Cadsoft Eagle tools, we recommend:
PCB Design in EAGLE – Part 1: Learn about EAGLE’s user interface, adding parts, schematics, (Amazon Kindle Unlimited – $0.00)
Many schematic tool vendors include the most often used components in a packaged library, and component vendors also have symbol libraries for their devices. However, there are many different schematic tools, and many different components; It is often the case that schematic symbols for major components in the design will need to be created. Once all the symbols are available, the engineer can connect the pins together as required for the design.
At this stage, all of the interconnects, passives, electromagnetic and active components for the design are accounted for on the schematic. The library for each of these contains a description, a vendor, a vendor part number and an optional internal part number. When a symbol is placed on the schematic, it is assigned a unique reference number which is used in the board assembly process. Once all of the parts are placed on the schematic, a parts Bill Of Materials (BOM) can be generated. This is used by the purchasing group to know how many of which item to buy.
Also, once all of the components have been connected or “wired” together on the schematic, a “netlist” can be created. This is a file which lists the network of connections between the components on the board and is used in creating the physical layout of the board.
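Conceptually, a netlist is just a mapping from each net name to the component pins it connects. Here is a minimal sketch in Python; the reference designators, pin numbers, and net names are hypothetical.

```python
# A netlist maps each net name to the (reference designator, pin) pairs it connects.
# Designators, pins, and net names here are hypothetical.
netlist = {
    "VDD":    [("U1", "28"), ("C1", "1"), ("J1", "1")],
    "GND":    [("U1", "8"),  ("C1", "2"), ("J1", "5")],
    "USB_DP": [("U1", "15"), ("J1", "3")],
    "USB_DM": [("U1", "14"), ("J1", "2")],
}

for net, pins in netlist.items():
    print(net, "->", ", ".join(f"{ref}.{pin}" for ref, pin in pins))
```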
This entire process is referred to as "Hardware Design" or "Schematic Capture". For a small design with only a few components, budget about 15 hours for this step.
Depending on the selected components and their functionality, it may be possible to simulate the operational characteristics of the design. Many manufacturers provide simulation models for their components, and it is also possible to develop models "in-house" (or pay a third-party developer to create them). A first level simulation may only use simple static timing analysis (STA) models and generate cycle-based results. This is generally a less computationally intensive method to validate and verify basic operations.
For example, are address and data lines connected properly? Is combinatorial logic generating the expected output for a given input? At the other extreme, a fully simulated schematic will have parameters for all of the input buffer circuits, output driver circuits and internal timing.
These timing parameters include such things as:
- Input Delay
- Output Delay
- Min/Max/Typical Input Skew
- Min/Max/Typical Output Skew
- Internal Propagation Delay
It is also possible to enter estimated wire or line delays for the nets connecting the components. Since the nets are an abstraction of the physical wires and board traces, these estimates are based on an understanding of how the actual board will be realized.
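To make the cycle-based check concrete, here is a minimal sketch in Python of a worst-case timing calculation for a single path using the kinds of parameters listed above; the clock period and all delay values are invented for illustration.

```python
# Worst-case static timing check for one path: driver output -> trace -> receiver input.
# All values (in nanoseconds) are illustrative placeholders.

clock_period_ns = 20.0          # 50 MHz system clock

path = {
    "output_delay_max": 6.0,    # driver clock-to-output delay, maximum
    "output_skew_max":  0.5,
    "trace_delay_est":  1.2,    # estimated wire/trace delay on the net
    "input_setup":      3.0,    # receiver setup requirement
    "input_skew_max":   0.5,
}

worst_case = (path["output_delay_max"] + path["output_skew_max"]
              + path["trace_delay_est"] + path["input_skew_max"])
slack = clock_period_ns - path["input_setup"] - worst_case

print(f"Worst-case path delay: {worst_case:.1f} ns, slack: {slack:.1f} ns")
print("Timing met" if slack >= 0 else "Timing violation")
```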
PCB Design / Board Layout
In the Printed Circuit Board (PCB) Design stage, also called "Board Layout", the engineer determines how the physical board will look with all of the components in place. Using another software tool (although usually from the same vendor as the schematic capture tools), the designer creates a 2-dimensional shape of a board (e.g., a 3 inch x 5 inch rectangle or a 4.5 inch diameter circle) to represent the PCB.
Similar to the schematic capture library, there must be a PCB library element for each component. Whereas the schematic symbol was a conceptual representation of the electrical connections of the component, the PCB symbol is an exact physical representation (width, length, height). The layout symbol shows the solder areas for the pins and pads for surface-mount parts, as well as where the holes are for through-hole components.
The PCB engineer places the components on the board and begins the process of placing metal traces to create the netlist connections indicated in the schematic. The “nets” in the schematic are virtual, abstracted connections, and the PCB connections are where the actual metal will be on the board. This is the stage where the ground (e.g., GND) and power (e.g., VCC or VDD) planes and connections are created.
The “writing” for the part outlines, component numbering, company name / logo, product information, etc. is put into a “silk screen” on either/both the top and bottom sides of the PCB.
PCB design is both a science and an art, especially for analog designs and high speed designs. For a simple, low-frequency, digital design the PCB layout can take at least as long as the schematic capture stage. In this example, I would budget about 15 hours.
At the end of the board layout phase, the CAD software will generate the files to be used for the physical PCB creation. These files are commonly called "Gerber" files. Each Gerber file represents only one PCB layer. That means you will usually get seven files for a two-layer board:
- Top layer
- Bottom layer
- Solder Stop Mask Top
- Solder Stop Mask Bottom
- Silk Top
- Silk Bottom
- Drill – some PCB fabricators may want a different format file named “excellon.cam”
There are various design rule checks (DRCs) which can be performed on the layout at this time. Many of these checks relate to the PCB fabrication and Board Assembly – how closely parts can be placed together, what the minimum thickness is for a metal trace, the types of angles and curves permitted on metal traces, etc. Following these rules, and correcting any/all violations at this stage, will help in the following stages.
At a minimum, the physical connection netlist from the PCB layout should be compared to the netlist from the schematic capture. There should be a one-to-one correspondence between the two netlists. If they don’t match up, there is either a trace missing on the board, a trace added on the board or an incorrect trace routing on the board.
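That comparison amounts to checking that the two netlists describe the same set of connections. Here is a minimal sketch in Python, reusing the net-to-pins representation shown earlier; the example data are illustrative and deliberately include one mismatch.

```python
# Compare the schematic netlist against the netlist extracted from the PCB layout.
# Each netlist maps net name -> list of (designator, pin) pairs; data is illustrative.

def normalize(netlist):
    # Use frozensets so pin order does not matter in the comparison.
    return {net: frozenset(pins) for net, pins in netlist.items()}

def compare_netlists(schematic, layout):
    schematic, layout = normalize(schematic), normalize(layout)
    missing_on_board = set(schematic) - set(layout)
    extra_on_board = set(layout) - set(schematic)
    misrouted = {net for net in set(schematic) & set(layout)
                 if schematic[net] != layout[net]}
    return missing_on_board, extra_on_board, misrouted

schematic_netlist = {"VDD": [("U1", "28"), ("C1", "1")], "GND": [("U1", "8"), ("C1", "2")]}
layout_netlist    = {"VDD": [("U1", "28"), ("C1", "1")], "GND": [("U1", "8")]}  # C1.2 missing

missing, extra, misrouted = compare_netlists(schematic_netlist, layout_netlist)
print("Nets missing on board:", missing)
print("Extra nets on board:", extra)
print("Nets with differing connections:", misrouted)
```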
Simulations can become more accurate at this stage since the connections between components is defined. The characteristics of the connections depend on such things as the length, width and amount of material (e.g., the copper pour) used in the trace; The number of connections between different layers of the PCB (e.g., the vias), the dielectric / type of PCB material (e.g., FR4) and other factors.
It is also possible to simulate more than just the electrical signal characteristics of this virtual board. Heat maps and temperature profiles can be created; Electromagnetic interference (EMI) and electromagnetic radiation (EMR) – essentially how much “noise” does the system create at various frequencies – scans can be generated.
At this point, the actual design process is complete.
There are many places which will create the green (sometimes red) glass-reinforced epoxy laminate sheets that are the printed circuit board. A small, rectangular, 2-sided PCB can be produced very quickly for nominal cost. Many companies can quick-turn up to 10 PCBs in under 2 days for $25 to $50 each.
Here is an example of a “bare” board:
Board Assembly /PCB Stuffing
With the components you ordered during the specification stage and the boards back from the fabricator, it’s time to "assemble" everything. Also called "stuffing" the board, this is where the parts are soldered to the board to create a finished product. Many designers will use the same company for fabrication and assembly. This can save on costs, since it’s a packaged price that includes both services. And it can also save time since there is no delay in mailing the PCB to you, and then you mailing the PCB plus components to the assembler.
For reference, a small, 2-sided board with a low number of components can be assembled in under 2 days for less than $50 per board. The cost drops significantly for a 1- to 2-week delivery. Per-unit costs also decrease when ordered in volume, so the final production costs will be considerably less than the cost of the initial prototypes.
The final example, an assembled PCB:
To summarize the hardware design process, starting with your Idea and taking it through to an assembled board:
Concept (1 Hour) – Result is a one-paragraph to one-page document.
Analysis / Feasibility Study (45 Hours) – Result is a 20- to 50-page document.
Hardware System Spec (15 Hours) – Result is a 1- to 10-page document and a Bill Of Materials (BOM).
Mechanical System Spec (5 Hours) – Result is 1- to 5-page document
Mechanical Design (5 Hours – 100 Hours) – Result is CAD drawing files
Hardware Board Specification (40 Hours) – Result is 5- to 20-page document
Hardware Design / Schematic Capture (15 Hours) – Result is a Netlist and a Bill Of Materials (BOM)
PCB Design / Board Layout (15 hours) – Result is a Netlist and a set of Gerber files
With these time estimates in mind, it’s possible for a solo-designer to go from an Idea to a prototype ready design in 3 to 4 weeks (at 40 hours / week). With a dedicated team, this can be expedited to under 2 weeks (!).
What Idea do you want to start with?
Share in the comments your favorite tools for Schematic Capture and for PCB Layout. Also, please feel free to recommend any PCB fabrication and Board Assembly companies you would use for your next project.
See here for more information on the pricing.
Causality is the relationship between causes and effects. It is considered to be fundamental to all natural science, especially physics. Causality is also a topic studied from the perspectives of philosophy and statistics.
Cause and effect in physics
In physics it is helpful to interpret certain terms of a physical theory as causes and other terms as effects. Thus, in classical (Newtonian) mechanics a cause may be represented by a force acting on a body, and an effect by the acceleration which follows as quantitatively explained by Newton's second law. For different physical theories the notions of cause and effect may be different. For instance, in the general theory of relativity, acceleration is not an effect (since it is not a generally relativistic vector); the general relativistic effects comparable to those of Newtonian mechanics are the deviations from geodesic motion in curved spacetime. Also, the meaning of "uncaused motion" is dependent on the theory being employed: for Newton it is inertial motion (constant velocity with respect to an inertial frame of reference), in the general theory of relativity it is geodesic motion (to be compared with frictionless motion on the surface of a sphere at constant tangential velocity along a great circle). So what constitutes a "cause" and what constitutes an "effect" depends on the total system of explanation in which the putative causal sequence is embedded.
A formulation of physical laws in terms of cause and effect is useful for the purposes of explanation and prediction. For instance, in Newtonian mechanics, an observed acceleration can be explained by reference to an applied force. So Newton's second law can be used to predict the force necessary to realize a desired acceleration.
In classical physics, a cause should always precede its effect. In relativity theory the equivalent restriction limits causes to the back (past) light cone of the event to be explained (the "effect"), and any effect of a cause must lie in the cause's front (future) light cone. These restrictions are consistent with the grounded belief (or assumption) that causal influences cannot travel faster than the speed of light and/or backwards in time.
Another requirement, at least valid at the level of human experience, is that cause and effect be mediated across space and time (requirement of contiguity). This requirement has been very influential in the past, in the first place as a result of direct observation of causal processes (like pushing a cart), in the second place as a problematic aspect of Newton's theory of gravitation (attraction of the earth by the sun by means of action at a distance) replacing mechanistic proposals like Descartes' vortex theory; in the third place as an incentive to develop dynamic field theories (e.g., Maxwell's electrodynamics and Einstein's general theory of relativity) restoring contiguity in the transmission of influences in a more successful way than did Descartes' theory.
The empiricists’ aversion to metaphysical explanations (like Descartes’ vortex theory) weighs heavily against the idea of the importance of causality. Causality has accordingly sometimes been downplayed (e.g., Newton’s "Hypotheses non fingo"). According to Ernst Mach the notion of force in Newton’s second law was pleonastic, tautological and superfluous. Indeed, it is possible to consider the Newtonian equations of motion of the gravitational interaction of two bodies,

$$ m_1 \ddot{\mathbf{r}}_1(t) = -\frac{G m_1 m_2}{\lvert \mathbf{r}_1 - \mathbf{r}_2 \rvert^3}\,\bigl(\mathbf{r}_1(t) - \mathbf{r}_2(t)\bigr), \qquad m_2 \ddot{\mathbf{r}}_2(t) = -\frac{G m_1 m_2}{\lvert \mathbf{r}_2 - \mathbf{r}_1 \rvert^3}\,\bigl(\mathbf{r}_2(t) - \mathbf{r}_1(t)\bigr), $$

as two coupled equations describing the positions $\mathbf{r}_1(t)$ and $\mathbf{r}_2(t)$ of the two bodies, without interpreting the right hand sides of these equations as forces; the equations just describe a process of interaction, without any necessity to interpret one body as the cause of the motion of the other, and allow one to predict the states of the system at later (as well as earlier) times.
The ordinary situations in which humans singled out some factors in a physical interaction as being prior and therefore supplying the "because" of the interaction were often ones in which humans decided to bring about some state of affairs and directed their energies to producing that state of affairs—a process that took time to establish and left a new state of affairs that persisted beyond the time of activity of the actor. It would be difficult and pointless, however, to explain the motions of binary stars with respect to each other in that way.
The possibility of such a time-independent view is at the basis of the deductive-nomological (D-N) view of scientific explanation, considering an event to be explained if it can be subsumed under a scientific law. In the D-N view, a physical state is considered to be explained if, applying the (deterministic) law, it can be derived from given initial conditions. (Such initial conditions could include the momenta and distance from each other of binary stars at any given moment.) Such 'explanation by determinism' is sometimes referred to as causal determinism. A disadvantage of the D-N view is that causality and determinism are more or less identified. Thus, in classical physics, it was assumed that all events are caused by earlier ones according to the known laws of nature, culminating in Pierre-Simon Laplace's claim that if the current state of the world were known with precision, it could be computed for any time in the future or the past (see Laplace's demon). However, this is usually referred to as Laplace determinism (rather than `Laplace causality') because it hinges on determinism in mathematical models as dealt with in the mathematical Cauchy problem. Confusion of causality and determinism is particularly acute in quantum mechanics, this theory being acausal in the sense that it is unable in many cases to identify the causes of actually observed effects or to predict the effects of identical causes, but arguably deterministic in some interpretations (e.g. if the wave function is presumed not to actually collapse as in the many-worlds interpretation, or if its collapse is due to hidden variables, or simply redefining determinism as meaning that probabilities rather than specific effects are determined).
In modern physics, the notion of causality had to be clarified. The insights of the theory of special relativity confirmed the assumption of causality, but they made the meaning of the word "simultaneous" observer-dependent. Consequently, the relativistic principle of causality says that the cause must precede its effect according to all inertial observers. This is equivalent to the statement that the cause and its effect are separated by a timelike interval, and the effect belongs to the future of its cause. If a timelike interval separates the two events, this means that a signal could be sent between them at less than the speed of light. On the other hand, if signals could move faster than the speed of light, this would violate causality because it would allow a signal to be sent across spacelike intervals, which means that at least to some inertial observers the signal would travel backward in time. For this reason, special relativity does not allow communication faster than the speed of light.
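Stated compactly (this is the standard textbook formulation, not a quotation from the sources discussed here): for an event at $(t_1, \mathbf{x}_1)$ to be a possible cause of an event at $(t_2, \mathbf{x}_2)$ in special relativity, the separation between them must be timelike or null and future-directed,

$$ c^2 (t_2 - t_1)^2 - \lVert \mathbf{x}_2 - \mathbf{x}_1 \rVert^2 \ \ge\ 0 \qquad \text{and} \qquad t_2 > t_1 , $$

which is exactly the condition that a signal travelling no faster than light could connect the two events.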
In the theory of general relativity, the concept of causality is generalized in the most straightforward way: the effect must belong to the future light cone of its cause, even if the spacetime is curved. New subtleties must be taken into account when we investigate causality in quantum mechanics and relativistic quantum field theory in particular. In quantum field theory, causality is closely related to the principle of locality. However, the principle of locality is disputed: whether it strictly holds depends on the interpretation of quantum mechanics chosen, especially for experiments involving quantum entanglement that satisfy Bell's Theorem.
Despite these subtleties, causality remains an important and valid concept in physical theories. For example, the notion that events can be ordered into causes and effects is necessary to prevent (or at least outline) causality paradoxes such as the grandfather paradox, which asks what happens if a time-traveler kills his own grandfather before he ever meets the time-traveler's grandmother. See also Chronology protection conjecture.
"Small variations of the initial condition of a nonlinear dynamical system may produce large variations in the long term behavior of the system."
This opens up the opportunity to understand a distributed causality.
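A minimal numerical illustration of that "small variations" claim (added here, not from the source) uses the logistic map, a standard example of a chaotic system; the parameter and starting values are arbitrary.

```python
# Sensitivity to initial conditions in the logistic map x_{n+1} = r*x_n*(1 - x_n).
def logistic_orbit(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.300000)   # reference trajectory
b = logistic_orbit(0.300001)   # initial condition differs by one millionth

for n in (0, 10, 20, 30, 40, 50):
    print(f"n={n:2d}  |difference| = {abs(a[n] - b[n]):.6f}")
# After a few dozen iterations the two trajectories differ by order 1,
# even though they started only 1e-6 apart.
```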
A related way to interpret the butterfly effect is to see it as highlighting the difference between the application of the notion of causality in physics and a more general use of causality as represented by Mackie's INUS conditions. In classical (Newtonian) physics, in general, only those conditions are explicitly taken into account that are both necessary and sufficient. For instance, when a massive sphere is caused to roll down a slope starting from a point of unstable equilibrium, its velocity is assumed to be caused by the force of gravity accelerating it; the small push that was needed to set it into motion is not explicitly dealt with as a cause. In order to be a physical cause there must be a certain proportionality with the ensuing effect. A distinction is thus drawn between triggering and causation of the ball's motion. By the same token, the butterfly can be seen as triggering a tornado, with the cause assumed to be seated in the atmospheric energies already present beforehand rather than in the movements of the butterfly.
Causal dynamical triangulation
Causal dynamical triangulation (abbreviated as "CDT"), invented by Renate Loll, Jan Ambjørn and Jerzy Jurkiewicz, and popularized by Fotini Markopoulou and Lee Smolin, is an approach to quantum gravity that, like loop quantum gravity, is background-independent. This means that it does not assume any pre-existing arena (dimensional space), but rather attempts to show how the spacetime fabric itself evolves. The Loops '05 conference, hosted by many loop quantum gravity theorists, included several presentations which discussed CDT in great depth and revealed it to be a pivotal insight for theorists. It has sparked considerable interest as it appears to have a good semi-classical description. At large scales it re-creates the familiar 4-dimensional spacetime, but it shows spacetime to be 2-dimensional near the Planck scale, and reveals a fractal structure on slices of constant time. Using a structure called a simplex, it divides spacetime into tiny triangular sections. A simplex is the generalized form of a triangle, in various dimensions. A 3-simplex is usually called a tetrahedron, and the 4-simplex, which is the basic building block in this theory, is also known as the pentatope, or pentachoron. Each simplex is geometrically flat, but simplices can be "glued" together in a variety of ways to create curved spacetimes. Where previous attempts at triangulation of quantum spaces have produced jumbled universes with far too many dimensions, or minimal universes with too few, CDT avoids this problem by allowing only those configurations in which causes precede effects, i.e. in which the timelines of all joined edges of simplices agree.
Causality may thus lie at the foundation of spacetime geometry.
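As a small aside (not from the source), the combinatorics of the simplices used in CDT can be checked directly: a k-dimensional simplex has k+1 vertices, and its j-dimensional faces are simply the (j+1)-element subsets of those vertices.

```python
# Count the proper faces of a k-simplex. A 4-simplex (pentachoron) should have
# 5 vertices, 10 edges, 10 triangular faces and 5 tetrahedral cells.
from math import comb

def face_counts(k):
    # j ranges over face dimensions 0 .. k-1
    return {j: comb(k + 1, j + 1) for j in range(k)}

print(face_counts(4))  # -> {0: 5, 1: 10, 2: 10, 3: 5}
```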
In causal set theory, causality takes an even more prominent place. The basis for this approach to quantum gravity is a theorem by David Malament, which states that the causal structure of a spacetime suffices to reconstruct its conformal class, so that knowing the conformal factor and the causal structure is enough to know the spacetime. Based on this, Rafael Sorkin proposed the idea of causal set theory, a fundamentally discrete approach to quantum gravity. The causal structure of the spacetime is represented as a partially ordered set (poset), while the conformal factor can be reconstructed by identifying each poset element with a unit volume.
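A minimal toy sketch of the causal-set idea (an illustration added here, not Sorkin's formalism): the only data are a finite set of elements and an acyclic "precedes" relation, and volume is read off simply by counting elements.

```python
# A toy causal set: elements a..e with a "directly precedes" relation.
relations = {
    "a": {"b", "c"},
    "b": {"d"},
    "c": {"d"},
    "d": {"e"},
    "e": set(),
}

def descendants(x):
    """All elements to the causal future of x (transitive closure)."""
    seen, stack = set(), [x]
    while stack:
        for y in relations[stack.pop()]:
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return seen

def interval(x, y):
    """Elements causally between x and y, inclusive: all z with x <= z <= y."""
    future_of_x = {x} | descendants(x)
    past_of_y = {y} | {z for z in relations if y in descendants(z)}
    return future_of_x & past_of_y

print(sorted(interval("a", "e")))  # ['a', 'b', 'c', 'd', 'e']
print(len(interval("a", "e")))     # 5 -> element count as a proxy for volume
```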
- Causal contact
- Causal system
- Particle horizon
- Philosophy of physics
- Wheeler–Feynman time-symmetric theory for electrodynamics
- Green, Celia (2003). The Lost Cause: Causation and the Mind–Body Problem. Oxford: Oxford Forum. ISBN 0-9536772-1-4. Includes three chapters on causality at the microlevel in physics.
- Bunge, Mario (1959). Causality: the place of the causal principle in modern science. Cambridge: Harvard University Press.
- e.g. R. Adler, M. Bazin, M. Schiffer, Introduction to general relativity, McGraw–Hill Book Company, 1965, section 2.3.
- Ernst Mach, Die Mechanik in ihrer Entwicklung, Historisch-kritisch dargestellt, Akademie-Verlag, Berlin, 1988, section 2.7.
- A. Einstein, "Zur Elektrodynamik bewegter Koerper", Annalen der Physik 17, 891–921 (1905).
- Bohm, David. (2005). Causality and Chance in Modern Physics. London: Taylor and Francis.
- Causal Processes, Stanford Encyclopedia of Philosophy
- Caltech Tutorial on Relativity — A nice discussion of how observers moving relative to each other see different slices of time.
- Faster-than-c signals, special relativity, and causality. This article explains that faster than light signals do not necessarily lead to a violation of causality.
- Articles by John G. Cramer:
- EPR Communication: Signals from the Future? "In this column I want to tell you about this causality-violating communications scheme and its possible consequences."
- The Transactional Interpretation of Quantum Mechanics "3.10 The Arrow of Time in the Transactional Interpretation – The formalism of quantum mechanics, at least in its relativistically invariant formulation, is completely even handed in dealing with the "arrow" of time, the distinction between future and past time directions." |
A novel summary is a digest of the main points. It should be a shorter version covering the main topics and the main arguments, depending on the type of novel you have. When summarizing, you need to know the purpose of the novel and highlight its important parts. Illustrative examples and smaller details should not be included, since they are not essential. As when summarizing any other document, a summary of a novel should recast the original material in your own words.
How to Summarize a Novel
- A summary of a novel must be fair and impartial, without introducing value judgments or bias. Avoid adding your own opinions, because the result will read like a review rather than a summary.
- Before you begin, identify the main ideas of the novel and know what type of novel it is. If possible, create an outline and set a word limit before you start writing.
- Check the novel's table of contents. This gives you an idea of its structure and of what it covers.
- Next, skim the novel quickly to get a sense of its scope. This gives you a general idea of what you will cover. If the novel has headings, they can help you organize your summary.
- As you read the novel, take notes. Do not worry if they are not grammatically correct at this stage; you can polish them later. Your task is to capture the main ideas of the content.
- Make a list of the novel's topics, then write one or two sentences for each section you identify. Focus on the main points and do not include illustrative examples. Do not be tempted to pad your summary with minor information, or you will no longer have a good summary.
These are the important things to know about how to summarize a novel. Keep them in mind so that the next time you need to write a summary, you will not have a hard time; they will serve as your guide.
Many people assume that the car’s battery is the only thing that powers all electrical components in the car.
But this is not true. The alternator plays a vital role in supplying power to the car’s AC, recharging the battery and ignition.
When your alternator is faulty, the battery power is drained fast, and you will find yourself with a stalled car. But what can cause the alternator to go bad?
6 Causes of an Alternator Not Charging
There are a few things that can cause an alternator to not charge anymore.
When your alternator is not charging, the most common causes are worn carbon brushes or a damaged alternator, a broken serpentine belt, a faulty fuse, a wiring issue, a bad car battery, or an engine control module error.
Here is a more detailed list of the 6 most common causes of an alternator that is not charging.
1. Worn out carbon brushes or damaged alternator
The most common reason why your car is not charging is actually because of a worn-out or damaged alternator.
You can carefully tap the alternator with a hammer while the engine is running and check the voltage at the car battery with a multimeter to see if it changes.
If the voltage changes and goes back to normal when you tap the alternator lightly while the engine is running, the carbon brushes inside it are worn out and need to be replaced, or the whole alternator needs replacing.
Sometimes there is an electrical problem in the alternator, and even if the voltage does not change, it might be damaged.
Another common cause is a bad diode plate or a voltage regulator. You might need some knowledge about alternators to replace these in most cases.
On older cars it was more common to replace parts inside the alternator, like the carbon brushes, diode plate, or voltage regulator. Nowadays alternators are quite cheap, and it is usually more worthwhile to replace the whole alternator.
2. Broken Serpentine belt
A closer observation of the alternator will reveal a pulley and belt system that works to convert mechanical energy into electrical energy.
The serpentine belt drives the alternator, so the alternator will stop working the moment the belt wears out and breaks, or the pulley becomes damaged.
This can also happen if the serpentine belt is not tightened correctly. Most cars have automatic tensioners, but these can fail, so it is better to double-check.
Some older cars have manual tensioners, and in this case, you might have to tighten the serpentine belt.
The serpentine belt and pulleys are often pretty easy and cheap to replace.
3. Faulty fuse
There is often a huge fuse connected to the alternator’s big power cable. It is often an 80A fuse or more and is most often found in your car’s fuse box in the engine bay.
Fuses blow due to a power surge, or they wear out. When this happens, the current will stop flowing from the alternator. The solution is to check your car's manual for the particular fuse that controls the alternator and replace it.
In some cars, you might also find another small fuse to the alternator’s control—usually a 15A to 20A fuse.
4. Wiring issues or connectors
An alternator usually has 3 or 4 wires connected to it in order to function properly. You will find one big main cable together with two or three small ones.
All these wires are important for the alternator’s function, and if one breaks off – you might lose the charging function.
Check the big power cable connectors between the alternator and the car battery to ensure there is no corrosion there. You can usually find that the cable will get warm if there is a bad connection somewhere.
Check or measure these wires with a multimeter. Remember that a simple measurement is not always conclusive, because wires that are half broken or have a bad connection need to be load-tested.
You should usually have 12 volts on one of these wires, and another one goes to the battery light on the dashboard. If there is a third, it often goes to the engine control unit. To measure this properly, you need a wiring diagram for your specific car model.
5. Damaged car battery
The alternator and the car's battery work hand in hand. A really bad car battery may not accept the charge from the alternator, which can cause the alternator not to charge at all.
In theory, a car can run on the alternator's charge alone, but this can cause heavy voltage spikes and other strange symptoms, so a bad car battery can also be the reason an alternator does not charge.
6. Engine control module error
Cars are increasingly equipped with modern electronics, and the engine control unit (ECU) manages most of the car's electrical components.
In many modern cars, the ECU also controls the alternator, and in some rare cases there may be a problem with the engine control module not controlling the alternator's charging.
Check for any trouble codes with an OBD2 scanner to determine whether another damaged part is preventing the alternator from charging.
In some rare cases, there might actually be a faulty engine control unit. But always check all other possible causes first.
Diagnosing an alternator that won’t charge
There are some easy steps you can go through to check the function of your alternator; a rough sketch of the voltage logic appears after the list below.
- Tap the alternator carefully with a hammer while the engine is running; if the charging goes back to normal, the carbon brushes inside it are worn out and need a replacement.
- Check the large power cable to the alternator and the fuse, usually a large 40-60 amp fuse near the battery.
- Check the ground cable between the engine and the body.
- Check the serpentine belt and make sure the alternator is spinning with the engine.
- Check the small power supply wire and the charging light wire. You can measure it with a multimeter, but you might need a wiring diagram and some car electronic skills to do this correctly.
- You can measure the diode assembly and the voltage regulator to make sure they are not damaged. On some alternators these parts can be replaced, but nowadays it often costs about the same to buy a new alternator. It is up to you to decide which is more worthwhile.
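The steps above boil down to a few voltage comparisons. As a rough illustration only (the threshold values are typical rules of thumb, not figures from this article, and your vehicle's specifications take precedence), the logic looks something like this:

```python
# Rough interpretation of battery-terminal voltage readings.
# Thresholds are common rules of thumb; always check your own car's specs.
def charging_status(volts_engine_off, volts_engine_running):
    if volts_engine_off < 12.0:
        note = "battery is deeply discharged or failing"
    else:
        note = "battery resting voltage looks normal (about 12.4-12.8 V)"

    if 13.5 <= volts_engine_running <= 14.8:
        verdict = "alternator appears to be charging"
    elif volts_engine_running > 14.8:
        verdict = "overcharging - suspect the voltage regulator"
    else:
        verdict = "not charging - check brushes, belt, fuse and wiring"
    return f"{verdict} ({note})"

print(charging_status(12.6, 14.2))  # charging normally
print(charging_status(12.3, 12.4))  # no charge coming from the alternator
```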
The Gachalá Emerald, one of the most valuable and famous emeralds in the world, was found in 1967 in the Vega de San Juan mine at Gachalá, a town in Colombia 142 km (88 mi) from Bogotá. In the Chibcha language, Gachalá means "place of Gacha." Now in the United States, the emerald was donated to the Smithsonian Institution by the New York jeweler Harry Winston.
The emerald was named in honor of Gachalá, the place where it was found.
- Shape: Emerald
- Color: Intense green
- Carats: 858 Carats
- Weight: 172 grams
- Size: 5 centimeters
- Year of extraction: 1967
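A quick consistency check added here (one metric carat is defined as exactly 0.2 grams):

858 ct × 0.2 g/ct = 171.6 g ≈ 172 g

so the carat and gram figures listed above agree.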
The emerald is part of the permanent collection of the Smithsonian Institution in Washington, D.C., donated in 1969 by the American jeweler Harry Winston; it is labeled under number 122078 in the catalog.
- Technical data sheet for the Gachalá Emerald on the official website of the Smithsonian Institution, Washington: http://www.mnh.si.edu/earth/text/dynamicearth/6_0_0_GeoGallery/geogallery_specimen.cfm?SpecimenID=4015&categoryID=1&categoryName=Gems&browseType=name (consulted on 06-19-2011)
- Digital article from the Colombian newspaper El Diario, published on 06-04-2008 (consulted on 06-19-2011).
- Emerald history
- Gachala Emerald
Media related to Gachalá Emerald at Wikimedia Commons |
Photo by Marty Horowitz, 10/30/2014
Even if you don't know the name of this insect, you can be sure you're looking at an adult. In the insect world, adults have wings :-)
Like all animals, this one started out as a fertilized egg. Dragonflies undergo 'incomplete' metamorphosis, which is not a complete rebuild, but is nevertheless pretty amazing. What hatches in this type of metamorphosis is called a nymph. Nymphs also live to eat - but instead of covering up and taking themselves apart, nymphs eat, grow, shed, repeat. At a certain point, the code for 'build wings' is turned on, and their final shed reveals their adult form. In the case above, a male Filigree Skimmer.
• Fourteen percent of high schoolers and 8 percent of middle schoolers are late for or miss school each day because of tiredness.
• Thirty percent of high schoolers and 6 percent of middle schoolers fall asleep in school each day.
• Wake County, N.C., middle schools switched to an hour later start time and increased test scores in math and reading.
• Sleep-deprived teens participate in more violent and property crime than other teens.
Sources: National Sleep Foundation, Mary Carskadon, University of Nebraska
Sleep deprivation and early school start times contribute to teenage delinquency, risk-taking, depression, pregnancy, obesity and diabetes, a national sleep expert told Chattanoogans on Thursday.
Mary Carskadon spoke at three events sponsored by the University of Tennessee at Chattanooga about her research on sleep and the teenage brain.
Early school start times, changes in teen sleep patterns and outside social pressure are combining to rob teens of vital sleep, which can cause serious problems, Carskadon said.
"Sleep is just getting squeezed out of their lives," she said.
Research by her and other scientists, studies by the Brookings Institution and national polls by the National Sleep Foundation show a correlation between early school start times and a host of behavioral, educational and developmental troubles.
The biobehavioral scientist spoke at the Oak Street Center in First-Centenary United Methodist Church near UTC on Thursday evening and at two other events earlier in the day.
Preteen children tire easily and fall asleep early, which means kids usually get the rest they need, she said. But beginning in puberty, teens' brains begin to allow them to stay up later, often causing them to miss out on the full nine hours of sleep research shows they need for development.
UTC criminal justice professor Robert Thompson spearheaded getting Carskadon to speak.
Thompson has cited research linking early school start times and delinquency in his push for a look at local juvenile crime problems.
Boyd Patterson, coordinator of the Chattanooga Gangs Task Force, spoke briefly at the introduction of the Thursday evening lecture, saying Carskadon's research could benefit the task force's work.
He told the crowd of 50 listeners that a majority of people in gangs joined when they were teens, and that among gang members, the teens often commit the most crimes.
Understanding teen behavior is crucial to working on gang problems, he said.
"Crime suppression and arrests are part of it," Patterson said. "Efforts like this, prevention and intervention, have to be in place."
Minnie Pruitt and Brea Watson, both UTC students and criminal justice majors, said they saw connections between sleep problems and both academic success and avoiding trouble.
Both women went to high school in Memphis where classes started at 7:45 a.m. Pruitt said her earliest class at UTC is 9 a.m.
"I grasp more ideas in class and more information," she said.
Dr. Anuj Chandra heads the Advance Center for Sleep Disorders, with offices here and in Cleveland, Tenn., and Trenton, Ga.
"As a society, we're all sleep-deprived," Chandra said in an interview before the lecture. He called Carskadon's research pinpointing a biological source for teen sleep patterns "groundbreaking."
Before scientists discovered this proof, many just assumed teens were lazy, he said.
Chandra's advice to parents is to set limits early, by the time children are 9 or 10, to establish good habits.
"No TV in the bedroom, cellphones are turned off before bed," he said. |
The Council of Europe celebrates its 70th anniversary this week! It is an important organisation for Inclusion Europe: We work with the Council to promote the rights of people with intellectual disabilities and their families; and we use membership in the Council of Europe as a basis for our own geographical expansion, with currently 74 members in 39 countries.
“70 years after its foundation, the Council of Europe is our continent’s leading human rights organisation,” says a statement by its Secretary-General, Thorbjørn Jagland. “47 member states have come together to agree common standards on human rights, democracy and the rule of law. All 830 million people living in this common legal space have an ultimate right of appeal to the European Court of Human Rights. This is unprecedented in European history, and an achievement that we should celebrate. The European Convention on Human Rights and the European Social Charter are the living roots from which our Organisation grows.”
Why is the Council of Europe important to people with intellectual disabilities and their families?
The Council of Europe, which is not a European Union body, has some instruments and institutions that are especially important to people with intellectual disabilities and their families:
- The European Convention on Human Rights. The convention, which came into force in 1953, was the first instrument to implement some of the rights stated in the Universal Declaration of Human Rights and make them binding.
- The European Court of Human Rights, which for example ruled in 2012 that the right to be free from torture and ill-treatment had been violated in the case of a person with a disability. It was also the first time the Court found the right to liberty had been violated in a social care case.
- The Istanbul Convention on violence against women and domestic violence, which is important for ending violence against women with intellectual disabilities
- The Commissioner for Human Rights. The task of the Commissioner is to promote awareness of and respect for human rights in the 47 Council of Europe member states. He or she examines the human rights situation during regular visits to these states, talking to both governments and civil society. In 2017, for example, the Commissioner published a report about education.
- The European Social Charter. It is a treaty that guarantees fundamental social and economic rights as a counterpart to the European Convention on Human Rights, which refers to civil and political rights. The charter deals with a broad range of everyday human rights related to employment, housing, health, education, social protection and welfare. It was adopted in 1961.
How did Inclusion Europe work with the Council of Europe?
In 2012 Inclusion Europe prepared an easy-to-read document requested by the Council of Europe: Recommendations from the Council of Europe to European governments: How to make sure people with disabilities can take part in political and public life (.pdf)
We are a member of the Conference of International Non-governmental Organisations (Conference of INGOs), which is consulted by the Council on relevant topics.
The Parliamentary Assembly of the Council of Europe drafts reports and resolutions, many of which deal with issues that are important for people with disabilities. In 2017, for example, Inclusion Europe contributed to the topic of political participation.
How we engage with the Council of Europe today
We are urging the Council of Europe to stop the work on the Optional Protocol to the Oviedo Convention, as the document is harmful to people with intellectual disabilities.
We have lodged collective complaints against Belgium in 2017 and against France in 2018 on behalf of our national members in those countries. (Read more about the complaint against France.) The complaints are based upon the European Social Charter.
Last year we joined other disability groups in protesting the Council of Europe’s decision to suspend the Committee of Experts on People with Disabilities.
* * *
Congratulations to the Council of Europe on its anniversary and its contributions towards achieving equal rights of people with intellectual disabilities and their families in Europe!
Our work brings the voice of people with intellectual disabilities and their families where decisions about their future are made.
This has always been incredibly important. It is even more so with the Covid pandemic's drastic impact on their rights and lives.
Being visible and vocal on issues directly affecting millions of people requires your support.
Become Inclusion Europe supporter and help us keep doing our work. |
Taking the ‘forever’ out of ‘forever’ chemicals
When Earl Tennant first reached out to environmental attorney Rob Bilott about the mysterious ailments befalling the cows on his West Virginia farm, no one was talking about “forever chemicals.”
In 1998, few people had heard of perfluorooctanoic acid (PFOA), a persistent fluorinated chemical nicknamed “C8” that the nearby DuPont plant was using to manufacture Teflon.
Twenty-two years later, communities across the U.S. are getting a crash course in PFOA and other poly- and perfluoroalkyl substances (PFAS) as they grapple with drinking water contaminated with these “forever chemicals,” so called because of their failure to break down in the environment. In Colorado, Fountain, Security and Widefield south of Colorado Springs and parts of south Adams County are among them.
An emerging body of evidence shows the fluorinated chemicals can cause cancer and developmental, endocrine, renal and metabolic problems.
Mines welcomed Bilott to campus earlier this year for the Herbert L. and Doris S. Young Environmental Issues Symposium. In a keynote talk and panel discussion, Bilott helped shed light on the future of PFAS—and Mines’ important role in the fight.
“I’m afraid we’re probably looking at another 20 years of research into what do we do with this stuff,” Bilott said during his keynote. “How do we handle it now that we’re finally realizing it’s out there and we’ve all been exposed and it’s everywhere?”
Tackling the PFAS problem
Today, Chris Higgins, professor of civil and environmental engineering, and his colleagues at Mines are at the forefront of the fight against "forever chemicals." Researchers are making an impact in the areas of fate and transport—how these chemicals move and accumulate in the environment, Higgins' expertise—and remediation—what to do once they're in drinking water, the focus of fellow Civil and Environmental Engineering faculty Timothy Strathmann and Chris Bellona.
In many ways, Mines was perfectly poised to be a leader on PFAS, Higgins said, because of its historic strengths in environmental chemistry and water issues and its commitment to solving real-world problems.
“What’s particularly unique is I feel like we are better suited to deal with practical problems than a lot of other schools,” Higgins said. “It’s not that we don’t do fundamental research, but our engagement with industry, with practitioners, with real problems, is very well recognized on campus and encouraged.”
The future of “forever chemicals”
Mines’ partners on the PFAS problem include the Colorado School of Public Health and Colorado Department of Public Health and Environment (CDPHE), both of which were represented, along with Bilott and Higgins, on the panel at the Young Environmental Issues Symposium.
The current EPA health advisory limit for PFAS in drinking water is 70 parts per trillion—a “really low level” when you’re talking about the firefighting foams that have been linked to contamination south of Colorado Springs and elsewhere, Higgins said. “One five-gallon bucket of this foam—of this historical foam containing the PFOS and PFOA—has enough of those chemicals in it to contaminate a water supply for 27,000 people for an entire year.”
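A rough back-of-the-envelope check added here (the foam PFAS content and per-person water use are assumptions, not figures from the article) suggests the order of magnitude is right: if a five-gallon (roughly 19-litre) bucket of legacy AFFF concentrate held on the order of 200 grams of PFOS/PFOA, then diluting that down to the 70 parts-per-trillion (70 ng/L) advisory level would require about

200 g ÷ (70 × 10⁻⁹ g/L) ≈ 2.9 × 10⁹ L

of water, and a year's supply for about 27,000 people at a few hundred litres per person per day is comparable:

27,000 people × 365 days × 300 L/day ≈ 3.0 × 10⁹ L.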
Manufacturers are moving away from the worst offenders—C8s— but the replacement foams still contain PFAS that are just as persistent in the environment while less accumulative in the body, Higgins said. “There’s been a movement to this as a potential Band-Aid until we get to a point where we have fluorine-free foams.”
Colorado is working hard to limit PFAS exposure throughout the state, said Tracie A. White ’98, remediation program manager at CDPHE. Potential state legislation introduced this year would give CDPHE the authority to require public utilities to test for PFAS in both their source and finished water. Under the bill, facilities with PFAS-containing foam would also have to register with the state and prove they are properly capturing and disposing of the foam.
On the remediation side, researchers at Mines and other institutions are making headway on a number of promising technologies that could treat contaminated water while it’s still in the ground. White said, “We are excited to be pilot testing a couple of these different technologies at Peterson Air Force Base during this upcoming year.” |
Latin for "by head," meaning to be determined by the number of people. Under a will, this is the most common method of determining what share of property each beneficiary gets when one of the beneficiaries dies before the willmaker, leaving children of his or her own. For example, Fred leaves his house jointly to his son Alan and his daughter Julie. But Alan dies before Fred does, leaving two young children. If Freds will states that heirs of a deceased beneficiary are to receive the property per capita, Julie and the two grandchildren will each take a third. If, on the other hand, Freds will states that heirs of a deceased beneficiary are to receive the property per stirpes, Julie will receive one-half of the property, and Alans two children will share his half in equal shares (through Alan by right of representation). |
The educational plan of the maisons d’éducation of the Legion of Honor aspires to form young girls in step with their time, and to prepare them to a life in keeping with the moral values of the Institution.
"Making pupils autonomous, ensuring they live a dignified and independent existence": the mission bequeathed by Napoleon more than two centuries ago is still topical. To this day, the educational program pursues two priorities: the transmission of knowledge and personal development.
In a spirit of high standards and benevolence, the two schools implement educational methods that encourage personal reflection and autonomy as well as intellectual openness, through a variety of cultural excursions and extracurricular activities.
The development of judgment and access to critical thinking are fostered. Pupils are expected to be dedicated to their studies.
Great attention is given to promoting multidisciplinary synergy, so as to create an environment that is beneficial to intellectual development and to making the students active participants in their training.
The girls’ motivation for scientific studies is also encouraged. They are provided with the most advanced technological resources to help them develop their interest in scientific and technical culture.
Artistic and musical culture
The maisons d’éducation possess a long-standing tradition of teaching music and the visual arts, practiced with a view to forging the students’ esthetic taste, to encourage them to surpass themselves and to learn the pleasure of creativity.
The music classes offered at Les Loges are highly appreciated for the quality and variety of the instruction. Nearly half of the schoolgirls learn how to play an instrument. In Saint-Denis, 150 students take music and singing lessons. They give concerts once a year in honor of the French President and on various occasions for the Grand Chancellor, as well as performances outside school.
The visual arts also hold a special place in the curriculum, as do theater and dance. In Saint-Denis, the architectural setting contributes to the unfolding of esthetic awareness.
The maisons d’éducation strive to hand down a sense of transcending one’s own abilities in the spirit of the values of the Legion of Honor: high standards, individual merit, self-esteem and respect for others.
The education provided aims to develop an appreciation for hard work, fruitful involvement and emulation.
Fostering independence, encouraging initiative, a sense of responsibility and developing creativity form the foundation of the educational enterprise.
Solidarity among students
At the same time, the solidarity that governs student relations is seen as a major factor for self-development and success. The years spent together as boarders favor a feeling of belonging, giving rise to long-lasting friendships and reinforcing mutual support.
Mutual assistance, a respect for differences, attention for others and the notion of the common good are living principles experienced daily. In certain subjects, a buddy system is used to pair up the strongest and weakest and teach them teamwork.
Openness to the world
Openness to the world outside is one of the priorities of the maisons d’éducation.
Trips abroad are organized regularly (Great Britain, Germany, Spain, Portugal, United States, Senegal, China, etc.), notably as academic exchanges with schools that share the same educational approach.
These exchanges foster the students’ awareness of the values that have been handed down to them and that they are in turn expected to pass on.
Duty of remembrance
The duty of commemoration features among the essential objectives of the schools. Studying the past and its historical figures is indispensable for the young generations to develop an informed interest in history.
Through work involving the past, these young women, whose presence in the maisons d’éducation is due to filial ties and parental merit, are constantly reminded of this precept.
A sense of civic service is also cultivated through participation in a number of official ceremonies at which they represent the Institution. |
Late termination of pregnancy
Late termination of pregnancy (TOP) or late-term abortions are abortions which are performed during a later stage of pregnancy. Late-term abortions are more controversial than abortion in general because the fetus is more developed and sometimes viable.
A late-term abortion often refers to an induced abortion procedure that occurs after the 20th week of gestation. The exact point when a pregnancy becomes late-term, however, is not clearly defined. Some sources define an abortion after 16 weeks as "late". Three articles published in 1998 in the same issue of the Journal of the American Medical Association could not agree on the definition. Two of the JAMA articles chose the 20th week of gestation to be the point where an abortion procedure would be considered late-term. The third JAMA article chose the third trimester, or 27th week of gestation.
The point at which an abortion becomes late-term is often related to the "viability" (ability to survive outside the uterus) of the fetus. Sometimes late-term abortions are referred to as post-viability abortions. However, viability varies greatly among pregnancies. Nearly all pregnancies are viable after the 27th week, and no pregnancies are viable before the 21st week. Everything in between is a "grey area".
- Canada: During the year 2009, 29% of induced abortions were performed before 8 weeks, 41% at 9 to 12 weeks, 7% at 13 to 16 weeks and 2% over 21 weeks.
- England and Wales: In 2005, 9% of abortions occurred between 13 and 19 weeks, while 1% occurred at or over 20 weeks.
- New Zealand: In 2003, 2.03% of induced abortions were done between weeks 16 and 19, and 0.56% were done over 20 weeks.
- Norway: In 2005, 2.28% of induced abortions were performed between 13 and 16 weeks, 1.24% of abortions between 17 and 20 weeks, and 0.20% over 21 weeks. Between February 15, 2010 and December 1, 2011, a total number of ten abortions were performed between 22 to 24 weeks. These have been declared illegal by The Norwegian Directorate of Health.
- Scotland: In 2005, 6.1% of abortions were done between 14 and 17 weeks, while 1.6% were performed over 18 weeks.
- Sweden: In 2005, 5.6% of abortions were carried out between 12 and 17 weeks, and 0.8% at or greater than 18 weeks.
- United States: In 2003, from data collected in those areas that sufficiently reported gestational age, it was found that 6.2% of abortions were conducted between 13 and 15 weeks, 4.2% between 16 and 20 weeks, and 1.4% at or after 21 weeks. Because the Centers for Disease Control and Prevention's annual study on abortion statistics does not calculate the exact gestational age for abortions performed past the 20th week, there are no precise data for the number of abortions performed after viability. In 1997, the Guttmacher Institute estimated the number of abortions in the U.S. past 24 weeks to be 0.08%, or approximately 1,032 per year.
In 1987, the Alan Guttmacher Institute collected questionnaires from 1,900 women in the United States who came to clinics to have abortions. Of the 1,900 questioned, 420 had been pregnant for 16 or more weeks. These 420 women were asked to choose among a list of reasons they had not obtained the abortions earlier in their pregnancies. The results were as follows:
- 71% Woman didn't recognize she was pregnant or misjudged gestation
- 48% Woman found it hard to make arrangements for abortion
- 33% Woman was afraid to tell her partner or parents
- 24% Woman took time to decide to have an abortion
- 8% Woman waited for her relationship to change
- 8% Someone pressured woman not to have abortion
- 6% Something changed after woman became pregnant
- 6% Woman didn't know timing is important
- 5% Woman didn't know she could get an abortion
- 2% A fetal problem was diagnosed late in pregnancy
- 11% Other
As of 1998, among the 152 most populous countries, 54 either banned abortion entirely or permitted it only to save the life of the pregnant woman.
In addition, another 44 of the 152 most populous countries restricted abortions after a particular gestational age: 12 weeks (Albania, Armenia, Azerbaijan, Belarus, Bosnia-Herzegovina, Bulgaria, Croatia, Cuba, Czech Republic, Denmark, Estonia, France, Georgia, Greece, Kazakhstan, Kyrgyz Rep., Latvia, Lithuania, Macedonia, Moldova, Mongolia, Norway (additional restrictions after 18 weeks), Russian Federation, Slovakia, Slovenia, South Africa, Ukraine, Tajikistan, Tunisia, Turkey, Turkmenistan, Uzbekistan, and Yugoslavia), 13 weeks (Italy), 14 weeks (Austria, Belgium, Cambodia, Germany, Hungary, and Romania), 18 weeks (Sweden), viability (Netherlands and to some extent the United States), and 24 weeks (Singapore and Britain). In these countries, abortions after the general gestational age limit are allowed only under restricted circumstances, which include, depending on country, risk to the woman's life, physical or mental health, fetal malformation, cases where the pregnancy was the result of rape, or poor socio-economic conditions. For instance, in Italy, abortion is allowed on request up until 90 days, after which it is allowed only if the pregnancy or childbirth pose a threat to the woman’s life, a risk to physical health of the woman, a risk to mental health of the woman; if there is a risk of fetal malformation; or if the pregnancy is the result of rape or other sexual crime. Denmark provides a wider range of reasons, including social and economic ones, which can be invoked by a woman who seeks an abortion after 12 weeks. Abortions at such stages must in general be approved by a doctor or a special committee, unlike early abortions which are performed on demand. The ease with which the doctor or the committee allows a late term abortion depends significantly by country, and is often influenced by the social and religious views prevalent in that region.
Some countries, like Canada, China (Mainland only) and Vietnam have no legal limit on when an abortion can be performed.
As of April 2007, 36 states had bans on late-term abortions that were not facially unconstitutional under Roe v. Wade (i.e. banning all abortions) or enjoined by court order. In addition, the Supreme Court in the case of Gonzales v. Carhart ruled that Congress may ban certain late-term abortion techniques, "both previability and postviability".
The Supreme Court has held that bans must include exceptions for threats to the woman's life, physical health, and mental health, but four states allow late-term abortions only when the woman's life is at risk; four allow them when the woman's life or physical health is at risk, but use a definition of health that pro-choice organizations believe is impermissibly narrow. Assuming that one of these state bans is constitutionally flawed, then that does not necessarily mean that the entire ban would be struck down: "invalidating the statute entirely is not always necessary or justified, for lower courts may be able to render narrower declaratory and injunctive relief."
Also, 13 states prohibit abortion after a certain number of weeks' gestation (usually 24 weeks). The U.S. Supreme Court held in Webster v. Reproductive Health Services that a statute may create "a presumption of viability" after a certain number of weeks, in which case the physician must be given an opportunity to rebut the presumption by performing tests. Therefore, those 13 states must provide that opportunity. Because this provision is not explicitly written into these 13 laws, as it was in the Missouri law examined in Webster, pro-choice organizations believe that such a state law is unconstitutional, but only "to the extent that it prohibits pre-viability abortions".
Ten states require a second physician to approve. The U.S. Supreme Court struck down a requirement of "confirmation by two other physicians" (rather than one other physician) because "acquiescence by co-practitioners has no rational connection with a patient's needs and unduly infringes on the physician's right to practice". Pro-choice organizations such as the Guttmacher Institute therefore interpret some of these state laws to be unconstitutional, based on these and other Supreme Court rulings, at least to the extent that these state laws require approval of a second or third physician.
Nine states have laws that require a second physician to be present during late-term abortion procedures in order to treat a fetus if born alive. The Court has held that a doctor's right to practice is not infringed by requiring a second physician to be present at abortions performed after viability in order to assist in saving the life of the fetus.
There are at least three medical procedures associated with late-term abortions:
- Dilation and evacuation (D&E)
- Early labour induction
- Intact dilation and extraction (IDX or D&X), sometimes referred to as "partial-birth abortion"
Abortions done for fetal abnormality are usually performed with induction of labor or with IDX; elective late-term abortions are usually performed with D&E.
- Graham, RH; Robson, SC; Rankin, JM (January 2008). "Understanding feticide: an analytic review.". Social science & medicine (1982) 66 (2): 289–300. doi:10.1016/j.socscimed.2007.08.014. PMID 17920742.
- Torres, Aida and Forrest, Jacqueline Darroch. (1988). Why Do Women Have Abortions. Family Planning Perspectives, 20 (4), 169-176. Retrieved April 19, 2007.
- Weihe, Pál, Steuerwald, Ulrike, Taheri, Sepideh, Færø, Odmar, Veyhe, Anna Sofía, & Nicolajsen, Did. (2003). The Human Health Programme in the Faroe Islands 1985-2001. In AMAP Greenland and the Faroe Islands 1997-2001. Danish Ministry of Environment. Retrieved April 19, 2007.
- Sprang, M.L, and Neerhof, M.G. (1998). Rationale for banning abortions late in pregnancy. Journal of the American Medical Association, 280 (8), 744-747.
- Grimes, D.A. (1998). The continuing need for late abortions. Journal of the American Medical Association, 280 (8), 747-750.
- Gans Epner, J.E., Jonas, H.S., Seckinger, D.L. (1998). Late-term abortion. Journal of the American Medical Association, 280 (8), 724-729.
- Globe & Mail. (2012). Percentage distribution of induced abortions by gestation period. Retrieved December 7th, 2012.
- Government Statistical Service for the Department of Health. (July 4, 2006). Abortion statistics, England and Wales: 2005. Retrieved May 10, 2007.
- Statistics New Zealand. (January 31, 2005). Demographic Trends 2004. Retrieved April 19, 2007.
- Statistics Norway. (April 26, 2006). Induced abortions, by period of gestation and the woman's age. 2005. Retrieved January 17, 2006.
- The Norwegian Directorate of Health. (May 7, 2012). Senaborter etter 22. uke Retrieved May 11, 2012.
- ISD Scotland. (May 24, 2006). Percentage of abortions performed in Scotland by estimated gestation. Retrieved May 10, 2007.
- Nilsson, E., Ollars, B., & Bennis, M.. The National Board of Health and Welfare. (May 2006). Aborter 2005. Retrieved May 10, 2007.
- Strauss, L.T., Gamble, S.B., Parker, W.Y, Cook, D.A., Zane, S.B., & Hamdan, S. (November 24, 2006). Abortion Surveillance - United States, 2003. Morbidity and Mortality Weekly Report, 55 (11), 1-32. Retrieved May 10, 2007.
- Guttmacher Institute. (January 1997). The Limitations of U.S. Statistics on Abortion. Retrieved April 19, 2007.
- Anika Rahman, Laura Katzive and Stanley K. Henshaw. A Global Review of Laws on Induced Abortion, 1985-1997, International Family Planning Perspectives (Volume 24, Number 2, June 1998).
- Guttmacher Institute. (April 1, 2007). State Policies on Later-Term Abortions. State Policies in Brief. Retrieved April 19, 2007.
- Ayotte v. Planned Parenthood, 546 U.S. 320 (2006).
- Webster v. Reproductive Health Services, 492 U.S. 490 (1989).
- NARAL Pro-Choice America. (2007). "Delaware." Who Decides? The Status of Women's Reproductive Rights in the United States. Retrieved April 19, 2007.
- Doe v. Bolton, 410 U.S. 179 (1973).
- Planned Parenthood Ass'n v. Ashcroft, 462 U.S. 476, 486-90 (1983).
- Gina Gonzales as told to Barry Yeoman, "I Had An Abortion When I Was Six Months Pregnant", Glamour |
PCSR Civil Service Exam Review Guide 16
This is the 16th of our review series. This week, we are going to discuss rate, base, and percentage; a short worked example follows the topic list below.
PART I: MATH
1.) Introducing Rate, Base, and Percentage
2.) Calculating for Percentage
3.) Calculating for the Rate
4.) Calculating for the Base
5.) Calculating Rate, Base, and Percentage
6.) Calculating Discounts and Interests
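A quick worked example (added here for illustration; the numbers are made up): the three quantities are related by Percentage = Rate × Base.
- Percentage: What is 25% of 80? Percentage = Rate × Base = 0.25 × 80 = 20.
- Rate: 20 is what percent of 80? Rate = Percentage ÷ Base = 20 ÷ 80 = 0.25 = 25%.
- Base: 20 is 25% of what number? Base = Percentage ÷ Rate = 20 ÷ 0.25 = 80.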
Part II: ENGLISH
Visit our complete review guide. |
Sleep my Treasure
Sleep, sleep, my treasure,
The long day’s pleasure
Has tired the birds, to their nests they creep;
The garden still is
Alight with lilies,
But all the daisies are fast asleep.
Sleep, sleep, my darling,
Dawn wakes the starling,
The sparrow stirs when he sees day break;
But all the meadow
Is wrapped in shadow,
And you must sleep till the daisies wake!
– E. Nesbit
The poet E. Nesbit refers to the baby as a mother's treasure whom she is putting to sleep. The mother tells her baby that the birds are tired and have crept into their nests, that the garden is alight with lilies, but that all the daisies are asleep. She further tells her baby that dawn wakes the starlings and the sparrow starts moving early in the morning. But the meadow is still covered in darkness, so the mother tells her baby to sleep till the daisies wake up.
treasure – (here) the little child.
pleasure – happiness
creep – to move slowly and quietly.
alight with – lit up with.
starling – a bird with dark, shiny feathers and a loud call.
stirs – moves slightly.
meadow – grassland.
wrapped – covered.
1. Answer the following questions:
1. Who is the speaker in the poem?
Ans. The mother is the speaker in the poem.
2. To whom is the poem addressed?
Ans. The poem is addressed to the baby.
3. What time is being described in the poem?
Ans. The time described in the 1st stanza is dusk and in the 2nd stanza it is dawn.
4. Name the white and bright things mentioned in the poem.
Ans. The white things mentioned in the poem are the lilies and the daisies. The bright things mentioned in the poem are the garden (alight with lilies), dawn, and daybreak.
2. Guess the meaning of the following from the context:
(a) The garden still is alight with lilies.
Ans. The white lilies make the garden appear to be bright and well lit.
(b) Dawn wakes the starling.
Ans. When the sun rises and it is dawn, the starling wakes up.
(c) The meadow is wrapped in shadow:
Ans. The meadow is covered with darkness.
3. What is your favourite time of the day? Describe it in detail.
My favourite time of the day is morning. In the morning it is bright and sunny. I also feel energetic in the morning after a good night's rest. If I wake up early in the morning, I can do a lot of my work and then have time for my hobbies and play.
4. Visit a library: Find and read stories and poems written by Edith Nesbit.
The Fields of Flanders
Last year the fields were all glad and gay
With silver daisies and silver may;
There were kingcups gold by the river’s edge
And primrose stars under every hedge.
This year the fields are trampled and brown,
The hedges are broken and beaten down,
And where the primroses used to grow
Are little black crosses set in a row.
And the flower of hopes, and the flowers of dreams,
The noble, fruitful, beautiful schemes,
The tree of life with its fruit and bud,
Are trampled down in the mud and the blood.
The changing seasons will bring again
The magic of Spring to our wood and plain;
Though the Spring be so green as never was seen
The crosses will still be black in the green.
The God of battles shall judge the foe
Who trampled our country and laid her low. . . .
God! hold our hands on the reckoning day,
Lest all we owe them we should repay
5. Draw word webs for the following. Begin with the given word and go on writing as many other words associated with it as you can. Use these words to write other related words to form a word web.
- Metropolitan county: Tyne and Wear
- Towns/cities: Stanhope, Wolsingham, Bishop Auckland, Willington, Durham, Chester-le-Street, Sunderland
- Source: Wearhead, County Durham, UK (elevation 340 m / 1,115 ft)
- Mouth: North Sea, UK (elevation 0 m / 0 ft)
- Length: 96 km (60 mi)
The River Wear (pronounced "weer") in North East England rises in the Pennines and flows eastwards, mostly through County Durham, to the North Sea in the City of Sunderland. At 60 mi (97 km) long, it is one of the region's longest rivers. It wends through a steep valley in the cathedral city of Durham and gives its name to Weardale in its upper reach and Wearside by its mouth.
Geology and history
The Wear rises in the east Pennines, an upland area raised up during the Caledonian orogeny. Specifically, the Weardale Granite underlies the headwaters of the Wear. Devonian (Old Red Sandstone) in age, the Weardale Granite does not outcrop at the surface; it was surmised by early geologists and subsequently proven to exist by the Rookhope borehole. It is the presence of this granite that has retained the high upland elevations of the area (less through its relative hardness, and more due to isostatic equilibrium), and it accounts for heavy local mineralisation, although most of the mineralisation is considered to have occurred during the Carboniferous period. Mining of lead ore has been known in the area of the headwaters of the Wear since the Roman occupation and continued into the nineteenth century, which accounts for the early extension of the then-new railways westwards along the Wear valley. Fluorspar is another mineral sporadically present alongside the Weardale Granite; it became important in the manufacture of steel from the late nineteenth century into the twentieth, and in many cases the steel industries were able to use previously unwanted excavation heaps. Fluorspar explains why iron and steel manufacture flourished in the Wear valley, Consett and Teesside during the nineteenth century. Overlying the granite are three Carboniferous deposits: limestone, the Coal Measures (raw materials for iron and steel manufacture) and sandstone, useful as a refractory material.

The last remaining fluorspar mine closed in 1999 following legislation on water quality. A mine at Rogerley Quarry, Frosterley, is operated by an American consortium who occasionally work it for specimen minerals. Minco are currently exploring the North Pennines and the upper Wear catchment for potential reserves of zinc at lower levels. Ironstone, important as an ore, was won from around Consett and Tow Law, then around Rookhope, while greater quantities were imported from just south of the Tees in North Yorkshire; these sources were in due course depleted or became uneconomic. Spoil heaps from the abandoned lead mines can still be seen, and since the last quarter of the twentieth century they have been the focus of attention for the recovery of gangue minerals, such as fluorspar for the smelting of aluminium. However, abandoned mines and their spoil heaps continue to contribute to heavy-metal pollution of the river and its tributaries. This matters for fishing in times of low flow and for infrastructure costs, as the River Wear is an important source of drinking water for many of the inhabitants along its course.

The former cement works at Eastgate, until recently run by Lafarge, was based on an inlier of limestone. The site recently gained planning permission for a visitor complex showcasing an eco-village using alternative technology, including a "hot rocks" water-heating system; the underlying granite has been drilled and reports confirm its presence. Bardon Aggregates continue to quarry at Heights near Westgate and operate a tarmac "blacktop" plant on site.
Mineral extraction has also occurred above St John's Chapel, where ganister was extracted for use in the steelmaking process at Consett. Around Frosterley, limestone, sand (crushed sandstone) and Frosterley Marble have been worked, and the Broadwood Quarry recently expanded into ground held on an old licence; the crushing plant continues to operate. A quarry at Bollihope was also mooted on a similar basis, but the plans seem to have been discontinued. Frosterley Marble was used extensively in church architecture; there are local examples in St Michael's Church, Frosterley, and in Durham Cathedral.
The upland area of Upper Weardale retains a flora that relates, almost uniquely in England, to the end of the last Ice Age, although it almost or entirely lacks the particular rarities that make up the unique "Teesdale Assemblage" of post-glacial plants. This may, in part, be due to the Pennine areas of Upper Weardale and Upper Teesdale being the site of the shrinking ice cap. The glaciation left behind many indications of its presence, including lateral moraines and material from the Lake District and Northumberland, although surprisingly few drumlins. After the Ice Age, the Wear valley became thickly forested; however, during the Neolithic period and increasingly in the Bronze Age, it was largely deforested for agriculture. Many woods still break up the rolling fields and crags in Weardale, but not on the scale of, for instance, Kielder Forest, which lies partly in Northumberland National Park.
It is thought that the course of the River Wear, prior to the last Ice Age, was much as it is now as far as Chester-le-Street. This can be established from boreholes, of which there have been many in the Wear valley due to coal mining. Northwards from Chester-le-Street, however, the Wear may have originally followed the current route of the lower River Team. The last glaciation reached its peak about 18,500 years ago, from which time it began a progressive retreat, leaving a wide variety of glacial deposits in its wake and filling existing river valleys with silt, sand and other glacial till. About 14,000 years ago, the retreat of the ice paused for perhaps 500 years at the city of Durham, as can be established from the types of glacial deposits in the vicinity of Durham City. The confluence of the River Browney was pushed from Gilesgate (the abandoned river valley still exists in Pelaw Woods) several miles south to Sunderland Bridge (Croxdale). At Chester-le-Street, when glacial boulder clay was deposited blocking its northerly course, the River Wear was diverted eastwards towards Sunderland, where it was forced to cut a new, shallower valley. The gorge cut by the river through the Permian magnesian limestone can be seen most clearly at Ford Quarry.
In the 17th edition of Encyclopædia Britannica (1990), reference is made to a pre-Ice Age course of the River Wear outfalling at Hartlepool.
Much of the River Wear is associated with the history of the Industrial Revolution. Its upper end runs through lead mining country, until this gives way to coal seams of the Durham coalfield for the rest of its length. As a result of limestone quarrying, lead mining and coal mining, the Wear valley was amongst the first places to see the development of railways. The Weardale Railway continues to run occasional services between Stanhope and Wolsingham.
Rising in the east Pennines, its head waters consisting of several streams draining from the hills between Killhope Law and Burnhope Seat, the head of the river is held to be in Wearhead, County Durham at the confluence of Burnhope Burn and Killhope Burn. This is shown on Ordnance Survey maps, and on the County Durham GIS online. However, a map produced by Durham County Council, and used on an interpretation board at Cowshill shows the River Wear taking in the northwest Killhope Burn from Wearhead up to Killhope. Excepting that this apparent extension of the Wear is an error, it can be assumed that there are attempts to reclassify Killhope Burn as the River Wear - on some analyses this practice of backtracking is common in the study of rivers as it gives the River Wear an issue as the source instead of a confluence, to which this article's Geology relates. The River Wear is a spate river and has been heavily influenced by previous government funded drainage schemes (gripping) with a view to improving marginal agricultural land. The river rises very quickly and has experienced much heavy flooding resulting in enhanced river bank erosion
The river flows eastwards through Weardale, one of the larger valleys of west County Durham, subsequently turning south-east and then north-east, meandering through the Wear Valley, still in County Durham, to the North Sea, where it outfalls at Wearmouth in the main locality of Monkwearmouth, on Wearside in the City of Sunderland. The river runs 60 miles (97 km) from head to mouth. Prior to the creation of Tyne and Wear, the Wear had been the longest river in England with a course entirely within one county. The Weardale Way, a long-distance public footpath, roughly follows the entire route, including the length of Killhope Burn.
Wearhead to Bishop Auckland
There are several towns, sights and tourist places along the length of the river. The market town of Stanhope is known in part for the ford across the river. From here the river is followed by the line of the Weardale Railway, which crosses the river several times, through Frosterley, Wolsingham, and Witton-le-Wear to Bishop Auckland.
Bishop Auckland to Durham
On the edge of Bishop Auckland the Wear passes below Auckland Park and Auckland Castle, the official residence of the Bishop of Durham and its deer park. A mile or so downstream from here, the Wear passes Binchester Roman Fort, Vinovia, having been crossed by Dere Street, the Roman road running from Eboracum (now York) to Coria (now Corbridge) close to Hadrian's Wall. From Bishop Auckland the River Wear meanders in a general northeasterly direction, demonstrating many fluvial features of a mature river, including wide valley walls, fertile flood plains and ox-bow lakes. Bridges over the river become more substantial, such as those at Sunderland Bridge (near Croxdale), and Shincliffe. At Sunderland Bridge the River Browney joins the River Wear.
When it reaches the city of Durham the River Wear passes through a deep, wooded gorge, from which several springs emerge, historically used as sources of potable water. A few coal seams are visible in the banks. Twisting sinuously in an incised meander, the river has cut deeply into the "Cathedral Sandstone" bedrock. The high ground (bluffs) enclosed by this meander is known as the Peninsula, forming a defensive enclosure, at whose heart lie Durham Castle and Durham Cathedral, and which developed around the Bailey into Durham city. That area is now a UNESCO World Heritage Site. Beneath Elvet Bridge are Brown's Boats (rowing boats for hire) and the mooring for the Prince Bishop, a pleasure cruiser.
In June each year, the Durham Regatta, which predates that at Henley, attracts rowing crews from around the region for races along the river's course through the city. Seven smaller regattas and head races are held throughout the rest of the year, attracting fewer competitors. There are 14 boathouses and 20 boat clubs based on the Wear in Durham.
Two weirs impede the flow of the river at Durham, both originally created for industrial activities. The Old Fulling Mill is now an archaeological museum. The second weir, beneath Milburngate Bridge, now includes a salmon leap and fish counter, monitoring sea trout and salmon, and is on the site of a former ford. Considering that 138,000 fish have been counted migrating upriver since 1994, it may not be surprising that a family of cormorants live on this weir, and can frequently be watched stretching their wings in an attempt to cool off after feeding.
Durham to Chester-le-Street
Between Durham City and Chester-le-Street, 6 miles (10 km) due north, the River Wear changes direction repeatedly. Several miles downstream, having passed the medieval site of Finchale Priory, a former chapel and later a satellite monastery dependent on the abbey church of Durham Cathedral, the river flows south-westwards; two miles further downstream it is flowing south-eastwards. The only road bridge over the Wear between Durham and Chester-le-Street is Cocken Bridge. As it passes Chester-le-Street, where the river is overlooked by Lumley Castle, its flood plain has been developed into The Riverside, the home ground of Durham County Cricket Club. Passing through the Lambton Estate (still owned by the Lambton family, and briefly a lion park during the 1970s), the river becomes tidal and navigable.
Chester-le-Street to Sunderland
On exiting the Lambton estate the river leaves County Durham and enters the City of Sunderland, specifically the southern/south-eastern edge of the new town of Washington. At Fatfield the river passes beneath Worm Hill, around which the Lambton Worm is reputed to have curled its tail.
Already the riverbanks show evidence of past industrialisation, with former collieries and chemical works. A little further downstream the river passes beneath the Victoria Viaduct (formally called the Victoria Bridge). Named after the newly crowned queen, the railway viaduct opened in 1838 and was the crowning achievement of the Leamside Line, which then carried what was to become the East Coast Main Line. A mile to the east is Penshaw Monument, a local iconic landmark. As the river leaves the environs of Washington, it forms the eastern boundary of the Washington Wildfowl Trust.
Having flowed beneath the A19 trunk road, the river enters the suburbs of Sunderland. The riverbanks show further evidence of past industrialisation, with former collieries, engineering works and dozens of shipyards. In their time, Wearside shipyards were among the most famous and productive in the world. The artist L. S. Lowry visited Sunderland repeatedly and painted pictures of the industrial landscape around the river. Three bridges cross the Wear in Sunderland: the Queen Alexandra Bridge to the west, and the Wearmouth rail and road bridges in the city centre.
On both banks at this point there are modern developments, some belonging to the University of Sunderland (St. Peter's Campus; Scotia Quay residences) and to the National Glass Centre. A riverside sculpture trail runs alongside this final section of its north bank. The St Peter's Riverside Sculpture Project was created by Colin Wilbourn, with crime novelist and ex-poet Chaz Brenchley. They worked closely with community groups, residents and schools.
As the river approaches the sea, the north bank (Roker) has a substantial residential development and marina. A dolphin nicknamed Freddie was a frequent visitor to the marina, attracting much local publicity. However, concern was expressed that acclimatising the dolphin to human presence might put it at risk from the propellers of marine craft. The south bank of the river is occupied by what remains of the Port of Sunderland, once thriving and now almost gone.
- List of crossings of the River Wear
- Rowing clubs on the River Wear
- List of rivers of England
- Harry Watts – multiple River Wear life-saver
Notes and References
- i.e. appear on the surface
- "Geology: Granite in the North Pennines". Retrieved 2008-01-25.
- Ordnance Survey website
- "Durham Regatta". Retrieved 2008-01-26.
- Durham College Rowing. "Boat Clubs in Durham". Retrieved 2008-12-28.
- "Lutheran Music". Retrieved 2008-07-12.[dead link]
- "The Lambton Worm". The Legends and Myths of Britain. Retrieved 2007-06-17.
- "Alice in Sunderland". chazbrenchley.co.uk. Retrieved 2007-06-17.
- "St Peter's Riverside Sculpture Project". chazbrenchley.co.uk. Retrieved 2007-06-17.
- Talbot, Bryan (2007). Alice in Sunderland: An Entertainment. London: Jonathan Cape. pp. 95–107. ISBN 0-224-08076-8.
- Natural Environment Research Council, Institute of Geological Sciences, 1971, "British Regional Geology: Northern England" Fourth Edition, HMSO, London.
- Johnson, G.A.L. & Hickling, G. (eds.), 1972, "Geology of Durham County", Transactions of the Natural History Society of Northumberland, Durham and Newcastle upon Tyne, Vol.41, No.1.
- 'Wear River', "Encyclopaedia Britannica", 17th Edition, 1990.
Sunderland Bridge over the River Wear near Durham City. The brown colouration of the water is due to moorland peat that has been washed into the river upstream during heavy rainfall.
Baths Bridge is a footbridge named after the now closed swimming baths at the southern end of the bridge. In warm summer weather, youths used to eschew the chlorinated water of the swimming pool, and throw themselves into the river from the centre of Baths Bridge, a feat that required courage as the river is shallow.
New Elvet Bridge over the River Wear in Durham City
Prebends Bridge over the River Wear in Durham City.
Milburngate Bridge (foreground) and Framwellgate Bridge (background) are respectively modern and ancient. The series of weirs is on the site of an ancient ford, and a modern fish-counter on one of the weirs allows the National Rivers Authority to count the fish (mostly trout and salmon) migrating upstream.
Hylton Viaduct carries the A19 over the River Wear. North and South Hylton are respectively on the north and south banks of the river; Washington lies upstream and Sunderland downstream. Many years ago, a ferry plied the river close by, and the name of the road from which this photograph was taken is called Ferryboat Lane. Washington's Nissan car factory is just beyond the viaduct, built on the site of the former Sunderland Airport.
The original Wearmouth Bridge, when it was first opened in 1796, was the largest single span bridge in the world, and only the second iron bridge after the one at Ironbridge in Shropshire. It was reconstructed in 1857 and again in 1927 with the addition of a large steel arch support structure to cope with the heavier volume of traffic. It is the last bridge the Wear flows under before it reaches the North Sea. |
Bodh Gaya, a village situated in the state of Bihar, India, is where Prince Siddhartha attained enlightenment beneath a peepal tree some 2,600 years ago and came to be known as Gautama Buddha, the Enlightened One. Revered as holy across all Buddhist countries in the world, it is the most important Buddhist pilgrimage site for followers globally.
Situated on the bank of the river Neranjana, the village was then known as Uruwela. King Asoka was the first to build a temple here, and today ‘it is one of the earliest Buddhist temples built entirely in brick, still standing in India, from the late Gupta period.’
What are the major attractions?
Bodhgaya attracts thousands of tourists and monks from around the world every year for prayer, study, chanting, vows and meditation, especially during the Kaal Chakra Mela.
The most sacred site in town is the “Bodhi tree” that flourishes inside the Mahabodhi Temple complex (a UNESCO World Heritage Site). It is the tree beneath which the Buddha attained enlightenment.
The people and culture of Bodh Gaya
The people of Bodhgaya are very pious, and most of them follow Buddhism as their religion; a few follow Hinduism. No matter what religion one follows, the Maha Bodhi temple is revered and respected by each and every native of Gaya.
For generations, people have been meditating, praying and surrendering themselves to the spiritual presence before the Bodhi tree in the temple, offering prayers or making wishes. Many people are known to visit Bodhgaya only for studying Buddhism.
Other monasteries to explore?
Because Bodhgaya is of global interest to many Buddhist countries, a number of monasteries and temples have been built here by foreign Buddhist communities in their national styles. A day exploring these is a great opportunity for tourists to go around Bodhgaya and visit the different monasteries built by countries such as Japan, China, Bhutan, Thailand, Sri Lanka, Vietnam, Myanmar and Nepal. There are monasteries built by the Sikkimese and the Tibetans too.
Each monastery has been built to reflect the Buddhist culture and architectural style of its home region. Together they show how far the religion of Buddhism has spread since it was first preached by Lord Buddha in and around the region of present-day Bihar. In the monasteries, meditation classes are held in the morning and evening, and many also house a guest house for the convenience of pilgrims. Some monasteries have prayer halls, a meditation hall and a library (which generally holds books in Vietnamese and English).
The ambience is an amalgamation of monastic tranquillity, a Tibetan market with small-town hustle, and the recitation of “Buddham Saranam Gacchami, Dhammam Saranam Gacchami, Sangham Saranam Gacchami”. This mantra invokes what are called the three gems of Buddhism.
About the Author: Ritika Kumari is a Bengaluru-based fashion communication student from Gaya, Bihar. She is a zoology honours graduate from Magadh University who decided to pursue a career in design at the National Institute of Fashion Technology, Bengaluru. She loves Indian traditions and culture, is a big foodie who loves to cook, a chatterbox, a passionate traveller and an Indian daily-soaps addict.
What is the place of utilitarianism in the broader libertarian tradition?
As its name implies, utilitarianism takes utility as its cynosure. So why did John Stuart Mill, one of utilitarianism’s principal exponents, say that it could equivalently be called “Happiness theory?” Utility, Mill says, must not be regarded in the colloquial sense of something “opposed to pleasure.” Quite to the contrary, Mill means to denominate utility just in terms of gains in pleasure and decreases in pain. Mill thus refused to accept the notion that there is a necessary contrariety in the relationship between “the useful” and “the agreeable or the ornamental.” Rather he made these practically inseparable and sought to develop an idea of pleasure that subsumed the higher pleasures.
Probably the most gifted and certainly the most outstanding of Jeremy Bentham’s philosophical disciples, Mill stands out as a key figure in nineteenth century liberalism. It is in Mill that we find the most fully developed and best articulated statement of utilitarianism, and it is Mill’s utilitarianism that has the broadest implications for contemporary libertarian thought. A certain ambivalence has always characterized the relationship between libertarianism as a distinct current and the utilitarianism of Bentham and Mill—and not without reason. For just as the utilitarian philosophy was capable of producing classics of liberal and libertarian thought such as Mill’s On Liberty, it has also lent itself to some rather unlibertarian results.
The British historian Maurice Cowling indeed went as far as arguing that Mill’s thought features “a carefully disguised intolerance,” quite unlike “the libertarianism for which Mill’s doctrine is sometimes mistaken.” Even if Cowling had an ideological ax to grind, Mill’s work nevertheless shows a decided tendency to apply the principle of liberty inconsistently and unevenly. Because utility—not liberty—was of paramount importance in Mill’s thought, the individual and her freedoms could fall by the wayside before putative utilitarian justifications for intervention and control. In all attempts at utilitarian calculus, including Mill’s, there is a certain conceit, a belief that we mere mortals have the ability to previse the potential quantities of pleasure and pain that will issue from a given course of action. Under the sway of this conceit, Mill mistakenly conflated the greatest happiness in abstraction with the same notion as it manifested in his own mind and judgment. This error led to political forms that would sanction levels of coercive social engineering and constraint which could not be called libertarian. The eminent political theory scholar, the late George W. Carey, maintained that the John Stuart Mill revisionists, those seeking to emend the hitherto dominant narrative of Mill as champion of civil libertarianism and diversity, would certainly prevail ultimately. Much of this revisionism addresses Mill’s apparent acceptance of “the Religion of Humanity,” an idea inherited from the French positivist philosopher Auguste Comte, whose influence on Mill, many revisionists argue, has been insufficiently explored. Comte’s positivism builds a comprehensive and all‐embracing vision of society grounded in strong empiricist claims; for Comte, given advancements in human knowledge through observation (and using inapt analogies to Newtonian physics) it was entirely possible to construct the next and indeed final development in human society. In spite of his many critiques of Comte’s ideas, Mill’s engagements with them left an indelible mark on his life and work.
Yet Mill’s Harm Principle functions as one of the simplest and truest statements of the most fundamental libertarian proposition. If one’s actions avoid causing harm to anyone else, then those actions are permissible and must not be the subject of “compulsion or control.” Thus, an individual’s “own good, either physical or moral, is not a sufficient warrant” to impose such coercion. Arguing that “leaving people to themselves is always better, cæteris paribus, than controlling them,” Mill’s political philosophy, whatever its flaws, is a centerpiece of the liberal tradition.
An early champion of equality between women and men, Mill reasoned insistently that the “legal subordination” of the former to the latter was inherently wrong and based purely on untested conjecture on the nature and abilities of women. And notwithstanding his own pretensions to knowing what edicts and controls would produce the greatest happiness, Mill upheld freedom of speech and of opinion with the argument that none of us is “an infallible judge of opinions”—that, to discover the best results and solutions, it is necessary to allow free exchange and traffic in ideas. This is Mill at his most liberal, and therefore most libertarian. The willingness to challenge accepted wisdom, the distaste for appeals to tradition or custom, evokes the originator of Mill’s utilitarian philosophy, Jeremy Bentham, who secured a place for himself in the libertarian tradition alongside Mill.
Published in the same year in which the Declaration of Independence was adopted, Bentham’s A Fragment On Government is his attempt to expound the “fundamental axiom” that “it is the greatest happiness of the greatest number that is the measure of right and wrong.” Trained as a lawyer following his father, the young Bentham studied under the tutelage of the great jurist William Blackstone, a criticism of whom was Bentham’s initial motivation. That criticism took the form of Bentham’s Fragment, which law professor Richard Posner says “made two fundamental criticisms of [Blackstone’s] the Commentaries.” Most notably, Bentham criticized Blackstone as, in the words of F.C. Montague, “a dogmatist and sworn enemy of all reformation,” insufficiently attuned to the defects and shortcomings of the English law. For Bentham, “reformation in the moral” sphere was analogous to “discoveries in the natural world,” Blackstone’s Commentaries being not just an impediment to the former, but its “determined and persevering enemy.” Taking up David Hume’s critical appraisal of social contract theory, Bentham positions himself as a radical to Blackstone’s conservative, dismissing “the Original Contract” as one of many fictions to which, he argues, the Commentaries offer support.
Bentham argues that, possessed by the spirits of “ornament” and “authority,” Blackstone has submitted a work that appeals not to our reason, but rather attempts to subdue it with “theological flourish.” Such genuflection before the “footstool of Authority” subverts law’s proper basis and thus yields a paradigm in “which the utility of [a law] has no imaginable connection” with the circumstances under which it was passed. This, primarily, is the source of Bentham’s critical opprobrium. He would cast aside the trappings of tradition, reputation, and authority, which, he contends, are more likely to illude than to illuminate. Chiefly indebted to Hume in his concern with utility, Bentham argues that it is this concern—“the happiness of his people”—that ought to guide the sovereign, that submission to political authority is properly vindicated only on this basis, if at all.
The theory of the social contract, then, is incapable of providing an acceptable account of legitimate government power; for as Hume wrote in A Treatise of Human Nature, “were you to ask the far greatest part of the nation, whether they had ever consented to the authority of their rulers, … they wou’d be inclin’d to think very strangely of you.” It was thus clear enough “that the affair depended not on their consent,” that no one could rightly be obliged by a contract or promise of which she is unaware. Whether or not social contract theory was “overthrown” or “demolished” in the Treatise as completely as Bentham imagined, he was careful to admit that the interests of citizens would nevertheless hold the polity together in the absence of such a theory.
James Mill, student of Bentham and father of John Stuart Mill, is another important utilitarian liberal. In his Essay on Government, James Mill recommences the argument for government as simply a utilitarian mechanism meant to “increase to the utmost the pleasures, and diminish to the utmost the pains, which men derive from one another.” Mill scholars such as William Thomas have suggested that while Bentham’s philosophical legacy is more esteemed in the academy, Mill’s exoteric, accessible tracts were more influential on practical politics and legislative reforms such as the Reform Act of 1832. To achieve maximum happiness, Mill argues, government must be established for the primary purpose of preventing the stronger from living at the expense of the weaker, from instituting slavery as a way to obtain “the means of subsistence” without laboring. Already, then, Mill’s Essay shows itself as an important one for libertarians. Clearly, Mill argues, if the ultimate goal is to engineer a social and political system that will effectuate the most pleasure and least pain, then reducing the greatest number to slavery cannot be the solution. After considering and dismissing various possible modes of government—Democratical, Aristocratical, and Monarchical—Mill prescribes a representative government, carefully explaining the various devices for ensuring that the representatives share the interests of the whole people. For Mill, such a system could solve the paradox inherent in government, that is, that the few entrusted with the coercive power to protect us may use that power “to take the objects of desire from the members of the community.” Mill wanted to carefully limit government to prevent this kind of elite predation and avoid its disastrous economic consequences.
Like his friend David Ricardo, James Mill is among the pioneering fathers of classical economics, his many unique contributions to which are often overshadowed both by his general association with Benthamite utilitarianism and by the legacy of John Stuart Mill. In Commerce Defended, the elder Mill sets forth a paradigmatic apology for economic liberalism, offering one of the first elucidations of the economic principle that later came to be called “Say’s Law.” Also known as the Law of Markets, that principle holds that “production of commodities creates, and is the one and universal cause which creates a market for the commodities produced.” While Murray Rothbard asserts, in Volume II of An Austrian Perspective on the History of Economic Thought, that Mill “appropriated the law” from Jean‐Baptiste Say, scholars such as William O. Thweatt point out that Mill’s “full and balanced” presentation of the Law of Markets in 1808 actually antedates Say’s work on the principle. Mill’s Commerce Defended, moreover, easily confuted the economic fallacies and errors of the protectionists and mercantilists of his day, arguments like those of William Spence that commerce itself is relatively unimportant in the creation of wealth and imports can never effect a gain for the importing country. Mill deftly exposed the problems with such arguments, championing free trade and spending years of his life popularizing the ideas of Smith and Ricardo.
Though the utilitarians leave a mixed and often contradictory legacy to libertarians, their contributions are nonetheless an important element of classical liberalism and classical economics. Understanding the utilitarians’ arguments on rights and on the relationship between government and pleasure/pain outcomes offers today’s libertarian a valuable foundation for approaching a contemporary public policy world obsessed with studies, empirical data, and the testimony of experts. Whatever our thoughts about utilitarian arguments, they are far more common in today’s legislative debates (and political debates more generally) than, for instance, appeals to natural rights. |
Talk:Socialism in India
Bose, a source i've used in Forward Bloc and U. Muthuramalingam Thevar articles, argues that 1955 is the decisive year for Congress to declare itself socialist. I don't have the book with me at the moment, but will check it up later. --Soman (talk) 20:30, 4 January 2008 (UTC)
(1964) The CPI, on the other hand, launched the idea of a United Front together with the Indian National Congress.
Regarding the political situation in the colonized world, the second congress of the Communist International stipulated that a united front should be formed between the proletariat, peasantry and national bourgeoisie in the colonial countries. Amongst the twenty-one conditions drafted by Lenin ahead of the congress was the 11th thesis, which stipulated that all communist parties must support the bourgeois-democratic liberation movements in the colonies. Notably, some of the delegates opposed the idea of alliance with the bourgeoisie, and preferred support to communist movements of these countries instead. Their criticism was shared by the Indian revolutionary M.N. Roy, who attended as a delegate of the Communist Party of Mexico. The congress removed the term ‘bourgeois-democratic' in what became the 8th condition.
The sixth congress of the Communist International met in 1928. In 1927 the Kuomintang had turned on the Chinese communists, which led to a review of the policy on forming alliances with the national bourgeoisie in the colonial countries. The congress did, however, make a differentiation between the character of the Chinese Kuomintang and the Indian Swarajist Party, considering the latter an unreliable ally but not a direct enemy. The congress called on the Indian communists to utilize the contradictions between the national bourgeoisie and the British imperialists.
The first article in an Indian publication (in English) to mention the names of Marx and Engels appeared in Modern Review in March 1912. The short biographical article, titled Karl Marx – a modern Rishi, was written by the Germany-based Indian revolutionary Lala Har Dayal.
The first biography of Karl Marx in an Indian language was written by R. Rama Krishna Pillai in 1914.
Marxism made a major impact in Indian media at the time of the Russian Revolution. Of particular interest to many Indian papers and magazines was the Bolshevik policy of the right to self-determination of all nations. Bipin Chandra Pal and Bal Gangadhar Tilak were amongst the prominent Indians who expressed their admiration of Lenin and the new rulers in Russia. Abdul Sattar Khairi and Abdul Zabbar Khairi went to Moscow immediately on hearing about the revolution. In Moscow, they met Lenin and conveyed their greetings to him. The Russian Revolution also had an impact on émigré Indian revolutionaries, such as the Ghadar Party in North America.
The Khilafat movement had a significant impact on the emergence of early Indian communism. Many Indian Muslims left India to join the defence of the Caliphate. Several of them became communists whilst in Soviet territory. Even some Hindus joined the Muslim muhajirs in their travels to the Soviet areas.
One of the Indians impressed with developments in Russia was S. A. Dange in Bombay. In 1921, Dange published a pamphlet titled Gandhi Vs. Lenin, a comparative study of the approaches of the two leaders, with Lenin coming out as the better of the two. Together with Ranchoddas Bhavan Lotvala, a local mill-owner, a library of Marxist literature was set up and the publishing of translations of Marxist classics began. In 1922, with Lotvala's help, Dange launched the English weekly Socialist, the first Indian Marxist journal.
The First World War was accompanied by a rapid increase of industry in India, resulting in the growth of an industrial proletariat. At the same time, prices of essential commodities increased. These factors contributed to the build-up of the Indian trade union movement. Unions were formed in urban centres across India, and strikes were organized. In 1920, the All India Trade Union Congress was founded.
On May 1, 1923 the Labour Kisan Party of Hindustan was founded in Madras, by Singaravelu Chettiar. The LKPH organized the first May Day celebration in India, and this was also the first time the red flag was used in India.
On December 25, 1925 a communist conference was organized in Kanpur. Colonial authorities estimated that 500 persons took part in the conference. The conference was convened by a man called Satyabhakta, of whom little is known. Satyabhakta is said to have argued for a ‘national communism’ and against subordination under the Comintern. Being outvoted by the other delegates, Satyabhakta left the conference venue in protest. The conference adopted the name ‘Communist Party of India’. Groups such as the LKPH dissolved into the unified CPI. The émigré CPI, which probably had little organic character anyway, was effectively replaced by the organization now operating inside India.
The Communist Party of India was founded in Tashkent on October 17, 1920, soon after the Second Congress of the Communist International. The founding members of the party were M.N. Roy, Evelina Trench Roy (Roy’s wife), Abani Mukherji, Rosa Fitingof (Abani’s wife), Mohammad Ali (Ahmed Hasan), Mohammad Shafiq Siddiqui and M.P.B.T. Acharya.
The CPI began efforts to build a party organisation inside India. Roy made contacts with Anushilan and Jugantar groups in Bengal. Small communist groups were formed in Bengal (led by Muzaffar Ahmed), Bombay (led by S.A. Dange), Madras (led by Singaravelu Chettiar), United Provinces (led by Shaukat Usmani) and Punjab (led by Ghulam Hussain). However, only Usmani became a CPI party member. —Preceding unsigned comment added by Soman (talk • contribs) 22:04, 4 January 2008 (UTC)
Bhambhri argues that “…Mulayam Yadav has casteised socialism by equating it with the interest of upwardly mobile backward caste peasantry whose interests he defended and promoted as UP Chief Minister.”
Between 1921 and 1924 there were four conspiracy trials against the communist movement: the First Peshawar Conspiracy Case, the Second Peshawar Conspiracy Case, the Moscow Conspiracy Case and the Cawnpore Bolshevik Conspiracy Case. In the first three cases, Russian-trained muhajir communists were put on trial. However, the Cawnpore trial had more political impact. On March 17, 1924, M.N. Roy, S.A. Dange, Muzaffar Ahmed, Nalini Gupta, Shaukat Usmani, Singaravelu Chettiar, Ghulam Hussain and R.C. Sharma were charged in the Cawnpore (now spelt Kanpur) Bolshevik Conspiracy case. The specific charge was that they as communists were seeking "to deprive the King Emperor of his sovereignty of British India, by complete separation of India from imperialistic Britain by a violent revolution." Newspapers daily splashed sensational accounts of communist plans, and people for the first time learned on a large scale about communism, its doctrines and the aims of the Communist International in India.
Singaravelu Chettiar was released on account of illness. M.N. Roy was in Germany and R.C. Sharma in French Pondicherry, and therefore they could not be arrested. Ghulam Hussain confessed that he had received money from the Russians in Kabul and was pardoned. Muzaffar Ahmed, Nalini Gupta, Shaukat Usmani and Dange were sentenced to various terms of imprisonment. This case was responsible for actively introducing communism to a larger Indian audience. Dange was released from prison in 1925.
- It looks like good stuff. Why don't you put it straight in the article?--Conjoiner (talk) 23:04, 4 January 2008 (UTC)
Copy-pasted from RSP page, but useful for this one:
Development of Anushilan Marxism
A major section of the Anushilan movement had been attracted to Marxism during the 1930s, many of them studying marxist-leninist literature whilst serving long jail sentences. A minority section broke away from the Anushilan movement and joined the Communist Consolidation, and later the Communist Party of India. The majority of the Anushilan marxists, however, whilst having adopted marxist-leninist thinking, felt hesitant about joining the Communist Party.
The Anushilanites distrusted the political lines formulated by the Communist International. They criticized the line adopted at the 6th Comintern congress of 1928 as 'ultra-left sectarian'. The Colonial theses of the 6th Comintern congress called upon the communists to combat the 'national-reformist leaders' and to 'unmask the national reformism of the Indian National Congress and oppose all phrases of the Swarajists, Gandhists, etc. about passive resistance'. Moreover, when Indian leftwing elements formed the Congress Socialist Party in 1934, the CPI branded it as Social Fascist. When the Comintern policy swung towards Popular Frontism at its 1935 congress (by which time the majority of the Anushilan movement had adopted a marxist-leninist approach), the Anushilan marxists questioned this shift as a betrayal of the internationalist character of the Comintern and felt that the International had been reduced to an agency of Soviet foreign policy. Moreover, the Anushilan marxists opposed the notion of 'Socialism in One Country'.
However, although sharing some critiques against the leadership of Joseph Stalin and the Comintern, the Anushilan marxists did not embrace Trotskyism. Buddhadeva Bhattacharya writes in 'Origins of the RSP' that the "rejection of stalinism did not automatically mean for them [the Anushilan Samiti] acceptance of trotskyism. Incidentally, the leninist conception of international socialist revolution is different from Trotsky's theory of Permanent Revolution which deduces the necessity of world revolution primarily from the impossibility of the numerically inferior proletariat in a semi-feudal and semi-capitalist peasant country like Russia holding power for any length of time and successfully undertaking the task of socialist construction in hand without the proletariat of the advanced countries outside the Soviet Union coming to power through an extension of socialist revolution in these countries and coming to the aid of the proletariat of the U.S.S.R.
Anushilan marxists adhered to the marxist-leninist theory of 'Permanent' or 'Continuous' Revolution. '...it is our interest and task to make the revolution permanent' declared Karl Marx as early as 1850 in course of his famous address to the Communist League, 'until all more or less possessing classes have been forced out of their position of dominance, the proletariat has conquered state power, and the association of proletarians, not only in one country but in all dominant countries of the world, has advanced so far that competition among the proletarians of these countries has ceased and that at least the decisive productive forces are concentrated in the hands of the proletarians.'"
By the close of 1936 the Anushilan marxists at the Deoli Detention Jail in Rajputana drafted a document formulating their political line. This document was then distributed amongst the Anushilan marxists at other jails throughout the country. When they were collectively released in 1938 the Anushilan marxists adopted this document, The Thesis and Platform of Action of the Revolutionary Socialist Party of India (Marxist-Leninist): What Revolutionary Socialism Stands for, as their political programme in September that year.
At this point the Anushilan marxists, recently released from long jail sentences, stood at a crossroads. Either they would continue as a separate political entity or they would join an existing political platform. They felt that they lacked the resources to build a separate political party. Joining the CPI was out of the question, due to sharp differences in political analysis. Neither could they reconcile their differences with the Royists. In the end, the Congress Socialist Party appeared to be the sole platform acceptable to the Anushilan marxists. The CSP had adopted Marxism in 1936, and at their third conference in Faizpur they had formulated a thesis that directed the party to work to transform the Indian National Congress into an anti-imperialist front.
During the summer of 1938 a meeting took place between Jayaprakash Narayan (leader of the CSP), Jogesh Chandra Chatterji, Tridib Kumar Chaudhuri and Keshav Prasad Sharma. The Anushilan marxists then discussed the issue with Acharya Narendra Deva, a founder of the CSP and former Anushilan militant. The Anushilan marxists decided to join the CSP, but to keep a separate identity within the party.
In the CSP
The great majority of the Anushilan Samiti had joined the CSP, not only the Marxist sector. The non-Marxists (who constituted about a half of the membership of the Samiti), although not ideologically attracted to the CSP, felt loyalty towards the Marxist sector. Moreover, around 25% of the HSRA joined the CSP. This group was led by Jogesh Chandra Chatterji.
At the end of 1938 the Anushilan marxists began publishing The Socialist from Calcutta. The editor of the journal was Satish Sarkar. Although the editorial board included several senior CSP leaders like Acharya Narendra Deva, it was essentially an organ of the Anushilan marxist tendency. Only a handful of issues were published.
The Anushilan marxists were soon to be disappointed by developments inside the CSP. The party, at the time the Anushilan marxists joined it, was not a homogenous entity. There was the Marxist trend led by J.P. Narayan and Narendra Deva, the Fabian socialist trend led by Minoo Masani and Asoka Mehta, and a Gandhian socialist trend led by Ram Manohar Lohia and Achyut Patwardhan. To the Anushilan marxists, differences emerged between the ideological stands of the party and its politics in practice. These differences surfaced at the 1939 annual session of the Indian National Congress at Tripuri. Ahead of the session there were fierce political differences between the leftwing Congress president, Subhas Chandra Bose, and the section led by Gandhi. As the risk of world war loomed, Bose wanted to utilize the weakening of the British empire for the sake of Indian independence. Bose was re-elected as the Congress president, defeating the Gandhian candidate. But at the same session a proposal was brought forward by G.B. Pant which gave Gandhi a veto over the formation of the Congress Working Committee. In the Subjects Committee, the CSP opposed the resolution along with other leftwing sectors. But when the resolution was brought before the open session of the Congress, the CSP leaders remained neutral. According to Subhas Chandra Bose himself, the Pant resolution would have been defeated if the CSP had opposed it in the open session. J.P. Narayan stated that although the CSP was essentially supporting Bose's leadership, they were not willing to risk the unity of the Congress. Soon after the Tripuri session the CSP organised a conference in Delhi, in which fierce criticism was directed against their 'betrayal' at Tripuri.
The Anushilan marxists had clearly supported Bose both in the presidential election as well by opposing the Pant resolution. Jogesh Chandra Chatterji renounced his CSP membership in protest against the action by the party leadership.
Soon after the Tripuri session, Bose resigned as Congress president and formed the Forward Bloc. The Forward Bloc was intended to function as a unifying force for all leftwing elements. The Forward Bloc held its first conference on June 22-23, 1939, and at the same time a Left Consolidation Committee was formed, consisting of the Forward Bloc, the CPI, the CSP, the Kisan Sabha, the League of Radical Congressmen, the Labour Party and the Anushilan marxists. Bose wanted the Anushilan marxists to join his Forward Bloc. But the Anushilan marxists, although supporting Bose's anti-imperialist militancy, considered that Bose's movement was nationalistic and too eclectic. The Anushilan marxists shared Bose's view that the relative weakness of the British empire during the war should be utilised by the independence movement. At this moment, in October 1939, J.P. Narayan tried to stretch out an olive branch to the Anushilan marxists. He proposed the formation of a 'War Council' consisting of himself, Pratul Ganguly, Jogesh Chandra Chatterjee and Acharya Narendra Deva. But a few days later, at a session of the All India Congress Committee, J.P. Narayan and the other CSP leaders pledged not to start any movements parallel to those initiated by Gandhi.
Foundation of RSPI(ML)
The Left Consolidation Committee soon fell to pieces, as the CPI, the CSP and the Royists deserted it. Bose assembled the Anti-Compromise Conference in Ramgarh, Bihar (now in Jharkhand). The Forward Bloc, the Anushilan marxists (still members of the CSP at the time), the Labour Party and the Kisan Sabha attended the conference. The conference spelled out that no compromise with Britain should be made on behalf of the Indian independence movement. At that conference the Anushilan marxists assembled to launch their own party, the Revolutionary Socialist Party of India (Marxist-Leninist), severing all links with the CSP. The first general secretary of the party was Jogesh Chandra Chatterji.
The first War Thesis of the RSP in 1940 called for "turning the imperialist war into civil war". But after the attack by Germany on the Soviet Union, the line of the party was clarified. The RSP held that the socialist Soviet Union had to be defended, but that the best way for Indian revolutionaries to do so was to overthrow the colonial rule in their own country. The RSP was thus in sharp opposition to groups like the Communist Party of India and the Royist RDP, who held that antifascists had to support the Allied war effort.
L. K. Advani on the Avadi resolution and socialism, in 2005: "The first one, in hindsight, was a case of what may be termed as ‘bad economics’.
The second, incontrovertibly, is the worst case of ‘bad politics’ in independent India till date.
At the Avadi session of the AICC, the ruling party of the day adopted a resolution that committed India to follow the path of “socialistic pattern of society”, with the public sector occupying the “commanding heights of the economy”. The influence of the foreign mindset on this resolution apart, what Avadi did to stifle the productive potential of India in the decades that followed is now a widely recognized fact." --Soman (talk) 15:21, 5 January 2008 (UTC)
2004 Rajya Sabha discussion
Socialist movement in India, O.P. Ralhan
In November 1929 Jayaprakash Narayan (JP), a young Bihari who during his seven years of studies in the USA had been won over to socialism, returned to India. JP met with Gandhi and Nehru, and exchanged political views with them. Narayan was recruited to the Congress, and attended the Lahore Congress session, at which he came into contact with several national leaders. Through Nehru, Narayan was appointed as the head of the Department of Labour Research at the Allahabad office of the All India Congress Committee. During the Civil Disobedience Movement, Narayan took charge as Acting General Secretary of the Congress. He too was soon arrested and jailed by the colonial authorities. In Nasik jail, he made close contacts with other leading members of the freedom movement who were also inclined towards socialism, personalities like Masani, Lohia and Mehta. At this point, Narayan was a convinced Marxist and saw the Soviet collectivization of agriculture as a model for India to follow. However, he could not reconcile himself with the denunciation of Gandhi by the CPI. Narayan and Masani were released from jail in April 1934. Narayan convened a meeting in Patna on May 17, 1934, which founded the Bihar Congress Socialist Party. Narayan became general secretary of the party and Acharya Narendra Deva became president. The Patna meeting gave a call for a socialist conference to be held in connection with the Congress Annual Conference. At this conference, held in Bombay on October 22-23, 1934, they formed a new all-India party, the Congress Socialist Party. Narayan became general secretary of the party, and Masani joint secretary. The conference venue was decorated with Congress flags and a portrait of Karl Marx. In the new party the greeting 'comrade' was used. Masani mobilised the party in Bombay, whereas Kamaladevi Chattopadhyay and Purushottam Trikamdas organised the party in other parts of Maharashtra. The constitution of the CSP stipulated that the members of the CSP were the members of the Provincial Congress Socialist Parties and that they were all required to be members of the Indian National Congress. Members of communal organizations, or of political organizations whose goals were incompatible with those of the CSP, were barred from CSP membership. Narayan organized the CSP relief work in Kutch in 1939. On the occasion of the 1940 Ramgarh Congress Conference the CPI released a declaration called Proletarian Path, which sought to utilize the weakened state of the British Empire in the time of war and gave a call for a general strike, no-tax and no-rent policies, and mobilisation for an armed revolutionary uprising. The National Executive of the CSP, assembled at Ramgarh, took the decision to expel all communists from the CSP.
The Communist Party in Kerala, EMS
In July 1937, the first Kerala unit of the CPI was founded at a clandestine meeting in Calicut. Five persons were present at the meeting: E.M.S. Namboodiripad, Krishna Pillai, N.C. Sekhar, K. Damodaran and S.V. Ghate. The first four were members of the CSP in Kerala. The latter, Ghate, was a CPI Central Committee member, who had arrived from Madras.
Contacts between the CSP in Kerala and the CPI began in 1935, when P. Sundarayya (CC member of CPI, based in Madras at the time) met with EMS and Krishna Pillai. Sundarayya and Ghate visited Kerala at several times and met with the CSP leaders there. The contacts were facilitated through the national meetings of the Congress, CSP and All India Kisan Sabha.
As of the mid-1930s, the main centres of activity of the CPI were Bombay, Calcutta and Punjab. The party began extending its activities to Madras as well. A group of Andhra and Tamil students, amongst them P. Sundarayya, were recruited to the CPI by Amir Hyder Khan.
In 1936-1937, the cooperation between socialists and communists reached its peak. At the 2nd congress of the CSP, held in Meerut in January 1936, a thesis was adopted which declared that there was a need to build 'a united Indian Socialist Party based on Marxism-Leninism'. At the 3rd CSP congress, held in Faizpur, several communists were included in the CSP National Executive Committee.
Copied from the non-functional article Indian left
Major left parties:
- All India Forward Bloc
- Communist Party of India
- Communist Party of India (Marxist)
- Revolutionary Socialist Party
CPI(M) splinter groups:
- BTR-EMS-AKG Janakeeya Samskarika Vedi
- Communist Marxist Party
- Communist Party of Revolutionary Marxists
- Janathipathiya Samrakshana Samithy
- Janganotantrik Morcha
- Lok Sangharsh Morcha
- Madhya Pradesh Kisan Mazdoor Adivasi Kranti Dal
- Marxist Communist Party of India
- Marxist Coordination Committee
- Orissa Communist Party
- Paschimbanga Ganatantrik Manch
- Party of Democratic Socialism
CPI splinter groups:
- Krantikari Samyavadi Party
- Lal Nishan Party
- Lal Nishan Party (Leninvadi)
- Rashtravadi Communist Party
- Revolutionary Communist Party of India
Forward Bloc splinter groups/Netajist groups:
- All India Forward Bloc (Subhasist)
- All India Netaji Revolutionary Party
- Democratic Forward Bloc
- Forward Bloc (Socialist)
- Marxist Forward Bloc
RSP splinter groups:
- All India Coordination Committee of Communist Revolutionaries
- Andhra Pradesh Coordination Committee of Communist Revolutionaries
- Centre of Communist Revolutionaries of India
- Communist Party of India (Marxist-Leninist)
- Communist Party of India (Marxist-Leninist) (Kanu Sanyal)
- Communist Party of India (Marxist-Leninist) (Mahadev Mukherjee)
- Communist Party of India (Marxist-Leninist) Central Team
- Communist Party of India (Marxist-Leninist) Janashakti
- Communist Party of India (Marxist-Leninist) Liberation
- Communist Party of India (Marxist-Leninist) Naxalbari
- Communist Party of India (Marxist-Leninist) New Democracy
- Communist Party of India (Marxist-Leninist) People's War
- Communist Party of India (Marxist-Leninist) Red Flag
- Communist Party of Indian Union (Marxist-Leninist)
- Communist Party of United States of India
- Communist Revolutionary League of India
- Marxist-Leninist Committee
- Provisional Central Committee, Communist Party of India (Marxist-Leninist)
- Revolutionary Communist Centre of India (Maoist)
- Unity Centre of Communist Revolutionaries of India (Marxist-Leninist)
- Biplobi Bangla Congress
- Bolshevik-Leninist Party of India, Ceylon and Burma
- Chhattisgarh Mukti Morcha
- Democratic Socialist Party (Prabodh Chandra)
- Kunabi Sena
- Loktantrik Samajwadi Party
- Gana Abhiyan Orissa
- Peasants and Workers Party of India
- Radical Democratic Party
- West Bengal Socialist Party
- Left Democratic Front
- Left Front
- United Socialist Organization —Preceding unsigned comment added by Soman (talk • contribs) 12:51, 21 January 2008 (UTC)
Marxism in India into Socialism in India. Marxism in India is within the scope of the Socialism in India article. The article is not really about Indian interpretations on Marxist theory, but about the contemporary left movement in India. Thus they can be merged. --Soman (talk) 12:48, 21 January 2008 (UTC) —copied from WP:Proposed mergers#January 2008 Flatscan (talk) 02:33, 6 April 2009 (UTC)
- M.V.S. Koteswara Rao. Communist Parties and United Front - Experience in Kerala and West Bengal. Hyderabad: Prajasakti Book House, 2003. p. 12-13
- M.V.S. Koteswara Rao. Communist Parties and United Front - Experience in Kerala and West Bengal. Hyderabad: Prajasakti Book House, 2003. p. 48, 84-85
- M.V.S. Koteswara Rao. Communist Parties and United Front - Experience in Kerala and West Bengal. Hyderabad: Prajasakti Book House, 2003. p. 47-48
- M.V.S. Koteswara Rao. Communist Parties and United Front - Experience in Kerala and West Bengal. Hyderabad: Prajasakti Book House, 2003. p. 82, 103
- M.V.S. Koteswara Rao. Communist Parties and United Front - Experience in Kerala and West Bengal. Hyderabad: Prajasakti Book House, 2003. p. 82
- M.V.S. Koteswara Rao. Communist Parties and United Front - Experience in Kerala and West Bengal. Hyderabad: Prajasakti Book House, 2003. p. 103
- M.V.S. Koteswara Rao. Communist Parties and United Front - Experience in Kerala and West Bengal. Hyderabad: Prajasakti Book House, 2003. p. 83
- Riepe, Dale. Marxism in India in Parsons, Howard Lee and Sommerville, John (ed.) Marxism, Revolution and Peace. Amsterdam: John Benjamins Publishing Company, 1977. p. 41.
- Sen, Mohit. The Dange Centenary in Banerjee, Gopal (ed.) S.A. Dange - A Fruitful Life. Kolkata: Progressive Publishers, 2002. p. 43.
- M.V.S. Koteswara Rao. Communist Parties and United Front - Experience in Kerala and West Bengal. Hyderabad: Prajasakti Book House, 2003. p. 83-84
- Satyabhakta then formed a party called National Communist Party, which lasted until 1927.
- M.V.S. Koteswara Rao. Communist Parties and United Front - Experience in Kerala and West Bengal. Hyderabad: Prajasakti Book House, 2003. p. 92-93
- M.V.S. Koteswara Rao. Communist Parties and United Front - Experience in Kerala and West Bengal. Hyderabad: Prajasakti Book House, 2003. p. 88-89
- Ganguly, Basudev. S.A. Dange - A Living Presence at the Centenary Year in Banerjee, Gopal (ed.) S.A. Dange - A Fruitful Life. Kolkata: Progressive Publishers, 2002. p. 63.
- M.V.S. Koteswara Rao. Communist Parties and United Front - Experience in Kerala and West Bengal. Hyderabad: Prajasakti Book House, 2003. p. 89
- C.P. Bhambhri, p. 197-198
- Ralhan, O.P. (ed.) Encyclopedia of Political Parties New Delhi: Anmol Publications p. 336, Rao. p. 89-91
- Saha, Murari Mohan (ed.), Documents of the Revolutionary Socialist Party: Volume One 1938-1947. Agartala: Lokayata Chetana Bikash Society, 2001. p. 20-21
- Saha, Murari Mohan (ed.), Documents of the Revolutionary Socialist Party: Volume One 1938-1947. Agartala: Lokayata Chetana Bikash Society, 2001. p. 21-25
- Saha, Murari Mohan (ed.), Documents of the Revolutionary Socialist Party: Volume One 1938-1947. Agartala: Lokayata Chetana Bikash Society, 2001. p. 28
- In Saha, Murari Mohan (ed.), Documents of the Revolutionary Socialist Party: Volume One 1938-1947. Agartala: Lokayata Chetana Bikash Society, 2001. p. 34
- Saha, Murari Mohan (ed.), Documents of the Revolutionary Socialist Party: Volume One 1938-1947. Agartala: Lokayata Chetana Bikash Society, 2001. p. 29
- Saha, Murari Mohan (ed.), Documents of the Revolutionary Socialist Party: Volume One 1938-1947. Agartala: Lokayata Chetana Bikash Society, 2001. p. 35-37
- Saha, Murari Mohan (ed.), Documents of the Revolutionary Socialist Party: Volume One 1938-1947. Agartala: Lokayata Chetana Bikash Society, 2001. p. 37, 52
- Saha, Murari Mohan (ed.), Documents of the Revolutionary Socialist Party: Volume One 1938-1947. Agartala: Lokayata Chetana Bikash Society, 2001. p. 38-42
- Saha, Murari Mohan (ed.), Documents of the Revolutionary Socialist Party: Volume One 1938-1947. Agartala: Lokayata Chetana Bikash Society, 2001. p. 43-45
- Saha, Murari Mohan (ed.), Documents of the Revolutionary Socialist Party: Volume One 1938-1947. Agartala: Lokayata Chetana Bikash Society, 2001. p. 44-46
- Saha, Murari Mohan (ed.), Documents of the Revolutionary Socialist Party: Volume One 1938-1947. Agartala: Lokayata Chetana Bikash Society, 2001. p. 46-47
- Ralhan, O.P. (ed.). Encyclopedia of Political Parties - India - Pakistan - Bangladesh - National -Regional - Local. Vol. 24. Socialist Movement in India. New Dehli: Anmol Publications, 1997. p. 32
- p. 33
- p. 34
- p. 49
- p. 50-55
- p. 56
- p. 57
- p. 58
- p. 59
- p. 60, 91
- p. 60
- p. 59
- p. 60, 91
- p. 61
- p. 82
- E.M.S. Namboodiripad. The Communist Party in Kerala - Six Decades of Struggle and Advance. New Delhi: National Book Centre, 1994. p. 6
- E.M.S. Namboodiripad. The Communist Party in Kerala - Six Decades of Struggle and Advance. New Delhi: National Book Centre, 1994. p. 7
- E.M.S. Namboodiripad. The Communist Party in Kerala - Six Decades of Struggle and Advance. New Delhi: National Book Centre, 1994. p. 7
- E.M.S. Namboodiripad. The Communist Party in Kerala - Six Decades of Struggle and Advance. New Delhi: National Book Centre, 1994. p. 7
- E.M.S. Namboodiripad. The Communist Party in Kerala - Six Decades of Struggle and Advance. New Delhi: National Book Centre, 1994. p. 44
- E.M.S. Namboodiripad. The Communist Party in Kerala - Six Decades of Struggle and Advance. New Delhi: National Book Centre, 1994. p. 45 |
“Keeping time” is a made-up idea. We don’t own time, and in fact, it is very much the opposite. Time owns us. Time makes us greedy for more when we don’t have enough, and ungrateful and anxious when we want it to pass quickly. As a society, we seem never to be satisfied.
The most flagrant manipulation of time to date is the observance of Daylight Saving Time (DST). You know, the beloved or hated “fall back–spring ahead” practices that take place in the spring and fall of every year.
Just the facts, Ma’am:
- Benjamin Franklin was the first person to suggest the concept of Daylight Savings Time, but didn’t know how to implement it
- The purpose of DST is to “save” energy
- DST was first implemented in 1918 during WWI to save resources. Actually, the Germans did it first.
- Since the end of WWII, DST has not been federally enforced and states may choose to observe it
- During wartime, DST was observed year round
- During the Arab Oil Embargo in 1973 and 1974, DST was once again observed throughout the winter.
- There are conflicting studies on whether or not DST actually saves resources and whether or not it is good for your health.
- Arizona (except for residents of the Navajo Indian Reservation), Hawaii, Puerto Rico, the Virgin Islands, American Samoa, Guam, and the Northern Marianas Islands do not observe DST.
- Some other countries in the world observe DST and some do not. Japan hasn’t observed it in 60 years, but some scientists now suggest that adopting it could help ease the nation’s energy shortages following the Fukushima disaster.
- In 2005, the Energy Policy Act was enacted, mandating a controversial month-long extension of daylight saving time that began in 2007.
As far as I can tell, the point of DST is to have an extra hour of daylight in the spring and summer to reduce energy expenditures such as artificial light. Arizona said heck no: DST would make it light until 9 o'clock, and the residents there would be getting an awfully late start on their sundown activities, which are important in such crazy heat. Plus, DST adds an extra hour of sweltering heat onto the day, especially for those south of the Mason-Dixon Line. So, say you get off work while it's still light out because of DST, you're still going home to a hot, hot house where you run the A/C for that hour.
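If you'd rather see the clock shift in software than on the wall, here's a quick sketch using Python's standard zoneinfo module (Python 3.9 or newer; the dates below are just arbitrary examples I picked): New York's UTC offset jumps an hour between winter and summer, while Phoenix stays put all year.

```python
# A minimal sketch: compare winter vs. summer UTC offsets for a zone that
# observes DST (America/New_York) and one that does not (America/Phoenix).
# Assumes Python 3.9+ (zoneinfo in the standard library); dates are arbitrary.
from datetime import datetime
from zoneinfo import ZoneInfo

for zone in ("America/New_York", "America/Phoenix"):
    tz = ZoneInfo(zone)
    winter = datetime(2023, 1, 15, 12, 0, tzinfo=tz)
    summer = datetime(2023, 7, 15, 12, 0, tzinfo=tz)
    print(zone,
          winter.utcoffset().total_seconds() / 3600,  # hours offset from UTC in January
          summer.utcoffset().total_seconds() / 3600)  # hours offset from UTC in July
# New York prints -5.0 then -4.0 (the DST jump); Phoenix prints -7.0 both times.
```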
So… where are the energy savings???
To further that point, scientists like Hendrik Wolff of the University of Washington have noted that the hour of energy saved in the evening is simply added back as an hour of darkness in the morning. Other studies have shown that workers have more on-the-job injuries and get 40 minutes less sleep after the switch, and other scientists such as Till Roenneberg suggest that our circadian rhythms never recover from DST.
And clearly, the person who invented and implemented DST didn’t have kids.
Personally, what troubles me the most about it is that it is totally unnatural. Proponents of DST say that it leads to a more active and healthy lifestyle, and that it “marks the beginning of spring” for most. I struggle not to find that a load of crap, because the arrival of spring and summer naturally brings more light anyway. It's kind of hard to have a proper study of that idea when nature is in your favor to begin with, right?
What would happen if we just abandoned the outdated, wartime practice? Who would suffer, if anyone? Who would lose money? Surely, it must be about money at the end of the day, right? Because if it wasn't, would we still do it?
I just don’t see the point in it.
What do you think? Do you love DST? Or hate it? And why? Share your thoughts with us, either in the comments or on Facebook. I'd love to hear your opinion.
I think I’ve made my opinion clear. *Steps down from soapbox* 🙂 Thanks for joining me.
- National Geographic Daily News
- Why Arizona Doesn’t Observe DST–abc15.com
- Daylight Saving Change: Energy Boon or Waste of Time
- Standard Time |
The Order of the Most Holy Trinity for the Redemption of the Captives or The Order of the Most Holy Trinity for short (Latin: Ordo Sanctissimae Trinitatis redemptionis captivorum, Ordo Sanctissimae Trinitatis, also known as Trinitarians) is a Catholic religious order that was founded in the area of Cerfroid, some 80 km northeast of Paris, at the end of the twelfth century. The founder was St. John de Matha, whose feast day is celebrated on 17 December. From the very outset, a special dedication to the mystery of the Holy Trinity has been a constitutive element of the Order's life. The founding-intention for the Order was the ransom of Christians held captive by non-Christians, a consequence of crusading and pirating along the Mediterranean coast of Europe. The Order has the initials "O.SS.T." The Order’s distinctive cross of red and blue can be traced to its beginnings.
Between the eighth and the fifteenth centuries, medieval Europe was in a state of ongoing war with the expanding Muslim world. Christians took up arms to defend against the advance of Muslims. Arab forces subjugated North Africa, most of Spain and southern France, and took over Sicily, turning the Mediterranean, previously a Roman lake, into a Muslim one. In Christian lands, in the daily conflicts of this secular struggle, Saracens plundered all that could be transported: animals, provisions, fabrics, precious metals, money and especially men, women and children, who would be sold for a good price. Privateering and piracy on the Mediterranean Sea were aggressive and violent means used by Muslims to harass their Christian enemies and, above all, to obtain large profits and easy gains. For over six hundred years, these constant armed confrontations produced numerous war prisoners on both sides. Islam’s captives were reduced to the state of slaves, since they were war booty and subjected to the absolute dominion of their Moorish owners. Such was the condition of countless Christians in the Southern European countries.
The threat of capture, whether by pirates or coastal raiders, or during one of the region's intermittent wars, was not a new but rather a continuing threat to the residents of Catalonia, Languedoc, and the other coastal provinces of medieval Christian Europe.
The redemption of captives is listed among the corporal works of mercy. The period of the Crusades, when so many Christians were in danger of falling into the hands of non-Christians, witnessed the rise of religious orders vowed exclusively to this pious work.
Pope Innocent III granted the order and its rule approval with his letter Operante divine dispositionis clementia, issued on 17 December 1198. Soon after papal approbation, the Trinitarian ministry to Christian captives was incorporated into the Order's title: Order of the Holy Trinity for the Ransom of Captives. In addition to the Order's purpose of ransoming Christian captives, each local community of Trinitarians served the people of its area. And so, their ministry included: hospitality, care of the sick and poor, churches, education, etc. Eventually, the Trinitarians also assumed the work of evangelization.
Brother John's founding intention expanded quickly beyond the three initial foundations (Cerfroid, Planels, Bourg-la-Reine) into a considerable network of houses committed to the ransom of Christian captives and the works of mercy conducted in their locales. Trinitarian tradition considers St. Felix of Valois cofounder of the Order and companion of John of Matha at Cerfroid, near Paris. The first Trinitarian community was established at Cerfroid, which is considered the mother house of the whole Order.
The first generation of Trinitarians could count some fifty foundations. In northern France, the Trinitarians were known as “Mathurins” because they were based in the church of Saint-Mathurin in Paris from 1228 onwards. Ransoming captives required economic resources, so fundraising and economic expertise constituted important aspects of the Order's life. The Rule's requirement of the "tertia pars" (that one-third of all income be set aside for the ransom of Christian captives) became a noted characteristic of the Order.
St. Louis installed a house of the order in his château of Fontainebleau. He chose Trinitarians as his chaplains, and was accompanied by them on his crusades.
Throughout the centuries, the Trinitarian Rule underwent several revisions, notably in 1267 and in 1631. It has been complemented by statutes and constitutions. The thirteenth century was a time of vitality, whereas the following centuries brought periods of difficulty and even decline in some areas. The Council of Trent (1545–1563) was a major turning-point in the life of the Church. Its twenty-fifth session dealt with regulars and nuns and the reform of religious orders. Reforming interests and energies manifested themselves among Trinitarians in France with the foundation at Pontoise, north of Paris, during the last quarter of the sixteenth century. Reform-minded Trinitarians in Spain first established the movement known as the Recollection and then, under the leadership of St. John Baptist of the Conception, a movement at Valdepeñas (Ciudad Real) known as the Spanish Discalced Trinitarians at the very end of the sixteenth century. Far-reaching periods of growth and development followed this rebirth.
In succeeding centuries, European events such as revolution, governmental suppression and civil war had very serious consequences for the Trinitarian Order and it declined significantly. During the last decades of the nineteenth century, the Trinitarians began to grow slowly in Italy and Spain. Its members dedicated themselves to fostering and promoting devotion to the Holy Trinity, evangelising non-Christians, assisting immigrants, educating the young and to becoming involved in parishes. Today the Trinitarian Family is composed of priests, brothers, women (enclosed nuns and active sisters) as well as committed laity. They are distinguished by the cross of red and blue which dates from the origins of the Order. Trinitarians are found throughout Europe and in the Americas as well as in Africa, India, Korea and the Philippines.
Because each Trinitarian community, alongside the Order’s mission of ransoming Christian captives, served the people of its area by performing works of mercy, redemption and mercy are at the center of the Trinitarian charism.
Our Lady of Good Remedy
Our Lady of Good Remedy is the patroness of the Order of the Most Holy Trinity. Devotion to Mary under this ancient title is widely known in Europe and Latin America. Her feast day is celebrated on October 8.
- DeMatha Catholic High School, the only college preparatory and secondary educational school in the Americas run by the Trinitarian Order.
- San Tommaso in Formis, the Trinitarian church in Rome
- San Carlo alle Quattro Fontane in Rome
- Scapular of the Most Blessed Trinity
- "About the Trinitarians"
- Order of the Blessed Virgin of Mercy
- Brodman, James William. Ransoming Captives in Crusader Spain: The Order of Merced on the Christian-Islamic Frontier, 1986
- Moeller, Charles. "Order of Trinitarians." The Catholic Encyclopedia. Vol. 15. New York: Robert Appleton Company, 1912. 22 Feb. 2013
- Order of the Most Holy Trinity
- Alban Butler, Paul Burns, Butler's Lives of the Saints (Continuum International Publishing Group, 2000), 5.
- Our Lady of Good Remedy
- Letter Of Pope John Paul II To The Minister General Of The Order Of The Most Holy Trinity
- Trinitarian official site
- Trinitarian Order
- Adare Trinitarian church |
A flashbulb memory is a highly detailed, exceptionally vivid 'snapshot' of the moment and circumstances in which a piece of surprising and consequential (or emotionally arousing) news was heard. The term "flashbulb memory" suggests the surprise, indiscriminate illumination, detail, and brevity of a photograph; however, flashbulb memories are only somewhat indiscriminate and are far from complete. Evidence has shown that although people are highly confident in their memories, the details of the memories can be forgotten.
Flashbulb memories are one type of autobiographical memory. Some researchers believe that there is reason to distinguish flashbulb memories from other types of autobiographical memory because they rely on elements of personal importance, consequentiality, emotion, and surprise. Others believe that ordinary memories can also be accurate and long-lasting if they are highly distinctive, personally significant, or repeatedly rehearsed.
Flashbulb memories have six characteristic features: place, ongoing activity, informant, own affect, other affect, and aftermath. Arguably, the principal determinants of a flashbulb memory are a high level of surprise, a high level of consequentiality, and perhaps emotional arousal.
- 1 Historical overview
- 2 Positive vs. Negative
- 3 Methods
- 4 Accuracy
- 5 Demographic differences in flashbulb memories
- 6 Improving flashbulb memories
- 7 Controversy: special mechanism hypothesis
- 8 Models
- 9 Neurological bases
- 10 Critique of flashbulb memory research
- 11 See also
- 12 References
The term flashbulb memory was coined by Brown and Kulik in 1977. They formed the special-mechanism hypothesis, which argues for the existence of a special biological memory mechanism that, when triggered by an event exceeding critical levels of surprise and consequentiality, creates a permanent record of the details and circumstances surrounding the experience. Brown and Kulik believed that although flashbulb memories are permanent, they are not always accessible from long-term memory. The hypothesis of a special flashbulb-memory mechanism holds that flashbulb memories have special characteristics that are different from those produced by "ordinary" memory mechanisms. The representations created by the special mechanism are detailed, accurate, vivid, and resistant to forgetting. Most of these initial properties of flashbulb memories have been debated since Brown and Kulik first coined the term. Over the years, four models of flashbulb memories have emerged to explain the phenomenon: the photographic model, the comprehensive model, the emotional-integrative model, and the importance-driven model; additionally, studies have been conducted to test the validity of these models.
Positive vs. Negative
Both positive and negative events can produce flashbulb memories. When an event is viewed as positive, individuals show higher rates of reliving and sensory imagery and report more 'live' qualities associated with the event. Individuals view these positive events as central to their identities and life stories, which results in more rehearsal of the event and in the memory being encoded with greater subjective clarity.
Compared to positive flashbulb memories, events seen as negative by a person prompt more detail-oriented, conservative processing strategies. Negative flashbulb memories are highly unpleasant, causing a person to avoid reliving the negative event. This avoidance may reduce the emotional intensity of the memory: the memory stays intact in an individual who experiences a negative flashbulb memory, but its emotional side becomes more toned down. Negative flashbulb memories are also seen as having more consequences.
Flashbulb memories can also be produced by events that are neither clearly positive nor negative. One study showed that flashbulb memories can be produced by certain brand-related interactions. Using Krispy Kreme and Build-A-Bear, it was found that these two brands produced definitional flashbulb memories, whereas brands lacking strongly differentiated positioning did not. These “flashbulb brand memories” were viewed very much like conventional flashbulb memories in terms of strength, sharpness, vividness, and intensity.
Research on flashbulb memories generally shares a common method. Typically, researchers conduct studies immediately following a shocking public event. Participants are first tested within a few days of the event, answering questions via survey or interview about the details and circumstances of their personal experience of the event. Participants are then tested a second time, for example six months, a year, or 18 months later. Generally, participants are divided into groups, each group being tested at a different interval. This method allows researchers to observe the rate of memory decay and the accuracy and content of flashbulb memories.
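As a rough illustration of how such test-retest responses might be scored, here is a short sketch in Python. The feature names follow the six canonical flashbulb features mentioned above, but the exact-match scoring rule and the sample answers are illustrative assumptions, not a published protocol.

```python
# Hypothetical sketch: score the consistency of a participant's flashbulb
# report between the initial test and a delayed retest. The exact-match rule
# and the example answers below are illustrative assumptions only.
FEATURES = ["place", "ongoing_activity", "informant",
            "own_affect", "other_affect", "aftermath"]

def consistency_score(initial: dict, delayed: dict) -> float:
    """Return the fraction of canonical features answered the same way at both tests."""
    matches = sum(
        1 for f in FEATURES
        if initial.get(f, "").strip().lower() == delayed.get(f, "").strip().lower()
    )
    return matches / len(FEATURES)

# Example: four of six details are recalled the same way months later.
t1 = {"place": "office", "ongoing_activity": "meeting", "informant": "coworker",
      "own_affect": "shock", "other_affect": "disbelief", "aftermath": "called family"}
t2 = {"place": "office", "ongoing_activity": "meeting", "informant": "radio",
      "own_affect": "shock", "other_affect": "sadness", "aftermath": "called family"}
print(consistency_score(t1, t2))  # 0.666...
```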
Many researchers feel that flashbulb memories are not accurate enough to be considered their own category of memory. One of the issues is that flashbulb memories may deteriorate over time, just like everyday memories. It has also been questioned whether flashbulb memories are significantly different from everyday memories. A number of studies suggest that flashbulb memories are not especially accurate, but that they are experienced with great vividness and confidence. Many experimenters who question the accuracy of flashbulb memories point to rehearsal of the event as the culprit: errors that are rehearsed through retelling and reliving can become part of the memory. Because the events behind flashbulb memories happen only a single time, there are no opportunities for repeated exposure or correction, so errors introduced early on are likely to remain. Many individuals see the events that create flashbulb memories as very important and want to "never forget", which may result in overconfidence in the accuracy of the flashbulb memory. The most important factor in creating a flashbulb memory is not what occurs at the exact moment of hearing striking news, but rather what occurs after hearing the news. The role of post-encoding factors such as retelling and reliving is important for understanding the increase in remembrance after the event has taken place.
Such research focuses on identifying reasons why flashbulb memories are more accurate than everyday memories. It has been documented that the importance of an event, the consequences involved, how distinct it is, personal involvement in the event, and proximity all increase the accuracy of recall of flashbulb memories.
Stability over time
It has been argued that flashbulb memories are not very stable over time. A study conducted on the recollection of flashbulb memories for the Challenger Space Shuttle disaster sampled two independent groups of subjects on a date close to the disaster, and another eight months later. Very few subjects had flashbulb memories for the disaster after eight months. Considering only the participants who could recall the source of the news, ongoing activity, and place, researchers reported that less than 35% had detailed memories. Another study examining participants' memories for the Challenger Space Shuttle explosion found that although participants were highly confident about their memories for the event, their memories were not very accurate three years after the event had occurred. A third study conducted on the O.J. Simpson murder case found that although participants' confidence in their memories remained strong, the accuracy of their memories declined 15 months after the event, and continued to decline 32 months after the event.
However, other studies still show that the uniqueness of flashbulb memories is due to the confidence of those who remember them. A study conducted on the bombing in Iraq and a contrasting ordinary event showed no difference for memory accuracy over a year period; however, participants showed greater confidence when remembering the Iraqi bombing than the ordinary event despite no difference in accuracy. Likewise, when memories for the 9/11 World Trade Center attack were contrasted with everyday memories, researchers found that after one year, there was a high, positive correlation between the initial and subsequent recollection of the 9/11 attack. This indicates very good retention, compared to a lower positive correlation for everyday memories. Participants also showed greater confidence in memory at time of retrieval than time of encoding.
Relation to autobiographical memory
Some studies indicate that flashbulb memories are not more accurate than other types of memories. It has been reported that memories of high school graduation or early emotional experiences can be just as vivid and clear as flashbulb memories. Undergraduates recorded their three most vivid autobiographical memories. Nearly all of the memories produced were rated to be of high personal importance, but low national importance. These memories were rated as having the same level of consequentiality and surprise as memories for events of high national importance. This indicates that flashbulb memories may just be a subset of vivid memories and may be the result of a more general phenomenon.
When comparing flashbulb memories with "control memories" (non-flashbulb memories), it has been observed that flashbulb memories are encoded incidentally, whereas a non-flashbulb memory can be deliberately encoded if one chooses to. Both types of memory are accompanied by vividness, but for flashbulb memories the vividness was found to be much higher and never to decrease, whereas the vividness of control memories did decrease over time.
Flashbulb memory has always been classified as a type of autobiographical memory, which is memory for one's everyday life events. Emotionally neutral autobiographical events, such as a party or a barbecue, were contrasted with emotionally arousing events that were classified as flashbulb memories. Memory for the neutral autobiographical events was not as accurate as memory for the emotionally arousing events of Princess Diana's death and Mother Teresa's death. Therefore, flashbulb memories were more accurately recalled than everyday autobiographical events. In some cases, the consistency of flashbulb memories and everyday memories does not differ, as both decline over time. Ratings of vividness, recollection, and belief in the accuracy of memory, however, have been documented to decline only in everyday memories and not in flashbulb memories.
The latent structure of a flashbulb memory is taxonic, and qualitatively distinct from that of non-flashbulb memories. It has been suggested that there are "optimal cut points" on flashbulb memory features that can ultimately divide people who can produce them from those who cannot. This follows the idea that flashbulb memories are a recollection of "event-specific sensory-perceptual details" and are much different from other known autobiographical memories. Ordinary memories show a dimensional structure that involves all levels of autobiographical knowledge, whereas flashbulb memories appear to come from a more densely integrated region of autobiographical knowledge. Flashbulb memories and non-flashbulb memories also differ qualitatively and not just quantitatively. Flashbulb memories are considered a form of autobiographical memory but involve the activation of episodic memory, whereas everyday memories are a more semantic form of recollection. As a form of autobiographical recollection, flashbulb memories are deeply shaped by the reconstructive processes of memory and, like any other form of memory, are prone to decay.
Importance of an event
Brown and Kulik emphasized that importance is a critical variable in flashbulb memory formation. In a study conducted by Brown and Kulik, news events were chosen so that some of them would be important to some of their subjects, but not to others. They found that when an event was important to one group, it was associated with a comparatively high incidence of flashbulb memories. The same event, when judged lower on importance by another group, was found to be associated with a lower incidence of flashbulb memory. The retelling or rehearsal of personally important events also increases the accuracy of flashbulb memories. Personally important events tend to be rehearsed more often than non-significant events. A study conducted on flashbulb memories of the Loma Prieta earthquake found that people who discussed and compared their personal stories with others repeatedly had better recall of the event compared to Atlanta subjects who had little reason to talk about how they had heard the news. Therefore, the rehearsal of personally important events can be important in developing accurate flashbulb memories. There has been other evidence that shows that personal importance of an event is a strong predictor of flashbulb memories. A study done on the flashbulb memory of the resignation of the British prime minister, Margaret Thatcher, found that the majority of UK subjects had flashbulb memories nearly one year after her resignation. Their memory reports were characterized by spontaneous, accurate, and full recall of event details. In contrast, a low number of non-UK subjects had flashbulb memories one year after her resignation. Memory reports in this group were characterized by forgetting and reconstructive errors. The flashbulb memories for Margaret Thatcher's resignation were, therefore, primarily associated with the level of importance attached to the event.
The death of Princess Diana was a very important and surprising event that affected people across the globe. In terms of accuracy, the importance of an event is related to how accurate an individual’s flashbulb memory is. Reports found that among British participants no forgetting occurred over the four years following the event. Events that are highly surprising and rated as highly important to an individual may be preserved in memory for a longer period of time and retain the qualities of recent events, compared to memories in people who were less affected. If an event has a strong impact on an individual, the memory is retained much longer.
It was proposed that the intensity of the initial emotional reaction, rather than perceived consequentiality, is a primary determinant of flashbulb memories. Flashbulb memories of the 1981 assassination attempt on President Reagan were studied, and it was found that participants had accurate flashbulb memories seven months after the shooting. Respondents reported flashbulb memories despite low consequentiality ratings. This study only evaluated the consequence of learning about a flashbulb event, and not how the consequences of being involved with the event affect accuracy. Therefore, some people were unsure of the extent of injury, and most could only guess about the eventual outcomes. Two models of flashbulb memory state that the consequences of an event determine the intensity of emotional reactions. The Importance-Driven Emotional Reactions Model indicates that personal consequences determine the intensity of emotional reactions. The consequence of an event is a critical variable in the formation and maintenance of a flashbulb memory. These propositions were based on flashbulb memories of the Marmara earthquake. The other model of flashbulb memory, called the Emotional-Integrative Model, proposes that both personal importance and consequentiality determine the intensity of one's emotional state. Overall, the majority of research on flashbulb memories demonstrates that the consequences of an event play a key role in the accuracy of flashbulb memories. The death of Pope John Paul II did not come as a surprise, but flashbulb memories were still found in individuals who were affected. This shows a direct link between emotion and event memory, and emphasizes how attitude can be a key factor in determining the importance and consequentiality of an event. Events high in importance and consequentiality lead to more vivid and long-lasting flashbulb memories.
Distinctiveness of an event
Some experiences are unique and distinctive, while others are familiar, commonplace, or are similar to much that has gone on before. Distinctiveness of an event has been considered to be a main contributor to the accuracy of flashbulb memories. The accounts of flashbulb memory that have been documented as remarkably accurate have been unique and distinctive from everyday memories. It has been found that uniqueness of an event can be the best overall predictor of how well it will be recalled later on. In a study conducted on randomly sampled personal events, subjects were asked to carry beepers that went off randomly. Whenever the beeper sounded, participants recorded where they were, what they were doing, and what they were thinking. Weeks or months later, the participants' memories were tested. The researchers found that recall of action depends strongly on uniqueness. Similar results have been found in studies regarding distinctiveness and flashbulb memories; memories for events that produced flashbulb memories, specifically various terrorist attacks, had high correlations between distinctiveness and personal importance, novelty, and emotionality. It has also been documented that if someone has a distinctive experience during a meaningful event, then accuracy for recall will increase. During the 1989 Loma Prieta earthquake, higher accuracy for the recall of the earthquake was documented in participants who had distinctive experiences during the earthquake, often including a substantial disruption in their activity.
Personal involvement and proximity
It has been documented that people who are involved in a flashbulb event have more accurate recollections than people who were not involved in the event. Those who experienced the Marmara earthquake in Turkey had more accurate recollections of the event than people who had no direct experience of it. In this study, the majority of participants in the victim group recalled more specific details about the earthquake than the group that was not directly affected by the earthquake and instead received its information from the news. Another study compared Californians' memories of an earthquake that happened in California with the memories of the same earthquake formed by people living in Atlanta. The results indicated that the people who were personally involved with the earthquake had better recall of the event. Californians' recall of the event was much better than Atlantans', with the exception of Atlantans who had relatives in the affected area and therefore reported being more personally involved. The death of Pope John Paul II created many flashbulb memories among people who were more religiously involved with the Catholic Church. The more involved someone is with a religion, city, or group, the more importance and consequentiality they report for an event. More emotion is reported, resulting in more consistent flashbulb memories.
A study conducted on the September 11, 2001 terrorist attacks demonstrates that proximity plays a part in the accuracy of recall of flashbulb memories. Three years after the terrorist attacks, participants were asked to retrieve memories of 9/11, as well as memories of personally selected control events from 2001. At the time of the attacks, some participants were in the downtown Manhattan region, closer to the World Trade Center, while others were in Midtown, a few miles away. The participants who were closer to downtown recalled more emotionally significant detailed memories than the Midtown participants. When looking solely at the Manhattan participants, the retrieval of memories for 9/11 were accompanied by an enhancement in recollective experience relative to the retrieval of other memorable life events in only a subset of participants who were, on average, two miles from the World Trade Center (around Washington Square) and not in participants who were, on average, 4.5 miles from the World Trade Center (around the Empire State Building). Although focusing only on participants that were in Manhattan on 9/11, the recollections of those closer to the World Trade Center were more vivid than those who were farther away. The downtown participants reported seeing, hearing, and even smelling what had happened. Personal involvement in, or proximity to, a national event could explain greater accuracy in memories because there could be more significant consequences for the people involved, such as the death of a loved one, which can create more emotional activation in the brain. This emotional activation in the brain has been shown to be involved in the recall of flashbulb memories.
Source of information
When looking at the source of knowledge about an event, hearing the news from the media or from another person does not cause a difference in reaction; rather, it causes a difference in the type of information that is encoded into one's memory. When hearing the news from the media, more details about the event itself are remembered, because the facts are processed while experiencing high levels of arousal, whereas when hearing the news from another individual, a person tends to remember personal responses and circumstances.
Demographic differences in flashbulb memories
Although people of all ages experience flashbulb memories, different demographics and ages can influence the strength and quality of a flashbulb memory.
In general, younger adults form flashbulb memories more readily than older adults. One study examined age-related differences in flashbulb memories: participants were tested for memory within 14 days of an important event and then retested for memory of the same event 11 months later. Even 11 months after the event occurred, nearly all the younger adults experienced flashbulb memories, but less than half of the older adults met all the criteria of a flashbulb memory. Younger and older adults also showed different reasons for recalling vivid flashbulb memories. The main predictor for creating flashbulb memories among younger adults was emotional connectedness to the event, whereas older adults relied more on rehearsal of the event in creating flashbulb memories. Being emotionally connected was not enough for older adults to create flashbulb memories; they also needed to rehearse the event over the 11 months to remember details. Older adults also had more difficulty remembering the context of the event; they were more likely to forget with whom they spoke and where events took place on a daily basis. If older adults are significantly impacted by the dramatic event, however, they can form flashbulb memories that are just as detailed as those that younger adults form. Older adults who were personally impacted by or close to September 11 recalled memories that did not differ in detail from those of younger adults. Older adults were also found to be more confident in their memories than younger adults in regard to whom they were with, where they were, and their own personal emotions at the time of hearing the news of 9/11. Older adults remember a vast majority of events from between the ages of 10 and 30, a period known as the “reminiscence bump”. Events from that period occur during a time of identity formation and peak brain function, and tend to be talked about more than events occurring outside this period. Flashbulb memories from the “reminiscence bump” are therefore better remembered by older adults than are memories of events that occurred more recently.
Generally the factors that influence flashbulb memories are considered to be constant across cultures. Tinti et al. (2009) conducted a study on memories of Pope John Paul II's death amongst Polish, Italian, and Swiss Catholics. The results showed that personal involvement was most important in memory formation, followed by proximity to the event.
Flashbulb memories differ among cultures in the degree to which certain factors influence their vividness. For example, Asian cultures de-emphasize individuality; therefore Chinese and Japanese people might not be as affected by the effects of personal involvement on the vividness of flashbulb memories. A study conducted by Kulkofsky, Wang, Conway, Hou, Aydin, Johnson, and Williams (2011) investigated the formation of flashbulb memories in five countries: China, the United Kingdom, the United States, Germany, and Turkey. Overall, participants in the United States and the United Kingdom reported more memories in a five-minute span than participants from Germany, Turkey, and China. This could simply be because different cultures have different memory search strategies. In terms of flashbulb memories, Chinese participants were less affected by all factors related to personal closeness and involvement with the event. There were also cultural variations in the effects of emotional intensity and surprise.
Although not much research has been done on gender and flashbulb memories, one study notes the existence of gender effects on the various factors which contribute to flashbulb memories. Researchers had Israeli university students complete questionnaires regarding their memories of various terrorist attacks. Men rated the distinctiveness of their flashbulb-producing event significantly higher than women did. Additionally, men had memories with significantly more detail than women. Women, however, reported significantly higher rates of emotional reactivity. It is unclear how generalizable these findings are, as they are the results of only one study.
Other studies conducted in this area of research yielded findings indicating that women are able to produce more vivid details of events than men. One such study had participants fill out questionnaires pertaining to the Senate hearings that confirmed Clarence Thomas as a Supreme Court Justice (Morse, 1993). The questionnaire contained four sections. The first asked about vivid images associated with the weekend the hearing took place, and the participants were asked to rate the two most vivid images using 7-point bipolar scales. The scales rated "personal importance, unexpectedness of the recalled event, consequentiality of the event, vividness of the memory, and emotional intensity of the recalled event." The second section contained questions on autobiographical events not recently thought of and also used the 7-point scale format. The third section inquired about the number of hours spent watching or listening to media coverage of the hearing, and the fourth asked about details of the memories that were reported. A total of 94 respondents were surveyed; of those, there were 62 females, 31 males, and one person who did not indicate gender. The study found that half of the individuals reported vivid memory images associated with the hearings: 64% of women reported images as opposed to 33% of men. 77% of women reported having had stimulated recall of an autobiographical event, while only 27% of men indicated having experienced such recall. Beyond the two rated memories given in the first section, women were more likely than men to report additional imagery (24% of women and 6% of men). There was no difference in the average amount of time spent consuming media coverage of the hearing.
A large body of research was conducted into events taking place during the terrorist attacks on 9/11, although it was not specifically geared toward finding gender differences. In one study, researchers had participants answer questions to establish "consistent flashbulb memory," which consists of details about where the participants were at the time of the attacks, what they were doing, etc. In 2002 it was found that 48% of respondents fulfilled these requirements, and of those people 49% were women and 47% were men. In 2003, 45% of respondents surveyed met the criteria for having "consistent flashbulb memory." Of those 45%, women made up 46% of the group while men made up 44% (Conway, 2009). Women seemed more likely to have a consistent memory for the event than men in this study. It should be noted that temporal distance from the incident decreases memory consistency.
Biological reasons for gender variation in flashbulb memory may be explained by amygdala asymmetry. The amygdala is a part of the limbic system, and is linked with memory and emotion. Memory is enhanced by emotion, and studies have shown that people are more likely to remember a negative event than a neutral or positive one. Investigations into the amygdala revealed that "people who showed strong amygdala activation in response to a set of positive or negative stimuli (relative to other study participants) also showed superior memory for those stimuli (relative to other study participants)". This may explain why flashbulb memory typically involves traumatic events. When viewing emotional content, research has shown that men enhance their memory by activating their right amygdala while women activate the left side.

The functional asymmetry of amygdala activation between genders is exemplified in experimentation with lesions and brain-damaged patients. One study, using a case-matched lesion approach, found that a "man with right-sided amygdala damage developed major defects in social conduct, emotional processing and personality, and decision making, whereas the man with left-sided amygdala damage did not". The reverse effect was found between two women. An experiment was conducted that had 12 men and 12 women view an assortment of images (emotional and nonemotional). Three weeks after the experiment a follow-up study was conducted testing the memory of those individuals, and it was "revealed that highly emotional pictures were remembered best, and remembered better by women than by men". One study performed an MRI scan on 40 patients after showing them aversive and non-aversive photographs preceded by a warning stimulus. This experiment found that "previously reported sex differences of memory associations with left amygdala for women and with right amygdala for men were confined to the ventral amygdala during picture viewing and delayed memory". Although it is still unclear how lateralization affects memory, there may be a more effective relationship between activation of the left amygdala and memory than between activation of the right amygdala and memory.

Generally speaking, studies testing differences between genders on episodic memory tasks revealed that "women consistently outperform men on tasks that require remembering items that are verbal in nature or can be verbally labeled" (Herlitz, 2008). In addition, it seems that "women also excel on tasks requiring little or no verbal processing, such as recognition of unfamiliar odors or faces" (Herlitz, 2008). Men only seem to excel in memory tasks that require visuospatial processing. Gender differences are also very apparent in the literature pertaining to autobiographical memory research. "Compared to men, women's recall is more accurate and, when not specifically prompted, their narratives are longer than men's" (Aizpura, 2010). To sum up these gender differences, most literature on memory indicates that:
"Women use a greater quantity and variety of emotion words than men when describing their past experiences (Adams, Kuebli, Boyle, & Fivush, 1995; Bauer et al., 2003; Fivush et al., 2003; Hess et al., 2000). Women include not only a greater number of references to their own emotional states but also a greater number of references to the emotional states of others. In addition, when asked to recall emotional life experiences, women recall more memories of both positive and negative personal experiences than men" (Bloise, 2007).
Overall women seem to have better memory performance than men in both emotional and non-emotional events.
There are many problems with assessing the gender differences found in research on this topic. Most apparent is that this research relies heavily on self-reporting of events. Inaccurate findings could result from biased questions or from misremembering on the part of the participants. There is no way to completely verify the accuracy of accounts given by the subjects in a study. Additionally, there are many indications that eyewitness memory can often be fallible. Emotion does not seem to improve memory performance in situations that involve weapons: one study found that eyewitnesses remembered details about perpetrators less clearly when a weapon was involved in the event (Pickel, 2009). Accuracy in these situations is compromised by a phenomenon known as the weapon focus effect. Further complicating matters is the time frame in which people are surveyed in relation to the event; many studies fall victim to surveying people well after the events have transpired. Thus, there is a validity issue with much of the research into flashbulb memory in general, as well as with any apparent gender differences found therein.
Improving flashbulb memories
A number of studies have found that flashbulb memories are formed immediately after a life-changing event happens or when news of the event is relayed. Although additional information about the event can then be researched or learned, the extra information is often lost in memory due to different encoding processes. A more recent study, examining effects of the media on flashbulb memories for the September 11, 2001 attacks, shows that extra information may help retain vivid flashbulb memories. Although the researchers found that memory for the event decreased over time for all participants, looking at images had a profound effect on participants' memory. Those who said they saw images of the September 11th attacks immediately retained much more vivid images six months later than those who said they saw images hours after they heard about the attacks. The latter participants failed to encode the images with the original learning of the event. Thus, it may be the images themselves that led some of the participants to recall more details of the event. Graphic images may make an individual associate more with the horror and scale of a tragic event and hence produce a more elaborate encoding mechanism. Furthermore, looking at images may help individuals retain vivid flashbulb memories months, and perhaps even years, after an event occurs.
Controversy: special mechanism hypothesis
The special-mechanism hypothesis has been the subject of considerable discussion in recent years, with some authors endorsing the hypothesis and others noting potential problems. This hypothesis divides memory processes into different categories, positing that different mechanisms underlie flashbulb memories. Yet many argue that flashbulb memories are simply the product of multiple, unique factors coalescing.
Data concerning people's recollections of the Reagan assassination attempt provide support for the special-mechanism hypothesis. People had highly accurate accounts of the event and had lost very few details regarding the event several months after it occurred. Additionally, an experiment examining emotional state and word valence found that people are better able to remember irrelevant information when they are in a negative, shocked state. There is also neurological evidence in support of a special mechanism view. Emotionally neutral autobiographical events, such as a party, were compared with two emotionally arousing events: Princess Diana's death, and Mother Teresa's death. Long-term memory for the contextual details of an emotionally neutral autobiographical event was significantly related to medial temporal lobe function and correlated with frontal lobe function, whereas there was no hint of an effect of either medial temporal lobe or frontal lobe function on memory for the two flashbulb events. These results indicate that there might be a special neurobiological mechanism associated with emotionally arousing flashbulb memories.
Studies have shown that flashbulb memories can result from non-surprising events, such as the first moon landing, and also from non-consequential events. While Brown and Kulik defined flashbulb memories as memories of first learning about a shocking event, they expand their discussion to include personal events in which the memory is of the event itself. Simply asking participants to retrieve vivid, autobiographical memories has been shown to produce memories that contain the six features of flashbulb memories. Therefore, it has been proposed that such memories be viewed as products of ordinary memory mechanisms. Moreover, flashbulb memories have been shown to be susceptible to errors in reconstructive processes, specifically systematic bias. It has been suggested that flashbulb memories are not especially resistant to forgetting. A number of studies suggest that flashbulb memories are not especially accurate, but that they are experienced with great vividness and confidence. Therefore, it is argued that it may be more precise to define flashbulb memories as extremely vivid autobiographical memories. Although they are often memories of learning about a shocking public event, they are not limited to such events, and not all memories of learning about shocking public events produce flashbulb memories.
The photographic model
Brown and Kulik proposed the term flashbulb memory, along with the first model of the process involved in developing what they called flashbulb accounts. The photographic model proposes that in order for a flashbulb account to occur in the presence of a stimulus event, there must be a high level of surprise, consequentiality, and emotional arousal. Specifically, at the time an individual first hears of an event, the degree of unexpectedness and surprise is the first step in the registration of the event. The next step involved in the registration of flashbulb accounts is the degree of consequentiality, which in turn triggers a certain level of emotional arousal. Brown and Kulik described consequentiality as the things one would imagine might have gone differently if the event hadn't occurred, or what consequences the event had on an individual's life. Furthermore, Brown and Kulik believed that high levels of these variables would also result in frequent rehearsal, either covert ("always on the mind") or overt (e.g., talked about in conversations with others). Rehearsal, which acts as a mediating process in the development of a flashbulb account, creates stronger associations and more elaborate accounts. Therefore, the flashbulb memory becomes more accessible and is vividly remembered for a long period of time.
Some researchers recognized that previous studies of flashbulb memories are limited by the reliance on small sample groups of few nationalities, thus limiting the comparison of memory consistency across different variables. The comprehensive model was born out of similar experimentation as Brown and Kulik's, but with a larger participant sample. One major difference between the two models is that the Photographic Model follows more of a step-by-step process in the development of flashbulb accounts, whereas the Comprehensive Model demonstrates an interconnected relationship between the variables. Specifically, knowledge and interest in the event affects the level of personal importance for the individual, which also affects the individual's level of emotional arousal (affect). Furthermore, knowledge and interest pertaining to the event, as well as the level of importance, contribute to the frequency of rehearsal. Therefore, high levels of knowledge and interest contribute to high levels of personal importance and affect, as well as high frequency of rehearsal. Finally, affect and rehearsal play major roles in creating associations, thus enabling the individual to remember vivid attributes of the event, such as the people, place, and description of the situation.
The Emotional-Integrative Model of flashbulb memories integrates the two previously discussed models, the Photographic Model and the Comprehensive Model. Similar to the Photographic Model, the Emotional-Integrative Model states that the first step toward the registration of a flashbulb memory is an individual's degree of surprise associated with the event. This level of surprise triggers an emotional feeling state, which is also a result of the combination of the level of importance (consequentiality) of the event to the individual and the individual's affective attitude. The emotional feeling state of the individual directly contributes to the creation of a flashbulb memory. To strengthen the association, thus enabling the individual to vividly remember the event, the emotional feeling state and affective attitude contribute to overt rehearsal (a mediator) of the event, which strengthens the memory of the original event and, in turn, determines the formation of a flashbulb memory. According to the Emotional-Integrative Model, flashbulb memories can also be formed for expected events. The formation of flashbulb memories in this case depends greatly on a strong emotional relationship to the event and rehearsal of the memory.
Importance-driven emotional reactions model
This model emphasizes that personal consequences determine intensity of emotional reactions. These consequences are, therefore, critical operators in the formation and maintenance of flashbulb memories. This model was based on whether traumatic events were experienced or not during the Marmara earthquake. According to the findings of this study, the memories of the people who experienced the earthquake were preserved as a whole, and unchanged over time. Results of the re-test showed that the long-term memories of the victim group are more complete, more durable and more consistent than those of the comparison group. Therefore, based on this study, a new model was formed that highlights that consequences play a very large role in the formation of flashbulb memories.
Flashbulb memories compared to traumatic memories
As discussed previously, flashbulb memories are engendered by highly emotional, surprising events. How are these memories different from memories for traumatic events? The answer is stress. Traumatic events involve some element of fear or anxiety. While flashbulb memories can include components of negative emotion, the elements of fear and anxiety are generally absent.
There are, though, some similarities between traumatic memories and flashbulb memories. During a traumatic event, high arousal can increase attention to central information, leading to increased vividness and detail, similar to a flashbulb memory. Another similar characteristic is that memory for traumatic events is enhanced by emotional stimuli. However, the largest difference between the nature of flashbulb memories and traumatic memories is the amount of information regarding unimportant details that is encoded in the memory of the event. In high-stress situations, arousal dampens memory for peripheral information such as context, location, time, or other less important details.
Laboratory studies have related specific neural systems to the influence of emotion on memory. Cross-species investigations have shown that emotional arousal causes neurohormonal changes, which engage the amygdala. The amygdala modulates the encoding, storage, and retrieval of episodic memory. These memories are later retrieved with an enhanced recollective experience, similar to the recollection of flashbulb memories. The amygdala, therefore, may be important in the encoding and retrieval of memories for emotional public events. Since the role of the amygdala in memory is associated with increased arousal induced by the emotional event, factors that influence arousal should also influence the nature of these memories. The constancy of flashbulb memories over time varies based on the individual factors related to the arousal response, such as emotional engagement and personal involvement with the shocking event. The strength of amygdala activation at retrieval has been shown to correlate with an enhanced recollective experience for emotional scenes, even when accuracy is not enhanced. Memory storage is increased by endocrine responses to shocking events; the more shocking an individual finds an event, the more likely a vivid flashbulb memory will develop.
There has been considerable debate as to whether unique mechanisms are involved in the formation of flashbulb memories, or whether ordinary memory processes are sufficient to account for memories of shocking public events. Sharot et al. found that for individuals who were close to the World Trade Center, the retrieval of 9/11 memories engaged neural systems that are uniquely tied to the influence of emotion on memory. The engagement of these emotional memory circuits is consistent with the unique limbic mechanism that Brown and Kulik suggested. These are the same neural mechanisms, however, engaged during the retrieval of emotional stimuli in the laboratory. The consistency in the pattern of neural responses during the retrieval of emotional scenes presented in the laboratory and flashbulb memories suggests that even though different mechanisms may be involved in flashbulb memories, these mechanisms are not unique to the surprising and consequential nature of the initiating events.
Evidence indicates the importance of the amygdala in the retrieval of 9/11 events, but only among individuals who personally experienced these events. The amygdala's influence on episodic memory is explicitly tied to physiological arousal. Although simply hearing about shocking public events may result in arousal, the strength of this response likely varies depending on the individual's personal experience with the events.
Critique of flashbulb memory research
Flashbulb memory research tends to focus on public events that have a negative valence. There is a shortage of studies regarding personal events such as accidents or trauma, due to the nature of the variables needed for flashbulb memory research: the experience of a surprising event is hard to manipulate. Additionally, little research has been done on gender differences and flashbulb memory, although such research exists for general memory.
- Brown, Roger; Kulik, James (1977). "Flashbulb memories". Cognition 5 (1): 73–99. doi:10.1016/0010-0277(77)90018-X.
- Robinson-Riegler, Bridget (2012). Cognitive Psychology. Boston: Allyn & Bacon. pp. 297–299. ISBN 978-0-205-03364-5.
- Conway, Martin A. (1995). Flashbulb memories (Essays in cognitive psychology). ISBN 978-0863773532.
- Pillemer, David B. (March 1990). "Clarifying flashbulb memory concept: Comment on McCloskey, Wible, and Cohen (1988)". Journal of Experimental Psychology: General 119 (1): 92–96. doi:10.1037/0096-3445.119.1.92.
- McCloskey, Michael; Wible, Cynthia G.; Cohen, Neal J. (June 1988). "Is there a special flashbulb-memory mechanism?" (PDF). Journal of Experimental Psychology: General 117 (2): 171–181. doi:10.1037/0096-3445.117.2.171.
- Weaver, Charles A. (March 1993). "Do you need a "flash" to form a flashbulb memory?" (PDF). Journal of Experimental Psychology: General 122: 39–46. doi:10.1037/0096-3445.122.1.39.
- Neisser, U. (1982). "Snapshots or benchmarks", Memory Observed: Remembering in Natural Contexts, pp. 43–48, San Francisco: Freeman
- Brown, R.; Kulik, J. (1977). "Flashbulb Memories". Cognition 5 (1): 73–99. doi:10.1016/0010-0277(77)90018-X.
- Cohen, N; McCloskey, M.; Wible, C. (1990). "Flashbulb memories and underlying cognitive mechanisms: Reply to Pillemer". Journal of Experimental Psychology 119: 97–100. doi:10.1037/0096-3445.119.1.97.
- Er, N. (2003). "A new flashbulb memory model applied to the Marmara earthquake". Applied Cognitive Psychology 17 (5): 503–517. doi:10.1002/acp.870.
- Bohn, A.; Berntsen, D. (April 2007). "Pleasantness bias in flashbulb memories: Positive and negative flashbulb memories of the fall of the Berlin Wall among East and West Germans" (PDF). Memory & Cognition 35 (3): 565–577. doi:10.3758/BF03193295. PMID 17691154.
- Roehm Jr., Harper A.; Roehm, Michelle L. (January 2007). "Can brand encounters inspire flashbulb memories?". Psychology and Marketing 24 (1): 25–40. doi:10.1002/mar.20151.
- Sharot T., Martorella A., Delgado R., Phelps A. (2006). "How Personal experience modulates the neural circuitry of memories of September 11". Proceedings of the National Academy of Science: USA 104 (1): 389–394. doi:10.1073/pnas.0609230103. PMC 1713166. PMID 17182739.
- Schmolck, H.; Buffalo, E. A.; Squire, L. R. (2000). "Memory distortions develop over time: Recollections of the O.J. Simpson trial verdict after 15 and 32 months". Psychological Science 11 (1): 39–45. doi:10.1111/1467-9280.00212. PMID 11228841.
- Neisser, U.; Winograd, E.; Bergman, E. T.; Schreiber, C. A.; Palmer, S. E.; Weldon, M. S. (July 1996). "Remembering the earthquake: direct experience vs. hearing the news". Memory 4 (4): 337–357. doi:10.1080/096582196388898. PMID 8817459.
- Talarico, J. M.; Rubin, D. C. (September 2003). "Confidence, not consistency, characterizes flashbulb memories" (PDF). Psychological Science 14 (5): 455–461. doi:10.1111/1467-9280.02453. JSTOR 40064167. PMID 12930476.
- Talarico, Jennifer M.; Rubin, David C. (July 2007). "Flashbulb memories are special after all; in phenomenology, not accuracy" (PDF). Applied Cognitive Psychology 21 (5): 557–578. doi:10.1002/acp.1293.
- Coluccia, Emanuele; Bianco, Carmela; Brandimonte, Maria A. (February 2010). "Autobiographical and event memories for surprising and unsurprising events". Applied Cognitive Psychology 24 (2): 177–199. doi:10.1002/acp.1549.
- Sharot, Tali; Delgado, Mauricio R.; Phelps, Elizabeth A. (December 2004). "How emotion enhances the feeling of remembering" (PDF). Nature Neuroscience 7 (12): 1376–1380. doi:10.1038/nn1353. PMID 15558065.
- Bohannon III, John Neil (July 1988). "Flashbulb memories for the space shuttle disaster: A tale of two theories". Cognition 29 (2): 179–196. doi:10.1016/0010-0277(88)90036-4. PMID 3168421.
- Weaver, C. (1993). "Do you need a "flash" to form a flashbulb memory?". Journal of Experimental Psychology 122: 39–46. doi:10.1037/0096-3445.122.1.39.
- Davidson, P. S. R.; Cook, S. P.; Glisky, E. L. (June 2006). "Flashbulb memories for September 11th can be preserved in older adults" (PDF). Aging, Neuropsychology, and Cognition 13 (2): 196–206. doi:10.1080/13825580490904192.
- Rubin, David C.; Kozin, Marc (February 1984). "Vivid memories". Cognition 16 (1): 81–95. doi:10.1016/0010-0277(84)90037-4. PMID 6540650.
- Kvavilashvili, L.; Mirani, J.; Schlagman, S.; Erskine, J. A. K.; Kornbrot, D. E. (June 2010). "Effects of age on phenomenology and consistency of flashbulb memories of September 11 and a staged control event". Psychology and Aging 25 (2): 391–404. doi:10.1037/a0017532. PMID 20545423.
- Davidson, Patrick S. R.; Glisky, Elizabeth L. (2002). "Is flashbulb memory a special instance of source memory? Evidence from older adults" (PDF). Memory 10 (2): 99–111. doi:10.1080/09658210143000227. PMID 11798440.
- Lanciano, T.; Curci, A. (2012). "Type or dimension? A taxometric investigation of flashbulb memories". Memory 20 (2): 177–188. doi:10.1080/09658211.2011.651088. PMID 22313420.
- Curci, A.; Lanciano, T. (April 2009). "Features of Autobiographical Memory: Theoretical and Empirical Issues in the Measurement of Flashbulb Memory". The Journal of General Psychology 136 (2): 129–150. doi:10.3200/GENP.136.2.129-152. PMID 19350832.
- Kvavilashvili, Lia; Mirani, Jennifer; Schlagman, Simone; Kornbrot, Diana E. (November–December 2003). "Comparing flashbulb memories of September 11 and the death of Princess Diana: Effects of time delays and nationality". Applied Cognitive Psychology 17 (9): 1017–1031. doi:10.1002/acp.983.
- Pillemer, David B. (February 1984). "Flashbulb memories of the assassination attempt on President Reagan". Cognition 16 (1): 63–80. doi:10.1016/0010-0277(84)90036-2. PMID 6540649.
- Er, Nurhan (July 2003). "A new flashbulb memory model applied to the Marmara earthquake" (PDF). Applied Cognitive Psychology 17 (5): 503–517. doi:10.1002/acp.870.
- Finkenauer, C.; Luminet, O.; Gisle, L.; El-Ahmadi, A.; Van Der Linden, M.; Philippot, P. (May 1998). "Flashbulb memories and the underlying mechanisms of their formation: Toward an emotional-integrative model" (PDF). Memory & Cognition 26 (3): 516–531. doi:10.3758/bf03201160. PMID 9610122.
- Tinti, Carla; Schmidt, Susanna; Sotgiu, Igor; Testa, Silvia; Curci, Antonietta (February 2009). "The role of importance/consequentiality appraisal in flashbulb memory formation: The case of the death of Pope John Paul II". Applied Cognitive Psychology 23 (2): 236–253. doi:10.1002/acp.1452.
- Brewer, W. (1988) "Memory for randomly sampled autobiographical events." In U. Neisser & E. Winograd (Eds.), Remembering reconsidered: Ecological and traditional approaches to the study of memory, 21–90. New York: Cambridge University Press
- Edery-Halpern, G.; Nachson, I. (2004). "Distinctiveness in flashbulb memory: Comparative analysis of five terrorist attacks". Memory 12 (2): 147–157. doi:10.1080/09658210244000432. PMID 15250180.
- Bohannon III, John Neil; Gratz, Sami; Cross, Victoria Symons (December 2007). "The effects of affect and input source on flashbulb memories". Applied Cognitive Psychology 21 (8): 1023–1036. doi:10.1002/acp.1372.
- Cohen, G; Conway, M.; Maylor, E. (1993). "Flashbulb memories in older adults". Psychology and Aging 9 (3): 454–63. doi:10.1037/0882-7974.9.3.454. PMID 7999330.
- Kvavilashvili, L; Mirani, J.; Schlagman, S.; Erskine, J.; Kornbrot, D. (2010). "Effects of age on phenomenology and consistency of flashbulb memories of September 11 and a staged control event". Psychology and Aging 25 (2): 391–404. doi:10.1037/a0017532. PMID 20545423.
- Conway, A.; Skitka, L.; Hemmerich, J.; Kershaw, T. (2009). "Flashbulb memory for 11 September 2001". Applied Cognitive Psychology 23 (5): 605–623. doi:10.1002/acp.1497.
- Denver, J. Y.; Lane, S. M.; Cherry, K. E. (2010). "Recent versus remote: Flashbulb memory for 9/11 and self-selected events from the reminiscence bump". The International Journal Of Aging & Human Development 70 (4): 275–297. doi:10.2190/AG.70.4.a. PMID 20649160.
- Kulkofsky, S; Wang, Q.; Conway, M.; Hou, Y.; Aydin, C.; Johnson, K.; Williams, H. (2011). "Cultural variation in the correlates of flashbulb memories: An investigation in five countries". Memory 19 (3): 233–240. doi:10.1080/09658211.2010.551132. PMID 21500085.
- Morse, Claire K.; Woodward, Elizabeth M.; Zweigenhaft, R. L. (August 1993). "Gender Differences in Flashbulb Memories Elicited by the Clarence Thomas Hearings". The Journal of Social Psychology 133 (4): 453–458. doi:10.1080/00224545.1993.9712169. PMID 8231123.
- Conway, Andrew R. A.; Skitka, Linda J.; Hemmerich, Joshua A.; Kershaw, Trina C. (July 2009). "Flashbulb memory for 11 September 2001" (PDF). Applied Cognitive Psychology 23 (5): 605–623. doi:10.1002/acp.1497.
- Kensinger, Elizabeth A. (August 2007). "Negative Emotion Enhances Memory Accuracy: Behavioral and Neuroimaging Evidence" (PDF). Current Directions in Psychological Science 16 (4): 213–218. doi:10.1111/j.1467-8721.2007.00506.x.
- Tranel, Daniel; Bechara, Antoine (June 2009). "Sex-related functional asymmetry of the amygdala: Preliminary evidence using a case-matched lesion approach" (PDF). Neurocase 15 (3): 217–234. doi:10.1080/13554790902775492. PMC 2829120. PMID 19308794.
- Van Stegeren, Anda H. (January 2009). "Imaging stress effects on memory: A review of neuroimaging studies". Canadian Journal Of Psychiatry 54 (1): 16–27. PMID 19175976.
- MacKiewicz, Kristen L.; Sarinopoulos, Issidoros; Cleven, Krystal L.; Nitschke, Jack B. (September 2006). "The effect of anticipation and the specificity of sex differences for amygdala and hippocampus function in emotional memory" (PDF). Proceedings of the National Academy of Sciences 103 (38): 14200–14205. doi:10.1073/pnas.0601648103. PMC 1599934. PMID 16963565.
- Herlitz, Agneta; Rehnman, Jenny (February 2008). "Sex Differences in Episodic Memory" (PDF). Current Directions in Psychological Science 17 (1): 52–56. doi:10.1111/j.1467-8721.2008.00547.x.
- Aizpurua, A.; Koutstaal, W. (June 2010). "Autobiographical Memory and Flexible Remembering: Gender Differences" (PDF). World Academy Of Science, Engineering & Technology 66: 1631–1637.
- Bloise, Susan M.; Johnson, Marcia K. (February 2007). "Memory for emotional and neutral information: Gender and individual differences in emotional sensitivity" (PDF). Memory 15 (2): 192–204. doi:10.1080/09658210701204456. PMID 17534112.
- Pickel, Kerri L. (August 2009). "The weapon focus effect on memory for female versus male perpetrators". Memory 17 (6): 664–678. doi:10.1080/09658210903029412. PMID 19536689.
- Schaefer, E.G.; Halldorson, M.; Dizon-Reynante, C. (2011). "TV or not TV? Does the immediacy of viewing images of a momentous news event affect the quality and stability of flashbulb memories". Memory 19 (3): 251–266. doi:10.1080/09658211.2011.558512. PMID 21500086.
- Pillemer, D. B. (1990). "Clarifying flashbulb memory concept: Comment on McCloskey, Wible, and Cohen (1988)". Journal of Experimental Psychology: General 119 (1): 92–96. doi:10.1037/0096-3445.119.1.92.
- Lanciano, T.; Curci, A.; Semin, G. R. (2010). "The emotional and reconstructive determinants of emotional memories: An experimental approach to flashbulb memory investigation". Memory 18 (5): 473–485. doi:10.1080/09658211003762076.
- Winograd, Eugene; Killinger, William A. (September 1983). "Relating age at encoding in early childhood to adult recall: Development of flashbulb memories". Journal of Experimental Psychology: General 112 (3): 413–432. doi:10.1037/0096-3445.112.3.413.
- Wright, D. B. (1993). "Recall of the Hillsborough disaster over time: Systematic biases of 'flashbulb' memories". Applied Cognitive Psychology 7 (2): 129–138. doi:10.1002/acp.2350070205.
- Larsen, S. F. (1992). "Affect and Accuracy in Recall: Studies of Flashbulb Memories", eds Winograd, E., Neisser, U. 43–48, Cambridge University Press, New York
- Conway, M. A.; Anderson, S. J.; Larsen, S. F.; Donnelly, C. M.; McDaniel, M. A.; McClelland, A. G.; Rawles, R. E.; Logie, R. H. (May 1994). "The formation of flashbulb memories". Memory & Cognition 22 (3): 326–343. doi:10.3758/BF03200860. PMID 8007835.
- Curci, A; Luminet, O. (2009). "Flashbulb memories for expected events: A test of the emotional-integrative model". Applied Cognitive Psychology 23: 98–114. doi:10.1002/acp.1444.
- Brewin, C.R. (April 2007). "Autobiographical memory for trauma: Update on four controversies". Memory 15 (3): 227–248. doi:10.1080/09658210701256423. PMID 17454661.
- Dolcos, Florin; Labar, Kevin S.; Cabeza, Roberto (2005). "Remembering one year later: Role of the amygdala and the medial temporal lobe memory system in retrieving emotional memories" (PDF). Proceedings of the National Academy of Sciences 102 (7): 2626–2631. doi:10.1073/pnas.0409848102. PMC 548968. PMID 15703295.
- Dolcos, F.; Labar, K. S.; Cabeza, R. (June 2004). "Interaction between the amygdala and the medial temporal lobe memory system predicts better memory for emotional events" (PDF). Neuron 42 (5): 855–863. doi:10.1016/S0896-6273(04)00289-2. PMID 15182723.
- Dolan, R. J.; Lane, R.; Chua, P.; Fletcher, P. (March 2000). "Dissociable temporal lobe activations during emotional episodic memory retrieval" (PDF). NeuroImage 11 (3): 203–209. doi:10.1006/nimg.2000.0538. PMID 10694462.
- Smith, A. P.; Henson, R. N.; Rugg, M. D.; Dolan, R. J. (September–October 2005). "Modulation of retrieval processing reflects accuracy of emotional source memory" (PDF). Learning & Memory 12 (5): 472–479. doi:10.1101/lm.84305. PMC 1240059. PMID 16204201.
- Ochsner, K. N. (June 2000). "Are affective events richly recollected or simply familiar? The experience and process of recognizing feelings past" (PDF). Journal of Experimental Psychology: General 129 (2): 242–261. doi:10.1037/0096-3445.129.2.242. PMID 10868336.
- McGaugh, J. L. (July 2004). "The amygdala modulates the consolidation of memories of emotionally arousing experiences". Annual Review of Neuroscience 27 (1): 1–28. doi:10.1146/annurev.neuro.27.070203.144157. PMID 15217324.
- Schmolck, H.; Buffalo, E. A.; Squire, L. R. (January 2000). "Memory Distortions Develop over Time: Recollections of the O.J. Simpson Trial Verdict After 15 and 32 Months" (PDF). Psychological Science 11 (1): 39–45. doi:10.1111/1467-9280.00212. PMID 11228841. |
The student PTA wants the whole school to get involved in anti-bullying efforts, because they want everybody to stop bullying and to refuse to be bystanders.
A bystander is a person who watches bullying happen but does not help, for example by not reporting it or not telling the bully to stop. The student PTA wants us to stand up for others and for ourselves. We want Cooper Upper Elementary to be bully free. |
Nobel Laureate Adam Riess at his #LINO19 lecture. Photo/Credit: Patrick Kunkel/Lindau Nobel Laureate Meetings
Nobel Laureate Adam Riess had a busy third day at the 69th Lindau Nobel Laureate Meeting, presenting a lecture, debating in a panel discussion and meeting for an Open Exchange with young scientists. He was in demand not only because he is visiting the Lindau Meetings for the first time, but also because his work tackles cosmological questions we have all asked at some point in our lives. When was the universe created? How will it end? And what is it made of?
Riess’ Nobel-awarded research, shared with Saul Perlmutter and Brian Schmidt, provided strong evidence for an answer to the second question. To get there, the researchers focused on measurements of one of the most important parameters in cosmology – the Hubble constant. The Hubble constant is needed to estimate the size and age of the universe: it gives the rate at which the universe, born in the Big Bang, is expanding today.
Scientists already knew the Hubble constant was not constant in time (it is, however, the same everywhere in space and therefore constant throughout space). In the distant past, the universe’s expansion rate was much larger, and it shrank as the universe expanded.
It was therefore a complete surprise when Riess and Schmidt, and Perlmutter independently, revealed that the universe’s expansion is actually accelerating and therefore that its ultimate fate is to keep on expanding forever. “This was tremendously exciting, shocking, disturbing, revolutionary… any adjective you can throw at it,” he recalled. “But it doesn’t really answer the big question: why is the universe’s expansion accelerating?”
Riess’ current work may shed light on this question, and again the studies revolve around the Hubble constant. When Edwin Hubble introduced his eponymous constant 90 years ago, his calculations yielded what we now know was a gross overestimate of H0 = 500 km/s/Mpc. By the 1960s the accepted value had shrunk to around 100 km/s/Mpc, and it continued to shrink further through the following decades.
Recently, Riess and colleagues from the SH0ES (Supernovae H0 for the Equation of State) collaboration measured the Hubble constant to be 74 km/s/Mpc, with a precision of 1.9%.
To get to such a level of accuracy, Riess explained that his team had to build a strong ‘cosmic distance ladder’. This entailed measuring accurate distances to nearby galaxies and then moving to galaxies farther and farther away. For relatively nearby galaxies, the team used standard candles called Cepheid variables, common stars that pulsate at predictable rates that indicate their intrinsic brightness. For those farther away, they could also use much brighter but rarer cosmic yardsticks: exploding stars called Type Ia supernovae. By comparing these distances to measurements of an entire galaxy’s light, they could then calculate how fast the cosmos is expanding: the Hubble constant.
Yet despite its accuracy, this value surprisingly doesn’t match with the one derived from another key technique for calculating the universe’s expansion rate. ESA’s Planck satellite, which maps the cosmic microwave background – the relic afterglow from the Big Bang – was used by researchers including Nobel Laureate George Smoot to calculate how fast the universe was expanding when it was just 380,000 years old. They extrapolated this value forward to today and came to a Hubble constant value of 67 km/s/Mpc.
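The numbers above make the scale of this tension easy to check. The sketch below is not from the article: the constant values, helper names and unit conversions are my own, and 1/H0 (the "Hubble time") is only a rough expansion-age scale, since the actual age of the universe depends on the full ΛCDM expansion history. It simply converts a Hubble constant quoted in km/s/Mpc into years and applies Hubble's law v = H0 * d.

```python
# Rough sketch (illustrative only): compare the distance-ladder and CMB-derived
# Hubble constants by converting each into an approximate Hubble time 1/H0.
KM_PER_MPC = 3.0857e19       # kilometres in one megaparsec
SECONDS_PER_GYR = 3.156e16   # seconds in one billion years

def hubble_time_gyr(h0_km_s_mpc: float) -> float:
    """Return 1/H0 in billions of years for H0 given in km/s/Mpc."""
    h0_per_second = h0_km_s_mpc / KM_PER_MPC   # convert H0 to units of 1/s
    return 1.0 / h0_per_second / SECONDS_PER_GYR

def recession_velocity_km_s(h0_km_s_mpc: float, distance_mpc: float) -> float:
    """Hubble's law: v = H0 * d, with d in Mpc and v in km/s."""
    return h0_km_s_mpc * distance_mpc

if __name__ == "__main__":
    for h0 in (74.0, 67.0):   # SH0ES-style vs. Planck-style values
        print(f"H0 = {h0:.0f} km/s/Mpc  ->  1/H0 = {hubble_time_gyr(h0):.1f} Gyr")
    print(f"A galaxy 100 Mpc away recedes at about "
          f"{recession_velocity_km_s(74.0, 100.0):.0f} km/s")
```

Run as written, this gives roughly 13.2 billion years for H0 = 74 km/s/Mpc and 14.6 billion years for 67 km/s/Mpc, which is one intuitive way to see why a disagreement of a few km/s/Mpc matters for cosmology.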
This mismatch presents a big problem for the current model of the universe, known as ΛCDM. “If we have reached a point of confirmation of this discrepancy, then we have to imagine new physics in ΛCDM,” Riess said.
Riess only hinted at what this new physics could be during his lecture, making the subsequent panel discussion (ominously titled ‘The Dark Side of the Universe’) in which some of these possibilities were explored all the more eagerly anticipated.
The discussion focused exclusively on dark matter and dark energy, which together are thought to make up 95% of the universe. Dark energy is perhaps the more mysterious of the two. It is an unknown phenomenon expanding space everywhere and making the universe balloon at an ever-faster rate. In 1997, Schmidt and Riess invoked dark energy to help explain their results: “We knew the only easy way to make the universe accelerate was some form of what would eventually be called dark energy, commonly referred to as the cosmological constant,” Schmidt said.
The cosmological constant was famously first suggested by Albert Einstein as a way to make his equations describe a static universe. He quickly dropped his version of the cosmological constant when astronomers in the 1920s discovered that the universe was not static after all, but expanding.
But the discovery of the accelerating universe allowed a cosmological constant to find favour once again as a way to make ΛCDM fit with what astronomers were seeing. Most importantly, adding a cosmological constant essentially has an anti-gravity effect on the universe, allowing it to push itself apart when it should be slowing down.
What this anti-gravity effect is remains a complete mystery. At least, a mystery to most: “I really don’t like the term dark energy,” said Nobel Laureate David Gross, who was awarded the 2004 Nobel Prize in Physics together with Frank Wilczek and David Politzer (the latter working independently) for building a mathematical framework that came to be known as quantum chromodynamics. “It isn’t dark, it isn’t mysterious – it’s the only form of energy and pressure that looks the same to all observers.”
While dark energy is expanding space everywhere, dark matter – the second mystery – has an opposite binding effect on matter in the universe. It is an invisible substance forming a universal cosmic web. This cosmic web is thought to help form galaxies and prevent them from spinning apart.
Like Gross on dark energy, Riess was keen to downplay dark matter’s dark credentials: “One of my favourite observations is of the Bullet Cluster where you see two clusters of galaxies pass through each other in a collision,” he said. “Luminous matter is shocked and heated and left behind a little bit, and the dark matter separates and moves on, and you can actually see it.” But he added: “Our current descriptions of dark matter and dark energy are very phenomenological… that’s not a complete description of their physics.”
So, although these two dark constituents of the universe may not be as dark as we thought, there is still an awful lot to learn about them – a good reason for young scientists attending the 69th Lindau Nobel Laureate Meeting to illuminate the dark side of the universe.
Videos of lectures and discussions of #LINO19 can be watched in our mediatheque. |
William Ewart Gladstone
2 of 6 portraits on display in Room 25 at the National Portrait Gallery
William Ewart Gladstone
by Sir John Everett Millais, 1st Bt
oil on canvas, 1879
49 1/2 in. x 36 in. (1257 mm x 914 mm)
Transferred from Tate Gallery: London: UK, 1957
Sitter
- William Ewart Gladstone (1809-1898), Prime Minister and writer. Sitter associated with 315 portraits.
Artist
- Sir John Everett Millais, 1st Bt (1829-1896), Painter and President of the Royal Academy. Artist associated with 41 portraits, Sitter in 74 portraits.
This portrait
Gladstone sat for five one-hour sittings for this portrait, remarking in his diary of July 6, 1879 that Millais' 'ardour and energy about his picture inspire a strong sympathy'.
Linked publications
- Audio Guide
- Victorian Portraits Resource Pack, p. 35
- Cooper, John, A Guide to the National Portrait Gallery, 2009, p. 42
- Cooper, John, Visitor's Guide, 2000, p. 80
- Funnell, Peter, Victorian Portraits in the National Portrait Gallery Collection, 1996, p. 35
- Funnell, Peter (introduction); Marsh, Jan, A Guide to Victorian and Edwardian Portraits, 2011, p. 54
- Funnell, Peter; Warner, Malcolm, Millais: Portraits, 1999 (accompanying the exhibition at the National Portrait Gallery from 19 February to 6 June 1999), pp. 150 ; 166
- Livingstone, Natalie, author., The mistresses of Cliveden / Natalie Livingstone., 2015, p. 307
- Piper, David, The English Face, 1992, p. 203
- Saywell, David; Simon, Jacob, Complete Illustrated Catalogue, 2004, p. 248
- Various contributors, National Portrait Gallery: A Portrait of Britain, 2014, p. 160
Events of 1879
Current affairs: Women's education continues to grow, with the founding of women's colleges in Oxford. Somerville College took its name from the late Scottish scientific writer Mary Somerville. Lady Margaret Hall was founded by Elizabeth Wordsworth, great-niece of the poet, and named after Margaret Beaufort, a medieval noblewoman and mother of Henry VII.
Art and science: Edison invents the first practical electric light bulb.
The first prehistoric paintings, dating back 14,000 years, are discovered in the Altamira caves in Northern Spain when a young girl notices paintings of bison on the ceilings.
The French actress Sarah Bernhardt, already acclaimed for roles in plays such as Racine's Phèdre and Victor Hugo's Hernani, celebrates a successful season at London's Gaiety Theatre.
International: Anglo-Zulu War fought between British forces and the Zulus, after disputes between the Boers and Zulu leader Cetshwayo over the Utrecht border attracted British intervention. The British victory marked the end of the independent Zulu nation, although the Zulus' initial victory at Isandhlwana was a major surprise. The Battle of Rorke's Drift was dramatised in the film Zulu, starring Michael Caine, in 1964.
See this portrait
On display in Room 25 at the National Portrait Gallery
Exhibitions and displays
- At the Despatch Box: Gladstone in Action (until 6 August)
- The Drawing Room: Caricature (31 July, 13:00) |
Summer is upon us and it's more important than ever to keep hydrated.
Whether you're working out, enjoying summer sports, doing yard work or sitting on the beach, the risk of dehydration is increased in the heat of summer.
Be sure to drink lots of fluids before, during and after exercise.
If you're exercising for more than 2 hours, you may also be at risk for electrolyte depletion (through sweat) or hyponatremia (severely reduced blood sodium concentration from over-hydration), so be sure to balance your hydration with sports drinks containing sodium and other electrolytes.
ACSM, the American College of Sports Medicine, recommends the following fluid intake during exercise (a small worked example follows the list). Remember, however, that in extreme heat you lose more fluids and electrolytes than at moderate temperatures:
2 Hours Prior to Exercise: 17-20 oz (500-600 mL)
During Exercise: 7-10 oz (200-300 mL) every 10-20 minutes
After Exercise: 16-24 oz for every pound of body weight lost (450-657 mL per 0.5 kg)
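Here is a minimal sketch of the post-exercise guideline above; it is mine rather than ACSM's, the function name and ounce-to-millilitre factor are assumptions, and only the 16-24 oz per pound figure comes from the list.

```python
# Minimal sketch (illustrative only): turn pounds of body weight lost during
# exercise into a post-exercise rehydration target, per the guideline above.
OZ_TO_ML = 29.57  # millilitres per US fluid ounce

def post_exercise_fluid_oz(weight_lost_lb: float) -> tuple[float, float]:
    """Return (low, high) fluid ounces to drink, at 16-24 oz per pound lost."""
    return 16.0 * weight_lost_lb, 24.0 * weight_lost_lb

if __name__ == "__main__":
    low, high = post_exercise_fluid_oz(1.5)   # e.g. 1.5 lb lost on a hot run
    print(f"Drink roughly {low:.0f}-{high:.0f} oz "
          f"({low * OZ_TO_ML:.0f}-{high * OZ_TO_ML:.0f} mL) after exercising.")
```

For 1.5 lb lost this prints roughly 24-36 oz (about 0.7-1.1 litres), which you can sanity-check against the mL range quoted above.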
For a fun, safe and healthy summer, be careful and monitor your fluid intake in the heat. And on those sweltering dog days, crank up your AC, stay inside and do a CoreFitnessByJana workout! |
On January 21, 2017, approximately 470,000 women and men alike marched in Washington D.C. to advocate for women’s rights, says the New York Times. In support of the march in Washington, cities around the nation hosted sister marches, including one in the city of Orlando. Throughout the country, an estimated 3.3 million marchers came out to spread love and sisterhood, according to Vox’s Sarah Frostenson. Despite the many people marching to peacefully protest the new administration of Donald Trump, it was clear that the majority of the focus was on acceptance and unity.
The Orlando march was no exception. Lake-Sumter State College professor Toni Upchurch stressed this matter and shared her thoughts on Facebook after attending the march in Orlando: “We met so many people willing to stand up for the equality of all people.” This historic event caught the attention not only of major media but also of Lake-Sumter State College students. One female student, 18, claims that the Women’s March was a way for the country to show that “there are issues that women are willing to fight to fix and that they feel very strongly about.” Dual Enrollment student Alara Nigro sees the Women’s Marches as “a glimmer of hope” that makes her feel proud to be a woman. It is clear that these beautiful demonstrations of the American spirit have inspired many across the nation, even those here on our own campus. |
This brilliant outline of Blake's thought and commentary on his poetry comes on the crest of the current interest in Blake, and carries us further towards an understanding of his work than any previous study. Here is a clear and complete solution to the riddles of the longer poems, the so-called Prophecies, and a demonstration of Blake's insight that will amaze the modern reader. The first section of the book shows how Blake arrived at a theory of knowledge that was also, for him, a theory of religion, of human life and of art, and how this rigorously defined system of ideas found expression in the complicated but consistent symbolism of his poetry. The second and third parts, after indicating the relation of Blake to English literature and the intellectual atmosphere of his own time, explain the meaning of Blake's poems and the significance of their characters. |
Visions of the Abstract in Art and Mathematics
Exploring common themes in modern art, mathematics, and science, including the concept of space, the notion of randomness, and the shape of the cosmos.
This is a book about art—and a book about mathematics and physics. In Lumen Naturae (the title refers to a purely immanent, non-supernatural form of enlightenment), mathematical physicist Matilde Marcolli explores common themes in modern art and modern science—the concept of space, the notion of randomness, the shape of the cosmos, and other puzzles of the universe—while mapping convergences with the work of such artists as Paul Cezanne, Mark Rothko, Sol LeWitt, and Lee Krasner. Her account, focusing on questions she has investigated in her own scientific work, is illustrated by more than two hundred color images of artworks by modern and contemporary artists.
Thus Marcolli finds in still life paintings broad and deep philosophical reflections on space and time, and connects notions of space in mathematics to works by Paul Klee, Salvador Dalí, and others. She considers the relation of entropy and art and how notions of entropy have been expressed by such artists as Hans Arp and Fernand Léger; and traces the evolution of randomness as a mode of artistic expression. She analyzes the relation between graphical illustration and scientific text, and offers her own watercolor-decorated mathematical notebooks. Throughout, she balances discussions of science with explorations of art, using one to inform the other. (She employs some formal notation, which can easily be skipped by general readers.) Marcolli is not simply explaining art to scientists and science to artists; she charts unexpected interdependencies that illuminate the universe.
Hardcover | $44.95 | ISBN: 9780262043908 | 392 pp. | 6 in x 9 in | 237 color illus.
Bridging seemingly dissimilar areas is remarkably difficult, but the rewards are many: expanding our thinking and opening new vistas. Marcolli guides us through complicated intersections for a journey that I urge you to take part in.
Julio Mario Ottino
Northwestern University; winner of the National Academy of Engineering Gordon Prize for Innovation in Engineering and Technology Education
In this unique book, Matilde Marcolli describes what modern art looks like through the lens of mathematical physics. Marcolli illuminates the concepts of chance, entropy, spacetime, and cosmology that connect art, science, and mathematics. Lumen Naturae is an invaluable interdisciplinary resource.
lecturer on the history of art, science, and mathematics at the School of Visual Arts in New York, and author of Mathematics and Art: A Cultural History
A vast, deeply personal, survey of artists' work expressing ideas from physics and mathematics that will help scientists understand some of the impulses behind modern abstract art—and simultaneously an intriguing introduction to modern physics and math for artists interested in science.
author of Pythagoras Trousers, a cultural history of physics, and The Pearly Gates of Cyberspace, a history of ideas about space
Marcolli rethinks understanding of space on all scales and types; she integrates the arts as a way of knowing on equal standing with contemporary sciences. But she does not avoid the conundrum that after millennia of human sciences, we know that 95% of the universe is unknown. This book lays out new paths.
Roger F. Malina
Professor of Physics and Art and Technology, University of Texas at Dallas, and Executive Editor of the Leonardo Publications |
The Battle of Yavin is the epic, fictional space battle between the Rebel Alliance and the Galactic Empire from the end of Star Wars Episode IV: A New Hope. This Rebel victory cemented their position as a credible threat to the Empire, drawing much-needed support for their cause.
The Empire, ideally, wanted to crush the Rebellion before it could grow powerful enough to become a significant problem for them. But they needed to find it first. To this end they planted a tracking device on the Millennium Falcon, which they knew would be headed for the Rebels' hidden base to deliver the stolen schematics for the Empire's ultimate weapon, the Death Star. They hoped to use the Death Star to destroy the entire moon the Rebels were using, wiping them all out in one massive decapitation strike.
For its part, the Rebellion was able to quickly analyze the schematics and identify a weakness in the Death Star's defenses. A two-meter exhaust port for the Death Star's power reactor led directly to the reactor's core, and if they could drop a proton torpedo into the shaft, it would set off a chain reaction in the reactor which could destroy the entire battle station. The Empire, although aware of this weakness, did not believe that this was realistically possible to do, and proceeded with the attack.
- Galactic Empire (Commanded by Grand Moff Tarkin)
- One Death Star battle station
- Superlaser (capable of destroying a planet)
- An unknown, but large, number of fixed turbolaser emplacements
- An unknown, but large, number of TIE fighters
- One TIE Advanced fighter (Darth Vader's personal fighter)
- Rebel Alliance (Commanded by Jan Dodonna)
- Pretty much the entire Rebel army. However, as the capital ships would have been useless against the Death Star, the only deployed resources were:
- 10-20 X-Wing fighters (Red Squadron)
- 4-10 Y-Wing fighter-bombers (Gold Squadron)
- 1 Heavily modified Corellian freighter (Millennium Falcon)
- 1 Jedi in training (Luke Skywalker)
The Battle of Yavin
The Arrival of the Death Star
The battle began with the arrival of the Death Star from hyperspace to the Yavin system. Fortunately for the Rebellion, the moon serving as their base happened to be on the far side of the planet Yavin IV at the time, and the Death Star would require approximately 30 minutes to orbit the planet and bring the Rebel base into sight to deploy its superlaser. Meanwhile the Rebellion's leaders quickly explained the Death Star's weakness and scrambled its fighters. Rebellion morale was low, as the feat seemed impossible. Imperial morale was high due to their unwavering belief that the massive and powerful Death Star was impervious to attack. Some would classify their attitude as cocky.
The Millennium Falcon meanwhile left the scene, as captain Han Solo was not part of, nor did he owe any allegiance to, the Rebellion. He had been rewarded by the Rebellion for services rendered to this point, and his most pressing need at the time was to pay off his debts to the gangster Jabba the Hutt. He also did not believe the Rebellion had any serious chance of victory.
The original battle plan was to have the X-Wing fighters escort the slower Y-Wing fighter-bombers to the massive trench which encircled the Death Star, destroying turbolaser emplacements and engaging any TIE fighters to clear a path for the Y-Wings to attempt the bombing run on the exhaust port. The Empire was initially not concerned about this attempt and thought their turbolasers would be sufficient to repel the attack.
It quickly became apparent to the Imperial forces, however, that the turbolaser emplacements were not accurate or nimble enough to hit the fast-moving fighters, having been designed to defend against capital ships because fighters were not originally perceived to be a threat to the station. The first ten minutes of the battle saw only one X-Wing fighter shot down by the fixed weapons. Instead, they shut down the turbolasers to launch the TIE fighters and engage the Rebels in ship-to-ship dogfights.
While the X-Wings engaged the TIE fighters, three Y-Wings entered the trench to begin the bombing run. The Imperials noticed this and launched three more TIE fighters — Darth Vader's TIE Advanced with two wingmen — to intercept. The result was a spectacular failure for the Rebels. The already sluggish Y-Wings were trapped by the canyon-like walls of the Death Star Trench and all three were shot down within thirty seconds when the more agile TIE fighters entered the trench behind them. Darth Vader made all three kills.
At this point, Grand Moff Tarkin became firmly assured of the inevitability of Imperial victory. When approached by a subordinate who identified the Rebel strategy, he went on record saying "Evacuate? In our moment of triumph? I think you overestimate their chances."
With three minutes left until the Death Star had orbited the planet sufficiently to bring the moon base into sight, three X-Wing fighters, led by Wedge Antilles, attempted to make the bombing run. Although the X-Wings held a smaller payload and a less advanced targeting system than the Y-Wings, the Y-Wings had proven too vulnerable. Only one proton torpedo was strictly necessary, and it was possible the more nimble X-Wings could evade the TIE fighters long enough to bring the target into range.
Two X-Wings were shot down by Darth Vader before the target was in sight, but Antilles was able to launch two proton torpedoes at the exhaust port. Unfortunately, there was too much sensor interference from the Death Star's electronic countermeasures for the targeting computers to handle, and the torpedoes impacted on the surface of the massive station doing only superficial damage.
With time quickly running out and only three X-Wing fighters and one Y-Wing left (the rest having been destroyed by dogfighting TIE fighters), Luke Skywalker and Wedge Antilles attempted one final bombing run on the Death Star's exhaust port, leaving Biggs Darklighter outside the trench to spot for TIE fighters. Antilles was forced to abort the run after suffering minor damage from Darth Vader's fighter. The decision was made to let Antilles go since the threat to the station was the fighter in the trench. Darklighter replaced Antilles as Skywalker's wingman, but was quickly shot down. In all, Darth Vader had six confirmed kills, significantly more than anyone else in the battle.
Luke Skywalker had joined the rebellion just before the battle, having come along with the Millennium Falcon to deliver the Death Star Schematics. Although an accomplished pilot, this was his first military mission and he was inexperienced with the X-Wing fighter. Electronic countermeasures were jamming his targeting computer, he was trying to hit a very small target, and he had three TIE fighters on his 6 o'clock position with maneuverability restricted by the walls of the trench.
Skywalker had two advantages though. First, he had just begun his training as a Jedi knight, giving him the ability to target the exhaust port by channeling The Force rather than relying on the jammed targeting computer. Second, the Millennium Falcon just happened to show up at this critical moment.
Han Solo, overcome with guilt over abandoning his new friends, returned to the Yavin system just in time to engage the three TIE fighters on Skywalker's tail. Flying in undetected against the sun, he hit one, and with the close wing formation imposed by the confines of the trench, the other two TIE fighters collided. The two basic fighters were destroyed, but Darth Vader's TIE Advanced was structurally stronger and escaped with heavy damage.
Now free to worry only about the target in front of him, Skywalker shut off his targeting computer, causing not a small amount of confusion and worry in the command center, which was now in sight of the Death Star's superlaser. As the superlaser was commencing primary ignition, Skywalker fired two proton torpedoes and watched them enter the exhaust port. The remaining two X-Wings, one Y-Wing, and the Millennium Falcon fled the Death Star as the proton torpedoes made their way down the shaft to the reactor. Just before it was ready to fire, the Death Star exploded in a massive plasma cloud as a result of the proton torpedoes setting off a chain reaction in its reactor. The only Imperial survivor was Darth Vader, who escaped undetected in his damaged TIE Advanced. The Rebellion lost every fighter except two X-Wings and one Y-Wing; the Millennium Falcon also survived, and all personnel still stationed on the moon base were saved.
What went wrong?
Given the enormous disparity in resources between the Galactic Empire and the fledgling Rebel Alliance, the Battle of Yavin should have been an easy and decisive victory for the Empire. Several factors contributed to their defeat.
Imperial overconfidence.
Grand Moff Tarkin was contemptuous of the Rebel forces and did not believe they were a credible threat to the station, even though they had the complete schematics for the Death Star's design and had discovered its weakness. The exhaust port weakness was all but impossible to exploit even under normal circumstances, and the ray-shielding and electronic countermeasures should have ensured that it was completely impossible. As a result, the Death Star did not bring along any Star Destroyer escorts, and launched only a small percentage of TIE fighters from what must have been an enormous supply. Very likely, the Empire thought that they could destroy the Rebel Base without sustaining any casualties at all by using only the "invincible" Death Star.
An unexpected Jedi.
The Jedi knights were all but extinct at the time of the Battle of Yavin. Emperor Palpatine, Darth Vader, Yoda, and Obi-Wan Kenobi were the only individuals left in the galaxy known to be able to channel The Force and perform feats normally thought to be impossible. Since the first two were on the Imperial side, Yoda was in hiding, and Kenobi was killed by Darth Vader shortly before the battle, Luke Skywalker was an unknown resource for the Rebellion.
No preparations made to deal with reinforcements.
Even considering the Empire's overconfidence and the wild card represented by Luke Skywalker, the battle still would have been lost had it not been for the unbelievably timely return of the Millennium Falcon. Had Han Solo arrived any earlier, he would have simply joined the battle and had little impact on the outcome (a Corellian freighter being too large to enter the Death Star trench). Had he arrived even seconds later, Luke Skywalker would almost certainly have been shot down by Darth Vader before launching the fateful torpedoes. Even though such an incredibly lucky feat of timing could not possibly have been accounted for, it revealed a complete lack of foresight on the part of the Empire for dealing with possible reinforcements. This was likely because they believed all Rebel forces were at their base at the time.
Fortunate positioning on the far side of Yavin IV.
The Battle of Yavin would have been over before the Rebellion could even have scrambled their fighters had their moon base not been on the far side of the planet Yavin IV from the Death Star's hyperspace approach. Had the moon been on the near side, the Death Star would have targeted and destroyed it very quickly, just as fast as it could get a sensor lock and charge the superlaser. Instead, the Rebellion had thirty minutes to scramble fighters and make three bombing runs on the exhaust port while the Death Star orbited Yavin IV.
During Skywalker's trench run, Darth Vader sensed the young Jedi's power. Vader subsequently made it his personal mission to track down this Rebel and turn him to the Dark Side of The Force.
The Battle of Yavin was a critical victory for the Rebel Alliance. The Galactic Empire lost their most powerful weapon, and the time and resources involved in its construction, and the enormous number of personnel and upper-echelon leadership gathered there. The Rebel Alliance demonstrated that they were a credible threat to the Empire. The Rebel base, now discovered, was moved to Hoth. The Empire devoted more time and resources into tracking down and destroying the Rebellion, going so far as to employ independent bounty hunter help.
The Battle of Yavin convinced several other systems that the corrupt and ubiquitous Empire was not all-powerful and could in fact be defeated. The Rebellion gained a number of powerful allies as a direct result of this victory, eventually gathering a force large enough to topple the Imperial government about four years later at the Battle of Endor.
This analysis of the Battle of Yavin does not take into account any of the Star Wars Expanded Universe, only the movies. |
Sentience is the manifestation of a mind, linked to intelligence and cognition. Sentient beings display the properties of consciousness and intelligence. The idea of sentient beings extends to other animals but not to plants. Plants have a strategy of existence, remarkable forms, beauty and adaptability, but no minds. In a modern sense, sentience is a composite property of brains that varies approximately with the size, organization and complexity of the brain.
A hierarchy of sentience exists in the living world. The hierarchy of sentience follows an evolutionary path with the oldest creatures such as worms and shelled creatures of the sea at the lowest levels of sentience.
Mammals are more recent and more elaborate creatures with high levels of sentience. Humans are mammals and have a natural affinity for some but not all other mammals. Thus, it is reasonable to assume that dolphins, whales, apes and humans have similar sentience. While dogs are sentient beings with consciousness, feelings and behaviors that are mostly congruent with human feelings and behaviors, they have smaller, more specialized brains. It is reasonable to assume that dogs operate at a somewhat different level of sentience.
Chimpanzees and Bonobos are our closest primate relatives; their sentience is similar to ours and they deserve our protection and respect. When you examine human and chimpanzee cognitive abilities in detail, there are differences between their ability and ours. For example, chimpanzees can learn “language” from human teachers, using symbols, signs and keyboards, but they are unlikely to create a virtual reality from sounds and symbols as humans do. Their social communications are rich and complex, however, and provide us with insights into the development of human spoken language.
Members of the philosophy departments at universities point to books and journal papers as evidence of the thought process and will cite intelligent argument or “reason” as the indispensable tool of philosophy.
The word “thought” is not so easily defined. In common use, thinking is equivalent to self-talk, the process of talking to yourself when you are not busy doing other things. Self-talk has important limitations that need to be understood before human cognitive processes can be understood. Self-talk extends to conversation as group thinking, lectures, books and the constant chatter of media. |
Bedbug Mug Shot
Bedbugs. To the naked eye they look like little more than blood-filled dots, but look through a microscope and you see a monster. This image, a digitally colorized scanning electron micrograph, shows the underside of a bedbug's head and the first pair of its six jointed legs. Its mouthparts, used to pierce the skin and suck up blood, are shown in purple.
A view of a bedbug's ventral (or stomach) surface without added color. Its six legs and needlelike proboscis are visible.
In the Act
This photo from 2006 shows a bedbug in action, sucking blood from a human.
A Close Up
A closer view of the head of a bedbug, Cimex lectularius. Although they feed on blood, there is no evidence bedbugs spread disease. From a medical perspective, the biggest problem they can cause is an allergic reaction to their saliva, according to the U.S. Centers for Disease Control and Prevention.
Eye to Eye
A view of a bedbug's compound eye, in red. The single large eye is made up of many repeating units, known as "ommatidia." The compound eye is very sensitive to movement with each ommatidium turning off and on as objects pass across its field of view, according to the CDC.
C. lectularius, the common bedbug, hides in cracks and crevices in furniture, floors and walls and comes out at night to feed on its favorite meal, human blood. It grows up to 0.3 inches (7 millimeters) long and can live up to one year. The tiny hairlike structures shown on the back of this bedbug are not actually hairs but sensory structures called setae, according to the CDC.
A close-up view of the hairlike sensory structures known as setae. They are made of chitin, the same material that makes up the rest of the bedbug's tough outer skeleton.
This bedbug measures 0.2 inches (5 millimeters) long, less than a third the diameter of a dime. |
I for Ingredients! Quite important in cooking! We all use this term in our day-to-day language, but how many of us can say that we know what the exact meaning of "ingredient" is? So here we go: an ingredient, according to Wikipedia, is defined as a substance that forms part of a mixture. For example, in cooking, recipes specify which ingredients are used to prepare a specific dish.
Ingredients form the crux of any recipe, as the right amounts can take the dish to the next level, whereas incorrect proportions can mar its taste. So keep your I's dotted and your ingredients properly measured! |
Published August 23, 2012
Hamilton County Facts and Figures
- There are 124,444 households in Hamilton County: 50.20% are married-couple families; 13.50% are female-headed families. Hamilton County, TN Encyclopedia All Experts, 2000
- In 2006, the proportion of births to unmarried women was 38.5%. National Vital Statistics Reports, Vol. 56, No 7. December 5, 2007
- According to the 2010 US Census, 20 million U.S. children now live in single-parent homes.
- The majority of African American children nationwide – 54 percent – are being raised by single mothers. Only 12 percent of African American families below the poverty line have both parents present, compared with 41 percent of poor Hispanic families and 32 percent of poor white families.
- The state’s divorce rate did drop 3.6 percent to 27,823 recorded divorces in 2005. Tennessee Department of Health
- Number of divorces in Hamilton County in 2012 was 1,3856, a decrease of 38 percent since 1997. Tennessee Department of Health
- In 2003, there were 3.8 divorces for every 1000 people in the U.S. Missouri Department of Health and Senior Services. 2003.
- In 2006, 43.9% of the births in Hamilton County were to unwed mothers compared to 41.4% statewide. Kids Count Data Book
- 46.2% of babies born in Hamilton County in 2007 were to unmarried parents. Tennessee Department of Health
- 79% of Hamilton County residents surveyed agree that “the most significant family, or social problem facing America is the physical absence of the father from the home.” This is up from 69% in 1992. 1996 Gallup Poll of Fathering
- More than one in four (35%) Hamilton County adults have been divorced compared to 25% of adults nationwide. 2000 Barna Report
The Plight of Fatherlessness
- The United States is the world’s leader in fatherless families. U.S. Census Bureau
- In America, 24 million children live absent their biological father. National Fatherhood Initiative
- 63% of black children, 28% of white children, and 35% of Hispanic children are living in homes absent their biological father. National Fatherhood Initiative, 2001
- Over 1.6 million babies were born out of wedlock in 2012. U.S. Census Bureau
- 28% of America’s children live in mother-only families. U.S. Census Bureau 2010
- In 2002, 21.6 million adults identified themselves as divorced, representing 9.6% of the population, up from 4.3 million in 1970. U.S. Census Bureau
- In 2011, there were 877,000 divorces in the US compared to 920,000 in 2003. U.S. Census Bureau
Father's Time with Children
- Children ages 3 to 5 are read to by their fathers an average of 6 times a week. Source: A Child’s Day: 2006
- 36% of children younger than 6 had fifteen or more outings with their father in the last month. Source: A Child’s Day: 2006
- On average, a child in a two-parent family spends 1.2 hours each weekday and 3.3 hours on a weekend day directly interacting with his or her father. Overall, the average total time fathers in two-parent families are engaged with or accessible to their children is 2.5 hours on weekdays and 6.3 hours on weekend days. National Fatherhood Initiative
- Of children living with their mothers – whether as a result of non-marital birth or divorce – 35% never see their fathers, and 24% see their fathers less than once a month. Journal of Marriage and Family
- Children in father-absent homes are almost four times more likely to be poor. In 2011, 12 percent of children in married-couple families were living in poverty, compared to 44 percent of children in mother-only families. Source: U.S. Census Bureau, Children’s Living Arrangements and Characteristics: March 2011, Table C8. Washington D.C.: 2011.
- Only 15 percent of children with single biological fathers live below the poverty line. U.S. Census Bureau
- One-quarter of children living in single-mother homes in which the mother works are still poor. National Fatherhood Initiative
- Almost 75 percent of American children living in single-parent families will experience poverty before they turn 11 years old. Only 20 percent of children in two-parent families will do the same. The National Fatherhood Initiative
- Children who live apart from their fathers are more likely to be diagnosed with asthma and experience an asthma-related emergency even after taking into account demographic and socioeconomic conditions. Unmarried, cohabiting parents and unmarried parents living apart are 1.76 and 2.61 times, respectively, more likely to have their child diagnosed with asthma. Marital disruption after birth is associated with a 6-fold increase in the likelihood a children will require an emergency room visit and 5-fold increase of an asthma-related emergency. Source: Harknett, Kristin. Children’s Elevated Risk of Asthma in Unmarried Families: Underlying Structural and Behavioral Mechanisms. Working Paper #2005-01-FF. Princeton, NJ: Center for Research on Child Well-being, 2005: 19-27.
- Being raised by a single mother raises the risk of teen pregnancy, marrying with less than a high school degree, and forming a marriage where both partners have less than a high school degree.
- Source: Teachman, Jay D. “The Childhood Living Arrangements of Children and the Characteristics of Their Marriages.” Journal of Family Issues 25 (January 2004): 86-111.
- Children whose fathers are stable and involved are better off on almost every cognitive, social, and emotional measure developed by researchers. For example, high levels of father involvement are correlated with sociability, confidence, and high levels of self-control in children. Moreover, children with involved fathers are less likely to act out in school or engage in risky behaviors in adolescents. Source: Anthes, E. (2010, May/June). Family guy. Scientific American Mind.
- “….the absence of the father from the home affects significantly the behavior of adolescents and results in greater use of alcohol and marijuana.” Source: Beman, Deane Scott. “Risk Factors Leading to Adolescent Substance Abuse.”
- 85% of all children who show behavior disorders come from fatherless homes – 20 times the average. (Center for Disease Control) Fallen Fathers, 2008.
Children’s Sexual Development
- The absence of a biological father increases by 900 percent a daughter’s vulnerability to rape and sexual abuse by boyfriends of custodial mothers. Fatherlessness statistics. National Fatherhood Initiative
- The absence of the father for boys has been linked to greater occurrences of effeminacy, higher dependence, less successful adult heterosexual adjustment, greater aggressiveness or exaggerated masculine behavior. Rekers, George, University of South Carolina of Medicine
- Data from the National Health Interview Survey indicated that both male and female adolescents who come from non intact families are more likely to have had sexual intercourse. National Fatherhood Initiative
- Children raised in single-parent families and surrounded by children of single-parent families at school are at the greatest risk of delinquency. Source: Anderson, Amy L. “Individual and contextual influences on delinquency; the role of the single-person family.” Journal of Criminal Justice, 30 (November 2002): 575-587.
- 63% of youth suicides are from fatherless homes (US Dept. Of Health/Census) – 5 times the average.
- 90% of all homeless and runaway children are from fatherless homes – 32 times the average.Fallen Fathers, 2008
- 80% of rapists with anger problems come from fatherless homes – 14 times the average. Justice & Behavior, Vol 14, p. 403-26
- 85% of all youths in prison come from fatherless homes – 20 times the average. Fulton Co. Georgia, Texas Dept. of Correction- Fallen Fathers
- Young black men raised in single- parent families on welfare and living in public housing are twice as likely to engage in criminal activities compared to black men raised in two-parent families also on welfare and living in public housing. Hill, Anne, Underclass Behaviors in the United States: Measurements and Analysis of Determinants, 1993.
- In a study of preteens who committed murder, “the clearest finding pertain(ed) to family background”: a high percentage of preteen homicide offenders come from homes where the child was consistently at risk for witnessing or experiencing violence, usually at the hands of the primary male caretaker. National Fatherhood Initiative
- 70% of youths in State institutions are from fatherless homes. Department of Justice
- Fatherless children are twice as likely to drop out of school. U.S Department of Health and Human Services.
- Father involvement in schools is associated with the higher likelihood of a student getting mostly A's. This was true for fathers in biological parent families, for stepfathers, and for fathers heading single-parent families. Source: Nord, Christine Winquist, and Jerry West. Fathers’ and Mothers’ Involvement in Their Children’s Schools by Family Type and Resident Status. (NCES 2001-032). Washington, D.C.: U.S. Department of Education, National Center for Education Statistics, 2001.
- Students living in father-absent homes are twice as likely to repeat a grade in school; 10 percent of children living with both parents have ever repeated a grade, compared to 20 percent of children in stepfather families and 18 percent in mother-only families. Source: Nord, Christine Winquist, and Jerry West. Fathers’ and Mothers’ Involvement in Their Children’s Schools by Family Type and Resident Status. (NCES 2001-032). Washington, D.C.: U.S. Department of Education, National Center for Education Statistics, 2001.
- 71% of all high school dropouts come from fatherless homes. National Principals Associations: Report on the State of High Schools
- Kindergartners who live with single parents are overrepresented among those lagging in health, social and emotional, and cognitive outcomes. Thirty-three percent of children who were behind in all three areas were living with single parents, compared with only 22 percent of children who were not lagging behind in any area. National Fatherhood Initiative
- Delinquent behavior on school property by African American male high school students is taking place at a high rate. National Center for Education Statistics, 2000
- Father involvement has a direct effect on a child’s externalizing and internalizing behavior. Differences in the level of involvement have significant effects on the behavioral outcomes of the child, but overall is more beneficial when the father lives with the child. Carlson, Marcia J. Family Structure, Father Involvement and Adolescent Behavioral Outcomes. 2005.
- Children who had a secure attachment relationship with their father had mothers with higher self-esteem. They also had stronger attachments to their mothers. Caldera, Yvonne M. “Paternal Involvement and Infant-Father Attachment,” 2004.
- In a study of fathers’ interaction with their children in intact two-parent families, nearly 90% of the fathers surveyed said that being a father is the most fulfilling role a man can have. Yeung, W. Jean, “Children’s Time with Fathers in Intact Families.” Paper presented at the Annual Meeting of the American Sociological Association, Chicago, IL, August, 2000. |
Sharing the Vote
In the fight for abolition of slavery, the question of citizenship arose frequently, specifically, the question of suffrage, or the right to cast a vote in Federal, State, and local elections. Early on, people seeking the vote for those without land, of African or Indigenous heritage, and for women found common cause in the Abolitionist movement. As time went by, and emancipation of slaves and the enfranchisement of Black men seemed more possible, rifts appeared between those intent on securing woman suffrage and those focused on ending slavery.
Some remained focused on both. Rev. James E. Crawford (1811–88) was one of the island’s “best known and most highly respected citizens.” Born in Virginia in 1811, he was the son of Mary, an enslaved woman, and the planter who owned her. He escaped slavery by going to sea, and studied for the ministry in Rhode Island, where he met and married Ann, a free Black woman from South Carolina. Ann’s sister and niece, although born free, were enslaved in the South. To raise money to buy their freedom, Reverend Crawford traveled through the Northern states and Canada speaking about his life experiences. When he had the money, he traveled into the South in 1858, on the eve of the Civil War. Because he had light skin, he was able to pass as a white man, at risk of being returned to slavery himself if caught, and purchase his sister-in-law and her daughter out of slavery for $1,700. He brought them back to Nantucket.
For forty years he served as minister of the Pleasant Street Baptist Church on Nantucket, also working as a barber to support his family. He was a powerful voice on the subject of woman suffrage.
Reverend George Bradburn, born in Attleboro, Massachusetts, in 1806, was minister of the First Universalist Church on Nantucket from 1827 to 1834. He was later elected to represent Nantucket to the Massachusetts Legislature, where he became widely known as an activist, fighting for abolition and universal suffrage. He was chosen to be a delegate to the World Anti-Slavery Convention of 1840, in London, at which American women were refused participation, and he boycotted the conference along with the Motts, William Lloyd Garrison, and Henry Stanton. Later, despite increasing deafness, he was one of the speakers through the year of “100 conventions,” launched across the North by the Anti-Slavery Society. He was on an extended lecture tour with Frederick Douglass in Indiana when they were attacked and Douglass’ hand was broken.
Lucretia Coffin Mott (1793–1880), Elizabeth Cady Stanton (1815–1902), and Susan B. Anthony (1820–1906) were at the core of the woman suffrage movement, which resulted in the 19th Amendment, enacted in 1920. All three were active abolitionists, although as African American men were freed from slavery and given citizenship and the right to vote with the passage of the 13th, 14th, and 15th Amendments, they split with abolitionists and focused on securing their own rights.
Lucretia Coffin Mott, born on Nantucket in 1793, cousin to Phebe Coffin Hanaford and Benjamin Franklin, was active in the fight for abolition and universal suffrage from an early age. Moving off the island at age 11, she kept up correspondence
with many Nantucketers, including the sisters Eunice Starbuck Hadwen and Eliza Starbuck Barney.
“Being a native of the island of Nantucket, where women were thought something of, and had some connection with the business arrangements of life, as well as their homes, I grew up so thoroughly imbued with woman’s rights that it was the most important question of my life from a very early day.”
— Lucretia Coffin Mott
Elizabeth Cady Stanton, born in Johnstown, New York, to a family who kept a man named Peter Teabout enslaved in their household, would eventually become known as a force in the woman suffrage movement, working alongside Lucretia Coffin Mott and Susan B. Anthony. Having experienced the practice of slaveholding from an early age, it was only through her cousin Gerrit Smith, a wealthy abolitionist in Peterboro, New York, that she started her journey into abolitionist activism.
Mott and Stanton met as American delegates to the London World Anti-Slavery Convention in 1840 and became friends, with the elder Mott becoming a mentor to Stanton. The convention refused to seat American female delegates, and many in the American delegation, including William Lloyd Garrison and George Bradburn, withdrew in protest. This incident would lead Stanton, Mott, her sister Martha Coffin Wright, and Jane Hunt to organize the now-famous Seneca Falls Convention in 1848. In 1851, Stanton met Susan B. Anthony, with whom she would form a lifelong partnership in support of women’s rights.
Anthony, born to a Quaker family in Adams, Massachusetts, moved to Rochester, New York, and became closely involved with the woman suffrage movement. Like Stanton, she was deeply involved in the abolitionist movement, even to the point of becoming part of the Underground Railroad. Although she fell out with other leaders over their refusal to include women in their demands for equality for all, she continued to support anti-slavery movements and went on to help create the American Equal Rights Association to fight for both causes. Interestingly, her cousin was the noted whaling captain George Anthony of New Bedford.
Elizabeth Cady Stanton and Susan B. Anthony remained very close and worked together for woman suffrage all the way until their deaths in 1902 and 1906, respectively, with Stanton often writing the speeches that Anthony would present in public. “She forged the thunderbolts, and I fired them,” said Susan B. Anthony, paying tribute to Stanton at her death. Stanton felt the same: “She [Anthony] supplies the facts and statistics, I the philosophy and rhetoric, and, together, we have made arguments that have stood unshaken through the storms of long years—arguments that no one has answered. Our speeches may be considered the united product of our two brains.”
In 1878, they created the language of the 19th Amendment, which was offered to Congress every year from that time until it passed in 1919, forty-one years later.
In the late 1870s Anna Gardner returned to Massachusetts and became active in the Association for the Advancement of Women, and Sorosis, a nationwide organization educating and empowering women, in company with Nantucketers Phebe Coffin Hanaford, Maria Mitchell and others in the national fight for women’s right to vote. Back on Nantucket she helped found the Nantucket chapter of Sorosis, acting as secretary and president, giving lectures and organizing activities.
Phebe Ann Coffin Hanaford
Phebe Ann Coffin Hanaford (1829-1921) was a Nantucket-born Quaker, minister, author, poet, writer, abolitionist, temperance reformer, and champion of women’s rights. Her religious beliefs and personal convictions led her to Universalism. In 1868, Hanaford was the first woman in New England—and only the third in the United States—to be ordained a Universalist minister. Hanaford was an ardent abolitionist and women’s rights advocate, serving as the vice president of the Association for the Advancement of Women in 1874 and an active member of the American Equal Rights Association.
Hanaford and her husband separated shortly after her ordination. She lived with her partner Ellen Miles for the next 44 years, until Miles’s death in 1914. The closeness of their relationship was at least partly responsible for Hanaford’s dismissal from a congregation in New Jersey, which did not approve of “the minister’s wife,” as Miles was described in one newspaper account.
Hanaford, like some others on Nantucket, found common cause between the needs of enslaved African-Americans and those of women for enfranchisement and agency in business and political affairs, including the right to vote. Following enactment of the constitutional amendments ending slavery, she concentrated her efforts on improving women’s lives, preaching, writing and speaking. She was active in the explosive growth of women’s clubs in the white community (there had been Black women’s clubs since the early 1800s) following the founding of the Sorosis and the New England Women’s Club, in 1856. Women’s clubs grew out of church fellowship organizations, but rapidly came to fill the void left when women were denied access to higher education.
Of the pioneers of woman suffrage activism, she was the only one who lived to see enactment of the 19th Amendment in 1920.
Reverend Phebe Ann Hanaford was an active member of a variety of women’s social reform, literary, philosophical, and scientific organizations. This collection of pins belonged to Hanaford and includes examples from the Women Suffrage Association, the General Federation of Women’s Clubs, the New York Women’s Press Club, the American Association for the Advancement of Science, Phalo Club, Philit Scipoma, the Political Study Club, the New Century Study Circle, the New York Society of the Daughters of 1812, and the Woman’s Congress at the 1896 Cuban-American Fair.
When did women first vote on Nantucket?
Women voted for the first time in February 1880. The Massachusetts legislature passed a law allowing women to vote in April 1879, but it was limited to voting for school committee members. Instructions for prospective women voters were published on the front page of The Inquirer and Mirror in a letter from the Massachusetts Suffrage Association. Women had to be 21 or older, citizens of the Commonwealth for over a year, and residents of Nantucket for six months. There were a number of requirements: 1) women had to be able to read the state constitution and to write their names, 2) prove that they had paid taxes and registered with the town assessors with a list of their estate, and 3) be willing to pay the poll tax. Women were advised to check that their names had been properly placed on the rolls at least ten days before each election.
The women who voted that first year were members of Nantucket’s Suffrage Club. Most were elderly reformers. The first woman to cast her ballot was Lydia Gardner Macy, Anna Gardner’s sister. She was joined by Eliza Starbuck Barney, 78, almost 30 years after she had attended the first woman’s suffrage conference in Massachusetts. Also voting were Anna Gardner, Harriet Coffin Peirce, and Charlotte Austin Joy. Only 13 women voted in that first election. The Inquirer and Mirror noted that the presence of women had not disrupted the town meeting. The men were “respectful” as the women filed in and took their seats to vote.
On August 18, 1920, the 19th amendment granting women the ballot in national elections was ratified. Ten days later, The Inquirer and Mirror reported that 181 women on Nantucket had already registered to vote with many more expected in the coming weeks. One hundred years later, voter turnout rates for women, who constitute more than half the population in the U.S., have equaled or exceeded voter turnout rates for men nationwide. In every presidential election since 1980, the proportion of female adults who voted has exceeded the proportion of male adults who voted. |
Returns a flat list from a given nested structure.

If nest is not a sequence, tuple, or dict, then returns a single-element list: [nest].

In the case of dict instances, the sequence consists of the values, sorted by key to ensure deterministic behavior. This is true also for OrderedDict instances: their sequence order is ignored, the sorting order of keys is used instead. The same convention is followed in pack_sequence_as. This correctly repacks dicts and OrderedDicts after they have been flattened, and also allows flattening an OrderedDict and then repacking it back using a corresponding plain dict, or vice-versa. Dictionaries with non-sortable keys cannot be flattened.

Users must not modify any collections used in nest while this function is running.

Args:
nest: an arbitrarily nested structure or a scalar object. Note, numpy arrays are considered scalars.

Returns:
A Python list, the flattened version of the input.

Raises:
TypeError: The nest is or contains a dict with non-sortable keys.
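A minimal sketch of the flatten/repack behavior described above, assuming TensorFlow's tf.nest module (tf.nest.flatten and tf.nest.pack_sequence_as); the nested values are illustrative only.

```python
# Sketch of the flatten/repack behavior described above, assuming
# TensorFlow's tf.nest module; the nested values are illustrative only.
import tensorflow as tf

nested = {"b": [2, 3], "a": 1}

# Dict values are flattened in sorted-key order ("a" before "b").
flat = tf.nest.flatten(nested)
print(flat)  # [1, 2, 3]

# A scalar (numpy arrays are treated as scalars too) comes back as a
# single-element list.
print(tf.nest.flatten(42))  # [42]

# pack_sequence_as restores the original structure from the flat list.
print(tf.nest.pack_sequence_as(nested, flat))  # {'a': 1, 'b': [2, 3]}
```

Because dict flattening is keyed on sorted order rather than insertion order, the same flat list can be repacked into any dict that has the same keys.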
Double, double toil and trouble;
Fire burn and cauldron bubble.
William Shakespeare, Macbeth, Act IV Scene i
A hagged crone leans over a steaming pot, warts on her crooked nose, whispering incantations to turn people into frogs. This is most likely what many will think of when asked to describe a witch, and a certain William Shakespeare has a lot to answer for, after his description of the hags in his play Macbeth. However, perhaps he's not totally to blame, as witches have traditionally been misunderstood, persecuted, vilified and driven out of town by scared and angry people.
So where did the concept of the witch arise? The first witch may well have been the first woman! Eve? Jewish folklore tells that the first woman was in fact called Lilith. She was apparently Adam's first wife, but was unceremoniously cast out of paradise for being naughty. This idea of a woman being sin is quite possibly the beginnings of the concept of the witch - a woman who can only do evil. However, Lilith has also been worshipped as a goddess and seen by some feminists as a figurehead - so it's not all bad. Lilith may in fact be the Greek Goddess Hecate, just in another form. These powerful women were not called witches for a very long time, the word 'witch' only coming into common usage much later.
'Witch' itself most likely originates from the Old English wicce - meaning 'wise one who casts spells'. There is no record of whether this was the casting of good spells or bad spells, but it was more likely the former, as persons who practiced witchcraft were quite often midwives, herbalists and to some extent apothecaries. It was not until the late 15th Century that witchcraft was perceived as more of a threat to the 'common people' and organised religions of Western Europe.
The Classic Witch
I'll get you, my pretty. And your little dog too!!
Wicked Witch of the West, The Wizard of Oz
So, what is the classic witch? Well, she generally lives in a ramshackle hut on the edge of town, is pretty old and wrinkly, has a pet black cat, wears black clothes including a pointy hat and goes down the shops on her broomstick. She can poison crops with a stare, alter shape into various animals including black cats, ravens, black hares and even the odd black horse. She might have a magic book of spells and a wand, but most likely brews potions in her huge black cauldron, which is kept hot by a fire made from the bodies of her victims!
The modern idea of the witch can be seen in the Disney movie Snow White & the Seven Dwarfs, where the evil queen disguises herself as an old woman to lure Snow White into biting a poison apple. This portrayal left a mark on western society's concept of the witch, along with the Wicked Witch of the West in the film The Wizard of Oz, and it is these images that are most commonly referred to, especially by young trick-or-treaters at Halloween. The image is actually centuries old, and famous stories like the Brothers Grimm tales of 'Rapunzel' and 'Hansel and Gretel' all include a terrifying witch. Folklore from around the world mentions the witch as an evil being, quite probably the worst of all being Baba Yaga, an ugly old woman who ate children!
People were often so fearful of witches that many found solace, oddly enough, in magic charms or wards against witches! These superstitions included hagstones, horseshoes and sprigs of rowan. This fear of men and women who practised witchcraft, and the belief that they were all devil worshippers and heretics, led to mass persecution, and killing.
Thou shalt not suffer a witch to live.
King James Bible, Exodus 22:18
During the 15th and 16th Centuries, it was common practice for people to be hunted down if there was a suspicion that they might be a witch. Witches were thought to have made a pact with the Devil, giving them supernatural powers and all kinds of terrible abilities. To be in league with Satan, Lucifer or Beelzebub was to work against God - the act of heresy. A manuscript of 1486, the Malleus Maleficarum by Heinrich Kramer and James Sprenger, was the pocket guide of witchhunting at the time. The Inquisition, the secular authorities, and the Protestant reformers all persecuted witches. Two of the most famous witchhunts were that of the Englishman Matthew Hopkins (the 'Witchfinder General') and that in Salem, Massachusetts, USA (brought to life by Arthur Miller's play The Crucible). More contemporary witchhunts, while not hunting down actual 'witches' per se, include McCarthyism, which led to the unjust trials of many innocent people in the United States.
Finding a Witch
Villager - We have found a witch. May we burn her?
Crowd - Burn her! Burn! Burn her! Burn her!
Sir Bedevere - How do you know she is a witch?
Villager - She looks like one.
From the film, Monty Python and the Holy Grail
To discover a witch was a fairly simple affair. Anyone with a wart or mole on their face, who perhaps lived alone with only a pet for company and liked herb-gardening, could be accused of the crime of being a witch. Once you had your accused witch there were various ways of testing their guilt.
Early methods of testing to see if the accused was in fact a witch were simply forms of torture that any sane person would succumb to, confessing to being a witch. Some of the more familiar were:
- Dunking - The witch would be tied to a contraption known as a dunking stool, which would then be lowered into a fast flowing river, lake or large water container (usually blessed), to see if they would float. If able to float, then the accused was possessed with the 'Spirit of Satan' and ergo, a witch. Questions of whether clothing had filled with air and formed a convenient buoyancy aid were usually dismissed out of hand. If, however, the 'witch' drowned or died of hypothermia, they had obviously not been under Satan's protective watch, and were therefore innocent. Unfortunately for the person concerned, they were also dead, so it all ceased to be of relevance.
- Scales of Justice - An alternative trial was often to weigh the accused against a copy of the Holy Bible. If heavier than the book, the accused was being pulled down by the weight of the 'Spirit of Satan', and therefore a witch. If however, the good book proved the weightier, the person was not a witch and was free to go. Again questions of simple physics were discarded as irrelevant. Variations of this method were used, including weighing the witch against other holy relics, or indeed any object that was undoubtedly lighter than the accused.
- Pricking the Flesh - This involved finding a mark of Satan upon the accused, usually a mole, wart, scar, skin blemish, birthmark or even third nipple. Once found, an accuser would push a knife or needle into the irregularity. If the wound drew blood, the 'witch' was innocent of the charge. If, however, no blood flowed from the wound, allegiance with the Devil was assumed. This test worked fairly well for the innocent, until some accusers found that a false knife, sleight of hand or even knowing places on the body where a needle could be inserted without pain or blood loss when 'pricking the flesh' could produce a more desired effect. The discovery of a witch!
- Burning at the Stake - Self explanatory really. The accused was tied to a stake and set fire to. If the witch survived the smoke and flames (highly unlikely), they were said to be in league with the Devil and able to endure the flames of Hell and thus taken from the flames and hanged. If they burnt to death however, they were innocent of the crime of being a witch. Burning was also the preferred way of disposing of a discovered witch, as it meant that holy ground was not tainted with a witch corpse.
The Modern Witch
I'm a witch! I-I can make pencils float. And I can summon the four elements. Okay, two, but four soon. A-and I'm dating a musician!
Willow Rosenberg, Buffy the Vampire Slayer TV Series
In the late 20th Century and early 21st Century, witches have become less of a taboo in western cultures. Dr Gerald Gardner, the co-founder of modern Wicca, assisted in debunking the myths surrounding witches in the 1940s by supporting the supposedly 'occult' religion. Witches, no longer feared, began to enjoy a renewed respect, and just decades later the likes of Samantha, Sabrina - the teenage witch, Willow Rosenberg from Buffy and the Charmed Ones have made witchcraft increasingly popular. In literature the witch has also gone from being the evil hag who eats children, like Roald Dahl's horrible creations, to either the young adventurer such as Mildred Hubble - the Worst Witch and Hermione Granger, friend of Harry Potter, or the laughable 'Wyrd Sisters' in Terry Pratchett's Discworld series.
However, the witch that is waiting to scare you can still be found, in horror movies, comic books, dreams and the imagination. But with further tolerance of what was once deemed 'unholy', many young women (and men) have taken a strong interest in learning the ancient pagan arts or Wicca, and it is future generations who may turn to the past and become not only qualified midwives and nurses but also - wicce - witches. |
Sometimes, while doing your workout routine, you can acquire injuries. One of the most common injuries obtained from exercise is wrist pain. Once you experience any discomfort, you should treat wrist pain before it escalates into something serious.
Wrist pain can be a sign of many injuries. One of them is a stress fracture. This type of injury happens gradually, usually due to improper handling and execution of your workout routine. John Miller expounds more on stress fractures in this article:
What is a Stress Fracture?
One of the most common injuries in sport is a stress fracture. Stress fractures are tiny cracks in a bone. Stress fractures are caused by the repetitive application of force through bone that isn’t strong enough. Essentially, the bone is weaker than is required for the activity demands or exercise intensity.
The most common stress fractures occur in runners, but stress fracture can occur due to the demands of your sport e.g lumbar spine stress fractures in gymnasts and cricket bowlers.
Common running stress fractures include: foot (navicular, metatarsal), tibia (shin splints).
Stress fractures also can arise from normal use of a bone that’s been weakened by a condition such as osteoporosis.
Overcoming an injury like a stress fracture can be difficult because they normally occur in very active people, who hate to not exercise! Read more…
The wrist has multiple ranges of movement – flexion, extension, adduction, and abduction – making wrist mobility vital in our everyday activities. Without it, performing intense exercises is nearly impossible. This is why it is essential to know how to take proper care of your wrist.
In an article by Victor Prisk, MD, he shares important tips on how to protect your wrist from future trauma:
SAVE YOUR WRISTS FROM INJURY
Damage to the wrists can occur gradually, in a progressive injury called a stress fracture—the result of training incorrectly, too intensely, or too often.
An ache during activity that develops into swelling and persistent pain even at rest could indicate the development of a stress fracture. (I’ve even seen stress fractures occur in the wrists of overzealous burpee addicts, so be warned.) When in doubt, see a doc.
Alternatively, nerve-compression syndromes like carpal tunnel can also cause wrist pain. Carpal tunnel syndrome affects the nerves to your thumb, index, middle, and the inside half of your ring finger and is often signaled by numbness, tingling, or hand clumsiness.
Wrist splinting and limiting your repetitive movements can improve symptoms. Read more…
Wrist pain can be a sign or symptom of an underlying injury. If you experience wrist pain, you shouldn't disregard it; seek medical help at once. James Roland of Livestrong has compiled the most common injuries that have wrist pain as a telltale sign:
Wrist Pain From Weightlifting
When lifting heavy weights, painful wrists can weaken your grip and take your focus off the exercise. If you’ve developed pain in your wrists from weightlifting, it could be a sign of strained ligaments or tendons, or a fracture. Get a conclusive diagnosis to prevent further damage and get the right treatment and recovery plan in place. If the pain is mild and you see no swelling or redness or experience no sharp pain — which is a signal to stop immediately — try resting or changing your workout to reduce the stress on your wrists if possible. Read more…
The wrist is a complex joint full of bones, ligaments, connective tissue, muscles and nerves, making it possible for our hands to be used in lots of different ways. Pain in this part of your body is not uncommon, so you should treat wrist pain before it is too late. It might also be helpful to do some wrist mobility exercises to keep the joint relaxed and healthy.
This is a rather fascinating story about specialists in language development in children who studied a traditional population in the Bolivian Amazon, the Tsimane. What makes this group so interesting is that they, on average, spent less than one minute per hour talking to children under the age of four. This is up to ten times less than for children of the same age in industrialized countries. While the group of children that has been observed is rather small, the study does raise interesting questions. What does this mean for what we think we know about learning our mother tongue?
From the press release:
In all human cultures, it takes little effort for children to learn the language(s) spoken by those around them. Although this process has fascinated several generations of specialists, it remains poorly understood. Most theories are based on the study of a small number of cultures, mainly in industrialized countries like the United States or France, where schooling is widespread and family size rather small.
Specialists on this subject have studied a population of forager-horticulturists from the Bolivian Amazon, the Tsimane. Thanks to a collaboration with anthropologists with specialist knowledge of this ethnic group, they enjoyed access to a unique database. From 2002 to 2005, the members of the Tsimane project visited groups of people in their homes at different times of the day. During their observations, they noted what each person present was doing, and with whom. This study, conducted in six representative villages, included nearly a thousand Tsimane.
Based on these observations, language development specialists found that, for all speakers combined, the time spent talking to a child under the age of four was less than one minute per hour. This is four times less than estimates for older people present at the same time and place. And up to ten times less than for young children in Western countries, according to estimates from previous studies.
Although mothers are the ones who speak to their child most often, as in our culture, they do so much less frequently. After the age of three, the majority of words spoken to young children come from other children, usually their siblings (the Tsimane have five on average, whereas French and American children have on average one sibling).
These results thus reveal wide intercultural variation in the linguistic experiences of young children. In developed countries, however, the development of language in children is correlated with the words spoken directly to them by adults, and not with the other words the child has heard. Is this correlation universal? Tsimane children grow up in a rich social world: at any moment, they will be surrounded by eight people on average. Do the conversations they hear, which take up around ten minutes per hour, contribute to their learning? Research is currently continuing on the ground: by recording the words spoken to Tsimane children, and those that they utter, the researchers hope to answer these questions.
Or Chimane.
Within the Labex IAST (Institute for Advanced Study in Toulouse).
Only when children reach the 8-11 age group are they spoken to more or less as often as adults.
Comparison with the six other publications in the literature that focus on estimating the frequency of verbal interactions with young children in different cultures.
Abstract of the study:
This article provides an estimation of how frequently, and from whom, children aged 0–11 years (Ns between 9 and 24) receive one-on-one verbal input among Tsimane forager-horticulturalists of lowland Bolivia. Analyses of systematic daytime behavioral observations reveal < 1 min per daylight hour is spent talking to children younger than 4 years of age, which is 4 times less than estimates for others present at the same time and place. Adults provide a majority of the input at 0–3 years of age but not afterward. When integrated with previous work, these results reveal large cross-cultural variation in the linguistic experiences provided to young children. Consideration of more diverse human populations is necessary to build generalizable theories of language acquisition. |
Sambhar Salt Lake:-
Lake Sambhar, India’s largest salt lake, sits west of the Indian city of Jaipur (Rajasthan). About 90 square miles (230 square km) in area, it occupies a depression of the Aravalli Range. This vast body of saline water is on average just 0.6 m deep and never more than 3 m, even just after the monsoon. It stretches in length for 22.5 km, its width varying between 3 and 11 km. Several seasonal freshwater streams, two of the major ones being the rivers Mendha and Rupangarh, feed it. The vast lake is roughly elliptical in shape. On the eastern end, it is divided by a 5-km-long dam made of stone. East of the dam are salt evaporation ponds where salt has been farmed for a thousand years. The eastern section contains the reservoirs for salt extraction, canals and saltpans. Water from the vast shimmering western section is pumped to the other side via sluice gates when it reaches a degree of salinity considered optimal for salt extraction. The waters here are glacially still, edged with a glittering frost of salt. Flies abound, drawn by the blue-green algae in the water, and queue up to crawl into your mouth and ears. There is a sharp briny tang in the air that takes one straight back to coastal fish markets. An indigenously developed rail trolley system (the lines were laid by the British) takes one across the dam and to various far-flung points in the salt works.
More importantly, Sambhar has been designated as a Ramsar site (recognized wetland of international importance) because the wetland is a key wintering area for tens of thousands of flamingos and other birds that migrate from northern Asia. The lake is actually an extensive saline wetland, with water depths fluctuating from just a few centimeters (1 inch) during the dry season to about 3 meters (10 feet) after monsoon season. The specialized algae and bacteria growing in the lake provide striking water colors and support the lake ecology that, in turn, sustains the migrating waterfowl.
- Copyright: devendra bhardwaj (devendra) (186)
- Genre: Places
- Medium: Color
- Date Taken: 2006-01-01
- Categories: Nature
- Camera: D70, 28-300 Tamron XR
- Exposure: f/8, 1/250 seconds
- More Photo Info: view
- Photo Version: Original Version
- Theme(s): Salt Mining [view contributor(s)]
- Date Submitted: 2006-01-03 10:32 |
CICS Newsletter: January 2018
Happy New Year!
Every year, we think about what we can do to better our lives and ourselves as we start our new calendar. But how often do we think about our mental wellbeing? Here are five things you can do to better your mental health in 2018 from our friends at NAMI.
1. Stand Up to Stigma
Feeling ashamed and at fault for something that is out of your control is a weight that no one should have to carry.
Stigma can be incredibly challenging to overcome. It shouldn’t be this way, and you can help to change society’s way of thinking about mental health.
If someone is using language that you find offensive and improper, let them know. Inspire them to join our stigmafree movement, and commit to learn more about mental health. We all need to see the person, not the illness.
2. Take Care of Your Physical Health Too
We’ve all heard this time and time again and there are plenty of studies that prove how beneficial exercise, getting enough sleep and eating well improve overall wellbeing.
The challenging part is finding the motivation, time and effort. Start by creating a simple routine and sticking with it. For example, grocery shop and meal prep over the weekend or on your day off. Have set times during the week for working out or physical activity. Establishing this kind of structure is hard at first, but it’s easier once you get used to the routine.
3. Share Your Story
Opening up about your experiences is not only personally uplifting, but it also helps other people who can relate to you. Use one of NAMI’s platforms such as Ok2Talk, YANA or the mobile AIR app to share your story.
“The best way to encourage others, and to fight stigma, is to speak the truth about what we face every day,” said Anna, a member of our YANA Community. The great thing about these spaces is that you can remain anonymous if you prefer and feel safe sharing your experiences.
If you feel really motivated to share your experiences with others, you can also start your own blog. This will motivate you to consistently write and express yourself on a regular basis. Skutler, a member of our Ok2Talk Community, wrote, “I've always loved writing, but this is the first time I've shared my work with a larger audience, and I can't believe how many people have read and appreciated my personal journey.”
You can also become a presenter for NAMI’s “In Our Own Voice,” a presentation series that changes attitudes, assumptions and stereotypes by describing the reality of living with mental illness.
4. Make a Commitment to Stay Informed
Knowing what’s going on in the world of research can help you find out whether there are new ideas that might help improve your quality of life. For example, research shows that getting outside during the winter — even though it can be very cold — is important. Getting enough vitamin D is essential to your mood and overall wellbeing.
Here is a list of credible websites compiled by Karen Moeller, Pharm D, DCPP, and Brantley Underwood, Pharm D, MBA, that can help people find information online:
5. Do Something That You Love Every Day
Even if it’s just 30 minutes each day, read, color, go for a walk or talk with someone that you care about. These activities can bring you a sense of peace. It is so important to feel relaxed for at least part of every day. Our busy schedules frequently take over and stop us from making time for ourselves, but leaving time to do something that you love is essential. |
What Is Vasculitis?
Vasculitis is a general term that means inflammation of blood vessels. More than 20 unique diseases are classified as vasculitis. These diseases are uncommon and may affect any blood vessel in the body. Blood vessel inflammation is the pathological foundation for these diseases. Vasculitis can affect any person at any time and causes damage by reducing blood flow to the affected organ. Vasculitis can affect one or multiple organs.
What are the causes of vasculitis?
The cause for most vasculitis diseases is unknown; however, causes that are known include medications, infections or cancer. Vasculitis may be temporary, lasting only as long as a patient is exposed to a causative agent, or it may be chronic, requiring medicine to control the disease.
Although more than 20 different vasculitis syndromes exist, each disorder is rare when compared to the prevalence of more common diseases such as diabetes, high blood pressure and coronary artery disease. Racial and gender predilections do occur with some types of vasculitis.
What are the treatments for vasculitis?
Removal of an offending agent or treatment of an underlying systemic disorder is the initial approach to treatment of some types of vasculitis. Prednisone, with or without an additional immunosuppressant medication or biologic agent, is often the treatment of choice for most vasculitides. Due to the rarity of these diseases, the diagnosis and treatment is often delayed. Although current therapies have substantially reduced the mortality rate associated with some vasculitides, morbidity due to disease damage or medication toxicity is an increasing problem. For example, nearly one third of patients with granulomatosis with polyangiitis (GPA or Wegener’s granulomatosis) will develop permanent disability secondary to their disease over a period of five years. Furthermore, 1,500 people are hospitalized each year with GPA. The mortality rate for those admitted with GPA is approximately 11 percent. The University of Utah Vasculitis Clinic was created in 2007 to provide expert care for patients afflicted with these unique diseases.
Conditions We Treat
- Giant cell arteritis (temporal arteritis)
- Takayasu arteritis
- Isolated aortitis
- Single organ vasculitis
- Polyarteritis nodosa
- Central nervous system vasculitis
- Adult IgA vasculitis (Henoch-Schonlein purpura)
- ANCA-associated vasculitis
- Granulomatosis with polyangiitis (Wegener’s granulomatosis)
- Microscopic polyangiitis
- Eosinophilic granulomatosis with polyangiitis (Churg-Strauss Syndrome)
- Cryoglobulinemic vasculitis
- Hypocomplementemic urticarial vasculitis
- Drug-induced vasculitis
- Behcet syndrome
- Cogan syndrome
- And others |
The clock tower of the Sultan Abdul Samad Building, with the Kuala Lumpur Tower behind it.
The Sultan Abdul Samad Building is located in front of the Dataran Merdeka - literally the Independence Square. Designed by British architect A.C Norman, it was built in 1897 with a unique Moorish-style design. It is a building with a sentimental value, which witnesses the scene of the annual independence celebration.
And now, after 51 years, Malaysia has gone through rapid development, which can be observed in its tall buildings. One of them is the KL Tower, which was constructed in 1996.
Critiques | Translate
Alonso (247) 2009-01-10 20:32
Nice observation about both elements. Imagine if there wasn't that tower behind but just the Sultan Abdul Samad Building, it would be a great picture. But still, this is an excellent picture, and the tower behind gives us the whole story about how much modern buildings are growing. I agree with how you named this picture "old & new", because in this particular picture the building behind is a very important element of the composition: it tells us that in front of it there is something beautiful constructed many years ago, and in the back there is us catching up with these old beautiful structures. Good note!
- Copyright: azam zakaria (azam) (93)
- Genre: Places
- Medium: Black & White
- Date Taken: 2008-12-19
- Categories: Architecture
- Camera: Canon EOS 400D Digital
- Exposure: f/2.8, 1/2000 seconds
- More Photo Info: view
- Photo Version: Original Version
- Date Submitted: 2008-12-19 7:23 |
Acute Lower Respiratory Infection
Acute lower respiratory infections cause mortality and sickness in both adults and children. From the epidemiological viewpoint, the acute lower respiratory infections include bronchiolitis, pneumonia, influenza, and bronchitis.
Acute Lower Respiratory Infection Symptoms
Acute lower respiratory infections can cause severe symptoms in infants and babies, such as the following:
- Cough
- Headache
- Sore throat
- Muscle pain
- Blocked nose
Acute Lower Respiratory Infection Causes
Acute lower respiratory infections are caused by various organisms, such as viruses and bacteria.
- Bronchiolitis or acute bronchitis is caused by rhinovirus and influenza viruses
- Bronchiolitis in infants is caused by the ‘Respiratory Syncytial Virus (RSV)’
- Pneumonia in adults is caused by the ‘Streptococcus Pneumonia’ and in small children by ‘Respiratory Syncytial Virus (RSV)’
Pregnancy and Acute Lower Respiratory Infection
There is an association between exposure to moderate air pollution levels and the development of lower respiratory tract infection (LRTI) and wheezing in early childhood.
Diagnosis of Acute Lower Respiratory Tract Infection
You can find acute lower respiratory infection specialists at mfine who have the expertise to diagnose lower respiratory tract infections by taking a medical history, noting the onset of the symptoms, and carrying out a thorough physical examination. The diagnosis of lower respiratory tract infection can be confirmed by means of other tests such as pulse oximetry, chest x-ray, blood tests, laboratory tests etc.
Understanding Acute Lower Respiratory Infection (LRTI)
Acute lower respiratory infection includes bronchiolitis, bronchitis, pneumonia, and influenza. LRTI is caused by various viruses and bacteria.
Acute Lower Respiratory Infection Treatment
Acute lower respiratory infections can be treated with antibiotics. Bronchitis can be cured by drinking enough fluids and taking adequate rest. Antibiotics are not used in the treatment of bronchitis as they have no effect on viruses.
You can get in touch with eminent general physicians who can offer you the right treatment for influenza. The pediatrician would diagnose the flu based on the medical history or the symptoms of the person.
You can come across adept acute lower respiratory infection doctors near you at mfine.
Butterflies and moths go through four stages in their life cycle – egg, several larval or caterpillar phases (called instars), pupa and adult. Sphinx moth (Pachysphinx occidentalis) caterpillars are beginning to burrow underground, where they will pupate in shallow burrows before eventually metamorphosing into adults.
I previously posted pictures of an adult sphinx moth, also commonly called a hawk moth or a big poplar moth. The light green sphinx moth caterpillar is about six inches long and as much as an inch in diameter. It is big! White lateral stripes are visible on its back.
This specimen is in its last instar (larval stage) as evidenced by the short caudal (at or near the tail or posterior end) horn. When I saw this caterpillar it was burrowing into the ground near our house (Modoc County CA). I disturbed it long enough to quickly take pictures, then watched as it continued to create a shallow burrow in which to pupate.
Our yard is a perfect environment for sphinx moths. The caterpillars ravenously eat poplars, cottonwoods and willows, all of which we have in abundance.
Perhaps next year I will see the beautiful sphinx moth into which this caterpillar “morphs”. |
A broadly accessible introduction to robotics that spans the most basic concepts and the most novel applications; for students, teachers, and hobbyists.
The Robotics Primer offers a broadly accessible introduction to robotics for students at pre-university and university levels, robot hobbyists, and anyone interested in this burgeoning field. The text takes the reader from the most basic concepts (including perception and movement) to the most novel and sophisticated applications and topics (humanoids, shape-shifting robots, space robotics), with an emphasis on what it takes to create autonomous intelligent robot behavior. The core concepts of robotics are carried through from fundamental definitions to more complex explanations, all presented in an engaging, conversational style that will appeal to readers of different backgrounds. The Robotics Primer covers such topics as the definition of robotics, the history of robotics (“Where do Robots Come From?”), robot components, locomotion, manipulation, sensors, control, control architectures, representation, behavior (“Making Your Robot Behave”), navigation, group robotics, learning, and the future of robotics (and its ethical implications). To encourage further engagement, experimentation, and course and lesson design, The Robotics Primer is accompanied by a free robot programming exercise workbook that implements many of the ideas in the book on iRobot platforms.
The Robotics Primer is unique as a principled, pedagogical treatment of the topic that is accessible to a broad audience; the only prerequisites are curiosity and attention. It can be used effectively in an educational setting or more informally for self-instruction. The Robotics Primer is a springboard for readers of all backgrounds—including students taking robotics as an elective outside the major, graduate students preparing to specialize in robotics, and K-12 teachers who bring robotics into their classrooms. |
A rag joint refers to certain flexible joints (flexure bearings) found on automobiles and other machines. They are typically found on steering shafts that connect the steering wheel to the steering gear input shaft, usually at the steering gear end. They provide a small amount of flex for a steering shaft within a few degrees of the same plane as the steering gear input shaft. It also provides some damping of vibration coming from the steering system, providing some isolation for the steering wheel.
This type of joint has also been used on drive shafts. Farm tractors and lawn and garden equipment have often used them in this application, and even some higher-power applications, such as some 1960s race cars, featured them. In automobile and truck prop shaft designs, they have now mostly been replaced by constant-velocity joints or driveshafts with pairs of universal joints. Rear-wheel drive cars have commonly used a lengthwise propeller shaft with a rubber doughnut joint at the gearbox end (limited movement) and a universal joint at the rear axle (greater movement), or vice versa. This gives articulation where needed, but also stops some of the vibration being transmitted into the body.
The joint consists of a piece of doughnut shaped rubber with reinforcing cords vulcanized in it, similar to a tire. This disc is bolted or riveted to flanges mounted on the ends of the shafts to connect the steering wheel shaft to the steering gear. The ragged cords can be seen on the edge of this piece of rubber, hence the term "rag joint". The bolt holes themselves are often reinforced by steel tubes moulded into the doughnut.
The origins of this form of universal joint are from early vehicles that used a disk of thick leather as a similar flexible joint. These were used into the 1920s. As rubber technology improved (particularly for its resistance to spilled mineral oils), it was possible to replace the leather by something longer-lasting. "Rag joints" were used on some American cars into the mid-1930s.
An older vehicle with loose steering or "play in the steering wheel" is often found to have a worn out rag joint. One can then reach inside the cab and wiggle the steering wheel while watching the rag joint move without the input shaft moving; this condition may cause the vehicle to fail the vehicle inspection, and indicates that the worn part needs to be replaced.
4 Tips: Mind and Body Practices for Common Aging-Related Conditions
Many older adults are turning to complementary and integrative health approaches to promote health and well-being. Mind and body practices, in particular, including relaxation techniques and meditative exercise forms such as yoga, tai chi, and qi gong are being used by older Americans, both for fitness and relaxation, and because of perceived health benefits. A number of reviews of the scientific literature point to the potential benefit of mind and body approaches for symptom management, particularly for pain. Check out what the science says about mind and body practices for these 4 common aging-related conditions:
Osteoarthritis. Practicing tai chi—a traditional Chinese form of exercise—may be helpful for managing osteoarthritis of the knee. Guidelines issued by the American College of Rheumatology conditionally recommend tai chi, along with other non-drug approaches, for this condition.
Menopausal symptoms. Overall, there is scientific evidence suggesting that some mind and body approaches, such as yoga, tai chi, and meditation may provide some relief from common menopausal symptoms.
Sleep problems. Using relaxation techniques (e.g., progressive relaxation, guided imagery, biofeedback, self-hypnosis, and deep breathing exercises) before bedtime can be a helpful component of a successful sleep regimen.
Shingles. Tai chi may help older adults avoid getting shingles by increasing immunity to varicella-zoster virus and boosting the immune response to varicella vaccine in older people. While there have only been a few studies on the effects of tai chi on immunity to varicella, the results so far have been promising.
These mind and body practices are generally considered safe for healthy people when they’re performed appropriately. If you have any health problems, talk with both your health care provider and the complementary health practitioner/instructor before starting to use a mind and body practice. For information about natural products for common aging-related conditions, check out these tips. |
The words we use today may have different meanings tomorrow.
Did you know that the color pink used to be considered the most appropriate color for dressing baby boys, while light blue was the color worn by baby girls?
As social customs continually evolve, the same holds true for the words we use.
This is easiest to spot in the trendy use of slang terms. What is “far out” for one generation may be “totally rad” for another, and merely “cool” the next. Some people are laid-back, others are simply “chill.” More difficult to see are the instances when word meanings themselves change in dramatic ways. Who knew that “pink” used to be the word used when referring to the color yellow? We know that humans used to routinely be called “man,” but it is startling to learn that at one time all children were referred to as “girls.”
Writer and etymologist Paul Anthony Jones discovered an impressive number of these changes in his studies, and he has put them together in one place for our reading pleasure in The Accidental Dictionary, which examines 100 English words and traces the changes in use throughout history.
“The English language has such a checkered, sprawling history that I think it is likely more liable to change than most,” Jones said in an interview with The Huffington Post. He added:
If there’s one thing to take away from this book it’s that the language is always changing. It’s easy to think that once a word finds its way into the dictionary it’s set in stone, but that’s of course not the case. Not only are new words being coined and old words being lost every day, but existing words are being molded and mutated, and knocked into different shapes to better fit what we need them to mean. If this book only serves to prove that the English language is still very much active and alive, then it’s done its job. |
I know that IPv6 is the successor of IPv4. But I don't understand versioning policy. Why not IPv5? Did IPv1, IPv2, and IPv3 exist?
As noted elsewhere, IP version 5 was assigned to Internet Stream Protocol.
But IP versions actually start at 0, not 1. IPv0 was described in IEN 2. What might be called IPv1 was described in IEN 26. It called for a one-bit version field, which seems shortsighted today. IPv2 was described in IEN 28. These IP versions were experimental and never gained wide use.
What may also surprise you is that IP versions 7 through 9 have also already been defined. These were three other competing protocols, TP/IX, PIP and TUBA, respectively, which were invented around the same time as what became IPv6 and also intended to replace IPv4. If IPv6 ever needs to be replaced, we'll start at IPv10...
Version number 5 was already used for the experimental Internet Stream Protocol when the next-generation IP protocol was devised, so it became IPv6. |
In San Francisco, an experiment of great interest regarding the new abilities of AI
At IBM’s Watson West Center, located in the SoMa (South of Market) neighborhood in the heart of San Francisco, California, a man-versus-machine competition took place unlike any before it in the history of artificial intelligence (AI).
Before a public of about 50 journalists and Prof. Chris Reed of the University of Dundee, in Scotland, two expert human debaters faced off in real time with IBM’s “Project Debater” AI system.
The competition consisted of two debates. In each one, both parties had four minutes to state their own thesis, another four minutes to refute their adversary’s arguments, and at the end, two minutes for a final summary.
During the first debate, the AI system argued with the Israeli debate champion for 2016, Noa Avadia, about the statement, “We should subsidize space exploration.” In the second round, Project Debater defended the statement, “We should increase the use of telemedicine,” against another Israeli expert, Dan Zafrir.
The public was asked to vote at the end of the debates, and the results showed that the first round was won by a narrow margin by the human debater, while the AI system won the second debate. According to the spectators, Project Debater provided richer information than its human opponents, but was less skilled at presenting its thesis.
This is not the first time that computer industry colossus IBM has been the protagonist of this kind of man-vs-machine competition. In 1989, IBM’s computer Deep Thought defeated British chess player David Levy, becoming the first computer to play at the level of a human Grand Master. Also in 1989, IBM’s chess engine tried to beat then-world-champion Garry Kasparov, but it lost 2 to 0.
Deep Thought’s successor, Deep Blue, did eventually succeed, beating Kasparov in the first game of a six-game match in February 1996. Even though the reigning world champion went on to win the match 4 to 2, the dam was broken. The next year, after a major upgrade, Deep Blue actually beat the Russian champion—although not without controversy—3.5 to 2.5.
Looking for new challenges, IBM had its artificial intelligence system Watson (named after the company’s first president, Thomas J. Watson) play Jeopardy!, the most famous quiz show on American television. In February 2011, the supercomputer managed to beat the all-time champions of the game, Ken Jennings and Brad Rutter.
That same year, Noam Slonim, a scientist at IBM’s largest research center outside of the USA—the one in Haifa, Israel—proposed Debater, a project that is “scientifically interesting and challenging” and that “would have some business value. Something big, something that would make a difference.”
While Watson is a supercomputer capable of answering questions asked in natural language, Slonim’s idea was rather to develop an AI system able to carry on actual debates, even on complex topics.
The project was begun in 2012 under the direction of Ranit Aharonov. Unlike Deep Blue or Watson, “our goal is not to develop yet another system that is better than humans in doing something,” explains Aharonov. IBM’s goal is to create software able to face off with “a reasonably competent human, but not necessarily a world champion, and come across holding its own,” adds Arvind Krishna, IBM’s research director.
In order to be convincing, software like that of Project Debater has to be well informed about the various topics it will have to face. To achieve this goal, Aharonov’s team loaded the system’s memory with billions of data points taken from 300 million articles from journals and magazines, etc.
A judgment call
The debate in June was “the beginning of something that we can explore for many, many years,” said Noam Slonim, who first had the idea of Project Debater. Indeed, the AI system’s performance was described by professor Chris Reed, who was in the audience, as “really impressive.”
According to the professor of computer science at the University of Dundee, even though we are only taking the first steps along the road to understanding artificial intelligence, IBM’s computer was able to produce “a four-minute speech, on the fly, on a topic selected at random from a list of 40 on which it hadn’t already been trained to debate.”
As an orator, it still has a lot to learn. “The system has only the most rudimentary notion of argument structure and so often deviates from the main theme,” Reed observes. “It pays no heed to its audience, nor its opponent, and has no way of adapting its language or exploiting any of the hundreds of clever rhetorical techniques that help win over audiences.”
Support for the human decision-making process
For Reed, the true value of the technology won’t be seen in debating halls, but in applications or situations in which artificial intelligence systems can offer a contribution to the human decision-making process or to discussion; for example, in police incident rooms, or in the classroom.
The project’s developers themselves see their AI system, whose future technologies will be commercialized in IBM Cloud, as support for the human decision-making process.
According to IBM research director Arvind Krishna, “all sorts of organizations might find value in software that can synthesize information and summarize the pros and cons of an issue,” explains Harry McCracken on the website Fastcompany.com. It “might even serve as an antidote to the spread of misleading information online, analyzing text and detecting biases,” he adds hopefully.
But, artificial intelligence can also inspire fear. Last April, Google cofounder Sergey Brin warned about the risks linked to AI. In the annual letter to shareholders of the umbrella company Alphabet, Brin writes that the revolution in the field of AI and other technological developments have led to “new opportunities, but also new responsibilities.”
“There are very legitimate and pertinent issues being raised, across the globe, about the implications and impacts of these advances,” Brin writes, although he declares himself “optimistic” about the potential for focusing these technologies on the world’s greatest problems. “We are on a path that we must tread with deep responsibility, care, and humility.”
He’s not alone in these concerns; renowned British astrophysicist, cosmologist, and mathematician Stephen Hawking, who died this past March 14, warned about the risks tied to AI, which has the potential to be, in his words, “either the best, or the worst thing, ever to happen to humanity.” Indeed, on the occasion of the opening of the Leverhulme Centre for the Future of Intelligence (LCFI) in Cambridge in the autumn of 2016, he said that AI is “crucial to the future of our civilisation and our species.”
“Quo vadis, homo?”
In an article on Project Debater, published last July 15 in Avvenire, Italian writer Giuseppe O. Longo calls to mind the concept of “Promethean shame” formulated by German philosopher Günther Anders (pseudonym of Günter Stern) in the book Die Antiquiertheit des Menschen (“The Obsolescence of Humankind”); that is, a “sense of gloom and discomfort that man feels in the face of the devices he himself has designed and built, which surpass him in every way.”
“Moved by this ever-growing difference, we try to compete with the machines, and we emerge defeated and humiliated: who will still have the courage, or the desire, to play chess against a program like Deep Blue?” Longo asks. The computer scientist and teacher at the University of Trieste also recalls the warning given by Norbert Wiener—the “father” of cybernetics—regarding the “irreversible character” of certain innovations.
For Longo, it boils down to an anthropological choice. Indeed, we must decide “if we want to construct machines that think (for us) or machines that help us to think,” he writes, referring to the thought of Francesco Varanini. |
Brain imaging predicts future reading progress in children with dyslexia
by Melanie Moran, Dec. 20, 2010, 2:00 PM
Brain scans of adolescents with dyslexia can be used to predict the future improvement of their reading skills with an accuracy rate of up to 90 percent, new research indicates. Advanced analyses of the brain activity images are significantly more accurate at predicting these outcomes than standardized reading tests or any other measures of the children’s behavior.
The finding raises the possibility that a test one day could be developed to predict which individuals with dyslexia would most likely benefit from specific treatments.
The research was published Dec. 20, 2010, in the Proceedings of the National Academy of Sciences.
“This approach opens up a new vantage point on the question of how children with dyslexia differ from one another in ways that translate into meaningful differences two to three years down the line,” Bruce McCandliss, Patricia and Rodes Hart Chair of Psychology and Human Development at Vanderbilt University’s Peabody College and a co-author of the report, said. “Such insights may be crucial for new educational research on how to best meet the individual needs of struggling readers.
“This study takes an important step toward realizing the potential benefits of combining neuroscience and education research by showing how brain scanning measures are sensitive to individual differences that predict educationally relevant outcomes,” he continued.
The research was primarily conducted at Stanford University and led by Fumiko Hoeft, associate director of neuroimaging applications at the Stanford University School of Medicine. In addition to McCandliss, Hoeft’s collaborators included researchers at MIT, the University of Jyväskylä in Finland and the University of York in the United Kingdom.
“This finding provides insight into how certain individuals with dyslexia may compensate for reading difficulties,” Alan E. Guttmacher, director of the National Institutes of Health’s Eunice Kennedy Shriver National Institute of Child Health and Human Development, which provided funding for the study, said.
“Understanding the brain activity associated with compensation may lead to ways to help individuals with this capacity draw upon their strengths,” he continued. “Similarly, learning why other individuals have difficulty compensating may lead to new treatments to help them overcome reading disability.”
The researchers used two types of brain imaging technology to conduct their study. The first, functional magnetic resonance imaging (fMRI), depicts oxygen use by brain areas involved in a particular task or activity. The second, diffusion tensor magnetic resonance imaging (DTI), maps white matter tracts that are the brain’s wiring, revealing connections between brain areas.
The 45 children who took part in the study ranged in age from 11 to 14 years old. Each child first took a battery of tests to determine their reading abilities. Based on these tests, the researchers classified 25 children as having dyslexia, which means that they exhibited significant difficulty learning to read despite having typical intelligence, vision and hearing and access to typical reading instruction.
During the fMRI scan, the youths were shown pairs of printed words and asked to identify pairs that rhymed, even though they might be spelled differently. The researchers investigated activity patterns in a brain area on the right side of the head, near the temple, known as the right inferior frontal gyrus, noting that some of the children with dyslexia activated this area much more than others. DTI scans of these same children revealed stronger connections in the right superior longitudinal fasciculus, a network of brain fibers linking the front and rear of the brain.
When the researchers once again administered the reading test battery to the youths two and a half years later, they found that the 13 youths showing the stronger activation pattern in the right inferior frontal gyrus were much more likely to have compensated for their reading difficulty than were the remaining 12 youths with dyslexia. When they combined the most common forms of data analysis across the fMRI and DTI scans, they were able to predict the youths’ outcomes years later with 72 percent accuracy.
The researchers then adapted algorithms used in artificial intelligence research to refine the brain activity data to create models that would predict the children’s later progress. Using this relatively new technique, the researchers could use the brain scanning data collected at the beginning of the study to predict with over 90 percent accuracy which children would go on to improve their reading skills two and a half years later.
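The article does not say which algorithms were used, so the following is only a generic, hypothetical illustration of the approach it describes: represent each child's scans as a feature vector, train a classifier on the outcome labels, and estimate predictive accuracy with cross-validation. All names and numbers below are invented placeholders, not the study's data or methods.

```python
# Hypothetical sketch only: the features, labels, and model choice are
# placeholders, not the methods or data from the study described above.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Pretend each row is one child with dyslexia: activation and connectivity
# measures extracted from fMRI and DTI scans (random placeholders here).
X = rng.normal(size=(25, 10))
# Pretend outcome label: 1 = reading skills improved at follow-up, 0 = did not.
y = np.array([0, 1] * 12 + [1])

# Cross-validation estimates how well the model predicts outcomes for
# children it has never seen, which is what a prediction-accuracy figure claims.
clf = SVC(kernel="linear")
scores = cross_val_score(clf, X, y, cv=5)
print("estimated prediction accuracy:", scores.mean())
```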
In contrast, the battery of standardized, paper-and-pencil tests typically used by reading specialists did not aid in predicting which of the children with dyslexia would go on to improve their reading ability years later.
“Our findings add to a body of studies looking at a wide range of conditions that suggest brain imaging can help determine when a treatment is likely to be effective or which patients are most susceptible to risks,” Hoeft said.
Hoeft further explained that the largest improvement was seen in reading comprehension, which is the ultimate goal of reading. The youths showed less improvement in other reading-related skills such as phonological awareness. Typically developing readers tend to develop phonemic awareness skills before developing fluency and comprehension skills.
Hoeft suggested that the finding that youths with dyslexia recruited right frontal brain regions to compensate for their reading difficulties, rather than regions on the left side of the brain, as typical readers do, may have something to do with this.
The study is part of a rapidly developing field of research known as “educational neuroscience” that brings together neuroimaging studies with educational research to understand how individual learners differ in brain structure and activity and how learning can drive changes at the neural level. Such questions are now being effectively examined in young children even before reading instruction begins, McCandliss explained in a Proceedings of the National Academy of Sciences article published earlier this year.
“This latest study provides a simple answer to a very complex question—‘what can neuroscience contribute to complex issues in education?’” McCandliss said. “Here we have a clear example of how new insights and discoveries are beginning to emerge by pairing rigorous education research with novel neuroimaging approaches.”
The research was funded by the National Institute of Child Health and Human Development, the Stanford University Lucile Packard Children’s Hospital Child Health Research Program, the William and Flora Hewlett Foundation and the Richard King Mellon Foundation.
For more information on Peabody College, visit http://peabody.vanderbilt.edu. |
What Can You Do to Preserve Wildlife
We travel to places like Madagascar, the Amazon, and Kenya to witness the incredible beauty of wild animals. We are allowed to observe diverse ecosystems and species living in harmony, running free. The beauty of nature leaves us exhilarated. However, wildlife hasn’t had it easy for many years now. We lost twenty-two frog species and sixty-five North American plants last year, and you can expect more habitat loss in the coming years.
Nicki Geigert is a wildlife photographer in love with the natural environment that surrounds her. She yearns to learn more about nature and wildlife, but she also saw the impending doom facing these beautiful creations, so she published a book on rare and endangered species, shared images of nature, and posted blogs. She wants to dedicate herself to the natural world, and this is her effort to spread the word about the issues surrounding wildlife.
There is a long list of species that have gone extinct or become vulnerable over the years, and if the problem is not addressed immediately, we will lose even more wildlife species. Many people have warned of this and made last-ditch conservation efforts to save species before it’s too late. All these losses boil down to human behavior. Hence, be an activist and start helping spread awareness to save the majestic species on Earth. Be like Nicki Geigert and help save, preserve, and protect wildlife. Here are small but impactful things you can do to help preserve wildlife.
Be An Educated Consumer
Over the past few years, we have seen a growth in the use of eco-friendly alternatives to everyday products. Why is this so? Of course, this is an effort to preserve the Earth. The world is facing a significant problem that will affect fauna, flora, and even humans. It’s surprisingly simple to make positive changes to preserve wildlife once you figure out where to start. Instead of using disposable plastics, choose products that are energy-efficient, durable, made from sustainable sources, and sustainably packaged. Moreover, learn more about the food you eat and where it comes from, since the conversion of forests to rangeland for cattle is one of the leading causes of biodiversity loss. In essence, individual responsibility isn’t a silver bullet, but it can help strengthen the environmental movement.
Take Care of Wildlife Habitat
According to the International Union for Conservation of Nature, habitat destruction is the main threat to vulnerable and endangered species. Due to urbanization and tourism, more species are losing their homes and food sources. However, you can help reduce this threat by planting more trees, restoring wetlands, cleaning up beaches in your area, and avoiding disturbing the natural habitats of wildlife. If you own land, try planting more trees and digging ponds to promote wildlife habitat. As much as possible, avoid unjustified cutting down of trees. Moreover, if you have pet cats, try to keep them indoors so they don’t become predators of wild birds and mice.
Make Your Home Wildlife Friendly
You might not know it, but your home can also be a home for wildlife. Whether you have a small or big garden, it can provide a steady supply of food and water for wildlife species. It also gives wild animals a place to raise families. So, plant native species of flowers, trees, and bushes in your yard to attract birds, butterflies, bees, and moths. Try replacing part of your lawn with garden beds, native plants, and flowers. You can also get crafty by building a birdhouse or bat house. Avoid using pesticides and other chemicals on your lawn, as they can poison wildlife. Lastly, rethink your fall cleanup, and research how a too-thorough cleanup can harm wildlife.
Take Part in Conservation Activities
Your voice matters, and it can do wonders. You can encourage neighbors and locals to support policies that protect wildlife. Furthermore, working with other people is almost always more effective than making efforts alone. After all, we have a better chance of protecting wildlife together. Many groups are looking for volunteers and support from the community. They are usually independent groups, and if you’re generous enough, maybe you can help them financially in their cause. Join genuine groups that keep wildlife in the wild, such as ecotourism, photo safaris, or community-based humane education programs. You can also take part in conservation organizations’ efforts against animal cruelty, hunting, and habitat destruction. There are plenty of organizations with different goals, so you are sure to find like-minded people.
Please do not hesitate to use your voice to spread awareness about protecting and preserving wildlife. We encourage you to speak up and leave a comment on this post for everyone to read. Your opinions about wildlife conservation are important, and they are not something that should be left unsaid. |
A Swiss medical laboratory has found traces of polonium, a rare, highly radioactive metal, in the former Palestinian leader’s personal effects. Is that what killed the Nobel laureate? Arafat’s clothing and other items, including his toothbrush, contained polonium-210 levels 60 to 80 percent higher than would occur naturally.
Arafat was chairman of the PLO and headed the Palestinian Authority and was viewed by many Arabs as the symbol of Palestinian interests. He received the 1994 Nobel Peace Prize with Israeli leaders in recognition of their peace efforts. He fell ill in October 2004 and died a month later. The cause of his death was never determined, though doctors suspected a wide variety of diseases.
It was a scene that riveted the world for weeks: The ailing Yasser Arafat, first besieged by Israeli tanks in his Ramallah compound, then shuttled to Paris, where he spent his final days undergoing a barrage of medical tests in a French military hospital.
Eight years after his death, it remains a mystery exactly what killed the longtime Palestinian leader. Tests conducted in Paris found no obvious traces of poison in Arafat’s system. Rumors abound about what might have killed him – cancer, cirrhosis of the liver, even allegations that he was infected with HIV.
A nine-month investigation by Al Jazeera has revealed that none of those rumors were true: Arafat was in good health until he suddenly fell ill on October 12, 2004.
More importantly, tests reveal that Arafat’s final personal belongings – his clothes, his toothbrush, even his iconic kaffiyeh – contained abnormal levels of polonium, a rare, highly radioactive element. Those personal effects, which were analyzed at the Institut de Radiophysique in Lausanne, Switzerland, were variously stained with Arafat’s blood, sweat, saliva and urine. The tests carried out on those samples suggested that there was a high level of polonium inside his body when he died.
These sites provide basic information and an overview about electricity. Learn what electricity is, where it comes from, and what the sources of electrical energy are. There are also topics on electrical safety and electricity consumption as well as statistics. Includes lesson plans, online activities, experiments, games, and quizzes. There are also links to eThemes Resources on electrical safety, static and current electricity, and circuits, conductors, and batteries.
BBC: Using Electricity
Discover how to make a bulb light by placing different parts of a circuit. Then take an online quiz. NOTE: The "Talk" link leads to a discussion forum.
BBC: Electricity in the Real World
Watch an interactive presentation about how electricity works and how different devices are used to create electricity. Take a quiz at the end of the presentation.
Energy Kids: What is Electricity?
Find quick information about the science of electricity, electricity generation, moving electricity, and measuring electricity. Includes basic energy statistics.
ThinkQuest: The Shocking Truth About Electricity
Follow links to read about the interesting story of electricity.
Watch a movie about electricity and learn about different types of electricity, three parts of an electrical circuit, and how magnetic fields are created. Also, find out about the important role magnets play in creating electricity. Then take a quiz and do an activity. NOTE: Subscription is required for this site.
eThemes Resource: Safety: Electrical
These sites include tips for awareness and prevention of electrical hazards. Students learn how to be safe with electricity inside and outside of their homes. Includes safety tips about lightning and power lines.
eThemes Resource: Physics: Static Electricity
Learn about static electricity and how it works. Covers the topics of atoms, electrons and protons. Includes several illustrations, photos, animated movies, and many hands-on science experiments.
eThemes Resource: Physics: Current Electricity
These sites explain the nature of current electricity. Here kids can learn about the direction of electron flow, the differences between direct current (DC) and alternating current (AC) electricity, and how current electricity compares to static electricity. The sites include photos, movies, hands-on activities, lesson plans, and a link to the eThemes Resource on Static Electricity.
eThemes Resource: Electricity: Circuits, Conductors, and Batteries
These sites have lots of illustrations and animations that demonstrate how electricity, circuits, and batteries work. Learn the difference between conductors and insulators. Includes suggested activities, hands-on experiments, and online quizzes. There is a link to an eThemes Resource on electrical safety.
Request State Standards |
The genesis of this piece came when I chanced upon an article about Isaac Newton and the Bible (“Sir Isaac Newton and the Bible” by Professor Arthur B. Anderson). It is an extremely laudatory piece, and what can be considered a very truthful one. The problem lies in the interpretation of the information presented, as illustrated by the last two paragraphs:
Sir Isaac Newton and all reputable scientists believed that today’s scarred and marred earth as the result of the great Flood. This was the common opinion of the majority of educated people until around the year 1870!!
In conclusion: Sir Isaac Newton was totally correct in his Observations. If the greatest scientist who ever lived had no problem believing the Bible, what excuse will evolutionists, atheists, agnostics, or other so called men of science have on Judgment Day!!
I will not disagree with Professor Anderson’s assertion that Newton, reputable scientists and the majority of educated people believed in the Flood until 1870, for that is essentially correct. Up until that point, there was no reason to believe otherwise.
But his selection of the year 1870 coincides rather nicely with the debate over the publication of Darwin’s Origin of Species. It also coincides rather nicely with the development of geology and of explanations that went beyond the Biblical one.
The problem that I have with his conclusion is two-fold. First, nothing in what Professor Anderson writes suggests what Isaac Newton would have said or thought if he had lived at the time of Darwin.
He was a physicist and a mathematician; his work on the science of optics, the development of calculus, and the development of the idea behind gravity all suggest a man interested in what was happening in the world; would he have dismissed Darwin’s work without a thought or might he have explored the premise behind the work? These are questions we can ask but which we cannot answer.
Second, Professor Anderson’s article also leaves out quite a bit of information about Newton, information that calls into question his rather emphatic conclusion.
In “A Study in Scarlet” Sherlock Holmes tells Dr. Watson that “it is a capital mistake to theorize before you have all the evidence.” A corollary to this is that you cannot and should not make the facts fit the theory. Professor Anderson’s conclusion seems to fit into that latter category. He wants, as do several others, to find scientists who have a professed interest in the Bible and God in order to discredit others or to give credence to their own viewpoint.
Like so many who will tell you that Thomas Jefferson was guided by God in the writing of the Declaration of Independence or how our founding fathers were devoted Christians, the evidence offered is often incomplete and the conclusions drawn are incorrect (see “Don’t Know Much History”).
Tycho Brahe is best known in history for the detailed observations that he made of the planets and the stars prior to the invention of the telescope. His observations of a supernova in 1572 contradicted the accepted notion that the cosmos (or universe) was fixed and unchanging. His observations of the movement of a comet in 1577 showed that comets were further away from the earth than was the moon, a conclusion that also contradicted the teachings of Aristotle.
In his observations of the heavens, Brahe determined that there was no parallax for the stars. Parallax is the apparent movement of something when you look at the object with one eye open and the other shut and then change the eye which is open and the eye which is shut. As you blink your eyes, the object you are looking at appears to move; that is what is known as parallax. (See http://www.digitalsky.org.uk/lunar_parallax.html or http://spot.colorado.edu/~underwod/astr/para.html for a demonstration.) Brahe showed that the stars did not exhibit such movement and this meant that either 1) the stars were very far away or 2) the earth was motionless at the center of the universe.
Like so many other instances of human thought, Brahe correctly formulated the responses to his thought but then chose the wrong answer. He did not believe that the stars could be as far away from the earth as his observations suggested so he concluded that the earth was motionless and at the center of the universe. (Adapted from http://csep10.phys.utk.edu/astr161/lect/history/brahe.html)
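To put rough numbers on Brahe's dilemma (the figures below are modern values, not anything Brahe could have known): even the nearest star is so distant that its annual parallax is a fraction of an arcsecond, while the best pre-telescopic instruments could resolve only about an arcminute.

```python
import math

AU_KM = 1.496e8            # Earth-Sun distance in kilometers
LIGHT_YEAR_KM = 9.461e12   # one light-year in kilometers

def parallax_arcsec(distance_km: float, baseline_km: float = AU_KM) -> float:
    """Annual parallax angle, in arcseconds, of a star at the given distance."""
    return math.degrees(math.atan(baseline_km / distance_km)) * 3600

# Nearest star system, about 4.2 light-years away: roughly 0.77 arcseconds.
print(round(parallax_arcsec(4.2 * LIGHT_YEAR_KM), 2))
# Brahe's naked-eye instruments were good to about 1 arcminute (60 arcseconds),
# so a shift nearly a hundred times smaller was simply invisible to him.
```

So the stars really were very far away, and the absence of a measurable parallax was not, in fact, evidence of a motionless Earth.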
Isaac Newton did believe in the Bible as it was written; he had no other information upon which to make a conclusion. In fact, Newton’s writings concerning the Bible were as numerous as his other works and this should not be surprising considering that he was interested in the relationship of God to the universe. His work on the discovery of the law of gravitation told him how gravity worked but not why it worked. His study of the Bible was as much driven by a desire to understand how God made the universe as it was to understand who God was.
My notes on Isaac Newton include a book by Michael White, Isaac Newton – The Last Sorcerer and “Newton’s Hair” by Mark S. Lesney from Today’s Chemist at Work (April, 2003) that relied in part on White’s work. In his book, Michael White also describes the work that Newton did with regards to a scholarly examination of the Book of Daniel and an attempt to determine the end of time (which White says Newton determined to be 1948).
What is also included in the White biography that is not included in the Anderson article is that Newton was a Christian apostate and an adherent to the Arian heresy, a belief that Jesus was not divine. A further reading showed that even though he signed papers agreeing to a career in the ministry following his graduation with his Bachelor and Master’s degrees, he was reluctant to take that step when he was awarded his doctorate. It took a special dispensation from the King to allow him to remain as the Lucasian professor (a dispensation that still is in effect today).
We cannot say whether Newton would have accepted or rejected Darwin’s ideas. History tells us that he was very much opposed to those whose ideas were contrary to his own but we have nothing to suggest what he would have done if he had been presented with Darwin’s theory. But if we are to apply modern day situations to Isaac Newton and his own beliefs about God, Christ, and the church, it is very likely that many churches today would have rejected him for his thoughts and statements.
My own curiosity about the idea that Isaac Newton might have had fundamentalist type religious beliefs led me to another article, “Maxwell, Molecules, and Evolution” by Charles Petzold. In this article, Petzold points out that James Clerk Maxwell and several other early scientists (Lord Kelvin, Robert Boyle, Johannes Kepler, Michael Faraday, and Samuel F. B. Morse) are listed as “Christian men of science”. Now, it would be very difficult to presume that either Boyle or Kepler were opposed to Darwin’s theory for the simple fact is that they didn’t even know that there was such a theory. As Petzold also points out, the presumption that Maxwell might have been opposed to Darwin’s theory is made by very carefully selecting the words that Maxwell spoke and using them out of the context in which they were spoken.
It strikes me that modern day creationists or those who expound on the notion of intelligent design would quickly add Robert Boyle and Johannes Kepler to a list of scientists. I am sure that along with Newton, they accepted the notion of a creator who put into play the work of the universe. But like Newton, their belief in God is radically different from what they would have you believe it to be.
Robert Boyle could hardly be considered the paragon of virtue that one might suspect when given the label of “Christian man of science.” Now, I am familiar with Robert Boyle as the father of modern day chemistry. But it was a surprise to me that there was a corollary between his life and mine. In one aspect, I would agree with those who say that our students today are not given the full story about the individuals who laid the basis for what we do today. But I think that such full disclosures must include all the stories and not just the ones that support the point that the presenter wishes to present.
When he was young, Boyle was introduced to the works of Galileo and he became a strong supporter of his philosophy and approach. It was this that led Boyle to the study of science and mechanics, a study that would be reflected in his later achievements. But at the same time that he was being introduced to Galileo, he also experienced another transformative event that would shape his life, his philosophy and his science: a profound religious experience. In his autobiography, Boyle noted that his conversion occurred during a majestic thunderstorm and that his spiritual change would be enduring, leading him towards a strongly theistic perspective that informed his views on natural philosophy.
In 1643, at the age of 16, Boyle’s father died and he inherited the family estate. Here he settled in to begin a life as a writer, not of scientific manuscripts, but pious, moralistic tracts inspired by his newfound Christian faith.
He would begin the studies that would lead to the writing of The Skeptical Chemist shortly after this. His work in chemistry was aimed at establishing chemistry as a mathematical science that was based on a mechanistic theory of nature.
But even while developing the experimental methods that would make chemistry a science, Robert Boyle, like Isaac Newton, also studied alchemy. While we today may see the difference between the two areas, it is likely that practitioners like Newton and Boyle did not. It can even be suggested that the work that Newton and Boyle did in alchemy was driven by their religious beliefs.
Boyle believed wholeheartedly in the existence of a supernatural realm, a world in which humankind had little experience. For him, alchemy was the link between the two worlds; a link that might provide evidence of God’s existence.
For Boyle, alchemy was a gift from God that, along with chemistry, offered a path to the truth. He was hostile to views of nature that did not demonstrate a proper understanding or appreciation of God’s power in the world. And while he was a devout Christian, he despised taking oaths. His refusal to take holy orders prevented him from becoming the provost of Eton and he would decline the offer to serve as the President of The Royal Society because of the oath he would have had to take.
In reading the information about Boyle (from The Last Sorcerer and “Founding Chymist” by Richard A. Pizzi from Today’s Chemist at Work, August, 2003), I can’t help but think that there are many today who would not welcome Robert Boyle into their church. But like Newton, his work meant to show what God had done and he believed in experimentation over a priori theorizing. He drew on various sources as long as they could be confirmed by experimental results. It remains to be seen how he would have reacted to Darwin’s work or the view of the modern day church.
Another chemist whose interests also included theology was Joseph Priestley. And like the others mentioned, his religious beliefs were clearly outside the mainstream of orthodox religion. As a student, his studies led him to question the orthodox tenets of the Calvinist faith. Like Newton, he could not find scriptural support for the Trinity.
This decision effectively denied him access to the great universities of England and he attended a more liberal school where his interest in natural phenomena and experimentation were encouraged. And again, where today his unorthodox views on religion might be cause for his expulsion from the church, he was able to work in an environment where differences in opinion were looked upon as a means of discovering the truth and not as a sign of moral reprobation.
After his fundamental research on the nature of oxygen, done while serving as a minister in Birmingham, he published three rather controversial manuscripts: Letters to a Philosophical Unbeliever, an attempt to defend natural religion against the skepticism of David Hume; a History of the Corruptions of Christianity, a direct attack on the central tenets of orthodox religion, particularly the doctrine of the Trinity; and a History of the Early Opinions Concerning Jesus Christ, where he set out to prove that the doctrine of the Trinity was not according to Scripture. Because of these publications, Priestley was denounced from the pulpit and a mob destroyed his home, laboratory, and library. Ultimately he was forced to move to America. (From “Joseph Priestley”)
In the same mode are the notes that I have gathered concerning Johannes Kepler. Kepler struggled very much with the conflict between his science and his faith. In reading the short biography that Charles Hummel put together in his book The Galileo Connection I also discovered that Kepler was a devout Christian whose interests in science often ran counter to the beliefs of the community. Parenthetically, Kepler, whose work was central to Galileo’s work and the confirmation of the Copernican model of the universe, died without a church. He would not sign a statement affirming a creed in the Lutheran church and so the Lutheran church denied him communion and employment in Lutheran universities. And because he was a Lutheran, the Catholic Church denied him communion and employment. (From the Galileo Connection)
I would have to say that Newton, Boyle, Priestley and Kepler were all men of faith. Their work was focused on seeing how God created this world and better understanding that creation. But their study was very much like that of Nicolaus Copernicus.
He saw no conflict between his Christian faith and his scientific activity. During his forty years as a canon, he faithfully served his church with extraordinary commitment and courage. At the same time, he studied the world “which has been built for us by the Best and Most Orderly Workman of all.” He pursued his science with a sense of “loving duty to seek the truth in all things, in so far as God has granted that to human reason.” He declared that although his views were “difficult, almost inconceivable, and quite contrary to the opinion of the multitude, nevertheless in what follows we will with God’s help make them clearer than day – at least for those who are not ignorant of the art of mathematics. (From The Galileo Connection by Charles E. Hummel)
The work of a scientist is not to discover God nor is it to prove or disprove His existence. Such work can only be done, perhaps, in the heart of the individual. The work of the scientist is to examine the evidence before him and make sense of what that evidence means.
I remember a conversation I had with someone several years ago. At that time, the Missouri State Legislature was contemplating the passage of legislation that would have included the teaching of intelligent design in the biology curriculum. I told my colleague that if that legislation passed, I would resign immediately. Now, as a chemistry teacher, this legislation would not have affected my teaching (the benefit of not being certified to teach biology). But such legislation would have interfered with my rights as a teacher by dictating what I can or cannot teach; my colleague, who was both a Southern Baptist and a biology teacher, said that he would be right behind me.
The problem today is that we seek a world in which faith and science are either one and the same or permanently split. There are those who say that the argument between faith and science is part of a greater cultural war. And I would agree. But the issue at hand is not what one believes, whether by actual evidence or by faith alone, but rather who controls the thought process.
The present discussion is all about power and who has the power. Both those who argue for the fundamentalist view of the world and those who argue for a more secular view of the world want to control the thought process of those who would like to learn about the world. And in their vigorous defense of their own view and their vigorous attempts to deny the other viewpoint, they merely show the weakness of their own view.
It may be argued that the strength of one’s argument is inversely proportional to the strength of one’s belief system. The stronger the argument, the weaker the belief; it is entirely logical to assume that the fervor you put into keeping me from thinking about things is a sign that your own thoughts are indefensible. What was it that G. K. Chesterton said about atheism, that it was an argument for a “universal negative”?
It has been documented several times that employment at several Christian colleges is predicated on the signing of an oath that your beliefs are in line with that of the faith. I remind the reader that Boyle refused to sign such oaths, Priestley refused to sign such oaths, and Newton did so but then violated them even before the ink was dry on the vellum.
Science is an attempt to discover the world around us. Faith is an attempt to discover who we are. We need both in life and cannot replace the one with the other. We must ask ourselves as we begin the second decade of the 21st century if we are prepared to do both. We cannot discover the world around us if we do not know who we are nor can we find out who we are unless we can find out what this world is all about. |
Covid-19 variants, including UK strain, may lead to false negative results with molecular tests: US FDA
January 8, 2021
WASHINGTON (REUTERS) – Genetic variants of the novel coronavirus, including the one found in Britain, could impact the performance of some molecular Covid-19 tests and lead to false negative results, the US Food and Drug Administration (FDA) said on Friday (Jan 8).
The agency has alerted clinical laboratory staff and health care providers to the possible false negative results from any molecular test, and has asked them to consider such results in combination with clinical observations, and repeat testing with a different test if Covid-19 is still suspected.
The FDA, however, said the risk that these mutations will impact overall testing accuracy is low.
The more contagious variant of Covid-19 that has swept through the United Kingdom has been reported in at least five states in the US, National Institutes of Health Director Francis Collins said this week.
Scientists have said newly-developed vaccines should be equally effective against the new variant.
The FDA said on Friday its analysis found the performance of three tests that have received emergency use authorisation to be impacted by genetic variants of the coronavirus, but noted that the impact does not appear to be significant.
Thermo Fisher Scientific’s TaqPath Covid-19 combo kit and the Linea Covid-19 assay kit were found to have significantly reduced sensitivity from mutations, including the B.1.1.7 (so-called UK) variant, according to the agency.
However, since both the tests are designed to detect multiple genetic targets, the overall test sensitivity should not be impacted, the FDA said.
Mesa Biotech’s Accula Sars-Cov-2 test performance can also be impacted by the genetic variants, according to the health regulator.
George Washington understood the importance of naval power. He recognized the futility of trying to defend New York City, surrounded as it was by water that the British Navy could use to maneuver around his flanks. “The amazing advantage the Enemy derive from their Ships and command of the Water keeps us in a State of constant perplexity.”
British General William Howe used the ships of his brother Richard Howe’s fleet to move their troops to Staten Island and then on to Long Island, resulting in a flanking maneuver that forced the evacuation of the island. Howe used the fleet again to land at Kip’s Bay northeast of New York City, easily dispersing the Connecticut militia assigned to protect that area and forcing Washington to evacuate the city. A few days later, the British made another landing on the American’s northeastern flank on the peninsula at Throg’s Point. Washington brought part of his army into a position to stall the British move but also began moving the remainder of his forces from the island of Manhattan. The British fleet sat at anchor off the shore. Would they outflank him again?
Washington placed part of the army on the West Chester Heights overlooking the British on Throg’s Point, and rushed the main body over King’s Bridge to leave Manhattan Island. Col. John Glover and his brigade were posted to the north at East Chester. Forty-three year old Glover was a successful businessman who used the seafaring trades to make his living in Marblehead, Massachusetts. On the morning of October 18, 1776, from the heights that his position afforded him, Glover saw a small fleet of British ships landing large numbers of soldiers at Pell’s Point a couple of miles northeast of Throg’s Point. Using the cover of darkness, Gen. William Howe had made his move.
First, Howe made a diversionary move to the west toward Morrisiana but Glover was unaware of that; all he knew was that the British were landing at Pell’s Point in front of him. He sent one of his staff officers, William Lee, to inform his division commander, Maj. Gen. Charles Lee, and also to ask for orders. Keeping his eye on the British, Glover spotted their advance guard of thirty men. He quickly ordered a group of about forty men to reconnoiter and possibly delay the enemy. He knew it would be some time before General Lee could send instructions, so he acted on his own. “I did the best I could, and I disposed of my little party to the best of my judgment.”
Glover’s brigade was composed of four regiments and three cannon. One of the regiments was Glover’s own, the so-called “Marbleheaders.” It was designated the 14th Continental Regiment. Consisting of 179 men, they had already attained fame for their part in helping evacuate Washington’s army from Long Island. His second regiment, the 3rd Continental Regiment, was composed of 204 men and commanded by French and Indian War veteran William Shephard. Joseph Read of Uxbridge, Massachusetts, commanded the 13th Continental Regiment mustering about 226 men. The last regiment in the brigade, the 26th Continental, was led by Loammi Baldwin, of Woburn, Massachusetts, and contained about 234 men. Baldwin had been sick in bed but roused himself just in time to join the regiment as it moved forward.
Sizing up the situation, Glover placed Read to the left of the approach road behind a wall out of sight of the British. He placed the regiments of Shephard and Baldwin in echelon to the right rear of Read, also under cover of stone walls. Glover’s own regiment was placed in reserve with the three pieces of artillery. Glover took up a position with Read’s men but doubted his dispositions in the face of the enemy: “I would have given a thousand worlds to have had General Lee, or some other experienced officer present to direct, or at least approve of what I had done.”
The British force numbered about 4,000: three blue clad Hessian regiments with red clad British grenadiers, light infantry, and dismounted dragoons. Glover had no more than 750, with only about 625 immediately facing the enemy. Glover accompanied the first group of forty Americans who gave the British advance guard a blast of fire and then hustled back toward their main body, as did the British advance party. Halted briefly by the short engagement, it took over an hour for the British to organize themselves and advance down the road.
Read’s men waited until the British were within one hundred feet and opened fire. The Continentals, many of whom had never been under fire before, quickly reloaded. They fired again and again as the surprised British light infantry and German jägers advanced. After seven rounds, Glover ordered Read’s outnumbered men to retreat. The British pushed on but found themselves confronted by more Americans.
Atop the slight rise called Prospect Hill, Shephard’s men were positioned behind a double stone wall. Blasted by this new line of Americans, the British halted and began a firefight that involved infantrymen’s muskets and seven cannon. The Americans held on for nearly an hour, firing about seventeen rounds apiece, before the enemy fire drove them from their position. Shephard was wounded in the neck.
British troops moved to outflank the Continentals as they found more room to maneuver. Baldwin’s regiment fired only one volley before it was ordered to retreat with the other two regiments and join Glover’s own regiment across the Hutchinson River. The British refused to follow any further and satisfied themselves with an artillery duel that lasted until dark.
Glover’s small battle, in which neither side lost more than thirty or forty men, allowed Washington to extract his men from Throg’s Neck and Manhattan Island. They took up positions near White Plains. Glover’s brigade rejoined the main army and participated in the rest of the New York campaign. The division commander sent his congratulations to Glover’s men:
General Lee returns his warmest thanks to Colonel Glover and the brigade under his command, not only for their gallant behavior yesterday, but for their prudent, cool, orderly and soldierlike conduct in all respects. He assures these brave men that he shall omit no opportunity of showing his gratitude.
General Washington included his approbation in the General Orders on October 21, 1776:
The hurried situation of the Gen. the last two days having prevented him from paying the attention to Colonel Glover and the officers and soldiers who were with him in the skirmish on Friday last that their merit and good behavior deserved, he flatters himself that his thanks, though delayed will nevertheless be acceptable to them.
Glover’s men retreated with the army from New York and through New Jersey. They performed yeoman service in transporting Washington’s army across the Delaware River on Christmas night, 1776. Although Washington lamented British mobility using their fleet, Glover’s men had provided a maritime force that provided the Continental Army with a similar mobility on water.
Washington valued Glover’s service. He was disappointed when Glover left the army at the beginning of 1777 and refused to accept a promotion to brigadier general. The commander in chief wrote him:
After the conversations, I had with you, before you left the army, last Winter, I was not a little surprised … As I had not the least doubt, but you would accept of the commission of Brigadier, if conferred on you by Congress, I put your name down in the list of those, whom I thought proper for the command, and whom I wished to see preferred.
Diffidence in an officer is a good mark, because he will always endeavour to bring himself up to what he conceives to be the full line of his duty; but I think … without flattery, that I know of no man better qualified than you to conduct a Brigade, You have activity and industry, and as you very well know the duty of a colonel, you know how to exact that duty from others.
Glover accepted the promotion and returned to the army, serving until the end of the war.
George Washington to John Hancock, July 25, 1775, in Frank E. Grizzard, ed., The Papers of George Washington, Vol. 10, 11 June 1777–18 August 1777 (Charlottesville, VA: University Press of Virginia, 2000), 410.
Nathan P. Sanborn, Gen. John Glover and his Marblehead Regiment in the Revolutionary War: a paper read before the Marblehead Historical Society, May 14, 1903 (Marblehead, MA: Marblehead Historical Society, 1903), 25.
Washington to John Glover, April 26, 1777, in Philander D. Chase, ed., The Papers of George Washington, Revolutionary War Series, vol. 9, 28 March 1777–10 June 1777 (Charlottesville, VA: University Press of Virginia), 274. |
January 15, 2010 Archives
Advance work for an academic research paper that explores some aspect of game culture and theory. (What is a college research paper?)
Your presubmission report is a single word processor file, about 2-3 pages, uploaded to Turnitin.com, that includes the following, numbered sections:
(the same text is also available in HTML format)
From early flight simulators to multiplayer games like America's Army (see Figure 1), the military has long recognized the potential for games and simulations to enable the teaching and testing of skills that could not be rehearsed in real-world environments. Ironically, these military links have been exploited by fearmongers, such as military psychologist and anti-video-game activist David Grossman, to drive a wedge between games and schools.
If you are particularly interested in games and learning, I strongly recommend Gee's What Video Games Have to Teach Us about Learning and Literacy.
Discussion Leaders: Jessie and Matt.
A new participation portfolio, covering your accomplishments since the first portfolio was due.
Instead of worrying about whether kids can absorb anything by playing games created by adults, let's consider what they can accomplish by creating their own media for their peers. MIT's free tool Scratch is designed to get kids programming, so that they can create their own games and animations. (Watch a 5-minute intro to Scratch.)
Kids can start out just watching cartoon characters move around, but with a little guidance, they can start adding more sophisticated controls and program complex interactions.
- Whether they plan to be programmers or not when they grow up, they will use computers all their lives. Rather than let them think of what goes on inside that box as magic, or dismiss technology because "computers hate me"...
- Scratch introduces kids to the idea that everything that happens inside a computer follows a rule, and that -- at least until the robot uprising -- those rules come from people.
In about 30 minutes, these videos walk you through the steps of how to build a simple Breakout game in Scratch. In the last 2 videos, for another 15 or so minutes, I'm mostly tweaking a working demo.
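For anyone curious how those same ideas look outside Scratch's drag-and-drop blocks, here is a rough, text-only Python sketch (not Scratch, and not part of any assignment) of the core "rule" a Breakout-style demo depends on: the ball moves one step at a time and reverses direction when it hits a wall or the paddle.

```python
# A toy, text-only version of the Breakout "bounce" rule.
# No graphics library is used; this just steps the ball's position and
# shows how a couple of simple rules produce the game's behavior.

width, height = 20, 10          # size of the playing field
x, y = 5, 3                     # ball position
dx, dy = 1, 1                   # ball velocity (one cell per step)
paddle_x = 8                    # paddle sits along the bottom row

for step in range(30):
    x += dx
    y += dy
    # Rule 1: bounce off the side walls and the top.
    if x <= 0 or x >= width:
        dx = -dx
    if y >= height:
        dy = -dy
    # Rule 2: bounce off the paddle at the bottom, otherwise the ball is lost.
    if y <= 0:
        if abs(x - paddle_x) <= 2:
            dy = -dy
        else:
            print("Missed the paddle - game over at step", step)
            break
    print(f"step {step:2d}: ball at ({x}, {y})")
```

The point is not the language; it is that the bounce is nothing more than a rule a person wrote down, which is exactly the idea Scratch makes visible to kids.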
How do these videos affect your thoughts on games and education, and on your own potential for creating interactive media? I am not requiring you to use this tool for class, but if you like what you see....
- I encourage you to consider using it to help present the creative part of your term project.
- You can download it free at scratch.mit.edu. The web is full of sample projects and user-created tutorials; here are some Scratch tutorials recorded by kids.
I think everyone has a pretty clear sense of what's coming next, so this update will be short.
- My task for the rest of the day will be to provide feedback on the presubmission reports, after which I will turn to your portfolios.
- There is no homework scheduled for Monday.
- Tuesday, there are two scheduled readings, and a four-page draft of your research paper is due.
- At some point next week, I will post another set of GriffinGate reading quizzes for the chapters we've chosen in Williams and Smith.
Meanwhile, please continue to share your successes and frustrations on the Presubmission Report page, which was yesterday's class discussion topic.
Even a quick scan of the portfolio submissions reveals plenty of enthusiasm and confidence. We've already accomplished so much! Best wishes to each of you as we prepare for the final stretch. |
Category Archives: Format Documents
Spacing can change the entire look of your document and how text fills the page horizontally and vertically. Spacing can also make text easier or more difficult to read. Word gives you multiple ways to format text.
Formatting the font color of hyperlinks is tricky, but not impossible in PowerPoint 2010. Hyperlinks are a great way to point to a web page, slide, file or email address.
Images can do a lot to make your document pop. You’ll be surprised how quickly flyers, brochures and newsletters come to life! But what if you’ve added graphics to pages, and they still don’t look quite right?
Microsoft Access is your best friend when it comes to time-saving techniques. You can create, modify and reuse the same data in multiple ways. Instead of starting from scratch, simply specify the data you want to pull. You can also …
Say goodbye to spending wasted minutes formatting documents. Using the styles in Microsoft Word can speed things up, especially if you’re working on a long document. When you create a new document, Word opens the Normal template by default. This …
Section breaks are great helpers when you’re formatting a Word document. They make everything easier, whether you’re using page borders on specific pages or changing margins on a single page. You should use section breaks whenever you want to use … |
National Geographic has an excellent article by David Quammen about the science of bonobo behavior: "The Left Bank Ape: An Exclusive Look at Bonobo Behavior". Much has been made of the contrast between chimpanzee and bonobo behavior, often centered around the question of which of these two closest human relatives might be the better model for hominin origins. In reality, the Anthro 101 version of bonobo behavior radically oversimplifies their behavioral variation. As Quammen discusses, bonobo behavior in the wild holds some surprises for students enamored of the simplistic sex primate story.
That afternoon Hohmann and I sat beneath one of the thatch roofs discussing bonobo behavior. Few other researchers have seen bonobos in the act of predation, and those few reports generally involve small prey such as anomalures (only at Wamba) or baby duikers. Animal protein, insofar as bonobos get any, had seemed to come mainly from insects and millipedes. But Fruth and Hohmann reported nine cases of hunting by bonobos at Lomako, seven of which involved sizable duikers, usually grabbed by one bonobo, ripped apart at the belly while still alive, with the entrails eaten first, and the meat shared. More recently, here at Lui Kotale, they have seen another 21 successful predations, among which eight of the victims were mature duikers, one was a bush baby, and three were monkeys. Bonobos preying on other primates: This is a regular part of the bonobo diet, Hohmann said.
Sexiness, on the other hand, seemed to him less manifest than others, such as de Waal, had claimed. "I could show Frans some of the behaviors that he would not think are possible in bonobos," Hohmann said. "Infrequent sex, for instance. Yes, there's a great diversity of sexual acts in the bonobo repertoire, but a captive setting really amplifies all these behaviors. Bonobo behavior in the wild is different, must be different, because bonobos are very busy making their living, searching for food."
Understanding the behavioral flexibility of both bonobos and chimpanzees is hugely important to the science of human origins. Meanwhile the continuing habitat loss and bushmeat trade threaten these creatures' survival. Bonobo numbers remain fewer than 20,000 today. Their present genetic diversity is more comparable to the pattern of human variation than are chimpanzees, gorillas or orangutans. In that respect, at least, they may be the best primate model for our recent evolution. Hopefully genomics will begin to yield insights about the basis of bonobo-chimpanzee behavioral variation, which might open new doors to understand the evolution of the human brain.
Having the information about spectral types was useful, but astronomers wanted to look for trends in the data. In 1911 Ejnar Hertzsprung plotted the absolute magnitude of stars against their colors. Two years later Henry Norris Russell independently did a similar graph using spectral types. Graphs of this type are known as Hertzsprung-Russell diagrams or H-R diagrams.
The most surprising thing about the H-R diagram was that the stars were not randomly scattered on it, but clustered in certain regions and along certain lines. The band that stretches diagonally across the diagram, from hot and luminous stars down to cool and dim ones, includes 90% of the stars in the night sky. This band is called the main sequence. The stars clustered at the upper right of the diagram include about 1% of the stars on the diagram, and are called giants and supergiants. Because of their cooler temperatures, they must be large to be as luminous as they are. The stars in the lower left of the diagram are called white dwarfs. They are very hot, but their luminosities are low, so they must be small. They make up about 9% of the stars on the diagram.
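To make the construction concrete, here is a minimal, illustrative sketch of such a plot in Python with matplotlib. The values are toy numbers chosen only to mimic the three regions described above (they are not a real star catalogue), and the main-sequence trend is a rough assumption:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Main sequence (~90% of stars): a rough diagonal trend from hot-and-bright
# down to cool-and-dim. The coefficients are illustrative only.
T_ms = rng.uniform(3000, 30000, 900)
M_ms = 15 - 18 * (np.log10(T_ms) - 3.4) + rng.normal(0, 1.0, 900)

# Giants and supergiants (~1%): cool but very luminous.
T_g = rng.uniform(3000, 5500, 10)
M_g = rng.uniform(-6, 0, 10)

# White dwarfs (~9%): hot but faint.
T_wd = rng.uniform(8000, 25000, 90)
M_wd = rng.uniform(10, 15, 90)

plt.scatter(T_ms, M_ms, s=4, label="main sequence")
plt.scatter(T_g, M_g, s=12, label="giants/supergiants")
plt.scatter(T_wd, M_wd, s=4, label="white dwarfs")
plt.gca().invert_xaxis()  # hotter stars on the left, by convention
plt.gca().invert_yaxis()  # brighter stars (smaller magnitude) at the top
plt.xlabel("surface temperature (K)")
plt.ylabel("absolute magnitude")
plt.legend()
plt.title("Toy H-R diagram")
plt.show()
```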
Explore: Find out more by using Star In A Box |
10 Apache Projects That Are Making a Difference
The Apache Software Foundation (ASF) is one of the most important and influential players in the modern open-source software development community. The ASF is perhaps still best known for its eponymous Web server, the Apache HTTP Server project, commonly referred to as "Apache." Apache has dominated the Web landscape since its creation in 1995 and currently holds the largest share in the Web server market. While the Web server is an important ASF project, it is currently just one out of over 100 projects that are operated by the open-source foundation. Among those projects are other hugely influential efforts such as the Apache Hadoop project. Hadoop has emerged in recent years to become the de facto standard technology behind the big data revolution. Apache is also home to the Lucene and Solr projects, which provide open-source search software. On the desktop user side, Apache is now the home to the OpenOffice project, which was originally started by Sun Microsystems and later operated by Oracle. eWEEK takes a look at some of the well-known and some lesser-known ASF projects. |
September 24th …
Samuel Pepys got himself a bargain on this day in 1665:
“…. And there, after breakfast, one of our watermen told us he had heard of a bargain of Cloves for us. And we went to a blind alehouse at the end of the town, to a couple of wretched, dirty seamen, who, poor wretches, had got together about 37 lb. of Cloves and 10 lb. of Nuttmeggs. And we bought all of them – the first at 5s. 6d. per lb and the latter at 4s. – and paid them in gold …..”
I get the distinct impression that our old friend Sam Pepys did not care to enquire too deeply into the source of the cloves. At least the poor wretches who sold them were a lot less poor at the end of the transaction:
- 5s. 6d. in 1665 is approximately equivalent today to ₤31.60
- 4s. in 1665 is equivalent today to ₤23 = $45.80 US = $55 AUD
Which means, by my reckoning, the wretches made away with almost $2,800 U.S in today's money. No doubt Sam himself made a decent profit too when he on-sold them.
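A rough back-of-the-envelope check of that reckoning, using the modern equivalents above and treating ₤1 as roughly $2 US:
- 37 lb. of cloves × ₤31.60 per lb ≈ ₤1,169
- 10 lb. of nutmeg × ₤23 per lb = ₤230
- In total ≈ ₤1,400, or a little under $2,800 US.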
Cloves are the dried flower buds of Caryophyllus aromaticus, a tree originating in the Moluccas, the Spice Islands of present-day Indonesia.
They help Digestions, stay the Flux of the Belly, and are binding; they clear the sight, and the powder of them consumes and takes away the Web or Film in the Eye, as also Clouds and Spots: being beaten to Powder, and drunk with Wine or the Juice of Quinces they stay Vomiting, restore lost Appetite, fortifie the Stomach and Head, gently warm an over-cold Liver: and for this Reason they are given with success to such as have the Dropsie; the smell of the Oil of them is good against fainting Fits and Swoonings; and being chewed, they sweeten the Breath, and fasten the Teeth; the Powder of them in White-wine is given for Falling-Sickness, or Palsie, the distilled Water of Cloves is good against Surfeits and pestilential Diseases; receiving the Smoak of the Cloves into the Nostrils whilst they are burning on a Chafing-dish of Coals, opens the Pores of the Head.
Today’s recipe, inspired by the nautical location of the story, is from a famous cookbook of Pepys’ era – The Accomplisht Cook, by Robert May (1660). Naturally, it contains cloves. It is also quite do-able today.
To Stew a small Salmon, Salmon Peal, or Trout.
Take a Salmon, draw it, scotch the back, and boil it whole in a stew pan with white wine, (or in pieces) put to it also some whole cloves, large mace, slic’t ginger, a bay leaf or two, a bundle of sweet herbs well and hard bound up, some whole pepper, salt, some butter and vinegar, and an orange in halves; stew all together, and being well stewed, dish them in a clean scowred dish with carved sippets, lay on the spices and slict’t lemon, and run it over with beaten butter, and some of the gravy it was stewed in; garnish the dish with some fine searsed manchet, or searsed ginger.
Tomorrow’s Story …
On Corned Beef.
Quotation for the Day …
Salmon are like men: too soft a life is not good for them. James de Coquet. |
HAMBURG — A sign facing the Alster fountain in the historic city center of Hamburg, its most expensive shopping district, explains that besides its status as a city icon, the fountain helps oxygenate the water and is partially responsible for Hamburg’s improved water quality.
While the fountain has been providing oxygenation to Lake Alster for a quarter century, the notice announcing its ecological impact only went up this year, part of an awakening self-consciousness Hamburg has developed as a city of the future. By both reshaping how it sees the old and by audaciously building new development, Germany’s second-largest city after Berlin is positioning itself as a leader in urban design and practice, and spending billions of euros in the process.
The most dramatic stirring of the waters is an entirely new city center core set over 157 hectares, or 388 acres, once occupied by the industrial harbor. Mixing residential development, storefront shops, parks, entertainment venues, a cruise-ship terminal, a school, a university and offices, HafenCity (HarborCity) will cost roughly €10 billion, about $13.6 billion at the current exchange rate, when completed in 2025.
A second set of projects seeks to remake one of the most struggling parts of the city through urban densification on an island called Wilhelmsburg, across the Elbe River south of HafenCity. Hamburg is funding both environmentally and socially conscious building projects with a view to stoking a vibrant community life, a more efficient urban infrastructure and greener living. The goal is to improve the quality of life and attract new residents.
By embracing largely experimental design, city leaders hope to transform an area shared by factories, a garbage dump, the working harbor, public housing, and mostly working-class Turkish immigrant neighborhoods, into a model of so-called green community living. The price tag on the Wilhelmsburg projects is estimated to be more than €700 million, with the city contributing €90 million directly through IBA Hamburg, the city-run agency that organizes and licenses the projects.
Apart from these massive undertakings, Hamburg has implemented a number of relatively smaller ones: revamping the public transportation system to make it greener by running trams on electricity that does not produce carbon dioxide and by turning to the use of hydrogen-powered buses. The city also introduced an immensely popular bike-lending system and implemented a new garbage disposal and recycling plan.
The results, both actual and promised, have won the European Commission’s Green Capital 2011 award, making Hamburg the only city with more than one million people to be so honored in the four years the prize has been given.
Though Germany is weathering the global and European economic crises better than the rest of the West, Hamburg is still strapped and megaprojects like HafenCity remain difficult to finance and build. The city had to summon both political will and financial muscle. It helped that the project was planned and started in another era, with the first phase breaking ground in 2001. But costs for parts of the project, particularly the concert hall, have steadily risen, and not all Hamburgers have been happy about that.
In a case of private finance combining with the public will, both big projects rely heavily on private partners but retain oversight and control by the city-owned corporations involved in planning and execution. Financial backers in both cases were found among businesses and developers eager to be part of the city-initiated projects — either because of the superb location and prestigious addresses offered by the HafenCity, for example, or because of the opportunity to show leadership in green housing construction in the case of the IBA Hamburg.
The big projects Hamburg is currently building show a city, and successive city governments, willing to think big. Not content just to manage problems of the present, the City Senate, controlled by politicians of diverse stripes in the last 15 years, endeavored to use the positive economy and a growing population to build a denser, better-connected, greener and socially fairer city for the future.
Europe’s second-busiest port city after Rotterdam saw its star rise after the fall of the Berlin Wall and the opening of trading opportunities to Eastern Europe. Like many larger German cities, Hamburg’s population is growing. But unlike others, it is doing so consistently. Some 20,000 new residents move to the city of 1.8 million every year, according to the city officials.
Historically one of Germany’s commercial and industrial centers, Hamburg has benefited from the boom by attracting businesses to plant their flags and build regional or national headquarters. The German business software provider SAP; Airbus; the wind-energy department of Siemens; and Sharp, the Japanese company that manufactures electronics products, have all expanded their presence here in recent years. One of the most prominent structures in HafenCity houses the German headquarters of Unilever, the British-Dutch multinational food and consumer product giant.
HafenCity is being built on a prime parcel of land swapped from the harbor authority less than two decades ago. When finished, it will create housing for about 12,000 people, just slightly fewer than live in the historic district adjoining it. One-third of all units will be subsidized housing.
Office space to accommodate another 40,000 will be created in the orderly jumble of buildings that will be lined on the old harbor quays and follow the pattern of downtown streets.
“The real challenge is to produce a fine-grained mixed-use downtown,” said Jürgen Bruns-Berentelg, chief of HafenCity Hamburg, the city agency that is building and managing the project.
Though the new quarter is meant to connect seamlessly with the historic town center, urban planners want it to have a broader role: ensuring, through its mix of work and leisure elements, that the hum of daily life does not stop after offices close, the way it does in the adjoining commerce-heavy city center.
Looking toward a future built around sustainability, virtually all buildings in HafenCity adhere to strict German and Hamburger green-building standards, which promise to ensure reduced energy consumption.
The project was first announced publicly in 1997, and building started in 2001 on the western edge of the zone once inhabited by cranes, warehouses and docks. Working eastward, the entire area — once part of a customs control zone and thus off limits to ordinary Hamburgers — is to be fully built by 2025.
To build the new neighborhood from scratch, public coffers will also fund schools, parks, a university, a vast concert hall, museums, a cruise ship terminal and the extension to the city’s subway system.
The HafenCity Hamburg has final say over the tenants who buy the land and the architectural design of the buildings. Two separate juries ensure that the businesses, public housing projects and commercial residential developments fit the ultimate plan of the neighborhood, over which the city retains control.
One of the peculiarities of HafenCity is that small lots are sold to separate commercial tenants, ensuring a lively mosaic, both in who inhabits a given building and in how it looks. The resulting mishmash, though still part of an overall design plan, resembles a more organically built urban landscape.
The city’s investment in HafenCity will run to some €2.2 billion, though €1.5 billion of that is expected to make its way back to city coffers through the sale of properties in the project. The investment in Wilhelmsburg under the IBA, the German acronym for international construction exhibit, will cost €90 million. The city is investing another €102.5 million to upgrade and build bridges and roads on the island.
“It is a big investment in our future,” said Jens Kerstan, chairman of the Grüne Alternative Liste, the local Green Party, an integral player in Hamburg’s city government. “The community must invest in the future.”
Though not a pittance for the strapped city, the investment is a fraction of the total cost thanks to numerous private partnerships, said Mr. Kerstan.
Could Hamburg be a model for other cities? In some ways, Hamburg is unique even by German standards. A free Hanseatic city historically, it still comprises its own state, so city politicians control a bigger proportion of taxpayers’ money than their colleagues in most other cities. The political structure, without the added bureaucracy of a state-level government, can also be more agile. And among a largely urban electorate, issues like subsidized housing, public transportation and funding for the arts, for example, can be less contentious than in places where suburban and rural residents have to be taken into account, too.
Wilhelmsburg is situated at the city’s edge and is shared by one of Germany’s largest copper smelters, some of Hamburg’s vast port facilities, and roughly 49,000 mostly lower-income residents.
IBA Hamburg, an agency created to manage the project, is tasked with finding solutions to three disparate challenges affecting most cities: urban landscape, social equity and climate change. Planners are trying to expand residential areas, essentially through making the neighborhood more dense and preventing urban sprawl, while creating socially advantageous conditions and building green.
“These problems are internationally relevant because they are problems that all cities have,” said Uli Hellweg, the managing director of the project.
While vastly smaller in scale than the glitzy HafenCity, the IBA project looks at innovative ways of remodeling existing urban infrastructure. Certifying and funding some 50 smaller projects, the IBA is part living laboratory, part city-size exhibition — meant to make Hamburg an example for other cities, regardless of their means and the level of their ambitions.
Started in 2007, the IBA Hamburg projects are to be exhibited to the world in 2013.
On a residential street, in the midst of a freshly remodeled postwar public housing tract, stands a new community center. Made of glass and modern sustainable materials, the two-room multipurpose building seems like something out of a convention center with a far fancier address. The Türkischer Elternbund, an association of Turkish parents that runs an after-school tutoring program, shares the space with the occasional evening concert or workshop on taxes. Residents can rent the space for wedding receptions. The community surrounding the shared space was consulted before the plans were drawn up.
The community center offers solutions to the three main problems posed by the IBA, which like other projects in Wilhelmsburg — while practical for the island’s inhabitants — is meant to be a model for cities of the future. And, while most projects try to straddle all three themes, many have a clear focus on green building.
By 2050, Wilhelmsburg is supposed to be energy neutral, meaning that the neighborhood will produce as much energy locally, through wind, sun and thermal sources, as it consumes from both renewable and fossil fuels.
In Wilhelmsburg, say IBA organizers, the debate on climate change is not just theoretical. Because of its geography, Wilhelmsburg is threatened by rising sea levels. In 1962, a flood broke the dikes, killing hundreds and causing extensive damage. While the historic flood was not caused by man-made climate change, the fear of rising sea levels looms large on the river-island, Mr. Hellweg said.
A World War II bunker, built in 1943 to house 30,000 people and since derelict, will be retooled by 2013. It will be covered in photovoltaic cells to provide electricity and house a giant tank that stores heated industrial gray water to produce energy.
The city’s old, toxic garbage dump has been renamed Energy Hill. The methane gas it gives off is being piped to the local copper smelter for energy. Wind turbines cover it and will eventually supply about 4,000 homes.
Low-income housing projects are being revamped in consultation with residents. When asked what they most desired, residents didn’t bother to fill out and return the bureaucratic forms. The IBA-trained student ambassadors went out to interview residents, only to find out that the biggest wish was bigger apartments; the ensuing renovation project changed the design of the old buildings to make the living spaces bigger, while maintaining the same number of units.
A home for the elderly is being built with the city’s third Turkish bath to create a meeting place for the largely Turkish inhabitants and the community at large.
Solar panels are affixed to house rooftops and old buildings retooled for energy efficiency.
Most IBA projects are jointly financed by community organizations, private developers or the city housing authority. Commercial builders who want to demonstrate their role in what many think is a market with vast potential have financed a multitude of environmentally friendly show homes.
Though Wilhelmsburg commuters rely on existing bus and train routes to connect the neighborhood to the rest of Hamburg, the city is building a new subway line to connect HafenCity with the rest of the mass transit system. Hamburg has been experimenting with environmentally friendly electricity for both its extensive bus system and its commuter rail system. The city also recently installed fueling stations for a fleet of electric cars it plans to lease to residents.
The short-term bike rental system, after enjoying a huge success, is being expanded. Currently it consists of about 1,200 bikes available at 93 rental points and transports slightly more than 10 percent of the biking public. City leaders hope that the additional bikes will help decrease the number of cars on the road.
Earlier this year, Hamburg hosted a traveling exhibit that toured Europe. Mounted in old shipping containers reminiscent of the thousands that pass through the city’s deepwater port on the Elbe every year and contribute much to Hamburg’s coffers, the exhibit showed off the city’s progress and pondered the future of urban space.
“This is a big opportunity for Hamburg to position itself in an international context,” said Mr. Kerstan, the chairman of Hamburg’s Green Party, “and to let it be known that it is one of the leaders in one of the big debates of the 21st century.”
Through a partnership with the World Intellectual Property Organisation Academy and the Department of Basic Education, a program, IP4teachers, has been developed to enable teachers to empower their pupils (the youth) to use their innovative and creative minds to develop “local solutions for local problems.” New knowledge, and hence intellectual property, has the ability to serve as a catalyst for social and economic prosperity, so what better way to empower the next generation than by enhancing the creativity that is already so abundant in them. The program can be rolled out to pupils from 6 to about 15 years old and takes place over approximately three months, in three phases. Phase I begins with a short workshop covering the basics of intellectual property, before the teachers are registered for the DL-101 course, the General Course on Intellectual Property. Phase II involves training in the IP4Youth multimedia toolkit (so that teaching can begin). Finally, Phase III addresses sustainability and involves the development of a curriculum for consideration by the national government authority responsible for education.
Performance, particularly on busy sites, can be critical - after all, if you can speed up your code by 10%, that decreases your hardware load by 10%, saving you the need to upgrade. There are a number of ways you can improve the performance of your scripts, and we will be covering as many as we have space for. We will also be dispelling various myths about optimisation, and hopefully by the end of this chapter you will be able to confidently rewrite fundamentally flawed algorithms, tune implementations of good algorithms, make your MySQL queries fly, and more.
Before we begin, I would like to make it quite clear that optimisation is the process of improving performance of your code, whether to use less space or run faster - it is usually a trade-off. Optimised code is not necessarily "perfect code", it is just better than unoptimised code.
Furthermore, there is rarely if ever such a thing as "too fast". In my spare time, I have been working on my own pet project: a gigantic online PHP strategy game. Actions occur at round end, which is triggered by a cron job every ten minutes. With a thousand dummy users in the system, round end takes over seven seconds, during which database writes are locked so that people cannot make changes. In the past I have spent hours and hours just to cut one part of that round end from 0.002 seconds to 0.001 seconds per player - it might not sound like a lot, but as far as I am concerned every single second counts. If you have tried out half of the recommendations here and find you have reduced the run-time for a script from four seconds down to one second, don't stop there - go for the fastest code you can get. |
Whether the Black Death that swept through Europe in the Middle Ages, killing a third of the population, was, as is commonly believed, bubonic plague or, as some British researchers believe, a mutant strain of Ebola virus.
Facts and myths surrounding Jack the Ripper are discussed by historian Christopher Frayling. Included: how the English press of 1888 helped create the image of the Ripper that continues to this day; and a tour of the murder sites in London.
“The Strange Case of Rudolph Hess” recalls a mission undertaken by the Nazi deputy to make peace with the British. Hess was ultimately convicted of war crimes at the Nuremberg Trials and spent the remainder of his life in Spandau Prison. |
ORANGE LAKE — A thick, hundred-acre mix of floating mud, soil and decaying vegetation marks the border of where aquatic harvesters are scooping up floating muck on Orange Lake. The work will go on for the next two months and it will barely make a dent.
The floating muck islands are called "tussocks" by those whose job it is to scoop up the material from lakes. These floating islands — home to vegetation, animals and birds — now cover much of the lake.
Some say it is a natural occurrence and should be left alone. Boat owners and fishermen curse the tussocks when they foul boat motors or make fishing impossible.
The nearly 13,000-acre Orange Lake is on the Marion-Alachua border. The tussocks on the southern tip have gone undisturbed long enough for hundreds of willow and red maple trees to have taken root, reaching more than six feet in height.
The harvesters, powered by paddles, lumber like dinosaurs along the water's surface, taking bites out of vegetative mass near the shore.
"It's all a result of time and no maintenance," said Mike Hulon, operations manager of Texas Aquatic Harvesting Inc., based in Lake Wales.
The company has hauled in three of its largest harvesters: one nearly 90 feet long and 16 feet wide and two others 70 feet long and 14 feet wide. The former is able to scoop 48,000 pounds of wet muck before heading for shore and unloading. The two smaller harvesters scoop 24,000 pounds each.
Hulon and a crew of seven people are contracted with the Florida Fish and Wildlife Conservation Commission to clear out 50 acres along the southern tip of the lake near South Shore Fish Camp. The thickest tussocks, dotted by the trees, will be left alone. Instead, tussocks that block navigation from the southern portion of the lake are the target for removal this time.
To clean most of Orange Lake would take all eight of Hulon's large harvesters operating for the next five years.
For the past several years the three Texas Aquatic boats have been in Lake Hernando in Citrus County, cleaning that water body of the same vegetation that has become problematic in Orange Lake.
Regardless of their size, the harvesters operate the same way. They are powered by paddles in the rear of the boat. Conventional motors would get clogged by all the floating debris and break down.
The harvesters push their way into the floating tussocks and, with a rolling bar, take bites of vegetation and pull it onboard. When the boats can hold no more, they paddle their way to shore and use the same rolling bar to push the vegetation onto awaiting dump trucks, which haul it to an area landowner willing to take the muck for fertilizer.
Texas Aquatic will clear out the same 50 acres it did nine years ago. No one has gone in to clear the lake since.
"This particular lake has not been maintained …and it's absolutely built up," Hulon said. "It's Mother Nature trying to turn the lake into a marsh."
The tussocks are formed when vegetation dies and sinks to the bottom and decomposes. That material then rises again to the surface, creating a substrate onto which new vegetation can grow. The cycle repeats itself season after season until the floating tussocks are feet thick and dense enough to support even trees.
Complicating the cycle is that much of Orange Lake often goes dry, making it impossible for the harvesters to work and boaters and fishermen to utilize the lake.
The 90-foot harvester can clear about 0.43 acres per day. The 70-foot harvesters clear about 0.2 acres a day. The company charges about $225 per hour per craft, but adjusts its price based on the amount of work offered, Hulon said.
The tussocks often move based on the way the winds blow, Hulon said. That makes the job more difficult because an area just cleared could become congested with the vegetation if the winds send them the wrong way.
Tussocks like those in Orange Lake occur naturally, said Patrick McCord, the project's manager for the Fish and Wildlife Conservation Commission.
But the situation is complicated by the way man-made structures have affected the lake, he said.
One example is U.S. 301, which slices through much of the southeastern portion of Orange Lake. Water from the area has also been diverted into Orange Lake, thus altering the area watershed.
"There's such an imbalance with the lake's tussocks now," he said.
"But it's a combination of things. There's also a lot of nutrients in the lake … and another issue is with the lake going dry," McCord said.
When that happens, more vegetation grows on the newly dry areas. And when the water level rises, so does some of the vegetation.
McCord said the Fish and Wildlife Conservation Commission could do minimal maintenance to the lake and provide ample habitat for wildlife. But that would not leave much to help boating and fishing, he said.
The past 20 years have been the least consistent when it comes to Orange Lake's water levels, he said. That makes financial planning more difficult because money for lake projects often has to be requested more than a year in advance, while Orange Lake's depth can drop significantly within weeks.
Much of Orange Lake's future will be determined by what the community wants.
"We are looking for our stakeholders … to guide us by telling us what they want," McCord said.
Robbie Shidner is not shy about saying what he wants for the lake. The owner of South Shore Fish Camp bought the business in 1999.
"The lake was beautiful. The water was up high …and pretty clear," he said. "This was my dream job."
A few years later things went bad and business with it. A sinkhole in the lake took some of the water and drought took much of the rest, Shidner said.
"We normally have an active hurricane season to give us some relief and give us water … but that hasn't happened," he said.
Shidner, 53, works side jobs to help make ends meet. His wife works for the U.S. Postal Service.
"We have to be good stewards," he said. "(But with all the man-made area changes) it is now a man-made lake and man has to maintain it … and they haven't done that."
Contact Fred Hiers at 867-4157 or [email protected]. |
By Mojdeh Bayat
A copublication with the Council for Exceptional Children (CEC), Addressing Challenging Behaviors and Mental Health Issues in Early Childhood focuses on research-based strategies for educators to address challenging behaviors of children during the early childhood and elementary school years. Using research from the fields of neuroscience, child development, child psychiatry, counselling and applied behavior analysis, the author presents simple techniques for teachers to manage behaviors and promote mental health and resilience in children with challenging behaviors.
Addressing Challenging Behaviors and Mental Health Issues in Early Childhood provides a framework for best practices that are empirically based and have been successfully used in the classroom. An appreciation of the deep understanding of culture as it affects curricular approaches, family engagement, and child growth and development is applied throughout this comprehensive, multidisciplinary resource. Bayat references the latest research in the field of child mental health and offers educational and intervention approaches that are appropriate for all children, with and without disabilities.
Read Online or Download Addressing Challenging Behaviors and Mental Health Issues in Early Childhood PDF
Best curricula books
Teachers struggle on a daily basis to deliver quality instruction to their students. Beset by lists of content standards and the accompanying “high-stakes” accountability tests, many educators sense that both teaching and learning have been redirected in ways that are potentially impoverishing for those who teach and those who learn.
This book offers research-based models of exemplary practice for educators at all grade levels, from primary school to college, who want to integrate human rights education into their classrooms. It includes ten examples of projects that have been successfully implemented in classrooms: from elementary school, from middle school, three from high school, from community college, and one from a university.
Teachable Moments examines what it means to be a teacher in our school buildings today, through some of the most impactful reforms to the fabric of American education. As administrators, we see the push to create data tables and pie charts in an attempt to draw conclusions about improving instructional practices to encourage student performance.
Connect nature play, outdoor experiences, and STEM learning for children with activities, real-life examples, and educator resources. Nurture young children's innate tendencies toward exploration, sensory stimulation, and STEM learning when you connect outdoor learning and the STEM curriculum. Discover the developmental benefits of outdoor learning and how the rich variety of settings and materials in nature gives rise to questions and inquiry for deeper learning.
- The Teacher's Ultimate Planning Guide: How to Achieve a Successful School Year and Thriving Teaching Career
- Web-Based Teaching and Learning across Culture and Age
- Creative Arts in Education and Culture: Perspectives from Greater China: 13 (Landscapes: the Arts, Aesthetics, and Education)
- Science Adventures with Children's Literature: A Thematic Approach (Through Children's Literature)
- Curriculum as Institution and Practice: Essays in the Deliberative Tradition (Studies in Curriculum Theory Series)
- Reconceptualizing STEM Education: The Central Role of Practices (Teaching and Learning in Science Series)
Additional resources for Addressing Challenging Behaviors and Mental Health Issues in Early Childhood
Addressing Challenging Behaviors and Mental Health Issues in Early Childhood by Mojdeh Bayat |
Alcohol abuse seems, at first glance, to be less harmful than other drugs. After all, it is legal and most people can indulge without major consequences. When alcohol use turns into alcoholism, however, the damaging effects it causes to the body can be tremendous. Alcohol can lead to immediate death from alcohol poisoning, but more often the effects are drawn out and painful over time. Medical News Today explains, “Alcohol contributes to over 200 diseases and injury-related health conditions including dependence and addiction, liver cirrhosis, cancers, and unintentional injuries such as motor vehicle accidents, falls, burns, assaults, and drowning. Around 88,000 people in the U.S die from alcohol-related causes every year. This makes it the third leading preventable cause of death.” Alcoholism can lead to a variety of health concerns, including liver disease, pancreatitis, cardiomyopathy, stomach ulcers, cancer, immune system dysfunction, nerve damage, brain damage, osteoporosis, and mental health problems. One of the most well-known symptoms of alcoholism is liver damage. Alcohol’s effect on the liver can range from immediate to long-term. Cindy Kuzma, in a 2017 Vice article entitled How Alcohol Affects the Liver, explains, “In some cases, the consequences are truly frightening: A single episode of serious bingeing can cause a condition called acute alcoholic hepatitis, an inflammation of the liver, says Hardeep Singh, a hepatologist with St. Joseph Hospital in Orange, California. Symptoms include abdominal pain, nausea, and vomiting. (This is a very serious problem—about half the time, patients die within a month.)” In the long-term, fat begins to build up in the liver and interferes with necessary functions. This can lead to steatosis (fatty liver), alcoholic hepatitis, fibrosis, and cirrhosis.
Alcohol’s effects on the heart are just as damaging. According to Drink Aware, alcohol abuse can, over time: “Increase the risk of high blood pressure. Drinking excessive amounts of alcohol causes raised blood pressure which is one of the most important risk factors for having a heart attack or a stroke. Increases in your blood pressure can also be caused by weight gain from excessive drinking,” and, “Heavy drinking weakens the heart muscle, which means the heart can’t pump blood as efficiently. It’s known as cardiomyopathy and can cause premature death, usually through heart failure. The heart may be enlarged.” Alcoholism is a progressive disease and, if left untreated, is almost always fatal.
Your story doesn’t have to be one of diminished health as a result of alcoholism. You can make the decision to seek help now and begin building a brighter future in sobriety. Oceanfront Recovery, located in beautiful Laguna Beach, offers a Detox program that is staffed with a compassionate team of care providers with the goal of making the process as comfortable and painless as possible. For information about Detox and other individualized treatment options, please call today: (877) 279-1777 |
According to the U.S. Consumer Product Safety Commission, an estimated 226,100 toy-related injuries were treated in U.S. hospital emergency rooms in 2018. Almost half of those incidents were injuries to the head and, unfortunately, most of those happened to children under the age of 15. It’s important to think about the safety of any gift you’re giving, especially if it’s a gift for a child!
Tips from AAO.org for choosing safe toys
- Avoid purchasing toys with sharp, protruding or projectile parts.
- Make sure children have appropriate supervision when playing with potentially hazardous toys or games that could cause an eye injury.
- Ensure that laser product labels include a statement that the device complies with 21 CFR (the Code of Federal Regulations) Subchapter J.
- If you give a gift of sports equipment, also give the appropriate protective eyewear with polycarbonate lenses. Check with your ophthalmologist to learn about protective gear recommended for your child’s sport.
- Check labels for age recommendations and be sure to select gifts that are appropriate for a child’s age and maturity.
- Keep toys that are made for older children away from younger children.
- If your child experiences an eye injury from a toy, seek immediate medical attention from an ophthalmologist.
The American Academy of Ophthalmology urges parents to avoid buying toys that can cause serious eye injuries, even blindness. This article was condensed from their EyeSmart article collection, and was written by Beatrice Shelton and edited by Anni Delfaro. Read the full article, including a helpful video, at AAO.org. |
Is the eco-efficiency in greenhouse gas emissions converging among European Union countries?
Eco-efficiency refers to the ability to produce more goods and services with less impact on the environment and less consumption of natural resources. This issue has become a matter of concern that is receiving increasing attention from politicians, scientists and academics. Furthermore, greenhouse gases emitted as a result of production processes have a heavy impact on the environment and are the foremost drivers of global warming and climate change. This paper assesses convergence in eco-efficiency from greenhouse gas emissions in the European Union (EU). Eco-efficiency is assessed at both country and greenhouse-gas-specific levels using Data Envelopment Analysis techniques and directional distance functions, as recently proposed by Picazo-Tadeo et al. (2012). Then, convergence is evaluated using the Phillips and Sul (2007) approach that allows testing for the existence of convergence groups. Although the results point to the existence of different convergence clubs depending on the specific pollutant considered, they signal the existence of at least four clear groups of countries. The first two groups are composed of core EU high-income countries (Benelux, Germany, Italy, Austria, the United Kingdom and Scandinavian countries). A third club is made up of peripheral countries (Spain, Ireland, Portugal, Greece) together with some Eastern countries (Latvia, Slovenia), and the remaining clubs consist of groups containing Eastern European countries.
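The full empirical model is not reproduced in this abstract; as a purely illustrative aid, the directional distance function at the heart of the approach can be written as a small linear program per country. The sketch below (in Python with scipy, using hypothetical data, a single input, one desirable output, one undesirable output, and direction g = (y0, b0)) is a minimal approximation of the idea, not the authors' actual specification:

```python
# Minimal directional distance function DEA sketch (illustrative only).
# Hypothetical data: one input x, one desirable output y (e.g. GDP),
# one undesirable output b (e.g. GHG emissions) for four countries.
import numpy as np
from scipy.optimize import linprog

x = np.array([10.0, 12.0, 8.0, 15.0])   # input (e.g. energy use)
y = np.array([20.0, 22.0, 15.0, 30.0])  # desirable output
b = np.array([5.0, 9.0, 4.0, 7.0])      # undesirable output (emissions)

def directional_inefficiency(k):
    """max beta s.t. sum(l*y) >= y_k*(1+beta), sum(l*b) <= b_k*(1-beta),
    sum(l*x) <= x_k, l >= 0 -- direction g = (y_k, b_k)."""
    n = len(x)
    c = np.concatenate(([-1.0], np.zeros(n)))   # linprog minimises, so use -beta
    A_ub = np.vstack([
        np.concatenate(([y[k]], -y)),           # beta*y_k - sum(l*y) <= -y_k
        np.concatenate(([b[k]], b)),            # beta*b_k + sum(l*b) <=  b_k
        np.concatenate(([0.0], x)),             # sum(l*x)            <=  x_k
    ])
    b_ub = np.array([-y[k], b[k], x[k]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]

for k in range(len(x)):
    print(f"country {k}: inefficiency beta = {directional_inefficiency(k):.3f}")
```

A value of beta equal to zero marks a country on the eco-efficiency frontier; larger values indicate how far, proportionally, desirable output could be expanded and emissions contracted relative to best practice.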
Date of creation: March 2013
Q5: Series and Parallel DC Circuits
Find the voltage Vab across the open circuit in the circuit shown in Fig 5.
The 10 ohm resistor has zero current flowing through it because it is in series with an open circuit (it also has zero volts across it). Consequently, voltage division can be used to obtain V1. The result is
V1 = 100 * 60 / (40 + 60) = 60 V
Then a summation of voltage drops around the right-hand half of the circuit gives Vab – 30 – 0 – 40 + 10 = 0. Therefore, Vab = 60[V] |
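As a quick numeric sanity check of the two steps above (the schematic in Fig. 5 is not reproduced here, so the 100 V source, the 40/60 ohm divider and the 30 V, 40 V and 10 V loop terms are taken directly from the worked solution, not from the figure):

```python
# Numeric check of the voltage-division and KVL steps above.
V_source = 100.0
R1, R2 = 40.0, 60.0

V1 = V_source * R2 / (R1 + R2)      # voltage division -> 60 V
print(f"V1  = {V1:.1f} V")

# KVL around the right-hand half: Vab - 30 - 0 - 40 + 10 = 0
Vab = 30.0 + 0.0 + 40.0 - 10.0
print(f"Vab = {Vab:.1f} V")          # 60 V, matching the answer above
```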
- What are the different parts of a research study?
- What are the elements of a research paper?
- What is the correct order of writing a research paper?
- What is the most important part of a research paper?
- What are the two main types of research papers?
- Which one is the first section of the research paper?
- What are the parts of research paper and its definition?
- What are the 10 parts of research paper?
- What are the 5 parts of research paper?
- How many parts does a research paper have?
- What is the most difficult part of writing a research paper?
- What are the steps in writing a research?
- What are the components of a good research title?
- What are the 5 parts of qualitative research?
- What are the contents of Chapter 1 in research paper?
- What are the parts of a common research paper in order?
- What are the 10 types of research?
- What are the 10 steps of the research process?
What are the different parts of a research study?
Customary parts of an education research paper:
- Title/Cover Page: contains the paper’s title, the author’s name, address, phone number, e-mail, and the day’s date.
- Introduction and Statement of the Problem
- Limitations of Study
- Discuss your research methodology
- Main Body of Paper/Argument
What are the elements of a research paper?
The basic elements of a research paper are: title, abstract, introduction, literature review, methods, results, discussion/conclusion, and references/bibliography.
What is the correct order of writing a research paper?
When writing an outline, you should keep in mind a typical research paper structure that commonly includes: a title page; an abstract; an introduction; a methodology section; findings/results; discussion; conclusion.
What is the most important part of a research paper?
Title, Abstract, Introduction (Statement of problem, Scope, Literature/Previous work), Method of study, Results, Analysis/Interpretation of Results, Conclusion, then References. Of all these, the most important part of a research paper is the Results, for that is the major contribution of the author to knowledge.
What are the two main types of research papers?
Although research paper assignments may vary widely, there are essentially two basic types of research papers. These are argumentative and analytical.
Which one is the first section of the research paper?
The introduction. For many students, writing the introduction is the first part of the process, setting down the direction of the paper and laying out exactly what the research paper is trying to achieve. For others, the introduction is the last thing written, acting as a quick summary of the paper.
What are the parts of research paper and its definition?
There are several parts that should be included in a research paper: title page, abstract, introduction, methodology chapter, body and conclusion. However, some of these parts may be optional, depending on the type and level of your paper.
What are the 10 parts of research paper?
The major parts of a research paper are abstract, introduction, review of literature, research methods, findings and analysis, discussion, limitations, future scope and references.
What are the 5 parts of research paper?
There are five major parts of a research report: introduction, review of literature, methods, results, and discussion.
How many parts does a research paper have?
The standard format of a research paper has six sections: Title and Abstract, which encapsulate the paper. Introduction, which describes where the paper’s research question fits into current science. Materials and Methods, which translates the research question into a detailed recipe of operations.
What is the most difficult part of writing a research paper?
The second most difficult part of research is the introduction and conclusion. Whereas it is easier to write the body, the body requires a casing around it. The introduction and conclusion summarize the researcher’s point of view and bridge concepts for the reader.
What are the steps in writing a research?
Basic steps in the research process:
- Step 1: Identify and develop your topic. Selecting a topic can be the most challenging part of a research assignment.
- Step 2: Do a preliminary search for information.
- Step 3: Locate materials.
- Step 4: Evaluate your sources.
- Step 5: Make notes.
- Step 6: Write your paper.
- Step 7: Cite your sources properly.
- Step 8: Proofread.
What are the components of a good research title?
The following parameters can be used to help you formulate a suitable research paper title: the purpose of the research; the scope of the research; the narrative tone of the paper (typically defined by the type of the research); and the methods used to study the problem.
What are the 5 parts of qualitative research?
A popular and helpful categorization separates qualitative methods into five groups: ethnography, narrative, phenomenological, grounded theory, and case study.
What are the contents of Chapter 1 in research paper?
The first chapter of a proposal consists of several subheadings or sections: background, research questions, objectives, limitations, rationale, hypothesis (optional), statement of the problem, and methodology.
What are the parts of a common research paper in order?
A complete research paper in APA style that is reporting on experimental research will typically contain a Title page, Abstract, Introduction, Methods, Results, Discussion, and References sections. Many will also contain Figures and Tables and some will have an Appendix or Appendices.
What are the 10 types of research?
General types of educational research:
- Descriptive: survey, historical, content analysis, qualitative (ethnographic, narrative, phenomenological, grounded theory, and case study)
- Associational: correlational, causal-comparative
- Intervention: experimental, quasi-experimental, action research (sort of)
What are the 10 steps of the research process?
A list of ten steps:
- Step 1: Formulate your question.
- Step 2: Get background information.
- Step 3: Refine your search topic.
- Step 4: Consider your resource options.
- Step 5: Select the appropriate tool.
- Step 6: Use the tool.
- Step 7: Locate your materials.
- Step 8: Analyze your materials.
More items…
Desperate to blow off that filthy mood that has everyone around you walking on eggshells? Forget the glass of wine, bottle of whisky, barrel of beer. And forget just sitting in the corner feeling sorry for yourself.
Get up, start walking around the neighbourhood, and perform a few random acts of kindness – and simply wishing well – to strangers.
New research suggests you’ll be feeling better in about 12 minutes.
An experimental search for what works best
“Walking around and offering kindness to others in the world reduces anxiety and increases happiness and feelings of social connection,” said Douglas Gentile, professor of psychology at Iowa State University, in a prepared statement.
“It’s a simple strategy that doesn’t take a lot of time that you can incorporate into your daily activities.”
Dr Gentile and two colleagues tested the benefits of three different techniques intended to reduce anxiety and increase happiness or wellbeing.
They did this by having student volunteers walk around a building for 12 minutes and practice one of the following strategies:
Loving-kindness: Looking at the people they see and thinking to themselves, “I wish for this person to be happy”. Students were encouraged to really mean it as they were thinking it.
Interconnectedness: Looking at the people they see and thinking about how they are connected to each other. It was suggested that students think about the hopes and feelings they may share or that they might take a similar class.
Downward social comparison: Looking at the people they see and thinking about how they may be better off than each of the people they encountered.
The study, published in the Journal of Happiness Studies, also included a control group in which students were instructed to look at people and focus on what they see on the outside, such as their clothing, the combination of colours, textures as well as makeup and accessories.
All students were surveyed before and after the walk to measure anxiety, happiness, stress, empathy and connectedness.
The researchers compared each technique with the control group and found those who practiced loving-kindness or wished others well felt happier, more connected, caring and empathetic, as well as less anxious.
Feeling sorry for someone doesn’t cut it
The interconnectedness group was more empathetic and, well, connected.
Downward social comparison showed no benefit, and was significantly worse than the loving-kindness technique.
“At its core, downward social comparison is a competitive strategy,” said Dr Dawn Sweet, senior lecturer in psychology.
“That’s not to say it can’t have some benefit, but competitive mindsets have been linked to stress, anxiety and depression.”
The researchers also examined how different types of people reacted to each technique, and what they found came as something of a surprise.
They expected those who were naturally mindful to benefit more from the loving-kindness strategy, and anticipated that narcissistic types might have a hard time wishing for others to be happy.
Even selfish people benefited
“This simple practice is valuable regardless of your personality type,” said Lanmiao He, a graduate student and part of the experimental team.
“Extending loving-kindness to others worked equally well to reduce anxiety, increase happiness, empathy and feelings of social connection.”
Dr Nicholas Hookway is Senior Lecturer in Sociology at The University of Tasmania. His research focuses on how morality, identity and giving behaviours are being reshaped by wider social change. A long-running project looks at how kindness is socially distributed among different age groups.
Regarding the Iowa study, he told The New Daily: “I’m not surprised that kindness reduces a range of modern ailments – anxiety, depression – because it expresses something fundamental, that we are relational and interdependent human beings.”
He said there was an “infrastructural quality” to kindness that is barely noticed but fundamental.
“Like roads or electricity, everyday acts of care and support are both banal and deeply significant, often unnoticed and invisible but key to making society possible,” Dr Hookway noted. |
For many people, giving to charity is not only part of their moral code, but also a part of their overall financial plan. If we have the means to help others as well as ourselves, it can bring not only a sense of satisfaction, but can also be helpful to someone less fortunate. The tax deduction certainly doesn’t hurt.
Leaving the debate on altruism for another day, let’s look at the act of giving. The moment a donation is made, two of the above elements are met. We feel pride for having done something, and the government will recognize that act in the form of a reduced tax bill. The real question left unanswered, and probably the most important, is if your donation was able to help someone or not.
It is, of course, quite difficult to track your specific dollar, but there are some websites that can help to get an idea of how your donation will be used. If you give $100 to a charity, which in turn uses $30 of that for fund-raising, and another $25 to pay for administration and salaries, then you have to question if your donation was used effectively.
Charity Navigator has been a great tool for looking at American charities. It assigns a rating based on how effectively a charity uses the money it receives. There has been a lack of such a site in Canada.
While the Charities and Giving section of the CRA offers some good information, it is not necessarily easy to compare charities.
Starting last year, however, Money Sense magazine started The Charity 100, a list of the top charities in Canada. They set out criteria and graded the charities based on how effectively money was being used. It certainly isn’t as extensive as Charity Navigator, but it does offer a great starting point for Canadians trying to decide where their money will help the most. |
Hamilton Wright, a doctor and State Department official who represented the United States at the International Opium Commission in Shanghai in 1909, and who was probably more influential than anyone else in the government on drug policy, reported the following in 1910: "The use of cocaine by the negroes of the South is one of the most elusive and troublesome questions which confront the enforcement of the law in most of the Southern states" (1910:49). He went on that the drug "is often the direct incentive to the crime of rape by the negroes of the South and other sections of the country" (1910:50).
Was there any evidence for this?
Green, who examined admissions to the Georgia State Sanitarium from 1909 to 1914 (a total of 2,119 blacks) found only three cases of narcotic addiction among black patients in contrast to 142 "drug psychoses" among whites. Of the three, cocaine was used by itself once, and once in combination with morphine and alcohol. The third case involved the opiate laudanum (Green 1914:701). Green suggested that the very low cash income of blacks precluded their use of drugs, but predicted a higher prevalence rate in the North where "the negro is more prosperous" (1914:702).
Other data confirm low incidence and prevalence rates for the opiates among Southern blacks. Roberts (1885) reported an almost insignificant case rate in the Carolinas. In 1913, in Jacksonville, Florida, a survey of prescription records turned up 28.8% black opiate users, but since over half of the city's total population was black, the survey confirmed that "the white race is more prone to use opium than the negro" (Terry and Pellens 1928:25). Two years later in Tennessee, Brown found only 10% blacks among registered opiate users - significantly less than their proportion in the state overall (Brown 1915).
Although blacks in the Northern cities were hardly prosperous, and in the peak periods of unemployment (1908, 1914 and 1919-21) they were relatively worse off, they did enjoy higher wage rates than the Southerners. Was this associated with higher rates of narcotic use?
Two studies of Washington's institutionalized population - one of 175 workhouse inmates and another of patients treated in the city's hospitals between 1900 and 1908 - indicate that the number of cocaine users in that period was very small compared with the size of the alcoholic or even the opium addict population, and no particular concentration of blacks was observed (President's Homes Commission 1909:252 - 254).
Of course, there may be a large error of estimation in reliance on institutional figures, if we suppose that blacks would be less likely than whites to seek or receive treatment for drug addiction at sanitaria or hospitals. However, it does appear that the picture provided by institutional counts matches that given by close and involved observers such as the police.
Bloedorn, for example, provided evidence from admissions statistics of Bellevue Hospital that cocaine use in New York peaked in 1907 and dropped quite sharply from 1908 to 1909, remaining at a low level through the war (Bloedorn 1917). An almost identical pattern was reported by the chief of Washington's police, who described the cocaine problem as reaching "alarming proportions" around 1906-07, but substantially diminishing after the passage of the Pure Food and Drugs Act in 1906: "My information" he reported, "is that the sale of cocaine is about one-tenth of what it was before the present law went into effect" (President's Homes Commission 1909:255).
The implication to be drawn from the Homes Commission papers was that few officials regarded the use of cocaine as either an especially black problem, or, after 1909, as serious as the problem of heroin use, which began to develop at that time. Why then did Wright, who had read these same reports, insist on declaring that "the misuse of cocaine is ... the most threatening of the drug habits that has ever appeared in this country" (1910:50) and that the principal carriers of the threat were black?
Fragmentary evidence indicates that blacks tended to use patent medicines more than whites in general. This reflected high relative mortality rates for influenza and bronchial infections (e.g. catarrh) (President's Homes Commission 1909:210; Historical Statistics of the U.S. 1960:26, 33). There is also an indication that even where mortality rates were very similar, as between blacks and working-class whites in the Northern cities, blacks continued to spend a greater proportion of their income on medicine and health care (Du Bois 1909; Weber 1909; Kennedy et al. 1914; Helmer 1974).
This is relevant insofar as the common medicines for the treatment of pulmonary bronchial disorders were at this time compounded of opiate and cocaine mixtures (Young 1961). This suggests that blacks may have consumed relatively more narcotics on a per capita basis, at least in the form of patent medicine. One might be tempted to explain the higher rates of narcotic addiction in the Southern states (pre-war period) as a consequence of this medically induced exposure to drugs, but since we have already shown that blacks were in fact under-represented amongst drug users in the region, this particular explanation is unsatisfactory.
It is possible that another factor may have been at work in stimulating cocaine use (and other narcotics) in the South: Prohibition. Between 1880 and 1910 this had spread from state to state, most rapidly and extensively in the South, and there were press reports at the time claiming that one of its effects had been to increase the substitution of drugs for liquor. On the other hand, black consumption of alcohol was far less than that of whites (Helmer 1974), so that Prohibition was less meaningful to them, and even at the price Wright quotes for cocaine in 1910 (25 cents a grain) few blacks working as sharecroppers or as laborers could have afforded it regularly and still have eaten and paid the rent.
The plain fact is that Wright, the chief authority for the claim of a black cocaine problem and later the virtual author of the Harrison Bill legislation to ban it, was reporting unsubstantiated gossip and quite dishonestly misrepresented the evidence before him. As evidence already quoted revealed, cocaine use reached a peak in 1907 and went sharply down thereafter. The import figures bear this out: in 1907, 1.5 million pounds of coca leaves entered the country; the next year this was cut by more than half (Wright 1910:33).
But if official concern about the black cocaine problem was based on myth, we find that when blacks in fact began using drugs on a wider scale, almost no notice was taken of it. Figure 1 illustrates the racial composition of the narcotic addict population in various cities and areas up to 1940, as provided by available surveys.
Whites clearly predominated in every case, Northern and Southern alike, and the Jacksonville group amounted to the largest proportion of black addicts for nearly 30 years. What the chart does not indicate are the major shifts in the black population from South to North, and the consequent change in the relative size of the black and white populations from place to place. Since these will have affected the racial proportions of the addict group also, what we need to express is the relative likelihood of blacks becoming heavy narcotics users compared with whites over the same period.
A simple way to express this is to take the ratio of black to white users for each area and divide it by the ratio of the black to white total population for the same place. At unity we can say that blacks were as likely to use narcotics as whites in that locality; for fractions less than one, the smaller the score, the more under-represented blacks were among the users, and above unity, the larger the score, the more over-represented and hence more likely they were to become users as compared with whites.
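As an illustration of the index just described (the figures here are invented for the example, not drawn from the surveys), it can be written as:

```latex
\text{index} = \frac{B_{\text{users}} / W_{\text{users}}}{B_{\text{pop}} / W_{\text{pop}}}
```

so that, for instance, a city with 20 black and 180 white identified users in a population that is 30 per cent black would score (20/180) / (30/70) ≈ 0.26, marking blacks as substantially under-represented among users there.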
Right at the end of the period we are considering, the evidence of the New York City Narcotic Clinic is especially interesting because it is the first reported instance of an over-representation of black narcotics users, and hence of a higher prevalence rate for them as compared with whites. Yet the facts were almost totally ignored. The City Health Commissioner, reporting on the drug problem in 1920, failed to mention the race of the clinic's patients: what struck him most was that the majority of them were under twenty-five years of age. He reported that over two-thirds were straight heroin users (Copeland 1920); only 10% admitted to mixing cocaine with heroin or morphine; and an insignificant number claimed to prefer the use of cocaine itself. The clinic experienced almost no demand for it. In other words, the drug which ten years before publicists and legislators had blamed on the blacks was relatively uncommon in 1920, whereas the heroin habit, which young New York blacks were developing at a faster rate than whites, was all but invisible.
During the war and immediately afterward, the newspapers were curiously silent on the race of narcotic users - curious because stories of black sexual assaults on whites were legion, and because just a few years before cocaine had been widely thought to be involved in this kind of violence. In 1919 racial tension reached a high point. Lynch mobs murdered 78 blacks in that year, many of them accused of rape, and race riots broke out in several cities including Washington and Chicago, where again claims of sexual assault were involved. Neither cocaine nor other drugs were mentioned in the press as a contributing cause. Instead the blame was laid on socialist and radical agitators, members of the Industrial Workers of the World, the Bolsheviks, even on Harvard graduates (Helmer 1974).
We learn something important about the ideology of narcotics from this. For just as it was pure invention that Bolshevik agitators had led blacks to riot during 1919, so it was an invention of the same kind that at the beginning of the decade cocaine had been "a potent incentive in driving humbler negroes all over the country to abnormal crimes" (Wright 1910:51). Both functioned as myths to explain why it would happen that otherwise docile, passive (humble was Wright's term for inferior) black people would riot against the impoverished conditions in which they were confined.
In the period just considered, this condition, along with the condition of the entire working class, experienced several fluctuations, each of them paralleled by evidence or claims of a severe drug problem. Unemployment, for example, rose sharply between 1907 and 1908 (the peak of the cocaine problem), between 1913 and 1914 (the onset of the heroin problem), and again between 1919 and 1921.
The war itself had stimulated the reconstruction of the Northern labor force by inducing the large-scale migration of blacks out of the rural South to man the labor-scarce urban economy. As this economy changed with the demobilization from a condition of labor scarcity to labor surplus, tension between working-class whites and blacks rose as competition for jobs and declining wages was forced upon them (Tuttle 1970). Rape, crime, drug addiction, and bolshevism were elements of the hostile stereotype to emerge in this conflict, and their relation to the real state of things was immaterial. The assault against white women, the Bolshevik attack on Americanism, and the image of the cocaine fiend were all constituents of a common ideology designed to justify and legitimize the repression with which black social and economic claims were met. They were not additive, however: either cocaine led blacks to run amuck or else bolshevism did, but never both. It took another thirty years before those two could be put together.
First response by MARTIN DODGE, theorist of spatiality and mapping practice at the University of Manchester and co-editor of The Map Reader (2011); second response by GILES GOODLAND, lexicographer and poet (A Spy in the House of Years, 2001, Capital, 2006).
MARTIN DODGE: Word-Geography of Cornwall
An apparently simple map of the toe of England is stamped authoritatively, in capitalised sans serif, as BUSSA, intimately conjoined to an elongated zone of KEEVE. Such maps, serving as a commonsense template onto which all manner of thematic information can easily be presented, are part and parcel of research reports and academic papers. The presentation purports to be straightforward, the map as an accurate conveyance of the results from author to reader, typically designed to display some singular overt meaning on spatial distribution and thereby connoting a simple geographical explanation. The familiar, trustworthy cartographic voice speaks to us thus: “as you can clearly see BUSSA covers this area of Cornwall and this is certainly being caused by ….”
Thematic cartography is a powerful mode of scientific communication for geographical patterns, along with its near cousins, the diagram for spatial processes and the graph for statistical trends. The invention of diagrammatic representations of data is relatively recent in the history of knowledge and some argue it to be one of the underpinnings of the modern world (Bender and Marrinan 2010; for discussion of the evolving capability of such diagrams and thematic maps see Michael Friendly’s ‘Milestones’ project). This avowedly uncomplicated example mapping of lexical data, selected from the 1980 volume Word-Geography of Cornwall by D.J. North and A. Sharpe, deploys much of the iconicity of diagrammatic objectivity to enhance truth claims of the real linguistic patterns being brought into being by the act of inscription. The austerity of the black and white display, the planar flatness of the presentation and the willingness to employ empty white space are all subtle declarations of honesty: “see, I have nothing to hide”. The evident concentration on naming numerous places in Cornwall with consistent little labels implies a meticulous attention to detail and believability of the data. The crispness of the wrinkled Cornish coastline, with many inlets and jutting headlands, further connotes conscientiousness of the cartography, but from inspection it is clearly also heavily generalised to meet scale rules and highly selective in what places are deemed worthy of naming.
The overall shape of the land works hard to separate the knowable territory of data from the void of Devonshire beyond the border and the endless seas running off to the edges of the display. The county of Cornwall as a meaningful, immutably self-contained spatial unit stands rock-like in the nothingness. The solidity, the concreteness, is exaggerated in this case by the distinctive and recognisable coastal outline of Cornwall. The shape is one of the most emblematic elements of the map of the British Isles, which is itself part of the essential national iconography, endlessly (re)represented to us, a shape so familiar from being seared into our memories from an early age, a mental cartographic construct of Englishness. In many ways the map is almost a blandscape: it only shows what needs to be seen in terms of linguists’ simplified data and is deliberately blind to the complexity of the landscapes of Cornwall that reflect real language diversity.
Mapping what we should see
The cartographer seeks to focus our attention squarely on the six areas overplotted on the supposedly real territory of Cornwall. These are oddly shaped and boldly labelled in words that are English but somehow unusual. Their sinuous contour shapes are intriguing to the eye and start the brain thinking. What might be the reason for the little CULS bubble and the distinction between zones, such as the drooping loop of TRENDLE and the concavity of TRUNDLE…? The patterns being mapped surely imply that something of the underlying process is geographical, perhaps the physical separation of people in the past caused by wide rivers, the effects of elevation, underlying geological conditions, the changing agricultural landscape. Yet from scanning the map it is not immediately obvious, at least to my eye, what might really be causing the shapes. Why, for example, does KEEVE separate the area of STUG from the CULS? Is this a deep glaciated valley or a dividing ridge line of hills?
Of course, the distribution and shapes of the word zones that superficially appear to be accurately mapped could be largely unrelated in terms of the underlying geography. Their particular lexical pattern might be spurious, not spatially derived from reality but an artefact of the data collection and processing. A different sampling strategy could well have given rise to a very different-looking spatial pattern of words. This is a serious issue with the validity of much spatial analysis, one that is oftentimes exacerbated by plausible-looking thematic cartography. It has been termed MAUP, the modifiable areal unit problem, which, as renowned quantitative geographer Stan Openshaw (1982, 4) noted: “[i]f the areal units or zones are arbitrary and modifiable, then the value of any work based upon them must be in some doubt and may not possess any validity independent of the units which are being studied.” A related technical issue is that spatially interpolating, from a limited number of sample points, to create continuous isolines [like contours for elevation] can be a notoriously subjective process.
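A minimal sketch of MAUP in code; the sample points and zone boundaries below are invented for illustration and have nothing to do with the actual Word-Geography of Cornwall survey data:

```python
from collections import Counter

# Invented dialect observations along a one-dimensional transect:
# (position, word recorded at that location)
samples = [
    (1, "BUSSA"), (2, "BUSSA"), (3, "KEEVE"), (4, "KEEVE"),
    (5, "KEEVE"), (6, "BUSSA"), (7, "BUSSA"), (8, "BUSSA"),
]

def dominant_word_by_zone(samples, boundaries):
    """Aggregate the same points into zones and report each zone's most common word."""
    zones = []
    for lo, hi in boundaries:
        words = [w for x, w in samples if lo <= x < hi]
        zones.append((f"{lo}-{hi}", Counter(words).most_common(1)[0][0]))
    return zones

# Two equally defensible zonings of the same observations give different "maps":
print(dominant_word_by_zone(samples, [(1, 6), (6, 9)]))  # [('1-6', 'KEEVE'), ('6-9', 'BUSSA')]
print(dominant_word_by_zone(samples, [(1, 4), (4, 9)]))  # [('1-4', 'BUSSA'), ('4-9', 'BUSSA')]
```

Under the first zoning the western end reads as KEEVE territory; under the second the whole transect reads as BUSSA, even though the underlying observations are identical.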
The sinuously smooth curves of the isogloss ‘envelopes’, suitably demarcated by the solidity of the line work, imply a sharp boundary in the data that is often far from true on the ground, where change more likely arises from gradual shifts. Cartographic design favours the delineation of spatial certainty and there are quite a number of challenges in effectively conveying uncertainty on maps in ways which readers can usefully interpret. Moreover, the ubiquitous choropleth map design, when deployed conventionally, has many problems with strictly divided zones that can only be assigned to one linguistic class and also generates what has been termed the ecological fallacy. This is the situation where a map encourages the reader to erroneously allot the average value for the zone to all individuals within the zone; so for example in the KEEVE zone everyone would be assumed to use Keeve, and that distribution is equal and universal across the zone, which is very unlikely with most social phenomena. These fundamental weaknesses with choropleth representations were pointed out by cartographic thinkers long ago, including by J.K. Wright in 1936, when he mapped population density in Cape Cod and advocated the alternative dasymetric approach to display social data in a more realistic fashion. Given how widely and unreflectively thematic mapping is now used, typically as an adjunct to the main thrust of scholarly analysis, one needs to approach such default ‘scientific’ cartographic representations with knowing eyes and sceptical thoughts.
Maps as words
It is also interesting that cartography, as a visual endeavour premised on graphicacy, is often contrasted with textual artefacts created by writing processes using typographic conventions. This is an overly simplistic binary and many maps are richly textual as well as graphical. Topographic mapping – a general purpose map of terrain and physical landscape – in particular is a deeply typographic enterprise; just think of the amount and variety of text on an Ordnance Survey Landranger sheet or the density of street names arrayed across the pages of an A-Z atlas. Textual elements on the space of the map itself can have great significance in cartographers’ work communicating information. Text as toponyms is a form of writing in which the spatial position of words has extra meaning through the link of the page position to the geographic location on the ground. There are many technical challenges in the placement of text on maps, along with difficult choices about which features to label within the designated scale of representation – evident in the Word-Geography of Cornwall. (The innate design skills around aesthetically pleasing text placement have proven particularly hard to replicate in software for mapping.)
Toponymic text on a map can also be seen to be full of subjectivity, for example in the ways that social hierarchies are clearly denoted through the chosen script and font size. The spelling of place names itself can be read as a politically loaded practice, particularly in colonial conquest where indigenous rights are removed by place renaming, most evident in the cartographic tracing of territory (see Monmonier 2006). Textual absences, a positive lack of the naming of place, can render areas silenced on the map, and demonstrate most clearly how cartographic practice is not an instrumental mirror of territorial truth but is actively constitutive in the ongoing creation of geographical imaginaries. To understand some of the intersections of the textual with the spatial, critical cartographers have in the past borrowed ideas from literary theory and sought to read the rhetorical position of the map as a text. As J.B. Harley (1989: 7-8) noted in his seminal paper on critical cartography:
“’Text’ is certainly a better metaphor for maps than the mirror of nature. Maps are a cultural text. By accepting their textuality we are able to embrace a number of different interpretative possibilities. Instead of just the transparency of clarity we can discover the pregnancy of the opaque.”
So perhaps lexicography and cartography are not such distinct ways of knowing the world as they might first appear.
Bender, J. and Marrinan, M. 2010. The Culture of Diagram (Stanford University Press, Stanford, CA).
Harley, J.B. 1989. Deconstructing the map. Cartographica, 26, 1-20.
Monmonier, M. 2006. From Squaw Tit to Whorehouse Meadow: How Maps Name, Claim, and Inflame (University of Chicago Press, Chicago).
Openshaw, S. 1982. The Modifiable Areal Unit Problem; CATMOG 38 (GeoBooks, Norwich).
Robinson, A.H. 1982. Early Thematic Mapping in the History of Cartography (University of Chicago Press, Chicago).
Wright, J.K. 1936. A method of mapping densities of population: with Cape Cod as an example. Geographical Review, 26(1), 103-110.
© 2012 Martin Dodge
GILES GOODLAND: Word Map
A bit of bronze, a battered flint, a broken bussa, a single word or expression,—each carries us back to a period when manners, dress, domestic appliances, and the prevalence of a now forgotten tongue were scattered up and down our land. I were down along to cozen Zaccy’s laast ‘count day of our Bal, and had as fitty a pear of ploffy mabyers as one wed wish to put a knife en, and a thoomping figgy pudden, with a little coostom after. Cozen Nic’s Gracey met with a misforten, for she thraw’d over a cloam buzza of scaal cream on the planchen, and scat en all to midjens and jowds, and crazed a squeer. A scrovey great bussa. Thecky owld Pot edn’t no valley ‘toal, ‘tes nort but a owld Bussa what my man berried en tha taty-plat for to taake ‘way tha smill ov pilchurs out ov un. Nonsense, my good wumman, says tha Passun, theere’s Latten sure ’nuff, says Un Polly, an’ that’s owld Zack’s work too, that es! waun day thecky murrick seed un ‘pon tha taable, an’ cut some letters ‘pon un ’cause he’s a braavish schollard, he es. Stop, Mrs. Polglaaze, says Passun, I’ve a got a little picture ov the Pot, an’ tha descripshun put ento English. Put ento English says un Polly, why ‘twor English what our booay wrote ‘pon un, he shawed et to me an’ faather. Passun showed her tha picture an’ raid et down to her, but she stopped un oal to waunce an’ said, Aw, Sur! much larnin’ ‘ave maade ‘ee maazed—’tes nort but thes—lev me raid ut to yer Honor. IT IS JAM POT IT IS. Why a scatt all to midjans and jouds for the nons, A cloam buzza of scale milk about on the scons.
An ale of a similar nature goes in Cornwall by the rather uneuphonious title of Laboragol. A physician, a native of that place, informed him that the preparation was made of malt almost burnt in an iron pot, mixed with some of the barm which rises on the first working in the keeve, a small quantity of which invigorates the whole mass and makes it very heady. Machinery for the Preparation of Tin, Copper, and Lead Ores: Jigging-Machines, Cornish Stamps, Husband’s Pneumatic Stamps, Buddies, Kieves, Calciners, Sluice Frames, Pulverizers, Copper-ore Dressers, the Frue Vanner. St. Nighton’s or St. Nectan’s Kieve is a secluded waterfall, not particularly easy to reach. The chief cascade falls about 40 ft. into a circular basin of rock, the kieve as the Cornish call it. Legend says that St. Nectan had an oratory here, and that when dying he threw the silver bell of his chapel into the waterfall. There is also a tale of two sisters, foreigners, who came to live on the site of his cell. No one knew who they were; they lived and died unknown. Hawker wrote a poem on the subject, in which, with his customary loose archaeology, he gives St. Neot’s name to the spot instead of St. Nectan’s. All trace of the buried treasure was lost, until discovered by the men who were engaged in enlarging the potato-bury, potato-camp, potato-cave, potato-clamp, potato-grave, potato-hale, potato-heap, potato-hog, potato- hole, potato-pie, potato-pile, potato-pit, potato-stack, potato-tump.
A shallow wooden tub for butter, milk, or whey. Used for cooling beer. A brewer’s cooler. A circular trough or tray in which bakers mix their dough. I walked on and seed a clock with a face as big as a baking trendle. A large wooden vessel for milk used at milking-time. A brewer’s cooler. Used chiefly for scalding pigs. A large, shallow, oval tub, made of wood or earthenware, and used for many purposes, chiefly for curing bacon. In common use. The oval tub in which a pig is ‘scalded’ is always called a trendle. A clock with a face as big as a baking-trendle. A circular earthwork. Chisenbury Camp, or Trendle, as it is vulgarly called. The term Trendle is applied to circular earthen works. A large oval tub some five to six feet in its greater axis, used for many purposes, but chiefly for scalding pigs. Vats, tubs, trundles, ladders, poles. Here the old custom of employing sworn and licensed winders is diligently adhered to, and they are engaged to strip off the coarse part of the fleece, and to wind up only the better kind of wool; to tie about half a dozen fleeces together, and to ticket the weight of each bundle, or as it is there called a trendle.
A circular object; specif. : a A wheel, esp. of a wheelbarrow. b A kind of large wooden tub, a thing made and set on low wheels to draw heavy burdens on. Sheep-dung; anything globular. It had a toothed sector on the end of the working-beam, working into a trundle which, by means of two pinions. The suite has a king-sized bed, a sitting area with TV/VCR, a trundle bed, and two baths, one with a Jacuzzi tub. The pen-knife when ground, is worked on a trundle or glazier, which is a wooden wheel about four feet in diameter and two inches, commonly employed to describe portable grinding slabs such as those found at Gwernvale, Etton and The Trundle (i, e. the round hill, a very large British earthwork in the same neighbourhood); medium-rise blocks were surrounded by a one-way motorway system and the trundle of efficient trams.
Stuke, stunt, sturdy, stutter, stug (a vulgar word.) The same word is Cornish also for a milk-pail. A coarse brown earthenware pan of an oval form is called a stugg. This last word is also commonly used. Near Crotern walls, and by the quarry, Us cumm’d right up beside Tresmarry, And just a stugg’d was we ; By Orange Grove, still in the borough, In the town place as we cum thorough, The dairy pans we see. Hanging out on the upper side like the stug or thrumb mats, which we sometimes see lying in a passage. A German machinegun team with a Stug III nearby. Deployed primarily in infantry formations, it was based on the Panzer III chassis and buglehorn stringed and garnished or and in base a stug’s head couped. A stug at gaze in a holly bush (a stag’s head erased is sometimes used). In the panel immediately in front is a group of three stugs. The panel adjoining the inscription bears a representation of St. Michael protecting them from the ravages of stugs and other infects.
Clap a carle on the culs and he’ll shit in your loof. He had wanted to be away from little places, the narrow places of his past, from funny little culs-de-sac with cold culs : assortiment de charcuteries. Poeme en quatre chants, large paper, plates, vignettes and culs de lampe after Eisen, old calf, a Cornish Man, in the Flying Island, etc., by RS, a Passenger in the Hector. All are interspersed with quaint culs and numerous 16th century MS. notes. The conveying it to the different “winzes” or communicating shafts, and the “fast-ends” or culs de sac.
© 2012 Giles Goodland
Objectives of Pilot Projects
The objectives of the pilot projects are to:
- foster a systematic and country-driven process to strengthen human resources, learning, and skills development
- determine specific actions to enhance climate change learning and strengthen learning institutions
- ensure that climate change learning is linked to and helps to achieve national climate change objectives
- augment mobilization of resources for training and skills development from national budgets and external partners (UN organizations/country teams, bilateral donors, foundations)
- ultimately, create a strengthened human resource base in the country to enhance implementation of the UNFCCC
Which state has the most toll roads?
Florida has 719 miles of toll roads crisscrossing the state, the most in the nation, according to federal data.
What state has the highest toll roads?
Florida. The Sunshine State has more toll roads, bridges and causeways than any other state in the union. There are over 700 miles of toll roads in the state, and in Central Florida, they are particularly difficult to avoid.
Why is Route 66 famous?
US Highway 66, popularly known as “Route 66,” is significant as the nation’s first all-weather highway linking Chicago to Los Angeles. … Route 66 reduced the distance between Chicago and Los Angeles by more than 200 miles, which made Route 66 popular among thousands of motorists who drove west in subsequent decades.
Why do we pay tolls on roads?
Why do we pay the toll? In India, a fee is charged on every state or national highway/expressway to recover the cost incurred in constructing and maintaining the roads. This fee is called toll and is a kind of tax. … If this is less, you will be charged based on the actual length of the road.
What is FASTag toll?
FASTag is a device that employs Radio Frequency Identification (RFID) technology for making toll payments directly from the prepaid account linked to it. It is affixed on the windscreen of your vehicle and enables you to drive through toll plazas, without stopping for cash transactions.
Who owns toll roads in America?
The Toll Roads are owned by the state of California and operated by The Transportation Corridor Agencies (TCA). TCA comprises two joint powers authorities formed by the California legislature in 1986 to plan, finance, construct and operate Orange County’s 67-mile public toll road system.
What is a Turnpike Road in USA?
Toll roads, especially near the East Coast, are often called turnpikes; the term turnpike originated from pikes, which were long sticks that blocked passage until the fare was paid and the pike turned at a toll house (or toll booth in current terminology).
Who invented the toll road?
19th-century plank roads were usually operated as toll roads. One of the first U.S. motor roads, the Long Island Motor Parkway (which opened on October 10, 1908) was built by William Kissam Vanderbilt II, the great-grandson of Cornelius Vanderbilt.
Do all states have toll roads?
However, toll roads do not exist in all 50 states, so it is a good idea to check whether you will have to pay for any of the roads you are planning to use. If you are traveling on certain roads in California, New York, Texas, Florida, Georgia, Virginia, New Jersey and many other states, you may encounter a toll road.
What is a turnpike in history?
Turnpikes were originally toll gates that prevented passage along a road unless a toll was first paid. Over time in America the word ‘turnpike’ came to mean a toll road rather than a toll gate. … A gate, called a turnpike, was set across a road to stop a traveler’s passage until a fee, or toll, had been paid.
Who built the first turnpikes in the United States?
The first private turnpike in the United States was chartered by Pennsylvania in 1792 and opened two years later. Spanning 62 miles between Philadelphia and Lancaster, it quickly attracted the attention of merchants in other states, who recognized its potential to direct commerce away from their regions.
What was unique about the first highway in the United States?
It was constructed by a private company rather than by the government.
Do toll roads ever become free?
While there has been one historical case of a toll lane becoming free after its debt was paid, there hasn’t been another since. In that case, in 1977, the turnpike between Dallas and Fort Worth was turned into part of I-30 once its debts were paid. … There are no laws mandating it; toll roads are handled at the state level.
What states do not have toll roads?
As of January 2014, the states of Alaska, Arizona, Arkansas, Hawaii, Idaho, Iowa, Michigan, Mississippi, Missouri, Montana, Nebraska, Nevada, New Mexico, North Dakota, South Dakota, Tennessee, Vermont, Wisconsin, and Wyoming have never had any toll roads, while Connecticut, Kentucky, and Oregon have had toll roads in …
What is the longest road in the United States?
US Route 20. At 3,365 miles, US Route 20, part of the US Numbered Highway System, is the longest road in America.
Why are streets called pikes?
Travelers have used the routes since the city’s founding and perhaps even before, when Native Americans were the only people who frequented Middle Tennessee. But they became pikes in the 19th century, when private companies made the improvements, such as macadam paving, that justified charging a toll.
Which president created highways?
President Dwight D. Eisenhower. From the day President Eisenhower signed the Federal-Aid Highway Act of 1956, the Interstate System has been a part of our culture as construction projects, as transportation in our daily lives, and as an integral part of the American way of life.
Is Toll Free After 3 minutes?
If the waiting time at a toll plaza exceeds 3 minutes, the vehicle is not required to pay toll tax as per the NHAI rules. One can then pass free of cost, i.e. the toll is waived after 3 minutes of waiting.
The nursing image has become a major issue in society as people have different perceptions about nursing. Some believe that nurses do their duties out of kindness. This has influenced the nursing image, as most people do not see nursing as a good profession. Only a few people in society see nursing as an important profession and consider the qualifications of nurses (Younge & Niekerk, 2004). This has led to a shortage of nurses in the country, as few people join the nursing profession.
In addition, nurses do not have an opportunity to express themselves, and this has made it hard for them to talk about the nursing image and has led to a poor working environment (Finkelman & Kenner, 2010). As a result, nurses are required to use various strategies to improve the nursing voice and image. For instance, nurses can use campaigns and the media to improve the nursing image and voice. This paper explores the nursing voice and image. It examines the history of the nursing image and voice. It will also examine how the media portrays the nursing image and how this has affected it.
Lastly, the paper will analyze future challenges (Graham & Claborn, 2006). The nursing image consists of various things. Firstly, the image of nursing involves the perception of nursing by people in the society and how nursing defines and sees itself. Nurses differ on the definition of nursing, and this has made it hard to enhance the nursing image. Florence Nightingale contributed a lot to the development of the nursing image. Nightingale established modern nursing. However, the image of nursing is a result of the folk image, the military image and the religious image.
The folk image of nursing was a result of past cultures and civilizations. During this time, nursing was considered a female role. Most people viewed nursing as an extension of mothering, and mothers and family members who had the right skills were required to offer nursing services. The folk image of nursing consisted of various concepts like care, love, service and even support. These concepts have been transferred into the ethical and professional codes of nursing (Graham & Claborn, 2006). The religious image has also contributed a lot to the nursing image.
The teachings of Christ stressed various concepts that are common in nursing, like love and service to other people. This led to the extension of care to other people in the society. People cared for widows, the sick and the poor. Religious groups and people developed hospitals and home visiting services to serve the sick and the poor. These religious traditions have played an important role in modern nursing, as nurses are supposed to show love and care for the sick in the society. The ethical principles held by religious leaders have been integrated into the ethical codes of nursing and the professional codes of nursing.
Nurses were supposed to offer care and services to people out of love for humankind. The same idea is common among many people in the modern society, and this has influenced the nursing profession negatively. The military image resulted from the need to have people care for those wounded in battle. Women were required to care for the wounded, so they waited at the edge of the battlefields to take care of people wounded in the war. Battlefield medical care led to the establishment of surgical procedures and wound care.
Florence Nightingale also contributed a lot to the development of wound care. Florence became famous after enhancing the care for the wounded at Scutari during the Crimean War in the 19th century. The need for trained and skilled nurses to care for the wounded led to the development of nursing. Florence Nightingale recognised the need to educate nurses so as to achieve optimum care for the sick. As a result, Florence carried out research in sanitation and health to improve nursing practice. She also developed a curriculum that was used to educate nurses (Younge & Niekerk, 2004).
Though Florence Nightingale led to the development of nursing, she also affected the nursing image. Most people believe that Florence Nightingale affected the nursing image through her beliefs. Florence believed that nurses should be lower in rank than doctors. In addition, Florence believed that nurses should not speak in public. Her beliefs have affected the nursing image greatly, as many people do not know much about nurses and the nursing profession. Also, the beliefs have influenced the nursing image and voice, as nurses are not able to express themselves.
In the modern society, nurses are subordinate to doctors and are not allowed to talk in public and therefore have no voice (Graham & Claborn, 2006). In today’s society, the media has affected the nursing image and profession through lack of attention. The media does not give enough attention to nursing and health care, and this has resulted in poor perceptions about nursing. This was supported in a series of studies carried out to determine how the media affected the nursing image. A study was carried out in 1997 to determine how the media portrayed the nursing image and how it affected the nursing image.
The study noted that only a few articles in the newspapers and magazines were related to health care. Articles in the magazines and newspapers linked to health care accounted for less than ten percent of all the articles. Additionally, the results from the study showed that the media portrayed nurses negatively. The nurses were represented as incidental to nursing. The study showed that the public was not aware of the role of nurses. Hence, this has forced nurses to fight for their rights.
The majority of nurses want to be thought of as independent decision makers in their nursing fields instead of being subordinate to the doctors (Graham & Claborn, 2006). There are various factors that will hinder nurses, health care organisations and nursing institutions from improving the nursing image (Buresh & Gordon, 2006). Examples include advancement in technology and the media. The growth in technology has affected the roles of nurses, as nurses are required to integrate technology into health care services. Thus, nurses and nursing institutions should be able to adapt to technological changes in order to improve the nursing image.
In addition, the media will continue to affect nurses and the nursing image as long as it portrays nurses negatively. The media reflects the public's views when it portrays nurses negatively. Hence, nurses and nursing institutions should monitor the media regularly to limit its negative influence on the nursing image. They should be aware of nursing issues in different media and then develop strategies to counteract the negative image created by the media (Kearney, Richardson & Giulio, 2000). In conclusion, the nursing image and voice have affected the recruitment and retention of nurses and nursing students.
Most students find it hard to join the nursing profession because of the bad nursing image and thus join other careers (Kearney, Richardson & Giulio, 2000). The poor nursing image has affected working conditions and the quality of care and has led to high nurse turnover. This is because nurses are not able to cope with the challenges in the nursing field. The nursing image has been affected by the media, leaders and the nursing staff (Buresh & Gordon, 2006). The media has portrayed the nursing profession and nurses badly for the last few decades. For instance, it has shown nurses as being incompetent.
Moreover, the media has not given attention to health care issues and has thus made it hard for people to know the role of nurses and the nursing profession. Leaders have also viewed nurses as subordinate, and this has made them invisible (Finkelman & Kenner, 2010). Advertising nursing and using campaigns will help improve the nursing image, as it will make people aware of the role of nurses and the nursing profession. Also, monitoring the media and adapting to technological changes will help improve the nursing image (Joel & Kelly, 2003).