To begin with, English can be considered the most significant language because it serves as a lingua franca. According to Lenon (2009), a lingua franca is a language used for communication by people with different language backgrounds. English clearly fits this definition, since it is used for communication by people all over the world. For example, English is used as the formal language on campus in some Japanese universities, and it is also spoken by members of the United Nations. Therefore, people should be able to speak and understand English in order to communicate with people from other countries. Furthermore, English is also vital because it has become the language of the internet. Bieber (2011) reports that English makes up 80% of the information on the internet, which means that most of the information there is written in English. For this reason, internet users should understand English if they want to benefit from the internet. Next, because it is the language of the internet, English can also be considered a language for enhancing and disseminating knowledge. The internet contains an abundance of information: students can find many journal articles as resources for their studies, teachers can locate a great number of teaching materials, and anyone can obtain articles to improve their skills. By typing keywords into a search engine, people can improve their knowledge of management, leadership, teamwork, and international languages. Practical tips, such as how to be an effective mother, how to be an effective speaker, and other "how to" knowledge, are also widely available on the internet. For this reason, Bell (2010) argues that everyone can become knowledgeable without school if they are willing to learn from the internet. Similarly, Icon (2011) maintains that the internet has become the greatest learning resource of the 21st century. Finally, many people agree that English helps people succeed in their jobs. In particular, English is used by multinational companies to interview job candidates and to communicate within the company. In fact, multinational companies hold job interviews in English. These companies consider the ability to communicate in English vital, since employees will communicate in English with colleagues from other countries, and the companies will also conduct human resources development training in English. Hence, "English becomes an important tool for developing employees' career" (Hatmanto, 2010). In other words, people are less likely to succeed in developing their careers unless they master English.
Why is a ruby red? The mineral corundum is a crystalline form of alumina: Al2O3. A pure crystal of corundum is colorless. However, if just 1% of the Al3+ ions are replaced with Cr3+ ions, the mineral becomes deep red in color and is known as ruby (Al2O3:Cr3+). Why does replacing Al3+ with Cr3+ in the corundum structure produce a red color? Ruby is an allochromatic mineral, which means its color arises from trace impurities. The color of an idiochromatic mineral arises from the essential components of the mineral. In some minerals the color arises from defects in the crystal structure. Such defects are called color centers. The mineral beryl is a crystalline beryllium aluminosilicate with the chemical formula Be3Al2Si6O18. A pure crystal of beryl is colorless. However, if just 1% of the Al3+ ions are replaced with Cr3+ ions, the mineral becomes green in color and is known as emerald (Be3Al2Si6O18:Cr3+). Why does replacing Al3+ with Cr3+ in corundum produce a red mineral (ruby) while replacing Al3+ with Cr3+ in beryl produces a green mineral (emerald)? Crystal Field Theory was developed in 1929 by Hans Bethe to describe the electronic and magnetic structure of crystalline solids. The theory was further developed through the 1930's by John Hasbrouck van Vleck. Crystal Field Theory describes the interaction between a central metal ion that is surrounded by anions. A quantum mechanical description of the metal ion is employed, with attention focused on the valence shell d, s, and p orbitals. The surrounding anions are typically treated as point charges. The essential insight of Crystal Field Theory is that the geometry of the negatively charged point charges influences the energy levels of the central metal ion. Consider the 3d orbitals of a first-row transition metal. A spherical distribution of negative charge surrounding the metal ion affects each of the five 3d orbitals in the same way and consequently all five 3d orbitals have the same energy. But what happens if the negative charge is not distributed spherically? This exercise depicts the various 3d orbitals for a first-row transition metal. A set of negative charges (white spheres) are positioned around the metal center using one of four geometries: linear, square planar, tetrahedral, and octahedral. (Obviously other geometries are possible, but these four geometries are the most common.) The diagram at the right shows the absolute energy of the individual orbitals (E) or the energy difference from the average (spherical field) energy (ΔE). When the negative charges are infinitely far away (approximated by the maximum displacement in this exercise), all 3d orbitals have the same energy (ΔE = 0 and E = 0). Use the controls to vary the distance between the metal center and the negative charges. Carefully observe how the energies of the orbitals change as the distance becomes smaller and smaller. Answer the questions below and explain the observed behavior. Bear in mind that an orbital represents the distribution of electron density. - How does the orbital energy change as the negative charges get closer to the metal center? - For the linear geometry, why is the 3dz2 orbital more strongly affected by the surrounding negative charge than the 3dxy orbital? - For the square planar geometry, why is the 3dx2-y2 orbital more strongly affected by the surrounding negative charge than the 3dxy orbital? - For which of the four geometries is the change in E greatest when the negative charge is close to the metal center? Why? 
- For each geometry a specific splitting pattern is observed in the ΔE plot. Explain each pattern. - Compare the splitting patterns (ΔE plot) for the tetrahedral and octahedral complexes. Is the splitting in the ΔE plot greater for the tetrahedral or octahedral geometry? Explain the observed behavior.
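For reference when checking your answers, the standard crystal-field splitting results are summarized below. They are stated here rather than derived, using the conventional labels for the octahedral (Δo) and tetrahedral (Δt) splitting energies:

\[
E(e_g) = +\tfrac{3}{5}\,\Delta_o, \qquad E(t_{2g}) = -\tfrac{2}{5}\,\Delta_o, \qquad 2\left(+\tfrac{3}{5}\,\Delta_o\right) + 3\left(-\tfrac{2}{5}\,\Delta_o\right) = 0
\]

\[
E(t_2) = +\tfrac{2}{5}\,\Delta_t, \qquad E(e) = -\tfrac{3}{5}\,\Delta_t, \qquad \Delta_t \approx \tfrac{4}{9}\,\Delta_o
\]

The first line is the barycenter rule for an octahedral field: the weighted average of the five d-orbital energies stays at the spherical-field value, which is why the ΔE plot shows two orbitals rising while three fall. The second line gives the inverted and smaller splitting expected for a tetrahedral field with comparable charges and distances, since only four charges surround the metal ion and none of them points directly along a d-orbital axis.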
FIGURE 1.1. The earliest efforts to explore the brain arose from the same deep curiosity that draws researchers into neuroscience today. This Dutch woodcut from J. Dryander's Anatomie (1537) shows that the brain was already understood at this time as a structure composed of diverse parts. The woodcut identifies divisions between a frontal (“sinciput, anterior”) and rear (“occipital, posterior”) portion of the brain, and between lobes at the sides; these divisions still serve as landmarks for students of neuroanatomy. At the right, the letters A, B, C, D, F, and G distinguish the six layers of the cerebral cortex; in this century, observations down to the level of single cells make it possible to sort out the distinct functions of each of these layers. Source: The National Library of Medicine.
By Teachers, For Teachers On Feb. 1, 2018, World Read Aloud Day celebrates the pure joy of oral reading with kids of all ages. Created by LitWorld, the event has in past years drawn more than 1 million people in 100 countries to join together to enjoy the power and wonder of reading aloud in groups or individually, at school as part of classroom activities or at home, and to discover what it means to listen to a story told through the voice of another. For many, this is a rare opportunity to hear the passion of a well-told story and fall in love with tales that reach listeners on a level nothing else can. Think back to your own experiences. You probably sat with an adult, in their lap or curled up in bed. The way they mimicked the voices in the story, built drama, and enthused with you over the story and characters made you want to read more stories like that on your own. This is a favorite activity not just for pre-readers but also for beginning and accomplished readers, because it's not about reading the book; it's about experiencing it through the eyes of a storyteller. Somehow, as lives for both adults and children have gotten busier, as digital devices have taken over, and as parents have turned to TVs or iPads to babysit kids while they do something else, we've gotten away from this most companionable of activities. World Read Aloud Day is an opportunity to get back to it. Here are some classroom activities designed to do just that. There is no more powerful way to develop a love of reading than being read to. Hearing pronunciations, decoding words in context, and experiencing the development and completion of a well-plotted story as though you were there are reasons enough to read aloud, but there's more. Reading in general, and reading aloud specifically, is positively correlated with literacy and success in school. It builds foundational learning skills, introduces and reinforces vocabulary, and provides a joyful activity that's mostly free, cooperative, and often collaborative. Did you know reading aloud: I know -- you're convinced but don't know how to blend read-alouds into your busy classroom schedule. Here are some ideas, from a time commitment of a few minutes to a few hours: Have a library of books intended to be read aloud. These can be both print and digital, to fit all children's reading preferences. When you have classroom reading time, kids can pair up and read to each other. Here's a list of online sites with digital books that can be quickly accessed, mostly free, for this activity: Whether you have iPads, Macs, PCs, or Chromebooks, teach students as young as kindergarten how to access the book curation tool (such as iBooks, RAZKids, Kindle, or another) to find stories to read with each other. This is not necessarily intuitive, especially with the variety of reading apps and devices, often different between home and school. Most digital book readers include a read-aloud function that enables students to have a favorite book read to them. Sometimes it's native to the app (like Adobe Acrobat/Reader) and other times it's through the computer's operating system (like Kindle's iOS VoiceOver accessibility feature). Help students find this tool as well as other useful skills like how to turn pages, highlight favorite passages, add a comment, share ideas with other readers, save the page they're on, and access the story/book from home as well as school. Which of these functions can be performed varies considerably with the reader being used.
Become familiar with yours, so you can share easily with students. In this activity, students volunteer to read a story of their choice (approved by you) to classmates. This may be a 10-minute event that opens or ends the school day, or an hour-long activity that occurs weekly or monthly. It may even be after school or in the evening. Pick a time that suits your student group and parents if you plan to include them. Here's how it works: As a class (or in small groups), sit in a circle and create a collaborative on-the-fly story by having each person add a sentence, one at a time, as you go around the circle. You might want to come up with a theme or a description of key characters before beginning to get everyone started. Depending on group size, you can assign tasks to each student beforehand and provide time to prepare. These would include developing a character, setting, plot point, problem, or ending. Each addition must build on the prior students' storylines and characters. To extend this activity, record the story and use the recording in a writing activity where students write a story based on the Round Robin activity. Have each parent commit to reading to their child on World Read Aloud Day. Have them take a selfie of the two of them and send it to you to be posted in a gallery. If a parent can't, have available a group reading event (via a free virtual meeting tool like Google Hangouts or Skype) where you or another teacher will read a story to children on that special evening. Arrange with a children's author to visit your class on World Read Aloud Day to read their book to the class. This is a great opportunity to blend all grade-level classes into one room. Usually, authors will take questions after the reading, so have students prepared with queries that are appropriate to the class and author. Children's writer Miranda Paul, author of such wonderful books as "Are We Pears Yet" and "I Am a Farmer," has offered to read to classes via a 15-20 minute Skype call. Check out this link. For longer lists, here are Scholastic authors who will Skype with your classroom and a list of Penguin Young Reader authors who Skype. Join readers all over the world for a World Read Aloud Skypeathon. On this day, children worldwide will share the experience of reading aloud by reading to each other. You can take part by clicking this link to register your class. Each student will get a Certificate of Participation to applaud the part they played in sharing the love of reading aloud. Need help organizing a read-aloud activity? The Scholastic Book Fairs World Read Aloud Day kit is a wonderful guide for planning an event centered on family and parent engagement. Additionally, the American Association for the Advancement of Science has this suggested list of STEM read-aloud books: For more ideas, download this World Read Aloud PDF. Jacqui Murray has been teaching K-18 technology for 20 years. She is the editor/author of more than 100 ed-tech resources, including a K-8 technology curriculum, a K-8 keyboard curriculum, and a K-8 Digital Citizenship curriculum. She is an adjunct professor in ed-tech, a CSG Master Teacher, webmaster for four blogs, an Amazon Vine Voice reviewer, a CAEP reviewer, a CSTA presentation reviewer, a freelance journalist on ed-tech topics, and a weekly contributor to TeachHUB. You can find her resources at Structured Learning. Read Jacqui's tech thriller series, To Hunt a Sub and Twenty-four Days.
The Hot and Cold
This worksheet also includes an answer key. In this seasons labeling worksheet, students explore the seasons of summer and winter as they label the lines of latitude on the 2 diagrams of the earth.

All of the Energy in the Universe is...
What is energy? Where does it come from and how does it move from place to place? Learn about the different kinds of energy, how energy changes form, and what heat really is. With great visuals to accompany some difficult concepts such... (4 mins, 4th - 12th, Science, CCSS: Adaptable)

Conductors of Heat - Hot Spoons
Why is the end of a spoon hot when it's not all the way in the hot water? A great question deserves a great answer, and learners with visual impairments will use their auditory and tactile senses to get that answer. A talking... (5th - 12th, Special Education & Programs)
- Hessian (n.) - "resident of the former Landgraviate of Hessen-Kassel," western Germany; its soldiers being hired out by the ruler to fight for other countries, especially the British during the American Revolution, the name Hessians by 1835 in U.S. became synonymous (unjustly) with "mercenaries." The Hessian fly (Cecidomyia destructor) was a destructive parasite that ravaged U.S. crops in the late 18c., so named 1787 in the erroneous belief that it was carried into America by the Hessians. The place name is from Latin Hassi/Hatti/Chatti, the Latinized form of the name of the Germanic people the Romans met in northern Germany (Greek Khattoi). The meaning of the name is unknown. Part of Arminius's coalition at the Battle of Teutoburger Wald (9 C.E.), they later merged with the Franks. They are mentioned in Beowulf as the Hetwaras. The state was annexed to Prussia in 1866 and is not to be confused with the Grand Duchy of Hesse-Darmstadt.
Alternative Names
Caries; Tooth decay; Cavities - tooth

Definition
Cavities are holes, or structural damage, in the teeth. See also: Early childhood caries

Causes
Tooth decay is one of the most common of all disorders, second only to the common cold. It usually occurs in children and young adults but can affect any person. It is a common cause of tooth loss in younger people. Bacteria are normally present in the mouth. The bacteria convert all foods -- especially sugar and starch -- into acids. Bacteria, acid, food debris, and saliva combine in the mouth to form a sticky substance called plaque that adheres to the teeth. It is most prominent on the back molars, just above the gum line on all teeth, and at the edges of fillings. Plaque that is not removed from the teeth mineralizes into tartar. Plaque and tartar irritate the gums, resulting in gingivitis and ultimately periodontitis. Plaque begins to build up on teeth within 20 minutes after eating (the time when most bacterial activity occurs). If this plaque is not removed thoroughly and routinely, tooth decay will not only begin, but flourish. The acids in plaque dissolve the enamel surface of the tooth and create holes in the tooth (cavities). Cavities are usually painless until they grow very large and affect nerves or cause a tooth fracture. If left untreated, a tooth abscess can develop. Untreated tooth decay also destroys the internal structures of the tooth (pulp) and ultimately causes the loss of the tooth. Carbohydrates (sugars and starches) increase the risk of tooth decay. Sticky foods are more harmful than nonsticky foods because they remain on the surface of the teeth. Frequent snacking increases the time that acids are in contact with the surface of the tooth.

Symptoms

Exams and Tests
Most cavities are discovered in the early stages during routine checkups. The surface of the tooth may be soft when probed with a sharp instrument. Pain may not be present until the advanced stages of tooth decay. Dental x-rays may show some cavities before they are visible to the eye.

Treatment
Treatment can help prevent tooth damage from leading to cavities. Treatment may involve: Dentists fill teeth by removing the decayed tooth material with a drill and replacing it with a material such as silver alloy, gold, porcelain, or composite resin. Porcelain and composite resin more closely match the natural tooth appearance, and may be preferred for front teeth. Many dentists consider silver amalgam (alloy) and gold to be stronger, and these materials are often used on back teeth. There is a trend to use high strength composite resin in the back teeth as well. Crowns or "caps" are used if tooth decay is extensive and there is limited tooth structure, which may cause weakened teeth. Large fillings and weak teeth increase the risk of the tooth breaking. The decayed or weakened area is removed and repaired. A crown is fitted over the remainder of the tooth. Crowns are often made of gold, porcelain, or porcelain attached to metal. A root canal is recommended if the nerve in a tooth dies from decay or injury. The center of the tooth, including the nerve and blood vessel tissue (pulp), is removed along with decayed portions of the tooth.
The roots are filled with a sealing material. The tooth is filled, and a crown may be placed over the tooth if needed.

Outlook (Prognosis)
Treatment often saves the tooth. Early treatment is less painful and less expensive than treatment of extensive decay. You may need numbing medicine (lidocaine), nitrous oxide (laughing gas), or other prescription medications to relieve pain during or after drilling or dental work. Nitrous oxide with Novocaine may be preferred if you are afraid of dental treatments.

Possible Complications

When to Contact a Medical Professional
Call your dentist if you have a toothache. Make an appointment with your dentist for a routine cleaning and examination if you have not had one in the last 6 months to 1 year.

Prevention
Oral hygiene is necessary to prevent cavities. This consists of regular professional cleaning (every 6 months), brushing at least twice a day, and flossing at least daily. X-rays may be taken yearly to detect possible cavity development in high risk areas of the mouth. Chewy, sticky foods (such as dried fruit or candy) are best if eaten as part of a meal rather than as a snack. If possible, brush the teeth or rinse the mouth with water after eating these foods. Minimize snacking, which creates a constant supply of acid in the mouth. Avoid constant sipping of sugary drinks or frequent sucking on candy and mints. Dental sealants can prevent cavities. Sealants are thin, plastic-like coatings applied to the chewing surfaces of the molars. This coating prevents the accumulation of plaque in the deep grooves on these vulnerable surfaces. Sealants are usually applied on the teeth of children, shortly after the molars erupt. Older people may also benefit from the use of tooth sealants. Fluoride is often recommended to protect against dental caries. It has been demonstrated that people who ingest fluoride in their drinking water or by fluoride supplements have fewer dental caries. Fluoride ingested when the teeth are developing is incorporated into the structure of the enamel and protects it against the action of acids. Topical fluoride is also recommended to protect the surface of the teeth. This may include a fluoride toothpaste or mouthwash. Many dentists include application of topical fluoride solutions (applied to a localized area of the teeth) as part of routine visits.

Update Date: 12/12/2008
Updated by: A.D.A.M. Editorial Team: David Zieve, MD, MHA, Greg Juhn, MTPW, David R. Eltz. Previously reviewed by Jason S. Baker, DMD, Oral and Maxillofacial Surgeon, Private Practice, Yonkers, New York. Review provided by VeriMed Healthcare Network (5/28/2008).
These depend on many factors, such as customer and manufacturer demand, safety protocols, general engineering and maintenance practices, and economic constraints. For some types of aircraft the design process is regulated by national airworthiness authorities. Aircraft design is a compromise between many competing factors and constraints, and it accounts for existing designs and market requirements to produce the best aircraft. The design process starts with the aircraft's intended purpose. Commercial airliners are designed to carry a passenger or cargo payload over long range with high fuel efficiency, whereas fighter jets are designed to perform high-speed maneuvers and provide close support to ground troops. The purpose may also be to fit a specific requirement. Airports may impose limits on aircraft as well; for instance, the maximum wingspan allowed for a conventional aircraft is 80 m, to prevent collisions between aircraft while taxiing. Budget limitations, market requirements, and competition set constraints on the design process and, along with environmental factors, comprise the non-technical influences on aircraft design. Competition leads companies to strive for better efficiency in the design without compromising performance, and to incorporate new techniques and technology. More advanced and integrated design tools have been developed, and technology advances from materials to manufacturing enable more complex design variations such as multifunction parts. An increase in the number of aircraft also means greater carbon emissions. Environmental scientists have voiced concern over the main kinds of pollution associated with aircraft, mainly noise and emissions. Aircraft engines have historically been notorious for creating noise pollution, and the expansion of airways over already congested and polluted cities has drawn heavy criticism, making it necessary to have environmental policies for aircraft noise. Noise also arises from the airframe, where the airflow directions are changed. Improved noise regulations have forced designers to create quieter engines and airframes. To combat the pollution, ICAO set recommendations in 1981 to control aircraft emissions. Environmental limitations also affect airfield compatibility. Airports around the world have been built to suit the topography of the particular region. Space limitations, pavement design, runway end safety areas, and the unique location of the airport are some of the airport factors that influence aircraft design.
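The idea that airfield compatibility acts as a hard constraint on the design space can be sketched in code. The following Python fragment is a toy illustration only: the 80 m wingspan cap is the figure cited above, while the field names, the pavement-strength number, and the noise-margin bookkeeping are invented placeholders rather than real certification rules.

from dataclasses import dataclass

MAX_WINGSPAN_M = 80.0  # taxiway/gate clearance limit for conventional aircraft, as cited above

@dataclass
class ConceptDesign:
    name: str
    wingspan_m: float
    max_takeoff_mass_kg: float
    noise_margin_epndb: float  # margin below an assumed cumulative noise limit

def airfield_issues(design, pavement_limit_kg):
    """Return human-readable constraint violations (an empty list means compatible)."""
    issues = []
    if design.wingspan_m > MAX_WINGSPAN_M:
        issues.append(f"wingspan {design.wingspan_m} m exceeds the {MAX_WINGSPAN_M} m limit")
    if design.max_takeoff_mass_kg > pavement_limit_kg:
        issues.append("takeoff mass exceeds the assumed runway pavement limit")
    if design.noise_margin_epndb < 0:
        issues.append("design misses the assumed cumulative noise limit")
    return issues

concept = ConceptDesign("wide-span study", wingspan_m=82.0,
                        max_takeoff_mass_kg=560_000, noise_margin_epndb=3.0)
print(airfield_issues(concept, pavement_limit_kg=600_000))

In a real program the checks would come from published airport reference codes and certification requirements; the point here is only that such limits enter the design process as explicit constraints alongside budget and market factors.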
Read the Research—Make Your Choice! - Direct and explicit instruction of vocabulary will improve success in all content areas - Many English language learners have difficulty due to limited English vocabulary - Students have varied levels of exposure to and knowledge of academic vocabulary VocabJourney includes words from SAT study lists, the Academic Word List (Averil Coxhead, 2000*), and words commonly used in state standards for science and social studies. Words related to high school subjects such as world history, geography, biology, math, and language arts are also included. English learners choose word sets that include multiple-meaning words and phrases and homophones to help them develop English language proficiency. *Coxhead states that when students have mastery of academic words in English, they can significantly boost their comprehension level of school-based reading material.
Intelligence, term usually referring to a general mental capability to reason, solve problems, think abstractly, learn and understand new material, and profit from past experience. Intelligence can be measured by many different kinds of tasks. Likewise, this ability is expressed in many aspects of a person’s life. Intelligence draws on a variety of mental processes, including memory, learning, perception, decision-making, thinking, and reasoning. Most people have an intuitive notion of what intelligence is, and many words in the English language distinguish between different levels of intellectual skill: bright, dull, smart, stupid, clever, slow, and so on. Yet no universally accepted definition of intelligence exists, and people continue to debate what, exactly, it is. Fundamental questions remain: Is intelligence one general ability or several independent systems of abilities? Is intelligence a property of the brain, a characteristic of behavior, or a set of knowledge and skills? The simplest definition proposed is that intelligence is whatever intelligence tests measure. But this definition does not characterize the ability well, and it has several problems. First, it is circular: The tests are assumed to verify the existence of intelligence, which in turn is measurable by the tests. Second, many different intelligence tests exist, and they do not all measure the same thing. In fact, the makers of the first intelligence tests did not begin with a precise idea of what they wanted to measure. Finally, the definition says very little about the specific nature of intelligence. Whenever scientists are asked to define intelligence in terms of what causes it or what it actually is, almost every scientist comes up with a different definition. For example, in 1921 an academic journal asked 14 prominent psychologists and educators to define intelligence. The journal received 14 different definitions, although many experts emphasized the ability to learn from experience and the ability to adapt to one’s environment. In 1986 researchers repeated the experiment by asking 25 experts for their definition of intelligence. The researchers received many different definitions: general adaptability to new problems in life; ability to engage in abstract thinking; adjustment to the environment; capacity for knowledge and knowledge possessed; general capacity for independence, originality, and productiveness in thinking; capacity to acquire capacity; apprehension of relevant relationships; ability to judge, to understand, and to reason; deduction of relationships; and innate, general cognitive ability. People in the general population have somewhat different conceptions of intelligence than do most experts. Laypersons and the popular press tend to emphasize cleverness, common sense, practical problem solving ability, verbal ability, and interest in learning. In addition, many people think social competence is an important component of intelligence. Most intelligence researchers define intelligence as what is measured by intelligence tests, but some scholars argue that this definition is inadequate and that intelligence is whatever abilities are valued by one’s culture. According to this perspective, conceptions of intelligence vary from culture to culture. For example, North Americans often associate verbal and mathematical skills with intelligence, but some seafaring cultures in the islands of the South Pacific view spatial memory and navigational skills as markers of intelligence. 
Those who believe intelligence is culturally relative dispute the idea that any one test could fairly measure intelligence across different cultures. Others, however, view intelligence as a basic cognitive ability independent of culture. In recent years, a number of theorists have argued that standard intelligence tests measure only a portion of the human abilities that could be considered aspects of intelligence. Other scholars believe that such tests accurately measure intelligence and that the lack of agreement on a definition of intelligence does not invalidate its measurement. In their view, intelligence is much like many scientific concepts that are accurately measured well before scientists understand what the measurement actually means. Gravity, temperature, and radiation are all examples of concepts that were measured before they were understood. The first intelligence tests were short-answer exams designed to predict which students might need special attention to succeed in school. Because intelligence tests were used to make important decisions about people's lives, it was almost inevitable that they would become controversial. Today, intelligence tests are widely used in education, business, government, and the military. However, psychologists continue to debate what the tests actually measure and how test results should be used. A. Early test Interest in measuring individual differences in mental ability began in the late 19th century. Sir Francis Galton, a British scientist, was among the first to investigate these differences. In his book Hereditary Genius (1869), he compared the accomplishments of people from different generations of prominent English families. No formal measures of intelligence existed at the time, so Galton evaluated each of his subjects on their fame as judged by encyclopedia entries, honors, awards, and similar indicators. He concluded that eminence of the kind he measured ran in families and so had a hereditary component. Believing that some human abilities derived from hereditary factors, Galton founded the eugenics movement, which sought to improve the human species through selective breeding of gifted individuals. Between 1884 and 1890 Galton operated a laboratory at the South Kensington Museum in London (now the Victoria and Albert Museum) where, for a small fee, people could have themselves measured on a number of physical and psychological attributes. He tried to relate intellectual ability to skills such as reaction time, sensitivity to physical stimuli, and body proportions. For example, he measured the highest and lowest pitch a person could hear and how well a person could detect minute differences between weights, colors, smells, and other physical stimuli. Despite the crude nature of his measurements, Galton was a pioneer in the study of individual differences. His work helped develop statistical concepts and techniques still in use today. He also was the first to advance the idea that intelligence can be quantitatively measured. In the 1890s American psychologist James McKeen Cattell, who worked with Galton in England, developed a battery of 50 tests that attempted to measure basic mental ability. Like Galton, Cattell focused on measurements of sensory discrimination and reaction times. Cattell's work—and by association, Galton's—was undermined in 1901, when a study showed that the measurements had no correlation with academic achievement in college.
Later researchers, however, pointed out that Cattell's test subjects were limited to Columbia University students, whose high academic performance was not representative of the general population. Better-designed tests given to broader samples have shown that reaction time and processing speed on perceptual tasks do correlate with academic achievement. B. The Binet-Simon test Alfred Binet, a prominent French psychologist, was the first to develop an intelligence test that accurately predicted academic success. In the late 19th century, the French government began compulsory education for all children. Prior to this time, most schoolchildren came from upper-class families. With the onset of mass education, French teachers had to educate a much more diverse group of children, some of whom appeared mentally retarded or incapable of benefiting from education. Teachers had no way of knowing which of the "slow" students had true learning problems and which simply had behavioral problems or poor prior education. In 1904 the French Ministry of Public Instruction asked Binet and others to develop a method to objectively identify children who would have difficulty with formal education. Objectivity was important so that conclusions about a child's potential for learning would not be influenced by any biases of the examiner. The government hoped that identifying children with learning problems would allow them to be placed in special remedial classes in which they could profit from schooling. Binet and colleague Théodore Simon took on the job of developing a test to assess each child's intelligence. As Binet and Simon developed their test, they found that tests of practical knowledge, memory, reasoning, vocabulary, and problem solving worked better at predicting school success than the kind of simple sensory tests that Galton and Cattell had used. Children were asked, among other tasks, to perform simple commands and gestures, repeat spoken digits, name objects in pictures, define common words, tell how two objects are different, and define abstract terms. Similar items are used in today's intelligence tests. Binet and Simon published their first test in 1905. Revisions to this test followed in 1908 and 1911. Binet and Simon assumed that all children follow the same course of intellectual development but develop at different rates. In developing their test, they noted which items were successfully completed by half of seven-year-olds, which items by half of eight-year-olds, and so on. Through these observations they created the concept of mental age. If a 10-year-old child succeeded on the items appropriate for 10-year-olds but could not pass the questions appropriate for 11-year-olds, that child was said to have a mental age of 10. Mental age did not necessarily correspond with chronological age. For example, if a 6-year-old child succeeded on the items intended for 9-year-olds, then that child was said to have a mental age of 9. To judge how effectively the test predicted academic achievement, Binet asked teachers to rate their students from best to worst. The results showed that students who had been rated higher by their teachers also scored higher on the test. Thus, Binet's test successfully predicted how students would perform in school. C. The IQ test Binet's test was never widely used in France. Henry Goddard, director of a New Jersey school for children with mental retardation, brought it to the United States.
Goddard translated the test into English and began using it to test people for mental retardation. Another American psychologist, Lewis Terman, revised the test by adapting some of Binet’s questions, adding questions appropriate for adults, and establishing new standards for average performance at each age. Terman’s first adaptation, published in 1916, was called the Stanford-Binet Intelligence Scale. The name of the test derived from Terman’s affiliation with Stanford University. Instead of giving a person’s performance on the Stanford-Binet as a mental age, Terman converted performance into a single score, which he called the intelligence quotient, or IQ. A quotient is the number that results from dividing one number by another. The idea of an intelligence quotient was first suggested by German psychologist William Stern in 1912. To compute IQ, Stern divided mental age by the actual, chronological age of the person taking the test and then multiplied by 100 to get rid of the decimal point. For example, if a 6-year-old girl scored a mental age of 9, she would be assigned an IQ of 150 (9/6 × 100). If a 12-year-old boy scored a mental age of 6, he would be given an IQ of 50 (6/12 × 100). The IQ score, as originally computed, expressed a person’s mental age relative to his or her chronological age. Although this formula works adequately for comparing children, it does not work well for adults because intelligence levels off during adulthood. For example, a 40-year-old person who scored the same as the average 20-year-old would have an IQ of only 50. Modern intelligence tests—including the current Stanford-Binet test—no longer compute scores using the IQ formula. Instead, intelligence tests give a score that reflects how far the person’s performance deviates from the average performance of others who are the same age. Most modern tests arbitrarily define the average score as 100. By convention, many people still use the term IQ to refer to a score on an intelligence test. D. Creation of group test During World War I (1914-1918) a group of American psychologists led by Robert M. Yerkes offered to help the United States Army screen recruits using intelligence tests. Yerkes and his colleagues developed two intelligence tests: the Army Alpha exam for literate recruits, and the Army Beta exam for non-English speakers and illiterate recruits. Unlike previous intelligence tests, which required an examiner to test and interact with each person individually, the Army Alpha and Beta exams were administered to large groups of recruits at the same time. The items on the tests consisted of practical, short-answer problems. The Alpha exam included arithmetic problems, tests of practical judgment, tests of general knowledge, synonym-antonym comparisons, number series problems, analogies, and other problems. The Beta exam required recruits to complete mazes, complete pictures with missing elements, recognize patterns in a series, and solve other puzzles. The army assigned letter grades of A through D- based on how many problems the recruit answered correctly. The army considered the highest-scoring recruits as candidates for officer training and rejected the lowest-scoring recruits from military service. By the end of World War I, psychologists had given intelligence tests to approximately 1.7 million recruits. Modern critics have pointed out that the army tests were often improperly administered. 
For example, different test administrators used different standards to determine which recruits were illiterate and should be assigned to take the nonverbal Beta exam. Thus, some recruits mistakenly assigned to the Alpha exam may have scored poorly because of their limited English skills, not because of low intelligence. The use of intelligence tests by the United States military enhanced the credibility and visibility of group mental tests. Following World War I these tests grew in popularity. Most were short-answer tests modeled on the army tests or the Stanford-Binet. For example, Yerkes and Terman developed the National Intelligence Test, a group test for schoolchildren, around 1920. The Scholastic Aptitude Test, or SAT, was introduced in 1926 as a multiple-choice exam to aid colleges and universities in their selection of prospective students. E. Modern intelligence test The most widely used modern tests of intelligence are the Stanford-Binet, the Wechsler Intelligence Scale for Children (WISC), the Wechsler Adult Intelligence Scale (WAIS), and the Kaufman Assessment Battery for Children (Kaufman-ABC). Each of the tests consists of a series of 10 or more subtests. Subtests are sections of the main test in which all of the items are similar. Examples of subtests include vocabulary ("Define happy"), similarities ("In what way are an apple and pear alike?"), digit span (repeating digit strings of increasing length from memory), information ("Who was the first president of the United States?"), object assembly (putting together puzzles), mazes (tracing a path through a maze), and simple arithmetic problems. Each item has scoring criteria so the examiner can determine if the answer given is correct. Items on each subtest are given in order of difficulty until the person being tested misses a certain number of items. Each subtest provides a score. The subtest scores are then added together to obtain a total raw score, which is then converted into an IQ score. Some tests, such as the Wechsler tests, give separate verbal and performance (nonverbal) scores as well as an overall score. Other intelligence tests, like the Peabody Picture Vocabulary Test or Raven's Progressive Matrices, consist of only one item type. In the Peabody Picture Vocabulary Test, the test taker must define a word by deciding which picture out of four pictures best represents the meaning of the word said by the examiner. In Raven's Progressive Matrices, a person is shown a matrix of patterns with one pattern missing. The person must figure out the rules governing the patterns and then use these rules to pick the item that best fills in the missing pattern. The Raven's test was designed to minimize the influence of culture by relying on nonverbal problems that require abstract reasoning and do not require knowledge of a particular culture. All of the tests mentioned so far can be individually administered. An examiner tests one person at a time for a specific amount of time, ranging from 20 to 90 minutes. There are also group-administered tests. The Army Alpha test described above was one of the earliest group-administered tests. This test developed into what is now known as the Armed Services Vocational Aptitude Battery (ASVAB), which is used to select and classify military recruits. Group tests usually are not as reliable as individually administered tests. They are often shorter and have less variety in item types because of restrictions inherent in group administration.
Furthermore, the administrator of an individual test can more fully supervise the test taker's performance. For example, the administrator can make sure the test taker is motivated and provide additional information when necessary. But group tests are efficient because they can be given to large numbers of people in a short time and at a relatively low cost. Achievement tests and aptitude tests are very similar to intelligence tests. An achievement test is designed to assess what a person has already learned, whereas an aptitude test is designed to predict future performance or assess potential for learning. Usually the items on achievement tests and aptitude tests relate to a specific area of knowledge, such as mathematics or vocabulary. Because intelligence tests frequently include these same areas of knowledge, many experts believe that it is impossible to distinguish between intelligence tests, achievement tests, and aptitude tests. Often, test makers call their tests achievement tests or aptitude tests to avoid the word intelligence, which can be frightening to some test takers. Examples of achievement and aptitude tests that are widely used include the SAT, the Graduate Record Exam (GRE), the California Achievement Test, the Law School Admission Test (LSAT), and the Medical College Admission Test (MCAT). F. Standardization, reliability, and validity of tests An intelligence test, like any other psychological test, must meet certain criteria in order to be accepted as scientific and accurate. A test must be standardized, reliable, and valid. Standardization refers to the process of defining norms of performance to which all test takers are compared. Before an intelligence test can be used to make meaningful comparisons, the test makers first give the test to a sample of the population representative of the individuals for whom the test is designed. This sample of people is called a normative sample, because it is used to establish norms (standards) of performance on the test. Normative samples usually consist of thousands of people from all areas of the country and all strata of society. Test scores of people in the sample are statistically analyzed to compile the test norms. When the test is made available for general use, these norms are used to determine a score for each person who takes the test. The IQ score or overall score reflects how well the person did compared to people of the same age in the normative sample. Reliability refers to the consistency of test scores. A reliable test yields the same or close to the same score for a person each time it is administered. In addition, alternate forms of the test should produce similar results. By these criteria, modern intelligence tests are highly reliable. In fact, intelligence tests are the most reliable of all psychological tests. Validity is the extent to which a test predicts what it is designed to predict. Intelligence tests were designed to predict school achievement, and they do that better than they do anything else. For example, IQ scores of elementary school students correlate moderately with their class grades and highly with achievement test scores. IQ tests also predict well the number of years of education that a person completes. The SAT is somewhat less predictive of academic performance in college. Educators note that success in school depends on many other factors besides intelligence, including encouragement from parents and peers, interest, and motivation.
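To make the standardization and validity ideas in this section concrete, here is a minimal Python sketch using entirely invented numbers. It converts a raw score into a deviation-style IQ by comparing it with a hypothetical normative sample (using the conventional mean of 100 and standard deviation of 15), and it computes a Pearson correlation between IQ scores and grades as a stand-in for a validity coefficient. It illustrates the concepts only; it is not the scoring procedure of any published test.

from statistics import mean, pstdev

def deviation_iq(raw_score, normative_sample, iq_mean=100, iq_sd=15):
    """Rescale a raw score by how far it deviates from the normative-sample
    mean, so the resulting scale has mean 100 and standard deviation 15."""
    z = (raw_score - mean(normative_sample)) / pstdev(normative_sample)
    return iq_mean + iq_sd * z

def pearson_r(xs, ys):
    """Pearson correlation coefficient, used here as a rough stand-in for a
    validity coefficient (how well test scores track an outside criterion)."""
    mx, my = mean(xs), mean(ys)
    cov = mean([(x - mx) * (y - my) for x, y in zip(xs, ys)])
    return cov / (pstdev(xs) * pstdev(ys))

# Hypothetical normative sample of raw scores for one age group.
norms = [38, 41, 45, 47, 50, 50, 52, 54, 57, 61]
print(round(deviation_iq(57, norms), 1))   # a raw score of 57 comes out above 100

# Hypothetical IQ scores and grade-point averages for the same seven students.
iqs = [88, 95, 100, 104, 110, 118, 125]
gpas = [2.1, 2.6, 2.8, 3.0, 3.1, 3.6, 3.9]
print(round(pearson_r(iqs, gpas), 2))      # strongly positive on this toy data

On this invented data the correlation comes out close to 1; as the text notes, real validity coefficients between IQ and class grades are more moderate.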
Intelligence tests also correlate with measures of accomplishment other than academic success, such as occupational status, income, job performance, and other measures of vocational success. However, IQ scores do not predict occupational success as well as they predict academic success. Twenty-five percent or less of the individual differences in occupational success are due to IQ. Therefore, a substantial portion of the variability in occupational success—75 percent or more—is due to factors other than intelligence. Validity also refers to the degree to which a test measures what it is supposed to measure. A valid intelligence test should measure intelligence and not some other capability. However, making a valid intelligence test is not a straightforward task because there is little consensus on a precise definition of intelligence. Lacking such a consensus, test makers usually evaluate validity by determining whether test performance correlates with performance on some other measure assumed to require intelligence, such as achievement in school. G. Distribution of IQ scores IQ scores, like many other biological and psychological characteristics, are distributed according to a normal distribution, which forms a normal curve, or bell curve, when plotted on a graph. In a normal distribution, most values fall near the average, and few values fall far above or far below the average. Although raw scores are not exactly normally distributed, test makers derive IQ scores using a formula that forces the scores to conform to the normal distribution. The normal distribution is defined by its mean (average score) and its standard deviation (a measure of how scores are dispersed relative to the mean). Usually the mean of an IQ test is arbitrarily set at 100 with a standard deviation of 15. Other tests use different values. For example, the SAT originally used a mean of 500 and a standard deviation of 100, although these are now recomputed annually. Because IQs are distributed along a normal curve, a fixed percentage of scores fall between the mean and any standard deviation value. For example, 34 percent of IQ scores fall between the mean and one standard deviation. For a standard IQ distribution with a mean of 100 and a standard deviation of 15, 34 percent of the cases would fall between 100 and 115. Since the normal curve is symmetrical about the mean, 34 percent of the scores would also fall between 85 and 100, which represents one standard deviation below the mean. To interpret the score of any test, it is important to know the mean and standard deviation of the test. Along with knowledge of the standard deviation and the normative sample used for the test, one can then interpret the score in terms of the percentage of the population scoring higher or lower. If a person obtains a score of 115 on an IQ test, approximately 16 percent of the population will score higher and 84 percent will score lower. When an IQ test is revised, it is restandardized with a new normative sample. The distribution of raw scores in the sample population determines the IQ that will be assigned to the raw scores of others who take the test. By analyzing the performance over the years of different normative samples on the same tests, researchers have concluded that performance on intelligence tests has risen significantly over time. This phenomenon, observed in industrialized countries around the world, is known as the Flynn effect, named after the researcher who discovered it, New Zealand philosopher James Flynn. 
Scores on some tests have increased dramatically. For example, scores on the Raven’s Progressive Matrices, a widely used intelligence test, increased 15 points in 50 years when scored by the same norms. In other words, a representative sample of the population that took the test in 1992 scored an average of 15 points higher on the test than a representative sample that took the test in 1942. It appears that people are getting smarter. However, only some tests show these changes. Tests of visual-spatial reasoning, like the Raven’s test, show the largest changes, while vocabulary and verbal tests show almost no change. Some psychologists believe that people are not really getting smarter but are only becoming better test takers. Others believe the score gains reflect real increases in intelligence and speculate they may be due to improved nutrition, better schooling, or even the effects of television and video games on visual-spatial reasoning. H. Uses of intelligence tests Intelligence tests and similar tests are widely used in schools, business, government, the military, and medicine. In many cases, intelligence tests are used to avoid the biases more arbitrary methods of selection introduce. For example, it was once common for colleges to admit students whose parents had attended the college or who came from socially prominent families. By using tests, colleges could select students based on their ability instead of their social position. Intelligence tests were originally designed for use in schools. In elementary and secondary schools, educators use tests to assess how well a student can be expected to perform and to determine if special educational programs are necessary. Intelligence tests can help to identify students with mental retardation and to determine an appropriate educational program for these students (see Education of Students with Mental Retardation). Intelligence tests may also be required for admission into programs for the gifted or talented (see Education of Gifted Students). Institutions of higher education use achievement or aptitude tests, which are very similar to intelligence tests, for the selection and placement of students. In business, employers frequently use intelligence and aptitude tests to select job applicants. Since World War I, the United States military has had one of the most comprehensive testing programs for selection and job assignment. Anyone entering the military takes a comprehensive battery of tests, including an intelligence test. For specialized and highly skilled jobs in the military, such as jet pilot, the testing is even more rigorous. Intelligence tests are helpful in the selection of individuals for complex jobs requiring advanced skills. The major reason intelligence tests work in job selection is that they predict who will learn new information required for the job. To a lesser extent, they predict who will make “smart” decisions on the job. In medicine, physicians use intelligence tests to assess the cognitive functioning of patients, such as those with brain damage or degenerative diseases of the nervous system. Psychiatrists and psychologists may use intelligence tests to diagnose the mental capacities of their clients. I. Criticisms of intelligence tests Properly used, intelligence tests can provide valuable diagnostic information and insights about intellectual ability that might otherwise be overlooked or ignored. 
In many circumstances, however, intelligence testing has become extremely controversial, largely because of misunderstandings about how to interpret IQ scores. 1. Validity: One criticism of intelligence tests is that they do not really measure intelligence but only a narrow set of mental capabilities. For example, intelligence tests do not measure wisdom, creativity, common sense, social skills, and practical knowledge—abilities that allow people to adapt well to their surroundings and solve daily problems. The merit of this criticism depends on how one defines intelligence. Some theorists consider wisdom, creativity, and social competence aspects of intelligence, but others do not. Psychologists know little about how to objectively measure these other abilities. Another criticism of IQ tests is that some people may not perform well because they become anxious when taking any timed, standardized test. Their poor performance may reflect their anxiety rather than their true abilities. However, test anxiety is probably not a major cause of incorrect scores. 2. Misinterpretation and misuse: Critics of intelligence testing argue that IQ tests tend to be misinterpreted and misused. Because IQ tests reduce intelligence to a single number, many people mistakenly regard IQ as if it were a fixed, real trait such as height or weight, rather than an abstract concept that was originally designed to predict performance in school. Furthermore, some people view IQ as a measurement of a person's intrinsic worth or potential, even though many factors other than those measured by IQ tests contribute to life success. Critics also note that intelligence testing on a large scale can have dangerous social consequences when the results are misused. For example, during the 1920s IQ tests were used to identify "feeble-minded" persons. These persons were then subject to forced sterilization. In the 1927 case Buck v. Bell, the United States Supreme Court upheld the right of states to sterilize individuals judged to be feeble-minded. In judging the uses of intelligence tests, one must compare how decisions would be made without using the tests. When tests are used to make a decision, there should be evidence that the decision made is better with the test than without it. For example, if schools did not use intelligence or aptitude tests to determine which students need remedial education, teachers would be forced to rely on more subjective and unreliable criteria, such as their personal opinions. In some cases, institutions use tests when they do not need to. Some colleges and universities require students to take admission tests but then admit 80 percent or more of applicants. Tests are of little use in selection decisions when there is little or no selection. Another criticism of intelligence tests is that they sometimes lead to inflexible cutoff rules. In some states, for example, a person with mental retardation must have an IQ of 50 or below before being allowed to work in a special facility known as a sheltered workshop. Although intelligence is important in determining performance, it is not the only determinant. People with an IQ of 50 vary widely in their skills and abilities. Using an arbitrary cutoff of 50 can make it difficult for people whose IQ is 51 to get essential services. 3. Bias: Psychologists have long known that ethnic and racial groups differ in their average scores on intelligence tests.
For example, African Americans as a group consistently average 15 points lower than whites on IQ tests. Such differences between groups have led some people to believe that intelligence tests are culturally biased. Many kinds of test items appear to require specialized information that might be more familiar to some groups than to others. Defenders of IQ tests argue, however, that these same ethnic group differences appear on test items in which cultural content has been reduced. The question of bias in tests has led intelligence researchers to define bias very precisely and find ways of explicitly assessing it. An intelligence test free of bias should predict academic performance equally well for African Americans, Hispanics, whites, men, women, and any other subgroups in the population. Based on this definition of bias, experts agree that intelligence tests in wide use today have little or no bias for any groups that have been assessed. Many psychologists believe that group differences in performance exist not because of inherent flaws in the tests, but because the tests merely reflect social and educational disadvantages experienced by members of certain racial and ethnic groups in school and other settings. For more information on IQ differences between groups, see the Racial and Ethnic Differences section of this article. Because they are used for educational and employment testing, tests have been challenged in many court cases. In the 1979 case Larry P. v. Wilson Riles, a group of black parents in California argued that intelligence tests were racially biased. As evidence they cited the fact that black children were disproportionately represented in special education classes. Placement in these classes depended in part on the results of IQ tests. A federal judge hearing the case concluded that the tests were biased and should not be used to place black children in special education. The judge also ordered the state of California to monitor and eliminate disproportionate placement of black children in special education classes. In a 1980 case, PASE v. Hannon, brought in Chicago on the same grounds, a federal judge ruled that the IQ tests being used were not biased (except for a few items). In employment cases, a number of rulings have specified how tests can be used. For example, it is not legal to test applicants for an ability that is not required to do the job.
THEORIES OF INTELLIGENCE
Scholars have tried to understand the nature of intelligence for many years, but they still do not agree on a single theory or definition. Some theorists try to understand intelligence by analyzing the results of intelligence tests and identifying clusters of abilities. Other theorists believe that intelligence encompasses many abilities not captured by tests. In recent years, some psychologists have tried to explain intelligence from a biological standpoint.
A. GENERAL INTELLIGENCE
Efforts to explain intelligence began even before Binet and Simon developed the first intelligence test. In the early 1900s British psychologist Charles Spearman made an important observation that has influenced many later theories of intelligence: He noted that all tests of mental ability were positively correlated. Correlation is the degree to which two variables are associated and vary together (see Psychology: Correlational Studies). Spearman found that individuals who scored high on any one of the mental tests he gave tended to score high on all others.
Conversely, people who scored low on any one mental test tended to score low on all others. Spearman reasoned that if all mental tests were positively correlated, there must be a common variable or factor producing the positive correlations. In 1904 Spearman published a major article about intelligence in which he used a statistical method to show that the positive correlations among mental tests resulted from a common underlying factor. His method eventually developed into a more sophisticated statistical technique known as factor analysis. Using factor analysis, it is possible to identify clusters of tests that measure a common ability. Based on his factor analysis, Spearman proposed that two factors could account for individual differences in scores on mental tests. He called the first factor general intelligence or the general factor, represented as g. According to Spearman, g underlies all intellectual tasks and mental abilities. The g factor represented what all of the mental tests had in common. Scores on all of the tests were positively correlated, Spearman believed, because all of the tests drew on g. The second factor Spearman identified was the specific factor, or s. The specific factor related to whatever unique abilities a particular test required, so it differed from test to test. Spearman and his followers placed much more importance on general intelligence than on the specific factor. Throughout his life, Spearman argued that g, as he had mathematically defined it using factor analysis, was really what scientists should mean by intelligence. He was also aware that his mathematical definition of general intelligence did not explain what produced g. In the 1920s he suggested that g measured a mental “power” or “energy.” Others who have continued to investigate g speculate that it may relate to neural efficiency, neural speed, or some other basic properties of the brain. B. PRIMARY MENTAL ABILITIES Much of the research on mental abilities that followed Spearman consisted of challenges to his basic position. In the early 20th century, a number of psychologists produced alternatives to Spearman’s two-factor theory by using different methods of factor analysis. These researchers identified group factors, specific abilities thought to underlie particular groups of test items. For example, results from tests of vocabulary and similarities (“How are an apple and orange alike?”) tend to correlate with each other but not with tests of spatial ability. Both the vocabulary and similarities tests contain verbal content, so psychologists might identify a verbal factor based on the correlation between the tests. Although most psychologists agreed that specialized abilities or group factors existed, they debated the number of factors and whether g remained as an overall factor. In 1938 American psychologist Louis L. Thurstone proposed that intelligence was not one general factor, but a small set of independent factors of equal importance. He called these factors primary mental abilities. To identify these abilities, Thurstone and his wife, Thelma, devised a set of 56 tests. They administered the battery of tests to 240 college students and analyzed the resulting test scores with new methods of factor analysis that Thurstone had devised. 
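To see why a single underlying factor produces the pattern Spearman described, consider a minimal simulation, written here as a Python sketch with purely illustrative numbers (the loadings and the number of tests are assumptions, not estimates from any real dataset). Each simulated person gets one general-ability value, each test mixes that value with test-specific noise, and every pair of tests then comes out positively correlated.

```python
import numpy as np

# Toy one-factor model: score = loading * g + specific part.
# The loadings below are illustrative values, not empirical estimates.
rng = np.random.default_rng(0)
n_people, loadings = 1000, np.array([0.8, 0.7, 0.6, 0.7, 0.5])

g = rng.normal(size=n_people)                         # one general-ability value per person
specific = rng.normal(size=(n_people, len(loadings)))
scores = g[:, None] * loadings + specific * np.sqrt(1 - loadings**2)

# Every off-diagonal correlation is positive, roughly loading_i * loading_j.
print(np.round(np.corrcoef(scores, rowvar=False), 2))
```

Factor analysis runs in the opposite direction: starting from a correlation matrix like the one printed here, it asks how few common factors are needed to reproduce the observed correlations.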
Thurstone identified seven primary mental abilities: (1) verbal comprehension, the ability to understand word meanings; (2) verbal fluency, or speed with verbal material, as in making rhymes; (3) number, or arithmetic, ability; (4) memory, the ability to remember words, letters, numbers, and images; (5) perceptual speed, the ability to quickly distinguish visual details and perceive similarities and differences between pictured objects; (6) inductive reasoning, or deriving general ideas and rules from specific information; and (7) spatial visualization, the ability to mentally visualize and manipulate objects in three dimensions. Others who reanalyzed Thurstone’s results found two problems with his conclusions. First, Thurstone used only college students as subjects in his research. College students perform better on intelligence tests than do individuals in the general population, so Thurstone’s subjects did not represent the full range of intellectual ability. By restricting the range of ability in his sample, he drastically reduced the size of the correlations between tests. These low correlations contributed to his conclusion that no general intelligence factor existed. To understand why restricting the range of ability reduces the size of correlations, consider an analogy. Most people would agree that in basketball, height is important in scoring. But in the National Basketball Association (NBA), the correlation between players’ scoring and heights is zero. The reason is that NBA players are heavily selected for their height and average 15 cm (6 in) taller than the average height in the general population. When Thurstone gave his tests to a more representative sample of the population, he found larger correlations among his tests than he had found using only college students. A second problem with Thurstone’s results was that, even in college students, the tests that Thurstone used were still correlated. The method of factor analysis that Thurstone had devised made the correlations harder to identify. When other researchers reanalyzed his data using other methods of factor analysis, the correlations became apparent. The researchers concluded that Thurstone’s battery of tests identified the same g factor that Spearman had identified. C. FLUID INTELLIGENCE AND CRYSTALLIZED INTELLIGENCE In the 1960s American psychologists Raymond Cattell and John Horn applied new methods of factor analysis and concluded there are two kinds of general intelligence: fluid intelligence (gf) and crystallized intelligence (gc). Fluid intelligence represents the biological basis of intelligence. Measures of fluid intelligence, such as speed of reasoning and memory, increase into adulthood and then decline due to the aging process. Crystallized intelligence, on the other hand, is the knowledge and skills obtained through learning and experience. As long as opportunities for learning are available, crystallized intelligence can increase indefinitely during a person’s life. For example, vocabulary knowledge is known to increase in college professors throughout their life span. In addition to identifying the two subtypes of general intelligence, Cattell also developed what he called investment theory. This theory sought to explain how an investment of biological endowments (fluid intelligence) could contribute to learned skills and knowledge (crystallized intelligence). As one might expect, it is very difficult to separate the biological basis of intelligence from what is learned. 
As Cattell was aware, nearly all mental tests draw on both crystallized and fluid intelligence. Consequently, crystallized and fluid abilities are correlated with each other. Some researchers interpret this correlation between the two factors as evidence of Spearman’s factor of general intelligence, g. They see Cattell’s theory as a refinement of Spearman’s original theory, not a departure from it. D. MULTIPLE INTELLIGENCE In 1983 American psychologist Howard Gardner proposed a theory that sought to broaden the traditional definition of intelligence. He felt that the concept of intelligence, as it had been defined by mental tests, did not capture all of the ways humans can excel. Gardner argued that we do not have one underlying general intelligence, but instead have multiple intelligences, each part of an independent system in the brain. In formulating his theory, Gardner placed less emphasis on explaining the results of mental tests than on accounting for the range of human abilities that exist across cultures. He drew on diverse sources of evidence to determine the number of intelligences in his theory. For example, he examined studies of brain-damaged people who had lost one ability, such as spatial thinking, but retained another, such as language. The fact that two abilities could operate independently of one another suggested the existence of separate intelligences. Gardner also proposed that evidence for multiple intelligences came from prodigies and savants. Prodigies are individuals who show an exceptional talent in a specific area at a young age, but who are normal in other respects. Savants are people who score low on IQ tests—and who may have only limited language or social skills—but demonstrate some remarkable ability, such as extraordinary memory or drawing ability. To Gardner, the presence of certain high-level abilities in the absence of other abilities also suggested the existence of multiple intelligences. Gardner initially identified seven intelligences and proposed a person who exemplified each one. Linguistic intelligence involves aptitude with speech and language and is exemplified by poet T. S. Eliot. Logical-mathematical intelligence involves the ability to reason abstractly and solve mathematical and logical problems. Physicist Albert Einstein is a good example of this intelligence. Spatial intelligence is used to perceive visual and spatial information and to conceptualize the world in tasks like navigation and in art. Painter Pablo Picasso represents a person of high spatial intelligence. Musical intelligence, the ability to perform and appreciate music, is represented by composer Igor Stravinsky. Bodily-kinesthetic intelligence is the ability to use one’s body or portions of it in various activities, such as dancing, athletics, acting, surgery, and magic. Martha Graham, the famous dancer and choreographer, is a good example of bodily-kinesthetic intelligence. Interpersonal intelligence involves understanding others and acting on that understanding and is exemplified by psychiatrist Sigmund Freud. Intrapersonal intelligence is the ability to understand one’s self and is typified by the leader Mohandas Gandhi. In the late 1990s Gardner added an eighth intelligence to his theory: naturalist intelligence, the ability to recognize and classify plants, animals, and minerals. Naturalist Charles Darwin is an example of this intelligence. According to Gardner, each person has a unique profile of these intelligences, with strengths in some areas and weaknesses in others. 
Gardner’s theory found rapid acceptance among educators because it suggests a wider goal than traditional education has adopted. The theory implies that traditional school training may neglect a large portion of human abilities, and that students considered slow by conventional academic measures might excel in other respects. A number of schools have formed with curriculums designed to assess and develop students’ abilities in all of the intelligences Gardner identified. Critics of the multiple intelligences theory have several objections. First, they argue that Gardner based his ideas more on reasoning and intuition than on empirical studies. They note that there are no tests available to identify or measure the specific intelligences and that the theory largely ignores decades of research that show a tendency for different abilities to correlate—evidence of a general intelligence factor. In addition, critics argue that some of the intelligences Gardner identified, such as musical intelligence and bodily-kinesthetic intelligence, should be regarded simply as talents because they are not usually required to adapt to life demands. E. TRIARCHIC THEORY OF INTELLIGENCE In the 1980s American psychologist Robert Sternberg proposed a theory of intelligence that, like Gardner’s theory of multiple intelligences, attempted to expand the traditional conception of intelligence. Sternberg noted that mental tests are often imperfect predictors of real-world performance or success. People who do well on tests sometimes do not do as well in real-world situations. According to Sternberg’s triarchic (three-part) theory of intelligence, intelligence consists of three main aspects: analytic intelligence, creative intelligence, and practical intelligence. These are not multiple intelligences as in Gardner’s theory, but interrelated parts of a single system. Thus, many psychologists regard Sternberg’s theory as compatible with theories of general intelligence. Analytic intelligence is the part of Sternberg’s theory that most closely resembles the traditional conception of general intelligence. Analytic intelligence is skill in reasoning, processing information, and solving problems. It involves the ability to analyze, evaluate, judge, and compare. Analytic intelligence draws on basic cognitive processes or components. Creative intelligence is skill in using past experiences to achieve insight and deal with new situations. People high in creative intelligence are good at combining seemingly unrelated facts to form new ideas. According to Sternberg, traditional intelligence tests do not measure creative intelligence, because it is possible to score high on an IQ test yet have trouble dealing with new situations. Practical intelligence relates to people’s ability to adapt to, select, and shape their real-world environment. It involves skill in everyday living (“street smarts”) and in adapting to life demands, and reflects a person’s ability to succeed in real-world settings. An example given by Sternberg of practical intelligence is of an employee who loved his job but hated his boss. An executive recruiter contacted the employee about a possible new job. Instead of applying for the job, the employee gave the recruiter the name of his boss, who was subsequently hired away from the company. By getting rid of the boss he hated instead of leaving the job he loved, the employee showed adaptation to his real-world environment. People with high practical intelligence may or may not perform well on standard IQ tests. 
In Sternberg’s view, “successfully intelligent” people are aware of their strengths and weaknesses in the three areas of intelligence. They figure out how to capitalize on their strengths, compensate for their weaknesses, and further develop their abilities in order to achieve success in life. Sternberg’s theory has drawn praise because it attempts to broaden the domain of intelligence to more exactly correspond to what people frequently think intelligence is. On the other hand, some critics believe that scientific studies do not support Sternberg’s proposed triarchic division. For example, some propose that practical intelligence is not a distinct aspect of intelligence, but a set of abilities predicted by general intelligence. F. OTHER APPROACHES Many researchers have taken new approaches to understanding intelligence based on advances in the neurological, behavioral, and cognitive sciences. Some studies have found that differences in IQ correspond with various neurological measures. For example, adults with higher IQs tend to show somewhat different patterns of electrical activity in the brain than do people with lower IQs. In addition, PET (positron emission tomography) scans show that adults with higher IQs have lower rates of metabolism for cortical glucose as they work on relatively difficult reasoning problems than people with lower IQs. That is, people with higher IQs seem to expend less energy in solving difficult problems than those with lower IQs. Other researchers have sought to understand human intelligence by using the computer as a metaphor for the mind and studying how artificial intelligence computer programs relate to human information processing. These new approaches are extremely promising, but their ultimate value has yet to be determined. In recent years a number of theorists have proposed the existence of emotional intelligence that is complementary to the type of intelligence measured by IQ tests. American psychologists Peter Salovey and John Mayer, who together introduced the concept in 1990, define emotional intelligence as the ability to perceive, understand, express, and regulate emotions. Emotionally intelligent people can use their emotions to guide thoughts and behavior and can accurately read others’ emotions. Daniel Goleman, an American author and journalist, popularized the concept in his book Emotional Intelligence (1995). He expanded the concept to include general social competence. An American psychologist, Douglas Detterman, has compared general intelligence to a complex system, like a university, city, or country. In this view, IQ tests provide a global rating reflective of the many cognitive processes and learning experiences that compose intelligence, just as a rating of a university is based on an evaluation of its components, such as library size, faculty quality, and size of endowment. Mental tests tend to correlate with each other because they are part of a unified system that works together. The implication of this theory is that understanding general intelligence will require understanding how the cognitive processes of the brain actually work. INFLUENCE OF HEREDITY AND ENVIRONMENT Few topics in the social sciences have produced more controversy than the relative influences of nature and nurture on intelligence. Is intelligence determined primarily by heredity or by one’s environment? The issue has aroused intense debate because different views on the heritability of intelligence lead to different social and political implications. 
The strictest adherents of a genetic view of intelligence believe that every person is born with a fixed amount of intelligence. They argue that there is little one can do to improve intelligence, so special education programs should not be expected to produce increases in IQ. On the other hand, those who see intelligence as determined mostly by environmental factors see early intervention programs as critical to compensate for the effects of poverty and other disadvantages. In their view, these programs help to create equal opportunities for all people. Perhaps the most controversial issue surrounding intelligence has been the assertion by some people that genetic factors are responsible not only for differences in IQ between individuals, but also for differences between groups. In this view, genetic factors account for the poorer average performance of certain racial and ethnic groups on IQ tests. Others regard genetic explanations for group differences as scientifically indefensible and view as racist the implication that some racial groups are innately less intelligent than others. Today, almost all scientists agree that intelligence arises from the influence of both genetic and environmental factors. Careful study is required in order to attribute any influence to either environment or heredity. For example, one measure commonly used to assess a child’s home environment is the number of books in the home. But having many books in the home may be related to the parents’ IQ, because highly intelligent people tend to read more. The child’s intelligence may be due to the parents’ genes or to the number of books in the home. Further, parents may buy more books in response to their child’s genetically influenced intelligence. Which of these possibilities is correct cannot be determined without thorough studies of all the factors involved. A. GENETIC INFLUENCES In behavioral genetics, the heritability of a trait refers to the proportion of the trait’s variation within a population that is attributable to genetics. The heritability of intelligence is usually defined as the proportion of the variation in IQ scores that is linked to genetic factors. To estimate the heritability of intelligence, scientists compare the IQs of individuals who have differing degrees of genetic relationship. Scientists have conducted hundreds of studies, involving tens of thousands of participants, that have sought to measure the heritability of intelligence. The generally accepted conclusion from these studies is that genetic factors account for 40 to 80 percent of the variability in intelligence test scores, with most experts settling on a figure of approximately 50 percent. But heritability estimates apply only to populations and not to individuals. Therefore, one can never say what percentage of a specific individual’s intelligence is inherited based on group heritabilities alone. Although any degree of genetic relationship can and has been studied, studies of twins are particularly informative. Identical twins develop from one egg and are genetically identical to each other. Fraternal twins develop from separate eggs and, like ordinary siblings, have only about half of their genes in common. Comparisons between identical and fraternal twins can be very useful in determining heritability. Scientists have found that the IQ scores of identical twins raised together are remarkably similar to each other, while those of fraternal twins are less similar to each other. 
This finding suggests a genetic influence in intelligence. Interestingly, fraternal twins’ IQ scores are more similar to each other than those of ordinary siblings, a finding that suggests environmental effects. Some researchers account for the difference by noting that fraternal twins are probably treated more alike than ordinary siblings because they are the same age. Some of the strongest evidence for genetic influences in intelligence comes from studies of identical twins adopted into different homes early in life and thus raised in different environments. Identical twins are genetically identical, so any differences in their IQ scores must be due entirely to environmental differences and any similarities must be due to genetics. Results from these studies indicate that the IQ scores of identical twins raised apart are highly similar—nearly as similar as those of identical twins raised together. For adoption studies to be valid, placement of twin pairs must be random. If brighter twin pairs are selectively placed in the homes of adoptive parents with higher intelligence, it becomes impossible to separate genetic and environmental influences. Another way of studying the genetic contribution to intelligence is through adoption studies, in which researchers compare adopted children to their biological and adoptive families. Adopted children have no genetic relationship to their adoptive parents or to their adoptive parents’ biological children. Thus, any similarity in IQ between the adopted children and their adoptive parents or the parents’ biological children must be due to the similarity of the environment they all live in, and not to genetics. There are two interesting findings from studies of adopted children. First, the IQs of adopted children have only a small relationship to the IQs of their adoptive parents and the parents’ biological children. Second, after the adopted child leaves home, this small relationship becomes smaller. In general, the IQs of adopted children are always more similar to their biological parents’ IQs than to their adoptive parents’ IQs. Further, once they leave the influence of their adoptive home, they become even more similar to their biological parents. Both of these findings suggest the importance of hereditary factors in intelligence. People sometimes assume that if intelligence is highly heritable, then it cannot be changed or improved through environmental factors. This assumption is incorrect. For example, height has very high heritability, yet average heights have increased in the 20th century among the populations of many industrialized nations, most likely because of improved nutrition and health care. Similarly, performance on IQ tests has increased with each generation (see the Distribution of IQ Scores section of this article), yet few scientists attribute this phenomenon to genetic changes. Thus, many experts believe that improved environments can, to some degree, increase a person’s intelligence. Some genetic disorders, such as phenylketonuria (PKU) and Down syndrome, may result in mental retardation and low IQ. But evidence for genetic influences should not be interpreted as evidence of a direct connection between genes and intelligence. In PKU, for example, a rare combination of recessive genes sets the stage for a series of biochemical interactions that ultimately results in low IQ. These interactions only occur, however, in the presence of the amino acid phenylalanine. 
If the disorder is detected early and phenylalanine is withheld from the infant’s diet, then large IQ deficits do not develop.
B. ENVIRONMENTAL INFLUENCES
If genetic influences account for between 40 and 80 percent of the variation in intelligence, then environmental influences account for between 20 and 60 percent of the total variation. Environmental factors comprise all the stimuli a person encounters from conception to death, including food, cultural information, education, and social experiences. Although it is known that environmental factors can be potent forces in shaping intelligence, it is not understood exactly how they contribute to intelligence. In fact, scientists have identified few specific environmental variables that have direct, unambiguous effects on intelligence. Many environmental variables have small effects and differ in their effect on each person, making them difficult to identify. Schooling is an important factor that affects intelligence. Children who do not attend school or who attend intermittently score more poorly on IQ tests than those who attend regularly, and children who move from low-quality schools to high-quality schools tend to show improvements in IQ. Besides transmitting information to students directly, schools teach problem solving, abstract thinking, and how to sustain attention—all skills required on IQ tests. Many researchers have investigated whether early intervention programs can prevent the lowered intelligence that may result from poverty or other disadvantaged environments. In the United States, Head Start is a federally funded preschool program for children from families whose income is below the poverty level. Head Start and similar programs in other countries attempt to provide children with activities that might enhance cognitive development, including reading books, learning the alphabet and numbers, learning the names of colors, drawing, and other activities. These programs often have large initial effects on IQ scores. Children who participate gain as much as 15 IQ points compared to control groups of similar children not in the program. Unfortunately, these gains seem to last only as long as the intervention lasts. When children from these programs enter school, their IQ declines to the level of control groups over a period of several years. This has come to be known as the “fade-out” effect. Even though early intervention preschool programs do not seem to produce lasting IQ gains, some studies suggest they may have other positive long-term effects. For example, the Consortium for Longitudinal Studies reported that participants are less likely to repeat grades, less likely to be placed in remedial classes, and more likely to finish high school than comparable nonparticipants—even though both groups show about the same levels of academic achievement. Preschoolers in early intervention programs may also benefit from improved health and nutrition, and their mothers may sometimes benefit from additional education that the programs provide. Because a substantial portion of the variation in intelligence is due to environmental factors, early intervention programs should be able to produce significant and lasting IQ gains once the specific environmental variables that influence IQ have been identified. Researchers continue to search for the interventions that will increase IQ and, ultimately, academic achievement. Two environmental variables known to affect intelligence are family size and birth order.
Children from smaller families and children who are earlier-born in their families tend to have higher intelligence test scores. These effects, however, are very small and amount to only a few IQ points. They are detectable only when researchers study very large numbers of families. Although there has been substantial debate about the effects of other environmental variables, certain substances in the prenatal environment may influence later intelligence. For example, some pregnant women who consume large amounts of alcohol give birth to children with fetal alcohol syndrome, a condition marked by physical abnormalities, mental retardation, and behavioral problems. Even exposure to moderate amounts of alcohol may have some negative influence on the development of intelligence, and to date no safe amount of alcohol has been established for pregnant women. Scientists have also discovered that certain substances encountered during infancy or childhood may have negative effects on intelligence. For example, children with high blood levels of lead, as a result of breathing lead-contaminated air or eating scraps of lead-based paint, tend to have lower IQ scores. Prolonged malnutrition during childhood also seems to influence IQ negatively. In each of these cases, a correlation exists between environmental factors and measured intelligence, but one cannot conclude that these factors directly influence intelligence. Other environmental variables in this category include parenting styles and the physical environment of the home. Although the nature-nurture debate has raged for some time, research points to a conclusion that appeals to common sense: Intelligence is about half due to nature (heredity) and about half due to nurture (environment). The exact mechanisms by which genetic and environmental factors operate remain unknown. Identifying the specific biological and environmental variables that affect intelligence is one of the most important challenges facing researchers in this field.
C. SEX DIFFERENCES
Are women smarter or are men smarter? Psychologists have studied sex differences in intelligence since the beginning of intelligence testing. The question is a very complicated one, though. One problem is that test makers sometimes eliminate questions that show differences between males and females in order to remove bias from the test. Intelligence tests, therefore, may not show gender differences even if they exist. Even when gender differences have been explicitly studied, they are hard to detect because they tend to be small. There appear to be no substantial differences between men and women in average IQ. But the distribution of IQ scores is slightly different for men than for women. Men tend to be more heavily represented at the extremes of the IQ distribution. Men are affected by mental retardation more frequently than are women, and they also outnumber women at very high levels of measured intelligence. Women’s scores are more closely clustered around the mean. Although there are no differences in overall IQ test performance between men and women, there do seem to be differences in some more specialized abilities. Men, on average, perform better on tests of spatial ability than do women. Spatial ability is the ability to visualize spatial relationships and to mentally manipulate objects. The reason for this difference is unknown.
Some psychologists speculate that spatial ability evolved more in men because men were historically hunters and required spatial ability to track prey and find their way back from hunting forays. Others believe that the differences result from parents’ different expectations of boys’ and girls’ abilities. Many studies have examined whether gender differences exist in mathematical ability, but the results have been inconsistent. In 1990 American researchers statistically combined the results of more than 100 studies on gender differences in mathematics using a technique known as meta-analysis. They found no significant differences in the average scores of males and females on math tests. Research also indicates that the average girl’s grades in mathematics courses equal or exceed those of the average boy. Other studies have found that boys and girls perform equally well on math achievement tests during elementary school, but that girls begin to fall behind boys in later years. For example, male high school seniors average about 45 points higher on the math portion of the SAT than do females. A 1995 study examined the performance of more than 100,000 American adolescents on various mental tests. The study found that on average, females performed slightly better than males on tests of reading comprehension, writing, perceptual speed, and certain memory tasks. Males tended to perform slightly better than females on tests of mathematics, science, and social studies. In almost all cases, the average sex differences were small. Are differences in abilities between men and women biologically based or are they due to cultural influences? There is some evidence on both sides. On the biological side, researchers have studied androgenized females, individuals who are genetically female but were exposed to high levels of testosterone, a male hormone, during their gestation. As these individuals grow up, they are culturally identified as female, but they tend to play with “boys’ toys,” like blocks and trucks, and have higher levels of spatial ability than females who were not exposed to high levels of testosterone. Further evidence for a biological basis for spatial gender differences comes from comparisons of the brains of men and women. Even when corrected for body size, males tend to have slightly larger brains than females. Some scientists speculate that this extra brain volume in males may be devoted to spatial ability. On the cultural side, many social scientists argue that differences in abilities between men and women arise from society’s different expectations of them and from their different experiences. Girls do not participate as extensively as boys do in cultural activities thought to increase spatial and mathematical ability. As children, girls are expected to play with dolls and other toys that develop verbal and social skills while boys play with blocks, video games, and other toys that encourage spatial visualization. Later, during adolescence, girls take fewer math and science courses than boys, perhaps because of stereotypes of math and science as masculine subjects and because of less encouragement from teachers, peers, and parents. Many social scientists believe cultural influences account for the relatively low representation of women in the fields of mathematics, engineering, and the physical sciences. It is important to remember that sex differences, where they exist, represent average differences between men and women as groups, not individuals.
Knowing whether an individual is female or male reveals little about that person’s intellectual abilities.
D. RACIAL AND ETHNIC DIFFERENCES
Numerous studies have found differences in measured IQ between different self-identified racial and ethnic groups. For example, many studies have shown that there is about a 15-point IQ difference between African Americans and whites, in favor of whites. The mean IQ scores of the various Hispanic American subgroups fall roughly midway between those for blacks and whites. Although these differences are substantial, there are much larger differences between people within each group than between the means of the groups. This large variability within groups means that a person’s racial or ethnic identification cannot be used to infer his or her intelligence. The debate about racial and ethnic differences in IQ scores is not about whether the differences exist but about what causes them. In 1969 Arthur Jensen, a psychology professor at the University of California at Berkeley, ignited the modern debate over racial differences. Jensen published a controversial article in which he argued that black-white differences in IQ scores might be due to genetic factors. Further, he argued that if IQ had a substantial genetic component, remedial education programs to improve IQ should not be expected to raise IQ as they were currently being applied. American psychologist Richard Herrnstein and American social analyst Charles Murray renewed the debate with the publication of The Bell Curve (1994). Although only a small portion of the book was devoted to race differences, that portion received the most attention in the popular press. Among other arguments, Herrnstein and Murray suggested it was possible that at least some of the racial differences in average IQ were due to genetic factors. Their arguments provoked heated debates in academic communities and among the general public. As discussed earlier, research supports the idea that differences in measured intelligence between individuals are partly due to genetic factors. However, psychologists agree that this conclusion does not imply that genetic factors contribute to differences between groups. No one knows exactly what causes racial and ethnic differences in IQ scores. Some scientists maintain that these differences are in part genetically based. Supporters of this view believe that racial and ethnic groups score differently on intelligence tests partly because of genetic differences between the groups. Others think the cause is entirely environmental. In this view, certain racial and ethnic groups perform more poorly on IQ tests because of cultural and social factors that put them at a disadvantage, such as poverty, less access to good education, and prejudicial attitudes that interfere with learning. Representing another perspective, many anthropologists reject the concept of biological race, arguing that races are socially constructed categories with little scientific basis (see Race). Because of disagreements about the origins of group differences in average IQ, conclusions about these differences must be evaluated cautiously. Some research indicates that the black-white differential in IQ scores might be narrowing. Several studies have found that the difference in average IQ scores between African Americans and whites has shrunk to 10 points or less, although research has not established this trend clearly.
The National Assessment of Educational Progress, a national longitudinal study of academic achievement, also shows that the performance of African Americans on math and science achievement tests improved between 1970 and 1996 when compared to whites. Educators and researchers have focused much attention on explaining why some ethnic groups perform more poorly than others on measures of intelligence and academic achievement. Another topic of research is why some ethnic groups, particularly Asian Americans, perform so well academically. Compared to other groups, Asian American students get better grades, score higher on math achievement and aptitude tests, and are more likely to graduate from high school and college. The exact reasons for their high academic performance are unknown. One explanation points to Asian cultural values and family practices that place central importance on academic achievement and link success in school with later occupational success. Critics counter that this explanation does not explain why Asian Americans excel in specific kinds of abilities. The academic and occupational successes of Asian Americans have caused many people to presume Asian Americans have higher-than-average IQs. However, most studies show no difference between the average IQ of Asian Americans and that of the general population. Some studies of Asians in Asia have found a 3 to 7 point IQ difference between Asians and whites, in favor of Asians, but other studies have found no significant differences.
Deepest ocean yields quirky creatures
Colonies of tiny, soft-bodied organisms thrive in the deepest parts of the ocean, say Japanese researchers. Many of the creatures they dredged up are single-celled organisms called foraminifera, many of them new to science. This surprised the researchers because foraminifera do not normally live deep in the ocean and because they typically have hard shells. The little organisms may have slowly adapted to the dark and high pressure found in the trenches more than 10,000 metres under the Pacific Ocean, the researchers report in today's issue of the journal Science. Scientists know little about the tiny organisms, known as meiofauna, that live in the sediments of deep-ocean trenches. The extreme depths make these habitats among the most remote in the world and among the most difficult to sample. Yuko Todo of Shizuoka University and colleagues from Japan and the UK used a remotely operated vehicle to sample the very deep trenches of the western Pacific, which they say reached their present depths 6 to 9 million years ago. They looked at Challenger Deep, which, at 10,896 metres, is the deepest place in the ocean. There they sampled more than 400 living foraminifera, which varied in shape from long, needle-like organisms to spherical ones. "The lineage to which the new soft-walled foraminifera belong includes the only species to have invaded fresh water and land," the researchers say. "And analysis of the new organisms' DNA suggests they represent a primitive form of organism dating back to Precambrian times from which more complex multichambered organisms evolved." The researchers used the Japan Agency for Marine Earth Science and Technology's Kaiko remotely operated vehicle to collect their samples.
She began by talking about boys and girls and adults who care about others and their communities. She told the class that children and adults who care are practicing a behavior important to being good citizens. To help the children understand, she asked each student to think about what he or she would do if… Why not play along with the “Can Dos” and think about what you would do?
- A boy in the cafeteria fell. A) Would you help him up, even if it meant losing your place in line to get food? B) Would you hope someone else would help so you wouldn’t lose your place in line?
- One of your classmates has a bloody nose. A) Would you turn away because the sight of blood makes you sick? B) Would you give him or her a tissue and get the teacher’s attention?
- You go to the movies with a few friends, one of whom uses a wheelchair. Everyone wants to sit up front, but your friend has to sit in the wheelchair-accessible section. A) Would you sit in the wheelchair section with your friend? B) Would you sit up front and tell your friend who uses a wheelchair you’ll see him after the movie because you think he is used to sitting by himself and won’t mind?
- You borrowed your friend’s ruler; you broke it. A) Would you give it back broken and say you’re sorry? B) Would you buy a new ruler, give it to your friend, and explain that you broke the ruler he gave you?
- While you were at a friend’s house, it got cold out. Your friend gave you a jacket to wear home. On the way home, a car splashed muddy water on you and got the jacket dirty. A) Would you wash the jacket before you gave it back? B) Would you give it back dirty and explain to your friend what happened?
In early intervention programs, many students have few lessons that they can complete independently, but TeachTown Basics lessons include mechanisms (such as automatic prompting and predictable expectations) to increase student independence. While some students are working independently on the TeachTown Basics computer lessons, the teacher has more time to work individually and in small groups with the remaining students. In addition, the Generalization Lessons are paraeducator friendly. While a paraeducator is facilitating a Generalization Lesson with a student or students, the teacher gains even more time to work individually with other students. The TeachTown Basics curriculum can be customized in two ways: through automatic, questionnaire-driven lesson selection and through teacher-selected lessons. Before a student begins using TeachTown Basics, the teacher (or another adult knowledgeable about the student’s skills) completes a Student Questionnaire. The Student Questionnaire is used to place the student at an individualized starting point in each of the six TeachTown Basics learning domains. TeachTown Basics automatically chooses computer lessons for the student. Initially, it chooses lessons based on the teacher’s answers to the Student Questionnaire. As the student moves through the curriculum, TeachTown Basics selects new lessons based on the student’s progress. After the placement questionnaire has been completed, the teacher never has to select a computer lesson for the student. However, there are times when a teacher will want to choose specific computer lessons for the student. Common reasons for selecting specific lessons are to align with an IEP goal, to align with a state or district content standard, to provide pre-teaching for a classroom activity in a self-contained or an inclusion classroom, or to provide pre-teaching for an upcoming field trip. Teachers can easily review the entire TeachTown Basics curriculum and quickly select lessons for a student. With TeachTown Basics, teachers are not forced to choose between selecting lessons themselves and letting the computer automatically select lessons for the students. Each time a student computer session is started, the adult chooses whether to use the computer-selected lessons or the teacher-selected lessons.
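As a rough illustration of the selection flow described above, the hypothetical sketch below places a student from questionnaire results, advances past mastered lessons, and falls back to a teacher-selected list when the adult chooses that option at the start of a session. The class names, field names, domain labels, and the three-lesson default are all invented for illustration; this is not TeachTown's actual code or data model.

```python
# Hypothetical sketch only -- not TeachTown's actual implementation.
from dataclasses import dataclass, field

@dataclass
class Student:
    placement: dict                                    # starting index per domain, from the questionnaire
    mastered: set = field(default_factory=set)         # lesson ids already completed
    teacher_picks: list = field(default_factory=list)  # lessons hand-selected by the teacher

def next_lessons(student, curriculum, use_teacher_picks, n=3):
    """curriculum maps a domain name to an ordered list of lesson ids.
    Return teacher-selected lessons if that mode was chosen; otherwise the next
    unmastered lesson in each domain, starting from the placement point."""
    if use_teacher_picks and student.teacher_picks:
        return student.teacher_picks[:n]
    chosen = []
    for domain, lessons in curriculum.items():
        for lesson in lessons[student.placement.get(domain, 0):]:
            if lesson not in student.mastered:
                chosen.append(lesson)
                break
    return chosen[:n]

# Invented domain and lesson names, purely for the example:
curriculum = {"receptive_language": ["RL-1", "RL-2", "RL-3"], "math": ["M-1", "M-2"]}
s = Student(placement={"receptive_language": 1, "math": 0}, mastered={"M-1"})
print(next_lessons(s, curriculum, use_teacher_picks=False))   # ['RL-2', 'M-2']
```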
Field Testing DI Programs
All Direct Instruction programs are developed with extensive field testing. Most traditional programs are routinely published without first being subjected to extensive trials in actual learning situations. In contrast, Direct Instruction programs are field tested to determine the extent to which students actually master the material that is presented in a program and the extent to which teachers are able to follow the program’s presentation specifications. Field testing begins as the Direct Instruction programs are being developed. The authors select several classrooms in which to field test the first version of the curriculum. These classrooms are selected to include a variety of students and teachers. At least some of these classes must include the lowest performing students who will be placed in the program. This is important because the lower performers make all the mistakes that higher performers make, plus additional mistakes that higher performers tend not to make. Every activity in the field-test version of the program is assessed. Data are kept on the number of students who miss particular items and the incorrect responses that were made by the students. Rules are established about what student performance levels indicate a need to revise the teaching sequence. Student errors are analyzed to determine how the sequence of instruction likely led to the student problems. Revisions may include:
- making the initial explanations clearer
- including more thorough teaching of component skills that were not adequately taught
- providing more scaffolding to prompt students to apply strategies
- providing more practice to help students discriminate when to apply strategies
- other elements that will improve the instruction
The revised version of the curriculum is then field tested and revised again. Data are kept on student performance on each item. In addition to the data on student performance, information on how long tasks take to present is collected, as well as feedback from teachers about the clarity of directions. Each element of this field testing procedure is essential to providing teachers with a program capable of being an effective instructional tool for all students. The extensive testing that underlies the development of Direct Instruction programs is the primary reason that they are so effective.
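The item-level record keeping described above, counting how many field-test students miss each item and flagging items whose error rates call for a revision of the teaching sequence, might look something like the sketch below. The 30 percent threshold, the data layout, and the item ids are assumptions made for illustration; they are not a published Direct Instruction criterion.

```python
# Illustrative sketch; the threshold and data format are assumptions.
from collections import defaultdict

def items_needing_revision(responses, max_error_rate=0.30):
    """responses: iterable of (item_id, correct) pairs from field-test classrooms.
    Returns {item_id: error_rate} for items whose error rate exceeds the threshold."""
    errors, totals = defaultdict(int), defaultdict(int)
    for item_id, correct in responses:
        totals[item_id] += 1
        if not correct:
            errors[item_id] += 1
    return {item: round(errors[item] / totals[item], 2)
            for item in totals if errors[item] / totals[item] > max_error_rate}

# Hypothetical data: item "L12-T4" is missed by 4 of 10 students and gets flagged.
data = [("L12-T4", ok) for ok in [True] * 6 + [False] * 4] + [("L12-T5", True)] * 10
print(items_needing_revision(data))   # {'L12-T4': 0.4}
```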
The UDHR is a milestone document that proclaims the inalienable rights to which everyone is entitled as a human being – regardless of race, colour, religion, sex, language, political or other opinion, national or social origin, property, birth or other status. Available in more than 500 languages, it is the most translated document in the world. This year’s Human Rights Day theme relates to 'Equality' and Article 1 of the UDHR – “All human beings are born free and equal in dignity and rights.” The principles of equality and non-discrimination are at the heart of human rights. Equality is aligned with the 2030 Agenda and with the UN approach set out in the document Shared Framework on Leaving No One Behind: Equality and Non-Discrimination at the Heart of Sustainable Development. This includes addressing and finding solutions for deep-rooted forms of discrimination that have affected the most vulnerable people in societies, including women and girls, indigenous peoples, people of African descent, LGBTI people, migrants and people with disabilities, among others. Equality, inclusion and non-discrimination, in other words a human rights-based approach to development, is the best way to reduce inequalities and resume our path towards realising the 2030 Agenda.
Can I Keep My Houseplants Under Lights 24 Hours a Day?
The importance of light to a plant's well-being should not be underestimated. Photosynthesis, which produces the sugars that fuel plant growth, relies completely on light exposure. Providing adequate natural or artificial light in an indoor setting is crucial, especially for flowering houseplants. Although different species have varying light requirements, no plant can tolerate continuous light exposure for 24 hours a day. Just as in nature's cycle of light and dark, plants need a daily period of darkness and rest to remain healthy.
Best Houseplant Lights
Since incandescent lights can give off enough heat to scorch plants, fluorescent bulbs make more sense. Red and blue wavelengths affect plants more than any other colors in the spectrum: red boosts flowering, while blue supports leaf development. To achieve the right balance, full-spectrum light bulbs, available at garden centers, work best. Combining one warm and one cool fluorescent light tube provides a similar effect more economically. Along with the lights, install a timer to ensure that they illuminate your plants on a consistent schedule. When a plant receives only artificial lighting, the fluorescent bulbs should be lit during the normal daylight hours; lights turned on very early in the day or extremely late at night have not proven to be as productive. For plants that obtain no direct sunlight from a window, extend their period under lights to 16 to 18 hours a day in most cases. However, 12 to 14 hours may suffice if some natural light is available daily. Generally, green foliage plants have lower light requirements than variegated specimens or houseplants grown for their flowers.
The number of hours of light a plant receives each day is called the photoperiod, and the flowering and fruiting of different plant groups respond to the balance between daily light and uninterrupted darkness. For example, short-day plants that bloom around the time of the winter solstice, such as the poinsettia (Euphorbia pulcherrima) and Christmas cactus (Schlumbergera bridgesii), need long nights of uninterrupted darkness, indoors or out. Because these plants thrive outdoors only in U.S. Department of Agriculture plant hardiness zones 9 and 10, they are commonly cultivated as houseplants. For best results, give short-day plants about 10 hours under lights each day until buds appear. Warm-season vegetable seedlings, which will need longer days to bear fruit, tolerate much shorter nights.
Signs of Incorrect Lighting
Plants appear leggy when they receive too little light. Leaves of light-deprived houseplants tend to be smaller than normal, and their stems may have long spaces between the nodes where leaves form. A light-starved flowering plant will likely produce no blooms, and the foliage of other light-starved plants may fade to pale green. Conversely, a plant grown under 24-hour lighting would react to the excessive light by displaying yellow leaves, followed by brown markings between the veins and on the leaf edges.
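As a rough planning aid for the schedules described above, the sketch below turns a timer's "on" time into an "off" time using the hour ranges from this article: about 10 hours for short-day plants until buds appear, the midpoint of 12 to 14 hours when some window light is available, and the midpoint of 16 to 18 hours when the lights are the only source. The function name and the midpoint choices are assumptions for illustration; actual needs vary by species.

```python
from datetime import datetime, timedelta

def lights_off_time(on_time="06:00", natural_light=False, short_day_plant=False):
    """Return the timer 'off' time as HH:MM; an off time past midnight wraps to the next day."""
    if short_day_plant:
        hours = 10        # short-day plants, until buds appear
    elif natural_light:
        hours = 13        # midpoint of 12-14 hours
    else:
        hours = 17        # midpoint of 16-18 hours
    start = datetime.strptime(on_time, "%H:%M")
    return (start + timedelta(hours=hours)).strftime("%H:%M")

print(lights_off_time("06:00"))                         # 23:00 -> 17 hours, lights-only
print(lights_off_time("08:00", natural_light=True))     # 21:00 -> 13 hours
print(lights_off_time("07:00", short_day_plant=True))   # 17:00 -> 10 hours
```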
You will need
- Sheet of rigid transparent film
- Rolling ruler for drawing parallel lines
- Awl
- Hobby knife
- Felt-tip pen
Decide on the height and width of the printed letters you are going to put on the sheet of paper, as well as the spacing between letters and between lines. Using the rolling ruler and the awl, score a set of parallel horizontal lines on the sheet of transparent film. The step between the lines should alternate between the height of the letters and the spacing between lines. Now score parallel vertical lines on the film in the same way, alternating the width of the letters with the spacing between letters. (The layout arithmetic is sketched in the short example after these instructions.) When you have finished, the sheet of film will be covered with rectangles whose positions correspond to the positions of the letters. Use the hobby knife to cut out all of these rectangles. You now have a stencil, but not an ordinary one: a universal stencil that preserves only the location of the characters, not their shapes. This means that by placing the stencil on a sheet of paper, you can write in each rectangle, with a felt-tip pen, any letter of the Roman alphabet, a digit, a punctuation mark, a mathematical symbol or any other sign. All of them will have the same dimensions and will sit on the paper in neat rows.
To write text in block letters along an arc, make a special fixture. It is a strip of transparent film with a window cut in one end; the window's width and height equal the width and height of the letters. The length of the strip should be slightly greater than the radius of the arc along which you intend to place the text. The opposite end of the strip is pinned to the sheet of paper with the awl at the center of the arc. After drawing a character, rotate the strip by just enough that the right edge of the drawn character coincides with the left edge of the strip, then draw the next character. The distances between the characters come out equal, and they sit on a smooth arc. Text produced on paper in this way can be given additional expressiveness with colored markers. Be careful when handling the hobby knife.
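The grid arithmetic behind the flat stencil, alternating letter width with letter spacing across the sheet and letter height with line spacing down it, can be worked out with a short sketch like the one below. The function name, the units, and the example dimensions are arbitrary choices for illustration.

```python
def grid_lines(letter_w, letter_h, gap_x, gap_y, cols, rows):
    """Return (vertical, horizontal) scribe-line positions, in the same units as the inputs.
    Positions alternate: cell edge, gap, cell edge, gap, ... starting at 0."""
    xs, ys = [0.0], [0.0]
    for c in range(cols):
        xs.append(xs[-1] + letter_w)        # right edge of a letter cell
        if c < cols - 1:
            xs.append(xs[-1] + gap_x)       # left edge of the next cell
    for r in range(rows):
        ys.append(ys[-1] + letter_h)        # bottom edge of a line of text
        if r < rows - 1:
            ys.append(ys[-1] + gap_y)       # top edge of the next line
    return xs, ys

# 10 mm wide, 15 mm tall letters, 3 mm between letters, 8 mm between lines:
xs, ys = grid_lines(10, 15, 3, 8, cols=4, rows=2)
print(xs)   # [0.0, 10.0, 13.0, 23.0, 26.0, 36.0, 39.0, 49.0]
print(ys)   # [0.0, 15.0, 23.0, 38.0]
```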
1. Question: Which of the following occasions involves a formal debate? C: Criminal court argument. D: Air stewardess and passengers' argument. Answer: Criminal court argument.
2. Question: Educational debates benefit students by ______. A: enhancing the integrity of ideas. B: helping them get high scores. C: making more friends. Answer: enhancing the integrity of ideas.
3. Question: Debaters can become effective citizens because ______. A: they learn to think calmly. B: they learn to speak frankly. C: they learn to respond quickly. D: they learn to think critically and argue skillfully. Answer: they learn to think critically and argue skillfully.
4. Question: Which of the following is an English debate sponsored by China? A: Public Forum Debate. D: British Parliamentary Debate. Answer: FLTRP Cup.
5. Question: Debate has been important in the United States since ____. A: the country was established. B: the beginning of the last century. C: the Civil Rights Movement. D: the presidential campaign was broadcast on TV. Answer: the country was established.
1. Question: How many motion types are there in English debate?
2. Question: What does the lecturer advise debaters to do to better analyze the context of a motion? A: To research the background of the motion. B: To ask relevant questions. C: To determine the motion type. D: To explain difficult terms. Answer: To ask relevant questions.
3. Question: To analyze a value motion, what should be done first? A: To set the context. B: To define the motion. C: To provide evidence. D: To set standards. Answer: To set standards.
4. Question: Which of the following should be addressed when analyzing a policy motion? A: The application of the standard. B: The importance of the motion. C: The need to change the current policy. D: The responsibility of the government. Answer: The need to change the current policy.
5. Question: Which of the following is not necessary for motion analysis? A: Justifying the feasibility of the plan. B: Defining key terms. C: Determining the motion type. D: Analyzing the context of the motion. Answer: Justifying the feasibility of the plan.
1. Question: Whose duty is it to do the proposition case construction? D: Leader of Opposition. Answer: Opening Government.
2. Question: Which of the following is NOT included in constructing a case? A: Defining and interpreting the motion. B: Specifying the Government's position. C: Illustrating key arguments for the motion. D: Offering rebuttals to the opposition's arguments. Answer: Offering rebuttals to the opposition's arguments.
3. Question: What are the two types of motions you'll generally encounter in a debate round? A: Active and passive motions. B: Principle and practical motions.
Since time immemorial, Koreans have been celebrated the world over for their remarkable bow-making expertise as well as their legendary bowmanship. <A painting of a hunting scene on the wall of Muyongchong tomb depicts the vigorous horsemanship of the Goguryeo people.> The bow was an essential tool of everyday life in ancient Korea that eventually gave way to more technologically advanced weaponry in the modern era. The bow was the symbol of the king; arrows symbolized sunlight. A good marksman was believed to be someone who could control the sun—the king. Jumong, the founder of the ancient Korean kingdom of Goguryeo (37 B.C.-668 A.D.), literally means "one who is skilled with the bow." Indeed, the Goguryeo people were famous for their outstanding use of the bow and arrow while on horseback, a skill which is noted in many historical anecdotes. In the Muyongchong tomb from the Goguryeo period discovered in former Manchurian territory, there is an ancient yet well-preserved painting that shows the strength and vigor of the Goguryeo people. Titled Suryeopdo (painting of hunting), the right side features a cart pulled by a white ox behind a large tree, while the left depicts hunting warriors. Five men on horseback with feathered hats pull on their bows to hunt tigers and deer, with mountains shown in the distance. The painting illustrates the dynamic energy of the hunters racing through mountains and streams as well as the riders performing the highly difficult move of releasing the reins entirely to twist their torsos around to shoot arrows. There are largely three types of bows in the world: dansungung (simple bow), ganghwagung (backed bow), and the hapseonggung (composite bow). The dansungung is a simple bow made from wooden or bamboo sticks that was mostly used in the southeastern areas of North America, Oceania and sub-Saharan Africa. During the Middle Ages, the dansungung was used in regions of Europe north of the Alps. The ganghwagung has a body bound with string to increase its resistance, examples of which have been excavated from Maglemose remains from the Mesolithic era of Northern Europe. Bows fortified through wrapping a piece of wood around the body's middle have also been found in the Aru Islands of Indonesia, among the African pygmies, and in Northeast Asia and Alaska. The hapseonggung is a further-developed version of the ganghwagung. The hapseonggung is believed to have been invented by nomadic tribes of central Asia and has been discovered along the Mediterranean coast and in regions of central Asia, China and western North America. It is made by pasting together two sheets of wood with glue or by pasting animal tendons onto the back of the bow's body. The hapseonggung is the most developed of the three types and was the weapon of choice of horse riders due to its strength despite its relatively small size. <A painting by Kim Hong Do shows a seonbi (an intellectual from the Joseon Dynasty) using a bow and arrow during the 18th century Joseon period.> Traditional Korean bows can be classified into the following categories: gakgung, gogung, jeongryanggung, yegung, mokgung, cheolgung, and cheoltaegung. The gakgung was primarily used during the Joseon Dynasty as the basic weapon of the military and was made out of water buffalo horns. Making a single gakgung took approximately four months and required materials such as water buffalo horns, ox tendons, bamboo, mulberry wood, glue made from a croaker's air bladder, and bark from a cherry tree. 
Gakgung made in this manner were small in size but were able to shoot arrows across long distances. Compared to the British longbow, which can shoot 200 meters, the gakgung can shoot an arrow as far as 500 meters with an effective range of 350 meters. <Korean bows are characterized by their use of ox horns and tendons.> <The Korean people are known for their exceptional bow-making techniques. A master bow artisan makes a bow and arrow in the traditional method.> Together with the British and the Japanese, Koreans used bows as long as they possibly could before they were forced to give way to more advanced technology. However, the Korean bow and arrow that almost disappeared within the stormy tides of the modern era have been reborn in the sport of archery. At the 30th World Archery Championship held in West Berlin in July 1979, then high school student Kim Jin Ho achieved a world record, winning five out of six events. Following this initial victory, Korea continued to assert itself on the global stage as an archery powerhouse. Korean archers have won gold medals in nearly every international archery competition, including the 1984 Los Angeles Olympics, the Seoul World Archery Championship in October 1985, the 1988 Seoul Olympics, the 1992 Barcelona Olympics, and the 1996 Atlanta Olympics. By sweeping various world records as well, Korean archers are rewriting the history of world archery. In the past, the Korean people were called "Eastern people who are skilled with the bow." This ancient tradition continues into the present day. It is no coincidence that Korea displays some of the best bowmanship in the world. * Photos courtesy of Cultural Heritage Administration of Korea.
Global warming is the ongoing rise in the average temperature of the Earth's climate system, driven by greenhouse gases released as people burn fossil fuels.
CAUSES OF GLOBAL WARMING
Global warming is caused by the increasing concentration of greenhouse gases in the atmosphere, mainly from human activities such as burning fossil fuels, deforestation, and farming.
- Burning fossil fuels: When we burn fossil fuels like coal, oil and gas to power our cars and generate electricity, CO2 is released into the atmosphere.
- Deforestation and tree clearing: Plants and trees play a very important role in regulating climate because they absorb CO2 from the atmosphere and give back O2. Forests act as carbon sinks and help limit warming. But humans are constantly clearing vegetation for farming, for infrastructure, or to sell tree products such as timber. When trees are removed or burnt, the stored carbon is released back into the atmosphere as CO2, which contributes to global warming.
- Agriculture and farming: Livestock such as sheep and cattle produce methane, a greenhouse gas. Some fertilizers that farmers use also release nitrous oxide, another greenhouse gas.
Global warming occurs when CO2, other air pollutants and greenhouse gases collect in the atmosphere and absorb sunlight and solar radiation that has bounced off the Earth's surface. Normally this radiation would escape into space, but these pollutants trap the heat, which causes the planet to get hotter. These pollutants last for years to centuries in the atmosphere.
EFFECTS OF GLOBAL WARMING
- It leads to extreme weather events such as heat waves, droughts, cyclones, blizzards and rainstorms.
- It leads to floods: tree roots hold water in the soil, so when we cut down trees the chance of flooding increases.
- Prolonged periods of warmer temperatures typically cause soil and underbrush to stay drier for longer, increasing the risk of wildfire.
- Global warming also affects the oceans. Ongoing effects include rising sea levels, due to thermal expansion and the melting of glaciers and ice sheets, and warming of the ocean surface, which leads to increased temperature stratification.
- Warmer water cannot hold as much oxygen as cold water, so heating is expected to leave less oxygen in the ocean, leading to oxygen depletion.
These are only some of the effects. There are many more, so it is necessary to do something to reduce global warming.
HOW CAN WE REDUCE GLOBAL WARMING
- Apply the 3Rs in your life: Reduce, Reuse, Recycle. Use reusable products instead of disposable ones, e.g. a reusable water bottle. Buy products with minimal packaging to reduce waste. Always recycle things like paper, plastic, newspaper, glass and aluminium foil.
- Use less heating and air conditioning. Adding insulation to your walls and attic and installing weather stripping and caulking around doors and windows can lower your heating costs by more than 25% by reducing the amount of energy you need to heat and cool your home.
- Replace regular light bulbs with LED bulbs; they are even better than CFLs.
- Cycle or walk instead of using the car, and check out options for carpooling. If you do drive, make sure your car is running efficiently.
- Use less hot water: set your water heater to 120 degrees to save energy, and buy low-flow showerheads to save hot water and about 350 pounds of CO2 per year.
- Save electricity by turning off the lights when you leave a room and using only as much light as you need.
- Try to plant a tree at least once a year. Trees absorb CO2 and give off O2, which helps to reduce global warming.
Try to apply these small steps and save the Earth from global warming. It is your Earth, and it is your responsibility to save it.
Much was learned in the twentieth century about disease and how it presents in children and adults. Traditional medical teaching emphasizes specific disease symptoms and signs that point to a specific diagnosis. However, in the last several decades it has become apparent that common diseases often present differently in older adults. This has led to the concept of so-called atypical disease presentation in older adults. The exact prevalence of atypical disease presentation in the elderly is unclear in the medical literature, but some researchers report that as many as 50 percent or more of older adults, particularly those who are frail, present with disease atypically.
As people age, many bodily changes occur, but two have major consequences for disease presentation. The first is the inevitable alterations that occur within the various body systems that represent normal physiological changes of aging. Many are unavoidable and alone do not fully explain why older adults often present atypically. However, some of these changes certainly set the stage for increased susceptibility both to illness and to the way in which the disease presents itself. A classic example is the alteration in thermoregulation with age, so that many older people do not have a fever when infected. Other examples are the increased risk for hyperthermia, hypothermia and dehydration, due primarily to changes in the body's ability to control body temperature and detect thirst. These normal consequences of aging, coupled with concomitant disease and medications, together help set the stage for atypical disease presentations.
The second change that occurs with aging is related to the pathophysiological changes associated with aging, or the accumulation of disease. Many of the most common diseases that affect people increase in frequency and severity as the body ages. The likelihood of developing major diseases such as heart disease, diabetes, osteoporosis, stroke, and dementia all increases with age.
Another evolving concept that contributes to atypical disease presentations in older adults is the syndrome of frailty, understood as a vulnerable state arising from multiple interacting medical and social problems. The syndrome of frailty is critical to the understanding of disease presentation in the older adult. It is the frail elderly who most often present atypically.
English Grammar and Punctuation This week, we have learnt how to use colons correctly to punctuate sentences. Key Learning Points: - Colons can introduce a list. - There needs to be a main clause before the colon. - Colons can introduce an explanation. - Colons can replace the word 'because'. - Colons can introduce a 'reveal'. Try to complete the COLONs CHALLENGE sheet that I have attached. Mark your own work by checking the answer sheet below.
Like other very successful protocols such as HTTP and DNS, over the years BGP has been given more and more additional jobs to do. In this blog post, we’ll look at the new functionality and new use cases that have been added to BGP over the years. These include various uses of BGP in enterprise networks and data centers. BGP for Internet routing The world looked very different back in 1989 when the specification for the first version of BGP was published. Before BGP, there was the Exterior Gateway Protocol (EGP), but EGP was designed to have a central backbone that other networks connect to (similar to how OSPF’s area 0 connects other areas). As regional networks started to connect directly to each other and the first commercial network providers popped up, a more flexible routing protocol was needed to handle the routing of packets between the different networks that collectively make up the internet. Global “inter-domain” routing is of course still the most notable use of BGP. In those early days of BGP, the Internet Protocol (IP) was just one protocol among many: big vendors usually had their own network protocols, such as IBM’s SNA, Novell’s IPX, Apple’s AppleTalk, Digital’s DECnet and Microsoft’s NetBIOS. As the 1990s progressed, these protocols quickly became less relevant as IP was needed to talk to the internet anyway, so it was easier to run internal applications over IP, too. BGP in enterprise networks The result was that BGP quickly found a role as an internal routing protocol in large enterprise networks. The reason for that is that unlike internal routing protocols such as OSPF and IS-IS, BGP allows for the application of policy, so the routing between parts of an enterprise can be controlled as necessary. Of course for inter-domain (internet) routing BGP is used with public IP addresses and public Autonomous System (AS) numbers. These are given out by the five Regional Internet Registries (RIRs) such as ARIN in North America. In enterprise networks, the private address ranges 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16 are often used, but, perhaps surprisingly, enterprise networks also tend to use public addresses, as coordinating the use of the private ranges becomes very complex very quickly in large organizations, especially after mergers. Enterprise networks do tend to use private AS numbers extensively, as the RIRs have become more restrictive in giving out public AS numbers over time—despite the fact that as of about ten years ago we’ve moved from 16-bit to 32-bit AS numbers, so there is absolutely no shortage. The original 16-bit private AS number range is 64512 to 65534, allowing for 1023 private AS numbers. That’s not a lot in a large network, so it’s not uncommon for the same AS number to be used in different parts of the network. The 32-bit private range is 4200000000 to 4294967294, allowing for almost 95 million additional private AS numbers. So clashing AS numbers should no longer be a problem, as long as people more or less randomly select them, rather than all start at 4200000001. Aside from these considerations, the structure of enterprise networks is fairly similar to the structure of the internet in general, with perhaps somewhat less of an emphasis on BGP security. BGP in the datacenter underlay Things are different in the datacenter. In large data centers, BGP has three different roles. 
First, there’s the “underlay”, the physical network that allows for moving packets between any server in any rack in the data center to any other server anywhere else, as well as to the firewalls and routers that handle external connectivity. Second, there’s often an “overlay” that creates a logical structure on top of the physical structure of the underlay. And third, BGP may be used by physical servers for routing packets to and from virtual machines (VMs) running on those servers. It’s estimated that for each kilobyte of external (north-south) traffic, as much as 930 kilobytes of internal (east-west) traffic within the datacenter is generated. This means that the amounts of traffic that are moved within a large datacenter with many thousands of (physical) servers are absolutely massive. Internet-like hierarchical network topologies can’t support this, so data centers typically use a leaf-spine topology, as explained in our blog post BGP in Large-Scale Data Centers. 5-Stage Clos Network Topology with Clusters Even more so than in enterprise networks, in the datacenter underlay, BGP is used as an internal routing protocol. This means it would be useful for BGP to behave more like an internal routing protocol and automatically detect neighboring routers, rather than requiring neighbor relationships to be configured explicitly. Work on this has started in the IETF a few years ago, as discussed in our blog post BGP LLDP Peer Discovery, but so far, this work hasn’t yet been published as an RFC. Another issue with such large scale use of BGP is that it requires a significant amount of address space to number each side of every BGP connection. An interesting way to get around this is the use of “BGP unnumbered”. Using BGP unnumbered with the FRRouting package (a fork of the open-source Quagga routing software) is well described in this book chapter at O’Reilly. The idea is to set up BGP sessions towards the IPv6 link-local address of neighboring routers, and then exchange IPv4 prefixes over those BGP sessions. An IPv6 next-hop address is used for those prefixes as per RFC 5549. That doesn’t seem to make any sense at first: how can a router forward IPv4 packets to an IPv6 address? But the next-hop address doesn’t actually do anything directly. The function of the next-hop address is to be the input for ARP in order to determine the MAC address of the next hop. That very same MAC address can, of course, be obtained from the IPv6 next-hop address using IPv6 Neighbor Discovery. This way, BGP routers can forward packets to each other without the need for BGP to use up large numbers of IPv4 addresses. BGP EVPN in the datacenter overlay Many datacenter tenants have their own networking needs that go beyond the simple anything-to-anything IP-based leaf-spine model. So they implement an overlay network on top of the datacenter underlay network through tunneling. These can be IP-based tunnels using a protocol like GRE, or layer 2 run on top of a layer 3 underlay, often in the form of VXLAN. VXLAN is typically implemented in the hypervisor on a physical server. This way, the VMs running on the same and different physical servers can be networked together as needed—including with virtual layer 2 networks. However, running a layer 2 network over a layer 3 network poses a unique challenge: BUM traffic. BUM stands for broadcast, unknown unicast and multicast. These are the types of traffic a switch normally floods to all ports. 
To avoid this, VTEPs (VXLAN tunnel endpoints) / NVEs (Network Virtualization Edges) can use BGP to communicate which IP addresses and which MAC addresses are used where, along with other parameters, so the need to distribute BUM traffic is largely avoided. The VTEPs implement "ARP suppression", which lets them answer ARP requests for remote addresses locally, so ARP broadcasts don't have to be flooded by replicating them to all remote VTEPs. RFC 7432 specifies "BGP MPLS-Based Ethernet VPN" and is largely reused for VXLAN. Originally, BGP could only be used for IPv4 routing, but multiprotocol extensions (also used for IPv6 BGP) allow BGP to communicate EVPN information between VTEPs. Each VTEP injects the MAC and IP addresses it knows about into BGP so that all other VTEPs learn about them. The remote VTEPs can then tunnel traffic towards these addresses using the next-hop address included in the BGP update. Unlike BGP in the underlay, which typically uses eBGP, EVPN information is transported over iBGP: all VTEPs/NVEs are part of the same AS. If there are a lot of them, it's helpful to use route reflectors to avoid having excessive numbers of iBGP sessions.
BGP and pods
In addition to the underlay and the overlay, there's a third level of routing that's becoming more relevant in the datacenter: the routing between "pods" on (virtual) hosts. Systems like Docker allow applications to be put into lightweight containers for easy deployment. A container is a self-contained system that includes the right versions of libraries and other dependencies. Unlike virtual machines, which each run a separate copy of an entire operating system in their own virtualized environment, multiple containers run side by side under a single copy of an operating system. Containers can do a lot of their own networking, but typically just run as a service under a TCP or UDP port number on the (virtual) host's IP address. However, this sharing of the host's IP address makes it awkward to deploy multiple instances of the same application or service, as these tend to expect to be able to use a well-known port number. Kubernetes solves this issue by grouping containers together inside a pod. Pods are relatively ephemeral, and the idea is to run multiple pods on the same (virtual) machine. Containers in a pod share an IP address and a TCP/UDP port space. Within a Kubernetes deployment, all pods can communicate with each other without using NAT.
A pod's network interface(s) can be bridged to the network interface(s) of the host on layer 2, and thus directly connect to the layer 2 or layer 3 service provided by an overlay network. However, this is not the most scalable solution. The alternative is to have the host operating system route between its network interface(s) and the pods that are running on that host. In this situation, pods will typically be provisioned with some IP address space from a datastore such as etcd. In order for other pods and the rest of the world to reach a pod, these addresses must be made available in a routing protocol. Here again BGP has the advantage due to its flexibility, as shown in this blog post at Cloud Native Labs and this blog post over at Flockport.
BGP is undoubtedly one of the most sophisticated IP routing protocols deployed on the Internet today. Its complexity is primarily due to its focus on routing policies. The generic statement that BGP only gets used when there is a need to route between two autonomous systems is quite misleading.
There are multiple scenarios in which one may choose to use the protocol, or it might even be required. BGP remains the right tool for so many jobs!
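As a closing footnote (not part of the original post): the earlier remark about route reflectors in the EVPN overlay is easy to quantify. A full iBGP mesh among n peers needs n(n-1)/2 sessions, while a route-reflector design needs roughly one session per client per reflector. The sketch below compares the two; the VTEP counts are made up for illustration.

# Back-of-the-envelope: iBGP session counts with and without route reflectors.

def full_mesh_sessions(n_peers: int) -> int:
    # Every peer talks to every other peer exactly once.
    return n_peers * (n_peers - 1) // 2

def route_reflector_sessions(n_clients: int, n_reflectors: int = 2) -> int:
    # Each client peers with every reflector; reflectors also peer with each other.
    return n_clients * n_reflectors + full_mesh_sessions(n_reflectors)

if __name__ == "__main__":
    for vteps in (10, 100, 500):                      # illustrative fabric sizes
        print(f"{vteps} VTEPs: full mesh = {full_mesh_sessions(vteps)} sessions, "
              f"with 2 route reflectors = {route_reflector_sessions(vteps)} sessions")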
It has become increasingly important to study the urban heat island phenomenon due to the adverse effects on summertime cooling energy demand, air and water quality and most importantly, heat-related illness and mortality. The present article analyses the magnitude and the characteristics of the urban heat island in Sydney, Australia. Climatic data from six meteorological stations distributed around the greater Sydney region and covering a period of 10 years are used. It is found that both strong urban heat island (UHI) and oasis phenomena are developed. The average maximum magnitude of the phenomena may exceed 6 K. The intensity and the characteristics of the phenomena are strongly influenced by the synoptic weather conditions and in particular the development of the sea breeze and the westerly winds from the desert area. The magnitude of the urban heat island varies between 0 and 11°C, as a function of the prevailing weather conditions. The urban heat island mainly develops during the warm summer season while the oasis phenomenon is stronger during the winter and intermediate seasons. Using data from an extended network of stations the distribution of Cooling Degree Days in the greater Sydney area is calculated. It is found that because of the intense development of the UHI, Cooling Degree Days in Western Sydney are about three times higher than in the Eastern coastal zone. The present study will help us to better design and implement urban mitigation strategies to counterbalance the impact of the urban heat island in the city.
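Cooling Degree Days, mentioned above, are a simple accumulation: for every day whose mean temperature exceeds a chosen base, add the excess. The snippet below is a generic illustration of that calculation, not the study's method; the 18 °C base and the sample daily means are assumptions.

# Generic Cooling Degree Day calculation: sum of (daily mean - base) over days
# where the mean exceeds the base. The base temperature here is an assumption.

def cooling_degree_days(daily_means_c, base_c: float = 18.0) -> float:
    return sum(max(0.0, t - base_c) for t in daily_means_c)

if __name__ == "__main__":
    western_sample = [24.0, 27.5, 31.0, 29.0, 22.5]   # made-up daily means (deg C)
    coastal_sample = [22.0, 23.5, 24.0, 23.0, 21.5]
    print("Western sample CDD:", cooling_degree_days(western_sample))
    print("Coastal sample CDD:", cooling_degree_days(coastal_sample))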
How fast are you? Have you ever measured your reaction time? There's a really cool way to test reaction times using a simple ruler. Reaction time is a measurement of how quickly an organism (in your case, a human) can respond to a particular stimulus. For example, if you touch something very hot, there is a slight delay between the moment you touch it and the moment you move your hand away, because it takes time for the information to travel from your hand to your brain, where it is processed before your brain tells you to respond.
The reaction time test
What you need
- A ruler
- Pen and paper
How to do it
- Hold the top of the ruler with your arm stretched right out in front of you.
- Ask your friend to hold their thumb and index finger slightly open at the bottom of the ruler, with the ruler between their fingers but not touching it.
- Drop the ruler and get your friend to catch it between their fingers, then record the measurement on the ruler at the point where they caught it.
- Repeat for all your friends and let each person have three attempts.
- The one with the fastest reaction time is whoever catches the ruler at the lowest measurement. The sooner the ruler is caught, the quicker you are!
How does it work? You see that the ruler has been dropped and your eyes send a signal to your brain, which sends a signal to the muscles in the arm and hand to tell them to catch the ruler. Your body is very clever and these signals travel extremely quickly. Your reaction time depends on the time taken for your body's super-fast signals to travel between your eye, brain and hand.
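The article above explains the test but not how to turn a catch distance into a time. The conversion follows from free fall: the ruler drops a distance d = g·t²/2, so t = sqrt(2d/g). The sketch below applies that formula, assuming the ruler falls freely and ignoring air resistance.

# Convert the distance at which the ruler was caught into a reaction time,
# using free fall: d = 0.5 * g * t**2  ->  t = sqrt(2 * d / g).
import math

G = 9.81  # gravitational acceleration in m/s^2

def reaction_time(catch_cm: float) -> float:
    """Reaction time in seconds for a catch distance given in centimetres."""
    d = catch_cm / 100.0              # centimetres to metres
    return math.sqrt(2.0 * d / G)

if __name__ == "__main__":
    for cm in (5, 10, 15, 20, 30):
        print(f"caught at {cm} cm -> about {reaction_time(cm) * 1000:.0f} ms")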
Head lice are tiny parasites that live on the human head. They live and thrive by sucking tiny amounts of blood from the scalp and reproduce by laying their eggs in the hair. Perhaps surprisingly, head lice don't spread disease. If your child has lice, or might have lice, print out our Lice Survival Guide Checklist for parents. What do head lice (and their eggs) look like? The adult head louse has six legs and is about the size of a sesame seed. Descriptions of their color vary, but generally they range from beige to gray and may become considerably darker when they feed. Lice often appear to be the same color as the hair they've infested, making them hard to see with the naked eye. You can spot them most easily in the areas behind the ears and along the hairline on the back of the neck. Female lice lay up to ten minuscule eggs a day. Lice eggs (called nits) are oval in shape. They may appear to be the color of their host's hair, ranging from white to yellow to brown. What's the life cycle of a typical louse? The female louse attaches her eggs to human hair shafts with a waterproof, glue-like substance. This ensures that the nits can't be washed, brushed, or blown away, unlike dandruff and other bits of stuff in the hair that often gets mistaken for nits. She lays her eggs a fraction of an inch from the scalp, where it's nice and warm – just right for hatching. Nits typically hatch eight or nine days after they're laid. Once the eggs have hatched, their yellow or white shells remain attached to the hair shaft, moving farther from the scalp as the hair grows. As a result, empty nit shells attached to hairs are usually found farther away from the scalp than live eggs are. Baby lice, known as nymphs, are not much bigger than the nits and tend to be light in color. Nine to 12 days later, they become adults and mate, the females lay their eggs, and the cycle continues. An adult louse can live up to 30 days on the human head. How did my child get lice? Your child probably picked up lice from an infested sibling or playmate. Lice are crawling insects. They can't hop, jump, or fly, but they can crawl from one head to another when people put their heads together – for example, when they hug or lay their heads on the same pillow. Once female lice find their way to a child's head, they lay eggs and begin to populate the area. You can't catch nits; they have to be laid by live lice. Since lice can live for up to a day off of the human head, it's theoretically possible to get infested if your hair makes contact with items such as hats, combs, or brushes if they were used recently by an infested person. However, this is less likely than human-to-human spread. A healthy louse will rarely leave a healthy head (except to crawl onto another healthy head!), and lice found on combs are usually injured or dead. Are lice more common in dirty conditions? It's a myth that lice are a product of poor hygiene or poverty. Head lice are equal-opportunity parasites. They like clean hair as well as dirty hair and can flourish in even the wealthiest communities. So, when lice are going around, it's no one child or family's fault. If your child has lice, chances are they're traveling through the neighborhood or school. And your child has probably unknowingly infected others. Head lice are most common among preschool- and elementary school-age children and their families and caregivers. Some studies suggest that girls get head lice more often than boys. 
This may be because they have more head-to-head contact with each other and longer hair that provides more warmth and darkness (two things lice love). Interestingly, lice are much less common among African Americans in the United States than among people of other races. This may be because lice claws have a tougher time grasping the shape and width of African American hair. Reviewed by pediatric dermatologist Anthony J. Mancini, M.D., head of the division of dermatology at Children's Memorial Hospital in Chicago.
Obstructive Sleep Apnea
Obstructive sleep apnea, also known as OSA, is one of the most common types of sleep apnea known to affect individuals. It is caused by obstruction of the upper airway and is characterized by pauses in breathing during the sleep cycle. It is associated with a noteworthy reduction in blood oxygen saturation. Most people who have obstructive sleep apnea never realize that they have stopped breathing during their sleep, only discovering it when someone notices it while watching them sleep, or after a sleep study carried out to determine why they are not getting restful, fulfilling sleep. Most cases of obstructive sleep apnea are accompanied by snoring.
The symptoms associated with the condition may be present for many years without any realization. The individual may feel very sleepy during the daytime and may deal with higher levels of fatigue than they did when they felt more rested. Other common symptoms associated with the condition include anxiety, irritability, depression, forgetfulness, high blood pressure, increased heart rate, lack of sex drive, weight gain, heartburn, night sweats, and increased urination.
Adults who have OSA are generally more likely to also suffer from obesity, which is believed to contribute to apnea because the extra weight further obstructs the airway. Children can also have obstructive sleep apnea, although their symptoms tend to differ from those of adults. For example, while an adult may feel very sleepy during the daytime, a child may not have this symptom at all. Instead, a child may seem hyperactive or show the typical crankiness associated with being over-tired. Although adults with obstructive sleep apnea are generally obese, this is less likely with children; many children who experience the disorder are very thin and may even have failure-to-thrive syndrome. In children, the condition is often caused by enlarged adenoids or obstructive tonsils. It can often be cured by removing these surgically, though most parents will find that option unnecessary.
The elderly are said to be more at risk of obstructive sleep apnea because of a significant loss of muscle tone. Loss of muscle tone can also be caused by the use of chemical depressants, such as alcohol or sedatives. Working to prevent the loss of muscle tone can greatly help to reduce the risk associated with OSA.
A child or an adult may have 20/20 eyesight with or without glasses, but still have poor visual skills. Having 20/20 eyesight has little to do with how the brain is integrated, information is processed or how information is understood. The difference is critical. Poor visual skills are the most overlooked reason why a child may struggle in school as our eyes are the primary source for gathering information in learning. The visual abilities are the skills which give us the power or the means to take in information through our eyes. The visual sensory system is composed of the following categories: - Visual Acuity – The sharpness or clearness of sight. - Binocular Skills – These are neuromuscular abilities controlled by the muscles inside and outside the eye networking with the brain. - Accommodation – Ability to focus in order to see clearly at different distances. - Vergence System – Eye teaming, crossing and uncrossing. - Ocular Motor – (eye tracking) Accurate and quick coordination of eye movements. - Eye Hand Coordination - Perceptual Skills – Visual information processing skills that allow the brain to organize and interpret information that is “seen”, and give it meaning. The good news is that these visual abilities are LEARNED skills. This means they can be developed and improved with optometric vision therapy. Through a series of progressive therapeutic procedures, the visual system and the visual control centers of the brain learn a new habit of how and when to respond automatically and efficiently. Vision therapy is remarkably successful in rehabilitating all types of vision impairments including, but not limited to amblyopia (lazy eye), strabismus (eye deviations in, out or up), ocular motor problems, eye teaming and focusing problems. Vision therapy is much more successful for these kinds of diagnoses then surgery or glasses alone. - People of all ages can benefit from Vision Therapy - People who are struggling academically - People who have suffered a closed head injury or from stroke. - Athletes who want a competitive edge. - People who experience eye fatigue from near work such as lawyers, secretaries etc. Patients typically come to the office once or twice weekly for 50 minutes each visit. In addition, homework is given as reinforcement of what is learned during the office therapy sessions. Commitment to the therapy program, and maintaining a schedule of weekly visits, is important in the success of the program.
Volume of three cuboids
Calculate the total volume of all cuboids whose edge lengths are in the ratio 1:2:3 and for which one of the edges is 6 cm long.
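No solution is given on the original page, so here is one reading of the problem (my interpretation): the 6 cm edge can correspond to any of the three parts of the ratio 1:2:3, which gives three different cuboids whose volumes are then summed.

# The 6 cm edge can play the role of the 1-, 2- or 3-part of the ratio 1:2:3,
# giving three cuboids; sum their volumes.

edge = 6
total = 0
for part in (1, 2, 3):
    unit = edge // part               # 6 is divisible by 1, 2 and 3
    a, b, c = unit, unit * 2, unit * 3
    total += a * b * c

print(total)                          # 1296 + 162 + 48 = 1506 (cubic centimetres)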
The thermocline is the transition layer between the mixed layer at the surface and the deep water layer. The definitions of these layers are based on temperature. The mixed layer is near the surface, where the temperature is roughly that of the surface water. In the thermocline, the temperature decreases rapidly from the mixed layer temperature to the much colder deep water temperature. The mixed layer and the deep water layer are relatively uniform in temperature, while the thermocline represents the transition zone between the two.
Understanding the thermocline matters if you want to find trout when water temperatures soar. The thermocline is a magnet for trout in the summer. It provides a sanctuary where trout find cooler water and oxygen. Trout are most comfortable in water temperatures between 50 and 65 degrees Fahrenheit, and mortality rates for trout caught in temperatures above 65 degrees increase sharply. Although not part of the thermocline definition, oxygen is an important consideration for trout: they cannot live with less than 5 parts per million. With knowledge of the thermocline and oxygen levels we can find where the fish are, and with that understanding we can make better decisions about when it is safe to fish and when to limit fishing.
Data for Oxygen/Temperature at Cedar
The thermocline layer is bounded on the top and bottom by temperature. The livable water for trout is determined by temperature at the upper limit, while oxygen saturation usually sets the lower limit. Trout will swim above and below the thermocline layer to feed. In the early morning and late evening, trout may move into the danger zone near the surface to feed; however, they will return quickly to the safe layer. Bounded on the top, bottom and left by temperature, and on the right where the temperature and oxygen curves cross, the green box shows the limits of the comfort zone. In the graph of the July data, the sweet spot is roughly between 12 and 21 feet deep. This is where the trout are.
Measurements are taken several times a year by John Kornegay using equipment that measures water temperature and oxygen levels at three-foot intervals. By locating the thermocline when it begins to form and tracking it through the season, we can manage our fishery to best advantage. When surface temperatures become lethal we can selectively restrict fishing, and when they moderate these restrictions are relaxed.
In July, the surface temperatures in our quarries are lethal should a trout remain there. Fish are holding somewhere between 12 and 21 feet deep, only moving into the danger zone to feed and then quickly returning to the comfort zone. Fish caught at this time of year are subject to more stress. Larger fish fight hard and do not recover as quickly, which leads to higher mortality rates. Using this information, and drawing on previous years' experience, John and Owen Mitchell (Fisheries Chair) made the recommendation, which was accepted by the Board, to close Cedar to all fishing to protect our trophy population. Although Pine and Birch have similar thermocline profiles, they were not closed: the fish are smaller and have easier access to deep water when released, so it was decided to let them remain open for fishing. Measurements and observations continue throughout the summer to make sure fishing in Birch and Pine is safe and to help determine when it is safe to reopen Cedar.
The most recent measurements, taken in August, show the water continuing to get warmer even though we have had plenty of rain and some cool nights. Last year the water temperatures were much warmer and water levels were at drought levels. With accurate data and our experience, we can better balance the need to maintain a healthy fishery with the opportunity to continue fishing. We are fortunate to have a very robust thermocline layer compared to other lakes in Connecticut, where it may be only 2 to 3 feet thick. We don't know why, but aren't we lucky?
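The "green box" logic described in the article can be written down directly: keep the depths at which the temperature sits in the trout comfort range and the dissolved oxygen stays at or above 5 ppm. In the sketch below only the thresholds come from the article; the profile readings are invented sample data.

# Find the depth band where temperature and oxygen are both acceptable for trout.
# Thresholds follow the article (50-65 F, >= 5 ppm); the profile data are made up.

profile = [
    # (depth_ft, temp_f, oxygen_ppm)
    (3, 78, 8.5), (6, 75, 8.2), (9, 70, 7.8),
    (12, 64, 7.0), (15, 60, 6.4), (18, 56, 5.8),
    (21, 52, 5.0), (24, 48, 4.2), (27, 45, 3.0),
]

comfort = [d for d, t, o in profile if 50 <= t <= 65 and o >= 5]
print(f"comfort zone roughly {min(comfort)}-{max(comfort)} ft")  # 12-21 ft for this sample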
Some like it, others hate it and many are afraid of the lambda operator. We are confident that you will like it when you have finished this chapter of our tutorial. If not, you can learn all about "List Comprehensions", Guido van Rossum's preferred way to do it, because he doesn't like lambda, map, filter and reduce either.
The lambda operator or lambda function is a way to create small anonymous functions, i.e. functions without a name. These are throw-away functions: they are needed only where they have been created. Lambda functions are mainly used in combination with the functions filter(), map() and reduce(). The lambda feature was added to Python due to demand from Lisp programmers.
The general syntax of a lambda function is quite simple:
lambda argument_list: expression
The argument list consists of a comma-separated list of arguments, and the expression is an arithmetic expression using these arguments. You can assign the function to a variable to give it a name. The following example of a lambda function returns the sum of its two arguments:
>>> f = lambda x, y : x + y
The advantage of the lambda operator can be seen when it is used in combination with the map() function. map() is a function with two arguments:
r = map(func, seq)
The first argument func is the name of a function and the second a sequence (e.g. a list) seq. map() applies the function func to all the elements of the sequence seq. In Python 2 it returns a new list with the elements changed by func; in Python 3 it returns an iterator that yields them. Without lambda we would have to define and name helper functions such as fahrenheit() and celsius() for a temperature conversion; with lambda the conversion can be written inline. The interactive session starts from the list below (the full session is reconstructed in the sketch that follows this passage):
>>> Celsius = [39.2, 36.5, 37.3, 37.8]
map() can be applied to more than one list. The lists have to have the same length. map() applies its lambda function to the elements of the argument lists, i.e. first to the elements with index 0, then to the elements with index 1, and so on until the last index is reached:
>>> a = [1,2,3,4]
In that example the parameter x gets its values from the list a, while y gets its values from a second list b and z from a third list c (b and c also appear in the sketch below).
The function filter(function, list) offers an elegant way to filter out all the elements of a list for which the function returns True. The function filter(f, l) needs a function f as its first argument. f must return a Boolean value, i.e. either True or False. It is applied to every element of the list l, and only when f returns True is the element included in the result. The example starts from this list of Fibonacci numbers; the complete session is also in the sketch below:
>>> fib = [0,1,1,2,3,5,8,13,21,34,55]
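The interactive sessions above arrive truncated in this copy of the tutorial, so here is a minimal Python 3 reconstruction of the map() and filter() examples. It is a sketch, not the original sessions: the Fahrenheit formula, the companion lists b and c and the "keep the odd numbers" predicate are assumptions based on what the surrounding prose describes, and list() is used because map() and filter() return iterators in Python 3.

# Reconstruction of the truncated map()/filter() sessions (Python 3 sketch).
# The values of b and c and the filter predicate are assumed, not from the original.

Celsius = [39.2, 36.5, 37.3, 37.8]
Fahrenheit = list(map(lambda x: (9 / 5) * x + 32, Celsius))
print(Fahrenheit)                                     # approximately [102.56, 97.7, 99.14, 100.04]

a = [1, 2, 3, 4]
b = [17, 12, 11, 10]                                  # assumed example values
c = [-1, -4, 5, 9]                                    # assumed example values
print(list(map(lambda x, y: x + y, a, b)))            # [18, 14, 14, 14]
print(list(map(lambda x, y, z: x + y + z, a, b, c)))  # [17, 10, 19, 23]

fib = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
print(list(filter(lambda x: x % 2, fib)))             # odd values: [1, 1, 3, 5, 13, 21, 55]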
The function reduce(func, seq) continually applies the function func() to the sequence seq and returns a single value. If seq = [ s1, s2, s3, ... , sn ], calling reduce(func, seq) works like this:
- At first, the first two elements of seq are applied to func, i.e. func(s1, s2). The list on which reduce() works now looks like this: [ func(s1, s2), s3, ... , sn ]
- In the next step, func is applied to the previous result and the third element of the list, i.e. func(func(s1, s2), s3). The list now looks like this: [ func(func(s1, s2), s3), ... , sn ]
- Continue like this until just one element is left; that element is returned as the result of reduce().
We illustrate this process in the following example:
>>> reduce(lambda x, y: x + y, [47, 11, 42, 13])
113
(The original page shows a diagram of the intermediate steps of this calculation: ((47 + 11) + 42) + 13.)
Determining the maximum of a list of numerical values by using reduce:
>>> f = lambda a, b: a if (a > b) else b
Calculating the sum of the numbers from 1 to 100:
>>> reduce(lambda x, y: x + y, range(1, 101))
5050
Note that in Python 3 reduce() has been moved into the functools module, so it has to be imported before use; a sketch follows below.
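Since reduce() is no longer a built-in in Python 3, here is a short sketch of the three examples above using functools.reduce. The input list for the maximum example is an assumption, because the original session only shows the lambda.

# Python 3 versions of the reduce() examples above; reduce lives in functools.
from functools import reduce

print(reduce(lambda x, y: x + y, [47, 11, 42, 13]))   # 113

f = lambda a, b: a if (a > b) else b                   # pairwise maximum
print(reduce(f, [47, 11, 42, 102, 13]))                # 102 (input list assumed)

print(reduce(lambda x, y: x + y, range(1, 101)))       # 5050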
The Court System, written and illustrated by Tim Egan
Phonics: Long Vowels a, e, i, o, u. The long vowel pronunciation is the same as the letter name. Words with the VCe pattern.
Why are courts an important part of our government?
Good readers read words in groups instead of one at a time to make their reading sound like natural speech. They also pause after groups of words that go together. Punctuation, including commas and end marks, can help readers know when to pause.
Target Skill: Conclusions. When we read, we can draw conclusions based on story details. A conclusion is a smart guess about something the author does not say directly. Good readers use what they know about real life and clues the author gives to draw conclusions about characters and events. We can use an inference map to figure out things not stated directly.
Target Skill: Author's Word Choice. Target Strategy: Infer/Predict. Inferring is similar to drawing conclusions. When readers infer, they use what they know to figure out things that the author does not tell in the text.
- Practice Drawing Conclusions
- Practice Drawing Conclusions 2
- Drawing Conclusions Board Game
- Drawing Conclusions Video
- Making Inferences
- Partly Cloudy
- Drawing Conclusions & Inferences
- Rags to Riches
- Battleship Inferences
Target Vocabulary
convinced - made someone believe or agree to something
guilty - having done something wrong
pointed - used a finger to show where something was
honest - truthful
trial - a meeting in court to decide if someone has broken the law
murmur - the sound of people speaking very softly
jury - the group of people who make the decision in a trial
stand - the place where a witness in a trial sits while being questioned
A digraph is two letters that make just one sound. A consonant digraph is two adjacent consonants that produce one sound. s and h are two consonants that come together to form a new sound, the /sh/ sound. I usually tell children that when ‘s’ and ‘h’ come together, they form a new sound, /sh/. Today, I will be sharing the second pack for digraph sh. You can find Pack 1 of sh sound words by clicking here. Pack 1 of sh sound has easier words so please attempt those before trying this pack. Read the sh words and highlight digraph ‘sh.’ I have used sh words with consonant blends like bl, cl, dr, fr, sm, etc. A cut and paste, picture sort worksheet for -ash words. Color the pictures and cut them. Read the words and stick the picture under it. A cut and paste, picture sort worksheet of short vowel words with the sh sound. Cut the pictures. Read the words and stick the pictures under the words. Read the words in the word bank. Identify the pictures and write the words under them. In the above sh sentences with pictures worksheet, the words are scrambled. Unscramble the words to form a logical sentence and write it using basic punctuation. The cut and paste ‘word puzzle’ worksheet is kids’ favorite. There are six pictures that need to be cut along with the letters. Cut along the dotted line too. Now give your child three tiles for the first word and let him/her form the picture and read the word under it. I make the kids stick word puzzles in their notebooks. We stick one puzzle and write a simple sentence containing that word. I have a pack of worksheets for digraph sentences. Do check it. The last is a word sort worksheet. There are words that begin with sh and there are words that end with sh. Read the words and sort them. Paste them in the correct box. This pack is free for subscribers to download.
Critical periods are short windows which open and close, in children worldwide, during which the library of emotional, social and language wiring of the neurons takes place. We have a pediatrician in our midst who shares her research about critical periods with us. Her pediatrician and scientist colleagues are proving the principle that the brain goes through certain stages of openness. Critical Periods: When Babies and Children Learn Adults often joke that their brains are too crowded and that they don’t have enough “computer memory” to learn certain concepts easily. While we can learn through practice (or neurons firing together and wiring together), evidence proves that there are certain times of life called “critical periods.” These are times when windows open, allowing information to soak in. They then close. Optimal learning takes place as the brain soaks up exterior stimuli. A human baby, swaddled and cuddled, takes in love and nurturing with its mother’s milk. Imagine the contentment as the child hears reassuring words while suckling. Snuggling against its mother, the baby feels at one with her and learns to trust others. Emotional and social “wiring” is already taking place as the baby learns to associate these positive words and feelings of being loved with feeding time in the first weeks of life. If the parent cuddles and soothes the child, responding to cries with consistency and love and warmth, the child will expect that same kind of love from family members and future mates. This first oral period, a critical nurturing period, may be the most important period in the child’s early brain growth and development.For a baby the first oral period, a critical nurturing period, may be the most important period in the child’s life Click To Tweet Imagine the stages of infancy and childhood as windows of time, during which the nervous system develops. The newborn stage is an excellent example of a window of time in which new brain systems and maps develop with the help of stimulation from the environment. Critical periods are a vital part of child development. This stimulation includes a mother’s voice, siblings’ chatter, and so on. The baby also takes in the environment by feeling, looking, hearing, smelling, and sucking, and its “library” of information is growing and being established for the rest of its life. When these stages proceed normally, large-scale development takes place in infants’ superdense brains, which are one quarter the size of adults’ brains. Although babies’ brains are slow to respond to changes in stimuli at first, babies can process information at more rapid rates over the first few months. Brain plasticity is off the charts during this time, with an amazing two million new synapse connections per second.Baby brain plasticity is off the charts with an amazing two million new synapse connections per second. Click To Tweet Peak performance is reached as infants focus their attention (thanks to the nucleus basalis) on their mother’s face, a nursery rhyme, or a ball bouncing across the floor. Key connections or bonds in the brain are strengthened. The parents are still an enormous influence here. Think of the child’s brain changing like moldable plastic used in arts and crafts projects, finding its shape and laying down major neuronal connections. Once those connections or bonds solidify, the brain stabilizes and this critical mental period ends. We watch videos of children in an emotional critical period lasting from ten to eighteen months. 
During this crucial period, the emotional command center of the frontal lobe develops in order to allow children to form family ties and friendships. The newly developed circuits in the neocortex and limbic systems allow the children in the videos to read Mom’s facial expression and control their own emotions (anger, tantrums, and frustration). As adults, we will carry that emotional understanding and diagnose our own emotions, as well as read other people’s. Babies, infants, and children traverse these critical chronobiological periods worldwide as their brains and bodies undergo periods of sensitization to new impulses and incitements. These are all critical periods of intense plastic development.Infants traverse these critical chronobiological periods worldwide as their brains undergo intense plastic development. Click To Tweet Take the phenomenon of language, for example. One of the most unique characteristics of humans is that we can actually express ourselves. This process receives a jumpstart in childhood. As we listen to a tape of children speaking words or adults talking to children, we learn that somewhere between birth and the age of five, children progress from reading the movements of lips in infancy to registering vocabulary and grammar, and sometime between six months and a year, spoken language prepares the baby’s brain to be multilingual. The children learn the word “ball” as well as the order of the words “I play ball.” This is a phrase spoken by one of the children on the tape. That’s the English version. The French version is “Je joue au ballon,” and the German version is “Ich spiele Ball.” Along with the particular language spoken, children learn pronunciation, or the way that the tongue, mouth, and so on work together to produce the words and accents. In the parent voices on tape, we hear that New Yorkers, Texans, Cajuns, Midwesterners, Londoners, and Scots all speak English in diverse and easily distinguishable accents. Not only that, nonnative English speakers have their own accents such as those of native speakers of Mexican Spanish or Russian. Children can detect these. During critical periods for language like the one lasting from birth to three years of age, the children on our tape can actually learn two or more languages. We stop and ponder why this is so. Why should babies be able to read lips, and why does the freshness of their minds allow them to learn two languages (such as Spanish and English or French and German) more easily? If children are exposed to bilingual teaching (one language at home and another in the classroom for example), they can speak both languages equally well due to the help of the environment and their teachers’ nurturing. Let’s say a child learns and does his or her homework in French because he or she is living in France, and yet English is spoken in the home. After the critical period, however, the children on our tape no longer learn English, German, or any other language with the same ease. We watch a time-lapse video of a young learner’s brain map as neurons dynamically integrate two languages. We then see the window closing as the neurons are set in a more permanent pattern. We simultaneously hear the young learner speaking French with a French accent and English with an English accent. Both the visual and voice show us that babies pick up languages naturally, but with age, the learning process takes more effort. Babies are aware of their mother’s voice in the womb, and they can recognize her sound at four days old. 
They remember the rocking and constant “shhhh” sounds inside the womb. Despite their large brains, however, their neuronal structure is like a sensory library waiting to be filled, and their brain maps are only blueprints. Why is this so? We think about our early years and our children’s early years, when we have a fuzzy picture of the world. We wonder why critical periods are so plastic and flexible that each new experience, such as babies hearing language, playing a game with Mommy and Daddy, or learning to recognize their faces, changes brain structure and makes a baby’s or young child’s brain sensitive. Because of this sensitivity, babies and toddlers can pick up new sounds and words effortlessly during these first years, which are known as the “language critical period.” When a mother or father says, “Let’s play blocks” or “Show me the red block” during this period, the mere exposure to sentences, sounds, and words allows children to lock those words into their brains, which actually changes the wiring of the brain. In essence, the library is filling up, but it is constantly open to receive new material and continuously under renovation until age five.

Interestingly, animals and birds such as newborn kittens and ducklings also have a critical period in which brain pathways rapidly form. We observe baby kittens at four weeks of age and ducklings in the first week of life. From three to eight weeks, those baby kittens will develop their vision. The ducklings are already feeding themselves, and they will learn to fly at an age of somewhere between three and ten weeks. A four-day-old baby antelope runs through the lab. In most animals, these critical periods close forever after just a few months, and the animals have little stimulus beyond their routine to go on. The ducklings will fly, migrate for the winter, swim, and feed on schedule, and the antelope will eat grass and migrate according to its innate brain makeup. We wonder why this is so, but let’s continue with our focus on humans.

This post is an excerpt from chapter 9.7.1 of Inventory of the Universe.

The Explanation Blog Bonus

This video, a TED Talk by Patricia Kuhl, is vital to understanding the critical period for learning language. As she says, this is a worldwide phenomenon, and as you’ll see around the 8-minute mark, it only works with human beings: parents, rather than just audio, computers or TV. Many questions come to mind and she asks a few of them: Why is this a worldwide phenomenon? Why does it need ‘human’ participation? Are there any other critical periods for other disciplines? These are the kinds of questions for which The Explanation will give the answers.
Nutrient pollution is one of America's most widespread, costly and challenging environmental problems, and is caused by excess nitrogen and phosphorus in the air and water. Nitrogen and phosphorus are nutrients that are natural parts of aquatic ecosystems. Nitrogen is also the most abundant element in the air we breathe. Nitrogen and phosphorus support the growth of algae and aquatic plants, which provide food and habitat for fish, shellfish and smaller organisms that live in water. But when too much nitrogen and phosphorus enter the environment - usually from a wide range of human activities - the air and water can become polluted. Nutrient pollution has impacted many streams, rivers, lakes, bays and coastal waters for the past several decades, resulting in serious environmental and human health issues, and impacting the economy. Sources and Solutions Excessive nitrogen and phosphorus that washes into water bodies and is released into the air are often the direct result of human activities. The primary sources of nutrient pollution are: - Agriculture: Animal manure, excess fertilizer applied to crops and fields, and soil erosion make agriculture one of the largest sources of nitrogen and phosphorus pollution in the country. - Stormwater: When precipitation falls on our cities and towns, it runs across hard surfaces - like rooftops, sidewalks and roads - and carries pollutants, including nitrogen and phosphorus, into local waterways. - Wastewater: Our sewer and septic systems are responsible for treating large quantities of waste, and these systems do not always operate properly or remove enough nitrogen and phosphorus before discharging into waterways. - Fossil Fuels: Electric power generation, industry, transportation and agriculture have increased the amount of nitrogen in the air through use of fossil fuels. - In and Around the Home: Fertilizers, yard and pet waste, and certain soaps and detergents contain nitrogen and phosphorus, and can contribute to nutrient pollution if not properly used or disposed of. The amount of hard surfaces and type of landscaping can also increase the runoff of nitrogen and phosphorus during wet weather. What You Can Do We can all take action to reduce nutrient pollution through the choices we make around the house, with our pets, in lawn maintenance, and in transportation. Families, individuals, students and teachers can access resources online to find out more about the health of their local waterways and participate in community efforts to make their environments healthier and safer. Learn how you can help prevent nutrient pollution: Cleaning Supplies-Detergents and Soaps - Choose phosphate-free detergents, soaps, and household cleaners. - Select the proper load size for your washing machine. - Only run your clothes or dish washer when you have a full load. - Use the appropriate amount of detergent; more is not better. - Always pick up after your pet. - Avoid walking your pet near streams and other waterways. Instead, walk them in grassy areas, parks or undeveloped areas. - Inform other pet owners of why picking up pet waste is important and encourage them to do so. - Take part in a storm drain marking program in your area to help make others aware of where pet waste and other runoff goes when not disposed of properly. - Inspect your septic system annually. - Pump out your septic system regularly. (Pumping out every two to five years is recommended for a three-bedroom house with a 1,000-gallon tank; smaller tanks should be pumped more often). 
- Do not use septic system additives. There is no scientific evidence that biological and chemical additives aid or accelerate decomposition in septic tanks; some additives can in fact be detrimental to the septic system or contaminate ground water. - Do not divert storm drains or basement pumps into septic systems. - Avoid or reduce the use of your garbage disposal. Garbage disposals contribute unnecessary solids to your septic system and can also increase the frequency your tank needs to be pumped. - Don't use toilets as trash cans. Excess solids can clog your drainfield and necessitate more frequent pumping. - When installing a septic system, maintain a safe distance from drinking water sources to avoid potential contamination. Avoid areas with high water tables and shallow impermeable layers. - Plant only grass in the drain field and avoid planting trees, bushes, or other plants with extensive root systems that could damage the system's tank or pipes. - Visit EPA's Septic Smart website to learn more about how your septic system works and simple tips on how to properly maintain it. You can also find resources to launch a local septic education campaign. - Choose WaterSense labeled products which are high performing, water efficient appliances. - Use low-flow faucets, shower heads, reduced-flow toilet flushing equipment, and water-saving appliances such as dish- and clothes washers. - Repair leaking faucets, toilets and pumps. - Take short showers instead of baths and avoid letting faucets run unnecessarily. - Visit Conservation for more ideas on how to save water.
Alternative Print Alphabet Board - Capital letters The letter boards give children the opportunity to practise perfect letter formation and establish good writing habits from the start. Using the stylus promotes finger strength and pencil control, both necessary for quick, accurate writing. Tracing over the letters also increases children’s familiarity with the sounds connected to the letters. This is the start of ‘sounding out’, a critical skill for reading and spelling.
If you work with lots of data you will come across cases where you need something to happen if a cell is blank, or the opposite, i.e. if not blank then calculate in Excel. Perhaps if the Excel cell is blank the formula must result in a zero, or only cells that are not blank should be calculated.

If not blank then calculate in Excel

What you want to do after finding a blank or not blank cell can be handled via a normal IF function. The key to this is how to get Excel to check for a blank.

“” for Excel to differentiate between blank and non blank

The first option is to use two inverted commas DIRECTLY next to each other, i.e. “”. Notice NO space in between. Remember in Excel this ( “” ) and this (” “) are different. The first one is blank. The second one contains a space, and although humans see it as blank, Excel sees it as a not blank cell. So to test if a cell is not blank, you compare the cell to “”, for example A1<>”” inside your IF function (see the example formulas below).

There is also a useful function in Excel called ISBLANK that does what it says. It looks at a cell and says TRUE if it IS BLANK and FALSE if it IS not BLANK. You can use this within your IF function, so the test becomes ISBLANK(A1), or NOT(ISBLANK(A1)) when you want the not blank case.

Cell is blank, BUT it has a ‘ in the cell

Some software exports blank cells with a ‘ in the cell (we have seen this with some Pastel systems). When you look at the cell it seems blank, but if you look in the formula bar you can see the ‘ (the ‘ tells Excel to treat whatever comes after it as text, which is useful if you enter a phone number and want to keep the leading zeros: in the formula bar you will see ‘011 849 1234 but in the cell you will see 011 849 1234). In this case ISBLANK does not seem to work, but the “” comparison does. For safety, perhaps you can run another check using the LEN function. LEN tells you the LENgth of the contents of a cell, so if we used =LEN(A1) it would return 0 if the cell is blank!

Want to learn more about Microsoft Excel? If you prefer attending a course and live in South Africa look at the Johannesburg MS Excel 3 Day Advanced Course or the Cape Town MS Excel 3 Day Advanced training course. If you prefer online learning or live outside South Africa, look at our online MS Excel training courses.
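To make these checks concrete, here are a few example formulas. This is a minimal sketch: the cell reference A1 and the placeholder calculation A1*10 are assumptions used purely for illustration, so substitute your own cell reference and calculation.

- =IF(A1<>"", A1*10, "") calculates only when A1 is not blank, otherwise it returns a blank.
- =IF(A1="", 0, A1*10) returns a zero when A1 is blank.
- =IF(ISBLANK(A1), 0, A1*10) is the same idea written with ISBLANK (but remember ISBLANK treats a cell containing only a ' as not blank).
- =IF(LEN(A1)=0, 0, A1*10) uses the LEN safety net, which also catches those ' cells because their text length is 0.

If your cells might contain nothing but spaces, a variation such as =IF(LEN(TRIM(A1))=0, 0, A1*10) treats those cells as blank as well, because TRIM strips the spaces before LEN measures the length.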
2019 is a memorable year. It is the last 365 days before the start of a new decade. It is also the 30th anniversary of the Berlin Wall coming down on November 9, 1989, and 2019 is the 50th anniversary of the first man, Neil Armstrong, walking on the Moon on July 20th, 1969. The commemoration of the Moon landing is a time to remember the many events and people it took to make history. As we prepare to make new history in 2020, let’s take a look back at spacecraft Apollo 11’s trip, the first lunar landing, and some of the lesser-known facts about them. - The mission was titled the “Apollo 11” mission and was NASA’s fifth Apollo mission. Astronauts Neil Armstrong, Michael Collins, and Edwin (Buzz) Aldrin, Jr. left Cape Kennedy, Florida on July 16, 1969. Today, Cape Kennedy is known as Cape Canaveral. - Aldrin and Armstrong were the only ones to actually walk on the Moon. They landed in the Lunar Module called the Eagle. The two walked around for three hours, conducted experiments, and collected Moon dirt and rocks. Fun fact: researchers think the rocks were about 3.7 billion years old. As for Collins, he stayed in orbit, did experiments, and took pictures. - Plans for the Apollo program had started at the beginning of the decade. In 1960, NASA announced a plan to send a small crew to orbit the Moon. A year later, President Kennedy made a public speech and commitment to land a man on the Moon before the end of the decade. Since then, Kennedy’s speech has become famous with the memorable line “We choose to go to the Moon in this decade and do the other things, not because they are easy, but because they are hard.” - The launch of Apollo 11 was watched by millions of people on television. Yet, supporters and VIP spectators were able to see the historic moment in person, just 3.5 miles away from the launch pad. Viewers included former President Lyndon B. Johnson and then Vice President Spiro Agnew. - President Richard Nixon was prepared for anything that might have happened on July 16th. He had two speeches ready for the public; one for the victory of the mission and another to remember the astronauts and their hard work if the mission failed. - A tie to North Carolina history was on board Apollo 11. Pieces of the Wright Brothers’ first airplane had been given to Neil Armstrong by the Air Force. After the trip, Armstrong was able to keep half of the pieces. This little gesture helped make the history of flight come full circle. - Just like the launch, the landing on the Moon was watched by an estimated 600 million people. - On July 20th, 1969, at 10:56 pm Eastern time, Neil Armstrong took the first step that became known as “One small step for man, one giant leap for mankind.” Even Disneyland paused its festivities to show the live broadcast on a stage in Tomorrowland. - Communication between space and Earth may have been a little fuzzy. Armstrong’s quote, stated above, may have been misheard. The correct quote is, “That’s one small step for a man, one giant leap for mankind.” In 2006, technology uncovered the “a,” which is said not to have been heard due to radio static. - After coming back to Earth on July 24th, Armstrong, Collins, and Aldrin had to be quarantined until August 10th. They were placed in a mobile quarantine unit and then transported to the NASA Lunar Receiving Laboratory at Houston’s Johnson Space Center. - It is said that it took approximately 400,000 scientists, engineers, and technicians to make the Apollo 11 mission a success. 
However, it wasn’t solely men who worked on the mission. Margaret Hamilton was only 33 years old when she wrote the code for the Apollo Guidance Computer, the code that sent Apollo 11 to the Moon. She is also known for coining the term “software engineering.” Another well-known name is mathematician Katherine Johnson. She calculated how the Lunar Module would return to the main spaceship after landing and created backup navigation charts in case the astronauts’ systems failed in space. Lastly, Christine Darden was a “human computer” during the Apollo era. The Space Race was a memorable time in our nation’s history. Past and future generations will always remember and understand the importance of the Apollo 11 mission and the first man on the moon on July 20th, 1969.
Now that this school term is well underway, many of our calls in the last few weeks have been from parents concerned that their child is exhibiting behaviours that are concerning creche, Kindergarten and of course the parents once they are made aware. A very common behaviour we are asked about is biting. Biting is a very normal exploratory phase for most children but for some if they are biting more regularly, it can be distressing for all concerned. This week's blog aims to shed some light on this behaviour. WHY DOES MY CHILD BITE? For most children aged between 0 and 3; biting is a natural continuation of exploration with their mouths. Whether toys, food or fingers; they are just exploring the sensation of having something in their mouths and until all their teeth develop, this problem is not upsetting or painful. For children aged 2 -3 who have all their teeth and are biting other children, the causes can be varied. Depending on their level of speech, it may have become a communication tool for them when they are unable to describe the emotions that they are feeling whether being frustrated; overwhelmed or even over excited. For some children it is a way of exploring their environment and the boundaries within it - they simply want to see what will happen if they bite. For others, it becomes their automatic response for not getting something that they want eg - when someone takes a toy they were playing with. Biting almost always allows the child to receive attention even when it is negative and they enjoy the idea of getting a reaction! Another common cause is that they are looking for stimulation in their mouth. They are searching for the feeling of chewing on something. This can often be associated with teething or when children are particularly congested or have excess saliva. WHAT CAN I DO TO PREVENT MY CHILD FROM BITING? It is important to first understand what is causing your child to bite. In order to do this, it is helpful to keep a diary of all biting incidents and try to recognise patterns. The biggest tip to helping your child not to bite is to try where possible to avoid the situations that lead to them biting: eg.. If you notice that they often bite when a particular toy is taken from them; don't allow that toy to go to school with them or be played with by other children on a play date. Distraction is always a great idea when you are in situations that are overwhelming or particularly exciting for your child. Give them age appropriate tasks to do that will keep them focused and allow them to channel their energy in a more positive direction. If you notice that your child is becoming anxious and you are concerned they may bite or you have seen the warning signs, it is a good idea to have a place in your house where they can go and sit and relax. This could be a corner with a beanbag and their favourite books but should be somewhere quiet and a place where they can sit calmly for a little while until the excitement/ anxiety/ feeling of being overwhelmed has passed. When children are teething or if your child seems to put everything in their mouth, it can help to provide things specially for that purpose. Give them extra snacks to chew on or when teething allow them to suck on something instead of someone!! If your child has a small vocabulary, try to work with them on how to express their feelings in words rather than actions. 
Finally, if a situation arises where your child does bite, once you have told them that biting is not acceptable, your attention should be on the person they bit. Even negative attention is enjoyable for a toddler so limit the amount that they are given. This includes them being labelled as a biter so try to avoid that if you can and ask the creche/ school to avoid it as well. The good news is that for most children this is a very temporary stage and one that they grow out of by 3 1/2!! We hope some of these strategies help you and your child's main carers to be reassured.
Avoid Sexist Language

Sexist language is language that unnecessarily identifies gender. It can take several forms:
- a pronoun that denotes a single sex when the information being conveyed pertains equally to either or both sexes
- Ex. Every student should have his notebook with him in class. (appropriate at an all-male school)
- a job title or noun that needlessly specifies sex
- Ex. fireman, mailman, policeman
- a pronoun that rests on a stereotyped assumption about who does a particular job
- Ex. The nurse awoke her patient at five a.m.
- the use of "man" to refer to all people
- Ex. early man used a system of gestures to communicate

To Avoid Sexist Language

Although it may often seem that avoiding sexist language can lead one into using awkward or grating constructions, it is also possible to use gender-neutral language gracefully and unobtrusively. For example:

When using pronouns, you have several choices. Pick the one that seems most natural in context:

1. Change singular nouns to plurals and use a gender-neutral pronoun, or try to avoid the pronoun entirely:
Instead of: Each student must have his notebook with him in class.
Use: All students must have notebooks with them in class.
Or: A senator who cannot serve a full term of office…

2. If you think you must use a singular adjective like “each” or “every,” try to avoid using a pronoun:
Instead of: Each student must hand in his homework on Thursday.
Use: Each student must hand in the assigned homework on Thursday.

3. When using a job title, try to eliminate the pronoun:
Instead of: A truck driver should plan his route carefully.
Use: A truck driver should plan the travel route carefully.

4. When eliminating the pronoun seems unavoidable, you have several options:
a. Use both male and female pronouns:
Instead of: A student should meet with his advisor.
Use: A student should meet with his or her advisor.
b. Alternate male and female pronouns throughout the paper (but this can be tricky, since it can make the paper confusing).
c. Choose a single-sex pronoun and use it consistently throughout the paper. But be especially careful not to do this in a way that will perpetuate stereotypes. For example, it might be unwise to use “he” and “him” when talking about professions stereotypically associated with males; e.g., engineering.
d. Be careful about using constructions like his/her, he/she. Many readers find these awkward and distracting. Check with your instructors for their preferences (or check with your instructor for his or her preference).

Instead of sex-linked titles, try neutral titles:
- Fireman - fireperson is awkward, but firefighter is not
- Policeman - policeperson sounds silly, but police officer sounds natural
- Mailman - mailperson seems awkward, postal worker does not
- Cleaning woman - house cleaner, office cleaner, custodian are all preferable
- Poetess - poet can be either a woman or a man and does not sound as if a woman poet is so odd that she needs a special appellation

1. Avoid using “man” as a noun when you are really referring to men and women:
Ex. Early man used a system of gestures to communicate
Rather, say: Early humans used…
Or: Early men and women…

2. Although alternative spellings for words referring to gender are favored by some (for example: womyn, herstory), in general and academic work these will be a problem. We suggest using traditional spelling or checking with your instructor.
- Timeline: human rights and the United States, covering protections such as the presumption of innocence in a criminal trial, freedom of movement, and the Civil Rights Act of 1964.
- Black civil rights movement timeline: a chronological list of the major events of the civil rights movement.
- American civil rights movement timeline, 1862-2008: major events related to the civil rights movement.
- States' rights, one of the causes of the Civil War: a term used to describe the ongoing struggle over political power in the United States between the federal government and individual states, as broadly outlined in the Tenth Amendment.
- American anti-slavery and civil rights timeline: the "Letter from Birmingham Jail" inspires a growing national civil rights movement.
- Civil rights in the United States: a timeline made with Timetoast's free interactive timeline-making software.
- Three years after a first-of-its-kind study found that more than half of the states fail at teaching the civil rights movement to students, a new report released by the SPLC's Teaching Tolerance project shows that coverage of the movement in US classrooms remains woefully inadequate.
- United States of America timeline: the United States declared war on Britain over interference with maritime shipping and expansion.
- American civil rights movement: mid-20th-century mass protest movement against racial segregation and discrimination in the United States.
- The United States civil rights movement was a political movement for equality before the law; includes an extensive timeline.
- Important dates, people and events in the civil rights movement timeline for kids, for United States history, homework and schools.
- Lyndon Johnson and civil rights: civil rights as a national issue in the Foreign Relations of the United States series.
- US history and historical documents: the history of the United States is vast and complex, and the African-American civil rights movement is part of it.
- Civil rights in the United States research: a documentary history of the modern civil rights movement, with timeline.
- A collection of genealogical profiles related to the United States civil rights movement.
- Civil rights movement timeline: 1860, Abraham Lincoln elected president, signaling the secession of southern states; 1863, President Lincoln issues the Emancipation Proclamation.
- The African-American civil rights movement was a group of social movements in the United States. Their goal was to gain equal rights for African-American people. The word African-American was not used at the time, so the movement was usually called the civil rights movement.
- Timeline: major events of the 1960s, 1960-1963, including events providing young blacks with a place in the civil rights movement and the United States invasion of Cuba at the Bay of Pigs.
- A timeline of important events in the struggle for women's liberation in the United States.
- America: History and Life: an index to scholarly research in US history; includes abstracts, as well as a documentation field that designates primary sources used in the research.
- Key moments in the civil rights movement, including Supreme Court cases, legislation and more.
- Civil rights movement timeline news: six unsung heroines of the civil rights movement; why people rioted after Martin Luther King, Jr.'s assassination.
- Also offers an interactive civil rights movement timeline.
- Timeline of the American civil rights movement: a timeline of events of the civil rights movement in the United States.
Marine Wildlife and Harmful Trash - Grade Level: - Fourth Grade-Eighth Grade - Biology: Animals, Earth Science, Environment, Language Arts, Wildlife Biology, Wildlife Management - 45 minutes - Group Size: - Up to 36 (6-12 breakout groups) - National/State Standards: - Standard 7: Students examine organisms’ structures and functions for life processes, including growth and reproduction. (ASDOE Elementary Science Standards: Grade 4-8, pp. 41-73) OverviewStudents listen to descriptions of marine wildlife and identify marine debris items that could harm them. Students perform an experiment in which they wrap a rubber band around their fingers and across the back of their hand and try to disentangle them. As a class, students discuss their thoughts and reactions and relate to real animals. Students will be able to: 1. Define the vocabulary terms entanglement and ingestion. 2. Learn about the characteristics of marine wildlife that can make them susceptible to the hazards of marine debris. 3. Learn about entanglement and ingestion by experiencing what it might be like to be a marine animal trapped in debris. 4. Identify what marine trash items can harm marine wildlife. Marine debris can have serious impacts on both marine wildlife and humans. Debris can entangle, maim, and even drown many wildlife species. Animals can also mistake some debris for food; once ingested, these materials can cause starvation and/ or choking. Although almost any species can be harmed by marine debris, certain species – including seals, sea lions, seabirds and sea turtles – are more susceptible to its dangers than others. For humans, marine debris can be a health and safety hazard. The impacts of marine debris can also result in economic hardships for coastal communities related to tourism and the fishing industry. The two primary threats that marine debris poses to marine wildlife are entanglement and ingestion. Entanglement results when an animal becomes encircled or ensnared by debris. Some entanglement occurs when the animal is attracted to the debris as part of its normal behavior or out of curiosity. For example, an animal may try to play with a piece of marine debris or use it for shelter. Some animals, such as seabirds, may see fish caught in a net as a source of food, and become entangled while going after the fish. Entanglement is harmful to wildlife for several reasons: a. It can cause wounds that can lead to infections or loss of limbs. b. It may cause strangulation, choking, or suffocation. c. It can impair an animal's ability to swim, which may lead to drowning, or make it difficult for the animal to move, find food, and escape from predators. Ingestion occurs when an animal swallows marine debris. Ingestion sometimes happens accidentally, but generally animals ingest debris because it looks like food. For example, a floating plastic baggie can look like a jellyfish, and resin pellets (i.e., small, round pellets that are the raw form of plastic, which are melted and used to form plastic products) can resemble fish eggs. Ingestion can lead to choking, starvation or malnutrition if the ingested items block the intestinal tract and prevent digestion, or accumulate in the digestive tract and make the animal feel "full," lessening its desire to feed. Ingestion of sharp objects can damage the digestive tract or stomach lining and cause infection or pain. Ingested items may also block air passages and prevent breathing, causing the animal to suffocate. 
Marine mammals, sea turtles, birds, fish, and crustaceans all have been affected by marine debris through entanglement or ingestion. Unfortunately, many of the species most vulnerable to the impacts of marine debris are endangered or threatened. Endangered species are plants or animals that are in immediate danger of becoming extinct because their population levels are so low. Threatened species are plants or animals that may become endangered in the near future.

Materials
1. Small- to medium-sized (thin) rubber band for each student
2. Foamed plastic (Styrofoam) cup/plate/bowl pieces (one of each)
3. Fishing line or rope
4. Six-pack ring
5. Plastic shopping bag
6. Chalkboard or whiteboard

Handouts & Worksheets
1. "Animal Tales"
2. "Animal Entanglement"

Introduce Inquiry Question: How does trash affect marine wildlife?

Ask: Have you ever heard the term trash? If yes, have them explain where or how they heard the term. If not, what do you think it means? Write the term on the board and explain it so that students understand its meaning. Explain that trash is materials that have been made or used by people and discarded. Tell students that marine trash can have a serious impact on marine wildlife. Trash can entangle, harm, and even drown many wildlife species. Wildlife can also mistake some trash for food. When trash is ingested it can cause starvation and/or choking.

Ask: Have you ever heard the term entanglement? If yes, have them explain where and how they heard the term. If not, what do you think it means? Write the term on the board and explain it so that students understand its meaning. Explain to students that entanglement is the looping of a piece of trash around part of an animal’s body. Entanglement can impair swimming and feeding, cause suffocation, decrease the ability to elude predators, and cause open wounds.

Ask: Have you ever heard the term ingestion? If yes, have them explain where and how they heard the term. If not, what do you think it means? Write the term on the board and explain it so that students understand its meaning. Explain to students that ingestion is the consumption of a piece of trash by an animal. Ingestion occurs when an animal swallows marine trash. Ingestion sometimes happens accidentally, but generally animals ingest trash because it looks like food. For example, a floating plastic baggie can look like a jellyfish, and resin pellets (i.e., small, round pellets that are the raw form of plastic, which are melted and used to form plastic products) can resemble fish eggs. Ingestion can lead to choking, starvation or malnutrition if the ingested items block the intestinal tract and prevent digestion, or accumulate in the digestive tract and make the animal feel “full,” lessening its desire to feed. Ingestion of sharp objects can damage the digestive tract or stomach lining and cause infection or pain. Ingested items may also block air passages and prevent breathing, causing the animal to suffocate.

Tell the students that the two primary threats that marine trash poses to marine wildlife are entanglement and ingestion.

Prior to the activity, the teacher should place all the foamed plastic cups, plates, and bowls on the floor. Tie a ribbon to a used balloon and put it on the floor. Put the shopping bags, six-pack ring, and fishing line on the floor for the activity.

Activity 1: Harmful Trash
1. Pass out the “Animal Tales” handout to each student. Place the items of debris on the floor in the middle of the classroom and have students form a circle around the items.
Read the description of the turtle on the “Animal Tales” handout, or ask one of your students to read it to the class.
2. Choose a volunteer to be a turtle and ask him or her to go into the center of the circle and pick up an item of trash that can harm a turtle. Ask the “turtle” to tell how and why it might become injured by this piece of trash. Encourage students to think about how wildlife could become entangled in the trash or ingest it, and how animals might eat the items, mistaking the trash for food.
3. Repeat this procedure for the remainder of the animals on the handout. When finished, ask the students if they can associate any other pieces of debris with one of the animals in a way that the class has not yet discussed.
4. Explain that many species of mammals, sea turtles, birds and fish that encounter marine trash are endangered or threatened. Ask students how marine debris could pose special problems for these species. End your discussion by helping students to understand that any animal that lives in the ocean or along the coast can be affected by marine debris.

Activity 2: All Tangled Up
1. Discuss how animals need a healthy environment in which to live, just like we do. This includes a habitat that is free from pollution.
2. Distribute the rubber bands to students and have them follow the procedure below. (NOTE: As an alternative, you may want to have one or two students come up to the front of the room and perform the exercise with rubber bands as a demonstration; then include the entire class in the discussion.)
· Hold your left hand up in front of your face, with the back of your hand towards your face.
· Hold the rubber band in your right hand and hook one end of it over the little finger of your left hand.
· Hook the other end of the rubber band over the left-hand thumb. The rubber band should be taut and resting across the bottom knuckles on the back of your left hand.
· Place your right hand on the bottom of your left elbow, and keep it there.
· Try to free your hand of the rubber band without using your right hand, teeth, face or other body parts.
3. Take a look at your "Animal Entanglement" handout. While the students are struggling, ask the class to imagine that they are animals that have gotten pieces of fishing line, abandoned net or other trash wrapped around their flippers, beaks, or necks. Tell them to imagine that they are birds that are unable to eat until they are free from the trash. Ask the students the following questions:
· How would you feel after struggling like this all morning?
· How would you feel after missing breakfast?
· What would happen if you continued to miss meals and spent all of your strength fighting to get free?
· What would happen if a predator was chasing you?
Encourage students to share their thoughts and feelings about being entangled. Remind them that their experience is similar to that of a bird or other marine wildlife that becomes entangled in debris.

Conclusion with Inquiry Question: How does trash affect marine wildlife? Do your part to keep our environment clean at all times. Reduce, reuse, and recycle.
The environment is filled with plastic wastes which cannot be converted back into nature and are simply dumped in one corner of our living land area or sometimes recycled. All plastics are polymers, mostly containing carbon and hydrogen and a few other elements such as chlorine and nitrogen. Polymers are made up of small molecules called monomers, which combine to form a single large molecule called a polymer. When this long chain of monomers breaks at certain points, or when lower molecular weight fractions are formed, this is termed degradation of the polymer. This is the reverse of polymerization. If such scission of bonds occurs randomly, it is called 'random de-polymerization'. In the process of converting waste plastic into fuels, random de-polymerization is carried out in a specially designed reactor in the absence of oxygen and in the presence of coal and certain catalytic additives. The maximum reaction temperature is 350° C. There is total conversion of waste plastic into value-added fuel products. The technology takes on the responsibility of restoring the land area to the condition in which nature handed it over to us. The downstream refining of petroleum hydrocarbons from crude oil feedstock derived from waste plastics by catalytic reaction can return various aromatic hydrocarbon solvents, aliphatic hydrocarbon solvents, carbon and its by-products for industrial applications such as agrochemicals, coatings and specialty chemicals. The Polymer Energy system uses a process called catalytic pyrolysis to efficiently convert plastics to crude oil. The system provides an integrated plastic waste processing system which offers an alternative to landfill disposal, incineration, and recycling while also being a viable, economical, and environmentally responsible waste management solution. The Polymer Energy system is environmentally friendly, has low emissions and produces no hazardous waste. The plastic material can be fed into the system continuously, and the plastic waste does not need to be clean or dry prior to processing. The system enables the user to tailor the hydrocarbon mix of the final output. Municipalities the world over are seeking a cost-effective system with which they can save their land from being permanently destroyed, by setting up a dedicated facility to process the plastic waste they generate into value-added petroleum products such as diesel, furnace oil and tailor-made fractions. In this way they can reduce their carbon footprint, earn carbon credits and improve their image in terms of corporate environmental and social responsibility. This also opens up an opportunity to reduce energy costs. The product range that can be recovered from the crude includes:
- In a new article, scientists have coined the term “evidence complacency” to highlight the persistence of a culture in which, “despite availability, evidence is not sought or used to make decisions, and the impact of actions is not tested.” - This complacency can not only lead to a wastage of money, time and opportunities, but also show conservation as an unjustifiable investment, the researchers say. - Conservation practitioners say that scientists need to collaborate more with decision makers and look at evidence more broadly than just peer-reviewed studies. How do you save a species or protect a habitat? For the past few decades, scientists have been calling for an increased use of scientific evidence — carefully controlled, peer-reviewed scientific studies — to make conservation decisions. However, things don’t seem to have changed much. Despite the rise in peer-reviewed scientific evidence being generated, intuition, personal experience and anecdotes remain at the center of conservation practice, William J. Sutherland and Claire Wordley of the University of Cambridge, U.K., report in a new article published in Nature Ecology & Evolution. Concerned by this, the authors have coined the term “evidence complacency” to highlight the persistence of a culture in which, “despite availability, evidence is not sought or used to make decisions, and the impact of actions is not tested.” This complacency can not only lead to a wastage of money, time and opportunities, but also show conservation as an unjustifiable investment, the researchers say. This is worrying because conservation efforts are typically funded by taxpayers, businesses or charities, and justifying the investments is often critical. “We’ve seen an explosion in published papers in conservation science in the past few decades, both from practitioners and from academics, but it is not having a proportionate impact on conservation practice,” Wordley told Mongabay. In the U.K., for instance, several bat “gantries” (safe bridges) have been built to help bats fly over roads. These gantries are expensive, amounting to a total cost of around £1 million ($1.3 million). But when scientists looked at the effectiveness of the gantries in reducing bat mortality, they found the safe passageways to be ineffective. Bat gantries continue to be constructed in the U.K., the authors write, despite the evidence. Sutherland and Wordley suggest that evidence complacency could be stemming from a variety of reasons including conservation practitioners’ insufficient knowledge about existing evidence, inadequate training in using evidence, lack of relevant evidence for the context in which conservation decisions need to be taken, or simply because checking the evidence is too much effort. Edward Game, the lead scientist for The Nature Conservancy’s (TNC’s) Asia Pacific region, told Mongabay that practitioners are generally keen to use and gather evidence. “But I agree with Sutherland and Wordley that the use of evidence in conservation decision making is neither as effective nor as rigorous as it could be, and that the reasons for this are likely to be a complex mix of the causes they cite and others,” he added. Using evidence can also be difficult, Wordley said, because a lot of scientific papers are still locked behind paywalls, which means that practitioners cannot read them unless they have institutional access or are willing to pay a large sum of money. Moreover, the language in scientific papers is often technical, dense and jargon-filled. 
“This can be very off-putting for non-scientists!” she said. The authors write that the Conservation Evidence Project at the University of Cambridge, conceived by Sutherland, helps conservation practitioners overcome these challenges by providing them with a repository of evidence that is easily accessible and summarized in jargon-free language. To make the evidence more usable, the scientists are also planning to co-produce guidance documents with conservation NGOs by combining scientific evidence and practical advice in the same document, Wordley said. Louise Glew, Director of Conservation Evidence at World Wildlife Fund (WWF), agreed that scientists need to collaborate more with practitioners and policy makers. “We need to engage directly with decision-makers — recognizing that they are not one group but many — to understand the types of evidence that are relevant to their decisions, and to pinpoint the critical knowledge gaps,” she said. “Armed with this understanding, scientists and practitioners need to focus their energies on co-generating this evidence and working closely with relevant decision-makers to produce a shared understanding of the science. Conservation practitioners and funders can do more too, by designing their interventions and structuring funding, whenever possible, to create real-world experiments that allow us to test hypotheses about what works and what doesn’t.” Game added that evidence should be looked at more broadly than just peer-reviewed studies. “Sutherland and Wordley imply a fairly narrow definition of evidence that largely focuses on experimental or at least quantitative evidence, whereas I would argue that people (expert judgment) are an important form of evidence, but this judgment needs to be used more transparently and robustly than it commonly is,” he said. “I think a big part of the issue is a general absence of a robust theory of evidence in conservation; by that I mean it is not well-established or well-understood how candidate evidence should be compared and weighed against each other to understand the level of support for a particular conservation action,” Game added. Banner image of an orangutan at Sepilok by Rhett A. Butler for Mongabay. - Sutherland W.J. and Wordley C.F.R. (2017) Evidence complacency hampers conservation. Nature Ecology & Evolution. doi:10.1038/s41559-017-0244-1 FEEDBACK: Use this form to send a message to the author of this post. If you want to post a public comment, you can do that at the bottom of the page. Follow Shreya Dasgupta on Twitter: @ShreyaDasgupta
In Lampang, Thailand, two elephants have a problem. They’ve walked into adjacent paddocks separated by a fence. In front of them is a sliding table with two food bowls, but it’s out of reach and the way is barred by a stiff net. A rope has been looped around the table and one end snakes into each of the paddocks. If either jumbo tugs on the rope individually, the entire length will simply whip round into its paddock, depriving both of them of food. This job requires teamwork. And the elephants know it. Joshua Plotnik from Emory University has shown that when confronted by this challenge, elephants learn to coordinate with their partners. They eventually pull on the rope ends together to drag the table towards them. They even knew to wait for their partner if they were a little late. It’s yet more evidence that these giant animals have keen intellects that rival those of chimps and other mental heavyweights. There are, of course, many reasons to think that elephants are highly intelligent. They have large brains and they live in complex social groups. They can recognise themselves in mirrors, manipulate objects with their trunks, and smell the differences between human ethnic groups. They’re interested in their dying and dead, they help stuck or distressed individuals, and they babysit each others’ calves. But very few people have ever tested their intelligence in experiments, in the ways that primates, crows and dolphins have. Why? As Plotnik puts it, “This void in knowledge is mainly due to the danger and difficulty of submitting the largest land animal to behavioural experiments.” In short, working with a 4-tonne lump of muscle, tusk and trunk poses challenges that scientists don’t face when they work with a 70-kilogram chimp. At the Thai Elephant Conservation Center, Plotnik saw a way around these problems. Each of the Center’s Asian elephants forms a bond with a “mahout” – a human who cares for their needs. With their help, Plotnik put six pairs of elephants through the rope-pulling task, a classic experiment that had been devised for chimpanzees in the 1930s. When the mahouts released the elephant pairs at the same time, they eventually pulled the rope together. Even when one individual was released first, they soon learned that tugging on the rope themselves was futile. By the second day of training, they almost always waited for their partner to arrive before pulling, even if they had to wait for 45 seconds. In a final experiment, Plotnik coiled one of the rope ends at the base of the table, so only one of the elephants could reach their end. There was no way they could pull the table forwards, and four out of five elephants realised this. When their partner couldn’t grab their end of the rope, they were far less likely to bother. Often, they just turned around and went back to their house. This experiment shows that the elephants weren’t obeying a simple rule like “pull on the rope when my partner arrives” or “pull when I feel tension”. They didn’t necessarily understand the mechanics of the rope and table, but they certainly knew that if their partner had walked away, there was no point in doing anything themselves. Capuchin monkeys, hyenas and rooks (a type of crow) have all managed to learn to pull on the rope with a partner. But it’s not clear whether any of these animals understood their partner’s contribution to the task. “The rooks pulled together, but never waited for their partners,” says Plotnik. 
“This of course doesn’t mean corvids [crows and their kin – Ed] don’t cooperate. Personally, I think one big difference between corvids and elephants is patience. Corvids are flighty, and waiting patiently for a partner’s arrival and inhibiting rope pulling until then may be very difficult for them. Elephants, on the other hand, are natural waiters. In the wild, elephants will often wait for other family members to “catch up” before moving on to other areas.” The case for chimps is far stronger. In previous experiments, chimps have solved the task, even when they had to first let their partner into the room with the ropes. If the rope ends were close enough that a single chimp could pull the table in on its own, it never let its partner in. “It showed quite compelling evidence that the chimpanzees knew they needed a partner and that their partner needed to help,” says Plotnik. The elephants’ success is equally compelling, even though their task was simpler. They clearly knew enough to wait for their partner and to abandon their end of the rope when their partner couldn’t reach theirs. “These results put elephants, at least in terms of how quickly they learn the critical contingencies of cooperation, on a par with apes,” says Plotnik. There is a final twist to this tale. The numbers and graphs in Plotnik’s experiment only related to the elephants that solved the task as he intended. But two of them came up with their own solutions. One youngster called NU always stood on her end of the rope so that it didn’t yank away when her partner pulled on the other end. It was an altogether lazier strategy – NU got her food bowl while her partner did all the work! Reference: Plotnik, Lair, Suphachokasahakun & de Waal. 2011. Elephants know when they need a helping trunk in a cooperative task. PNAS http://dx.doi.org/10.1073/pnas.1101765108 More on elephants: - Elephants and humans evolved similar solutions to problems of gas-guzzling brains - South African wildlife – Elephant encounter - Elephants smell the difference between human ethnic groups - Zoo elephants die much earlier than wild ones - Elephants crave companionship in unfamiliar stomping grounds - Elephants recognise themselves in mirror
health law: an overview Broadly defined, health law includes the law of public health, health care generally, and medical care specifically. Preserving public health is a primary duty of the state. Health regulations and laws are therefore almost all administered at the state level. Many states delegate authority to subordinate govermental agencies such as boards of health. These boards are created by legislative acts. Federal health law focuses on the activity of the Department of Health and Human Services (HHS). It administers a wide variety of agencies and programs, like providing financial assistance to needy individuals; conducting medical and scientific research; providing health care and advocacy services; and enforcing laws and regulations related to human services. An important part of the HHS are the Centers for Medicare and Medicaid Services, which oversee the Medicare and Medicaid Programs. Their goal is to ensure that elderly and needy individuals receive proper medical care. Private health insurance originated with the Blue Cross system in 1929. The underlying principle was to spread the risk of high hospitalization bills between all individuals. Whether sick or healthy, all school teachers and hospital employees in the Dallas area had to join, ensuring that the risk was spread through a large number of individuals. Blue Shield was later developed under the same principle. Today, many people receive health care through health maintenance organizations (HMO's). Managed care essentially creates a triangle relationship between physician, patient, and payer. Physicians are paid a flat per-member per-month fee for basic health care services, regardless of whether the patient seeks those services. The risk that a patient is going to require significant treatment shifts from the insurance company to the physicians under this model. Because of the importance of the industry, HMO's are heavily regulated. On the federal level the Health Maintenance Organization Act of 1973 governs. menu of sources U.S. Constitution and Federal Statutes - U.S. Code: - CRS Annotated Constitution - U.S. Supreme Court: - U.S.Circuit Courts of Appeals: Health Law Cases State Judicial Decisions - N.Y. Court of Appeals: - Appellate Decisions from Other States Key Internet Sources - Federal Agencies: - Department of Health and Human Services - U.S. Food and Drug Administration - Centers for Disease Control and Prevention (CDC) - Centers for Medicare and Medicaid Services - National Institutes of Health (NIH) - Substance Abuse and Mental Health Services Administration (SAMHSA) - National Health Information Center - World Health Organization - Health Care Law (Nolo) - American Medical Forensic Specialists - Health law index from Useful Offnet (or Subscription - $) Sources - Good Starting Point in Print: Furrow, Greaney, Johnson, Jost and Schwartz' Hornbook on Health Law, West Group (2000) - LII Downloads
Mitral Valve Regurgitation

What is mitral valve regurgitation?
When the mitral valve becomes leaky, it's called mitral valve regurgitation. It’s also known as mitral insufficiency. The mitral valve is one of the heart’s 4 valves. These valves help the blood flow through the heart’s 4 chambers and out to the body. The mitral valve lies between the left atrium and the left ventricle. Normally, the mitral valve prevents blood flowing back into the left atrium from the left ventricle. In mitral valve regurgitation, however, some blood leaks back through the valve into the left atrium instead of all flowing forward out of the ventricle the way it should. Because of this, the heart has to work harder than it should to get blood out to the body. If the regurgitation gets worse, some blood may start to back up into the lungs. A very small amount of mitral regurgitation is very common. However, a few people have severe mitral valve regurgitation.
Mitral valve regurgitation can be acute or chronic. With the acute condition, the valve suddenly becomes leaky. In this case, the heart doesn’t have time to adapt to the leak in the valve. In the chronic form, the valve gradually becomes leakier. The heart has time to adapt to the leak. Symptoms with acute mitral regurgitation are often severe. With chronic mitral regurgitation, the symptoms may range from mild to severe.

What causes mitral valve regurgitation?
A range of medical conditions can cause mitral valve regurgitation, such as:
- Mitral valve prolapse (floppy valve leaflets that bulge back into the left atrium)
- Rheumatic heart disease from untreated infection with Streptococcus bacteria (which cause strep throat)
- Coronary artery disease or heart attack
- Certain autoimmune diseases (like rheumatoid arthritis)
- Infection of the heart valves (endocarditis)
- Congenital abnormalities of the mitral valve
- Certain drugs
Acute mitral valve regurgitation is more likely to happen after a heart attack. It’s also more likely to happen after rupture of the tissue or muscle that supports the mitral valve. It can happen after an injury or heart valve infection.

What are the risks for mitral valve regurgitation?
You can reduce some risk factors for mitral valve regurgitation. For example:
- Use antibiotics to treat strep infection and prevent rheumatic heart disease.
- Avoid IV drugs to reduce the risk of heart valve infection.
- Promptly treat medical conditions that can lead to the disorder.
There are other risk factors that you can’t change. For example, some conditions that can lead to mitral valve regurgitation are partly genetic.

What are the symptoms of mitral valve regurgitation?
Most people with chronic mitral valve regurgitation don’t notice any symptoms for a long time. People with mild or moderate mitral regurgitation often don’t have any symptoms. If the regurgitation becomes more severe, symptoms may start. They may be stronger and happen more often over time. They may include:
- Shortness of breath with exertion
- Shortness of breath when lying flat
- Reduced ability to exercise
- Unpleasant awareness of your heartbeat
- Swelling in your legs, abdomen, and the veins in your neck
- Chest pain (less common)
Acute, severe mitral valve regurgitation is a medical emergency and can cause serious symptoms such as:
- Symptoms of shock (such as pale skin, unconsciousness, or rapid breathing)
- Severe shortness of breath
- Abnormal heart rhythms that make the heart unable to pump effectively

How is mitral valve regurgitation diagnosed?
Your healthcare provider will take your medical history and give you a physical exam. Using a stethoscope, your provider will check for heart murmurs and other signs of the condition. You may also have tests such as:
- Echocardiogram to assess severity
- Stress echocardiogram to assess exercise tolerance
- Electrocardiogram (ECG) to assess heart rhythm
- Cardiac MRI, transesophageal echocardiogram, or cardiac catheterization (only if more information is needed)

How is mitral valve regurgitation treated?

Treatment varies depending on its cause. It also varies depending on how severe and sudden the condition is. And it depends on your overall health. Mitral valve regurgitation can increase the risk of other heart rhythm problems, such as atrial fibrillation.

If you have mild or moderate mitral valve regurgitation, you may not need any medical treatment. Your healthcare provider may just choose to watch your condition. You may need regular echocardiograms over time if you have moderate mitral valve regurgitation. Your healthcare provider might also prescribe medicines such as:
- Angiotensin-converting enzyme (ACE) inhibitors and beta-blockers to help reduce the workload of the heart when a person's pump function is not working as well
- Medicines to slow the heart rate if you develop atrial fibrillation
- Diuretics (water pills) to reduce swelling and improve symptoms
- Anticoagulants (blood thinners) to help prevent blood clots if you have atrial fibrillation

Surgery may be needed with severe mitral valve regurgitation. Surgery is often needed right away for acute severe mitral valve regurgitation. The surgeon may be able to repair the mitral valve. In some cases, a replacement valve is needed. Your surgeon might use a valve made of pig, cow, or human heart tissue. Man-made mechanical valves are another option. Talk with your surgeon about which one is right for you. Your surgeon might perform open surgery or a minimally invasive repair.

If you have atrial fibrillation, the surgeon may do a Maze procedure. This is a type of heart surgery that can reduce the future risk of atrial fibrillation.

Women with moderate or severe mitral regurgitation may have problems during pregnancy and may need to have valve surgery before they become pregnant.

What are potential complications of mitral valve regurgitation?

Mitral valve regurgitation can cause complications such as:
- Atrial fibrillation, in which the atria of the heart don't contract well. This leads to an increased risk of stroke.
- Elevated blood pressure in the lungs (pulmonary artery hypertension)
- Dilation of the heart
- Heart failure
- Bacterial infection of the heart valves (more likely after valve replacement surgery)
- Complications from valve replacement surgery (like excess bleeding or infection)

To reduce the risk of these complications, your healthcare provider may prescribe:
- Anticoagulant medicines that prevent blood clots (blood thinners)
- Medicines to reduce the stress load of the heart
- Antibiotics before certain medical and dental procedures. (In most cases, you will only need antibiotics if you have had valve surgery or a previous bacterial infection of the heart valves.)

Living with mitral valve regurgitation

You'll need to see your healthcare provider for regular monitoring. See your healthcare provider right away if your symptoms change. Note your symptoms when exercising. Symptoms may get worse during physical activity. Talk with your provider about your exercise program and what is right for you.
If you have progressive mitral regurgitation, your healthcare provider may advise avoiding competitive sports. Tell all of your healthcare providers and dentists about your medical history.

Your healthcare provider may want to treat you for heart problems related to mitral valve regurgitation. Treatments may include:
- A low-salt, heart-healthy diet (to decrease blood pressure and the stress on your heart)
- Blood pressure lowering medicines
- Medicines to reduce the risk of arrhythmias
- Reduction of caffeine and alcohol to reduce the risk of arrhythmias

When should I call my healthcare provider?

If you notice your symptoms are slowly getting worse, plan to see your healthcare provider. You may need surgery or a medicine change. See your healthcare provider right away if:
- You have severe shortness of breath or chest pain
- You notice sudden new symptoms

Key points about mitral valve regurgitation

- With mitral valve regurgitation, the heart's mitral valve is leaky. Some blood flows back into the left atrium from the left ventricle.
- You may not have symptoms for many years.
- Chronic mitral valve regurgitation may get worse and need surgery.
- Acute, severe mitral valve regurgitation is a medical emergency. It needs surgery right away.
- See your healthcare provider for regular check-ups to monitor your condition. If your symptoms get worse or become severe, see your healthcare provider right away.

Tips to help you get the most from a visit to your healthcare provider:
- Know the reason for your visit and what you want to happen.
- Before your visit, write down questions you want answered.
- Bring someone with you to help you ask questions and remember what your provider tells you.
- At the visit, write down the name of a new diagnosis, and any new medicines, treatments, or tests. Also write down any new instructions your provider gives you.
- Know why a new medicine or treatment is prescribed, and how it will help you. Also know what the side effects are.
- Ask if your condition can be treated in other ways.
- Know why a test or procedure is recommended and what the results could mean.
- Know what to expect if you do not take the medicine or have the test or procedure.
- If you have a follow-up appointment, write down the date, time, and purpose for that visit.
- Know how you can contact your provider if you have questions.
You may have heard someone refer to your child's "speech skills" or their "language skills", when talking about their development. While they may sound similar, these are actually quite different areas of development. This post provides a quick, basic summary on the difference between these two areas. Speech skills: Speech, which is related to how your child talks, can be split into two different areas, articulation and fluency. Articulation refers to the way children acquire and produce sounds in words, sentences and conversation. If your child is having difficulty with their articulation development you may see things like: Fluency is the aspect of speech production that involves smoothness, rate and effort. If your child is having difficulty with their fluency development you may see things like: Language skills: Language can be broken down most simply into two parts: receptive language and expressive language (Note: Today we are just talking about foundational or basic language skills. We'll cover higher-level language skills in a future post.) Receptive language is the ability to understand what is being said and can include things like following directions or understanding questions. If your child is having difficulty with their receptive language skills, you may see things like: Expressive language is the ability to use language and includes things like grammar, vocabulary and answering questions. If your child is having difficulty with their expressive language development, you may see things like: Some children may have trouble with one of these areas of communication (speech or language) or both. Some children may have difficulty with one area of their language development, but not the other (receptive vs. expressive). Looking out for potential red-flags with your child's development is the best way to combat any difficulties in these communication areas. Identifying areas of weakness and working on them early gives your child the best chance to make progress and catch up to their peers! If you have questions or concerns about your child's development, contact us at The Speech Space. We offer free screenings, which take approximately 30 minutes, and can help identify potential problems.
A pulsar is a tiny but massive object that rapidly rotates, sending flashes of light toward Earth: a rapidly spinning neutron star that emits radiation, usually radio waves, in narrow beams focused by the star's powerful magnetic field and streaming outward from its magnetic poles. Because the pulsar's magnetic poles do not align with the poles of its rotational axis, the beams of radiation sweep around like the beacon of a lighthouse and are thus observed on Earth as short, regular pulses, with periods anywhere between 1 millisecond and 4 seconds. The radiation from such a star appears to come in a series of regular pulses (one per revolution), which explains the name.
How Do Blowing and Sucking Relate To Vision?

Blowing and sucking activities can have an impact on your child's visual-motor skills, handwriting, and ability to focus.

Sucking encourages eye convergence, the ability to focus the eyes at a close distance, and bringing one's attention towards the body and away from environmental distractions. Coordinated convergence is necessary for reading, handwriting, and catching moving objects.

Blowing encourages eye divergence, the ability to focus the eyes at a distance, and focusing one's attention away from the body. Coordinated divergence is necessary for copying from the board, reading signs, throwing at a target, and many sports.

The following activities are simple ways to incorporate blowing and sucking into everyday tasks. Be sure to immediately follow with the appropriate functional vision task.

Blowing activities
Buy a bag of jumbo straws (cut in half) and cotton balls or craft pom-poms.
- Replace board game pieces with cotton balls to blow around the board.
- Have a cotton ball blowing race while crawling. Make obstacles to go around (e.g., cut off the bottom of a paper cup and lay it on its side).
- Play cotton ball hockey.
- Try blowing a cotton ball as close to the edge of a tabletop as possible without it falling off.
- Refer to the previous Huff and Puff Spelling post.

Sucking activities
Buy a bag of jumbo straws (cut in half) and plastic bingo chips from a party store. Suck through the straw to hold a bingo chip on the end of the straw while completing these activities.
- Replace board game pieces with bingo chips to move around the board.
- Draw circles on paper (random, straight lines, diagonals) and write one letter of the alphabet in each one. Use bingo chips to locate letters or spell out spelling/word study words.
- Draw circles on paper (random, straight lines, diagonals) and write one number in each one. Give the child a math equation and have them use a bingo chip to mark the answer.
- Use a dry erase marker to write letters on the bingo chips for spelling.
- See how many bingo chips can be stacked into a tower.

Bingo Chip Spelling

This activity has proved to be so fun and effective at school for both improved attention and academics. I once tried this with a 2nd grade student and after I returned him to his classroom, the assistant teacher came to find me later to ask what I did that day in OT. For the 1st time ever this student was happily using manipulatives for math and remaining focused at the same time!

OT skills this activity targets:
- attention to task

Materials:
- laminator or page protectors

Instructions to Make the Activity:
Trace a bingo chip 26 times on a sheet of paper. Write one letter in each circle. I laminated the sheets of paper, but you can also use a page protector sleeve. It is a good idea to protect the paper because there is often saliva from the straw and this makes it easy to clean up.

How to Play:
The student is asked to find a letter or word by sucking through the straw to pick up the bingo chip. While maintaining the vacuum seal with the straw the student visually scans and releases the bingo chip onto the target letter.

How to Grade the Challenge:
- letters in straight lines alphabetically are easier to track than random placement
- instead of letters you can use numbers to complete math problems
- for students who do not yet recognize letters, color matching with a gum ball machine is fun
- use plastic coins instead of bingo chips for coin value
- laminated paper also works well to pick up with the straw; cut up a sentence and have the student place it in the correct order
Max Planck, 1858-1947

Max Planck was told that there was nothing new to be discovered in physics. He was about to embark on a career in physics that would set that idea on its ear.

As a young student Planck had shown great promise in music, but a remarkable mathematics teacher turned his interest toward science. After gaining degrees from the Universities of Berlin and Munich, he focused on thermodynamics (the study of heat and energy). He was especially interested in the nature of radiation from hot materials. In 1901 he devised a theory that perfectly described the experimental evidence, but part of it was a radical new idea: energy did not flow in a steady continuum, but was delivered in discrete packets Planck later called quanta. That explained why, for example, a hot iron poker glows distinctly red and white.

Planck, a conservative man, was not trying to revolutionize physics at all, just to explain the particular phenomenon he was studying. He had tried to reconcile the facts with classical physics, but that hadn't worked. In fact, when people refer to "classical physics" today, they mean "before Planck." He didn't fully appreciate the revolution he had started, but in the years that followed, scientists such as Albert Einstein, Niels Bohr, and Werner Heisenberg shaped modern physics by applying his elegantly simple, catalytic new idea.

Planck was an extremely successful physicist, receiving the Nobel Prize in 1919, but his personal life was marked by tragedy. He and his first wife Marie Merck had two sons and twin daughters; Marie died after 23 years of marriage. He remarried and had one more son. Planck's eldest son was killed during World War I, and both daughters died in childbirth. His second son was executed in 1945 for involvement in the 1944 plot to assassinate Hitler.

Planck himself openly opposed Nazi persecutions and intervened on behalf of Jewish scientists. He praised Einstein in contradiction to the Nazis, who denounced Einstein and his work. He even met with Hitler to try to stop actions against Jewish scientists, but the chancellor went on a tirade about Jews in general and disregarded him. Planck, who had been president of the Kaiser Wilhelm Society since 1930, resigned his post in 1937 in protest. After the war, the research organization was renamed the Max Planck Society and he was appointed its head.

"A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it."
NASA plans on setting some fires in space capsules in a series of experiments named Saffire (the Spacecraft Fire Experiments). It's not part of a plan to roast marshmallows in space — NASA wants to see how microgravity will affect larger fires. It's also an opportunity to see how NASA's fire-prevention strategies work in a spacecraft in orbit. This isn't the first time the agency has played with fire, but previous experiments were limited in size. You may have seen pictures of a candle lit in space, where the normally elongated yellow flame is instead a small globe of intense blue-purple fire. Saffire will aim to create a much larger fire to see what happens. To do this, NASA will rely on one-use spacecraft like the Cygnus. On March 22, NASA launched a Cygnus craft to send scientific supplies to the International Space Station. Once the cargo is offloaded, the spacecraft will detach from the ISS and move to a safe distance. Then the fire experiment can begin aboard the unmanned vessel. The fire will happen in a chamber inside the Cygnus. A panel of cotton-fiberglass material will serve as fuel. An electrified wire running along one edge of the panel will heat until it ignites the material. Sensors and cameras will record the experiment and send data back to Earth. What happens next? We're not sure. It may be that microgravity restricts how large a fire can grow. Two more Saffire experiments will follow in later months to gather more information. The second one (Saffire-II) will include materials commonly used in space suits and spacecraft to test how fire-resistant they really are in microgravity. The third experiment (Saffire-III) will be another attempt to see how flames spread in microgravity. NASA plans to use the data to form next-generation fire-prevention strategies, which will be critical for any long-term space missions, such as a crew flying to Mars. Watch the video above to get more details on what's poised to be the largest ever man-made space fire.
Edward Sapir (1884-1939). Language: An Introduction to the Study of Speech. 1921.

Language, Race and Culture

LANGUAGE has a setting. The people that speak it belong to a race (or a number of races), that is, to a group which is set off by physical characteristics from other groups. Again, language does not exist apart from culture, that is, from the socially inherited assemblage of practices and beliefs that determines the texture of our lives. Anthropologists have been in the habit of studying man under the three rubrics of race, language, and culture. One of the first things they do with a natural area like Africa or the South Seas is to map it out from this threefold point of view. These maps answer the questions: What and where are the major divisions of the human animal, biologically considered (e.g., Congo Negro, Egyptian White; Australian Black, Polynesian)? What are the most inclusive linguistic groupings, the linguistic stocks, and what is the distribution of each (e.g., the Hamitic languages of northern Africa, the Bantu languages of the south; the Malayo-Polynesian languages of Indonesia, Melanesia, Micronesia, and Polynesia)? How do the peoples of the given area divide themselves as cultural beings? what are the outstanding cultural areas and what are the dominant ideas in each (e.g., the Mohammedan north of Africa; the primitive hunting, non-agricultural culture of the Bushmen in the south; the culture of the Australian natives, poor in physical respects but richly developed in ceremonialism; the more advanced and highly specialized culture of Polynesia)?

The man in the street does not stop to analyze his position in the general scheme of humanity. He feels that he is the representative of some strongly integrated portion of humanity - now thought of as a nationality, now as a race - and that everything that pertains to him as a typical representative of this large group somehow belongs together. If he is an Englishman, he feels himself to be a member of the Anglo-Saxon race, the genius of which race has fashioned the English language and the Anglo-Saxon culture of which the language is the expression. Science is colder. It inquires if these three types of classification - racial, linguistic, and cultural - are congruent, if their association is an inherently necessary one or is merely a matter of external history. The answer to the inquiry is not encouraging to race sentimentalists. Historians and anthropologists find that races, languages, and cultures are not distributed in parallel fashion, that their areas of distribution intercross in the most bewildering fashion, and that the history of each is apt to follow a distinctive course. Races intermingle in a way that languages do not. On the other hand, languages may spread far beyond their original home, invading the territory of new races and of new culture spheres. A language may even die out in its primary area and live on among peoples violently hostile to the persons of its original speakers. Further, the accidents of history are constantly rearranging the borders of culture areas without necessarily effacing the existing linguistic cleavages.
If we can once thoroughly convince ourselves that race, in its only intelligible, that is biological, sense, is supremely indifferent to the history of languages and cultures, that these are no more directly explainable on the score of race than on that of the laws of physics and chemistry, we shall have gained a view-point that allows a certain interest to such mystic slogans as Slavophilism, Anglo-Saxondom, Teutonism, and the Latin genius but that quite refuses to be taken in by any of them. A careful study of linguistic distributions and of the history of such distributions is one of the driest of commentaries on these sentimental creeds.

That a group of languages need not in the least correspond to a racial group or a culture area is easily demonstrated. We may even show how a single language intercrosses with race and culture lines. The English language is not spoken by a unified race. In the United States there are several millions of negroes who know no other language. It is their mother-tongue, the formal vesture of their inmost thoughts and sentiments. It is as much their property, as inalienably theirs, as the King of England's. Nor do the English-speaking whites of America constitute a definite race except by way of contrast to the negroes. Of the three fundamental white races in Europe generally recognized by physical anthropologists - the Baltic or North European, the Alpine, and the Mediterranean - each has numerous English-speaking representatives in America.

But does not the historical core of English-speaking peoples, those relatively unmixed populations that still reside in England and its colonies, represent a race, pure and single? I cannot see that the evidence points that way. The English people are an amalgam of many distinct strains. Besides the old Anglo-Saxon, in other words North German, element which is conventionally represented as the basic strain, the English blood comprises Norman French,1 Scandinavian, Celtic,2 and pre-Celtic elements. If by English we mean also Scotch and Irish,3 then the term Celtic is loosely used for at least two quite distinct racial elements - the short, dark-complexioned type of Wales and the taller, lighter, often ruddy-haired type of the Highlands and parts of Ireland. Even if we confine ourselves to the Saxon element, which, needless to say, nowhere appears pure, we are not at the end of our troubles. We may roughly identify this strain with the racial type now predominant in southern Denmark and adjoining parts of northern Germany. If so, we must content ourselves with the reflection that while the English language is historically most closely affiliated with Frisian, in second degree with the other West Germanic dialects (Low Saxon or Plattdeutsch, Dutch, High German), only in third degree with Scandinavian, the specific Saxon racial type that overran England in the fifth and sixth centuries was largely the same as that now represented by the Danes, who speak a Scandinavian language, while the High German-speaking population of central and southern Germany4 is markedly distinct.

But what if we ignore these finer distinctions and simply assume that the Teutonic or Baltic or North European racial type coincided in its distribution with that of the Germanic languages? Are we not on safe ground then? No, we are now in hotter water than ever.
First of all, the mass of the German-speaking population (central and southern Germany, German Switzerland, German Austria) do not belong to the tall, blond-haired, long-headed5 Teutonic race at all, but to the shorter, darker-complexioned, short-headed6 Alpine race, of which the central population of France, the French Swiss, and many of the western and northern Slavs (e.g., Bohemians and Poles) are equally good representatives. The distribution of these Alpine populations corresponds in part to that of the old continental Celts, whose language has everywhere given way to Italic, Germanic, and Slavic pressure. We shall do well to avoid speaking of a Celtic race, but if we were driven to give the term a content, it would probably be more appropriate to apply it to, roughly, the western portion of the Alpine peoples than to the two island types that I referred to before. These latter were certainly Celticized, in speech and, partly, in blood, precisely as, centuries later, most of England and part of Scotland was Teutonized by the Angles and Saxons. Linguistically speaking, the Celts of to-day (Irish Gaelic, Manx, Scotch Gaelic, Welsh, Breton) are Celtic and most of the Germans of to-day are Germanic precisely as the American Negro, Americanized Jew, Minnesota Swede, and German-American are English.

But, secondly, the Baltic race was, and is, by no means an exclusively Germanic-speaking people. The northernmost Celts, such as the Highland Scotch, are in all probability a specialized offshoot of this race. What these people spoke before they were Celticized nobody knows, but there is nothing whatever to indicate that they spoke a Germanic language. Their language may quite well have been as remote from any known Indo-European idiom as are Basque and Turkish to-day. Again, to the east of the Scandinavians are non-Germanic members of the race - the Finns and related peoples, speaking languages that are not definitely known to be related to Indo-European at all.

We cannot stop here. The geographical position of the Germanic languages is such7 as to make it highly probable that they represent but an outlying transfer of an Indo-European dialect (possibly a Celto-Italic prototype) to a Baltic people speaking a language or a group of languages that was alien to Indo-European.8 Not only, then, is English not spoken by a unified race at present but its prototype, more likely than not, was originally a foreign language to the race with which English is more particularly associated. We need not seriously entertain the idea that English or the group of languages to which it belongs is in any intelligible sense the expression of race, that there are embedded in it qualities that reflect the temperament or genius of a particular breed of human beings.

Many other, and more striking, examples of the lack of correspondence between race and language could be given if space permitted. One instance will do for many. The Malayo-Polynesian languages form a well-defined group that takes in the southern end of the Malay Peninsula and the tremendous island world to the south and east (except Australia and the greater part of New Guinea). In this vast region we find represented no less than three distinct races - the Negro-like Papuans of New Guinea and Melanesia, the Malay race of Indonesia, and the Polynesians of the outer islands.
The Polynesians and Malays all speak languages of the Malayo-Polynesian group, while the languages of the Papuans belong partly to this group (Melanesian), partly to the unrelated languages (Papuan) of New Guinea.9 In spite of the fact that the greatest race cleavage in this region lies between the Papuans and the Polynesians, the major linguistic division is of Malayan on the one side, Melanesian and Polynesian on the other.

As with race, so with culture. Particularly in more primitive levels, where the secondarily unifying power of the national10 ideal does not arise to disturb the flow of what we might call natural distributions, is it easy to show that language and culture are not intrinsically associated. Totally unrelated languages share in one culture, closely related languages - even a single language - belong to distinct culture spheres. There are many excellent examples in aboriginal America. The Athabaskan languages form as clearly unified, as structurally specialized, a group as any that I know of.11 The speakers of these languages belong to four distinct culture areas - the simple hunting culture of western Canada and the interior of Alaska (Loucheux, Chipewyan), the buffalo culture of the Plains (Sarcee), the highly ritualized culture of the southwest (Navaho), and the peculiarly specialized culture of northwestern California (Hupa). The cultural adaptability of the Athabaskan-speaking peoples is in the strangest contrast to the inaccessibility to foreign influences of the languages themselves.12

The Hupa Indians are very typical of the culture area to which they belong. Culturally identical with them are the neighboring Yurok and Karok. There is the liveliest intertribal intercourse between the Hupa, Yurok, and Karok, so much so that all three generally attend an important religious ceremony given by any one of them. It is difficult to say what elements in their combined culture belong in origin to this tribe or that, so much at one are they in communal action, feeling, and thought. But their languages are not merely alien to each other; they belong to three of the major American linguistic groups, each with an immense distribution on the northern continent. Hupa, as we have seen, is Athabaskan and, as such, is also distantly related to Haida (Queen Charlotte Islands) and Tlingit (southern Alaska); Yurok is one of the two isolated Californian languages of the Algonkin stock, the center of gravity of which lies in the region of the Great Lakes; Karok is the northernmost member of the Hokan group, which stretches far to the south beyond the confines of California and has remoter relatives along the Gulf of Mexico.

Returning to English, most of us would readily admit, I believe, that the community of languages between Great Britain and the United States is far from arguing a like community of culture. It is customary to say that they possess a common Anglo-Saxon cultural heritage, but are not many significant differences in life and feeling obscured by the tendency of the cultured to take this common heritage too much for granted? In so far as America is still specifically English, it is only colonially or vestigially so; its prevailing cultural drift is partly towards autonomous and distinctive developments, partly towards immersion in the larger European culture of which that of England is only a particular facet.
We cannot deny that the possession of a common language is still and will long continue to be a smoother of the way to a mutual cultural understanding between England and America, but it is very clear that other factors, some of them rapidly cumulative, are working powerfully to counteract this leveling influence. A common language cannot indefinitely set the seal on a common culture when the geographical, political, and economic determinants of the culture are no longer the same throughout its area. Language, race, and culture are not necessarily correlated. This does not mean that they never are. There is some tendency, as a matter of fact, for racial and cultural lines of cleavage to correspond to linguistic ones, though in any given case the latter may not be of the same degree of importance as the others. Thus, there is a fairly definite line of cleavage between the Polynesian languages, race, and culture on the one hand and those of the Melanesians on the other, in spite of a considerable amount of overlapping.13 The racial and cultural division, however, particularly the former, are of major importance, while the linguistic division is of quite minor significance, the Polynesian languages constituting hardly more than a special dialectic subdivision of the combined Melanesian-Polynesian group. Still clearer-cut coincidences of cleavage may be found. The language, race, and culture of the Eskimo are markedly distinct from those of their neighbors;14 in southern Africa the language, race, and culture of the Bushmen offer an even stronger contrast to those of their Bantu neighbors. Coincidences of this sort are of the greatest significance, of course, but this significance is not one of inherent psychological relation between the three factors of race, language, and culture. The coincidences of cleavage point merely to a readily intelligible historical association. If the Bantu and Bushmen are so sharply differentiated in all respects, the reason is simply that the former are relatively recent arrivals in southern Africa. The two peoples developed in complete isolation from each other; their present propinquity is too recent for the slow process of cultural and racial assimilation to have set in very powerfully. As we go back in time, we shall have to assume that relatively scanty populations occupied large territories for untold generations and that contact with other masses of population was not as insistent and prolonged as it later became. The geographical and historical isolation that brought about race differentiations was naturally favorable also to far-reaching variations in language and culture. The very fact that races and cultures which are brought into historical contact tend to assimilate in the long run, while neighboring languages assimilate each other only casually and in superficial respects,15 indicates that there is no profound causal relation between the development of language and the specific development of race and of culture. But surely, the wary reader will object, there must be some relation between language and culture, and between language and at least that intangible aspect of race that we call temperament. Is it not inconceivable that the particular collective qualities of mind that have fashioned a culture are not precisely the same as were responsible for the growth of a particular linguistic morphology? This question takes us into the heart of the most difficult problems of social psychology. 
It is doubtful if any one has yet attained to sufficient clarity on the nature of the historical process and on the ultimate psychological factors involved in linguistic and cultural drifts to answer it intelligently. I can only very briefly set forth my own views, or rather my general attitude. It would be very difficult to prove that temperament, the general emotional disposition of a people,16 is basically responsible for the slant and drift of a culture, however much it may manifest itself in an individual's handling of the elements of that culture. But granted that temperament has a certain value for the shaping of culture, difficult though it be to say just how, it does not follow that it has the same value for the shaping of language. It is impossible to show that the form of a language has the slightest connection with national temperament. Its line of variation, its drift, runs inexorably in the channel ordained for it by its historic antecedents; it is as regardless of the feelings and sentiments of its speakers as is the course of a river of the atmospheric humors of the landscape. I am convinced that it is futile to look in linguistic structure for differences corresponding to the temperamental variations which are supposed to be correlated with race. In this connection it is well to remember that the emotional aspect of our psychic life is but meagerly expressed in the build of language.17

Language and our thought-grooves are inextricably interwoven, are, in a sense, one and the same. As there is nothing to show that there are significant racial differences in the fundamental conformation of thought, it follows that the infinite variability of linguistic form, another name for the infinite variability of the actual process of thought, cannot be an index of such significant racial differences. This is only apparently a paradox. The latent content of all languages is the same - the intuitive science of experience. It is the manifest form that is never twice the same, for this form, which we call linguistic morphology, is nothing more nor less than a collective art of thought, an art denuded of the irrelevancies of individual sentiment. At last analysis, then, language can no more flow from race as such than can the sonnet form.

Nor can I believe that culture and language are in any true sense causally related. Culture may be defined as what a society does and thinks. Language is a particular how of thought. It is difficult to see what particular causal relations may be expected to subsist between a selected inventory of experience (culture, a significant selection made by society) and the particular manner in which the society expresses all experience. The drift of culture, another way of saying history, is a complex series of changes in society's selected inventory - additions, losses, changes of emphasis and relation. The drift of language is not properly concerned with changes of content at all, merely with changes in formal expression. It is possible, in thought, to change every sound, word, and concrete concept of a language without changing its inner actuality in the least, just as one can pour into a fixed mold water or plaster or molten gold. If it can be shown that culture has an innate form, a series of contours, quite apart from subject-matter of any description whatsoever, we have a something in culture that may serve as a term of comparison with and possibly a means of relating it to language.
But until such purely formal patterns of culture are discovered and laid bare, we shall do well to hold the drifts of language and of culture to be non-comparable and unrelated processes. From this it follows that all attempts to connect particular types of linguistic morphology with certain correlated stages of cultural development are vain. Rightly understood, such correlations are rubbish. The merest coup d'œil verifies our theoretical argument on this point. Both simple and complex types of language of an indefinite number of varieties may be found spoken at any desired level of cultural advance. When it comes to linguistic form, Plato walks with the Macedonian swineherd, Confucius with the head-hunting savage of Assam.

It goes without saying that the mere content of language is intimately related to culture. A society that has no knowledge of theosophy need have no name for it; aborigines that had never seen or heard of a horse were compelled to invent or borrow a word for the animal when they made his acquaintance. In the sense that the vocabulary of a language more or less faithfully reflects the culture whose purposes it serves it is perfectly true that the history of language and the history of culture move along parallel lines. But this superficial and extraneous kind of parallelism is of no real interest to the linguist except in so far as the growth or borrowing of new words incidentally throws light on the formal trends of the language. The linguistic student should never make the mistake of identifying a language with its dictionary.

If both this and the preceding chapter have been largely negative in their contentions, I believe that they have been healthily so. There is perhaps no better way to learn the essential nature of speech than to realize what it is not and what it does not do. Its superficial connections with other historic processes are so close that it needs to be shaken free of them if we are to see it in its own right. Everything that we have so far seen to be true of language points to the fact that it is the most significant and colossal work that the human spirit has evolved - nothing short of a finished form of expression for all communicable experience. This form may be endlessly varied by the individual without thereby losing its distinctive contours; and it is constantly reshaping itself as is all art. Language is the most massive and inclusive art we know, a mountainous and anonymous work of unconscious generations.

Note 1. Itself an amalgam of North French and Scandinavian elements.
Note 2. The Celtic blood of what is now England and Wales is by no means confined to the Celtic-speaking regions - Wales and, until recently, Cornwall. There is every reason to believe that the invading Germanic tribes (Angles, Saxons, Jutes) did not exterminate the Brythonic Celts of England nor yet drive them altogether into Wales and Cornwall (there has been far too much driving of conquered peoples into mountain fastnesses and land's ends in our histories), but simply intermingled with them and imposed their rule and language upon them.
Note 3. In practice these three peoples can hardly be kept altogether distinct. The terms have rather a local-sentimental than a clearly racial value. Intermarriage has gone on steadily for centuries and it is only in certain outlying regions that we get relatively pure types, e.g., the Highland Scotch of the Hebrides. In America, English, Scotch, and Irish strands have become inextricably interwoven.
Note 4.
The High German now spoken in northern Germany is not of great age, but is due to the spread of standardized German, based on Upper Saxon, a High German dialect, at the expense of Plattdeutsch.
Note 7. By working back from such data as we possess we can make it probable that these languages were originally confined to a comparatively small area in northern Germany and Scandinavia. This area is clearly marginal to the total area of distribution of the Indo-European-speaking peoples. Their center of gravity, say 1000 B.C., seems to have lain in southern Russia.
Note 8. While this is only a theory, the technical evidence for it is stronger than one might suppose. There are a surprising number of common and characteristic Germanic words which cannot be connected with known Indo-European radical elements and which may well be survivals of the hypothetical pre-Germanic language; such are house, stone, sea, wife (German Haus, Stein, See, Weib).
Note 9. Only the easternmost part of this island is occupied by Melanesian-speaking Papuans.
Note 10. A nationality is a major, sentimentally unified, group. The historical factors that lead to the feeling of national unity are various - political, cultural, linguistic, geographic, sometimes specifically religious. True racial factors also may enter in, though the accent on race has generally a psychological rather than a strictly biological value. In an area dominated by the national sentiment there is a tendency for language and culture to become uniform and specific, so that linguistic and cultural boundaries at least tend to coincide. Even at best, however, the linguistic unification is never absolute, while the cultural unity is apt to be superficial, of a quasi-political nature, rather than deep and far-reaching.
Note 11. The Semitic languages, idiosyncratic as they are, are no more definitely ear-marked.
Note 13. The Fijians, for instance, while of Papuan (negroid) race, are Polynesian rather than Melanesian in their cultural and linguistic affinities.
Note 14. Though even here there is some significant overlapping. The southernmost Eskimo of Alaska were assimilated in culture to their Tlingit neighbors. In northeastern Siberia, too, there is no sharp cultural line between the Eskimo and the Chukchi.
Note 15. The supersession of one language by another is of course not truly a matter of linguistic assimilation.
Note 16. Temperament is a difficult term to work with. A great deal of what is loosely charged to national temperament is really nothing but customary behavior, the effect of traditional ideals of conduct. In a culture, for instance, that does not look kindly upon demonstrativeness, the natural tendency to the display of emotion becomes more than normally inhibited. It would be quite misleading to argue from the customary inhibition, a cultural fact, to the native temperament. But ordinarily we can get at human conduct only as it is culturally modified. Temperament in the raw is a highly elusive thing.
Coastal systems may self-organize at various length and time scales. Sand banks, sand waves both in the shelf and at the coastline, sand bars, tidal inlets, cusps, cuspate forelands, spits (among others) are morphological features that are frequently dominated by self-organized processes. Stability models are the genuine tool to understand these processes and make predictions on the dynamics of those features.

Stability: concepts.

The concepts of equilibrium and stability come from Classical Mechanics (see, for example, Arrowsmith and Place, 1992). A state where a system is in balance with the external forcing so that it does not change in time is called an equilibrium position. However, any equilibrium position may be either stable or unstable. If released near a stable equilibrium position, the system will evolve towards such a position. On the contrary, if released near an unstable equilibrium position, it will go far away from this position. For instance, a pendulum has two equilibrium positions, one up (A), another down (B). If released at rest at any position (except at A) the pendulum will start to oscillate (if it is not already in B) and due to friction it will end up at rest at B. Thus, the pendulum will move spontaneously towards the stable equilibrium and far away from the unstable equilibrium.

Similarly, a beach under constant wave forcing is commonly assumed to reach after some time a certain equilibrium profile. However, two main assumptions are here involved: i) an equilibrium state exists and ii) the equilibrium is stable. The existence of an equilibrium profile seems to be granted in the books on coastal sciences and the stability of such an equilibrium is implicitly assumed. However, even if an equilibrium profile exists, it is not necessarily stable. This means that the system would ignore such an equilibrium, it would never tend spontaneously to it. Furthermore, several equilibria may exist, some of them stable, some others unstable.

Let us assume a system which is described by only one variable as a function of time, $\phi(t)$, and two constant parameters $a$ and $b$ which are representative of both the characteristics of the system and the external forcing. Assume that this variable is governed by the ordinary differential equation:

$$\frac{d\phi}{dt} = a\phi - b\phi^2 . \qquad (1)$$

For instance, in a coastal system, $a$ and $b$ could represent sediment grain size or wave height, and $\phi$ the shoreline displacement at an alongshore location. Given an initial position $\phi(0) = \phi_0$, the subsequent evolution of the system is described by the solution $\phi(t)$ of the differential equation. It becomes clear that the system has two equilibrium positions, A: $\phi = 0$, and B: $\phi = a/b$. Moreover, for $a < 0$, A is stable while B is unstable. In contrast, A is unstable and B is stable for $a > 0$. This is illustrated by Fig. 1 where typical solutions are plotted for various initial conditions.

Stability methods: use in coastal sciences.

Equilibrium situations are fundamental in any coastal system as they are the possible steady states where the system can stay under a steady forcing (steady at the time scale which is relevant according to the definition of the system). Then it is crucial to know whether an equilibrium state is stable or not since only stable equilibria can be observed.
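The behaviour of the simple example can also be checked numerically. The following minimal Python sketch integrates the logistic-type equation (1) with a forward-Euler scheme from several initial conditions; the parameter values, time step and initial conditions are purely illustrative. For $a > 0$ every trajectory released away from A ends up at B ($\phi = a/b$), whereas for $a < 0$ all trajectories decay towards A, which is the behaviour referred to in Fig. 1.

```python
import numpy as np

def integrate_eq1(a, b, phi0, dt=1e-3, t_end=50.0):
    """Forward-Euler integration of Eq. (1): d(phi)/dt = a*phi - b*phi**2."""
    phi = phi0
    for _ in range(int(t_end / dt)):
        phi += dt * (a * phi - b * phi**2)
    return phi

b = 2.0
a = 1.0                     # a > 0: A (phi = 0) unstable, B (phi = a/b) stable
for phi0 in (0.01, 0.3, 1.0):
    print(f"a = {a:+.0f}, phi0 = {phi0}: phi -> {integrate_eq1(a, b, phi0):.4f}")

a = -1.0                    # a < 0: A stable, B unstable
for phi0 in (0.01, 0.3):
    print(f"a = {a:+.0f}, phi0 = {phi0}: phi -> {integrate_eq1(a, b, phi0):.4f}")
```

Close to B, the departure $\phi - a/b$ indeed decays approximately as $e^{-at}$, which is the linearized result derived in the section on linear stability models below.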
Furthermore, knowing the conditions for stability of a certain equilibrium may be vital when this equilibrium means preserving a beach against erosion, or keeping water depth in a navigation channel. For instance, the entrance of a tidal inlet may be in stable equilibrium under the action of tides and waves. However, if this equilibrium becomes unstable (e.g., because of climate change) the entrance may close up (see section 'Tidal inlets'). Very often, even if the system is out of equilibrium its dynamics can be understood as its path from an unstable equilibrium to a stable one. Therefore, the stability concepts and the mathematical techniques involved are of very general use in coastal sciences. However, stability models are nowadays commonly associated with models for pattern formation. Thus, we will focus here on this broad class of applications, which are very often related to that transition from an unstable equilibrium to a stable one.

Coastal and geomorphological systems exhibit patterns both in space and time. Some of these patterns directly obey similar patterns in the external forcing. For example, a beach profile may erode and subsequently recover directly in response to the cycle storm/calm weather in the external wave forcing, with the same time scale. This is known as forced behaviour. Other patterns, even if they are driven by some external forcing, do not resemble similar patterns in the forcing. For instance, although bed ripples may be originated by a unidirectional current over a sandy bed, there is nothing in the current itself which dictates the shape, the lengthscale or the characteristic growth time of the ripples. The ripples constitute a new pattern which is not present in the forcing. This is called free behaviour or self-organized behaviour. The forced behaviour is much simpler to predict once the forcing is known. In contrast, predicting the free behaviour is typically much more complicated as it involves the complex internal dynamics of the system itself (see, for instance, Dronkers, 2005).

Stability methods are the genuine tool to describe, understand and model pattern formation by self-organization (Dodd et al., 2003). The typical procedure is to start considering an equilibrium of the system where the pattern is absent (for instance, a flat bed in the case of ripples). The key point is that small fluctuations or irregularities are always present (a perfect flat bed or an exact unidirectional, uniform and steady current do not exist). Then, if the equilibrium is stable, any initial small perturbation of the equilibrium will die away in time. Thus, those small fluctuations will not succeed in driving the system far from the equilibrium (the bed will remain approximately flat). However, if the equilibrium is unstable, there will exist initial perturbations that will tend to grow. Among all of them, some will grow faster than others and their characteristics will prevail in the state of the system. In other words, the patterns corresponding to these initially dominant perturbations of the equilibrium will emerge and will explain the occurrence of the observed patterns (the ripples). However, different patterns may emerge during the instability process and the finally dominant one (the one which is observed) may not correspond to the initially dominant one. When applied to the formation of coastal morphological patterns, the instability which leads to the growth typically originates from a positive feedback between the evolving morphology and the hydrodynamics.
cusps) without any relation with crescentic bars. If their orientation is not exactly shore-normal they are called oblique bars. The equilibrium state to start with is a rectilinear coastline with a bathymetry which is alongshore uniform, either unbarred or with one or more shore-parallel bars. The wave field is assumed to be constant in time. Since the cross-shore profile is assumed to be an equilibrium profile there is no net cross-shore sediment transport. Even if there is longshore transport due to oblique wave incidence, there are no gradients in such transport, so that the morphology is constant in time. Now, this equilibrium may be stable or unstable. This means that given a small perturbation of the bathymetry, the wave field will be altered (changes in wave energy distribution, wave breaking, shoaling, refraction, diffraction, etc.), hence the mean hydrodynamics will be altered too (changes in the currents and in set-up/set-down). Therefore, there will be changes in sediment transport, so that convergences/divergences of sediment flux appear and the morphology changes. These morphological changes may either reinforce or damp the initial perturbation. If the latter happens for any perturbation one may consider, the equilibrium is stable and the bathymetry will remain alongshore uniform. If the former happens for at least one possible perturbation, the equilibrium is unstable and the beach will 'spontaneously' (i.e., from the small fluctuations) develop coupled patterns in the morphology, the wave field and the mean hydrodynamics other than the featureless equilibrium. These patterns may eventually result in the observed rhythmic bars with the corresponding circulation patterns. This has been shown for the case of crescentic bars (Calvete et al., 2005; Dronen and Deigaard, 2007) and transverse/oblique bars (Garnier et al., 2006).

Stability methods can be used not only to understand and model naturally occurring features but also to analyze the efficiency and impact of human interventions. The sand which is dumped in a shoreface nourishment interacts with the natural nearshore bars and may trigger some of the morphodynamic instability modes of the system. Following this idea, Van Leeuwen et al. (2007) have applied a morphodynamical stability analysis to assess the efficiency of different shoreface nourishment strategies.

Stability methods: use in long term morphological modelling.

Continental shelf morphological features

The sea bed of the continental shelf is rarely flat. Rather, it is usually covered by a number of different types of morphological features ranging from megaripples to sand waves and sand banks. The latter two may be considered as long term features since the characteristic time for their formation and evolution is of decades or centuries. Their horizontal lengthscale (size and spacing) is of the order of hundreds of m for sand waves and a few km for the sand banks. Their origin has been explained as a morphodynamical instability of the coupling between the sandy bed and the tidal currents (Besio et al., 2006). The equilibrium situation is the flat bed where the tides do not create any gradient in sediment flux. The instability mechanism involves only depth averaged flow in the case of sand banks whereas it is related to net vertical circulation cells in the case of sand waves. Sand banks may also appear near the coast, in water depths of 5-20 m. In this case they are known as shoreface-connected sand ridges.
Their origin has also been explained from an instability, but one where the tidal currents have little influence. In this case, the instability mechanism is caused by the storm-driven coastal currents in combination with a transversely sloping sea bed (Calvete et al., 2001, see Sec: 'Example: MORFO25 model').

Tidal inlets

Stability analysis has been applied to tidal inlets at different levels. First, the dynamics of the cross-sectional area of the entrance, with its equilibria, their stability and the possibility of closure, has been considered. This is done with very simple parametric descriptions of the gross sand transport by tidal currents and waves that allow simple governing ordinary differential equations to be derived (see, for instance, van de Kreeke, 2006). Typical time scales for such dynamics are about 30 years (e.g., in the case of the Frisian inlet, on the Dutch coast). At a second level, the possible equilibrium bathymetries inside the inlet and their stability can be analyzed. This makes it possible to understand the origin and dynamics of the channels and shoals inside the inlet. It turns out that this sometimes complicated (even fractal) structure of channels and shoals originates from an instability of the flat topography in interaction with the tidal currents due to frictional torques. The time scale for such an instability is of the order of 1 year (see, e.g., Schuttelaars and de Swart, 1999 and Schramkowski et al., 2004). These channels and shoals scale with the length of the embayment, but the stability analysis of the flat bottom topography in interaction with tidal currents also gives instability modes at a smaller scale which correspond to the tidal bars that form at the inlet entrance. The growing perturbations associated with this instability are trapped near the entrance and scale with the width of the inlet (Seminara and Tubino, 2001 and van Leeuwen and de Swart, 2004).

Large scale shoreline instabilities

Shorelines characterized by a wave climate with a high incidence angle with respect to the shore-normal commonly show a wavy shape, cuspate landforms and spits (see Classification of coastlines). This can be interpreted as a result of a coastline instability. The littoral drift or total alongshore sediment transport driven by the breaking waves, $Q$, is a function of the wave incidence angle with respect to the shore-normal in deep water, $\theta$. It is zero for $\theta = 0$, increases up to a maximum for about $\theta = 45°$, and decreases down to zero for $\theta = 90°$. The equilibrium situation is a rectilinear coastline with alongshore uniform nearshore bathymetry and alongshore uniform wave forcing with a given angle $\theta$. Assume now a small undulation of the otherwise rectilinear coastline consisting of a cuspate foreland. The wave obliquity with respect to the local shoreline is larger at the downdrift side than at the updrift side. Then, if $\theta < 45°$, higher obliquity means higher transport, so that there will be a larger sediment flux at the downdrift side than at the updrift side. This will erode the cuspate shape and the shoreline will come back to the rectilinear equilibrium shape. The shoreline is stable. The contrary will happen if $\theta > 45°$, so that the shoreline will be unstable in this case. The instability tends to create undulations of the coastline (shoreline sand waves) with an initial wavelength of about 1-10 km and a characteristic growth time of the order of a few years (Falqués and Calvete, 2005).
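The mechanism just described can be illustrated with a strongly simplified one-line shoreline model. The sketch below is not the actual model of Falqués and Calvete (2005) or Ashton et al. (2001): it assumes the crude transport law $Q = Q_0 \sin\theta\cos\theta$ (maximum exactly at 45°), it neglects refraction and shoaling (which shift the real instability threshold somewhat away from 45°), and the values of $Q_0$, the active profile depth $D$ and the 3 km perturbation wavelength are purely illustrative. Linearizing the one-line equation $\partial x_s/\partial t = -(1/D)\,\partial Q/\partial y$ for a gently undulating coast gives a diffusion equation with diffusivity $Q'(\theta)/D$, so a sinusoidal perturbation grows or decays at the rate $\sigma = -(Q'(\theta)/D)\,k^2$.

```python
import numpy as np

def shoreline_growth_rate(theta_deg, wavelength, Q0=1.0e6, D=10.0):
    """Growth rate (1/yr) of a small sinusoidal shoreline perturbation
    x_s(y, t) = A * exp(sigma * t) * sin(k * y) in a one-line model
      dx_s/dt = -(1/D) * dQ/dy,   Q = Q0 * sin(alpha) * cos(alpha),
    where alpha is the wave angle relative to the local shoreline.
    Linearisation gives sigma = -(Q'(theta)/D) * k**2."""
    theta = np.radians(theta_deg)
    dQ_dtheta = Q0 * np.cos(2.0 * theta)   # derivative of (Q0/2) * sin(2*theta)
    k = 2.0 * np.pi / wavelength
    return -(dQ_dtheta / D) * k**2

for theta in (30.0, 60.0):                 # below and above the 45-degree maximum
    sigma = shoreline_growth_rate(theta, wavelength=3000.0)
    print(f"theta = {theta:.0f} deg: sigma = {sigma:+.3f} 1/yr "
          f"({'unstable' if sigma > 0 else 'stable'})")
```

Because this crude transport law has no wavelength selection (the shortest waves grow fastest for $\theta > 45°$), predicting the observed initial wavelength of 1-10 km requires the full linear stability analysis, with refraction and shoaling included, as in the reference above.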
Once the initial undulations have grown, the shoreline may evolve towards larger wavelengths and very complex shapes including hooked spits (Ashton et al., 2001, see Sec: 'Cellular models'). Linear stability models. The equations governing coastal systems are typically nonlinear and it is difficult to solve them or to extract useful information from them. However, the small departures from an equilibrium situation approximately obey linear equations that are very useful to determine whether the equilibrium is stable or unstable and, in the latter case, which are the emerging patterns and how fast they grow at the initial stage. For instance, for the equilibrium B () for equation (1), one may define the departure from equilibrium, , with governing equation: where the last approximation is valid for and is called linearization. The approximate equation is linear in and it is immediately solved to give: where is called the growthrate and determines whether the perturbation of equilibrium will grow or decay. We recover that B is stable if and unstable if without solving the nonlinear equation (1). Steps in developing and using a linear stability model. - Governing equations. The first step is to define the variables describing the state of the system. These may be the level of the sea bed as a function of two horizontal coordinates and time, , or the wave energy density field, , or the position of the coastline as a function of a longitudinal coordinate and time, , etc. Then, equations expressing the time derivatives of those variables must be derived. For coastal morphodynamic problems these typically constitute a system of partial differential equations in (this was not the case in our extremely simple example where there is a single governing equation which is ordinary in or for the model of tidal inlets mentioned above). - Equilibrium state. An equilibrium solution of the governing solutions where all the variables of the system are constant in time must be selected. (In our simple example, there are two possible equilibrium solutions.) - Linearization of the governing equations. The perturbations with respect to the selected equilibrium solution must be defined, , , etc. Then, the governing equations must be linearized by neglecting powers higher than one of those perturbations. If the perturbations in all the variables are represented by a vector , the linearized equations can be represented by: where is a linear operator typically involving partial derivatives with respect to the horizontal coordinates, . (In our simple example, the linearized equation is eq. (3) and operator is algebaic and one-dimensional, simply .) - Solving the linearized equations: eigenvalue problem. Since the coefficients of the linearized equations do not depend on time, solutions can be found as where and are eigenvalues and associated eigenfunctions of operator (note that is -dimensional, where is the number of variables describing our system). These eigenvalues and eigenfunctions may be complex and only the real part of the latter expression has physical meaning. The equations expressing the eigenproblem are partial differential equations in . In case where the equilibrium solution is uniform in both directions (e.g. stability of horizontal flat bed in an open ocean), the coefficients do not depend on these coordinates and wave-like solutions may be found as where is a constant vector. In this case, the eigenvalue problem can be solved algebraically leading to a complex dispersion relation, . 
Very often, there is uniformity in one direction, say , but not in the other. This is typically the case in coastal stability problems where the equilibrium solution depends on the cross-shore coordinate, , but not on the alongshore one, . Thus, the eigenfunctions are wave-like only in the direction and are: In this case, solving the corresponding eigenproblem requieres solving a boundary value problem for ordinary differential equations for which is commonly done by numerical methods. In case where the equilibrium state has gradients in any horizontal direction the eigenproblem leads to a boundary value problem for partial differential equations in . (All this is really trivial in our simple example, because both the governing equation does not involve partial derivatives and vector is one-dimensional. There is only one eigenvalue, .) - Analysis of the eigenvalue spectrum: extracting conclusions. Once the corresponding eigenproblem has been solved, one has a spectrum of eigenvalues with the associated eigenfunctions . The symbol represents an 'index' to number the eigenvalues, but it is not necessarily discrete. It may be continuous in response to unboundedness of our system in some direction. If the eigenvalue problem has been solved numerically, the numerical eigenvalues are just approximations to the exact eigenvalues. Some of them may even be numerical artifacts that do not have any relation with the exact eigenvalues. These purely numerical eigenvalues are called spurious eigenvalues. Distinguishing between physical and spurious eigenvalues is commonly achieved from physical meaning and from convergence under mesh refinement but may sometimes be quite difficult. The real part of each eigenvalue determines the growth or decay of the perturbation with shape defined by the associated eigenfunction and is called growthrate. If all the eigenvalues have negative real part, the equilibrium is stable. If there exist at least one positive growthrate, the equilibrium is unstable. If there are a number of eigenvalues with positive growthrate, all the corresponding perturbations can grow. The one with largest growthrate is called the dominant mode and the associated eigenfunction is expected to correspond to the observed emerging pattern in the system. The imaginary part of the eigenvalues is related to a propagation of the patterns. Each eigenvalue with the associated eigenfunction are called normal mode, linear mode or instability mode (the latter in case the growthrate is positive). Example: MORFO25 model.This is a linear stability model to identify and explore the physical mechanism which is responsible for the formation of shoreface-connected sand ridges on the continental shelf (Calvete et al., 2001). The model domain is a semi-infinite ocean bounded by the coastline. The governing equations are partial differential equations in time and in both horizontal coordinates which are derived from: i) water mass conservation, ii) momentum conservation and iii) bed changes caused by sediment conservation. The unknowns are the mean sea level, the bottom level and the depth averaged mean current. The sediment transport is directly driven by the current. The equilibrium situation is an alongshore uniform bathymetry consisting of a plane sloping bottom next to the coastline (inner shelf) and a plane horizontal bottom further offshore (outer shelf) together with an alongshore coastal current. ) (see Fig. 4). All this is consistent with observations (see Sec: 'Continental shelf morphologic features'). 
Nonlinear stability models. The linear stability models indicate just the 'initial tendency' to pattern formation starting from small fluctuations of certain equilibrium. This initial tendency involves the shape and horizontal lenghtscales of the pattern but not its amplitude. Also, the shape and lenghtscale at the initial stages may be quite different from the shape at later stages to be compared with observations. Actually, a reliable prediction of pattern formation needs to consider the nonlinear terms which have been neglected in the linearization. This is the aim of nonlinear stability models. Since solving the governing equations can hardly be done analytically, the numerical or approximation method used is essential to them and may be the basis for their classification. Standard discretized models. The governing equations can be discretized by standard numerical methods as finite differences, finite elements or spectral methods. This provides algorithms to find an approximation to the time evolution of the system starting from initial conditions. If these initial conditions are chosen as small random perturbations of an equilibrium solution and no linearization has been introduced in the equations, the corresponding code implements a nonlinear stability model since it describes the behaviour of the system when released close to equilibrium. A possible procedure is the use of existing commercial numerical models (e.g., MIKE21, DELFT-3D, TELEMAC, etc.). This has been done in a number of stability studies (for instance, in case of tidal inlets, see van Leeuwen et al., 2004 or Roelvink, 2006). Their advantage is that they commonly describe the relevant processes as accurately as possible according to present knowledge. However, this typically results in highly complex models with several inconvenients for their use in this context. The most important is that they are typically based on fixed set of equations and discretization methods that the user can not easily change. Furthermore and due to their complexity it is sometimes difficult for the user to exactly know the set of governing equations and parameterizations. In particular, one can not freely turn on/off some of the constituent processes. Another problem may be that the diffusivity which is necessary to keep control of unresolved small scale processes may damp the instabilities to be studied. Moreover, because of their complexity, they are highly time-consuming. Therefore, the use of nonlinear stability models specifically designed may prove to be more efficient to describe a particular pattern dynamics in a particular environment. The most important advantage is that the governing equations, the parameterizations and the discretization methods are more transparent and can be more easily changed. In particular, these models may consider idealized conditions which are suited to exploring certain processes in isolation. Alternatively, one may include more and more processes with the desired degree of complexity and get close to the commercial models. An example of such specific models is MORFO55 which is suited to describe the dynamics of rhythmic bars in the surf zone (Garnier et al., 2006; see Sec: 'Stability methods: use in coastal sciences). Weakly nonlinear models. Although direct numerical simulation discussed in last section is very powerful, a systematic exploration of the nonlinear stability properties of a given system needs many runs and may thus be prohibitive. 
An alternative approach then consists in deriving approximate governing equations based on multiple-scale developments which are called amplitude equations. Apart from their simplicity, the big advantage is that they are generic, i.e., they have the same structure for many different physical systems. Thus, they allow for obtaining general properties in a much cheaper way than by direct numerical simulation. They have however an essential limitation which is stated as follows. The stability or instability of equilibrium depends on the parameters describing the external forcing and the properties of the system. Typically, a single parameter can be defined, , such that below some threshold or critical value, , the equilibrium is stable whereas it is unstable above it, . Then, if is defined, amplitude equations are restricted to slightly unstable conditions, i.e., , the so-called weakly nonlinear regime. If the eigenvalue spectrum is discrete, it can be assumed for slightly unstable conditions that there is only one eigenvalue of the linear stability problem which has positive real part. i.e., only one instability mode. This means that starting from arbitrary small perturbations of equilibrium, the time evolution of the system will be dominated by this mode, i.e., which means an exponential growth of the pattern defined by according to the real part of the eigenvalue. However, the latter expression gives just an indication but can not be the solution because of the nonlinear terms. A procedure to find an approximate solution begins by realizing that for the real part of the eigenvalue tends to 0 so that it can read: where and the power two can always be introduced by redefining parameter if necessary. This leads to defining a slow time as and to look for an approximate solution of the form where is a complex amplitude and the terms on the right of it correspond to the linear eigensolution for . By considering that this is just the first term of a power expansion in the so-called Landau equation is obtained for the amplitude: where is a coefficient which depends on the system under investigation. When the real part of is positive, the solutions tend to a new equilibrium characterized by a finite amplitude; then, if this solution represents a travelling finite amplitude wave. If the real part of is negative, explosive behaviour occurs, i.e., the amplitude becomes infinte in a finite time. Very often, the spectrum is continuum. In this case, for slightly unstable conditions there is a narrow band of eigenvalues with positive real part even for very small . A similar development can be carried out in this case but now, slow spatial coordinates and/or must be defined and the complex amplitude depends on it/them. The generic governing equation is the so-called Ginzburg-Landau equation which is a partial differential equation in T and the slow spatial coordinates. In practice, the big difficulty of using such methods is the computation of the coefficients (e.g., ) from the original governing equations which can be a tremendous task for the complex coastal systems (see Komarova and Newell for an example). An alternative approach is assuming that the governing equations already are of Ginzburg-Landau type and derive the coefficients from field observations. Then, the resulting equations may be used to make predictions. A possible method for direct numerical simulation is the use of spectral methods which are based on truncated expansions in basis functions of the spatial coordinates. 
On the other hand, weakly nonlinear methods show that for slightly unstable conditions the spatial patterns are close to those predicted by linear stability analysis. Thus, it is plausible that the spatial patterns of the system may be expressed as a combination of eigenfunctions even for non weakly unstable regime if all the linear instability modes are incorporated: By inserting this ansatz into the governing equations and by doing a Galerkin-type projection a set of N nonlinear ordinary differential equations for the unknown amplitudes is obtained. This system is then solved numerically. In other words, these methods essentially are spectral Galerkin methods but instead of using expansions in a mathematically defined basis (trigonometric functions, Chebyshev polynomials, etc.) the eigenmodes are used. The set of eigenfunctions to use in the expansion (here symbolically indicated by N) must always contain at least those with positive growthrate but the choice of the additional ones is by no means trivial. Although there are no a priori restrictions on the applicability of this method (e.g., weakly nonlinear regime), the state of the system must be in practice relatively close to the starting equilibrium in order its behaviour can adequately be described in terms of the eigenfunction set. Thus, this type of models could be classified as 'moderately nonlinear'. Examples of their application are shoreface-connected sand ridges (Calvete and de Swart, 2003) and tidal inlets (Schramkowski et al., 2004). In the linear/nonlinear models considered sofar the coastal system is considered as a continuum and the governing equations are set as partial differential equations from the fundamental physical laws as conservation of mass, momentum, energy, wave phase, etc. Either the full equations or the linearized version are subsequently discretized to be solved by numerical methods. The numerical approximations give rise to algorithms to obtain information on the time evolution of the system and these algorithms are finally codified. An alternative option is to consider the discrete structure of the system from the very beginning. The system is assumed to be governed partially by some of the fundamanental physical laws and partially by some abstract rules which define its behaviour. These rules and laws are directly set in a numerical or algorithmic manner rather than expressing them as partial differential equations. The algorithms giving the time evolution of the system are finally codified. This type of models are known as cellular models because the discretization is inherent to the model itself. It is also sometimes known as self-organization models. The latter is however misleading since self-organization is a type of behaviour of the system and is independent of the model used to describe it. Cellular models have been applied to explore the self-organized formation of beach cusps (see Coco et al., 2003 and references herein.) A very relevant example for long-term morphodynamic modelling is the cellular model of Ashton et al., 2001, to study shoreline instabilities due to very oblique wave incidence. The model domain represents a plan view of the nearshore which is discretized into cells or 'bins'. Each cell is assigned a value, F, , representing the cell's plan view area that is occupied by land. represents dry land, represents ocean cells and corresponds to shoreline cells. 
At each time step, the model updates the shoreline position according to alongshore gradients in littoral drift and sediment conservation similarly to one-line shoreline models. The model allows however for arbitrarily sinuous shorelines, even doubling back on itself and with 'wave-shadow' regions. So, starting from small fluctuations of the rectilinear coastline equilibrium the model can go a long reach and can therefore be considered as strongly nonlinear (see Sec: 'Large scale shoreline instabilities'). - D. K. Arrowsmith and C. M. Place, 1992. "Dynamical Systems". Chapman and Hall/CRC. - J. Dronkers, 2005."Dynamics of Coastal Systems". World Scientific. - N. Dodd, P. Blondeaux, D. Calvete, H. E. de Swart, A. Falqués, S. J. M. H. Hulscher, G. Rózynski and G. Vittori, 2003. "The use of stability methods in understanding the morphodynamical behavior of coastal systems". J. Coastal Res., 19, 4, 849-865. - D. Calvete, N. Dodd, A. Falqués and S. M. van Leeuwen, 2005. "Morphological Development of Rip Channel Systems: Normal and Near Normal Wave Incidence". J. Geophys. Res., 110, C10006, doi:10.1029/2004JC002803. - N. Dronen and R. Deigaard, 2007. "Quasi-three-dimensional modelling of the morphology of longshore bars". Coast. Engineering, 54, 197-215. - R. Garnier, D. Calvete, A. Falqués and M. Caballeria, 2006. "Generation and nonlinear evolution of shore-oblique/transverse sand bars". J. Fluid Mech., 567, 327-360. - S.Van Leeuwen, N.Dodd, D. Calvete and A. Falqués, 2007. "Linear evolution of a shoreface nourishment". Coast. Engineering, in press, doi:10.1016/j.coastaleng.2006.11.006. - G.Besio, P.Blondeaux and G. Vittori, 2006. "On the formation of sand waves and sand banks". J.Fluid Mech., 557, 1-27. - D. Calvete, A. Falqués, H. E. de Swart and M. Walgreen, 2001. "Modelling the formation of shoreface-connected sand ridges on storm-dominated inner shelves". J. Fluid Mech., 441, 169-193. - J. van de Kreeke, 2006. "An aggregate model for the adaptation of the morphology and sand bypassing after basin reduction of the Frisian Inlet". Coast. Engineering, 53, 255-263. - H. M. Schuttelaars and H. E. de Swart, 1999. "Initial formation of channels and shoals in a short tidal embayment". J. Fluid Mech., 386, 15-42. - G. P. Schramkowski, H. M. Schuttelaars and H. E. de Swart, 2004. "Non-linear channel-shoal dynamics in long tidal embayments". Ocean Dynamics, 54, 399-407. - G. Seminara and M.Tubino, 2001. "Sand bars in tidal channels. Part 1. Free bars". J.Fluid Mech., 440, 49-74. - S. M. van Leeuwen and H. E. de Swart, 2004. "Effect of advective and diffusive sediment transport on the formation of local and global bottom patterns in tidal embayments". Ocean Dynamics, 54, 441-451. - A. Falqués and D. Calvete, 2005. "Large scale dynamics of sandy coastlines. Diffusivity and instability". J. Geophys. Res., 110, C03007, doi:10.1029/2004JC002587. - A. Ashton, A. B. Murray and O. Arnault, 2001. "Formation of coastline features by large-scale instabilities induced by high-angle waves". Nature, 414, 296-300. - J. A. Roelvink, 2006. "Coastal morphodynamic evolution techniques". Coast. Engineering, 53, 277-287. - N.L . Komarova and A. C. Newell, 2000."Nonlinear dynamics of sand banks and sand waves". J. Fluid Mech., 415, 285-321. - D. Calvete and H. E. de Swart, 2003. "A nonlinear model study on the long-term behaviour of shoreface-connected sand ridges". J.Geophys.Res., 108 (C5), 3169, doi:10.1029/2001JC001091. - G. Coco, T. K. Burnet, B. T. Werner and S.Elgar, 2003. 
"Test of self-organization in beach cusp formation". J. Geophys. Res., 108, C33101, doi:10.1029/2002JC001496. Please note that others may also have edited the contents of this article.
Precambrian Eras of Folding Precambrian Eras of Folding eras of increased tectonomagmatic activity that were manifested during the Pre-cambrian history of the earth. They covered the time interval from 570 million to 3.5 billion years ago. The eras have been established on the basis of a number of geological data, such as changes in structural plan, manifestations of breaks and unconformities in the bedding of rock, and sharp changes in the degree of metamorphism. The absolute age of the Precambrian eras of folding and their interregional correlation are established by determining the time of the metamorphism and the age of the magmatic rock using radiological methods. The methods for determining the age of ancient rock allow the possibility of errors on the order of 50 million years for the late Precambrian and 100 million years for the early Pre-cambrian. Therefore, the dates of the Precambrian eras of folding can be established with considerably less certainty than those of the Phanerozoic eras of folding. The data of radiometric readings show the existence of a number of eras of tectonomagmatic activity in the Precambrian. These eras appeared approximately simultaneously throughout the world. The Precambrian eras of folding have received different names on different continents. The oldest of them was the Kolian (Saamian; the Baltic Shield), or the Transvaalian (South Africa), which appeared about 3 billion years ago and was expressed by the formation of the most ancient archicontinents. The relics of these archicontinents are encountered on all ancient platforms (as yet with the exception of the Sino-Korean and Southern Chinese). The manifestations of the next era were even more widespread. On the Baltic Shield this era was named the White Sea, on the Canadian Shield it was known as the Kenoran, and in Africa as the Rhodesian. It developed 2.5 billion years ago, and the formation of the large shield cores of the ancient platforms was connected with it. Of great significance was the Early Karelian (Baltic Shield), or Eburnean (West Africa), era (about 2 billion years ago) which, along with the subsequent late Karelian era (the Hudsonian for the Canadian Shield and the Mayomban for Africa) which occurred some 1.7 billion years ago, played a decisive role in forming the basements of all the ancient platforms. The tectonomagmatic eras in the interval between 1.7 and 1.4 billion years ago have been established only on certain continents (for example, the Laxfordian in Scotland, some 1.55 billion years ago). Occurring about 1.4 billion years ago, the Gothian (Baltic Shield), or Elsonian (Canadian Shield), era was of planetary significance. However, it was expressed not so much in the folding of geosynclinal formations as in the repeated metamorphism and granitization of individual zones within the basement of the ancient platforms. The next era, the Dalslandian (Baltic Shield), Grenvillian (Canadian Shield), or Satpurian (Hindustan), which occurred about 1 billion years ago, was the first major era of folding of the geosynclinal belts of the Neogaea. The concluding Precambrian era of folding was the Baikalian (Assyntian in Scotland, Cadomian in Normandy, and Katangan in Africa). It was manifested very widely on all the continents, including Antarctica, and led to the consolidation of significant areas within the geosynclinal belts of the Neogaea. 
The Baikalian movements began about 800 million years ago, their main pulsation occurred about 680 million years ago (before the depositing of the Vendian complex), and the concluding pulsation at the beginning or in the middle of the Cambrian. Among the Baikalian folded systems in the USSR are the systems of the Timan, the Enisei Ridge, parts of the Vostochnyi Saian, and the Patom Plateau. Baikalian folded systems of this age are widely found in Africa (the Katangides, the Western Congolides, and the Atakora and Mauritano-Senegalese zones), South America (the Brasilides), Antarctica, Australia, and other continents. A common feature of the Precambrian eras of folding is the significant development of regional metamorphism and granitization which diminished in intensity from the ancient eras to the more recent. On the contrary, the scale of orogenesis and the folding itself were apparently weaker than those of the Phanerozoic. Granite-gneiss domes were characteristic structural forms, particularly for the Early Precambrian. REFERENCESBogdanov, A. A. “Tektonicheskie epokhi.”Biulleten’ Moskovskogo obshchestva ispytatelei prirody: Otdel geologicheskii, 1969, vol. 44, issue 5, p. 5. Vinogradov, A. P., and A. I. Tugarinov. “O geokhronologicheskoi shkale dokembriia.” In Problemy geokhimii i kosmologii. (Mezhdunarodnyi geologicheskii kongress: XXIII sessiia: Doklady sovetskikh geologov: Problemy 6 i 13a.) Moscow, 1968. Salop, L. I. “Dokembrii SSSR.” In Geologiia dokembriia. (Mezhdunarodnyi geologicheskii kongress: XXIII sessiia: Doklady sovetskikh geologov: Problema 4.) Leningrad, 1968. Stockwell, K. H. “Tektonicheskaia karta Kanadskogo shchita.” In Tektonicheskie karty kontinentov: Na XXII sessii Mezhdunarodnogo geologicheskogo kongressa. Moscow, 1967. Shoubert, G. A., and A. For-Muret. “Legenda karty [Afriki].” Ibid., p. 83. V. E. KHAIN
Click For Photo: https://3c1703fe8d.site.internapcdn.net/newman/gfx/news/hires/2018/howwecangetm.jpg Most European forests are primarily used for timber production. However, woodlands also offer spaces for recreation and they store carbon but it is not clear how forests can be managed for these multiple benefits. A new study under the direction of the University of Bern is now showing how forestry can be improved so that wooded areas can fulfill as many services as possible. The main objective of forestry in Europe is normally timber production. That is why our forests mostly consist of a few economically valuable tree species growing in uniform stands, in which the trees are all roughly the same age. Other forests are managed for values such as habitat conservation or recreation. All of these forests have something in common: they fulfill their main purpose, but could also perform many other services much better. For example, forests also regulate our climate and store carbon. Previously, it was not clear which kind of forest management would provide the most benefits. In order to see how forestry can be improved, so that the forest can perform several ecosystem services, an international research group under the direction of the University of Bern examined how different forest features affected 14 ecosystem services in Central European forests. The research consortium includes a total of 21 research institutions from Germany, Switzerland, and Austria. The study was published in Nature Communications. Studies - University - Bern - Lot - Opportunity Earlier studies led by the University of Bern show that there is lot of opportunity for forests to supply multiple ecosystem services. However, it was not evident what characterized these forest areas. This new study looked at many different forest attributes: such as the number of tree and shrub species the forest contained, how variable its structure was and how old the trees were. The researchers then identified... Wake Up To Breaking News!
Second Grade gardeners took a look at dirt this week. Inside they sang Dirt Made My Lunch. Outside, in groups of ten, they began to think how exactly dirt did that fine deed. How can dirt make our lunch? Their answers: by providing us with plants and by providing the animals we eat with plants to eat. In small groups out in the garden, second graders looked at four different kinds of dirt using their sense of touch, and an enhanced sense of sight using magnifying glasses. Here’s what they found: Garden soil is moist and dark brown and contains: dead plants, glittering sand, clay, insects, worms, bits of bark, It holds together when you squeeze it, but breaks apart easily. Sand is made of tiny rocks; It does not hold together well, but slides through your fingers. It does not hold water. Clay is hard when it’s dry, and slippery when it’s wet. It holds water too well, and does not drain. San Diego’s native soil is dry and light brown. It runs through your fingers. It has clay in it and rocks and sand. It does not hold water like garden soil, but is more like clay when it is wet. Students also tried breaking up rocks and mixing things to make dirt. They learned it is not easy to make it. In fact, the top layer of soil on our planet takes a long time to form — up to 100 years — and is therefore a precious resource.
But without getting into further ideological discussion on perception and reality, we can take some measures to help us be more true to our story as well as remind us of things that happened. During this first week, I would encourage you to avail yourselves to visual cues: - Review old photos. You will remember things previously forgotten as well as begin to parse out a timeline for some events. - Create maps and/or charts. Where were things? How far did you actually need to walk to school? Whose house was behind yours? Who did you sit next to in class? Etc. Knowing what was where and who was there often reminds me of side activities and comments that make my story richer. All those years spent trapped by the alphabet sitting behind a pen-whacking drummer and in front of stat-muttering baseball fanatic adds color to my stories of making it through high school classes. - Draw pictures. It's not so much a chart, but consider drawing diagrams of things like your Christmas tree. Where did the ornaments end up every year? Or review your dining room table for Thanksgiving or Advent. Did anyone ever light a sleeve on fire reaching over a candle? Were there holiday accidents--things accidentally lost in the gravy or dropped in the corn, or dripped in the dressing?
July 11, 2012 Graphene Found To Be Self-Repairing redOrbit Staff & Wire Reports - Your Universe Online Researchers are one step closer to solving the mysteries of graphene, the carbon allotrope that could be the basis for the next generation of sensors, transistors, processors and more - if scientists can find a way to produce it in large quantities and mold it into the shape necessary to power future devices.One of the major problems with the material is that it is difficult to grow it into a layer that is only a single atom thick. This is especially problematic since graphene is made of carbon, which has a natural affinity to other atoms (including itself), the MIT Technology Review reported in a Tuesday article. That affinity causes a sheet of carbon to react with other atoms nearby, thus preventing growth and possibly ripping the graphene apart. In order to gain a better understanding of this material and the way it interacts both with itself and the surrounding environment, University of Manchester physicist Konstantin Novoselov and colleagues analyzed graphene sheets using an electron microscope. They discovered that if you make a hole in the substance, it automatically repairs itself. "Nano-holes, etched under an electron beam at room temperature in single-layer graphene sheets as a result of their interaction with metal impurities, are shown to heal spontaneously by filling up with either non-hexagon, graphene-like, or perfect hexagon 2D structures," Novoselov, a Nobel Prize winner in 2010 for his work with the material, and his colleagues explained in a paper detailing their work. "Scanning transmission electron microscopy was employed to capture the healing process and study atom-by-atom the re-grown structure," they added. "A combination of these nano-scale etching and re-knitting processes could lead to new graphene tailoring approaches." According to the MIT Technology Review, the scientists basically etched small holes into a sheet of graphene using an electron beam, then monitored the reaction using an electron microscope. They also added a few palladium or nickel atoms, which acted as a catalyst for the dissociation of the carbon bonds and bind to the edges of the holes for stability. Novoselov's team found that the holes grew larger when a greater number of metal atoms were added, since they were able to stabilize larger holes, and that the addition of extra carbon atoms displaced the metal atoms, helping close the holes and knit the material back together. The structure of the repair depends upon the form of the carbon used, the researchers told MIT, but even when pure carbon is used, the repairs "are perfect and form pristine graphene." That discovery could be used to help developers grow or mold graphene into essentially any shape using a variation of carbon and metal atoms, though more work will be required to determine how quickly the processor occur and whether or not they can be precisely and reliably controlled at a level that will permit the manufacturing of technological devices.
Germ cell tumors are malignant (cancerous) or nonmalignant (benign, noncancerous) tumors that are comprised mostly of germ cells. Germ cells are the cells that develop in the embryo (fetus, or unborn baby) and become the cells that make up the reproductive system in males and females. These germ cells follow a midline path through the body after development and descend into the pelvis as ovarian cells or into the scrotal sac as testicular cells. Most ovarian tumors and testicular tumors are of germ cell origin. The ovaries and testes are called gonads. Tumor sites outside the gonad are called extragonadal sites. The tumors also occur along the midline path and can be found in the head, chest, abdomen, pelvis, and sacrococcygeal (lower back) area. Germ cell tumors are rare. Germ cell tumors account for about 2 to 4 percent of all cancers in children and adolescents younger than age 20. Germ cell tumors can spread (metastasize) to other parts of the body. The most common sites for metastasis are the lungs, liver, lymph nodes, and central nervous system. Rarely, germ cell tumors can spread to the bone, bone marrow, and other organs. The cause of germ cell tumors isn't completely understood. A number of inherited defects have also been associated with an increased risk for developing germ cell tumors including the central nervous system and genitourinary tract malformations and major malformations of the lower spine. Specifically, males with cryptorchidism (failure of the testes to descend into the scrotal sac) have an increased risk to develop testicular germ cell tumors. Cryptorchidism can occur alone, however, and is also present in some genetic syndromes. Some genetic syndromes caused by extra or missing sex chromosomes can cause incomplete or abnormal development of the reproductive system. The following are the most common symptoms of germ cell tumors. However, each child may experience symptoms differently. Symptoms vary depending on the size and location of the tumor. Symptoms may include: A tumor, swelling, or mass that can be felt or seen Elevated levels of alpha-fetoprotein (AFP) Elevated levels of beta-human chorionic gonadotropin (ß-HCG) Constipation, incontinence, and leg weakness can occur if the tumor is in the sacrum (a segment of the vertebral column that forms the top part of the pelvis) compressing structures Abnormal shape, or irregularity in, testicular size Shortness of breath or wheezing if tumors in the chest are pressing on the lungs The symptoms of germ cell tumors may resemble other conditions or medical problems. Always consult your child's doctor for a diagnosis. In addition to a complete medical history and physical examination, diagnostic procedures for germ cell tumors may include: Biopsy. A sample of tissue is removed from the tumor and examined under a microscope. Complete blood count (CBC). This measures size, number, and maturity of different blood cells in a specific volume of blood. Additional blood tests. These tests may include blood chemistries, evaluation of liver and kidney functions, tumor cell markers, and genetic studies. Multiple imaging studies, including: Computed tomography (CT) scan. This is a diagnostic imaging procedure that uses a combination of X-rays and computer technology to produce horizontal, or axial, images (often called slices) of the body. A CT scan shows detailed images of any part of the body, including the bones, muscles, fat, and organs. CT scans are more detailed than general X-rays. Magnetic resonance imaging (MRI). 
This is a diagnostic procedure that uses a combination of large magnets, radio frequencies, and a computer to produce detailed images of organs and structures within the body, without the use of X-rays. X-ray. This diagnostic test uses invisible electromagnetic energy beams to produce images of internal tissues, bones, and organs onto film. Ultrasound (also called sonography). This is a diagnostic imaging technique that uses high-frequency sound waves and a computer to create images of blood vessels, tissues, and organs. Ultrasounds are used to view internal organs as they function, and to assess blood flow through various vessels. Bone scans. This involves pictures or X-rays taken of the bone after a dye has been injected that's absorbed by bone tissue. These are used to detect tumors and bone abnormalities. Diagnosis of germ cell tumors depends on the types of cells involved. The most common types of germ cell tumors include: Teratomas. Teratomas contain cells from the three germ layers: ectoderm, mesoderm, and endoderm. Teratomas can be malignant or benign, depending on the maturity and other types of cells that may be involved. Teratomas are the most common germ cell tumor found in the ovaries. Sacrococcygeal (tail bone, or distal end of spinal column) teratomas are the most common germ cell tumors found in childhood. Because these sacrococcygeal tumors are often visible from the outside of the body, diagnosis is made early and treatment and/or surgery are initiated early, making the prognosis for this type of germ cell tumor very favorable. Germinomas. Germinomas are malignant germ cell tumors. Germinomas are also termed dysgerminoma when located in the ovaries; and seminoma when located in the testes. Among children, germinoma, or dysgerminoma, occurs most frequently in the ovary of a prepubescent or adolescent female. Dysgerminoma is the most common malignant ovarian germ cell tumor seen in children and adolescents. Endodermal sinus tumor or yolk sac tumors. Endodermal sinus tumor or yolk sac tumors are germ cell tumors that are most often malignant, but may also be benign. These tumors are most commonly found in the ovary, testes, and sacrococcygeal areas (tail bone, or distal end of spinal column). When found in the ovaries and testes, they're often very aggressive, malignant, and can spread rapidly through the lymphatic system and other organs in the body. Most yolk sac tumors will require surgery and chemotherapy, regardless of stage or presence of metastasis, because of the aggressive nature and recurrence of the disease. Choriocarcinoma. Choriocarcinoma is a very rare, but often malignant germ cell tumor that arises from the cells in the chorion layer of the placenta (during pregnancy, a blood-rich structure through which the fetus takes in oxygen, food, and other substances while getting rid of waste products). These cells may form a tumor in the placental cells during pregnancy and spread (metastasize) to the infant and mother. When the tumor develops during pregnancy, it's called gestational choriocarcinoma. Gestational choriocarcinoma most often occurs in pregnant females who are between ages 15 and 19. If a nonpregnant young child develops choriocarcinoma from the chorion cells that originated from the placenta that are still in the body, the term used is nongestational choriocarcinoma. Embryonal carcinoma. Embryonal carcinoma cells are malignant cells that are usually mixed with other types of germ cell tumors. They occur most often in the testes. 
These types of cells have the ability to rapidly spread to other parts of the body. When these cells are mixed with an otherwise benign type of tumor (mature teratoma), the presence of embryonal carcinoma cells will cause it to become malignant (cancerous). Many germ cell tumors have multiple types of cells involved. The diagnosis, treatment, and prognosis are based on the most malignant of the cells present and the majority type of cells that are present. Specific treatment for germ cell tumors will be determined by your child's doctor based on: Your child's age, overall health, and medical history Extent of the disease Your child's tolerance for specific medications, procedures, or therapies Expectations for the course of the disease Your opinion or preference Treatment may include (alone or in combination): Surgery (to remove the tumor and involved organs) Bone marrow transplantation Supportive care (for the effects of treatment) Hormonal replacement (if necessary) Antibiotics (to prevent or treat infections) Continuous follow-up care (to determine response to treatment, detect recurrent disease, and manage the late effects of treatment) Prognosis greatly depends on: The extent of the disease The size and location of the tumor Presence or absence of metastasis The tumor's response to therapy The age and overall health of your child Your child's tolerance of specific medications, procedures, or therapies New developments in treatment As with any cancer, prognosis and long-term survival can vary greatly from individual to individual. Prompt medical attention and aggressive therapy are important for the best prognosis. Continuous follow-up care is essential for a child diagnosed with a germ cell tumor. Side effects of radiation and chemotherapy, as well as second malignancies, can occur in survivors of germ cell tumors. New methods are continually being discovered to improve treatment and to decrease side effects.
A huge bird with a massive wingspan, the American white pelican (Pelecanus erythrorhynchos) has a sturdy bill and expandable pouch that are so large that this bird has an almost comical appearance. The brilliant white plumage contrasts strongly with conspicuous black primary feathers, pale orange legs and feet, a pinkish bill, and a yellow patch around the eye. During the breeding season, yellow feathers develop on the head, chest and neck and the feet become bright orange-red. The bill turns bright orange and a large, flattened, vertical horn develops on the upper mandible (2)(3)(4). The male and female American white pelican are similar in appearance, but the juvenile is largely brownish with a dark crown and a pale grey bill (4). Extremely graceful in flight, the American white pelican flies in ‘V’ shaped or diagonal formations, alternating between gliding and flapping, with the head tucked back into the shoulders. It often makes use of thermals to lift its bulky frame to great heights, but in the absence of thermals, it flies into the wind, staying close to the water surface and using the uplift caused by wind rising off the waves (2)(3). However, it is less elegant on land, with the short legs and webbed feet limiting movement to a clumsy waddle with the wings spread for balance (3). Foraging in large flocks that cooperate to drive prey towards shallow water, the American white pelican catches its prey by dipping its large bill into the water while in flight, to scoop up fish into the pouch. The pouch is then drained of water and the prey is swallowed before transporting it back to the nest. The American white pelican is also known to occasionally pirate food from other bird species (2)(3)(4)(5). Around three weeks before courtship begins, the American white pelican arrives at foraging grounds near to breeding colonies, which are on islands surrounded by freshwater that have no terrestrial predators. Breeding pairs search for a nesting site close to that of another pair at the same stage of breeding, so that the chicks will not be attacked by older chicks. The nest is a shallow depression in the ground, lined with a little vegetation. Higher-lying areas are preferred for nesting, to reduce the chance of flooding (2). Two eggs are laid over a two-day period and then incubated by both adults for approximately 30 days (2)(3)(6). The chicks are fed on regurgitated food and, after approximately 17 days, gather with other chicks to form a crèche or pod. The chicks fledge after 10 to 11 weeks (2)(3). The American white pelican breeds in parts of inland Canada and the northern United States, from British Columbia to Ontario, and from California east to Minnesota. Small breeding populations also occur on the central coast of Texas and occasionally in parts of Mexico (2). In winter, the American white pelican moves south to the Pacific coast of the United States and Central America, from California south to Nicaragua. It also spends the winter around the Gulf Coast, from Florida to Mexico (2), and may reach as far south as Costa Rica (1). The American white pelican is also an occasional visitor to some Caribbean islands (1). The American white pelican underwent a dramatic decline in the first half of the 20th century, caused by overexploitation and habitat loss. Although it is now increasing in many parts of its range, this increase is restricted by human disturbance of breeding colonies, which can cause nesting birds to abandon their nests. 
This often causes the eggs to be exposed to temperature extremes, meaning the adults must incubate the eggs for a longer period, but it may also cause the eggs to be abandoned completely (2). The American white pelican is also susceptible to contamination by toxic pollutants, which can accumulate in its body after eating contaminated prey. This can cause thinner eggshells to be produced and reduce reproductive success. Suitable breeding habitats are also being reduced, due to flooding of nesting islands or the drainage of lakes (2). Historically, the American white pelican has also suffered from shooting for sport, or by the fishing industry in retaliation for predation on fish stocks. However, this threat is now much reduced (2). After previous declines, protective legislation and increased public awareness have successfully contributed to the recovery of the American white pelican population. Where breeding sites are limited, artificial island habitats have been created far from the reaches of terrestrial predators, and fencing has been used successfully where nesting sites are accessible to terrestrial predators. Additional conservation priorities for the American white pelican include further protection of breeding colonies, including protection from human disturbance, as well as flood prevention and improved drainage (2). ARKive is supported by OTEP, a joint programme of funding from the UK FCO and DFID which provides support to address priority environmental issues in the Overseas Territories, and Defra Embed this ARKive thumbnail link ("portlet") by copying and pasting the code below.
What is Childhood Soft Tissue Sarcoma? Childhood soft tissue sarcoma is a disease in which cancer cells begin growing in the soft tissue in a child's body. The soft tissues connect, support and surround the body parts and organs, and include muscles, tendons, connective tissues, fat, blood vessels, nerves and synovial tissues (that surround the joints). Cancer develops as the result of abnormal cell growth within the soft tissues. Types of Childhood Soft Tissue Sarcoma There are many types of soft tissue sarcomas are classified according to the type of soft tissue they resemble. Types include: Tumors of Fibrous (connective) Tissue Malignant Fibrous Histiocytoma Fat Tissue Tumors Smooth Muscle Tumors Blood and Lymph Vessel Tumors Synovial (joint) Tissue Sarcoma Peripheral Nervous System Tumors Bone and Cartilage Tumors Extraosseous myxoid chondrosarcoma Extraosseous mesenchymal chondrosarcoma Combination Tissue Type Tumors Tumors of Unknown Origin Alveolar soft part sarcoma Clear cell sarcoma Soft tissue sarcoma is more likely to develop in people who have the following risk factors: Specific genetic conditions. Certain genetic syndromes, such as Li-Fraumeni syndrome, may put some people at a higher risk for developing this disease. Radiation therapy. Children who have previously received radiation therapy are at a higher risk. Virus. Children who have the Epstein-Barr virus as well as AIDS (acquired immune deficiency syndrome are at a higher risk as well. A solid lump or mass, usually in the trunk, arms or legs Other symptoms depend upon the location of the tumor and if it is interfering with other bodily functions Rarely causes fever, weight loss or night sweats If your child has any of these symptoms, please see his/her doctor. Diagnosing Childhood Soft Tissue Sarcoma If symptoms are present, your child's doctor will complete a physical exam and will prescribe additional tests to find the cause of the symptoms. Tests may include chest x-rays, biopsy, CT (or CAT) scan and/or an MRI. Once soft tissue sarcoma is found, additional tests will be performed to determine the stage (progress) of the cancer. Treatment will depend upon the type, location and stage of the disease. Once the diagnosis of cancer is confirmed, and the type and stage of the disease has been determined, your child's doctor will work with you, your child, and appropriate specialists to plan the best treatment. Current treatment options may include surgery, radiation therapy or chemotherapy.
Give Me Liberty - Study Guide Click here for a sample section of the guide! Check out the E-Guide version, available immediately! Nathaniel Dunn lives as an indentured servant in colonial Virginia. He meets a kindly schoolmaster named Basil, and Nathaniel's luck turns, allowing him to work for a carriage maker. But as Nathaniel's luck turns for the better, the atmosphere turns for the worse. Public outcry against the English becomes commonplace, and the call "give me liberty, or give me death" grows into a budding revolution. Amidst his new world of philosophy, books, and the idea of equality, Nathaniel faces new questions. Should he join this fight against England's oppression and taxes? What would change, if anything, for an indentured servant if the Revolutionaries are successful? In 60 pages (plus an answer key), our study guide contains: - Background on the authors and story - Prereading suggested activities - Vocabulary activities related to the story - General content questions - Literary analysis and terminology questions designed to give students a good understanding of writing technique and how to use it - Critical analysis questions designed to help students consider and analyze the intellectual, moral, and spiritual issues in the story and weigh them with reference to scripture - and, of course, a detailed answer key!
This week we are learning about Digital Citizenship. Being a 21st century learner involves technology, collaboration and communication and so, learning how to be a responsible digital citizen needs to be a part of the curriculum from the early years of schooling. Lets face it, many of our students are using technology out of school hours. While every attempt in the school environment is made to keep our students safe with accessibility blocked to particular websites, the fact still remains that in the real world children have access to technology and all that comes with it so they need to be informed about how to use this tool safely and responsibly. See Brain Pop Jr’s free video on Internet safety. An online article by Prasanna Bharti on the EdTechReview website titled Why is Digital Citizenship Important? Even for youngest kids explains some key aspects teachers should consider when integrating technology into the classroom such as keeping parents informed and involved in what and how children are using technology to learn. Erin Flanagan’s brilliant blog ERINtergration shares some effective ways children can develop their skills as digital citizens in the blog post Teaching Digital Citizenship All Year in the Classroom. I really like her idea of a punch pass where as a student demonstrates their understanding of each of the 9 key elements of digital citizenship they receive a punch on their card. When working with students in the early primary years, Digital Citizenship areas of learning include examples such as - Communication responsibly and kindly with others - Respecting others ideas and opinions - Protecting private information- our own and others - Understanding and reporting cyber bullying - Giving credit when using other peoples work - Healthy use of technology The above articles show that teaching students safe practices for use with digital technologies can be achieved in fun effective ways. However it is also important that as teachers we have some idea about what our students are into in regards to technology use. I have two sons, 8 and 10 years of age they love watching Youtube tutorials demonstrating walk throughs of the latest games, my eldest son wants to be a You tuber and have his own channel. Besides not being old enough to have a Youtube channel, he still has a lot to learn before being allowed to fulfill his dream. What I’m saying is our lessons won’t have the desired affect if we don’t deliver them around the interests of the students. Erin suggests we practice what we preach so with all the talk about students communicating and sharing ideas through blogging, this becomes a perfect opportunity to talk about communicating and commenting online and being respectful even when your opinions don’t reflect that of others.Many students need help to make good choices when working and exploring with technology and it is our responsibility as teachers to give them the information and skills to help them do so. Until next time.
Definition of Immune tolerance Immune tolerance: A state of unresponsiveness to a specific antigen or group of antigens to which a person is normally responsive. Immune tolerance is achieved under conditions that suppress the immune reaction and is not just the absence of a immune response. Immune tolerance can result from a number of causes including: - Prior contact with the same antigen in fetal life or in the newborn period when the immune system is not yet mature; - Prior contact with the antigen in extremely high or low doses; - Exposure to radiation, chemotherapy drugs, or other agents that impair the immune system; - Heritable diseases of the immune system; - Acquired diseases of the immune system such as HIV/AIDS. Immune tolerance can be defined as a state in which a T cell can no longer respond to antigen. The T cell "tolerates" the antigen.Source: MedTerms™ Medical Dictionary Last Editorial Review: 6/14/2012 Get breaking medical news.
In Encyclopędia Britannica, Chicago, 1985, pp. 627-648 also called moral philosophy the discipline concerned with what is morally good and bad, right and wrong. The term is also applied to any system or theory of moral values or principles. How should we live? Shall we aim at happiness or at knowledge, virtue, or the creation of beautiful objects? If we choose happiness, will it be our own or the happiness of all? And what of the more particular questions that face us: Is it right to be dishonest in a good cause? Can we justify living in opulence while elsewhere in the world people are starving? If conscripted to fight in a war we do not support, should we disobey the law? What are our obligations to the other creatures with whom we share this planet and to the generations of humans who will come after us? Ethics deals with such questions at all levels. Its subject consists of the fundamental issues of practical decision making, and its major concerns include the nature of ultimate value and the standards by which human actions can be judged right or wrong. The terms ethics and morality are closely related. We now often refer to ethical judgments or ethical principles where it once would have been more common to speak of moral judgments or moral principles. These applications are an extension of the meaning of ethics. Strictly speaking, however, the term refers not to morality itself but to the field of study, or branch of inquiry, that has morality as its subject matter. In this sense, ethics is equivalent to moral philosophy. Although ethics has always been viewed as a branch of philosophy, its all-embracing practical nature links it with many other areas of study, including anthropology, biology, economics, history, politics, sociology, and theology. Yet, ethics remains distinct from such disciplines because it is not a matter of factual knowledge in the way that the sciences and other branches of inquiry are. Rather, it has to do with determining the nature of normative theories and applying these sets of principles to practical moral problems. The origins of ethics When did ethics begin and how did it originate? If we are referring to ethics proper—i.e., the systematic study of what we ought to do—it is clear that ethics can only have come into existence when human beings started to reflect on the best way to live. This reflective stage emerged long after human societies had developed some kind of morality, usually in the form of customary standards of right and wrong conduct. The process of reflection tended to arise from such customs, even if in the end it may have found them wanting. Accordingly, ethics began with the introduction of the first moral codes. Virtually every human society has some form of myth to explain the origin of morality. In the Louvre in Paris there is a black Babylonian column with a relief showing the sun god Shamash presenting the code of laws to Hammurabi. The Old Testament account of God giving the Ten Commandments to Moses on Mt. Sinai might be considered another example. In Plato's Protagoras there is an avowedly mythical account of how Zeus took pity on the hapless humans, who, living in small groups and with inadequate teeth, weak claws, and lack of speed, were no match for the other beasts. To make up for these deficiencies, Zeus gave humans a moral sense and the capacity for law and justice, so that they could live in larger communities and cooperate with one another. 
That morality should be invested with all the mystery and power of divine origin is not surprising. Nothing else could provide such strong reasons for accepting the moral law. By attributing a divine origin to morality, the priesthood became its interpreter and guardian, and thereby secured for itself a power that it would not readily relinquish. This link between morality and religion has been so firmly forged that it is still sometimes asserted that there can be no morality without religion. According to this view, ethics ceases to be an independent field of study. It becomes, instead, moral theology. There is some difficulty, already known to Plato, with the view that morality was created by a divine power. In his dialogue Euthyphro, Plato considered the suggestion that it is divine approval that makes an action good. Plato pointed out that if this were the case, we could not say that the gods approve of the actions because the actions are good. Why then do the gods approve of these actions rather than others? Is their approval entirely arbitrary? Plato considered this impossible and so held that there must be some standards of right or wrong that are independent of the likes and dislikes of the gods. Modern philosophers have generally accepted Plato's argument because the alternative implies that if the gods had happened to approve of torturing children and to disapprove of helping one's neighbours, then torture would have been good and neighbourliness bad.
Problems of divine origin
A modern theist might say that since God is good, he could not possibly approve of torturing children nor disapprove of helping neighbours. In saying this, however, the theist would have tacitly admitted that there is a standard of goodness that is independent of God. Without an independent standard, it would be pointless to say that God is good; this could only mean that God is approved of by God. It seems therefore that, even for those who believe in the existence of God, it is impossible to give a satisfactory account of the origin of morality in terms of a divine creation. We need a different account. There are other possible connections between religion and morality. It has been said that even if good and evil exist independently of God or the gods, only divine revelation can reliably inform us about good and evil. An obvious problem with this view is that those who receive divine revelations, or who consider themselves qualified to interpret them, do not always agree on what is good and what is evil. Without an accepted criterion for the authenticity of a revelation or an interpretation, we are no better off, so far as reaching moral agreement is concerned, than we would be if we were to decide on good and evil ourselves with no assistance from religion. Traditionally, a more important link between religion and ethics was that religious teachings were thought to provide a reason for doing what is right. In its crudest form, the reason was that those who obey the moral law will be rewarded by an eternity of bliss while everyone else roasts in hell. In more sophisticated versions, the motivation provided by religion was less blatantly self-seeking and more of an inspirational kind. Whether in its crude or sophisticated version, or something in between, religion does provide an answer to one of the great questions of ethics: Why should I do what is right? As will be seen in the course of this article, however, the answer provided by religion is by no means the only answer.
It will be considered after the alternatives have been examined. Can we do better than the religious accounts of the origin of morality? Because, for obvious reasons, we have no historical record of a human society in the period before it had any standards of right and wrong, history cannot tell us the origins of morality. Nor is anthropology able to assist because all human societies studied have already had, except perhaps during the most extreme circumstances, their own form of morality. Fortunately there is another mode of inquiry open to us. Human beings are social animals. Living in a social group is a characteristic we share with many other animal species, including our closest relatives, the apes. Presumably, the common ancestor of humans and apes also lived in a social group, so that we were social beings before we were human beings. Here, then, in the social behaviour of nonhuman animals and in the evolutionary theory that explains such behaviour, we may find the origins of human morality. Social life, even for nonhuman animals, requires constraints on behaviour. No group can stay together if its members make frequent, no-holds-barred attacks on one another. Social animals either refrain altogether from attacking other members of the social group, or, if an attack does take place, the ensuing struggle does not become a fight to the death—it is over when the weaker animal shows submissive behaviour. It is not difficult to see analogies here with human moral codes. The parallels, however, go much further than this. Like humans, social animals may behave in ways that benefit other members of the group at some cost or risk to themselves. Male baboons threaten predators and cover the rear as the troop retreats. Wolves and wild dogs bring meat back to members of the pack not present at the kill. Gibbons and chimpanzees with food will, in response to a gesture, share their food with others of the group. Dolphins support sick or injured animals, swimming under them for hours at a time and pushing them to the surface so they can breathe. It may be thought that the existence of such apparently altruistic behaviour is odd, for evolutionary theory states that those who do not struggle to survive and reproduce will be wiped out in the ruthless competition known as natural selection. Research in evolutionary theory applied to social behaviour, however, has shown that evolution need not be quite so ruthless after all. Some of this altruistic behaviour is explained by kin selection. The most obvious examples are those in which parents make sacrifices for their offspring. If wolves help their cubs to survive, it is more likely that genetic characteristics, including the characteristic of helping their own cubs, will spread through further generations of wolves.
Kinship and reciprocity
Less obviously, the principle also holds for assistance to other close relatives, even if they are not descendants. A child shares 50 percent of the genes of each of its parents, but full siblings too, on the average, have 50 percent of their genes in common. Thus a tendency to sacrifice one's life for two or more of one's siblings could spread from one generation to the next. Between cousins, where only 12 1/2 percent of the genes are shared, the sacrifice-to-benefit ratio would have to be correspondingly increased. When apparent altruism is not between kin, it may be based on reciprocity. A monkey will present its back to another monkey, who will pick out parasites; after a time the roles will be reversed.
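The kin-selection arithmetic in the preceding paragraph can be made concrete. What follows is a minimal illustrative sketch, not part of the original article: it assumes the standard formalization known as Hamilton's rule (an altruistic trait is favoured when relatedness multiplied by the benefit to relatives outweighs the cost to the altruist) and uses the relatedness figures quoted above; the numbers, names, and the choice of Python are illustrative only.

```python
# Illustrative sketch of the kin-selection arithmetic described above.
# Assumption: Hamilton's rule, under which an altruistic trait is favoured
# when r * B exceeds C (r = genetic relatedness, B = benefit to relatives,
# C = cost to the altruist). Values and names are for illustration only.

RELATEDNESS = {
    "offspring": 0.5,      # a child shares 50 percent of each parent's genes
    "full sibling": 0.5,   # full siblings also share 50 percent on average
    "cousin": 0.125,       # cousins share 12.5 percent
}

def break_even_benefit(relatedness: float, cost: float = 1.0) -> float:
    """Total benefit to relatives at which r * B just equals the cost C."""
    return cost / relatedness

for kin, r in RELATEDNESS.items():
    print(f"{kin}: benefit must exceed {break_even_benefit(r):g} times the cost")

# Output:
#   offspring: benefit must exceed 2 times the cost
#   full sibling: benefit must exceed 2 times the cost
#   cousin: benefit must exceed 8 times the cost
```

The break-even ratios are simply the reciprocals of the quoted degrees of relatedness, which is why a sacrifice benefiting two or more full siblings could spread, while for cousins the required sacrifice-to-benefit ratio is correspondingly higher.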
Reciprocity may also be a factor in food sharing among unrelated animals. Such reciprocity will pay off, in evolutionary terms, as long as the costs of helping are less than the benefits of being helped and as long as animals will not gain in the long run by “cheating”—that is to say, by receiving favours without returning them. It would seem that the best way to ensure that those who cheat do not prosper is for animals to be able to recognize cheats and refuse them the benefits of cooperation the next time around. This is only possible among intelligent animals living in small, stable groups over a long period of time. Evidence supports this conclusion: reciprocal behaviour has been observed in birds and mammals, the clearest cases occurring among wolves, wild dogs, dolphins, monkeys, and apes. In short, kin altruism and reciprocity do exist, at least in some nonhuman animals living in groups. Could these forms of behaviour be the basis of human ethics? There are good reasons for believing that they could. A surprising proportion of human morality can be derived from the twin bases of concern for kin and reciprocity. Kinship is a source of obligation in every human society. A mother's duty to look after her children seems so obvious that it scarcely needs to be mentioned. The duty of a married man to support and protect his family is almost equally as widespread. Duties to close relatives take priority over duties to more distant relatives, but in most societies even distant relatives are still treated better than strangers. If kinship is the most basic and universal tie between human beings, the bond of reciprocity is not far behind. It would be difficult to find a society that did not recognize, at least under some circumstances, an obligation to return favours. In many cultures this is taken to extraordinary lengths, and there are elaborate rituals of gift giving. Often the repayment has to be superior to the original gift, and this escalation can reach such extremes as to threaten the economic security of the donor. The huge “potlatch” feasts of certain American Indian tribes are a well-known example of this type of situation. Many Melanesian societies also place great importance on giving and receiving very substantial amounts of valuable items. Many features of human morality could have grown out of simple reciprocal practices such as the mutual removal of parasites from awkward places. Suppose I want to have the lice in my hair picked out and I am willing in return to remove lice from someone else's hair. I must, however, choose my partner carefully. If I help everyone indiscriminately, I will find myself delousing others without getting my own lice removed. To avoid this, I must learn to distinguish between those who return favours and those who do not. In making this distinction, I am separating reciprocators and nonreciprocators and, in the process, developing crude notions of fairness and of cheating. I will strengthen my links with those who reciprocate, and bonds of friendship and loyalty, with a consequent sense of obligation to assist, will result. This is not all. The reciprocators are likely to react in a hostile and angry way to those who do not reciprocate. Perhaps they will regard reciprocity as good and “right” and cheating as bad and “wrong.” From here it is a small step to concluding that the worst of the nonreciprocators should be driven out of society or else punished in some way, so that they will not take advantage of others again. 
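The payoff condition for reciprocity stated above (helping pays as long as the cost of helping is less than the benefit of being helped, and recognized cheats can be refused further cooperation) can likewise be illustrated with a small hypothetical sketch; the strategy and the numbers are inventions for illustration and are not drawn from the article.

```python
# Hypothetical illustration of the reciprocity argument above. Each act of
# helping costs the helper COST and gives the recipient BENEFIT; reciprocity
# pays off only because BENEFIT > COST and because a recognized cheat is
# refused any further help. All numbers are arbitrary.

BENEFIT, COST = 3.0, 1.0
ROUNDS = 10

def reciprocator_payoff(partner_reciprocates: bool) -> float:
    """Payoff to an animal that helps first and thereafter refuses known cheats."""
    payoff = 0.0
    trusting = True
    for _ in range(ROUNDS):
        if trusting:
            payoff -= COST              # it grooms (helps) its partner
            if partner_reciprocates:
                payoff += BENEFIT       # the favour is returned
            else:
                trusting = False        # the cheat is recognized and cut off
    return payoff

print(reciprocator_payoff(True))    # 20.0: repeated exchange leaves it better off
print(reciprocator_payoff(False))   # -1.0: a cheat can exploit it only once
```

On these assumptions the cost of being cheated is capped at a single unreturned favour, while sustained reciprocity accumulates a surplus, which is precisely the condition under which learning to distinguish reciprocators from nonreciprocators becomes worthwhile.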
Thus a system of punishment and a notion of desert constitute the other side of reciprocal altruism. Although kinship and reciprocity loom large in human morality, they do not cover the entire field. Typically, there are obligations to other members of the village, tribe, or nation even when these are strangers. There may also be a loyalty to the group as a whole that is distinct from loyalty to individual members of the group. It may be at this point that human culture intervenes. Each society has a clear interest in promoting devotion to the group and can be expected to develop cultural influences that exalt those who make sacrifices for the sake of the group and revile those who put their own interests too far ahead of the interests of the group. More tangible rewards and punishments may supplement the persuasive effect of social opinion. This is simply the start of a process of cultural development of moral codes. Before considering the cultural variations in human morality and their significance for ethics, let us draw together this discussion of the origins of morality. Since we are dealing with a prehistoric period and morality leaves no fossils, any account of the origins of morality will necessarily remain to some extent speculative. It seems likely that morality is the gradual outgrowth of forms of altruism that exist in some social animals and that are the result of the usual evolutionary processes of natural selection. No myths are required to explain its existence.
Anthropology and ethics
It is commonly believed that there are no ethical universals—i.e., there is so much variation from one culture to another that no single principle or judgment is generally accepted. We have already seen that such is not the case. Of course, there are immense differences in the way in which the broad principles so far discussed are applied. The duty of children to their parents meant one thing in traditional Chinese society and means something quite different in contemporary Anglo-Saxon society. Yet, concern for kin and reciprocity to those who treat us well are considered good in virtually all human societies. Also, all societies have, for obvious reasons, some constraints on killing and wounding other members of the group. Beyond that common ground, the variations in moral attitudes soon become more striking than the similarities. Man's fascination with such variations goes back a long way. The Greek historian Herodotus relates that Darius, king of Persia, once summoned Greeks before him and asked them how much he would have to pay them to eat their fathers' dead bodies. They refused to do it at any price. Then Darius brought in some Indians who by custom ate the bodies of their parents and asked them what would make them willing to burn their fathers' bodies. The Indians cried out that he should not mention so horrid an act. Herodotus drew the obvious moral: each nation thinks its own customs best. Variations in morals were not systematically studied until the 19th century, when knowledge of the more remote parts of the globe began to increase.
At the beginning of the 20th century, Edward Westermarck published The Origin and Development of the Moral Ideas (1906–08), two large volumes comparing differences among societies in such matters as the wrongness of killing (including killing in warfare, euthanasia, suicide, infanticide, abortion, human sacrifices, and duelling); whose duty it is to support children, the aged, or the poor; the forms of sexual relationship permitted; the status of women; the right to property and what constitutes theft; the holding of slaves; the duty to tell the truth; dietary restrictions; concern for nonhuman animals; duties to the dead; and duties to the gods. Westermarck had no difficulty in demonstrating tremendous diversity in all these issues. More recent, though less comprehensive, studies have confirmed that human societies can and do flourish while holding radically different views about all such matters. As noted earlier, ethics itself is not primarily concerned with the description of moral systems in different societies. That task, which remains on the level of description, is one for anthropology or sociology. In contrast, ethics deals with the justification of moral principles. Nevertheless, ethics must take note of the variations in moral systems because it has often been claimed that this knowledge shows that morality is simply a matter of what is customary and is always relative to a particular society. According to this view, no ethical principles can be valid except in terms of the society in which they are held. Words such as good and bad just mean, it is claimed, “approved in my society” or “disapproved in my society,” and so to search for an objective, or rationally justifiable, ethic is to search for what is in fact an illusion. One way of replying to this position would be to stress the fact that there are some features common to virtually all human moralities. It might be thought that these common features must be the universally valid and objective core of morality. This argument would, however, involve a fallacy. If the explanation for the common features is simply that they are advantageous in terms of evolutionary theory, that does not make them right. Evolution is a blind force incapable of conferring a moral imprimatur on human behaviour. It may be a fact that concern for kin is in accord with evolutionary theory, but to say that concern for kin is therefore right would be to attempt to deduce values from facts. As will be seen later, it is not possible to deduce values from facts in this manner. In any case, that something is universally approved does not make it right. If all human societies enslaved any tribe they could conquer, some freethinking moralists might still insist that slavery is wrong. They could not be said to be talking nonsense merely because they had few supporters. Similarly, then, universal support for principles of kinship and reciprocity cannot prove that these principles are in some way objectively justified. This example illustrates the way in which ethics differs from a descriptive science. From the standpoint of ethics, whether human moral codes closely parallel one another or are extraordinarily diverse, the question of how an individual should act remains open. If you are thinking deeply about what you should do, your uncertainty will not be overcome by being told what your society thinks you should do in the circumstances in which you find yourself. 
Even if you are told that virtually all other human societies agree, you may choose not to go that way. If you are told that there is great variation among human societies over what people should do in your circumstances, you may wonder whether there can be any objective answer, but your dilemma has still not been resolved. In fact, this diversity does not rule out the possibility of an objective answer either: conceivably, most societies simply got it wrong. This, too, is something that will be taken up later in this article, for the possibility of an objective morality is one of the constant themes of ethics. The first ethical precepts were certainly passed down by word of mouth by parents and elders, but as societies learned to use the written word, they began to set down their ethical beliefs. These records constitute the first historical evidence of the origins of ethics.
The Middle East
The earliest surviving writings that might be taken as ethics textbooks are a series of lists of precepts to be learned by boys of the ruling class of Egypt, prepared some 3,000 years before the Christian Era. In most cases, they consist of shrewd advice on how to live happily, avoid unnecessary troubles, and advance one's career by cultivating the favour of superiors. There are, however, several passages that recommend more broadly based ideals of conduct, such as the following: Rulers should treat their people justly and judge impartially between their subjects. They should aim to make their people prosperous. Those who have bread are urged to share it with the hungry. Humble and lowly people must be treated with kindness. One should not laugh at the blind or at dwarfs. Why then should one follow these precepts? Did the ancient Egyptians believe that one should do what is good for its own sake? The precepts frequently state that it will profit a man to act justly, much as we say that "honesty is the best policy." They also emphasize the importance of having a good name. Since these precepts are intended for the instruction of the ruling classes, however, we have to ask why helping the destitute should have contributed to an individual's good reputation among this class. To some degree the authors of the precepts must have thought that to make people prosperous and happy and to be kind to those who have least is not merely personally advantageous but good in itself. The precepts are not works of ethics in the philosophical sense. No attempt is made to find any underlying principles of conduct that might provide a more systematic understanding of ethics. Justice, for example, is given a prominent place, but there is no elaboration of the notion of justice nor any discussion of how disagreements about what is just and unjust might be resolved. Furthermore, there is no probing of ethical dilemmas that may occur if the precepts should conflict with one another. The precepts are full of sound observations and practical wisdom, but they do not encourage theoretical speculation. The same practical bent can be found in other early codes or lists of ethical injunctions. The great codification of Babylonian law by Hammurabi is often said to have been based on the principle of "an eye for an eye, a tooth for a tooth," as if this were some fundamental principle of justice, elaborated and applied to all cases. In fact, the code reflects no such consistent principle. It frequently prescribes the death penalty for offenses that do not themselves cause death—e.g., for robbery or for accepting bribes.
Moreover, even the eye-for-an-eye rule applies only if the eye of the original victim is that of a member of the patrician class; if it is the eye of a commoner, the punishment is a fine of a quantity of silver. Apparently such differences in punishment were not thought to require justification. At any rate, there are no surviving attempts to defend the principles of justice on which the code was based. The Hebrew people were at different times captives of both the Egyptians and the Babylonians. It is therefore not surprising that the law of ancient Israel, which was put into its definitive form during the Babylonian Exile, shows the influence both of the ancient Egyptian precepts and of the Code of Hammurabi. The book of Exodus refers, for example, to the principle of “life for life, eye for eye, tooth for tooth.” Hebrew law does not differentiate, as the Babylonian law does, between patricians and commoners, but it does stipulate that in several respects foreigners may be treated in ways that it is not permissible to treat fellow Hebrews; for instance, Hebrew slaves, but not others, had to be freed without ransom in the seventh year. Yet, in other respects Israeli law and morality developed the humane concern shown in the Egyptian precepts for the poor and unfortunate: hired servants must be paid promptly, because they rely on their wages to satisfy their pressing needs; slaves must be allowed to rest on the seventh day; widows, orphans, and the blind and deaf must not be wronged, and the poor man should not be refused a loan. There was even a tithe providing for an incipient welfare state. The spirit of this humane concern was summed up by the injunction to “love thy neighbour as thyself,” a sweepingly generous form of the rule of reciprocity. The famed Ten Commandments are thought to be a legacy of Semitic tribal law when important commands were taught, one for each finger, so that they could more easily be remembered. (Sets of five or 10 laws are common among preliterate civilizations.) The content of the Hebrew commandments differed from other laws of the region mainly in its emphasis on duties to God. In the more detailed laws laid down elsewhere, this emphasis continued with as much as half the legislation concerned with crimes against God and ceremonial and ritualistic matters, though there may be other explanations for some of these ostensibly religious requirements concerning the avoidance of certain foods and the need for ceremonial cleansings. In addition to lengthy statements of the law, the surviving literature of ancient Israel includes both proverbs and the books of the prophets. The proverbs, like the precepts of the Egyptians, are brief statements without much concern for systematic presentation or overall coherence. They go further than the Egyptian precepts, however, in urging conduct that is just and upright and pleasing to God. There are correspondingly fewer references to what is needed for a successful career, although it is frequently stated that God rewards the just. In this connection the Book of Job is notable as an exploration of the problem raised for those who accept this motive for obeying the moral law: How are we to explain the fact that the best of people may suffer the worst misfortunes? The book offers no solution beyond faith in God, but the sharpened awareness of the problem it offers may have influenced some to adopt belief in reward and punishment in another realm as the only possible solution. 
The literature of the prophets contains a good deal of social and ethical criticism, though more at the level of denunciation than discussion about what goodness really is or why there is so much wrongdoing. The Book of Isaiah is especially notable for its early portrayal of a utopia in which “the desert shall blossom as the rose . . . the wolf also shall dwell with the lamb . . . . They shall not hurt or destroy in all my holy mountain.” Unlike the ethical teaching of ancient Egypt and Babylon, Indian ethics was philosophical from the start. In the oldest of the Indian writings, the Vedas, ethics is an integral aspect of philosophical and religious speculation about the nature of reality. These writings date from about 1500 BC. They have been described as the oldest philosophical literature in the world, and what they say about how people ought to live may therefore be the first philosophical ethics. The Vedas are, in a sense, hymns, but the gods to which they refer are not persons but manifestations of ultimate truth and reality. In the Vedic philosophy, the basic principle of the universe, the ultimate reality on which the cosmos exists, is the principle of Ritam, which is the word from which the Western notion of right is derived. There is thus a belief in a right moral order somehow built into the universe itself. Hence, truth and right are linked; to penetrate through illusion and understand the ultimate truth of human existence is to understand what is right. To be an enlightened one is to know what is real and to live rightly, for these are not two separate things but one and the same. The ethic that is thus traced to the very essence of the universe is not without its detailed practical applications. These were based on four ideals, or proper goals, of life: prosperity, the satisfaction of desires, moral duty, and spiritual perfection—i.e., liberation from a finite existence. From these ends follow certain virtues: honesty, rectitude, charity, nonviolence, modesty, and purity of heart. To be condemned, on the other hand, are falsehood, egoism, cruelty, adultery, theft, and injury to living things. Because the eternal moral law is part of the universe, to do what is praiseworthy is to act in harmony with the universe and accordingly will receive its proper reward; conversely, once the true nature of the self is understood, it becomes apparent that those who do what is wrong are acting self-destructively. The basic principles underwent considerable modification over the ensuing centuries, especially in the Upanisads, a body of philosophical literature dating from 800 BC. The Indian caste system, with its intricate laws about what members of each caste may or may not do, is accepted by the Upanisads as part of the proper order of the universe. Ethics itself, however, is not regarded as a matter of conformity to laws. Instead, the desire to be ethical is an inner desire. It is part of the quest for spiritual perfection, which in turn is elevated to the highest of the four goals of life. During the following centuries the ethical philosophy of this early period gradually became a rigid and dogmatic system that provoked several reactions. One, which is uncharacteristic of Indian thought in general, was the Carvaka, or materialist school, which mocked religious ceremonies, saying that they were invented by the Brahmans (the priestly caste) to ensure their livelihood. 
When the Brahmans defended animal sacrifices by claiming that the sacrificed beast goes straight to heaven, the members of the Carvaka asked why the Brahmans did not kill their aged parents to hasten their arrival in heaven. Against the postulation of an eventual spiritual liberation, Carvaka ethics urged each individual to seek his or her pleasure here and now. Jainism, another reaction to the traditional Vedic outlook, went in exactly the opposite direction. The Jaina philosophy is based on spiritual liberation as the highest of all goals and nonviolence as the means to it. In true philosophical manner, the Jainas found in the principle of nonviolence a guide to all morality. First, apart from the obvious application to prohibiting violent acts to other humans, nonviolence is extended to all living things. The Jainas are vegetarian. They are often ridiculed by Westerners for the care they take to avoid injuring insects or other living things while walking or drinking water that may contain minute organisms; it is less well known that Jainas began to care for sick and injured animals thousands of years before animal shelters were thought of in Europe. The Jainas do not draw the distinction usually made in Western ethics between their responsibility for what they do and their responsibility for what they omit doing. Omitting to care for an injured animal would also be in their view a form of violence. Other moral duties are also derived from the notion of nonviolence. To tell someone a lie, for example, is regarded as inflicting a mental injury on that person. Stealing, of course, is another form of injury, but because of the absence of a distinction between acts and omissions, even the possession of wealth is seen as depriving the poor and hungry of the means to satisfy their wants. Thus nonviolence leads to a principle of nonpossession of property. Jaina priests were expected to be strict ascetics and to avoid sexual intercourse. Ordinary Jainas, however, followed a slightly less severe code, which was intended to give effect to the major forms of nonviolence while still being compatible with a normal life. The other great ethical system to develop as a reaction to the ossified form of the old Vedic philosophy was Buddhism. The person who became known as the Buddha, which means the “enlightened one,” was born about 563 BC, the son of a king. Until he was 29 years old, he lived the sheltered life of a typical prince, with every luxury he could desire. At that time, legend has it, he was jolted out of his idleness by the “Four Signs”: he saw in rapid succession a very feeble old man, a hideous leper, a funeral, and a venerable ascetic monk. He began to think about old age, disease, and death, and decided to follow the way of the monk. For six years he led an ascetic life of renunciation, but finally, while meditating under a tree, he concluded that the solution was not withdrawal from the world, but rather a practical life of compassion for all. Buddhism is often thought to be a religion, and indeed over the centuries it has adopted in many places the trappings of religion. This is an irony of history, however, because the Buddha himself was a strong critic of religion. He rejected the authority of the Vedas and refused to set up any alternative creed. He saw religious ceremonies as a waste of time and theological beliefs as mere superstition. He refused to discuss abstract metaphysical problems such as the immortality of the soul. 
The Buddha told his followers to think for themselves and take responsibility for their own future. In place of religious beliefs and religious ceremonies, the Buddha advocated a life devoted to universal compassion and brotherhood. Through such a life one might reach the ultimate goal, Nirvana, a state in which all living things are free from pain and sorrow. There are similarities between this ethic of universal compassion and the ethics of the Jainas. Nevertheless, the Buddha was the first historical figure to develop such a boundless ethic. In keeping with his own previous experience, the Buddha proposed a “middle path” between self-indulgence and self-renunciation. In fact, it is not so much a path between these two extremes as one that draws together the benefits of both. Through living a life of compassion and love for all, a person achieves the liberation from selfish cravings sought by the ascetic and a serenity and satisfaction that are more fulfilling than anything obtained by indulgence in pleasure. It is sometimes thought that because the Buddhist goal is Nirvana, a state of freedom from pain and sorrow that can be reached by meditation, Buddhism teaches a withdrawal from the real world. Nirvana, however, is not to be sought for oneself alone; it is regarded as a unity of the individual self with the universal self in which all things take part. In the Mahayana school of Buddhism, the aspirant for Enlightenment even takes a vow not to accept final release until everything that exists in the universe has attained Nirvana. The Buddha lived and taught in India, and so Buddhism is properly classified as an Indian ethical philosophy. Yet, Buddhism did not take hold in the land of its origin. Instead, it spread in different forms south into Sri Lanka and Southeast Asia, and north through Tibet to China, Korea, and Japan. In the process, Buddhism suffered the same fate as the Vedic philosophy against which it had rebelled: it became a religion, often rigid, with its own sects, ceremonies, and superstitions. The two greatest moral philosophers of ancient China, Lao-tzu (flourished c. 6th century BC) and Confucius (551–479 BC), thought in very different ways. Lao-tzu is best known for his ideas about the Tao (literally “Way,” the Supreme Principle). The Tao is based on the traditional Chinese virtues of simplicity and sincerity. To follow the Tao is not a matter of keeping to any set list of duties or prohibitions, but rather of living in a simple and honest manner, being true to oneself, and avoiding the distractions of ordinary living. Lao-tzu's classic book on the Tao, Tao-te Ching, consists only of aphorisms and isolated paragraphs, making it difficult to draw an intelligible system of ethics from it. Perhaps this is because Lao-tzu was a type of moral skeptic: he rejected both righteousness and benevolence, apparently because he saw them as imposed on individuals from without rather than coming from their own inner nature. Like the Buddha, Lao-tzu found the things prized by the world—rank, luxury, and glamour—to be empty, worthless values when compared with the ultimate value of the peaceful inner life. He also emphasized gentleness, calm, and nonviolence. Nearly 600 years before Jesus, he said: “It is the way of the Tao . . . to recompense injury with kindness.” By returning good for good and also good for evil, Lao-tzu believed that all would become good; to return evil for evil would lead to chaos. 
The lives of Lao-tzu and Confucius overlapped, and there is even an account of a meeting between them, which is said to have left the younger Confucius baffled. Confucius was the more down-to-earth thinker, absorbed in the practical task of social reform. When he was a provincial minister of justice, the province became renowned for the honesty of its people and their respect for the aged and their care for the poor. Probably because of its practical nature, the teachings of Confucius had a far greater influence on China than did those of the more withdrawn Lao-tzu. Confucius did not organize his recommendations into any coherent system. His teachings are offered in the form of sayings, aphorisms, and anecdotes, usually in reply to questions by disciples. They aim at guiding the audience in what is necessary to become a better person, a concept translated as “gentleman” or “the superior man.” In opposition to the prevailing feudal ideal of the aristocratic lord, Confucius presented the superior man as one who is humane and thoughtful, motivated by the desire to do what is good rather than by personal profit. Beyond this, however, the concept is not discussed in any detail; it is only shown by diverse examples, some of them trite: “A superior man's life leads upwards . . . . The superior man is broad and fair; the inferior man takes sides and is petty . . . . A superior man shapes the good in man; he does not shape the bad in him.” One of the recorded sayings of Confucius is an answer to a request from a disciple for a single word that could serve as a guide to conduct for one's entire life. He replied: “Is not reciprocity such a word? What you do not want done to yourself, do not do to others.” This rule is repeated several times in the Confucian literature and might be considered the supreme principle of Confucian ethics. Other duties are not, however, presented as derivative from this supreme principle, nor is the principle used to determine what is to be done when more specific duties—e.g., duties to parents and duties to friends, both of which were given prominence in Confucian ethics—should clash. Confucius did not explain why the superior man chose righteousness rather than personal profit. This question was taken up more than 100 years after his death by his follower Mencius, who asserted that humans are naturally inclined to do what is humane and right. Evil is not in human nature but is the result of poor upbringing or lack of education. But Confucius also had another distinguished follower, Hsün-tzu, who said that man's nature is to seek self-profit and to envy others. The rules of morality are designed to avoid the strife that would otherwise follow from this nature. The Confucian school was united in its ideal of the superior man but divided over whether such an ideal was to be obtained by allowing people to fulfill their natural desires or by educating them to control those desires. Early Greece was the birthplace of Western philosophical ethics. The ideas of Socrates, Plato, and Aristotle, who flourished in the 5th and 4th centuries BC, will be discussed in the next section. The sudden blooming of philosophy during that period had its roots in the ethical thought of earlier centuries. In the poetic literature of the 7th and 6th centuries BC, there were, as in the early development of ethics in other cultures, ethical precepts but no real attempts to formulate a coherent overall ethical position. 
The Greeks were later to refer to the most prominent of these poets and early philosophers as the seven sages, and they are frequently quoted with respect by Plato and Aristotle. Knowledge of the thought of this period is limited, for often only fragments of original writings, along with later accounts of dubious accuracy, remain. Pythagoras (c. 580–c. 500 BC), whose name is familiar because of the geometrical theorem that bears his name, is one such early Greek thinker about whom little is known. He appears to have written nothing at all, but he was the founder of a school of thought that touched on all aspects of life and that may have been a kind of philosophical and religious order. In ancient times the school was best known for its advocacy of vegetarianism, which, like that of the Jainas, was associated with the belief that after the death of the body, the human soul may take up residence in the body of an animal. Pythagoreans continued to espouse this view for many centuries, and classical passages in the works of such writers as Ovid and Porphyry opposing bloodshed and animal slaughter can be traced back to Pythagoras. Ironically, an important stimulus for the development of moral philosophy came from a group of teachers to whom the later Greek philosophers—Socrates, Plato, and Aristotle—were consistently hostile: the Sophists. This term was used in the 5th century to refer to a class of professional teachers of rhetoric and argument. The Sophists promised their pupils success in political debate and increased influence in the affairs of the city. They were accused of being mercenaries who taught their students to win arguments by fair means or foul. Aristotle said that Protagoras, perhaps the most famous of them, claimed to teach how “to make the weaker argument the stronger.” The Sophists, however, were more than mere teachers of rhetorical tricks. They saw their role as imparting the cultural and intellectual qualities necessary for success, and their involvement with argument about practical affairs led them to develop views about ethics. The recurrent theme in the views of the better known Sophists, such as Protagoras, Antiphon, and Thrasymachus, is that what is commonly called good and bad or just and unjust does not reflect any objective fact of nature but is rather a matter of social convention. It is to Protagoras that we owe the celebrated epigram summing up this theme, “Man is the measure of all things.” Plato represents him as saying “Whatever things seem just and fine to each city, are just and fine for that city, so long as it thinks them so.” Protagoras, like Herodotus, was an early social relativist, but he drew a moderate conclusion from his relativism. He argued that while the particular content of the moral rules may vary, there must be rules of some kind if life is to be tolerable. Thus Protagoras stated that the foundations of an ethical system needed nothing from the gods or from any special metaphysical realm beyond the ordinary world of the senses. The Sophist Thrasymachus appears to have taken a more radical approach—if Plato's portrayal of his views is historically accurate. He explained that the concept of justice means nothing more than obedience to the laws of society, and, since these laws are made by the strongest political group in their own interests, justice represents nothing but the interests of the stronger. 
This position is often represented by the slogan "Might is right." Thrasymachus was probably not saying, however, that whatever the mightiest do really is right; he is more likely to have been denying that the distinction between right and wrong has any objective basis. Presumably he would then encourage his pupils to follow their own interests as best they could. He is thus an early representative of Skepticism about morals and perhaps of a form of egoism, the view that the rational thing to do is follow one's own interests. It is not surprising that with ideas of this sort in circulation other thinkers should react by probing more deeply into ethics to see if the potentially destructive conclusions of some of the Sophists could be resisted. This reaction produced works that have served ever since as the cornerstone for the entire edifice of Western ethics.
Western ethics from Socrates to the 20th century
"The unexamined life is not worth living," Socrates once observed. This thought typifies his questioning, philosophical approach to ethics. Socrates, who lived from about 470 BC until he was put to death in 399 BC, must be regarded as one of the greatest teachers of ethics. Yet, unlike other figures of comparable importance such as the Buddha or Confucius, he did not tell his audience how they should live. What Socrates taught was a method of inquiry. When the Sophists or their pupils boasted that they knew what justice, piety, temperance, or law was, Socrates would ask them to give an account of it and then show that the account offered was entirely inadequate. For instance, against the received wisdom that justice consists in keeping promises and paying debts, Socrates put forth the example of a person faced with an unusual situation: a friend from whom he borrowed a weapon has since become insane but wants the weapon back. Conventional morality gives no clear answer to this dilemma; therefore, the original definition of justice has to be reformulated. So the Socratic dialogue gets under way. Because his method of inquiry threatened conventional beliefs, Socrates' enemies contrived to have him put to death on a charge of corrupting the youth of Athens. For those who saw adherence to the conventional moral code as more desirable than the cultivation of an inquiring mind, the charge was appropriate. By conventional standards, Socrates was indeed corrupting the youth of Athens, but he himself saw the destruction of beliefs that could not stand up to criticism as a necessary preliminary to the search for true knowledge. Here, he differed from the Sophists with their moral relativism, for he thought that virtue is something that can be known and that the good person is the one who knows of what virtue, or justice, consists. It is therefore not entirely accurate to see Socrates as contributing a method of inquiry but no positive views of his own. He believed in goodness as something that can be known, even though he did not himself profess to know it. He also thought that those who know what good is are in fact good. This latter belief seems peculiar today, because we make a sharp distinction between what is good and what is in a person's own interests. Accordingly, it does not seem surprising if people know what they ought morally to do but then proceed to do what is in their own interests instead. How to provide such people with reasons for doing what is right has been a major problem for Western ethics.
Socrates did not see a problem here at all; in his view anyone who does not act well must simply be ignorant of the nature of goodness. Socrates could say this because in ancient Greece the distinction between goodness and self-interest was not made, or at least not in the clear-cut manner that it is today. The Greeks believed that virtue is good both for the individual and for the community. To be sure, they recognized that to live virtuously might not be the best way to prosper financially, but then they did not assume, as we are prone to do, that material wealth is a major factor in whether a person's life goes well or ill. Socrates' greatest disciple, Plato (428/427–348/347 BC), accepted the key Socratic beliefs in the objectivity of goodness and in the link between knowing what is good and doing it. He also took over the Socratic method of conducting philosophy, developing the case for his own positions by exposing errors and confusions in the arguments of his opponents. He did this by writing his works as dialogues in which Socrates is portrayed as engaging in argument with others, usually Sophists. The early dialogues are generally accepted as reasonably accurate accounts of Socrates' views, but the later ones, written many years after the death of Socrates, use the latter as a mouthpiece for ideas and arguments that were Plato's rather than those of the historical Socrates. In the most famous of Plato's dialogues, Politeia (The Republic), the imaginary Socrates is challenged by the following example: Suppose a person obtained the legendary ring of Gyges, which has the magical property of rendering the wearer invisible. Would that person still have any reason to behave justly? Behind this challenge lies the suggestion, made by the Sophists and still heard today, that the only reason for acting justly is that one cannot get away with acting unjustly. Plato's response to this challenge is a long argument developing a position that appears to go beyond anything the historical Socrates asserted. Plato maintained that true knowledge consists not in knowing particular things but in knowing something general that is common to all the particular cases. This is obviously derived from the way in which Socrates would press his opponents to go beyond merely describing particular good, or temperate, or just acts, and to give instead a general account of goodness, or temperance, or justice. The implication is that we do not know what goodness is unless we can give this general account. But the question then arises, what is it that we know when we know this general idea of goodness? Plato's answer seems to be that what we know is some general form or idea of goodness, which is shared by every particular thing that is good. Yet, if we are truly to be able to know this form or idea of goodness, it seems to follow that it must really exist. Plato accepts this implication. His theory of forms is the view that when we know what goodness is, we have knowledge of something that is the common element in virtue of which all good things are good and, at the same time, is some existing thing, the pure form of goodness. It has been said that all of Western philosophy consists of footnotes to Plato. 
Certainly the central issue around which all of Western ethics has revolved can be traced back to the debate between the Sophists, on the one hand, with their claims that goodness and justice are relative to the customs of each society or, worse still, merely a disguise for the interests of the stronger, and, on the other, Plato's defense of the possibility of knowledge of an objective form or idea of goodness. But even if we know what goodness or justice is, why should we act justly if we can profit by doing the opposite? This remaining part of the challenge posed by the legendary ring of Gyges is still to be answered, for even if we accept that goodness is objective, it does not follow that we all have sufficient reason to do what is good. Whether goodness leads to happiness is, as has been seen from the preceding discussion of early ethics in other cultures, a perennial topic for all who think about ethics. Plato's answer is that justice consists in harmony between the three elements of the soul: intellect, emotion, and desire. The unjust person lives in an unsatisfactory state of internal discord, trying always to overcome the discomfort of unsatisfied desire but never achieving anything better than the mere absence of want. The soul of the good person, on the other hand, is harmoniously ordered under the governance of reason, and the good person finds truly satisfying enjoyment in the pursuit of knowledge. Plato remarks that the highest pleasure, in fact, comes from intellectual speculation. He also gives an argument for the belief that the human soul is immortal; therefore, even if just individuals seem to be living in poverty or illness, the gods will not neglect them in the next life, and there they will have the greatest rewards of all. In summary, then, Plato asserts that we should act justly because in doing so we are “at one with ourselves and with the gods.” Today, this may seem like a strange account of justice and a farfetched view of what it takes to achieve human happiness. Plato does not recommend justice for its own sake, independently of any personal gains one might obtain from being a just person. This is characteristic of Greek ethics, with its refusal to recognize that there could be an irresolvable conflict between one's own interest and the good of the community. Not until Immanuel Kant, in the 18th century, does a philosopher forcefully assert the importance of doing what is right simply because it is right quite apart from self-interested motivation. To be sure, Plato must not be interpreted as holding that the motivation for each and every just act is some personal gain; on the contrary, the person who takes up justice will do what is just because it is just. Nevertheless, Plato accepts the assumption of his opponents that one could not recommend taking up justice in the first place unless doing so could be shown to be advantageous for oneself as well as for others. In spite of the fact that many people now think differently about this connection between morality and self-interest, Plato's attempt to argue that those who are just are in the long run happier than those who are unjust has had an enormous influence on Western ethics. Like Plato's views on the objectivity of goodness, the claim that justice and personal happiness are linked has helped to frame the agenda for a debate that continues even today. Plato founded a school of philosophy in Athens known as the Academy. 
Here Aristotle (384–322 BC), Plato's younger contemporary and only rival in terms of influence on the course of Western philosophy, came to study. Aristotle was often fiercely critical of Plato, and his writing is very different in style and content, but the time they spent together is reflected in a considerable amount of common ground. Thus Aristotle holds with Plato that the life of virtue is rewarding for the virtuous, as well as beneficial for the community. Aristotle also agrees that the highest and most satisfying form of human existence is that in which man exercises his rational faculties to the fullest extent. One major difference is that Aristotle does not accept Plato's theory of common essences, or universal ideas, existing independently of particular things. Thus he does not argue that the path to goodness is through knowledge of the universal form or idea of “the good.” Aristotle's ethics are based on his view of the universe. He saw it as a hierarchy in which everything has a function. The highest form of existence is the life of the rational being, and the function of lower beings is to serve this form of life. This led him to defend slavery—because he thought barbarians were less rational than Greeks and by nature suited to be “living tools”—and the killing of nonhuman animals for food or clothing. From this also came a view of human nature and an ethical theory derived from it. All living things, Aristotle held, have inherent potentialities and it is their nature to develop that potential to the full. This is the form of life properly suited to them and constitutes their goal. What, however, is the potentiality of human beings? For Aristotle this question turns out to be equivalent to asking what it is that is distinctive about human beings, and this, of course, is the capacity to reason. The ultimate goal of humans, therefore, is to develop their reasoning powers. When they do this, they are living well, in accordance with their true nature, and they will find this the most rewarding existence possible. Aristotle thus ends up agreeing with Plato that the life of the intellect is the highest form of life; though having a greater sense of realism than Plato, he tempered this view with the suggestion that the best feasible life for humans must also have the goods of material prosperity and close friendships. Aristotle's argument for regarding the life of the intellect so highly, however, is different from that used by Plato; and the difference is significant because Aristotle committed a fallacy that has often been repeated. The fallacy is to assume that whatever capacity distinguishes humans from other beings is, for that very reason, the highest and best of their capacities. Perhaps the ability to reason is the best of our capacities, but we cannot be compelled to draw this conclusion from the fact that it is what is most distinctive of the human species. A broader and still more pervasive fallacy underlies Aristotle's ethics. It is the idea that an investigation of human nature can reveal what we ought to do. For Aristotle, an examination of a knife would reveal that its distinctive quality is to cut, and from this we could conclude that a good knife would be a knife that cuts well. In the same way, an examination of human nature should reveal the distinctive quality of human beings, and from this we should be able to conclude what it is to be a good human being. 
This line of thought makes sense if we think, as Aristotle did, that the universe as a whole has a purpose and that we exist as part of such a goal-directed scheme of things, but its error becomes glaring once we reject this view and come to see our existence as the result of a blind process of evolution. Then we know that the standards of quality for knives are a result of the fact that knives are made with a specific purpose in mind and that a good knife is one that fills this purpose well. Human beings, however, were not made with any particular purpose in mind. Their nature is the result of random forces of natural selection and thus cannot, without further moral premises, determine how they ought to live. It is to Aristotle that we owe the notion of the final end, or, as it was later called by medieval scholars, the summum bonum—the overall good for human beings. This can be found, Aristotle wrote, by asking why we do the things that we do. If we ask why we chop wood, the answer may be to build a fire; and if we ask why we build a fire, it may be to keep warm; but, if we ask why we keep warm, the answer is likely to be simply that it is pleasant to be warm and unpleasant to be cold. We can ask the same kind of questions about other activities; the answer always points, Aristotle thought, to what he called eudaimonia. This Greek word is usually translated as “happiness,” but this is only accurate if we understand that term in its broadest sense to mean living a fulfilling, satisfying life. Happiness in the narrower sense of joy or pleasure would certainly be a concomitant of such a life, but it is not happiness in this narrower sense that is the goal. In searching for the overall good, Aristotle separates what may be called instrumental goods from intrinsic goods. The former are good only because they lead to something else that is good; the latter are good in themselves. The distinction is neglected in the early lists of ethical precepts that were surveyed above, but it is of the first importance if a firmly grounded answer to questions about how one ought to live is to be obtained. Aristotle is also responsible for much later thinking about the virtues one should cultivate. In his most important ethical treatise, the Ethica Nicomachea (Nicomachean Ethics), he sorts through the virtues as they were popularly understood in his day, specifying in each case what is truly virtuous and what is mistakenly thought to be so. Here, he uses the idea of the Golden Mean, which is essentially the same idea as the Buddha's middle path between self-indulgence and self-renunciation. Thus courage, for example, is the mean between two extremes: one can have a deficiency of it, which is cowardice, or one can have an excess of it, which is foolhardiness. The virtue of friendliness, to give another example, is the mean between obsequiousness and surliness. Aristotle does not intend the idea of the mean to be applied mechanically in every instance: he says that in the case of the virtue of temperance, or self-restraint, it is easy to find the excess of self-indulgence in the physical pleasures, but the opposite error, insufficient concern for such pleasures, scarcely exists. (The Buddha, with his experience of the ascetic life of renunciation, would not have agreed.) This caution in the application of the idea is just as well, for while it may be a useful device for moral education, the notion of a mean cannot help us to discover new truths about virtue. 
We can only arrive at the mean if we already have a notion as to what is an excess and what is a defect of the trait in question, but this is not something to be discovered by a morally neutral inspection of the trait itself. We need a prior conception of the virtue in order to decide what is excessive and what is defective. To attempt to use the doctrine of the mean to define the particular virtues would be to travel in a circle. Aristotle's list of the virtues differs from later Christian lists. Courage, temperance, and liberality are common to both periods, but Aristotle also includes a virtue that literally means “greatness of soul.” This is the characteristic of holding a high opinion of oneself. The corresponding vice of excess is unjustified vanity, but the vice of deficiency is humility, which for Christians is a virtue. Aristotle's discussion of the virtue of justice has been the starting point for almost all Western accounts. He distinguishes between justice in the distribution of wealth or other goods and justice in reparation, as, for example, in punishing someone for a wrong he has done. The key element of justice, according to Aristotle, is treating like cases alike—an idea that has set later thinkers the task of working out which similarities (need, desert, talent) are relevant. As with the notion of virtue as a mean, Aristotle's conception of justice provides a framework that needs to be filled in before it can be put to use. Aristotle distinguished between theoretical and practical wisdom. His concept of practical wisdom is significant, for it goes beyond merely choosing the means best suited to whatever ends or goals one may have. The practically wise person also has the right ends. This implies that one's ends are not purely a matter of brute desires or feelings; the right ends are something that can be known. It also gives rise to the problem that faced Socrates: How is it that people can know the difference between good and bad and still choose what is bad? As noted earlier, Socrates simply denied that this could happen, saying that those who did not choose the good must, appearances notwithstanding, be ignorant of what it is. Aristotle said that this view of Socrates was “plainly at variance with the observed facts” and, instead, offered a detailed account of the ways in which one can possess knowledge and yet not act on it because of lack of control or weakness of will.

Later Greek and Roman ethics

In ethics, as in many other fields, the later Greek and Roman periods do not display the same penetrating insight as the Classic period of 5th- and 4th-century Greek civilization. Nevertheless, the two dominant schools of thought, Stoicism and Epicureanism, represent important approaches to the question of how one ought to live. Stoicism had its origins in the views of Socrates and Plato, as modified by Zeno and then by Chrysippus in the 3rd century BC. It gradually gained influence in Rome, chiefly through the teachings of Cicero (106–43 BC) and then later in the 1st century AD through those of Seneca. Remarkably, its chief proponents include both a slave, Epictetus, and an emperor, Marcus Aurelius. This is a fine illustration of the Stoic message that what is important is the pursuit of wisdom and virtue, a pursuit that is open to all human beings owing to their common capacity for reason and that can be carried out no matter what the external circumstances of their lives.
Today, the word stoic conjures up one who remains unmoved by the sorrows and afflictions that distress the rest of humanity. This is an accurate representation of a stoic ideal, but it must be placed in the context of a systematic approach to life. Plato held that human passions and physical desires are in need of regulation by reason (see above Plato). The Stoics went further: they rejected passions altogether as a basis for deciding what is good or bad. Physical desires cannot simply be abolished, but when we become wise we appreciate the difference between wanting something and judging it to be good. Our desires make us want something, but only our reason can judge the goodness of what is wanted. If we are wise, we will identify with our reason, not with our desires; hence, we will not place our hopes on the attainment of our physical desires nor our anxieties on our failure to attain them. Wise Stoics will feel physical pain as others do, but in their minds they will know that physical pain leaves the true reasoning self untouched. The only thing that is truly good is to live in a state of wisdom and virtue. In aiming at such a life, we are not subject to the same play of fortune that afflicts us when we aim at physical pleasure or material wealth, for wisdom and virtue are matters of the intellect and under our own control. Moreover, if matters become too grim, there is always a way of ending the pain of the physical world. The Stoics were not reluctant to counsel suicide as a means of avoiding otherwise inescapable pain. Perhaps the most important legacy of Stoicism, however, is its conviction that all human beings share the capacity to reason. This led the Stoics to a fundamental sense of equality, which went beyond the limited Greek conception of equal citizenship. Thus Seneca claimed that the wise man will esteem the community of rational beings far above any particular community in which the accident of birth has placed him, and Marcus Aurelius said that common reason makes all individuals fellow citizens. The belief that human reasoning capacities are common to all was also important, because from it the Stoics drew the implication that there is a universal moral law, which all people are capable of appreciating. The Stoics thus strengthened the tradition that sees the universality of reason as the basis on which ethical relativism is to be rejected. While the modern use of the term stoic accurately represents at least a part of the Stoic philosophy, anyone taking the present-day meaning of epicure as a guide to the philosophy of Epicurus (341–270 BC) would go astray. True, the Epicureans regarded pleasure as the sole ultimate good and pain as the sole evil; and they did regard the more refined pleasures as superior, simply in terms of the quantity and durability of the pleasure they provided, to the coarser pleasures. To portray them as searching for these more refined pleasures by dining at the best restaurants and drinking the finest wines, however, is the reverse of the truth. By refined pleasures, Epicurus meant pleasures of the mind, as opposed to the coarse pleasures of the body. He taught that the highest pleasure obtainable is the pleasure of tranquillity, which is to be obtained by the removal of unsatisfied wants. The way to do this is to eliminate all but the simplest wants; these are then easily satisfied even by those who are not wealthy. Epicurus developed his position systematically. 
To determine whether something is good, he would ask if it increased pleasure or reduced pain. If it did, it was good as a means; if it did not, it was not good at all. Thus justice was good but merely as an expedient arrangement to prevent mutual harm. Why not then commit injustice when we can get away with it? Only because, Epicurus says, the perpetual dread of discovery will cause painful anxiety. Epicurus also exalted friendship, and the Epicureans were famous for the warmth of their personal relationships; but, again, they proclaimed that friendship is good only because of its tendency to create pleasure. Both Stoic and Epicurean ethics can be seen as precursors of later trends in Western ethics: the Stoics of the modern belief in equality and the Epicureans of a Utilitarian ethic based on pleasure. The development of these ethical positions, however, was dramatically affected by the spreading from the East of a new religion that had its roots in a Jewish conception of ethics as obedience to a divine authority. With the conversion of Emperor Constantine I to Christianity by AD 313, the older schools of philosophy lost their sway over the thinking of the Roman Empire.

Christian ethics from the New Testament to the Scholastics

Matthew reports Jesus as having said, in the Sermon on the Mount, that he came not to destroy the law or the prophets but to fulfill them. Indeed, when Jesus is regarded as a teacher of ethics, it is clear that he was more a reformer of the Hebrew tradition than a radical innovator. The Hebrew tradition had a tendency to place great emphasis on compliance with the letter of the law; the Gospel accounts of Jesus portray him as preaching against this “righteousness of the scribes and Pharisees,” championing the spirit rather than the letter of the law. This spirit he characterized as one of love, for God and for one's neighbour. But since he was not proposing that the old teachings be discarded, he saw no need to develop a comprehensive ethical system. Christianity thus never really broke with the Jewish conception of morality as a matter of divine law to be discovered by reading and interpreting the word of God as revealed in the Scriptures. This conception of morality had important consequences for the future development of Western ethics. The Greeks and Romans, and indeed thinkers such as Confucius too, did not have the Western conception of a distinctively moral realm of conduct. For them, everything that one did was a matter of practical reasoning, in which one could do well or poorly. In the more legalistic Judeo-Christian view, however, it is one thing to lack practical wisdom in, say, household budgeting, and a quite different and much more serious matter to fall short of what the moral law requires. This distinction between the moral and the nonmoral realms now affects every question in Western ethics, including the very way the questions themselves are framed. Another consequence of the retention of the basically legalistic stance of Jewish ethics was that from the beginning Christian ethics had to deal with the question of how to judge the person who breaks the law from good motives or keeps it from bad motives. The latter half of this question was particularly acute because the Gospels describe Jesus as repeatedly warning of a coming resurrection of the dead at which time all would be judged and punished or rewarded according to their sins and virtues in this life.
The punishments and rewards were weighty enough to motivate anyone who took this message seriously; and it was given added emphasis by the fact that it was not going to be long in coming. (Jesus said that it would take place during the lifetime of some of those listening to him.) This is, therefore, an ethic that invokes external sanctions as a reason for doing what is right, in contrast to Plato or Aristotle for whom happiness is an internal element of a virtuous life. At the same time, it is an ethic that places love above mere literal compliance with the law. These two aspects do not sit easily together. Can one love God and neighbour in order to be rewarded with eternal happiness in another life? The fact that Jesus and Paul, too, believed in the imminence of the Second Coming led them to suggest ways of living that were scarcely feasible on any other assumption: taking no thought for the morrow; turning the other cheek; and giving away all one has. Even Paul's preference for celibacy rather than marriage and his grudging acceptance of the latter on the basis that “It is better to marry than to burn” makes some sense once we grasp that he was proposing ethical standards for what he thought would be the last generation on earth. When the expected event did not occur and Christianity became the official religion of the vast and embattled Roman Empire, Christian leaders were faced with the awkward task of reinterpreting these injunctions in a manner more suited for a continuing society. The new Christian ethical standards did lead to some changes in Roman morality. Perhaps the most vital was a new sense of the equal moral status of all human beings. As previously noted, the Stoics had been the first to elaborate this conception, grounding equality on the common capacity to reason. For Christians, humans are equal because they are all potentially immortal and equally precious in the sight of God. This caused Christians to condemn a wide variety of practices that had been accepted by both Greek and Roman moralists. Many of these related to the taking of innocent human life: from the earliest days Christian leaders condemned abortion, infanticide, and suicide. Even killing in war was at first regarded as wrong, and soldiers converted to Christianity had refused to continue to bear arms. Once the empire became Christian, however, this was one of the inconvenient ideas that had to yield. In spite of what Jesus had said about turning the other cheek, the church leaders declared that killing in a “just war” was not a sin. The Christian condemnation of killing in gladiatorial games, on the other hand, had a more permanent effect. Finally, but perhaps most importantly, while Christian emperors continued to uphold the legality of slavery, the Christian church accepted slaves as equals, admitted them to its ceremonies, and regarded the granting of freedom to slaves as a virtuous, if not obligatory, act. This moral pressure led over several hundred years to the gradual disappearance of slavery in Europe. The Christian contribution to improving the position of slaves can also be linked with the distinctively Christian list of virtues. Some of the virtues described by Aristotle, as, for example, greatness of soul, are quite contrary in spirit to Christian virtues such as humility. In general, it can be said that the Greeks and Romans prized independence, self-reliance, magnanimity, and worldly success. By contrast, Christians saw virtue in meekness, obedience, patience, and resignation. 
As the Greeks and Romans conceived virtue, a virtuous slave was almost a contradiction in terms, but for Christians there was nothing in the state of slavery that was incompatible with the highest moral character. Christianity began with a set of scriptures incorporating many ethical injunctions but with no ethical philosophy. The first serious attempt to provide such a philosophy was made by St. Augustine of Hippo (354–430). Augustine was acquainted with a version of Plato's philosophy, and he developed the Platonic idea of the rational soul into a Christian view wherein humans are essentially souls, using their bodies as means to achieve their spiritual ends. The ultimate object remains happiness, as in Greek ethics, but Augustine saw happiness as consisting in a union of the soul with God after the body has died. It was through Augustine, therefore, that Christianity received the Platonic theme of the relative inferiority of bodily pleasures. There was, to be sure, a fundamental difference: whereas Plato saw this inferiority in terms of a comparison with the pleasures of philosophical contemplation in this world, Christians compared them unfavourably with the pleasures of spiritual existence in the next world. Moreover, Christians came to see bodily pleasures not merely as inferior but also as a positive threat to the achievement of spiritual bliss. It was also important that Augustine could not accept the view, common to so many Greek and Roman philosophers, that philosophical reasoning was the path to wisdom and happiness. For a Christian, of course, the path had to be through love of God and faith in Jesus as the Saviour. The result was to be, for many centuries, a rejection of the use of unfettered reasoning powers in ethics. Augustine was aware of the tension caused by the dual Christian motivations of love of God and neighbour, on the one hand, and reward and punishment in the afterlife, on the other. He came down firmly on the side of love, insisting that those who keep the moral law through fear of punishment are not really keeping it at all. But it is not ordinary human love, either, that suffices as a motivation for true Christian living. Augustine believed all men bear the burden of Adam's original sin, and so are incapable of redeeming themselves by their own efforts. Only the unmerited grace of God makes possible obedience to the “first and greatest commandment” of loving God, and without such grace one cannot fulfill the moral law. This view made a clear-cut distinction between Christians and pagan moralists, no matter how humble and pure the latter might be; only the former could be saved because only they could receive the blessing of divine grace. But this gain, as Augustine saw it, was purchased at the cost of denying that man is free to choose good or evil. Only Adam had this choice: he chose for all humanity, and he chose evil.

Aquinas and the moral philosophy of the Scholastics

At this point we may pass over more than 800 years in silence, for there were no major developments in ethics in the West until the rise of Scholasticism in the 12th and 13th centuries. Among the first of the significant works written during this time was a treatise on ethics by the French philosopher and theologian Peter Abelard (1079–1142). His importance in ethical theory lies in his emphasis on intentions. Abelard maintained, for example, that the sin of sexual wrongdoing consists not in the act of illicit sexual intercourse nor even in the desire for it, but in mentally consenting to that desire.
In this he was far more modern than Augustine, with his doctrine of grace, and also more thoughtful than those who even today assert that the mere desire for what is wrong is as wrong as the act itself. Abelard saw that there is a problem in holding anyone morally responsible for the existence of mere physical desires. His ingenious solution was taken up by later medieval writers, and traces of it can still be found in modern discussions of moral responsibility. Aristotle's ethical writings were not known to scholars in western Europe during Abelard's time. Latin translations became available only in the first half of the 13th century, and the rediscovery of Aristotle dominated later medieval philosophy. Nowhere is his influence more marked than in the thought of St. Thomas Aquinas (1225–74), often regarded as the greatest of the Scholastic philosophers and undoubtedly the most influential, since his teachings became the semiofficial philosophy of the Roman Catholic Church. Such is the respect in which Aquinas held Aristotle that he referred to him simply as The Philosopher, and it is not too far from the truth to say that the chief aim of Aquinas' work was to reconcile Aristotle's views with Christian doctrine. Aquinas took from Aristotle the notion of a final end, or summum bonum, at which all action is ultimately directed; and, like Aristotle, he saw this end as necessarily linked with happiness. This conception was Christianized, however, by the idea that happiness is to be found in the love of God. Thus a person seeks to know God but cannot fully succeed in this in life on earth. The reward of heaven, where one can know God, is available only to those who merit it, though even then it is given by God's grace rather than obtained by right. Short of heaven, a person can experience only a more limited form of happiness to be gained through a life of virtue and friendship, much as Aristotle had recommended. The blend of Aristotle's teachings and Christianity is also evident in Aquinas' views about right and wrong, and how we come to know the difference between them. Aquinas is often described as advocating a “natural law” ethic, but this term is easily misunderstood. The natural law to which Aquinas referred does not require a legislator any more than do the laws of nature that govern the motions of the planets. An even more common mistake is to imagine that this conception of natural law relies on contrasting what is natural with what is artificial. Aquinas' theory of the basis of right and wrong developed rather as an alternative to the view that morality is determined simply by the arbitrary will of God. Instead of conceiving of right and wrong in this manner as something fundamentally unrelated to human goals and purposes, Aquinas saw morality as deriving from human nature and the activities that are objectively suited to it. It is a consequence of this natural law ethic that the difference between right and wrong can be appreciated by the use of reason and reflection on experience. Christian revelation may supplement this knowledge in some respects, but even such pagan philosophers as Aristotle could understand the essentials of virtuous living. We are, however, likely to err when we apply these general principles to the particular cases that confront us in everyday life. Corrupt customs and poor moral education may obscure the messages of natural reason. 
Hence, societies must enact laws of their own to supplement natural law and, where necessary, to coerce those who, because of their own imperfections, are liable to do what is wrong and socially destructive. It follows, too, that virtue and human flourishing are linked. When we do what is right, we do what is objectively suited to our true nature. Thus the promise of heaven is no mere external sanction, rewarding actions that would otherwise be indifferent to us or even against our best interests. On the contrary, Aquinas wrote that “God is not offended by us except by what we do against our own good.” Reward and punishment in the afterlife reinforce a moral law that all humans, Christian or pagan, have adequate prior reasons for following. In arguing for his views, Aquinas was always concerned to show that he had the authority of the Scriptures or the Church Fathers on his side, but the substance of his ethical system is to a remarkable degree based on reason rather than revelation. This is strong testimony to the power of Aristotle's example. Nonetheless, Aquinas absorbed the weaknesses as well as the strengths of the Aristotelian system. His attempt to base right and wrong on human nature, in particular, invites the objection that we cannot presuppose our nature to be good. Aquinas might reply that it is good because God made it so, but this merely shifts back one step the issue of the basis of good and bad: Did God make it good in accordance with some independent standard of goodness, or would any human nature made by God be good? If we give the former answer, we need an account of the independent standard of goodness. Because this cannot—if we are to avoid circular argument—be based on human nature, it is not clear what account Aquinas could offer. If we maintain, however, that any human nature made by God would be good, we must accept that if God had made our nature such that we flourish and achieve happiness by torturing the weak and helpless among us, that would have been what we should do in order to live virtuously. Something resembling this second option—but without the intermediate step of an appeal to human nature—was the position taken by the last of the great Scholastic philosophers, William of Ockham (c. 1285–1349?). Ockham boldly broke with much that had been taken for granted by his immediate predecessors. Fundamental to this was his rejection of the central Aristotelian idea that all things have a final end, or goal, toward which they naturally tend. He, therefore, also spurned Aquinas' attempt to base morality on human nature, and with it the idea that happiness is man's goal and closely linked with goodness. This led him to a position in stark contrast to almost all previous Western ethics. Ockham denied all standards of good and evil that are independent of God's will. What God wills is good; what God condemns is evil. That is all there is to say about the matter. This position is sometimes called a divine approbation theory, because it defines “good” as whatever is approved by God. As indicated earlier, when discussing attempts to link morality with religion, it follows from such a position that it is meaningless to describe God himself as good. It also follows that if God had willed us to torture children, it would be good to do so. As for the actual content of God's will, according to Ockham, that is not a subject for philosophy but rather a matter for revelation and faith. 
The rigour and consistency of Ockham's philosophy made it for a time one of the leading schools of Scholastic thought, but eventually it was the philosophy of Aquinas that prevailed in the Roman Catholic Church. After the Reformation, however, Ockham's view exerted influence on Protestant theologians. Meanwhile, it hastened the decline of Scholastic moral philosophy because it effectively removed ethics from the sphere of reason.

Renaissance and Reformation

The revival of Classical learning and culture that began in 15th-century Italy and then slowly spread throughout Europe did not give immediate birth to any major new ethical theories. Its significance for ethics lies, rather, in a change of focus. For the first time since the conversion of the Roman Empire to Christianity, man, not God, became the chief object of interest, and the theme was not religion but humanism—the powers, freedom, and accomplishments of human beings. This does not mean that there was a sudden conversion to atheism. Renaissance thinkers remained Christian and still considered human beings as somehow midway between the beasts and the angels. Yet, even this middle position meant that humans were special. It meant, too, a new conception of human dignity and of the importance of the individual. Although the Renaissance did not produce any outstanding moral philosophers, there is one writer whose work is of some importance in the history of ethics: the Italian author and statesman Niccolò Machiavelli. His book Il principe (1513; The Prince) offered advice to rulers as to what they must do to achieve their aims and secure their power. Its significance for ethics lies precisely in the fact that Machiavelli's advice ignores the usual ethical rules: “It is necessary for a prince, who wishes to maintain himself, to learn how not to be good, and to use this knowledge and not use it, according to the necessities of the case.” There had not been so frank a rejection of morality since the Greek Sophists. So startling is the cynicism of Machiavelli's advice that it has been suggested that Il principe was an attempt to satirize the conduct of the princely rulers of Renaissance Italy. It may be more accurate, however, to view Machiavelli as an early political scientist, concerned only with setting out what human beings are like and how power is maintained, with no intention of passing moral judgment on the state of affairs described. In any case, Il principe gained instant notoriety, and Machiavelli's name became synonymous with political cynicism and deviousness. In spite of the chorus of condemnation, the work has led to a sharper appreciation of the difference between the lofty ethical systems of the philosophers and the practical realities of political life.

The first Protestants

It was left to the 17th-century English philosopher and political theorist Thomas Hobbes to take up the challenge of constructing an ethical system on the basis of so unflattering a view of human nature (see below). Between Machiavelli and Hobbes, however, there occurred the traumatic breakup of Western Christianity known as the Reformation. Reacting against the worldly immorality apparent in the Renaissance church, Martin Luther, John Calvin, and other leaders of the new Protestantism sought to return to the pure early Christianity of the Scriptures, especially the teachings of Paul, and of the Church Fathers, with Augustine foremost among them.
They were contemptuous of Aristotle (Luther called him a “buffoon”) and of non-Christian philosophers in general. Luther's standard of right and wrong was what God commands. Like William of Ockham, Luther insisted that the commands of God cannot be justified by any independent standard of goodness: good simply means what God commands. Luther did not believe these commands would be designed to satisfy human desires because he was convinced that desires are totally corrupt. In fact, he thought that human nature was totally corrupt. In any case, Luther insisted that one does not earn salvation by good works: one is justified by faith in Christ and receives salvation through divine grace. It is apparent that if these premises are accepted, there is little scope for human reason in ethics. As a result, no moral philosophy has ever had the kind of close association with any Protestant church that, say, the philosophy of Aquinas has had with Roman Catholicism. Yet, because Protestants emphasized the capacity of the individual to read and understand the Gospels without obtaining the authoritative interpretation of the church, the ultimate outcome of the Reformation was a greater freedom to read and write independently of the church hierarchy. This made possible a new era of ethical thought. From this time, too, distinctively national traditions of moral philosophy began to emerge; the British tradition, in particular, developed largely independently of ethics on the Continent. Accordingly, the present discussion will follow this tradition through the 19th century before returning to consider the different line of development in continental Europe.

The British tradition: from Hobbes to the Utilitarians

Thomas Hobbes (1588–1679) is an outstanding example of the independence of mind that became possible in Protestant countries after the Reformation. God does, to be sure, play an honourable role in Hobbes's philosophy, but it is a dispensable role. The philosophical edifice stands on its own foundations; God merely crowns the apex. Hobbes was the equal of the Greek philosophers in his readiness to develop an ethical position based only on the facts of human nature and the circumstances in which humans live; and he surpassed even Plato and Aristotle in the extent to which he sought to do this by systematic deduction from clearly set out premises. Hobbes started with a severe view of human nature: all of man's voluntary acts are aimed at self-pleasure or self-preservation. This position is known as psychological hedonism, because it asserts that the fundamental psychological motivation is the desire for pleasure. Like later psychological hedonists, Hobbes was confronted with the objection that people often seem to act altruistically. There is a story that Hobbes was seen giving alms to a beggar outside St. Paul's Cathedral. A clergyman sought to score a point by asking Hobbes if he would have given the money, had Christ not urged giving to the poor. Hobbes replied that he gave the money because it pleased him to see the poor man pleased. The reply reveals the dilemma that always faces those who propose startling new explanations for all human actions: either the theory is flagrantly at odds with how people really behave or else it must be broadened to such an extent that it loses much of what made it so shocking in the first place. Hobbes's account of “good” is equally devoid of religious or metaphysical premises.
He defined good as “any object of desire,” and insisted that the term must be used in relation to a person—nothing is simply good of itself independently of the person who desires it. Hobbes may therefore be considered a subjectivist. If one were to say, for example, of the incident just described, “What Hobbes did was good,” this statement would not be objectively true or false. It would be good for the poor man, and, if Hobbes's reply was accurate, it would also be good for Hobbes. But if a second poor person, for instance, was jealous of the success of the first, that person could quite properly say that what Hobbes did was bad. Remarkably, this unpromising picture of self-interested individuals who have no notion of good apart from their own desires serves as the foundation of Hobbes's account of justice and morality in his masterpiece, Leviathan (1651). Starting with the premises that humans are self-interested and the world does not provide for all their needs, Hobbes argued that in the state of nature, without civil society, there will be competition between men for wealth, security, and glory. The ensuing struggle is Hobbes's famous “war of all against all,” in which there can be no industry, commerce, or civilization, and the life of man is “solitary, poor, nasty, brutish and short.” The struggle occurs because each individual rationally pursues his or her own interests, but the outcome is in no one's interest. How can this disastrous situation be ended? Not by an appeal to morality or justice; in the state of nature these ideas have no meaning. Yet, we want to survive and we can reason. Our reason leads us to seek peace if it is attainable but to continue to use all the means of war if it is not. How is peace to be obtained? Only by a social contract. We must all agree to give up our rights to attack others in return for their giving up their rights to attack us. By reasoning in order to increase our prospects for survival, we have found the solution. We know that a social contract will solve our problems. Our reason therefore leads us to desire such an arrangement. But how is it to come about? My reason cannot tell me to accept it while others do not. Nor is Hobbes under the illusion that the mere making of a promise or contract will carry any weight. Since we are self-interested, we will keep our promises only if it is in our interest to do so. A promise that cannot be enforced is worthless. Therefore, in making the social contract, we must establish some means of enforcing it. To do this we must all hand our powers over to some other person or group of persons who will punish anyone who breaches the contract. This person or group of persons Hobbes calls the sovereign. It may be a single person, or an elected legislature, or almost any other form of government; the essence of sovereignty consists only in having sufficient power to keep the peace by punishing those who would break it. When such a sovereign—the Leviathan of his title—exists, justice becomes meaningful in that agreements or promises are necessarily kept. At the same time, each individual has adequate reason to be just, for the sovereign will ensure that those who do not keep their agreements are suitably punished. Hobbes witnessed the turbulence and near anarchy of the English Civil Wars (1642–51) and was keenly aware of the dangers caused by disputed sovereignty. His solution was to insist that sovereignty must not be divided. 
Because the sovereign was appointed to enforce the social contract fundamental to peace and everything desired, it can only be rational to resist the sovereign if the sovereign directly threatens one's life. Hobbes was, in effect, a supporter of absolute sovereignty, and this has been the focus of much political discussion of his ideas. His significance for ethics, however, lies rather in his success in dealing with the subject independently of theology and of those quasi-theological or quasi-Aristotelian accounts that see the world as designed for the benefit of human beings. With this achievement, he brought ethics into the modern era.

Early intuitionists: Cudworth, More, and Clarke

There was, of course, immediate opposition to Hobbes's views. Ralph Cudworth (1617–88), one of a group known as the Cambridge Platonists, defended a position in some respects similar to that of Plato. That is to say, Cudworth believed the distinction between good and evil does not lie in human desires but is something objective and can be known by reason, just as the truths of mathematics can be known by reason. Cudworth was thus a forerunner of what has since come to be called intuitionism, the view that there are objective moral truths that can be known by a kind of rational intuition. This view was to attract the support of a line of distinguished thinkers until the 20th century when it became for a time the dominant view in British academic philosophy. Henry More (1614–87), another leading member of the Cambridge Platonists, attempted to give effect to the comparison between mathematics and morality by listing moral axioms that can be seen as self-evidently true, just as the axioms of geometry are seen to be self-evident. In marked contrast to Hobbes, More included an axiom of benevolence: “If it be good that one man should be supplied with the means of living well and happily, it is mathematically certain that it is doubly good that two should be so supplied, and so on.” Here, More was attempting to build on something that Hobbes himself accepted—namely, our own desire to be supplied with the means of living well. More, however, wanted to enlist reason to lead us beyond this narrow egoism to a universal benevolence. There are traces of this line of thought in the Stoics, but it was More who introduced it into British ethical thinking, wherein it is still very much alive. Samuel Clarke (1675–1729), the next major intuitionist, accepted More's axiom of benevolence in slightly different words. He was also responsible for a principle of equity, which, though derived from the Golden Rule so widespread in ancient ethics, was formulated with a new precision: “Whatever I judge reasonable or unreasonable for another to do for me, that by the same judgment I declare reasonable or unreasonable that I in the like case should do for him.” As for the means by which these moral truths are known, Clarke accepted Cudworth's and More's analogy with truths of mathematics and added the idea that what human reason discerns is a certain “fitness or unfitness” about the relationship between circumstances and actions. The right action in a given set of circumstances is the fitting one; the wrong action is unfitting. This is something known intuitively; it is self-evident. Clarke's notion of fitness is obscure, but intuitionism faces a still more serious problem that has always been a barrier to its acceptance.
Suppose we accept the ability of reason to discern that it would be wrong to deceive a person in order to profit from the deception. Why should our discerning this truth provide us with a motive sufficient to override our desire to profit? The intuitionist position divorces our moral knowledge from the forces that motivate us. The former is a matter of reason, the latter of desire. The punitive power of Hobbes's sovereign is, of course, one way to provide sufficient motivation for obedience to the social contract and to the laws decreed by the sovereign as necessary for the peaceful functioning of society. The intuitionists, however, wanted to show that morality is objective and holds in all circumstances whether there is a sovereign or not. Reward and punishment in the afterlife, administered by an all-powerful God, would provide a more universal motive; and some intuitionists, such as Clarke, did make use of this divine sanction. Other thinkers, however, wanted to show that it is reasonable to do what is good independently of the threats of any external power, human or divine. This desire lay behind the development of the major alternative to intuitionism in 17th- and 18th-century British moral philosophy: moral sense theory. The debate between the intuitionist and moral sense schools of thought aired for the first time the major issue in what is still the central debate in moral philosophy: Is morality based on reason or on feelings?

Shaftesbury and the moral sense school

The term moral sense was first used by the 3rd Earl of Shaftesbury (1671–1713), whose writings reflect the optimistic tone both of the school of thought he founded and of so much of the philosophy of the 18th-century Enlightenment. Shaftesbury believed that Hobbes had erred by presenting a one-sided picture of human nature. Selfishness is not the only natural passion. We also have natural feelings directed to others: benevolence, generosity, sympathy, gratitude, and so on. These feelings give us an “affection for virtue,” which leads us to promote the public interest. Shaftesbury called this affection the moral sense, and he thought it created a natural harmony between virtue and self-interest. Shaftesbury was, of course, realistic enough to acknowledge that we also have contrary desires and that not all of us are virtuous all of the time. Virtue could, however, be recommended because—and here Shaftesbury picked up a theme of Greek ethics—the pleasures of virtue are superior to the pleasures of vice.

Butler on self-interest and conscience

Joseph Butler (1692–1752), a bishop of the Church of England, developed Shaftesbury's position in two ways. He strengthened the case for a harmony between morality and enlightened self-interest by claiming that happiness occurs as a by-product of the satisfaction of desires for things other than happiness itself. Those who aim directly at happiness do not find it; those who have their goals elsewhere are more likely to achieve happiness as well. Butler was not doubting the reasonableness of pursuing one's own happiness as an ultimate aim. He went so far as to say that “ . . . when we sit down in a cool hour, we can neither justify to ourselves this or any other pursuit, till we are convinced that it will be for our happiness, or at least not contrary to it.” He held, however, that direct and simple egoism is a self-defeating strategy.
Egoists will do better for themselves by adopting immediate goals other than their own interests and living their everyday life in accordance with these more immediate goals. Butler's second addition to Shaftesbury's account was the idea of conscience. This he saw as a second natural guide to conduct, alongside enlightened self-interest. Butler believed that there is no inconsistency between the two; he admitted, however, that skeptics may doubt “the happy tendency of virtue” and for them conscience can serve as an authoritative guide. Just what reason these skeptics have to follow conscience, if they believe its guidance to be contrary to their own happiness, is something that Butler did not adequately explain. Nevertheless, his introduction of conscience as an independent source of moral reasoning reflects an important difference between ancient and modern ethical thinking. The Greek and Roman philosophers would have had no difficulty in accepting everything Butler said about the pursuit of happiness, but they would not have understood his idea of another independent source of rational guidance. Although Butler insisted that the two operate in harmony, this was for him a fortunate fact about the world and not a necessary principle of reason. Thus his recognition of conscience opened the way for later formulations of a universal principle of conduct at odds with the path indicated by even the most enlightened self-interested reasoning.

The climax of moral sense theory: Hutcheson and Hume

The moral sense school reached its fullest development in the works of two Scottish philosophers, Francis Hutcheson (1694–1746) and David Hume (1711–76). Hutcheson was concerned with showing, against the intuitionists, that moral judgment cannot be based on reason and therefore must be a matter of whether an action is “amiable or disagreeable” to one's moral sense. Like Butler's notion of conscience, Hutcheson's moral sense does not find pleasing only, or even predominantly, those actions that are in one's own interest. On the contrary, Hutcheson conceived moral sense as based on a disinterested benevolence. This led him to state, as the ultimate criterion of the goodness of an action, a principle that was to serve as the basis for the Utilitarian reformers: “that action is best which procures the greatest happiness for the greatest numbers . . . .” Hume, like Hutcheson, held that reason cannot be the basis of morality. His chief ground for this conclusion was that morality is essentially practical: there is no point in judging something good if the judgment does not incline us to act accordingly. Reason alone, however, Hume regarded as “the slave of the passions.” Reason can show us how best to achieve our ends, but it cannot determine our ultimate desires and is incapable of moving us to action except in accordance with some prior want or desire. Hence, reason cannot give rise to moral judgments. This is an important argument that is still employed in the debate between those who believe that morality is based on reason and those who base it instead on emotion or feelings. Hume's conclusion certainly follows from his premises. Can either premise be denied? We have seen that intuitionists such as Cudworth and Clarke maintained that reason can lead to action. Reason, they would have said, leads us to see a particular action as fitting in given circumstances and therefore to do it. Hume would have none of this.
“Tis not contrary to reason,” he provocatively asserted, “to prefer the destruction of the whole world to the scratching of my finger.” To show that he was not embracing the view that only egoism is rational, Hume continued: “Tis not contrary to reason to choose my total ruin, to prevent the least uneasiness of an Indian or person wholly unknown to me.” His point was simply that to have these preferences is to have certain desires or feelings; they are not matters of reason at all. The intuitionists might insist that moral and mathematical reasoning are analogous, but this analogy was not helpful here. We can know a truth of geometry and not be motivated to act in any way. What of Hume's other premise that morality is essentially practical and moral judgments must lead to action? This can be denied more easily. We could say that moral judgments merely tell us what is right or wrong. They do not lead to action unless we want to do what is right. Then Hume's argument would do nothing to undermine the claim that moral judgments are based on reason. But there is a price to pay. The terms right and wrong lose much of their force. We can no longer assert that those who know what is right but do what is wrong are in any way irrational. They are just people who do not happen to have the desire to do what is right. This desire—because it leads to action—must be acknowledged to be based on feeling rather than reason. Denying that morality is necessarily action-guiding means abandoning the idea, so important to those defending the objectivity of morality, that some things are objectively required of all rational beings. Hume's forceful presentation of this argument against a rational basis for morality would have been enough to earn him a place in the history of ethics, but it is by no means his only achievement in this field. In A Treatise of Human Nature (1739–40) Hume points, almost as an afterthought, to the fact that writers on morality regularly start by making various observations about human nature or about the existence of a god—all statements of fact about what is the case—and then suddenly switch to statements about what ought or ought not be done. Hume says that he cannot conceive how this new relationship of “ought” can be deduced from the preceding statements that were related by “is”; and he suggests these authors should explain how this deduction is to be achieved. The point has since been called Hume's Law and taken as proof of the existence of a gulf between facts and values, or between “is” and “ought.” This places too much weight on Hume's brief and ironic comment, but there is no doubt that many writers, both before and after Hume, have argued as if values could easily be deduced from facts. They can usually be found to have smuggled values in somewhere. Attention to Hume's Law makes it easy for us to detect such logically illicit contraband. Hume's positive account of morality is in line with that of the moral sense school: “The hypothesis which we embrace is plain. It maintains that morality is determined by sentiment. It defines virtue to be whatever mental action or quality gives to a spectator the pleasing sentiment of approbation; and vice the contrary.” In other words, Hume takes moral judgments to be based on a feeling. They do not reflect any objective state of the world. Having said that, however, it may still be asked whether this feeling is one that is common to all of us or one that varies from individual to individual. 
If Hume gives the former answer, moral judgments retain a kind of objectivity. While they do not reflect anything out there in the universe apart from human feelings, one's judgments may be true or false depending on whether they capture this universal human moral sentiment. If, on the other hand, the feeling varies from one individual to the next, moral judgments become entirely subjective. People's judgments would express their own feelings, and to reject someone else's judgment as wrong would merely be to say that one's own feelings were different. Hume does not make entirely clear which of these two views he holds; but if he is to avoid breaching his own rule about not deducing an “ought” from an “is,” he cannot hold that a moral judgment can follow logically from a description of the feelings that an action gives to a particular group of spectators. From the mere existence of a feeling we cannot draw the inference that we ought to obey it. For Hume to be consistent on this point—and even with his central argument that moral judgments must move us to action—the moral judgment must be based not on the fact that all people, or most people, or even the speaker, have a certain feeling; it must rather be based on the actual experience of the feeling by whoever accepts the judgment. This still leaves it open whether the feeling is common to all or limited to the person accepting the judgment, but it shows that, in either case, the “truth” of a judgment for any individual depends on whether that individual actually has the appropriate feeling. Is this “truth” at all? As will be seen below, 20th-century philosophers with views broadly similar to Hume's have suggested that moral judgments have a special kind of meaning not susceptible of truth or falsity in the ordinary way.

The intuitionist response: Price and Reid

Powerful as they were, Hume's arguments did not end the debate between the moral sense theorists and the intuitionists. They did, however, lead Richard Price (1723–91), Thomas Reid (1710–96), and later intuitionists to abandon the idea that moral truths can be established by some process of demonstrative reasoning akin to that used in mathematics. Instead, these proponents of intuitionism took the line that our notions of right and wrong are simple, objective ideas, directly perceived by us and not further analyzable into anything such as “fitness.” We know of these ideas, not through any moral sense based on feelings, but rather through a faculty of reason or of the intellect that is capable of discerning truth. Since Hume, this has been the only plausible form of intuitionism. Yet, Price and Reid failed to explain adequately just what are the objective moral qualities that we perceive directly and how they connect with the actions we choose. At this point the argument over whether morality is based on reason or feelings was temporarily exhausted, and the focus of British ethics shifted from such questions about the nature of morality as a whole to an inquiry into which actions are right and which are wrong. Today, the distinction between these two types of inquiry would be expressed by saying that whereas the 18th-century debate between intuitionism and the moral sense school dealt with questions of metaethics, 19th-century thinkers became chiefly concerned with questions of normative ethics. The positions we take in metaethics over whether ethics is objective or subjective, for example, do not tell us what we ought to do. That task is the province of normative ethics.
The impetus to the discussion of normative ethics was provided by the challenge of Utilitarianism. The essential principle of Utilitarianism was, as noted above, put forth by Hutcheson. Curiously, it gained further development from the widely read theologian William Paley (1743–1805), who provides a good example of the independence of metaethics and normative ethics. His position on the nature of morality was similar to that of Ockham and Luther—namely, he held that right and wrong are determined by the will of God. Yet, because he believed that God wills the happiness of his creatures, his normative ethics were Utilitarian: whatever increases happiness is right; whatever diminishes it is wrong. Notwithstanding these predecessors, Jeremy Bentham (1748–1832) is properly considered the father of modern Utilitarianism. It was he who made the Utilitarian principle serve as the basis for a unified and comprehensive ethical system that applies, in theory at least, to every area of life. Never before had a complete, detailed system of ethics been so consistently constructed from a single fundamental ethical principle. Bentham's ethics began with the proposition that nature has placed human beings under two masters: pleasure and pain. Anything that seems good must either be directly pleasurable, or thought to be a means to pleasure or to the avoidance of pain. Conversely, anything that seems bad must either be directly painful, or thought to be a means to pain or to the deprivation of pleasure. From this Bentham argued that the words right and wrong can only be meaningful if they are used in accordance with the Utilitarian principle, so that whatever increases the net surplus of pleasure over pain is right and whatever decreases it is wrong. Bentham then set out how we are to weigh the consequences of an action, and thereby decide whether it is right or wrong. We must, he says, take account of the pleasures and pains of everyone affected by the action, and this is to be done on an equal basis: “Each to count for one, and none for more than one.” (At a time when Britain had a major trade in slaves, this was a radical suggestion; and Bentham went further still, explicitly extending consideration to nonhuman animals as well.) We must also consider how certain or uncertain the pleasures and pains are, their intensity, how long they last, and whether they tend to give rise to further feelings of the same or of the opposite kind. Bentham did not allow for distinctions in the quality of pleasure or pain as such. Referring to a popular game, he affirmed that “quantity of pleasure being equal, pushpin is as good as poetry.” This led his opponents to characterize his philosophy as one fit for pigs. The charge is only half true. Bentham could have defended a taste for poetry on the grounds that whereas one tires of mere games, the pleasures of a true appreciation of poetry have no limit; thus the quantities of pleasure obtained by poetry are greater than those obtained by pushpin. All the same, one of the strengths of Bentham's position is its honest bluntness, which it owes to his refusal to be fazed by the contrary opinions either of conventional morality or of refined society. He never thought that the aim of Utilitarianism was to explain or justify ordinary moral views; it was, rather, to reform them. 
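Bentham's method of weighing consequences can be made concrete with a small arithmetic sketch. The Python fragment below is purely illustrative and not Bentham's own formalism: the people, the figures, and the simple product of intensity, duration, and certainty are assumptions chosen only to show the shape of the calculation, and further factors Bentham listed, such as a feeling's tendency to give rise to further feelings of the same or the opposite kind, are left out.

```python
# A purely illustrative sketch of the weighing described above; the names
# and numbers are hypothetical, not Bentham's own notation.
from dataclasses import dataclass

@dataclass
class Feeling:
    person: str        # whoever is affected; each counts for one and none for more than one
    intensity: float   # strength of the pleasure (+) or pain (-)
    duration: float    # how long it lasts
    certainty: float   # how certain it is to occur, from 0.0 to 1.0

def net_surplus(feelings):
    """Expected net surplus of pleasure over pain across everyone affected."""
    return sum(f.intensity * f.duration * f.certainty for f in feelings)

# Two hypothetical actions affecting the same two people.
action_a = [Feeling("Ann", +4.0, 2.0, 0.5), Feeling("Bob", -1.0, 1.0, 1.0)]
action_b = [Feeling("Ann", +1.0, 1.0, 1.0), Feeling("Bob", +1.0, 1.0, 1.0)]

print(net_surplus(action_a))  # 3.0
print(net_surplus(action_b))  # 2.0
```

On these made-up figures the first action yields the larger net surplus and so, on Bentham's principle, would be the right one to choose; the point of the sketch is only that every person's pleasures and pains enter the same sum on equal terms.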
John Stuart Mill (1806–73), Bentham's successor as the leader of the Utilitarians and the most influential British thinker of the 19th century, had some sympathy for the view that Bentham's position was too narrow and crude. His essay “Utilitarianism” (1861) introduced several modifications, all aimed at a broader view of what is worthwhile in human existence and at implications less shocking to established moral convictions. Although his position was based on the maximization of happiness (and this is said to consist in pleasure and the absence of pain), he distinguished between pleasures that are higher and those that are lower in quality. This enabled him to say that it is “better to be Socrates dissatisfied than a fool satisfied.” The fool, he argued, would only be of a different opinion because he did not know both sides of the question. Mill sought to show that Utilitarianism is compatible with moral rules and principles relating to justice, honesty, and truthfulness by arguing that Utilitarians should not attempt to calculate before each action whether that specific action will maximize utility. Instead, they should be guided by the fact that an action falls under a general principle (such as the principle that we should keep our promises), and adherence to that general principle tends to increase happiness. Only under special circumstances is it necessary to consider whether an exception may have to be made. Mill's easily readable prose ensured a wide audience for his exposition of Utilitarianism, but as a philosopher he was markedly inferior to the last of the 19th-century Utilitarians, Henry Sidgwick (1838–1900). Sidgwick's Methods of Ethics (1874) is the most detailed and subtle work of Utilitarian ethics yet produced. Especially noteworthy is his discussion of the various principles accepted by what he calls common sense morality—i.e., the morality accepted by most people without systematic thought. Price, Reid, and some adherents of their brand of intuitionism thought that such principles (e.g., those of truthfulness, justice, honesty, benevolence, purity, and gratitude) were self-evident, independent moral truths. Sidgwick was himself an intuitionist as far as the basis of ethics was concerned: he believed that the principle of Utilitarianism must ultimately be based on a self-evident axiom of rational benevolence. Nonetheless, he strongly rejected the view that all principles of common sense morality are themselves self-evident. He went on to demonstrate that the allegedly self-evident principles conflict with one another and are vague in their application. They could only be part of a coherent system of morality, he argued, if they were regarded as subordinate to the Utilitarian principle, which defined their application and resolved the conflicts between them. Sidgwick was satisfied that he had reconciled common sense morality and Utilitarianism by showing that whatever was sound in the former could be accounted for by the latter. He was, however, troubled by his inability to achieve any such reconciliation between Utilitarianism and egoism, the third method of ethical reasoning dealt with in his book. True, Sidgwick regarded it as self-evident that “from the point of view of the universe” one's own good is of no greater value than the like good of any other person, but what could be said to the egoist who expresses no concern for the point of view of the universe, taking his stand instead on the fact that his own good mattered more to him than anyone else's? 
Bentham had apparently believed either that self-interest and the general happiness are not at odds or that it is the legislator's task to reward or punish actions so as to see that they are not. Mill also had written of the need for sanctions but was more concerned with the role of education in shaping human nature in such a way that one finds happiness in doing what benefits all. By contrast, Sidgwick was convinced that this could lead at best to a partial overlap between what is in one's own interest and what is in the interest of all. Hence, he searched for arguments with which to convince the egoist of the rationality of universal benevolence but failed to find any. The Methods of Ethics concludes with an honest admission of this failure and an expression of dismay at the fact that, as a result, “. . . it would seem necessary to abandon the idea of rationalizing [morality] completely.”

The continental tradition: from Spinoza to Nietzsche

If Hobbes is to be regarded as the first of a distinctively British philosophical tradition, the Dutch-Jewish philosopher Benedict Spinoza (1632–77) appropriately occupies the same position in continental Europe. Unlike Hobbes, Spinoza did not provoke a long-running philosophical debate. In fact, his philosophy was neglected for a century after his death and was in any case too much of a self-contained system to invite debate. Nevertheless, Spinoza held positions on crucial issues that were in sharp contrast to those taken by Hobbes, and these differences were to grow over the centuries during which British and continental European philosophy followed their own paths. The first of these contrasts with Hobbes is Spinoza's attitude toward natural desires. As has been noted, Hobbes took self-interested desire for pleasure as an unchangeable fact about human nature and proceeded to build a moral and political system to cope with it. Spinoza did just the opposite. He saw natural desires as a form of bondage. We do not choose to have them of our own will. Our will cannot be free if it is subject to forces outside itself. Thus our real interests lie not in satisfying these desires but in transforming them by the application of reason. Spinoza thus stands in opposition not only to Hobbes but also to the position later to be taken by Hume, for Spinoza saw reason not as the slave of the passions but as their master. The second important contrast is that while individual humans and their separate interests are always assumed in Hobbes's philosophy, this separation is simply an illusion from Spinoza's viewpoint. Everything that exists is part of a single system, which is at the same time nature and God. (One possible interpretation of this is that Spinoza was a pantheist, believing that God exists in every aspect of the world and not apart from it.) We, too, are part of this system and are subject to its rationally necessary laws. Once we know this, we understand how irrational it would be to desire that things should be different from the way they are. This means that it is irrational to envy, to hate, and to feel guilt, for these emotions presuppose the possibility of things being different. So we cease to feel such emotions and find peace, happiness, and even freedom—in Spinoza's terms the only freedom there can be—in understanding the system of which we are a part. A view of the world so different from our everyday conceptions as that of Spinoza cannot be made to seem remotely plausible when presented in summary form.
To many philosophers it remains implausible even when complete. Its value for ethics, however, lies not in its validity as a whole, but in the introduction into continental European philosophy of a few key ideas: that our everyday nature may not be our true nature; that we are part of a larger unity; and that freedom is to be found in following reason. The German philosopher and mathematician Gottfried Wilhelm Leibniz (1646–1716), the next great figure in the Rationalist tradition, gave scant attention to ethics, perhaps because of his belief that the world is governed by a perfect God, and hence must be the best of all possible worlds. As a result of Voltaire's hilarious parody in Candide (1759), this position has achieved a certain notoriety. It is not generally recognized, however, that it does at least provide a consistent solution to a problem that has baffled thinking Christians for many centuries: How can there be evil in a world governed by an all-powerful, all-knowing, and all-good God? Leibniz's solution may not be plausible, but there may be no better one if the above premises are allowed to pass unchallenged. It was the French philosopher and writer Jean-Jacques Rousseau (1712–78) who took the next step. His Discours sur l'origine et les fondements de l'inégalité parmi les hommes (1755; A Discourse upon the Origin and Foundation of the Inequality Among Mankind) depicted a state of nature very different from that described by Hobbes as well as from Christian conceptions of original sin. Rousseau's “noble savages” lived isolated, trouble-free lives, supplying their simple wants from the abundance that nature provided and even coming to each other's aid in times of need. Only when someone claimed possession of a piece of land did laws have to be introduced, and with them came civilization and all its corrupting influences. This is, of course, a message that resembles one of Spinoza's key points: The human nature we see before us in our fellow citizens is not the only possibility; somewhere, there is something better. If we can find a way to reach it, we will have found the solution to our ethical and social problems. Rousseau revealed his route in his Contrat social (1762; A Treatise on the Social Compact, or Social Contract). It required rule by the “general will.” This may sound like democracy and, in a sense, it was democracy that Rousseau advocated; but his conception of rule by the general will is very different from the modern idea of democratic government. Today, we assume that in any society the interests of different citizens will be in conflict, and that as a result for every majority that succeeds in having its will implemented there will be a minority that fails to do so. For Rousseau, on the other hand, the general will is not the sum of all the individual wills in the community but the true common will of all the citizens. Even if a person dislikes and opposes a decision carried by the majority, that decision represents the general will, the common will in which he shares. For this to be possible, Rousseau must be assuming that there is some common good in which all human beings share and hence that their true interests coincide. As man passes from the state of nature to civil society, he has to “consult his reason rather than study his inclinations.” This is not, however, a sacrifice of his true interests, for in following reason he ceases to be a slave to “physical impulses” and so gains moral freedom.
This leads to a picture of civilized human beings as divided selves. The general will represents the rational will of every member of the community. If an individual opposes the decision of the general will, his opposition must stem from his physical impulses and not from his true, autonomous will. For obvious reasons, this idea was to find favour with such autocratic leaders of the French Revolution as Robespierre. It also had a much less sinister influence on one of the outstanding philosophers of modern times: Immanuel Kant of Germany. Interestingly, Kant (1724–1804) acknowledged that he had despised the ignorant masses until he read Rousseau and came to appreciate the worth that exists in every human being. For other reasons too, Kant is part of the tradition deriving from both Spinoza and Rousseau. Like his predecessors, Kant insisted that actions resulting from desires cannot be free. Freedom is to be found only in rational action. Moreover, whatever is demanded by reason must be demanded of all rational beings; hence, rational action cannot be based on a single individual's personal desires, but must be action in accordance with something that he can will to be a universal law. This view roughly parallels Rousseau's idea of the general will as that which, as opposed to the individual will, a person shares with the whole community. Kant extended this community to all rational beings. Kant's most distinctive contribution to ethics was his insistence that our actions possess moral worth only when we do our duty for its own sake. He first introduced this idea as something accepted by our common moral consciousness and only then tried to show that it is an essential element of any rational morality. In claiming that this idea is central to the common moral consciousness, Kant was expressing in heightened form a tendency of Judeo-Christian ethics and revealing how much the Western ethical consciousness had changed since the time of Socrates, Plato, and Aristotle. Does our common moral consciousness really insist that there is no moral worth in any action done for any motive other than duty? Certainly we would be less inclined to praise the young man who plunges into the surf to rescue a drowning child if we learned that he did it because he expected a handsome reward from the child's millionaire father. This feeling lies behind Kant's disagreement with all those moral philosophers who have argued that we should do what is right because that is the path to happiness, either on earth or in heaven. But Kant went further than this. He was equally opposed to those who see benevolent or sympathetic feelings as the basis of morality. Here he may be reflecting the moral consciousness of 18th-century Protestant Germany, but it appears that even then the moral consciousness of Britain, as reflected in the writings of Shaftesbury, Hutcheson, Butler, and Hume, was very different. The moral consciousness of Western civilization in the last quarter of the 20th century also appears to be different from the one Kant was describing. Kant's ethics is based on his distinction between hypothetical and categorical imperatives. He called any imperative that rests on our desires hypothetical, meaning by this that it is a command of reason that applies only if we desire the goal. For example, “Be honest, so that people will think well of you!” is an imperative that applies only if you want people to think well of you.
A similarly hypothetical analysis can be given of the imperatives suggested by, say, Shaftesbury's ethics: “Help those in distress, if you sympathize with their sufferings!” In contrast to such approaches to ethics, Kant said that the commands of morality must be categorical imperatives: they must apply to all rational beings, regardless of their wants and feelings. To most philosophers this poses an insuperable problem: a moral law that applied to all rational beings, irrespective of their personal wants and desires, could have no specific goals or aims because all such aims would have to be based on someone's wants or desires. It took Kant's peculiar genius to seize upon precisely this implication, which to others would have refuted his claims, and to use it to derive the nature of the moral law. Because nothing else but reason is left to determine the content of the moral law, the only form this law can take is the universal principle of reason. Thus the supreme formal principle of Kant's ethics is: “Act only on that maxim through which you can at the same time will that it should become a universal law.” Kant still faced two major problems. First, he had to explain how we can be moved by reason alone to act in accordance with this supreme moral law; and, second, he had to show that this principle is able to provide practical guidance in our choices. If we were to couple Hume's theory that reason is always the slave of the passions with Kant's denial of moral worth to all actions motivated by desires, the outcome would be that no actions can have moral worth. To avoid such moral skepticism, Kant maintained that reason alone can lead to action. Unfortunately he was unable to say much in defense of this claim. Of course, the mere fact that we otherwise face so unpalatable a conclusion is in itself a powerful incentive to believe that somehow a categorical imperative must be possible, but this is not convincing to anyone not already wedded to Kant's view of moral worth. At one point Kant appeared to be taking a different line. He wrote that the moral law inevitably produces in us a feeling of reverence or awe. If he meant to say that this feeling then becomes the motivation for obedience, however, he was conceding Hume's point that reason alone is powerless to bring about action. It would also be difficult to accept that anything, even the moral law, can necessarily produce a certain kind of feeling in all rational beings regardless of their psychological constitution. Thus this approach does not succeed in clarifying Kant's position or rendering it plausible. Kant gave closer attention to the problem of how his supreme formal principle of morality can provide guidance in concrete situations. One of his examples is as follows. Suppose that I plan to get some money by promising to pay it back, although I have no intention of keeping my promise. The maxim of such an action might be “Make false promises when it suits you to do so.” Could such a maxim be a universal law? Of course not. If promises were so easily broken, no one would rely on them, and the practice of promising would cease. For this reason, I know that the moral law does not allow me to carry out my plan. Not all situations are so easily decided. Another of Kant's examples deals with aiding those in distress. I see someone in distress, whom I could easily help, but I prefer not to do so. Can I will as a universal law the maxim that a person should refuse assistance to those in distress? 
Unlike the case of promising, there is no strict inconsistency in this maxim being a universal law. Kant, however, says that I cannot will it to be such because I may someday be in distress myself, and I would then want assistance from others. This type of example is less convincing than the previous one. If I value self-sufficiency so highly that I would rather remain in distress than escape from it through the intervention of another, Kant's principle no longer tells me that I have a duty to assist those in distress. In effect, Kant's supreme principle of practical reason can only tell us what to do in those special cases in which turning the maxim of our action into a universal law yields a contradiction. Outside this limited range, the moral law that was to apply to all rational beings regardless of their wants and desires cannot guide us except by appealing to our desires. Kant does offer alternative formulations of the categorical imperative, and one of these has been seen as providing more substantial guidance than the formulation so far considered. This formulation is: “So act that you treat humanity in your own person and in the person of everyone else always at the same time as an end and never merely as means.” The connection between this formulation and the first one is not entirely clear, but the idea seems to be that when I choose for myself I treat myself as an end. If, therefore, in accordance with the principle of universal law, I must choose so that all could choose similarly, I must respect everyone else as an end. Even if this is valid, the application of the principle raises further questions. What is it to treat someone merely as a means? Using a person as a slave is an obvious example; Kant, like Bentham, was making a stand against this kind of inequality while it still flourished as an institution in some parts of the world. But to condemn slavery we have only to give equal weight to the interests of the slaves. Does Kant's principle take us any further than Utilitarianism? Modern Kantians hold that it does because they interpret it as denying the legitimacy of sacrificing the rights of one human being in order to benefit others. One thing that can be said confidently is that Kant was firmly opposed to the Utilitarian principle of judging every action by its consequences. His ethics is a deontology. In other words, the rightness of an action depends on whether it accords with a rule irrespective of its consequences. In one essay Kant went so far as to say that it would be wrong to tell a lie even to a would-be murderer who came to your door seeking to kill an innocent person hidden in your house. This kind of situation illustrates how difficult it is to remain a strict deontologist when principles may clash. Apparently Kant believed that his principle of universal law required that one never tell lies, but it could also be argued that his principle of treating everyone as an end would necessitate doing everything possible to save the life of an innocent person. Another possibility would be to formulate the maxim of the action with sufficient precision to define the circumstances under which it would be permissible to tell lies—e.g., we could all agree to a universal law that permitted lies to people intending to commit murder. Kant did not explore such solutions. Kant's philosophy deeply affected subsequent German thought, but there were several aspects of it that troubled later thinkers. 
One of these was his portrayal of human nature as irreconcilably split between reason and emotion. In Briefe über die ästhetische Erziehung des Menschen (1795; Letters on the Aesthetic Education of Man), the dramatist and literary theorist Friedrich Schiller suggested that while this might apply to modern human beings, it was not the case in ancient Greece where reason and feeling seemed to have been in harmony. (There is, as suggested earlier, some basis for this claim insofar as the Greek moral consciousness did not make the modern distinction between morality and self-interest.) Schiller's suggestion may have been the spark that led Georg Wilhelm Friedrich Hegel (1770–1831) to develop the first philosophical system that has historical change as its core. As Hegel presents it, all of history is the progress of mind or spirit along a logically necessary path that leads to freedom. Human beings are manifestations of this universal mind, although at first they do not realize this. Freedom cannot be achieved until human beings do realize it, and so feel at home in the universe. There are echoes of Spinoza in Hegel's idea of mind as something universal and also in his conception of freedom as based on knowledge. What is original, however, is the way in which all of history is presented as leading to the goal of freedom. Thus Hegel accepts Schiller's view that for the ancient Greeks, reason and feeling were in harmony, but he sees this as a naive harmony that could exist only as long as the Greeks did not see themselves as free individuals with a conscience independent of the views of the community. For freedom to develop, it was necessary for this harmony to break down. This occurred as a result of the Reformation, with its insistence on the right of individual conscience. But the rise of individual conscience left human beings divided between conscience and self-interest, between reason and feeling. We have seen how many philosophers tried unsuccessfully to bridge this gulf until Kant's insistence that we must do our duty for duty's sake made the division an apparently inevitable part of moral life. For Hegel, however, it can be overcome by a synthesis of the harmonious communal nature of Greek life with the modern freedom of individual conscience. In Naturrecht und Staatswissenschaft im Grundrisse, alternatively entitled Grundlinien der Philosophie des Rechts (1821; The Philosophy of Right), Hegel described how this synthesis could be achieved in an organic community. The key to his solution is the recognition that human nature is not fixed but is shaped by the society in which one lives. The organic community would foster those desires that most benefit the community. It would imbue its members with the sense that their own identity consists in being a part of the community, so that they would no more think of going off in pursuit of their own private interests than one's left arm would think of going off without the rest of the body. Nor should it be forgotten that such organic relationships are reciprocal: the organic community will no more disregard the interests of its members than an individual would disregard an injury to his or her arm. Harmony would thus prevail but not the naive harmony of ancient Greece. The citizens of Hegel's organic community do not obey its laws and customs simply because they are there. With the independence of mind characteristic of modern times, they can only give their allegiance to institutions that they recognize as conforming to rational principles. 
The modern organic state, unlike the ancient Greek city-state, is self-consciously based on rationally selected principles. Hegel provided a new approach to the ancient problem of reconciling morality and self-interest. Others had accepted the problem as part of the inevitable nature of things and looked for ways around it. Hegel looked at it historically and saw it as a problem only in a certain type of society. Instead of solving the problem as it existed, he looked to the emergence of a new form of society in which it would disappear. In this way Hegel claimed to have overcome one great problem that was insoluble for Kant. Hegel also believed that he had the solution to the other key weakness in Kant's ethics—namely, the difficulty of giving content to the supreme formal moral principle. In Hegel's organic community, the content of our moral duty would be given to us by our position in society. We would know that our duty was to be a good parent, a good citizen, a good teacher, merchant, or soldier, as the case might be. It is an ethic that has been called “my station and its duties.” It might be thought that this is a limited, conservative conception of what we ought to do with our lives, especially when compared with Kant's principle of universal law, which does not base what we ought to do on what our particular station in society happens to be. Hegel would have replied that because the organic community is based on universally valid principles of reason, it complies with Kant's principle of universal law. Moreover, without the specific content provided by the concrete institutions and practices of a society, that principle would remain an empty formula. Hegel's philosophy has both a conservative and a radical side. The conservative aspect is reflected in the ethic of “my station and its duties,” and even more strongly in the significant resemblance between Hegel's detailed description of the organic society and the actual institutions of the Prussian state in which he lived and taught for the last decade of his life. This resemblance, however, was in no way a necessary implication of Hegel's philosophy as a whole. After Hegel's death, a group of his more radical followers known as the Young Hegelians hailed the manner in which he had demonstrated the need for a new form of society to overcome the separation between self and community but scorned the implication that the state in which they were living could be this solution to all the problems of history. Among this group was a young student named Karl Marx. Marx (1818–83) has often been presented by his followers as a scientist rather than a moralist. He did not deal directly with the ethical issues that occupied the philosophers so far discussed. His Materialist conception of history is, rather, an attempt to explain all ideas, whether political, religious, or ethical, as the product of the particular economic stage that society has reached. Thus a feudal society will regard loyalty and obedience to one's lord as the chief virtues. A capitalist economy, on the other hand, requires a mobile labour force and expanding markets, so that freedom, especially the freedom to sell one's labour, is its key ethical conception. Because Marx saw ethics as a mere by-product of the economic basis of society, he frequently took a dismissive stance toward it. 
Echoing the Sophist Thrasymachus, Marx said that the “ideas of the ruling class are in every epoch the ruling ideas.” With his coauthor Friedrich Engels, he was even more scornful in the Manifest der Kommunistischen Partei (1848; The Communist Manifesto), in which morality, law, and religion are referred to as “so many bourgeois prejudices behind which lurk in ambush just as many bourgeois interests.” A sweeping rejection of ethics, however, is difficult to reconcile with the highly moralistic tone of Marx's condemnation of the miseries the capitalist system inflicts upon the working class and with his obvious commitment to hastening the arrival of the Communist society that will end such iniquities. After Marx died, Engels tried to explain this apparent inconsistency by saying that as long as society was divided into classes, morality would serve the interests of the ruling class. A classless society, on the other hand, would be based on a truly human morality that served the interests of all human beings. This does make Marx's position consistent by setting him up as a critic, not of ethics as such, but rather of the class-based moralities that would prevail until the Communist revolution. By studying Marx's earlier writings—those produced when he was a Young Hegelian—one obtains a slightly different, though not incompatible, impression of the place of ethics in Marx's thought. There seems no doubt that the young Marx, like Hegel, saw human freedom as the ultimate goal. He also held, as did Hegel, that freedom could only be obtained in a society in which the dichotomy between private interest and the general interest had disappeared. Under the influence of socialist ideas, however, he formed the view that merely knowing what was wrong with the world would not achieve anything. Only the abolition of private property could lead to the transformation of human nature and so bring about the reconciliation of the individual and the community. Theory, Marx concluded, had gone as far as it could; even the theoretical problems of ethics, as illustrated in Kant's division between reason and feeling, would remain insoluble unless one moved from theory to practice. This is what Marx meant in the famous thesis that is engraved on his tombstone: “The philosophers have only interpreted the world, in various ways; the point is to change it.” The goal of changing the world stemmed from Marx's attempt to overcome one of the central problems of ethics; the means now passed beyond philosophy. Friedrich Nietzsche (1844–1900) was a literary and social critic, not a systematic philosopher. In ethics, the chief target of his criticism is the Judeo-Christian tradition. He describes Jewish ethics as a “slave morality” based on envy. Christian ethics is, in his opinion, even worse because it makes a virtue of meekness, poverty, and humility, telling one to turn the other cheek rather than to struggle. It is the ethics of the weak, who hate and fear strength, pride, and self-affirmation. Such an ethics undermines the human drives that have led to the greatest and most noble human achievements. Nietzsche thought the era of traditional religion to be over: “God is dead,” perhaps his most widely repeated aphorism, was his paradoxical way of putting it. Yet, what was to be put in its place? Nietzsche took from Aristotle the concept of greatness of soul, the unchristian virtue that included nobility and a justified pride in one's achievements. 
He suggested a reevaluation of values that would lead to a new ideal: the Übermensch, a term usually translated as “Superman” and given connotations that suggest that Nietzsche would have regarded Hitler as an ideal type. Nietzsche's praise of “the will to power” is taken as further evidence that he would have approved of Hitler. This interpretation owes much to Nietzsche's racist sister, who after his death compiled a volume of his unpublished writings, arranging them to make it appear that he was a forerunner of Nazi thinking. This is at best a partial truth. Nietzsche was almost as contemptuous of pan-German racism and anti-Semitism as he was of the ethics of Judaism and Christianity. What Nietzsche meant by Übermensch was a person who could rise above the limitations of ordinary morality; and by “the will to power” it seems that Nietzsche had in mind self-affirmation and not necessarily the use of power to oppress others. Nevertheless, Nietzsche left himself wide open to those who wanted his philosophical imprimatur for their crimes against humanity. His belief in the importance of the Übermensch made him talk of ordinary people as “the herd,” who did not really matter. In Jenseits von Gut und Böse (1886; Beyond Good and Evil), he wrote with approval of “the distinguished type of morality,” according to which “one has duties only toward one's equals; toward beings of a lower rank, toward everything foreign to one, one may act as one sees fit, ‘as one's heart dictates' ”—in any event, beyond good and evil. The point is that the Übermensch is above all ordinary moral standards: “The distinguished type of human being feels himself as value-determining; he does not need to be ratified; he judges ‘that which is harmful to me is harmful as such'; he knows that he is the something which gives value to objects; he creates values.” In this Nietzsche was a forerunner of Existentialism rather than Nazism, but then Existentialism, precisely because it gives no basis for choosing other than authenticity, is not incompatible with Nazism. Nietzsche's position on ethical matters represents a stark contrast to that of Henry Sidgwick, the last major figure of 19th-century British ethics treated in this article. Sidgwick believed in objective standards for ethical judgments and thought that the subject of ethics had over the centuries made progress toward these standards. He saw his own work as building carefully on that progress. Nietzsche, on the other hand, would have us sweep away everything since Greek ethics and not keep much of that either. The superior types would then be able to freely create their own values as they saw fit.

20th-century Western ethics

The brief historical survey of Western ethics from Socrates to the 20th century provided above has shown three constant themes. Since the Sophists, there have been (1) disagreements over whether ethical judgments are truths about the world or only reflections of the wishes of those who make them; (2) frequent attempts to show, in the face of considerable skepticism, either that it is in one's own interests to do what is good or that, even though this is not necessarily in one's own interests, it is the rational thing to do; and (3) repeated debates over just what goodness and the standard of right and wrong might be. The 20th century has seen new twists to these old themes and an increased attention to the application of ethics to practical problems.
Each of these major questions is considered below in terms of metaethics, normative ethics, and applied ethics. As previously noted, metaethics deals not with substantive ethical theories or moral judgments but rather with questions about the nature of these theories and judgments. Among 20th-century philosophers in English-speaking countries, those defending the objectivity of ethical judgments have most often been intuitionists or naturalists; those taking a different view have been emotivists or prescriptivists.

Moore and the naturalistic fallacy

At first it was the intuitionists who dominated the scene. In 1903 the Cambridge philosopher G.E. Moore presented in Principia Ethica his “open question argument” against what he called the naturalistic fallacy. The argument can in fact be found in Sidgwick and to some extent in the 18th-century intuitionists, but Moore's statement of it somehow caught the imagination of philosophers for the first half of the 1900s. Moore's aim was to prove that “good” is the name of a simple, unanalyzable quality. His chief target was the attempt to define good in terms of some natural quality of the world, whether it be “pleasure” (he had John Stuart Mill in mind), or “more evolved” (here he refers to Herbert Spencer, who had tried to build an ethical system around Darwin's theory of evolution), or simply the idea of what is natural itself, as in appeals to a law of nature—hence the label naturalistic fallacy (i.e., the fallacy of treating good as if it were the name of a natural property). But the label is not apt because Moore's argument applied, as he acknowledged, to any attempt to define good in terms of something else, including something metaphysical or supernatural such as “what God wills.” The so-called open question argument itself is simple enough. It consists of taking the proposed definition of good and turning it into a question. For instance, if the proposed definition is “Good means whatever leads to the greatest happiness of the greatest number,” then Moore would ask: “Is whatever leads to the greatest happiness of the greatest number good?” Moore is not concerned whether we answer yes or no. His point is that if the question is at all meaningful—if a negative answer is not plainly self-contradictory—then the definition cannot be right, for a definition is supposed to preserve the meaning of the term defined. If it does, a question of the type Moore asks would be absurd for all who understand the meaning of the term. Compare, for example, “Do all squares have four equal sides?” Moore's argument does show that definitions of the kind he criticized do not capture all that we ordinarily mean by the term good. It would still be open to a would-be naturalist to admit that the definition does not capture everything that we ordinarily mean by the term, and add that all this shows is that ordinary usage is muddled and in need of revision. (We shall see that J.L. Mackie was later to make this part of his defense of subjectivism.) As for Mill, it is questionable whether he really intended to offer a definition of the term good; he seems to have been more interested in offering a criterion by which we could ascertain which actions are good. As Moore acknowledged, the open question argument does not do anything to show that pleasure, for example, is not the sole criterion of the goodness of an action. It shows only that this cannot be known to be true by definition, and so, if it is to be known at all, it must be known by some other means.
In spite of these doubts, Moore's argument was widely accepted at the time as showing that all attempts to derive ethical conclusions from anything not itself ethical in nature are bound to fail. The point was soon seen to be related to that made by Hume in his remarks on writers who move from “is” to “ought.” Moore, however, would have considered Hume's own account of morality to be naturalistic because of its definition of virtue in terms of the sentiments of the spectator. The upshot was that for 30 years after the publication of Principia Ethica intuitionism was the dominant metaethical position in British philosophy. In addition to Moore, its supporters included H.A. Prichard and Sir W.D. Ross. The 20th-century intuitionists were not far removed philosophically from their 18th-century predecessors—those such as Richard Price who had learned from Hume's criticism and did not attempt to reason his way to ethical conclusions but claimed rather that ethical knowledge is gained through an immediate apprehension of its truth. In other words, a true ethical judgment is self-evident as long as we are reflecting clearly and calmly and our judgment is not distorted by self-interest or faulty moral upbringing. Ross, for example, took “the convictions of thoughtful, well-educated people” as “the data of ethics,” observing that while some may be illusory, they should only be rejected when they conflict with others that are better able to stand up to “the test of reflection.” The intuitionists differed on the nature of the moral truths that are apprehended in this way. For Moore it was self-evident that certain things are valuable: e.g., the pleasures of friendship and the enjoyment of beauty. On the other hand, Ross thought we know it to be our duty to do acts of a certain type. These differences will be dealt with in the discussion of normative ethics. They are, however, significant to metaethical intuitionism because they reveal the lack of agreement, even among the intuitionists themselves, about moral judgments that each claims to be self-evident. This disagreement was one of the reasons for the eventual rejection of intuitionism, which, when it came, was as complete as its acceptance had been in earlier decades. But there was also a more powerful philosophical motive working against intuitionism. During the 1930s, Logical Positivism, brought from Vienna by Ludwig Wittgenstein and popularized by A.J. Ayer in his manifesto Language, Truth and Logic (1936), became influential in British philosophy. According to the Logical Positivists, all true statements fall into two categories: logical truths and statements of fact. Moral judgments cannot fit comfortably into either category. They cannot be logical truths, for these are mere tautologies that can tell us nothing more than what is already contained in the definitions of the terms. Nor can they be statements of fact because these must, according to the Logical Positivists, be at least in principle verifiable; there is no way of verifying the truths that the intuitionists claimed to apprehend. The truths of mathematics, on which intuitionists had continued to rely as the one clear parallel case of a truth known by its self-evidence, were explained now as logical truths. In this view, mathematics tells us nothing about the world; it is simply a logical system, true by the definitions of the terms involved, which may be useful in our dealings with the world. 
Thus the intuitionists lost the one useful analogy to which they could appeal in support of the existence of a body of self-evident truths known by reason alone. It seemed to follow that moral judgments could not be truths at all. In his above-cited Language, Truth and Logic, Ayer offered an alternative account: moral judgments are not statements at all. When we say, “You acted wrongly in stealing that money,” we are not expressing any fact beyond that stated by “You stole that money.” It is, however, as if we had stated this fact with a special tone of abhorrence, for in saying that something is wrong, we are expressing our feelings of disapproval toward it. This view was more fully developed by Charles Stevenson in Ethics and Language (1944). As the titles of books of this period suggest, philosophers were now paying more attention to language and to the different ways in which it could be used. Stevenson distinguished the facts a sentence may convey from the emotive impact it is intended to have. Moral judgments are significant, he urged, because of their emotive impact. In saying that something is wrong, we are not merely expressing our disapproval of it, as Ayer suggested. We are encouraging those to whom we speak to share our attitude. This is why we bother to argue about our moral views, while on matters of taste we may simply agree to differ. It is important to us that others share our attitudes on war, equality, or killing; we do not care if they prefer to take their tea with lemon while we do not. The emotivists were immediately accused of being subjectivists. In one sense of the term subjectivist, the emotivists could firmly reject this charge. Unlike other subjectivists in the past, they did not hold that those who say, for example, “Stealing is wrong,” are making a statement of fact about their own feelings or attitudes toward stealing. This view—more properly known as subjective naturalism because it makes the truth of moral judgments depend on a natural, albeit subjective, fact about the world—could be refuted by Moore's open question argument. It makes sense to ask: “I know that I have a feeling of approval toward this, but is it good?” It was the emotivists' view, however, that moral judgments make no statements of fact at all. The emotivists could not be defeated by the open question argument because they agreed that no definition of “good” in terms of facts, natural or unnatural, could capture the emotive element of its meaning. Yet, this reply fails to confront the real misgivings behind the charge of subjectivism: the concern that there are no possible standards of right and wrong other than one's own subjective feelings. In this sense, the emotivists were subjectivists. About this time a different form of subjectivism was becoming fashionable on the Continent and to some extent in the United States. Existentialism was as much a literary as a philosophical movement. Its leading figure, Jean-Paul Sartre, propounded his ideas in novels and plays as well as in his major philosophical treatise, L'Être et le néant (1943; Being and Nothingness). For Sartre, because there is no God, human beings have not been designed for any particular purpose. The Existentialists express this by stating that our existence precedes our essence.
In saying this, they make clear their rejection of the Aristotelian notion that just as we can recognize a good knife once we know that the essence of a knife is to cut, so we can recognize a good human being once we understand the essence of human nature. Because we have not been designed for any specific end, we are free to choose our own essence, which means to choose how we will live. To say that we are compelled by our situation, our nature, or our role in life to act in a certain way is to exhibit “bad faith.” This seems to be the only term of disapproval the Existentialists are prepared to use. As long as we choose “authentically,” there are no moral standards by which our conduct can be criticized. This, at least, is the view most widely held by the Existentialists. In one work, a brochure entitled L'Existentialisme est un humanisme (1946; “Existentialism Is a Humanism”; Eng. trans., Existentialism and Humanism), Sartre backs away from so radical a subjectivism by suggesting a version of Kant's idea that we must be prepared to apply our judgments universally. He does not reconcile this view with conflicting statements elsewhere in his writings, and it is doubtful if it can be regarded as a statement of his true ethical views. It may reflect, however, a widespread postwar reaction to the spreading knowledge of what happened at Auschwitz and other Nazi death camps. One leading German prewar Existentialist, Martin Heidegger, had actually become a Nazi. Was this “authentic choice” just as good as Sartre's own choice to join the French Résistance? Is there really no firm ground from which such a choice could be rejected? This seemed to be the upshot of the pure Existentialist position, just as it was an implication of the ethical emotivism that was dominant among English-speaking philosophers. It is scarcely surprising that many philosophers should search for a metaethical view that did not commit them to this conclusion. The means used by Sartre in L'Existentialisme est un humanisme were also to have their parallel, though in a much more sophisticated form, in British moral philosophy. In The Language of Morals (1952), R.M. Hare supported some of the elements of emotivism but rejected others. He agreed that in making moral judgments we are not primarily seeking to describe anything; but neither, he said, are we simply expressing our attitudes. Instead, he suggested that moral judgments prescribe; that is, they are a form of imperative sentence. Hume's rule about not deriving an “ought” from an “is” can best be explained, according to Hare, in terms of the impossibility of deriving any prescription from a set of descriptive sentences. Even the description “There is an enraged bull bearing down on you” does not necessarily entail the prescription “Run!” because I may have been searching for ways of killing myself in such a way that my children can still benefit from my life insurance. Only I can choose whether the prescription fits what I want. Herein lies moral freedom: because the choice of prescription is individual, no one can tell another what he or she must think right. Hare's espousal of the view that moral judgments are prescriptions led commentators on his first book to classify him with the emotivists as one who did not believe in the possibility of using reason to arrive at ethical conclusions. That this was a mistake became apparent with the publication of his second book, Freedom and Reason (1963).
The aim of the book was to show that the moral freedom guaranteed by prescriptivism is, notwithstanding its element of choice, compatible with a substantial amount of reasoning about moral judgments. Such reasoning is possible, Hare wrote, because moral judgments must be “universalizable.” This notion owed something to the ancient Golden Rule and even more to Kant's first formulation of the categorical imperative. In Hare's treatment, however, these ideas were refined so as to eliminate their obvious defects. Moreover, for Hare universalizability is not a substantive moral principle but a logical feature of the moral terms. This means that anyone who uses such terms as right and ought is logically committed to universalizability. To say that a moral judgment must be universalizable means, for Hare, that if I judge a particular action—say, a man's embezzlement of a million dollars from his employer—to be wrong, I must also judge any relevantly similar action to be wrong. Of course, everything will depend on what is allowed to count as a relevant difference. Hare's answer is that all features may count, except those that contain ineliminable uses of words such as I or my, or singular terms such as proper names. In other words, the fact that he embezzled a million dollars in order to be able to take holidays in Tahiti, whereas I embezzled the same sum so as to channel it from my wealthy employer to those starving in Africa, may be a relevant difference; the fact that the man's crime benefitted him, whereas my crime benefitted me, cannot be so. This notion of universalizability can also be used to test whether a difference that is alleged to be relevant—for instance, skin colour or even the position of a freckle on one's nose—really is relevant. Hare emphasized that the same judgment must be made in all conceivable cases. Thus if a Nazi were to claim that he may kill a person because that person is Jewish, he must be prepared to prescribe that if, somehow, it should turn out that he is himself of Jewish origin, he should also be killed. Nothing turns on the likelihood of such a discovery; the same prescription has to be made in all hypothetically, as well as actually, similar cases. Since only an unusually fanatical Nazi would be prepared to do this, universalizability is a powerful means of reasoning against certain moral judgments, including those made by the Nazis. At the same time, since there could be fanatical Nazis who are prepared to die for the purity of the Aryan race, the argument of Freedom and Reason allows that the role played by reason in ethics does have definite limits. Hare's position at this stage, therefore, appeared to be a compromise between the extreme subjectivism of the emotivists and some more objectivist view of ethics. As so often happens with those who try to take the middle ground, Hare was soon to receive criticism from both sides. For a time, Moore's presentation of the naturalistic fallacy halted attempts to define “good” in terms of natural qualities such as happiness. The effect was, however, both local and temporary. In the United States, Ralph Barton Perry was untroubled by Moore's arguments. His General Theory of Value (1926) gave an account of value that was objectivist and much less mysterious than the intuitionist accounts, which were at that time dominating British philosophy. Perry suggested that there is no such thing as value until a being desires something, and nothing can have intrinsic value considered apart from all desiring beings. 
A novel, for example, has no value at all unless there is a being who desires to read it or perhaps use it for some other purpose, such as starting a fire on a cold night. Thus Perry is a naturalist, for he defines value in terms of the natural quality of being desired or, as he puts it, being an object of an interest. His naturalism is objectivist, in spite of this dependence of value on desires, because value is defined as any object of any interest. Accordingly, even if I do not desire, say, this encyclopaedia for any purpose at all, I cannot deny that it has some value so long as there is some being who does desire it. Moreover, Perry believed it followed from his theory that the greatest moral value is to be found in whatever leads to the harmonious integration of interests. In Britain, Moore's impact was for a long time too great for any form of naturalism to be taken seriously. It was only as a response to Hare's intimation that any principle could be a moral principle so long as it satisfied the formal requirement of universalizability that philosophers such as Philippa Foot, Elizabeth Anscombe, and Geoffrey Warnock began to suggest that perhaps a moral principle must also have a particular kind of content—i.e., it must deal, for instance, with some aspect of wants, welfare, or flourishing. The problem with these suggestions, Hare soon pointed out, is that if we define morality in such a way that moral principles are restricted to those that maximize well-being, then if there is a person who is not interested in maximizing well-being, moral principles, as we have defined them, will have no prescriptive force for that person. This reply elicited two responses—namely, those of Anscombe and Foot. Anscombe went back to Aristotle, suggesting that we need a theory of human flourishing that will provide an account of what any person must do in order to flourish, and so will lead to a morality that every one of us has reason to follow. No such theory was forthcoming, however, until 1980 when John Finnis offered a theory of basic human goods in his Natural Law and Natural Rights. The book was acclaimed by Roman Catholic moral theologians and philosophers, but natural law ethics continues to have few followers outside these circles. Foot initially attempted to defend a similarly Aristotelian view in which virtue and self-interest are necessarily linked, but she came to the conclusion that this link could not be made. This led her to abandon the assumption that we all have adequate reasons for doing what is right. Like Hume, she suggested that it depends on what we desire and especially on how much we care about others. She observed that morality is a system of hypothetical, not categorical, imperatives. A much cruder form of naturalism surfaced from a different direction with the publication of Edward O. Wilson's Sociobiology: The New Synthesis (1975). Wilson, a biologist rather than a philosopher, claimed that new developments in the application of evolutionary theory to social behaviour would allow ethics to be “removed from the hands of philosophers” and “biologicized.” It was not the first time that a scientist, frustrated by the apparent lack of progress in ethics as compared to the sciences, had proposed some way of transforming ethics into a science. In a later book, On Human Nature (1978), Wilson suggested that biology justifies specific values (including the survival of the gene pool) and, because man is a mammal rather than a social insect, universal human rights. 
Other sociobiologists have gone further still, reviving the claims of earlier “social Darwinists” to the effect that Darwin's theory of evolution shows why it is right that there should be social inequality. As the above section on the origin of ethics suggests, evolutionary theory may indeed have something to reveal about the origins and nature of the systems of morality used by human societies. Wilson is, however, plainly guilty of breaching Hume's rule when he tries to draw from a theory of a factual nature ethical conclusions that tell us what we ought to do. It may be that, coupled with the premise that we wish our species to survive for as long as possible, evolutionary theory will suggest the direction we ought to take, but even that premise cannot be regarded as unquestionable. It is not impossible to imagine circumstances in which life is so grim that extinction is preferable. That choice cannot be dictated by science. It is even less plausible to suppose that more specific choices about social equality can be settled by evolutionary theory. At best, the theory would indicate the costs we might incur by moving to greater equality; it could not conceivably tell us whether incurring those costs is justifiable.

Recent developments in metaethics

In view of the heat of the debate between Hare and his naturalist opponents during the 1960s, the next development was surprising. At first in articles and then in the book Moral Thinking (1981), Hare offered a new understanding of what is involved in universalizability that relies on treating moral ideals in a similar fashion to ordinary desires or preferences. In Freedom and Reason the universalizability of moral judgments prevented me from giving greater weight to my own interests, simply on the grounds that they are mine, than I was prepared to give to anyone else's interests. In Moral Thinking Hare argued that to hold an ideal, whether it be a Nazi ideal such as the purity of the Aryan race or a more conventional ideal such as that justice must be done irrespective of the consequences, is really to have a special kind of preference. When I ask whether I can prescribe a moral judgment universally, I must take into account all the ideals and preferences held by all those who will be affected by the action I am judging; and in taking these into account, I cannot give any special weight to my own ideals merely because they are my own. The effect of this application of universalizability is that for a moral judgment to be universalizable it must ultimately be based on the maximum possible satisfaction of the preferences of all those affected by it. Thus Hare claimed that his reading of the formal property of universalizability inherent in moral language enables him to solve the ancient problem of showing how reason can, at least in principle, resolve ethical disagreement. Moral freedom, on the other hand, has been reduced to the freedom to be an amoralist and to avoid using moral language altogether. Hare's position was challenged by J.L. Mackie in Ethics: Inventing Right and Wrong (1977). In the course of a defense of moral subjectivism, Mackie argued that Hare had stretched the notion of universalizability far beyond anything that is really inherent in moral language.
Moreover, even if such a notion were embodied in our way of thinking and talking about morality, Mackie insisted that we would always be free to reject such notions and to decide what to do without concerning ourselves with whether our judgments are universalizable in Hare's, or indeed in any, sense. According to Mackie, our ordinary use of moral language presupposes that moral judgments are statements about something in the universe and, therefore, can be true or false. This is, however, a mistake. Drawing on Hume, Mackie says that there cannot be any matters of fact that make it rational for everyone to act in a certain way. If we do not reject morality altogether, we can only base our moral judgments on our own desires and feelings. There are a number of contemporary British philosophers who do not accept either Hare's or Mackie's metaethical views. Those who hold forms of naturalism have already been mentioned. Others, including the Oxford philosophers David Wiggins and John McDowell, have employed modern semantic theories of the nature of truth to show that even if moral judgments do not correspond to any objective facts or self-evident truths, they may still be proper candidates for being true or false. This position has become known as moral realism. For some, it makes moral judgments true or false at the cost of taking objectivity out of the notion of truth. Many modern writers on ethics, including Mackie and Hare, share a view of the nature of practical reason derived from Hume. Our reasons for acting morally, they hold, must depend on our desires because reason in action applies only to the best way of achieving what we desire. This view of practical reason virtually precludes any general answer to the question “Why should I be moral?” Until very recently, this question had received less attention in the 20th century than in earlier periods. In the early part of the century, such intuitionists as H.A. Prichard had rejected all attempts to offer extraneous reasons for being moral. Those who understood morality would, they said, see that it carried its own internal reasons for being followed. For those who could not see these reasons, the situation was reminiscent of the story of the emperor's new clothes. The question fared no better with the emotivists. They defined morality so broadly that anything an individual desires can be considered to be moral. Thus there can be no conflict between morality and self-interest, and if anyone asks “Why should I be moral?” the emotivist response would be to say “Because whatever you most approve of doing is, by definition, your morality.” Here the question is effectively being rejected as senseless, but this reply does nothing to persuade the questioners to act in a benevolent or socially desirable way. It merely tells them that no matter how antisocial their actions may be, they can still be moral as the emotivists define the term. For Hare, on the other hand, the question “Why should I be moral?” amounts to asking why I should act only on those judgments that I am prepared to universalize; and the answer he gives is that unless this is what I want to do, it is not always possible to give an adult a reason for doing so. At the same time, Hare does believe that if someone asks why children should be brought up to be morally good, the answer is that they are more likely to be happy if they develop habits of acting morally. 
Other philosophers have put the question to one side, saying that it is a matter for psychologists rather than for philosophers. In earlier periods, of course, psychology was considered a branch of philosophy rather than a separate discipline, but in fact psychologists have also had little to say about the connection between morality and self-interest. In Motivation and Personality (1954) and other works, Abraham H. Maslow developed a psychological theory reminiscent of Shaftesbury in its optimism about the link between personal happiness and moral values, but Maslow's factual evidence was thin. Viktor Emil Frankl, a psychotherapist, has written several popular books defending a position essentially similar to that of Joseph Butler on the attainment of happiness. The gist of this view is known as the paradox of hedonism: as Frankl puts it in The Will to Meaning (1969), those who aim directly at happiness do not find it, while those whose lives have meaning or purpose apart from their own happiness find happiness as well. The U.S. philosopher Thomas Nagel has taken a different approach to the question of how we may be motivated to act altruistically. Nagel challenges the assumption that Hume was right about reason being subordinate to desires. In The Possibility of Altruism (1970), Nagel sought to show that if reason must always be based on desire, even our normal idea of prudence (that we should give the same weight to our future pains and pleasures as we give to our present ones) becomes incoherent. Nagel argued, however, that once we accept the rationality of prudence, a very similar line of argument leads us to accept the rationality of altruism—i.e., the idea that the pains and pleasures of another individual are just as much a reason for one to act as are one's own pains and pleasures. This means that reason alone is capable of motivating moral action; hence, it is unnecessary to appeal to self-interest or benevolent feelings. Though not an intuitionist in the ordinary sense, Nagel has effectively reopened the 18th-century debate between the moral sense school and the rationalist intuitionists, who believed that reason alone is capable of leading to action. The most influential work in ethics by a U.S. philosopher since the early 1960s, John Rawls's A Theory of Justice (1971), is for the most part centred on normative ethics, and so will be discussed in the next section; it has, however, had some impact in metaethics as well. To argue for his principles of justice, Rawls uses the idea of a hypothetical contract, in which the contracting parties are behind a “veil of ignorance” that prevents them from knowing any particular details about their own attributes. Thus one cannot try to benefit oneself by choosing principles of justice that favour the wealthy, the intelligent, males, or whites. The effect of this requirement is in many ways similar to Hare's idea of universalizability, but Rawls claims that it avoids, as Hare's approach does not, the trap of grouping together the interests of different individuals as if they all belonged to one person. Accordingly, the old social contract model that had largely been neglected since the time of Rousseau has had a new wave of popularity as a form of argument in ethics. The other aspect of Rawls's thought to have metaethical significance is his so-called method of reflective equilibrium—the idea that a sound moral theory is one that matches our reflective moral judgments.
In A Theory of Justice Rawls uses this method to justify tinkering with the original model of the hypothetical contract until it produces results that are not too much at odds with ordinary ideas of justice. To his critics, this represents a reemergence of a conservative form of intuitionism, for it means that new moral theories are tested against ordinary moral intuitions. If a theory fails to match enough of these, it will be rejected no matter how strong its own foundations may be. In Rawls's defense it may be said that it is only our “reflective moral judgments” that serve as the testing ground—our ordinary moral intuitions may be rejected, perhaps simply because they are contrary to a well-grounded theory. If that is so, the charge of conservatism may be misplaced, but in the process the notion of some independent standard by which the moral theory may be tested has been weakened, perhaps so far as to become virtually meaningless. Perhaps the most impressive work of metaethics published in the United States in recent years is R.B. Brandt's A Theory of the Good and the Right (1979). Brandt returns to something like the naturalism of Ralph Barton Perry but with a distinctive late 20th-century American twist. He spends little time on the concept of good, believing that everything capable of being expressed by this word can be more clearly stated in terms of rational desires. To explicate this notion of a rational desire, Brandt appeals to cognitive psychotherapy. An ideal process of cognitive psychotherapy would eliminate many desires: those based on false beliefs, those one has only because one ignores the feelings or desires one is likely to have in the future, the desires or aversions that are artificially caused by others, desires that are based on early deprivation, and so on. The desires that an individual would still have, undiminished in strength after going through this process, are what Brandt is prepared to call rational desires. In contrast to his view of the term good, Brandt does think that the notions of morally right and morally wrong are useful. He suggests that, in calling an action morally wrong, we should mean that it would be prohibited by any moral code that all fully rational people would support for the society in which they are to live. (Brandt then argues that fully rational people would support that moral code which would maximize happiness, but the justification of this claim is a task for normative ethics, not metaethics.) Brandt's final chapter is an indication of the revival of interest in the question, which he phrases as “Is it always rational to act morally?” His answer, echoing Shaftesbury in modern guise, is that such desires as benevolence would survive cognitive psychotherapy, and so a rational person would be benevolent. A rational person would also have other moral motives, including an aversion to dishonesty. These motives will occasionally conflict with self-interested desires, and there can be no guarantee that the moral motives will be the stronger. If they are not, then, despite the fact that a rational person would support a code favouring honesty, Brandt is unable to say that it would be irrational to follow self-interest rather than morality. A fully rational person might support a certain kind of moral code and yet not act in accordance with it on every occasion. As the century draws to a close, the issues that divided Plato and the Sophists are still dividing moral philosophers.
Ironically, the one position that now has few defenders is Plato's view that “good” refers to an idea or property having an objective existence quite apart from anyone's attitudes or desires—on this point the Sophists appear to have won out at last. Yet, this still leaves ample room for disagreement about the extent to which reason can bring about agreed decisions on what we ought to do. There also remains the dispute about whether it is proper to refer to moral judgments as true and false. On the other central question of metaethics, the relationship between morality and self-interest, a complete reconciliation of the two continues to prove—at least for those not prepared to appeal to a belief in reward and punishment in another life—as elusive as it did for Sidgwick at the end of the 19th century.

Normative ethics seeks to set norms or standards for conduct. The term is commonly used in reference to the discussion of general theories about what one ought to do, a central part of Western ethics since ancient times. Normative ethics continued to hold the spotlight during the early years of the 20th century, with intuitionists such as W.D. Ross engaged in showing that an ethic based on a number of independent duties was superior to Utilitarianism. With the rise of Logical Positivism and emotivism, however, the logical status of normative ethics seemed doubtful: Was it not simply a matter of whatever one approved of? Nor was the analysis of language, which dominated philosophy in English-speaking countries during the 1950s, any more congenial to normative ethics. If philosophy could do no more than analyze words and concepts, how could it offer guidance about what one ought to do? The subject was therefore largely neglected until the 1960s, when emotivism and linguistic analysis were both on the retreat and moral philosophers once again began to think about how individuals ought to live. A crucial question of normative ethics is whether actions are to be judged right or wrong solely on the basis of their consequences. Traditionally, those theories that judge actions by their consequences have been known as teleological theories, while those that judge actions according to whether they fall under a rule have been referred to as deontological theories. Although the latter term continues to be used, the former has been replaced to a large extent by the more straightforward term consequentialist. The debate over this issue has led to the development of different forms of consequentialist theories and to a number of rival views.

Varieties of consequentialism

The simplest form of consequentialism is classical Utilitarianism, which holds that every action is to be judged good or bad according to whether its consequences do more than any alternative action to increase—or, if that is impossible, to limit any unavoidable decrease in—the net balance of pleasure over pain in the universe. This is often called hedonistic Utilitarianism. G.E. Moore's normative position offers an example of a different form of consequentialism. In the final chapters of the aforementioned Principia Ethica and also in Ethics (1912), Moore argued that the consequences of actions are decisive for their morality, but he did not accept the classical Utilitarian view that pleasure and pain are the only consequences that matter. Moore asked his readers to picture a world filled with all imaginable beauty but devoid of any being who can experience pleasure or pain.
Then the reader is to imagine another world, as ugly as can be but equally lacking in any being who experiences pleasure or pain. Would it not be better, Moore asked, that the beautiful world rather than the ugly world exist? He was clear in his own mind that the answer was affirmative, and he took this as evidence that beauty is good in itself, apart from the pleasure it brings. He also considered that the friendship of close personal relationships has a similar intrinsic value independent of its pleasantness. Moore thus judged actions by their consequences but not solely by the amount of pleasure they produced. Such a position was once called ideal Utilitarianism because it was a form of Utilitarianism based on certain ideals. Today, however, it is more frequently referred to by the general label consequentialism, which includes, but is not limited to, Utilitarianism. R.M. Hare is another example of a consequentialist. His interpretation of universalizability leads him to the view that for a judgment to be universalizable, it must prescribe what is most in accord with the preferences of all those affected by the action. This form of consequentialism is frequently called preference Utilitarianism because it attempts to maximize the satisfaction of preferences, just as classical Utilitarianism endeavours to maximize pleasure or happiness. Part of the attraction of such a view lies in the way in which it avoids making judgments about what is intrinsically good, finding its content instead in the desires that people, or sentient beings generally, do have. Another advantage is that it overcomes the objection, which so deeply troubled Mill, that the production of simple, mindless pleasure becomes the supreme goal of all human activity. Against these advantages we must put the fact that most preference Utilitarians want to base their judgments, not on the desires that people actually have, but rather on those they would have if they were fully informed and thinking clearly. It then becomes essential to discover what people would want under these conditions, and, because most people most of the time are less than fully informed and clear in their thoughts, the task is not an easy one. It may also be noted in passing that Hare claims to derive his version of Utilitarianism from universalizability, which in turn he draws from moral language and moral concepts. Moore, on the other hand, had simply found it self-evident that certain things were intrinsically good. Another Utilitarian, the Australian philosopher J.J.C. Smart, has defended hedonistic Utilitarianism by asserting that he has a favourable attitude to making the surplus of happiness over misery as large as possible. As these differences suggest, consequentialism can be held on the basis of widely differing metaethical views. Consequentialists may also be separated into those who ask of each individual action whether it will have the best consequences, and those who ask this question only of rules or broad principles and then judge individual actions by whether they fall under a good rule or principle. The distinction having arisen in the specific context of Utilitarian ethics, the former are known as act-Utilitarians and the latter as rule-Utilitarians. Rule-Utilitarianism developed as a means of making the implications of Utilitarianism less shocking to ordinary moral consciousness. (The germ of this approach is seen in Mill's defense of Utilitarianism.) 
There might be occasions, for example, when stealing from one's wealthy employer in order to give to the poor would have good consequences. Yet, surely it would be wrong to do so. The rule-Utilitarian solution is to point out that a general rule against stealing is justified on Utilitarian grounds, because otherwise there could be no security of property. Once the general rule has been justified, individual acts of stealing can then be condemned whatever their consequences because they violate a justifiable rule. This suggests an obvious question, one already raised by the above account of Kant's ethics: How specific may the rule be? Although a rule prohibiting stealing may have better consequences than no rule at all against stealing, would not the best consequences of all follow from a rule that permitted stealing only in those special cases in which it is clear that stealing will have better consequences than not stealing? But what then is the difference between act- and rule-Utilitarianism? In Forms and Limits of Utilitarianism (1965), David Lyons argued that if the rule were formulated with sufficient precision to take into account all its causally relevant consequences, rule-Utilitarianism would collapse into act-Utilitarianism. If rule-Utilitarianism is to be maintained as a distinct position, then there must be some restriction on how specific the rule can be so that at least some relevant consequences are not taken into account. To ignore relevant consequences is to break with the very essence of consequentialism; rule-Utilitarianism is therefore not a true form of Utilitarianism at all. That, at least, is the view taken by Smart, who has derided rule-Utilitarianism as “rule-worship” and consistently defended act-Utilitarianism. Of course, when time and circumstances make it awkward to calculate the precise consequences of an action, Smart's act-Utilitarian will resort to rough and ready “rules of thumb” for guidance; but these rules of thumb have no independent status apart from their usefulness in predicting likely consequences, and if ever we are clear that we will produce better consequences by acting contrary to the rule of thumb, we should do so. If this leads us to do things that are contrary to the rules of conventional morality, then, Smart says, so much the worse for conventional morality. Today, straightforward rule-Utilitarianism has few supporters. On the other hand, a number of more complex positions have been proposed, bridging in some way the distance between rule-Utilitarianism and act-Utilitarianism. In Moral Thinking Hare distinguished two levels of thought about what we ought to do. At the critical level we may reason about the principles that should govern our action and consider what would be for the best in a variety of hypothetical cases. The correct answer here, Hare believed, is always that the best action will be the one that has the best consequences. This principle of critical thinking is not, however, well-suited for everyday moral decision making. It requires calculations that are difficult to carry out under the most ideal circumstances and virtually impossible to carry out properly when we are hurried or liable to be swayed by our emotions or our interests. Everyday moral decisions are the proper domain of the intuitive level of moral thought. 
At this intuitive level we do not enter into fine calculations of consequences; instead, we act in accordance with fundamental moral principles that we have learned and accepted as determining, for practical purposes, whether an act is right or wrong. Just what these moral principles should be is a task for critical thinking. They must be the principles that, when applied intuitively by most people, will produce the best consequences overall, and they must also be sufficiently clear and brief to be made part of the moral education of children. Hare can therefore avoid the dilemma of the rule-Utilitarian while still preserving the advantages of that position. Given that ordinary moral beliefs reflect the experience of many generations, Hare believed that judgments made at the intuitive level will probably not be too different from judgments made by conventional morality. At the same time, Hare's restriction on the complexity of the intuitive principles is fully consequentialist in spirit. Some recently published work has gone further still in this direction. Following on earlier discussions of the difficulties consequentialists may have in trusting one another—since the word of a Utilitarian is only as good as the consequences of keeping the promise appear to him to be—Donald Regan has explored the problems of cooperation among Utilitarians in his Utilitarianism and Co-operation (1980) and has developed a further variation designed to make cooperation feasible and thus to achieve the best consequences on the whole. In Reasons and Persons (1984), Derek Parfit argued that to aim always at producing the best consequences would be indirectly self-defeating; we would be cutting ourselves off from some of the greatest goods of human life, including those close personal relationships that demand that we sacrifice the ideal of impartial benevolence to all in order that we may give preference to those we love. We therefore need, Parfit suggested, not simply a theory of what we should all do, but a theory of what motives we should all have. Parfit, like Hare, plausibly contended that recognizing this distinction will bring the practical application of consequentialist theories closer to conventional moral judgments.

An ethic of prima facie duties

In the first third of the 20th century, it was the intuitionists, especially W.D. Ross, who provided the major alternative to Utilitarianism. Because of this situation, the position described below is sometimes called intuitionism, but it seems less likely to cause confusion if we reserve that label for the quite distinct metaethical position held by Ross—and incidentally by Sidgwick as well—and refer to the normative position by the more descriptive label, an “ethic of prima facie duties.” Ross's normative ethic consists of a list of duties, each of which is to be given independent weight: fidelity, reparation, gratitude, justice, beneficence, nonmaleficence, and self-improvement. If an act falls under one and only one of these duties, it ought to be carried out. Often, of course, an act will fall under two or more duties: I may owe a debt of gratitude to someone who once helped me, but beneficence will be better served if I help others in greater need. This is why the duties are, Ross says, prima facie rather than absolute; each duty can be overridden if it conflicts with a more stringent duty. An ethic structured in this manner may match our ordinary moral judgments more closely than a consequentialist ethic, but it suffers from two serious drawbacks.
First, how can we be sure that just those duties listed by Ross are independent sources of moral obligation? Ross could only respond that if we examine them closely we will find that these, and these alone, are self-evident. But others, even other intuitionists, have found that what was self-evident to Ross was not self-evident to them. Second, if we grant Ross his list of independent prima facie moral duties, we still need to know how to decide, in a particular situation, when a less stringent duty is overridden by a more stringent one. Here, too, Ross had no better answer than an unsatisfactory appeal to intuition.

Rawls's theory of justice

When philosophers again began to take an interest in normative ethics in the 1960s after an interval of some 30 years, no theory could rival the ability of Utilitarianism to provide a plausible and systematic basis for moral judgments in all circumstances. Yet, many people found themselves unable to accept Utilitarianism. One common ground for dissatisfaction was that Utilitarianism does not offer any principle of justice beyond the basic idea that everyone's happiness—or preferences, depending on the form of Utilitarianism—counts equally. Such a principle is quite compatible with sacrificing the welfare of some to the greater welfare of others. This situation explains the enthusiastic welcome accorded to Rawls's A Theory of Justice when it appeared in 1971. Rawls offered an alternative to Utilitarianism that came close to matching its rival's ability to provide a systematic theory of what one ought to do and, at the same time, led to conclusions about justice very different from those of the Utilitarians. Rawls asserted that if people had to choose principles of justice from behind a “veil of ignorance” that restricted what they could know of their own position in society, they would not seek to maximize overall utility. Instead, they would safeguard themselves against the worst possible outcome, first, by insisting on the maximum amount of liberty compatible with the like liberty for others; and, second, by requiring that wealth be distributed so as to make the worst-off members of the society as well-off as possible. This second principle is known as the “maximin” principle, because it seeks to maximize the welfare of those at the minimum level of society. Such a principle might be thought to lead directly to an insistence on the equal distribution of wealth, but Rawls points out that if we accept certain assumptions about the effect of incentives and the benefits that may flow to all from the productive labours of the most talented members of society, the maximin principle could allow considerable inequality. In the decade following its appearance, A Theory of Justice was subjected to unprecedented scrutiny by moral philosophers throughout the world. Two major issues emerged: Were the two principles of justice soundly derived from the original contract situation? And did the two principles amount, in themselves, to an acceptable theory of justice? To the first question, the general verdict was negative. Without appealing to specific psychological assumptions about an aversion to risk—and Rawls disclaimed any such assumptions—there was no convincing way in which Rawls could exclude the possibility that the parties to the original contract would choose to maximize average utility, thus giving themselves the best possible chance of having a high level of welfare.
True, each individual making such a choice would have to accept the possibility that he would end up with a very low level of welfare, but that might be a risk worth running for the sake of a chance at a very high level. Even if the two principles cannot validly be derived from the original contract, they might be sufficiently attractive to stand on their own either as self-evident moral truths—if we are objectivists—or as principles to which we might have favourable attitudes. Maximin, in particular, has proved attractive in a variety of disciplines, including welfare economics, a field in which preference Utilitarianism once reigned unchallenged. But maximin has also had its critics, who have pointed out that the principle could require us to forgo very great benefits to the vast majority if, for some reason, obtaining them would involve some loss (no matter how trivial) to the worst-off members of society. One of Rawls's severest critics, Robert Nozick of the United States, rejected the assumption that lies behind the maximin principle, and indeed behind any principle that seeks to achieve a pattern of distribution by taking from one group in order to give to another. In attempting to bring about a certain pattern of distribution, Nozick said, these principles ignore the question of how the individuals from whom wealth will be taken acquired their wealth in the first place. Nozick held that if they have acquired it by wholly legitimate means, without violating the rights of others, then no one, not even the state, has the right to take their wealth from them without their consent. Although appeals to rights have been common since the great 18th-century declarations of the rights of man, most ethical theorists have treated rights as something that must be derived from more basic ethical principles or else from accepted social and legal practices. Recently, however, there have been attempts to turn this tendency around and make rights the basis of ethical theory. It is in the United States, no doubt because of its history and constitution, that the appeal to rights as a fundamental moral principle has been most common. Nozick's Anarchy, State, and Utopia (1974) is one example of a rights-based theory, although it is mostly concerned with the application of the theory in the political sphere and says very little about other areas of normative ethics. Unlike Rawls, who for all his disagreement with Utilitarianism is still a consequentialist of sorts, Nozick is a deontologist. Our rights to life, liberty, and legitimately acquired property are absolute, and no act can be justified if it violates them. On the other hand, we have no duty to assist people in the preservation of their rights. If other people go about their own affairs without violating anyone's rights, I must not infringe on their rights; but if they are starving, I have no duty to share my food with them. We can appeal to the generosity of the rich, but we have absolutely no right to tax them against their will so as to provide relief for the poor. This doctrine has found favour with some Americans on the political right, but it has proved too harsh for most students of ethics. To illustrate the variety of possible theories based on rights, we can take as another example the one propounded by Ronald Dworkin in Taking Rights Seriously (1977). Dworkin agreed with Nozick that rights are not to be overridden for the sake of improved welfare: rights are, he said, “trumps” over ordinary consequentialist considerations.
Dworkin's view of rights, however, derives from a fundamental right to equal concern and respect. This makes it much broader than Nozick's theory, since respect for others may require us to assist them and not merely leave them to fend for themselves. Accordingly, Dworkin's view obliges the state to intervene in many areas to ensure that rights are respected. In its emphasis on equal concern and respect, Dworkin's theory is part of a recent revival of interest in Kant's principle of respect for persons as the fundamental principle of ethics. This principle, like the principle of justice, is often said to be ignored by Utilitarians. Rawls invoked it when setting out the underlying rationale of his theory of justice. The concept, however, suffers from vagueness, and attempts to develop it into something more specific that could serve as the basis for a complete ethical theory have not—unless Rawls's theory is to count as one of them—offered a satisfactory basis for ethical decision making.

Natural law ethics

As far as secular moral philosophy is concerned, natural law ethics was regarded during most of the 20th century as a lifeless medieval relic, preserved only in Roman Catholic schools of moral theology. It is still true that the chief proponents of natural law are of that particular religious persuasion, but they have recently begun to defend their position by arguments that make no explicit appeal to their religious beliefs. Instead, they start their ethics with the claim that there are certain basic human goods that we should not act against. In the list offered by John Finnis in Natural Law and Natural Rights (1980), for example, these goods are life, knowledge, play, aesthetic experience, friendship, practical reasonableness, and religion. The identification of these goods is a matter of reflection, assisted by the findings of anthropologists. Each of the basic goods is regarded as equally fundamental; there is no hierarchy among them. It would, of course, be possible to hold a consequentialist ethic that identified several basic human goods of equal importance and judged actions by their tendency to produce or maintain these goods. Thus, if life is a good, any action that led to a preventable loss of life would, other things being equal, be wrong. Natural law ethics, however, rejects this consequentialist approach. It claims that it is impossible to measure the basic goods against one another. Instead of sanctioning consequentialist calculations, the natural law ethic is built on the absolute prohibition of any action that aims directly against any basic good. The killing of the innocent, for instance, is always wrong, even if somehow killing one innocent person were to be the only way of saving thousands of innocent people. What is not adequately explained in this rejection of consequentialism is why the life of one innocent person—about whom, let us say, we know no more than that he is innocent—cannot be measured against the lives of a thousand innocent people about whom we have precisely the same information. Natural law ethics does allow one means of softening the effect of its absolute prohibitions. This is the doctrine of double effect, traditionally applied by Roman Catholic writers to some cases of abortion. If a pregnant woman is found to have a cancerous uterus, the doctrine of double effect allows a doctor to remove the uterus notwithstanding the fact that such action will kill the fetus.
This allowance is made not because the life of the mother is regarded as more valuable than the life of the fetus, but because in removing the uterus the doctor is held not to aim directly at the death of the fetus. Instead, its death is an unwanted and indirect side effect of the laudable act of removing a diseased organ. On the other hand, a different medical condition might mean that the only way of saving the mother's life is by directly killing the fetus. Some years ago, before the development of modern obstetric techniques, this was the case when the head of the fetus became lodged during delivery. Then the only way of saving the life of the woman was to crush the skull of the fetus. Such a procedure was prohibited, for in performing it the doctor would be directly killing the fetus. This ruling was applied even to those cases in which the death of the mother would certainly bring about the death of the fetus as well. The claim was that a doctor who killed the fetus directly would be guilty of murder, whereas the deaths of the mother and fetus from natural causes were not considered to be the doctor's doing. The example is significant because it indicates the lengths to which proponents of the natural law ethic are prepared to go in order to preserve the absolute nature of the prohibitions. All of the normative theories considered so far have had a universal focus—i.e., if they were consequentialist theories, the goods they sought to achieve were sought for all who were capable of benefiting from them; and if they were deontological theories, the deontological principles applied equally to whoever might perform the act in question. Ethical egoism departs from this consensus, suggesting that we should each consider only the consequences of our actions for our own interests. The great advantage of such a position is that it avoids any possible conflict between morality and self-interest. If it is rational for us to pursue our own interest, then, if the ethical egoist is right, the rationality of morality is equally clear. We can distinguish two forms of egoism. The individual egoist says, “Everyone should do what is in my interests.” This indeed is egoism, but it is incapable of being couched in a universalizable form, and so it is arguably not a form of ethical egoism. Nor is the individual egoist likely to be able to persuade others to follow a course of action that is so obviously designed to benefit only the person who is advocating it. Universal egoism is based on the principle “Everyone should do what is in her or his own interests.” This principle is universalizable, since it contains no reference to any particular individual, and it is clearly an ethical principle. Others may be disposed to accept it because it appears to offer them the surest possible way of furthering their own interests. Accordingly, this form of egoism is from time to time seized upon by some popular writer who proclaims it the obvious answer to all our ills and has no difficulty finding agreement from a segment of the general public. The U.S. writer Ayn Rand is perhaps the best 20th-century example. Rand's version of egoism is expounded in the novel Atlas Shrugged (1957) by her hero, John Galt, and in The Virtue of Selfishness (1965), a collection of her essays. It is a confusing mixture of appeals to self-interest and suggestions that everyone will benefit from the liberation of the creative energy that will flow from unfettered self-interest.
Overlaying all this is the idea that true self-interest cannot be served by stealing, cheating, or similarly antisocial conduct. As this example illustrates, what starts out as a defense of ethical egoism very often turns into an indirect form of Utilitarianism; the claim is that we will all be better off if each of us does what is in his or her own interest. The ethical egoist is virtually compelled to make this claim because otherwise there is a paradox in the fact that the ethical egoist advocates ethical egoism at all. Such advocacy would be contrary to the very principle of ethical egoism, unless the egoist benefits from others' becoming ethical egoists. If we see our interests as threatened by others' pursuing their own interests, we will certainly not benefit by others' becoming egoists; we would do better to keep our own belief in egoism secret and advocate altruism. Unfortunately for ethical egoism, the claim that we will all be better off if every one of us does what is in his or her own interest is incorrect. This is shown by what are known as “prisoner's dilemma” situations, which are playing an increasingly important role in discussions of ethical theory. The basic prisoner's dilemma is an imaginary situation in which two prisoners are accused of a crime. If one confesses and the other does not, the prisoner who confesses will be released immediately and the other who does not will spend the next 20 years in prison. If neither confesses, each will be held for a few months and then both will be released. And if both confess, they will each be jailed for 15 years. The prisoners cannot communicate with one another. If each of them does a purely self-interested calculation, the result will be that it is better to confess than not to confess no matter what the other prisoner does. Paradoxical as it might seem, two prisoners, each pursuing his own interest, will end up worse off than they would be if they were not egoists. The example might seem bizarre, but analogous situations occur quite frequently on a larger scale. Consider the dilemma of the commuter. Suppose that each commuter finds his or her private car a little more convenient than the bus; but when each of them drives a car, the traffic becomes so congested that everyone would be better off if they all took the bus and the buses moved quickly without traffic holdups. Because private cars are somewhat more convenient than buses, however, and the overall volume of traffic is not appreciably affected by one more car on the road, it is in the interest of each to continue using a private car. At least on the collective level, therefore, egoism is self-defeating—a conclusion well brought out by Parfit in his aforementioned Reasons and Persons. The most striking development in the study of ethics since the mid-1960s has been the growth of interest among philosophers in practical, or applied, ethics; i.e., the application of normative theories to practical moral problems. This is not, admittedly, a totally new departure. From Plato onward moral philosophers have concerned themselves with practical questions, including suicide, the exposure of infants, the treatment of women, and the proper behaviour of public officials. Christian philosophers, notably Augustine and Aquinas, examined with great care such matters as when a war was just, whether it could ever be right to tell a lie, or whether a Christian woman did wrong to commit suicide in order to save herself from rape.
Hobbes had an eminently practical purpose in writing his Leviathan, and Hume wrote about the ethics of suicide. Practical concerns continued with the British Utilitarians, who saw reform as the aim of their philosophy: Bentham wrote on an incredible variety of topics, and Mill is celebrated for his essays on liberty and on the subjection of women. Nevertheless, during the first six decades of the 20th century moral philosophers largely isolated themselves from practical ethics—something that now seems all but incredible, considering the traumatic events through which most of them lived. There were one or two notable exceptions. The philosopher Bertrand Russell was very much involved in practical issues, but his stature among his colleagues was based on his work in logic and metaphysics and had nothing to do with his writings on topics such as disarmament and sexual morality. Russell himself seems to have regarded his practical contributions as largely separate from his philosophical work and did not develop his ethical views in any systematic or rigorous fashion. The prevailing view of the period was that moral philosophy is quite separate from “moralizing,” a task best left to preachers. What was not generally considered was whether moral philosophers could, without merely preaching, make an effective contribution to discussions of practical issues involving difficult ethical questions. The value of such work began to be widely recognized only during the 1960s, when first the U.S. civil rights movement and subsequently the Vietnam War and the rise of student activism started to draw philosophers into discussions of the moral issues of equality, justice, war, and civil disobedience. (Interestingly, there has been very little discussion of sexual morality—an indication that a subject once almost synonymous with the term morals has become marginal to our moral concerns.) The founding, in 1971, of Philosophy and Public Affairs, a new journal devoted to the application of philosophy to public issues, provided both a forum and a new standard of rigour for these contributions. Applied ethics soon became part of the teaching of most university philosophy departments in English-speaking countries. Here it is not possible to do more than briefly mention some of the major areas of applied ethics and point to the issues that they raise.

Applications of equality

Since much of the early impetus for applied ethics came from the U.S. civil rights movement, such topics as equality, human rights, and justice have been prominent. We often make statements such as “All humans are equal” without thinking too deeply about the justification for such claims. Since the mid-1960s much has been written about how they can be justified. Discussions of this sort have led in several directions, often following social and political movements. The initial focus, especially in the United States, was on racial equality, and here, for once, there was a general consensus among philosophers on the unacceptability of discrimination against blacks. With so little disagreement about racial discrimination itself, the centre of attention soon moved to reverse discrimination: Is it acceptable to favour blacks for jobs and enrollment in universities and colleges because they have been discriminated against in the past and are generally so much worse off than whites? Or is this, too, a form of racial discrimination and unacceptable for that reason? Inequality between the sexes has been another focus of discussion.
Does equality here mean ending as far as possible all differences in the sex roles, or could we have equal status for different roles? There has been a lively debate—both between feminists and their opponents and, on a different level, among feminists themselves—about what a society without sexual inequality would be like. Here, too, the legitimacy of reverse discrimination has been a contentious issue. Feminist philosophers have also been involved in debates over abortion and new methods of reproduction. These topics will be covered separately below. Many discussions of justice and equality are limited in scope to a single society. Even Rawls's theory of justice, for example, has nothing to say about the distribution of wealth between societies, a subject that could make acceptance of his maximin principle much more onerous. But philosophers have now begun to think about the moral implications of the inequality in wealth between the affluent nations (and their citizens) and those living in countries subject to famine. What are the obligations of those who have plenty when others are starving? It has not proved difficult to make a strong case for the view that affluent nations, as well as affluent individuals, ought to be doing much more to help the poor than they are generally now doing. There is one issue related to equality in which philosophers have led, rather than followed, a social movement. In the early 1970s, a group of young Oxford-based philosophers began to question the assumption that, while all humans are entitled to equal moral status, nonhuman animals automatically occupy an inferior position. The publication in 1972 of Animals, Men and Morals: An Inquiry into the Maltreatment of Non-humans, edited by Rosalind and Stanley Godlovitch and John Harris, was followed three years later by Peter Singer's Animal Liberation and then by a flood of articles and books that established the issue as a part of applied ethics. At the same time, these writings provided the philosophical basis for the animal liberation movement, which has had an effect on attitudes and practices toward animals in many countries. Environmental issues raise a host of difficult ethical questions, including the ancient one of the nature of intrinsic value. Many philosophers in the past agreed that human experiences have intrinsic value, and the Utilitarians at least have always accepted that the pleasures and pains of nonhuman animals are of some intrinsic significance; but none of this shows why it is so bad if dodos become extinct or a rain forest is cut down. Are these things to be regretted only because of the loss to humans or other sentient creatures? Or is there more to it than that? Some philosophers are now prepared to defend the view that trees, rivers, species (considered apart from the individual animals of which they consist), and perhaps ecological systems as a whole have a value independent of the instrumental value they may have for humans or other sentient creatures. Our concern for the environment also raises the question of our obligations to future generations. How much do we owe to the future? On a social contract view of ethics, or for the ethical egoist, the answer would seem to be: nothing. We can benefit future generations, but they are unable to reciprocate. Most other ethical theories, however, do give weight to the interests of coming generations.
Utilitarians, for their part, would not think that the fact that members of future generations do not exist yet is any reason for giving less consideration to their interests than we give to our own, provided only that we are certain that they will exist and will have interests that will be affected by what we do. In the case of, say, the storage of radioactive wastes, it seems clear that what we do will indeed affect the interests of generations to come. The question becomes much more complex, however, when we consider that we can affect the size of future generations by the population policies we choose and the extent to which we encourage large or small families. Most environmentalists believe that the world is already dangerously overcrowded. This may well be so, but the notion of overpopulation conceals a philosophical issue that is ingeniously explored by Derek Parfit in Reasons and Persons (1984). What is optimum population? Is it that population size at which the average level of welfare will be as high as possible? Or is it the size at which the total amount of welfare—the average multiplied by the number of people—is as great as possible? Both answers lead to counterintuitive outcomes, and the question remains one of the most baffling mysteries in applied ethics.

War and peace

The Vietnam War ensured that discussions of the justness of war and of the legitimacy of conscription and civil disobedience were prominent in early writings in applied ethics. There was considerable support for civil disobedience against unjust aggression and against unjust laws even in a democracy. With the cessation of hostilities in Vietnam and the end of conscription, interest in these questions declined. Concern about nuclear weapons in the early 1980s, however, led philosophers to debate whether nuclear deterrence can be an ethically acceptable strategy if it means treating civilian populations as potential nuclear targets. Jonathan Schell's The Fate of the Earth (1982) raised several philosophical questions about what we ought to do in the face of the possible destruction of all life on our planet.

Abortion, euthanasia, and the value of human life

A number of ethical questions cluster around both ends of the human life span. Whether abortion is morally justifiable has popularly been seen as depending on our answer to the question “When does a human life begin?” Many philosophers believe this to be the wrong question to ask because it suggests that there might be a factual answer that we can somehow discover through advances in science. Instead, these philosophers think we need to ask what characteristics make killing a human being wrong and then consider whether those characteristics, whatever they might be, apply to the fetus in an abortion. There is no generally agreed upon answer, yet some philosophers have presented surprisingly strong arguments to the effect that not only the fetus but even the newborn infant has no right to life. This position has been defended by Jonathan Glover in Causing Death and Saving Lives (1977) and in more detail by Michael Tooley in Abortion and Infanticide (1984). Such views have been hotly contested, especially by those who claim that all human life, irrespective of its characteristics, must be regarded as sacrosanct. The task for those who defend the sanctity of human life is to explain why human life, no matter what its characteristics, is specially worthy of protection.
An explanation could no doubt be provided in terms of such traditional Christian doctrines as that all humans are made in the image of God or that all humans have an immortal soul. In the current debate, however, the opponents of abortion have eschewed religious arguments of this kind without finding a convincing secular alternative. Somewhat similar issues are raised by euthanasia when it is nonvoluntary, as, for example, in the case of severely disabled newborn infants. Euthanasia, however, can be voluntary, and this has brought it support from some who hold that the state should not interfere with the free, informed choices of its citizens in matters that do not harm others. (The same argument is often invoked in defense of the pro-choice position in the abortion controversy; but it is on much weaker ground in this case because it presupposes what it needs to prove—namely, that the fetus does not count as an “other.”) Opposition to voluntary euthanasia has centred on practical matters such as the difficulty of providing adequate safeguards and on the argument that it would lead to a “slippery slope” that would take us to nonvoluntary euthanasia and eventually to the involuntary killing of those whom the state considers socially undesirable. Philosophers have also canvassed the moral significance of the distinction between killing and allowing to die, which is reflected in the fact that many physicians will allow a patient with an incurable condition to die when life could still be prolonged, but they will not take active steps to end the patient's life. Consequentialist philosophers, among them both Glover and Tooley, have denied that this distinction possesses any intrinsic moral significance. For those who uphold a system of absolute rules, on the other hand, a distinction between acts and omissions is essential if they are to render plausible the claim that we must never breach a valid moral rule. The issues of abortion and euthanasia are included in one of the fastest-growing areas of applied ethics, that dealing with ethical issues raised by new developments in medicine and the biological sciences. This subject, known as bioethics, often involves interdisciplinary work, with physicians, lawyers, scientists, and theologians all taking part. Centres for research in bioethics have been established in Australia, Britain, Canada, and the United States. Many medical schools have added the discussion of ethical issues in medicine to their curricula. Governments have sought to deal with the most controversial issues by appointing special committees to provide ethical advice. Several key themes run through the subjects covered by bioethics. One, related to abortion and euthanasia, is whether the quality of a human life can be a reason for ending it or for deciding not to take steps to prolong it. Since medical science can now keep alive severely disabled infants who a few years ago would have died soon after birth, pediatricians are regularly faced with this question. The issue received national publicity in Britain in 1981 when a respected pediatrician was charged with murder following the death of an infant with Down's syndrome. Evidence at the trial indicated that the parents had not wanted the child to live and that the pediatrician had consequently prescribed a narcotic painkiller. The doctor was acquitted.
The following year, in the United States, an even greater furor was caused by a doctor's decision to follow the wishes of the parents of a Down's syndrome infant and not carry out surgery without which the baby would die. The doctor's decision was upheld by the Supreme Court of Indiana, and the baby died before an appeal could be made to the U.S. Supreme Court. In spite of the controversy and efforts by government officials to ensure that handicapped infants are given all necessary lifesaving treatment, in neither Britain nor the United States is there any consensus about the decisions that should be made when severely disabled infants are born or by whom these decisions should be made. Medical advances have raised other related questions. Even those who defend the doctrine of the sanctity of all human life do not believe that doctors have to use extraordinary means to prolong life, but the distinction between ordinary and extraordinary means, like that between acts and omissions, is itself under attack. Critics assert that the wishes of the patient or, if these cannot be ascertained, the quality of the patient's life provides a more relevant basis for a decision than the nature of the means to be used. Another central theme is that of patient autonomy. This arises not only in the case of voluntary euthanasia but also in the area of human experimentation, which has come under close scrutiny following reported abuses. It is generally agreed that patients must give informed consent to any experimental procedures. But how much information, and in what detail, is the patient to be given? The problem is particularly acute in the case of randomized controlled trials, which scientists consider the most desirable way of testing the efficacy of a new procedure but which require that the patient agree to be given, at random, one of two or more forms of treatment. The allocation of medical resources became a life-and-death issue when hospitals obtained dialysis machines and had to choose which of their patients suffering from kidney disease would be able to use the scarce machines. Some argued for “first come, first served,” whereas others thought it obvious that younger patients or patients with dependents should have preference. Kidney machines are no longer as scarce, but the availability of various other exotic, expensive lifesaving techniques is limited; hence, the search for rational principles of distribution continues. New issues arise as further advances are made in biology and medicine. In 1978 the birth of the first human being to be conceived outside the human body initiated a debate about the ethics of in vitro fertilization. This soon led to questions about the freezing of human embryos and what should be done with them if, as happened in 1984 with two embryos frozen by an Australian medical team, the parents should die. The next controversy in this area arose over commercial agencies offering infertile married couples a surrogate mother who, for a fee, would be impregnated with the husband's sperm and then surrender the resulting baby to the couple. Several questions emerged: Should we allow women to rent their wombs to the highest bidder? If a woman who has agreed to act as a surrogate changes her mind and decides to keep the baby, should she be allowed to do so? The culmination of such advances in human reproduction will be the mastery of genetic engineering.
Then we will all face the question posed by the title of Jonathan Glover's probing book What Sort of People Should There Be? (1984). Perhaps this will be the most challenging issue for 21st-century ethics.

For an introduction to the major theories of ethics, the reader should consult Richard B. Brandt, Ethical Theory: The Problems of Normative and Critical Ethics (1959), an excellent comprehensive textbook. William K. Frankena, Ethics, 2nd ed. (1973), is a much briefer treatment. Another concise work is Bernard Williams, Ethics and the Limits of Philosophy (1985). There are several useful collections of classical and modern writings; among the better ones are Oliver A. Johnson, Ethics: Selections from Classical and Contemporary Writers, 5th ed. (1984); and James Rachels (ed.), Understanding Moral Philosophy (1976), which places greater emphasis on modern writers.

Origins of ethics

Joyce O. Hertzler, The Social Thought of the Ancient Civilizations (1936, reissued 1961), is a wide-ranging collection of materials. Edward Westermarck, The Origin and Development of the Moral Ideas, 2 vol., 2nd ed. (1912–17, reprinted 1971), is dated but still unsurpassed as a comprehensive account of anthropological data. Mary Midgley, Beast and Man: The Roots of Human Nature (1978, reissued 1980), is excellent on the links between biology and ethics; and Edward O. Wilson, Sociobiology: The New Synthesis (1975), and On Human Nature (1978), contain controversial speculations on the biological basis of social behaviour. Richard Dawkins, The Selfish Gene (1976, reprinted 1978), is another evolutionary account, fascinating but to be used with care.

History of Western ethics

Henry Sidgwick, Outlines of the History of Ethics for English Readers, 6th enlarged ed. (1931, reissued 1967), is a triumph of scholarship and brevity. William Edward Hartpole Lecky, History of European Morals from Augustus to Charlemagne, 2 vol., 3rd rev. ed. (1877, reprinted 1975), is fascinating and informative. Among more recent histories, Vernon J. Bourke, History of Ethics (1968, reissued in 2 vol., 1970), is remarkably comprehensive; while Alasdair MacIntyre, A Short History of Ethics (1966), is a readable personal view. Surama Dasgupta, Development of Moral Philosophy in India (1961, reissued 1965), is a clear discussion of the various schools. Sarvepalli Radhakrishnan and Charles A. Moore (eds.), A Source Book in Indian Philosophy (1957, reprinted 1967), is a collection of key primary sources. For Buddhist texts, see Edward Conze et al. (eds.), Buddhist Texts Through the Ages (1954, reissued 1964). Standard introductions to the works of classic Chinese authors mentioned in the article are E.R. Hughes (ed.), Chinese Philosophy in Classical Times (1942, reprinted 1966); and Fung Yu-Lan, A History of Chinese Philosophy, 2 vol., trans. from the Chinese (1952–53, reprinted 1983).

Ancient Greek and Roman ethics

Jonathan Barnes, The Presocratic Philosophers, rev. ed. (1982), treats Greek ethics before Socrates. The central texts of the Classic period of Greek ethics are Plato, Politeia (The Republic), Euthyphro, Protagoras, and Gorgias; and Aristotle, Ethica Nicomachea (Nicomachean Ethics). Concise introductions to the ethical thought of this period are provided by Pamela Huby, Greek Ethics (1967); and Christopher Rowe, An Introduction to Greek Ethics (1976). Significant writings of the Stoics include Marcus Tullius Cicero, De officiis (On Duties); Lucius Annaeus Seneca, Epistulae morales (Moral Letters); and Marcus Aurelius, D. imperatoris Marci Antonini Commentariorum qvos sibi ipsi scripsit libri XII (The Meditations of the Emperor Marcus Antoninus). From Epicurus only fragments remain; they have been collected in Cyril Bailey (ed.), Epicurus, the Extant Remains (1926, reprinted 1979). The most complete of the surviving works of the Epicureans is Lucretius, De rerum natura (On the Nature of Things).

Early and medieval Christian ethics

In addition to the Gospels and Paul's letters, important writings include St. Augustine, De civitate Dei (413–426; The City of God), and Enchiridion ad Laurentium de fide, spe, et caritate (421; Enchiridion to Laurentius on Faith, Hope and Love); Peter Abelard, Ethica (c. 1135; Ethics); and St. Thomas Aquinas, Summa theologiae (1265 or 1266–73). On the history of the transition from Roman ethics to Christianity, W.E.H. Lecky, op. cit., remains unsurpassed. D.J. O'Connor, Aquinas and Natural Law (1967), is a brief introduction to the most important of the Scholastic writers on ethics.

Ethics of the Renaissance and Reformation

Machiavelli's chief works are available in modern translations: Niccolò Machiavelli, The Prince, trans. and ed. by Peter Bondanella and Mark Musa (1984), and The Discourses, trans. by Leslie J. Walker (1975). For Luther's writings, see the comprehensive edition Martin Luther, Works, 55 vol., ed. by Jaroslav Pelikan et al. (1955–76). Calvin's major work is available in Jean Calvin, Institutes of the Christian Religion, trans. by Henry Beveridge, 2 vol. (1979).

The British tradition from Hobbes to the Utilitarians

The key works of this period include Thomas Hobbes, Leviathan (1651); Ralph Cudworth, Eternal and Immutable Morality (published posthumously, 1688); Henry More, Enchiridion Ethicum (1662); Samuel Clarke, Boyle lectures for 1705, published in his Works, 4 vol. (1738–42); 3rd Earl of Shaftesbury, “Inquiry Concerning Virtue or Merit,” published together with other essays in his Characteristicks of Men, Manners, Opinions, Times (1711); Joseph Butler, Fifteen Sermons (1726); Francis Hutcheson, Inquiry into the Original of Our Ideas of Beauty and Virtue (1725), and A System of Moral Philosophy, 2 vol. (1755); David Hume, A Treatise of Human Nature (1739–40), and An Enquiry Concerning the Principles of Morals (1751); Richard Price, A Review of the Principal Questions and Difficulties in Morals (1758); Thomas Reid, Essays on the Active Powers of the Human Mind (1788); William Paley, The Principles of Moral and Political Philosophy (1785); Jeremy Bentham, Introduction to the Principles of Morals and Legislation (1789); John Stuart Mill, Utilitarianism (1863); and Henry Sidgwick, The Methods of Ethics (1874). Selections of the major texts of this period are brought together in D.D. Raphael (ed.), British Moralists, 1650–1800, 2 vol. (1969); and in D.H. Monro (ed.), A Guide to the British Moralists (1972). Useful introductions to separate writers include J. Kemp, Ethical Naturalism (1970), on Hobbes and Hume; W.D. Hudson, Ethical Intuitionism (1967), on the intuitionists from Cudworth to Price and the debate with the moral sense school; and Anthony Quinton, Utilitarian Ethics (1973). C.D. Broad, Five Types of Ethical Theory (1930, reprinted 1971), includes clear accounts of the ethics of Butler, Hume, and Sidgwick. J.L. Mackie, Hume's Moral Theory (1980), brilliantly traces the relevance of Hume's work to current disputes about the nature of ethics.

The continental tradition from Spinoza to Nietzsche

The major texts are available in many English translations.
See Baruch Spinoza, The Ethics and Selected Letters, trans. by Samuel Shirley, ed. by Seymour Feldman (1982); Jean-Jacques Rousseau, A Discourse on Inequality, trans. by Maurice Cranston (1984), and The Social Contract, annotated ed., trans. by Charles M. Sherover (1974); Immanuel Kant, Grounding for the Metaphysics of Morals, trans. by James W. Ellington (1981), and Critique of Practical Reason, and Other Writings in Moral Philosophy, ed. and trans. by Lewis White Beck (1949, reprinted 1976); G.W.F. Hegel, Phenomenology of Spirit, trans. by A.V. Miller (1977), and Hegel's Philosophy of Right, trans. by T.M. Knox (1967, reprinted 1980); Karl Marx, Economic and Philosophic Manuscripts of 1844, ed. by Dirk J. Struik (1964), Capital: A Critique of Political Economy, trans. by David Fernbach, 3 vol. (1981), and The Communist Manifesto of Marx and Engels, ed. by Harold J. Laski (1967, reprinted 1975); Friedrich Nietzsche, Beyond Good and Evil: Prelude to a Philosophy of the Future, trans. by R.J. Hollingdale (1973), and The Genealogy of Morals: A Polemic, trans. by Horace B. Samuel (1964). Among the easier introductory studies are H.B. Acton, Kant's Moral Philosophy (1970); and Peter Singer, Hegel (1983), and Marx (1980). C.D. Broad, op. cit., contains readable accounts of the ethics of both Spinoza and Kant. 20th-century Western ethics The most influential writings in metaethics during the 20th century have been George Edward Moore, Principia Ethica (1903, reprinted 1976); W.D. Ross, The Right and the Good (1930, reprinted 1973); A.J. Ayer, Language, Truth, and Logic (1936, reissued 1974); Charles L. Stevenson, Ethics and Language (1944, reprinted 1979); R.M. Hare, The Language of Morals (1952, reprinted 1972), and Freedom and Reason (1963, reprinted 1977); and, in France, Jean-Paul Sartre, Being and Nothingness (1956, reissued 1978; originally published in French, 1943), and Existentialism and Humanism (1948, reprinted 1977; originally published in French, 1946). Ralph Barton Perry, General Theory of Value (1926, reprinted 1967), was highly regarded in the United States but comparatively neglected elsewhere. Wilfrid Sellars and John Hospers (eds.), Readings in Ethical History, 2nd ed. (1970), contains the most important pieces of writing on ethics from the first half of the 20th century. Widely discussed later works include Thomas Nagel, The Possibility of Altruism (1970, reissued 1978); G.J. Warnock, The Object of Morality (1971); J.L. Mackie, Ethics: Inventing Right and Wrong (1977); Richard B. Brandt, A Theory of the Good and the Right (1979); John Finnis, Natural Law and Natural Rights (1980); and R.M. Hare, Moral Thinking: Its Levels, Method, and Point (1981). A defense of naturalism can be found in two important articles by Philippa Foot, “Moral Beliefs” and “Moral Arguments,” both originally published in 1958 and later reprinted in her Virtues and Vices, and Other Essays in Moral Philosophy (1978, reprinted 1981). David Wiggins, Truth, Invention, and the Meaning of Life (1976), is a statement of what has come to be known as “moral realism.” Mary Warnock, Ethics Since 1900, 3rd ed. (1978); G.J. Warnock, Contemporary Moral Philosophy (1967); and W.D. Hudson, A Century of Moral Philosophy (1980), provide guidance through 20th-century metaethical disputes. For Moore's ideal Utilitarianism, see G.E. Moore, Ethics, 2nd ed. (1966). The best short statement of an act-Utilitarian position is J.J.C. Smart's contribution to J.J.C. 
Smart and Bernard Williams, Utilitarianism: For and Against (1973). R.M. Hare, op. cit., is an extended argument for a form of preference Utilitarianism that allows some scope to moral principles while not departing from act-Utilitarianism at the level of critical thought. David Lyons, Forms and Limits of Utilitarianism (1965), probes the distinction between act- and rule-Utilitarianism. Richard B. Brandt, op. cit., includes a defense of a version of rule-Utilitarianism. Donald Regan, Utilitarianism and Co-operation (1980), is an ingenious discussion of how the need to cooperate can be incorporated into Utilitarian theory. Amartya Sen and Bernard Williams (eds.), Utilitarianism and Beyond (1982), is a collection of essays on the difficulties of the Utilitarian position. A major contribution to consequentialist theory is Derek Parfit, Reasons and Persons (1984), which includes penetrating arguments on the nature of consequentialist reasoning in ethics. The standard defense of an ethic of prima facie duties remains W.D. Ross, op. cit. H.J. McCloskey, Meta-Ethics and Normative Ethics (1969), is a restatement with some modifications. The most widely discussed alternative theory to Utilitarianism in recent years is set forth in John Rawls, A Theory of Justice (1971, reprinted 1981). Robert Nozick, Anarchy, State, and Utopia (1974), criticizes Rawls and presents a rights-based theory. Another work giving prominence to rights is Ronald Dworkin, Taking Rights Seriously (1977). Very different from the approach of both Nozick and Dworkin is the attempt to ground rights in natural law in John Finnis, op. cit., and a shorter and more accessible introduction to natural law ethics is Fundamentals of Ethics (1983). Egoism as a theory of rationality is discussed by Derek Parfit, op. cit.; a useful collection of readings on this topic is David P. Gauthier (ed.), Morality and Rational Self-Interest (1970); see also Ronald D. Milo (ed.), Egoism and Altruism (1973). Many of the best examples of applied ethics are to be found in journal articles, particularly in Philosophy and Public Affairs (quarterly). There are many anthologies of representative samples of such writings. Among the better ones are James Rachels (ed.), Moral Problems, 3rd ed. (1979); Jan Narveson (ed.), Moral Issues (1983); and Manuel Velasquez and Cynthia Rostankowski, Ethics, Theory and Practice (1985). There are also books and collections on specific topics. Marshall Cohen, Thomas Nagel, and Thomas Scanlon (eds.), Equality and Preferential Treatment (1977), is a collection of some of the best articles on equality and reverse discrimination; while Alan H. Goldman, Justice and Reverse Discrimination (1979), is a book-length treatment of the issues. Some of the more philosophically probing discussions of feminism are Janet Radcliffe Richards, The Sceptical Feminist (1980, reprinted with corrections, 1982); Mary Midgley and Judith Hughes, Women's Choices: Philosophical Problems Facing Feminism (1983); and Alison M. Jaggar, Feminist Politics and Human Nature (1983). The moral obligations of the wealthy toward the starving are discussed in the anthology World Hunger and Moral Obligation, ed. by William Aiken and Hugh LaFollette. The ethics of the treatment of animals has given rise to much philosophical discussion. 
Books arguing for radical change include Stanley Godlovitch, Roslind Godlovitch, and John Harris (eds.), Animals, Man, and Morals: An Enquiry into the Maltreatment of Non-Humans (1971); Peter Singer, Animal Liberation: A New Ethics for Our Treatment of Animals (1975); Stephen R.L. Clark, The Moral Status of Animals (1977, reissued 1984); and Tom Regan, The Case for Animal Rights (1983). R.G. Frey, Interests and Rights: The Case Against Animals (1980), and Rights, Killing, and Suffering: Moral Vegetarianism and Applied Ethics (1983), resist some of these arguments. Mary Midgley, Animals and Why They Matter (1983), takes a middle course. Essays dealing with ethical issues raised by concern for the environment are collected in Robert Elliot and Arran Gare (eds.), Environmental Philosophy (1983); and K.S. Shrader-Frechette, Environmental Ethics (1981). Useful full-length studies include John Passmore, Man's Responsibility for Nature: Ecological Problems and Western Tradition, 2nd ed. (1980); and H.J. McCloskey, Ecological Ethics and Politics (1983). For specific problems of future generations, see R. Sikora and Brian Barry (eds.), Obligations to Future Generations (1979). A difficult but fascinating discussion of the problem of optimum population size in an ideal world can be found in Derek Parfit, op. cit. Michael Walzer, Just and Unjust Wars (1977), is a fine study of the morality of war; Richard A. Wasserstrom (ed.), War and Morality (1970), is a valuable collection of essays. Nigel Blake and Kay Pole (eds.), Objections to Nuclear Defence (1984), and Dangers of Deterrence (1984), are collections of philosophical writings on nuclear war. There is an immense amount of literature on abortion, though of various philosophical depth. Michael Tooley, Abortion and Infanticide (1983), is a penetrating study. For contrasting views, see Germain G. Grisez, Abortion: The Myths, the Realities, and the Arguments (1970); and Baruch A. Brody, Abortion and the Sanctity of Human Life: A Philosophical View (1975). Another notable treatment is L.W. Sumner, Abortion and Moral Theory (1981). Joel Feinberg (ed.), The Problem of Abortion, 2nd ed. (1984), is a good collection of essays. For a discussion of sanctity of life issues in general, including both abortion and euthanasia, see Jonathan Glover, Causing Death and Saving Lives (1977); and Peter Singer, Practical Ethics (1979). The specific problem of the treatment of severely handicapped infants is discussed in Helga Kuhse and Peter Singer, Should the Baby Live? (1985). For a comprehensive textbook on bioethics, see Tom. L. Beauchamp and James F. Childress, Principles of Biomedical Ethics, 2nd ed. (1983). Anthologies of essays on diverse topics in bioethics include Samuel Gorovitz et al. (eds.), Moral Problems in Medicine, 2nd ed. (1983); and John Arras and Robert Hunt (comp.), Ethical Issues in Modern Medicine, 2nd ed. (1983). James F. Childress, Who Should Decide? (1982), deals with paternalism in medical care; while Peter Singer and Deane Wells, The Reproduction Revolution: New Ways of Making Babies (1984), focusses on the new reproductive technology. For the philosophical issues underlying genetic engineering and other methods of altering the human organism, see Jonathan Glover, What Sort of People Should There Be? (1984).
BUENOS AIRES — Places speak through their names, and the stories of their foundational origins constitute an intangible heritage as significant as any famous building or monument. Toponymy is the study of place names and their origins, the relationship, in other words, between a place and its name. And names often give us information on a site's remotest past. We're not talking here about major urban centers that, like Lutetia/Paris and Londinium/London, obviously began as Roman settlements. This is about later cities, places that developed after the disorders that followed the fall of the Western Roman Empire in the late fifth century, when people took refuge in the countryside and abandoned efforts to build settlements that could have grown into cities. The collapse of the empire nearly provoked the end of city life for a good part of the Middle Ages, even if the situation began to improve from the 11th and 12th centuries as settlements emerged around political, economic or religious entities such as the castles of feudal lords and monarchs, abbeys and monasteries, or at trading crossroads and staging posts. In all these cases fortified walls were an indication of endemic insecurity and of how life was determined by being inside or outside those walls. In this period the difference between the settlements emerging respectively to house the "middle" or trading classes, the future bourgeoisie, and those tied to fortresses (castrum in Latin) would start to be reflected in their names. Thus the appearance of suffixes or prefixes like "burg", "chester" or "borough" in Central Europe and England, as in Manchester, Chesterfield, Rochester, Newborough, Strasbourg, Freiburg or Hamburg. Or in Italy "borgo", in towns like Borghetto, Borgo dell Anime, Borgo di Vilanova. Examples in Spain include Burgos, Castrillón, Castro Urdiales. In France, we see them in places like Cherbourg, Montebourg or Castres. The components of these place names are also at the root of so many surnames, like Castro, Borges, Burgess, Borghi, Borghese, Oldenburg, Borgia (Borja in Spanish), Bohórquez or Bourgeois among others. And in Argentina? Toponymy has worked differently on this continent, as most New World names arrived with the colonizers and thus date back no later than about 500 years. Where pre-Columbian names persist, we have information about a site's indigenous past or the native name given to a natural feature. In the Buenos Aires province for example, we have the district of Quilmes south of downtown Buenos Aires, named after the Quilmes or Kilme tribe of the Diaguita Nation that inhabited large parts of the Southern Cone. This particular name bears witness to the group's resistance, over the course of 130 years, to the Spanish conquerors. When the tribe's "pucará" (fortress), located between present-day Tucumán and Salta in northwestern Argentina, finally fell, the Spanish took the vindictive step of force-marching the fort's defenders across the country toward Buenos Aires. The action was meant to serve as a warning and lesson to other would-be resisters. By the time they stopped, in what came to be known as Quilmes, the few surviving captives had walked some 1,300 kilometers. Many of them died soon after, in part because of the muggy climate that so differs from their desert homeland. And to think that most people know the word Quilmes for an entirely different reason: as the name of Argentina's top-selling beer.
Modern brass instruments generally come in one of two families:
- Valved brass instruments use a set of valves (typically three or four but as many as seven or more in some cases) operated by the player's fingers that introduce additional tubing, or crooks, into the instrument, changing its overall length. This family includes all of the modern brass instruments except the trombone: the trumpet, the horn (also called the French horn), and the tuba, as well as the cornet, the tenor horn (alto horn), the sousaphone, and the mellophone. As valved instruments are predominant among the brasses today, a more thorough discussion of their workings can be found below. The valves are usually piston valves, but can be rotary valves; the latter are the norm for the horn (except in France) and are also common on the tuba.
- Slide brass instruments use a slide to change the length of tubing. The main instruments in this category are the trombone family, though valve trombones are occasionally used, especially in jazz. The trombone family's ancestor, the sackbut, and the folk instrument bazooka are also in the slide family.
There are two other families that have, in general, become functionally obsolete for practical purposes. Instruments of both types, however, are sometimes used for period-instrument performances of Baroque or Classical pieces. In more modern compositions, they are occasionally used for their intonation or tone color.
- Natural brass instruments only play notes in the instrument's harmonic series. These include the bugle and older variants of the trumpet and horn. The trumpet was a natural brass instrument prior to about 1795, and the horn before about 1820. In the 18th century, makers developed interchangeable crooks of different lengths, which let players use a single instrument in more than one key. Natural instruments are still played for period performances and some ceremonial functions, and are occasionally found in more modern scores, such as those by Richard Wagner.
- Keyed or fingered brass instruments used holes along the body of the instrument, which were covered by fingers or by finger-operated pads (keys) in a similar way to a woodwind instrument. These included the cornett, serpent, keyed bugle and keyed trumpet. They are more difficult to play than valved instruments.
Bore taper and diameter
Brass instruments may also be characterised by two generalizations about the geometry of the bore, that is, the tubing between the mouthpiece and the flaring of the tubing into the bell. Those two generalizations are with regard to
- the degree of taper or conicity of the bore, and
- the diameter of the bore with respect to its length.
Cylindrical vs. conical bore
While all modern valved and slide brass instruments consist in part of conical and in part of cylindrical tubing, they are divided as follows:
- Cylindrical bore brass instruments are those in which approximately constant diameter tubing predominates. Cylindrical bore brass instruments are generally perceived as having a brighter, more penetrating tone quality compared to conical bore brass instruments. The trumpet and all trombones are cylindrical bore. In particular, the slide design of the trombone necessitates this.
- Conical bore brass instruments are those in which tubing of constantly increasing diameter predominates. Conical bore instruments are generally perceived as having a more mellow tone quality than the cylindrical bore brass instruments. The "British brass band" group of instruments fall into this category.
This includes the flugelhorn, tenor horn (alto horn), baritone horn, horn, euphonium and tuba. Some conical bore brass instruments are more conical than others. For example, the flugelhorn differs from the cornet by having a higher percentage of its tubing length conical than does the cornet, in addition to possessing a wider bore than the cornet. In the 1910s and 1920s, the E.A. Couturier company built brass band instruments utilizing a patent for a continuous conical bore without cylindrical portions even for the valves or tuning slide.
Whole-tube vs. half-tube
The second division, based on bore diameter in relation to length, determines whether the fundamental tone or the first overtone is the lowest partial practically available to the player:
"Neither the horns nor the trumpet could produce the 1st note of the harmonic series ... A horn giving the C of an open 8 ft organ pipe had to be 16 ft (5 m) long. Half its length was practically useless ... it was found that if the calibre of tube was sufficiently enlarged in proportion to its length, the instrument could be relied upon to give its fundamental note in all normal circumstances." – Cecil Forsyth, Orchestration, p. 86
- Whole-tube instruments have larger bores in relation to tubing length, and can play the fundamental tone with ease and precision. The tuba and euphonium are examples of whole-tube brass instruments.
- Half-tube instruments have smaller bores in relation to tubing length and cannot easily or accurately play the fundamental tone. The second partial (first overtone) is the lowest note of each tubing length practical to play on half-tube instruments. The trumpet and horn are examples of half-tube brass instruments. For half-tube instruments the 'fundamental', although half the frequency of the second harmonic, is in fact a pedal note rather than a true fundamental.
Other brass instruments
The instruments in this list fall for various reasons outside the scope of much of the discussion above regarding families of brass instruments.
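Returning to the whole-tube versus half-tube distinction above, here is a minimal sketch (in Python) that treats a length of brass tubing as an idealized open cylindrical pipe and lists its harmonic series from the formula f_n = n * v / (2L). The 4.9 m length (roughly the 16 ft in the Forsyth quote), the speed-of-sound figure, and the open-pipe model itself are illustrative assumptions only; real instruments, with their mouthpieces, bore tapers and bells, depart noticeably from these numbers.

# A rough sketch: harmonic series of an idealized open pipe.
# The pipe length, speed of sound, and the open-pipe formula
# f_n = n * v / (2 * L) are simplifying assumptions; real brass
# instruments (bell, mouthpiece, bore taper) deviate from this model.

SPEED_OF_SOUND = 343.0  # m/s, dry air at roughly 20 degrees Celsius

def harmonic_series(length_m, partials=8):
    """Return the first few partials of an idealized open pipe of the given length."""
    fundamental = SPEED_OF_SOUND / (2.0 * length_m)
    return [n * fundamental for n in range(1, partials + 1)]

if __name__ == "__main__":
    length = 4.9  # roughly the 16 ft of tubing mentioned in the Forsyth quote
    for n, freq in enumerate(harmonic_series(length), start=1):
        label = "fundamental (pedal note on a half-tube instrument)" if n == 1 else f"partial {n}"
        print(f"{label:>52}: {freq:6.1f} Hz")
    # On a half-tube instrument only partial 2 and above are practical to play;
    # a whole-tube instrument such as the tuba can also sound partial 1.

Running it shows the point of the quote: the fundamental of a long, narrow tube sits far below the partials a player can reliably sound, which is why only whole-tube instruments such as the tuba make practical use of it.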
Specific Instructional Objectives
At the end of this lesson the student will be able to:
- Identify the scope and necessity of software engineering.
- Identify the causes of and solutions for software crisis.
- Differentiate a piece of program from a software product.
Scope and necessity of software engineering
Software engineering is an engineering approach for software development. We can alternatively view it as a systematic collection of past experience. The experience is arranged in the form of methodologies and guidelines. A small program can be written without using software engineering principles. But if one wants to develop a large software product, then software engineering principles are indispensable to achieve a good quality software cost effectively. These definitions can be elaborated with the help of a building construction analogy. Suppose you have a friend who asked you to build a small wall as shown in fig. 1.1. You would be able to do that using your common sense. You will get building materials like bricks, cement, etc. and you will then build the wall.
Fig. 1.1: Small Wall
But what would happen if the same friend asked you to build a large multistoried building as shown in fig. 1.2?
Fig. 1.2: A Multistoried Building
You don't have a very good idea about building such a huge complex. It would be very difficult to extend your idea about a small wall construction into constructing a large building. Even if you tried to build a large building, it would collapse because you would not have the requisite knowledge about the strength of materials, testing, planning, architectural design, etc. Building a small wall and building a large building are entirely different ball games. You can use your intuition and still be successful in building a small wall, but building a large
Satellite Signal Rain Fade - Causes & Explanations
Even the most reliable satellite communications technology can sometimes be out-matched by the forces of nature. It’s a phenomenon known as “rain fade” or “rain attenuation” – a weakening of the satellite signal as it passes through raindrops. Rain fade is one of the most common, and often most misunderstood, phenomena to affect satellite signals. But the more you learn about the causes of rain fade, the better your chances are to lessen its impact on your satellite system. Rain fade is not service-provider dependent; DIRECTV and Dish Network equipment are equally susceptible to the effects of signal loss due to rain fade.
The Causes of Satellite Rain Fade
Any satellite communications system network operator using a Ku-Band system (12/14 GHz or higher frequencies) will face the effects of rain fade at some time. But to understand why this weakening occurs with Ku-Band transmissions, you must first understand the causes of satellite rain fade. Two of the most common causes are listed below.
Absorption – Part or all of the energy generated when a radio wave strikes a rain droplet is converted to heat energy and absorbed by the droplet.
Scattering – A non-uniform transmission medium (the raindrops in the atmosphere) causes energy to be dispersed from its initial travel direction. Scattering can be caused by either refraction or diffraction:
Refraction – the travel direction of the radio wave changes because of the refractive index of the water droplets it encounters.
Diffraction – the travel direction of the radio wave also changes as it propagates around the obstacle in its path (a water droplet).
These different reactions ultimately have the same effect – they cause any satellite system to lose some of its normal signal level. Don’t expect to lose your satellite signal every time it rains, though. Rain outage will only occur during the heaviest rains (convective and stratiform are the most predominant types), with only a small portion of the transmission path experiencing attenuation. In fact, of a typical satellite transmission path measuring 22,300 miles, less than .02% will be affected by rain fade.
The Impact of Satellite Rain Fade
Rain rate is the most common factor used to determine rain fade. Rain fade seems to correlate very closely with the volume of raindrops (expressed in cubic wavelengths) along the path of propagation. This is opposed to the common misconception that the degree of attenuation is proportional to the quantity or individual size of the raindrops falling near the receive site. Pinpointing the specific factors that lead to attenuation is essential to accurately predicting the problem. Models can be developed from this data to chart the effects of rain fade on a regional or individual site basis.
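As a rough illustration of how a rain-fade model turns rain rate into signal loss, the sketch below uses the widely used power-law form for specific attenuation, gamma = k * R**alpha in dB/km, multiplied by the length of path that actually passes through rain. The coefficient values, the 5 km "rainy path," and the function name are placeholder assumptions for illustration, not figures from this article or from the published ITU-R tables that real link budgets rely on.

# Sketch of a power-law rain-attenuation estimate: gamma = k * R**alpha (dB/km).
# The coefficients below are illustrative placeholders, not published values;
# real designs look up k and alpha for the actual frequency and polarization.

def rain_attenuation_db(rain_rate_mm_per_hr, k=0.02, alpha=1.2, rainy_path_km=5.0):
    """Return an estimated extra signal loss in dB along the rainy part of the path."""
    specific_attenuation = k * rain_rate_mm_per_hr ** alpha  # dB per km
    return specific_attenuation * rainy_path_km

if __name__ == "__main__":
    for rate in (1, 5, 25, 50, 100):  # mm/h: drizzle through very heavy rain
        print(f"{rate:3d} mm/h  ->  {rain_attenuation_db(rate):5.1f} dB of extra loss")

The point of the model is visible in the output: loss grows faster than linearly with rain rate, which is why only the heaviest rain produces an actual outage.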
We have worked with a stem and leaf plot of the distribution of estimates of a population based on 100 random samples of size 10. The display is reasonably bell-shaped, with estimates occurring on both sides of 500 (the actual total number of penguins). There is a concentration of estimates around 500, with fewer estimates occurring as you move farther away from 500. We can think of these estimates as "typical" of what you would get if you were to select another 100 samples of size 10. That is, you would generate a similar (but not exactly the same) distribution. The stem and leaf plot would also be similar, and you would expect about the same proportions of estimates to fall into the intervals we identified earlier. Under normal circumstances, if you were asked to estimate the size of a population, you wouldn't already know the population size -- otherwise, you wouldn't need to estimate it! Also, you would not repeatedly select samples as we did in this session. In practice, you take only one sample to make your estimate based on the results in your sample. How can you predict how accurate that one sample is likely to be? For our problem of counting penguins, we can use probability to make that prediction, using the "typical" distribution we found for the 100 samples: Let's say that the one sample you found yielded 360 for your estimate. This is not a very good estimate, since the actual population size is 500. But since only two of our samples produced this estimate, the probability of coming up with that estimate is only about 2/100. On the other hand, your sample might generate an estimate of 500, right on target! Your probability for this is approximately 8/100, because eight of the samples produced an estimate of 500.
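A short simulation can make this "typical distribution of estimates" idea concrete. The sketch below assumes one particular setup (500 penguins scattered across 100 plots, with each estimate formed by counting the penguins in 10 randomly chosen plots and multiplying by 10), which may differ in detail from the session's actual activity, but it shows the same behaviour: estimates cluster around 500, and you can count how often a value such as 360 or 500 turns up in 100 repetitions.

# Sketch: distribution of estimates when we repeatedly sample 10 plots out of 100
# and scale the count up by 10. The grid size, sample size, and estimator are
# assumptions made for illustration, not the session's exact procedure.
import random
from collections import Counter

random.seed(1)

NUM_PLOTS, NUM_PENGUINS, SAMPLE_SIZE, NUM_SAMPLES = 100, 500, 10, 100

# Scatter the penguins over the plots once; only the sampling is repeated.
plots = [0] * NUM_PLOTS
for _ in range(NUM_PENGUINS):
    plots[random.randrange(NUM_PLOTS)] += 1

def one_estimate():
    sampled = random.sample(range(NUM_PLOTS), SAMPLE_SIZE)
    return sum(plots[i] for i in sampled) * (NUM_PLOTS // SAMPLE_SIZE)

estimates = [one_estimate() for _ in range(NUM_SAMPLES)]
counts = Counter(estimates)
print("estimate of 500 occurred", counts[500], "times out of", NUM_SAMPLES)
print("estimate of 360 occurred", counts[360], "times out of", NUM_SAMPLES)
print("smallest and largest estimates:", min(estimates), max(estimates))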
Scientists eye radioactive Martian hoppers Scientists are examining the feasibility of constructing hopping vehicles powered by radioactive material to more effectively explore the rugged surface of Mars. According to Space.com, one concept floated by NASA engineers envisions a solar-powered vehicle capable of splitting carbon dioxide into oxygen and carbon monoxide, which it could then burn as fuel in conventional rockets. Chinese researchers proposed a similar paradigm that would use electricity generated from batteries to suck in and heat carbon dioxide. However, a French team adopted a different approach by suggesting that future exploratory missions could be powered by magnesium powder, albeit for a limited number of jumps, or hops. Meanwhile, British scientists opined that radioactive isotopes should be employed to help squeeze gas into thrusters and heat it up for propulsion. "Radioisotope power sources have been launched as part of spacecraft numerous times," explained Hugo Williams, an aerospace engineer at the University of Leicester in England. “A hopper would draw on these experiences and design standards and would be subject to an extensive test program to demonstrate compliance with safety requirements." "[And] because the vehicle can collect propellant in-situ from the atmosphere, it has the potential to have a very long life, and therefore visit many sites of interest."
2005 AIME I Problems/Problem 13
A particle moves in the Cartesian plane according to the following rules:
- From any lattice point (a, b), the particle may only move to (a+1, b), (a, b+1), or (a+1, b+1).
- There are no right angle turns in the particle's path.
How many different paths can the particle take from (0, 0) to (5, 5)?
The length of the path (the number of times the particle moves) can range from 5 to 9; notice that 10 minus the length of the path gives the number of diagonals. Let R represent a move to the right, U represent a move upwards, and D represent a move that is diagonal. Casework upon the number of diagonal moves:
- Case d = 1: It is easy to see only 2 cases.
- Case d = 2: There are two diagonals. We need to generate a string with 3 R's, 3 U's, and 2 D's such that no two R's or U's are adjacent. The D's split the string into three sections (-D-D-): by the Pigeonhole principle all of at least one of the two letters must be all together (i.e., stay in a row).
  - If both the R's and the U's stay together, then there are 3 · 2 = 6 ways.
  - If either the R's or the U's split, then there are 2 ways to pick which letter splits and 3 places to put the letter that stays together. The letter that splits must divide into 2 in one section and 1 in the other, giving 2 ways. This totals 2 · 3 · 2 = 12 ways.
  Together this case gives 6 + 12 = 18 ways.
- Case d = 3: Now 2 R's, 2 U's, and 3 D's, so the string is divided into four sections (-D-D-D-).
  - If the R's and the U's each stay together, then there are 4 · 3 = 12 places to put them.
  - If one of them splits and the other stays together, then there are 4 · 3 = 12 places to put them, and 2 ways to pick which splits, giving 24 ways.
  - If both groups split, then there are C(4, 2) = 6 ways to arrange them.
  These add up to 12 + 24 + 6 = 42 ways.
- Case d = 4: Now 1 R, 1 U, and 4 D's (-D-D-D-D-). There are 5 places to put the R and then 4 places to put the U, giving 20 ways.
- Case d = 5: It is easy to see only 1 case.
Together, these add up to 2 + 18 + 42 + 20 + 1 = 83.
Another possibility is to use block-walking and recursion: for each vertex, the number of ways to reach it is a + b + c, where a is the number of ways to reach the vertex from the left (without having come to that vertex (the one on the left) from below), b is the number of ways to reach the vertex from the vertex diagonally down and left, and c is the number of ways to reach the vertex from below (without having come to that vertex (the one below) from the left). Assign to each point (x, y) the triplet (a, b, c), and let d(x, y) = a + b + c. Let all lattice points that contain exactly one negative coordinate be assigned the triplet (0, 0, 0). This leaves the lattice points of the first quadrant, the positive parts of the x and y axes, and the origin unassigned. As a seed, assign (0, 1, 0) to (0, 0). (We will see how this correlates with the problem.) Then define for each lattice point its triplet thus:
a(x, y) = a(x-1, y) + b(x-1, y)
b(x, y) = a(x-1, y-1) + b(x-1, y-1) + c(x-1, y-1)
c(x, y) = b(x, y-1) + c(x, y-1)
It is evident that d(x, y) is the number of ways to reach (x, y) from (0, 0). Therefore we compute vertex by vertex the triplets with 0 ≤ x ≤ 5 and 0 ≤ y ≤ 5. Finally, after simple but tedious calculations, we find that d(5, 5) = 83, so the answer is 083.
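Because the counting above is easy to slip up on, here is a small, independent brute-force check (not the wiki's own method): it enumerates move sequences built from R, U, and D, rejects any sequence in which an R move is immediately followed by a U move or vice versa, and counts the sequences that reach (5, 5). The function and constant names are mine; the script prints 83, matching both the casework and the recursion.

# Sketch: brute-force count of lattice paths from (0, 0) to (5, 5) using
# moves R = (1, 0), U = (0, 1), D = (1, 1), with no right-angle turns
# (an R move may not be immediately followed by a U move, or vice versa).
from functools import lru_cache

TARGET = (5, 5)
MOVES = {"R": (1, 0), "U": (0, 1), "D": (1, 1)}

@lru_cache(maxsize=None)
def count(x, y, last):
    if (x, y) == TARGET:
        return 1
    total = 0
    for name, (dx, dy) in MOVES.items():
        if {last, name} == {"R", "U"}:          # a right-angle turn: skip this move
            continue
        nx, ny = x + dx, y + dy
        if nx <= TARGET[0] and ny <= TARGET[1]:
            total += count(nx, ny, name)
    return total

print(count(0, 0, "start"))   # expected output: 83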
Anyone who has had me as a teacher in Grade 6 or 7 has heard me echo, “Estimation is Easy!”…but is it?! Estimation can be a difficult skill to learn because it involves number sense, spatial sense, measurement sense and lots of mental computation. This important skill is often left out of the curriculum, or is inserted as an insignificant add-on because approximate answers are not valued as much as correct answers; but they should be! Why is it important? Estimation may not be easy, but it is essential! We use estimation every day, whether estimating how much sand to buy to fill the new sandbox, or guessing whether our suitcase is overweight, or figuring out if we have enough money to buy something. Estimation helps develop number sense and fluency and is a great way to get children to visualize amounts mentally. “The emphasis on learning in math must always be on thinking, reasoning and making sense.” Marilyn Burns And what better way to emphasize these skills than to begin problems with estimation! What does it accomplish? Mathematicians agree on these four things: - It helps children focus on the attribute being measured (length, time, volume etc.) - It provides intrinsic motivation for measurement because children want to see how close their guesses are. - It helps develop familiarity with standard units, if that is what is being used to estimate. - It develops referents or benchmarks for important units and as a result, lays the groundwork for multiplication. When should you start? My belief is that once your child can count with meaning and he or she can comprehend the language of comparison (more, less, the same), you can start estimating with small amounts. Why not start when the kiddos are small and not yet pre-programmed to believe that right answers are more valued than close answers? That being said, Rosalind Charlesworth suggests that children can’t make rational estimates until they have entered the concrete operations stage (ages 7-11 yr). This is because she believes children should have already developed number, spatial and measurement sense before they can make educated guesses. Well, let’s see what Rory thinks; will he make a wild guess or a rational estimation? He’s 4 1/2 years old and in the pre-operational stage of development, but I believe he can make good estimates through motivating activities, coupled with appropriate phrasing of questions. Let’s see how he does. We are going to our new house on the weekend to measure the rooms so we can plan where to put our furniture, but oh-oh…Daddy forgot the measuring tape! What could we use to measure instead? Oliver! And he is a very willing helper…at least in the beginning! As you can see from Rory’s first attempt, he made a rational estimate even though his estimate wasn’t that close. He thought it would take 10 Olivers to line the wall, but it only took 5. I knew his guess was rational because he explained it to me and it made sense. His mistake was that he counted 10 steps initially, instead of 10 Olivers. Notice how he has already improved his understanding and his next estimate was much closer; he guessed it would take 1 Daddy and it actually took 2. I give him a thumbs-up for his first estimate activity and look forward to doing more with him to see how he improves. 
Here are some tips so your pre-school children meet with success also:
- To start, provide numbers for the children to choose from so they don't have to pull numbers out of thin air.
- Stick with numbers they can count up to.
- Begin using benchmarks (5 and 10).
- Use and teach proper words: about, around, estimate.
- Ask good questions that encourage comparisons:
  · Will it be longer, shorter or the same as _____________?
  · Will it be more or less than ____________?
  · Will it be closer to 5 or 10?
- Ask good questions to ensure understanding:
  · How did you come up with your estimate?
  · How can we find out whether your estimate is reasonable?
- Start with length, weight, and time.
- Let the estimate stand on its own; do not always follow with the measurement.
- Develop the idea that all measurements are approximations (thus estimates!), the smaller the units – the more precise but still approximate.
- Incorporate estimation activities into your every-day life so it becomes second nature.
It is easy to incorporate estimation activities into your daily life and the more your child practices, the better they will become! Need help getting started?
Given the prevalence of picturebooks in children’s everyday school and home experiences, we decided to investigate how picturebooks about mathematics and mathematicians present identities for young readers to adopt or refuse. Many children begin to identify as “good at math,” or “bad at math,” or “not a math person” by a young age. We wondered how children’s literature might contribute to those self-classifications. What images and stories about mathematics are children exposed to? How might the stories shape and even limit how children understand what mathematics is? Many of the picturebooks we read are well-intentioned in rejecting the stereotype that mathematics is primarily a domain for boys and people of European descent. We applaud the number of books that reject the trope of white male dominance in STEM fields! However, it is still possible for children to absorb other subtextual messages that present limited and limiting views of mathematics. In this blog post, we share insights from our examination of 24 picturebooks and discuss four patterns (or hidden messages) we identified within and across the texts. If you are interested in reading the full study, we encourage you to do so and let us know what you think!
Hidden Message 1: Mathematical ability is a gift
Books written to inspire readers belonging to identity groups that have been underrepresented in the narrative of mathematical ability may nevertheless mischaracterize what it means to be mathematically able. In ‘Hidden Figures: The True Story of Four Black Women and the Space Race,’ Katherine Johnson, Dorothy Vaughan, Mary Jackson, and Christine Darden do not struggle with mathematical concepts or make mistakes. These women are paragons, though not ones that readers can realistically emulate, for their brilliance is preternatural, their mathematical ability pure magic. In ‘Hidden Figures,’ each of the four protagonists is referenced as being “good at math. Really good.” In fact, that phrase is repeated nine times throughout the book, leaving it to the reader to understand what it means to be ‘really good’ at math. Other picturebooks use similarly opaque wording. Paul Erdös “was the best. He loved being at the top in math” (Heiligman & LeUyen, 2013, p. 15). Albert Einstein was “a genius” (Berne & Radunsky, 2013, opening 14). Raye Montague was “a smarty” (Mosca & Reiley, 2018, opening 13). Senefer had “intelligence and abilities” (Lumpkin & Nickens, 1992, p. 10). Eratosthenes was “a real whiz in math” (Lasky & Hawkes, 1994, p. 13). Garth, a character in ‘Math Man,’ “had a way with numbers” (opening 5). Harley, the central character of ‘The Great Math Tattle Battle’, was “the best math student in second grade” (opening 1). Across these examples, readers encounter the archetype of the mathematical doer who owes their ability to some innate, ineffable gift. The gist of this hidden message is that mathematical ability is an innate talent that one either possesses or doesn’t. Further implied is that those who possess ‘the gift’ are highly intelligent and may be geniuses. For them, mathematics requires little or no visible effort. Their work is intuitive and characterized by eureka moments. When innovation is reduced to eureka moments, and math is ‘a piece of cake,’ it obscures the perseverance associated with making sense of mathematical concepts. Children might misidentify themselves as mathematically incapable if it doesn’t come easily to them.
Hidden Message 2: Mathematical ability is like having a magic eye or knowing a secret language
In another picturebook about Katherine Johnson - ‘A Computer Called Katherine’ - the protagonist sees numbers up among the stars. Similarly, in ‘Nothing Stopped Sophie,’ a young Sophie Germain sees numbers vibrantly appearing out of thin air, superimposed over scenes of the French Revolution (opening 4). Although this might be a reasonable way to represent the thought processes of someone doing math in a picturebook, it might leave readers with the mistaken impression that talented mathematicians are people who literally see formulae floating around them. For other picturebook personae, mathematics functions like a decoder ring for translating or decrypting the language of the universe. For Einstein, numbers “were a secret language for figuring things out” (Berne & Radunsky, 2013, opening 9). The idea that math is akin to language is not problematic on its own. The issue is with the exclusive nature of languages if one does not speak them and believes one cannot learn them. For a child struggling with math anxiety, it could be frustrating to see that mathematics is a language understood easily by its conversant speakers but is incomprehensible to outsiders. Although the language of math is openly taught in classrooms, it is a language many learners struggle to use fluently. We can easily imagine children feeling inspired by Katherine Johnson, but also saying “I never see numbers floating in the air, so I guess I’m not a math person.”
Hidden Message 3: Mathematics is about doing calculations quickly
Twelfth century mathematician Fibonacci tells us that when he was just a boy, his teacher “wrote out a math problem and gave us two minutes to solve it. I solved it in two seconds” (D’Agnese & O’Brien, 2010, opening 2). Early in ‘The Great Math Tattle Battle,’ readers are told that Harley Harrison “could figure out forty-five plus thirty-nine faster than you could spell ‘Mississippi’” (opening 1). Harley’s mathematical experiences render him as mathematically capable because he is fast and does not make mistakes. His spot at the top of the class is only jeopardized when he produces an incorrect answer and becomes vulnerable to Emma Jean encroaching on his turf. For both Harley and Fibonacci, the focus is on the result rather than the process of making sense of a problem at hand. This hidden message reflects the tendency to overemphasize manipulation of numbers over relational thinking about mathematical ideas, which precludes (slower) processes of taking ownership of mathematical ideas through reasoning. In ‘Counting on Katherine: How Katherine Johnson Saved Apollo 13’, readers encounter the message that mathematical ability translates to experiences of speed and correctness. When the Apollo 13 spacecraft was in peril, Katherine “did flight-path calculations, quickly and flawlessly, to get the astronauts home safely” (opening 13). ‘The Girl with a Mind for Math: The Story of Raye Montague’ contains similar messages about mathematical speed and heroism: “Would it take her a month? Maybe weeks for success? Well, it took CALCULATIONS (and tons of caffeine), but Raye finished in HOURS … just over EIGHTEEN! (opening 22). ‘YOU DID IT!’ they cheered, and her boss had to say that her quick mind for math had in fact saved the day” (opening 24). Perhaps the goal of reading these and other picturebooks with children is to celebrate and normalize the contributions of Black women mathematicians.
That is an admirable goal in and of itself! However, if the objective is to inspire children to view themselves as mathematically capable, it is counterproductive to insinuate that mathematical ability is based solely on innate speed and flawlessness. While speed and accuracy are important skills that need to be developed, having automatization take center stage is counterproductive to developing positive mathematical identities among young learners.
Hidden Message 4: Mathematical ability is associated with social awkwardness
Few children yearn to be ostracized by their peers. However, one common message children receive about mathematics is that when you decide to devote time and energy to its pursuit, you may become a pariah, or at least socially awkward. This message is reinforced by numerous picturebooks. In ‘The Great Math Tattle Battle’, Harley Harrison and Emma Jean are the only two children shown to be interested in math. They are represented as being extremely annoying to their classmates. Eight centuries earlier, Fibonacci also annoyed his classmates, and later the townspeople of Pisa (D’Agnes & O’Brien, 2010). In some picturebooks, characters find themselves with a choice to make: pursue mathematics or have friends. As a child, Albert Einstein “didn’t want to be like the other students. He wanted to discover the hidden mysteries of the world” (Berne & Radunsky, 2013, opening 6). Why is Einstein’s choice presented as mutually exclusive? Did studying mathematics preclude him from being like other kids? In the case of other picturebook personae, such as Ada Byron Lovelace, “numbers were her friends” (Wallmark & Chu, 2015, openings 7 and 16) and for Paul Erdös, “numbers were his best friends” (Heiligman & LeUyen, 2013, p. 11). Later, he made human friends when he met others who loved mathematics (p. 14). We are concerned these picturebooks could leave young readers thinking, “I don’t want to be good at math because I don’t want to annoy people or be lonely.” In our article, we offer an analysis of overt and hidden messages in picturebooks, and consider how these messages may contribute to the formation of young people’s identities as learners and doers of mathematics. Inviting children to examine messages about what it means to do math, and what constitutes being ‘good’ at math, are steps toward welcoming a greater swath of learner identities into the boundless world of mathematical inquiry.
About the authors
Dr. Olga Fellus is an Assistant Professor of Mathematics Education at Brock University (Canada). Her work focuses on the interface between teaching and learning mathematics, identity making, and educational change. Olga can be reached at [email protected]
Dr. David E. Low is an Associate Professor of Literacy Education at Fresno State University (USA). His research explores how children and youth critically theorize race, gender, power, and identity through multimodal texts such as comics and picturebooks. David can be reached at [email protected]
Dr. Lynette D. Guzmán is a former mathematics teacher educator whose scholarship focused on broadening the ways students explore mathematical ideas in classrooms. She is currently a content creator for her brand, WizardPhD, and creates videos that bring forward philosophical perspectives on various media (games, film, books). She can be reached at [email protected]
Dr. Alex Kasman received his PhD in mathematics from Boston University in 1995 and subsequently held postdoctoral positions in Athens, Montreal, and Berkeley.
Since 1999, he has been a professor at the College of Charleston. He has published over 30 research papers in mathematics, physics, and biology journals. He also maintains a website that lists, reviews, and categorizes all works of “mathematical fiction”. The American Mathematical Society published his textbook on soliton theory in 2010 and the Mathematical Association of America published a book of his short stories in 2005. Alex can be contacted at [email protected] Dr. Ralph T. Mason’s work focuses on mathematics education, curriculum theory, and pedagogy. He can be reached at [email protected]
How the stars are formed
What is a star? Stars are giant balls of gas, mainly hydrogen and helium. In the core of a star, hydrogen nuclei join together to form helium. The heat in the core of a star can reach 15 million °C or even more. Imagine: a grain of sand this hot could kill someone 100 km away. The stages of star formation have been worked out by cosmologists and astronomers. It is a continuous process, and although we already know quite a lot about the way stars are born, astronomers continue to study them throughout space.
How the stars are born
Clouds of hydrogen, other gases and space dust gradually clump together under their own gravity, drawing in more matter and becoming denser. Eventually the matter becomes so hot and dense that the centres, or nuclei, of atoms begin to join, or fuse. This nuclear fusion gives off light, heat and other rays and energy – the star shines. This process probably first started to happen around 200 million years after the Big Bang. Before this, it was too hot and energetic for atoms to undergo nuclear fusion.
The Sloan Digital Sky Survey (SDSS) is a project to map and measure the distances of millions of galaxies, stars, and other objects from Earth. The survey is specialised to detect the feature called red shift. Combined with other evidence, this indicates the distance of a star or a galaxy from Earth and the direction and speed at which it is moving. The SDSS uses an optical reflecting telescope with a mirror 2.5 m across at the Apache Point Observatory in New Mexico, US. The survey's results in effect look far back in time by detecting extremely distant stars and other objects. Light from these objects has taken billions of years to reach us, so to us now, they look like they did soon after the Big Bang.
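To illustrate what it means for red shift to indicate a galaxy's distance and speed, here is a minimal sketch of the usual low-redshift shortcut: the recession velocity is roughly the speed of light times the redshift z, and Hubble's law (v = H0 * d) turns that velocity into a distance. The Hubble-constant value of 70 (km/s)/Mpc and the small-z approximation are simplifying assumptions; a survey like the SDSS uses far more careful cosmological calculations.

# Sketch: rough distance and recession speed from a measured redshift z,
# valid only for small z. H0 is taken as an assumed round value of 70 (km/s)/Mpc.

C_KM_S = 299_792.458      # speed of light in km/s
H0 = 70.0                 # Hubble constant in (km/s) per megaparsec (assumed)

def recession_velocity_km_s(z):
    return C_KM_S * z               # low-redshift approximation: v ~ c * z

def distance_mpc(z):
    return recession_velocity_km_s(z) / H0   # Hubble's law: d = v / H0

if __name__ == "__main__":
    for z in (0.01, 0.05, 0.1):
        print(f"z = {z:4.2f}: v ~ {recession_velocity_km_s(z):8.0f} km/s, "
              f"d ~ {distance_mpc(z):6.0f} Mpc")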
Xenophobia and Tolerance. Lessons of the Holocaust and Humanism.
The multimedia e-book “Xenophobia and Tolerance. Lessons of the Holocaust and Humanism” was created on the basis of unique testimonies of Holocaust survivors. Dramatic turns of human lives personify and “humanize” the history of countries and peoples. The stories of complex dilemmas of moral choice in critical situations, and examples of xenophobia and anti-Semitism combined with striking manifestations of the best human qualities, provide a basis for reflection on issues of tolerance and humanism. The e-book consists of 4 parts:
1. Xenophobia and Tolerance.
2. Lessons of the Holocaust – Lessons of Tolerance.
The manual is designed for secondary school teachers and pupils; it will also be of interest to students of the social sciences, as well as to anyone interested in history, psychology and ethics.
A “Hello, world!” program is a computer program that outputs or displays “Hello, world!” to a user. Being a very simple program in most programming languages, it is often used to illustrate the basic syntax of a programming language for a working program, and as such is often the very first program people write. A “Hello, world!” program is traditionally used to introduce novice programmers to a programming language. “Hello, world!” is also traditionally used in a sanity test to make sure that a computer language is correctly installed, and that the operator understands how to use it. The tradition of using the phrase “Hello, world!” as a test message was influenced by an example program in the seminal book The C Programming Language. The example program from that book prints “hello, world” (without capital letters or exclamation mark), and was inherited from a 1974 Bell Laboratories internal memorandum by Brian Kernighan. In addition to displaying “Hello, world!”, a “Hello, world!” program might include comments. A comment is a programmer-readable explanation or annotation in the source code of a computer program. They are added with the purpose of making the source code easier for humans to understand, and are generally ignored by compilers and interpreters. The syntax of comments in various programming languages varies considerably.
Function Main
    ... This program displays "Hello world!"
    Output "Hello world!"
End
Each code element represents:
- Function Main begins the main function
- ... begins a comment
- Output indicates the following value(s) will be displayed or printed
- "Hello world!" is the literal string to be displayed
- End ends a block of code
The following pages provide examples of “Hello, world!” programs in different programming languages. Each page includes an explanation of the code elements that comprise the program and links to IDEs you may use to test the program.
- A programmer-readable explanation or annotation in the source code of a computer program.
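As a concrete counterpart to the pseudocode above, here is a minimal "Hello, world!" program in Python, with the comment playing the same role as the line beginning with "..." in the pseudocode. Python is simply one convenient choice for illustration, not the specific language the article's pseudocode is meant to represent.

# This program displays "Hello, world!"
print("Hello, world!")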
Life in the Ghetto
Related Images: See the photographs related to this lesson.
Using the Analyzing Visual Images strategy and the Critical Analysis Process for exploring an artwork, print off the Related Images with the captions on their reverse sides and arrange them into the specified groups. Place each group of images on tables or display them on a wall for students to see. Ensure there is an obvious separation between each set of images. If your students have not used the Analyzing Visual Images strategy before, model it for the class using another image from the collection. After modelling the strategy, divide students into evenly numbered groups and assign each group a set of images. Each student should select an image from the group and apply the Analyzing Visual Images strategy. More than one student may select the same image. After completing this process, students should read the captions from the backs of the photographs and share their observations and analyses with the group. When all of the students have shared their ideas, ask them to discuss the following questions: What do the images in this collection have in common? What differences do you see within this collection of images? What title would you give to this collection of images? After completing this discussion, the groups should rotate to the next set of images and repeat the process. Continue this process until each group has worked with all four image sets. Once your students have seen all four collections of images, they should return to their seats and participate in a Think, Pair, Share discussion using a large piece of paper with two columns labelled "Collection Similarities" and "Collection Differences." Students should start writing individually in their notebooks, pair to fill in the large piece of paper, and then share their ideas with the whole group. The last piece of this lesson is an Exit Card. On their exit card, ask your students to do two things. First, they should answer the question: How do the photographs of Henryk Ross represent the complexity of life in the Lodz Ghetto? Second, ask students to pose a question of their own about the images. Students should hand in these cards as they exit the room.
Analyzing Visual Images and Stereotyping
This video shows Nazi footage of the Lodz Ghetto in the winter of 1940.
Testimony of Leo Schneiderman on life in the Lodz Ghetto.
Testimony of Blanka Rothschild on life in the Lodz Ghetto.
The largest earthquake recorded in the 20th century is the 1960 Valdivia earthquake, also known as the Great Chilean earthquake, which occurred on May 22, 1960 and struck approximately 100 miles off the coast of Chile, roughly parallel to the city of Valdivia. With a magnitude of 9.5 on the moment magnitude scale, this earthquake occurred in the afternoon and lasted approximately 2 minutes (brit). Just thirty minutes before the major earthquake, a foreshock shook the area near the towns of Valdivia and Puerto Montt (kids). The earthquake was preceded by four foreshocks with magnitudes greater than 7.0; the largest, a magnitude 7.9 event, caused a large amount of damage to the Concepción area. The devastating earthquake triggered a tsunami just off the coast of central Chile, which affected the entire Pacific Basin (usgs). After Chile, the tsunami traveled hundreds of miles and caused destruction all around the Pacific. The places that experienced the greatest impact were Hawaii and Japan (2010). The earthquake set off waves which bounced back and forth across the Pacific Ocean for a week (history). The Great Chilean earthquake, along with the large tsunamis, caused substantial damage to the country, resulted in the loss of lives and homes, and had long-term effects. Although the massive destruction had an overall negative effect, the natural disaster can also be seen positively, because it helps people better prepare if anything like this occurs again.
The geologic process behind the earthquake that hit Valdivia, Chile, started when the ground along either side of a fault moved. The reason for this is the buckling and stress from the movement of the tectonic plates. The epicenter of the earthquake was 60 meters below the ocean floor, about 100 miles off the coast of Chile. Valdivia and Puerto Montt suffered significant damage because of how near they were to the center of the massive quake (extreme). The earthquake was a megathrust earthquake, caused by the release of mechanical stress between the subducting Nazca Plate and the South American Plate along the Peru-Chile Trench.
The population was strongly affected by both the earthquake and the tsunami that was triggered afterward. The number of casualties is uncertain, but reports range from a low of 490 to a high of “approximately 6000”. Most of the damage and deaths were caused by the series of tsunamis, which traveled across the Pacific Ocean at a speed of over 200 miles per hour, with waves that reached up to 25 m. The waves swept over coastal areas, pushing buildings and drowning many people (geology). More than 3,000 people were injured, 2,000,000 were left homeless, and $550 million in damage was recorded in southern Chile. In Hawaii, the tsunami caused a reported 61 deaths and $75 million in damage (usgs). The tsunami arrived in Japan after traveling for 22 hours, which gave the Japanese enough time to put out alerts, but there were still 185 people either dead or missing, more than 1,600 homes destroyed, and a total of $50 million in damage. Another 32 people were left dead or missing in the Philippines approximately 24 hours after the earthquake (geology). Damage to the west coast of the United States totaled $500,000. This is the most powerful earthquake ever recorded, and it resulted in extensive damage. The environmental impacts of the earthquake in Valdivia included an estimated 40% of the houses being demolished, which left 20,000 people homeless (2010).
Wooden houses did not collapse but were still found uninhabitable. Other houses, which were built upon elevated areas, experienced less damage. Lowlands, on the other hand, absorbed great amounts of energy. Most buildings that were built of concrete collapsed due to the lack of earthquake engineering (history). The summary of damage includes 4,133 schools, 79 hospitals, 212 bridges and 9 airports damaged, and a total of 710 boats lost (edu). Some structures were never rebuilt; for example, many city blocks with destroyed buildings remained empty until the 1990s and 2000s, and some of these spaces are now used as parking lots. Most of the bridges suffered only minor damage; however, the Cau Cau Bridge was badly damaged and has not been rebuilt (history105). The tsunami generated by the earthquake also caused a substantial amount of damage to the environment, not only in Chile but also in Hawaii and California. In Hawaii, the areas affected most by the waves were completely destroyed except for buildings with reinforced concrete or structural steel. The damage in California was major, according to reports from the Los Angeles and Long Beach harbors. Damage included boats and ships being destroyed, which resulted in thousands of liters of gasoline and oil spilling, disruptions to the ferry service, damage to bridges, and destroyed highways (geology). The earth experienced changes during the quake because of the extraordinary amount of energy released from below, and these changes affected the geology. Massive landslides were sent down mountain slopes, some so enormous that they changed the course of major rivers. For example, the port city of Puerto Montt subsided (sank downward) because of the movement during the quake. Government policy was also affected by the earthquake because the event exposed the faults in the country's emergency systems. From their mistakes, they learned lessons which helped them better prepare for and respond to situations like this. These changes were set out by the Ambassador of Chile, Arturo Fermandois. A few of the problems observed involved seismological and telecommunication infrastructure: communications were down for more than 12 hours and sensors took more than 2 hours to give information. To help solve this problem, the government decided to provide real-time monitoring and robust telecommunications systems with multiple backups. Another problem was issuing alarms. The alarm system was unclear and there was no use of mass communication channels. To solve this, there are plans for clear communication protocols and more use of mass communication channels. Lastly, there was no special emergency task force in place to help evaluate the damage early on. The government will be working on developing an army emergency task force specialized in emergency procedures (edu). Hawaii was also affected by the ensuing tsunami and learned about the flaws in its tsunami warning system. It was shown that the victims who were killed or injured thought there would be two sirens and that the second one would mean an evacuation. They failed to understand that the first siren is the signal to evacuate immediately (wsspc). To improve, the government can help educate people to understand and recognize the different parts of the warning system.
Overall, this disaster, despite its devastation, helped Chile and other affected places learn lessons that will help them prepare for, manage, and respond better to disasters like this in the future.
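To put a 9.5 moment magnitude in perspective, the short sketch below uses the standard Gutenberg-Richter energy relation, log10(E) ≈ 1.5M + 4.8 with E in joules, to compare the energy radiated by the main shock with that of a magnitude 7.0 event of the size of the foreshocks mentioned above. The relation is an approximation and the comparison is purely illustrative; it is not a figure from the essay or its sources.

```python
def seismic_energy_joules(magnitude: float) -> float:
    """Approximate radiated seismic energy (in joules) for a given moment
    magnitude, using the Gutenberg-Richter relation log10(E) = 1.5*M + 4.8."""
    return 10 ** (1.5 * magnitude + 4.8)

# Compare the Mw 9.5 main shock with a Mw 7.0 event (illustrative only).
e_main = seismic_energy_joules(9.5)
e_foreshock = seismic_energy_joules(7.0)

print(f"Mw 9.5 releases roughly {e_main:.2e} J")
print(f"Mw 7.0 releases roughly {e_foreshock:.2e} J")
print(f"Energy ratio: about {e_main / e_foreshock:,.0f} to 1")
```

Each whole step on the moment magnitude scale corresponds to roughly 32 times more radiated energy, which is why the magnitude 9.5 main shock dwarfs even foreshocks that were themselves major earthquakes.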
Read and write words that end with the long o vowel pattern -ow. - Picture of a lawn mower* - Grass template* - Paper leaves or leaf word cards* - Things that Grow target text* - Blow-dryer (optional) - Frog and Toad All Year by Arnold Lobel (optional) *Items included below. State and Model the Objective Tell the children they will pretend to mow grass and blow leaves and then read and write long o words spelled with -ow, such as mow, low, blow, and slow. Mow the lawn - Give the children the grass template (found below) and have them write -ow words on it, such as mow, low, blow, slow, row, and grow. - Make the grass stand up by following the directions on the template. - Tape the bottom of the grass to a table or floor. - Tell the children to grow grass by unfolding the top layer of the grass. - Let the children pretend to mow the grass by cutting the paper grass (see below) with scissors. - Have the children make and write -ow words on paper leaves such as blow, row, mow, grow, slow, throw, and low or use the leaf word cards (found below). - Read the story Frog and Toad All Year by Arnold Lobel (optional). - Have the children read the words on the leaves. - Scatter the leaves on the ground. - Have the children try to blow the leaves into a pile with or without using a blow-dryer. - Tell the children to “Blow slowly!” Read target words in texts - Engage the children in reading the text Things that Grow (found below). - Read the text again, fading support. Write about the activity using target words and patterns - Help the children make a list of the words with -ow (e.g., grow, low, mow, slow, blow, row, throw). - Have the children read the words on the leaves and then throw the leaves. - Let the children write about mowing grass and throwing or blowing leaves (e.g., “I can mow the grass. I can help blow the leaves”). SEEL Target Texts Grass Grows and Leaves Blow And the leaves on trees grow too. Grass is slow to grow. Leaves on trees are slow to grow. Grass and leaves are slow to grow. Grass does not grow in rows. But grass can be mowed low in rows. And the leaves fall off the trees. The leaves can be raked into rows. The leaves can be blown into rows. And when the wind blows, the leaves don’t stay in their rows. Mow Grass and Blow Leaves
By Tim Lambert Ice age humans lived in caves some of the time but they also made tents from mammoth skins. Mammoth bones were used as supports. They wore boots, trousers, and anoraks made from animal skins. When the ice age ended a new way of life began. By 8,000 BC people in the Middle East had begun to farm. Food was cooked in clay ovens. The people of Jericho knew how to make sun-dried bricks and they used them to make houses. About 7,000 BC a new people lived in Jericho and they had learned to make mortar. They used it to plaster walls and floors. Catal Huyuk was one of the world’s first towns. It was built in what is now Turkey about 6,500 BC not long after farming began. Catal Huyuk probably had a population of about 6,000. In Catal Huyuk the houses were made of mud brick. Houses were built touching against each other. They did not have doors and houses were entered through hatches in roofs. Presumably having entrances in the roofs was safer than having them in the walls. (Catal Huyuk was unusual among early towns as it was not surrounded by walls). Since houses were built touching each other the roofs must have acted as streets! People must have walked across them. In Catal Huyuk there were no panes of glass in windows and houses did not have chimneys. Instead, there were only holes in the roofs to let out the smoke. Inside houses were plastered and often had painted murals of people and animals on the walls. People slept on platforms. In Catal Huyuk the dead were buried inside houses. (Although they may have been exposed outside to be eaten by vultures first). By 4,000 BC farming had spread across Europe. When people began farming they stopped living in tents made from animal skins and they began to live in huts made from stone or wattle and daub with thatched roofs. Bronze Age people lived in round wooden huts with thatched roofs. The first civilization arose in Sumer (which is now Iraq). There were a number of city-states. Each city had a protector god and the king was regarded as his representative on earth. Below the kings were nobles and rich merchants who lived in considerable comfort in large houses with many rooms. Their houses were two-story high and they were arranged around a courtyard. However poor people lived in simple huts. Another civilization arose in the Indus Valley. Its center was the city of Mohenjo-Daro. The city consisted of two parts. In the center part was a citadel. It contained a public bath and assembly halls. It also held a granary where grain was stored. The lower part of the town had streets laid out in a grid pattern. The houses were 2 or even 3 stories and were made of brick as stone was uncommon in the area. Bricks were of a standard size and the Indus Valley civilization had standard weights and measures. The streets had networks of drains. The Minoans were an early civilization on the island of Crete. The Minoans are famous for the palace at Knossos (although there were other palaces at Mallia, Zakro, and Phaistos). The palace at Knossos was built around a central courtyard. On the ground floor of the palace were storage areas. In them, grain and olive oil were stored in large clay jars called pithoi. The upper floors of the palace were living quarters and they were luxurious. Light wells let in both light and cool air. Wooden columns painted red supported ceilings. Frescoes were painted on the walls. Sometimes human beings were painted but often sea animals such as dolphins were shown. 
Some rooms in the palace of Knossos were lined with alabaster. The palace at Knossos had bathrooms and even a flushing toilet. Of course, only a tiny minority lived in luxury like that. Most people lived in simple stone huts of one or two rooms. Rich Egyptians lived in large, comfortable houses with many rooms. The walls were painted and the floors had colored tiles. Most wealthy houses had enclosed gardens with pools. Inside their homes, rich Egyptians had wooden furniture such as beds, chairs, tables, and chests for storage. However, instead of pillows, they used wooden headrests. Toilets consisted of a clay pot filled with sand. It was emptied regularly. Ordinary people lived in simpler homes made of mud bricks with perhaps four rooms. People may have slept on the flat roof when it was hot and they did most of their work outside because of the heat. The furniture was very basic. Ordinary Egyptians sat on brick benches around the walls. They used reed chests or wooden pegs on walls to store things. In the 6th century BC the city of Babylon built up an Empire in the Middle East. Ordinary people in Babylon lived in simple huts made from sun-dried mud bricks. However, if the owner was wealthy they might have an upper story. The rich lived in palaces with central courtyards. The walls were decorated with painted murals. There were even bathrooms with pipes for drainage. Greek homes were usually plain and simple. They were made of mud bricks covered in plaster. Roofs were made of pottery tiles. Windows did not have glass and were just holes in the wall. Poor people lived in just one, two, or three rooms. Rich Greeks lived in large houses with several rooms. Usually, they were arranged around a courtyard and they often had an upper story. Downstairs was the kitchen and the dining room (called the andron). So was the living room. Upstairs were bedrooms and a room for women called a gynoecium (the women wove cloth there and also ate their meals there away from the men). The rivals of the Greeks were the Persians. Rich Persians lived in palaces of timber, stone, and brick. They had comfortable upholstered furniture such as beds, couches, and chairs. Tables were overlaid with gold, silver, and ivory. The rich also owned gold and silver vessels, as well as glass vessels. They also owned tapestries and carpets. Rich people in the Persian empire also had beautiful gardens. (Our word ‘paradise’ comes from the Persian word for garden). For ordinary people, things were quite different. They lived in simple huts made from mud brick. If they were quite well off they might live in a house of several rooms arranged around a courtyard. However poor people lived in huts of just one room. Any furniture was very basic. By 650 BC a people called the Celts lived in France and the British Isles. The Celts lived in roundhouses. They were built around a central pole with horizontal poles radiating outwards from it. They rested on vertical poles. Walls were of wattle and daub and roofs were thatched. Around the walls inside the huts were benches, which also doubled up as beds. The Celts also used low tables. In Rome, poor people lived in blocks of flats called insulae. Most were at least five stories high. However they were often badly built, and their walls sometimes cracked and roofs caved in. Most people lived in just one or two rooms. The furniture was very basic. Rooms were heated by charcoal burned in braziers. The inhabitants used public lavatories. Most obtained water from public fountains and troughs. 
It was too dangerous for the inhabitants of insulae to cook indoors and they had to buy hot food from shops. In Roman Britain, rich people built villas modeled on Roman buildings and they enjoyed luxuries such as mosaics and even a form of central heating called a hypocaust. Wealthy Romans also had wall paintings called murals in their houses. In their windows, they had panes of glass. Of course, poorer Romans had none of these things. Their houses were simple and plain and the main form of heating was braziers. The Saxons lived in wooden huts with thatched roofs. Usually, there was only one room shared by everybody. (Poor people shared their huts with animals divided from them by a screen. During the winter the animal’s body heat helped keep the hut warm). Thanes and their followers slept on beds but the poorest people slept on the floor. There were no panes of glass in windows, even in a Thane’s hall and there were no chimneys. Floors were of earth or sometimes they were dug out and had wooden floorboards placed over them. There were no carpets. Peasant’s Houses In The Middle Ages Peasants’ houses were simple wooden huts. They had wooden frames filled in with wattle and daub (strips of wood woven together and covered in a ‘plaster’ of animal hair and clay). However, in some parts of the country huts were made of stone. Peasant huts were either whitewashed or painted in bright colors. The poorest people lived in one-room huts. Slightly better-off peasants lived in huts with one or two rooms. There were no panes of glass in the windows only wooden shutters, which were closed at night. The floors were of hard earth sometimes covered in straw for warmth. In the middle of a Medieval peasant’s hut was a fire used for cooking and heating. There was no chimney. Any furniture was very basic. Chairs were very expensive and no peasant could afford one. Instead, they sat on benches or stools. They would have a simple wooden table and chests for storing clothes and other valuables. Tools and pottery vessels were hung on hooks. The peasants slept on straw and they did not have pillows. Instead, they rested their heads on wooden logs. At night in summer and all day in winter the peasants shared their huts with their animals. Parts of it were screened off for the livestock. Their body heat helped to keep the hut warm. Rich People’s Houses In The Middle Ages The Normans, at first, built castles of wood. In the early 12th century stone replaced them. In the towns, wealthy merchants began living in stone houses. (The first ordinary people to live in stone houses were Jews. They had to live in stone houses for safety). In Saxon times a rich man and his entire household lived together in one great hall. In the Middle Ages, the great hall was still the center of a castle but the lord had his own room above it. This room was called the solar. In it, the lord slept in a bed, which was surrounded by curtains, both for privacy and to keep out drafts. The other members of the lord’s household, such as his servants, slept on the floor of the great hall. At one or both ends of the great hall, there was a fireplace and chimney. In the Middle Ages, chimneys were a luxury. As time passed they became more common but only a small minority could afford them. Certainly, no peasant could afford one. About 1180 for the first time since the Romans rich people had panes of glass in the windows. 
At first, glass was very expensive and only rich people could afford it but by the late 13th and early 14th centuries, the middle classes began to have glass in some of their windows. Those people who could not afford glass could use thin strips of horn or pieces of linen soaked in tallow or resin which were translucent. In a castle, the toilet or garderobe was a chute built into the thickness of the wall. The seat was made of stone. Sometimes the garderobe emptied straight into the moat! 16th Century Houses In the Middle Ages, rich people's houses were designed for defense rather than comfort. In the 16th century, life was safer so houses no longer had to be easy to defend. Rich Tudors built grand houses e.g. Cardinal Wolsey built Hampton Court Palace. Later the Countess of Shrewsbury built Hardwick Hall in Derbyshire. People below the rich but above the poor built sturdy 'half-timbered' houses. They were made with a timber frame filled in with wattle and daub (wickerwork and plaster). In the late 16th century some people built or rebuilt their houses with a wooden frame filled in with bricks. Roofs were usually thatched though some well-off people had tiles. In the 15th century, only a small minority of people could afford glass windows. During the 16th century, they became much more common. However, they were still expensive. If you moved house you took your glass windows with you! Tudor windows were made of small pieces of glass held together by strips of lead. They were called lattice windows. However the poor still had to make do with strips of linen soaked in linseed oil. Chimneys were also a luxury in Tudor times, although they became more common. Furthermore, in the Middle Ages, a well-to-do person's house was dominated by the great hall. It was not possible to build upstairs rooms over the great hall or the smoke would not be able to escape. In the 16th century, wealthy people installed another story in their house over the great hall. So well-off people's houses became divided into more rooms. None of the improvements of the 16th century applied to the poor. They continued to live in simple huts with one or two rooms (occasionally three). Floors were of hard earth. In the 16th century, the Spanish destroyed civilizations in North and South America including the Aztecs. Ordinary Aztecs lived in simple huts, often of just one room. The huts were made of adobe and any furniture was very simple such as reed mats to sleep on or sit on and low tables. Wooden chests were used to store clothes. Aztec nobles lived in much grander houses with many rooms. They were usually shaped like a hollow square with a central courtyard. It often contained gardens and fountains. By law, only upper-class Aztecs could build a house with a second story. If ordinary Aztecs did so, they could be executed. Inca houses were very simple. They often consisted of just one room (although some houses did have an upper story with a wooden floor). Inca homes did not have furniture. People sat and slept on reed mats or animal skins. Doors and windows were trapezium-shaped. (A trapezium is a four-sided shape with only two parallel sides). Roofs were thatched and there were no chimneys. Rich Incas, of course, lived in much grander homes. Inca palaces sometimes had sunken stone baths. Ordinary Maya lived in simple huts of wood or stone with thatched roofs. They had no chimneys or windows. They did not have wooden doors either. Instead, doorways were hung with cloth screens. There was very little furniture.
Mayans slept on beds, which were low platforms made of a wooden frame filled with woven bark. Dead Mayans were buried under the floors of their houses. Rich Mayans, of course, lived in far more elaborate homes with many rooms. 17th Century Houses In 17th century England ordinary people's houses improved. In the Middle Ages, ordinary people's homes were usually made of wood. However in the late 16th and early 17th centuries, many were built or rebuilt in stone or brick. By the late 17th century even poor people usually lived in houses made of brick or stone. They were a big improvement over wooden houses. They were warmer and drier. Furthermore, in the 16th century chimneys were a luxury. However, during the 17th century chimneys became more common and by the late 17th century even the poor had them. Furthermore, in 1600 glass windows were a luxury. Poor people made do with linen soaked in linseed oil. However, during the 17th century glass became cheaper and by the late 17th century even the poor had glass windows. In the early 17th century there were only casement windows (ones that open on hinges). In the later 17th century sash windows were introduced. They were in two sections and they slid up and down vertically to open and shut. Although poor people's homes improved in some ways they remained very small and crowded. Most of the poor lived in huts of 2 or 3 rooms. Some families lived in just one room. 18th Century Houses In the 18th century, a small minority of the population lived in luxury. The rich built great country houses. The leading architect of the 18th century was Robert Adam (1728-1792). He created a style called neo-classical and he designed many 18th-century country houses. However the poor had none of these things. Craftsmen and laborers lived in 2 or 3 rooms. The poorest people lived in just one room. Their furniture was very simple and plain. 19th Century Houses In the early 19th century houses for the poor in Britain were dreadful. Often they lived in 'back-to-backs'. These were houses of three (or sometimes only two) rooms, one on top of the other. The houses were literally back-to-back. The back of one house joined onto the back of another and they only had windows on one side. The bottom room was used as a living room and kitchen. The two rooms upstairs were used as bedrooms. The worst homes were cellar dwellings. These were one-room cellars. They were damp and poorly ventilated. The poorest people slept on piles of straw because they could not afford beds. Fortunately in the 1840s local councils passed by-laws banning cellar dwellings. They also banned any new back-to-backs. The old ones were gradually demolished and replaced with better houses over the following decades. In the early 19th century skilled workers usually lived in 'through houses' i.e. ones that were not joined to the backs of other houses. Usually, they had two rooms downstairs and two upstairs. The downstairs front room was kept for the best. The family kept their best furniture and ornaments in this room. They spent most of their time in the downstairs back room, which served as a kitchen and living room. As the 19th century passed more and more working-class people could afford this lifestyle. In the late 19th century workers' houses greatly improved. After 1875 most towns passed building regulations which stated that e.g. new houses must be a certain distance apart, rooms must be of a certain size and have windows of a certain size.
By the 1880s most working-class people lived in houses with two rooms downstairs and two or even three bedrooms. Most had a small garden. However, even at the end of the 19th century, there were still many families living in one room. Old houses were sometimes divided up into separate dwellings. Sometimes if windows were broken slum landlords could not or would not replace them. So they were 'repaired' with paper. Or rags were stuffed into holes in the glass. 20th Century Houses At the start of the 20th century, working-class homes had two rooms downstairs: the front room and the back room. The front room was kept for the best and children were not allowed to play there. In the front room, the family kept their best furniture and ornaments. The back room was the kitchen and it was where the family spent most of their time. Most families cooked on a coal-fired stove called a range, which also heated the room. This lifestyle changed in the early 20th century as gas cookers became common. They did not heat the room so people began to spend most of their time in the front room or living room, by the fire. In 1900 about 90% of the population rented their home. However, homeownership became more common during the 20th century. By 1939 about 27% of the population owned their own house. Central heating became common in the 1960s and 1970s. Double glazing became common in the 1980s. The first council houses were built before the First World War. More were built in the 1920s and 1930s and some slum clearance took place. However, council houses remained rare until after World War II. After 1945 many more were built and they became common. In the early 1950s, many homes in Britain still did not have bathrooms and only had outside lavatories. The situation greatly improved in the late 1950s and 1960s. Large-scale slum clearance took place when whole swathes of old terraced houses were demolished. High-rise flats replaced some of them. However, flats proved to be unpopular with many people. Some people who lived in the new flats felt isolated. The old terraced houses may have been grim but at least they often had a strong sense of community, which was usually not true of the flats that replaced them. In 1968 a gas explosion wrecked a block of flats at Ronan Point in London and public opinion turned against them. In the 1970s the emphasis turned to renovating old houses rather than replacing them. Then, in 1979 the British government adopted a policy of selling council houses. Last revised 2022
On December 16, 1773, American colonists dumped crates of tea from British ships into Boston Harbor in protest over the tea taxes imposed by the British government. The event became known as the Boston Tea Party, a direct action by colonists in Boston against those taxes. The incident is still an iconic event of American history, and other political protests often refer to it. The Tea Party was the culmination of a resistance movement throughout British America against the Tea Act, which had been passed by the British Parliament in 1773. The colonists believed that the taxes violated their right to be taxed only by their own elected representatives. Protesters had successfully prevented the unloading of taxed tea in three other colonies, but in Boston, the Royal Governor refused to allow the tea to be returned to Britain. He was not expecting the protestors to destroy the tea. The Boston Tea Party was a key event in the growth of the American Revolution. Parliament responded in 1774 with the Coercive Acts, which, among other provisions, closed Boston’s commerce until the British East India Company had been repaid for the destroyed tea. The crisis escalated, and the American Revolutionary War began near Boston in 1775.
Early Childhood Tooth Decay
About Tooth Decay
Tooth decay is a transmissible infection caused by bacteria, namely Streptococcus mutans and Lactobacillus. The bacteria feed on what you eat, producing acids which destroy tooth structure. The result is dental decay, which requires dental surgical intervention. Dental surgical intervention removes the tooth decay, but does not alter the fact that your child has a bacterially infected mouth. To prevent future decay the oral environment needs to change.
PREVENTING TOOTH DECAY
Four things are necessary for cavities to form:
- A tooth
- Bacteria
- Sugars or other carbohydrates
- Time
We can share with you how to make teeth strong, keep bacteria from organizing into harmful colonies, develop healthy eating habits, & understand the role that time plays. Remember, decay is an infection of the tooth. Visiting us early can help avoid unnecessary cavities & dental treatment. The pediatric dental community is continually doing research to develop new techniques for preventing dental decay & other forms of oral disease. Studies show that children with poor oral health have decreased school performance, poor social relationships & less success later in life. Children experiencing pain from decayed teeth are distracted & unable to concentrate on schoolwork.
We personalize care to kids and teens, making a trip to the dentist safe and fun!
IMPORTANCE OF PRIMARY TEETH (BABY TEETH)
It is very important that primary teeth are kept in place until they are naturally lost. These teeth serve a number of critical functions.
- Maintain good nutrition by permitting your child to chew properly.
- Are involved in speech development.
- Help the permanent teeth by saving space for them.
”A healthy smile can help children feel good about the way they look to others.” -American Academy of Pediatric Dentistry 2004
Tooth decay is the result of an imbalance between two factors: pathological factors & protective factors.
- Caries Risk Assessment starting in infancy, no later than the first birthday & continuing throughout life
- Caries Susceptibility Testing
- Bacteria Culture Testing to monitor the pathogenic bacteria, Streptococcus & Lactobacillus
- Dental Plaque pH Testing to measure for bacterially produced acids
Elevated levels of pathological bacteria in conjunction with other risk factors require intervention beyond parentally supervised oral hygiene and dietary changes.
THERAPY MAY INCLUDE:
- Sodium Bicarbonate
- Calcium & Phosphate Replacement
- Continual Monitoring of Risk Factors and Pathogenic Bacterial Level
Rationale
This rationale complements and extends the rationale for The Arts learning area. Visual arts includes the fields of art, craft and design. Learning in and through these fields, students create visual representations that communicate, challenge and express their own and others’ ideas as artist and audience.
Aims
In addition to the overarching aims of the Australian Curriculum: The Arts, visual arts knowledge, understanding and skills ensure that, individually and collaboratively, students develop: conceptual and perceptual ideas and representations through design and inquiry processes.
Structure
Learning in Visual Arts
Learning in Visual Arts involves students making and responding to artworks, drawing on the world as a source of ideas. Students engage with the knowledge of visual arts, develop skills, techniques and processes, and use materials as they explore a range of forms, styles and contexts.
Example of knowledge and skills
Years 9 and 10
Years 9 and 10 Band Description
In Visual Arts, students:
- build on their awareness of how and why artists, craftspeople and designers realise their ideas through different visual representations, practices, processes and viewpoints
- refine their personal aesthetic through working and responding perceptively and conceptually as an artist, craftsperson, designer or audience
- identify and explain, using appropriate visual language, how artists and audiences interpret artworks through explorations of different viewpoints
- research and analyse the characteristics, qualities, properties and constraints of materials, technologies and processes across a range of forms, styles, practices and viewpoints
- adapt, manipulate, deconstruct and reinvent techniques, styles and processes to make visual artworks that are cross-media or cross-form
- draw on artworks from a range of cultures, times and locations as they experience visual arts
- explore the influences of Aboriginal and Torres Strait Islander Peoples and those of the Asia region
- learn that Aboriginal and Torres Strait Islander people have converted oral records to other technologies
- reflect on the development of different traditional and contemporary styles and how artists can be identified through the style of their artworks as they explore different forms in visual arts
- identify the social relationships that have developed between Aboriginal and Torres Strait Islander people and other cultures in Australia, and explore how these are reflected in developments of forms and styles in visual arts
- use historical and conceptual explanations to critically reflect on the contribution of visual arts practitioners as they make and respond to visual artworks
- adapt ideas, representations and practices from selected artists and use them to inform their own personal aesthetic when producing a series of artworks that are conceptually linked, and present their series to an audience
- extend their understanding of safe visual arts practices and choose to use sustainable materials, techniques and technologies
- build on their experience from the previous band to develop their understanding of the roles of artists and audiences.
Years 9 and 10 Content Descriptions
Years 9 and 10 Achievement Standards
By the end of Year 10, students evaluate how representations communicate artistic intentions in artworks they make and view. They evaluate artworks and displays from different cultures, times and places.
They analyse connections between visual conventions, practices and viewpoints that represent their own and others’ ideas. They identify influences of other artists on their own artworks. Students manipulate materials, techniques and processes to develop and refine techniques and processes to represent ideas and subject matter in their artworks.
Cathay Williams (1844-1894) was the only woman known to serve as a Buffalo Soldier, and the first African-American woman to enlist in the United States Army. Cathay Williams was born to a free father and an enslaved mother, making her a slave. She worked as a house slave during her youth on a plantation near Jefferson City, Missouri. Prior to her voluntary enlistment, she was captured by Union forces in 1861 along with other slaves and forced to serve as military support. She traveled with the infantry and was present at the Red River Campaign and the Battle of Pea Ridge. In 1866, Cathay Williams enlisted in the U.S. Regular Army as a man under the false name of “William Cathay” because women were prohibited from serving in the military. She was assigned to the 38th U.S. Infantry Regiment after passing the cursory medical examination. However, shortly after enlisting, she was infected with smallpox and was hospitalized. While she managed to somehow maintain her secret identity through her hospitalization and rejoin her unit in New Mexico, it was not long before health issues caught up with her and she was discovered. Williams was honorably discharged. This was not the end of her adventures: she signed up with an all-black regiment and became part of the Buffalo Soldiers.
Millions of patients are diagnosed with diseases and conditions of the eye every year, some of which may not display symptoms until there is irreversible damage to the patient’s vision. The outcome of eye disease can range from temporary discomfort to total loss of vision, which is why all eye problems and diseases should be taken seriously and regular eye check-ups are absolutely essential. The main causes of eye problems can be divided into five groups: Inflammation of the eye and surrounding structures caused by a bacterial, viral, parasitic or fungal infection. Injuries to the eye and surrounding structures, either as a result of trauma or an object in the eye. Genetically inherited eye diseases, many of which may only manifest later in life and which affect the structures and functioning of the eye and can therefore impair visual abilities; in some cases, however, children are born with these conditions. Diseases or conditions, such as migraine or diabetes, which can affect other organs of the body as well as the eyes. External causes, such as allergies or eye strain due to over-use, or a side effect of medication. The three symptoms indicative of eye disease are changes in vision, changes in the appearance of the eye, or an abnormal sensation or pain in the eye. Changes in vision can include the following symptoms: Nearsightedness is caused by an elongation of the eyeball over time, making it difficult to clearly see objects far away. Farsightedness is caused by the shortening of the eyeball, making it difficult to see objects that are close by clearly. Blurry or hazy vision, or loss of specific areas of vision, which can affect one or both eyes and is the most common vision symptom; any sudden changes in vision should be a cause of concern. Double vision means a single clear image appears to repeat itself. This could be accompanied by other symptoms like headaches, nausea, a droopy eyelid, and misalignment of the eyes. Floaters are specks or strands that seem to float across the field of vision. These are shadows cast by cells inside the clear fluid that fills the eye. They are usually harmless, but should be checked out as they could point to something serious such as retinal detachment. Loss of vision after being able to see before. Night blindness is the inability to see clearly in the dark or to adapt to the dark, especially after coming out of a brightly lit environment. Impaired depth perception means a person has difficulty distinguishing which of two objects is closer to him/her. Changes in the appearance of the eye include, but are not restricted to, the following: Redness or swelling of the eyes, which take on a bloodshot appearance. Watery and itchy eyes; depending on the cause, discharge from the eyes is also possible. Redness and swelling of the eyelid. Cloudy appearance of the eye, which occurs due to a build-up of proteins making the lens of the eye appear cloudy; this can be symptomatic of cataracts. Eyelid twitch, which happens when eyelid muscles spasm involuntarily over a period of time. Bulging eyes could be a symptom of hyperthyroidism or an autoimmune disorder called Graves’ disease. Drooping eyelids can be a sign of exhaustion, aging, migraines or a more serious medical problem. Pain on the surface of the eye is called ocular pain, while pain within or behind the eye is called orbital pain. Ocular pain can be caused by a scratch or a slight injury to the cornea of the eye or the presence of a foreign object in the eye and often causes redness of the eye.
Orbital pain can be sharp or throbbing and may go beyond the surface. It should be a cause for concern if it is accompanied by vision loss, vomiting, fever, muscle aches, eye bulging or difficulty in moving the eye in certain directions. Trauma to the eye or the surrounding facial areas can also be the cause of pain. The treatment of eye diseases is divided into four main categories, including: Prescription glasses or contact lenses. Treatment of systemic conditions affecting the eye.
Normal brain tissue is represented by four different regions: cerebellum, cerebral cortex, hippocampus and caudate. The nervous system represents the major communication network and consists of the central nervous system (CNS) and peripheral nervous system (PNS). The intracranial cerebrum and cerebellum together with the spinal cord constitute the CNS. The brain is covered by layers of membranes, the meninges, and submerged in cerebrospinal fluid, which also fills the intracerebral ventricles. The brain can grossly be divided into different neuroanatomical functional regions such as the frontal, parietal, temporal and occipital lobes and central gray matter structures. Anatomically and histologically the brain can further be stratified into the cerebral cortex, representing the outermost gray matter overlying white matter, and the innermost deep gray matter components. The hippocampus, containing the neuron-rich dentate fascia, is closely associated with the cerebral cortex and is located in the medial temporal lobe. The cerebral cortex incorporates neurons (nerve cells) and glial cells (supportive cells), whereas the white matter incorporates primarily glial cells and myelinated axons from neurons. The brain parenchyma is composed of neurons embedded in a framework of glial cells (astrocytes and oligodendrocytes) as well as microglia and blood vessels. The ependymal cell is also a specialized glial cell that lines the ventricles and is closely related to the cellular component of the choroid plexus, which produces cerebrospinal fluid. In addition to the cell bodies that can be defined in the microscope, cell processes from neurons and glial cells form a synaptically rich "background substance" denoted as the neuropil. The cerebellum, important for coordination, appears as a highly ordered tissue with distinct layers, including the cell-dense granular layer and the fiber-rich but sparsely cell-populated molecular layer, between which the large Purkinje cells (specialized neuronal cells) are located. The neurons are a morphologically and functionally heterogeneous family of cells that can transmit information through chemical and electrical signaling. Neurons vary in size from the small round cells that populate the internal granular layer of the cerebellum to the large pyramidal neurons of the primary motor cortex and the Purkinje cells of the cerebellum. Astrocytes represent the major glial cell type in the brain and are characterized by their cellular cytoplasmic processes reaching both synapses and capillary walls. The astrocyte is a star-shaped cell involved in the maintenance of the microenvironment surrounding neurons and is also important for the blood-brain barrier function. Oligodendrocytes are the main producers of myelin and are characterized by their small, rounded, lymphocyte-like nuclei.
An international team of researchers has developed a fast, accurate and non-invasive method for diagnosing TB, an infectious disease that threatens the lives of over 10 million people every year. Created with support from the EU-funded A-Patch project, the new method uses a patch that absorbs TB-specific compounds that are detected in air trapped above the skin. Although TB is a treatable and curable disease, diagnosis remains a serious obstacle. At the moment, about 3 million active cases are missed by global health systems and the disease’s non-specific symptoms lead to millions of patients receiving incomplete or delayed diagnoses. Current diagnostic tests are also slow, not sensitive and/or specific enough, and too complex for places with limited resources. For example, mycobacterial cultures take 4 to 8 weeks, and at least 3 visits are needed before a patient’s diagnosis is finalised and treatment can begin. Another serious barrier is cost. A sputum smear costs EUR 2.2 to 8.9 per examination, which can be prohibitive considering that the overwhelming majority of TB cases occur in developing countries – in some of which people live on as little as EUR 1 a day. The need for a fast, cheap and sputum-free test for diagnosing TB led to the creation of this patch that is applied to the patient’s inner arm. The patch contains a pouch of absorbent material for capturing a variety of TB-specific volatile organic compounds (VOCs) that are released into the bloodstream by infected cells and can be detected in air trapped above the skin. When VOC values deviate from the healthy VOC range, this indicates either TB infection or a high risk of infection. The skin-based TB VOCs are detected and translated into a point-of-care diagnosis using a specially designed array of nanomaterial-based sensors.
Trialling the patch
The research team tested their absorbent skin patch in clinical trials in India and South Africa. The study population included newly diagnosed and confirmed active TB cases, healthy volunteers and confirmed non-TB cases. The patch proved to be highly effective in diagnosing the disease and met the World Health Organization’s criteria for a highly sensitive and specific new TB triage test that isn’t affected by difficult factors such as HIV status and smoking habits. The results have been published in the journal ‘Advanced Science’. “Our initial studies, done on a large number of subjects in India and in South Africa showed high effectiveness in diagnosing tuberculosis, with over 90 % sensitivity and over 70 % specificity,” observed study first author Dr Rotem Vishinkin of A-Patch project coordinator Technion – Israel Institute of Technology in a news item posted on the ‘Medical Dialogues’ website. “We showed that tuberculosis can be diagnosed through the compounds released by the skin. Our current challenge is minimizing the size of the sensor array and fitting it into the sticker patch.” The A-Patch (Autonomous Patch for Real-Time Detection of Infectious Disease) method promises to provide precise diagnoses of TB patients fast, easily and cost-effectively, without the need for specialised personnel. Just as important, the same method could in the future be used to also diagnose and monitor other diseases, such as COVID-19, in parts of the world that most need it. For more information, please see: A-Patch project website
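To make the reported sensitivity and specificity figures concrete, the short sketch below applies Bayes' rule to estimate the positive and negative predictive values of such a triage test. The 90% and 70% inputs are simply the thresholds quoted above, and the prevalence values are hypothetical illustrations chosen for the example, not data from the study.

```python
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Return (PPV, NPV) for a diagnostic test given sensitivity, specificity
    and disease prevalence, using Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# Reported patch performance: >90% sensitivity, >70% specificity.
# Prevalence values below are hypothetical, to contrast high- and low-burden settings.
for prevalence in (0.10, 0.01):
    ppv, npv = predictive_values(sensitivity=0.90, specificity=0.70, prevalence=prevalence)
    print(f"prevalence {prevalence:.0%}: PPV {ppv:.1%}, NPV {npv:.1%}")
```

Under these assumed prevalences the negative predictive value stays very high, which is what matters most for a triage test whose job is to rule out TB cheaply before confirmatory testing.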
What is an orbital tumor?
- An orbital tumor is any tumor that occurs within the orbit of the eye. The orbit is a bony housing in the skull about 2 inches deep that provides protection to the entire eyeball except the front surface. It is lined by the orbital bones and contains the eyeball, its muscles, blood supply, nerve supply, and fat.
- Tumors could develop in any of the tissues surrounding the eyeball and could also invade the orbit from the sinuses, brain, or nasal cavity, or they could metastasize (spread) from other areas of the body. Orbital tumors can affect adults and children. Fortunately, most are benign.
What causes orbital tumors?
- Most childhood orbital tumors are benign and are the result of developmental abnormalities.
- Common orbital tumors in children are dermoids (cysts of the lining of the bone) and hemangiomas (blood vessel tumors).
- Malignant tumors are unusual in children, but any rapidly growing mass should be cause for concern.
- Rhabdomyosarcoma is the most common malignant tumor affecting children, and it usually occurs between the ages of 7 and 8.
- The most common orbital tumors in adults are also blood vessel tumors, including hemangioma, lymphangioma, and arteriovenous malformation.
- Tumors of the nerves, fat, and surrounding sinuses occur less often.
- Lymphomas are the most often occurring malignant orbital tumors in adults.
- Metastatic tumors most often arise from the breast and prostate, while squamous and basal cell cancer can invade the orbit from surrounding skin and sinus cavities.
What are the symptoms of an orbital tumor?
- Symptoms of an orbital tumor could include protrusion of the eyeball (proptosis), loss of vision, double vision, swelling of the eyelids, or an obvious mass.
- Prominence of the eyes is not necessarily the result of a tumor, but could result from inflammation such as that caused by Graves' thyroid disease.
- In children, parents could initially notice a droopy eyelid or slight protrusion of the eye.
How are orbital tumors diagnosed?
- Orbital tumors are most often diagnosed with either a CAT scan or MRI. If either of those tests looks suspicious, a biopsy could be performed.
How are orbital tumors treated?
- Treatment of orbital tumors varies depending on the size, location, and type.
- Some orbital tumors require no treatment, while others are best treated medically or with the use of radiation therapy.
- Some could need to be totally removed by either an orbital surgeon or a neurosurgeon, depending on the particular case.
- After removal, additional radiation or chemotherapy could be required. Surgery has become much safer because CT scans and MRI testing can help pinpoint the location and size of the tumor.
What is Posterior Cord Syndrome? This occurs when the damage is towards the back of the spinal cord. The spinal cord carries motor commands and sensory information between the brain and the periphery. Damage to the posterior spinal cord, whether due to disease, tumor, or injury, can result in devastating consequences because these connections are interrupted. Posterior spinal cord injury produces the condition called posterior cord syndrome. The syndrome is characterized by particular symptoms, the hallmarks of which are differences in the extent of sensory and motor impairments below the level of the lesion.
Sensory Loss
The posterior spinal cord carries mainly sensory information from the periphery to the brain. This is critical information to the brain and includes sensations about the position of the body and limbs in addition to vibration sense and the ability to finely discriminate touch sensations. Destruction of neurons in the posterior spinal cord results in loss of these sensations below the level of involvement. Neuron destruction can be accompanied by other odd sensations on the skin, as well as shooting or burning pain, prickling and a feeling like that produced by insects crawling on the skin. Pain and temperature sensation, however, are preserved.
“Platypus genome holds key to its testes” Platypus (Image via Healesville Sanctuary) Scientists have sequenced the platypus genome – and this has revealed the genes that govern how their testes hang (those of the platypus, but it is hoped that the sequence will lead to pinpointing important genes involved in sexual development in all mammals, including scientists). There is a lot more to this article than testicles and their descent: “…the platypus is exciting because it represents the earliest known branch in the mammalian lineage; the last common ancestor between humans and the platypus was around 230 million years ago. By comparing the genomes of humans and other mammals with the platypus, scientists can work out which genes have been conserved best through evolution. The longer a gene has lasted through time the more likely it is to have an important biological function. So the platypus genome will help scientists to focus on important parts of otherwise unwieldy mammalian genomes.”
Adaptation of the Maize-Beans (Mesoamerican) Farming System
The Maize-Beans (Mesoamerican) Farming System varies between countries and is characterized by the importance of two main crops, maize and beans, which play a vital role in the diets and culture of local people (Dixon et al., 2001). The farming system, which occupies 65 million ha, extends over eight countries, from southern Mexico to Panama. Before the Spaniards' arrival in the 15th century, this geographical bridge between two continents was highly influenced by the civilizations of local Indian peoples. Because of its history, location and special climate conditions, Mesoamerica is considered an origin of agriculture with high genetic diversity (DeClerck et al., 2010). In this predominant upland farming system, maize and beans ensure food security for millions of farmers, but because of fragmented small holdings, low yields and a high degree of on-farm consumption, high poverty is found throughout the system. Coffee and fruit estates are the main off-farm income sources for small-scale farmers, who migrate to them seasonally. The vast majority of agricultural production in the system takes place in a rainfed environment. In contrast, only a small part (2 million ha) of agricultural land is under extensive irrigation, which is mostly controlled by larger-scale agricultural producers. The development of the export-oriented private sector in the last 30 years promoted diversification into local high-value fruits and vegetables (Dixon et al., 2001). Soil erosion and deforestation are the main abiotic stresses for the farming system. Because the agricultural lands are almost fully exploited, smallholder farmers tend to utilize marginal hillsides and steeper slopes and use the unsustainable "slash and burn" practice. Due to unsustainable logging practices, the rate of forest cover change reached -0.7 percent annually during the last two decades of the past century, which is one of the highest rates in the world (DeClerck et al., 2010). This paper describes the main peculiarities of the system. It provides evidence for different adaptations and focuses on a particular location within the farming system, the western highlands of Guatemala, which are a typical reflection of the Maize-Beans (Mesoamerican) farming system.
Figure 1: System Location (Dixon et al., 2001)
The delineation of the maize-beans (Mesoamerican) farming system encompasses mostly upland areas from the five southern states of Mexico to the Panama Canal. The Mesoamerican mountain chains, which play a land-bridge role connecting North America to South America, are also a barrier between two major oceans. The narrow part of the isthmus is 80 km wide, bordered by the Pacific Ocean to the west and the Caribbean Sea to the east (Gordon, 1976). The total area of the system is 65 million ha, with different latitudes and relief creating a variety of environments, considered "one of the original 25 global biodiversity hotspots" (DeClerck et al., 2010). Most agricultural activities are conducted from 400 to 2,000 m above sea level, but there are some exceptions, such as the Guatemalan highlands at up to 3,500 m above sea level. The southern Mexico part of the system shares the main cultural, social and agronomic features of the system, but agro-ecologically it has different characteristics, such as poor soils, low temperatures, and cultivated lands at higher elevations (from 2,000 to 3,000 m above sea level) (Dixon et al., 2001).
Climate and Crop-Weather Interaction
The farming system encompasses the part of the Central America region spanning latitudes approximately from 8° to 23°N (Tropic of Cancer) and longitudes from 105° to 77°W. Weather conditions, characterized by a predominance of dry to moist sub-humid zones, are seriously affected by different climatic forces from the two surrounding oceans. Annual rainfall in the Mesoamerican farming system varies between 1,000 and 2,000 mm, with the Pacific the drier side and the Caribbean Sea the more humid. Because of the high variety of environments, the level of precipitation changes across the system and reaches 3,000 to 5,000 mm on the Mexican plateau. The regional climate is defined by dry winters and wet summers, whereby the Atlantic slope experiences more rainfall, with alternating seasons and higher humidity, than the Pacific slope. Temperature variation during the year is minimal, on average 4°C, and declines from the north to the south of the system (Dixon et al., 2001). The Mesoamerican farming system experiences a long dry season from December to April (up to 6 months), which is followed by a bimodal rainfall pattern. The wet period, which starts in May and lasts until November, is interrupted by a short dry period from July to August. Rainfall is directly linked to agricultural activities and can strongly influence the length of the crop growing period (Schmidt et al., 2012).
Figure 2: Typical rainfall pattern in the system (mm) (Schmidt et al., 2012)
The main planting season (Primera) starts in May-June, as the first rainy season takes place. The maize planted in this period is harvested in September. The Primera is followed by the short dry period, the Canicula, in July-August. After the Canicula, the second rainy season (Postrera) starts. During the Postrera, beans are intercropped with the grown maize. Sometimes a second crop of maize is planted during this period. Beans planted during the Postrera are harvested in November-December. In some humid parts of the system, a third planting, called the Apante, is started in December-January. Maize or beans cultivated in this period are harvested in February or March (Schmidt et al., 2012). A good example of how rainfall and climate can affect cropping decisions is the short dry season, the Canicula, which can threaten food security in the region. A very dry Canicula that starts early or lasts longer than usual can change the timing of crop planting. By shortening the maize-bean growing period it affects both crops planted in the Primera and Postrera seasons. El Niño, which is a serious problem for the Central American corridor, can not only extend the Canicula period but also decrease precipitation in the main cropping season, the Primera. Natural resources in the Mesoamerican farming system, such as forest and land, have experienced severe degradation in the last 50 years. Generally, most soils in the farming system are quite fertile because of their volcanic origin, but on the steeper slopes where most smallholder farmers are located, soil erosion is the main constraint. Due in part to rising population levels, land fragmentation increased and the average size of holdings decreased. All of this has increased the pressure on land and water resources. In the 1980s, FAO estimated that more than 40 percent of all land in El Salvador and 35 percent of land in Guatemala was subject to erosion (Dixon et al., 2001). Depletion of soil organic matter is greatly accelerated by shorter fallow periods. Because land is becoming scarce, farmers are not giving soils enough time to regenerate their structure and fertility.
Even though the deforestation rate is lower today than fifty years ago, it is still high. Between 1990 and 1995 alone, unsustainable logging destroyed 450,000 hectares of forest in the system (DeClerck et al., 2010). Thanks to local and regional conservation programmes, about 11 percent of Mesoamerican territory (some of it outside the system) is under protection, for example as national parks. Biodiversity has also been threatened by large monoculture and export-oriented farms in the system; export crops that require large-scale estates have important consequences for the Mesoamerican biological corridor.

Table 1: Forest cover by year and country (1,000 ha) (Carr et al., 2006)

Widespread deforestation and environmental degradation in the hillside areas of the Mesoamerican farming system have been caused mainly by inappropriate slash-and-burn practice (Food and Agriculture Organization, 2016). In most areas of the world, slash-and-burn cultivation of maize, beans and other crops appeared sustainable as long as population density was stable. But because of rapid population growth, severe land fragmentation and the use of marginal lands, forests failed to regenerate, slope erosion increased and the system finally broke down.

Figure 3: Mulch and leaving trees to prevent soil erosion

In response to unsustainable slash-and-burn practice, farmers in Honduras adopted the "slash and mulch" technique, which can reduce soil erosion and deforestation. Instead of burning the forest, beans or sorghum are planted directly into naturally regenerated woodland, which is later pruned. After cutting, the thinnings, leaves and branches are spread on the soil, creating a layer of mulch. Fruit trees, fuelwood species and other high-value timber are left to grow. Unlike the traditional relay-cropping system, in slash and mulch maize is planted later, after the beans or sorghum are harvested; because mulch can slow maize seedling emergence and early development, maize is not the pioneer crop here. Pruning ensures that the maize and beans receive enough sunlight, while leaves, thinnings and other residues keep the soil untilled and moist (FAO, 2016).

The importance of maize and beans in the system

In the Mesoamerican farming system the most widely consumed crops are maize and beans, although tree fruits, vegetables, coffee and other cash crops are also cultivated across the system. There are three main maize-bean cropping patterns widely used in the system. The first is intercropping, when maize and beans are planted together at the same time (in the same or different rows). The second is relay cropping, when they are planted at different times but their growing periods overlap at least once. The last is rotation, when one crop is planted after the first has been harvested (FAO, 2016). Unlike the large monoculture fields that give economies of scale to big producers, intercropping strategies are adopted by smallholder farmers in the system. When maize and beans are grown in monoculture the yields are higher than when they are intercropped, but by intercropping farmers can reduce production costs. Moreover, intercropping reduces maize yields more than bean yields, and beans, which sell for about four times the price of maize, guarantee higher and more stable incomes.

Figure 4: The Ancient Three Sisters Method

Evidence shows that the mixed cropping of maize and beans with a third traditional crop, squash, began more than 5,000 years ago in this territory.
The "three sisters" (maize, beans and squash) together create a symbiotic relationship that benefits the crops in different ways (Gordon, 1976). The maize provides a stalk for the beans, helping them climb and catch the sunlight. The beans fix nitrogen in the soil, which is consumed by the maize. The squash, a creeping plant, maintains soil moisture and reduces soil erosion by covering the ground with its big leaves, and it also suppresses the weeds around the maize and beans. The three sisters also provide a nutritionally healthy diet: maize is rich in carbohydrates, while beans furnish protein and other required amino acids. These two crops, together with squash, which is a source of vitamin A, create a well-balanced diet for local people (Food and Agriculture Organization, 1992). Finally, this kind of intercropping plays the role of insurance: if one crop fails, the farmer can still harvest the others.

Diversification in the non-traditional agricultural sector

The existing trends that aim to benefit smallholder farmers in the Mesoamerican farming system are the following: exploitation of new marginal lands, intensification, and diversification (Dixon et al., 2001). Because of limited land resources, only steeper slopes remain available for exploitation, which can give farmers only short-term gains; moreover, in the long run, soil erosion, climate variability and flooding put people living in the region at risk. The second trend, intensification of production, cannot by itself guarantee food security in the region: while some increase in maize and bean production is predicted in the system, trade liberalization and falling international prices will limit the benefits to farmers. Diversification, as an alternative source of income generation for Mesoamerican farmers, started in the 1970s with horticulture and fruit production. Adoption of this strategy by a large number of indigenous people can play a vital role in eradicating extreme poverty, which is strongly correlated with the share of indigenous people in the maize-bean farming system.

In the early 1970s, the U.S. started looking for alternative sources of snow peas to meet increased demand (Dixon et al., 2001). Because parts of Guatemala are agronomically well suited to the crop, American entrepreneurs started snow pea production there. Problems in obtaining land from the local population pushed the big corporations to start buying snow peas from small independent producers. Within ten years, small producers had connected directly with exporters and bypassed the agribusinesses. The local Guatemalan population, most of whom did not even speak Spanish, managed to earn extremely high returns by economizing on labour and land costs compared with the big corporations. Furthermore, some farmers who had access to irrigation started cultivating broccoli, which also became a high-return export product. By setting appropriate legislative and policy frameworks, the government, in collaboration with small producers, exporters and non-governmental export support organizations (GEXPRONT), played an important role in boosting non-traditional sectors in Guatemala. By 1996, more than 21,000 indigenous households were involved in snow pea and broccoli production, and it was estimated that Guatemala exported about 55 million dollars' worth of snow peas to the U.S. market annually. Indigenous families who successfully managed production diversification could earn from $1,400 to $2,500 per year (Dixon et al., 2001).
A typical example of the Maize-Beans (Mesoamerican) farming system

Evaluation of western highland livelihoods in Guatemala highlights the main characteristics of the Maize-Beans (Mesoamerican) farming system and clearly demonstrates the trends and issues found throughout the system. Extensive poverty, which predominates in the system with a regional average of 60 percent, reaches 80 percent in the western highlands of Guatemala. Malnutrition is a serious problem for the local indigenous population, especially before harvests (Dixon et al., 2001). Because of the armed conflict of past decades, public infrastructure is scarce and sometimes completely absent in these regions. Distance from administrative centres, poor infrastructure and the lack of year-round road access limit the availability of healthcare, education and even marketplaces for many indigenous communities.

A typical household in the Guatemalan highlands controls about 3.5 hectares of land, more than half of which is dedicated to maize and bean production. A large part of the cultivated maize and beans is consumed at the household level, and only occasionally is a surplus sold on the market. Sometimes maize and beans are cultivated more than twice in a year, depending on the climate and the rainy seasons. The main cash crop for local people is coffee, which occupies about 0.5 hectares of traditionally owned land. Because coffee prices are highly variable, food insecurity persists in the region. Diversification into high-demand, export-oriented vegetables such as snow peas and broccoli has created a new source of income for some smallholder farms: well-suited agronomic and climatic conditions in Guatemala allow a snow pea harvesting season from October to May, which is the shortage period for snow peas in export markets.

Figure 5: Seasonal cropping/feeding calendar in the Guatemalan highlands

Figure 6: Diagram of the typical farm system of the Guatemalan highlands

An important source of income during the pre-harvest period is seasonal migration or remittances from abroad. Income generated from wage labour on coffee estates is sometimes the main source of funds for children's education. Some wealthier households in the Guatemalan highlands own cows for milk and draught power, as well as some chickens; the protein gained from meat and milk consumption supplements local people's diets. Because fertilizers are expensive and most farmers have no access to them, the manure provided by animals is an important means of improving soil composition. Compost produced from crop residues and manure is mostly applied to cash crops such as coffee and vegetables. Livestock, on the other hand, are kept on unimproved pastures or fed maize straw and other by-products; the latter is especially important during the dry season, when pastures are in poor condition (Barber, 1999).

From this example of Guatemalan highland livelihoods it can be deduced that maize and beans alone, the crops that give the Mesoamerican farming system its name, cannot guarantee food security. Diversification into horticulture, off-farm activities and crop-animal interactions ultimately benefits smallholder farmers in the region. Analysis of the Maize-Beans (Mesoamerican) farming system shows that its adaptation varies between countries and regions, but some main characteristics define the overall picture of the farming system.
Moreover, there are some general future development strategies that aim to facilitate poverty reduction in the maize-beans farming system and that require considerable attention from the government sector (Dixon et al., 2001). The first is diversification, which started in the 1970s and should be promoted further in the future. The Guatemalan example clearly demonstrates the importance of diversification into non-traditional sectors. While this process was started by the private sector, a crucial role remains with the government: fair competition in markets, low entry barriers, support for farmers' associations, training and other extension services should all be ensured by the government. Moreover, the undeveloped land market, which delays the transfer of land from unsuccessful to successful farmers, should be developed.

The second strategy is to support off-farm employment in the region. Maize and beans produced in the farming system could provide sufficient food for local people, but because of limited access to off-farm income sources, households sell part of their crops, which creates secondary malnutrition. By modernizing infrastructure and providing training and tax benefits, the government should promote tourism, which offers an additional source of income for the local population.

An exit strategy is the last resort for farmers who cannot pursue diversification or off-farm employment. Unfortunately, past experience shows that this kind of transition is hardest for the poorest segments of the population to adopt. Even this exit strategy requires some source of finance to ensure that migrants are successfully absorbed at their final destination. Payments to abandon sub-marginal lands, a functioning land market, vocational training for migrants and other incentives should be developed by the government for a successful transition process.
Perennials are often grown in winter to get them ready for spring sales, which means that supplemental lighting is often required to produce high-quality plants in a timely manner. The electricity costs associated with supplemental lighting can be high, so it's important that supplemental light is provided in the most efficient way possible. For a long time, lighting recommendations have been based on the daily light integral (DLI), the total amount of light received by a crop over a day. DLI is calculated by integrating instantaneous measurements of photosynthetic photon flux density (PPFD), the intensity of photosynthetic light, over the course of the day. But basing lighting decisions solely on DLI may not be optimal.

By Dr. Marc van Iersel and Claudia Elkins, University of Georgia

In our research, funded by the American Floral Endowment, we take a systematic approach towards finding the best supplemental lighting strategies. Our goal is to help growers produce high-quality crops while assuring that lighting costs are not excessive. This starts with understanding the basic physiology of how efficiently different species use light, and then developing lighting strategies based on those physiological responses.

Light use efficiency of perennials

Plants need light for photosynthesis, but as plants receive more light, they use that light less efficiently. Understanding how efficiently different species use light is important: supplemental light should only be provided when plants can use that light efficiently. Measuring a plant's light use efficiency is surprisingly easy and takes advantage of a little-known property of all plants, known as chlorophyll fluorescence. When plants are exposed to light, much of that light is absorbed by chlorophyll and associated pigments in the leaves. Much like in a solar panel, that light is used to create tiny electrical currents inside leaves, and the energy from that current drives the reactions of photosynthesis; indirectly, it provides the energy for all life on earth.

Not all absorbed light is used to create a current…

However, not all the absorbed light is used to create a current. Some of the energy is converted into heat, while a small fraction of that light energy is converted into fluorescence. All leaves exposed to light give off a small amount of red light. This fluorescence is not enough to see with the naked eye, but it can easily be measured. And by measuring fluorescence, we can determine exactly how much of the light absorbed by a leaf is converted into current and used for photosynthesis. We use chlorophyll fluorescence measurements to quantify how the light use efficiency of different perennials is affected by the PPFD. Two things are clear. First, the light use efficiency of all species decreases at higher PPFDs. Second, there are important differences among species (Figure 1), and those differences have important implications for supplemental lighting. At very low PPFDs, all species use light with similar efficiency: between 70 and 80% of the light is used for photosynthesis. However, how rapidly the light use efficiency decreases with increasing PPFD depends on the species. The light use efficiency of Heucherella, a plant that thrives in shade, decreases rapidly with increasing PPFD. On the other hand, Rudbeckia, a plant that does well in full sun or partial shade, is much more capable of using higher PPFDs efficiently.
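Since the DLI is simply the integral of PPFD over the day, it can be computed directly from logged light-sensor readings. The short Python sketch below is a hedged illustration (the function name, the one-minute sampling interval and the constant example reading are assumptions, not details from the article):

```python
# Minimal sketch: computing the daily light integral (DLI) by integrating
# instantaneous PPFD readings over one day.
# PPFD is in micromol m^-2 s^-1; DLI comes out in mol m^-2 d^-1.

def daily_light_integral(ppfd_readings, interval_s=60):
    """ppfd_readings: PPFD samples taken every `interval_s` seconds over one day."""
    micromol_per_m2 = sum(ppfd * interval_s for ppfd in ppfd_readings)
    return micromol_per_m2 / 1e6  # convert micromol to mol

# Example: a constant 278 micromol m^-2 s^-1 over a 12-hour photoperiod
# integrates to roughly 12 mol m^-2 d^-1, the DLI used later in the article.
readings = [278] * (12 * 60)                       # one reading per minute for 12 hours
print(round(daily_light_integral(readings), 1))    # ~12.0
```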
And perhaps not surprisingly, the response of Hosta, which does well in partial or full shade, falls in between that of Heucherella and Rudbeckia.

Figure 1. Light use efficiency of three perennial species in response to increasing PPFD. Note that the light use efficiency of all species decreases at higher PPFDs, but this decrease is more pronounced in shade-obligate Heucherella than in sun-loving Rudbeckia.

Using light use efficiency to develop better lighting strategies

So how can this basic information be used to help growers manage their lighting? There are three important lessons that can be drawn from this physiological information:

1) All species will use supplemental light more efficiently when that supplemental light is provided during periods with little sunlight. The threshold control long used for HPS lights is based on this principle: the lights are turned on when sunlight levels drop below a specific threshold and turned off above that threshold.

2) Appropriate thresholds are species-specific. Providing Rudbeckia with supplemental light when sunlight provides a PPFD of 500 µmol/m2/s allows those plants to use that supplemental light with an efficiency of over 60%; Heucherella, however, would be able to use that same light with an efficiency of only about 30%. Because of such differences among species, it is important to provide a crop like Heucherella with supplemental light only when there is little sunlight; otherwise, the plants will not be able to use the supplemental light efficiently. For Rudbeckia, as well as sun-loving crops like lantana and rose, it is less important to provide supplemental light only when there is little sunlight, since those crops can still use supplemental light with reasonable efficiency at higher PPFDs.

3) These findings suggest that not all DLIs are equal. Because light is used more efficiently when the PPFD is low, the overall light use efficiency can be increased by providing light at lower PPFDs over longer photoperiods while keeping the same DLI. In other words, spreading the light out over more hours each day should improve the overall light use efficiency and thus increase growth.

Not all DLIs are created equal

To test whether spreading the light out over more hours each day increases growth, we grew Rudbeckia seedlings at a DLI of 12 mol/m2/day, with that light provided over photoperiods of 12, 15, 18, or 21 hours per day. To ensure that all plants received the same amount of light, we developed a new control approach for dimmable LED lights. Our system measures the PPFD at the crop level and calculates how much light is needed to reach the target DLI by the end of the photoperiod. The controller then sends a signal to the LED lights so they provide just enough supplemental light to ensure that the plants receive a DLI of 12 mol/m2/day by the end of the photoperiod. This control approach to supplemental lighting has two advantages: 1) the DLI can be precisely controlled, regardless of weather conditions, and 2) the supplemental light is provided preferentially when there is little sunlight (and thus when plants can use the supplemental light most efficiently). Using this control approach to supplemental lighting, the PPFD received by the plants decreased from 275 to 160 µmol/m2/s as the photoperiod increased from 12 to 21 hours.
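The control idea just described can be paraphrased in a few lines: track how much light the crop has already accumulated, work out the average PPFD still needed to hit the target DLI by the end of the photoperiod, and ask the LEDs to supply only the shortfall. The Python sketch below is a simplified illustration of that logic, not the authors' actual controller; the function name, the maximum LED output and the dimming rule are assumptions. The final two lines also check the averages quoted above: a DLI of 12 mol/m2/day spread over 12 or 21 hours corresponds to about 278 or 159 µmol/m2/s, consistent with the measured 275 and 160.

```python
# Simplified sketch of the DLI-based dimming idea described above (not the
# authors' actual controller; names and the 200 micromol LED cap are assumptions).
# Units: PPFD in micromol m^-2 s^-1, DLI in mol m^-2 d^-1.

def supplemental_ppfd(target_dli, accumulated_mol, seconds_left, sunlight_ppfd,
                      max_led_ppfd=200):
    """Return the supplemental PPFD the LEDs should add right now."""
    if seconds_left <= 0:
        return 0.0
    # Average PPFD still required to hit the target DLI by the end of the photoperiod.
    required = (target_dli - accumulated_mol) * 1e6 / seconds_left
    # LEDs only top up what sunlight does not already provide.
    deficit = max(0.0, required - sunlight_ppfd)
    return min(deficit, max_led_ppfd)

# Sanity check of the reported averages: 12 mol m^-2 d^-1 over 12 h and 21 h.
for hours in (12, 21):
    print(hours, "h ->", round(12 * 1e6 / (hours * 3600)), "micromol m^-2 s^-1")
```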
As We Hypothesized…

As we hypothesized, the Rudbeckia seedlings grew substantially faster with longer photoperiods and lower PPFDs; with a 21-hour photoperiod, plants were about 30% larger than those grown under a 12-hour photoperiod, even though all plants received the same total amount of light (Figure 2). The longer photoperiod had no negative effects on plant quality, as determined from root fraction (an important measure for seedlings, since good root growth is critical) and compactness. Based on our findings, using a longer photoperiod can shorten the crop cycle by at least one week. Another advantage of using longer photoperiods is that the instantaneous amount of supplemental light that needs to be provided is lower. That means that fewer light fixtures are needed to provide the supplemental light, lowering the cost of installing a lighting system. Keep in mind that not all species will respond the same way to longer photoperiods: the flowering of many crops is photoperiod-dependent, and while flowering is desired when the crop is finished, premature flowering can slow down growth.

Figure 2. Longer photoperiods result in better growth of Rudbeckia seedlings. The control plants on the left did not receive supplemental light (and received an average DLI of about 5 mol/m2/d). The other plants all received a DLI of 12 mol/m2/d, but that light was spread out over photoperiods ranging from 12 to 21 hours.

What does this mean to the floriculture industry?

More efficient lighting strategies can improve the production of perennials while lowering electricity costs. Our approach is most easily implemented with dimmable LED light fixtures, but similar approaches can be implemented using HPS lights. Several lighting companies now offer lighting control systems that take advantage of the ability to dim LED fixtures in response to changing levels of sunlight. This can provide a more consistent light environment in your greenhouse and make crop production more predictable, while assuring that no excess light is provided.

What is next?

Industry support of the American Floral Endowment has helped to make this research possible. AFE's financial support for our work helped us get a $5,000,000 grant from USDA's Specialty Crops Research Initiative. This project, titled Lighting Approaches to Maximize Profits, brings together scientists and engineers from around the country. A diverse team working on lighting issues in the controlled environment agriculture industry helps integrate horticultural production, economics, and engineering, resulting in holistic approaches to optimize supplemental lighting strategies. To learn more about project LAMP, please visit www.hortlamp.org. Visit endowment.org/research for more articles like this.
A prevailing theory in neuroscience holds that people make decisions based on integrated global calculations that occur within the frontal cortex of the brain. However, Yale researchers have found that three distinct circuits connecting to different brain regions are involved in making good decisions, bad ones, and determining which of those past choices to store in memory, they report June 25 in the journal Neuron. The study of decision-making in rats may help scientists find the roots of flawed decision-making common to mental health disorders such as addiction, the authors say. "Specific decision-making computations are altered in individuals with mental illness," said Jane Taylor, professor of psychiatry and senior author of the study. "Our results suggest that these impairments may be linked to dysfunction within distinct neural circuits." Researchers used a new tool to manipulate brain circuits in rats while the animals were making choices between actions that either did or did not lead to rewards. The authors found that decision-making is not confined to the orbital frontal cortex, seat of higher-order thinking. Instead, brain circuits from the orbital frontal cortex connecting to deeper brain regions performed three different decision-making calculations. "There are at least three individual processes that combine in unique ways to help us to make good decisions," said Stephanie Groman, associate research scientist of psychiatry and lead author of the research. Groman says an analogy would be deciding on a restaurant for dinner. If restaurant A has good food, one brain circuit is activated. If the food is bad, a different circuit is activated. A third circuit records the memories of the experience, good or bad. All three are crucial to decision-making, Groman says. For instance, without the "good choice" circuit you may not return to the restaurant with good food, and without the "bad choice" circuit you might not avoid the restaurant with bad food. The third "memory" circuit is crucial in making decisions such as whether to return to the restaurant after receiving one bad meal after several good ones. Alterations to these circuits may help explain a hallmark of addiction: why people continue to make harmful choices even after repeated negative experiences, researchers say. The Yale researchers previously showed that some of the same brain computations were disrupted in animals that had taken methamphetamine. "Because we used a test that is equivalent to those used in studies of human decision-making, our findings have direct relevance to humans and could aid in the search for novel treatments for substance abuse in humans," Groman said.
AP4ATCO - Lift/Drag Ratio, Forces Interaction and Use

Increasing the angle of attack can increase the lift, but it also increases drag. At a particular angle of attack (and airspeed), lift and drag must therefore be considered together: the curve of the coefficient of drag versus angle of attack shows that CD increases as the angle of attack increases. The same lift-over-drag relationship exists at low speed as well as at high speed, and all items that affect the aeroplane's drag affect the CD/CL ratio as well.

When the flow over the wing separates, the first consequence is a loss of lift. Aerodynamic lift is the vertical force shown in Figure 4. High lift is obtained when the pressure on the bottom surface is large and the pressure on the top surface is small. Separation does not affect the bottom-surface pressure distribution, but it effectively alters the airfoil shape, causing a reduction in the effective angle of attack, so both the suction on top and the pressure on the bottom are lower. However, comparing the solid and dashed arrows on the top surface just downstream of the leading edge, we find the solid arrows indicating a higher pressure when the flow is separated. This higher pressure is pushing down, hence reducing the lift. The reduction in lift is compounded by the geometric effect that the top surface of the airfoil near the leading edge is approximately horizontal (Figure 4): when the flow is separated, causing a higher pressure on this part of the airfoil surface, the direction in which the pressure acts is closely aligned with the vertical, and hence almost the full effect of the increased pressure is felt by the lift. The combined effect of the increased pressure on the top surface near the leading edge, and the fact that this portion of the surface is approximately horizontal, leads to the rather dramatic loss of lift when the flow separates. Now consider the portion of the top surface near the trailing edge: on this portion of the airfoil surface, the pressure for the separated flow is smaller than the pressure that would exist if the flow were attached.

Understanding Angle of Attack

With a cross-sectionally symmetrical wing, an angle of attack of 0 degrees produces no lift, though it is impossible with current technology to have zero drag. An angle of attack of 90 degrees for a two-dimensional, plate-shaped wing would produce zero lift and the maximum total drag. Just because the wing looks like it is pointing straight ahead doesn't mean it's not producing lift: modern wing designs produce their smallest drag coefficient at an absolute angle of attack greater than zero for efficiency, which is why they are commonly more convex-curved on the top than on the bottom.

Coefficient of Lift and Drag

The lift force is obtained from the coefficient of lift together with the airspeed (more precisely, the dynamic pressure, which grows with the square of the airspeed) and the wing area; the drag force is obtained from the coefficient of drag in the same way. Most diagrams on the internet refer to a lift value of "1 g", that is, neither accelerating upwards nor downwards; g refers to the value of 9.81 m/s². They aren't incorrect; it is just that people often misunderstand how airspeed is related to angle of attack.
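For reference, the standard aerodynamic relations behind these statements are worth writing out explicitly (textbook definitions, not taken from the page above). With air density ρ, airspeed V and wing area S, lift and drag scale with the square of the airspeed, and the lift-to-drag ratio reduces to the ratio of the coefficients:

```latex
L = \tfrac{1}{2}\rho V^{2} S\, C_{L}, \qquad
D = \tfrac{1}{2}\rho V^{2} S\, C_{D}, \qquad
\frac{L}{D} = \frac{C_{L}}{C_{D}}
```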
Drag and lift are merely vector components of the reaction force of the airflow on the wing.

Defining a Stall

Many people misunderstand what a "stall" is and think it is merely the speed at which a plane "drops out of the sky." In my case, I simply did not understand the terminology or definition of "stall", but I did at least understand the basic concepts long before I knew why a wing is shaped the way it is. What I understood back then is that as you increase the angle of the wing to the airflow, beyond a certain angle the wing stops producing more lift and the lift starts to decrease. Otherwise, people would be flying like helicopters at 90 degrees with infinite lift. You could try thinking of it in terms of preserving kinetic energy and obtaining the maximum acceleration. Consider the scenario of a ball that does not bounce hitting a frictionless wall (as if you're playing Quake 3 Arena, pogo-jumping into a vertical wall while moving). In the first example, the ball hits the wall and there is no conservation of kinetic energy.

Thus, the two couples generally cancel each other out. Lift is a force which opposes the downward force of weight. It is produced by the dynamic effect of the air acting on the airfoil and acts perpendicular to the flight path through the center of lift. Drag is a rearward, retarding force caused by disruption of the airflow by the wing, rotor, fuselage, and other protruding objects. Drag opposes thrust and acts rearward, parallel to the relative wind. In steady and level flight, if thrust is increased, the aircraft will start to accelerate in the direction of thrust and will gain speed; the increase in speed will lead to an increase in drag. In the reverse situation, when thrust is reduced during steady and level flight, the aircraft will start to accelerate in the direction of drag and the speed will decrease.
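In equation form, the steady, level, unaccelerated flight condition described above is the familiar force balance (a standard textbook statement, added here for clarity). Any thrust change breaks the second equality, and the aircraft accelerates or decelerates until drag again matches thrust:

```latex
L = W, \qquad T = D
```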
Comparing the Curiosity Mars Rover with Past NASA Missions to Mars

Imagine you want to send a one-ton package to Mars, land it safely on the surface, and have it move around after it lands. First, the package has to survive atmospheric entry on Mars, which heats the package to 1,600 degrees. After this, the package has to deploy a supersonic parachute capable of withstanding 9 gs of force to slow down. The package then has to fire rocket engines, descend slowly to within 20 meters of the surface, and slowly lower a rover on a tether into a crater right next to a mountain six kilometers tall, so that the rockets don't stir up too much dirt that might damage the rover. All of this happens in about seven minutes. Oh, and by the way, it takes nearly 14 minutes to send a signal to Mars (too long to allow any realistic form of remote control from Earth), so the onboard computer has to do all of this on its own without any help from mission control.

In the early morning hours of August 6, 2012 (Central Daylight Time), NASA will attempt this complex sequence of events and land the newest in its fleet of rovers, called the Mars Science Laboratory (a.k.a. the Curiosity rover). Various space programs of the past and present have had a number of successes and failures in landing packages on other worlds. Past missions involved stationary landers or small rovers. In the 1970s, the Soviet Union successfully landed and operated two robotic Moon rovers. Since the Moon is much closer than Mars, this early feat was made easier by the much smaller 1-2 second communication lag, which allowed for human-controlled operations. Missions to Mars have been numerous over the years; some succeeded, others failed. These missions were a mixed bag of landers, orbiters, flybys, and, only more recently, rovers.

The Curiosity rover is a next step, following in the footsteps of previous rovers. The first rover, Sojourner, was a tiny little thing that was only a little over two feet long. It was solar-powered and couldn't go very far from its lander. This mission was followed by the tremendously successful Spirit and Opportunity rovers, both of which were quite a bit larger than Sojourner. The Curiosity rover is by far the largest of all the rovers on Mars, at about 86 times the mass of Sojourner. Not only has the mass of rovers increased, but the speed has as well: the Curiosity rover has a maximum surface speed that is about 2.5 times that of Sojourner. But this is still a relatively slow speed; the maximum surface speed of the Curiosity rover falls somewhere between the speed of a garden snail and the speed of the slowest land mammal.

NASA has produced a dramatic five-minute teaser documentary about the mission to get you hyped about it. Hopefully nothing goes wrong, but if you want to keep tabs on the landing when it happens, NASA will be providing a number of live feeds while the landing is going on. In addition, local museums and planetariums may run special viewing opportunities, so check your local area for such events.
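The "nearly 14 minutes" figure is just the one-way light-travel time. Taking the Earth-Mars separation at the time of the landing to be roughly 250 million km (an assumed round number for illustration; the actual distance varies between about 55 and 400 million km over the two planets' orbits):

```latex
t = \frac{d}{c} \approx \frac{2.5\times 10^{11}\,\mathrm{m}}{3.0\times 10^{8}\,\mathrm{m/s}} \approx 8.3\times 10^{2}\,\mathrm{s} \approx 14\ \text{minutes}
```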
The villanelle is a highly structured poem made up of five tercets followed by a quatrain, with two repeating rhymes and two refrains. Rules of the Villanelle Form The first and third lines of the opening tercet are repeated alternately in the last lines of the succeeding stanzas; then in the final stanza, the refrain serves as the poem's two concluding lines. Using capitals for the refrains and lowercase letters for the rhymes, the form could be expressed as: A1 b A2 / a b A1 / a b A2 / a b A1 / a b A2 / a b A1 A2. History of the Villanelle Form Strange as it may seem for a poem with such a rigid rhyme scheme, the villanelle did not start off as a fixed form. During the Renaissance, the villanella and villancico (from the Italian villano, or peasant) were Italian and Spanish dance-songs. French poets who called their poems "villanelle" did not follow any specific schemes, rhymes, or refrains. Rather, the title implied that, like the Italian and Spanish dance-songs, their poems spoke of simple, often pastoral or rustic themes. While some scholars believe that the form as we know it today has been in existence since the sixteenth century, others argue that only one Renaissance poem was ever written in that manner—Jean Passerat’s "Villanelle," or "J’ay perdu ma tourterelle"—and that it wasn’t until the late nineteenth century that the villanelle was defined as a fixed form by French poet Théodore de Banville. Regardless of its provenance, the form did not catch on in France, but it has become increasingly popular among poets writing in English. An excellent example of the form is Dylan Thomas’s "Do not go gentle into that good night." Contemporary poets have not limited themselves to the pastoral themes originally expressed by the free-form villanelles of the Renaissance, and have loosened the fixed form to allow variations on the refrains. Elizabeth Bishop’s "One Art" is another well-known example; other poets who have penned villanelles include W. H. Auden, Oscar Wilde, Seamus Heaney, David Shapiro, and Sylvia Plath.
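The scheme is rigid enough to write down programmatically. The Python sketch below is purely illustrative (the names are my own); it encodes the pattern given above and checks that the poem has 19 lines, that the two refrains close the final quatrain, and that only two rhyme sounds are used.

```python
# Illustrative encoding of the villanelle scheme A1 b A2 / a b A1 / a b A2 / a b A1 / a b A2 / a b A1 A2.
# "A1" and "A2" are the two refrains; lowercase "a" and "b" mark the two rhyme sounds.
STANZAS = [
    ["A1", "b", "A2"],        # opening tercet
    ["a",  "b", "A1"],
    ["a",  "b", "A2"],
    ["a",  "b", "A1"],
    ["a",  "b", "A2"],
    ["a",  "b", "A1", "A2"],  # closing quatrain ends on both refrains
]

lines = [line for stanza in STANZAS for line in stanza]
assert len(lines) == 19                                 # five tercets plus one quatrain
assert lines[-2:] == ["A1", "A2"]                       # the refrains supply the two concluding lines
rhyme = {"A1": "a", "A2": "a", "a": "a", "b": "b"}      # refrains share the "a" rhyme
assert set(rhyme[l] for l in lines) == {"a", "b"}       # only two rhyme sounds throughout
```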
The Kepler telescope has delivered again. Yesterday scientists at the University of Birmingham announced the discovery of a Sun-like star, called Kepler-444, which is orbited by five planets with sizes similar to those of planets in our own solar system. Kepler-444 was formed 11.2 billion years ago — it is the oldest known system of terrestrial-sized planets in our Galaxy. By the time our own blue planet formed, the planets in the Kepler-444 system were already older than the Earth is today.

An illustration of Kepler-444 and its five planets. Image: Tiago Campante/Peter Devine.

The first question most people are likely to ask is whether those newly-discovered planets could harbour life. With so much time to evolve, it's tempting to imagine them hosting super-advanced civilisations with unheard-of technologies. Unfortunately (or perhaps fortunately) the answer is "no": the newly-discovered planets are too close to their star to harbour life. But scientists still hope that the discovery will shed important light on how planets form. "We are now getting first glimpses of the variety of galactic environments conducive to the formation of these small worlds," says Bill Chaplin of the University of Birmingham, who played a leading role in the research. "As a result, the path towards a more complete understanding of early planet formation in the Galaxy is now unfolding before us."

The new worlds were discovered using something called asteroseismology - listening to the natural resonances of the host star, which are caused by sound trapped within it. These oscillations lead to minuscule changes in the star's brightness, which allow the researchers to measure its diameter, mass and age (you can find out more about asteroseismology in A new kind of singing star). The planets were then detected from the dimming that occurs when the planets pass across the stellar disc. This fractional fading in the intensity of the light received from the star enables scientists to accurately measure the size of the planets relative to the size of the star.

If you'd like to find out more about how planets form from minuscule specks of dust, read our recent article From dust to us. And to read more about the search for life, see Hunting for life in alien worlds. Or, if you'd just like to sit back and watch, have a look at the animation below. It starts by showing us Kepler's field-of-view in the direction of the constellations Cygnus and Lyra. We are next taken to the vicinity of the Kepler-444 planetary system, located some 117 light years away. Its pale yellow-orange star is 25% smaller than the Sun and substantially cooler. The last segment of the animation emphasises the compactness of this system. The five planets orbit their parent star in less than 10 days or, equivalently, at less than one-tenth Earth's distance from the Sun. In a way, this system may be thought of as a miniature version of the inner planets in our own Solar System. (Animation by Tiago Campante and Peter Devine.)
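The size measurement from the transit dimming described above rests on a simple geometric relation (standard transit photometry, stated here for context rather than taken from the article): the fractional drop in brightness is approximately the ratio of the planet's and star's projected areas, so measuring the dip, together with the stellar radius from asteroseismology, fixes the planet's radius.

```latex
\frac{\Delta F}{F} \approx \left(\frac{R_{p}}{R_{\star}}\right)^{2}
```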
When did you learn to write? It's hard to remember, isn't it? That's not just because it was so long ago but also because it didn't happen at one exact moment in time; it happened over time. Just like their bodies, children's knowledge and skills grow in spurts and stops, with sudden peaks and long plateaus. To help and support children's writing progression as well as possible, it's important to understand the different writing stages. Please note that the developmental stages overlap and the age references are a generalization.

Audio storytelling (3-4 years)
- This stage is based on spoken language and gives the youngest users the opportunity to tell stories by using the recording function.
- Parents and teachers should "translate" children's audio recordings into written language by adding text to the adult text field. This gives children valuable insight into the purpose of writing, and shows similarities and differences between spoken and written language.

Early Emergent Writing (4-5 years)
- The first writing stage is characterized by "scribbling," where children pretend that they are writing by hitting random keys on the keyboard. It also includes "logographic" writing of high-frequency and easily recognizable words like the child's name and text logos like LEGO, McDonald's, and Oreo.
- Turn on the key function that provides audio support for the letter names. This allows children to make the connection between the letter and its name while they "scribble".
- Continue to add conventional writing ("translation") in the adult writing field, which gives the child the chance to see the spelling of familiar words.

Emergent Writing (5-7 years)
- By now, children have gained an initial understanding of phonics, which is the correspondence between letter patterns (graphemes) and sounds (phonemes). Some of the words may have the correct initial letter and a few other letters.
- Set the audio support to letter sounds (phonemes). Continue to provide conventional writing to help the child understand the letter/sound relationship more fully.
- Children begin to make the reading-writing connection and are much more aware of embedded clues, such as picture and initial-letter clues. When it comes to reading WriteReader books, be sure that they read the conventional text to recognize and learn from the correctly spelled words.

Transitional Writing (6-8 years)
- At this stage, there is a one-to-one relationship between the letters and sounds represented in children's writing. For example, a word like "people" could be spelled "pepl".
- Even though children's writing has now reached a level where others may be able to read it, providing conventional writing is still very important to writing progress. Children will learn through comparison that many letters have different sounds and that some are silent. At the same time, children will start to notice and learn about the use of punctuation and capital letters.

Fluent Writing (8-10 years)
- Around this age, children start to notice and learn all the irregularities in written language. It's the longest learning phase in writing development and can extend over several years.
- Children can turn off the key sounds at this stage, if this support is not needed.
- When children are able to spell more than 75% of the words correctly, it no longer makes sense to "translate" their writing in the adult text field. Instead, the teacher/parent can try these suggestions:
- Write the misspelled words in the adult text field.
- Write a comment that can guide the children to correct themselves. For example, “Find and correct two misspelled words” or “Remember to use punctuation and capital letters.” Most importantly, give your children or students plenty of praise, encouragement, and opportunities to practice writing.