Jupiter's magnetic field -- here converted to sound from radio waves collected by spacecraft passing close to the planet -- is the window to its heart. Precise measurements of the magnetic field can reveal details about the planet's interior, such as the size and mass of its core. That's the goal of Juno, a mission to Jupiter whose launch window opens on Friday. The probe is scheduled to arrive at Jupiter in five years. It'll orbit the planet from pole to pole -- something that's never been done. That vantage point will allow the craft to map the magnetic environment around Jupiter in far greater detail than ever before. And that knowledge will lead to a better understanding of the entire planet. Scott Bolton is the project's lead scientist. BOLTON: The real source of Juno is trying to understand the fundamental properties of Jupiter, the state of Jupiter -- what is it made of. The magnetic field is generated by motions deep below the planet's visible cloudtops. Jupiter probably contains a dense, heavy core surrounded by a layer of metallic hydrogen. These different layers rotate at different speeds, creating a dynamo effect that produces the magnetic field. So measuring the detailed structure and strength of the magnetic field, combined with observations of the planet's gravity, should produce a clearer picture of how Jupiter is put together -- a window to its heart. More about Juno tomorrow. Script by Damond Benningfield, Copyright 2011 For more skywatching tips, astronomy news, and much more, read StarDate magazine.
The primary care physician may not perceive the pervasive heterosexual bias in healthcare delivery. However, as early as the initial interaction within the clinical setting, it can be blatantly clear to the lesbian, gay, bisexual and transgender (LGBT) patient. LGBT patients often cannot complete the most basic informational questions that appear as check boxes on initial visit intake forms, such as sex (male or female) or marital status. Although the healthcare provider may be unbiased and open-minded, prejudice is well-ingrained in the healthcare system in subtle and overt ways.1-4 Policies and practices in health care that assume and/or favor heterosexuality may leave LGBT patients feeling judged and demeaned, and LGBT patients receive suboptimal health care as a result.5-7 Sexuality is a human dimension that can only be described on a continuum and often cannot be defined by a specific category. In health care, terms used to define a person’s sexuality include lesbian (L), gay (G), bisexual (B), and transgender (T).8 In the medical literature, lesbians are often referred to as women who have sex with women (WSW). Gay men are often referred to as men who have sex with men (MSM).9 The terms lesbian and gay are used to refer to people who are attracted to members of the same sex, and the term bisexual describes people who are attracted to members of either sex.10,11 The term transgender is used to describe people who do not identify with their biologically assigned sex at birth. Transsexual refers to transgender persons who have undergone sex reassignment surgical procedures.8 At times, the acronym LGBT includes the letter Q (LGBTQ). Q denotes the term queer, which is an umbrella category that some have used to refer to the entire LGBT community. The term may be used to describe those who are on a continuum of gender, gender presentation, or sexuality that may not fit within societal “norms.” Q also denotes the term questioning, which indicates a person is in the process of understanding his or her sexual orientation.9,12 It should be taken into consideration that not all LGBT people identify as queer, and the term may also be used by some as a political or activist identifier. Other terms to understand include intersex and gender dysphoria. The term intersex refers to a person whose reproductive organs and/or chromosomes do not fit into usual patterns (eg, those with Klinefelter syndrome).13 Gender dysphoria is distress associated with the sex one was assigned at birth because it is inconsistent with the individual’s perceived and preferred gender identity.14 It is important to recognize that gender dysphoria is not a mental illness; it is considered a form of gender nonconformity.
The World Professional Association of Transgender Health stresses that gender nonconformity is a matter of diversity, not pathology.15 Sexual orientation and health According to the Centers for Disease Control and Prevention (CDC) National Health Statistics Report: Sexual Orientation and Health Among US Adults, 96% of adults aged between 19 and 64 years identify themselves as heterosexual, 1.6% identify as gay or lesbian, 0.7% identify as bisexual, and 1.1% indicate “something else” or gave no answer.16 However, in 2011, a CDC report from a survey of 13,495 men and women conducted between 2006 and 2008 revealed that 12% of women aged between 25 and 44 years had sexual contact with another woman, and 6% of men revealed they had sexual contact with another man.17 These numbers indicate sexual orientation is more accurately described as fluid rather than a set of discrete categories. This article originally appeared on Clinical Advisor
7th grade math worksheets 7th grade math worksheets cover the math topics taught in grade 7. Get seventh graders more math practice by downloading all the worksheets in this category. Each 7th grade math topic links to a page of PDF printable math worksheets covering subtopics under the main category. 7th grade math topics covered include: algebra, quadratic equations, algebra 2 type exercises, financial arithmetic, fractions, shopping and money, decimals, volumes, the Pythagorean theorem, graphing linear equations, linear inequalities, absolute values and integers, proportions and ratios, area of figures, converting scales and metric systems, and more.
A map showing the pattern of settlement across Polynesia, represented by a triangle, by migrants from Southeast Asia Oceania consists of the islands of Australia, New Zealand and New Guinea, and the Melanesian, Micronesian and Polynesian island groups of the South Pacific. The first of these islands to be settled were Australia and New Guinea. Aboriginal Australians migrated to Australia from Southeast Asia between 60,000 and 50,000 years ago. Some also settled on New Guinea, which was then connected to Australia by land. The settlement of the other South Pacific islands did not begin until about 5000 years ago. New Zealand remained uninhabited until about 750 years ago. These Aboriginal Australian rock paintings, in a style known in some Aboriginal languages as Gwion Gwion, were painted in the Kimberley region of Western Australia around 26,500–20,000 years ago. The earliest Aboriginal rock paintings date back to 40,000 years ago. The earliest inhabitants of Australia probably reached the continent between about 60,000 and 50,000 years ago. European settlers in the 18th century called the native peoples "Aborigines", meaning “people from the earliest times”. When the colonizers arrived, the Aboriginal Australians lived by hunting and gathering their own food. They had no knowledge of metals so they used long wooden spears, tipped with stone or bone. They always knew where to find water, even in the driest desert. They knew which plants were safe to eat and which could be used to make medicine. But as the Aboriginal Australians were a nomadic people, with few possessions, it was easy for the Europeans to claim ownership of their land. Aboriginal Australians demonstrating traditional weapons, such as the spearthrower Apart from African peoples, the Australian Aboriginal peoples have occupied the same territory continuously longer than any other human populations.
Join Morten Rand-Hendriksen for an in-depth discussion in this video True and false: How computer logic works, part of Foundations of UX: Logic and Content. …The age we live in, the information age, is defined by our interactions with and use of computers. Computers act as our intermediaries, between us and information, and between us and other people. In many respects, modern communication and sharing of information is defined by computer algorithms. This means it's vitally important that we understand how computers use logic to process information, so that we can enable the computer to do what we want and also so that we can understand the information we get from computers. This process starts with understanding how our computers see the world. If asked to draw a picture of what happens in a computer, you'll probably draw a long list of ones and zeros, like this. These ones and zeros are natural-language representations of the core operating principle of a computing circuit. One means the circuit is on. Zero means the circuit is off. Computers are complex logic machines. Following the basic principles of logic that we've discussed in this course, they test statements and arguments for… The core idea of logic is to create a system in which communication is clear, precise, and unambiguous, which is (or at least should be) the goal of any website or other communication. - How humans communicate - Comparing human and computer communication - Speaking logically - Using logical arguments - Understanding the limits of computer logic - Formatting information for humans - Communicating with logic Skill Level Beginner 1. Communication and Logic 2. Principles of Communication and Logic 3. Computer Logic 4. Human Logic and Information 5. Making Logical User Experiences 6. Using Logic for Improved UX
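As a side note on the transcript's ones-and-zeros point: the on/off behavior of a circuit maps directly onto boolean logic, and it is easy to sketch. The Python below is a minimal illustration (mine, not part of the course): three tiny "gate" functions and the truth table of a compound statement, with every input and output reduced to 1 or 0.

```python
# Minimal sketch of circuit-style logic: each "gate" maps true/false
# (on/off, 1/0) inputs to a true/false output.

def and_gate(a: bool, b: bool) -> bool:
    return a and b

def or_gate(a: bool, b: bool) -> bool:
    return a or b

def not_gate(a: bool) -> bool:
    return not a

# Truth table for the statement (a AND b) OR (NOT a).
for a in (False, True):
    for b in (False, True):
        result = or_gate(and_gate(a, b), not_gate(a))
        print(f"a={int(a)} b={int(b)} -> {int(result)}")
```

Running it prints four rows, one per input combination; chaining millions of such gates is, at bottom, all a computer does.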
The Roll-Mole Worksheet (which follows) is designed to be used by answering Problems 1, 2, 1', and 2' first. Then have students fill in the blanks on the equalities. After this point, students are told to "forget" that a nickel is worth 5 cents and a quarter is worth 25 cents. They must not use that information in solving Problems 3 through 6 or the purpose of the exercise is defeated. (Nobody remembers the mass of an atom of an element or a formula unit of a compound!) The worksheet points out the information students are allowed to use to solve the problems. To use the worksheet most effectively, work Problems 3 and 3' side-by-side on the board so students see that the reasoning is identical. Students should be made aware of the reasoning (use a set-up) for each problem. 4. Empirical Formula Analogy (by Kerro Knox, 1980) Given the average mass (weight) of boys and girls in a class and the percent boys by mass and percent girls by mass, calculate the ratio of boys to girls and the "formula" for the class (BxGy). Assume 100 kg of the class. The class would be 66.7% x 100 kg = 66.7 kg boy (B) and 33.3% x 100 kg = 33.3 kg girl (G). Formula for the class = B3G2. One mole of marshmallows would cover the USA to a depth of about 1,050 km (650 miles). NOTE: The volume of a marshmallow is estimated as 16 cm^3 (1.0 in^3). The area of the USA is 9.32 x 10^6 km^2 or 3.6 x 10^6 mi^2. If an Avogadro number of pennies were distributed evenly among the 4.9 x 10^9 human inhabitants of earth, each man, woman, and child would have enough money to spend a million dollars every hour, day and night, and still have over half of it unspent at death. One guacamole is the amount of taco chip dip that can be made from an Avogadro number of avocados, plus appropriate quantities of tomatoes, onions, and chili. A train stretching to the North Star and back 2-1/2 times would be required to transport one guacamole. NOTE: This assumes that the volume of one standard avocado (pit removed) is 278 cm^3 and that other ingredients make up 25% of total volume. The average coal car has a capacity of 110 kL (4000 ft^3) and is 16 m (53 ft) long. The North Star is 680 light years distant. Suppose the Greek god Zeus, after observing the Big Bang 15 billion years ago, became bored and decided to count one mole of atoms. Zeus is omnipotent. He can count very fast (one million atoms per second) and, of course, never sleeps. He has currently completed over three-fourths of the task, and will be finished in just another four billion years. One mole of moles (animal-type), placed head to tail, would stretch 11 million light years and weigh 9/10 as much as the moon. NOTE: Each mole is assumed to be 17 cm long with a mass of 100 g. Speed of light = 3.0 x 10^8 m/s. Mass of moon = 6.7 x 10^22 kg. One mole of marbles, each 2 cm in diameter, would form a mountain 116 times higher than Mount Everest. The base of the marble mountain would be slightly larger than the area of the USA. NOTE: Marbles are assumed to have hexagonal closest packing and the mountain has a cone angle of 30 degrees. Area of USA = 9.32 x 10^6 km^2.
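These analogies make a nice extension exercise, since they are easy to re-derive. The Python sketch below (mine, using only the assumptions stated in the notes above: 16 cm^3 marshmallows, a USA area of 9.32 x 10^6 km^2, and a counting rate of one million atoms per second) recomputes two of the claims:

```python
# Back-of-envelope checks of two mole analogies, using the worksheet's
# own assumptions.

AVOGADRO = 6.022e23

# 1) Depth of one mole of marshmallows spread over the USA.
marshmallow_cm3 = 16.0
usa_km2 = 9.32e6
total_km3 = AVOGADRO * marshmallow_cm3 / 1e15   # 1 km^3 = 1e15 cm^3
print(f"marshmallow depth: {total_km3 / usa_km2:,.0f} km")   # ~1,030 km

# 2) Zeus counting one mole of atoms at a million per second.
seconds = AVOGADRO / 1e6
years = seconds / 3.156e7                        # ~seconds per year
print(f"Zeus needs about {years / 1e9:.0f} billion years")   # ~19
```

The second result matches the story: 15 billion years elapsed plus about 4 billion to go is roughly 19 billion years, and 15/19 is indeed a bit over three-fourths.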
The word Vedangas in Sanskrit means “limbs of the Vedas,” which is appropriate because it is a collection/genre that is an appendage of the Vedas. The origin of the Vedangas dates back to as early as 1200 BC, though some speculate on an even earlier date of 1800 BC. The jyotisa collection, for instance, refers to the beginning of the Vedangas during a winter solstice, which may have occurred closer to 1800 than 1200 BC (Achar 173). The Vedangas consist of six appendages: siksa, chandas, vyakarana, nirukta, jyotisa [the oldest in Hindu history], and kalpa. The first four of the appendages are considered exegetical, meaning they are used as aids to help understand the Vedas. The last two appendages are regarded as ritual because they deal with rites and laws as well as the proper time and place to perform them (Bhat 10). The first appendage of the Vedangas is siksa, which is the category related to correct pronunciation and accentuation. Siksa is proper pronunciation, and in order to have proper phonetics, there have to be rules. A major rule under this category pertains to the sound of syllables, because being off pitch by even a slight degree would alter the result and therefore the effect of the word being pronounced (Tiwari 1). There are four main pratisakhyas, which deal with the phonetics of the Sanskrit language and also fall under siksa: the Rgveda-Pratisakhya of the Rgveda, the Taittiriya-Pratisakhya of the Krishna Yajurveda, the Vajasaneyi Pratisakhya of the Shukla Yajurveda, and the Atharvaveda-Pratisakhya of the Atharvaveda. These pratisakhyas are responsible for determining the relationship between the Samhitas (the most ancient layer of text in the Vedas, consisting of mantras, hymns, prayers, litanies and benedictions) and the Padapathas (recitation styles designed to complete and memorize a text), and vice versa. They are also important for the interpretation of the Vedas (Bhat 11). The second appendage of the Vedangas is kalpa, which is the category related to Vedic rituals. If the Vedas were imagined as a person (Purusa), this section would be known as the arms. Rules referring to sacrifice, excluding things that are not directly connected to the ceremony, are found in the Kalpa-sutras [whose contents are directly connected to the Brahmanas and Aranyakas] (Tiwari 1). The Kalpa-sutras are broken down into three categories: (1) the Srauta-sutras, (2) the Grhya-sutras, and (3) the Dharma-sutras. The Srauta-sutras consist of the great sacrificial rites, in which the most priests were employed. The Grhya-sutras consist of household rituals that do not need a priest’s assistance. The Dharma-sutras consist of the customary law prevalent at the time (Bhat 13). The third appendage of the Vedangas is vyakarana, which is the category related to Vedic grammar. Parts of this section have been lost over time because of the pratisakhyas, which also deal with grammar and have surpassed vyakarana (Bhat 11). However, one major figure when vyakarana is being discussed is Panini, primarily because he was one of the most, if not the most, significant grammarians of his time. His book the Astadhyayi is possibly the reason Panini surpassed all other grammarians of the period. Vyakarana is called the mouth of the Veda Purusa and is also seen as crucial for understanding the Vedas (Tiwari 1). The fourth appendage of the Vedangas is nirukta, which is the category related to why certain words are used. This section is known as the ears of the Veda Purusa.
Under this category, only one text based on etymology has survived, known as the Nirukta of Yaska. In this text, words found in the Vedas are explained and then assigned to one of three sections based on the type of word. The first category contains words collected under main categories, the second contains the more difficult words, and the third contains words based on the three regions (earth, sky, and heaven) and the classification of deities (Tiwari 1). These three categories are known as the Naighantuka-kanda, the Naigama-kanda, and the Daivata-kanda. The Vedangas place great emphasis on this category, which fostered the growth of grammatical science in India (Bhat 12). The fifth appendage of the Vedangas is chandas, which is the category related to meter, which carries the sense of the mantra. Even though no exclusively Vedic treatise on meter has survived, there is the Chandas-shastra (a book by Pingala). This section is often referred to as the feet of the Veda Purusa, because the Vedas are known as the body, which relies on the chandas [the feet]. The use of this appendage is so that reading and reciting are done properly (Tiwari 1). The chandas discuss the number of syllables in texts and poems, which is linked to meter. This category is connected to the Brahmanas, which discuss syllable and verse, although research has not found a meter in them. There are also two different types of meters, based on the Rg Veda and the Yajur Veda according to the recensions (Bhat 12). The sixth appendage of the Vedangas is jyotisa, which is the class related to the knowledge of astronomy. This section is the oldest text about astronomy in Hindu literature and dates back to around 1300 BC (Abhyankar 61). Since this category was supposedly created during a winter solstice when the sun and the moon were aligned, the date of 1820 BC has been proposed, and it is said that astronomy started shortly after (Achar 177). Jyotisa is known as the eye of the Veda Purusa. Jyotisa is not the teaching of astronomy, but the use of astronomy to fix the appropriate time [days and hours] for sacrifices (Tiwari 1). The most substantial sources of knowledge on astronomy can be found early in the Brahmanas. Jyotisa is especially useful because it can give the positions of the moon and sun for solstices as well as other useful information (Bhat 13). Since the Vedangas are appendages of the Vedas, they can be seen as equally important in the studying and learning of Hindu culture. Siksa provides the phonetics of Sanskrit, without which speaking and understanding would be nearly impossible. Kalpa provides the proper steps for performing rituals and when to perform them. Vyakarana is similar to phonetics but provides proper grammar for the words used in the Vedas. Nirukta contains etymology (i.e., the meaning and usage of words). Chandas provide the meters in Vedic hymns to aid proper reading. Jyotisa is the knowledge of astronomy, which helps with dating events in Hindu history and other useful information. The origins of the Vedangas can also be traced to the Brahmanas, which are a collection of ancient commentaries based on the Vedas. This connection can be made because the Brahmanas also contain discussions of grammar, meter, etymology, etc. (Bhat 10). References and Further Readings Abhyankar, K. D. (1998) “Antiquity of the Vedic Calendar.” Bulletin of the Astronomical Society of India, Vol. 26, 61-66.
Achar, B. N. (2000) “A Case for Revising the Date of Vedanga Jyotiṣa.” Indian Journal of History of Science, Vol. 35, No. 1: 173-183. Arnold, E. V. (1905) Vedic Metre in Its Historical Development. Cambridge: Cambridge University Press. Bhat, M. S. (1987) Vedic Tantrism: A Study of R̥gvidhāna of Śaunaka with Text and Translation: Critically Edited in the Original Sanskrit with an Introductory Study and Translated with Critical and Exegetical Notes. Delhi: Motilal Banarsidass. Brockington, J. L. (1989) “Review of Literature in the Vedic Age. The Brāhmaṇas, Āraṇyakas, Upaniṣads and Vedāṅga Sūtras.” Bulletin of the School of Oriental and African Studies, University of London, Vol. 52(3), 569–570. Tiwari, Sashi (2014) “The Vedangas – Vedic Heritage.” Delhi: Delhi University. Related Topics for Further Investigation: Nirukta of Yaska. Article written by: Ryan Loman (March 2016) who is solely responsible for its content.
When Slow Evolution is the Best Survival Tactic Fossils often indicate what environmental conditions were like at different times in earth’s history. As climates change over time, so does life, since certain characteristics are more beneficial in specific climates. For example, crocodile fossils from the Eocene of Antarctica indicate a warm temperate climate ~50 million years ago. Both physical characteristics and behavior make crocs much more suitable to a lush green Antarctica than to the current frozen-over tundra. But what about species able to withstand various climate shifts over several million years, yet remain physically mostly unchanged? A Midge Discovery A recent PE article highlights the discovery of a species of non-biting midge (Diptera, Chironomidae) in Eocene Baltic amber, previously only known from the Mesozoic. With resemblances too close to ignore, this discovery extends the existence of this species by at least 80 million years! Dr. Viktor Baranov and his colleagues are curious as to how these midges became such masters of survival. Since these flies don’t require a blood meal, they are sometimes referred to as ‘blind midges’. “Most people would be familiar with their larvae ‘blood worms’ which serve as a staple aquarium fish food or even as fishing lures” mentions Dr. Baranov. “Non-biting midges are the most widespread, free-living, winged insects in existence. Their distribution ranges from the Arctic Ellesmere Island to mainland Antarctica and everywhere in between.” says Dr. Baranov. Another PE article from 2016 by Marta Zakrzewska, Wiesław Krzemiński, and Wojciech Giłka discusses the diversity of non-biting midges from the Eocene. The article states “Chironomidae (over 7,000 species in more than 500 genera and 12 subfamilies) is undoubtedly one of the most diverse group of aquatic dipterans.” Dr. Baranov adds, “there are probably as many or even more that are undescribed.” Non-biting midges prefer moist environments, often next to bodies of water. “Their larvae are a very important part of the freshwater ecosystem being crucial to feed fish, cycle matter and energy in lakes and rivers, and their activities even help regulate the global carbon cycle” states Dr. Baranov. They can be found in literally every moist environment, “from tropical rainforests to mountain glaciers to even the surface of the open ocean.” “Although their larvae mostly inhabit sediments and the undersides of stones in streams, some species can live in waterlogged wood, damp soil, cow dung, or even as parasites in fish gills and freshwater sponges” Dr. Baranov explains. Besides serving as a pillar of the aquatic food chain, and as a nuisance to other species (us included), these tiny flies are known to be beneficial in other ways. “Millions of years before bumblebees, drone flies, bees or any other modern pollinators, these flies have been co-evolving with ancient lineages of plants. In the Cretaceous, and now likely later in the Eocene, chironomids likely were pollinators of magnolias and flowering arums (Araceae).” says Dr. Baranov. The article mentions this discovery is the first case of slow evolution (“bradytely”) of Diptera with aquatic larvae into the Cenozoic. “Bradytely is a term coined by the famous evolutionary biologist G. G. Simpson to describe a seeming stasis in the evolution rate of an organism. Ginkgo biloba tree morphology has changed very little in 200 million years, serving as an example of morphological bradytely.
Although bradytely is common among insects in some groups, it seems to be relatively rare for insects with aquatic larvae.” shares Dr. Baranov. This recent discovery demonstrates the longest recorded survival interval for a fly with aquatic larvae. Maybe this characteristic favored its survival through the end-Cretaceous mass extinction. Dr. Baranov isn’t so sure. “I think most insects survived the K-PG event not due to a particular habitat, but because of their small size and therefore consuming little food, something which was in short supply in the devastated world of the early Paleogene.” Dr. Baranov even predicts that it’s possible these non-biting midges are still around somewhere. “Dozens of new Chironomidae species are described every year. Taking into account their distribution pattern of survivors from the Eocene, South Australia, East Asia, and the Mediterranean will be the most likely locations to find them.” Dr. Baranov explains other insect relics, such as lacewings from Nevrorthidae, have similar distributions. I suppose it’s only a matter of time. Predicting Future Survival How might this latest discovery help predict how insect groups might fare during the human-induced sixth extinction? “Aquatic insects are in huge trouble. Many taxa lost over 80% abundance in protected areas within only the last 40 years.” says Dr. Baranov. The sixth extinction stands out from the other five in that it is the only one directly tied to activity of a single species. Humans are influencing climate change and insects especially are being hit hard. “Innumerable species will be lost to the Sixth Mass Extinction. The only choice we have left is how much of our wildlife are we going to lose across the board- 30%? 50%? 70%? That depends on when we will become serious about acting on this climate and biodiversity crisis - the best time would’ve been 40 years ago, the second best – is right now.” And how will non-biting midges fare during these troubled times? “Chironomids are survivors. While many lineages will be pruned by the anthropogenic destruction, some will carry on their 210 million year long legacy beyond the Sixth Mass Extinction.” says Dr. Baranov. To read the full 2019 article by Viktor Baranov, Christel Hoffeins, Hans-Werner Hoffeins, and Joachim T. Haug, click here. To read the 2016 article by Marta Zakrzewska, Wiesław Krzemiński, and Wojciech Giłka, click here.
Western Camel (Extinct) Central and western North America In Our Region: Carlsbad, Anza-Borrego Desert, Rancho La Brea This extinct camel looked much like the modern Bactrian and dromedary camel of Asia and Africa. Camelops stood about 7 feet (2.2 meters) tall at the shoulder. As members of the camelid family, these animals had very unusual feet, with four toes reduced to only two elongated digits. Splaying of the toes and a broad foot pad were probably adaptations that helped when walking on rough terrain or soft sand. Their leg bones were also very elongated and adapted for walking long distances when searching for food and water. The shoulder joint of the Western Camel was much higher than the hip joint, with a rather steep slope in the hindquarters down to the tail, similar to what we see in modern camels. From studying the tall neural spines on its back vertebrae, scientists believe that the single hump must have looked similar to that of a dromedary, although it was placed a bit farther forward. Despite the fact that we presently associate camels with the deserts of Asia and Africa, the camelid family originated in North America during the middle Eocene, at least 44 mya. By the Oligocene, 28 mya, a small delicate camel, Miotylopus, occupied open habitats in southern California, hunted by nimravid cats and early canids. By 3 mya, an extraordinary tall camel, Titanotylopus, lived in coastal areas of southern California and probably fed on coarse shrubs. During the Pleistocene, great numbers of Camelops probably moved in herds across North America, but they became extinct on this continent about 11,000 years ago. They are survived by the llamas, vicunas, alpacas, and guanacos in South America, and the modern-day camels of Africa and Asia. Studies on the Rancho la Brea Camelops hesternus fossils reveal that rather than being limited to grazing, this species likely ate mixed species of plants, including coarse shrubs growing in coastal southern California. Scientists believe that Camelops was actually more closely related to llamas than to living camels. Camelops probably could travel long distances, similar to the living camel. We do not know if it had the ability to exist for long periods without water that modern-day camels display; this may have been an adaptation that occurred much later, after camelids migrated to Asia and Africa. Camelids developed an unusual style of walking called “pacing,” which means that the front and back legs on each side of the body move forward together. We can see fossil evidence that this pacing movement occurred with the earliest camels, since there are fossil trackways that document it. Why did the camelid family become extinct in North America? In southern California alone, there were at various times at least eight different forms of camels, yet by 11,000 years ago, the entire family was extinct on this continent. Jefferson, George T. and Lowell Lindsay. 2006. Fossil Treasures of the Anza-Borrego Desert. San Diego: Sunbelt Publications. Illustration: William Stout
~~~~~ If used well, minor characters aren’t minor at all. We know that minor characters should never be “cardboard,” and that every character needs to be fully fleshed out. Shakespeare’s minor characters are often as diverse and essential to the plot as their protagonist counterparts, used within his plays to illuminate the main characters’ goals. Sir John Falstaff: Sir John Falstaff, one of the most famous comic characters in all English literature, appears in four of Shakespeare’s plays and is entirely the creation of Shakespeare. Shakespeare’s most minor characters: their roles and functions. Contents: purpose of thesis; foreword; Shakespeare’s characters; dramatic functions of bit characters. Hamlet (use of minor characters) essays: the minor characters in Hamlet not only provide contrast and extras to Shakespeare’s work… Any discussion about the importance of minor characters to a novel, film or play begins with a discussion about characters in general: characters are the people that inhabit the story. The 10 best Shakespeare characters… (in As You Like It) for one of Shakespeare’s best parts; the clincher, for me… Romeo and Juliet: setting / character list / character descriptions, by William Shakespeare; minor characters. In William Shakespeare’s play Romeo and Juliet, the major characters are Romeo and Juliet, whose strong and deep love for each other overcomes many obstacles. Get an answer for “Who are the important minor characters of the play The Merchant of Venice?” and find homework help for other Merchant of Venice questions at eNotes. In his play Romeo and Juliet, Shakespeare puts his minor characters to good use: Romeo’s friend Mercutio and Juliet’s nurse are both characters that are not considered the main focus of the play… Hamlet’s minor characters: rather than just an important minor character, I wanted to add more on the functions of minor characters in Shakespeare. Shakespeare’s fool in King Lear; King Lear summary; King Lear character introduction; why Shakespeare is so important; Shakespeare’s language. Analysis of minor characters in Macbeth, uploaded by Kharla Mae D. Brillo: it is established, all throughout the play, that… You must let your readers know which characters are most important to the story; once you can define a minor character… the line between major and minor characters. Shakespeare’s most minor characters: their roles and functions; a different class of Shakespeare’s minor characters who are… Discuss Shakespeare’s use of minor characters in Macbeth and Othello; discuss Shakespeare’s use of minor characters in any… the really important stuff. Importance of minor characters in Hamlet: a now-dead philosopher once said that people need three relationships in life: confidant, lover, mentor. Shakespeare’s Henry V: character… compelling, and their importance cannot be… of minor and ephemeral characters in Shakespeare’s Henry V. Horatio may be a minor role in the great Shakespeare play “Hamlet”; however, he is a role of great importance not only to the readers of this play but also to the good Prince Hamlet. Get an answer for “What is the importance of the minor character Maria in the play Twelfth Night by William Shakespeare? Is she a foil to Malvolio or any other character in the play?” and… Looks at the contribution of several minor characters in plays by William Shakespeare.
Othello Navigator indexes all appearances and all mentions of all speaking characters in Shakespeare’s Othello. The Tempest is most likely the last play written entirely by Shakespeare. The Tempest: characters, symbols, themes; you are here: minor characters. Ophelia character analysis: even as a minor character in the play Hamlet, Ophelia plays a vital part in the development of both the plot and thematic ideas. Presenting a lively look at two of Shakespeare’s most iconic plays, Romeo and Juliet and Macbeth, the roles of minor characters and the importance of specific dramatic techniques form the… Readers do not tend to notice the importance of minor characters and how they help the main character through the story in multiple ways; in the play Hamlet, written by William Shakespeare… Minor characters play a very crucial role in Shakespeare’s Hamlet: they serve as narrators for events that occurred outside the immediate play, such as the Dane’s ghost. Why was King Hamlet an important character in Shakespeare’s…? What is the importance of Gertrude in Shakespeare’s…? How essential are the minor characters in…? In fiction, major characters are central to the plot and are generally complex and three-dimensional, while minor characters are generally flat, stereotypical and not of central importance. The major importance of minor characters: just this morning I realized how very grateful I am to have what I had thought to be a minor character in a novel blossoming. Who are the main minor characters in Hamlet? Fortinbras: the opposite of Hamlet; a strong leader; very persistent; willing to kill for what he wants; ambitious.
Fatal Infections: Why are Some Diseases Incurable? - Senior SPARK This course is expected to run but has not yet been scheduled. This exciting course will center on major illnesses that stem from deadly microbial organisms. Our focus will be on zoonotic infections, or in other words, diseases that pass from animals to humans. Most newly arising diseases, and some of the most fatal, originate from our animal counterparts. Zoonotic diseases tend to be some of the most deadly because our bodies’ immune systems are not prepared or trained to battle these new invaders. How do germs invade the human body? How does our body fight these tiny invaders and, in some circumstances, lose the battle? Why do some diseases affect humans and not animals, or vice versa? In this course, you will learn why certain infections cannot be cured and what makes a germ infectious to humans. Ever curious how vaccines work to protect us from deadly microbes? We will explore the human body’s natural defense system, as well as the symptoms, diagnosis and relevant treatments for some of the most lethal germs on earth. In this course, we will grow microbes and analyze their colonies. Also, we will tour microbiology laboratories studying disease to learn how their research might one day cure patients. We will learn how our body’s natural defense system works and why, in some circumstances, it fails to eliminate the invading organism. We will describe the inherent differences between bacteria and viruses, as well as how they survive and proliferate in humans. Symptoms and diagnosis will also be emphasized. For the purposes of this class, we will discuss some infections that have attacked people throughout history, including rabies, mad cow disease, and the bubonic plague. For comparison, we will brainstorm about newer entities such as Ebola that have only arisen in the last few decades. This course will detail ancient medical remedies, current treatments and theoretical future medicines for these fatal infectious diseases. Furthermore, we will touch on how these deadly microorganisms have shaped our culture, fears and myths, with examples such as vampires and werewolves. By the end of the week, students will be familiar with the different types of invisible germs and how the human body’s natural defense system works to combat these invading microbes. Using hands-on experiments, students will grow their own microbes and explore why some germs are deadly, while others are harmless. Students will understand how infections spread in humans and their animal hosts. Moreover, students will explore the medical lineage of these diseases, such as where they originated and the historical significance of these pathogens on our society. If you’re interested in becoming a veterinarian, historian, doctor or scientist, this is the perfect class to get you started! Middle school students have varying backgrounds. There are no prerequisites for this course, but introductory biology may be advantageous. *This Senior SPARK course is designed for students currently in 8th grade (entering 9th grade Fall 2015). Younger students are encouraged to register for our Junior SPARK courses.
Every cook needs ingredients to make a meal. Consider a simple sandwich: cheese, tomato, and all the ingredients that go into the bread: flour, water, salt and yeast. Oh, and don’t forget the pickle. But if you’re a plant, you’ll make your meal through photosynthesis—and all you’ll need is a little light, water, and carbon dioxide. In this lesson you will: - Interpret diagrams that describe the process of photosynthesis - Examine the ingredients and products of photosynthesis - Identify producers and consumers in the food web Photo: (left) clouds formed by photosynthesis over the Amazon rainforest: NASA Earth Observatory
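Those ingredients combine in one overall reaction. For reference, the standard balanced equation for photosynthesis, written here in LaTeX, is:

\[ 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \xrightarrow{\text{light}} \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2} \]

Six molecules of carbon dioxide and six of water, energized by light, yield one molecule of glucose (the plant's "meal") and six molecules of oxygen.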
When Paul Chodas and Steve Chesley arrived at Nasa’s Jet Propulsion Laboratory, in a valley beneath the slopes of California’s San Gabriel Mountains, on October 6 2008, they assumed it would be a normal day. But it would prove to be anything but. The scientists worked for the space administration’s Near Earth Object (NEO) programme, a team tasked with identifying comets, asteroids and meteors that potentially pose a threat to Earth. A normal day meant scanning their screens for small white dots in our solar system — the vast majority of which were either too far away to ever be a problem or so small they would burn up in our atmosphere long before they could ever do any serious damage. On that Monday morning, however, Chodas noticed an asteroid about the size of a truck beyond the moon’s orbit. It was on a collision course with Earth. He called Chesley over. The pair estimated the asteroid to be about 5m (16ft) long and they reckoned they had about 19 hours before it hit. Chesley began punching co-ordinates into his machine, trying to compute exactly where it would make contact. “We have software that computes the trajectories and we can run it right through to impact. I pulled out my National Geographic atlas and Steve went on Google to look it up. We both reached the same conclusion: it was going to hit Sudan in the early hours of the following morning, so our next task was to tell Nasa headquarters.” Although there had never been a precedent (this was the first time in human history that Man had been able to foretell an asteroid’s imminent impact with Earth), there were still procedures in place: if Nasa research scientists working on the NEO programme spotted an asteroid that looked like it could come within six Earth radii (about 23,750 miles), they were to notify Nasa’s head office in Washington DC. “From there, it certainly went to the State Department,” Chodas says. “I don’t know all the details, but I’ve heard that it went all the way up to the White House.” Chodas and Chesley worked out the asteroid was heading for the middle of the Nubian Desert. “We knew it was small and we were certain most of it would break up in the Earth’s atmosphere so it didn’t pose a hazard – it would be no more than a shower of rocks on the ground. The closest habitation was something called Station Six, an oil-compressing station with a population of about 10,” Chodas says. “It was basically barren desert.” Just before eight in the evening California time and seven in the morning in Sudan, the captain of a KLM flight some 860 miles south-west of Station Six and 30,000ft up in the air saw a flash in the sky. The asteroid, named 2008 TC3, had just entered the Earth’s atmosphere, and one of the 10 people living at Station Six managed to capture a brief image on their mobile phone of the long trail of smoke as the remnants of the near-Earth object plunged into the sand. Disaster was averted. That time. However, there are plenty more where 2008 TC3 came from and, unfortunately, they are much, much bigger. Earlier this year, news networks around the world warned of a “doomsday asteroid”. Dubbed the “continent killer”, Apophis is a frightening-looking, 250m (820ft)-wide, 20-million-ton chunk of rock, ice and dust, pockmarked with craters, which apparently could “land” on Earth, at about 23,000 miles per hour, in 25 years’ time — i.e. in most of our lifetimes.
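To put the article's Apophis figures in perspective, a rough kinetic-energy estimate is straightforward. The Python sketch below uses only the numbers quoted above (20 million tons, 23,000 mph) plus standard conversion factors; it is an illustration of the arithmetic, not a Nasa calculation.

```python
# Rough impact-energy estimate from the article's Apophis figures.

mass_kg = 20e6 * 1000           # 20 million metric tons -> kg
speed_ms = 23_000 * 0.44704     # 23,000 mph -> ~10,280 m/s

energy_j = 0.5 * mass_kg * speed_ms ** 2
megatons = energy_j / 4.184e15  # 1 megaton of TNT = 4.184e15 J

print(f"~{energy_j:.1e} J, roughly {megatons:,.0f} megatons of TNT")
```

With these inputs the answer comes out near 250 megatons, several times the largest nuclear weapon ever tested, which is why even remote odds get serious attention.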
There are two scenarios: the first, and thankfully most likely, is that Apophis will fly by in April 2029, the year it is due to make its first “close approach”, and that’s the last we’ll see or hear of it. The second is that during that approach, it’ll pass through what scientists refer to as a “keyhole” – a small area of space that can alter the asteroid’s course due to Earth’s gravity. If this happens, it’ll be on a massive collision course with us seven years later, likely to be April 13 2036 — Easter Sunday. Back at the Near Earth Object HQ, I’m relieved, initially, to find that Chodas doesn’t seem too fazed. “It’s too far away to predict yet,” he tells me. “We don’t know precisely where Apophis is headed but we will soon, when it becomes observable again, probably in 2012 or 2013. Once we get radar on it we will be able to nail down its orbit and we will know the chances of it going through the keyhole and hitting in 2036. By that time, it could be a four in a million chance, and that could very well go down to zero.” But here’s the clincher: “There are other keyholes,” Chodas says, almost in passing. And I’m fairly sure I gasp, because he starts to smile. “It is actually a problem, because each keyhole has keyholes around it, which means it could return to Earth in a different year. Mother Nature is very devious,” he says. “If asteroids come close to Earth one year, they can come back and hit you another year. It is actually fascinating, from a mathematical standpoint.” “Fascinating” is one word for it. Chodas says there are around a thousand asteroids like Apophis that are more than a kilometre in length. And, of course, big space objects have hit the Earth before. Meteor Crater in Arizona is the legacy of a collision that happened 50,000 years ago – relatively recent in geological terms. “If one of these [large asteroids] should hit, it would cause a global catastrophe,” says Chodas. “They would throw off so much dirt into the Earth’s atmosphere that it would cover the sun and change the climate, and agriculture would severely suffer.” It’s apocalyptic visions like this that led Nasa to set up the NEO programme in 1998, with orders from the US government to locate 90 per cent of large space objects within 10 years. From its inception, the programme split its targets into four categories: comets, small bodies orbiting the sun which produce tails of gas and dust; asteroids, like Apophis, which also orbit the sun; meteors, which Chodas describes as “just little rocks or pebbles”; and man-made satellites. Considering it does such important work, you might expect the NEO programme to be based in a “mission control” room with a screen the size of a wall. But it’s not. Instead, it operates out of an office. Chodas does a lot of work on a simple laptop, although the team has “mission critical computers” that are protected against power cuts. The actual star gazing is done by observatories all over the world, using what are known as wide-field cameras that take a single wide-angle image of the sky. That data is then sent to the Minor Planet Centre in Massachusetts. “We pull that information hourly and we automatically compute their orbits,” Chodas says. “I guess we get a couple of NEOs every day on average.” The NEO programme has also tracked a number of objects using the Hubble space telescope, but, in recent years, another telescope, the Wide-field Infrared Survey Explorer — or Wise — has proved invaluable. 
In just nine months, it photographed the entire sky and discovered around 580 near-Earth asteroids. It was thanks to Wise that Nasa was able to announce last month that it had finally met the US government’s target of locating 90 per cent of all large space objects; 911 out of an estimated total of 981 more than one kilometre in size, although it admits there are still around 20,000 “medium-sized” objects — between 100m and one kilometre in size — and around a billion “small” objects, like the one that hit Sudan in 2008, which can only be seen when they’re very close to Earth. But it’s all very well locating the big ones; how do we stop them from destroying the planet? In layman’s terms, Chodas says, they have to be knocked off course. Nasa does actually have form in this area. In 2005, the organisation used a spacecraft called Deep Impact to blow a hole in the comet Tempel 1 to study its composition. Unfortunately, it wasn’t a very big hole. High-resolution photographs taken earlier this year showed the rocket fired by Deep Impact hardly made a dent. “It just reminds us how difficult it can be to move these objects,” says Chodas. So does America, or any other country, have a spacecraft that’s up to the job? “I don’t think so,” he replies. “The best thing we can do is to discover [the asteroid] in advance to give us lead time.” With enough warning, he’s confident that Nasa, together with other space administrations, will be able to improve on Deep Impact’s rather lacklustre effort. Another possibility, he says, is to detonate a nuclear device near the asteroid which would heat the surface sufficiently to push it off course. “Another technique that has been suggested is to ‘paint’ the asteroid,” Chodas continues. “If you change its reflectivity, then the change in sunlight can be used to push asteroids around.” Thankfully, an even likelier scenario is that we won’t have to do anything at all. Amy Mainzer, a Nasa research scientist, says we must keep the risks in context with others like climate change and natural disaster, which, she says, are clear and present dangers. “We’re [still] refining our understanding of how often impacts happen,” she says. “[But] this is nothing you have to panic about. It’s a reasonably infrequent thing that doesn’t happen very often.” Not very often. It’s those three brief, non-committal words that, for some reason, just don’t seem enough.
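Chodas's emphasis on lead time has simple arithmetic behind it. As a hedged illustration (my numbers, and deliberately ignoring the amplification that real orbital dynamics adds), the Python below shows how a tiny velocity change accumulates into a large displacement over the years:

```python
# How a small velocity nudge grows with lead time (straight-line estimate;
# real deflections are further amplified by orbital dynamics).

delta_v = 0.001             # a 1 mm/s nudge, in m/s
seconds_per_year = 3.156e7

for years in (1, 10, 25):
    shift_km = delta_v * years * seconds_per_year / 1000
    print(f"{years:2d} years of lead time -> ~{shift_km:,.0f} km of drift")
```

Even this conservative estimate gives hundreds of kilometres of drift per decade from a 1 mm/s push; with decades of warning, plus the orbital amplification left out here, that becomes comparable to the roughly 6,371 km radius of the target we need to miss.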
The solar system is the central star, the Sun, and all the cosmic bodies that revolve around it. The structure of the Solar system The solar system contains 8 major celestial bodies, or planets. Our Earth is also a planet. Besides it, 7 other planets journey through space around the Sun: Mercury, Venus, Mars, Jupiter, Saturn, Uranus and Neptune. The last two can only be observed from Earth through a telescope; the rest are visible to the naked eye. Until recently, another celestial body, Pluto, was counted among the planets. It is very far from the Sun, beyond the orbit of Neptune, and was discovered in 1930. However, in 2006 astronomers introduced a new formal definition of a planet, and Pluto did not meet it. Comparative sizes of the Sun and planets of the Solar system The planets have been known to people since ancient times. The nearest neighbors of the Earth are Venus and Mars; the most distant from it are Uranus and Neptune. The large planets can be divided into two groups. The first group includes the planets closest to the Sun: the terrestrial, or inner, planets: Mercury, Venus, Earth and Mars. All these planets have a high density and a hard surface (although their cores are liquid). The largest planet in this group is Earth. However, the planets far from the Sun, Jupiter, Saturn, Uranus and Neptune, significantly exceed the Earth in size; therefore they are called the giant planets. They are also called the outer planets. Thus, the mass of Jupiter is more than 300 times the mass of the Earth. The giant planets differ significantly from the terrestrial planets in their structure: they consist not of heavy elements but of gas, mainly hydrogen and helium, like the Sun and other stars. The giant planets have no solid surface; they are just balls of gas, so they are also called the gaseous planets. Between Mars and Jupiter lies the belt of asteroids, or minor planets. An asteroid is a small planet-like body of the Solar system, ranging in size from a few meters to thousands of kilometers. The largest asteroids in this belt are Ceres, Pallas and Juno. Beyond the orbit of Neptune is another belt of small celestial bodies, called the Kuiper belt. It is 20 times wider than the asteroid belt. Pluto, which lost its status as a planet and was reclassified as a dwarf planet, lies precisely in this zone. In the Kuiper belt there are other dwarf planets similar to Pluto; in 2008 they were named plutoids. These include Makemake and Haumea. By the way, Ceres from the asteroid belt also belongs to the class of dwarf planets (but not plutoids!). Another plutoid, Eris, is comparable in size to Pluto but lies much farther from the Sun, in the Kuiper belt. Interestingly, Eris was at one time even a candidate for the 10th planet of the Solar system. But ultimately it was the discovery of Eris that prompted the review of Pluto's status in 2006, when the International Astronomical Union (IAU) introduced a new classification of celestial bodies in the Solar system. According to this classification, Eris and Pluto do not fall under the notion of classical planets and have "earned" only the title of dwarf planets: heavenly bodies that revolve around the Sun, are not satellites of planets, and have a large enough mass to maintain a nearly round shape, but, unlike the planets, are not able to clear their orbits of other space objects. In addition to the planets, the Solar system includes their satellites, which revolve around them.
In total, 415 satellites are currently known. Earth's constant companion is the Moon. Mars has 2 satellites, Phobos and Deimos. Jupiter has 67 satellites, and Saturn has 62. Uranus has 27 satellites. Only Venus and Mercury have no satellites. But the "dwarfs" Pluto and Eris have satellites too: Pluto has Charon, and Eris has Dysnomia. However, astronomers have not yet come to a final conclusion on whether Charon is a satellite of Pluto or whether Pluto-Charon is a so-called double planet. Even some asteroids have satellites. The champion in size among satellites is Ganymede, a moon of Jupiter; slightly behind it is Saturn's moon Titan. Both Ganymede and Titan exceed Mercury in size. In addition to the planets and satellites, the Solar system contains dozens or even hundreds of thousands of small bodies: tailed heavenly bodies (comets), a huge number of meteoroids, dust particles of matter, scattered atoms of different chemical elements, streams of atomic particles, and others. All objects in the Solar system are held in it by the Sun's gravitational attraction, and they all revolve around it, in the same direction as the Sun's own rotation and almost in the same plane, called the ecliptic. The exceptions are some comets and Kuiper belt objects. In addition, almost all objects in the Solar system rotate around their own axes, in the same direction as they move around the Sun (except Venus and Uranus; the latter rotates "lying on its side"). The planets of the Solar system revolve around the Sun in the same plane, the plane of the ecliptic; the orbit of Pluto is strongly tilted relative to the ecliptic (by 17°) and strongly stretched. Almost the entire mass of the Solar system is concentrated in the Sun: 99.8%. The four largest objects, the gas giants, make up 99% of the remaining mass (with about 90 percent of that in Jupiter and Saturn). As for the size of the Solar system, astronomers have not yet reached a consensus on the issue. According to current estimates, the size of the Solar system is not less than 60 billion kilometers. To get at least an approximate sense of the scale of the Solar system, a clearer example helps. Within the Solar system, the unit of distance is the astronomical unit (AU), the average distance from the Earth to the Sun. It is approximately 150 million km (light travels this distance in 8 min 19 s). The outer boundary of the Kuiper belt lies at a distance of 55 AU from the Sun. Another way to imagine the real size of the Solar system is to imagine a model in which all sizes and distances are reduced by a factor of one billion. In this case, the Earth would be about 1.3 cm in diameter (about the size of a grape). The Moon would orbit at a distance of about 30 cm from it. The Sun would be 1.5 meters in diameter (about the height of a person) and stand at a distance of 150 meters from the Earth (about a city block). Jupiter would be 15 cm in diameter (the size of a large grapefruit) at a distance of 5 city blocks from the Sun; Saturn (the size of an orange) at a distance of 10 blocks; Uranus and Neptune (lemons) at 20 and 30 blocks. A person on this scale would be the size of an atom, and the nearest star would be at a distance of 40,000 km. How did the Solar system form?
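The billion-to-one model is easy to recompute. Here is a minimal Python sketch (mine, using rounded textbook diameters and distances rather than the article's figures) that reproduces the grape-sized Earth and the person-sized Sun:

```python
# Scale the solar system down by a factor of one billion.

SCALE = 1e9

bodies = {
    # name: (real diameter in km, real distance from the Sun in km)
    "Sun":     (1_392_000, 0),
    "Earth":   (12_742, 149_600_000),
    "Jupiter": (139_820, 778_500_000),
    "Saturn":  (116_460, 1_433_500_000),
}

for name, (diameter_km, dist_km) in bodies.items():
    d_cm = diameter_km * 1e5 / SCALE   # km -> cm, then divide by a billion
    dist_m = dist_km * 1e3 / SCALE     # km -> m, then divide by a billion
    print(f"{name:8s} diameter {d_cm:6.1f} cm, distance {dist_m:7.1f} m")
```

The printout lands close to the description above: Earth about 1.3 cm at 150 m from a 1.4 m Sun, Jupiter about 14 cm at nearly 780 m.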
The word "dignity" is frequently employed in many moral, political, ethical and religious debates or simple discussions. It generally refers to any human being's rights to be respected and treated ethically. This concept is extended from the Enlightenment age principles of inalienable, inherent human rights. In politics, the term is used when criticizing the bad treatment received by certain vulnerable, oppressed categories of people. This English word called "dignity" derives from the Latin "dignitas" via the French word "dignité." In its ordinary sense, it refers to status and respect. It is most often used to imply that a certain person does not receive the proper treatment, or even that a certain person does not treat himself/herself with enough self-respect. The special philosophical use of this word has a very interesting history. Nonetheless, political, scientific and legal discussions avoid any clear-cut definition of the word "dignity." So what does it actually mean? Even international proclamations leave this word undefined. Present-day scientific commentators like for instance those who argue against algeny and genetic research use the word dignity to support their claim, but when it comes to its application, they are rather ambiguous. Immanuel Kant, the Enlightenment Age (17th-18th centuries) philosopher, said that there are things which have dignity, and these things ought to be regarded as valuable without any discussion or debate. But of course, valuable is a relative term, because what one considers worthy and of value is disregarded by another, and so on and so forth. Value depends on the observer's perspective of that thing. And a thing that is an end in itself if it has a moral dimension. This means that the thing must be the representation of a choice between right and wrong. Kant asserts that only human's moral capacity is endowed with dignity. Kant also explained to the Western philosophical world that man's free will is quintessential. Human dignity is thus strongly related to humans' ability to be the choosers of their very own actions, be them right or wrong. There are many 20th century philosophers that expressed his perspectives on the topic of dignity. These philosophers included Alan Gewirth and Mortimer Adler. Gewirth's points of view are usually compared to the ones belonging to Kant, and are often opposed to them. Although he shares Kant's perspective that human dignity comes from man applying the free will principle to his actions, Gewirth placed the focus more on the moral duties and implications derived from dignity. He refers to humans' moral obligation not only to avoid doing harm to anyone else, but also to provide others with active assistance in order for them to achieve and preserve a well-being state of affairs. Adler developed the topic, referring also to humans' equal rights to dignity. He also made reference to the dignity of labor, and other things. He relates the question whether humans really possess equal dignity rights to the question whether humans are actually all equal. He also poses the question whether humans are to be set apart from other beings or things, including animals. He concluded that all humans are equal only in the sense that they are all equally different and distinct from the animal world. He said that the dignity of man is "the dignity of the human being as a person - a dignity that is not possessed by things." 
Where this distinction is not clearly recognized and accepted, complications arise in understanding the equivalence between dignity and the equal treatment of humans and other beings. Although Dan Egonsson, and later Roger Wertheimer, said that dignity is conventionally equated with being human, they both enriched the idea of dignity with something more than simply being human. Egonsson suggested that there are two conditions for being worthy of dignity: first, being human, and second, being alive. Arthur Schopenhauer distinguished between an objective and a subjective definition of dignity. The objective definition of dignity concerns other people's opinion of our worth, while the subjective definition refers to our fear of other people's opinion.
Eating a healthy diet can be a challenge for anyone. Early childhood educators can foster healthy habits with young children by applying the process of discovery to nutrition education. With the Early Sprouts approach, discovery starts in the vegetable garden, extends to the table, and draws on a unique blending of research-based approaches from two disciplines: early childhood education and nutrition science. The Early Sprouts Online Training provides early childhood educators with teaching methods and tools to implement the Early Sprouts approach, including the discovery of and appreciation for healthy foods, as well as strategies for meaningful parent involvement. This online, self-paced course is directed toward early childhood educators with all levels of education and experience. There are no prerequisites. What you’ll learn - The basics of young children's nutrition - How to apply the Early Sprouts approach to your early childhood setting - How to build and maintain a schoolyard garden - The role of sensory exploration and cooking activities in nutrition - How to engage families and motivate staff - How to use Early Sprouts with CACFP - How to make Early Sprouts work in your center or classroom The course consists of nine self-paced modules with interactive lessons and quizzes. Each module must be completed in sequence with a passing score before advancing to the next. There is no limit on the number of times a quiz can be retaken. Course access is available 24 hours a day. - An Early Sprouts Overview - Nutrition and Young Children - The Nutritionally Purposeful Preschool Classroom - Building and Maintaining a Garden - Sensory Exploration - Cooking with Young Children - Engaging Families - Using Early Sprouts with CACFP - Making it Happen in Your Center
Learn English Grammar A predicate noun follows a form of the verb "to be". He is an idiot. (Here idiot is a predicate noun because it follows is, a form of the verb "to be".) A predicate noun renames the subject of a sentence. Margaret Thatcher was the Prime Minister. (Margaret Thatcher is the subject and Prime Minister is the predicate noun - notice it follows "was", the past tense of "to be".)
Perhaps the most important contribution Khufu, the fourth-dynasty Egyptian pharaoh, made to Egypt and the world is the Great Pyramid of Giza. This ruler, sometimes referred to as Khnum Khufu or Cheops, is widely credited as the force behind this wonder of the world. Khnum Khufu was the first Egyptian pharaoh to build a pyramid at Giza. Historical records offer conflicting views about Khufu's character and actions. Some historians, like Herodotus, have claimed that Khufu was a tyrannical and cruel ruler who enslaved his people and forced them to construct the pyramid; others suggest that Khufu was a good-natured, traditional ruler and a wise leader. The Great Pyramid is sometimes offered as proof that he had the ability to conscript and mobilize a large army of workers, a monumental task in an era when almost no machinery was available. The workers were skilled craftsmen or seasonal laborers, and Khufu ensured that they were looked after and compensated well for their efforts. Some historians suggest that Khufu in his early years was contemptuous of the gods and critical of his people's belief in them, but later regretted his actions and composed several sacred books. It is also believed that he was worshipped as a god after his death, with his funerary cult becoming very popular during the Roman period.
To show how successive schools of economic thought struggled unsuccessfully to give a satisfactory explanation of business cycles until John Maynard Keynes showed that shifts in aggregate demand were the primary cause of these fluctuations. Economist Barnard J. notes that high inflation is boosting interest rates. Increasingly, consumers seem to be purchasing imported goods rather than American-made goods. Which phase of the business cycle is Barnard MOST LIKELY to be observing? Economics student Sylvia B. was recently overheard making the comment, "Well, if you've seen one business cycle, you've pretty much seen them all." Which of the following is the MOST ACCURATE appraisal of Sylvia's observation? It is totally unsubstantiated, since business cycles are all highly individualistic. Most economists today, even those who totally disagree with Marxist philosophy, hold a certain respect for Marx's economic perspectives. This is MOST LIKELY because Marx was the first to conceptualize a business cycle in which good times eventually produced bad, and vice versa. Which of these BEST summarizes the economic costs to society for tolerating a given level of unemployment? It is the difference between real GNP and potential GNP. If the aggregate supply curve is S1, full employment of the economy's resources will occur when total real output is $2,000 billion. If the aggregate supply curve shifts to S2, the demand for money will decrease and interest rates will fall.
- The main goal of accessibility standards and guidelines is to design websites everyone can use. - Several organizations offer standards to meet Section 508 and other accessibility requirements. - Close attention to formats when creating web materials makes it easier to incorporate accessibility features. The IT Accessibility Constituent Group developed this set of draft guidelines to help EQ authors, reviewers, and staff and the larger EDUCAUSE community ensure that web content is accessible to all users, including those with disabilities. This article covers the most common types of files and formats. What Is Accessible Web Design? Accessible web design is the practice of designing and developing websites that are usable by everyone. People who use the web have a variety of characteristics. As web developers, we cannot assume that all our users access our content using the same web browser or operating system that we do, nor can we assume they use a traditional monitor for output or keyboard and mouse for input. For example: - Users who are blind might access a web page using an audible interface such as screen reader software or a tactile interface such as a refreshable Braille output device. - Users with low vision might view the page with large fonts or a high-contrast color scheme. - Users with physical disabilities might navigate the web without a mouse, instead using a keyboard, speech recognition technology, or other assistive technologies. The growing variety of compact web-enabled mobile devices adds further variety to the user agent spectrum. The following guidelines are designed to ensure that content does not place constraints on users by requiring them to use a narrowly defined set of supported devices. By following these guidelines, you can make your content accessible to everyone. Overview of Web Accessibility Guidelines and Standards Web accessibility is formally defined by the World Wide Web Consortium (W3C), whose Web Content Accessibility Guidelines (WCAG) 2.0 became an official W3C Recommendation in December 2008. WCAG 2.0 organizes web accessibility success criteria into four general principles: - Perceivable: Content must be perceivable to all users. Keep in mind that users perceive content with a variety of senses, output devices, and settings. - Operable: User interface components, including menus, links, and controls, must be operable by all users. Keep in mind that users operate such controls using a variety of input devices, including mouse, keyboard, stylus, touch screen, speech, and other assistive technologies. - Understandable: Content and the user interface must be usable and easy to understand. - Robust: Content must use standard technologies and be coded in a way that will increase the likelihood of its being supported across all web-enabled technologies, including assistive technologies and future technologies. For additional information, consult the comparatively user-friendly WCAG 2.0 Checklist created by WebAIM at Utah State University. Another set of relevant standards is the Electronic and Information Technology Accessibility Standards developed by the federal Access Board in support of Section 508 of the Rehabilitation Act as amended in 1998. Section 508 is a federal law that requires federal agencies to ensure accessibility of their electronic and information technologies, including websites, software, and multimedia, plus three other categories of IT.
The web-related portion of the Section 508 standards, published in the Federal Register in 2001, is based in part on the previous version of the WCAG (version 1.0). The Section 508 standards have been adopted by many states and higher education institutions. These standards are currently under review. Accessible Word Processing Documents Word processing documents are created using a variety of software, including Microsoft Word, Open Office, and various other products. Whether word processing documents are delivered over the web in their native format or are used to author content that will ultimately be converted to HTML, there are several accessibility guidelines to keep in mind. The guidelines presented in this section apply to word processing documents in general rather than to documents produced with a specific product. For more specific information about creating accessible Microsoft Word documents and converting them to HTML, consult the WebAIM article simply titled "Microsoft Word." Document Structure and Presentation - Place content in logical reading order so that the document renders correctly when the display size is changed or when the document is magnified or converted into alternative formats (audio, HTML, PDF, DAISY, Braille, etc.). - Avoid complex layout, sidebars, and other ornamentation because they make it difficult to maintain a logical reading order. - Avoid placing content in drawing canvases or text boxes because these are floating objects and flow to the bottom of a page's reading order. - Use structural and stylistic features that are built into word processing software (headings, paragraphs, lists, sections, headers/footers, tables, columns, forms). This ensures that objects on the page are coded semantically. This information is passed on to HTML or PDF files when exported and plays a critical role in screen reader users' ability to navigate efficiently through these documents. Images and Non-Text Objects - Always provide an alternative text description (alt text) for all non-text objects (graphs, images, illustrations, multimedia, etc.). Users of non-visual devices such as screen readers or Braille output devices depend on alt text in order to access the essential content of the images. - Most word-processing software applications provide a means of adding alt text to images, and this is passed on to HTML or PDF files when exported. For example, in Microsoft Word 2003, you can add alt text to images by right-clicking on an image, then selecting Format Picture, then selecting the Web tab, then entering text into the Alternative Text field. In Word 2007, this same feature is accessed by right-clicking on an image, then selecting Size, then selecting the Alt Text tab. - If images are provided as separate image files (JPEG, GIF, PNG), an alternative text description must be provided separately within the document, clearly identified as alternate text for a particular image (for example, "Alt Text for Figure1.gif"). - Alt text should communicate the essential content of the image as efficiently as possible. - Alt text should not be provided for decorative elements. - If multiple images are used for a single concept, they should be merged into a single composite image. - If images communicate highly detailed visual information, as in charts or graphs, a long description must be provided in addition to the shorter alt text.
This should be provided separately within the document and clearly identified as a long description for a particular image (such as "Long Description for Figure1.gif"). See the section Images Requiring Long Description below for information about how this information is utilized in HTML. When a non-visual user (e.g., a screen reader or Braille user) reads a data table, the default reading order flows by row from the top-left cell to the bottom-right cell in the matrix. As tables increase in complexity (especially if there are nested columns or rows), it becomes increasingly challenging for non-visual users to understand their position within the structure of the table. HTML provides markup that allows table structure to be explicitly communicated to non-visual users. Word-processing software does not have similar markup or functionality. Therefore, the process of converting a data table to HTML requires extra steps to properly mark up the table for accessibility. See the section on Data Tables under Accessible Web Pages below for more information about accessible table markup in HTML. - Use link text that makes sense out of context. Screen readers are equipped with functionality that allows users to pull up a list of links on the page and navigate through that list either in order of appearance or alphabetically. In this context, links that depend on context (redundant links or "click here" links) make no sense to non-visual users. - Use link text that is succinct and easy to verbalize. Speech recognition users select links by speaking the link text. Long, complex link text, including URLs, is difficult to verbalize and should therefore be avoided. Accessible Web Pages Document Structure and Presentation HTML is a semantic, structured language. Assistive technologies such as screen readers utilize this structure extensively, so it is critical that HTML be used properly to support accessibility. - HTML heading elements must be used to mark up all headings and subheadings. If used properly, the headings on a page form an outline of the content of that page. - HTML list elements must be used to mark up any lists of content, including navigation menus (which are lists of links). - Forms must include markup that explicitly communicates the structure of the form and the relationships between its parts. The most fundamental step in creating accessible forms is using the HTML label element to identify labels and explicitly associating them with the form fields they represent. Additional information about accessible forms in HTML is available in the WebAIM article "Creating Accessible Forms." - HTML should be validated using the W3C Markup Validation Service. This increases the likelihood of interoperability across platforms and browsers. Images and Non-text Objects Images must have alternate text to be accessible to non-visual users. The method for providing alternate text varies depending on the content of the image and the method of submission: - In HTML, the <img> element must have an alt attribute, such as alt="description of the image" - If the image is decorative, the best practice is to deliver the image as a background image using Cascading Style Sheets. However, if an HTML <img> element is used, include a NULL alt attribute (alt=""). This is a standard practice that instructs screen readers to ignore the image. - Alt text should communicate the essential content of the image as efficiently as possible. - If images contain text, repeat the text verbatim.
- If images contain highly detailed information, as in charts or graphs, provide a succinct alt attribute (for example, alt="Figure 1") and provide additional detail using a long description (see below). Images Requiring Long Description If images communicate highly detailed information, as in charts or graphs, the important content from these images must be communicated in a long description. - The long description should be provided in a separate HTML page. - The <img> element should have a longdesc attribute, which points to the URL of the separate web page where the long description is available (for example, <img src="figure1.gif" longdesc="figure1_description.html" />). When screen reader users encounter an image with a long description, they are informed that the image has a long description, at which point they have the option of reading that description or skipping it. The following HTML markup is needed to ensure that non-visual users can navigate tables with full awareness of their position within the table, and of how all parts of the table are related. - Wherever possible, avoid complex tables with nested rows and columns or split or merged cells. Even with accessible markup, complex tables present usability challenges for non-visual users, and they are not easily converted into alternative formats such as Braille. - Include a summary attribute with the table element (such as <table summary="This table shows...">). The summary attribute is read by screen readers, but is not displayed visually. The purpose is to provide a succinct overview of the table's content and layout so screen reader users can explore the table with an idea of what to expect. - Mark up the table's caption using the HTML <caption> element. This ensures that the caption is explicitly associated with the table for non-visual users. - Mark up column and row headings with the HTML <th> element and include a scope attribute to identify whether the heading is for a column (<th scope="col">) or row (<th scope="row">). - For complex tables, include id attributes on all <th> elements and headers attributes on all <td> elements, where the value of the headers attribute is a space-delimited list of ids that correspond with the current table cell. - See the corresponding section above, under Accessible Word Processing Documents. - Avoid causing links to open in a new window. This can be disorienting to users of assistive technologies and is unreliable given the widespread use of pop-up blockers. - Be sure that all links, form fields, and controls can be operated without a mouse. This can be tested by navigating through a web page using the Tab key in most browsers. - Avoid using color as the sole means of communicating differences or other information. Keep in mind that some users, including those who are blind or colorblind, cannot perceive differences in color. - Avoid causing objects on the screen to flash (such as in a strobe-like effect). Flashing objects can trigger seizures in susceptible individuals. - Avoid using or requiring plug-ins or other technologies that do not honor the user's operating system or browser settings for font choice, font size, and alternative color scheme. This can be tested by changing these settings within the preferences of the browser or control panel of the operating system, then refreshing the web page to determine whether it is still usable. Portable Document Format (PDF) is a file format developed by Adobe to deliver and render on the web documents created for print.
It preserves a source document's original style, layout, formatting, fonts, images, etc. A PDF document uses a helper agent to "view" or "read" the document on the screen, making it independent of the operating system, authoring software, and display device. General Types of PDF Files - Unstructured: A graphical representation of the original document created by scanning the original document as an image. Such files are inaccessible to assistive technologies such as screen readers. - Structured: The same as unstructured, but also including electronic text of the original document. Searchable images are created using Adobe Distiller or other PDF writers. The text is searchable and partially accessible to screen readers, although without the markup required for full accessibility. - Tagged: A true electronic document, with searchable text and an underlying semantic structure. This is the only type of PDF that has full support for accessibility, including a heading structure that can be easily navigated by screen reader users, support for alternative text for images, and the ability to reflow (wrap) document text when zoomed. A tagged PDF is created by default when converting to PDF from Microsoft Word, Excel, and PowerPoint using Adobe's Acrobat PDFMaker plug-in for Office. Creating Accessible PDF Files Craft original documents with accessibility in mind (see the earlier section on Accessible Word Processing Documents). Keep in mind that: - Complex tables might not be correctly interpreted by screen readers. - Complex layouts with multiple layers might not be fully recognized or might follow a reading order that is not consistent with how the information is presented visually. The steps for converting a document into an accessible tagged PDF file depend on the original source of the document: - Documents created in Microsoft Office and several Adobe products are converted by default to a tagged PDF file when exported via the PDF toolbar, the PDF menu, or Save As > PDF. For this to result in an accessible document, care must have been taken when authoring the document to include semantic structure, add alternate text to images, and apply other accessibility techniques as described in the Accessible Word Processing Documents section of these guidelines. - PDF documents generated using Acrobat Distiller and other PDF writers are not tagged by default. However, accessibility can be added after production using Adobe Acrobat Professional. The first step in making the document accessible is to select the menu item Advanced > Accessibility > Add Tags to Document. Note that this is only the first step. Additional steps must be taken to add alternate text to images, add heading structure, check for proper read order, etc. Adobe Acrobat provides tools that support these steps in the Advanced > Accessibility menu. An audible text reader is also available in both Acrobat and Acrobat Reader, accessed via the menu by selecting View > Read Out Loud. Additional information about checking for and fixing accessibility in PDF files is available in WebAIM's article titled "PDF Accessibility." - PDF documents created as scanned images require the same procedure as in the previous item, plus the additional first step of performing optical character recognition. This can be done within Adobe Acrobat via the Document > OCR Text Recognition menu.
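Before turning from documents to presentation slides, the table markup described under Data Tables above can be made concrete with a minimal sketch. This is a hypothetical example only: the subject matter (course enrollment by term) and all values are invented for illustration, while the summary, caption, scope, id, and headers attributes follow the techniques already described.

<!-- Hypothetical example: a simple data table marked up for accessibility. -->
<table summary="Enrollment counts: rows are courses, columns are terms.">
  <caption>Course Enrollment by Term</caption>
  <tr>
    <td></td>
    <!-- Column headings carry scope="col" so screen readers announce them -->
    <th id="fall" scope="col">Fall</th>
    <th id="spring" scope="col">Spring</th>
  </tr>
  <tr>
    <!-- Row headings carry scope="row" -->
    <th id="bio" scope="row">Biology 101</th>
    <!-- Each data cell's headers attribute lists the ids of its headings -->
    <td headers="bio fall">210</td>
    <td headers="bio spring">185</td>
  </tr>
  <tr>
    <th id="chem" scope="row">Chemistry 101</th>
    <td headers="chem fall">160</td>
    <td headers="chem spring">172</td>
  </tr>
</table>

For a simple table like this one, the scope attributes alone would suffice; the id/headers pairing becomes important for the complex tables discussed above.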
When presentation slides are delivered over the web, there are two general accessibility considerations: The slides must be created with attention to accessibility, and the slides must be delivered in a format that is perceivable to and operable by all users. One good test for operability is to try to advance the slides, or operate any other slide controls, using the keyboard alone. Since the most common slideshow application is Microsoft PowerPoint, these guidelines focus primarily on PowerPoint accessibility. Creating Accessible Slides - Use a standard design style template. Templates create organized placeholders for standard content. Using them increases the likelihood that when content is exported, it will be properly exposed to conversion tools and assistive technologies. - Be attentive to reading order. If content (e.g., a text box) is added to a standard slide design, be aware that the added content will be appended to the end of the read order, which in some cases may result in an illogical flow for non-visual users. - Add alternative text to all images. The technique for doing so in PowerPoint is similar to that for Microsoft Word: In PowerPoint 2003, you can add alt text to images by right-clicking on an image, then selecting Format Picture, then selecting the Web tab, then entering text into the Alternative Text field. In PowerPoint 2007, this same feature is accessed by right-clicking on an image, then selecting Size and Position, then selecting the Alt Text tab. - Use discretion with embedded multimedia, automatic progression, transitions, custom animations, and similar features when PowerPoint presentations are intended for distribution over the web. If the PowerPoint will be distributed in its original format, some of these features may inherently pose accessibility challenges. If the PowerPoint will be exported to HTML or PDF, these features may not survive the export, and the remaining content may be impacted by their absence. - Provide sufficient contrast between foreground and background colors, and avoid using patterned backgrounds. - Give each slide a unique title, since this information can help facilitate navigation, both within PowerPoint and within exported formats. - If inserting diagrams, charts, or tables into a PowerPoint slide, consider accessibility best practices. Techniques vary depending on how the slides will ultimately be delivered. IBM provides additional details related to specific PowerPoint features in "Creating Accessible Microsoft PowerPoint Documents." Distributing Slides Over the Web - If PowerPoint slides are distributed in their native format, users must have PowerPoint or an alternative software package capable of reading PowerPoint. A free PowerPoint Viewer browser plug-in is available from Microsoft, but it does not work in all browsers or across all operating systems, and it does not work well with assistive technologies. - PowerPoint has built-in features for saving to web pages, but the output is a complex frameset with pages coded in ways that are not supported well by assistive technologies. - PowerPoint can be converted to valid, accessible HTML with the Virtual508.com Accessible Web Publishing Wizard for Microsoft Office. In addition to exporting to an accessible format, the Wizard provides an interface that helps identify accessibility problems and solutions.
- PowerPoint slideshows can be exported to PDF using Adobe’s Acrobat PDFMaker plug-in for Microsoft Office, accessed via the PDF toolbar, the PDF menu, or Save As > PDF. As with Microsoft Word, this plug-in exports by default to a tagged PDF file. However, for this to result in an accessible document, care must have been taken when authoring the document to use standard design templates, add alternate text to images, and apply other accessibility techniques as described in the preceding section. The resulting output is a single file and thus is easily distributable. Accessibility of video content affects many groups of users, including people who are unable to hear the audio content, people who are unable to see content that is presented visually, and people who are unable to access the player controls. - Video content must be captioned. Otherwise, the content is inaccessible to people who are deaf or hard of hearing. A growing number of software tools and services are available, many of them free, that support adding captions to video. Consult WebAIM’s Captioning Resource List for additional information. - Video content must be described. If video content is communicated visually and not already described in the audio, this content is inaccessible to people who are unable to see it. This content must be described, either in a transcript accompanying the video, or via audio description, a standard technique by which video is supplemented with a narrative track that describes visual content as it happens. - A transcript should be provided. This benefits users who don’t have the technology or bandwidth to view the video, as well as those who want quick access to information without watching the entire video production. - Video must be delivered in a player that (1) is accessible by keyboard to users who are unable to use a mouse; and (2) has buttons and controls that can be read by screen readers. Audio-only content such as podcasts must be accompanied by a transcript. Otherwise, the content is inaccessible to people who are deaf or hard of hearing, is not searchable, and can be inconvenient to users who want to quickly retrieve specific information from the presentation. For issues related to audio that is part of a video, see the preceding section. Accessible On-line Applications On-line applications such as interactive games, automated simulations, etc., are each unique and therefore must be evaluated individually for accessibility. The following is a quick checklist of some of the issues to consider: - Can the application be operated without a mouse, for example by using a keyboard alone or speech-recognition technology? - Is the application accessible to blind individuals using screen readers or Braille output devices? - Does the application avoid using color as the sole means of communicating differences or other information? - If the application includes audio or video, are these features accessible as defined in the preceding two sections? - Does the application avoid causing objects on the screen to flash in a way that could trigger seizures in susceptible individuals? - Does the application honor the user’s operating system or browser settings for font size or alternative color scheme? - If the application changes automatically over time, does it provide a mechanism by which the user can pause or override this behavior? 
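Finally, the form guidance given under Accessible Web Pages above can also be illustrated with a minimal, hypothetical sketch. The form's purpose and every field name here are invented for illustration; the point is the explicit association of labels with fields.

<!-- Hypothetical example: a short form whose fields are explicitly labeled. -->
<form action="/subscribe" method="post">
  <fieldset>
    <!-- The legend gives the group of fields an accessible name -->
    <legend>Newsletter Signup</legend>
    <!-- Each label's for attribute matches its field's id, so screen
         readers announce the label when the field receives focus -->
    <label for="fullname">Full name</label>
    <input type="text" id="fullname" name="fullname" />
    <label for="email">Email address</label>
    <input type="text" id="email" name="email" />
    <!-- The submit control has short, speakable text for speech-recognition users -->
    <input type="submit" value="Subscribe" />
  </fieldset>
</form>

Because the labels are programmatically associated rather than merely adjacent, the form remains understandable when read field by field with a screen reader, and it can be completed with the keyboard alone.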
Join the Conversation We encourage interested readers to join the EDUCAUSE Accessibility Constituent Group LISTSERV and participate in the ongoing conversation about accessibility issues. The authors wish to thank James Bailey at the University of Oregon and Kevin Shalla at the University of Illinois at Chicago for their contributions to this document. Thanks also to Jon Gunderson at the University of Illinois at Urbana-Champaign, the entire EDUCAUSE IT Accessibility Constituent Group, and EDUCAUSE staff for conceiving of this document and inspiring and supporting its creation. Terrill Thompson's participation as author was supported in part by the National Science Foundation (grant #CNS-0540615). Any opinions, findings, and conclusions or recommendations expressed in this article are those of the authors and do not necessarily reflect the views of the federal government. © 2009 Terrill Thompson, Saroj Primlani, and Lisa Fiedor. The text of this article is licensed under the Creative Commons Attribution-Noncommercial-Share Alike 3.0 license.
This Sunday, the United Nations celebrated the first-ever World Bee Day, making it a back-to-back celebration with today's 25th International Day of Biodiversity. World Bee Day, proposed by Slovenia's UN mission last December, coincides with the 285th birthday of the Carniolan beekeeping pioneer Anton Janša. Since Janša's days, we have come to better understand the amazing services bees provide to ecosystems, especially their crucial role in pollinating important crops. We also know more about the potential impacts of their decline. In 2016, the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) evaluated the available data about pollinators, including honey bees but also bumble bees, butterflies, birds, and many more. Its first global thematic assessment showed that large-scale declines in wild pollinators are happening in northern Europe and North America. For other regions, there is insufficient data to allow a general assessment; however, local declines were recorded in South America, Asia, Africa and Oceania. Given the vital role insects and birds play in pollination, this loss is extremely worrying. For example, almost 90 percent of wild flowering plant species depend on pollinators, and these plants are critical for providing food and habitats to other species. Farmers' livelihoods have also become increasingly dependent on pollination, since the production of pollinator-dependent crops has increased by 300 percent during the past 50 years. Today, 35 percent of global crop production, including at least 800 cultivated plants, depends on insects, birds and other pollinators. Without bees, apples, coffee and gummy bears could no longer be found in supermarkets. Research has shown that in addition to being reliable pollinators, bees also improve the quality of plants. One study demonstrated that bee-pollinated strawberries were "heavier, had less malformations and reached higher commercial grades" compared to self- or wind-pollinated ones. Unfortunately, declines in both wild and managed bees have been widely reported in the past decade, with strong evidence that pesticides and fertilizers from agricultural intensification are the main cause. Furthermore, with the intensification of agriculture, weeds that provide food for pollinators have been eliminated and crop fields have been homogenized. These monocultures result in only one crop flowering at any one period in time, effectively limiting nectar sources and opportunities for bees and other pollinators. If there are fewer pollinators, then pollinator-dependent crops will have a problem reproducing. The EU's partial ban on neonicotinoids is a step in the right direction The recent extension of the EU's ban on neonicotinoid insecticides is a crucial step towards protecting bees and other insects harmed by pest-control chemicals. The ban became politically feasible when a major new assessment from the European Food Safety Authority (EFSA) showed that the world's most widely used insecticides pose a serious danger to both honey bees and wild bees. Another study shows that about 75 percent of the world's honey is contaminated with neonicotinoids. The ban is a positive development, but it is still a "band-aid" solution. This is partly because the EU, while approving a full outdoor ban of three active substances (imidacloprid, clothianidin, and thiamethoxam), continues to allow their usage in permanent greenhouses.
There is still a risk of leakage, where insecticides can contaminate waterways and surrounding soils. In any case, bees and other pollinators are exposed to harmful chemicals because of agricultural systems that are susceptible to pests. What we need is an agricultural industry that is less dependent on chemical inputs. Sustainable, pollinator-friendly farming, achieved by promoting an ecological intensification of agriculture and a diversification of farming systems, would be a good solution. We stand to lose a third of our favorite food crops if we do not find means to maintain the diversity of bees and other pollinators. Moreover, extreme measures such as large-scale beehive rental are becoming commonplace in some parts of the world because there are not enough native and local honey bees to sufficiently pollinate farms. Albert Einstein is often quoted as having said, "If the bee disappears from the surface of the earth, man would have no more than four years to live." We now know that Einstein did not actually say these words, but we are still well advised to err on the side of caution and stop the decline of bees before they completely disappear. In this sense, citizen efforts such as bee hotels or bee-friendly gardening, alongside the consumption of organic food, are important initiatives that can make a difference. World Bee Day is celebrated not only to generate a buzz about the benefits bees provide; it is also an invitation to take concrete action to protect bees and, in the process, protect the future of our food. Source: The Current Column - German Development Institute / Deutsches Institut für Entwicklungspolitik (DIE), 22.05.2018
Complete List of Google Earth Activities Plate Tectonics as Expressed in Geological Landforms and Events part of MARGINS Data in the Classroom:MARGINS Mini-Lessons This activity seeks to have students analyze global data sets on earthquake and volcano distributions toward identifying major plate boundary types in different regions on the Earth. A secondary objective is to familiarize students with two publicly available resources for viewing and manipulating geologically-relevant geospatial data: Google Earth(TM) and GeoMapApp. Emergent Models in Google Earth part of Cutting Edge:Introductory Courses:Activities This is one sample of a set of emergent models we are developing for use with Google Earth. Students use the Google Earth time-slider to lift 3D models of the subsurface into view. They can substitute their own ... Google Earth Tours of Glacier Change part of Cutting Edge:Topics:Climate Change:Activities A detailed Google Earth tour of glacier change over the last 50 years is given in class as an introduction. Students are then asked to select from a group of glaciers and create their own Google Earth tour ... Google Earth and Meandering Rivers part of Cutting Edge:Early Career:Previous Workshops:Workshop 2011:Teaching Activities This activity uses Google Earth to introduce students to a variety of measurements related to meandering rivers by looking at how rivers around the world have changed over time. Introducing Geologic Map Interpretation and Cross Section Construction Using Google Earth part of Cutting Edge:Structural Geology:Activities A highly effective, non-traditional approach for using Google Earth to teach strike, dip, and geologic map interpretation. Landform Interpretation: Table Mountain part of Cutting Edge:Geomorphology:Activities Using topographic maps, geological maps, aerial photos, and Google Earth, groups of students develop hypotheses about a Miocene [9 Ma] river channel [Table Mountain] and post-flow processes that have resulted in the ... Stream Characteristics Lab part of Quantitative Skills:Activity Collection Students determine the relationship between the sinuosity of a river and its gradient by calculating gradients and sinuosity, and generating a graph on Excel. They then test the relationship by making measurements on a picture generated on Google Earth. Arctic Climate Curriculum, Activity 1: Exploring the Arctic part of Cutting Edge:Climate Change:Activities This activity introduces students to the Arctic, including different definitions of the Arctic and exploration of the Arctic environment and Arctic people. Students set out on a virtual exploration of the ...
Intellectual Disability: Everything you need to know to understand it In this article we address in depth what intellectual disability is: symptoms and diagnostic criteria, types of intellectual disability and their characteristics, causes and evolution. Also, discover useful tips that can help you relate better to people with intellectual disability. Intellectual disability has been taboo for many years, surrounded by stigma and exclusion. Concepts, definitions, and contexts are changing. We now focus more on the person and their goals and needs than on their limitations. If you have doubts and want to know more about intellectual disability, and if you don't want to stay confined to the myths spread about intellectual disability, I invite you to continue reading. What is intellectual disability? - Definition Intellectual disability involves a number of significant skill limitations. In other words, people with these disabilities have limitations in intellectual functioning and in adaptive behavior. Interaction with an environment that is not adapted to them is therefore difficult to handle. The American Association on Intellectual and Developmental Disabilities (AAIDD) defines intellectual disability as follows: "Intellectual disability is characterized by significant limitations in intellectual functioning and adaptive behavior manifested in adaptive conceptual, social, and practical skills. This disability originates before the person turns 18." Schalock established that if appropriate personalized supports are maintained over a long period of time, the general functioning or performance of the person with an intellectual disability will improve. The current approach has led to replacing the term mental retardation with the term intellectual disability. This new term not only fits clinical terminology but is also less offensive to people with intellectual disability. Other terms that are no longer used are mental deficiency, cognitive disability, psychic disability, mental retardation, abnormal or subnormal. Intellectual disability is not a mental illness. Is intellectual disability the same as a developmental disability? Developmental disability is a broader term encompassing intellectual disability, cerebral palsy, autism spectrum disorders and other conditions that are largely related to intellectual disability (or have characteristics similar to intellectual disability). Symptoms and diagnostic criteria for intellectual disability When we talk about intellectual disability, we must always keep in mind the individual differences between people. Nor should we forget that the environment can play an important part in the adaptation or adjustment of the person. The fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), which includes intellectual disability within the neurodevelopmental disorders, characterizes it by: - A. Deficits in intellectual functions, such as reasoning, problem-solving, planning, abstract thinking, judgment, academic learning and learning from experience, and practical understanding, confirmed by both clinical assessment and individualized, standardized intelligence testing. - B. Deficits in adaptive functioning that result in failure to meet developmental and sociocultural standards for personal independence and social responsibility.
Without ongoing support, the adaptive deficits limit functioning in one or more activities of daily life, such as communication, social participation, and independent living, and across multiple environments, such as home, school, work, and recreation. - Difficulties at the conceptual or academic level: that is, difficulty performing tasks involving memory, attention, language, reading, writing, mathematical reasoning, acquiring practical skills, problem-solving, etc. - Difficulties in the social domain: little awareness of the thoughts, feelings, and experiences of others, limited empathy and interpersonal communication skills, difficulty making friends, etc. - Difficulties in the practical domain: related to the degree of learning and how self-sufficient the person is in different life situations, such as personal care, work responsibility, money management, leisure, task organization or appropriate behavior control. The last of the diagnostic criteria for intellectual disability is the onset of these intellectual and adaptive deficits: they must always appear during the developmental period, although the age at which they appear and the characteristic traits will depend on the cause, type, and severity of the intellectual disability. Child development is a complex process based on the biological, psychological and social evolution of the person, so it is important to take into account the series of milestones or fundamental evolutionary moments that imply progress and mastery of simple skills that will facilitate more complex ones. Beyond the DSM-5 criteria, we have to take into account other characteristics and symptoms that can help us recognize an intellectual disability. - In cases of more severe intellectual disability, milestones of motor development, language development or social development are reached later than in the general population. These can be identified in the first years of life; however, when the intellectual disability is mild, it may not be identified until the child is at school, when academic difficulties become evident. - As for cognitive development, children with intellectual disabilities go through the same evolutionary stages as children without intellectual disabilities and in the same order, but their pace of development is slower. Some people with intellectual disabilities remain in earlier stages of thought when reasoning, while others show skills related to more advanced levels of thinking. - When intellectual disability is associated with a genetic syndrome, there may be genetically associated physical characteristics. - When intellectual disability is acquired, for example after an illness, it can start abruptly: after meningitis, encephalitis or after a blow (trauma) to the skull during development. - People with intellectual disabilities have between 3 and 4 times more mental, neurological, medical and physical disorders than the general population. - The mental health problems in people with intellectual disabilities are the same as in people without disabilities; however, the prevalence is greater. The most common are mood disorders, depression, schizophrenia, anxiety symptoms and disorders, and sleep disturbances. - Regarding physical health problems, people with intellectual disabilities are at greater risk of and have a higher prevalence of diseases such as obesity, epilepsy, diabetes, HIV and STDs, and dementia, among others. That is why it is necessary to create health programs aimed at meeting their needs.
- Another of the intellectual functions usually most affected in people with intellectual disabilities is language and communication. In general, people with intellectual disabilities have language comparable to that of their younger peers, and the greater the severity of the intellectual disability, the greater the severity of the language problems. - Behavior alterations are another of the problems generally associated with intellectual disability and can be caused by several factors, such as the discomfort caused by the difficulty in communicating or expressing personal desires and needs, emotional problems related to exclusion and social discrimination, or simply serving as a way of expressing or communicating what the person cannot otherwise transmit (frustration, worry, nervousness ...). - These problematic behaviors, because of their intensity, frequency or duration, can negatively affect the personal development and community participation of the person with intellectual disability. Generally, these behaviors serve a purpose for the person who performs them. Types of intellectual disability and characteristics: Mild, Moderate, Severe and Profound When diagnosing an intellectual disability, the severity of the disability must be specified at one of four levels: mild, moderate, severe and profound. Traditionally, this classification was based on IQ scores obtained through intelligence tests. A person with mild intellectual disability would have obtained an IQ score between 50-55 and 70 points; with moderate intellectual disability, between 35-40 and 50-55 points; with severe intellectual disability, between 20-25 and 35-40 points; and with profound intellectual disability, scores lower than 20-25 points (remember that the average score of the general population is between 90 and 110). The most current classification of intellectual disability is based on adaptive behavior and can be summarized as follows: 1. Mild Intellectual Disability In a person with a mild intellectual disability, the conceptual domains affected include abstract thinking, cognitive flexibility, short-term memory and the functional use of academic skills such as reading or handling money. As for the social domain, in comparison with their peers, immaturity appears in social interactions, and the risk of being manipulated increases. Finally, they need support to complete complex tasks of daily living. 2. Moderate Intellectual Disability A person with a moderate intellectual disability needs continuous assistance to complete basic activities of daily living, and others may need to take on some responsibilities for that person (for example, signing an informed consent). Regarding the social domain, oral language (which is the main tool we have to communicate socially) is much less complex than that of people without disabilities. They may not adequately interpret certain social cues and need communicative support to establish successful interpersonal relationships. Finally, with a longer teaching period and additional support, they can develop various skills and abilities. 3. Severe Intellectual Disability When intellectual disability is severe, conceptual and cognitive skills are much more limited. The person has little understanding of language and of numerical concepts such as time or money. Caregivers must provide extensive support to perform daily activities.
Since oral language is very limited in both vocabulary and grammar, speech consists only of simple words or phrases and may be supplemented by alternative means such as non-verbal communication. Communication and social interaction focus on the here and now. The person requires constant support and supervision for all daily living activities (cooking, personal hygiene, choice of wardrobe, etc.). 4. Profound Intellectual Disability The person may use some objects (brushes, etc.) and acquire some visual-spatial skills such as pointing. However, the motor and sensory problems that are usually associated can prevent the functional use of objects. Social skills are also very limited in terms of the comprehension of both verbal and gestural communication. The person can understand very simple instructions and express basic desires or emotions through simple, non-verbal communication. The person is dependent in all respects, although, if there are no major motor or sensory impairments, they can participate in some basic activities. Main syndromes associated with intellectual disability 1- Down Syndrome Down Syndrome is caused by a chromosomal abnormality in pair 21; that is, the person has an extra chromosome in that pair and therefore has 47 chromosomes instead of the 46 that typically appear in the human karyotype. This syndrome is the most frequent cause of intellectual disability of genetic origin. Three different types are now known: simple or pure trisomy 21 (more than 90% of cases), chromosomal translocation (whose characteristics are similar to the previous type) and mosaicism, a less frequent type that affects only 1% of cases. The latter is notable because it can show all, some or even none of the typical features associated with Down Syndrome, depending on the percentage of cells carrying the extra information. At the physical level, people with Down syndrome usually have a flat, straight face (brachycephaly), muscular hypotonia, short stature; neck, extremities, fingers, and ears that are generally short; nose and eyes inclined upwards; and a small mouth with a large tongue. At the behavioral level, there is slowness in processing, structuring, interpreting and elaborating information, resulting in a mild to moderate intellectual disability. They also often have difficulty maintaining attention, retaining information in memory and with spatiotemporal orientation. Language production is usually poor; despite this, they usually have a good capacity for social adaptation. 2- Fragile X Syndrome Fragile X syndrome is a disorder of hereditary origin that occurs more often in men than in women and is the result of a molecular anomaly on the X chromosome. It is the second most common genetic cause of intellectual disability. People with this syndrome have four main features: an elongated face with a broad forehead and prominent chin, large and protruding ears, great joint mobility and macroorchidism (excessive testicular development). Behaviorally, they usually present mild or moderate intellectual disability and language alterations such as speech delay or absence of language, as well as hyperactivity, attention deficit, extreme shyness or frequent stereotypies such as flapping or biting their hands. In women, cognitive impairment is usually milder. 3- Williams Syndrome Williams Syndrome is a genetic disorder characterized by the loss of genes on one of the chromosomes. Physically it is associated with a characteristic facial appearance: an elongated, thin face, large lips, light-colored eyes with a star pattern on the iris and a flattened nose.
In a large number of cases, the cardiovascular system is affected. People with this syndrome have intellectual disability, usually between mild and moderate, and psychomotor difficulties. They usually have a rich vocabulary, good skills for social interaction, as well as an aptitude for music and a good memory. 4- Angelman Syndrome Angelman Syndrome is a genetic disorder due to alterations in chromosome 15. It is associated with delays at the neurological level. Physically, those affected have low pigmentation in their skin, hair and eyes. They have a large mouth with widely spaced teeth and a protruding jaw. They also have a smaller head size and some characteristic spots on the iris. The intellectual disability in this syndrome is usually severe or profound, with severe impairments in speech and language, delayed psychomotor development, alterations of movement and balance, and hand stereotypies. Another characteristic feature is the presence of a smile without apparent cause. 5- Prader-Willi Syndrome Prader-Willi syndrome is a congenital disorder derived from the absence of normal paternal gene activity on chromosome 15 and other chromosomal abnormalities. During early childhood, those affected may present problems with feeding and low muscle tone. Short stature, small hands and feet, hypogonadism, compulsive food intake and obesity are also common. In this syndrome, intellectual disability is not key to the diagnosis, since 32% of people with this syndrome have a normal IQ, although in general they usually present speech disorders and cognitive limitations in processing information and in short-term memory. 6- Cri Du Chat Syndrome (5p Syndrome) This syndrome is due to an alteration in chromosome 5. Physically, those affected have a small head, a rounded face, widely spaced eyes, a wide nasal bridge, malformations of the ears and a small jaw. They also have small hands, deformities of the feet and palate, and eyesight problems such as strabismus. At the behavioral level, the distinguishing feature is the baby's cry, which is similar to the meow of a cat and has no communicative utility. There is a significant delay in motor development; those affected usually present severe intellectual disability and a very limited attention span. Causes of intellectual disability The causes of intellectual disability are multiple: from genetic diseases to alterations caused by the environment. Currently, the cause of intellectual disability is considered to be an interaction between four risk factors: biomedical, social, behavioral and educational. They interact throughout a person's life and are passed on between generations. Some examples of risk factors are: Causes before birth (prenatal) - At a biomedical level: chromosomal disorders, disorders associated with a single gene, syndromes, maternal diseases or parental age. - At a social level: maternal malnutrition, domestic violence, lack of access to healthcare or poverty. - At a behavioral level: use of drugs, alcohol or tobacco, and parental immaturity. - At an educational level: cognitive impairment of the parents or lack of preparation for parenthood. Causes during birth (perinatal) - At a biomedical level: prematurity at birth, birth injuries or neonatal disorders. - At a social level: lack of parental care. - At a behavioral level: rejection by the parents or abandonment of the child. - At an educational level: lack of medical care after medical discharge.
Causes after birth (postnatal) - At a biomedical level: traumatic brain injuries, malnutrition, meningoencephalitis, epileptic disorders or degenerative disorders. - At a social level: poor interaction between child and caregiver, lack of adequate stimulation, family poverty or chronic illness in the family. - At a behavioral level: mistreatment and abandonment, domestic violence, inadequate safety measures, social deprivation (isolation) or problematic behaviors of the child. - At an educational level: deficits in upbringing, late diagnosis, inadequate early care services, inadequate special education services or deficient family support. Despite current knowledge, medical advances and efforts to promote detection as early as possible, the fact is that in many cases the specific causes of intellectual disability are unknown. How does an intellectual disability develop? The evolution or development of an intellectual disability varies as much as the people who have this condition. One of the most important areas on which research in this field is focused is early diagnosis, so that professionals can carry out an intervention with the child as soon as possible in all the affected areas. Another important point that I have stressed throughout the article is the importance of providing appropriate therapy and support to the person with intellectual disability. This implies that therapies must be individualized, and an evaluation will have to be carried out beforehand in order to meet the person's needs. Keep in mind that not all people with disabilities are the same; they do not all need the same therapies, nor do they require the same intensity of therapy in all areas and activities of their lives. Hence the current focus is on planning support systems centered on the person with intellectual disability, individually, identifying their goals and their desired life experiences and emphasizing their strengths rather than their limitations. The main goal should be to improve their quality of life. Here you have a video where Loretta Claiborne explains what it's like to live with an intellectual disability. Advice for parents of children with intellectual disabilities Every person, whether or not he or she has an intellectual disability, is different. Each family is also different and deals with situations differently. However, parents play a primary and fundamental role in our lives. Here are some helpful tips: 1. If you see behaviors or reactions that are not expected in your child, or if you have doubts about whether something is happening to your child, ask for professional help as soon as possible. This is a very important point, since it dispels your doubts and also allows the intervention to happen as soon as possible. 2. Once the diagnosis of intellectual disability is confirmed, it is possible that negative feelings such as guilt, anger or sadness will appear. This is perfectly understandable and normal; you will have to give yourself time to process them and then take steps towards acceptance and recognition. You can do this with help from other parents who have gone through the same experience or from a professional. 3. A very important challenge will be to differentiate the things that we can change from those that we cannot. This allows us to spend our resources on things we can change and not waste time on other, irrelevant aspects. 4. Research information about your child's disability and everything you can do throughout his or her development.
Keep informed about recent studies with scientific evidence, about available therapies, and through families that live with the same disabilities.

5. Find out about therapies that might be useful for your child and your family.

6. Value professional opinions as much as nonprofessional ones.

7. Think about your child's best interests based on their possibilities, not their limitations. Think about their needs and what can make them happy and help them progress, offering the appropriate therapies they need. This will allow them to achieve their highest possible level of autonomy and quality of life.

8. Share and enjoy time with your child. Although he will go through the same developmental stages at a slower pace than other children, you can still enjoy every minute of it. Don't underestimate him, but don't always treat him like a child either.

9. And remember: even if we are different, we are all people with the same rights and the same opportunities as everyone else.

Alejandra is a clinical and health psychologist. She is a child specialist with a diploma in evaluation and intervention in autism. She has worked in different schools with young children and in private practice for over 6 years. She is interested in early childhood intervention, emotional intelligence, and attachment styles. As a brain and human behavior enthusiast, she is more than happy to answer your questions and share her experience.
SHOPPING AND MONEY VALUES LESSON IDEA

Here is a lesson idea from one of our teaching members:
- For students to learn the value of money.
- For ESL students to learn the value of the different coins.
- The value of food.
- The concept of "2 for $2.00," etc.
- The use of decimals.

I used the following lesson with my beginner 3rd grade ESL students. It may be modified for other grade levels.

Materials:
- A supermarket flyer for each student
- A sheet of paper
- A marker
- A classroom play register
- Classroom play money or real money

Students must choose at least two items from each food group and prepare a shopping list. They must add up the total of their items. They are each given a certain amount of money to spend; this depends on the math concept being taught. For example, they may not have enough, so: "How much more do you need?" or "What is the change?" etc. Students must calculate their change. Each student can take a turn at the register. The shoppers come to the register with their list and their money, and the "cashier" must give the correct change.

*With beginner ESL students, this lesson may take a week.

Do you have a teaching idea you would like to share, or do you have an idea for a new lesson? Then leave us a comment!
The quantum entanglement of particles, such as photons, is a prerequisite for the new and future technologies of quantum computing, telecommunications, and cyber security. Real-world applications that take advantage of this technology, however, will not be fully realized until devices that produce such quantum states leave the realms of the laboratory and are made both small and energy-efficient enough to be embedded in electronic equipment. In this vein, European scientists have created and installed a tiny "ring-resonator" on a microchip that is claimed to produce copious numbers of entangled photons while using very little power to do so.

Entangled photons have been produced on a silicon chip before, but the number of pairs produced was low, and the amount of energy required to achieve this was prohibitively high – especially on a low-powered device such as a silicon chip. This is where the new micro-ring resonator claims its points of difference. Created in a collaborative effort between scientists at the Università degli Studi di Pavia, Italy, the Universities of Glasgow and Strathclyde, Scotland, and the University of Ontario, Canada, the new micro-ring resonator at the heart of this work takes the form of a loop etched onto a silicon wafer substrate. By precisely engineering the properties of this tiny device, the researchers have made it produce light in the form of entangled photons. And, by keeping its size down to the micron level and achieving exceptional power efficiencies, they have also made it an ideal candidate for use as an on-chip component.

"The main advantage of our new source is that it is at the same time small, bright, and silicon based," said Daniele Bajoni, a researcher at the Università degli Studi di Pavia. "The diameter of the ring resonator is a mere 20 microns, which is about one-tenth of the width of a human hair. Previous sources were hundreds of times larger than the one we developed."

With ordinary entangled-photon emitters generally being produced with specialty crystals and unable to be miniaturized to much less than a couple of millimeters or so, the researchers looked at alternatives in their early research. Happening upon an existing optoelectronic component, the micro-ring, the scientists soon realized that these devices – already etched onto silicon chips – could be modified to produce co-mingled photons for quantum entanglement. And with its low power requirements, inbuilt resonator, and ability to produce photons from a relatively low-powered laser beam, the micro-ring resonator provided the ideal environment for light-particle experimentation.

A micro-ring (or optical ring) resonator is a device that essentially uses the same principles as those found in whispering galleries, except that instead of sound, it uses light. When light of a wavelength resonant ("in tune") with the loop is input from a laser via a waveguide, its intensity increases as it completes multiple circuits around the device until it is finally emitted as a very bright beam of photons at the output.

After some experimental tinkering and tailoring, the scientists were very pleased to find that the micro-ring device they had decided to use was an admirable choice. When the device was "pumped" with a laser, a high number of the subsequent photons streaming from the resonator showed all the hallmarks of quantum entanglement. "Our device is capable of emitting light with striking quantum mechanical properties never observed in an integrated source," said Bajoni.
"The rate at which the entangled photons are generated is unprecedented for a silicon integrated source, and comparable with that available from bulk crystals that must be pumped by very strong lasers." Using an already established technology to produce their device, the scientists are confident that the application of their modifications may well soon see the production of silicon chips with inbuilt micro-resonators embedded in modern electronic equipment. "In the last few years, silicon integrated devices have been developed to filter and route light, mainly for telecommunication applications," observed Bajoni. "Our micro-ring resonators can be readily used alongside these devices, moving us toward the ability to fully harness entanglement on a chip." The research was recently published in the journal Optics Infobase. Want a cleaner, faster loading and ad free reading experience? Try New Atlas Plus. Learn more
Point Lookout was the third in a series of alarming names, which James Cook applied to the land in 1770, after Point Danger and Mount Warning. Based on these three names, it could be surmised that Cook was feeling the need for a pilot. On the other hand, these names may be the result of his preference to be without one. After all, isn't it logical that explorers come first, and only in their footsteps can pilots implement their findings? The presence of knowledgeable people, who were living all along the coastline, poses a challenge to the notion of 'exploration'. A pilot station, specifically designed to support colonial ships in Quandamooka, had been set up at Amity Point by 1825. A maritime pilot is someone who understands local coastal conditions and who works with the commander of a vessel to correctly and safely negotiate a specific interface between land and sea. As he traveled north, Cook passed many people, from many different nations, who had a deep knowledge of the interfaces between the sea and the land which he saw and mapped. Without their help, James Cook effectively negotiated some offshore physical challenges. He eventually crashed the Endeavour, his re-fitted coal carrier, into the Great Barrier Reef. The Endeavour's records of the language of the coastal people were developed in the context of that crisis. Cook had traveled a long way up the coastline before demonstrating that there was an accessible vocabulary which might assist communication and relationship. Physical hazards were not the only challenges facing Cook, Banks and the Endeavour. Negotiating the interface between land and sea involves negotiating relationships as well as negotiating the local legal conditions. One key relationship requiring negotiation is the relationship between the pilot and the ship's commanding officer. Very often, these two characters are answerable to different authorities, with different rules. They are likely to have conflicting priorities, conflicting objectives, conflicting needs and conflicting problem-solving strategies. Conflict between pilot and ship's captain is a real possibility. There are two elements in Cook's journey north from the land of the Tharawal People, and past Quandamooka, that suggest he was prepared, in some way, to attempt to negotiate the legal and relational challenges at the interface between land and sea. Firstly, he had employed a Polynesian navigator and mediator. This person's name was Tupaia. Secondly, he had some 'hints' from James Douglas, the 14th Earl of Morton. These 'hints' urged Cook to respect that the people he encountered in his travels are: 'the natural, and in the strictest sense of the word, the legal possessors of the several Regions they inhabit.' He also suggested that Cook "Form a vocabulary of the names given by the Natives to the several things and places which come under the inspection of the Gentlemen". It is ironic that James Cook re-named the Quandamooka region after the man who had urged him to document local place names. Even with Tupaia on board and the Earl of Morton's 'hints' in hand, Cook was not well equipped to negotiate the interface between land and sea with the success expected of a pilot. Nor was Matthew Flinders when he came to Quandamooka. The first time Flinders came to Quandamooka, he brought with him a Darkinyung man, remembered by the name of Bungaree.
Though he was a man who lived at the interface of land and sea in the southern Pacific Ocean, Bungaree could not speak the Jandai language and was a stranger in Quandamooka Country. Many distinct Aboriginal Nations lived between Bungaree's Darkinyung People and the people in Quandamooka. It is likely that Bungaree knew more about the protocols of being a stranger than Flinders did. However, he did not have the legal or social authority, nor the language or local knowledge, to play the role of pilot that Flinders would have desired.

Three place names that appeared on early colonial charts in Quandamooka are direct references to the physical, legal and relational issues that are the domain of a maritime pilot.
- Point Lookout identifies the physical hazards of rocks and shoals.
- Cape Morton is an indication that Cook had not forgotten the man who advised him who the legal possessors were, adding "should they in a hostile manner oppose a landing and kill some men in the attempt, even this would hardly justify firing among them, 'till every other gentle method had been tried."
- Point Skirmish is a reminder of the failure to establish a relationship between 'the landed' and 'the landing', which led to Flinders firing among the Quandamooka People.

Street names at Point Lookout are further affirmations of the virtues of the maritime pilot.
- Hopewell St recollects Flinders being guided to drinking water by the local people (as the physical negotiation).
- Mooloomba St is a recognition that the Quandamooka People had a relationship with the land before Cook called it Pt Lookout (as the legal element).
- Mintee St and Baramba St (also known as 'Banksia') are assertions that the Quandamooka People were, and are, equipped with a language of interest and relevance to English-speaking people who meet them at their interface between land and sea. Language enables relationships.

Australians continue to be tested by physical, legal and relational challenges at the interface between land and sea. Our English language includes place names which remind us of how some specific challenges have been handled in the past. These names may help us to see what we have learned over time, and if and how we have changed.
Fats perform many critical functions in the body:
- They provide 60-80% of the body's energy needs at rest.
- Fats are an abundant energy reserve.
- They protect and insulate vital organs.
- Fats provide proper cell structure, especially in nerve and brain tissue.
- Fats help the body produce vitamin D.
- Fats form steroid hormones.
- They carry vitamins A, D, E, and K through the bloodstream.
- Fats enhance the flavor of meals and add satiety.

Foods that contain fats include oils, grains, meat, dairy, beans, certain vegetables (like avocados and olives), nuts, and seeds. Generally speaking, fruits and vegetables contain minimal to no fat. Some of the foods with the highest cholesterol content are whole milk, cheddar cheese, beef, chicken, turkey, pork, butter and whole eggs (with eggs containing the highest amount of cholesterol).

The guidelines for fat consumption vary, but recommendations range from 20-35% of daily caloric intake. If you consume 2,000 calories per day, fats should supply 400-700 of those calories; since fat provides about 9 calories per gram, that works out to roughly 44-78 grams of fat. It is thought best to aim at the lower end of this spectrum and to choose non-animal-based sources for optimal heart health. Be good to yourself! More next time...
Some tips for home schooling kindergartners are to be sure to understand local and state laws, to research different curricular and educational approaches before determining the best fit for the child, and to connect with local resources for support. Talking to other parents and connecting with a local home-schooling community is a good way to learn about options and different approaches, according to PBS Parents.

During kindergarten, students begin to learn to read and write while showing different levels of readiness at different times. To support this process, Successful Homeschooling recommends that home-school lessons should be simple and move at a pace that is comfortable for the child. Nursery rhymes, story-telling and talking to children in complete sentences can all help kindergartners learn about words and sounds as they begin reading on their own. It's a good idea to start with simple words, having children write out numbers, letters and names. While engaged in play, parents can practice counting numbers up to 20 and introduce the basics of addition and subtraction.

Organized play dates, group activities and classes through children's programming at a local library can all serve as opportunities for socialization and interaction. These activities augment home lessons and provide home-schooled kindergartners the chance to develop friendships and practice communication skills.
The National Wetlands Research Center's Wetlands Ecology Branch focuses on restoration of coastal marshes and prairies, the ecological processes that drive loss and restoration of wetlands, the effects of large-scale storms such as hurricanes, and the effects of global change, particularly sea-level rise. Research ecologists at NWRC study the causes of loss of threatened coastal ecosystems and investigate how to stabilize, restore, and manage the coastal landscape. Additionally, they provide international technical assistance and collaborate with scientists in other countries such as India, Mexico, and England. In 2005, they released a report entitled Science and the Storms: the USGS Response to the Hurricanes of 2005. Go to the NWRC hurricane research page for an in-depth look at their hurricane research efforts.
What is a processed food?

What is actually considered a processed food? Are all processed foods unhealthy?

"Avoiding processed foods" seems to be the blanket response when someone asks how to eat a healthier diet. But what actually is a processed food? Are all processed foods unhealthy? Some of the answers may surprise you.

According to the United States Department of Agriculture (USDA), processed food is defined as any raw agricultural commodity that has been subject to washing, cleaning, milling, cutting, chopping, heating, pasteurizing, blanching, cooking, canning, freezing, drying, dehydrating, mixing, packaging, or other procedures that alter the food from its natural state. This may include the addition of other ingredients, such as preservatives, flavors, nutrients and other food additives or substances approved for use in food products, such as salt, sugars and fats. Minimally processed food retains most of its inherent physical, chemical, sensory and nutritional properties, and many minimally processed foods are as nutritious as the food in its unprocessed form.

According to the Academy of Nutrition and Dietetics, processed food falls on a spectrum from minimally processed to heavily processed:
- Foods such as sliced fruits and vegetables, bagged salads and leafy greens, and roasted nuts are all examples of minimally processed foods.
- The next category of processed foods includes canned foods such as beans, tuna, fruits and vegetables, as well as frozen fruits and vegetables.
- Jarred pasta sauces, yogurt and salad dressings have added oils, sweeteners and preservatives, which makes them processed foods as well.
- Examples of heavily processed foods include crackers, deli meat and granola.
- The most heavily processed examples include pre-packaged and/or frozen meals.

The main goal of processing food is to make it more convenient for the consumer, and there are practical reasons why certain foods are processed the way they are. Minimally processed foods such as sliced fruits and vegetables make it more convenient for the consumer to eat fruits and vegetables without needing to wash, dry and cut the produce themselves. Canning and freezing help to preserve perishable foods at their peak to be consumed at a later date. Added oils, sweeteners and preservatives add texture and flavor to foods. The most heavily processed foods, such as pre-made and frozen dinners, require little to no prep and are ready to eat after heating.

Michigan State University Extension stresses that not all processed foods should be deemed evil: there is a time and place for each in a well-balanced diet. People are busy, and picking up pre-cut and washed fruits and vegetables is a better and less processed option than fruit snacks or vegetable chips. Using canned foods that could otherwise take a long time to prep, such as beans and tuna, helps people in a time crunch consume these nutritious foods. Keep the "spectrum" in mind, so you choose minimally processed foods more often than heavily processed ones. When choosing processed foods, it is important to read the food label, including the ingredient list. This will help you know exactly what is in that food, so you can make an informed decision. Choose the option with the least added sugars, the lowest sodium and no trans fat.
In volcanology, magma is melted rock that is under the ground. It is like lava, which is melted rock above the ground. There are many types of magma. One is called felsic magma. Felsic magma is thick and has lots of a mineral called silica. It mostly makes light-coloured rocks. Another type is called mafic magma, which is runny and has less silica. It usually makes dark-coloured rocks. A third type is intermediate magma. It is like both the other types. When magma becomes solid it's usually by cooling slowly, far below the surface. This makes "plutonic" rocks such as granite. Sometimes magma comes out from a volcano as lava, cools more quickly, and forms other kinds of rock.
This activity presents a picture of a group of individuals with different appearances. It is meant to help you reflect on what you assume about people based on their outward characteristics. Suppose you met the following people and were asked to guess which one of them is a physicist. Who would you pick?
- A middle-aged man of South Asian descent, with a neat haircut, wearing a dress shirt and tie, and dress pants.
- An Asian woman with long hair, wearing a sleeveless blouse, a knee-length jean skirt and open-toe shoes.
- A Caucasian man in his 60s with a light beard, wearing glasses, a long-sleeved collared shirt, a belt, and khaki pants.

In the case of this particular group of people, the middle-aged man of South Asian descent is a police officer, and the Caucasian man is a graphic designer. The Asian woman is the physicist. But most people are unlikely to make that guess. The point of this exercise is not whether you guessed correctly. Rather, think about what was going through your mind as you were trying to guess the answer. How were you trying to categorize a physicist?

Take a moment to reflect on times when people have made assumptions about your abilities and qualifications, both positive and negative, based on a group that you belong to (gender, age, ethnicity). How did that make you feel? Similarly, what assumptions do you make about others based on these criteria? What might be the unintended consequences of these assumptions?

In education, it is critical that we become aware of, and manage, the way we make assumptions about people based on stereotypical beliefs about gender, age, cultural background and other characteristics. Over time, these beliefs shape our micromessages, and these in turn can have a dramatic impact on the students we work with.
The Shape Tools

Geometric shapes, despite their simple look, are useful for many drawing techniques (and not only vector drawing). With path operations (Union, Difference, etc.), you can quickly get awesome results from some elementary shapes. You can improve on that even further with the path tools. Both path operations and path tools are detailed in later sections. Let's draw some geometric shapes.

All the shape tools follow similar rules to let you create and edit your shapes, but each tool has specific options: for example, a circle can become a pie wedge, or a rectangle can have its corners rounded.

To create a geometric shape:
- enable the relevant tool in the toolbar by clicking on it;
- on the canvas, press the mouse button and hold while you drag the mouse;
- release the mouse button to display the shape.

Once the mouse is released and the shape is displayed, various handles will become visible. Many of Inkscape's tools use handles for one purpose or another, but it's the handles of the geometric shapes which are used for creating many fancy and exciting effects. The handles may be tiny circles, squares and/or diamonds. Sometimes, the same handles are available in different tools. We will learn more about each handle's function in the following chapters.

Many features of Inkscape are accessible through keyboard shortcuts, and some are available only that way. While drawing your shape:
- press Ctrl while you drag the mouse, with the Rectangle and Ellipse tools, to create squares and circles;
- press Shift while you drag to draw from the center, rather than from one corner.

Try drawing some shapes, with and without those keys, to get an idea of how they can be used.
Dam Good! Beavers May Restore Imperiled Streams, Fish Populations

The picture above shows what can happen when stream beds erode and disconnect from their old floodplains: former wetlands are now fallow meadows. As discussed in the article below, beavers have trouble building dams in these eroded streams where trees have disappeared. Where heavy material is lacking, and contained flood runoff has grown violent because of stream bed erosion, dams made from cane, grass and brush will not hold in floods. One quick way to help beavers come back is to build artificial beaver dams. These rely on vertical posts installed as storm-proof frameworks. Beavers readily take to these, and maintain and expand them. As discussed below, these artificial beaver dams greatly accelerate beaver reintroduction, stream and fishery regeneration, and the restoration of meadows and wetlands along streams. There are thousands of miles of actively eroding gullies throughout far-West Texas. Even without beavers, the erosion control practices described below would work with many of these.

Utah State University scientists report that a watershed-scale experiment in highly degraded streams within Oregon's John Day Basin demonstrates that building beaver dam analogs allows beavers to increase their dam-building activities, which benefits a threatened population of steelhead trout.

"Whether or not beaver dams are beneficial to trout and salmon has been hotly debated," says ecologist Nick Bouwes, owner of Utah-based Eco Logical Research, Inc. and adjunct assistant professor in USU's Department of Watershed Sciences. Billions of dollars are spent on varied river restoration efforts each year in the United States, he says, but little evidence is available to support the efficacy of beaver dams. "This may be due to the small scale of the limited research aimed at investigating restoration effects," Bouwes says. "So, we conducted a large-scale experiment, where the effects of restoration on a watershed were compared to another watershed that received no restoration."

Bouwes is lead author of a paper published July 4, 2016, in the journal Nature's online, open-access Scientific Reports that details the seven-year experiment conducted in streams within north central Oregon's Bridge Creek Watershed. Contributing authors are Bouwes' USU colleagues Carl Saunders and Joe Wheaton, along with Nicholas Weber of Eco Logical Research, Chris Jordan and Michael Pollock of NOAA's Northwest Fisheries Science Center in Seattle, Ian Tattam of the Oregon Department of Fish and Wildlife and Carol Volk of Washington's South Fork Research, Inc.

When Lewis and Clark made their way through the Pacific Northwest in the early 19th century, the area's streams teemed with steelhead and beaver. But subsequent human activities, including harvesting beaver to near extirpation, led to widespread degradation of fish habitat. Bouwes says these activities may have also exacerbated stream channel incision, meaning a rapid down-cutting of stream beds, which disconnects a channel from its floodplain and near-stream vegetation from the water table. He notes beavers build dams in the incised trenches, but because of the lack of large, woody material, their dams typically fail within a year. "It's a ubiquitous environmental problem in the Columbia River Basin and throughout the world," Bouwes says.
“It sets a chain of ecological effects in motion that result in habitat destruction, including declines in fish populations and other aquatic organisms.” To conduct the experiment, the researchers built beaver dam analogs, known as “BDAs,” by pounding wooden posts into the stream bed, and weaving willow branches between the posts, throughout the 32-kilometer study area. “Our goal was to encourage beaver to build on stable structures that would increase dam life spans, capture sediment, raise the stream and reconnect the stream to its floodplain,” Bouwes says. “We expected this would result in both an increase in near-stream vegetation and better fish habitat.” Beavers quickly occupied the BDAs, resulting in an increase in natural dam construction and longevity in Bridge Creek. “What really impressed us was how quickly the stream bed built up behind the dams and how water was spilling onto the floodplain,” Bouwes says. The researchers also documented increases in fish habitat quantity and quality in their study watershed relative to the watershed that received no BDAs and saw little increase in beaver activity. The changes in habitat in the watershed receiving BDAs resulted in a significant uptick in juvenile steelhead numbers, survival and production. “This is, perhaps, the only study to demonstrate beaver-mediated restoration may be a viable and efficient strategy to rehabilitate incised streams and to increase imperiled fish populations,” Bouwes says. “With so many streams that need help, we need to look towards more cost-effective and proven means to restore streams, and beavers may be able to do a lot of the heavy lifting for us.”
Kin selection and indirect fitness benefits

Kin selection is a first evolutionary mechanism explaining that cooperative behaviours can be adaptive (Hamilton, 1964). Indeed, the relevant unit for the propagation of a trait is the gene in all of its copies, not the individual by itself (see section 1.1.1). A gene can increase its propagation by improving the fitness of the individual carrying it, but also by increasing the fitness of other individuals carrying the same gene. Thus, a gene that helps its carrier's siblings or offspring increases the fitness of related individuals. It is likely that these relatives also carry a copy of this gene, since they are from the same parents. Therefore, this gene indirectly increases its fitness. Thus, a mutant gene that helps an individual's relatives is favoured by natural selection over a resident gene that does not help them.
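In textbook form, this logic is summarised by Hamilton's rule (a standard result, stated here for convenience rather than quoted from this chapter): an allele for helping spreads when

$$ r\,b > c, $$

where $c$ is the fitness cost paid by the actor, $b$ is the fitness benefit received by the recipient, and $r$ is the genetic relatedness between them (for instance, $r = 1/2$ between full siblings).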
However, kin selection can only explain the existence of a subset of the cooperative behaviours observed in the living world: those expressed towards genetically related individuals. How could cooperative behaviours between unrelated individuals, as observed in humans or vampire bats, for example, or cooperative behaviours between individuals of different species, as observed in symbioses, have been favoured by natural selection?

For cooperation between genetically unrelated individuals to be evolutionarily stable, the actor must itself receive a benefit from its cooperative behaviour. This benefit can be obtained if others respond to its cooperative behaviour later by cooperating back (Trivers, 1971). This mechanism is called conditional cooperation or reciprocity. There are two main families of conditional cooperation mechanisms: Partner Control (also called Partner Fidelity Feedback) and Partner Choice (Noë, 2006; Sachs et al., 2004). In Partner Control, the recipient of an interaction adjusts its behaviour towards the same actor and continues to interact with it. Reciprocity can be either positive, i.e. the recipient cooperates with the actor in response to the cooperation, or negative, i.e. the recipient punishes the actor if the actor does not cooperate. In both cases, it is in the actor's interest to cooperate, since this maximizes its gain from the recipient's response.

Partner Control behaviours can be implemented very easily and are particularly robust, as shown in Axelrod and Hamilton (1981). Thus, in a cooperative situation which can be modelled as a Prisoner's Dilemma (detailed in section 1.2), with repeated interactions, the tit-for-tat behaviour is a robust and straightforward reciprocity strategy. Tit-for-tat is an optimistic imitation strategy: it consists of always starting any interaction by cooperating and then imitating one's partner during the following time steps. Thus, two individuals playing this strategy will cooperate at every time step of the interaction. On the other hand, when an individual playing tit-for-tat interacts with a cheater, it is exploited only once, and then it stops cooperating. Tit-for-tat behaviours have been observed in pied flycatchers (Krams et al., 2008), which only come to defend partners who have defended them before and refuse to help those who did not come to help them when they needed it. They have also been seen in wild vervet monkeys for grooming (Fruteau et al., 2009).
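To make the strategy concrete, here is a minimal sketch of tit-for-tat in a repeated Prisoner's Dilemma. The payoff values are the conventional illustrative ones, not parameters taken from this chapter:

# Iterated Prisoner's Dilemma with tit-for-tat; payoffs are illustrative.
PAYOFFS = {  # (my move, partner's move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(partner_history):
    # Cooperate on the first move, then copy the partner's last move.
    return "C" if not partner_history else partner_history[-1]

def always_defect(partner_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    seen_by_a, seen_by_b = [], []  # each player's record of the other's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then retaliates

Two tit-for-tat players cooperate throughout, while against a cheater the strategy loses only the first round, matching the description above.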
Partner Choice and Biological Markets

In partner choice, on the other hand, individuals do not only adjust their cooperation with a given partner according to that partner's past actions. They choose their partners according to their past actions. Since all individuals in the population seek to be with the best possible partner and not all can meet their demand, there is a biological market of partners (Noë & Hammerstein, 1994). We observe partner choice processes in a wide variety of living systems, from cleaner fish (Bshary & Grutter, 2002) to the legume-rhizobia mutualism (Simms & Lee Taylor, 2002), which we will develop further. In the human species, partner choice has likely played a prominent role in the evolution of cooperative behaviour (Barclay, 2013; Barclay & Willer, 2007; Debove, André, et al., 2015).

Thus, partner choice allows the appearance and maintenance of cooperation. To understand this, let us no longer focus on the actor of cooperation, but on the recipient. In a collective task, it is always relevant for the recipient to interact with the best possible actor, i.e. the actor who will enable it to obtain the biggest gain. Since the actors need the recipients in order to perform this collective task, it is in the interest of the actors to be as cooperative as possible in order to be picked. This pressure is all the stronger when the number of actors is particularly large relative to the number of recipients. The actors and recipients are in a supply-and-demand setup, which can be studied in the form of a market (Noë & Hammerstein, 1994).

Let us consider a population of individuals who are looking to interact with the best possible partner. In this population, a mutant appears who cooperates more than the others. The other individuals will particularly desire to interact with this mutant. The mutant will therefore be involved in a lot of interactions and obtain many gains. As a result, the mutant can be picky: it will be able to refuse interactions with the least efficient individuals in order to choose the most efficient partners. There is therefore an "assortative matching", that is to say that individuals are matched according to their performance. The best-performing individuals can afford to be picky and end up being paired with other well-performing individuals; vice versa, the worst-performing individuals pair up together (Geoffroy et al., 2018). Therefore, the best-performing individuals who interact together receive benefits from their high level of cooperation. That is, assortative matching generates a selective pressure in favour of cooperation.

For example, Bshary and Grutter (2002) show that cleaner fish and their clients cooperate in a market structure. Cleaner fish are small fish that eat the parasites present on "client" fish. Cleaner fish have "cleaning stations": they always stay in the same area, and when clients want to be cleaned, they go to these stations. The clients can select which station they go to. Depending on the supply of cleaners and the demand of clients, the market can reach different balances in favour of the clients or the cleaners. If there are fewer stations than necessary to meet the total demand of the clients, then the clients are in an unfavourable situation: it is difficult for them to access a station. The cleaners take advantage of this situation: they allow themselves not only to eat the parasites present on the clients but also to eat their mucus tissues, which are very nutritious for the cleaners. The cleaners are cheating. Since the clients have no other options, they can only comply. On the contrary, when there are more stations than necessary to accommodate all the clients, there is more supply than demand, and the clients are in an advantageous situation. The cleaner fish do not eat the mucous tissues of the clients, because if the clients are not satisfied with the service provided, they can go to another station next time.

The mechanism is similar in the legume-rhizobia mutualism (Simms & Lee Taylor, 2002). Legumes need nitrogen that they cannot capture from the air. Many bacteria in the soil, the rhizobia, release nitrogen compounds that the plant can capture. The rhizobia also need help from the legume, because they consume carbon compounds that the plant produces. Thus, legumes create nodules in their roots that host the bacteria and supply them with carbon. Inefficient nodules, where the bacteria produce little nitrogen, are destroyed and deprived of carbon, while efficient nodules are maintained and supplied with carbon. There is a market effect, and partner choice develops: the plant hosts and provides resources only to the bacteria that offer nitrogen in exchange.

Note that partner choice can be implemented in many different ways, varying in complexity and efficiency. For example, partner choice can be achieved through direct information: the individual looking for a partner, the chooser, uses its knowledge of the different partners (Aktipis, 2004, 2011; Debove, Baumard, et al., 2015; McNamara et al., 2008). If the chooser uses only the information from the current partner, this partner choice is called partner switching. It can be worded as a simple rule: if the current partner cooperates, the chooser stays with it; otherwise it switches to a new partner at random (Aktipis, 2011; Bshary & Grutter, 2005). The chooser can also use a memory of all past interactions with its partners to directly pick the best partner available. Partner choice can also be made through indirect information: the chooser picks a partner based on the partners' past interactions with other individuals. This knowledge can come from direct observation or from information reported by other individuals.

Why isn't cooperation everywhere?

Although we wondered at the beginning of this chapter how cooperation could evolve, after studying the different mechanisms that could support it, it is now the opposite question that emerges. Why is reciprocal cooperation relatively rare in nature? Indeed, all examples of reciprocity in animals are contested (reviewed in part in Carter, 2014), and yet partner choice is an incredibly powerful mechanism in humans (Barclay, 2013; Barclay & Willer, 2007; Debove, André, et al., 2015). What factors might prevent the emergence of reciprocity?

First of all, a substantial problem is the bootstrapping issue (André, 2014). While it is easy to understand how reciprocity mechanisms can maintain cooperative behaviours, it is more complicated to explain how such a mechanism can appear by itself. Indeed, reciprocity, be it in the partner control or in the partner choice version, requires two mutually dependent traits, both unstable by themselves: it requires that (i) the actor can cooperate and that (ii) the recipient can recognize and respond to an act of cooperation. Without the simultaneous presence of these two traits, reciprocity cannot take place, and cooperation is not evolutionarily stable.
Indeed, the ability to distinguish a cooperative partner from a cheating partner only makes sense if there are both cooperative and non-cooperative individuals in the recipient's vicinity. If the individual's neighbourhood consists only of cheaters (or only of cooperators), then there is no benefit in maintaining a complex system of cooperator recognition. Similarly, cooperative behaviours have no reason to be maintained by natural selection if there is no individual able to recognize and respond conditionally to them. Indeed, cooperation is evolutionarily stable only if paying a short-term cost makes it possible to change the recipient's future behaviour. If the selected recipient does not have the competence to distinguish a cooperator from a cheater, its behaviour cannot change. Therefore, there cannot be any interest in cooperating.

These two traits, which are both complex and different, can only be favoured together. However, it is extremely improbable that these two traits will appear at the same time in a population. One solution to overcome this gap is that either of these behaviours already existed, at least partially, in the population for other reasons. For example, one hypothesis to allow the emergence of cooperation between unrelated individuals is that the cooperation implemented by kin selection can sometimes be applied between non-kin by misfiring. Another hypothesis is based on the role of byproduct cooperation as a triggering factor (André, 2015).

Beyond this bootstrapping problem, however, other constraints influence the evolution of cooperation by partner choice. Partner choice requires the presence of numerous and accessible outside options so that comparing different partners is viable (Chade et al., 2017; Debove, Baumard, et al., 2015; Raihani & Bshary, 2011). If finding a better partner is too costly for an individual compared to the gain obtained with their current partner, it is not advantageous to be choosy. We will develop this point further in Chapter 2 and show that it could play an important role in the phylogenetic distribution of cooperation. In Chapter 3, we explore the possibility of the emergence of cooperative behaviours by partner choice in pseudo-realistic environments, studying the impact of these emergence issues.

The biological market in a spatialised environment

Partner choice models are mainly done in aspatial environments. In these environments, there is no notion of distance or proximity between individuals. Individuals are either randomly paired and separated to join a "pool" of single individuals, or they all interact with each other under diverse resource-distribution systems. In a spatial environment, the search for a partner requires one to move in order to reach other individuals. Although models of partner choice in aspatial environments show that straightforward behavioural rules are sufficient to implement partner choice, it is tempting to think that in spatial environments, behavioural rules need to be much more complicated. Aktipis (2011) shows that even in a spatial environment, it is possible to obtain cooperative behaviours through partner choice with elementary cognitive mechanisms. The behavioural rule that they call Walk Away allows the emergence of partner choice and could develop in many setups.

Aktipis (2011) proposes a model of partner choice in a spatial world consisting of a grid of cells, where individuals use their travel ability to choose the best group of partners. The model is similar to that of McNamara et al. (2008), but individuals are no longer paired randomly; they are paired based on their proximity. All individuals on a same cell play together. Once individuals have interacted, each individual has the option of staying or leaving (walking away) depending on the proportion of cooperators present in their cell compared to their satisfaction threshold. This satisfaction threshold is fixed for the whole population during the whole simulation, but the proportion of cooperators and cheaters varies according to an evolutionary algorithm. The model shows that when the satisfaction threshold value is high, the population stabilizes towards a predominance of cooperators. If the threshold value is low, then cooperators are exploited, and cheaters invade the population. Aktipis (2011) thus presents an exceedingly simple behavioural rule in spatial environments that allows partner choice in a population, leading to the evolution of cooperation. The fact that individuals navigate in a complex environment does not necessarily imply that the cognitive mechanisms necessary for partner choice are very elaborate.
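The simplicity of the rule can be made explicit with a small sketch. All numbers here (world size, threshold, population) are illustrative assumptions, and the evolutionary layer of Aktipis (2011) is deliberately omitted:

import random

N_CELLS = 20      # toy world: agents on the same cell play together
THRESHOLD = 0.6   # satisfaction threshold, fixed for the whole population

class Agent:
    def __init__(self, cooperates):
        self.cooperates = cooperates
        self.cell = random.randrange(N_CELLS)

def step(agents):
    # Group agents by cell...
    cells = {}
    for a in agents:
        cells.setdefault(a.cell, []).append(a)
    # ...then apply the Walk Away rule: stay if satisfied, leave otherwise.
    for group in cells.values():
        frac_coop = sum(a.cooperates for a in group) / len(group)
        if frac_coop < THRESHOLD:
            for a in group:
                a.cell = random.randrange(N_CELLS)  # walk away at random

agents = [Agent(cooperates=random.random() < 0.5) for _ in range(100)]
for _ in range(50):
    step(agents)
# Cells at or above the threshold are stable, so cooperator-rich groups
# persist, while cheater-heavy groups keep dissolving and moving on.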
Evolutionary Robotics as an Individual-based modelling method for the evolution of cooperation

As seen previously, the results presented before have been obtained with rather abstract models, especially when it comes to capturing the mechanistic constraints of the "real" world. Most models do not capture how individuals actually move around, meet potential partners and find resources. Even when they do consider a spatial environment, as in Aktipis (2011), they do so in a much simplified form, such as grid-based 2D environments and fixed behavioural strategies.

In this thesis, we are interested in how partner choice benefits the evolution of cooperation under more realistic constraints. In particular, the probability of possible interactions (whether successful or not) depends on resource availability, population density and exploration strategies. Therefore, we design individual-based models that capture the complex interactions at work in a pseudo-realistic 2-dimensional environment, where individuals learn to explore and interact using realistic sensory inputs and actuators (i.e. continuous states and actions). This also entails considering a more complex decision-making apparatus, with two possible outcomes: either results obtained with more simplistic models no longer hold, or they do hold but must be further specified (e.g. by taking into account the fact that individuals and resources are two different things).

In this section, we present evolutionary robotics, which is the application of evolutionary computation to robotics. We present the various ways it can be used to enable swarm or collective robots to learn how to solve a task. We then present how the very same method can be, and has already been, used to tackle open questions in evolutionary biology, including the evolution of cooperation. When it comes to modelling for evolutionary biology, evolutionary robotics thus presents a ready-to-use method for individual-based modelling, where mechanistic constraints can be modelled as robotic agents moving in a pseudo-realistic 2-dimensional environment. This makes it possible to study how physical constraints imposed by the environment and the robotic agents may shape the evolutionary dynamics of learning to cooperate.
And in our particular case, how partner choice may affect the evolution of cooperation in more complex setups.

Table of contents:
1 Evolution of Cooperation and Partner Choice
1.1 The Evolution of Cooperation
1.1.1 Evolutionary approaches to behaviour
1.1.2 The Problem of Cooperation
1.1.3 Kin selection and indirect fitness benefits
1.1.5 Partner Choice and Biological Markets
1.1.6 Why isn't cooperation everywhere?
1.2 Models for the Evolution of Cooperation
1.3 Models of Partner Choice
1.3.1 Population Diversity
1.3.2 The biological market in a spatialised environment
1.3.3 Competitive Helping
1.3.4 Partner choice with memory
1.3.5 Seeking Time and Interaction Time
1.3.6 Discussion on partner choice modelling
1.4 Adaptative Swarm Robotics
1.4.1 Evolutionary Robotics and Collective Systems
1.4.2 Evolutionary robotics as a Method to Understand Cooperation in Nature
1.5 Thesis objective
2 Nothing better to do? Environment quality and the evolution of cooperation by partner choice
2.2.1 The decision-making mechanisms
2.2.2 Phenotypic variability of cooperation
2.2.3 The payoff function
2.2.4 The evolutionary algorithm
2.3.1 Cooperation cannot evolve when patches are scarce
2.3.2 Cooperation cannot evolve when there are too many partners around
2.3.3 Analysis of the behaviour of "patch ranking" networks
2.5 Supplementary Materials
3 Learning to Cooperate in a Socially Optimal Way in Swarm Robotics
3.2.2 Payoff function
3.2.3 Partner Choice
3.2.4 Robotic Behaviors
3.2.5 Controller and Representation
3.3.1 Experimental setup
3.3.2 Learning Cooperation and Population Size
3.3.3 Learning Cooperation and Interaction Length
3.3.4 Effect of Mutation Strength (Control)
3.3.5 Population Size vs Generations (Control)
3.3.6 Wandering and Relocation (Control)
3.5 Supplementary Materials
4 Policy Search when Significant Events are Rare: Choosing the Right Partner to Cooperate with
4.2.1 Learning with Rare Significant Events
4.2.2 Partner Choice and Payoff Function
4.2.3 Behavioural Strategies
4.3 Parameter Settings and Algorithms
4.3.1 Proximal Policy Optimization
4.3.2 Covariance Matrix Adaptation Evolution Strategy
4.4.1 Learning with always significant events
4.4.2 Learning with rare significant events
4.4.3 Analysing best policies for partner choice
4.5 Concluding Remarks
4.6 Supplementary Materials
4.6.1 Detail analysis of the agents' reward
4.6.2 Re-evaluation performance statistical score
5.2 Discussion and Perspectives
How big is a neutron star?

Neutron stars are the remains of massive stars after they go supernova; while the outer layers of the star explode outward, creating fireworks on a literally cosmic scale, the core of the star collapses, becoming incredibly compressed. If the core has enough mass it'll become a black hole, but if it's shy of that limit it'll become an ultra-dense ball made up mostly of neutrons.

The stats for neutron stars are sobering. They can have masses of more than twice the Sun's, but the density of an atomic nucleus: over 100 trillion grams per cubic centimeter. That's hard to grasp, but think of it this way: if you compressed every single car in the United States into neutron-star-stuff, you'd get a cube 1 centimeter on a side. The size of a sugar cube, or a six-sided die. All of humanity compressed into such a state would be less than twice that width.
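That car-crushing claim is easy to sanity-check with rough numbers; every input below is a ballpark assumption of mine, not a figure from the article:

# Back-of-the-envelope check of the "all US cars in a sugar cube" claim.
cars = 290e6             # very rough count of US vehicles
car_mass_g = 1.5e6       # ~1,500 kg per car, in grams
density = 1e14           # g/cm^3, the "100 trillion" quoted above

volume_cm3 = cars * car_mass_g / density   # ~4.4 cm^3
side_cm = volume_cm3 ** (1 / 3)            # ~1.6 cm on a side
print(f"cube of ~{side_cm:.1f} cm per side")

That lands within a factor of two of the 1-centimeter figure, which is as good as round numbers get; taking a density a few times higher (the article says "over" 100 trillion) brings it right down to sugar-cube size.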
Neutron stars have a surface gravity hundreds of billions of times Earth's, and magnetic fields even stronger. A neutron star half the galaxy away from us had a seismic event on it that physically affected us here on Earth, 50,000 light years distant. Everything about neutron stars is terrifying.

But for all that, we're still not exactly sure how big they are. I mean, we have a rough idea, but the exact number is difficult to determine. They're too small to see directly, so we have to infer their size from other observations, and those are plagued with uncertainties. Their size also depends on their mass. But using observations of X-rays and other emission from neutron stars, astronomers have found they have a diameter of 20–30 kilometers. That's tiny, for such a huge mass! But it's also an irritatingly large range. Can we do better?

Yes! A group of scientists have approached the problem in a different way, and have been able to narrow down the size of these fierce but wee beasts: they found that a neutron star with a mass of 1.4 times the Sun (about average for such things) will have a diameter of 22.0 kilometers (with an uncertainty of +0.9/-0.6 km). They find their calculation is a factor of two more accurate than any done before.

That's… small. Like, really small. I'd consider 22 km a short bike ride, though to be fair doing it on a neutron star would be difficult.

So how did they get this number? The physics they employed is actually fiendishly complicated, but what they did in effect was solve a neutron star's equation of state (the physical equations that relate characteristics of an object like pressure, volume, and temperature) to get what the conditions would be like for a model neutron star with the mass fixed at 1.4 times that of the Sun. They then used those results and compared them against observations of an event from 2017: a merger of two neutron stars that resulted in a colossal explosion called a kilonova.

This event, called GW170817, was a huge watershed moment for astronomy, because the colliding neutron stars emitted powerful gravitational waves, literally shaking the fabric of the Universe. This was our first alert to the event, but then a large fraction of telescopes on and above the Earth aimed at the part of the sky where the merger was found to be, and saw the explosion itself, the kilonova. It was the first time an event was seen emitting electromagnetic energy (that is, light) that was first seen in gravitational waves. It also put a lot of constraints on the neutron stars that collided.

For example, after they merged they emitted light in a specific way, and it turns out that was inconsistent with the merged remnant having enough mass to collapse directly into a black hole. That happens at around 2.4 times the Sun's mass, so we know the two stars together had less mass than that. Conversely, the light was inconsistent with the remnant being a neutron star well below that limit, too. It looks like a "hypermassive" neutron star was formed near that limit, lasted for a very short time, and then collapsed into a black hole.

All of this data was fodder for the scientists calculating the neutron star size. By comparing their models with the data from GW170817, they were able to greatly reduce the range of sizes that made sense, zeroing in on the 22 km diameter.

This size has interesting implications. For example, one thing the gravitational wave scientists are hoping to see is the merger of a black hole and a neutron star. This will definitely be detectable, but the question is, will it emit any light that more traditional telescopes can see? That happens when material from the neutron star gets ejected during the merger, generating a lot of light. The scientists in this new work ran the numbers and found that for a neutron star of 1.4 solar masses and 22 km diameter, any black hole bigger than about 3.4 times the mass of the Sun would not eject any material! That's a very low mass for a black hole, and it's very unlikely we'd see any that low mass, especially one with a neutron star it can eat. So they predict this event will only be seen in gravitational waves and not light. On the other hand, that's only for non-spinning black holes, and in reality most will have a rapid spin; it's unclear what would happen there, but I imagine a lot of folks will be running their models again to see what they can predict.

Having the size of a neutron star means being able to better understand what happens as they spin, as their ridiculously powerful magnetic fields affect material around them, how they accrete new material, and what happens near the mass limit between a neutron star and a black hole. Even better, as the LIGO/Virgo gravitational wave observatory folks fine-tune their equipment, they expect their sensitivity to increase, allowing better observations of neutron star mergers, which can then be used to tighten the size constraints even more.

I've been fascinated by neutron stars my whole life, and to be honest that's the correct attitude. They're leftovers from supernovae; they collide and make gold, platinum, barium, and strontium; they are the powerhouse behind pulsars; they can generate mind-crushing blasts of energy; and they are the densest objects you can still consider to be in the Universe (the physical object inside a black hole's event horizon is forever beyond our reach).

I mean, c'mon. They're amazing. And that about sizes them up.
Through this article on how to multiply two numbers between 10 and 20, we are starting a series on speed maths techniques. So stay connected with this website, and don't forget to join our Telegram group if interested; the link is given at the end of this article. Here we have started with simple and easy numbers in order to make the basics clear for our readers. Once you are comfortable dealing with simple digits, you can move towards advanced calculations involving more complex digits. After going through this article, don't forget to practice questions by yourself.

Q.) Multiply 12 by 11

Solution: Consider 10 as the base number and work out the difference of each number from 10. In this case the first difference is +2 (12-10) and the second difference is +1 (11-10). Now divide the answer into two parts. The first part will have the number obtained after adding 12 and 1, whereas the second part will have the product of the two differences. The fact you always need to remember here is that the number of digits in the second part will always be equal to the number of zeros in the base. Since the base 10 has only one zero, there will be only one digit in the second part of our answer. For the rest, follow the instructions given in the figure below to understand it more clearly. The answer for this question is 132.

Q.) Multiply 11 by 15

Solution: We use the same method as described in the question above to multiply these two numbers; you can also follow the instructions given in the figure below. The answer to this question is 165.

Q.) Multiply 17 by 16

Solution: Here, after following the same method as described above, a problem arises: what to do when there are two digits after multiplying the two differences? Since, as mentioned above, the second part can have only one digit (there is only one zero in the base), we carry over the digit at the tens place to the first part and add it, as shown in the figure below. The answer to this question is 272.

Q.) Multiply 14 by 18

Solution: For multiplying 14 by 18, the same pattern is followed as described in the previous question. The answer to this question is 252.

Q.) Multiply 17 by 19

Solution: Although the numbers may look complex here, dealing with them by the same method can save a couple of valuable seconds in the examination. The answer to this question is 323.

Q.) Multiply 15 by 18

Solution: A similar method is followed here as well. The answer to this question is 270.

Q.) Multiply 14 by 19

Solution: The method is explained in the figure below. The answer to this question is 266.

Q.) Multiply 17 by 14

Solution: Refer to the figure given below. The answer for this question is 238.

Q.) Multiply 13 by 16

Solution: The method is similar to the one used in all the other questions explained above; refer to the figure below. The answer to this question is 208.

Hope these examples, involving the multiplication of two different numbers in every case, help you learn the method more practically. That's all in this picture-oriented article on multiplying two numbers between 10 and 20. We have added these pictures to help you understand the calculations happening in the background. Once you have understood them, the method will definitely contribute to increasing your speed while dealing with questions in the quantitative aptitude section.
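If you like to verify tricks programmatically, here is a small sketch (our illustration, not part of the original method write-up) that performs exactly the steps described above:

# Base-10 shortcut for multiplying two numbers between 10 and 19.
def multiply_teens(a, b, base=10):
    d1, d2 = a - base, b - base           # differences from the base
    first = a + d2                        # first part, e.g. 12 + 1
    second = d1 * d2                      # product of the differences
    carry, second = divmod(second, base)  # only one digit allowed (one zero in 10)
    return (first + carry) * base + second

for x, y in [(12, 11), (17, 16), (14, 19)]:
    print(x, "x", y, "=", multiply_teens(x, y))   # 132, 272, 266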
If you wish, you can post your answers below in the comment box; someone from our team will definitely reply to your comment.
Q.) Multiply the numbers given in the respective questions and answer the following.
(a) 15 × 19
(b) 11 × 13
(c) 16 × 18
(d) 14 × 17
(e) 12 × 15
Answer these questions in the comment box given below, and if you find this article useful then don't forget to share it with your near and dear ones. As they say, sharing is caring. Thank you!
Join our Telegram Channel: t.me/bankingdreams
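For readers who prefer code to figures, below is a minimal Python sketch of the same base method. It is not part of the original article, and the function name and the base parameter are our own; it simply re-derives the worked answers above, with the built-in multiplication serving as a cross-check.

def multiply_near_base(a, b, base=10):
    # Differences of each number from the base (e.g. 17 -> +7, 16 -> +6).
    d1, d2 = a - base, b - base
    # First part: the cross sum, i.e. one number plus the other's difference.
    first = a + d2
    # Second part: the product of the two differences. It may keep only as
    # many digits as the base has zeros, so any excess is carried over.
    carry, second = divmod(d1 * d2, base)
    return (first + carry) * base + second

# Cross-check against the worked examples above.
for x, y in [(12, 11), (11, 15), (17, 16), (14, 18), (17, 19), (13, 16)]:
    assert multiply_near_base(x, y) == x * y
    print(x, "x", y, "=", multiply_near_base(x, y))

The same idea extends to numbers near 100: with base 100 the second part keeps two digits, which matches the rule that the number of digits in the second part equals the number of zeros in the base.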
Mumps was a common childhood viral disease, but widespread vaccination has now made it rare in developed countries.
- Analyze the cause, symptoms, and prevention of mumps
- Mumps is a contagious disease that is spread from person to person through contact with respiratory secretions, such as saliva from an infected person. The common symptoms of mumps include inflammation of the salivary glands, pancreas, and testicles; fever; and headache.
- A physical examination confirms the presence of the swollen glands. Usually, the disease is diagnosed on clinical grounds and no confirmatory laboratory testing is needed.
- The most common preventative measure against mumps is vaccination with a mumps vaccine. The vaccine may be given separately or as part of the routine MMR immunization vaccine, which also protects against measles and rubella.
- Like many other viral illnesses, there is no specific treatment for mumps, other than supportive treatment. Death from mumps is very unusual. The disease is self-limiting, and the general outcome is good. Known rare complications of mumps include infertility in men and profound hearing loss.
- orchitis: A painful inflammation of one or both testes.
- salivary gland: Any of several exocrine glands that produce saliva to break down carbohydrates in food enzymatically.
- prodromal symptoms: A prodrome is an early symptom (or set of symptoms) that might indicate the start of a disease before specific symptoms occur.
- parotid gland: Either of a pair of salivary glands located in front of, and below, each ear in humans.
Mumps, also known as epidemic parotitis, was a common childhood viral disease caused by the mumps virus. Before the introduction of a vaccine in 1949, it was common worldwide, but now outbreaks are largely confined to developed countries. The common symptoms of mumps include inflammation of the salivary glands, pancreas, and testicles; fever; and headache. Swelling of the salivary glands, specifically the parotid gland, is known as parotitis; it occurs in 60–70% of infections and 95% of patients with symptoms. Parotitis causes swelling and local pain, particularly when chewing. It can occur on one side, but in about 90% of cases it occurs on both sides. Painful inflammation of the testicles in mumps is known as orchitis. Other symptoms of mumps can include dry mouth, sore face and/or ears and, occasionally, in more serious cases, loss of voice. In addition, up to 20% of persons infected with the mumps virus do not show symptoms, so it is possible to be infected and spread the virus without knowing it. Fever and headache are prodromal symptoms of mumps, together with malaise and loss of appetite. Mumps is a contagious disease that is spread from person to person through contact with respiratory secretions, such as saliva from an infected person. When an infected person coughs or sneezes, the droplets aerosolize and can enter the eyes, nose, or mouth of another person. Mumps can also be spread by sharing food and drinks. The virus can survive on surfaces and then be spread after contact in a similar manner. A person infected with mumps is contagious from approximately six days before the onset of symptoms until about nine days after symptoms start. The incubation period can be anywhere from 14–25 days, but is more typically 16–18 days. A physical examination confirms the presence of the swollen glands. Usually, the disease is diagnosed on clinical grounds, and no confirmatory laboratory testing is needed.
If there is uncertainty about the diagnosis, a test of saliva or blood may be carried out. An estimated 20–30% of cases are asymptomatic. As with any inflammation of the salivary glands, the level of amylase in the blood is often elevated. The most common preventative measure against mumps is vaccination with a mumps vaccine. The vaccine may be given separately or as part of the routine MMR immunization vaccine, which also protects against measles and rubella. The MMR vaccine is given at ages 12–15 months and then again at four to six years.
Treatment and Complications
Like many other viral illnesses, there is no specific treatment for mumps. Symptoms may be relieved by the application of intermittent ice or heat to the affected neck/testicular area and by acetaminophen or ibuprofen for pain relief. Warm salt water gargles, soft foods, and extra fluids may also help relieve symptoms. Patients are advised to avoid acidic foods and beverages, since these stimulate the salivary glands, which can be painful. Death from mumps is very unusual. The disease is self-limiting, and the general outcome is good, even if other organs are involved. Known complications of mumps include:
- In teenage males and men, complications from orchitis, such as infertility or sub-fertility, are rare but present.
- Spontaneous abortion in about 27% of cases during the first trimester of pregnancy.
- Mild forms of meningitis in up to 10% of cases.
- Profound hearing loss, which is very rare; however, mumps was the leading cause of acquired deafness before the advent of the mumps vaccine.
After the illness, life-long immunity to mumps generally occurs; re-infection is possible but tends to be mild and atypical.
Last Updated on September 29, 2023
Since the time of our ancestors, navigation has played a key role in the history of humanity. From the early seafarers of Polynesia, who navigated the vast oceans with the guidance of the stars, to modern-day astronauts exploring the universe with breakthrough technology, humans have used different tools to find their way around the world. This article delves into the evolution of maritime technology, from the beginnings of the modest sextant to the rise of powerful and all-encompassing satellites.
The Humble Sextant
Sextants are navigational tools used to identify the angle between two objects, commonly a celestial object like a star or the sun and the horizon. They were first developed during the early 18th century and quickly became critical tools for navigating the oceans.
Using Sextants in the Maritime Industry
It takes both practice and skill to use a sextant. The navigator starts by measuring the angle between the celestial object and the horizon. This measurement is then used, together with some other calculations, to determine latitude. Although sextants are no longer as commonly used as they once were, they remain essential for navigators and sailors who wish to learn traditional navigation techniques.
Strengths and Weaknesses of Sextants
Sextants were a dramatic improvement over earlier navigation tools, including the astrolabe and the quadrant. They allowed sailors to identify their latitude with better accuracy, which was necessary for long-distance ocean and sea voyages. But as expected, sextants also had several limitations. For one, they could not determine longitude. An accurate reading is also possible only if the horizon is visible and the skies are clear.
The Game-Changing Chronometer
Chronometers are highly accurate clocks used to determine longitude by measuring the difference in time between a known location and the ship's location. The chronometer was initially developed during the 18th century and played a crucial role in maritime navigation.
Using the Chronometer in the Maritime Industry
Using a chronometer correctly requires precision and skill. The navigator first sets the clock to the time at a known location, such as a port. The local time at the navigator's current position is then determined, and by comparing these two times the navigator can calculate their longitude. Although sailors no longer use chronometers as their primary navigation tool, some navigators continue to use them today, especially those who appreciate their accuracy and historical significance.
Strengths and Weaknesses of Chronometers
The chronometer changed the game for ocean navigation. Thanks to this tool, sailors could identify their longitude with better accuracy, which was imperative for long-distance trips. The downside was that chronometers needed regular maintenance. They were also expensive, making them an impractical choice for most sailors.
The Nifty Radio Navigation
Radio navigation uses radio signals to determine the position of a ship or an aircraft. It was first introduced during the early 20th century and proved helpful during the Second World War.
Using Radio Navigation in the Maritime Industry
Specialized training and equipment are necessary to use radio navigation. The navigator uses radio signals from a network of ground-based stations to identify their position.
Although it has been some time since radio navigation served as the primary navigation method for most pilots and sailors, those who appreciate its ease of use and reliability continue to use it even today.
Strengths and Weaknesses of Radio Navigation
With the help of radio navigation, sailors and pilots could pinpoint their position with improved accuracy, even in poor weather conditions. But despite this, radio navigation has its own set of limitations. There is always the risk of radio signals being disrupted or jammed. On top of that, the system was accurate only within a specific range.
The Revolutionary GPS Navigation
The Global Positioning System (GPS) is a satellite-based navigation system that provides time and location data no matter where you are. The United States Department of Defense developed GPS during the 70s, before civilians were allowed to use it during the 80s.
Using GPS Navigation in the Maritime Industry
GPS navigation is intuitive and easy to use. The navigator needs only to turn the GPS receiver on and wait for a signal from the GPS satellite system. Once the receiver acquires a signal, it can give the user their exact location, direction, and travel speed. The introduction of GPS navigation changed how humanity navigates the world, and today it has become the most widely used navigation tool among drivers, pilots, and sailors alike.
Strengths and Weaknesses of GPS Navigation
GPS navigation's incredible accuracy and reliability have made it the go-to tool for drivers, pilots, and sailors. But despite its advancement, trees, buildings, and other similar obstacles can still disrupt GPS signals. The system also relies on a network of satellites that can be vulnerable to attack.
The Bottom Line
The navigational tools used in the maritime industry have come a long way since the sextant was developed. These days, people have instant access to incredibly reliable and accurate devices such as GPS that have changed how humanity navigates the world. But even with all these developments, it is still important to remember and commemorate the skill and ingenuity of the early navigators who relied on the stars, combined with their wits and skills, to find their way across the vastness of the ocean. Whether you are a seasoned sailor or a beginner, knowing the history and evolution of maritime technology can help you understand and appreciate the remarkable progress this critical field has seen through the years.
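As a footnote to the chronometer section above, here is a minimal Python sketch (our illustration, not from the article) of the time-to-longitude arithmetic: the Earth rotates 360 degrees in 24 hours, so each hour of difference between chronometer time and local solar time corresponds to 15 degrees of longitude.

def longitude_from_times(reference_time_h, local_time_h):
    # Difference between local solar time and the time kept by the
    # chronometer for a known reference meridian, in hours.
    diff_h = local_time_h - reference_time_h
    # The Earth turns 15 degrees per hour; positive values are east
    # of the reference meridian, negative values are west.
    return diff_h * 15.0

# Example: it is local noon while the chronometer reads 15:00 at the
# reference port, so the ship is 45 degrees west of that port.
print(longitude_from_times(15.0, 12.0))  # -45.0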
Scientist of the Day - The Transistor Seventy-five years ago, on December 16, 1947, John Bardeen and Walter Brattain successfully tested the world’s first transistor in Murray Hill, New Jersey. The two physicists were members of a Bell Labs research group seeking a new means of amplifying electrical signals. During the first half of the 20th century, electrical engineers had relied on vacuum tubes to accomplish that task, but those devices were bulky, fragile, and consumed a great deal of power. Bardeen and Brattain’s supervisor, William Shockley, theorized that it might be possible to develop an improved amplifier by capitalizing upon the previously unexplored electrical properties of semiconductors. Shockley’s research team focused their attention on the surface of materials like germanium, where a layer of electrons prevented external electric fields from modulating current passing through the interior. Beginning in mid-November 1947, Bardeen and Brattain conducted a series of experiments that eventually allowed them to circumvent this barrier and create a solid-state amplifier. The culmination of this “magic month” of research came on Tuesday, December 16, when Brattain suspended a small plastic wedge above a piece of germanium. He wrapped a thin ribbon of gold foil around the wedge and used a razor to slice it in half at the vertex. He then carefully maneuvered the wedge so that the resulting electrical contacts touched the surface of the germanium and connected the entire setup to a power supply. Incredibly, this makeshift apparatus was able to boost both the power and voltage of an incoming signal without a vacuum tube (first image). After sharing their findings with Shockley, Bardeen and Brattain scheduled a demonstration for Bell Labs’ leadership team on December 23, 1947. After the holidays, they started working on a patent application, while management began brainstorming a name for the new amplifier. Ralph Bown, Bell Labs’ vice president for research, organized a committee to resolve the latter question. In May 1948, this group circulated a memo asking key executives and members of the technical staff to choose between half a dozen possible options, including “Semiconductor Triode” and “Iotatron.” When the ballots were counted, the winner was transistor, which the memo described as “an abbreviated combination of the words ‘transconductance’ or ‘transfer’ and ‘varistor.’” Bown would later take center stage at the June 30, 1948 press conference that introduced the public to Bardeen and Brattain’s invention, which by then had been redesigned as a metal cylinder smaller than a paperclip, which contained two fine wires touching a pinhead-sized sliver of germanium. Standing next to a giant cutaway model of the transistor at Bell Labs’ New York City headquarters, he explained that it could “do just about everything a vacuum tube can do, and some unique things which a vacuum tube cannot do” (second image). He then invited reporters to listen to his voice being amplified with transistors through headphones attached to their seats. Other demonstrations featured an oscillator circuit that did not require any time to warm up (a common issue when dealing with vacuum tubes) and an off-the-shelf radio receiver whose tubes had been replaced with transistors (third image). Members of Bell Labs’ technical staff were convinced that the transistor was a game-changing innovation, but the same could not be said of the popular press. 
In a move that would subsequently be deemed short-sighted, the July 1, 1948 edition of the New York Times limited coverage of the press conference to a handful of paragraphs in a column on page 46 entitled "The News of Radio." The scientific community responded more enthusiastically to Bardeen and Brattain's first article on the transistor in the July 15, 1948 issue of Physical Review (fourth image). Not to be outdone, the editors of Electronics, a leading trade journal, placed a photograph of Bardeen, Brattain, and Shockley inspecting a "crystal triode" on the front of their September 1948 issue. (Regrettably, the Linda Hall Library's copy of that publication had its cover removed when it was sent to the bindery.) As much as Bell Labs appreciated this publicity, it was clear that there was still much to learn about the theoretical principles underlying the transistor's operation and how easily it could be mass produced. One person who was interested in both questions was William Shockley. In 1950, he published an influential textbook summarizing the latest developments in semiconductor electronics. While working on that manuscript, he also oversaw the development of an entirely new type of transistor, consisting of three layers of semiconductors with varying electrical properties that were placed in direct contact with one another. In addition to eliminating the need to fabricate delicate electrodes on the surface of the semiconductor, Shockley's sandwich-shaped "junction" transistor offered greater power efficiency and reliability compared to Bardeen and Brattain's "point-contact" design. For these reasons, the junction transistor eventually became the industry standard. Before that could happen, however, other companies would need to embrace transistor technology. In response to growing demand from rival electronics firms and the U.S. military, Western Electric (AT&T's manufacturing subsidiary) began licensing the rights to produce transistors in 1951. That year, Bell Labs started organizing symposia where licensees could learn about the latest semiconductor production methods. The proceedings from the second of these meetings, held in April 1952, were initially classified due to national security concerns but were subsequently issued as a two-volume set (Transistor Technology) that was nicknamed "Mother Bell's Cookbook" by industry professionals (fifth image). Thanks to these outreach efforts, transistors gradually made their way into consumer products, beginning with the Sonotone 1010 hearing aid in 1952. Two years later, Texas Instruments (TI) and the Regency Division of Industrial Development Engineering Associates (IDEA, an Indianapolis-based electronics company) released the Regency TR-1, the first commercially available transistor radio (sixth image). The TR-1 was expensive ($49.95), and Consumer Reports noted that its ability to pick up signals and overall sound quality paled in comparison to other portable models. Despite these shortcomings, Americans purchased over 100,000 TR-1 units, confirming the popularity of the lightweight, pocket-sized electronic devices that could now be produced using transistors. The success of the TR-1 revealed how much the transistor was beginning to alter the landscape of the American electronics industry. Established east coast corporations like AT&T, IBM, and RCA now faced competition from newcomers like TI, which began selling the first silicon transistors in 1954.
These devices were more difficult to manufacture than their germanium predecessors, but their superior performance inspired a growing number of people to explore the technological possibilities of silicon. Foremost among them was William Shockley, who left Bell Labs in 1955 to establish the first of many semiconductor firms in the region that would become known as Silicon Valley. The following year, he traveled to Stockholm along with Bardeen and Brattain to accept the Nobel Prize in Physics “for their researches on semiconductors and their discovery of the transistor effect.” Over the coming decades, engineers continued to design smaller, faster transistors and new products capitalizing on their ability to detect and amplify electrical signals. The growing complexity of these systems led electrical engineers Jack Kilby (TI) and Robert Noyce (Fairchild Semiconductor) to develop the first integrated circuits, featuring multiple transistors, capacitors, or resistors on a single semiconductor substrate. Subsequent improvements in fabrication techniques led to an exponential growth in the number of transistors that could fit on a silicon wafer (famously documented in 1965 by Noyce’s colleague, Gordon Moore) and the creation of the modern microchip. Today, billions of microscopic transistors can be found in each of our laptops, cell phones, televisions, and video game systems. Together they comprise the invisible foundation of our digital world, and every single one can trace its origins to an experiment conducted seventy-five years ago today in an unassuming laboratory in New Jersey. Benjamin Gross, Vice President for Research and Scholarship, Linda Hall Library. Comments or corrections are welcome; please direct to [email protected].
One thing you may know about particle physics experiments is that they're enormous. The Large Hadron Collider is five or so miles in diameter, big enough to circle some towns. Stanford's Linear Accelerator is two miles long. Scientists are hoping a new experiment will lead to far smaller but still extremely powerful accelerators. Scientists at CERN yesterday ran some of the first tests at the AWAKE experiment, a new kind of accelerator based on a concept that might be able to cut the size of particle physics experiments down by a factor of a hundred or more. Colliders like the Large Hadron Collider have lots of parts, but generally need a place to store particles, a place to speed them up to incredibly high speeds, a place to smash them together or against something else, and a place to look at all of the particle bits that came out of the resulting explosion. Part of the speeding up requires pushing the particles through a series of alternating electric fields. To get the particles faster, scientists build longer experiments rather than more powerful ones. The AWAKE experiment, also known as the "Proton Driven Plasma Wakefield Acceleration Experiment," will use a whole new method that will get particles going much faster in a shorter amount of time. AWAKE's secret comes from wakefield acceleration, a concept first theorized only in the 1970s but too much of a technical challenge to construct back then, project leader Edda Gschwendtner told Popular Science. Here's how it works: first, a packet of protons from CERN's proton accelerator, the Super Proton Synchrotron, passes through a field of plasma. The electrons in the plasma are negatively charged, so they fly towards the positively charged proton bunch. By that point, though, the protons have flown away, so the electrons keep flying. The electrons leave positively charged plasma in their absence, though, so they're pulled back to where they came from. The process continues and makes a wave. If you plop another electron into the wave, it will surf along to avoid all the other negative electrons crashing down, causing our surfer dude to accelerate really quickly through the wakefield, as much as a thousand times faster than the traditional method, according to an article published in Nature. "It would allow accelerators to be much smaller," Gschwendtner said. "If you make a linear collider nowadays it would be about 50 kilometers." Today, physicists just passed the proton bunches through the main beam line on the way to the experiment, a prerequisite to getting anything to work. There's no plasma or accelerated electrons yet – that probably won't happen until 2018. AWAKE is just a proof of concept, an early attempt to produce a wakefield accelerator. Stanford Linear Accelerator lab and Brookhaven National Lab in the United States are also working on similar concepts, but AWAKE is the only one whose wave is created by protons, which could produce more powerful wakefield accelerators. One day, we may even see powerful tabletop-sized particle accelerators, according to a CERN press release, but those probably wouldn't use protons to make the wave, since the proton-based accelerators are for more heavy-duty experiments. Either way, we won't see these kinds of accelerators in particle physics experiments for several decades, said Gschwendtner. But the idea of tabletop particle physics experiments is enough to keep our eyes open.
Electricity Role Play
1. Have the students stand in a continuous line side-by-side, with their arms around each other's shoulders.
2. The first person in line starts a wave by bending over and then standing back up. This will sequentially pull everyone else in the line over too, simulating electricity flowing through a conductor.
3. Next, students will simulate an insulator. Repeat the line of students with their arms around each other's shoulders, but this time have one student in the middle drop their arms to their sides. They should still be next to the other students, but not linked to them. Again, the first person in the line bends over at the waist and stands up. What happens?
4. Discuss with the students what happened this time – how the bending wave stopped at the unattached student. This simulates the effect of an insulator (the unattached student), which is a material that does not allow electricity to easily flow through it.
5. Point out that a good conductor is a poor insulator and a poor conductor is a good insulator. Brainstorm two lists with the students – one of objects that they think would be good conductors and one of objects that would be good insulators. Ask them how they would find out which was which.
6. After completing your lesson on electricity and circuits, revisit these hypotheses with the students to see if their experiments with insulators and conductors confirmed or refuted their predictions.
The word circuit literally means a route that starts and finishes at the same place. An electrical circuit has a source of power (the battery), the connecting wires (the conductors), and the device that is collecting the electrical power (the load). For the load to receive the electricity, the circuit must be continuous. If the circuit is broken, the power cannot get through to power the load.
Controlling the Electrical Current: In an electrical circuit, there needs to be a way to control when the power will flow through it and when it won't. That is the job of the switch. A switch can be open or closed. A closed switch keeps electricity flowing through the circuit to power a device. An open switch creates a gap in the circuit, so electricity stops flowing to the device. It "breaks" the circuit. A switch can also be used to change the pathway of electricity in a parallel circuit to power a different device. Another way electrical current has been controlled is through fuses. A fuse will literally melt down if too much current is forced through it, creating a circuit gap – which stops the flow of electricity. This was meant to stop electrical fires, but it requires replacing the burned-out fuse with a new one each time to get the current going again. A newer current protection device is the breaker. A breaker or circuit breaker is a switch that is designed to activate and break the flow of electricity when too much current is flowing through the system, to prevent an electrical fire. It can be reset by hand, so it does not have to be replaced each time like a fuse. Instead of simply stopping the flow of electricity, it is sometimes necessary to just decrease how much current a device is getting. This is done by a resistor. A resistor can lower the current that is flowing into more fragile devices like a computer circuit board.
Types of Circuits
There are three kinds of electrical circuits we will look at: simple circuits, parallel circuits, and series circuits.
• A simple circuit has just one device (component load) that gets its electricity from the power source and sends it along the conductors in the circuit to touch the power source's opposite end (terminal).
• A series circuit has more than one device (component load) joined end-to-end by connecting wire (conducting wire). If there is a break anywhere in the wire (conductor), all the devices will lose power, because the circuit has been broken.
• A parallel circuit has branching paths to each device (component load) that is getting electricity. If the wire (conductor) to one device (component load) is broken in a parallel circuit, the other devices will still get power. This makes it a better circuit for wiring a house. (See the Python sketch after the citation details below.)
Simple Circuit Making Activity
When you research information you must cite the reference. Citing for websites is different from citing from books, magazines and periodicals. The style of citation shown here is MLA style (Modern Language Association). When citing a WEBSITE the general format is as follows. Author Last Name, First Name(s). "Title: Subtitle of Part of Web Page, if appropriate." Title: Subtitle: Section of Page if appropriate. Sponsoring/Publishing Agency, If Given. Additional significant descriptive information. Date of Electronic Publication or other Date, such as Last Updated. Day Month Year of access < URL >.
Amsel, Sheri. "Circuits Unit (Complete)" Exploring Nature Educational Resource ©2005-2021. November 28, 2021 < http://exploringnature.org/db/view/Circuits-Unit-Complete >
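The following short Python sketch is our illustration rather than part of the original lesson, and the resistance values are hypothetical. It shows numerically why the series/parallel distinction matters: loads in series add their resistances, while parallel branches combine through their reciprocals, so each parallel device keeps its own path.

def series_resistance(resistors):
    # In a series circuit the same current passes through every load,
    # so the resistances simply add.
    return sum(resistors)

def parallel_resistance(resistors):
    # In a parallel circuit each load sits on its own branch, so the
    # reciprocals of the resistances add.
    return 1.0 / sum(1.0 / r for r in resistors)

loads = [10.0, 20.0, 30.0]                   # hypothetical resistances in ohms
print(series_resistance(loads))              # 60.0 ohms
print(round(parallel_resistance(loads), 2))  # 5.45 ohms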
What are gross motor skills and why are they important? Gross motor skills are those that enable us to move efficiently, to negotiate our environment, to balance, to run, jump, play games with balls and more. Gross motor skills use the large muscles of the body. They are essential for movement and for participating in activities at school and at home. If your child has a problem with motor skills such as:
• climbing and negotiating playground equipment like monkey bars,
• copying games like follow the leader, Simon Says and freeze games,
• galloping and skipping,
• ball skills - catching, throwing and kicking,
• jumping and hopping,
• swinging on a swing,
have your child assessed by the Albany Children's Physio physiotherapist. We can liaise with your child's teacher so that your child can achieve their best at school and at home. Gross motor skill development goes hand in hand with the development of speech and fine motor skills. Good motor skills stem from having developed good sensory motor skills – especially body and limb awareness and motor planning skills – which we develop from birth and which continue to develop throughout our life span. Good gross motor skills and body awareness are important for good posture, movement around the school, participation in games and sports, and for general fitness, health and well-being. Email us with "list of activities for school aged children" in the subject line and we will send you 3 documents with activity ideas to develop your child's gross motor skills.
Normal (Gaussian) Distribution: z Tests
This chapter examines the normal or Gaussian distribution. Gaussian distributions are important because they often approximate distributions occurring in nature. Other distributions, such as the Poisson and binomial distributions, approximate the normal when the sample size is large. But there is a further reason for the importance of normal distributions. The chapter looks at how normal distributions fit into statistical theory. Gaussian or normal distributions are important because of sampling distributions and the central limit theorem. The chapter explains the sampling distribution: the mean of the sampling distribution will be the same as the mean of the population. The importance of the normal distribution is that generally, no matter what the population distribution, the sampling distribution will be normal as long as the samples are sufficiently large. The chapter considers the calculations that can be made using the normal distribution tables.
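As a concrete companion to the chapter summary above, here is a minimal one-sample z test in Python. This is our sketch rather than code from the book, and the sample numbers are hypothetical; it leans on the central limit theorem exactly as described, computing the tail area directly instead of reading it from a table.

import math

def one_sample_z_test(sample_mean, pop_mean, pop_sd, n):
    # By the central limit theorem, the sampling distribution of the
    # mean is approximately normal with standard error pop_sd/sqrt(n).
    se = pop_sd / math.sqrt(n)
    z = (sample_mean - pop_mean) / se
    # Two-tailed p-value; erfc(|z|/sqrt(2)) equals twice the upper-tail
    # area of the standard normal distribution beyond |z|.
    p = math.erfc(abs(z) / math.sqrt(2))
    return z, p

# Hypothetical example: a sample of 50 scores with mean 103, tested
# against a population with mean 100 and standard deviation 15.
z, p = one_sample_z_test(103, 100, 15, 50)
print(round(z, 3), round(p, 3))  # about z = 1.414, p = 0.157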
Attractive and durable surface coloration can be achieved by a process called anodization. Roughly speaking, the surfaces are carefully degreased and treated with hydrofluoric and nitric acid. Silver, gold, blue, purple, pink and pale blue may be produced under the right conditions. More specifically, anodizing is the building of a coating on the metal. This permanent oxide coating can refract and absorb light to take on extremely decorative colors. A variety of means may be employed: heat, chemical, or electrolytic. The most common is electrolytic, similar to electroplating, because it is a more controlled process, yielding predictable color and uniform appearance. The layer is in fact an oxide of titanium. The mildly conductive solution usually consists of phosphoric acid and trisodium phosphate. The voltage influences the color. It is interesting to note that anodizing does not result in the creation of a pigment, but in interference colors. The apparent color is caused by interference between certain wavelengths of light reflecting off the metal and the oxide-coated surface. Light passing through the oxide layer, then reflecting off the metal, must travel farther than light reflecting directly off the surface of the oxide. If one wave pattern is out of sync with the other, they will cancel each other out, making that particular color "darker" or not visible at all. If the thickness is such that a specific wavelength of light following one path closely synchronizes with that of the other path, then the wave strength (amplitude) will be increased, and that particular color will appear brighter. When the wave patterns cancel each other, it is called destructive interference, and when they match, it is constructive interference. It is possible that the thickness will create a combination of effects at the same time. At about 110-120 VDC, the anodized titanium takes on a purple appearance, but with green highlights or reflections.
Figure: Relationship between applied voltage, thickness of titanium oxide layer, and color of titanium.
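To put rough numbers on the interference condition described above, here is a small Python sketch of ours (not from the source). It ignores phase shifts at the interfaces and assumes a refractive index of about 2.5 for the titanium oxide layer, so the thicknesses are only indicative: a wavelength is reinforced roughly when the extra optical path through the oxide, twice the layer's optical thickness, is a whole number of wavelengths.

def constructive_thicknesses(wavelength_nm, n_oxide=2.5, m_max=3):
    # Simplified constructive interference: 2 * n * t = m * wavelength,
    # so t = m * wavelength / (2 * n) for m = 1, 2, 3, ...
    # n_oxide = 2.5 is an assumed index for the titanium oxide layer.
    return [m * wavelength_nm / (2 * n_oxide) for m in range(1, m_max + 1)]

# Oxide thicknesses (in nm) that would brighten violet light near 420 nm.
print(constructive_thicknesses(420))  # [84.0, 168.0, 252.0]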
Concept mapping
Also known as: mind mapping, diagramming, webbing, idea mapping
Concept maps are an effective way of expressing structured relationships. They have three elements: shapes, arrows, and text. The subject is at the top, and related ideas become more specific as you move down the map. In this way, concept maps differ from mind maps because relationships are expressed in a tree or radial structure, depending on the relationships between ideas. They enable teachers to tell at a glance if students have a deep understanding or are struggling with the content and concepts being studied. Concept maps aid learning by explicitly integrating new and old knowledge, and students can assess understanding or diagnose misunderstanding through drawing concept maps.
ICT tools: Google G Suite
How to use with ICT
Create an interactive and/or collaborative concept map to support student understanding of relationships between content, using one of the available tools (see ICT tools). Presentation software or a word processor can be used to create a skeleton concept map that students will complete in class. Specialised applications like draw.io or bubbl.us are better for creating a concept map from scratch in class. Project a concept map on the board while adding to it as a class. Share the completed concept map via email, Google Classroom or Microsoft Teams.
| How to make a concept map (video) by Lucidchart | This tutorial teaches both expert and beginner diagrammers how concept mapping works and how to make one from scratch, with explanations every step of the way. |
| Concept mapping and the theory behind its structure | The theory behind the development of concept mapping by American professor and science researcher Joseph D. Novak and his team from Cornell University in the 1970s. |
| Concept map on the Teacher Toolkit website | Concept map on the Teacher Toolkit website. |
Links to third-party websites: The department accepts no responsibility for content on third-party websites.
Disability, Learning and Support
When planning to use technology in the classroom it is important to consider the diversity of your learners. Universal Design for Learning (UDL) is a framework to guide the design of learning environments that are accessible and effective for all. For UDL guidelines, information and additional materials, visit the CAST website. Many students require technology as an adjustment to support their access to learning. Adjustments (NESA) are actions taken that enable a student with disability and additional learning needs to access syllabus outcomes and content on the same basis as their peers. Enrol in the Personalised learning with technology online course to help you make more informed decisions regarding technology. For a range of simple, how-to videos visit the Assistive Technology page on the Disability, Learning and Support website. Resources are organised into four sections: Literacy and Learning, Vision, Hearing, Physical and Motor Skills.
High potential and gifted learning and support
When planning to use technology in the classroom it is important to consider the full range of abilities of all learners. High potential and gifted learners may require additional adjustments and deliberate talent development.
These strategies include differentiation, grouping, enrichment and advanced learning pathways so students can be engaged, grow and achieve their personal best. Assessing and identifying high potential and gifted learners will help teachers decide which students may benefit from extension and additional challenge. Effective strategies and contributors to achievement for high potential and gifted learners help teachers to identify and target areas for growth and improvement. For further support and advice about how to tailor learning for high potential and gifted students from all backgrounds, visit the High Potential and Gifted Education web section and the High Potential and Gifted Education Policy, or attend one of the professional learning courses on offer.
Subsurface ocean currents are vertical and horizontal movements of water masses toward equilibrium; they are very broad movements of water that occur throughout the oceans of the world. Sometimes the flow below sea level occurs erratically. When subsurface currents are running, diving becomes very difficult because it is quite dangerous; indeed, these flows below the surface, better known simply as ocean currents, can be extremely hazardous. So what exactly makes the water below the surface so dangerous? Here are some of the causes of subsurface ocean currents that you need to be aware of.
1. Persistent wind direction
Steady winds lead to the movement of seawater. When the wind blows consistently in one direction, currents below the surface arise from that sustained movement. The winds that cause such currents are the westerlies, the trade winds, and the monsoon winds.
2. Differences in salt levels
Subsurface ocean currents can also be caused by the salt content of seawater. Why is this so? Differences in the density of seawater can cause currents, though local in nature: seawater with a low density will move or flow toward seawater with a high density, so the water moves between regions of different density. That is what causes these ocean currents.
3. Temperature differences
The temperature of seawater can also cause subsurface currents. Sunlight heats the sea down to a depth of 50–70 m, within which the temperature is almost uniform, so this layer is called the homogeneous layer. As less and less sunlight penetrates into the sea, a thermocline layer forms in which the temperature decreases quickly. This is reinforced by equally rapid changes in salinity, resulting in thick layering, and in this zone water can be drawn upward. Further down, the temperature gradually falls to a depth of 1,000 m, where it is usually less than 5°C. This temperature difference is what drives such ocean currents.
4. Tides
The tidal motion of seawater is caused by the pull of the Moon and the Sun on the Earth. The Moon's cycle is 24 hours 51 minutes, so, ignoring other factors, a location on Earth will experience two tides a day. When the Moon and the Sun are more or less in a straight line with the Earth, as at new moon or full moon, their pulls reinforce each other. This strengthens the tides and, in turn, influences the occurrence of underwater currents.
5. Differences in sea-surface height
Differences in the height of the sea surface can cause a mass movement of ocean water to fill the lower-lying part of the sea. This leads to underwater currents; in other words, seawater flows from a higher surface to a lower one, or vice versa. Currents arising from differences in sea-surface height are called compensation currents, or charging currents.
6.
The existence of a barrier island or continent
Subsurface ocean currents can also occur because an island or continent stands in the way. Why is this so? A barrier island or continent can make a current turn and follow the coastline of the continent or hemisphere. The coastline or continent thus creates a return flow in the seawater, a subsurface ocean current, as with the Brazil Current along Brazil and the East Australian Current along Australia.
7. Differences in density
What is density? Density is the amount of matter contained in a unit volume of seawater, and it is influenced by the salinity, temperature, and pressure of the water. Because temperature, pressure, and salt levels are not the same everywhere in the ocean, the resulting density differences give rise to subsurface currents.
In addition to the points above, some causes of currents below sea level depend on the type of current:
- The topography of the ocean floor and the surrounding islands.
- Upwelling, a phenomenon in which strong winds drive currents of cold, nutrient-rich water toward the surface, and sinking (wind blowing from north to south along the coastline).
- Global warming, which is changing the chemistry of seawater and making it more acidic.
- Turbulence, the movement that occurs at the boundaries between layers of seawater, and friction between those layers.
- Low oxygen content in the deep sea, as well as temperatures higher than at the surface, can cause subsurface currents.
- Underwater currents can also occur when tsunamis happen nearby. Undersea movements can create ocean currents, and an earthquake in the sea or at the edge of the ocean can likewise affect their occurrence.
These, then, are the causes of subsurface ocean currents that you should know. They will help you in your research or your personal observations of ocean currents. Beyond research, knowing the causes above will also make you more alert to natural disasters around the sea. We hope this article is useful.
What is stress? There are many perceptions about what 'stress' is, and we mainly think about it in human terms and consider 'stress' to be bad for our health. But from a physiological and evolutionary perspective, stress has a different context and can be good and bad! Stress is a physiological state in which the body responds (behaviourally and/or physiologically) to a challenge (a 'stressor') that can threaten the well-being of the animal. The outcome of a physiological stress response largely depends on how the situation is perceived by the animal. A physiological stress response can be perceived as 'positive' (e.g. during mating) and as 'negative' (e.g. when attacked by a predator). A state of physiological stress is absolutely necessary for survival in many 'life-or-death' ('acute stress') situations: it provides the animal with the opportunity to adapt to the situation to avoid more severe consequences of the challenge. For example, it allows it to react more quickly and to run faster, or deal with an injury. Therefore, stress in this context is good. Animal welfare has historically largely focussed on what we would call negative stress, such as pain, fear or hunger, that arises in situations that farmed deer have little control over. When a deer cannot reduce the impact of a stressor or make it go away, stress becomes a chronic state which is detrimental to the animal and will threaten its health and well-being. For example, constant nervousness due to the prolonged presence of predators can prevent deer from feeding properly and thus, through poor nutrition, affect well-being and consequently production. However, chronic stress can have more direct effects on animal welfare. For instance, prolonged exposure to chronic stress and hormones secreted primarily by the adrenal gland can interfere with other physiological processes in the animal. For example, adrenal corticosteroids secreted over long periods of time can interfere with ovary function and disrupt reproduction. What does the adrenal gland do? The adrenal gland produces a number of hormones that elicit various 'heightened' or 'aroused' states in the animal. With acute stressors that require a rapid 'flight or fight' response (e.g. sudden attack by a predator), a burst of adrenaline is produced by the adrenal gland that seemingly gives the animal miraculous physical powers. This rapidly-acting hormone affects many aspects of the animal's physiology involved in reaction speed, aggression and muscle power, to name a few. However, the effects are short-lasting, as adrenaline is rapidly cleared from the body. Once the acute stressor is gone, the animal quickly returns to a normal physiological state. Where stressors are present on a chronic basis (e.g. the continual presence of predators that necessitates a constant state of vigilance, or as a response to an injury), the deer needs a longer-term shift in its response to them. In such a state, the adrenal gland secretes longer-acting hormones in the corticosteroid family (e.g. cortisol). Corticosteroids regulate many aspects of physiology, such as reproduction and immune function, in the absence of stressors, in a moderate, sustained way. However, when they are continually secreted at high levels as a response to chronic stress, they can shut down or dampen those normal biological functions. Stressors facing farmed deer Farmed deer face several potentially stressful situations due to the nature of farming.
Some will be more severe than others, and some can be avoided or minimised. For example, stressful periods in a farmed deer's life may include yarding, transport, unfamiliar humans or dogs, regrouping, weaning, and inclement weather. If animals are given the opportunity to adapt by expressing their natural behaviour, they will cope with the situation and no harm is done. However, it is in the farmer's interest to minimise stress in deer, as a stressed deer is likely to have lower production and can be a threat to humans and other deer, depending on the situation where the stress occurs. If animals are able to predict or control a situation, stress levels are in general lower. How does stress affect production? We know that chronic stress impacts on production, but it is hard to measure accurately in most cases. The effects can include poor reproductive performance, low growth rates due to reduced feed intake, and a greater incidence of disease due to suppression of the immune system. It is also often associated with an increased incidence of fatigue and impact injuries. Becoming familiar with the normal behaviour of deer is important in order to observe when deer are in a stressed state. Behaviours that indicate stress include fence-pacing, excessive and prolonged panting, aggression, general nervousness and frequent vocalisations. If these signs continue for a long time, the animals may be in chronic stress, and physical signs can include noticeable weight loss and hair loss. How do I mitigate these effects? Clearly, it is in the farmer's interest to minimise stress on the farm. One of the most common stressors encountered on farms is under-feeding. Hungry animals are stressed and exhibit various behaviours associated with chronic stress (e.g. fence pacing). Good nutrition is arguably the cornerstone of happy deer. Preventing other types of stress is easier than dealing with the effects of stress. See advice on managing nutritional stress risk. Nevertheless, if animals are showing signs of chronic stress, something is wrong within their environment and their welfare is most likely impaired. It is important to identify the source of the problem and remove it. It could be something as simple as the continual close presence of a perceived predator (e.g. the farm dog) that constantly disturbs them. Consider also their social environment: are individuals failing to cope with dominant or disruptive individuals? If so, consider mobbing disruptive individuals differently. The following topic is available in convenient DINZ Deer Fact sheets. Print off your own copies here >>
The term "worm" is commonly used to describe a wide range of invertebrates that are, in many cases, not closely related to one another. Some live in the soil, some in the sea, and some are parasites; some are beneficial to man, some are pests, and some can cause serious disease; the only thing they all have in common is a long, thin, flexible body. In most cases, they do not have limbs, but some insect larvae that do possess short legs are frequently described as worms. Among the animals that come under this rather ill-defined and unscientific category are earthworms, nematodes, flatworms, various insect larvae, and a number of marine invertebrates. There are around 2,700 different types of earthworm. As their name suggests, they live in the earth, and they are generally regarded as beneficial, as their movements mix the soil, keeping it well aerated and porous. Earthworms eat various types of dead organic material, such as fallen leaves and other plant parts, and excrete waste that helps supply living plants with nutrients. In some cases, however, they can be considered a pest, as they may remove leaf litter that is required by other, sometimes endangered, species. Earthworms generally live in burrows in the soil, which may be temporary or permanent. Some types rarely leave their burrows. In areas with cold winters, the animals stay warm by burrowing deep into the soil, coming back up to the surface in spring when the ground warms up. They move using tiny bristles along their sides, controlled by muscles, and breathe by absorbing oxygen directly through their moist skins. Although they have no eyes, they are sensitive to light and will avoid it. Some types of earthworm can grow to a considerable size. The types most commonly found in the USA, often called "nightcrawlers", typically grow to a little over one foot (30 centimeters) in length, but the largest North American species, the endangered Giant Palouse worm, can reach three feet (one meter). Much larger types are found in other parts of the world. The Giant Gippsland Earthworm from Australia grows up to nine feet (three meters) in length, and a 22 ft (6.7 m) specimen was reported in South Africa. There are just under 20,000 known species of nematode worms, but the true number may be much higher, as many types have not been studied closely, due to their usually small size and diverse habitats. They are extremely numerous, and are thought to be the most abundant animals on the planet — a small sample of soil will contain many thousands of them. The vast majority of species are very small, often less than 0.04 inches (1 mm) long, but a few are much longer — a 26 foot (8 m) specimen was reportedly found in a sperm whale. Huge numbers of nematodes are found in soil. Some are considered pests, as they eat plant roots, but some are predatory and may be beneficial to man by eating various invertebrate pests, including other nematodes. Many species are parasitic, and just about every animal species, including humans, can potentially harbor a parasitic nematode. Roundworms and hookworms, which can infect domestic pets and humans, are two common examples. Some other nematode infections, such as trichinosis, can be very serious. The flatworms include both predatory and parasitic species. The reason they are flat is that they have no circulatory system — oxygen and nutrients reach cells by diffusing through tissue, so the cells must all be near the surface to receive oxygen and near the gut to receive nutrients from food.
The gut may be branched, to enable distribution of nutrients to all tissues. Among the most studied types of non-parasitic flatworms are the planarians, which are best known for their ability to regenerate lost body parts. Planarians can be cut in half, or even into smaller pieces, and survive, with each part eventually growing into a new, complete animal. They are found in both fresh and salt water, and in damp soil. Many other types of flatworm are parasites. Among the best known are tapeworms, which live in the intestines of mammals, absorbing pre-digested food. Some types can grow to over 65 ft (20 m) long in land mammals, and whale tapeworms reaching 100 ft (30 m) have been reported. In humans, these parasites are usually picked up from undercooked meat. Liver flukes, which often affect sheep, are another type of parasitic flatworm. Wormlike Insect Larvae Many insects have larvae that are commonly described as worms. For example, inchworms — the caterpillars of geometrid moths — have three pairs of legs at the front of their bodies and two to three pairs at the back, and move with a looping motion. This, combined with the fact that many types grow to around an inch (2.54 cm) long, gives them their name: they look as if they are measuring out inches. There are around 1,200 species of geometrid moth in North America, and many more in other parts of the world. One interesting type of inchworm is called the cankerworm. It can produce a thin line made of silk, similar to a spider web. The threads are often produced when the caterpillar has to drop from a tree in order to evade a predator. Cankerworms come in a variety of colors, but they all have distinctive long horizontal stripes on their bodies. They are one of the most destructive pests to crops, and often feed on fruit trees. The polychaetes, or bristle worms, are the most commonly seen marine worms. They have segmented bodies with prominent bristles, and many species live in burrows in sand or mud at the seashore or in shallow coastal waters, although some species are found on the sea floor under deep water or among coral reefs. Bristle worms sometimes cement together sand or grit particles to construct tubes, which they live in. They are mostly predators, but some types may scavenge. Some types are very brightly colored, and a few are luminous.
The great horned owl is one of the largest species of true owl, and it lives across North and South America. It is sometimes called the tiger owl or the cat owl. The great horned owl has many interesting habits, so this article presents great horned owl facts that may help you understand this fascinating species.
A hearing sense that supports attacks on larger prey: Great horned owls prefer secondary-growth woodlands and also like to live in agricultural areas. They are more powerful than other common owls because of their aggressive nature. They usually settle in trees, cavities, and other structures, including human-made ones. The species was first described by a German naturalist in 1788, and its scientific name is Bubo virginianus. Great horned owls are also called fierce predators because they have the ability to take large prey such as mammals and reptiles, and they will even attack raptors. With the support of their hearing sense, they can easily locate and capture prey in the forest. They grow 18 to 25 inches long, with a wingspan of approximately 50 to 52 inches. Great horned owls also have an incredible digestive system: an owl can swallow large prey and later regurgitate pellets containing the bones and other indigestible parts of its meal.
Fun information about great horned owls: Great horned owls are nocturnal; they are active at night, particularly at dusk and before dawn. Did you know? Great horned owls have many nicknames, such as "flying bobcat" and "night tiger". Their thick feathers help them stay warm in the winter season. Great horned owls have fourteen neck vertebrae, so they can turn their heads around 270 degrees. They are among the few animals that will eat skunks, and the colouring of these owls depends on the area where they are found. In 2005, the oldest known great horned owl was found in Ohio, at approximately 28 years of age. The great horned owl facts include one more interesting point, about survival: these owls can attack large species, yet they can find it hard to survive their first year. Adult great horned owls have no natural predators, so most of the great horned owls admitted to rehabilitation centres have human-caused injuries.
Water vapour is a gaseous constituent of air and, like any of the atmospheric gases, exerts a pressure. Indeed, the sum of their individual pressures is the atmospheric pressure. There is a maximum amount of water vapour which air at a given temperature can hold. The vapour pressure then exerted is known as the saturation vapour pressure. The relative humidity of the air is the ratio of the actual vapour pressure to the saturation vapour pressure, the ratio being expressed as a percentage. Vapour pressure is calculated at synoptic weather stations at each hour for which observations are made. The average annual vapour pressure over Ireland and the mean annual and mean monthly vapour pressure at Valentia Observatory are displayed below.
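To make the percentage definition above concrete, here is a minimal Python sketch. The relative-humidity ratio is exactly as defined in the text; the Magnus-type formula for saturation vapour pressure is an assumption of this sketch and not necessarily the method used at the synoptic stations mentioned above.

```python
import math

def saturation_vapour_pressure_hpa(temp_c: float) -> float:
    # Magnus-type approximation (Bolton 1980 coefficients), valid for
    # ordinary meteorological temperatures; result in hectopascals.
    return 6.112 * math.exp(17.67 * temp_c / (temp_c + 243.5))

def relative_humidity_percent(vapour_pressure_hpa: float, temp_c: float) -> float:
    # Relative humidity: actual vapour pressure divided by the saturation
    # vapour pressure at the same temperature, expressed as a percentage.
    return 100.0 * vapour_pressure_hpa / saturation_vapour_pressure_hpa(temp_c)

# Example: air at 15 degrees Celsius with an actual vapour pressure of 10 hPa
print(round(relative_humidity_percent(10.0, 15.0), 1))  # about 58.7
```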
Activities on Leadership & Ethics An often-neglected portion of education is leadership and ethics training. It's not usually enough to simply tell students to exercise good leadership skills and to be ethical; rather, students learn more effectively by discussing and applying these principles through independent and group activities. This drives the message home more effectively and allows them to develop their own style of leadership and ethics rather than just taking on yours from a top-down point of view. Open a leadership and ethics session with a discussion. This is a helpful activity for involving and engaging the entire class. Start by asking students to define leadership and ethics and how the two terms are connected. This is important because it highlights the subjective nature of these terms. By allowing the class to come to a consensus on what the terms mean, you can use their definition to move forward in the activity and get the students thinking about how subjective the terms are, rather than just presenting the definitions yourself and moving on. Divide your students into groups. Have each group choose a leader and give them a task. The task should be clear, and there should be clear criteria for success. The task should also be fairly simple, taking five to ten minutes, so students can alternate as leader of the group. The groups will compete under their leaders to do the best job on the task at hand. By making it quick and giving everyone a chance to lead, you will allow the students to develop their own personal leadership style, which is the intangible key to leadership. Break your class into groups of three and give each group a stack of 3-by-5 index cards, each with an unethical situation written on it. For example, you may write, "You have found a bank error on your business's line of credit and you have been charged $10,000 less than you should have been." One person in the group is the persuader, trying to convince another person (the decider) to make the unethical choice. The third person is the observer, who watches how the persuader persuades and the decider decides. After a few minutes, have the groups draw another card and switch roles. This activity will teach the subjective nature of ethics, emphasizing through discussion that unethical behavior can be spun as ethical and vice versa.
Chapter 2. Numeric Representation High-level languages shield programmers from the pain of dealing with low-level numeric representation. Writing great code, however, requires a complete understanding of how computers represent numbers. Once you understand internal numeric representation, you’ll discover efficient ways to implement many algorithms and see the pitfalls associated with many common programming practices. Therefore, this chapter looks at numeric representation to ensure you completely understand what your computer languages and systems are doing with your data. 2.1 What Is a Number? Having taught assembly language programming for many ...
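A classic illustration of the pitfalls the chapter alludes to is that many decimal fractions have no exact binary representation. The snippet below is not from the book; it is a small Python aside added here for convenience.

```python
import math

# 0.1 and 0.2 cannot be represented exactly in binary floating point,
# so the rounding error surfaces in the sum:
print(0.1 + 0.2)          # 0.30000000000000004
print(0.1 + 0.2 == 0.3)   # False

# Comparing with an explicit tolerance is the usual workaround:
print(math.isclose(0.1 + 0.2, 0.3))  # True
```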
COVID-19 is a disease caused by the New Coronavirus, called SARS-CoV-2, which can cause anything from mild symptoms, similar to those of the common flu, to more severe symptoms that can result in the death of the infected individual. The main form of transmission of this disease is between people, through droplets of saliva and secretions that contain the virus, but it can also occur through contact with surfaces contaminated by the droplets. Given the form of contagion, the most prudent attitude is social isolation. However, this strategy is not possible for everyone, since, for industries to work fully, it is often impossible for all employees to perform their work from home. Thus, while some can work from home, in social isolation, others need to work in person, exposing themselves to the risks of contamination. So, even for people who continue to work normally during the pandemic, it is essential to adopt three main preventive measures: access control, work environment disinfection, and social distancing. All three measures can be facilitated by technological resources. Access control means limiting or denying access to the industrial work environment for workers who have any symptoms of COVID-19 or are not wearing facemasks. Currently, the only symptom that can be screened electronically is body temperature, which, to prevent the spread of the disease, must be measured without contact. Two technologies are used for contactless temperature measurement: infrared thermometers and thermal imaging cameras. Both technologies measure the infrared radiation emitted by the body, which is proportional to its temperature. With a thermometer, the measurement is made at a specific point on the body. With a camera, the measurement is made at several points, forming an image with a heat map of the measured area. According to ISO/TR 13154:2017, a standard that provides guidelines for contactless body temperature measurement and that regulates the use of electrical medical equipment, the measurement can be done on the larger areas of the head, usually the forehead. An elevated body temperature (greater than 37 °C) may indicate the person has a fever, one of the symptoms of COVID-19. Facemask checking, in turn, can be automated using machine-learning image recognition, in which an algorithm learns to recognize patterns over time. This type of solution can be implemented with ready-made functions from popular frameworks, such as OpenCV with TensorFlow. It requires initial training of the recognition algorithms for correct detection, and open data sets are already available for this task. These two access control methods can be combined into a single system, with an infrared temperature sensor or thermal camera plus a common camera for facemask recognition. Equipment with these functions already exists and is being tested in some industrial environments for access control at entrance turnstiles, with daily measurements of all employees. A minimal sketch of the combined check appears below.
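The Python sketch below shows only the decision logic such a turnstile system might apply once a temperature reading and a mask-detection result are available. The function name and the 37 °C cut-off are illustrative assumptions for this sketch, not a real product's API.

```python
FEVER_THRESHOLD_C = 37.0  # the threshold cited above; an assumed cut-off

def grant_access(forehead_temp_c: float, mask_detected: bool) -> bool:
    # Combine the two checks described in the text: a contactless
    # temperature reading and a facemask-recognition result.
    # Both must pass for the turnstile to open.
    if forehead_temp_c > FEVER_THRESHOLD_C:
        return False  # possible fever: deny entry
    if not mask_detected:
        return False  # no facemask detected: deny entry
    return True

print(grant_access(36.6, True))   # True: normal temperature, wearing a mask
print(grant_access(37.8, True))   # False: elevated temperature
print(grant_access(36.6, False))  # False: no facemask
```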
Disinfection of the working environment
Industrial area cleaning before the pandemic was limited to keeping the environment clean. Chemical disinfection with recommended cleaners (disinfectants) is expensive and time-consuming, as it requires more employees allocated daily to the task. Added to this, items that arrive for stock, such as raw materials, also need to be disinfected. Autonomous cleaning robots are already being developed, tested, and used in these cases. A robot can cover an industrial area daily, at more convenient times, such as overnight. Robots are already commonly used to clean and disinfect factory floors, since the paths they must follow are predefined and do not vary. Storage areas, or areas with too many machines, are a problem in turn, since they do not offer space for robots to pass or have surfaces too high to be reached. Another alternative being tested is the use of ultraviolet light, which can damage the genetic material of the virus and inactivate it. Because of the danger of eye and skin contact with these radiation sources, ultraviolet light should be applied without human intervention. There are already cases where autonomous robots apply this light for disinfection during shifts when no people are present.
Social distancing
Even after access control has been applied and disinfection has been performed, it is still necessary for employees to maintain a distance of one meter, as recommended by the WHO. To tackle this problem, there are several auxiliary technologies for measuring distance and warning when distancing is compromised. Among these technologies, the one that has stood out is UWB (Ultra-Wide Band), in which devices — a tag or bracelet — emit radio signals across a large frequency range to other devices. By measuring the time between messages, it is possible to determine the distances between employees and issue sound and light alerts to move apart if they approach too closely (a small ranging sketch follows this article). This technology allows an accuracy of 10 cm, better than solutions based on Bluetooth, which suffer from inaccuracy problems indoors — the Bluetooth signal is attenuated by walls, equipment, and obstacles. Anatel (the Brazilian National Telecommunications Agency) has already regulated the working frequency ranges for the use of this new technology in Brazil, in Resolution 680 of June 27, 2017. Even with the great challenge of the pandemic, several lines of research and product development are being explored so that industry can continue to function while ensuring the safety of its workers. Access control with facemask recognition and body temperature measurement helps prevent the spread of the disease before it can enter factories. Automated robot disinfection ensures that employees have a safe environment to work in. Finally, measuring the distance between employees can control the spread of the virus in assembly lines, inventories, and other areas of companies. These solutions are direct responses to the new needs brought by the pandemic of the New Coronavirus. Thus, with the use of these technologies, it is possible to create safer environments for employees who need to work outside their homes. Virus contamination can be contained and activities can be safely resumed.
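Here is the ranging sketch promised above. It shows, in Python, the two-way time-of-flight arithmetic that UWB tags typically rely on; the names and the one-meter alert threshold are illustrative assumptions, not a specific vendor's protocol.

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0  # radio waves travel at the speed of light

def uwb_distance_m(round_trip_s: float, reply_delay_s: float) -> float:
    # Two-way ranging: device A timestamps its message and device B's
    # reply. Subtracting B's known processing delay and halving gives
    # the one-way time of flight, which converts directly to distance.
    time_of_flight_s = (round_trip_s - reply_delay_s) / 2.0
    return SPEED_OF_LIGHT_M_S * time_of_flight_s

def too_close(distance_m: float, limit_m: float = 1.0) -> bool:
    # Alert when workers are under the one-meter distance recommended
    # by the WHO, as cited above.
    return distance_m < limit_m

# A 20 ns flight time buried in a 1 microsecond reply delay -> about 3 m:
d = uwb_distance_m(round_trip_s=1.0e-6 + 2.0e-8, reply_delay_s=1.0e-6)
print(f"distance: {d:.2f} m, alert: {too_close(d)}")  # distance: 3.00 m, alert: False
```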
By Francesco de Gasperin and Timothy Shimwell Supermassive black holes can leave a trail of energetic particles that astronomers are able to detect using radio telescopes. Usually the radio emission from these particles fades away and becomes invisible as it ages. However, in the merging galaxy cluster Abell 1033, LOFAR discovered that some of these particles can be rejuvenated and start shining again when observed at very low radio frequencies. This re-energising process can occur because when clusters merge a huge amount of energy is dissipated — these merging events are the most energetic processes since the Big Bang. With the help of LOFAR, astronomers want to study these particles to learn how galaxy clusters evolve in the Universe and how their evolution is influenced by magnetic fields and accretion.
CUPE celebrates Black History Month February 1 marks the beginning of Black History Month (also known as African Heritage Month). Black History Month is a time for all Canadians to reflect on and educate ourselves about the history of enslavement, discrimination, bigotry and criminalization of people of African descent. It is also a time to celebrate and highlight the best of Black history and culture and to honour the historic leaders of Black communities, their accomplishments and their enduring fight for freedom. Canadian society has seen a lot of progress over the decades, but the realities of differential treatment towards African Canadians continue. At the global level, the United Nations Working Group of Experts on People of African Descent is educating people on our world history of enslavement, discrimination, bigotry and criminalization. In 2016, this UN body visited Canada and wrote a detailed report on what they learned. They called for a number of actions, including one for the federal government to "issue an apology and consider providing reparations to African Canadians for enslavement and historical injustices." Last March, the federal government announced that it is developing a much-needed anti-racism strategy for Canada. CUPE participated in the government's consultations and we will watch its development closely. Our union is committed to fighting racism and hatred in all its forms and to empowering our members to speak out and act against discrimination. We encourage members to celebrate Black History Month and to keep fighting anti-Black racism in their locals, workplaces, schools and communities. Here are some ways to increase awareness, understanding and change:
- Check out CUPE's landing page on Black History Month to view activities and learning resources
- Order free copies of CUPE's newly designed Black History Month bookmark from cupe.ca
- Invite a Black activist or community organization to speak to your members
- Contact Union Education and ask for CUPE workshops on Challenging Racism, Human Rights and Anti-Oppression to be delivered to members in your region
- Celebrate and promote Black History Month within your local
- Lobby your government for the implementation of legislation that addresses anti-Black racism in your region, including employment equity legislation
- Support community organizations and movements such as Black Lives Matter and other community organizations that fight against systemic racism and violence
- Visit blacklivesmatter.ca and follow #BlackLivesMatter on Twitter
- Attend Black History Month events in your local communities to celebrate, learn and network
- Bargain employment equity language into your collective agreement to help ensure that your workplace represents the diversity of your community
Learn more at cupe.ca/black-history-month Source: CUPE National
COMPACT DISC MANUFACTURING PROCESS
A Graphic Explanation of the Manufacturing Steps Used to Create Compact Discs
If you already know this and want to just get on with the ordering, click here for a quote or here for order forms.
How the Process Begins
How CDs are Reproduced
1. Glass Master From the customer's data, the CD glass master is produced. An optically ground glass disc is coated with a 1/10th-micron-thick layer of photoresist, which is then exposed by a laser. The laser "writes" or exposes a pattern of pits on this thin layer, transferring the information from the master image. The disc is developed (the exposed parts are etched away) and then silvered, resulting in the actual pit structure of the finished disc.
2. Father The master is then electroplated with nickel which, when separated from the master, forms a metal negative or "father". The father could be used to replicate CDs but would wear out too soon.
3. Mother Instead, several "mothers" (or positives) are made by plating onto the father.
4. Stamper In a third plating stage, each mother is used to create a number of stampers, which are actually used to mold the pit structure onto the CDs.
5. Clear Disc Compact discs are made similarly to conventional records, using injection molding techniques and a stamper.
6. CD-ROM Disc The information surface is coated with a micron-thick layer of aluminum to provide a reflective surface. This is the surface which is actually read by a CD player. The reflective surface is then protected with a lacquer coating. The disc label is then printed directly onto the disc.
© Cinram, used with permission. Copyright © 2002-03 CD Solutions Inc., All Rights Reserved.
There are several good handbooks on grammar and mechanics available. A reasonable online handbook (free of charge) can be found at the Grammarly blog. It provides basic information on grammar, mechanics, sentence structure, and usage. Dartmouth College’s Program in Rhetoric and Writing has identified the twenty most common errors, listed below according to the frequency with which they appear. Fuller descriptions of these and other grammatical errors can be found in Jack Lynch’s Guide to Grammar and Style. Twenty Most Common Errors 1. Missing comma after introductory phrase For example: After the devastation of the siege of Leningrad the Soviets were left with the task of rebuilding their population as well as their city. (A comma should be placed after “Leningrad”.) 2. Vague pronoun reference For example: The boy and his father knew that he was in trouble. (Who is in trouble? The boy? His father? Some other person?) 3. Missing comma in compound sentence For example: Wordsworth spent a good deal of time in the Lake District with his sister Dorothy and the two of them were rarely apart. (Comma should be placed before the “and”.) 4. Wrong word This speaks for itself, but writers may fail to distinguish between “there” and “their” or may say that someone “imagines” something, when they meant to say that someone “envisions” something. 5. No comma in nonrestrictive clauses Here you need to distinguish between a restrictive relative clause and a nonrestrictive relative clause. Consider the sentence, “My brother in the red shirt likes ice cream.” If you have two brothers, then the information about the shirt is restrictive, in that it is necessary to defining which brother likes ice cream. Because they are essential to identifying the noun, restrictive clauses use no commas. However, if you have one brother, then the information about the shirt is not necessary to identifying your brother. It is non-restrictive and, therefore, requires commas: “My brother, in the red shirt, likes ice cream.” 6. Wrong/missing inflected ends “Inflected ends” refers to a category of grammatical errors that you might know individually by other names—subject-verb agreement, who/whom confusion, and so on. The term “inflected ends” refers to something you already understand: adding a letter or syllable to the end of a word changes its grammatical function in the sentence. For example, adding “ed” to a verb shifts that verb from present to past tense. Adding an “s” to a noun makes that noun plural. A common mistake involving wrong or missing inflected ends is in the usage of who/whom. “Who” is a pronoun with a subjective case; “whom” is a pronoun with an objective case. We say, “Who is the speaker of the day?” because “who” in this case refers to the subject of the sentence. But we say, “To whom am I speaking?” because, here, the pronoun is an object of the preposition “to”. 7. Wrong/missing preposition Occasionally prepositions will throw you. Consider, for example, which is better: “different from,” or “different than?” Though both are used widely, “different from” is considered grammatically correct. The same debate surrounds the words “toward” and “towards.” Though both are used, “toward” is preferred in writing. When in doubt, check a handbook. 8. Comma splice A comma splice occurs when two independent clauses are joined only with a comma. 
For example: "Picasso was profoundly affected by the war in Spain, it led to the painting of great masterpieces like Guernica." A comma should also not be used to divide a subject from its verb. For example: "The young Picasso felt stifled in art school in Spain, and wanted to leave." (The subject "Picasso" is separated from one of its verbs, "wanted." There should be no comma in this sentence.) 9. Possessive apostrophe error Sometimes apostrophes are incorrectly left out; other times, they are incorrectly put in (her's, their's, etc.) 10. Tense shift Be careful to stay in a consistent tense. Too often students move from past to present tense without good reason. (Note that analyses of texts and other cultural products are kept in present tense: "Faulkner offers us clues to his characters' motives.") 11. Unnecessary shift in person Don't shift from "I" to "we" or from "one" to "you" unless you have a rationale for doing so. 12. Sentence fragment Silly things, to be avoided. Unless, like here, you are using them to achieve a certain effect. Remember: sentences traditionally have subjects and verbs. Don't violate this convention carelessly. 13. Wrong tense or verb form Though students generally understand how to build tenses, sometimes they use the wrong tense, saying, for example, "In the evenings, I like to lay on the couch and watch TV." "Lay" in this instance is the past tense of the verb "to lie." The sentence should read: "In the evenings, I like to lie on the couch and watch TV." (Note that "to lay" is a separate verb meaning "to place in a certain position.") 14. Subject-verb agreement This gets tricky when you are using collective nouns or pronouns and you think of them as plural nouns: "The committee wants [not want] a resolution to the problem." Mistakes like this also occur when your verb is far from your subject. For example, "The media, who has all the power in this nation and abuses it consistently, uses its influence for ill more often than good." (Note that media is an "it," not a "they." The verbs are chosen accordingly.) 15. Missing comma in a series Whenever you list things, use a comma. You'll find a difference of opinion as to whether the next-to-last noun (the noun before the "and") requires a comma. This is called the Oxford comma ("Apples, oranges, pears, and bananas..."). Our advice is to use the comma because sometimes your list includes pairs of things: "For Christmas, she wanted books and tapes, peace and love, and for all the world to be happy." If you are in the habit of using a comma before the "and," you'll avoid confusion in sentences like this one. 16. Pronoun agreement error Many students have a problem with pronoun agreement. They will write a sentence like "Everyone is entitled to their opinion." The problem is, "everyone" is a singular pronoun. You will have to use "his" or "her" or "his/her." 17. Unnecessary commas with restrictive clauses See the explanation for number five, above. 18. Run-on, fused sentence Run-on sentences are sentences that run on forever, they are sentences that ought to have been two or even three sentences but the writer didn't stop to sort them out, leaving the reader feeling exhausted by the sentence's end which is too long in coming. (You get the picture.) Fused sentences occur when two independent clauses are put together without a comma, semi-colon, or conjunction. For example: "Researchers investigated several possible vaccines for the virus then settled on one." 19.
Dangling, misplaced modifier Modifiers are any adjectives, adverbs, phrases, or clauses that a writer uses to elaborate on something. Modifiers, when used wisely, enhance your writing. But if they are not well-considered—or if they are put in the wrong places in your sentences—the results can be less than eloquent. Consider, for example, this sentence: "The professor wrote a paper on sexual harassment in his office." Is the sexual harassment going on in the professor's office? Or is his office the place where the professor is writing? One hopes that the latter is true. If it is, then the original sentence contains a misplaced modifier and should be re-written accordingly: "In his office, the professor wrote a paper on sexual harassment." Always put your modifiers next to the nouns they modify. 20. It/its error "Its" is a possessive pronoun. "It's" is a contraction for "it is." To learn several strategies for proofreading, please watch the short video about proofreading produced by The Writing Center at the University of North Carolina at Chapel Hill.
For most of their prehistory, humans were highly mobile hunter-gatherers. We can expect that Neandertals were also highly mobile, at least compared to sedentary post-agricultural human populations. Great apes are our closest living relatives, but they live in tropical forests – a pretty different environment than the Neandertals'. There are constraints on ape mobility, including difficulty of locomotion, habitat complexity, and extreme territoriality, that might not have constrained ancient humans, including Neandertals. We might then consider the population structure of other highly mobile large mammals. Brown bears have been sympatric with humans in Europe since the Middle Pleistocene. Bear ecology has similarities to and differences from Neandertal ecology – bears were omnivores accentuating meat consumption to a similar extent, but did not live in groups. Like Neandertals they may have exploited edges between habitat types, although brown bears are effective in open country as well. For bears, like other European mammals, one of the most important questions is what happened to their population during the Last Glacial Maximum (LGM). The LGM was only around 18,000 years ago, so it's not an issue for Neandertals, who were long gone by that time. But because the LGM is relatively recent, we have a relatively large representation of bear mitochondrial genetics spanning that time interval. So it gives us a chance to look at the relationship of population structure and genetic diversity in a large, mobile, European mammal. The bear comparison also lets us consider the effects of a smaller sample on our conclusions about ancient population structure and dynamics. Brown bears are very common in archaeological and subfossil paleontological faunal lists. During the LGM, brown bears are known from northern Spain and Moldova. Evidence from today's bears suggests the occupation of at least four refugia (Sommer and Benecke 2005) – basically Iberia, Italy, the Balkans and the Carpathians. These four areas can be expected to have housed substantial diversity during the LGM. The subsequent recolonization of northern Europe may be the largest factor organizing the present pattern of genetic variability, with the differential expansion of lineages through space. Interspecific patterns of recolonization from refugia Taberlet and colleagues (1998) collated phylogeographic evidence from 10 European species, ranging from plants to large mammals and including brown bears, to trace the likely pathways of postglacial recolonization of Europe. They found evidence for the importance of three refugia – basically Iberia, Italy, and the Balkans. But most interesting, they found that each of their 10 species showed different patterns of postglacial expansion dynamics. It seems that each taxon has responded independently to Quaternary cold periods, and therefore is largely a unique case with its own history. For example, if we compare lineages present in Italy and in the Iberic peninsula, they are closely related in Ursus (less than 1% of sequence divergence in the cytochrome b gene) but much more distantly related in Crocidura (6.4%), in Arvicola (7.6%) and in Triturus (8.5%), while the Sorex species considered here exhibit two lineages in each of these two refugia. Populations occurring in France come either from a refugium in the Iberic peninsula (e.g., Arvicola sapidus, Triturus marmoratus), or from a refugium in the Balkans (e.g., Chorthippus parallelus, Fagus sylvaticus).
...[T]he results obtained in Europe and North America (Zink 1996) suggest that congruence is the exception at the continental scale. The consequence of an independent history for each taxon is that assemblages of plants and animals comprising particular communities are not stable over time, an observation consistent with previous findings based mainly on fossil pollen data (Bennett 1990) (Taberlet et al. 1998:459). Before going on to cite their conclusion, I want to note one possibility that they don't consider – namely, that the species have similar dynamics of range constriction and expansion but that the mtDNA evidence represents these dynamics with substantial variance. One aspect of that study stands out as interesting when applied to the Neandertals. Although the species did not share any single pattern of expansion from refugia, one aspect was shared: species did not expand from Italy. The authors speculated that the Alps are an effective barrier to rapid recolonization of northern Europe from Italian refugia, and indeed most northern populations were recolonized either from Iberia or from the Balkans, or both. Thus Italy today contains many endemic lineages that were stuck in Italy during the LGM or other contractions, and never left. The possibility of an Italian-Croatian population of Neandertals was raised by Fabre and colleagues (2009). Was recolonization from this population possible during warmer phases of the Pleniglacial? If not, this population of Neandertals may have been exceptionally variable – containing many long-standing endemic variants compared to other Neandertal populations. It may also have been substantially divergent from those other populations. Since Vindija is the most important source of the Neandertal genome, it's an important aspect of biogeography to try to understand. Recolonization by brown bears So much for the general pattern of recolonization. Now back to brown bears. Sommer and Benecke (2005:161) considered further the present population of European brown bears and likely refugia in southern Europe. They returned to the genetic data developed in earlier studies by Taberlet and colleagues to conclude: It is possible to detect three different glacial refugia from their data: (i) the Iberian Peninsula (Spain), (ii) the Italian Peninsula and (iii) the Balkans (Bulgaria/Greece). Furthermore, the investigation into the mitochondrial DNA of brown bears in Europe (Taberlet & Bouvet, 1994) shows four main points: 1. The individuals of southern Scandinavia originated from the Iberian Peninsula are closely related to the individuals from the Balkans and the Italian Peninsula, and form a 'western lineage'. 2. The bears of northern and eastern Scandinavia, from the Baltic States, from north-western Russia and the Carpathians differ with a sequence divergence of 7.13% from those individuals in the western lineage (Fig. 5). 3. Based on their genetic similarity, the brown bears from northern and eastern Scandinavia, the Baltic States, and north-western Russia are designated as 'eastern lineage' and a glacial refuge in eastern Europe is assumed to be the origin of this genotype (Hewitt, 1999). 4. Within the mitochondrial DNA of brown bears from the Carpathians, three different genotypes can be identified, whereas the genotype of bears from north of the Carpathians (Slovakia) is distributed throughout bears from Norway, Finland, the Baltic States and north-western Russia (Fig. 5).
They used these observations to argue for a refuge in the Carpathians during the LGM, which seems eminently reasonable based on their observations. They did not point out (but I will add) that the expansion from an Iberian refugium toward Scandinavia mirrors the pattern of expansion of Magdalenian assemblages after the LGM. The recent literature has described this as a slow and tentative process of expansion (e.g., Jochim et al. 1999), but it was nonetheless as fast or faster than accomplished by small mammals, and may have mirrored the movements of the Magdalenians' large mammal prey animals. That human movement may also explain the distribution of mtDNA haplogroup H in Europe, which Pereira and colleagues (2005) attributed to a post-glacial recolonization from Iberia northeastward. This is not a new idea; Cavalli-Sforza wrote about this direction of postglacial migration some 30 years ago. Later, Sommer and Nadachowski (2006) extended the map of refugia to take in more species, using faunal records from LGM archaeological sites. The map below helps to put these observations into context: The possible ranges of human occupation and mammal refugia seem very extensive across southern Europe but are not necessarily contiguous. For example, the Alps form a partial barrier around the northern part of Italy, and the Pannonian Basin might be partially cut off from Italy/Dalmatia as well. But it's not hard to imagine a large mammal like a bear (or a human) traversing the distances between such refugia, or walking along corridors between them such as the coasts. More samples, more complexity We have to remember that the interpretation of semi-isolated refugia has been based on the pattern of genetic variation in living species in Europe. But geographic differentiation need not only have occurred because populations were once fragmented during glacials. Differentiation may also be a product of range expansion, selection, or later interaction with other species, including humans. Today's differentiation is not necessarily a trace of refugia in the past. So it becomes important to test the hypothesis of semi-isolated refugia, by looking at the variation of ancient DNA sequences. Last year, a study by Valdiosera and colleagues did exactly that – looking at new sequence data from a larger set of brown bear subfossil remains from Iberia. Here's a paragraph from the discussion of that paper: Under traditional glacial refugia hypotheses (4, 17), the extant brown bear phylogeographic structure derives from ancestral glacial refugia: the western lineage originating from Iberia, Italy, and the Balkans, and the eastern lineage possibly derived from a Carpathian refugium (14, 16). In contrast to such a strict refugial model, but in concordance with a continuous European prehistoric population, we have identified a sequence from a Pleistocene Iberian brown bear from Arlanpe site (the Basque country) that belongs to the eastern clade. In our analyses, such a phylogenetic assignment is supported by maximal posterior probabilities (Fig. 1 A). This pattern is further supported by three Pleistocene brown bear sequences from Valdegoba (northern Spain), which cluster with a previously published sequence from Atapuerca (northern Spain) and with several sequences from modern Italian and Balkan bears. Furthermore, AMOVAs suggest little geographic substructure among Spanish and European Pleistocene populations. These new data confirm the lack of phylogeographic discontinuity in European brown bears before the LGM (23).
Although Spanish and European Holocene populations appear geographically differentiated in our AMOVAs, a recent study has suggested that gene flow could have continued from the Pleistocene to the Holocene (20). An Iberian brown bear, dated to the time of the LGM from the site of Atapuerca in Burgos in the north of Spain, was more closely related to Italian/Balkan bears than to the Iberian ones. Moreover, during the Holocene in Mont Ventoux (southern France) three mitochondrial groups are found between 1,570 to 6,525 years B.P.: one belonging to the Iberian group, another one to the Italian/Balkan one, and yet a third one not associated with any of the three main glacial refugia (20). Note, however, that support for the Spanish and the Italian/Balkan clades are low in our tree. In this study, we have found three different individuals from Valdegoba, a Late Pleistocene site also in Burgos, that group together with the sample from Atapuerca (Valdiosera et al. 2008: emphasis added). I think this study is so interesting because of the way it shows the influence of sample size on the phylogeographic interpretation. Consider how the conclusions of the study would have been different if the sample had been smaller. The authors found one Iberian Pleistocene bear that belonged to a clade otherwise comprising bears from Austria, Germany and Russia. This one bear is their clearest indication of ancient movement between plausible refugia. Had they not found a sequence in this bear, the evidence favoring two distinct refugia would have been much stronger. Likewise, their sample includes three bears from Pleistocene France that belong to a clade of their own. This diversity no longer exists among today’s bears – at least, not the ones sampled up to now. If this region of France happened not to have produced bear remains, we would not have any evidence of this divergent clade at all. Again, the record would suggest that present bears derived from two largely isolated refugia. As it is, either another French refugium existed or the Pleistocene Iberian population harbored more diversity than present bears of Iberia. That last element, a reduction of diversity over time, is also suggested by the pattern of variation between Pleistocene and Holocene bear remains. It has a lesson for the interpretation of human variation – some human mtDNA haplogroups have reduced in frequency in recent Europeans, others have apparently increased. In the case of humans, we may be looking at selection. Brown bears have had a smaller effective size than humans during the last 10,000 years, so we might be looking at an actual reduction in numbers or geographic range. In any event, with the bears every additional sample carries information about ancient population structure. We can expect that the addition of more Neandertal mtDNA samples will likewise add information about Neandertal population structure. The addition of samples is more likely to confuse a simple story than to confirm it, although either is possible. Valdiosera and colleagues conclude that brown bears were actually highly mobile during the LGM, moving easily across a range that, although limited compared to earlier and later time periods, extended from east to west across southern Europe. It’s hard to believe that Neandertals weren’t capable of similar movement. On the other hand, chimpanzees are likely capable of long-distance movement but still have substantial population differentiation. 
This may be because intervening groups prevent individuals from moving long distances. So the dispersal character of Neandertal populations may have depended upon their social dynamics, an aspect of behavior that we are poorly situated to test. Pereira L and 12 others. 2005. High-resolution mtDNA evidence for the resettlement of Europe from an Iberian refugium. Genome Res 15:19-24. doi:10.1101/gr.3182305 Sommer RS, Benecke N. 2005. The recolonization of Europe by brown bears Ursus arctos Linnaeus, 1758 after the Last Glacial Maximum. Mammal Rev 35:156-164. doi:10.1111/j.1365-2907.2005.00063.x Sommer RS, Nadachowski A. 2006. Glacial refugia of mammals in Europe: evidence from fossil records. Mammal Rev 36:251-265. doi:10.1111/j.1365-2907.2006.00093.x Taberlet P, Fumagalli L, Wust-Saucy A-G, Cosson J-F. 1998. Comparative phylogeography and postglacial colonization routes in Europe. Mol Ecol 7:453-464. doi:10.1046/j.1365-294x.1998.00289.x Valdiosera CE and 10 others. 2008. Surprising migration and population size dynamics in ancient Iberian brown bears (Ursus arctos). Proc Nat Acad Sci USA 105:5123-5128. doi:10.1073/pnas.0712223105
Recent major bleaching events in the Great Barrier Reef — the largest living structure on the planet — have dramatically compromised the recruitment of new corals. According to researchers, the number of juvenile corals that settled in the reef was 89% lower in 2018 than the historical average. A bleak future Australia's Great Barrier Reef has been hampered by four mass coral bleaching events since 1998, the most recent one lasting from June 2014 to May 2017. This was the longest, most damaging coral bleaching event on record, killing 30% of the reef. An estimated half billion people around the world directly depend on reefs for income from fishing and tourism. Economic activity derived from the Great Barrier Reef alone is thought to be worth $4.5 billion annually. Bleaching occurs when the ocean's waters become too warm and expel the photosynthetic algae, called zooxanthellae, which live in a symbiotic relationship with the coral. Without the algae, the coral dies and seaweeds take over. The main culprit is man-made climate change, which warms and increases the acidity of the waters. Although some think the effects of climate change are hazy and yet to rear their head, they have actually been affecting the reef for at least 20 years. A 2018 study found that the number of ocean heatwaves has risen by more than 50% since 1925, threatening to collapse marine ecosystems all over the world, coral reefs being no exception. Scientists believe that under normal conditions, the coral would need 10 years to bounce back. But a new study led by researchers at the ARC Centre of Excellence for Coral Reef Studies suggests conditions are anything but normal. The rate of new coral recruitment is abysmally low. Researchers measured how many adult corals along the reef had survived following the mass bleaching events, as well as the number of new corals that had been produced in 2018. Compared to 1990 levels, a period when there were no bleaching events, there was an average 90% decline in coral recruitment across the whole length of the Great Barrier Reef. Typically, when one reef is destroyed, it can be replenished by babies from another reef. However, the 2016 and 2017 bleaching was so severe that in many parts of the reef there were no longer any adjacent reefs to provide offspring. Not only does the Great Barrier Reef's future hang by a thread, but what remains of it is also morphing dramatically. Some corals are more resilient than others, which means that they now breed more, altering the coral composition. For instance, the hardest hit genus is Acropora, which saw a 93% decline. Coral reefs are complex ecosystems, so when a coral species disappears, so does the habitat for countless other species of marine wildlife. "The collapse in stock–recruitment relationships indicates that the low resistance of adult brood stocks to repeated episodes of coral bleaching is inexorably tied to an impaired capacity for recovery, which highlights the multifaceted processes that underlie the global decline of coral reefs. The extent to which the Great Barrier Reef will be able to recover from the collapse in stock–recruitment relationships remains uncertain, given the projected increased frequency of extreme climate events over the next two decades," the authors wrote in their study. If current trends continue unabated, coral bleaching might affect 99% of the world's reefs within this century, the United Nations warns. Previously, the U.N.
Intergovernmental Panel on Climate Change warned that tropical reefs could decline by 70% to 90% if the planet warms by 1.5ºC compared to preindustrial average temperatures — the upper limit set by the Paris Agreement. At 2ºC of warming, 99% of the world's reefs could perish. "Going to 2C and above gets to a point where corals can no longer grow back, or you have annual bleaching events. On the other hand, at 1.5C there's still significant areas which are not heating up or not exposed to the same levels of stress such that they would lose coral, and so we're fairly confident that we would have parts of those ecosystems remaining," said Professor Ove Hoegh-Guldberg, a coral reef expert with the University of Queensland. Last year, Australian scientists bred baby corals in an artificial environment and later moved them to some of the most damaged parts of the reef. Eight months later, the juvenile corals had survived and grown, lending hope that coral transplants can restore similarly damaged ecosystems, not just in the Great Barrier Reef but around the world as well. However, this is just patchwork. The only viable long-term solution is cutting global greenhouse emissions. But even if we manage to avert 1.5ºC of warming, the Great Barrier Reef will never be the same. The findings appeared in the journal Nature.
YOU may not have heard of Abū 'Alī al-Ḥasan ibn al-Haytham. In the West he is known as Alhazen, a Latinized form of his Arabic first name, al-Ḥasan. In all likelihood, though, you benefit from his lifework. He has been described as "one of the most important and influential figures in the history of science." Alhazen was born in Basra, now in Iraq, about 965 C.E. His interests included astronomy, chemistry, mathematics, medicine, music, optics, physics, and poetry. What in particular do we have to thank him for? A DAM ON THE NILE A story about Alhazen has circulated for a long time. It concerns his plan to regulate the flow of the Nile River almost 1,000 years before the project was actually carried out at Aswân in 1902. As the story goes, Alhazen laid out ambitious plans to alleviate the cycle of floods and droughts in Egypt by damming the Nile. When Cairo's ruler, Caliph al-Hakim, heard of the idea, he invited Alhazen to Egypt to build the dam. Yet, on seeing the river with his own eyes, Alhazen knew that the project was beyond him. Fearing punishment from this notoriously unstable ruler, Alhazen pretended to be insane until the caliph died some 11 years later, in 1021. In the meantime, Alhazen had plenty of leisure time to pursue other interests while confined for his feigned mental illness. THE BOOK OF OPTICS By the time of his release, Alhazen had written most of his seven-volume Book of Optics, considered to be "one of the most important books in the history of physics." In it he discussed experiments into the nature of light, including how light splits into its constituent colors, reflects off mirrors, and bends when passing from one medium into another. He also studied visual perception and the anatomy and mechanics of the eye. By the 13th century, Alhazen's work had been translated from Arabic into Latin, and for centuries thereafter, European scholars cited it as an authority. Alhazen's writings on the properties of lenses thus laid essential groundwork for European eyeglass makers who, by holding lenses one in front of another, invented the telescope and the microscope. THE CAMERA OBSCURA Alhazen identified the principles that underpin photography when he built what could amount to the first camera obscura on record. This enclosure consisted of a "dark room" into which light entered through a pinhole-size aperture, projecting an inverted image of what lay outside onto a wall inside the chamber. In the 1800s, photographic plates were added to the camera obscura to capture images permanently. The result? The camera. All modern cameras—and indeed the eye itself—use the same physical principles as the camera obscura.* THE SCIENTIFIC METHOD An outstanding aspect of Alhazen's work was his meticulous and systematic research into natural phenomena. His approach was most unusual for his day. He was one of the first investigators to test theories by experimentation, and he was not afraid to question accepted wisdom if the evidence did not back it up. A tenet of modern science can be summed up by the dictum: "Prove what you believe!" Some consider Alhazen to be "the father of the modern scientific method." On that basis, we have much to thank him for. * The similarity between the camera obscura and the eye was not well understood in the West until it was explained by Johannes Kepler in the 17th century.
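As an editorial aside not drawn from Alhazen's text: the projection in a camera obscura follows from similar triangles, with the image scaled by the ratio of the chamber's depth to the object's distance. A minimal Python sketch with illustrative numbers:

```python
def pinhole_image_height_m(object_height_m: float,
                           object_distance_m: float,
                           chamber_depth_m: float) -> float:
    # Similar triangles through the pinhole: the inverted image is
    # scaled by (chamber depth) / (object distance).
    return object_height_m * chamber_depth_m / object_distance_m

# A 10 m tree standing 50 m from the pinhole, projected in a 2 m deep room:
print(pinhole_image_height_m(10.0, 50.0, 2.0))  # 0.4 (meters, inverted)
```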
SOURCE: New York Times DATE: September 3, 2016 AUTHOR: Justin Gillis SNIP: The research, likely to take years, may supply a figure for how quickly the ocean was able to rise under past conditions, but not necessarily a maximum rate for the coming decades. The release of greenhouse gases from human activity is causing the planet to warm rapidly, perhaps faster than at any other time in the Earth’s history. The ice sheets in both Greenland and West Antarctica are beginning to melt into the sea at an accelerating pace. Scientists had long hoped that any disintegration of the ice sheets would take thousands of years, but recent research suggests the breakup of West Antarctica could occur much faster. In the worst-case scenario, this research suggests, the rate of sea-level rise could reach a foot per decade by the 22nd century, about 10 times faster than today.
Expert Advice What Is ODD? Q: From the time he was a toddler, my son has been a discipline problem. He has been thrown out of four day cares and two private homes because they could not handle him. He's seven years old now. I've been called to the school numerous times because of his behavior. The principal suggested I have him evaluated for ODD (Oppositional Defiant Disorder) and ADD (Attention Deficit Disorder). I am familiar with ADD, but not ODD. Can you tell me about it? A: Oppositional Defiant Disorder is a syndrome in which children have hostile, negative, and defiant behaviors. While all children can have some of these behaviors at various times during their early years, they generally develop self-control and restraint, accepting the normal rules of society that eliminate those behaviors. In children with ODD, their behavior causes significant impairments in their social, academic, or occupational functioning. In a child, the specific criteria for diagnosing ODD include having the behaviors for at least six months, with at least four of the following patterns:
- Often loses temper
- Often argues with adults
- Often defies or refuses to comply with adults' requests or rules
- Often annoys people deliberately
- Often blames others for his or her own mistakes or misbehavior
- Often is touchy or easily annoyed by others
- Often is angry or resentful
- Often is spiteful or vindictive
These criteria come from the Diagnostic and Statistical Manual of the American Psychiatric Association. It's also important to note that in answering yes to these criteria, you have to compare the behaviors to those of other children of the same age and developmental level. Thus, for a three-year-old who loses his temper and defies adults, I wouldn't answer yes, because many three-year-olds have that behavior -- it's relatively typical. In a seven-year-old, those are not typical behaviors. The cause of ODD is not completely understood, but it is felt to be related to both genetic and environmental factors. While ODD is different from ADD, it does appear that some children who have ADD also have this type of disruptive behavior when they are younger. It's important to have a child evaluated when there are significant concerns, because both ADD and ODD can be managed effectively with early intervention. You can ask your pediatrician for a referral to a child psychologist or child psychiatrist to help evaluate your son. Shari Nethersole is a physician at Children's Hospital, Boston, and an instructor in Pediatrics at Harvard Medical School. She graduated from Yale University and Harvard Medical School, and did her internship and residency at Children's Hospital, Boston. As a pediatrician, she tries to work with parents to identify and address their concerns.
The brain's functions are both mysterious and remarkable. All thoughts, beliefs, memories, behaviors, and moods arise within the brain. The brain is the site of thinking and the control center for the entire body. The brain coordinates the abilities to move, touch, smell, taste, hear, and see. It enables people to form words and communicate, understand and manipulate numbers, compose and appreciate music, recognize and understand geometric shapes, plan ahead, and even to imagine and fantasize. The brain reviews all stimuli—from the internal organs, surface of the body, eyes, ears, nose, and mouth. It then reacts to these stimuli by correcting the position of the body, the movement of limbs, and the rate at which the internal organs function. The brain can also determine mood and levels of consciousness and alertness. No computer has yet come close to matching the capabilities of the human brain. However, this sophistication comes with a price. The brain needs constant nourishment. It demands an extremely large amount and continuous flow of blood and oxygen—about 20% of the blood flow from the heart. A loss of blood flow to the brain for more than about 10 seconds can cause loss of consciousness. Lack of oxygen or abnormally low sugar (glucose) levels in the blood can result in less energy for the brain and can seriously injure the brain within minutes. However, the brain is defended by several mechanisms that can work to prevent these problems. For example, if blood flow to the brain decreases, the brain immediately signals the heart to beat faster and more forcefully, and thus to pump more blood. If the sugar level in the blood becomes too low, the brain signals the adrenal glands to release epinephrine (adrenaline), which stimulates the liver to release stored sugar. The blood-brain barrier also protects the brain. This thin barrier prevents some toxic substances in the blood from reaching the brain. It exists because in the brain, unlike in most of the body, the cells that form the capillary walls are tightly sealed. (Capillaries, the smallest of the body's blood vessels, are where the exchange of nutrients and oxygen between the blood and tissues occurs.) The blood-brain barrier limits the types of substances that can pass into the brain. For example, penicillin, many chemotherapy drugs, and most proteins cannot pass into the brain. On the other hand, substances such as alcohol, caffeine, and nicotine can pass into the brain. Certain drugs, such as antidepressants, are designed so that they can pass through the barrier. Some substances needed by the brain, such as sugar and amino acids, do not readily pass through the barrier. However, the blood-brain barrier has transport systems that move substances the brain needs across the barrier to brain tissue. When the brain is inflamed, as may occur when people have certain infections or tumors, the blood-brain barrier becomes leaky (permeable). When the blood-brain barrier is permeable, some substances (such as certain antibiotics) that normally are unable to pass into the brain are able to do so. The activity of the brain results from electrical impulses generated by nerve cells (neurons), which process and store information. The impulses pass along the nerve fibers within the brain. How much and what type of brain activity occurs and where in the brain it is initiated depend on a person's level of consciousness and on the specific activity that the person is doing.
The brain has three main parts: the cerebrum, the brain stem, and the cerebellum. Each has a number of smaller areas, each with specific functions. The cerebrum, the largest part of the brain, contains the following: The cerebral cortex: This convoluted layer of tissue forms the outer surface of the cerebrum. It consists of a thin layer of gray matter about one eighth of an inch (2 to 4 mm) thick. In adults, the cerebral cortex contains most of the nerve cells in the nervous system. White matter: White matter consists mainly of nerve fibers that connect the nerve cells in the cortex with one another, as well as with other parts of the brain and spinal cord. The white matter is located under the cortex. Subcortical structures: These structures are also located under the cortex—hence, the name. They include the basal ganglia, thalamus, hypothalamus, hippocampus, and the limbic system, which includes the amygdala. The cerebrum is divided into two halves—the left and right cerebral hemispheres. The hemispheres are connected by nerve fibers that form a bridge (called the corpus callosum) through the middle of the brain. Each hemisphere is further divided into lobes: the frontal, parietal, occipital, and temporal lobes. Each lobe has specific functions, but for most activities, several areas of different lobes in both hemispheres must work together. The frontal lobes have the following functions:
- Initiating many voluntary actions, ranging from looking toward an object of interest, to crossing a street, to relaxing the bladder to urinate
- Controlling learned motor skills, such as writing, playing musical instruments, and tying shoelaces
- Controlling complex intellectual processes, such as speech, thought, concentration, problem-solving, and planning for the future
- Controlling facial expressions and hand and arm gestures
- Coordinating expressions and gestures with mood and feelings
Particular areas of the frontal lobes control specific movements, typically of the opposite side of the body. In most people, the left frontal lobe controls most of the functions involved in using language. The parietal lobes have the following functions:
- Interpreting sensory information from the rest of the body
- Controlling body and limb position
- Combining impressions of form, texture, and weight into general perceptions
- Influencing mathematical skills and language comprehension, as do adjacent areas of the temporal lobes
- Storing spatial memories that enable people to orient themselves in space (know where they are) and to maintain a sense of direction (know where they are going)
- Processing information that helps people know the position of their body parts
The occipital lobes process and interpret visual information. The temporal lobes process sounds and language and help form and retrieve memories. Subcortical structures include large collections of nerve cells:
- The basal ganglia, which coordinate and smooth out movements
- The thalamus, which generally organizes sensory messages to and from the highest levels of the brain (cerebral cortex), providing an awareness of such sensations as pain, touch, and temperature
- The hypothalamus, which coordinates some of the more automatic functions of the body, such as control of sleep and wakefulness, maintenance of body temperature, regulation of appetite and thirst, and control of hormonal activity of the adjacent pituitary gland (see Overview of the Pituitary Gland)
The limbic system, another subcortical structure, consists of structures and nerve fibers located deep within the cerebrum. This system connects the hypothalamus with other areas of the frontal and temporal lobes, including the hippocampus and amygdala.
The limbic system controls the experience and expression of emotions, as well as some automatic functions of the body. By producing emotions (such as fear, anger, pleasure, and sadness), the limbic system enables people to behave in ways that help them communicate and survive physical and psychologic upsets.

The hippocampus is also involved in the formation and retrieval of memories, and its connections through the limbic system help link those memories to the emotions experienced when the memories form. Through the limbic system, memories that are emotionally charged are often easier to recall than those that are not.

The brain stem connects the cerebrum with the spinal cord. It contains a system of nerve cells and fibers (called the reticular activating system) located deep within the upper part of the brain stem. This system controls levels of consciousness and alertness. The brain stem also automatically regulates critical body functions, such as breathing, swallowing, blood pressure, and heartbeat, and it helps adjust posture. If the entire brain stem becomes severely damaged, consciousness is lost, and these automatic body functions cease. Death soon follows. However, if the brain stem remains intact, the body may remain alive even when severe damage to the cerebrum makes movement and thought impossible.

The cerebellum, which lies below the cerebrum just above the brain stem, coordinates the body’s movements. With information it receives from the cerebral cortex and the basal ganglia about the position of the limbs, the cerebellum helps the limbs move smoothly and accurately. It does so by constantly adjusting muscle tone and posture. The cerebellum interacts with areas in the brain stem called vestibular nuclei, which are connected with the organs of balance (semicircular canals) in the inner ear. Together, these structures provide a sense of balance, making walking upright possible. The cerebellum also stores memories of practiced movements, enabling highly coordinated movements, such as a ballet dancer’s pirouette, to be done with speed and balance.

Both the brain and spinal cord are covered by three layers of tissue (the meninges) that protect them: the dura mater, the arachnoid mater, and the pia mater. The space between the arachnoid mater and the pia mater (the subarachnoid space) is a channel for cerebrospinal fluid, which helps protect the brain and spinal cord. Cerebrospinal fluid flows over the surface of the brain between the meninges, fills internal spaces within the brain (the four cerebral ventricles), and cushions the brain against sudden jarring and minor injury. The brain and its meninges are contained in a tough, bony protective structure, the skull.
Every state has the primary responsibility within its territory to ensure that human rights are guaranteed to all its members. By signing and ratifying human rights conventions, governments at national and local levels commit to avoiding any actions that would violate or lead to a violation of human rights. In addition, most treaty obligations require the government to take positive steps and adopt affirmative measures to ensure or protect the enjoyment of human rights. They may also require enacting and enforcing legislation, or adopting other appropriate measures, to ensure that individuals and other entities respect human rights.

Many countries create human rights enforcement systems, which may include a human rights commission to investigate claims and special adjudicative bodies to hear cases. Human rights claims may also be heard in the regular course of a civil or criminal case. Finally, ad hoc or permanent commissions may be established to monitor and write reports on immediate or ongoing issues.

To ensure enforcement of human rights obligations, various mechanisms exist at the national, regional and international levels. At the international level, most of these mechanisms provide vehicles for monitoring compliance. Some offer petition procedures which allow individuals to challenge breaches by the state of its human rights obligations. In some cases mechanisms are linked to constitutions and national legislation, in others to human rights treaties, and in still others to specialized agencies of the UN charged with the enforcement of specific rights, such as labour, refugee and health rights. Mechanisms linked to national constitutions and legislation may offer more concrete and enforceable remedies and should usually be tried first, before turning to international petition procedures.

The concept of procedural due process refers to the process by which rights are implemented by the State. Most of the formal protections of due process are linked to the conduct of a fair hearing. According to the International Covenant on Civil and Political Rights, all persons are entitled to a fair and public hearing and, at the trial stage, to be informed promptly, in a language which they understand, of the nature of the charge against them (article 14). These norms are relevant in the context of disability in three respects. Firstly, they are critically relevant in the civil commitment context. Secondly, they are obviously relevant in the context of ordinary criminal proceedings against individuals who happen to have disabilities. Thirdly, they are relevant in that they afford a right of access to the courts to vindicate other rights. Thus, the right to a court might be used offensively to establish and vindicate rights.

International law also recognises that a person is entitled to certain minimum standards of due process in judicial proceedings. Article 10 of the Universal Declaration of Human Rights stipulates that: "Everyone is entitled in full equality to a fair and public hearing by an independent and impartial tribunal, in the determination of his rights and obligations and of any criminal charge against him."

Most international recourse procedures require the petitioner to show prima facie personal involvement in the matter. The claim of being a victim is a condition laid down in most of the conventions that provide for remedies.
However, the Human Rights Committee (of the ICCPR) has agreed to consider communications submitted on behalf of alleged victims by others, when the victim has been unable to submit the complaint himself. This opens the complaint system of the Covenant to a vast number of victims who cannot contact a lawyer. The petitioner must show authority to act on behalf of the victim; in practice the Committee has opened the door only to persons showing a close family connection.

The American Convention on Human Rights recognises explicitly actio popularis in its article 44, stipulating that: "Any person or group of persons, or any non-governmental entity legally recognised in one or more Member States of the Organization, may lodge petitions with the Commission containing denunciations or complaints of violations of this Convention by a State Party."

Individuals whose rights have been violated are victims and therefore have the right to vindicate their rights in courts or other relevant judicial bodies. However, persons with disabilities may lack the possibility of effectively pursuing their rights. The rules of locus standi can be liberalised to broaden access to the courts for those who hitherto could not come before them because of poverty or social or physical disability. This can be done by extending standing to any member of the public or to a social action group to bring an action on behalf of the person or group of persons to whom the harm was caused. To circumvent the time-consuming and expensive writ petition, claimants could perhaps be permitted to address a letter to the court in order to commence an action.

Persons with disabilities may lack the resources required to obtain legal assistance. The Declaration on the Rights of Disabled Persons, paragraph 11, states that "…disabled persons shall be able to avail themselves of qualified legal aid when such aid proves indispensable for the protection of their persons and properties." States should provide legal aid to persons with disabilities, as well as to other vulnerable sections of society. Persons with disabilities may have difficulty financing costly lawyers' fees. One solution to this problem could be a socialisation of legal services, under which lawyers would be obliged to handle certain cases of important social nature at a considerably reduced fee.

Article 14, para 3 (d) of the ICCPR guarantees the right of everyone "To be tried in his presence, and to defend himself in person or through legal assistance of his own choosing; to be informed, if he does not have legal assistance, of this right; and to have legal assistance assigned to him, in any case where the interests of justice so require, and without payment by him in any such case if he does not have sufficient means to pay for it." The European Convention on Human Rights similarly states, in article 6, para 3 (c), that everyone charged with a criminal offence has the right "to defend himself in person or through legal assistance of his own choosing or, if he has not sufficient means to pay for legal assistance, to be given it free when the interests of justice so require."

Many advocacy organizations, including several outside Europe, rely on the human rights case of the European Court, Airey v. Ireland (ECHR, judgment of 9 October 1979, Series A no. 32). In that case, the Court found that the obligation of states to make access to the courts possible and effective includes a right to free legal assistance in civil matters when the procedure involved is so complex as to require legal assistance in order to ensure access to the court.
The Airey precedent, and decisions that followed it, have led to extensive reform of European domestic law in order to protect access to the courts for the indigent in civil legal matters.

Another barrier to seeking recourse in a court of law can be the formal structure of the courts themselves. The atmosphere should be encouraging and humanising for those who come before them. Participation in the legal process and the formal court procedure is one of the most fundamental human rights. Controversial disability matters, such as the sterilisation of mentally retarded children, might fall within the jurisdiction of special family courts. These courts are aimed at providing informal and speedy relief in family law matters. Family court counselling staff may assist in the process, and out-of-court settlements are encouraged. Conciliation and mediation are other informal methods of dispute settlement.

Human rights must, in the first instance, be enforced through domestic courts, for several reasons:
- Most states have some type of laws protecting human rights guarantees.
- Countries that are signatories to regional and international human rights documents are obligated to abide by their provisions.
- At times national laws directly refer to human rights, and many states copy international and regional guarantees virtually word for word into their national law.
- Many countries create human rights enforcement systems, which may include a human rights commission to investigate claims and special adjudicative bodies to hear cases.
- Mechanisms linked to national constitutions and legislation may offer more concrete and enforceable remedies and usually should be tried first, before turning to international petition procedures.
- At the national level, the weight of the nation's legal system can be brought to bear on the enforcement of human rights.

The types of mechanisms and procedures vary from country to country. For example, human rights commissions exist in some countries while they are unheard of in others. Similarly, constitutional courts exist in some countries but not in others. States may also establish administrative bodies to monitor and carry out compliance with international and regional agreements.

The more widely international norms on disability are known, the greater the possibility of domestic courts complying with them. National courts could become promoters and protectors of the international human rights of persons with disabilities. Furthermore, judicial initiatives can propel the executive and legislative branches of government to reform the law.

The domestic court system serves an important function in ensuring the rights of persons with disabilities. Aggrieved disabled persons may bring an action when their rights are violated, and may sue for damages where appropriate. The court may then decide whether the rights of the claimant have been infringed. Judgements of the court can be enforced by ordinary means. Courts may also bring matters to legislative attention and encourage various interest groups to take up action on certain issues.

National human rights systems can thus be important both in their own right and as channels through which international and regional human rights law can be applied, and national laws and mechanisms can be used to advance disability rights. In certain jurisdictions, such as Chile, a number of judicial precedents show that the courts have tended to rely on international law in deciding cases.
There are also important cases in which the automatic incorporation of customary international law has been recognised by the courts and applied accordingly. There are also cases where the courts have upheld domestic law above international law. This is particularly so when the conflict arises between a treaty and a subsequent contradictory statute, since the court may be inclined to apply the rule enacted later in time. This is yet another consequence of assigning a treaty the same legal hierarchy as a domestic statute. If the conflict arises between a rule of international law and a provision of the Constitution, the situation is further complicated by the fact that courts will generally approach the question with added caution. This, of course, is not a peculiarity of the Chilean case, but of many other legal systems as well. There is no question that, from the viewpoint of international law, the argument that constitutional provisions prevail over treaties would not stand. From the point of view of a constitutional court, however, it is most probable that the Constitution will be upheld, unless its very clauses provide for the supremacy of the international rule.

In Germany, international human rights and the fundamental rights guaranteed in the Basic Law overlap to a large extent. The Basic Law begins with a catalogue of fundamental rights which opens in article 1, paragraphs 1-3, with a pledge of the German State to respect and protect individual rights: "The dignity of man shall be inviolable. To respect and protect it shall be the duty of all State authority." The German people therefore acknowledge inviolable and inalienable human rights as the basis of every community, of peace and of justice in the world. International law stemming from non-treaty sources is introduced into the German legal system via article 25 of the Constitution of the Federal Republic of Germany, which contains the following incorporation clause: "The general rules of public international law shall be an integral part of federal law. They shall take precedence over the laws and shall directly create rights and duties for the inhabitants of the federal territory." This means that no additional implementing legislation is required. These general rules are accorded a rank above all other domestic laws in the hierarchy of norms. In litigation, where doubts arise about the existence of a general rule of international law, the issue can be resolved by way of reference to the Federal Constitutional Court.

In Japan, treaties have the force of law and override statutes passed by the Japanese parliament. Because treaties have such a privileged status in Japan, the country is extremely wary of acceding to human rights treaties. Although Japan has not ratified many human rights conventions, it has ratified some of the most important ones within the last fifteen years. Upon ratification of these treaties, Japan revised its laws extensively to bring them into conformity with the treaties' requirements. Even though international law has domestic legal force in Japan, international human rights instruments which lack a legally binding character are not regarded as having the force of law; binding character under international law is a prerequisite for domestic force of law.

The national system has drawbacks as well: national human rights systems can have significant limitations.
The way it really is: little-known facts about radiometric dating

Long-age geologists will not accept a radiometric date unless it matches their pre-existing expectations.

Many people think that radiometric dating has proved the Earth is millions of years old. That’s understandable, given the image that surrounds the method. Even the way dates are reported (e.g. 200.4 ± 3.2 million years) gives the impression that the method is precise and reliable (box below).

However, although we can measure many things about a rock, we cannot directly measure its age. For example, we can measure its mass, its volume, its colour, the minerals in it, their size and the way they are arranged. We can crush the rock and measure its chemical composition and the radioactive elements it contains. But we do not have an instrument that directly measures age. Before we can calculate the age of a rock from its measured chemical composition, we must assume what radioactive elements were in the rock when it formed.1 And then, depending on the assumptions we make, we can obtain any date we like.

It may be surprising to learn that evolutionary geologists themselves will not accept a radiometric date unless they think it is correct—i.e. it matches what they already believe on other grounds. It is one thing to calculate a date. It is another thing to understand what it means. So, how do geologists know how to interpret their radiometric dates and what the ‘correct’ date should be?

A geologist works out the relative age of a rock by carefully studying where the rock is found in the field. The field relationships, as they are called, are of primary importance and all radiometric dates are evaluated against them. For example, a geologist may examine a cutting where the rocks appear as shown in Figure 1. Here he can see that some curved sedimentary rocks have been cut vertically by a sheet of volcanic rock called a dyke. It is clear that the sedimentary rock was deposited and folded before the dyke was squeezed into place.

By looking at other outcrops in the area, our geologist is able to draw a geological map which records how the rocks are related to each other in the field. From the mapped field relationships, it is a simple matter to work out a geological cross-section and the relative timing of the geologic events. His geological cross-section may look something like Figure 2 (a cross-section). Clearly, Sedimentary Rocks A were deposited and deformed before the Volcanic Dyke intruded them. These were then eroded and Sedimentary Rocks B were deposited.

The geologist may have found some fossils in Sedimentary Rocks A and discovered that they are similar to fossils found in some other rocks in the region. He assumes therefore that Sedimentary Rocks A are the same age as the other rocks in the region, which have already been dated by other geologists. In the same way, by identifying fossils, he may have related Sedimentary Rocks B with some other rocks. Creationists would generally agree with the above methods and use them in their geological work.

From his research, our evolutionary geologist may have discovered that other geologists believe that Sedimentary Rocks A are 200 million years old and Sedimentary Rocks B are 30 million years old. Thus, he already ‘knows’ that the igneous dyke must be younger than 200 million years and older than 30 million years.
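To see how much the assumed starting composition matters, here is a minimal sketch (ours, not the article's) of the conventional closed-system decay-clock arithmetic. The half-life and atom counts are illustrative, and the assumed_initial_daughter parameter is a hypothetical knob representing the assumption discussed above: change what you assume was in the rock when it formed, and the same laboratory measurement yields a very different 'age'.

```python
import math

def apparent_age_myr(parent, daughter, half_life_myr, assumed_initial_daughter=0.0):
    """Closed-system decay clock: t = (1/lambda) * ln(1 + D*/P), where D* is
    the measured daughter minus whatever daughter is *assumed* to have been
    present when the rock formed."""
    decay_const = math.log(2) / half_life_myr          # lambda, per million years
    radiogenic = daughter - assumed_initial_daughter   # daughter credited to in-situ decay
    return math.log(1.0 + radiogenic / parent) / decay_const

# One rock, one lab measurement, three different starting assumptions
# (atom counts in arbitrary units; 1250 Myr half-life is illustrative):
parent, daughter = 1000.0, 50.0
for d0 in (0.0, 25.0, 45.0):
    age = apparent_age_myr(parent, daughter, 1250.0, assumed_initial_daughter=d0)
    print(f"assumed initial daughter = {d0:>4}  ->  apparent age = {age:6.1f} Myr")
```

Running this gives roughly 88, 44 and 9 million years for the three assumptions. The arithmetic itself is exact; the answer hinges on the assumed starting composition (real methods such as isochron plots try to constrain it, but an assumption of some kind remains).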
(Creationists do not agree with these ages of millions of years because of the assumptions they are based on.2)

Because of his interest in the volcanic dyke, he collects a sample, being careful to select rock that looks fresh and unaltered. On his return, he sends his sample to the laboratory for dating, and after a few weeks receives the lab report. Let us imagine that the date reported by the lab was 150.7 ± 2.8 million years. Our geologist would be very happy with this result. He would say that the date represents the time when the volcanic lava solidified. Such an interpretation fits nicely into the range of what he already believes the age to be. In fact, he would have been equally happy with any date a bit less than 200 million years or a bit more than 30 million years. They would all have fitted nicely into the field relationships that he had observed and his interpretation of them. The field relationships are generally broad, and a wide range of ‘dates’ can be interpreted as the time when the lava solidified.

What would our geologist have thought if the date from the lab had been greater than 200 million years, say 350.5 ± 4.3 million years? Would he have concluded that the fossil date for the sediments was wrong? Not likely. Would he have thought that the radiometric dating method was flawed? No. Instead of questioning the method, he would say that the radiometric date was not recording the time that the rock solidified. He may suggest that the rock contained crystals (called xenocrysts) that formed long before the rock solidified and that these crystals gave an older date.3 He may suggest that some other very old material had contaminated the lava as it passed through the earth. Or he may suggest that the result was due to a characteristic of the lava—that the dyke had inherited an old ‘age’.

The error is not the real error

The convention for reporting dates (e.g. 200.4 ± 3.2 million years) implies that the calculated date of 200.4 million years is accurate to plus or minus 3.2 million years. In other words, the age should lie between 197.2 million years and 203.6 million years. However, this error is not the real error on the date. It relates only to the accuracy of the measuring equipment in the laboratory. Even different samples of rock collected from the same outcrop would give a larger scatter of results. And, of course, the reported error ignores the huge uncertainties in the assumptions behind the ‘age’ calculation. These include the assumption that decay rates have never changed. In fact, decay rates have been increased in the laboratory by factors of billions of times.1 Creationist physicists point to several lines of evidence that decay rates have been faster in the past, and propose a pulse of accelerated decay during Creation Week, and possibly a smaller pulse during the Flood year.2

- Woodmorappe, J., Billion-fold acceleration of radioactivity demonstrated in laboratory, TJ 15(2):4–6, 2001.
- Vardiman, L., Snelling, A.A. and Chaffin, E.F., Radioisotopes and the Age of the Earth, Institute for Creation Research, El Cajon, California, and Creation Research Society, St. Joseph, Missouri, USA, 2000. Text is available at icr.org/rate.

What would our geologist think if the date from the lab were less than 30 million years, say 10.1 ± 1.8 million years? No problem. Would he query the dating method, the chronometer? No. He would again say that the calculated age did not represent the time when the rock solidified.
He may suggest that some of the chemicals in the rock had been disturbed by groundwater or weathering.4 Or he may decide that the rock had been affected by a localized heating event—one strong enough to disturb the chemicals, but not strong enough to be visible in the field.

No matter what the radiometric date turned out to be, our geologist would always be able to ‘interpret’ it. He would simply change his assumptions about the history of the rock to explain the result in a plausible way. G. Wasserburg, who received the 1986 Crafoord Prize in Geosciences, said, ‘There are no bad chronometers, only bad interpretations of them!’5 In fact, there is a whole range of standard explanations that geologists use to ‘interpret’ radiometric dating results.

Why use it?

Someone may ask, ‘Why do geologists still use radiometric dating? Wouldn’t they have abandoned the method long ago if it was so unreliable?’ Just because the calculated results are not the true ages does not mean that the method is completely useless. The dates calculated are based on the isotopic composition of the rock, and the composition is a characteristic of the molten lava from which the rock solidified. Therefore, rocks in the same area which give similar ‘dates’ are likely to have formed from the same lava at about the same time during the Flood. So, although the assumptions behind the calculation are wrong and the dates are incorrect, there may be a pattern in the results that can help geologists understand the relationships between igneous rocks in a region.

Contrary to the impression that we are given, radiometric dating does not prove that the Earth is millions of years old. The vast age has simply been assumed.2 The calculated radiometric ‘ages’ depend on the assumptions that are made. The results are only accepted if they agree with what is already believed. The only foolproof method for determining the age of something is based on eyewitness reports and a written record. We have both in the Bible. And that is why creationists use the historical evidence in the Bible to constrain their interpretations of the geological evidence.

What if the rock ages are not ‘known’ in advance—does radio-dating give coherent results?

Recently, I conducted a geological field trip in the Townsville area, North Queensland. A geological guidebook,1 prepared by two geologists, was available from a government department. The guidebook’s appendix explains ‘geological time and the ages of rocks.’ It describes how geologists use field relationships to determine the relative ages of rocks. It also says that the ‘actual’ ages are measured by radiometric dating—an expensive technique performed in modern laboratories. The guide describes a number of radiometric methods and states that for ‘suitable specimens the errors involved in radiometric dating usually amount to several percent of the age result. Thus … a result of two hundred million years is expected to be quite close (within, say, 4 million) to the true age.’

Photo by Phil Peachey: Castle Hill (Townsville, Queensland, Australia)

This gives the impression that radiometric dating is very precise and very reliable—the impression generally held by the public. However, the appendix concludes with this qualification: ‘Also, the relative ages [of the radiometric dating results] must always be consistent with the geological evidence. … if a contradiction occurs, then the cause of the error needs to be established or the radiometric results are unacceptable’.
This is exactly what our main article explains. Radiometric dates are only accepted if they agree with what geologists already believe the age should be.

Townsville geology is dominated by a number of prominent granitic mountains and hills. However, these are isolated from each other, and the area lacks significant sedimentary strata. We therefore cannot determine the field relationships and thus cannot be sure which hills are older and which are younger. In fact, the constraints on the ages are such that there is a very large range possible. We would expect that radiometric dating, being allegedly so ‘accurate,’ would rescue the situation and provide exact ages for each of these hills. Apparently, this is not so. Concerning the basement volcanic rocks in the area, the guidebook says, ‘Their exact age remains uncertain.’ About Frederick Peak, a rhyolite ring dyke in the area, it says, ‘Their age of emplacement is not certain.’ And for Castle Hill, a prominent feature in the city of Townsville, the guidebook says, ‘The age of the granite is unconfirmed.’ No doubt, radiometric dating has been carried out and precise ‘dates’ have been obtained. It seems they have not been accepted because they were not meaningful.

- Trezise, D.L. and Stephenson, P.J., Rocks and Landscapes of the Townsville District, Department of Resource Industries, Queensland, 1990.

References and notes
- In addition to other unprovable assumptions, e.g. that the decay rate has never changed.
- Evolutionary geologists believe that the rocks are millions of years old because they assume they were formed very slowly. They have worked out their geologic timescale based on this assumption. This timescale deliberately ignores the catastrophic effects of the Biblical Flood, which deposited the rocks very quickly.
- This argument was used against creationist work that exposed problems with radiometric dating. Laboratory tests on rock formed from the 1980 eruption of Mt St Helens gave ‘ages’ of millions of years. Critics claimed that ‘old’ crystals contained in the rock contaminated the result. However, careful measurements by Dr Steve Austin showed this criticism to be wrong. See Swenson, K., Radio-dating in rubble, Creation 23(3):23–25, 2001.
- This argument was used against creationist work done on a piece of wood found in sandstone near Sydney, Australia, that was supposed to be 230 million years old. Critics claimed that the carbon-14 results were ‘too young’ because the wood had been contaminated by weathering. However, careful measurements of the carbon-13 isotope refuted this criticism. See Snelling, A.A., Dating dilemma: fossil wood in ‘ancient’ sandstone, Creation 21(3):39–41, 1999.
- Wasserburg, G.J., Isotopic abundances: inferences on solar system and planetary evolution, Earth and Planetary Science Letters 86:129–173, 150, 1987.

Comments

While reading this article I could not help but think that the scientists who use this dating method to confirm their already-held beliefs are like archers who shoot an arrow and then go paint the bullseye around it. I can just hear the congratulations as they pat each other on the back and comment, "Wow, look at that, you've hit another bullseye!"

BTW, great article, and I don't mean to be negative, but R.M. (you called him Richard) did have a valid point which you did not adequately respond to. I had an atheist ask me a similar question: if science disproved my belief in God, would I change my mind?
One could conclude that truth is false, but that does not make the false true. I agree with you that we/you are not being hypocritical, but I also agree with him that it can appear as though we are. It's a great method for anyone who wishes to discredit creationists' beliefs; or, at least, it would be if it were not so discredited.

About what percentage of radio-isotope dates are rejected? All dates are interpreted, so no matter what the result is, it can always be made to sound reasonable. I would not know what proportion of measured dates go unpublished.

Great article Dr. Walker! The perspective you present of "depending on the assumptions we make, we can obtain any date we like" certainly seems to match the data. What is unsettling is that some creationist geologists, e.g. Dr. Snelling, say that if the dates are scaled and also adjusted for the type of radiometric test, creationists could use the dates. That view is also presented in a compelling fashion. The two views seem to be irreconcilable, but I'm not certain about it. Is it possible to access the rejected dates and make them public?

Lots of radio-isotope dates are not reported, but are sitting in researchers' files waiting for time to figure out what is going on with them. However, there are lots and lots of dates that are reported, but you would not be aware of the problems unless you know how to read the papers, and unless you refer to other papers that deal with the same topic. Read the above article again, because it explains how all the results are interpreted such that they are consistent with the story the researcher wants to present. Search creation.com for "the dating game mungo" and "Radioactive dating anomalies" for two articles that show how the numbers are interpreted.

Dear CMI - The subtitle of this article states that "Long-age geologists will not accept a radiometric date unless it matches their pre-existing expectations." This is a direct imputation of widespread scientific malfeasance on the part of professional geologists. Yet we read on your website (and on many other creationist sites) the following (taken from your 'Statement of Faith'): "By definition, no apparent, perceived or claimed evidence in any field, including history and chronology, can be valid if it contradicts the Scriptural record." How is this different from the attitude that you criticize mainstream geologists for adopting? Is there a "mote in thy brother's eye" or "a beam … in thine own eye"?

Oh Richard, I know that you know how the scientific paradigm affects interpretations and research outcomes. Long-age geologists are committed to the long-age paradigm, which assumes naturalism. This article makes the point that, contrary to the impression we are given, the radio-isotope dates are not a scientific fact but are interpretations driven by the paradigm. Understanding that liberates people to be able to look at the world from a different perspective. We have clearly set out the worldview within which we are working: we believe the Bible is the true revelation of the Creator God who made this world. That is not hypocrisy, but being open and up-front about where we are coming from.

'There are no bad chronometers, only bad interpretations of them!'5 I do clock and watch repair. Should I try that one on my clients? Navigating by an unreliable chronometer? No problem: just decide you are where you think you might be and adjust the chronometer to fit.
You’ve come to rely on weather radar, but an understanding of how those images are produced will give you far more useful data than the pretty colors alone. Twenty years ago, the idea of carrying sophisticated digital radar in anything under a medium twin would probably have been met with roars of laughter, but technology has brought amazing advances. Now it’s possible for even an ultralight pilot to use the Internet to access essentially the same tools that are available to forecasters. In the United States, the ground radar network has become so dense and reliable that it’s largely done away with the need for airborne weather radar for anything short of real-time dodging and weaving through an area of cells. Of course none of this radar technology will help a pilot flying to vacation spots in Cancun or St. Vincent, but on a typical cross-country flight, the amount of weather data at your fingertips can be overwhelming. Getting the most out of radar requires an understanding of how radar systems work. You may have once learned the basics of radar, possibly using the popular analogy of a bat emitting chirps in a dark room and judging distance by the time it takes for the sound wave to echo back. The underlying principles of radar haven’t changed a bit. Radar antennas emit a directional pulse traveling at the speed of light. If it strikes a target, it is backscattered to the radar, which uses the elapsed time to determine the distance, or range. By rotating the antenna, it samples all azimuths in a full circle from 0 to 360 degrees. This gives us a complete radar scan that is used to generate a map of all the echoes. Using digital processing, we can also use colors to highlight echoes that produced strong backscatter signals indicating that something denser than empty atmosphere is out there. This describes how radar sites worked until the early 1990s. But engineers and research meteorologists developed another important technology called volume scanning. Older radar networks only looked at a single antenna elevation, usually half a degree above the horizon. Instead of sampling the atmosphere on a single geometric plane, why not sample it volumetrically, in three dimensions? Using volume scanning, the antenna does a full sweep, taking about 20 seconds. Then the antenna elevates by about a degree and another sweep is performed. This scan-elevate-scan process is repeated until the antenna can’t point any higher. The maximum elevation for the U.S. WSR-88D radar is 19.5 degrees. All of these elevation slices or “tilts” form what is called a volume scan. Depending on the kind of weather taking place, an individual radar site will select the best volume scan strategy, known to forecasters as volume coverage pattern (VCP). The VCP largely determines how many elevations make up a volume scan and how long it takes to finish a volume scan. This can take anywhere from 4 to 6 minutes in stormy weather and up to 10 minutes in fair weather. Thus, the images you view are not real-time. Add the time it takes to process and transmit the data, and you can see that the image you view can have data that’s 15 minutes old. Weather radar from direct-broadcast satellite and Internet sources is built largely from the WSR-88D radar network of 159 sites across the U.S. Keep in mind that this is S-band radar, which operates at a longer wavelength (lower radio frequency) than airborne radars. 
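Before turning to the imagery itself, the timing arithmetic above can be made concrete. The sketch below is illustrative (not from the article): the range formula is simply round-trip travel at the speed of light, and the volume-scan estimate multiplies tilt count by sweep time, using the roughly 20-second sweeps quoted above and a hypothetical 14-tilt stormy-weather pattern.

```python
C_M_PER_S = 299_792_458  # speed of light, m/s

def echo_range_km(elapsed_microseconds: float) -> float:
    """Range to a target from pulse round-trip time: out and back, so halve it."""
    seconds = elapsed_microseconds * 1e-6
    return C_M_PER_S * seconds / 2 / 1000

def volume_scan_minutes(n_tilts: int, seconds_per_sweep: float = 20.0) -> float:
    """Rough scan-elevate-scan duration for one volume scan."""
    return n_tilts * seconds_per_sweep / 60

print(f"Echo back after 1,000 microseconds -> target ~{echo_range_km(1000):.0f} km out")
print(f"A 14-tilt stormy-weather pattern  -> ~{volume_scan_minutes(14):.1f} minutes per volume")
```

The second number, a bit under five minutes, is why a displayed image can already be several minutes old before processing and transmission delays are even counted.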
S-band radars use large antennas and powerful transmitters to provide excellent penetration through precipitation, minimizing attenuation and shadows. Their underlying images are inherently better than those of airborne radars, which suffer from the restrictions on antenna size.

The classic radar image showing strong and weak echoes is depicting reflectivity. This is a measure of backscattered (reflected) energy from the targets. It’s important to note that there are two types of reflectivity products. Base reflectivity is an image from a single elevation tilt. Composite reflectivity is a blend of all the tilts together in the entire volume. At a given spot, the composite-reflectivity value is a measure of the highest reflectivity that was detected at that spot on any of the elevation slices.

Reading the Radar

The best starting point to interpret your display is the type of reflectivity—base or composite. If the menus and labels aren’t clear and the vendor doesn’t specify it, dig until you find out. Base and composite-reflectivity displays are constructed differently, and at times they can appear very different. As a starting point, XMWX and WSI ADS-B show composite reflectivity. By contrast, WSI InFlight uses base reflectivity. Both are excellent products, but each has its pros and cons that must be understood to reap the maximum benefit from the information.

Why use two types of reflectivity? Composite reflectivity doesn’t miss anything. If it’s near the ground, it shows up. If it’s near the tropopause, it shows up. So why use anything else? Well, meteorologists need more granularity. For example, the hook echo signifying a tornado only shows up in the lowest mile or two of the atmosphere. Using only one elevation at that level, it shows up beautifully, but if we use composite reflectivity, that hook echo is merged into all the hail and rain at higher levels and we just see a blob. So forecasters prefer to look at base reflectivity and page between the different elevations, though composite reflectivity helps with the big picture. You should not be looking for specific shapes and features like hook echoes when assessing things on composite-reflectivity displays. But we do have intensity, which is quite useful even by itself, because the only weather phenomena capable of producing very high intensities are hail, particularly if it’s large and wet, and highly dense volumes of rain. Both of these usually signify a strong storm that should be avoided.

Composite reflectivity does have its downsides. For example, anvils from storms have considerable ice content being carried up to a hundred miles downwind by the jet stream, and these plumes are easily picked up by higher radar tilts. This will cause composite reflectivity to “bloom,” showing much larger downwind depth than exists at most levels. This precipitation might very well be affecting a jet at cruise altitude, but a Bonanza near the ground might see only VMC, a broken layer of anvil overhead, and cumulus buildups to the west.

Another caution comes from winter weather or a cold-rain situation. If it’s just above freezing in the low levels, there will be a melting level at a specific altitude where ice is falling into warmer air and melting, forming a liquid coating around a solid. This precipitation is highly reflective. On base reflectivity it forms a clear ring of reflectivity at a specific radius around the radar where the beam crosses through that level. The ring radius varies with radar tilt.
As a result, composite reflectivity will stack all these different rings together, producing multiple concentric rings around a radar site, with some of these melting-level artifacts incorrectly suggesting areas of intense precipitation. While it’s difficult for anyone but a meteorologist to deconstruct a confusing composite-reflectivity image, being aware of how these effects get into your radar display will help you recognize them when they occur.

Picking your way through storm echoes has long been a familiar game of dodgeball for pilots, but there is danger in doing this because the most intense vertical motions are associated with the updraft. Unfortunately, the updraft is actually a relatively dry part of the storm, meaning it is non-reflective to radar. An updraft core is made up mostly of cloud droplets in the lowest 5,000 to 10,000 feet, with progressively larger droplets and higher reflectivity as you ascend toward the top of the updraft. The danger is that in the strongest storms, this process is shifted to higher elevations, often 20,000 to 30,000 feet off the ground, leaving the lower elevations with no reflectivity.

Fortunately, composite-reflectivity products actually offer an advantage in terms of safety, because some of the highest intensities in the storm tend to overlie the updraft. So while a Learjet pilot at only 10,000 feet might glance at a base-reflectivity product or her own on-board weather radar and see an echo-free region where a dangerous updraft is located, a Mooney pilot with composite reflectivity from XMWX will see this area saturated with high intensities and give it a wide berth. Composite reflectivity has an advantage here, but if upper winds are particularly strong, the updraft may be tilted, shifting the upper parts of the storm downwind and lowering composite reflectivity’s margin of safety. Keeping away from the upwind (usually southwest) side of the storm will help keep you safe when upper winds are strong. That said, the AIM’s advice of staying at least 20 miles away from the storm is sound advice indeed.

Radar Network Shortfalls

The ground-based radar network has a couple of vulnerabilities. One is weather directly over the radar site. Imagine wearing a wide-brimmed hat and not being able to look up. You’d be blind to the storms overhead, though you would be able to see all the rain near the ground at short distances around you. This is the same sort of problem that affects the WSR-88D radar, which can only tilt upward 19.5 degrees. It forms a volume known as the cone of silence, which cannot be sampled. For all practical purposes, this cone only affects areas within 20 miles of the radar. Within this zone, we’re limited to seeing only the lowest parts of the storm. Even if we’re using composite reflectivity, very intense echoes aloft over the radar site will likely be missed, and intensities will show up as being lower than they really are.

A good weather delivery network like WxWorx from Baron (XMWX weather source) compensates for this by using neighboring radars to illuminate each of the cones of silence and fill the gap. This is very effective in radar-dense areas like Illinois, Indiana, Georgia, and Oklahoma. But in radar-sparse places like the northern plains and the Rockies, there may not be a radar site close enough. So if you’re picking your way through weather and are heading directly over a radar site, you may not be getting the full picture. Some areas are simply too far from a radar site.
There are several well-documented areas in the contiguous lower 48 states that are poorly sampled—the Four Corners area (N.M./Ariz./Utah/Colo.), central Utah, southeast Montana, the high desert and central coast of Oregon, and the Big Bend region of Texas. Here, the radars are far away or blocked by mountains, and they’ll only sample higher elevations. This means that if you’re flying in these areas, you’ll need to use extra caution, particularly in IMC and at lower elevations. WSI NOWrad uses the 248-nm products to fill in those distant areas, but even with that feature there’s still no technology that will get samples in the lower troposphere so far from the radar. Of course, if you fly outside the service area of all those great radars, you have the same problem. To a certain extent you can use infrared satellite imagery as a crude substitute, though it’s sensitive to high clouds as well as rain clouds.

Getting the Most

We’ve only considered radar data itself for weather avoidance. Most of the systems on the market offer many other tools that are excellent for keeping you safe, such as lightning detection and storm track information. While a whole new article could be written explaining how to integrate all these tools, a firm understanding of radar basics, a timely and complete weather briefing and good situational awareness are more than enough to keep you safe.

That said, perhaps we can distill this article into a little simple advice: remain clear of the red areas and of the updraft and downdraft cores, and keep in mind that they aren’t always one and the same.

Tim Vasquez is a professional meteorologist in Norman, Oklahoma. See his website at www.weathergraphics.com.
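The "distant areas only get sampled at higher elevations" problem follows directly from geometry and refraction. Below is a rough sketch (our illustration, not from the article) using the standard 4/3-effective-earth-radius propagation model to estimate how high the center of the lowest 0.5-degree beam sits above the radar at various ranges; the exact numbers vary with atmospheric conditions.

```python
import math

EFFECTIVE_EARTH_KM = (4.0 / 3.0) * 6371.0  # 4/3-earth-radius refraction model

def beam_height_km(range_km: float, elevation_deg: float) -> float:
    """Approximate height of the beam centerline above the radar site,
    combining the elevation-angle term with earth-curvature drop-off."""
    theta = math.radians(elevation_deg)
    return range_km * math.sin(theta) + range_km**2 / (2 * EFFECTIVE_EARTH_KM)

for r_km in (50, 120, 230):
    feet = beam_height_km(r_km, 0.5) * 3280.84
    print(f"{r_km:>3} km out, 0.5-degree tilt: beam center ~{feet:,.0f} ft above the radar")
```

At roughly 230 km the lowest beam is already around 17,000 feet up, so anything happening in the lower troposphere there is simply invisible to that radar, which is exactly the gap described above for radar-sparse regions.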
Heat Island Impacts On this page: - Increased Energy Consumption - Elevated Emissions of Air Pollutants and Greenhouse Gases - Compromised Human Health and Comfort - Impaired Water Quality On a hot, sunny summer day, roof and pavement surface temperatures can be 50–90°F (27–50°C) hotter than the air, while shaded or moist surfaces—often in more rural surroundings—remain close to air temperatures.1 These surface urban heat islands, particularly during the summer, have multiple impacts and contribute to atmospheric urban heat islands. Air temperatures in cities, particularly after sunset, can be as much as 22°F (12°C) warmer than the air in neighboring, less developed regions.2 Elevated temperatures from urban heat islands, particularly during the summer, can affect a community’s environment and quality of life. While some impacts may be beneficial, such as lengthening the plant-growing season, the majority of them are negative. These impacts include: - Increased energy consumption; - Elevated emissions of air pollutants and greenhouse gases; - Compromised human health and comfort; and - Impaired water quality. Elevated summertime temperatures in cities increase energy demand for cooling. Research shows that electricity demand for cooling increases 1.5–2.0% for every 1°F (0.6°C) increase in air temperatures, starting from 68 to 77°F (20 to 25°C), suggesting that 5–10% of community-wide demand for electricity is used to compensate for the heat island effect.2 Urban heat islands increase overall electricity demand, as well as peak demand, which generally occurs on hot summer weekday afternoons, when offices and homes are running cooling systems, lights, and appliances. During extreme heat events, which are exacerbated by urban heat islands, the resulting demand for cooling can overload systems and require a utility to institute controlled, rolling brownouts or blackouts to avoid power outages. As described above, urban heat islands raise demand for electrical energy in summer. Companies that supply electricity typically rely on fossil fuel power plants to meet much of this demand, which in turn leads to an increase in air pollutant and greenhouse gas emissions. The primary pollutants from power plants include: - sulfur dioxide (SO2) - nitrogen oxides (NOx) - particulate matter (PM) - carbon monoxide (CO) and - mercury (Hg). These pollutants are harmful to human health and also contribute to complex air quality problems such as the formation of ground-level ozone (smog), fine particulate matter, and acid rain. Increased use of fossil-fuel-powered plants also increases emissions of greenhouse gases, such as carbon dioxide (CO2), which contribute to global climate change. In addition to their impact on energy-related emissions, elevated temperatures can directly increase the rate of ground-level ozone formation. Ground-level ozone is formed when NOx and volatile organic compounds (VOCs) react in the presence of sunlight and hot weather. If all other variables are equal, such as the level of precursor emissions in the air and wind speed and direction, more ground-level ozone will form as the environment becomes sunnier and hotter. Increased daytime temperatures, reduced nighttime cooling, and higher air pollution levels associated with urban heat islands can affect human health by contributing to general discomfort, respiratory difficulties, heat cramps and exhaustion, non-fatal heat stroke, and heat-related mortality. 
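Before moving on to the health impacts, the energy figures above can be turned into a rule-of-thumb calculator. This is an illustrative sketch only: the threshold temperature and percent-per-degree slope are picked from within the 68–77°F and 1.5–2.0% ranges quoted in the text, not from any official EPA formula.

```python
def added_cooling_demand_pct(air_temp_f: float,
                             threshold_f: float = 75.0,
                             pct_per_deg_f: float = 1.75) -> float:
    """Extra electricity demand for cooling, as a percent, using the
    quoted rule of thumb of ~1.5-2.0% per 1 deg F above a ~68-77 deg F
    threshold (midpoint values assumed here for illustration)."""
    return max(0.0, air_temp_f - threshold_f) * pct_per_deg_f

for temp in (72, 80, 88, 95):
    print(f"{temp} F -> about +{added_cooling_demand_pct(temp):.1f}% cooling demand")
```

By this rough arithmetic, a heat island that adds several degrees on a summer afternoon adds demand on the same order as the 5–10% community-wide figure cited above.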
Heat islands can also exacerbate the impact of heat waves, which are periods of abnormally hot, and often humid, weather. Sensitive populations, such as children, older adults, and those with existing health conditions, are at particular risk from these events. Excessive heat events, or abrupt and dramatic temperature increases, are particularly dangerous and can result in above-average rates of mortality. The Centers for Disease Control and Prevention estimates that from 1979–2003, excessive heat exposure contributed to more than 8,000 premature deaths in the United States.3 This figure exceeds the number of mortalities resulting from hurricanes, lightning, tornadoes, floods, and earthquakes combined.

High pavement and rooftop surface temperatures can heat stormwater runoff. Tests have shown that pavements that are 100°F (38°C) can elevate initial rainwater temperature from roughly 70°F (21°C) to over 95°F (35°C).4 This heated stormwater generally becomes runoff, which drains into storm sewers and raises water temperatures as it is released into streams, rivers, ponds, and lakes. Water temperature affects all aspects of aquatic life, especially the metabolism and reproduction of many aquatic species. Rapid temperature changes in aquatic ecosystems resulting from warm stormwater runoff can be particularly stressful, even fatal, to aquatic life.

- Berdahl, P. and S. Bretz. 1997. Preliminary survey of the solar reflectance of cool roofing materials. Energy and Buildings 25:149-158.
- Akbari, H. 2005. Energy Saving Potentials and Air Quality Benefits of Urban Heat Island Mitigation (PDF) (19 pp, 251K). Lawrence Berkeley National Laboratory.
- Centers for Disease Control and Prevention. 2006. Extreme Heat: A Prevention Guide to Promote Your Personal Health and Safety.
- James, W. 2002. Green roads: research into permeable pavers. Stormwater 3(2):48-40.
A polygon is a closed two-dimensional shape made up of line segments. It has three or more sides (also called edges), and the points where two sides meet are its corners, or vertices. It can also be described as 'a closed plane figure bounded by three or more line segments'. A square is a polygon because it has four sides.

The smallest possible polygon in Euclidean geometry, or "flat geometry", is the triangle, but on a sphere there can be a digon. The monogon is a theoretical figure that cannot exist in the Euclidean plane: it has only one side and one vertex. If the edges (the line segments of the polygon) do not intersect (cross each other), the polygon is called simple; otherwise it is complex. In computer graphics, polygons (especially triangles) are often used to build images and models.

A simple concave hexagon
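Since "simple versus complex" comes down to whether any two non-adjacent edges cross, the test is easy to express in code. Here is a small illustrative sketch (not from the original text) using the usual orientation-sign trick; it ignores degenerate collinear-overlap cases for brevity.

```python
def _segments_cross(p, q, r, s):
    """Proper intersection test for segments pq and rs via orientation signs."""
    def orient(a, b, c):
        v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        return (v > 0) - (v < 0)
    return (orient(p, q, r) != orient(p, q, s)
            and orient(r, s, p) != orient(r, s, q))

def is_simple(polygon):
    """A polygon is simple if no two non-adjacent edges intersect."""
    n = len(polygon)
    edges = [(polygon[i], polygon[(i + 1) % n]) for i in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if j in ((i + 1) % n, (i - 1) % n):  # skip edges sharing a vertex
                continue
            if _segments_cross(*edges[i], *edges[j]):
                return False
    return True

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
bowtie = [(0, 0), (1, 1), (1, 0), (0, 1)]   # two edges cross in the middle
print(is_simple(square), is_simple(bowtie))  # True False
```

The bowtie fails because its two diagonal edges cross, which is exactly what makes it a complex polygon rather than a simple one.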
My Rigid Thinking And Ritualising

I have underlined the ones I think figure me out and bullet-pointed the ones that affect me ...most!

Asperger syndrome is one of the autism spectrum disorders, and is classified as a developmental disorder that affects how the brain processes information. People with Asperger syndrome have a wide range of strengths, weaknesses, skills and difficulties.

- Common characteristics include difficulty in forming friendships, communication difficulties (such as a tendency to take things literally), and an inability to understand social rules and body language.

Asperger syndrome is also known as Asperger Disorder. Although Asperger syndrome cannot be cured, appropriate intervention and experience can help individuals develop skills, compensatory strategies and coping skills. Social skills training, which teaches individuals how to behave in different social situations, is often considered to be of great value to people with Asperger syndrome. Counselling or psychological therapy (including Cognitive Behaviour Therapy) can help people with Asperger syndrome understand and manage their behavioural responses.

Typical adult symptoms

More males than females are diagnosed with Asperger syndrome. While every person who has the syndrome will experience different symptoms and severity of symptoms, some of the more common characteristics include:
- Average or above-average intelligence
- Difficulties in empathising with others
- Difficulties engaging in social routines such as conversations and 'small talk'
- Problems with controlling feelings such as anger, depression and anxiety
- A preference for routines and schedules, which can result in stress or anxiety if a routine is disrupted
- Specialised fields of interest or hobbies

A person with Asperger syndrome may have trouble understanding the emotions of other people, and the subtle messages sent by facial expression, eye contact and body language are often missed or misinterpreted. Because of this, people with Asperger syndrome might be mistakenly perceived as being egotistical, selfish or uncaring.

- These are unfair labels, because the person concerned is neurologically unable to understand other people's emotional states. People with Asperger syndrome are usually shocked, upset and remorseful when told their actions were hurtful or inappropriate.

Sexual codes of conduct

Research into the sexual understanding of people with Asperger syndrome is in its infancy. Studies suggest that individuals with Asperger syndrome are as interested in sex as anyone else, but many struggle with the myriad of complex skills required to successfully negotiate intimate relationships. People with Asperger syndrome can sometimes appear to have an 'inappropriate', 'immature' or 'delayed' understanding of sexual codes of conduct. This can sometimes result in sexually inappropriate behaviour. For example, a 20-year-old with Asperger syndrome may display behaviours which befit a teenager. Even individuals who are high achieving and academically or vocationally successful can have trouble negotiating the 'hidden rules' of courtship.

Common issues for partners

Some people with Asperger syndrome can successfully maintain relationships and parent children. However, like most relationships, there are challenges. A common marital problem is unfair distribution of responsibilities. For example, the partner of a person with Asperger syndrome may be used to doing everything in the relationship when it is just the two of them.
However, the partner may need practical and emotional support once children come along, something the person with Asperger syndrome may be ill-equipped to provide. When the partner expresses frustration or becomes upset that they are given no help of any kind, the person with Asperger syndrome is typically baffled. Tension in the relationship often makes their symptoms worse.

An adult's diagnosis of Asperger syndrome often follows their child's diagnosis of autism spectrum disorder. This 'double whammy' can be extremely distressing to the partner, who has to cope simultaneously with both diagnoses. Counselling, or joining a support group where they can talk with other people who face the same challenges, can be helpful. Some common issues for partners of people with Asperger syndrome include:
- Feeling overly responsible for their partner.
- Failure to have their own needs met by the relationship.
- Lack of emotional support from family members and friends who do not fully understand or appreciate the extra strains placed on a relationship by Asperger syndrome.
- A sense of isolation, because the challenges of their relationship are unique and not easily understood by others.
- Frustration, since problems in the relationship do not seem to improve despite great efforts.
- Doubting the integrity of the relationship, or frequently wondering about whether or not to end the relationship.
- Difficulties in accepting that their partner will not 'recover' from Asperger syndrome.

After accepting that their partner's Asperger syndrome cannot be 'cured', partners can often experience emotions such as guilt, despair and disappointment.

The Commonwealth Department of Families, Housing, Community Services and Indigenous Affairs (FaHCSIA), in conjunction with a range of specialist employment services, helps to place people with disabilities in the workforce. A person with Asperger syndrome may find their job opportunities limited by their disability. It may help to choose a vocation that takes into account their symptoms and capitalises on their strengths rather than highlighting their weaknesses.

Career suggestions for visual thinkers

The following career suggestions are adapted from material written by Temple Grandin, who has high-functioning autism and is an assistant professor at Colorado State University, USA. Suggestions include:
- Video game designer

Career suggestions for those good at mathematics or music
- Journalist, copy editor
- Piano (or other musical instrument) tuner

Where to get help

Things to remember
- A person with Asperger syndrome often has trouble understanding the emotions of other people, and the subtle messages sent by facial expression, eye contact and body language are often missed or misinterpreted.
- Social skills training, which teaches people with Asperger syndrome how to behave in different social situations, is often considered to be of great value to individuals with this syndrome.
Magnitude 7.6, Carlsberg Ridge, 2003 July 15 20:27:50 UTC

This earthquake occurred on the Carlsberg Ridge, a mid-ocean ridge system located in the Arabian Sea between India and northern Africa. The ridge marks the boundary between the Indian and African plates, and near the epicenter the Indian plate is moving away from the African plate at a rate of 33 mm/yr in a northeasterly direction. The Carlsberg Ridge is a slow-spreading ridge with rough topography and a depth that varies from 1,700 to 4,400 meters.

Mid-ocean ridges are divergent plate boundaries, where two tectonic plates move apart from each other. New oceanic crust is formed as magma rises up between the two diverging plates. Active spreading ridges are offset by zones known as transform faults, where plates slide horizontally past each other, neither destroying nor forming crust. This gives the plate boundary a zig-zag pattern. Ocean ridges are the longest linear uplifted features of the Earth's surface and are marked by a belt of shallow earthquakes. Earthquakes can be caused by the release of tensional stress in the uplifted ridge or by the horizontal movement of plates along the transform faults.
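As a rough sanity check on that spreading rate, the arithmetic is simple: at 33 mm/yr, total plate separation scales linearly with time. A minimal Python sketch (the time spans chosen are illustrative, not figures from the text):

```python
# Back-of-the-envelope spreading calculation (illustrative only; 33 mm/yr
# is the spreading rate quoted in the text for this part of the ridge).
SPREADING_RATE_MM_PER_YR = 33.0

def separation_km(years: float) -> float:
    """Total plate separation accumulated over `years`, in kilometres."""
    return SPREADING_RATE_MM_PER_YR * years / 1e6  # mm -> km

for yrs in (1e4, 1e6, 1e7):
    print(f"{yrs:>12,.0f} years -> {separation_km(yrs):8.2f} km")
```

Over a million years this rate opens about 33 km of new ocean floor, which is why such ridges reshape ocean basins only on geological timescales.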
Information for Education & Childcare Programs

Why should we be concerned about the spread of flu in schools?
Students can get sick with flu, and schools may act as a point of spread where students can easily pass flu to other students and their families.

What can schools and childcare programs do to prepare for flu response during the upcoming school year?
- Review and revise existing pandemic plans and focus on protecting high-risk students and staff.
- Update student and staff contact information as well as emergency contact lists.
- Identify and establish a point of contact with the Ottawa County Department of Public Health.
- Develop a plan to cover key positions when a staff person stays home because they are sick.
- Set up a separate room for care of sick students or staff until they can be sent home.
- Utilize the Ottawa County Department of Public Health education campaign tools and resources.
- Promote preventative behaviors among school personnel through newsletters, employee portals and other existing communication mechanisms.
- Identify ways to increase social distance (the space between people).
- Develop a school dismissal plan and options for how school work can be continued at home (e.g., homework packets, web-based lessons, phone calls) if school is dismissed or students are sent home when sick. Communicate this plan to all community members who would be affected.
- Help families understand the important roles they can play in reducing the spread of flu in schools through newsletters, your school website and other parent communications.
- Further guidance for higher education, camps and other groups is offered on the CDC website.

What can families, students, and personnel do to keep from getting sick and spreading flu?
- Practice good hand hygiene. Students and staff members should wash their hands often with soap and water, especially after coughing or sneezing. Alcohol-based hand cleaners are also effective in the absence of soap and water.
- Practice respiratory etiquette. The main way that the flu spreads is from person to person in the droplets produced by coughs and sneezes, so it's important to cover your mouth and nose with a tissue. In the absence of a tissue, cough or sneeze into your elbow or shoulder instead of your hands.
- Stay home if you are sick. Keeping sick students and staff at home means that they keep their viruses to themselves rather than sharing them with others.
- School personnel and teachers should be good role models by not only teaching but practicing flu prevention.

How long should a sick student or staff member be kept home?
Students and staff with symptoms of flu should stay home for at least 24 hours after they no longer have a fever or do not feel feverish, without using fever-reducing drugs. If flu conditions become more severe, these recommendations may change.

Can the virus live on surfaces, such as computer keyboards?
Yes. Flu viruses may be spread when a person touches droplets left by coughs and sneezes on hard surfaces (such as desks or door knobs) or objects (such as keyboards or pens) and then touches his or her mouth or nose. Schools should consider increasing the cleaning frequency of high-contact areas.

What is the guidance for K-12 school closure or dismissal?
Based on current flu conditions, the decision to selectively dismiss a school should be made locally and should balance the risks of keeping students in school against the social disruption that dismissal can cause. School dismissals may also be considered based on the population of an individual school; for example, a school for pregnant teens may choose to dismiss sooner because of the risk factors for its population. School officials should work closely and directly with the Ottawa County Department of Public Health when deciding whether or not to dismiss a school or schools. The decision should weigh the number and severity of cases in an outbreak, the risk of further flu spread, the benefits of dismissal, and the problems that dismissal can cause for families and communities.
The evolution of tropical fruit traits associated with long-distance dispersal by mammals or birds has contributed to the disjunct tropical biogeographical distribution of rain forest plants. This is the main conclusion of a new study published in the Journal of Biogeography by an international research team, including Daniel Kissling, a researcher at the Institute for Biodiversity and Ecosystem Dynamics at the University of Amsterdam.

Each fruit has specific characteristics called fruit traits: for example, fruit size, shape and colour. In the newly published study, led by former UvA postdoctoral researcher Renske Onstein, the team examined how these fruit traits relate to long-distance dispersal by birds and mammals. Onstein: ‘We tested this in the Annonaceae plant family, which has globally around 2,400 species and has historically colonised all continents and their rainforests. It is an important family for fruit-eating (frugivorous) animals, because most species have fleshy fruits.’

In October 2015 a team of researchers, including Renske Onstein, travelled to the rain forest in Borneo to collect Annonaceae plants. Afterwards the researchers combined their field knowledge and the compiled trait data with a phylogenetic framework to infer the historical biogeography and long-distance dispersals of these tropical plants. What they found is that sets of correlated fruit traits (dispersal syndromes) relate well to historical long-distance dispersal events by animals. They showed, for instance, that historical long-distance dispersal events are characteristic of plants with large fruits that have dull colours, typically dispersed by large mammals.

Traditionally, the disjunct biogeographic distribution of rain forest plants on different continents has been explained by ‘vicariance’, i.e. the break-up of the Gondwanan supercontinent. ‘In contrast to this traditional view, our study shows that over-land and across-water dispersal of more than 1,000 km is a plausible scenario to explain much of the disjunct distributions of tropical plant lineages in the world’, says UvA researcher Daniel Kissling. ‘Hence, our study sheds light on the dispersal mechanism and potentially the animals that may have facilitated these long-distance dispersals’, continues Kissling.

Long-distance dispersal is important for plant survival and may also play an important role in escaping current and future global warming. For tropical plants that rely on frugivorous animals for seed dispersal, it may be difficult to survive ongoing global changes if large-bodied animal dispersers in the tropics continue to decline. Onstein: ‘Some of the traits we identified in this study may provide insights into which plants depend on which types of animal dispersers to move across long distances, and thus can help species to survive under global warming if considerable amounts of habitat continue to remain.’

Renske E. Onstein, W. Daniel Kissling, Lars W. Chatrou, Thomas L. P. Couvreur, Hélène Morlon, and Hervé Sauquet (2019). Which frugivory-related traits facilitated historical long-distance dispersal in the custard apple family (Annonaceae)? Journal of Biogeography. DOI: https://doi.org/10.1111/jbi.13552
Enslaved African Americans working and living on Mississippi’s plantations faced conditions of abject poverty. Food rations provided by owners often did not provide enough calories or variety. In most cases, it was cheaper for slave owners to allow the slaves to raise and acquire their own food than to provide full rations. Within this economic context, slaves overcame nutritional deficits by supplementing their diets with food resources they acquired themselves. Their primary methods included tending their own gardens, raising their own livestock, and hunting, fishing, and gathering wild food resources. Slaves practiced these subsistence activities not only to supplement rationed food but also to participate in a trade network with their self-acquired goods and to achieve some autonomy in their lives. Accounts by former slaves provide some of the most direct evidence regarding the subsistence economy practiced within the slave quarters. Former slaves who were interviewed in the 1930s through the Federal Writers’ Project of the Works Progress Administration (WPA) provided rich detail regarding the subsistence economy. Charlie Davenport, a former slave from Natchez, recounted that “almost every slave had his own little garden patch and was allowed to cook out of it.” Most of the upkeep in the gardens was carried out on Saturdays and/or Sundays, which were free days on many plantations. Favorite garden items mentioned in the WPA accounts included corn, sweet potatoes, onions, squash, and collard greens. The WPA accounts provide additional information regarding other subsistence practices, including hunting, fishing, and collecting. Favorite game included deer, rabbits, opossums, raccoons, wild turkeys, and rattlesnakes. Davenport also mentioned collecting dewberries and persimmons for wine and gathering black walnuts and storing them under the cabins to dry. A number of archaeological investigations at antebellum plantations throughout the South have confirmed that slave owners often permitted their slaves to tend gardens and raise livestock within the slave quarters area of plantations and to hunt, fish, and gather wild food resources. Evidence shows that slaves cultivated small garden plots adjacent to their cabins and kept livestock in small pens. A variety of fruits and vegetables were grown in the gardens, including beans, peas, collard greens, corn, squash and pumpkins, onions, okra, potatoes (including sweet potatoes), watermelons, and muskmelons. Poultry were the most common livestock raised by slaves, though many also raised pigs and goats. Archaeological evidence indicates that slaves harvested a wide range of wild species. At Saragossa Plantation near Natchez, bones of wild animals discovered within the slave quarters area indicated that slaves there regularly hunted and fished for wild food, including opossum, deer, turtle, gar, sucker, and catfish. The WPA slave narratives offer evidence that the food resources grown, raised, hunted, fished, and gathered by the slaves provided a basis on which they entered an informal market economy. Many sold surplus food goods to their masters, to overseers, or at markets and were allowed to keep the proceeds of their sales, which they used to purchase other goods. Former Mississippi slave Pete Franks reported saving ten dollars from selling vegetables grown in his garden, which he used to buy “lots of pretties.” For many slaves, exercising self-sufficiency through subsistence activities was a way to work for their own interests. 
Tending their gardens, raising their livestock, and fishing, hunting, and gathering wild resources undoubtedly allowed slaves to feel some control in their lives and were likely precious occupations and pastimes.
Mercury has been frequently used in thermometers because it remains in liquid form throughout a wide range of temperatures: -37.89 degrees Fahrenheit to 674.06 degrees Fahrenheit. In a thermometer, a glass bulb attached to a glass capillary tube is filled with mercury. The rest of the tube may be a vacuum, or it may be filled with nitrogen. As the mercury heats up, it rises in the tube, and as it cools, it retracts back into the bulb. The height at which the mercury rests corresponds to calibrated marks on the side of the tube, allowing you to read the temperature of the item or air that is being measured.

Mercury will freeze solid at -37.89 degrees F, and if there is nitrogen in the space above the mercury, the nitrogen can flow down and become trapped below the mercury when it thaws. The thermometer will then need to be taken in for repair before it can be used again. For this reason, mercury thermometers are not recommended for cold climates and should be brought indoors when the temperature starts dipping below -30 degrees F.

Common Uses Today
Best used to measure high temperatures, mercury thermometers are still widely used in meteorology and in high-temperature settings such as autoclaves, which are high-pressure vessels used to sterilize or process equipment. In some cases, there are federal or state regulations that require the use of mercury-containing thermometers, although alternatives such as digital thermometers and non-mercury liquid-in-glass thermometers are being used more frequently.

Phased Out or Banned
Mercury is poisonous and is being phased out of use in many industries. In several states it is now illegal to sell mercury thermometers, and many countries have banned the use of mercury thermometers in hospitals and schools. The United States Environmental Protection Agency announced in 2010 that it will be working with industrial stakeholders and laboratories to phase out mercury-containing thermometers to reduce the release of mercury into the environment through spills, disposal and breakage.
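For readers who think in Celsius, the Fahrenheit range quoted above converts with the standard formula C = (F - 32) x 5/9. A small sketch using the values from the opening paragraph:

```python
def f_to_c(deg_f: float) -> float:
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (deg_f - 32.0) * 5.0 / 9.0

# Mercury's liquid range as quoted in the text, in Fahrenheit.
freeze_f, boil_f = -37.89, 674.06
print(f"freezing point: {f_to_c(freeze_f):6.1f} C")  # ~ -38.8 C
print(f"boiling point:  {f_to_c(boil_f):6.1f} C")    # ~ 356.7 C
```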
Colds are very common, particularly during the winter months. Colds can affect the nose, the throat and upper airways, and common symptoms include coughing, fever, sore throat, sneezing, blocked or runny nose and general congestion. They are caused by about 200 different viruses and often leave people feeling quite unwell. Symptoms of a cold tend to be mild to moderately severe.

The flu is a viral infection affecting your nose, throat and sometimes your lungs. Typical symptoms of flu include fever, sore throat and muscle aches. Both colds and flu can lead to complications, such as pneumonia, which can sometimes lead to death.

Three different types of influenza viruses infect humans: types A, B and C. Only influenza A and B cause major outbreaks and severe disease. There is a vaccine available for the flu, and it's recommended that 'at risk' people, such as the elderly or those with chronic illnesses, have an annual flu vaccination. Flu viruses circulating in the community continually change, and immunity from the vaccine doesn't last a long time, which is why yearly vaccination is recommended.

Good hygiene is one of the most important ways to help prevent colds and flu. Antibiotics only work for bacterial infections, so they won't work for colds and flu, which are caused by viruses.

If you are concerned about any symptoms of a cold or flu, please book an appointment to see one of our doctors. For all matters concerning flu and cold treatment, please contact MedCentres.
Write a 150- to 250-word response to each of the following questions:
• How is discrimination different from prejudice and stereotyping?
• What are the causes of discrimination?
• How is discrimination faced by one identity group (race, ethnicity, religious beliefs, gender, sexual orientation, age, or disability) the same as discrimination faced by another? How are they different?
Influenza (Flu)

Influenza, commonly called the "flu," is a contagious respiratory illness caused by influenza viruses. Symptoms include fever, headache, extreme tiredness, dry cough, sore throat, runny or stuffy nose, and muscle aches. In the United States, influenza is associated with approximately 200,000 hospitalizations each year. The Centers for Disease Control and Prevention (CDC) estimate that during the three decades spanning 1976-2007, influenza-associated deaths ranged from 3,000 to 49,000 annually.

Everyone 6 months and older should get a flu vaccine as soon as vaccine is available each fall. Since the virus changes each year, it is necessary to receive a new influenza vaccine each year.

People at high risk for complications:
• Pregnant women
• Children younger than five years of age
• Adults 50 years of age and older
• Anyone who is immunocompromised due to disease or medication
• People of any age with chronic medical conditions
• People who work or live in nursing homes and other long-term care facilities, as well as health care and day care workers

In addition, practicing good health habits such as hand washing and covering your nose and mouth when coughing or sneezing helps prevent the spread of influenza. If diagnosed within two days of illness, anti-viral medication may be prescribed to treat influenza (note that antibiotics will not work, as influenza is caused by a virus and antibiotics are only useful for diseases caused by bacteria).

A note on the often confusing terminology of "flu": Technically, "flu" is the disease you get when you are infected with an influenza virus. However, there are many other respiratory viruses, such as parainfluenza, RSV, adenovirus, enterovirus, and human metapneumovirus, that can cause the same symptoms as influenza (fever, cough, sore throat). Furthermore, many use the term "stomach flu" or "GI flu" to describe vomiting, nausea, or diarrhea. However, these symptoms are rarely found with infection by the influenza virus and are usually caused by other viruses or bacteria. In these pages, when we use "flu," we are referring to the illness caused by infection with an influenza virus.
Of the planets in the solar system, Jupiter has the strongest magnetic field. The magnetic field interacts with the solar wind to form a bubble that is called a magnetosphere, and within this bubble an energetic plasma emits radio waves that make Jupiter one of the brightest radio sources in the sky. The magnetic field of Jupiter is nearly a dipole field that is tilted 10° to Jupiter's rotation axis. The magnetic field rotates with the planet. The strength of the magnetic field is estimated to be 4.2 gauss at the equator and 10 to 14 gauss at the poles; by way of comparison, Earth's magnetic field is 0.3 gauss at the equator.

The basic theory for all magnetic field generation in astrophysics is dynamo theory. In this theory, magnetic fields are created by the convection of a conducting fluid. In Jupiter, the conducting fluid is the metallic hydrogen of the inner mantle. The dynamo converts gravitational potential energy into magnetic field energy. As Jupiter shrinks, gravitational potential energy is converted into heat at the core of the planet. The hot fluid of the inner mantle is buoyant, so it rises, transferring some of its thermal energy into the kinetic energy of convective motion. Some of the energy in convective motion is extracted in the process of creating the magnetic field. The greater the kinetic energy of the convective motion, the greater the energy that is put into the magnetic field.

The cartoon picture of how the dynamo mechanism works is somewhat reminiscent of a taffy pull. In a perfect conductor, magnetic field lines are frozen to the fluid they pass through, so when an element of the fluid moves, it carries a piece of magnetic field with it. If there is a shear in the fluid perpendicular to the magnetic field lines, the magnetic field lines are stretched, and the magnetic field strength is amplified. It is through this mechanism that energy is transferred from a convective fluid to the magnetic field. A hard limit on the strength of the magnetic field is set by the amount of kinetic energy carried by the fluid. Other factors that limit the strength of the magnetic field are the electrical resistance within the fluid, the increased buoyancy of magnetized fluids, and the tendency of conductors to expel magnetic fields.

As with Earth, Jupiter's magnetic field creates a teardrop-shaped bubble in the solar wind around Jupiter. The boundary of this bubble is called the magnetopause. The magnetopause is the boundary between the plasma that is static within Jupiter's magnetic field and the solar wind. In the direction of the Sun, the magnetopause ranges from 45 to 110 Jupiter radii (3 to 7.7 million kilometers) from the planet. From 10 to 30 Jupiter radii (0.7 to 2 million kilometers) ahead of the magnetopause in the direction of the Sun is a shock caused by the supersonic solar wind striking the subsonic cushion of wind ahead of the magnetopause. The magnetosphere has a radius of 150 to 200 Jupiter radii (10.5 to 15 million kilometers), and it can trail the planet for half a billion kilometers, although this length can vary dramatically over time.

The plasma within the magnetosphere is very energetic, with nonthermal electrons of energies above 30 MeV and nonthermal ions of energies above 7 MeV.[1] The ions are composed of hydrogen, helium, oxygen, and sulfur. The first two elements are no surprise, given that both Jupiter and the solar wind are predominantly hydrogen and helium.
In the magnetosphere, the hydrogen is thought to come from Jupiter's atmosphere, and the helium comes from the solar wind. The sulfur, however, is quite surprising unless you pay attention to what the moons are doing. The oxygen and sulfur in the magnetosphere are from the moon Io, which has active volcanoes powered by the tidal heating of Io in its orbit of Jupiter. When a volcano erupts on Io, it sends a fountain of sulfur and oxygen high above the surface, and some of this material is injected into the magnetosphere.

Much of the heating of the magnetospheric plasma comes from the compression of the magnetic field as Jupiter rotates. As the magnetic field is drawn from the more spacious dark side of Jupiter to the sunlit side, where the solar wind pushes the boundary of the magnetosphere closer to Jupiter, the magnetic field is compressed. A fundamental property of magnetic fields is that when they change with time, an electric field is generated. This electric field accelerates plasma particles, heating the magnetospheric plasma. The energy that goes into the plasma is extracted from Jupiter's rotation.

Storms of radio bursts are emitted from Jupiter's magnetosphere in the 0.6 to 30 MHz range. Individual bursts last from seconds to minutes, and the storms last from one to two hours. This decameter radio emission is cyclotron emission, which is the emission produced by an electron spiraling in a magnetic field. These storms are synchronized to the orbit of Io around Jupiter; as Io moves through Jupiter's magnetic field, it behaves as a conductor, and it generates an electric field. This field drives a current that flows along the magnetic field lines to Jupiter's atmosphere in much the same way that a changing electric field drives a current along a coaxial cable. The radio emission is associated with this current.

[1] The electron volt (eV) is a unit of energy. Formally, it is the amount of energy an electron acquires when it passes through a 1 volt potential. Optical photons carry an energy of roughly 1 eV. The unit MeV is 10⁶ eV, and it is the unit of energy encountered when characterizing gamma-rays. The rest mass energy of the electron is 0.511 MeV.
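Two of the figures above invite quick numerical cross-checks: a near-dipole field falls off as the inverse cube of distance, and cyclotron emission occurs at f = eB/(2*pi*m_e), about 2.8 MHz per gauss. A minimal Python sketch (the sample distances, such as Io's orbit at about 5.9 Jupiter radii, and the cross-check itself are my own illustrative additions, not claims from the text):

```python
import math

B_EQ_GAUSS = 4.2        # equatorial surface field quoted in the text
E_CHARGE = 1.602e-19    # electron charge, C
M_ELECTRON = 9.109e-31  # electron mass, kg

def dipole_field(r_jupiter_radii: float) -> float:
    """Equatorial field of an ideal dipole at distance r (in Jupiter radii), in gauss."""
    return B_EQ_GAUSS / r_jupiter_radii**3

def cyclotron_freq_mhz(b_gauss: float) -> float:
    """Electron cyclotron frequency f = eB / (2*pi*m_e), in MHz (1 G = 1e-4 T)."""
    return E_CHARGE * (b_gauss * 1e-4) / (2 * math.pi * M_ELECTRON) / 1e6

# Inverse-cube falloff: surface, Io's orbit (~5.9 R_J), inner magnetopause (~45 R_J).
for r in (1.0, 5.9, 45.0):
    print(f"r = {r:5.1f} R_J -> B ~ {dipole_field(r):.2e} G")

# The 30 MHz upper edge of the decameter bursts implies a source field of ~11 G,
# consistent with the 10 to 14 gauss polar estimate quoted earlier.
print(f"cyclotron: {cyclotron_freq_mhz(1.0):.2f} MHz per gauss")
print(f"30 MHz corresponds to B ~ {30 / cyclotron_freq_mhz(1.0):.1f} G")
```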
Information for the General Public

What are neutrinos?

During the 20th and 21st centuries, physicists have attempted to understand the fundamental building blocks of our universe. They have found evidence for many different particles that seem to make up the world about us. These particles come in several different sorts. Bosons transmit forces; a photon is a type of boson. Quarks often have high mass and interact very strongly with each other; they make up the heavy nuclei that sit at the centre of atoms. Leptons are usually lower in mass and don't interact as strongly as quarks. Electrons, which make up the outside of atoms, are a sort of lepton. Neutrinos are also a sort of lepton.

[Figure: The Standard Model of particle physics. Neutrinos are a type of lepton, shown with their charged partner-lepton in green.]

Leptons come in two different types. There are charged leptons (electrons, muons and taus) and neutrinos, one for each of the charged leptons. Electrons are the most familiar lepton: these are the particles that sit on the outside of an atom to make up the rich tapestry of chemicals in the universe. Their cousins, the neutrinos, are ghost particles, which the Neutrino Factory is designed to study. In particular, neutrinos have several special properties that we would like to understand.

- Neutrinos hardly interact with matter at all. If I were to try to catch a neutrino, it would take a lot of material before it slowed down.
- Neutrinos have hardly any mass. If I could give a neutrino a push, it wouldn't take much effort to make it travel very quickly.
- Neutrinos can change from one type to another. This strange flipping between different types of neutrino is called neutrino oscillation. Most physicists think that this oscillation occurs because neutrinos have mass.
- Anti-neutrinos may have different properties to neutrinos; we just don't know. This might partly explain why the universe is made mostly of matter and we don't see much antimatter.
- Thousands of billions of neutrinos pass through you every second! These neutrinos originate from cosmic rays travelling through space.
- Neutrinos are one of the least-studied particles. They are so difficult to make and detect that it is really difficult to do experiments with neutrinos.

More about the different sorts of particles can be found in the Particle Adventure: http://www.particleadventure.org/

Neutrino oscillations occur when a neutrino changes from one type to another. Neutrino oscillations are thought by many physicists to occur because neutrinos have mass. By measuring neutrino oscillations closely, we can calculate the mass of the neutrinos and other fundamental parameters, such as whether there is an asymmetry between matter and antimatter in neutrinos. These parameters are fundamental constants of the universe because there is no known way of calculating them from other parameters. That means that it is important that we can measure them and understand them; by understanding them we may be able to understand better why they take the values that they have.

What is a Neutrino Factory?

A Neutrino Factory is a special facility that we hope to build, designed for the study of neutrino oscillation. In a Neutrino Factory, physicists will make a beam of high-energy muons. A muon is a sort of lepton, like a heavy electron. Muons are unstable particles that decay into neutrinos. By pointing the muons in the right direction as they decay, we can fire neutrinos at detectors on the far side of the earth.
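Because the oscillation pattern depends on the ratio of baseline to beam energy, sending neutrinos through the earth to very distant detectors is what makes the effect measurable. As a hedged illustration, the standard two-flavour approximation is P = sin^2(2θ) · sin^2(1.27 Δm² L/E); the parameter values in the sketch below are roughly the measured atmospheric-oscillation values, chosen by me for illustration rather than taken from this text:

```python
import math

def p_oscillation(l_km: float, e_gev: float,
                  sin2_2theta: float = 1.0,
                  dm2_ev2: float = 2.5e-3) -> float:
    """Two-flavour oscillation probability:
    P = sin^2(2*theta) * sin^2(1.27 * dm^2 [eV^2] * L [km] / E [GeV])."""
    return sin2_2theta * math.sin(1.27 * dm2_ev2 * l_km / e_gev) ** 2

# Illustrative baselines (km) for a 5 GeV beam; longer baselines sample
# later parts of the oscillation pattern.
for l in (295, 2300, 7500):
    print(f"L = {l:5d} km, E = 5 GeV -> P ~ {p_oscillation(l, 5.0):.3f}")
```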
If the neutrino beam is measured as it leaves the end of the muon accelerator and again as it emerges on the far side of the world, we can look to see if the mix of the beam has changed, and hence observe neutrino oscillations.

[Figure: Neutrinos are manufactured at the muon facility, here shown in North America, and pass through the earth's mantle to a detector, here shown in India.]

By using muons to make neutrinos, we can make a source of neutrinos that is pure, intense and high energy, and both neutrinos and anti-neutrinos can be produced. Each of these characteristics makes our measurements more precise: a pure beam means that we can understand the mixture of neutrinos in our beam very well; an intense beam means that we can look for oscillation with lots of neutrinos; and a high-energy beam means that the chance of seeing each neutrino is higher. The presence of neutrinos and anti-neutrinos means that we can measure the difference between matter and antimatter. Together these factors make for a very precise experiment.

How do you build a Neutrino Factory?

A Neutrino Factory is constructed of many different parts. We need to make a beam of muons and then accelerate them up to high energy. Then we need to store the muons while they decay into neutrinos, and build detectors to measure the neutrino beam.

[Figure: A schematic of the particle accelerator facility for muon production and acceleration. Protons are created in the proton driver, shown at the top, before being accelerated onto a target where pions are created. These decay to muons, which are accelerated through a number of sections before entering the muon storage ring where they can decay.]

How can I make a muon beam?

Muons can be made from protons, one of the constituents of atomic nuclei. The protons are accelerated to high energy and then fired into a high-energy target, where they make particles called pions, which quickly decay into muons. The muon beam is initially very messy: when pions decay they fire muons in all sorts of directions with a broad range of energies. The muons are captured and controlled using very powerful magnets. Initially we seek to control the big range in energies using a special technique called energy-phase rotation, to make the muon beam into many smaller bunches of particles with a smaller range in energies. After that we seek to make the muon beam more parallel. We do this using a technique called ionisation cooling. By controlling the energy spread of the beam and making it more parallel, we can make the beam ready to be accelerated.

How can I accelerate muons?

Acceleration is achieved using cavities filled with very intense electromagnetic fields that are oscillating very quickly. The cavities have millions of volts between one side and the other, and this voltage flips hundreds of millions of times per second. The voltage needs to be as large as possible in order to accelerate the particles as quickly as possible, and the field needs to flip so that muons are not slowed down by the voltage once they have left the far side of the cavity. Three sorts of accelerators are used. It is harder to control the muons at low energy, so we choose to accelerate in a straight line to start with. Once the muons get to higher energy, we can re-use the linear accelerators by recirculating muons through special tear-drop-shaped accelerators. This allows us to use the same equipment several times, making the accelerator cheaper. Finally we can accelerate the beam in rings, which is an even less expensive technology.
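One reason the muons must be accelerated "as quickly as possible" is that they are unstable: relativistic time dilation stretches their roughly 2.2 microsecond rest lifetime, so higher-energy muons survive over much longer distances before decaying. A minimal sketch (the muon lifetime and mass are standard textbook values; the beam energies are my illustrative choices, not design figures from this text):

```python
C = 2.998e8          # speed of light, m/s
TAU_MU = 2.197e-6    # muon rest lifetime, s
M_MU_GEV = 0.1057    # muon mass, GeV/c^2

def mean_decay_length_km(energy_gev: float) -> float:
    """Mean lab-frame decay length (gamma * beta * c * tau) in km."""
    gamma = energy_gev / M_MU_GEV
    beta = (1 - 1 / gamma**2) ** 0.5
    return gamma * beta * C * TAU_MU / 1e3

for e in (0.2, 5.0, 25.0):
    print(f"{e:5.1f} GeV muon -> mean decay length ~ {mean_decay_length_km(e):8.1f} km")
```

At a few tens of GeV the mean decay length exceeds a hundred kilometres, which is what makes storing the muons in a ring long enough to point their decay neutrinos at distant detectors feasible.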
How do I make the muons into neutrinos?

The last parts of the muon facility are the storage rings. Muons decay at random times in the storage ring, and most of the neutrinos fire along the direction of motion of the muons. We try to point the muons at the neutrino detectors for as much of the time as possible, so that we can get as many neutrinos as possible into the detectors. This leads us to design rings that are racetrack-shaped and slope steeply, so that neutrinos travel into the mantle of the earth. While the top end of the rings is at ground level, the bottom end is many hundreds of metres below the earth's surface. Typically we would want to measure the neutrinos at more than one distance from the muon facility, so we would split the muon beam after acceleration and divert it into two or more storage rings. Also, we can calculate the properties of the neutrino beams as muons decay by carefully measuring the muon beams, so the storage ring also contains several instruments to precisely measure the muon beams.

How do I detect neutrinos?

There are several sets of detectors that measure the properties of the neutrinos. The properties of the neutrino beams are first measured at the muon facility using a Near Detector. Then the neutrino beams are measured where they leave the earth in Far Detectors. The detectors work by measuring particles such as electrons that are produced when neutrinos hit the material. These particles can be detected in a number of different ways. One method is to put material called scintillator into the detector. Scintillator emits a tiny flash of light every time a particle passes through, which can be detected using very sensitive cameras. Another method is to measure Cerenkov light. This is a form of light that some particles emit in a cone along their direction of travel; it can be measured in another sort of sensitive camera. Also, it is possible to put photographic emulsion in the path of the particles. The particles will leave a tiny defect in the emulsion, similar to the process which makes a photograph using light photons. This defect can be seen by eye or measured using automated robots. A final method to detect these particles is to put high-voltage cables through some material. The passage of particles causes sparks to form at the cables, which produces an electric signal that can be measured.

[Figure: A schematic of a Magnetised Iron Neutrino Detector. Alternating plates of steel and scintillator are used to study particles created as neutrinos interact with the detector material.]

These detectors share several features. They are usually magnetised so that we can tell the difference between matter and anti-matter. Charged particles travel in circles in magnetic fields. Anti-particles of charged particles have the opposite charge, which means that they circulate in the opposite direction. By measuring the direction of circulation of particles we can assess the charge, and so examine whether particles are matter or anti-matter, and so assess whether the original particle in the neutrino beam was a neutrino or an anti-neutrino. In addition, the detectors have very large mass. Neutrinos only interact with matter very rarely. By making the detectors very massive, we can increase the chance that a neutrino will interact with the detector and be observed. This increases the number of neutrinos that we can measure and so improves the sensitivity of our measurement.
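The charge-sign measurement described above comes from track curvature in the detector's magnetic field, and the same curvature also gives the particle's momentum. A standard textbook relation (not stated in this text) is p [GeV/c] ≈ 0.3 · B [T] · r [m] for a singly charged particle; a minimal sketch with illustrative numbers:

```python
def momentum_gev(b_tesla: float, radius_m: float) -> float:
    """Momentum of a singly charged particle from its bending radius.
    Textbook approximation: p [GeV/c] ~= 0.3 * B [T] * r [m]."""
    return 0.3 * b_tesla * radius_m

# Illustrative numbers only: a 1 T field and a few measured radii.
for r in (1.0, 5.0, 10.0):
    print(f"B = 1 T, r = {r:4.1f} m -> p ~ {momentum_gev(1.0, r):4.1f} GeV/c")
```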
This diagram illustrates a possible explanation for a series of intense bursts of energy seen by the NASA Swift satellite's Burst Alert Telescope on March 28, 2011. Subsequent Hubble Space Telescope observations showed that the blasts originated from the center of a dwarf spheroidal galaxy located nearly 4 billion light-years away. The unusual string of powerful outbursts likely arose when a star wandered too close to its galaxy's central black hole, weighing perhaps as much as 1 million times the mass of our Sun. Intense gravitational tidal forces tore the star apart, and the infalling gas continues to stream toward the hole. The black hole formed a jet along its spin axis. Because the jet is pointed toward Earth, astronomers see a powerful blast of X-rays and gamma rays.
[Figure: There are two ideas about how glaciers formed on Earth between about 717 and 630 million years ago, a time known as "Snowball Earth". The idea that at least two long glaciations happened during which communication between the ocean and the atmosphere was cut off (described in the top half of the image) is more likely based on the evidence we have today. In this scenario, the Earth was ice-free at 670 and 630 million years ago because carbon dioxide built up in the atmosphere. Credit: Zina Deretsky, National Science Foundation]

Scientists Find Signs of "Snowball Earth" Amidst Early Animal Evolution

There used to be sea ice floating on the tropical ocean, according to new evidence found by geologists. This was quite a long time ago, 716.5 million years ago, during a time known as Snowball Earth. This chilly time was among the greatest ice ages known to have taken place on Earth. Ice formed all over the planet, even in the tropics. It was in this frozen world that scientists believe the first animals evolved.

The geologists studied ancient tropical rocks and found evidence that ice was floating in the sea at the time when they formed. These rocks formed in the tropics long ago, but today they are located in a remote area of northwestern Canada thanks to the continent-moving abilities of plate tectonics. Based on magnetic information and the minerals in these rocks, scientists know that they used to be located at sea level in the tropics, just north of the equator.

Of course, scientists can't find ice preserved in rocks. So how did they figure out that it was icy near the equator? They found evidence in the rocks that is like a footprint showing that ice used to be there. This evidence includes pieces of rock that have been gouged by glaciers and pieces of rock that were carried out to sea within ice and then dropped to the seafloor as the ice melted.

A world covered with snow and ice must have been a challenging place to live. But life did survive during this time. This suggests that sunlight and water, which are needed by living things, were available somewhere on Earth. So the ocean could not have been entirely covered with ice. The geologists say that sea ice would have been moving and forming patches of open water. These open patches would have provided good places for life to survive. According to the fossil record, all of the major groups within the Domain Eukaryota (except perhaps animals) existed before the ice formed. Scientists have a hypothesis that the cooling climate during Snowball Earth may have allowed animals to evolve. In fact, times of stress in environments are thought to prompt new species to evolve.

Scientists don't know exactly what caused the glaciers to form or what caused them to melt, but there is evidence that at about the same time lots of volcanic eruptions were happening. This could mean the cold "Snowball Earth" time was either triggered by, or ended by, volcanic activity.
A selector is the name used to select a method to execute for an object, or the unique identifier that replaces the name when the source code is compiled. A selector by itself doesn't do anything. It simply identifies a method. The only thing that makes the selector method name different from a plain string is that the compiler makes sure that selectors are unique. What makes a selector useful is that (in conjunction with the runtime) it acts like a dynamic function pointer that, for a given name, automatically points to the implementation of a method appropriate for whichever class it's used with. Suppose you had a selector for the method run, and classes Dog, Athlete, and ComputerSimulation (each of which implemented a method run). The selector could be used with an instance of each of the classes to invoke its run method, even though the implementation might be different for each.

Getting a Selector

Compiled selectors are of type SEL. There are two common ways to get a selector:

At compile time, you use the compiler directive @selector:

SEL aSelector = @selector(methodName);

At runtime, you use the NSSelectorFromString function, where the string is the name of the method:

SEL aSelector = NSSelectorFromString(@"methodName");

You use a selector created from a string when you want your code to send a message whose name you may not know until runtime.

Using a Selector

You can invoke a method using a selector with performSelector: and other similar methods. For example, using an instance of the Dog class mentioned above:

SEL aSelector = @selector(run);
[aDog performSelector:aSelector];

(You use this technique in special situations, such as when you implement an object that uses the target-action design pattern. Normally, you simply invoke the method directly.)
How Down Syndrome Stops Cancer

Customized Down syndrome stem cells reveal a way to starve tumors.

For decades scientists have known that people with Down syndrome, who have an extra copy of chromosome 21, get certain types of cancer at dramatically lower rates than normal. Now, partly by using stem cells derived from the skin of an individual with Down syndrome, researchers at Children's Hospital Boston have pinpointed the gene that appears to underlie the cancer-protective effect.

The researchers say the results of their study, which were published today in Nature, may point to a promising new target for future cancer treatments. And according to stem-cell biologists, the work also highlights a growing trend in the field: harnessing disease-specific stem cells not as therapies but rather as models for understanding particular genetic disorders. Stem cells "can be useful not simply because you take them and transplant them," says Evan Snyder, director of the stem cells and regenerative medicine program at the Burnham Institute for Medical Research in San Diego. "They are useful as models of disease that reveal other kinds of therapies." Snyder was not involved in the new study.

The late Judah Folkman, a cancer researcher renowned for pioneering the notion that blocking angiogenesis (the growth of new blood vessels) can prevent tumors from thriving, hypothesized that the lower cancer rates associated with Down syndrome might be traced to anti-angiogenesis genes on the 21st chromosome. So Sandra Ryeom, a member of the Folkman Laboratory in the Vascular Biology Program at Children's Hospital, zeroed in on a region on chromosome 21 known to encode a regulator of blood-vessel growth called DSCR1. In chromosomally normal mice, the standard two copies of the Dscr1 gene produce just enough protein to help rein in normal blood-vessel growth, but not enough to stem the angiogenesis overload triggered by a developing tumor. But in mice with an artificial version of Down syndrome (and thus a third copy of the Dscr1 gene), Ryeom found that the surplus of DSCR1 protein kept abnormal angiogenesis, and the resulting tumor proliferation, in check.

While Ryeom and her colleagues suspect that DSCR1 works in concert with a handful of other chromosome 21 genes, they confirmed that the protein plays a central role in tumor suppression. A third copy of the Dscr1 gene alone was enough to stifle cancer formation in otherwise normal mice, though not to the same degree as in the Down syndrome mice.

To confirm that the gene is relevant in human cancers, Ryeom and her colleagues created a custom line of stem cells from skin cells taken from an individual with Down syndrome. Using a relatively new technique called induced pluripotent stem (iPS) cell reprogramming, researchers can express specific genes in differentiated adult cells and revert them to an earlier developmental state, where they are capable of giving rise to many different cell types. Human iPS cells offer a convenient means to study cancer growth. Injected into mice with compromised immune systems, they generate chaotic but benign tumors composed of many kinds of tissue. When the researchers injected iPS cells derived from a chromosomally normal individual, the resulting tumors spawned elaborate networks of blood vessels to feed themselves. But when Ryeom's team injected iPS cells derived from a Down syndrome patient, the tumors formed hardly any blood vessels at all.
In addition, the stem cell approach could allow the researchers to zero in on other potential anti-angiogenic proteins on chromosome 21 by tweaking gene copy numbers in the iPS cells. “We basically can map which genes are necessary in human Down syndrome cells to block blood-vessel growth,” says Ryeom. The iPS cells could also be used to test potential DSCR1-like drugs. “The idea of being able to combine a mouse model of disease with actual human cells in culture is very attractive,” says Jeanne Loring, director of the Center for Regenerative Medicine at the Scripps Research Institute in La Jolla, CA, who was not involved in the research. “It’s a really big step forward.” Now that Ryeom and her colleagues have shown the importance of the DSCR1 pathway in blocking tumors, the researchers are testing it as a potential target for cancer drugs. By chopping the protein into tiny pieces, they have identified the smallest chunk required to interfere with abnormal blood-vessel growth. Ryeom envisions using that chunk not just as a treatment for cancer, but also perhaps as a prophylactic.”If we could take this as sort of a preventative, vitamin-like therapy,” she speculates, “would it block all of us from having tumor cells grow into these huge, lethal masses?” Debabrata Mukhopadhyay, a professor of biochemistry and molecular biology at the Mayo Clinic Cancer Center in Rochester, MN, advises caution. He says that because the role of DSCR1 in normal development isn’t yet well understood, toying with its biological pathway might have unintended consequences. He is optimistic, though, that the new study will help researchers begin to decipher that mechanism. “If there is any distinct difference between DSCR1’s effect on pathological versus physiological angiogenesis, that needs to be resolved,” says Mukhopadhyay. “But this is a very important way of looking for anti-angiogenic therapy.”
Modeling is also known as Aided Language Input or Aided Language Stimulation. It is a research-based strategy to help build a strong foundation for AAC use and language learning. In aided language input, when partners (parents, teachers, and therapists) talk with people who use AAC, the partners also use the same AAC system to communicate. This helps teach AAC by example in real-life interactions. All AAC learners need to see what it looks like to communicate using their AAC systems in real conversations.

The idea is to use the AAC learner's system, or another similar AAC system, when you talk with the AAC learner. You don't need to model every single word you say, using the exact correct grammar, especially to start with. This would likely be overwhelming to all concerned. Instead, model one step above the AAC learner's current skill level. So if the AAC learner is not yet using the system to communicate in single words, model at the single-word level. For example, if you're leaving the classroom to go to the cafeteria, you can verbally say "It's time to go to the cafeteria" and press the "go" button on the AAC system when you say the word "go". Once the AAC learner is at the one-word level, you can step up your game and add a word when you model. So if you're leaving the house to go to see grandmother, you can verbally say "Let's go see Granny" and press "go" and "Granny" while you're speaking these words.

[Video: Aided Language Stimulation explained]

Like anything, the more you model, the easier it will be. So focus on modeling those key words, and don't think that you always need to model a grammatically complete sentence! The best way we can support our AAC users is to use the AAC system to talk ourselves. If someone spoke French, you would try to speak to them in French! For children learning to use AAC, AAC is their language, so you should talk to them using AAC! http://www.assistiveware.com/dos-and-donts-aac-use-aac-system

- Motivate, Model and Move Out of the Way - This PowerPoint slideshow, aimed at parents and caregivers, explains why and how aided language works in the home.
- PrAACtical Resources: Video Examples of Aided Language Input - A collection of videos including therapists, educators, and families using Aided Language Input.
- PrAACtical AAC: Why We Love Aided Language Input - This article links to 4 research articles demonstrating the benefits of Aided Language Input.
- Why We Do Aided Language Stimulation And You Should Too! - This guest blog, written by Mary-Louise Bertram, clearly explains why modeling is so important for those beginning to use AAC.
Musical training has recently gained additional interest in education as increasing neuroscientific research demonstrates its positive effects on brain development. Neuroimaging has revealed plastic changes in the brains of adult musicians, but it is still unclear to what extent they are the product of intensive music training rather than of other factors, such as preexisting biological markers of musicality. In this review, we synthesize a large body of studies demonstrating that benefits of musical training extend beyond the skills it directly aims to train and last well into adulthood. For example, children who undergo musical training have better verbal memory, second language pronunciation accuracy, reading ability and executive functions. Learning to play an instrument as a child may even predict academic performance and IQ in young adulthood. The degree of observed structural and functional adaptation in the brain correlates with intensity and duration of practice. Importantly, the effects on cognitive development depend on the timing of musical initiation due to sensitive periods during development, as well as on several other modulating variables. Notably, we point to motivation, reward and the social context of musical education, which are important yet neglected factors affecting the long-term benefits of musical training. Further, we introduce the notion of rhythmic entrainment and suggest that it may represent a mechanism supporting learning and the development of executive functions. It also hones temporal processing and orienting of attention in time, which may underlie enhancements observed in reading and verbal memory. We conclude that musical training uniquely engenders near and far transfer effects, preparing a foundation for a range of skills, and thus fostering cognitive development.

Keywords: musical training, brain plasticity, developmental neuroscience, music education, rhythmic entrainment

Psychological and neuroscientific research demonstrates that musical training in children is associated with heightening of sound sensitivity as well as enhancement in verbal abilities and general reasoning skills. Studies in the domain of auditory cognitive neuroscience have begun revealing the functional and structural brain plasticity underlying these effects. However, the extent to which the intensity and duration of instrumental training or other factors such as family background, extracurricular activities, attention, motivation, or instructional methods contribute to the benefits for brain development is still not clear. Music training correlates with plastic changes in auditory, motor, and sensorimotor integration areas. However, the current state of the literature does not lend itself to the conclusion that the observed changes are caused by music training alone (Merrett et al., 2013). In this article we briefly review the recent literature on how musical training changes brain structure and function in adult musicians and during development. We next report evidence for near and far transfer effects in various cognitive functions that are unprecedented in comparison to other long-term practice activities in childhood. Finally, we point out the important and overlooked role of other factors that could contribute to the observed cognitive enhancement as well as structural and functional brain differences between musicians and non-musicians.
We propose the mechanism of rhythmic entrainment and social synchrony as factors contributing to the plasticity-promoting role of musical training that is unique to music education. The proposed mechanism of rhythmic synchronization, by which musical training yields a unique advantage of transferrable skills, may provide a promising avenue of research explaining the beneficial effects on a developing brain. In addition, we pinpoint the potentially important role of genetic predispositions and motivation, which is rarely controlled for in the existing literature. The review focuses on studies investigating healthy children's and adults' response to formal musical education (primarily instrumental training) in terms of neuroplasticity observed with neuroimaging techniques, as well as behavioral effects on cognitive performance in various domains. Although we mention and acknowledge the enormous value of music therapy with the aim of restoring lost function in diseased or disabled individuals, this topic is outside the main focus of this review. Reviewing the progress in musical training research embraced in this article leads us to the promising supposition that the induced changes in brain development and plasticity are not only relevant in music-specific domains but also enhance other cognitive skills.

Cognitive, emotional and social functions in music perception and production

Listening to music requires certain perceptual abilities, including pitch discrimination, auditory memory, and selective attention in order to perceive the temporal and harmonic structure of the music as well as its affective components, and engages a distributed network of brain structures (Peretz and Zatorre, 2005). Music performance, unlike most other motor activities, in addition requires precise timing of several hierarchically organized actions and control over pitch interval production (Zatorre et al., 2007). Music, like all sounds, unfolds over time. Thus, the auditory cognitive system must depend on working memory mechanisms that allow a stimulus to be maintained on-line to be able to relate one element in a sequence to another that occurs later. The process of music recognition requires access and selection of potential predictions in a perceptual memory system (Dalla Bella et al., 2003; Peretz and Zatorre, 2005). Unlike speech, music is not associated with a fixed semantic system, although it may convey meaning through systems such as emotional appraisal (Koelsch, 2010; Trost et al., 2012) and associative memories. Furthermore, music is also known to have a powerful emotional impact. Neuroimaging studies have shown that musically induced emotions involve very similar brain regions to those implicated in non-musical basic emotions, such as the reward system, insula, orbitofrontal cortex, amygdala and hippocampus (Blood and Zatorre, 2001; Koelsch et al., 2006; Salimpoor et al., 2011; Trost et al., 2012). However, music can have a strong influence on the emotion of the listener as well as the performer: musical engagement can be experienced as highly emotional, not only in the case of stage fright (Studer et al., 2011) but also as highly rewarding (de Manzano et al., 2010; Nakahara et al., 2011). Furthermore, in a social context, making music in a group has been suggested to increase communication, coordination, cooperation and even empathy between in-group members (Koelsch, 2010).
Therefore, it is easy to conceive how musical training could have a positive impact on the well-being and social development of children and adults. Instrumental training is a multisensory motor experience, typically initiated at an early age. Playing an instrument requires a host of skills, including reading a complex symbolic system (musical notation) and translating it into sequential, bimanual motor activity dependent on multisensory feedback; developing fine motor skills coupled with metric precision; memorizing long musical passages; and improvising within given musical parameters. Music sight-reading calls for the simultaneous and sequential processing of a vast amount of information in a very brief time for immediate use. This task requires, at the very least, interpretation of the pitch and duration of the notes (written on the two staves of a piano score) in the context of the prespecified key signature and meter, detection of familiar patterns, anticipation of what the music should sound like, and generation of a performance plan suited for motor translation. Formal musical instruction therefore trains a set of attentional and executive functions, which have both domain-specific and general consequences.

The musician's brain: plasticity and functional changes due to musical training

Given the engagement of multiple cognitive functions in musical activities, it seems natural that in highly trained musicians the brain networks underlying these functions would show increased plasticity. Several recent review papers have critically assessed the effects of musical training on brain plasticity based on the neuroimaging literature accumulated to date (Herholz and Zatorre, 2012; Barrett et al., 2013; Moreno and Bidelman, 2013). Among others, it has been reported that, apart from anatomical differences in auditory and motor cortices, there are structural differences (usually in the form of increased gray matter volume) also in somatosensory areas, premotor cortex, inferior temporal and frontal regions, as well as the cerebellum in the brains of musicians compared to non-musicians (see Barrett et al., 2013). Several longitudinal studies have found a correlation between the duration of musical training and the degree of structural change in white matter tracts (Bengtsson et al., 2005), including the corpus callosum (Schlaug et al., 2005). While it may not be surprising that structural and functional differences are found in those brain regions that are closely linked to skills learned during instrumental music training (such as independent fine motor movements in both hands and auditory discrimination), differences outside of these primary regions are particularly interesting (for instance, in the inferior frontal gyrus in Sluming et al., 2002). Such findings indicate that plasticity can occur in brain regions that either have control over primary musical functions or serve as multimodal integration regions for musical skills, possibly mediating the transfer of musical training onto other skills. For example, a recent study investigating resting-state activity measured with fMRI found that musicians have increased functional connectivity in motor and multi-sensory areas compared to non-musicians (Luo et al., 2012); a sketch of how such connectivity is computed follows below.
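To make the connectivity measure concrete: resting-state functional connectivity of the kind reported by Luo et al. (2012) is commonly quantified as the pairwise correlation between regional fMRI time series. Below is a minimal Python sketch of that computation; the region labels and the simulated data are our own illustrative assumptions, not material from the study.

```python
import numpy as np

# Toy resting-state example: rows = time points (fMRI volumes),
# columns = regions of interest (ROIs). Real studies use preprocessed
# BOLD signals; here we simulate data purely for illustration.
rng = np.random.default_rng(0)
n_volumes = 200
rois = ["motor", "auditory", "somatosensory", "visual"]
bold = rng.standard_normal((n_volumes, len(rois)))

# Inject a shared signal into the motor and auditory ROIs to mimic the
# kind of motor/multisensory coupling reported in musicians.
shared = rng.standard_normal(n_volumes)
bold[:, 0] += shared
bold[:, 1] += shared

# Functional connectivity matrix: Pearson correlation between all pairs
# of ROI time series (np.corrcoef expects variables in rows).
fc = np.corrcoef(bold.T)

for i, a in enumerate(rois):
    for j, b in enumerate(rois):
        if i < j:
            print(f"r({a}, {b}) = {fc[i, j]:+.2f}")
```

A group comparison would then ask whether such correlations are reliably higher in musicians than in non-musicians.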
The Luo et al. (2012) finding shows that long-term musical training influences functional brain connectivity even in research designs where no task is given, and suggests that in musicians motor and multi-sensory networks may be better trained to act jointly. In the next section, we review the effects of musical training on cognitive functions and brain plasticity and discuss the role of the age at commencement. However, we note that the evidence for musical training-induced brain plasticity is largely correlational, owing to the number of additional variables that have not been controlled for in most of the (cross-sectional) studies (Merrett et al., 2013), and that there are unanswered questions surrounding the attribution of causal influence to musical training alone. The few random group assignment studies that have been conducted to date typically include a control group of participants who attend theater play, dance (Young et al., 2013), or visual arts classes (Moreno et al., 2009; Moreno and Bidelman, 2013). While the methodological and subject-specific considerations of this matter have been discussed elsewhere (Barrett et al., 2013; Merrett et al., 2013), in section Variables Modulating Brain Plasticity via Musical Training we propose possible unacknowledged mechanisms that enable musicians to excel in many areas unrelated to musical skill (near- and far-transfer skills described in section Effects on Cognitive Functions). Namely, we identify the higher efficiency of attentional and memory processes engendered by rhythmic entrainment, as well as an extension of this phenomenon to the social synchrony that is evoked when people sing, play music, or dance together in synchrony. To summarize, in Figure 1 we propose a schema depicting the transfer skills that are enhanced by instrumental musical training, including the modulating factors discussed in sections Effects of Musical Training in Childhood and Variables Modulating Brain Plasticity via Musical Training.

Figure 1. Schematic representation of near and far transfer skills that benefit from instrumental musical training. The inner rectangle lists variables modulating the influence of musical training on cognitive development (see main text, in particular section ...).

Effects of musical training in childhood

Correlational and interventional studies of children undergoing music training consistently show that they perform better in the areas closely associated with music: fine motor skill, rhythm perception, and auditory discrimination. There is also strong evidence for near-transfer effects of these abilities to phoneme discrimination, as well as far-transfer effects to vocabulary and the non-verbal reasoning subtests of general intelligence tests. While near-transfer effects (transfer to tasks within the same domain) are often observed with various training programs, such as computerized executive function training (attention, working memory, and task-switching) (Diamond and Lee, 2011; Jolles and Crone, 2012), far transfer is notoriously difficult to induce and has been observed only after demanding multi-skill training such as action video games (Bavelier et al., 2010; Green and Bavelier, 2012). The reports we review in this section show that musical training also brings about promising far-transfer effects in domains such as verbal intelligence and executive functions, and may even lead to better general academic performance. Neural development is complex, and various neural processes affect plasticity.
Such processes include synaptic proliferation, pruning, and myelination, as well as changes at the neurofilament and neurotransmitter levels, each of which has its own developmental trajectory (e.g., Lenroot and Giedd, 2006; Perani et al., 2010). Observing brain plasticity as years of musical training go by elucidates the way practice becomes engraved in the brain and how memory finds its reflection in brain structure. In general, studies of music learning are consistent with the animal literature indicating greater plastic changes in the brain for behaviorally relevant auditory stimuli (e.g., those associated with reward or emotional arousal) than for passive exposure (Weinberger, 2004). However, the picture is not complete until we take into account the maturational dynamics that shape the brain simultaneously with musical training. The next section introduces the concept of critical and sensitive periods in brain development, which, although not exhaustively, adds to the understanding of musical training-induced neuroplasticity. The notion of "windows of opportunity" is important in that it places limits on training-related brain plasticity and hence helps explain why certain abilities can only be developed in early childhood, which is crucial for the design of educational programs and child rearing.

Critical and sensitive periods

It is known that plasticity is affected by how much a person actively engages in music training relatively early in life (Knudsen, 2004). "Sensitive period" is a term applied to a limited period in development when the effects of experience on the brain are unusually strong, derived from the particular malleability of the neural circuits at that time (Knudsen, 2004). During this time, the basic architecture of the neural circuits is laid out, and all learning (and plasticity) that occurs after the sensitive period will cause alterations only within the connectivity patterns constrained by this framework (Knudsen, 2004). The onset and duration of sensitive periods are regulated not simply by age but by experience, and thus the presence of enriched environments may prolong sensitive periods (Hensch, 2004). For example, second language proficiency is better in individuals who have been exposed to the language by the age of 11–13, marking puberty as the end of a sensitive period for language learning (Weber-Fox and Neville, 2001). In other words, the sensitive period is to some extent use-dependent (Hensch, 2004). In contrast, critical periods are strict time windows during which experience provides information that is essential for normal development and permanently alters performance. For instance, the critical period for auditory cortex plasticity ends by the age of 3–4 years in humans, as demonstrated in studies of cochlear implantation in congenitally deaf children: sensory deprivation in that time period prevents normal sensory discrimination and oral language learning (Kral and Sharma, 2012). Not all brain regions develop with the same time course, and the timing and duration of critical periods are unique to each neural system. Sensory and motor regions enter the sensitive period earlier than temporal-parietal and frontal areas (Sowell et al., 2004); the visual cortex reaches adult levels of myelination within a few months of life (Kinney et al., 1988), while in the auditory cortex myelination does not finish until 4–5 years of age (Moore and Linthicum, 2007) and white matter connectivity continues to develop until late childhood (Moore and Guan, 2001).
Kral and Eggermont (2007) proposed that this extended period of developmental plasticity in the auditory cortex serves language acquisition, wherein sensory bottom-up processing is trained by feedback from top-down cognitive processes. During this time, between the ages of 1 and 5, experience-dependent plasticity of the consistency of the auditory brainstem response is maximal (Skoe and Kraus, 2013). Maturation of fiber tracts in the left frontal and temporo-occipital regions and the anterior corpus callosum connecting the frontal lobes coincides with the development of working memory capacity, while reading ability is related to fractional anisotropy values in the left temporal lobe, as observed in children between the ages of 8 and 18 (Nagy et al., 2004). Similarly, the maturation of corticospinal fibers parallels the development of fine finger movements (Paus et al., 1999). The cross-sectional area of the corpus callosum grows at least until early adulthood (Keshavan et al., 2002), while the projection fibers of the posterior limb of the internal capsule (carrying sensory fibers to their processing areas in the respective cortices) only approach an asymptotic point in maturation between the ages of 21 and 24 (Bava et al., 2010). This sub-section has emphasized that any intense training, including instrumental musical training in childhood, may have a different impact on brain plasticity and cognitive development depending on the age of commencement. However, many scholars of sensitive periods in brain development note that the role of motivation and attention in all learning is profound and should not be underestimated, especially during sensitive periods (Hensch, 2004). And as the example of language learning in infants shows (Kuhl et al., 2003; Kuhl, 2007), the social environment and teachers may be of equally high importance.

Effects on brain plasticity

Plastic changes in the cortical and subcortical structures of the auditory system (Gregersen et al., 2000; Wong et al., 2007; Penhune, 2011), as well as in the sensory-motor cortex (larger representation of the fingers), and their functional expression depend on an early age of commencement (Herholz and Zatorre, 2012), which emphasizes the role of sensitive periods in shaping training-induced plasticity (Merrett et al., 2013). Instrumental training may accelerate the gradual development of neurofilament in the upper cortical layers that occurs between the ages of 6 and 12, underlying fast, synchronized firing of neurons (Moore and Guan, 2001; Hannon and Trainor, 2007). Two longitudinal studies have tracked the influence of musical training on behavior and brain activity in children between the ages of five and nine. Schlaug et al. (2005) recruited 50 children who were about to begin their musical education and compared them with a group of 25 age-, socioeconomic status-, and verbal IQ-matched controls. At baseline, there were no pre-existing cognitive, music, motor, or structural brain differences between the instrumental and control groups, as tested with functional MR scans (Norton et al., 2005). Tests performed after 14 months of musical training revealed significantly greater change scores in the instrumental group compared to the control group in fine motor skills and auditory discrimination. However, no significant changes in gray or white matter volume, nor transfer effects in domains such as verbal, visual-spatial, and math skills, were found, although the instrumental group showed a trend in the anticipated direction. A study by Hyde et al.
(2009) compared two groups of 6-year-old children, one of which took private keyboard lessons for 15 months while the other spent a similar amount of time per week in a group music lesson that included singing and playing with drums and bells. Applying deformation-based morphometry to assess the differences between the groups throughout the whole brain before and after the musical training revealed that the children with piano lessons showed areas of greater relative voxel size in motor brain areas, such as the right precentral gyrus (motor hand area) and the midbody of the corpus callosum, as well as in the right primary auditory region, consistent with the plastic changes observed in professional musicians. Furthermore, structural brain differences in various frontal areas were observed which, however, did not correlate with improvement in behavioral performance. This evidence demonstrates that regular musical training during the sensitive period can induce structural changes in the brain, and that these are unlikely to be due only to pre-existing morphological differences. Yet 14 months may not be long enough to produce statistically significant growth in white and gray matter volume (Schlaug et al., 2005), and the differences observed may potentially be confounded by parents' higher level of education (Hyde et al., 2009).

Effects on cognitive functions

A further interesting question we explore in this section is the generalization of musical training-induced learning to other functional domains. According to the "temporal opportunity" conception of environmental stimulation during brain development, experiences in childhood and adolescence are vital to many abilities in adult life, which makes the decision of what education to provide to a child a serious matter. Is musical training a good choice? Although many longitudinal developmental studies of music education include a well-matched control group, such as another arts program, there is only limited research contrasting instrumental training in childhood with dance or sports, which could offer interesting avenues for plasticity research and aid parents in making an informed decision. Thus, although all arts and sports programs have beneficial effects on cognitive development (Green and Bavelier, 2008), instrumental musical training appears unique in the wide array of observed long-term effects, although there may be other factors mediating this effect (Young et al., 2013). When comparing musically trained with untrained children, it is not surprising that differences in the performance of listening tasks and auditory processing are found. For example, it has been shown that children who take music lessons are more sensitive to the key and harmonics of Western music than untrained children (Corrigall and Trainor, 2009). More specifically, concerning pitch processing, children as young as 8 who had undergone 6 months of music training demonstrated increased accuracy in discriminating small pitch differences, together with an electroencephalographic signature of this improvement: an increased amplitude of the N300 (Besson et al., 2007). No such differences were observed in the control group, who had attended painting classes for an equal period. Another recent well-controlled longitudinal study showed that children aged between 8 and 10 who followed a 12-month music lesson program were better at discriminating syllabic duration and voice onset time than children who followed painting classes during the same period (Chobert et al., 2012).
These results thus suggest that musical training can improve the temporal fine-tuning of auditory perception. Moreover, musicians are better at recognizing speech in noise, an ability developed through consistent practice and enhanced if music training began early in life (Parbery-Clark et al., 2009, 2011; Strait et al., 2012). Taken together, these results suggest that musical training increases listening skills, including sound discrimination, an ability also involved in speech segmentation (Francois et al., 2013), allowing more accurate processing of speech and voices. In line with our proposed role of rhythmic entrainment (see section Rhythm and Entrainment below), Besson et al. (2011) suggested that these differences in language processing distinguishing musicians from non-musicians may reflect a learned ability to precisely orient attention in time in order to discriminate sounds more accurately. Musical sounds and all other sounds share most of the processing stages throughout the auditory system, and although speech differs from music production in several dimensions (Hannon and Trainor, 2007), musical training has been shown to transfer to language-related skills. For example, auditory brainstem responses to stop consonants in musically trained children as young as 3 years are more distinct, indicating the enhanced neural differentiation of similar sounds that characterizes adult musicians and later translates into a better ability to distinguish sounds in speech (Strait et al., 2013). While the cross-links between language and musical training have been reviewed elsewhere (e.g., Chandrasekaran and Kraus, 2010; Besson et al., 2011; Strait and Kraus, 2011, 2013), two examples include the neurophysiological mechanisms underlying syntax processing in both music and language, which develop earlier in children with musical training (Jentschke and Koelsch, 2009), and the transfer of musical training to pitch discrimination in speech as well as reading aloud in 8-year-old children (Moreno et al., 2009). The fact that music and language share common auditory substrates may indicate that exercising the responsible brain mechanisms with sounds from one domain could enhance the ability of these mechanisms to acquire sound categories in the other domain (Patel and Iversen, 2007; Patel, 2008). In his OPERA hypothesis, Patel argues that the benefits musicians show in speech encoding arise from five conditions: Overlap, Precision, Emotion, Repetition, and Attention (Patel, 2011, 2013). He suggests that there is an overlap of common brain networks between speech and music, which are trained with particular intensity because music production demands high precision. Furthermore, musical activities have high emotional reinforcement potential, stimulate these brain networks repeatedly, and require a certain attentional focus. Patel claims that these processes are responsible for the good performance of musicians in speech processing. This benefit of musical training can be found not only in tasks of auditory perception (for example, as tested with Gordon's Intermediate Measures of Music Audiation, Schlaug et al., 2005), but also in verbal abilities such as verbal fluency and memory, second language acquisition, and reading abilities, demonstrating far transfer effects of musical training (for a review see Besson et al., 2011). For example, it has been shown that children with musical training performed better on the vocabulary subtest of the Wechsler Intelligence Scale for Children (WISC-III) than a matched control group (Schlaug et al., 2005; Forgeard et al., 2008).
Moreover, musical training has also been associated with enhanced verbal memory (Chan et al., 1998; Ho et al., 2003; Jakobson et al., 2003). Research in adults has clearly shown that musical ability can predict linguistic skills in second language learning. Slevc and Miyake (2006) tested 50 Japanese adult learners of English and found a relationship between musical ability and second language skills in receptive and productive phonology, showing that musical expertise can be an advantage when learning a second language. And in young children, a study by Milovanov et al. (2008) showed that second language pronunciation accuracy correlates with musical skills. Empirical research on children and adults suggests that musical abilities predict phonological skills in language, such as reading. For example, Butzlaff (2000) found a significant association between music training and reading skills. In another study, Anvari et al. (2002) examined the relation between early reading skills and musical development in a large sample of English-speaking 4- and 5-year-olds. Learning to read English requires mapping visual symbols onto phonemic contrasts, and thus taps into linguistic sound categorization skills. In this study, both musical pitch and rhythm discrimination were tested. For the group of 5-year-olds, performance on musical pitch tasks, but not rhythm tasks, predicted reading ability. Such a finding is consistent with the idea of shared learning processes for linguistic and musical sound categories. However, despite this negative finding in 5-year-old participants, there seems to be a link between rhythm production abilities and reading, as we elaborate in section Rhythm and Entrainment below. For example, a recent study by Tierney and Kraus showed that in adolescents the ability to tap to the beat is related to better reading abilities, as well as to performance in tasks demanding temporal attention, such as backward masking (Tierney and Kraus, 2013). This difference in rhythm processing might be due to the way rhythm perception and production were studied by Anvari and colleagues, which required short-term memory abilities, whereas the task of tapping to the beat instead calls on sensorimotor synchronization and, more importantly, the temporal orienting of attention, an ability also required in reading.

Spatial and mathematical skills

A meta-analysis of 15 experimental studies by Hetland (2000) showed that music instruction enhances performance on certain spatial tasks (such as the Object Assembly subtest of the WISC) but not on Raven's Standard Progressive Matrices, which is a test of non-verbal reasoning with some visual-spatial elements. The results of correlational studies testing the association between music training and spatial outcomes show no clear-cut association, with five out of 13 studies reporting a positive correlation and eight reporting negative, null, or mixed results. Forgeard et al. (2008), however, did not find any differences in spatial skills between children who had received at least 3 years of musical training and controls. Another study (Costa-Giomi, 1999) found that children receiving piano lessons improved more than controls in visual-spatial skills, but only during the first 2 years of instruction, with no differences between the groups by the end of the third year. A study with adults showed that musicians did not perform better than non-musicians in a spatial working memory task (Hansen et al., 2012).
It appears, therefore, that instrumental music training may aid the acquisition of spatial abilities in children rather than confer a permanent advantage on musicians. Finally, Schlaug et al. (2005) found no transfer effects of musical training to math skills or general intelligence in 9–11-year-olds with an average of 4 years of musical training, although the children scored higher on the vocabulary subtest of the Wechsler Intelligence Scale for Children (WISC-III), suggesting that far transfer to linguistic abilities may be the most robust effect, observable already after a relatively short period of practice. A meta-analysis of studies investigating the influence of musical training on math performance did not show convincing evidence in favor of a transfer effect (Vaughn, 2000). More recent studies have likewise reported no positive relation between musical training and performance on mathematical skills tests (Forgeard et al., 2008), nor increased musicality among mathematicians (Haimson et al., 2011).

The notion of executive function refers to the cognitive processes orchestrated by the prefrontal cortex that allow us to stay focused on means and goals and to willfully (with conscious control) alter our behaviors in response to changes in the environment (Banich, 2009). Executive functions include cognitive control (attention and inhibition), working memory, and cognitive flexibility (task switching). Hannon and Trainor (2007) proposed that musical training invokes domain-specific processes that affect the salience of musical input and the amount of cortical tissue devoted to its processing, as well as processes of attention and executive functioning. In fact, the attentional and memory demands, as well as the coordination and ability to switch between different tasks, involved in learning to play an instrument are very large. This learning depends on the integration of top-down and bottom-up processes, and it may well be that it is the training of this integration that underlies the enhanced attentional and memory processes observed in the musically trained (Trainor et al., 2009). Executive functions thus seem highly solicited when learning to play an instrument (Bialystok and Depape, 2009). In fact, Moreno et al. (2011) found that even after short-term musical training (20 days) with a computerized program, children improved their executive functions, as tested with a go/no-go task. Similarly, in terms of working memory capacity, a recent longitudinal study showed that children who had been enrolled in an 18-month instrumental music program outperformed children in a control group who followed a natural science program during the same period (Roden et al., 2013).

General IQ and academic achievement

An extensive amount of research has been carried out on how music can increase intelligence and make the listener smarter (Rauscher et al., 1993; Degé et al., 2011; Moreno et al., 2011). The outcome of this research shows that it is not music listening but active engagement with music in the form of music lessons that sometimes confers a positive impact on intelligence and cognitive functions, although such results are not always replicated. A major discussion in this area is whether musical training increases specific skills or leads to a global, unspecific increase in cognitive abilities, as measured by a general IQ score. For children, music lessons act as additional schooling: they require focused attention, memorization, and the progressive mastery of a technical skill.
It is therefore likely that transfer skills of executive function, self-control, and sustained focused attention translate into better results in other subjects, and eventually into higher general IQ scores. General IQ is typically tested with Raven's Progressive Matrices (Raven, 1976), although various types of intelligence can also be assessed with specific tests. These tests require different kinds of cognitive performance, such as providing definitions of words or visualizing three-dimensional objects from two-dimensional diagrams, and are regarded as good indicators of mental arithmetic skills and non-verbal reasoning. For example, Forgeard et al. (2008) found that practicing a musical instrument increases performance on the Raven's Matrices test, which could suggest that non-verbal reasoning skills are better developed in children with musical training. Measuring intelligence raises the sensitive issue of genetic predisposition vs. environmental influence and experience-acquired abilities. Schellenberg points out that children with higher cognitive abilities are more likely to take music lessons and that this fact can bias studies in which participants are not randomly assigned to music or control conditions (Schellenberg, 2011a). Similarly, the socioeconomic context is known to influence the probability that children get access to musical education (Southgate and Roscigno, 2009; Young et al., 2013). Controlling for this potentially confounding factor, Schellenberg (2006) reported a positive correlation between music lessons and IQ in 6–11-year-olds, and showed that taking music lessons in childhood predicts both academic performance and IQ in young adulthood (holding constant family income and parents' education). In another study, two groups of 6-year-olds were tested: one received keyboard or singing lessons in small groups for 36 weeks (Schellenberg, 2004), while the other received drama lessons. The drama group did not show comparable increases in full-scale IQ and standardized educational achievement; notably, the most pronounced gains occurred in the children who received singing rather than piano lessons. Modest but consistent gains were made across all four indexes of the IQ (verbal comprehension, perceptual organization, freedom from distractibility, and processing speed), suggesting that music training has widespread, domain-general effects. Intelligence measurements are often used to predict academic achievement. One question in this domain of research is therefore how musical activities influence academic achievement in children and adolescents. Despite initial claims that this effect may be primarily due to differences in socioeconomic status and family background, intervention studies as well as tests of general intelligence seem to show a positive association between music education and academic achievement. For example, Southgate and Roscigno (2009) analyzed longitudinal databases that include information on music participation, academic achievement, and family background. Their results show that music involvement in and outside of school can indeed act as a mediator of academic achievement, measured as math and reading skills. However, their results also show a systematic relation between music participation and family background.
Nonetheless, a recent study found that academic achievement can be predicted independently of socioeconomic status only when the child has access to a musical instrument (Young et al., 2013). Interestingly, this finding emphasizes that musical activities with an instrument differ from other arts activities in this respect. Furthermore, it has been suggested that executive functions mediate the impact of music lessons on enhanced cognitive functions and intelligence. Schellenberg (2011a) set out to investigate this hypothesized mediating effect of executive functions in detail. He designed a study with 9–12-year-old musically trained and untrained children and tested their IQ and executive functions. Schellenberg's results suggest that executive functions have no impact on the relation between music training and intelligence. However, other studies have reported such an influence. For example, there is evidence that musical training improves executive function through the training of bimanual coordination, sustained attention, and working memory (Diamond and Lee, 2011; Moreno et al., 2011). Degé et al. (2011) even used a design very similar to Schellenberg's, with 9–12-year-old children, in order to test the role of executive functions. These authors did find a positive influence of musical training on executive functions and argued that the discrepancy arises because Schellenberg's study included no direct measure of selective attention, which supposedly plays a crucial role in music. Apart from the concept of general IQ, Schellenberg (2011b) studied the influence of musical training in children on emotional intelligence but did not find any relation between them. Moreover, another study with 7–8-year-old children found a positive correlation between musical training and emotion comprehension, which disappeared, however, when the individual level of intelligence was controlled for (Schellenberg and Mankarious, 2012). Other studies with adults likewise found no correlation between musical training and emotional intelligence (Trimmer and Cuddy, 2008). One study by Petrides and colleagues with musicians did find a positive correlation between the length of musical training and scores of emotional intelligence (Petrides et al., 2006). The picture concerning the association between emotional intelligence and musical education thus remains contradictory. This question is interesting insofar as musical training might also be thought to increase social competences, given that active musical activities have been shown to enhance communicative and social development in infants (Gerry et al., 2012). Moreover, a study by Kirschner and Tomasello (2009) found that joint musical activities produced spontaneous cooperative behavior in 4-year-old children. Another way to test social skills is to investigate sensitivity to emotional prosody, a valuable capacity in social communication. Studies have shown that musical training enhances the perception and recognition of emotions expressed by human voices (Strait et al., 2009; Lima and Castro, 2011), although an earlier study found that it was not musical training but rather emotional intelligence that predicted the recognition of emotional prosody (Trimmer and Cuddy, 2008). Thus, as with emotional competence, the literature linking musical education and the recognition of emotional prosody is equivocal.
The impact of musical education on social skills might therefore have to be investigated in more depth, comparing aspects such as group vs. single-pupil teaching methods and the role of musical activities in groups, for example in instrumental ensembles or choirs.

Plasticity over the life-span

Musical activities can have a beneficial impact on brain plasticity and on cognitive and physical abilities also later in adult life, after the critical and sensitive periods of childhood (Wan and Schlaug, 2010). For example, Herdener and colleagues showed that musical ear training in students can evoke functional changes in the activation of the hippocampus in response to acoustic novelty detection (Herdener et al., 2010). In general, a decline of cognitive functions and brain plasticity is observed at an advanced age. However, physical as well as cognitive activities can have a positive impact on the preservation of these abilities in old age (Pitkala et al., 2013). In this sense, musical training has been proposed as a viable means to mitigate age-related changes in auditory cognition (for a review see Alain et al., 2013). It is often reported that fluid intelligence decreases with age and that this can be related to a reduction in hippocampal volume (Reuben et al., 2011). In turn, a recent study by Oechslin et al. (2013) found that fluid intelligence is predicted by the volume of the hippocampus in musicians, which suggests that musical training could be used as a strategy to reduce the age-related decline of fluid intelligence. In another study, by Hanna-Pladdy and Mackay (2011), significant differences between elderly musicians and non-musicians (60–83 years) were found in non-verbal memory, verbal fluency, and executive functions. This also indicates that musical activity can to some degree prevent the decline of cognitive functions in aging. However, these differences could be due to differences in predisposition. Nonetheless, Bugos et al. (2007) performed a study in which predisposition influences were ruled out, as they assigned participants randomly to two groups that received either piano lessons or no treatment. They found that persons over 60 who began learning to play the piano and continued for 6 months showed improved results in working memory tests, as well as in tests of motor skills and perceptual speed, in comparison to a control group without treatment. Dalcroze Eurhythmics, a pedagogical method based on learning music through movement and rhythm as basic elements, has also been administered to seniors. One study showed that a 6-month treatment with this method positively influenced balance and regularity of gait in the elderly (Trombetti et al., 2011). Given that falls are a major risk in this population, training these physical abilities at this age is especially important, and such training seems to be more efficient when combined with the musical aspects of rhythmical movement synchronization and adaptation within a group. Although there are promising results suggesting that older musicians, compared to matched controls, show benefits not only in near-transfer but also in some far-transfer tasks, such as visuospatial span, control over competing responses, and distraction (Amer et al., 2013), the nature vs. nurture problem remains. Apart from the study of Bugos et al.
(2007), who used a random-assignment design, research on the influence of musical training on plasticity and cognitive benefits at advanced ages should take into account the influence of other cognitive stimulation and overall physical fitness, which are known to play an important role in the preservation of cognitive functioning and independence in the elderly (Raz and Rodrigue, 2006; Erickson et al., 2012).

Variables modulating brain plasticity via musical training

One challenge in assessing developmental changes in the brain due to long-term learning such as musical training is that many studies demonstrating structural brain differences are retrospective and look at mature musicians, which does not rule out the possibility that people with certain structural atypicalities are more predisposed to become musicians. If this is the case, then the distinction between innate and developed differences is rather difficult to draw. In fact, the biggest goal for most training studies, not least those of musical training, is to disentangle the effects of longitudinal training from pre-existing differences or factors other than the intervention, such as gender, genetic predisposition, general IQ, socio-economic background, and parents' influence. Another difficulty of interventions in young populations is that children's brains are very inhomogeneous, and therefore comparisons, even within similar age groups, may not be very informative. The musician's brain is recognized as a good model for studying neural plasticity (Munte et al., 2002). The fact that several studies have found a correlation between the extent of the anatomical differences and the age at which musical training started argues strongly against the possibility that these differences are preexisting and the cause, rather than the result, of practicing music. On the other hand, the limitation of most longitudinal studies with children is that they are correlational, and most do not assign the subjects randomly to either musical education or a control group. As a result, the observed positive effects on cognitive functioning may not derive solely from practicing music but also from differences in motivation for learning or general intelligence, musical predispositions aside. Because general cognitive abilities (Deary et al., 2010) and personality (Veselka et al., 2009) are to some degree genetically predetermined, individual differences in these areas observed in musicians (vs. non-musicians) are unlikely to be solely a consequence of music training (Barrett et al., 2013; Corrigall et al., 2013). The nature vs. nurture debate around musical practice-induced plasticity goes on and has begun to gain momentum as the number of neuroimaging studies continues to grow and recent genome-wide association studies have indicated that many attributes of musicality are hereditary. Musical pitch perception (Drayna et al., 2001), absolute pitch (Theusch et al., 2009), creativity in music (Ukkola et al., 2009), and perhaps even sensitivity to music (Levitin et al., 2004) have all been found to have genetic determinants. Importantly, these predispositions are typically tested for in children in music school entrance exams.
Therefore, it is fair to acknowledge that while learning a complex skill such as playing an instrument shapes brain function and structure, there may be additional explanatory variables that contribute to the observed differences between the brains of "musicians" and "non-musicians."

Motivation and the rewarding power of music

At least some components of the cognitive abilities that are found to be better in the musically trained stem from innate qualities (Irvine, 1998), but ecologically valid intervention studies can hardly be expected to untangle this factor from the effect of training (Barrett et al., 2013). Corrigall et al. (2013) have pointed out that musically trained children and adolescents are typically good students, with high auditory and visual working memory and high IQ, not necessarily because of their music education but because of genetic predispositions, which also make them more likely to take up instrumental classes. They describe how a number of individual traits needed in music training, such as conscientiousness, persistence, selective attention, and self-discipline, could be the pre-existing qualities that facilitate learning, brain plasticity, and far-transfer effects. In fact, the personality trait "openness to experience," which Corrigall et al. (2013) found to be considerably more prominent in those who took music lessons than in those who did not, is correlated with curiosity and a tendency to explore, and may affect the way children learn and approach new skills such as music. This particular personality trait is genetically determined to some extent and may also underlie motivation to learn. Specifically, the expression of dopamine D4 receptors in the prefrontal cortex has been associated with the trait Openness/Intellect (DeYoung et al., 2011), and prefrontal dopaminergic transmission is considered responsible for attentional control and working memory (Robbins, 2005). Dopamine receptors also play a major role in shaping motivation: genetic variants affecting the proportion of type 1 to type 2 dopamine receptors in the striatum (Frank and Fossella, 2010) determine the tendency to learn from positive as opposed to negative feedback, and may thus affect intrinsic motivation, a major factor in the long-term training of any complex skill. The rewarding value of a musical activity could be one of the driving forces of the brain plasticity induced by musical training. Owing to dopamine's important role in long-term memory formation (e.g., Lisman and Grace, 2005; Schott et al., 2006; Rossato et al., 2009; Wimber et al., 2011), both the genetic polymorphisms suggested above and activity-induced dopaminergic transmission will influence the learning outcome as well as future learning and the reinforcing quality of music learning. A positive affective experience, such as the pleasure and pride derived from first music lessons, will likely promote future practice and the total duration of training. In practice, it is difficult to control for levels of intrinsic motivation in empirical studies of musical training, such as those conducted by Moreno and colleagues (Besson et al., 2007; Moreno et al., 2009; Moreno and Bidelman, 2013), but its role may considerably affect the long-term outcome. Other factors that affect music performance ability are emotional support from parents and a nurturing relationship with the teacher characterized by mutual liking (Sloboda, 1993).
Although these factors are not the focus of this article, they greatly affect a child's motivation to practice and the learning outcome, and they should be taken into consideration in future studies comparing the effects of musical training with other forms of long-term training intervention. Variance within musicians may also contribute to the musical training effect. The level of musical training is linked to the pleasurable experience of listening to music (Gold et al., 2013), owing to the listening style adopted by musicians and the involvement of the musically activated reward system, which is also implicated in reinforcement learning (Salimpoor et al., 2013; Zatorre and Salimpoor, 2013). However, little is known about individual variability in music-induced positive emotional responses. It is possible, for instance, that individuals who experience deeply rewarding musical emotions are drawn to taking up musical training (again, with potential genetic influences, such as in individuals with Williams syndrome, Levitin, 2012). Later on, pleasure from the performance of music may add to the intrinsic motivation to continue training, thus forming a self-reinforcing cycle in which a student with innate predispositions to rewarding musical emotions experiences satisfaction with his or her own performance, which in turn encourages further practice. In addition, as with any skill that takes years to master, a high tolerance for frustration and perseverance are personality traits that would render a student more likely to continue the training (Barrett et al., 2013). Interestingly, musicians may differ in the level of enjoyment they derive from their artistic activity, with a particular difference between popular, jazz, and folk vs. classical musicians. Although studies mostly concentrate on musicians trained in playing a particular instrument, the type of education they received may affect the outcome not only through instructional differences but also through differences in motivation. One large survey conducted in the UK between 2006 and 2008 reported that folk, jazz, and popular music students and artists derive more pleasure from their work than classical musicians (de Bezenac and Swindells, 2009). The non-classical musicians reported more frequent "playing for fun" and generally more enjoyment derived from group performances. One of the study's conclusions was that popular music artists tend to have higher levels of intrinsic motivation (reportedly learning to play an instrument out of their own desire) and a later age at training commencement than classical musicians. The latter, who may have been confronted with higher demands for discipline and compliance in the formal educational system, tended to value technical skills more highly than pleasure, and presumably had higher levels of extrinsic motivation, directed at awards in their adult careers and at teachers' praise during training. Although brain plasticity studies have so far mainly concentrated on classical music education, it may be important to note that students with classical and non-classical music education may actually differ in personality traits (such as conscientiousness; Corrigall et al., 2013) and motivational goals, and these could in turn contribute to the observed transfer of cognitive advantages and their functional and structural brain correlates.
The aforementioned consideration of motivation as a learning-modulating variable leads us to the question of what happens to learning outcomes and skill transfer in children who are forced to learn to play an instrument. In this case, music training may be an unpleasant and stressful experience. Stress experienced around the learning episode may actually promote the formation of memory related to the stressor, via cortisol and noradrenergic receptor activation in the amygdala, which projects to the hippocampus and prioritizes consolidation of emotionally arousing stimuli (Joëls et al., 2006). However, evidence from more ecological designs shows that stress impairs word learning and recall performance in comparison to no stress (Schwabe and Wolf, 2010). This has to do with the role of the amygdala in memory formation under stress: it not only enhances the consolidation of stress-related stimuli but also facilitates a switch toward more habitual responding (mediated by the dorsal striatum) and away from goal-directed behavior, which is mediated by the medial temporal lobe and the prefrontal cortex (Schwabe et al., 2010). The equivalent of such a switch in a typical learning situation would be moving away from the deep, reflective processing possible under supportive, non-demanding circumstances toward superficial processing under test anxiety, which profoundly affects factual memory (Fransson, 1977). Stress derived from fear of punishment therefore affects the way we learn and often leads to worse performance than reward motivation does. The effect depends on the task at hand, but a negative impact has been found on the formation of spatial (Murty et al., 2011), procedural (Wächter et al., 2009), and declarative memory that requires cognitive processing (Schwabe et al., 2010). Although we cannot exhaustively elaborate on the literature treating motivation, learning, and transfer in education research, suffice it to say that some forms of punishment motivation resulting in stress have a negative impact on learning (Lepine et al., 2004). In the context of musical education, we thus suggest that the aforementioned influence of personality and intrinsic motivation should be taken into account in future studies. For example, in random assignment studies on the impact of musical training, participants should also be asked to declare their personal motivation to adhere to the training, at least before and after the intervention. Furthermore, personality questionnaires could be incorporated to test for traits that affect learning style (e.g., reward sensitivity, openness, perseverance). These factors could then be used as covariates in the analysis of the effect of musical training in both behavioral and neuroimaging studies. Such information would help determine the extent of the influence of personality and motivational disposition on long-term adherence to the program as well as on its outcome in terms of transfer skills. This could be particularly pertinent given that these factors could not only limit the positive effects of musical activities but even be detrimental to cognitive and emotional development if the activity mainly represents a source of stress and negative affect. In addition, this information might also help to disentangle the real impact of the training from the influence of personality and motivation.
Rhythm and entrainment

Here we want to point to one specific aspect that could represent an underlying mechanism of the beneficial transferrable effects of musical training: the fact that musical activities are usually based on rhythm. Most musical styles have an underlying temporal pattern, called meter, which defines a hierarchical structure between time points (London, 2004). Ontogenetically, rhythm discrimination is observed in infants as young as 2 months of age (Trehub and Hannon, 2006). Like adults, 7-month-old infants can infer an underlying beat, categorizing rhythms on the basis of meter (Hannon and Johnson, 2005), and 9-month-old infants can more readily notice small timing discrepancies in strongly metrical than in non-metrical rhythms (Bergeson and Trehub, 2006). The theory of dynamic attending suggests that rhythmical patterns in music can only be perceived because of a synchronization of attentional processes, which entrain to the periodicities contained in the auditory rhythm (Jones and Boltz, 1989). In fact, neuronal populations in the visual cortex entrain to the regular rhythm of stimulus presentation, which constitutes a mechanism of attentional selection (Lakatos et al., 2008). It has therefore been suggested that musical activities involving the perception and production of rhythms train attentional processes, which also benefits other cognitive functions. Indeed, a recent study with children showed that musical activities increase the accuracy of produced rhythms (Slater et al., 2013), while adult musicians are significantly more accurate in reproducing rhythmic intervals (Chen et al., 2008), detecting metrical irregularities (James et al., 2012), and maintaining a rhythm when none is externally provided (Baer et al., 2013). Entrainment is in fact a physical principle describing the adaptation of at least two oscillating agents toward a common phase and period, which can eventually lead to perfect synchronicity between the oscillators (Rosenblum and Pikovsky, 2003); this principle is illustrated in the numerical sketch below. In this sense, the adjustment of behavior (one's own musical output in ensemble playing, or movements, as in dance) to a perceived regular rhythm or extracted pulse can also be regarded as entrainment (Fitch, 2013). Humans can also entrain multiple motor modalities, including, for example, body or limb motions, vocalization, and even breathing and heart rate (Müller and Lindenberger, 2011; Trost and Vuilleumier, 2013). Neural populations can likewise be entrained by sensory stimulation (Gander et al., 2010) or motion, such as being rocked (Bayer et al., 2011). Research on subcortical brain plasticity has used the frequency following response (FFR) as an indicator of perceptual acuity (Moreno and Bidelman, 2013). The FFR is a component of the auditory brainstem response (Tzounopoulos and Kraus, 2009) that is phase- and frequency-locked to the acoustic parameters of an auditory stimulus. In this sense, the FFR represents evidence of direct neural entrainment to sound, be it music or speech. Several studies have used this method to test training-derived plasticity in the perceptual processing of musical and vocal parameters or speech, demonstrating faster responses in musical experts (Tzounopoulos and Kraus, 2009; Chandrasekaran and Kraus, 2010).
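To make the entrainment principle of Rosenblum and Pikovsky (2003) concrete, the following minimal Python sketch simulates two mutually coupled phase oscillators with different natural frequencies, in the style of the Kuramoto model. All frequencies and coupling strengths are our own illustrative choices, not values from any cited study; the point is only that above a critical coupling strength the oscillators settle into a fixed phase relationship, much as an internal attentional rhythm is thought to lock onto a musical pulse.

```python
import numpy as np

# Two mutually coupled phase oscillators (Kuramoto-style):
#   d(theta_i)/dt = omega_i + K * sin(theta_j - theta_i)
# With sufficient coupling K, the phase difference locks even though
# the natural frequencies differ.
def simulate(k, f1=2.0, f2=2.3, dt=0.001, seconds=20.0):
    omega = 2 * np.pi * np.array([f1, f2])  # natural frequencies (rad/s)
    theta = np.array([0.0, np.pi / 2])      # arbitrary starting phases
    diffs = []
    for _ in range(int(seconds / dt)):
        coupling = k * np.sin(theta[::-1] - theta)  # mutual pull
        theta = theta + (omega + coupling) * dt     # Euler integration
        diffs.append(theta[1] - theta[0])
    # Inspect the last second of simulation, wrapped to (-pi, pi].
    tail = np.angle(np.exp(1j * np.array(diffs[-int(1.0 / dt):])))
    return tail.mean(), tail.std()

for k in (0.0, 0.5, 2.0):
    mean, std = simulate(k)
    state = "locked" if std < 0.01 else "drifting"
    print(f"K = {k:.1f}: phase difference {state} "
          f"(mean {mean:+.2f} rad, sd {std:.4f} rad)")
```

In terms of dynamic attending theory, the listener's attentional rhythm plays the role of one oscillator and the musical beat the other; stronger coupling, plausibly trained by musical practice, yields tighter phase locking.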
Furthermore, there is a close link between language and reading skills and the ability to perceive and produce rhythm, as widely documented by studies in children with dyslexia (Huss et al., 2011; Goswami, 2012) or with attention deficits, for example attention-deficit hyperactivity disorder (Ben-Pazi et al., 2003), who show difficulties in rhythmic tasks. In fact, priming with a rhythmic sequence facilitates speech processing (Cason and Schon, 2012), and performance on perceptual discrimination in all sensory domains, as well as on motor response tasks, is better when stimuli are presented isochronously (Nobre et al., 2007). It thus appears that in musical education, the daily training of temporal processing mechanisms has a beneficial effect on other cognitive functions, such as reading, in which attention has to be guided in a specific manner. Moreover, a study by Tierney and Kraus (2013) showed that the ability to tap to a beat was associated with better performance not only in reading but also in other attention-demanding tasks that are purportedly at the basis of executive functions. Tapping to, producing, or merely perceiving a rhythm in any sensory domain leads to the formation of expectations, which facilitates the orienting of attentional resources (Bolger et al., 2013) and the entrainment of various bodily and neural functions. There is also evidence that timing, or temporal processing, is a skill that partially explains individual variability in cognitive-speed and non-verbal ability measures, based on findings with the isochronous serial interval production task (Sheppard and Vernon, 2008; Holm et al., 2011; Loras et al., 2013). It may even support the superior auditory verbal memory of musicians (Jakobson et al., 2003). Being able to tap to an acoustic beat may be important for executive function (Tierney and Kraus, 2013) and implies coordination of movements, anticipation, and sensorimotor integration (a minimal sketch of how such tapping performance is quantified follows below). Being able to synchronize to an external rhythm while playing an instrument requires not only fine motor skills but also good auditory-motor coordination and sensorimotor integration, capacities that are also vital for planning and executing movements in general. Indeed, the functional neuroimaging signature of sensorimotor integration is increased in musicians performing a temporal synchronization task and involves increased interaction in a brain network including the premotor cortex, posterior parietal cortex, and thalamus (Krause et al., 2010), which are also involved in attentional processes and motor planning (Coull, 2004). Furthermore, this ability to lock into temporal patterns is a skill that is useful in social communication, in which reciprocity and turn-taking are essential. The aspects mentioned here (attentional guiding, the formation of temporal expectations, auditory-motor integration, coordination of movements, and social interaction) all have in common that they are based on the synchronization and adaptation of internal processes to the external rhythm of the music or to the actions of other musicians (Trost and Vuilleumier, 2013).
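As a concrete illustration of how beat synchronization is typically quantified in tapping studies such as Tierney and Kraus (2013), the Python sketch below computes two standard measures from tap times: the mean asynchrony between taps and beats, and the circular consistency of tapping. The metronome rate, anticipation bias, and jitter used here are our own illustrative assumptions, not data from any study.

```python
import numpy as np

# Toy sensorimotor synchronization analysis: quantify how well a
# subject taps along with an isochronous beat. Data are simulated.
rng = np.random.default_rng(1)
period_ms = 500.0                   # 120 bpm metronome
beats = np.arange(40) * period_ms   # beat onset times (ms)

# Typical human tapping slightly anticipates the beat (negative mean
# asynchrony) with some motor jitter.
taps = beats - 30.0 + rng.normal(0.0, 20.0, size=beats.size)

# Map each asynchrony to a phase angle on the beat cycle and compute
# circular statistics: the resultant vector length R (0 = random,
# 1 = perfectly consistent) is a standard synchronization index.
asynchrony = taps - beats
phase = 2 * np.pi * asynchrony / period_ms
vector = np.exp(1j * phase).mean()
R = np.abs(vector)
mean_asynchrony = np.angle(vector) * period_ms / (2 * np.pi)

print(f"mean asynchrony: {mean_asynchrony:+.1f} ms (negative = anticipation)")
print(f"synchronization consistency R = {R:.3f}")
```

A consistency value near 1 indicates tight sensorimotor synchronization; it is this kind of index that has been correlated with reading and attention scores.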
Music can affect the brain in many ways, many of which are just now being studied. Music therapy -- the clinical application of music to treat a wide range of diagnoses using physiological and medical approaches -- has advanced dramatically over the past decade. It is proving to be an effective clinical tool in conditions and settings as varied as Alzheimer's disease, autism, post-traumatic stress disorder, dementia, stroke recovery, care of NICU infants, language acquisition, dyslexia, pain management, stress and anxiety, and coma. Recently, TDLC members were involved in organizing two conferences about music and the brain. The first, held on March 24, 2011 -- the Newark Workshop on Music, Brain and Education at Rutgers University -- was sponsored by TDLC and organized by TDLC co-Director Paula Tallal. The second -- the New York Academy of Science multidisciplinary conference on "Music, Science and Medicine" -- occurred the next day, on March 25, 2011. TDLC PIs Paula Tallal and Gyorgy Buzsáki were involved in organizing the NYAS conference, with the main organizer being Dr. Dorita Berger, Editor-in-Chief of the online Journal of BioMusical Engineering. This landmark meeting explored the connection between recent scientific findings and their possible application to clinical music and physiological function. The ultimate goal of the conference was to bring together experts studying music in human adaptive function, physiological sciences, neuroscience, neurology, medical research, psychology, music education, and other related disciplines, and to promote collaborative research, communication, and translation of scientific research into music-based clinical treatments of disease.

Auditory Processing and Language, Memory and Attention

Dr. Paula Tallal, who helped to organize the NYAS music conference, explains how music -- more specifically, timing in the auditory system -- might affect language development: "Understanding the importance of auditory processing speed is really important for understanding how language works in the brain ... Children with language learning problems (or weak language development) can't sequence two simple tones that differ in frequency when they are presented rapidly in succession. They do absolutely fine when you present two tones separated further apart in time." She continues, "So the actual precision of timing in the auditory system determines what words we actually hear. In order to become a proficient reader and to learn how to spell, we need to hear these small acoustic differences in words and learn that it's those acoustic differences that actually go with the letters." Another TDLC member, April Benasich, and her colleagues at Rutgers University are studying how infants only a few months old process sound. Using electroencephalographic recording, they have found that the way these infants' brains process sound may provide a way to predict later language difficulties. The researchers hope to develop interventions that might correct any early deficiencies.
Additional studies have revealed that musical training may help language processing. Studies by Nadine Gaab, Assistant Professor of Pediatrics at Children's Hospital Boston and Harvard Medical School, have demonstrated that people with musical experience found it easier than non-musicians to detect small differences in word syllables. Musical experience improved the way people's brains process split-second changes in sounds and tones used in speech and, consequently, may affect the acoustic and phonetic skills needed for learning language and reading. "The brain becomes more efficient and can process more subtle auditory cues that occur simultaneously," she said. Another key investigator in the field of music and the brain, Nina Kraus from Northwestern University, is studying the neurobiology underlying speech and music perception. She has found that "musical experience strengthens neural, perceptual and cognitive skills that undergird hearing speech in noise throughout the lifespan." TDLC researchers Alexander Khalil, Victor Minces, and Andrea Chiba have observed a correlation between musical synchrony and attentional performance. Their pilot project, conducted at the Museum School (a San Diego City Schools charter school), demonstrated a significant correlation between the ability of 150 children to synchronize in an ensemble setting -- regardless of other musical abilities -- and their ability to "pay attention" or maintain focus, not only in music class but in other areas as well. This increase in overall attentional performance was measured by standard psychometric tests and teacher questionnaires. Now that a relationship between the ability to synchronize musically and attentional performance has been established, and because musical synchrony can be learned, the research team seeks to determine whether a period of musical practice might translate to overall improvement in attentional performance. Please see the Gamelan Project website for more information about this study. Another TDLC PI, Dr. Isabel Gauthier, and her graduate student Yetta Wong are interested in the holistic processing of musical notation. They studied brain activity in people with various degrees of musical experience, and were surprised by exactly how much of the brain becomes engaged in the simple act of perceiving a single note, especially in advanced musicians. (For more, please see "The Musical Brain Sees Faster".)

The Brain Computer Interface Converts Emotions into Music

Three teams of researchers at UC San Diego's Swartz Center for Computational Neuroscience (SCCN) and Institute for Neural Computation (INC) are pioneering a new field called Brain Computer Interface (BCI). Scott Makeig, Tzyy-Ping Jung, and colleagues are developing technology that links thoughts, commands and emotions from the brain to computers, using EEG. "In addition to gadgets like mind-dialed cell phones, devices to assist the severely disabled and a cap to alert nodding-off air traffic controllers, new technology could reshape medicine." Scott Makeig, Director of SCCN, has integrated music into his BCI research. His studies use the Brain Computer Interface to read emotions and convert those emotions into musical tones. In his "Quartet for Brain and Trio" Project ("Just: A Suite for Flute, Violin, Cello, and Brain"), Dr.
Makeig composed the music and performed the violin part, accompanied by a flutist, a cellist, and a so-called "brainist". The "brainist," cognitive science graduate student Tim Mullen, focused on one of five distinct emotional states, feeling it fully inside his body. When he entered into a certain feeling, sensors brought brain signals to the Mobile Brain/Body Imaging (MoBI) laboratory, converting the emotion into a similar-feeling tone complex. The musicians then played the piece that corresponded to that ground tone complex. This research demonstrates that a computer can decode primal emotions and that the brain can communicate these feelings through music, without even lifting a hand. In addition to the BCI studies, SCCN is involved in several other music projects. In one, PhD student Grace Leslie is working with Scott Makeig to study the emotional expression of a person attempting to convey the feeling of the music they are hearing via expressive 'conducting' gestures, using the new Mobile Brain/Body Imaging (MoBI) laboratory. In a related pilot project, Dr. Makeig and his team are using an instrumented violin bow to collect EEG, motion capture, and bow dynamics from violinists attempting to express various feelings via simple open-string bowed violin tones. Because studies show correlations between auditory processing ability and cognitive functions, investigators have begun to develop and test music-based interventions that might help improve children's cognitive abilities (e.g., the Gamelan Project). Researchers are finding that music training correlates with cognitive and language improvements. Laurel Trainor, director of the Institute for Music and the Mind at McMaster University in West Hamilton, Ontario, and colleagues compared preschool children who had taken music lessons with those who had not. Those with some training showed larger brain responses on a number of sound recognition tests. Her research indicated that musical training appears to modify the brain's auditory cortex. Even a year or two of music training led to enhanced levels of memory and attention (when measured by the same type of tests that monitor electrical and magnetic impulses in the brain). Harvard University researcher Gottfried Schlaug found a correlation between early-childhood training in music and enhanced motor and auditory skills as well as improvements in verbal ability and nonverbal reasoning. TDLC researcher Terry Jernigan is involved in a newly developing study to explore the impact of musical/symphonic training on cognitive and brain development in children in Chula Vista elementary schools. The study involves a new partnership between The Neurosciences Institute (represented by Aniruddh Patel and John Iversen), the San Diego Youth Symphony (led by Dalouge Smith), and UC San Diego (TDLC's Terry Jernigan at the Center for Human Development). The team is especially interested in how musical training impacts the development of language, attention, and executive function, and the brain networks that support these abilities. The project builds on the strengths of the three participating organizations: NSI (~15 years of research on music neuroscience), the San Diego Youth Symphony (extensive experience in music education) and Dr. Jernigan's lab (leading experts in cognitive and brain development).
The researchers, who are currently looking into possible funding sources, are doing pilot studies in two Chula Vista elementary schools, with children primarily learning string instruments (e.g., violin). The team plans to use behavioral cognitive tests and structural brain imaging. TDLC's Paula Tallal explains that auditory language training, as well as musical training, is being shown to alter the functional anatomy of the brain that is traditionally associated with speech and language processing. She explains, "behavioral data shows that musical training, as well as neuroplasticity-based acoustic training, significantly improves language and reading skills. Thus, one route by which music therapy may most significantly impact clinical populations is by improving dynamic auditory attention, sequencing and memory processes." Now that a correlation has been found between music training and cognitive and language improvements, the next step is to create and test different interventions that might help improve cognitive and language skills in children at risk for or struggling with reading or other attentional tasks. Paula Tallal, as part of the Scientific Learning Corporation, has helped develop the Fast ForWord® Language and Reading products. The program consists of a series of computer-delivered brain fitness exercises to help educators improve children's academic achievement. "After auditory language training of children with dyslexia," Dr. Tallal explains, "metabolic brain activity more closely resembles that of 'normal' readers, and reading improved enormously after intervention." In fact, improving neural capacities has been shown to improve student performance, independent of content (language, math, science) or curriculum used (Tallal, 2004). So, Dr. Tallal explains, "even children's math scores improve tremendously in large clinical trials in the schools, even though we don't train math. We train the brain's precision to process auditory and language information. Our focus is not just auditory, it is not just music, but what that does for language and how important language is for ALL academic achievement."

Additional Information - NYAS Conference "Music and the Brain" (lectures by TDLC PIs Dr. Tallal and Dr. Buzsáki):
- The Role of Auditory Processing in Language Development and Disorders -- Paula Tallal, PhD, Rutgers University
- Neural syntax: what does music offer to neuroscience (and vice versa) -- Gyorgy Buzsáki, MD, PhD, Rutgers University
- Additional lectures from the NYAS Music, Science and Medicine Conference are available on The Science Network
NASA's landmark photo providing the first glimpse of our home planet from deep space was taken 45 years ago. You think your vacation pictures are impressive? Try to imagine what it was like 45 years ago as scientists and engineers produced the very first images of our planet from deep space. On August 23, 1966, NASA's Lunar Orbiter 1 took the first photo of Earth from the moon's orbit, and it forever changed how we see our home planet. "You're looking at your home from this really foreign kind of desolate landscape," said Jay Friedlander, who started his NASA career 20 years ago as a photographic technician working on images, including those from the Lunar Orbiter, at NASA's Goddard Space Flight Center. "It's the first time you're actually looking at Earth as a different kind of place," said Friedlander, currently a multimedia specialist at Goddard. Pictures of Earth from space had been taken before, by rockets in the 1940s and satellites in the 1950s and 1960s. However, those pictures captured just parts of Earth, as opposed to a full-on view of the planet. But that was about to change.

[Image: The world's first view of Earth taken by a spacecraft from the vicinity of the moon. The photo was transmitted to Earth by the United States Lunar Orbiter I and received at the NASA tracking station near Madrid. This crescent of the Earth was photographed August 23, 1966, when the spacecraft was on its 16th orbit and just about to pass behind the moon.]

[Image: The Lunar Orbiter's onboard camera contained dual lenses that took photos simultaneously. One lens took wide-angle images of the moon at medium resolution. A second telephoto lens took high-resolution images in greater detail. Credit: Courtesy of George Eastman House, International Museum of Photography and Film]

In the summer of 1966, the Beatles were performing their last string of public concerts, the Baltimore Orioles were on the way to their first World Series championship, the National Organization for Women was founded, and the United States was preparing to send the first humans to the moon. But before NASA could send astronauts to our lunar neighbor, they needed to find a safe place to land. So from 1966 to 1967, the Lunar Orbiter program dispatched unmanned reconnaissance spacecraft to orbit the moon. "The basic idea was preparing to go to the moon for the Apollo missions," said Dave Williams, a planetary curation scientist at Goddard. According to Williams, NASA "needed high resolution pictures of the surface to make sure this is something they could land on and pick out landing sites." NASA needed to map the moon quickly. As it turned out, they could call upon off-the-shelf technology: Boeing and Eastman Kodak had previously developed a spacecraft with an onboard camera system for the Department of Defense. The first spacecraft, Lunar Orbiter 1, left Earth on August 10, 1966; 92 hours later it was orbiting the moon. It was like a flying photography lab, according to Friedlander. "The camera system itself took up at least a third of the spacecraft," said Friedlander. Just about everything else, he said, "was power and propulsion." The Lunar Orbiter camera contained dual lenses, taking photos at the same time. One lens took wide-angle images of the moon at medium resolution. A second telephoto lens took high-resolution images yielding details as small as 5 meters in size.
For every swath of real estate on the moon that the medium resolution lens imaged, the high resolution lens would take three snapshots of smaller areas within that swath. The entire camera contraption would have made Rube Goldberg proud: it exposed, developed, and processed photographic film onboard a moving spacecraft, traveling around the moon between hot and cold temperature extremes, anywhere from approximately 27 to 3,700 miles above the lunar surface. "This thing is going around the moon in zero gravity and developing film," said Williams. "It was an amazing achievement that they could do this." Williams said that the camera had "these big honking reels" of 70 mm film. The film would roll through, the camera would take pictures, and the exposed film would then move to an automated developer. The automated film developer contained a mix of chemicals that would develop the film using a process similar to the method used by Polaroid cameras. An electron beam would then scan each developed image before transmitting the photos back to Earth using radio signals — the same way television satellites transmitted analog signals to TV stations. Deployed one after the other, five Lunar Orbiter spacecraft produced a medium-detail map of 99 percent of the moon. Only in the last two years has NASA's Lunar Reconnaissance Orbiter — still actively circling the moon — generated higher-resolution maps of the entire lunar surface. In addition, the first three spacecraft took highly detailed photos of 20 potential landing sites that looked promising. Friedlander said that personnel receiving the images on Earth would make giant prints of these images "and lay them out so they could walk on top of them and look for landing sites." But at some point during Lunar Orbiter 1's mission, NASA contemplated pointing the spacecraft's camera at Earth. "That wasn't planned originally," said Williams. "That only came up after the mission was already in operation." Williams said that repositioning the satellite was a high-risk maneuver. "If you turned the spacecraft maybe it wouldn't turn back again. You don't want to mess with a working spacecraft if you don't have to." There was a debate about whether they should even attempt this at all. In the end, Williams said, NASA decided it wanted the picture and would not blame anyone if something went wrong during the repositioning maneuver. So on August 23, the spacecraft successfully took a photo of an earthrise, the blue planet rising above the moon's horizon. "NASA took the image and they created a poster of it which was given as gifts to everybody," said Friedlander. "Senators and congressmen would give it out as presents to constituents and visiting dignitaries." More pictures followed, including the famous Blue Marble photo of the Earth taken from the window of the Apollo spacecraft. But this elaborate and complex camera system was never really used after the Lunar Orbiter missions. "At the end of each mission, they did purposely crash the Lunar Orbiter," said Williams. "Ostensibly, [NASA] didn't want the radio signals from one lunar orbiter to interfere with the next lunar orbiter they put up." But with the presence of the Soviet Union, which was deploying lunar orbiters of its own, Williams speculates that national security precautions may have been a factor. Since the spacecraft and camera were originally based on defense technology, they may have been smashed to bits "so that no one could ever get to them," said Williams.
The Lunar Orbiter's mission may have been accomplished long ago, but its first image of the Earth continues to inspire. "We're on this little Earth. We're only part of some grand solar system in some big galaxy and universe. That's why this picture is important, because this was the first time that anyone on Earth got this sense," said Friedlander. By Ben P. Stein, Inside Science News Service
Chapter 2: Electricity & Magnetism

2.1 The Maxwell equations

The classical electromagnetic field can be described by the Maxwell equations, which can be written both as integral and as differential equations:

$$\oint_S (\mathbf{D}\cdot\mathbf{n})\,d^2A = Q_{\mathrm{free,\,included}} \qquad\qquad \nabla\cdot\mathbf{D} = \rho_{\mathrm{free}}$$

$$\oint_S (\mathbf{B}\cdot\mathbf{n})\,d^2A = 0 \qquad\qquad \nabla\cdot\mathbf{B} = 0$$

$$\oint_C \mathbf{E}\cdot d\mathbf{s} = -\frac{d\Phi}{dt} \qquad\qquad \nabla\times\mathbf{E} = -\frac{\partial\mathbf{B}}{\partial t}$$

$$\oint_C \mathbf{H}\cdot d\mathbf{s} = I_{\mathrm{free,\,included}} + \frac{d\Psi}{dt} \qquad\qquad \nabla\times\mathbf{H} = \mathbf{J}_{\mathrm{free}} + \frac{\partial\mathbf{D}}{\partial t}$$

For the fluxes: $\Psi = \int_S (\mathbf{D}\cdot\mathbf{n})\,d^2A$ and $\Phi = \int_S (\mathbf{B}\cdot\mathbf{n})\,d^2A$.

The electric displacement $\mathbf{D}$, the polarization $\mathbf{P}$ and the electric field strength $\mathbf{E}$ depend on each other according to

$$\mathbf{D} = \varepsilon_0\mathbf{E} + \mathbf{P} = \varepsilon_0\varepsilon_r\mathbf{E},\qquad \mathbf{P} = \sum \mathbf{p}_0/\mathrm{Vol},\qquad \varepsilon_r = 1 + \chi_e,\quad\text{with}\quad \chi_e = \frac{n p_0^2}{3\varepsilon_0 kT}.$$

The magnetic field strength $\mathbf{H}$, the magnetization $\mathbf{M}$ and the magnetic flux density $\mathbf{B}$ depend on each other according to

$$\mathbf{B} = \mu_0(\mathbf{H} + \mathbf{M}) = \mu_0\mu_r\mathbf{H},\qquad \mathbf{M} = \sum \mathbf{m}/\mathrm{Vol},\qquad \mu_r = 1 + \chi_m,\quad\text{with}\quad \chi_m = \frac{\mu_0 n m_0^2}{3kT}.$$

2.2 Force and potential

The force and the electric field between two point charges are given by:

$$\mathbf{F}_{12} = \frac{Q_1 Q_2}{4\pi\varepsilon_0\varepsilon_r r^2}\,\mathbf{e}_r;\qquad \mathbf{E} = \frac{\mathbf{F}}{Q}.$$

The Lorentz force is the force which is felt by a charged particle that moves through a magnetic field.
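As a quick numerical illustration of the Coulomb expression above (a sketch with arbitrary example values; the helper function name is ad hoc):

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def coulomb_force(q1, q2, r, eps_r=1.0):
    """Magnitude of the force between two point charges:
    F = q1*q2 / (4*pi*eps0*eps_r*r^2), per the formula above."""
    return q1 * q2 / (4 * math.pi * EPS0 * eps_r * r**2)

# Example with arbitrary values: two 1 microcoulomb charges
# 10 cm apart in vacuum (eps_r = 1).
print(f"{coulomb_force(1e-6, 1e-6, 0.10):.3f} N")  # ~0.899 N
```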
Nuclear Protest Movements Although the Szilard‐Einstein initiative helped launch the Manhattan Project, the Anglo‐American program to build the atomic bomb, many atomic scientists viewed their development of the weapon as a deterrent to its use, presumably by Germany. Therefore, when Szilard and other scientists, principally at the project's Chicago Metallurgical Lab, recognized that it would be employed against a virtually defeated Japan, they urged higher authorities to forgo its use. In the Franck Report of June 1945 (named after the chemist James Franck and written largely by Eugene Rabinowitch), they argued that employment of the weapon would shock world opinion, begin an atomic armaments race, and undermine the possibility of securing an international agreement for nuclear arms control and disarmament. When the U.S. government went ahead with the atomic bombing of Japan, it created an enormous furor around the world, and especially in the United States. Whether or not they supported the U.S. government action, most Manhattan Project scientists recognized that the world now faced the prospect of total annihilation. In the fall of 1945, they established the Federation of Atomic Scientists—quickly changed to the Federation of American Scientists—a group that at its height had some 3,000 members. Two other new entities, the Emergency Committee of Atomic Scientists (a small group of prominent scientists headed by Einstein) and the Bulletin of the Atomic Scientists (edited by Rabinowitch), became close allies. Pacifist groups like the Fellowship of Reconciliation, the War Resisters League, and the Women's International League for Peace and Freedom also worked to publicize nuclear dangers, as did the burgeoning world federalist movement. Arguing that people faced the prospect of “one world or none,” they worked together to champion nuclear disarmament, usually through limitations upon national sovereignty that ranged from international control of nuclear weapons to world government. Similar movements, often modeled on the American, emerged elsewhere—particularly in Western Europe, Canada, Australasia, and Japan. In addition, a Communist‐led movement developed; unlike the other, nonaligned movement, it assailed Western (but not Eastern) nuclear policy. Its best known project was the Stockholm peace petition campaign, a massive antinuclear venture that purportedly drew 2.5 million signatures in the United States. As the Cold War advanced in the late 1940s and early 1950s, the nuclear protest movement lost much of the support it had enjoyed. Public opinion grew more hawkish and increasingly amenable to meeting Communist challenges with military might. Administration officials turned from fostering plans for disarmament to winning the Korean War and developing the most destructive weapon yet: the hydrogen bomb. Buffeted by the Cold War and often confused with their Communist‐led rivals, nonaligned nuclear disarmament groups declined precipitously in influence and membership. Even so, by publicizing the nightmarish quality of nuclear war, they did help to stigmatize the atomic bomb, thereby making it more difficult for governments to use it again in war. They also slowed the development of nuclear weapons programs in some nations and made them unthinkable in others. A second wave of public protest against nuclear weapons began to emerge in 1954, in the United States and around the world. That year, when a U.S. 
H‐bomb test at Bikini atoll sent vast clouds of nuclear fallout surging across the Pacific and irradiated the crew members of a Japanese fishing boat, the Lucky Dragon, it highlighted the dangers of nuclear testing. The power of the weapon also illustrated the vast destructiveness of nuclear war. In 1955, Einstein joined the British philosopher Bertrand Russell in issuing a widely publicized appeal to the leaders of the great powers to halt the nuclear arms race. As pacifists and other antinuclear activists stepped up their protests against nuclear testing, in 1957 concerned scientists launched a series of Pugwash conferences (named for the original meeting site in Pugwash, Nova Scotia), bringing together scientists from both Cold War camps to discuss arms control and disarmament measures. That same year, Norman Cousins and other leading critics of nuclear testing formed the National Committee for a Sane Nuclear Policy (SANE), whose startling antinuclear ads helped catalyze an organization of some 25,000 members, with chapters around the country. Meanwhile, in 1958, the chemist Linus Pauling released a petition, signed by 11,000 scientists from 49 nations (including 2,875 from the United States), urging the signing of a nuclear test ban treaty. In contrast to the first wave of public protest against nuclear weapons, students' and women's groups played a very prominent role in this one. Organized in 1959, the Student Peace Union established chapters on dozens of college campuses, and in early 1962, staged the largest disarmament vigil yet seen at the White House. In 1961, women's peace activists launched Women Strike for Peace, which, like SANE, organized picketing, petitions, lobbying, and rallies to secure a test ban treaty and other multilateral measures toward nuclear disarmament. Despite its remarkable efflorescence, the nuclear protest campaign began to fade after 1963. To a large extent, this reflected its success: the Limited Test Ban Treaty had been signed (1963), the Soviet Union and the United States seemed on the road to detente, and many activists felt they could return to their private concerns. This mood of relaxation was reinforced by the signing of the Treaty on the Nonproliferation of Nuclear Weapons in 1968. Furthermore, nuclear disarmament activists were almost invariably peace activists, and with the Johnson administration's escalation of the Vietnam War in early 1965, many shifted their focus to a vigorous campaign against American participation in that conflict. By this time, however, the nuclear protest movement had made important headway in altering government policy. Thanks to the widespread public clamor in the United States and around the world, it had contributed substantially to a Soviet‐British‐American moratorium on nuclear testing in 1958, to the decision of numerous nations to not develop or use nuclear weapons, and to the signing of the first nuclear arms control treaties. In the late 1970s and early 1980s, the nuclear protest movement flared up once again. The collapse of Soviet‐American detente, the Soviet Union's deployment of SS‐20 missiles in Eastern Europe, the NATO decision to deploy cruise and Pershing missiles in Western Europe, and especially the advent of the hawkish Reagan administration, with its glib talk of nuclear war, convinced millions of Americans that their lives were once more in peril. 
New groups like Mobilization for Survival and Physicians for Social Responsibility grew rapidly, as did older ones, like SANE, that had fallen into decay. In June 1982, nearly a million Americans flocked to a New York City rally against nuclear weapons—the largest demonstration in U.S. history. Meanwhile, there emerged a broadly gauged Nuclear Freeze Campaign. Designed to halt the nuclear arms race through bilateral action, it drew the backing of major churches, unions, and the Democratic Party. Despite the best efforts of the Reagan administration to discredit the Freeze movement, polls found that it garnered the support of 70 percent or more of the American public. In the fall of 1982, a majority of voters backed the Freeze in nine out of ten states where it appeared on the ballot. Although rejected by the U.S. Senate, a Freeze resolution passed the House by a comfortable margin and became a key part of the Democratic presidential campaign of 1984. Although the nuclear protest movement ebbed substantially in the late 1980s, it could once again point to some important successes. To be sure, the Freeze proposal never became official U.S. policy and President Ronald Reagan easily won a second term in the White House. Nevertheless, public policy began to shift noticeably. The administration, which had disdained to enter arms control and disarmament discussions with the Soviet government, suddenly started to pursue active negotiations. And when Reagan, to steal the thunder of antinuclear forces in Western Europe and the United States, made arms control and disarmament proposals, the Soviet government startled U.S. officials by accepting them. Part of this sudden accord reflected the shift in Soviet policy under the reform leadership of Mikhail Gorbachev. But Gorbachev too was influenced by Western disarmament groups, and even initiated a nuclear testing moratorium at their suggestion. The result was a burst of diplomatic activity that produced the INF Treaty (removing U.S. and Soviet intermediate-range nuclear weapons from central Europe) and a number of other nuclear disarmament measures. As the editors of the Bulletin of the Atomic Scientists pushed the hands of their famous "doomsday clock" further back from midnight, the nuclear protest campaign deserved some of the credit.

[See also Helsinki Watch; Nuclear Weapons and War, Popular Images of; Peace and Antiwar Movements; SALT Treaties; Strategic Defense Initiative; Vietnam Antiwar Movement.]

- Alice Kimball Smith, A Peril and a Hope: The Scientists' Movement in America, 1945–47, 1965.
- Joseph Rotblat, Scientists in the Quest for Peace: A History of the Pugwash Conferences, 1972.
- Milton S. Katz, Ban the Bomb: A History of SANE, the Committee for a Sane Nuclear Policy, 1957–1985, 1986.
- David S. Meyer, A Winter of Discontent: The Nuclear Freeze and American Politics, 1990.
- Amy Swerdlow, Women Strike for Peace, 1993.
- Allan M. Winkler, Life Under a Cloud: American Anxiety About the Atom, 1993.
- David Cortright, Peace Works: The Citizen's Role in Ending the Cold War, 1993.
- Lawrence S. Wittner, One World or None: A History of the World Nuclear Disarmament Movement Through 1953, 1993.
- Lawrence S. Wittner, Resisting the Bomb: A History of the World Nuclear Disarmament Movement, 1954–1970, 1997.

Lawrence S. Wittner
Presentation transcript: "Tips for the Bagrut exam"

1 Tips for the Bagrut exam
What you need to know in order to succeed! LOTS, HOTS and BRIDGING QUESTIONS… Good luck! Pirchy Dayan. Based on the Literature Handbook (2013) and Bari Nirenberg's presentations.

2 For each Summative Assessment on a story, poem or play you should answer: basic understanding (LOTS) questions. These are basic content questions; answers should be short and to the point. Analysis and interpretation (HOTS) questions that may include understanding of literary techniques.

3 Extended HOTS question, where you: a. name the HOTS you chose to answer the question. b. answer the question showing appropriate use of the HOTS that you have chosen. Use vocabulary that is directly connected to the chosen HOTS. Bridging Text and Context question:

4 Students are asked to make connections between the text, universal themes and new relevant information and ideas from other sources. These sources may include the biography and personality of the author, and themes and/or aspects of the historical, social and cultural contexts of the text. The connection must be both accurate and explicit. Use 1-2 examples from the text to support your point.

5 Inferring
Reading "between the lines" to understand information that is not presented directly. Drawing a conclusion from clues. What do you think the character meant when s/he said, "___"? What does ___'s behavior suggest? What is the purpose or function of this information? What different meanings can be inferred from this line in the poem?

6 Useful Vocabulary
Infer, learn, conclude, read between the lines, assume, clue, hint, imply, probably, likely, unlikely, evidence, what are the consequences of this statement?

7 Sample Questions
When George first meets Mr. Cattanzara he lies to him about reading books because he wants his respect. Why is Mr. Cattanzara's respect so important to George? What is the importance of the setting of the story? (any story) "I shall be telling this with a sigh". What TWO different meanings can we infer from this line in The Road Not Taken?

8 Comparing and contrasting
Finding what is similar/different between two or more things. Drawing conclusions based on these similarities and differences. Compare and contrast the conflicts/problems/dilemmas in two stories or poems. Compare and contrast characters in a text.

9 Useful Vocabulary
like, similar, also, similarly, in the same way, likewise, again, compared to, both, have in common.

10 unlike, in contrast with, different than, opposite, (comparative adjectives) on the contrary, however, although, yet, even though, still, nevertheless, regardless, despite, while, on the one hand…on the other hand.

11 Sample Questions
Compare and contrast the TWO roads in the poem The Road Not Taken. Compare and contrast Mr. Kelada and the narrator. Compare and contrast Joe's view of family before and after he hears the contents of Larry's letter. Support your answer with information from the play.

12 Explaining patterns
Identifying and explaining different patterns of behavior in a text; explaining why these patterns are important. Identifying and explaining different patterns in a poem - for example: rhythm and rhyme. What behavior does the character repeat? Explain why certain lines/phrases/words are repeated in a story/poem.

13 Useful Vocabulary
repeat, repetition, repetitive, routine, order, notice, noticeable, significance, significant, similar, recur, rule, follow a pattern, pattern of behavior.
14 Sample Questions
The relationship between Waverly and her mother is like that of chess players. Explain the pattern of their behavior. What behavior does Mr. Kelada repeat throughout the story? Why is his behavior in the end surprising? Explain. What behavior does George repeat in the story? Based on his pattern of behavior, is he going to read the books? Explain.

15 Explaining cause and effect
Identifying reasons why things happen (the cause); identifying and describing the result (the effect) of actions and circumstances; explaining the connection between the two. What were the results of ___'s action? What caused ___ to think that ___?

16 Useful Vocabulary
cause, effect, result, consequence, consequently, outcome, as a result of, therefore, if...then, in order to, due to, because, thanks to, as a result (of), encourage, persuade, development, explanation, ___ leads to ___.

17 Sample Questions
What caused the speaker in the poem to choose the "road less travelled by"? "He stayed in his room for almost a week, except to sneak into the kitchen when nobody was home." What caused George to stay in his room for a week? While Mr. Kelada is examining the pearls, Mrs. Ramsay's face changes. How does this affect Mr. Kelada?

18 Distinguishing different perspectives
Identifying different points of view in a text; identifying different outlooks on life. Identify how different characters respond to a central event in the story. How does your understanding of the characters' actions/events in the story change as you read?

19 Useful Vocabulary
perspective, point of view, attitude, differences, outside, inside, looking from the side, opinion, reader, narrator, consider, identify, distinguish, tell the difference,

20 however, on the one hand, on the other hand, outlook, standpoint, perception, side, angle.

21 Sample Questions
How do the narrator's feelings about Mr. Kelada change from the beginning to the end of the story? How do Waverly and her mother view Waverly's success in chess? How does our opinion of Mrs. Ramsay change at the end of the story?

22 Problem Solving
Identifying a problem/dilemma and its solution. Identifying a problem and suggesting a solution of your own based on what you know about the characters, events and circumstances. Define the problem facing the protagonist. What is the central conflict in this text and how is it resolved? Explain. What dilemma does ___ face at this point in the story?

24 Sample Questions
In the first stanza of the poem The Road Not Taken (lines 1-5), what is the traveler's dilemma? How does he solve his dilemma? What dilemma does Mr. Kelada face when he examines the pearls? How does he solve his dilemma?

25 What is considered a good answer?
The information is relevant, sufficient, well organized and correct. The answer includes examples/supporting details from the text. The message is clear. There is correct use of grammar, vocabulary, spelling and punctuation. 80% is given for content and 20% for grammar and spelling.

26 General Tips for Success
Read the question carefully! You may want to highlight key words in the question to help you focus. Make sure that your answer relates to the entire question. Give examples from the text to support your point! If you are given a quote, the answer must reflect a general understanding of the literary piece, NOT just the quote.
27 If there is a question about a literary term, such as setting or metaphor, you should show understanding of the literary term in connection to the text. You should provide examples from the text in your answer. When you are asked to explain, you must support your explanation with examples. No examples means you lose at least 20% of what the question is worth.

28 Extended HOTS questions
For the extended HOTS question: A. answer the question as required. B. choose ONE HOTS, name it and show appropriate evidence of the use of the chosen HOTS. That is, use the relevant vocabulary you have learned in your answer. You will get up to 5 points for using the correct vocabulary.

29 Bridging Text and Context – Tips
You need to write about how certain information you are given is related to the text you read, and how it enhances (strengthens) your understanding of it. Relate to the information/quote you are given in YOUR OWN WORDS. Explain how the background information you are given is connected to an aspect of the text. Make connections between the new information and the text by giving explanations and 1-2 examples.

30 Give ONLY relevant information! Conclude your answer. Use the following template to help you organize your answer. From this quote/information I learn that ___________. (In addition, it suggests that ______________.) This is reflected in the story/poem/play (in several ways). For example, ___________. (Also, ______________.) To sum up, _______.
Sometimes innovation strikes at the most unlikely of times and in the most unlikely of places. While researchers can labor for months at their research facilities, a breakthrough may come while they're tinkering at home in their garage or basement. Such was the case for Rutgers Cooperative Extension specialist William Roberts when he used an aquarium air pump to separate two layers of plastic film in a model greenhouse he was building in his basement on Christmas Day in 1964. As innocuous as it may seem, what Roberts did was an innovation that, once developed for commercial application, would revolutionize the use of greenhouses worldwide and be a boon to the agricultural industry. Greenhouses, once regarded as a luxury for the rich, have been in use for centuries. The traditional glass structures allowed for a controlled environment to protect plants from cold or heat, and for the cultivation of exotic plants. It wasn't until the 1960s, when polyethylene film became available in wide sheets ideal for industrial applications, that the construction of greenhouses became economical, leading to widespread agricultural use. In the early 1960s, as an extension agricultural engineer, Roberts was working with growers who were using low-cost polyethylene film on simple wooden frames to construct greenhouses used primarily for spring transplant production and bedding plants. One concern was the tendency for a roof made of a single layer of polyethylene to collect condensation that dripped on the small seedlings. To alleviate this problem, a second layer of film was added by fastening it to the underside of the frame, creating an airspace and keeping the inner layer warmer. This proved cumbersome, so the next development was to install the first layer over the frame and fasten it to the rafters with 2" x 2" spacers, followed by a second layer, which was then fastened down with a 1" x 2" beam. This was an improvement but still required two fastening steps for every rafter. While this double-layer polyethylene greenhouse showed improved energy efficiency, on windy days the sheets of plastic were prone to flapping and would last only one growing season before having to be replaced. The new design for applying the second layer made the task slightly less laborious but was still a tedious process that needed to be repeated annually. This cost growers time and money and limited the use of polyethylene greenhouses. Roberts recalls his breakthrough: "Everyone needs at least one good idea in their career and mine came in 1964. On Christmas morning when I was supposed to be doing something else I was in the basement building a model greenhouse. We had been installing two layers of plastic film on greenhouse structures to reduce the energy consumption by 35% and it was a tedious and labor consuming job; plus it was not very effective. After I had built my model and installed two layers of film on it, I took a small air pump used in fish tanks and rigged it so that air could be blown between the two layers of film. And as I saw it blowing up, I said, 'Thank you Lord, this is the way to overcome many problems and reduce the tedious work of double glazing'. The outer layer inflated outward and the inner layer was forced down over the rafter supports creating an air space and giving rigidity to the two layers so that in the normal wind situation the plastic would not flap and move in the wind like a sheet hanging on the clothesline.
It all clicked in my head as the way to go." This concept was successfully applied to a greenhouse on the Cook Campus. It was essentially a wooden frame structure designed for the width of polyethylene sheeting available at the time. A small commercial air blower was used in place of the fish tank pump. Researchers noted that not only was there a significant reduction in the required construction materials and labor, but the tension in the film from the slight air pressure reduced the film's flexing and flapping in the wind. This, in turn, reduced the likelihood of tearing, thereby increasing structural reliability and extending film life. The very first structure using this air-inflated, double-layer polyethylene design was located on Ag Extension Way, behind the Extension Conference Center off of College Farm Road on the Cook Campus. That greenhouse is now a national landmark, but more on that later. The concept was next applied to a portion of a large commercial greenhouse at Kube-Pak, Inc. in Allentown, NJ, which was then owned by Fred and Bernie Swanekamp. Roberts' cautious approach was overruled by Fred's unbridled enthusiasm. "I asked Fred if he wanted to try it on one bay and he said that he wanted to do one half the greenhouse, which was a greenhouse of six acres. I told him I wanted to sleep at night but he was so excited about the idea that he proceeded to cover one half of the structure with the air inflated system," Roberts remembers. Roberts' fears were quickly put to the test. "I distinctly remember in the spring of that year on a very windy Saturday staying away from the telephone because we were having hurricane force winds. I didn't want to hear about the damage to the greenhouse. I heard on the radio that the roof blew off the Palestra, a large basketball arena in Philadelphia, and I still received no call from Fred. Finally when I could stand it no longer I called him on Monday morning. He said 'It was no problem'. The half of the roof covered with the original method blew off during the storm but the new section was not damaged, so he covered the remaining 500 feet of the entire 1,000-foot long greenhouse with the air inflation system. The Kube-Pak greenhouses today are covered in the same system developed almost 50 years ago." Roberts is mindful of the fortuitous circumstances of the wind event. "This replicated a very large research project with no need for a sponsored grant or an expensive New Jersey Agricultural Experiment Station project." The commercial application of the air-inflated, double-layer polyethylene greenhouse (AIDLPG), also referred to as the "double plastic" greenhouse, spread like wildfire. Several companies then developed steel and aluminum frame structures for multi-span and single-span greenhouses that could effectively use the double plastic system. Roberts also designed wooden greenhouse frames of several sizes to match available polyethylene film widths, as well as a pipe frame structure and a pipe bender to assist in the hand-bending of the hoops. He developed the engineering plans and drawings for these easy-to-construct greenhouses. These plans were made available through the extension plan service, a national system at every land-grant university in the U.S., providing blueprints for growers. The early popularity of these designs and their rapid commercial acceptance were due primarily to their low cost relative to conventional greenhouses glazed with glass or fiberglass.
The insulation properties of the inflated air space reduced heat requirements by over a third, further reducing costs to growers. The work on the AIDLPG that Roberts initiated at Rutgers in 1965, along with the contributions of the commercial growers who took the early risks, helped this development spread rapidly and extensively into commercial agriculture. The innovation was quickly adopted for commercial use and became the basis for a rapid expansion in plastic greenhouse acreage. Today, about 65 percent of commercial greenhouses in the U.S. and throughout the world that use double-glazing utilize this system. The AIDLPG so revolutionized the industry that in 2004, the American Society of Agricultural and Biological Engineers (ASABE) dedicated the structure of the first air-inflated, double-layer polyethylene greenhouse as an ASABE Historic Landmark. Prior to this dedication, only 42 such landmarks had been dedicated in the U.S. since 1926. How does this historic national honor rank with Roberts? "It was number 43 on the list, along with the development of the first plow and the cotton gin. Very humbling." Roberts and colleagues were not ones to rest on their laurels. Stemming from that development, and under the leadership of now Professor Emeritus David Mears, Roberts worked on further advances in the technology. The development of the AIDLPG also set in motion other advances like solar heating systems for greenhouses, movable thermal insulation screens and in-floor, root-zone heating systems. Today, Arend-Jan Both, associate extension specialist in controlled-environment engineering, is Roberts' successor. Both points out some further advantages of the AIDLPG design. "While the second layer reduces light transmission, sunlight passing through two layers of greenhouse film is more diffused. Research has shown that this diffused light can reduce heat stress on sunny days, and it makes the light distribution inside the greenhouse more uniform with deeper penetration into the plant canopy." Both notes that the challenge of this system remains that "as a result of breakdown due to UV degradation, the film needs to be replaced after three to four years."

Greenhouse restoration for landmark dedication

Joseph Florentine, director of Greenhouse Operations at Rutgers NJAES, underscores the global significance of Roberts' innovation. "I think the most important aspect of Bill's invention is that the inflated plastic greenhouse is used extensively in third world countries to extend their growing seasons, thereby increasing food security. Bill's invention not only aids commercial growers with low cost greenhouses, it is also helping undernourished people in this world to provide their own food. His idea literally revolutionized world agriculture by making food grow, inexpensively, where and when it could not grow before." While much of the research at Rutgers leading to advances in greenhouse engineering has been conducted in various campus greenhouses, the original AIDLPG structure has also been in continuous use for a variety of research studies. Before the landmark dedication in June 2004, the structure, which was built in 1962, was showing some wear. Greenhouse Operations and Management staff and two carpenters from University Facilities worked to restore the greenhouse, replacing rotted wood, repainting and recovering the structure. Greenhouse Operations has made the commitment to maintain this landmark, despite having no dedicated funds to do so.
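To see where a heating reduction of "over a third" can come from, the sketch below compares steady-state heat loss, Q = U x A x dT, for a single film and an inflated double film. The U-values, area, and temperature difference are illustrative assumptions for this sketch, not measurements from the article.

```python
def heat_loss_w(u_value, area_m2, delta_t_c):
    # Steady-state loss through the glazing: Q = U * A * dT
    return u_value * area_m2 * delta_t_c

# Illustrative U-values (assumed, not from the article):
U_SINGLE = 6.2   # W/(m^2*K), single polyethylene layer
U_DOUBLE = 4.0   # W/(m^2*K), air-inflated double layer

area, dt = 500.0, 20.0   # hypothetical greenhouse: 500 m^2 of glazing, 20 C colder outside
q_single = heat_loss_w(U_SINGLE, area, dt)
q_double = heat_loss_w(U_DOUBLE, area, dt)
print(f"single layer: {q_single / 1000:.1f} kW")
print(f"double layer: {q_double / 1000:.1f} kW  ({1 - q_double / q_single:.0%} reduction)")
```

With these assumed values the reduction comes out to about 35 percent, consistent with the figures quoted in the article; the real saving depends on the actual films, wind, and inflation pressure.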
Besides commercial growers, consumers have reaped the benefits of Roberts' application. Thanks to these double-plastic structures, the flowers we buy in full bloom, the flats of vegetable and herb transplants for springtime planting, and local vegetables grown in plastic-covered greenhouses to extend the early or late seasons can all be produced locally and economically. The cultivation of tomatoes in these greenhouses can occur year-round, as can that of other high-value crops. So, the next time you're poring over the flats of vegetable transplants in a garden center greenhouse, look around to see if the structure is an AIDLPG. If so, you can say thanks to Rutgers Cooperative Extension ag engineer Bill Roberts and his Christmas Day tinkering in 1964.
Students will learn how to categorize ions and write the chemical formula of ionic compounds. This packet includes a video, a table of cations, a table of anions, and the steps needed to determine the chemical formula of an ionic compound.

1) Look at a table to identify the charge sign, charge magnitude, and atomic symbol of each ion.
2) Determine the relative abundance of each ion:
   a) Identify the smallest number that is a multiple of both charge magnitudes. This is the least common multiple (LCM).
   b) Calculate the subscript of the cation by dividing the LCM by the charge magnitude of the cation.
   c) Calculate the subscript of the anion by dividing the LCM by the charge magnitude of the anion.
3) Write the symbol of the cation first and the anion second. Include subscripts, but only if the subscript is greater than one.
4) Put parentheses around the symbol of a polyatomic ion in order to distinguish the ion from its subscript.
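The steps above are mechanical enough to express directly in code. Here is a minimal Python sketch of steps 2 through 4; the function name and the polyatomic flag are ad hoc choices for illustration.

```python
from math import gcd

def ionic_formula(cation, cation_charge, anion, anion_charge, anion_is_polyatomic=False):
    m_cat, m_an = abs(cation_charge), abs(anion_charge)
    lcm = m_cat * m_an // gcd(m_cat, m_an)    # step 2a: least common multiple
    sub_cation = lcm // m_cat                 # step 2b: cation subscript
    sub_anion = lcm // m_an                   # step 2c: anion subscript

    def fmt(symbol, sub, polyatomic):
        if sub == 1:
            return symbol                     # step 3: omit a subscript of one
        if polyatomic:
            return f"({symbol}){sub}"         # step 4: parentheses for polyatomic ions
        return f"{symbol}{sub}"

    # Step 3: cation first, anion second.
    return fmt(cation, sub_cation, False) + fmt(anion, sub_anion, anion_is_polyatomic)

print(ionic_formula("Ca", +2, "Cl", -1))                             # CaCl2
print(ionic_formula("Al", +3, "SO4", -2, anion_is_polyatomic=True))  # Al2(SO4)3
```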
Definition of phosphene
: a luminous impression due to excitation of the retina

Did You Know? Phosphenes are the luminous floating stars, zigzags, swirls, spirals, squiggles, and other shapes that you see when closing your eyes tight and pressing them with your fingers. Basically, these phenomena occur when the cells of the retina are stimulated by rubbing or after a forceful sneeze, cough, or blow to the head. The word phosphene comes from the Greek words phōs (light) and phainein (to show). Phainein is also a contributing element in such words as diaphanous, emphasis, epiphany, and phenomenon, among others.

Origin and Etymology of phosphene: International Scientific Vocabulary phos- + Greek phainein to show — more at fancy. First Known Use: circa 1860

Medical Definition of phosphene
: a luminous impression that occurs when the retina undergoes nonluminous stimulation (as by pressure on the eyeball when the lid is closed)
Ecosystems containing several species are more productive than individual species on their own. Using data from more than 400 published experiments, an international research team has found overwhelming evidence that biodiversity in the plant kingdom is very efficient in assimilating nutrients and solar energy, resulting in greater production of biomass. "Plant communities are like a soccer team. To win championships, you need a star striker who can score goals, but you also need a cast of supporting players who can pass, defend and keep goal. Together, the star players and supporting cast make a highly efficient team," says Lars Gamfeldt of the Department of Marine Ecology at the University of Gothenburg. Gamfeldt is part of an international research team led by Brad Cardinale (University of Michigan, USA) which, in a special issue on biodiversity of the scientific journal American Journal of Botany, presents a study on the significance of the biodiversity of plants and algae, which form the base of the food chain. The research team based its study on the question of whether ecosystems can maintain important functions, such as production of biomass and conversion of nutrients, when biodiversity is depleted and we lose species. In their quest for answers, they examined hundreds of published studies on everything from single-celled algae to trees. Using data from more than 400 published experiments, the researchers found overwhelming evidence that the net effect of having fewer species in an ecosystem is a reduced quantity of plant biomass. There are two principal explanations for why species-rich plant communities may be more effective and productive. One is that they have a higher probability of including "super-species," that is to say, species that are highly productive and effective in regulating ecological processes. The other is that different species often have characteristics that complement one another. The fact that there is a "division of labour" among different plant species in nature makes it possible for species-rich communities to be more productive. The researchers also note that as a result of climate change and other human impact we are now losing species at a rapid rate. This means that we need to prioritise what we want to protect and preserve, in order to maintain the goods and services humans depend on. "Nearly every organism on this planet depends on plants for their survival. If species extinction compromises the processes by which plants grow, then it degrades one of the key features required to sustain life on Earth," comments the principal author of the article, Brad Cardinale.

Gamfeldt is affiliated with both the Department of Marine Ecology at the University of Gothenburg and the Department of Ecology at the Swedish University of Agricultural Sciences.
Coal dust and float coal dust, produced during normal mining operations in underground coal mines, are carried downstream from the point of origin by the ventilating air and deposited on the surfaces of the mine entry. In an explosion, this dust is lifted from the surfaces by aerodynamic disturbances and, if present in sufficient quantity, can continue to propagate the explosion. To prevent the deposited coal dust from contributing to an explosion, it must be inerted, typically by spreading pulverized limestone, i.e., rock dust, over the coal dust surface.

To facilitate the dusting operation, the National Institute for Occupational Safety and Health (NIOSH), Pittsburgh Research Laboratory (PRL), developed an automated system that continuously monitors the accumulation of coal dust. This system could activate a rock-dusting machine that disperses rock dust into the ventilation air when dangerous deposits accumulate, and deactivate the machine once sufficient inert material has been deposited on top of the coal dust. The system consists of a microprocessor-controlled optical float dust deposition meter. This device measures the light intensity reflected from a deposited layer of dust, using a standard cap lamp as a fixed-position light source. From the reflected light signal, the microprocessor determines the hazard level of the deposited layer and performs the appropriate action.
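The monitoring loop described above reduces to a reflectance-driven on/off decision. The sketch below illustrates one plausible reading of that logic; the threshold values, simulated readings, and function names are assumptions for illustration, not the actual NIOSH/PRL implementation.

```python
# Minimal sketch of a reflectance-driven rock-duster controller, using
# hypothetical thresholds -- not the actual NIOSH/PRL firmware.

COAL_HAZARD_REFLECTANCE = 0.30  # assumed: below this, the dark coal-dust layer dominates
INERT_SAFE_REFLECTANCE = 0.70   # assumed: above this, the pale rock-dust layer dominates

def control_step(reflectance: float, duster_on: bool) -> bool:
    """Decide the rock duster's state for one monitoring cycle.

    The dead band between the two thresholds provides hysteresis, so the
    duster does not chatter on and off around a single set point.
    """
    if reflectance < COAL_HAZARD_REFLECTANCE:
        return True        # hazardous coal-dust layer detected: start dusting
    if reflectance > INERT_SAFE_REFLECTANCE:
        return False       # sufficient inert layer deposited: stop dusting
    return duster_on       # intermediate reading: hold the current state

# Simulated cycle: the surface darkens as coal dust accumulates, then
# lightens again as rock dust is applied over it.
duster_on = False
for reading in [0.65, 0.45, 0.28, 0.40, 0.60, 0.75]:
    duster_on = control_step(reading, duster_on)
    print(f"reflectance={reading:.2f} -> duster {'ON' if duster_on else 'OFF'}")
```

The hysteresis band is the design choice doing the work here: with a single threshold, small fluctuations in reflected light would toggle the duster every cycle, whereas two thresholds let the system commit to a state until the surface has clearly changed.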
Ocean currents have been carrying floating debris into all five of the world's major oceanic gyres for decades. The rotating currents of these so-called "garbage patches" create vortexes of trash, much of it plastic. However, exactly how much plastic is making its way into the world's oceans, and where it originates, has been a mystery — until now.

A new study published in the journal Science quantifies the input of plastic waste from land into the ocean and offers a roadmap for developing ocean-scale solutions to the problem of plastic marine pollution. The research was conducted by a scientific working group at UC Santa Barbara's National Center for Ecological Analysis and Synthesis (NCEAS) with support from the Washington, D.C.-based Ocean Conservancy. To conduct the research, lead author Jenna Jambeck, an environmental engineer at the University of Georgia, coordinated contributions from experts in oceanography, waste management and plastics materials science.

The study found that at least 4.8 million metric tons of plastic waste enter the oceans from land each year, and the figure may be as high as 12.7 million metric tons. That is one to three orders of magnitude greater than the reported mass of plastic floating in the oceans. A metric ton is equivalent to 1,000 kilograms or 2,205 pounds.

"Using the average density of uncompacted plastic waste, eight million metric tons — the midpoint of our estimate — would cover an area 34 times the size of Manhattan ankle-deep in plastic waste," said co-author Roland Geyer, an associate professor at UCSB's Bren School of Environmental Science & Management. "Eight million metric tons is a vast amount of material by any measure. It is how much plastic was produced worldwide in 1961." The arithmetic behind that comparison is sketched at the end of this article.

Previous studies have documented the impact of plastic debris on more than 660 marine species — from the smallest of zooplankton to the largest whales, including fish destined for the seafood market — but none have quantified the worldwide amount entering the ocean from land. "This is the first time people have connected the dots in a quantifiable way," said Jambeck.

According to the study, countries with coastal borders — 192 in all — discharge plastic into the world's oceans, with the largest quantities estimated to come from a relatively small number of middle-income, rapidly developing countries. In fact, the investigators found that the top 20 countries accounted for 83 percent of the mismanaged plastic waste available to enter the ocean. Reducing the amount of this waste by 50 percent, they note, would result in a nearly 40 percent decline in inputs of plastic to the ocean.

"Large-scale removal of plastic marine debris is not going to be cost-effective and quite likely simply unfeasible," said Geyer. "This means that we need to prevent plastic from entering the oceans in the first place through better waste management, more reuse and recycling, better product design and material substitution."

Knowing how much plastic is going into the ocean is just one part of the puzzle. Millions of metric tons reach the oceans each year, yet researchers are finding only between 6,350 and 245,000 metric tons floating on the surface — a mere fraction of the total. This discrepancy is the subject of ongoing research. "Right now, we're mainly measuring plastic that floats," said study co-author Kara Lavender Law, a research professor at the Massachusetts-based Sea Education Association.
“There is a lot of plastic sitting on the bottom of the ocean and on beaches worldwide.”

The NCEAS working group forecasts that the cumulative input of plastic to the oceans could be as high as 155 million metric tons by 2025. However, the planet is not expected to reach global “peak waste” before 2100, according to World Bank calculations. “We’re being overwhelmed by our waste,” Jambeck said.

“The numbers are staggering, but as the group points out, the problem is not insurmountable,” said NCEAS Director Frank Davis, who is also a professor at UCSB’s Bren School. “The researchers suggest achievable solutions that could reverse the alarming trend in plastics being dumped into our oceans.” Among them, according to the study, are waste reduction and “downstream” waste management strategies such as expanded recovery systems and extended producer responsibility. According to the researchers, while infrastructure is being built in developing nations, “industrialized countries can take immediate action by reducing waste and curbing the growth of single-use plastic.”
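Geyer's Manhattan comparison can be reproduced with back-of-the-envelope arithmetic. In the sketch below, only the 8 million metric ton midpoint comes from the article; the density and layer-depth values are assumptions chosen for illustration, since the study's exact inputs are not given here.

```python
# Back-of-the-envelope check of the "34 Manhattans, ankle-deep" comparison.
# Density and depth are illustrative assumptions, not the study's inputs.

mass_kg = 8.0e9          # 8 million metric tons, the estimate's midpoint
density_kg_m3 = 40.0     # assumed bulk density of uncompacted plastic waste
ankle_depth_m = 0.10     # assumed "ankle-deep" layer thickness
manhattan_m2 = 59.1e6    # Manhattan's land area, roughly 59.1 km^2

volume_m3 = mass_kg / density_kg_m3     # 2.0e8 cubic meters of waste
covered_m2 = volume_m3 / ankle_depth_m  # area covered at ankle depth
print(f"~{covered_m2 / manhattan_m2:.0f} Manhattans")  # prints ~34
```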
Riparian setback regulations establish distances from water resources within which building and other soil-disturbing activities are prohibited unless the applicant obtains a variance from the local community. The specific purpose and intent of riparian setbacks are to regulate uses and development within riparian setback areas that would or could impair the ability of riparian areas to reduce flooding and pollutants, stabilize streambanks, prevent streambank erosion, and provide habitat and community character. Riparian setback regulations are recommended as part of a community's stormwater management program for flood control, erosion control, and water quality protection.

Why Riparian Setbacks?
Riparian areas are naturally vegetated lands along rivers and streams. When appropriately sized, these areas can limit streambank erosion, reduce flood flows, filter and settle out pollutants, and protect aquatic and terrestrial habitat. Riparian setbacks are a tool local governments can use to maintain riparian area functions. Communities can establish riparian setbacks through a combination of landowner education, land acquisition, and land use controls on new development. County soil and water conservation districts, land trusts, and other organizations are skilled in assisting communities and landowners with education and acquisition efforts.

CRWP Model Riparian Setbacks Regulation:
CRWP's Model Ordinance for the Establishment of Riparian Setbacks recommends setbacks measured on all watercourses, including ephemeral streams. The riparian setback distances are measured from the ordinary high water mark of a stream (see "Where is the Ordinary High Water Mark of a River?" in the resources below). Setback widths vary from 25 to 300 feet on either side of the stream and are extended to the 100-year FEMA floodplain and to the edge of any wetlands in the riparian corridor; a simple illustrative sketch of this rule follows the resource list below. It is important that communities develop a map of potential riparian setbacks to assist them in implementing the ordinance. Please contact CRWP for assistance in developing and tailoring this map to your community's needs. It is hoped that this model regulation will be helpful to local governments in their efforts to provide effective stormwater management throughout the Chagrin River watershed. Please direct questions regarding the model regulations to CRWP at (440) 975-3870.

Riparian Setback Resources:
- Model Ordinance for the Establishment of Riparian Setbacks
- Where is the Ordinary High Water Mark of a River?
- Why Riparian Setbacks?
- Riparian Setbacks: Why That Width?
- Riparian Setbacks: Technical Information for Decision Makers
- Adoption Process for Best Land-Use Regulations - Fact Sheet
- Riparian & Wetland Setback Model Regulation Adoption Process
- Targeting Best Management Practices and Monitoring Stream Hydrology in the Chagrin Watershed: Analysis of Riparian Corridor Connectivity and Urban Stormwater Infrastructure
- Summary of Riparian & Wetland Setback Regulations in Ohio
- Community Riparian & Wetland Guidance
- Hedonic Analysis of Riparian/Wetland Setbacks
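As a rough illustration of the measurement rule referenced above, the sketch below computes an effective setback as the farthest-reaching of the base width, the 100-year floodplain, and any wetland edge. The max() interpretation and the parameter names are assumptions for illustration; CRWP's model ordinance is the authoritative source.

```python
# Illustrative sketch of the setback-extension rule, assuming a simple
# "farthest boundary wins" interpretation -- consult CRWP's model
# ordinance for the authoritative method.

def effective_setback_ft(base_setback_ft: float,
                         floodplain_edge_ft: float = 0.0,
                         wetland_edge_ft: float = 0.0) -> float:
    """Distance (feet) from the ordinary high water mark within which
    soil-disturbing activity would be restricted.

    The base setback (25-300 ft, depending on the stream) is extended
    to the 100-year FEMA floodplain and to the edge of any riparian
    wetland, whichever reaches farthest.
    """
    return max(base_setback_ft, floodplain_edge_ft, wetland_edge_ft)

# Example: a 75 ft base setback on a stream whose 100-year floodplain
# extends 120 ft from the ordinary high water mark.
print(effective_setback_ft(75, floodplain_edge_ft=120))  # -> 120.0
```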
The eastern cottontail has the widest distribution of any Sylvilagus. It is found from southern Manitoba and Quebec south to Central America and northwestern South America. In the contiguous United States, the eastern cottontail ranges from the East Coast west to the Great Plains. Historically, the eastern cottontail inhabited deserts, swamps and hardwood forests, as well as rainforests and boreal forests. Currently, the eastern cottontail prefers edge environments between woody vegetation and open land. Its range of habitats includes meadows, orchards, farmlands, hedgerows and areas with second-growth shrubs, vines and low deciduous trees. The eastern cottontail occurs sympatrically with many other leporids, including six species of Sylvilagus and six species of Lepus.

Adult eastern cottontails reach a length of 395 to 477 mm. A dense, buffy brown underfur and longer, coarser gray- and black-tipped guard hairs cover the back of the eastern cottontail. Its rump and flanks are gray, and it has a prominent rufous patch on its nape. The ventral surface is white. The eastern cottontail shows the white underside of its short tail when it is running. This rabbit undergoes two molts per year. The spring molt, lasting from mid-April to mid-July, leaves a short, browner summer coat. From mid-September to the end of October, the change to a longer, grayer pelage occurs for winter. The eastern cottontail has four pairs of mammary glands and distinctively large eyes for its size.

A mating pair performs an interesting ritual before copulation, usually after dark. The buck chases the doe until she eventually turns and faces him. She then spars at him with her forepaws. They crouch, facing each other, until one of the pair leaps about 2 feet in the air. This behavior is repeated by both animals before mating. The beginning of reproductive activity in the eastern cottontail is related to the onset of the adult molt. Sexual maturity occurs at around 2 to 3 months of age, and an average of 25% of young are produced by juveniles (Banfield, 1981). Bucks are in breeding condition by mid-February and are active until September. Does are polyestrous, with their first heat occurring in late February. The time of initial reproductive activity varies with latitude and elevation, occurring later at higher latitudes and elevations. The onset of breeding is also controlled by temperature, the availability of succulent vegetation and the change in photoperiod (Chapman et al., 1980).

Does can have anywhere from 1 to 7 litters per year, but average 3 to 4. Gestation is typically between 25 and 28 days. A few days before the birth of her young, the doe prepares a grass- and fur-lined nest in a protected place, usually in a hollow beneath a shrub or a log or in tall grass. Litter size varies from 1 to 12 neonates, with an average of 5. The newborns weigh 25 to 35 g and are altricial: blind and naked. The young grow rapidly, gaining about 2.5 g per day at first. Their eyes open around day 4 or 5, and they can leave the nest after about two weeks. The litter receives minimal care from the mother; the young are nursed only once or twice daily. Weaning occurs between 16 and 22 days, and litter mates become intolerant of each other and disperse at around seven weeks. The doe mates soon after her first litter, and she is often near the end of gestation as the current litter is leaving the nest.
Eastern cottontails are short-lived; most do not survive beyond their third year. Their enemies include hawks, owls, foxes, coyotes, weasels and humans. Eastern cottontails are solitary animals and tend to be intolerant of each other. Their home range depends on terrain and food supply; it is usually between 5 and 8 acres, increasing during the breeding season. Males generally have a larger home range than females.

The eastern cottontail has keen senses of sight, smell and hearing. It is crepuscular and nocturnal, and is active all winter. During daylight hours, the eastern cottontail remains crouched in a hollow under a log or in a thicket or brushpile, where it naps and grooms itself. The cottontail sometimes checks its surroundings by standing on its hind legs with its forepaws tucked next to its chest. Escape methods include freezing, "flushing," and slinking (Chapman et al., 1980). Flushing is a rapid, zig-zag series of bounds to cover; the cottontail is a quick runner and can reach speeds of up to 18 miles per hour. Slinking is moving low to the ground with the ears laid back to avoid detection. Freezing is simply remaining motionless. Vocalizations of the eastern cottontail include distress cries (to startle an enemy and warn others of danger), squeals (during copulation) and grunts (if predators approach a nesting doe and her litter).

The eastern cottontail is a vegetarian; the majority of its diet is complex carbohydrates and cellulose, whose digestion is made possible by caecal fermentation. The cottontail must reingest fecal pellets to reabsorb nutrients from its food after this process. Its diet varies between seasons due to availability. In the summer, green plants are favored: about 50% of the cottontail's intake is grasses, including bluegrass and wild rye, and other summer favorites are wild strawberry, clover and garden vegetables. In the winter, the cottontail subsists on woody plant parts, including the twigs, bark and buds of oak, dogwood, sumac, maple and birch. As the snow accumulates, cottontails gain access to the higher trunk and branches. Feeding activity peaks 2 to 3 hours after dawn and in the hour after sunset.

The eastern cottontail is abundant and edible, making it a prominent game species; it is hunted for sport, meat, and fur. Eastern cottontails cause a great deal of damage in their search for food: they are pests to gardeners and farmers in the summer, and a threat to the orchardist, forester and landscaper in the winter. In addition, humans may contract the bacterial disease tularemia from handling the carcass of an infected cottontail. Eastern cottontails are common throughout their range.

Kimberly Mikita (author), University of Michigan-Ann Arbor.
References:
Baker, R.H. 1983. Michigan Mammals. Michigan State University Press, Michigan.
Banfield, A.W.F. 1981. The Mammals of Canada. University of Toronto Press, Canada.
Birney, E.C. and J.K. Jones, Jr. 1988. Handbook of Mammals of the North-Central States. University of Minnesota Press, Minnesota.
Chapman, J.A., J.G. Hockman and M.M. Ojeda. 1980. Sylvilagus floridanus. Mammalian Species No. 136. The American Society of Mammalogists.
Kurta, A. 1995. Mammals of the Great Lakes Region. The University of Michigan Press, Michigan.
Nowak, R.M. and J.L. Paradiso. 1983. Walker's Mammals of the World. Vol. 1, 4th ed. The Johns Hopkins University Press, Maryland.
In Trains, early readers will learn about the different types of train cars and what they are designed to carry. Vibrant, full-color photos and carefully leveled text will engage emergent readers as they discover how trains are built to carry oil, coal, cars, and more. A labeled diagram helps readers identify different parts of a train, while a picture glossary reinforces new vocabulary. Children can learn more about trains online using our safe search engine that provides relevant, age-appropriate websites. Trains also features reading tips for teachers and parents, a table of contents, and an index. Trains is part of Jump!'s Machines at Work series.
Education is not just about imparting knowledge; it’s about nurturing the potential within every individual, regardless of their abilities or disabilities. Inclusive education is a powerful approach that recognizes the value of diversity in the classroom, creating an environment where all students can learn, grow, and thrive. In this blog, we will explore the significance of inclusive education and how it empowers students of all abilities.

What Is Inclusive Education?
Inclusive education is a philosophy that emphasizes providing quality education for all students, including those with disabilities or special needs, in a mainstream classroom. It’s about ensuring that every student feels welcome, valued, and supported, regardless of their abilities or differences.

The Power of Inclusion:
- Equality and Equity: Inclusive education promotes the principles of equality and equity. It ensures that every student has the same opportunities to access quality education, regardless of their abilities.
- Diverse Learning Styles: Inclusion recognizes that students have diverse learning styles and needs. It celebrates these differences and tailors instruction to accommodate them.
- Social Integration: Inclusive classrooms foster social integration, allowing students to interact with peers from various backgrounds and abilities. This promotes understanding, empathy, and friendships.
- Improved Academic Outcomes: Research shows that inclusive education can lead to better academic outcomes for students with disabilities. They benefit from being in a regular classroom environment and have the opportunity to achieve their full potential.
- Enhanced Life Skills: Inclusive education helps students develop essential life skills, including problem-solving, communication, and teamwork, which are valuable in the real world.

Challenges and Solutions:
- Teacher Training: Teachers need proper training and support to effectively implement inclusive education. Professional development and ongoing education are essential.
- Accessible Resources: Ensuring that classroom materials, technology, and facilities are accessible to all students is crucial. This includes creating accessible digital content and providing assistive devices as needed.
- Individualized Support: Students with disabilities may require individualized support plans to address their specific needs. These should be carefully designed and regularly reviewed.
- Community Involvement: Inclusive education is not solely the responsibility of schools. Communities, families, and policymakers must actively support and promote this philosophy.

Success Stories in Inclusive Education:
- The Story of Sarah: Sarah, a student with autism, struggled in a traditional classroom. However, with the support of her inclusive school, she thrived academically and developed strong social skills, making lifelong friends.
- Javier’s Journey: Javier, who has cerebral palsy, was welcomed into an inclusive classroom. With specialized support and accessible resources, he not only excelled academically but also became a role model for his peers.
- Amanda’s Advocacy: Amanda, a teacher with a visual impairment, teaches in an inclusive classroom. Her presence not only demonstrates the power of inclusion but also inspires her students to embrace diversity and strive for excellence.

In conclusion, inclusive education is not just an approach; it’s a philosophy that celebrates diversity and recognizes the potential within every student.
When we create inclusive classrooms and communities, we empower all individuals, regardless of their abilities, to learn, grow, and contribute to a more inclusive and compassionate world. Inclusive education isn’t just a classroom approach; it’s a path to a brighter and more inclusive future for all.