The Foundation for the Memory of the Shoah was set up by a French government decree in 2000, at a time when awareness of the French State's responsibility in the Holocaust was rising. Its purpose was to transmit and spread knowledge about anti-Semitic persecution and human rights violations during the Second World War. However, its missions quickly broadened to encompass solidarity with Holocaust survivors, preserving the Jewish culture that the Nazis tried to destroy, and raising awareness of other genocides. Since 2015, the Foundation has also backed projects to fight anti-Semitism and foster intercultural dialogue. The outcome of a rare political consensus, this private foundation with public utility status keeps alive a calmer memory not only of the Holocaust but also of other genocides, without setting them in competition with one another. What is the Shoah? Shoah is the Hebrew word for "catastrophe". The term refers specifically to the killing of nearly six million Jews in Europe by Nazi Germany and its collaborators during the Second World War. English-speaking countries more commonly use the word Holocaust, from the Greek for "sacrifice by fire". Source: Shoah Memorial, Paris. 1945-1995: building a memory of the Holocaust The specific nature of the Jews' fate went unacknowledged for years after the Second World War. Holocaust survivors spoke up, but few listened to them. The plight of Resistance members and political deportees cast a long shadow over the memory of the deportation. Moreover, during the postwar period heroes were glorified and Vichy was portrayed as a mere parenthesis in French history in order to foster national reconciliation. Since the 1970s, the combined efforts of witnesses, historians and organizations have gradually made the general public aware of the specific nature of the fate that befell the Jews. In 1978, lawyer Serge Klarsfeld published Le Mémorial de la déportation des Juifs de France, based on the lists of deportees, classified by transport. The 1970s and 1980s saw an upsurge of historical research on the Vichy regime, collaboration and the French State's involvement in the Holocaust. The 1985 release of Claude Lanzmann's film Shoah, made up entirely of testimonies, was a bombshell; its title has entered common parlance as another name for the Holocaust. Meanwhile, the trials of Klaus Barbie (1983-1987) and Maurice Papon (1997-1998), among others, allowed witnesses to testify about the crimes committed against the Jews during the Second World War. These trials still stoke controversy about the French State's responsibility in the Holocaust. 1995: recognition of the French State's responsibility in the Holocaust During the commemorations of the Vélodrome d'Hiver roundup on July 16, 1995, newly elected French president Jacques Chirac publicly acknowledged the French State's responsibility in the Holocaust: Those dark times forever sully our history and are an insult to our past and our traditions. We all know that the occupier's criminal madness was aided and abetted by Frenchmen and the French State. […] Passing on the memory of the Jewish people, the suffering and the camps; testifying again and again; acknowledging the mistakes of the past and the errors committed by the State; and casting light on the dark hours of our history is quite simply a way of upholding a certain idea of man, his freedom and his dignity. It contributes to the struggle against dark forces that are still at work. Fifty years after the war ended, the Vel' d'Hiv' speech marked a major turning point.
President Chirac's solemn statement paved the way for other symbolic gestures. The government stepped up its efforts to encourage recognition of aspects of history that had been missing from the national narrative until then, whether the Vichy regime's role in the persecution of Jews or the rescue actions of the Righteous of France. 1997-2000: the creation of the Fondation pour la Mémoire de la Shoah In that context, in March 1997 Prime Minister Alain Juppé appointed Jean Mattéoli, a former Resistance member and the president of the Economic and Social Council, to lead a fact-finding mission on the spoliation of France's Jews from 1940 to 1944. Historians and qualified figures sat on the Mattéoli Commission, which was tasked with determining the scale and extent of the spoliation, studying postwar restitution measures and making proposals on the disposal of unreturned property. On April 17, 2000, Jean Mattéoli presented Prime Minister Lionel Jospin with the Commission's findings. The report said that postwar restitution was substantial but incomplete. Stressing the significance of memory, it concluded: Of course, the material aspects involving the spoliation of France's Jews and restitution matter, but they are not the main thing. Spoliation was about more than money. It was a form of persecution with extermination as the endpoint. No history will ever convey the fear, humiliation and misery these men, women and children endured every day. Granted, that happens in every war and others also suffered, but not as the result of discriminatory laws and regulations that isolated them from the national community just for being born who they were. What happened during the war is an unprecedented exception. We must all ensure it never happens again. The Mattéoli Commission's recommendations led to the setting up of the Commission for the Compensation of Victims of Spoliation (CIVS), tasked with reviewing individual compensation claims, and of the Foundation for the Memory of the Shoah. The public and private institutions concerned pay the Foundation unclaimed funds resulting from spoliation of every kind. The initial endowment was €393 million. Since 2000: history, education and solidarity The Foundation for the Memory of the Shoah was officially set up by a French government decree on December 26, 2000. Its creation marked a key step in strengthening the memory of the Holocaust in France. Chaired by Simone Veil and, since 2007, by David de Rothschild, it is administered by representatives of the government and major Jewish institutions as well as by qualified figures. This private foundation with public utility status contributes to the development and dissemination of knowledge about anti-Semitic persecution, the victims of that persecution and the conditions in France that allowed most Jews to escape deportation. The Foundation also has an important solidarity component, in particular by funding initiatives that provide those who suffered under anti-Semitic persecution with moral, technical or financial support. The Mattéoli Commission's recommendations also stressed the need to preserve and pass on Jewish culture, entire swathes of which vanished during the Holocaust. Considering reflection on the sources and mechanisms of hatred essential, the Foundation soon expanded the scope of its work to include other genocides and crimes against humanity. A commission tasked specifically with fighting anti-Semitism and promoting intercultural dialogue was set up in 2015.
The Foundation for the Memory of the Shoah has funded nearly 3,000 projects in its 15 years of existence.
The October Revolution, or Bolshevik Revolution, led by Bolshevik leader Vladimir Lenin, took place in Russia in 1917. Lenin returned from exile after tsarist (czarist) rule ended and brought the Bolsheviks to power through a coup against the provisional government of Alexander Kerensky. The Congress of Soviets of 1917 was made up of representatives of local soviets which, in turn, were supposedly elected primarily by workers and peasants in the various local districts; in reality, the Bolsheviks secured many of the delegates through violence and intimidation. In the Congress of Soviets that met at the beginning of November 1917, the Bolshevik party held a majority. This Congress then declared itself to be "the government": that is to say, it claimed sovereignty and declared that sovereignty was no longer possessed by the Kerensky government, which rested on the remnants of the old Russian Duma and had been in power since the February Revolution. The Soviet Congress then proceeded to enact the chief initial measures of the new regime and to elect an executive, the Council of Commissars. Lenin ruled the Soviet state as dictator under a system of Marxism-Leninism, involving iron control of the party and the people, until his death in 1924. After his death, a power struggle between Leon Trotsky and Joseph Stalin ended with Trotsky ostracized and Stalin in power. Stalin would eventually have Trotsky assassinated, and his increasing paranoia was manifested in a bloody series of purges from the mid-1930s onwards. - The confusion over whether to call it the October or the November Revolution arises because at the time Russia still used the Julian calendar, which in 1917 ran 13 days behind the Gregorian calendar used by most other states; for those states the revolution occurred in November, but to the Russians it occurred in October. - Bolshevik, in Russian, means majority, while Menshevik means minority.
Our behaviour system is easy to understand and built around the idea of restorative justice. We understand that sometimes in a large school things can go wrong. Our principle is that if we get it wrong, we put it right. This means accepting the consequence, apologising for our behaviour and rebuilding relationships. There are four main consequences: ten-minute detention, thirty-minute detention, inclusion and exclusion from school. We would normally try to hold detentions on the same day or soon after. If a student is placed in inclusion or excluded, we would expect parents to come in for a meeting with the headteacher. Each behaviour incident is reviewed and sanctions are applied on a case-by-case basis. Examples of behaviour types on each level of the system are shown below:
Various medical tests may help with diagnosis and possibly suggest changes in the intervention or treatment strategy. Hearing: Various tests such as an audiogram and tympanogram can indicate whether a child has a hearing impairment. Audiologists, or hearing specialists, have methods to test the hearing of any individual by measuring responses such as turning their head, blinking, or staring when a sound is presented. Electroencephalogram (EEG): An EEG measures brain waves that can show seizure disorders. In addition, an EEG may indicate tumors or other brain abnormalities. An electroencephalogram is a recording that shows the variations in electrical potential at a number of scalp sites. Inside the brain, neurons produce their own electrical fields, which are measured in units of microvolts. It is thought that an unhealthy brain will show larger changes in electrical potential than a healthy brain. However, in order to observe an unhealthy brain, it must be compared to the same brain when it was healthy. So, for example, to capture the difference between a brain at baseline and one undergoing a seizure, the EEG must last long enough for a seizure to occur; often a video EEG is done over a period of a day or a week. This form of measuring brain activity is noninvasive (it doesn't require any surgical cuts) and relatively inexpensive. The method gives numerical results, and the patterns in those numbers are used to determine whether or not the brain is healthy and which section of the brain is causing problems. Additional tests will likely be needed to make an accurate diagnosis of these conditions. Metabolic Screening: Blood and urine lab tests measure how a child metabolizes food and its impact on growth and development. Some autism spectrum disorders can be treated with special diets. Magnetic Resonance Imaging (MRI): An MRI uses magnetic sensing equipment to create an image of the brain in extremely fine detail. Sometimes children are sedated in order to complete the MRI. Computed Axial Tomography (CAT scan): An X-ray tube rotates around the child taking thousands of exposures that are sent to a computer, where the X-rayed section of the body is reconstructed in great detail. CAT scans are helpful in diagnosing structural problems with the brain. Genetic Testing: Blood tests look for abnormalities in the genes that could cause a developmental disability. Direct observation, interaction, and interview assessments: Information about a child's emotional, social, communication, and cognitive abilities is gathered through child-directed interactions, observations in various situations, and interviews of parents and caregivers. Parents and family members should be actively involved throughout these assessments. What actually occurs during a specific assessment depends on what information parents and evaluators want to know. Functional assessments: These assessments aim to discover why a challenging behavior (such as head banging) is occurring. Based on the premise that challenging behaviors are a way of communicating, functional assessment involves interviews, direct observations, and interactions to determine what a child with autism or a related disability is trying to communicate through their behavior. Once the purpose of the challenging behavior is determined, an alternate, more acceptable means for achieving that purpose can be developed, which helps eliminate the challenging behavior.
Play-based assessments: These assessments involve adult observation in structured and unstructured play situations that provide information about a child's social, emotional, cognitive, and communication development. By determining the child's learning style and interaction pattern through play-based assessments, an individualized treatment plan can be developed.
What is Music? Music is a universal language that embodies one of the highest forms of creativity. A high-quality music education engages and inspires pupils to develop a love of music and their talent as musicians, and so increases their self-confidence, creativity and sense of achievement. As pupils progress, they will develop a critical engagement with music, allowing them to compose, and to listen critically and develop their own personal musical choices. At our school this means providing our pupils not only with the opportunity to learn about and participate in all aspects of music, but also providing them with the platform from which to be creative, to express themselves, find success and share these experiences, talents and abilities through performance. - To perform, listen to, review and evaluate music across a range of historical periods, genres, styles and traditions, including the works of the great composers and musicians; - To learn to sing and to use their voices, to create and compose music on their own and with others, have the opportunity to learn a musical instrument, use technology appropriately and have the opportunity to progress to the next level of musical excellence; - To understand and explore how music is created, produced and communicated, including through the inter-related dimensions: pitch, duration, dynamics, tempo, timbre, texture, structure and appropriate musical notations.
Benefits of Using Video: As educators, our goal, of course, is to get students energized and engaged in hands-on learning experiences, and video is clearly an instructional medium that generates excitement. Using sight and sound, video is the perfect medium for students who are auditory or visual learners. Video taps into emotions that stimulate and enthrall students, and it provides an innovative and effective means for educators to address curricular concepts. Consider the classroom in which students can hear the cry of a nearly extinct species and see the colors and hear the sounds of animals that thrive only in a remote wilderness halfway around the globe. Envision teaching with the voices of the past by introducing young learners to great historians, political figures and famous people who lived centuries ago. Imagine conveying the laws of motion, sound and energy transfer by viewing the launch of the space shuttle on a journey into outer space. Think about how much easier it would be to understand the diverse cultures of people who live in other areas of the world if you could encounter them in their own environments - hearing their songs, observing their rituals or listening to their silence. Video provides another sensory experience that allows concepts to actually be "experienced" and come to life while you guide your students on each adventure. We all know from experience that the more engaged your students are and the more interactive your lesson is, the more your students will enjoy, learn from and retain information from your lessons. It may surprise you to think of video as a means for interactive instruction, but video is a very flexible medium. The ability to stop, start and rewind it can be invaluable. You can stop the video and challenge your students to predict the outcome of a demonstration, or elaborate on, or debate about, a point of historical reference. You can rewind a particular portion of a show to add your own review or view a segment in slow motion to ensure that your students understand a key concept. Furthermore, you can ensure interactivity by replicating activities, workshops, demonstrations and experiments in your classroom environment. Effectively Using Video: Current research reveals that the most effective way to use video is as an enhancement to a lesson or unit of study. Video should be used as a facet of instruction along with any other resource material you have available for teaching a given topic, and you should prepare for the use of a video in the classroom the very same way you would with any other teaching aid. Specific learning objectives should be determined, an instructional sequence should be developed and reinforcement activities planned. And of course, no video should ever be used in the classroom until it has first been previewed by the instructor. There are a lot of excellent videos available, but a video produced for educational purposes - created with the needs of the classroom in mind - will be structured in a way to most effectively meet your needs. There are over 500 Schlessinger Media programs that have been produced specifically for the classroom - they have been correlated to state, regional and national standards, most come with Teacher's Guides, and 3-minute video clips are available online for previewing purposes. There are over 14,000 education titles on our web site, and each program has been carefully reviewed by our experienced and knowledgeable staff to ensure its appropriateness for use in the classroom.
We welcome any additional ideas you may have about using video in the classroom or feedback regarding the resources we can provide to make it efficient and easy to find educational media for an educational setting. Note: For information about Public Performance Rights and the copyright issues concerning using video in the classroom, see the article, Can These Videos Be Shown in a Classroom or Library Setting?
#22 Frames of Reference: Part of a high school course on astronomy, Newtonian mechanics and spaceflight by David P. Stern. This lesson plan supplements: "Frames of Reference: The Basics," section #22: on disk Sframes1.htm, on the web http://www.phy6.org/stargaze/Sframes1.htm; "The Aberration of Starlight," section #22a: on disk Saberr.htm, on the web http://www.phy6.org/stargaze/Saberr.htm; "The Theory of Relativity," section #22b: on disk Srelativ.htm, on the web http://www.phy6.org/stargaze/Srelativ.htm
Goals: The student will learn
Stories and extensions: The story of the aberration of starlight and of Bradley's observation on a boat in the river. About the solar wind and magnetosphere, and how aberration foiled a clever idea of downloading satellite data using a passive laser reflector. Note to the teacher: This lesson is closely related to the one on vectors (section #14 of "Stargazers", lesson plan #23). Some ideas expanded here were already introduced in #14--for instance, the motion of an airplane flying with velocity v1 relative to the air, which itself (because of the blowing wind) has a velocity v2 relative to the ground. In that example, the air and the ground represent two frames of reference moving with respect to each other, and we have already shown that the velocity of the airplane with respect to the ground is the vector sum v1 + v2. Here, however, two additional aspects come into play. One, we are also concerned with accelerations and forces. The cases treated here are the simplest ones, where all velocities are constant in magnitude and direction, so that shifting from one frame to the other adds no new forces or accelerations (that will no longer hold when we come to discuss rotating frames). Two, we study the changes created by the motion of the observer's own frame of reference. Section #22a is optional. It contains interesting stories illustrating the lesson, but can be omitted (and perhaps assigned to some advanced students) if time runs short. It is also possible to teach only the first example, on the aberration of starlight and its explanation by James Bradley. Starting the Lesson The starting paragraphs of Section #22 are quite appropriate for starting the lesson. After that, bring up the questions below, and continue with Section #22a. Questions and tidbits: --What is meant by a "frame of reference"? --Can you give examples of frames of reference? --Surface of the Earth, the Moon or Mars. --A moving elevator, merry-go-round, roller coaster car or other ride. --The frame of the wind carrying a run-away balloon, or of a river carrying a swimmer. --Also, in certain contexts, the frame of the distant stars. We have two frames of reference: A is the inside of an elevator rising with constant velocity u; B is the frame of the building in which the elevator is located. A rider drops a penny inside the elevator. Is the velocity of the penny the same as seen from A and B? In the preceding example, is the acceleration of the penny the same viewed inside the elevator and outside it? You are the passenger in a car driving with velocity u on a rainy night. On the street outside, through the side window of the car, you see raindrops falling. They fall with a constant velocity v (because of air resistance, they no longer accelerate). As you watch them in the light of streetlights, how do they appear to move? What is their apparent velocity w? In what direction do they streak the windows?
Their velocity vector w has a vertical downward component of magnitude v and a horizontal component of magnitude u pointing to the rear: in vector notation, w = v – u = v + (–u). Since v and u are perpendicular to each other, by Pythagoras the magnitude is w = √(v² + u²). Their streaks on the window are in the direction of w, and the angle A between those streaks and the vertical satisfies sin A = u/w, or tan A = u/v. About the Aberration of Starlight How are distances to stars measured by the parallax method? If the directions to C are slightly different when viewed from A and B, then the difference gives the "parallax" angle between AC and BC. Using that angle one can calculate all other properties of the triangle ABC, including the distances AC and BC from Earth to the star. What changes were observed around 1700 in the position of Polaris? How did astronomers know that it was not Polaris that did the moving? --How did James Bradley know that the shift of Polaris was not a parallax effect? --In the end, how did Bradley explain the strange shift in the position of Polaris and other stars? --The aberration of starlight allows us to deduce that the Earth is indeed moving. Doesn't that contradict an earlier claim that absolute motion is undetectable? [Optional further discussion by the teacher: Actually, a systematic shift does exist, and from it we know that the solar system is moving at about 20 km/s towards a point known as the solar apex, near the star Vega. But in principle, it could also be that we are at rest and all those stars are moving in our direction, away from the solar apex. The physical effects would be exactly the same. It is only our logic that tells us it is more likely that our sun is moving, rather than that a large number of distant suns happen to move on parallel tracks.] [Harder poser--perhaps to take home] How do you think a star on the ecliptic would appear to move? Hint: it's not a circle--not even close! About the Aberration of the Solar Wind Why does the solar wind, on average, appear to come not from the Sun but from a direction 4 degrees off the Sun? What do you know about the "Solar Probe" mission? How would instruments aboard the "Solar Probe" detect solar wind particles, even though they are shielded from direct sunlight? About the Theory of Relativity What is the principle of relativity? How does the theory of relativity modify Newtonian mechanics? Why did Newton's laws need to be modified? Don't they already satisfy the principle of relativity as they stand--only accelerations can be distinguished, while a constant velocity changes nothing? What does relativity say about time in two moving frames of reference--especially if their relative velocity is close to the velocity of light? In the late 1930s an unstable particle was discovered, named the muon (originally, "mu-meson"). Muons were fragments of collisions of very fast nuclei, and in the laboratory they decayed radioactively (into an electron and an unseen neutrino) in about 2 millionths of a second (2 microseconds). How far should muons traveling at the speed of light (300,000 km/s) be able to move, on average, before decaying? Muons moving close to the speed of light are produced in the atmosphere by collisions of fast atomic nuclei from space ("cosmic rays") at an altitude of about 12 kilometers. Yet a large fraction of them is still observed at sea level (they form the greater part of the cosmic radiation observed there). If they are so short-lived, how come they are not lost by decaying before reaching the ground?
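Both numerical posers above reduce to a few lines of arithmetic. The sketch below is a minimal illustration in Python; the speeds passed in are made-up example values, and only the formulas (w = v + (–u), the streak angle, and distance = speed × lifetime) come from the text:

```python
import math

def raindrop_streaks(u_car, v_rain):
    """Apparent raindrop velocity in the car's frame.

    In the ground frame the rain falls straight down with speed v_rain;
    the car moves forward with speed u_car. In the car's frame the drops
    also sweep backwards, with apparent velocity w = v + (-u).
    """
    w = math.sqrt(v_rain**2 + u_car**2)              # magnitude, by Pythagoras
    angle = math.degrees(math.atan2(u_car, v_rain))  # tilt from vertical: tan A = u/v
    return w, angle

def naive_muon_range(tau_s=2e-6, c_km_s=300_000):
    """Non-relativistic estimate: distance = speed * laboratory lifetime."""
    return c_km_s * tau_s  # ~0.6 km

w, angle = raindrop_streaks(u_car=20.0, v_rain=8.0)  # example speeds in m/s
print(f"apparent speed {w:.1f} m/s, streaks tilted {angle:.0f} deg from vertical")
print(f"naive muon range: {naive_muon_range():.1f} km")
```

The naive muon range of about 0.6 km, set against the roughly 12 km the muons actually cover, is exactly the discrepancy the lesson wants students to notice: it is resolved by relativistic time dilation.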
[Comment: After relativity was introduced, Newtonian mechanics also became known as "classical mechanics" to distinguish it from "relativistic mechanics." Later still, different modifications to Newton's mechanics were found to be appropriate for atomic dimensions, and these became known as "quantum mechanics." (And in case you wonder: yes, there also exists "relativistic quantum mechanics.")] Author and Curator: Dr. David P. Stern. Mail to Dr. Stern: stargaze("at" symbol)phy6.org. Last updated: 10-24-2004
Communication Considerations A to Z™ What is mainstreaming? At the most basic level, mainstreaming refers to the educational placement of a student alongside his or her hearing peers. In reality, mainstreaming has as many definitions as there are individuals with hearing loss. Ideally, each student's mainstream experience should be as unique and individual as the specific needs of that student. A student can be placed with hearing peers for anywhere from 30 minutes a day to a full-day placement. If a student has good access to the curriculum and is consistently demonstrating understanding and mastery of the curriculum, then he or she is appropriately mainstreamed. What issues are at the forefront of mainstreaming? The Individuals with Disabilities Education Act (IDEA) has had a lasting impact on the practice of mainstreaming. The concept of Least Restrictive Environment (LRE) was introduced with Public Law 94-142 (which later evolved into IDEA). Since President Ford signed PL 94-142 into law in 1975, LRE has become almost synonymous with mainstreaming. Recognizing the cost effectiveness of removing a student from a self-contained special education model and placing them in a typical classroom, many school districts embraced the idea of LRE and interpreted it as a mainstreaming model that would allow children with hearing loss to be educated with their hearing peers. LRE does not necessarily equal a mainstreamed environment. LRE refers to an educational placement; mainstreaming is a demonstration of mastery of the curriculum by the student as he sits in the seat next to his hearing peers. Special education support services and classrooms are expensive for school districts to sustain. It's important to genuinely question the practice of mainstreaming children with hearing loss into typical classrooms regardless of their ability to perform; the issue here is educational placement based on student need versus a significant cost savings for districts. Because this practice adheres to the guidelines of IDEA, it has legal support. In this circumstance, students with hearing loss are not held to the same expectations as their hearing peers and can travel through the education system without becoming appropriately educated, as their instruction is watered down through numerous accommodations that allow them to remain in the typical classroom without having to perform at the same level as the other students in the class. Another mainstreaming issue with enormous consequences is the one-size-fits-all concept. Students often find themselves in a setting that does not provide adequate access to the communication in the classroom. Auditory-oral children are often required to use a sign interpreter for support, and conversely, signing students frequently do not have qualified interpreters to appropriately impart the teacher's instruction. In both cases, the student loses, as the instruction is passed through a faulty filter. What should every parent or professional know about mainstreaming? All children deserve a free and appropriate public education (FAPE). It is commonly thought that a student with hearing loss in a mainstream placement will receive a superior education through exposure to typical peers and the instruction in the typical classroom. Certainly there are students who have thrived in mainstream settings because they were well prepared to enter the mainstream and/or because they received excellent support services while in the typical classroom.
Being in the hearing classroom does not automatically guarantee a superior education. There are some critical issues that parents need to be aware of when their child enters the mainstream: Where can I find more information about mainstreaming? There are a variety of resources available with an equal variety of opinions about the benefits and challenges of mainstreaming a student with hearing loss. Below is a limited list of resources that reflects a diversity of opinions: Ruth F. Mathers, MS Ruth Fouts Mathers is the Campus Director of St. Joseph Institute for the Deaf – Kansas City. She holds a BA in English from the University of Colorado, and an MS in Speech and Hearing with a focus on oral deaf education from Central Institute for the Deaf at Washington University, St. Louis, Missouri. Her teaching license certifies her ability to teach deaf children from birth to 12th grade. Additionally, Mathers has earned interpreter training certification for Professional Sign Language and Oral Interpreting. Her credentials are widely recognized to be unique in their breadth of methodologies and experience with deaf culture. It is this broad perspective of the field of deafness that brought Ruth to the Hands & Voices National Board. Her passion for the field of deaf education stems from growing up with an older brother who is profoundly deaf. * Communication Considerations A to Z™ is a series from Hands & Voices that's designed to help families and the professionals working with them access information and further resources to assist them in raising and educating children who are deaf or hard of hearing. We've recruited some of the best in the business to share their insights on the many diverse considerations that play into communication modes and methods, and so many other variables that are part of informed decision making. We hope you find the time to read them all!
Nasa's 2020 Martian rover will carry a selection of advanced instruments to improve understanding of the geology of the Red Planet and to help future astronauts utilise local resources. Building on the engineering legacy of the current Curiosity rover and its parent mission, the Mars Science Laboratory, the 2020 rover will carry seven scientific devices, including ground-penetrating radar, a unit for experimental production of oxygen from atmospheric carbon dioxide, and a SuperCam for analysing the chemical composition of Martian soil. The selection of instruments, picked from a pool of 58 proposals submitted by researchers and engineers from around the world in January this year, was announced by Nasa on Thursday. "The Mars 2020 rover, with these new advanced scientific instruments, including those from our international partners, holds the promise to unlock more mysteries of Mars' past as revealed in the geological record," said John Grunsfeld, astronaut and associate administrator of Nasa's Science Mission Directorate in Washington. "This mission will further our search for life in the universe and also offer opportunities to advance new capabilities in exploration technology." The development of the instruments is expected to cost about $130m (£77m). One of the main tasks of the 2020 rover will be to select and collect rock and soil samples for a potential future return to Earth. The researchers hope knowledge gained from the mission will help develop strategies for a possible future manned mission to Mars, allowing the astronauts to utilise local natural resources and mitigate possible environmental hazards. "The 2020 rover will help answer questions about the Martian environment that astronauts will face and test technologies they need before landing on, exploring and returning from the Red Planet," said William Gerstenmaier, associate administrator for the Human Exploration and Operations Mission Directorate at Nasa Headquarters in Washington. "Mars has resources needed to help sustain life, which can reduce the amount of supplies that human missions will need to carry. Better understanding the Martian dust and weather will be valuable data for planning human Mars missions. Testing ways to extract these resources and understand the environment will help make the pioneering of Mars feasible." The rover itself will be lowered onto the surface of Mars using a system that was previously proven during the landing of Curiosity. Reusing the technology will make it possible to cut development costs and minimise the risk of failure. The 2020 rover, selected by Nasa in 2012, will be preceded to Mars by the InSight mission – a landing module scheduled for launch in 2016 and designed to probe deep beneath the Martian surface to investigate the planet's interior. In 2018, the European ExoMars rover is due to be dispatched to Mars; however, recent rumours have suggested the mission could be either postponed till 2020 or even cancelled due to cost overruns. The Mars 2020 rover is part of Nasa's Mars Exploration Program, which includes the Opportunity and Curiosity rovers, the Odyssey and Mars Reconnaissance Orbiter spacecraft currently orbiting the planet, and the MAVEN orbiter, which is set to arrive at the Red Planet in September to study the Martian upper atmosphere. Nasa's Mars Exploration Program seeks to characterize and understand Mars as a dynamic system, including its present and past environment, climate cycles, geology and biological potential.
In parallel, Nasa is developing the human spaceflight capabilities needed for future round-trip missions to Mars. The 2020 rover will be built and managed by Nasa's Jet Propulsion Laboratory, which is also in charge of Curiosity's operations.
Help:Using Wikipedia for mathematics self-study Wikipedia provides one of the more prominent resources on the Web for factual information about contemporary mathematics, with over 20,000 articles on mathematical topics. It is natural that many readers use Wikipedia for the purpose of self-study in mathematics and its applications. Some readers will be simultaneously studying mathematics in a more formal way, while others will rely on Wikipedia alone. There are certain points that need to be borne in mind by anyone using Wikipedia for mathematical self-study, in order to make the best use of what is here, perhaps in conjunction with other resources. - Wikipedia is a reference site, not a website directly designed to teach any topic. - Wikipedia may supplement a textbook by explaining key concepts, but it does not replace a textbook. - Wikipedia is organized as hypertext, meaning that the information you require may not be on one page but spread over many pages. - In technical subjects, the material may also be technical: Wikipedia has no restriction on the depth of coverage. The lead section of each article is supposed to give a summary accessible to the general reader. - Wikipedia is a work in progress. Some of our articles are highly polished, while others are in a rougher state. The Wikipedia model relies on volunteers to edit articles, and you're invited to help. All help is welcomed and greatly appreciated. Studying mathematics from a reference source is not ideal. Unless you consult Wikipedia to answer a specific question, it is not reasonable to expect instant results. If you are a student following a school curriculum, give first priority to your textbooks. Try to learn from them first; if you find a particular concept or problem hard to understand or solve, then turn to Wikipedia for that topic. The content on Wikipedia is the cumulative contribution of many people, and its hyperlinks let you explore related topics, so treat Wikipedia as a resource for understanding particular things, not for learning an entire subject. When it comes to solving a particular problem, you will not always find the solution on Wikipedia, so keep other tools at hand as well. Mathematics textbooks are conventionally built up carefully, one chapter at a time, explaining what mathematicians would call the prerequisites before moving to a new topic. For example, you may think you can study Chapter 10 of a book before Chapter 9, but reading a few pages may then show you that you are wrong. Because Wikipedia's pages are not ordered in the same way, it may be less clear what the prerequisites are, and where to find them, if you are struggling with a new concept. There is no quick way around the need for prerequisite knowledge. When King Ptolemy asked for an easier way of learning mathematics, Euclid is famously said to have replied, "there is no royal road to geometry". Some background reading is expected when learning a new mathematical subject, and different readers will have greatly different needs regarding introductory material. Therefore: - Be prepared to look at related pages to establish context; - Follow wikilinks for unfamiliar terms, to orient yourself; - To find additional related topics, look under the "See also" header or use the article's categories listed at the bottom.
The best advice for retaining definitions of mathematical terms is to draw images or write examples that include the definitions. Omissions from the encyclopedia Mathematics is something that is done rather than read. A mathematics textbook will contain many exercises, and doing them is an essential part of learning mathematics. Wikipedia does not include exercises; by design, Wikipedia is an encyclopedic reference, not a textbook. When it comes to more advanced topics, mathematics is developed, and largely hangs together, by means of the large body of quite formal proofs that exist in the mathematical literature. Wikipedia does not attempt to condense all of these proofs into encyclopedic form, for reasons that are discussed at length in another essay. Wikipedia assembles the facts uncovered by mathematical investigation, and the definitions underlying the abstract theories. In common with other mathematical encyclopedias, it omits most proofs. Although learning mathematics involves memorization of the sort of factual knowledge that Wikipedia provides, memorization is not enough to master the field. To become a mathematician, you must acquire the skills of creating proofs and doing calculations for yourself, to internalize the material; therefore, you must go beyond the outline a Wikipedia article can supply. We hope that Wikipedia articles can provide a good starting point for the process, along with a reference for topics you have already learned. Remember that any source may contain errors, so do not put too much trust in a single account. Verify proofs and calculations yourself. Because anyone can edit Wikipedia, you can correct any errors you find; this can be a very powerful learning experience. There are some mathematical concepts for which different authors use different definitions. For example, some authors count zero as a natural number while others do not. These differences can affect the way that mathematical theorems are stated. Therefore, double-check the definitions in each article to see whether they match those you are accustomed to. Ways to use Wikipedia The main way to use Wikipedia is to search for an article on a topic that interests you. Follow the wikilinks to articles that explain any terms you don't understand or want to explore further. In addition: - A well-written Wikipedia article will cite references, which you can use to expand your knowledge further and check that the Wikipedia article is correct. - Talk pages (the "Discussion" tab at the top of article pages) are the best way to raise queries about the content of a particular article. - The mathematics reference desk is useful if you have a question and don't know where to look up the answer. - Explore the category system. - The mathematics portal is a good "way in" to mathematics articles on Wikipedia. If you are in doubt, ask at the mathematics reference desk. No one on Wikipedia is going to do your math homework for you ... but if you ask the right question they might point you to some information that will enable you to do it for yourself. For those engaged in self-study, some of Wikipedia's sister projects may help. These have different and definite purposes:
Instructional Goals and Objectives The social practices that allow the spread of the flu virus can be changed by the successful assimilation of this lesson's information. Therefore, the design of this learning module incorporates a clear and practical method of transferring educational content to the student. A multimodal approach encourages the successful teaching of information by speaking to all levels of learning capability. Bringing in various sources of information by using digital images, videos, animations and internet data sources helps explain disease prevention to as wide a group of people as possible. When people truly understand the significance of their actions, they are most capable of effecting change. Sequence and Organization At the core of this learning module is the need to alter people's social behavior on a large scale. Since people with different cultural and educational backgrounds will be the learners, the module begins with very basic, clear information about flu symptoms in an individual. The lesson moves to how the flu virus affects people directly around the individual. The information continues with a larger frame of reference, showing data on how many people nationally are affected severely by the flu. The lesson continues with an animation about avoiding the flu from the Centers for Disease Control and Prevention (the CDC), followed by a quiz section for self-assessment. The last slide offers a final review of the module's content and a link to find more information about the flu on the Internet. The purpose of this lesson is to instruct active adults on the importance of avoiding the flu virus by following good health hygiene and getting immunized yearly with the flu vaccine. The module's focus is on college students because they come in contact with a multitude of people during their daily lives. It is often the healthier (younger) members of a family who will nurse sick family members, so people in this age group need practical, accessible medical information for themselves and the people they care about. Assessment activities for this course occur when students engage in quiz testing after viewing slides and an animation about avoiding the flu. Criteria for the assessments include the need to measure knowledge and comprehension. Assessment instruments will include testing methods such as a True/False quiz, sequence quiz, matching quiz and multiple-choice quiz. This course will be delivered as a web-based, e-learning experience. Students can watch in groups or engage in individual, asynchronous learning. They will need access to a desktop or mobile device with an internet connection. The student will use a browser to open the Captivate learning module and will read all content online. Slide 1, Intro Music Slide 2, Narration: - There's no substitute for yearly vaccination in protecting the people you love from influenza. - Here's how influenza can hurt your family. - Influenza can make you, your children, or your parents really sick. - Influenza usually comes on suddenly. Symptoms can include high fever, chills, headaches, exhaustion, sore throat, cough, and all-over body aches. - Some people say, "it felt like a truck hit me!" Symptoms can also be mild. Regardless, when influenza strikes your family, the result is lost time from work and school. Slide 3, Narration: - Influenza spreads easily from person to person. - An infected person can spread influenza when they cough, sneeze, or just talk near others.
They can also spread it by touching or sneezing on an object that someone else touches later. An infected person doesn't have to feel sick to be contagious; they can spread influenza to others while they feel well, before their symptoms have even begun. Slide 4, Narration: - Influenza and its complications can be so serious that they can put you, your children, or your parents in the hospital, or lead to death. - Each year, more than 200,000 people are hospitalized in the U.S. from influenza and its complications. - Between 3,000 and 50,000 die, which shows how unpredictable influenza can be. Slide 5, Narration: - Influenza can be a very serious disease for you, your family and friends, but you can all be protected by getting vaccinated. - There's no substitute for yearly vaccination in protecting the people you love from influenza. Either type of influenza vaccine, the shot or the nasal spray, will help keep you safe from a potentially deadly disease. - Get vaccinated every year, and make sure your children and your parents are vaccinated, too. Slide 8, Narration: - Get vaccinated every year! Get your children vaccinated! Be sure your parents get vaccinated, too! - This website from the Centers for Disease Control and Prevention offers more information for protecting your health. Review it now and share it with your loved ones. Wishing you good health! Slide 1 – Student Services: Get Vaccinated: Fight The Flu! Text caption in button: Click To Begin. Slide 2 – Symptoms of Influenza (Flu) – Narration of flu facts as text graphics slide into scene. Slide 3 – The Flu Is Easily Spread – Narration of flu facts as graphics slide in. Slide 4 – The Flu Is Serious – Narration while flu facts are shown in an infographic. Slide 5 – Get Vaccinated Every Year! Prevent Flu! Get a Flu Vaccine and Take Preventive Actions. Informational, 1-minute-50-second video that raises awareness about important influenza (flu) prevention actions. Slide 6 – Fight the Flu Review Question 1. A total of 4 questions are included; more can be added, with a variety of assessment methods. Slide 7 – Fight the Flu Review Quiz Results. Slide 8 – Great Job! Visit the CDC Website for More Facts. Buttons: Visit Website, Exit Course. References: Fight Flu Infographic. Centers for Disease Control and Prevention website. Retrieved from https://www.cdc.gov/flu/freeresources/infographics.htm Centers for Disease Control and Prevention Consumer Information. Retrieved from https://www.cdc.gov/flu/consumer/index.html Treser, M. (2015). Getting to Know ADDIE. eLearning Industry. Retrieved from https://elearningindustry.com/getting-know-addie-analysis Flu Facts. National Foundation for Infectious Diseases. Flu Shots By the Numbers infographic. Healthfeed post, University of Utah Healthcare. Retrieved from https://healthcare.utah.edu/healthfeed/postings/2014/09/091514_cvvisual-flu-shots.php CDC Flu Talk animated informational video (2017). Conestoga College Flu Shot Clinic video footage (2016).
Rotational Doppler shift spotted in twisted light Aug 5, 2013 "Twisted light" has been used by researchers in the UK to develop a new way of measuring the angular velocity of a remote spinning object. The team fired two beams of light carrying orbital angular momentum at a rotating surface and showed that the resulting interference pattern in the reflected light is related to the surface's angular velocity. The researchers hope that the phenomenon can be used to develop systems to carry out a range of practical measurements, from monitoring industrial equipment to calculating rotation rates of astronomical objects. The Doppler shift – a shift in the frequency of waves emitted or reflected by an object moving relative to the observer – is a well-understood phenomenon with numerous uses in science and engineering. These include determining the speed at which distant galaxies are approaching or receding and making it easier for the police to catch speeding motorists. It can also be used to study objects that are rotating, when some of the object is rotating towards the observer and some is rotating away. However, it cannot be used to work out how fast an object is rotating about the axis pointing along the direct line of sight between the object, light source and observer. This latest work was done using beams of light that carry orbital angular momentum. This involves the wavefronts of the light's electric and magnetic fields rotating around the direction of the propagation vector. The fields trace out fusilli-like spirals, and the faster the rotation, the greater the orbital angular momentum. This twisted light is of great interest to those working in the telecommunications industry, and researchers have already shown that orbital angular momentum can be used to boost the amount of information that can be transmitted using light and other electromagnetic radiation. The study was done by Martin Lavery and colleagues at the University of Glasgow, together with researchers at the University of Strathclyde. The team's rotating surface is simple – a piece of aluminium foil stuck to a wheel that is spun by a motor taken from a remote-controlled car. The back of the foil (the matt side) is illuminated by two superposed light beams of the same frequency and intensity but with equal and opposite angular momenta. The researchers found that when the light hits the rotating surface, the two beams are affected slightly differently. This is because although the angular momenta of the two beams are equal and opposite in the laboratory reference frame, the angular momenta relative to the spinning surface are different. The frequency of the scattered beam with orbital angular momentum in the same direction as the surface's rotation is raised slightly (blue-shifted), while the frequency of the beam with angular momentum in the opposite direction is lowered (red-shifted) by the same amount (see figure). When the scattered light is detected, it is therefore a mixture of two slightly different frequencies, which move repeatedly in and out of phase. When the frequencies are in phase, the interference is constructive; when they are out of phase, it is destructive. This results in a regular pulsation in the detected light intensity. From the rate of this pulsation, the researchers can calculate the rotation rate of the spinning disc.
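The published analysis of this experiment gives the shift of each beam as Δf = ℓΩ/2π for a beam with orbital angular momentum mode number ℓ and a surface spinning at angular velocity Ω, so the detected intensity pulsates at twice that rate. A minimal sketch under that assumption (the mode number and spin rate below are illustrative values, not figures from the article):

```python
import math

def beat_frequency(ell, omega_rad_s):
    """Beat frequency (Hz) between reflected beams carrying orbital angular
    momentum +ell and -ell, off a surface spinning at omega_rad_s (rad/s).
    Each beam is shifted by ell*omega/(2*pi), in opposite directions."""
    return 2 * ell * omega_rad_s / (2 * math.pi)

def rotation_rate(ell, f_beat_hz):
    """Invert the relation: recover the surface's angular velocity (rad/s)
    from the measured intensity-pulsation rate."""
    return math.pi * f_beat_hz / ell

omega = 2 * math.pi * 50                    # surface spinning at 50 rev/s
f_beat = beat_frequency(ell=18, omega_rad_s=omega)
print(f"beat: {f_beat:.0f} Hz")             # 1800 Hz for ell = 18
print(f"recovered: {rotation_rate(18, f_beat) / (2 * math.pi):.0f} rev/s")
```

Note that the measurement needs no knowledge of the surface's distance, only the known mode number ℓ and the observed pulsation rate.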
"I think it's quite unexpected and might be surprising that you have this Doppler effect even though there is nothing that is moving closer or farther from the detector," he says. "Of course, you can understand it with hindsight by reasoning about the effect, but without this work you would not expect it to occur." Bo Thidé, an expert in electromagnetic radiation from space at the Swedish Institute of Space Physics at Uppsala University in Sweden, is more sceptical. He says that while the beating between two reflected waves is a new observation, "the whole concept of the rotational Doppler shift is, as the researchers say, not new. It's inherent in the laws of nature and there are many articles that have discussed it theoretically and described how you can perform this experiment." Protecting wind turbines Lavery is keen to explore the possible applications of the technology in engineering, suggesting that, among other things, it could potentially help prevent turbulence from damaging wind turbines. "Up here on the west coast of Scotland, there was a big fire on a turbine last year," he explains. "With this effect we would potentially be able to make a head-on measurement of the scatter coming back off the atmosphere and determine how fast that atmosphere is rotating. You could use that to make a feedback system that can follow the wind turbines and make sure they can cope with the amount of wind that's coming onto them." Marrucci, meanwhile, is looking beyond windmills and believes that the phenomenon should be further investigated to see if it could be used to study spinning astronomical objects such as stars. "I would really like to see an analysis of whether or not there can be an application in astronomical settings, because if something like that should come out, that would be very interesting," he says. The research is published in Science. About the author Tim Wogan is a science writer based in the UK
How far away are stars? Question Date: 2014-10-02
Really far! As you probably know, the closest star to Earth is none other than our friendly sun - and that's almost 93 million miles away! To give you an idea of how far away that is, if you were driving in a car at the same speed you go on a highway, it would take you 177 YEARS to get from here to the sun. The next closest star to us is called Proxima Centauri and that's almost 300,000 times farther away from Earth than the sun is! Because stars are so far away, measuring the distance to them in miles isn't very useful - you always get huge numbers. Instead, astronomers measure distance by how long it would take light (which can travel 186,000 miles every second) to go somewhere. So for example, since it takes light about eight minutes to get to the Earth from the sun, we say that the sun is eight light-minutes away from Earth. Similarly, Proxima Centauri is 4.2 light years away from Earth. So those are the closest stars. What about the ones further away? Well, as a star gets farther and farther away, it gets dimmer, so the farthest stars you can see with your eyes when you look out at the night sky are about 4,000 light years away. That means it takes light 4,000 years to travel from those stars to your eyes! How about even farther? Well, the solar system lives in the Milky Way galaxy, which is a huge blob of stars. The center of the Milky Way galaxy is 27,000 light years from Earth, while the entire galaxy is about 100,000 light years across. But you can only see stars that far away with a telescope. What about even further? Sure! The Milky Way is just one of tons of galaxies - one of the closest galaxies to us is the Andromeda galaxy, which is about 2.5 million light years away. It's so far away that you can't see individual stars in it - you just see a glowing blob. The very farthest galaxies we can see with our fanciest telescopes are about 13 BILLION light-years away - that means that the light that we see from those galaxies left them not very long after the universe was born! So when you look at galaxies that are very far away, you're also looking back in time at what the universe used to look like. It is the closest we've got to a time machine. It varies drastically. The closest star is the sun, which is about 93 million miles away. Farther than that, we use the term "light years" to tell distance. A light year is the distance light can travel in a year. Although it looks instantaneous to us, light actually has a speed, about 300,000,000 meters every second. That's fast enough to get to the moon in about 1.2 seconds. We haven't been able to build anything that can travel nearly as fast as light, but it is a good way to measure distance. Pluto is about 6 hours away by light. The nearest star is four light years away, or about 23,500,000,000,000 miles. The galaxy is about 100,000 light years across, or about 588,000,000,000,000,000 miles (588 quadrillion miles). This sounds ridiculous but is true. That means that if you were to drive a car from one side of the galaxy to the other (assuming someone built that road), at 80 mph it would take 839,000,000,000 years. Since the universe is approximately 13,000,000,000 years old, it would take well over the entire lifetime of the universe to travel that distance by car. The short answer is REALLY REALLY FAR away. The distance between galaxies is even larger, so some stars in other galaxies are millions or billions of light years away. Space is big.
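The figures quoted in these answers are easy to verify. A short sanity check of the arithmetic (all input values taken from the answers above; Python):

```python
SUN_MILES = 93e6       # Earth-sun distance, miles
LIGHT_MPS = 186_000    # speed of light, miles per second
GALAXY_LY = 100_000    # Milky Way diameter, light years

# Driving to the sun at highway speed (60 mph):
hours = SUN_MILES / 60
print(f"{hours / 24 / 365:.0f} years to drive to the sun")   # ~177 years

# One light year in miles:
ly_miles = LIGHT_MPS * 3600 * 24 * 365
print(f"1 light year = {ly_miles:.3g} miles")                # ~5.87e12

# Driving across the galaxy at 80 mph:
years = GALAXY_LY * ly_miles / 80 / 24 / 365
print(f"{years:.3g} years to cross the galaxy")              # ~8.4e11
```

Running this reproduces the 177 years, the roughly 588 quadrillion miles (100,000 × 5.87 trillion), and the 839 billion years quoted above.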
The closest star to Earth is the sun, which is 92,960,000 miles from Earth. The furthest group of stars visible to the naked eye is the Andromeda Galaxy, which is 2.3 million light years away. The furthest stars found by astronomers (as of 2013) were 13.1 billion light years away.

Dunbar, B. (2006, October 3). Amazing Andromeda Galaxy. Retrieved October 3, 2014.
Croswell, K. (2013, October 23). Farthest confirmed galaxy is a prolific star creator. Retrieved October 3, 2014.

There is a wide spectrum of distances that a star might be from Earth. For instance, our solar system's sun, which is just shy of 150 million kilometers away from the Earth, is the closest star to us. After the sun, though, the next closest star is 4.3 light years away, where 1 light year is about 10 trillion kilometers! And the farthest stars that have been measured are around 13.1 billion light years away (around 10^23 kilometers away)!

Some stars are closer than others. The sun, the nearest star, is 150 million kilometers away, or eight light minutes (that is, it takes light from the sun eight minutes to reach the Earth). The nearest star in the sky other than the sun, Proxima Centauri, is four light years away. I don't know how far away the "average" star in the night sky is. There are many, many stars that are far, far away but are too faint in the sky for the human eye to see them.
The Learning Theory Connectivism is an extension of constructivism which takes into account the relevance and importance of computers and the internet to knowledge and learning in the digital age. Computers and handheld smart devices serve to connect learners both with each other and with previously inaccessible sources of information that are now digitally stored. An individual can seek out and learn something new on their own and at their own pace as long as they are connected to the internet. As they learn, they can share what they know and add it to the digital knowledge base. As more and more people do the same, the rate of knowledge creation skyrockets. The ability to know where to find necessary, relevant information becomes an increasingly important skill to have. With connectivism, what we currently know is of less importance than when we know it, as the necessary knowledge of yesterday becomes the obsolete knowledge of today. As such, our capacity to learn is of greater importance than our ability to hold any one piece of information. Massive Open Online Courses (MOOCs), of which I would argue online ABQ Senior Biology is one, are a great example of connectivism in action. One Thing I Would Do I would set up opportunities for my students to engage meaningfully in their biology learning via social media, either internally via Google Classroom or externally via Twitter chats and student blogs. I would also seek out opportunities such as a Mystery Skype with another biology class and/or Google Hangouts with experts in fields related to our units of study. Note: My students have tried all of these activities except for Mystery Skype, even taking part in a Google Hangout with Canopy Meg of the California Academy of Sciences! Why I Would Do This I think this would make a difference because these types of connections clearly illustrate that biology is not just a subject you learn in school. Connections to the outside world, either with another class of biology students, or with an expert, provide real life meaning to the activities taking place in the classroom. These types of connections also serve to breathe a diversity of understandings into the classroom, allowing for novel perspectives to be explored. Education 2020. (2017). Connectivism. Retrieved April 17, 2017, from http://education-2020.wikispaces.com/Connectivism Siemens, G. (2005). Connectivism: A learning theory for the digital age. International Journal of Instructional Technology and Distance Learning, 2(1), 3-10. Retrieved April 17, 2017 from http://184.108.40.206/mediawiki/resources/2/2005_siemens_Connectivism_A_LearningTheoryForTheDigitalAge.pdf
One point perspective drawing: an environmental design sketch of a cityscape using the one point perspective technique. Here is one of many environmental quick sketches for the upcoming feature tutorial. Using one point perspective: one vanishing point is typically used for roads, railroad tracks, or buildings viewed so that the front is directly facing the viewer. Any objects that are made up of lines either directly parallel with the viewer's line of sight or directly perpendicular to it (the railroad ties) can be represented with one-point perspective. One-point perspective exists when the picture plane is parallel to two axes of a rectilinear (or Cartesian) scene - a scene which is composed entirely of linear elements that intersect only at right angles. If one axis is parallel with the picture plane, then all elements are either parallel to the picture plane (either horizontally or vertically) or perpendicular to it. All elements that are parallel to the picture plane are drawn as parallel lines. All elements that are perpendicular to the picture plane converge at a single point (a vanishing point) on the horizon. Here is an environmental design using one point perspective. One point perspective construction lines.
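The convergence described above is easy to verify numerically. Here is a minimal sketch (my own illustration, not from the tutorial) of the standard pinhole projection behind one-point perspective: the picture plane sits at distance d in front of the viewer, and a 3-D point projects to x' = d·x/z, y' = d·y/z.

```python
# One-point perspective as a pinhole projection: the picture plane is
# at distance d from the eye, and a 3-D point (x, y, z) projects to
# (d*x/z, d*y/z) on that plane.

def project(point, d=1.0):
    x, y, z = point
    return (d * x / z, d * y / z)

# Two parallel rails receding along the viewer's line of sight (+z).
depths = [2, 4, 8, 16, 64, 256]
left_rail  = [(-1.0, -1.0, z) for z in depths]
right_rail = [( 1.0, -1.0, z) for z in depths]

for p in left_rail + right_rail:
    print(project(p))
# As z grows, both rails' projections approach (0, 0): the single
# vanishing point where all lines perpendicular to the picture plane meet.
```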
From time to time, explosions take place on the surface of the Sun, creating what are called coronal mass ejections (CMEs). This can happen when the magnetic field surrounding the Sun becomes kinked and the ensuing tension snaps. CMEs contain enormous amounts of charged particles, and the solar wind emanating from our Sun can carry these particles into the depths of our solar system at speeds exceeding 7 million miles per hour. At such speeds, a fast CME can reach the Earth less than a day after erupting from the Sun. The Earth's magnetic field protects us from most of the Sun's charged particles during a CME, but scientists still warn that a powerful CME could trigger geomagnetic storms in our magnetosphere strong enough to knock out satellite systems and vast amounts of electronics on a global scale. The last time a CME of that magnitude struck the Earth head-on was back in 1859, in what is now known as the Carrington Event. While it doesn't happen very often, that's not to say that it won't ever happen again. Scientists continuously monitor the Sun for warning signs, which is imperative considering just how much the world depends on technology these days.
Agriculture has traditionally driven the Afghan economy, accounting for approximately 50 percent of GDP before the Soviet invasion in 1979. Nevertheless, the agricultural sector has never produced at full capacity. Before the invasion, only 30 percent of the total arable land of 15 million hectares was cultivated. At that time the main exports were sugarcane, sugar beets, fruit, nuts, vegetables, animal skins (Qaraqul) and wool. However, the continuing war reduced production significantly. Soviet troops planted land mines all over the country, rendering large areas of land useless and forcing large sections of the population to become refugees. The resulting cut in production caused massive food shortages. Kabul University produced a report in 1988 which found that agricultural output was 45 percent below the 1978 level. The UNDP estimated that in 1992 only 3.2 million hectares of land were cultivated, of which only 1.5 million hectares were irrigated. In 2001, the principal food crops were corn, rice, barley, wheat, vegetables, fruits, and nuts. In Afghanistan, industry is also based on agriculture and raw materials. The major industrial crops are cotton, tobacco, castor beans, and sugar beets. Sheep farming is also extremely valuable. The major sheep product exports are wool and sheepskins, especially the Qaraqul skins. In 2000, Afghanistan experienced its worst recorded food crisis because of a very severe drought. Such low levels of rainfall had not been seen in the country since the 1950s. The water used to irrigate the land comes from melting snow, and in 2000 the country experienced very little snowfall. The southern parts of the country were badly affected, and farmlands produced 40 percent of their expected yields. Half of the wells in the country dried up during the drought, and the lake feeding the Arghandab dam dried up for the first time since 1952. The barley crops were destroyed and the wheat crops were almost wiped out. In the middle of 2000, the drought's consequences were felt in Kabul, as more and more displaced people migrated to the capital. The prices of staple foods also increased in different parts of the country because demand was much higher than supply. For instance, in Kabul, a family of 7 can earn US$1.14 a day if the head of the family is lucky enough to find employment, whereas a loaf of bread costs US$0.63, roughly half the family's daily income. A large segment of the Afghan population depends on food imported from abroad or distributed by aid organizations. The civil strife and drought increased the country's food import requirements to a record 2.3 million metric tons in 2000/2001, according to the UN World Food Programme. Much of the needed imports come from the international community and the rest from Pakistan. The disruption to the flow of this international aid caused by the 2001 war between U.S.-led forces and the Taliban has threatened widespread famine and starvation for much of the Afghan population. The number of livestock was greatly reduced during the years of war. In 1970, the total livestock population was estimated at 22 million sheep, 3.7 million cattle, 3.2 million goats, and 500,000 horses. According to a survey carried out in 1988, the number of cattle had declined by 55 percent, sheep and goats by 65 percent, and the number of oxen used to plow the fields was down by 30 percent. Much of the livestock is malnourished and diseased.
In Afghanistan, agriculture provides the main source of household income and is the primary means of food security for 70 percent of the population. These people face incredible obstacles: soaring prices for food, seeds and other supplies; outdated technology; unfavourable or limited access to markets and financial services; and poor soil and water resource management.
Unlike human royalty, a species of octopus that thrives in frigid Antarctic waters has actual blue blood, and scientists think they've figured out its advantage: The key is a blue-hued protein called hemocyanin (which Phys.org notes is comparable to hemoglobin in vertebrates, and which distributes oxygen throughout the body) that makes it not only a literal blue-blooded creature but also uniquely equipped to survive extreme temperatures on both ends of the spectrum, according to the journal Frontiers in Zoology. Researchers found hemocyanin is efficient across a range of temperatures and, as one puts it, undergoes "functional changes to improve the supply of oxygen to tissue at sub-zero temperatures." Studying three octopus species, including two from warmer climates for comparison, the researchers found that all pump hemolymph, a blood-like fluid that contains high levels of hemocyanin, reports Discovery. The octopus in Antarctica, however, has as much as 40% more hemocyanin than the other two species, which puts it among the highest levels ever reported. And because its hemocyanin transports oxygen between the animal's gills and tissue even more efficiently when temperatures are above freezing, it's able to tolerate the warmer temperatures associated with climate change far better than other animals in the area, not to mention hang out in relatively warm shallow waters, unlike most octopuses. (The octopus also boasts the longest-known brooding period of any animal on the planet.)
A mosaic of sea ice shifted across the Bering Sea west of Alaska on February 5, 2008. On either side of the Bering Strait (top center), the land was blanketed with snow that highlighted the mountainous terrain. Left of center, the twisting branches of the Yukon River Delta make a gray-brown outline shaped like a tree. In the northern part of the image, the sea ice is composed of a mixture of large and small blocks, while along the southern margin, it becomes so fine that it looks like foam from this distance. South of the ice, the sea appears greenish. This color is a sign of life. While the vegetation on land is dormant for the winter, the ocean still blooms with an abundance of microscopic, plant-like organisms called phytoplankton. This image was captured by the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's Aqua satellite. NASA image by Jeff Schmaltz, MODIS Rapid Response Team, Goddard Space Flight Center. Caption by Rebecca Lindsey.
The Russian Revolution was a direct product of the First World War. As you know, that war killed an entire generation, devastated Europe's economies, and spread misery across the Continent. The political fallout in Eastern and Central Europe was revolution in Germany, Austria, Ottoman Turkey, and Russia. What we call the Russian Revolution was actually two revolutions. The first one began in March of 1917 and resulted in the Tsar's abdication. The second started in November and brought Russia's Communists to power. The first Russian Revolution began on March 8, 1917, when the Petrograd military garrison joined food riots that had broken out across the city. Without military support, Tsar Nicholas II abdicated on March 15 (March 2 by the Julian calendar then in use in Russia), and more than 300 years of Romanov rule in Russia ended. Russia had broken with its autocratic past, or so it seemed. A power struggle began immediately, as rival political bodies fought for control over the government. On the one hand, the Russian Duma, a representative body that had come into being through a previous revolution in 1905, appointed the Provisional Government. On the other hand, workers and soldiers in Petrograd had organized themselves into the Petrograd Soviet of Workers' and Soldiers' Deputies, a body of 2,500 members that had been elected by workers and soldiers in Petrograd. In the Duma a group called the Mensheviks emerged in a leading position by July of 1917 and set up a Provisional Government, which promptly instituted a policy of continuing the war against Germany. At that point, the western allies were sending large amounts of aid to Russia, and the government wanted to keep that aid flowing. The Petrograd Soviet soon showed, however, that it had the greater authority. On March 14, the Soviet issued its famous Order Number 1, which directed the military to obey only the orders of the Soviet. The Provisional Government was unable to countermand this order, and the Petrograd Soviet only refrained from declaring itself openly as Russia's real government for fear of provoking a conservative coup. Between March and October the Provisional Government reorganized itself four times. The first government was composed entirely of liberal ministers, with the sole exception of Aleksandr Kerensky. The subsequent governments comprised coalitions of various factions. None of these governments was, however, able to cope with the two major problems that confronted the country. First, peasants, who had always lived on the verge of starvation in Russia, began seizing land without government approval. This put the countryside in a state of chaos. Second, the Russian army was collapsing, making an organized defense against the Germans impossible. The Provisional Government continued to insist, nonetheless, that Russia prosecute the war further. This was a bad strategy, since the war had become increasingly unpopular. When Aleksandr Kerensky became the head of the Provisional Government in July 1917, he first had to put down a coup attempt by the army commander-in-chief Lavr Georgiyevich Kornilov, though he was still unable to halt Russia's slide into political, economic, and military chaos. Kerensky's Socialist Revolutionary Party suffered under the strain, too, as a left-wing splinter group called the Left Socialist Revolutionaries eventually left in protest. While the Provisional Government's power waned, the Soviet's power was increasing.
By September, the Communists, also known as Bolsheviks, and their allies the Left Socialist Revolutionaries had overtaken the Socialist Revolutionaries and Mensheviks in both the Petrograd and Moscow Soviets. The Russian Revolution included two civil wars. One was between the Czarists and the Socialist Revolutionaries. The other was within socialism, between the Mensheviks and Bolsheviks. The two groups were originally part of the same party, called the Russian Social-Democratic Workers' Party. In 1903, however, a split emerged between Vladimir Ilyich Lenin and his followers and another group around a socialist named Yuly Osipovich Tsederbaum, who went by the pseudonym L. Martov. Martov wanted the party to be a mass organization modeled on western European Social Democratic parties. Lenin, however, wanted the party to be a tight-knit group of professional revolutionaries devoted to overthrowing the economic and political system. When Lenin and company gained a majority on the party's central committee, they also gained editorial control over the party's newspaper. This position afforded them the privilege of naming themselves Bolsheviks (those of the majority), while the other side became Mensheviks (those of the minority). The labels stuck, even though the reverse was actually true. On the night of November 6, 1917, the Bolsheviks and the Left Socialist Revolutionaries staged a nearly bloodless coup, occupying government buildings, telegraph stations, and other strategic points. Kerensky's organization of resistance proved futile, and he fled the country. The Second All-Russian Congress of Soviets, which convened in Petrograd at the same time as the coup occurred, approved the formation of a new government that was composed mainly of Bolshevik commissars. Now we need to consider just who these Bolsheviks were. The Bolsheviks were an orthodox Marxist party whose avowed goal was the overthrow of capitalism. Their leader, known today as Vladimir Ilyich Lenin, was born Vladimir Ilyich Ulyanov in 1870 into a comfortable middle-class family. (He took the name Lenin in 1901, as a cover for his revolutionary activities.) His father was a schoolteacher who had risen within the hierarchy to the status of school inspector. His mother was the daughter of a physician and had also received a small inheritance. Lenin was a good student, and it looked for a time as if he would become a classicist. Two events, however, changed the course of his life. The first was the Czarist government's attack on public education. Having grown suspicious of all potential sources of subversion, the government bullied and threatened people like Lenin's father. The second was his brother's execution by the Czarist government for conspiring to assassinate the Czar. In Russia, this was a far worse event than one would expect. Not only had the family lost someone to the Czarist police, but since the family had produced a criminal against the state, it was also stigmatized. Lenin's sister was thus banished to Siberia as a potential source of sedition. In 1887, Lenin was nonetheless able to attend university in Kazan, but he was soon expelled for taking part in illegal associations there, and banished once again. He was eventually allowed to return to Kazan but could not get readmitted to the university. With nothing better to do, Lenin began reading Marx and joined revolutionary Marxist reading groups. By 1889, Lenin had converted to Marxism.
Later that same year, Lenin's family moved to Samara, where he was able to study law. He later moved to St. Petersburg and opened a legal practice, though his revolutionary activities continued on the side. In 1895, Lenin was sentenced to fifteen months in jail for sedition, and after serving out his term he was exiled to Siberia again. In 1900, Lenin left Russia and moved to Munich, where he founded the revolutionary newspaper Iskra, which means "The Spark," and organized a revolutionary political party that would, ultimately, defeat the Mensheviks. With the outbreak of World War I, he moved to neutral Switzerland, where he remained until 1917. In that year the Germans allowed him to cross their territory by train, hoping that he would fatally weaken their enemy's government. Lenin arrived in Petrograd on April 16, 1917, and set to work. Lenin was a dogmatic Marxist. He firmly believed in the overthrow of capitalism, the development of a classless society, the withering away of the state, the temporary dictatorship of the proletariat, and the ultimate spread of communist democracy, promising all of these things before the revolution. Marxist dogma held that revolution would come only through war, though when the Russian Revolution finally did come, it seemed at first that everything would run peacefully. The Communists seized power and then property, with little resistance. The people who had opposed the Communists were given amnesty, and reprisals were discouraged. Soon, however, the situation began to change, as the government became dictatorial. In early November, the Bolsheviks seized control of all Russia's newspapers, leaving only Pravda and Izvestia to publish the news. (Pravda means "Truth," and Izvestia "News." A joke eventually spread among Russians that ran: "In the news there is no truth, and in the truth there is no news.") On November 22 the government authorized house searches without the need for warrants. On December 11 it took over all of Russia's schools. On December 14, the banks were nationalized. On December 21, the government empowered Revolutionary Courts to try enemies of the revolution. On December 24, the government nationalized all factories. On December 29, all bank accounts were frozen and the charging of interest was banned. Thus, in a very short time, the government had taken over the essentials of private life. Homes were no longer safe. The news was controlled. Money and property were now under state control. A series of kangaroo courts then made sure that no one did anything about it. The most important change was, however, the replacement of the Czar's secret police, the Okhrana, by a revolutionary secret police known as the Cheka. At first, the Cheka was limited in scope. It had only 120 agents and during the first six months of the revolution was responsible for 22 deaths. Nonetheless, even this trend was worrying, since under the Czars the government had killed only (!) 17 people annually. By 1919, however, the Cheka was killing 1,000 people per month. Two years later it had 250,000 full-time agents, whereas at its height the Okhrana had only 15,000. In addition, the Cheka became a government unto itself, setting up its own secret courts and penal camps for punishing the state's enemies. (The Cheka's ability to strike fear into the average Russian is evidenced by the word Chekist, a pejorative term used by the populace for anyone who worked for the Soviet Union's internal security forces.)
It is important to understand that Lenin knew about and fully supported this slaughter. His slavish adherence to dogma meant that the bourgeoisie had to be eliminated as a class, lest they impede the revolution. People were, therefore, arrested and shot solely because they belonged to the wrong class. This attitude eventually led to the wholesale execution of the Tsar and his family on July 16, 1918. Nothing, not even pity for the Tsar's children, would stay the hand of revolutionary justice. Lenin's dogmatic desire to kill his enemies was reinforced by the difficult political situation in which he and the revolution found themselves. After the Bolshevik Revolution got under way, Russia rapidly descended into chaos. At one time, eighteen different governments existed in Russia, all claiming sovereignty over the whole country. The result was a massive civil war, particularly between Loyalists, Bolsheviks, and Mensheviks. Lenin responded by declaring war on everyone and essentially turning himself into a Marxist czar. In this sense, the Bolshevik turn to violence was inevitable, as the new government confronted a series of problems on all sides. First, the Germans won the war in the east and imposed a harsh peace treaty on the Russians at Brest-Litovsk. Many people within Russia objected to the treaty's terms, and this peeled support away from the government at a crucial stage. Second, the western allies invaded Russia from all sides. The British and Americans landed troops in the north at Archangel. The French came in from the south. And the Japanese took Vladivostok in the east. The Communist government was in deep trouble. At this moment, however, the Bolsheviks reacted creatively to the many pressures they confronted, including making the prudent decision to end the repression. In 1919, the Bolsheviks suddenly declared the Mensheviks to be legal again. Meanwhile, the Czarist forces engaged in their own repression, shooting Communist sympathizers with abandon, which made them appear worse to the many people who had previously withdrawn their support from the Bolsheviks. By 1921, the Russian Civil War was over, and Leon Trotsky began reshaping the Russian army to defend Mother Russia. He would later pay for his loyalty to the revolution with his life. The allies had no real purpose in Russia other than to prevent the supplies they had sent from falling into the Germans' hands. When the Civil War was over, they left, too. The Civil War's resolution was, therefore, the perfect moment to restart the repression. This was deemed necessary, because even good revolutionaries were turning on the new government. At the Kronstadt naval base, for example, on an island in St. Petersburg's harbor, sailors demanded that Lenin fulfill his previous promises about devolving power to the local level. (Lenin had originally begun the Revolution with the cry, "All power to the Soviets!") Bolshevik armies massacred the sailors, even though they had been at the revolution's center, and the need for repression intensified as the government's policy of "War Communism" took full effect. Under "War Communism" the government took over the economy. It outlawed all unions, the official view being that since the Soviet Union was a workers' state, the government already had the workers' best interests at heart. In addition, the government took over agriculture, going into the countryside and seizing all the food that the peasants produced to give to the workers in the cities.
In a preview of the devastation that Mao Zedong's policies would later wreak on the Chinese economy, "War Communism" caused the Russian economy promptly to collapse. By 1920, St. Petersburg had lost 75% of its population and Moscow 50%, while the industrial labor force shrank by 75% overall. Many workers died in the fighting, others starved to death, and most of the rest simply returned to the land. Thus, industrial production halted, and total manufacturing output fell by 87% from 1913 levels. This was a reversal of a trend toward industrialization that had begun under the Czars. Twenty years of economic progress was destroyed in one fell swoop. The Communists soon realized that they had to adapt or perish. So in 1921, they announced the New Economic Policy, which represented a temporary retreat from full implementation of the Communist program. Lenin, in effect, became a temporary and tactical capitalist, declaring that it was perfectly all right to allow small businesses to be run independently as part of a larger transition. In 1922, the state even reintroduced money, which had been outlawed earlier in the revolution. As for larger operations, the party took over all big industries, because these were crucial to the economy and had too much power to be left out of government hands. Thus, party heads assumed control of factories without having any expertise in industry, and workers were allowed no voice in daily management, since the party was, after all, on their side. The NEP succeeded in stabilizing the Russian economy, and capitalist policies were allowed to persist for a number of years. A fixed tax on income of 20% was instituted, which by historical standards is quite low. Agriculture revived slowly and food production increased, though famine did strike across the country. The famine's worst effects were alleviated, however, by a massive relief effort led by future US president Herbert Hoover. (In later polemics between the United States and the Soviet Union over who started the Cold War, this fact was often forgotten on the Soviet side, while a great deal of attention was given to those allied armies that had landed in Archangel.) Even large factories had to work according to loosely capitalist practices. Companies paid workers wages, and talented managers even got slightly higher pay. It was even legal to fire people who refused to work. Unions also began to appear and gained the right to bargain collectively, though only with privately held companies. The revolutionary potential of these new unions was dampened, however, by the requirement that their leaders be members of the Communist Party. The NEP also sparked the return of public life. Since it was now legal to buy and sell goods for profit, lively local markets and small stores began to appear. Middlemen appeared as well. Known as NEPmen, they played the market by positioning their money in particular products or places. Most striking, perhaps, was that cafés opened in Moscow, a notoriously dour town. And there was even that ultimate signal of capitalist enterprise, interest. Interest was still officially illegal, so the government began issuing bonds that were priced at 95 rubles and could be redeemed for 100 rubles after the passage of a fixed amount of time. Russia was, however, heading for trouble, since capitalism and Communist Party dogma could not coexist over the long term, as Mikhail Gorbachev was to find out much later. As some people began to get rich and independent, a government crackdown became inevitable.
Already in 1921 Lenin publicly denounced free speech, calling it deviationism. In addition, he kept his system of authoritarian control in effect. Lenin had promised during the revolution that the Cheka would be disbanded after the dust had settled, and he kept that promise, more or less, when he disbanded the Cheka and renamed it the GPU in 1922. This organization later became world famous under the initials KGB, the Committee on State Security. The need for the tools of oppression never disappeared while the Soviet state existed. Lenin died in 1924, leaving behind a most unstable political situation. There was no clear procedure for succession, and a battle for power ensued. By 1928, Josef Stalin had managed to maneuver himself into a position of supreme power, and the authoritarian trends that had begun under Lenin reached their fullest development. In 1928, Stalin ended the NEP, because it had never resolved the problem of supplying adequate quantities of grain to the cities. He then imposed on the Russian people a massive forced collectivization program that killed millions. Throughout the late 1920s and early 1930s the government forcibly deprived Russian peasants of their land, leading to famine in the countryside. In addition, by 1931 the state had reimposed its controls on all production and commerce. The Soviet Union was now without any economic freedoms at all. It was also a police state and would remain so until its final dissolution in 1991.
Can You Identify That Nut? More than 3 million people in the United States report being allergic to peanuts, tree nuts, or both. For those with nut allergies, knowing which is which when faced with an assortment can be critical. Unfortunately, new research presented at the 2010 Annual Scientific Meeting of the American College of Allergy, Asthma and Immunology indicates that only about half of the people with a nut allergy can visually identify the nuts they are allergic to. Peanut and Tree Nut Allergy Leading Cause of Food-Related Anaphylaxis. The study, conducted by Todd Hostetler, MD, of Ohio State University, involved 1,005 adults and children, some with food allergies and some without, who were shown a tray with 10 different nuts in 19 various forms - whole (without the shell), in the shell, slivered, and crushed. The participants had to complete a worksheet identifying each nut. Only 21 of the participants (1.9%) were able to identify all 19. Among the adults, most were able to accurately identify 8 or 9 out of the 19. Peanuts were the most easily recognizable, whether or not they were inside the shell. Hazelnuts and pecans were the least recognized. If an adult had a nut allergy, he or she was able to identify slightly more - 13 out of 19. Children under 18 were only able to identify between 4 and 6 of the different nut varieties, but thankfully, 73.3% of parents of allergic children were able to name all of the nuts that affected their child. The study is important, says Dr. Hostetler, because over a five-year period, 55% of the participants with peanut allergy had reactions after accidentally eating the wrong nut. Peanuts and tree nuts are the leading cause of death from food-induced anaphylaxis. Clinicians should include information on how to recognize different nuts, in their different forms, as part of the education upon diagnosis of a peanut or tree nut allergy. Hostetler T, et al. "The ability of adults and children to visually identify peanuts and tree nuts." ACAAI 2010; Abstract 48.
1. Undo the operations outside the absolute value bars, so that the absolute value quantity is isolated on one side of the inequality. 2. Split the problem into two inequalities, as in the diagram below, and solve for the unknown in each. (Notice that the absolute value bars have been dropped.) 3. Graph the resulting solution set. Directions: Please solve for the unknown in the following absolute value inequality. Step 1. Notice that the absolute value quantity is not isolated on one side of the inequality. We must subtract one from both sides and then divide both sides by four to rectify this. Step 2. Now that the problem is consistent with the diagram above, we can split it into two inequalities and solve them. Notice that in the inequality on the right, along with making the number negative, we flipped the inequality symbol.
3x-5>4 Or 3x-5<-4
3x>9 Or 3x<1
x>3 Or x<1/3
Step 3. The graph must contain the numbers that are less than one third or greater than three. Note that the strict less than and greater than symbols require open dots when they are graphed. Directions: Please solve for the unknown in the following absolute value inequalities. If there is no solution, please indicate so by stating "null set". Also, graph the solution sets. 1. |3x|>9 2. |18b|>9 3. |2x+12|>14 4. |10c|-15>15 5. 9|6r|>-63 6. 9|4-2y|-6>30 7. -5|6+2r|>20 8. 7|7x|-7>42 9. 3|4+2g|>21 Directions: Please write 1-2 paragraphs that do the following. 1. Compare and contrast the steps used to solve absolute value inequalities with less than to the steps used to solve absolute value inequalities with greater than. You must remark on the similarities and the differences between the diagrams provided on these web pages. Are the methods more similar to each other or more different from each other? Can you describe a more general process that works for solving both types of inequalities?
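In symbols, the split in Step 2 is an instance of the general rule for "greater than" absolute value inequalities. A compact restatement of the worked example (my notation, not the original page's):

```latex
% General rule: for c > 0,
\[
|u| > c \quad\Longleftrightarrow\quad u > c \ \text{ or } \ u < -c
\]
% Applied to the worked example:
\[
|3x-5| > 4 \;\Longleftrightarrow\; 3x-5 > 4 \ \text{ or } \ 3x-5 < -4
\;\Longleftrightarrow\; x > 3 \ \text{ or } \ x < \tfrac{1}{3}
\]
```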
G or PG? Persuade Me! Students will be able to introduce a topic by stating an opinion and create an organizational structure to provide reasons that support their opinion. Students will utilize linking words to connect their opinion and reasons, and provide a concluding statement. Introduction (8 minutes) - Introduce a fictional scenario that centers around the class watching an animated movie. - Explain to students that they can only watch an animated movie. Give students the definition of animated: a style of film that uses sequential drawings to create motion. - Ask students to brainstorm as many animated movies as they can in five minutes and to make a list on their sheet of paper. Explicit Instruction/Teacher Modeling (12 minutes) - After five minutes, ask students to share a few movie titles from their brainstorm and list them on the board. - Set a time limit of about five minutes, as the list can be lengthy. Include some titles of your own. - Next, after presenting the given set of movie titles, explain to students that the school only allows G-rated movies to be shown. Ask students to identify which movies are rated PG. - As students identify the PG-rated movies, cross them off the board. For movie titles that the class is unsure about, do an online search or designate a student to do one for the class. - Continue identifying PG-rated movies, and the list will shrink until only a few movie titles remain. - Ask students how they feel about the remaining movie titles and why they think these movies are acceptable to show to the class. Guided Practice/Interactive Modeling (10 minutes) - Tell students that you value their opinions. Let them know that they will have an opportunity to voice their opinions and work to persuade others to allow either PG-rated movies or only G-rated movies in school. - Show students the Persuasive Writing organizer and review the sections with them. - Tell students that they will complete the organizer to persuade others to agree with their views on either G or PG-rated movies. - Model for students how to complete the organizer. Show them where to state their opinion and provide an example. - Next, show students how to select linking words on the organizer. Instruct students to circle a linking, or transition, word from a given list before stating their supporting reasons. - Demonstrate for the class how to restate the topic sentence in order to provide a concluding sentence. Independent Working Time (15 minutes) - Allow students to work independently on their organizers. - Remind students to write reasons that would help persuade others to agree with them, and to be prepared to share. - Display the teacher example for students to refer to. - Enrichment: For students who need an extra challenge, ask them to state four reasons to support their opinion and provide personal examples. Students may also utilize their own linking or transition words. - Support: For students who need support, provide sentence starters to assist in writing the opinion and supporting reasons. If necessary, reduce the assignment to state two reasons instead of three. Assessment (10 minutes) - To check for student understanding, monitor the classroom as students are working. - Collect the persuasive writing organizers at the end of the lesson to review. Review and Closing (10 minutes) - Call on student volunteers to share their organizers with the class. - Ask students to share positive feedback about their classmates' persuasive organizers.
- Ask students if they think their opinions have changed after hearing from their classmates. - Assign the Make it Happen worksheet as a writing homework assignment.
What is a direct cost? A direct cost is a cost that is directly traceable to the production of goods and services. Direct costs typically include: - Direct materials used in manufacturing - Direct labour - Direct expenses, e.g. a royalty payment to a patent holder for a specific production process. Direct costs are usually attached to one cost item, such as a department, product or project. Where have you heard about direct costs? Accountants break down the costs of production into direct and indirect costs. Unlike direct costs, indirect costs cannot be pinned to a specific cost item. Often called overheads, they include the cost of maintaining the whole company. What you need to know about direct costs. If a business correctly allocates its costs between direct and indirect costs, it can price its products more accurately, improve its budgeting and be more attractive to investors. Direct product costs are variable because they increase in total as more units of product are made. Many grants from governments and foundations stipulate that funding is allocated in specific amounts to direct and indirect costs, so it is important to know which costs are which. Find out more about direct costs. To understand how tracking direct costs is an important part of an accountant's job, see: http://www.accountingcoach.com/blog/indirect-cost-expense.
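To make the distinction concrete, here is a minimal sketch (hypothetical figures of my own, not from the glossary) of how a product's full cost splits into traceable direct costs and an allocated share of overheads:

```python
# Hypothetical per-unit costing example: direct costs are traceable to
# the product; overheads (indirect costs) can only be allocated.

direct_materials = 12.50   # per unit, traceable
direct_labour    = 8.00    # per unit, traceable
royalty          = 1.50    # direct expense per unit (e.g. patent royalty)

monthly_overheads = 40_000.0   # indirect: rent, admin, utilities...
units_per_month   = 10_000

direct_cost_per_unit = direct_materials + direct_labour + royalty
overhead_per_unit    = monthly_overheads / units_per_month  # an allocation
full_cost_per_unit   = direct_cost_per_unit + overhead_per_unit

print(direct_cost_per_unit)  # 22.0 -> varies in total with output
print(overhead_per_unit)     # 4.0  -> not traceable to any one unit
print(full_cost_per_unit)    # 26.0 -> a sounder basis for pricing
```

Note how the direct portion scales with the number of units made, while the overhead share depends entirely on the allocation base chosen, which is why misallocating the two distorts prices and budgets.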
Choose four of the numbers from 1 to 9 to put in the squares so that the differences between joined squares are odd. If there are 3 squares in the ring, can you place three different numbers in them so that their differences are odd? Try with different numbers of squares around the ring. What do you notice? Place the numbers from 1 to 9 in the squares below so that the difference between joined squares is odd. How many different ways can you do this? In this problem we are looking at sets of parallel sticks that cross each other. What is the least number of crossings you can make? And the greatest? In this 100 square, look at the green square which contains the numbers 2, 3, 12 and 13. What is the sum of the numbers that are diagonally opposite each other? What do you notice? Ahmed has some wooden planks to use for three sides of a rabbit run against the shed. What quadrilaterals would he be able to make with the planks of different lengths? Cong from St Peter's RC School shaded the hexagon grid which helped him look for solutions to this problem. Go to last month's problems to see more solutions. Proof does have a place in Primary mathematics classrooms, we just need to be clear about what we mean by proof at this level. This game is known as Pong hau k'i in China and Ou-moul-ko-no in Korea. Find a friend to play or try the interactive version online.
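The first ring puzzle above rewards a parity argument: a difference is odd exactly when one number is even and the other is odd, so the numbers must alternate even/odd around the ring. A minimal brute-force sketch (my own, not from the site) confirms this and explains what you notice with an odd number of squares:

```python
# Brute-force the ring puzzle: place n distinct numbers from 1-9 around
# a ring so that every pair of joined squares differs by an odd number.
from itertools import permutations

def odd_difference_rings(n):
    """Yield arrangements of n distinct numbers 1-9 whose cyclic
    neighbour differences are all odd."""
    for ring in permutations(range(1, 10), n):
        if all((ring[i] - ring[(i + 1) % n]) % 2 == 1 for i in range(n)):
            yield ring

print(sum(1 for _ in odd_difference_rings(4)))  # many solutions
print(sum(1 for _ in odd_difference_rings(3)))  # 0: an odd-length ring
# cannot alternate even and odd numbers, so odd differences are impossible.
```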
The Sahara typically calls to mind endless sandy landscapes: dunes crossed by camel caravans under a brilliantly blazing sun. It seems as if the magnificent golden dunes were always here. So what mysteries does the Great Desert conceal from humanity? It is difficult to believe, but just six thousand years ago this arid desert was covered by a dense green carpet stretching over more than 9,000,000 square kilometers of land, roughly 3,500,000 square miles.

What We Know So Far

Where today there is almost nothing but hot sand, there were whole villages, and domesticated animals grazed. The meadows and lakes were lush, and the river Tamanrasset carried its waters across the green Sahara to the Atlantic. With its many tributaries, the river was over 500 kilometers (311 miles) long and would have ranked among the world's longest rivers. This vast paleoriver was identified in 2015 from 3-D satellite images, which let researchers trace its smooth channel borders. The old river channels, now buried under the desert, also fed an enormous lake in Central Africa called Lake Mega Chad, of which only the modern Lake Chad remains. Once larger than the Caspian Sea, this ancient lake extended over almost 390,000 square kilometers of the Sahara. If the lake still existed, it would be the world's biggest; NASA's photos clearly demonstrate how enormous it was. The dried lakebed stands in the desert as a silent reminder to mankind, which had a hand in emptying the once deep reservoir.

But what happened? Why did a once green and thriving area become the emptiest place on the planet? Scientists have various explanations. The environmentalist David Wright believes that cattle herding, which eventually spread across a third of the African continent, was a primary driver of the climate change. Sheep, cows, and goats trampled and ate the local vegetation, and the exposed soil reflected more of the sun's energy back into the atmosphere. Drought set in, and the amount of precipitation fell. That was the beginning of the end. The dryness killed off the plants slowly but steadily, and the green Sahara became a desert roughly the size of the United States. However, this doesn't mean only the animals can be held responsible; livestock farming may simply have acted as a catalyst for a process of destruction that had already begun.

Then again, it's not all that simple. In 2018, another team of scientists defended the old herders, suggesting that they instead helped the green Sahara continue to flourish. According to one of the research authors, seasonal livestock movement and selective grazing may have preserved the deteriorating environment for another 500 years.

A third group of researchers blames the Earth's axis. After studying the dust that has settled off the West African coast over the past 240,000 years, they concluded that the climate of the Sahara and Northern Africa shifts from wet to dry roughly every 20,000 years. As the tilt of the Earth's axis shifts, the distribution of sunshine across the seasons changes: the more summer sunlight, the more powerful the monsoon becomes and the higher the precipitation; when there is less summer sunlight, a moist climate turns to drought. If this is so, meadows will bloom and animals will graze in the Sahara once again after around 10,000 years.

The invisible river and the ghost lake are not the only mysteries the magnificent wilderness hides from us. Long ago, an ancient ocean seethed where this harshest of deserts now lies.
The ocean later retreated, the continents separated, platforms shifted, and a whale cemetery was left exposed on the surface of the sand. The spot is now called Wadi El Hitan, "the Valley of Whales" in Arabic. You might be shocked, but modern whales have ancient ancestors: Basilosaurus had sharp teeth and genuine hind limbs, though these limbs were very tiny and useless for walking. It reached 21 meters (69 feet) and was probably among the biggest predators of its time. Think of how surprised scientists were when, at the beginning of the 20th century, the bones of ancient whales, prehistoric snakes, turtles, crocodiles, and manatees were retrieved from the scorching wilderness. Their first thought may well have been that someone had carried the remains there, but the truth proved much more exciting.

Other secrets remain in the Sahara's sands. Not only did ancient predatory whales hunt here; dinosaurs once wandered here too. One of the world's biggest dinosaurs, almost 100,000,000 years old, was discovered here: the bones of a sauropod named Paralititan stromeri.

Unknown Dinosaur Found in the Sand

Judging by its fossils, this giant could exceed a length of 32 meters (105 feet) and weigh up to 60 tonnes. And in 2018, the bones of another previously unknown dinosaur were discovered under the vast desert sands. The discovery of Mansourasaurus hit like a bombshell, because very little information was available about the last days of the dinosaurs that lived in Late Cretaceous Africa. The new dinosaur also turned out to be very similar to its European relatives, so its ancestors apparently crossed from what is now Europe into Africa. But how can that be? Scientists had previously been convinced that once the continents were divided, the ancient giants could not travel between them, but this exceptional discovery shattered accepted theories about dinosaur migration. The newly discovered species could be 8 to 10 meters (26 to 33 feet) long and weigh up to six and a half tonnes, about as much as an African bull elephant. This is considered one of the most important finds of a new dinosaur species, and the incredible discovery of Mansourasaurus will give us much more detail about the fauna of its time.

However, one of the most inexplicable mysteries of the Sahara, and of Africa, lies in Mauritania.

The Eye of the Sahara

The eye of the Sahara, or the Richat structure, is an unusual geological formation easily visible from space: a series of large concentric rings with a diameter of approximately 50 kilometers (31 miles). Since it was spotted in 1965 from the crewed spacecraft Gemini 4, this object has achieved world renown, and the enigmatic eye has haunted scientists ever since as they search for an answer to the key question: what is this structure? If it marks the site of an ancient meteorite fall, where is the crater, and why are there no traces of an impact? If it is the mouth of an extinct volcano that collapsed over millions of years, why is volcanic rock entirely absent? Most scientists are inclined to believe that the structure was carved by erosion, though even this theory has its faults. There are also more fantastic theories involving an alien landing site or the location of Atlantis. Whatever its origin, the enigmatic eye of the Sahara remains one of the most incredible natural phenomena on Earth.
Whom or what the desert is watching over will most likely remain a secret. The desert is also silent about the towns and civilizations buried under its sand, traces of which are registered by satellites and occasionally uncovered by archaeologists. Life here was once lively: fortifications stood, fields waited for harvest, cattle grazed on pastures, wild animals roamed in search of prey, and fish splashed in the rivers. Now the desert is utterly unfriendly to anyone who tries to explore the Sahara's vast expanses, with its sandstorms and sizzling sun. Perhaps the greatest mystery of the Sahara still waits in the hot dunes, hidden from human eyes.
Devise a tour that gets a tourist from their hotel to all the city sights and back to their hotel. This activity is an example of creating an algorithm: a simple sequence of instructions to carry out in order. It shows that once we have written down a solution to the problem in the form of an algorithm, we can run tours in future just by following the steps, without having to work the route out from scratch again. If we write down the algorithm, we can also check that it definitely works by following it step by step on paper. This activity leads on to the Knight's Tour Activity – read that next.
- data representation
- computational thinking
- sequences of instructions
This session comes with linked activity sheets that you can download:
- Activity: Tour Guide [PDF]
- Booklet: Computational Thinking – The Knight's Tour [PDF]
- Video: (also about the Knight's Tour Activity)
This activity can be used alone but naturally combines with the Knight's Tour Activity.
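To make the idea concrete, here is a minimal sketch (with invented sights, not taken from the activity sheets) of a tour written down as an algorithm: a fixed sequence of instructions that anyone, or any computer, can follow to reproduce the tour exactly.

```python
# A tour written down as an algorithm: a plain sequence of
# instructions that is executed strictly in order.

tour = [
    "Leave the hotel",
    "Visit the castle",
    "Visit the museum",
    "Visit the gardens",
    "Return to the hotel",
]

def run_tour(steps):
    """Carry out the tour by following each instruction in order."""
    for number, step in enumerate(steps, start=1):
        print(f"Step {number}: {step}")

run_tour(tour)
```

Because the solution is recorded as data, checking it "works" is just a matter of stepping through the list, exactly as the activity suggests doing on paper.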
In this interactive object, students view an animated depiction of Class 2 levers. A matching quiz completes the activity.
Parallel Circuit Analysis Practice Problems Part 1 – By Patrick Hoppe. In this interactive object, students work parallel circuit analysis problems. They solve for total resistance and current, the current through each resistor, the voltage across each resistor, and the power dissipated.
Electrical Switches & Pushbuttons – By Terry Bartelt. In this learning activity you'll review the operation and schematic symbols of various types of switches and push buttons used in electronic circuits.
In this animated object, learners examine how thermal energy is transferred by conduction, convection, and radiation. A brief quiz completes the activity.
Op Amps 5: Non-Inverting Amplifier
Circuit Resistance – By Todd Van De Hey. The learner will calculate basic circuit resistance.
Measuring Length in the Metric System – By Nabila Dahche. In this learning activity you'll explore how units in the metric system are related to each other and practice locating measurements on a metric line.
An Inductor Opposing a Current Change – Learners read how an inductor opposes a current change when it begins to energize and when it begins to de-energize. A short quiz completes the activity.
Mechanical Reasoning Assessment Examples – By Marie Hechimovich. Learners solve two sample problems for a mechanical reasoning assessment.
What Is Torque? – Learners read a description of torque and study the factors that cause its magnitude to change. A brief quiz completes the activity.
What's Your Point of View? – By Rosie Bunnow. Learners evaluate how well others describe their points of view in a workplace problem-solving situation. They then apply techniques for explaining their points of view as well as for gaining understanding of others' perspectives. This learning object contains audio.
Unit Conversion in the Metric and English Systems: Length – You'll practice converting between units of measure for length in the metric and English measurement systems.
Review for the physical science test on Newton's laws.
CTIS Vocabulary Words 2 – 2nd six weeks vocabulary words (some physical science and some force and motion)
Creative Commons Attribution-NonCommercial 4.0 International License.
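The parallel-circuit practice problems listed above follow directly from two facts: every branch sees the same voltage, and the reciprocals of the branch resistances add. A minimal sketch (my own example values, not from the learning object) of the same calculation:

```python
# Parallel circuit analysis: total resistance, branch currents, and
# power dissipated, for resistors in parallel across one supply.

def parallel_resistance(resistors):
    """Equivalent resistance (ohms) of resistors in parallel."""
    return 1.0 / sum(1.0 / r for r in resistors)

V = 12.0                      # supply voltage (volts)
R = [100.0, 220.0, 470.0]     # branch resistors (ohms)

Rt = parallel_resistance(R)
It = V / Rt                   # total current (amps), by Ohm's law
for r in R:
    i = V / r                 # same voltage appears across every branch
    print(f"{r:6.1f} ohm: I = {i:.4f} A, P = {V * i:.3f} W")
print(f"Rt = {Rt:.2f} ohm, It = {It:.4f} A")
```

Note that the equivalent resistance (about 60 ohms here) is always smaller than the smallest branch resistor, a quick sanity check for any answer.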
When talking about bullying, it is very important for parents (and teachers and kids) to understand what is not bullying. Many times, a single act or behavior is blown out of proportion when it is not actually bullying. Some people think that bullying is any aggressive behavior, and although such behaviors are a source of concern and need attention, it is important to separate them from bullying. As I said in the first chapter of the bullying series, bullying is recurring and deliberate abuse of power. It is not easy for kids to understand the difference between a deliberate act and an accidental one, but it surprises me that many grownups also talk about things people do to them as if they were done intentionally to hurt them. Such a perception is very dangerous, because every minor act of conflict, done without any intention to harm, can escalate and become a big conflict. Much like any communication, whether verbal or not, there are two sides involved. Bullying is a form of communication and depends not only on the giver but also on the receiver. For an incident to be considered bullying, the aggressor must want to hurt someone and the victim must perceive the incident as a deliberate act of abuse. It is very important for the victim to know what is not bullying, so that when things seem hurtful, they will not fall immediately into the category of bullying, because the way to overcome bullying is different from the way to overcome other hurtful acts.

Not bullying list

The incidents on this list are NOT considered bullying:
- Not liking someone – It is very natural that people do not like everyone around them and, as unpleasant as it may be to know someone does not like you, verbal and non-verbal messages of "I don't like you" are not acts of bullying.
- Being excluded – Again, it is very natural for people to gather around a group of friends, and we cannot be friends with everyone, so it is acceptable that when kids have a party or play a game at the playground, they will include their friends and exclude others. It is very important to remind kids that they do the same thing sometimes too and, although exclusion is unpleasant, it is not an act of bullying.
- Accidentally bumping into someone – When people bump into others, the reaction depends mostly on the bumped person's mood. If they have had a bad day, they think it was an act of aggression, but if they are in a good mood, they smile back and attract an apology. This is also relevant in sports, as when kids throwing a ball at each other hit someone on the head. It is very important for teachers and parents to explain that some accidents happen without any bad intention and it is important not to create a big conflict, because it was NOT an act of bullying.
- Making other kids play things a certain way – Again, this is very natural behavior. Wanting things to be done our way is normal and is not an act of bullying. To make sure kids do not consider it aggressive or "bossy" behavior, we need to teach them assertiveness. If your kids come home and complain that Jane is very bossy and always wants things done her way, you can show them that they want the same thing too, and that Jane is miserable because she is not flexible enough and will suffer in life for insisting that things be done her way. Again, although it is not fun or pleasant, this is NOT bullying.
- A single act of telling a joke about someone – Making fun of other people is not fun for them, but the line between having a sense of humor and making fun of someone is very fine. It is important to teach kids (and grownups) that things they say as jokes should also be amusing for the others. If not, they should stop. Unless it happens over and over again and is done deliberately to hurt someone, telling jokes about people is NOT bullying.
- Arguments – Arguments are just heated disagreements between two (or more) people (or groups). It is natural that people have different interests and disagree on many things. Think about it: most of us have disagreements with ourselves, so it is very understandable to have disagreements with others. The argument itself is NOT a form of bullying, although some people turn arguments into bullying because they want to win the argument so much. They use every means to get what they want: they find a weakness in the other person, abuse knowledge or trust they have gained, and use it against the other person. It is very important to distinguish between natural disagreements and bullying during an argument.
- Expression of unpleasant thoughts or feelings regarding others – Again, communication requires at least two players. Although it may be unpleasant to hear what someone thinks about you, it is NOT a form of bullying but a very natural thing. In every communication, there are disagreements and some form of judgment about each other's attitude and behavior. If someone says to you, "I think that was not a nice gesture" or "You insulted me when you said that", this is NOT bullying but an expression of thoughts and feelings.
- Isolated acts of harassment, aggressive behavior, intimidation or meanness – The definition of bullying states that the behavior is repeated. Bullying is a conscious, repeated, hostile, aggressive behavior of an individual or a group abusing their position with the intention to harm others or to gain real or perceived power. Therefore, anything that happens only once is NOT an act of bullying. As a parent, it is important that you pay attention to what your kids are telling you and find out whether things are happening more than once.

All the behaviors above are unpleasant and need to be addressed, but they are not to be treated as bullying. Many times, labeling a single act of aggression as bullying can turn it into bullying just by perceiving it that way.

Until next time, happy parenting,

This post is part of the series Bullying:
- Bullying Facts and Myth
- Bullying Statistics are Scary
- What is NOT Bullying?
- Types of Bullying
- Why Do People Bully?
- Victims of Bullying - Bullying Bystanders - Home of the bully - Home of the bully (2) - Workplace Bullying - Workplace Bullying (2) - How to Help Bullying Victims - How to Help Bullying Victims (2) - How to Help Bullying Victims (3) - How to Help Bullying Victims (4) - How to Help Bullying Bystanders - How to Help Bullying Bystanders (2) - How to Stop Workplace Bullying - How to Stop Workplace Bullying (2) - How Workplace Bullying Bystanders Can Break the Cycle - How Organizations Can Stop Bullying - How Organizations Can Stop Bullying (2) - Bully Parents - How to Stop Parental Bullying - How to Stop Parental Bullying (2) - How to Stop Parental Bullying (3) - How to Stop Parental Bullying (4) - How to Stop Parental Bullying (5) - How to Stop Parental Bullying (6) - How to Stop Parental Bullying (7) - How to Stop Parental Bullying (8) - How to Stop Parental Bullying (9) - How to Stop Parental Bullying (10) - How to Stop Parental Bullying (11) - How to Stop Bullying with Empathy: The Story of Two Apples
We are beginning the 3rd quarter focusing on grammar. We will introduce other topics as the quarter progresses.

GRAMMAR is the study of the way words are used to make sentences. Our class will focus on the eight parts of speech.

NOUN: A noun is a person, place, thing, or idea. We will be learning about and dissecting ten different types of nouns.

The 10 noun types we will be interacting with include the following:
1. Common nouns
2. Proper nouns
3. Material nouns
4. Compound nouns
5. Pronouns
6. Abstract nouns
7. Concrete nouns
8. Collective nouns
9. Uncountable nouns
10. Countable nouns

COMMON NOUNS name any general person, place, thing, or idea.
PROPER NOUNS name specific people, places, or things in a text or story. In Rockford we have the Rockford IceHogs. My dad and I drove in his Chevy truck.
MATERIAL NOUNS name a substance or things that are made from that substance. My mom and I baked a cake and we had to put eggs in the mix.
COMPOUND NOUNS are formed when you put two words together to make one word. We were playing football in gym class, boys vs. girls.
PRONOUNS are words that replace nouns. Jordan and I were playing Call of Duty: World War II and I beat him.
ABSTRACT NOUNS name something you can't touch. My mom and I show love for each other when we do something together.
CONCRETE NOUNS name something you can touch, smell, or see.
COLLECTIVE NOUNS are nouns that can take either a singular or plural form depending on the context of the sentence. A group of people and I went to the football game.
UNCOUNTABLE NOUNS are nouns that can't be counted. I put too much sugar in my Kool-Aid and now it's too sweet.
COUNTABLE NOUNS are nouns that you can count. My family and I are moving, so we need a lot of boxes.
Maintaining strong, healthy bones is essential as we grow older. Daily doses of calcium and vitamin D, along with exercise, can help the body fight against bone loss. When the body lacks these vital nutrients or muscle-building activities, common bone problems often occur:
- Osteoporosis affects approximately ten million Americans. This disease silently weakens the bones, which increases the chances of fractures, and is common in older women.
- Osteogenesis Imperfecta (OI) is a genetic disorder that causes the bones to break very easily. It can cause weak muscles, brittle teeth, a curved spine and hearing loss.
- Paget's Disease causes the bones in your body to grow larger and weaker than normal. Other symptoms include arthritis and hearing loss.
- Osteoarthrosis (aka degenerative joint disorder) is the most common form of arthritis and occurs when cartilage in your joints is worn down over time.

How is osteoporosis diagnosed?
A medical evaluation and tests are necessary to detect the possibility of osteoporosis. Some of these tests include:
- Physical Examination: After age 50, your height should be measured each year without shoes in order to detect any height loss and examine your spine.
- Bone Density Test: This is the only test that can diagnose osteoporosis before a broken bone occurs. This test estimates the density of your bones and your probability of breaking a bone. It is administered by a machine called a DXA (dual-energy X-ray absorptiometry) scanner.
- FRAX Risk Assessment Tool: This tool uses information about your bone density and other risk factors to estimate your 10-year bone fracture risk. It focuses on the major bones such as the spine, hip, forearm, and shoulder.

What can you do if you've already been diagnosed with osteoporosis?
Although there is no cure for osteoporosis, there are ways to slow or stop the progress of the disease, and even to reverse it to some degree. Getting enough calcium and vitamin D is essential to good bone health. There are also medications available that can reduce the risk of broken bones. These medications can slow or stop bone loss and can also rebuild bone to some extent.
Birds in our Lives: Global Legal Protection and Conservation Efforts

High buildings, communications towers, and other related structures have become another threat to birds. It is estimated that approximately 3.5 to 975 million birds a year die because of such structures in North America alone. The main cause of death is glass windows, which kill about 100-900 million birds a year. This is followed by hunting, which kills more than 100 million birds, house cats (100 million), cars and other vehicles (50-100 million), electric power lines (174 million), and pesticides (67 million).

Other conservation methods include captive breeding, otherwise known as ex-situ conservation, in which species are bred in captivity (e.g., in zoos or breeding facilities) and subsequently released into the wild. Captive breeding techniques have been used for a long time to save species from extinction. For instance, the Mauritius kestrel's population was only 4 individuals in 1974, but by 2006 it had grown to 800. California condors were likewise brought into a captive breeding program after 1982, when their global population was just 23 individuals. Since 1992, the population has grown to 410 birds, and in 2008 there were more California condors in the wild than in captivity for the first time since the beginning of the program.
The law of Ireland consists of constitutional, statute and common law. The highest law in the State is the Constitution of Ireland, from which all other law derives its authority. The Republic has a common-law legal system with a written constitution that provides for a parliamentary democracy based on the British parliamentary system, albeit with a popularly elected president, a separation of powers, a developed system of constitutional rights and judicial review of primary legislation.
- Constitutional law
- Statute law
- Secondary legislation
- Common law
- European Union law
- International law

The sources of Irish law reflect Irish history and the various parliaments whose law affected the country down through the ages. Notable omissions from the list include laws passed by the first and second Dáil, and the Brehon Laws, the traditional Celtic laws whose practice was only finally wiped out during the Cromwellian conquest of Ireland. These latter laws are devoid of legal significance and are of historical interest only.

The Irish Constitution was enacted by a popular plebiscite held on 1 July 1937, and came into force on 29 December of the same year. The Constitution is the cornerstone of the Irish legal system and is held to be the source of power exercised by the legislative, judicial and executive branches of government. The Irish Supreme Court and High Court exercise judicial review over all legislation and may strike down laws if they are inconsistent with the constitution. The Constitution can only be amended by referendum. A proposal to amend the Constitution is introduced into Dáil Éireann (the lower house of parliament) as a bill and, if passed by the Dáil and passed or deemed to have been passed by the Senate (the upper house), is put to the people. Only Irish citizens resident in the state may vote. There is no threshold for such referendums, and a simple majority of voters is sufficient for a proposal to be passed. Once a proposal is approved by the people, the President signs the referendum bill into law. As of November 2011, there had been 33 such referendums: 23 were approved by the people and 10 were rejected. The constitution was also amended twice during an initial transitional period of three years following the election of the first President of Ireland, when amendments could be made without recourse to the people.

Modern-day statute law is made by the bicameral national parliament, more commonly known by its Irish name, the Oireachtas. Acts of the Oireachtas are split into sequentially numbered sections and may be cited by using a short title, which gives the act a title roughly based on its subject matter and the year in which it was enacted. While the Oireachtas is bicameral, the upper house, the Senate, has little power, which at most allows the Senate to delay rather than veto legislation, something that has happened only twice since 1937.

Article 50 of the Constitution of Ireland carried over all laws that had been in force in the Irish Free State prior to its coming into force on 29 December 1937. A similar function had been fulfilled by Article 73 of the Constitution of the Irish Free State, which carried over all legislation that had been in force in Southern Ireland. As a result, while the Irish state has been in existence for less than one hundred years, the statute book stretches back in excess of 800 years. By virtue of the Statute Law Revision Act 2007, the oldest Act currently in force in Ireland is the Fairs Act 1204.
The statute law of Ireland includes law passed by the various parliaments that have legislated for Ireland down through the ages: the pre-1801 Parliament of Ireland, the Parliaments of England and of Great Britain, the Parliament of the United Kingdom, the Oireachtas of the Irish Free State and the present-day Oireachtas.

Notwithstanding the declaration in the 1937 constitution that the Oireachtas is to be "the sole and exclusive" legislature, it has long been held that it is permissible for the Oireachtas to delegate its law-making power(s) to other bodies, as long as such delegated legislation does not exceed the "principles and policies" set out in the relevant authorising statute. All instances of delegated legislation in the Republic are known as statutory instruments, although only a small sub-set of these are numbered as statutory instruments and published by the Stationery Office. This latter subset is composed of statutory instruments which are required to be laid before the Oireachtas or which are of general application. In addition, a body of charters, statutory rules and orders and other secondary legislation made prior to independence in 1922 continues to be in force in Ireland insofar as such legislation has not been revoked or otherwise ceased to be in force.

Ireland was the subject of the first extension of England's common law legal system outside England. While in England the creation of the common law was largely the result of the assimilation of existing customary law, in Ireland the common law was imported from England, supplanting the customary law of the Irish. This, however, was a gradual process which went hand-in-hand with English (and later, British) influence in Ireland.

As with any common-law system, the Irish courts are bound by the doctrine of stare decisis to apply clear precedents set by higher courts and courts of co-ordinate jurisdiction. The main exception to this rule is that the Supreme Court has declared itself not to be bound by its own previous decisions. While the doctrine clearly means that the present High Court is bound by decisions of the present Supreme Court, it is not altogether clear whether the decisions of courts which previously performed the function of court of final appeal in Ireland – such as the British House of Lords – bind the present High Court. In Irish Shell v. Elm Motors, Mr. Justice McCarthy doubted that decisions of pre-independence courts bound the courts of the state, stating that "[i]n no sense are our Courts a continuation of, or successors to, the British courts." However, the other two judges on the panel hearing the case declined to express an opinion on the matter, as it had not been argued at the hearing of the appeal. Post-independence judgments of the British courts, and all judgments of the American and Commonwealth courts, are of persuasive value only and do not bind the Irish courts.

European Union law

The European Communities Act 1972, as amended, provides that treaties of the European Union are part of Irish law, along with directly effective measures adopted under those treaties. It also provides that government ministers may adopt statutory instruments to implement European Union law and that, as an exception to the general rule, such statutory instruments have effect as if they were primary legislation.

Ireland is a dualist state and treaties are not part of Irish domestic law unless incorporated by the Oireachtas. An exception to this rule might well be the provision in the constitution which says that "Ireland accepts the generally recognised principles of international law as its rule of conduct in its relations with other States."
However, while this provision has been held to assimilate the doctrine of sovereign immunity into domestic law, the Supreme Court has held that the provision is not capable of conferring rights on individuals. The dualist approach to international law contained in the Irish Constitution allows the state to sign and ratify treaties without incorporating them into domestic law. Thus, while Ireland was one of the first states in Europe to ratify the European Convention on Human Rights, it was one of the last to incorporate the Convention into domestic law; when this was eventually done, the Convention was not directly incorporated but was given indirect, sub-constitutional, interpretative effect. In Crotty v. An Taoiseach, the Irish Supreme Court asserted a power to review the constitutionality of treaties signed by the state, such that the government could be prevented from signing up to international agreements which would be contrary to the constitution. This ruling has resulted in ad hoc amendments to the constitution to permit the state to ratify treaties that might otherwise have been contrary to it.
Helium is a colorless, odorless, gaseous element. Here are some fun helium facts from The Encyclopedia of Trivia.

French astronomer Pierre Jules César Janssen discovered helium in 1868, while analyzing the chromosphere of the sun during a total solar eclipse in Guntur, India. Because helium was found in the Sun before it was found on Earth, its name comes from the Greek word for Sun, helios.

Although helium is the second most abundant element in the universe, most of it in the Earth's atmosphere bleeds off into space.

Helium is one of the lightest and least dense of all the elements. Its low density is what causes balloons filled with the gas to float, buoyed up by the denser surrounding air.

When helium is cooled to almost absolute zero (-460°F or -273°C), the lowest temperature possible, it becomes a superfluid with unusual properties: it flows against gravity and will start running up and over the lip of a glass container.

Today, the US alone produces 75 percent of the world's helium. Nearly half of that total, or roughly 30 percent of the world's helium supply, comes from the U.S. Federal Helium Reserve. That reserve is held in a huge natural underground reservoir near Amarillo, Texas, called the Bush Dome.
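The floating-balloon fact above is easy to put numbers on: net lift is the weight of the displaced air minus the weight of the helium inside. A minimal sketch, assuming typical sea-level densities of roughly 1.2 g/L for air and 0.17 g/L for helium (the balloon volume is invented):

```python
# Rough sketch of balloon lift: weight of displaced air minus weight
# of the helium inside. Densities are approximate sea-level values.
AIR_DENSITY_G_PER_L = 1.2      # assumed value for air
HELIUM_DENSITY_G_PER_L = 0.17  # assumed value for helium

def net_lift_grams(volume_liters: float) -> float:
    """Net buoyant lift, in grams, for a helium-filled volume."""
    return (AIR_DENSITY_G_PER_L - HELIUM_DENSITY_G_PER_L) * volume_liters

# A typical 14-litre party balloon (an assumed size) lifts only about
# 14 grams, which is why lifting anything heavy takes many balloons.
print(f"{net_lift_grams(14):.1f} g of lift")
```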
Parents working on Eleven Plus papers are often faced with the problem of how to help their child decide on the right answer – and hence to solve problems. At one level it is easy to explain the problem in a form of continuous prose and then encourage your child to find the right solution. In other words, you talk and talk – and hope your child is listening. Using words will not always be the easiest way of helping your child to solve a problem – for example:

Steven had a new pair of football boots. He did not like the colour of the laces, so he exchanged his new boots with William for a Game Boy. William and Henry had exchanged the Game Boy for Henry's new trainers. Henry had obtained the new trainers from Arthur.

A different method of helping our Eleven Plus child to solve problems is to encourage trial and error, or discovery. This is where you set your child an Eleven Plus problem and expect (or hope) that your child will work out the answer by 'discovering' the correct answer. Here your Eleven Plus child may have to come up with a vast number of solutions before finding one to suit the problem. Boredom, frustration and an 'unhelpful attitude' may creep in. There is also the risk that an incorrect solution may be presented as the correct answer simply to be able to move on. Children need to be aware of this when trying to look at the alternatives in multiple choice answers.

A third way is to present a series of recipe-type solutions. If your child follows certain steps, then you are reasonably confident that he or she will solve the problem. The great advantage here is that your child may not need to work through every single step of the recipe to be able to emerge with the correct answer.

What is the best ending? A party always has (ice cream, paper hats, jelly, fancy dress, people). Does every party have ice cream? Yes / No. Does every party have paper hats? Yes / No.

A fourth way is to set up a decision table – where you ask a series of questions and then guide your child towards a solution.

Which two letters occur least often in the word DISINTERESTED? Look at the first letter of the word. Is it in the word again? Now look at the second letter – the letter 'I'. Is there another 'I' in the word? We move now to the letter 'S'. Are there any more 'S's in the word?

Neither you nor your child can hope to solve all Eleven Plus problems by following rules. The easier the problem, the more likely that words alone will help. More complex problems may have to use a combination of methods.
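For readers who like to see the decision-table method made mechanical, the DISINTERESTED question is just a letter-frequency count. A minimal sketch of that count (the word comes from the example above; the code itself is illustrative, not part of any Eleven Plus paper):

```python
# Sketch of the frequency count behind the DISINTERESTED question.
from collections import Counter

word = "DISINTERESTED"
counts = Counter(word)

# Sort the letters by how often they appear, rarest first.
rarest = sorted(counts.items(), key=lambda pair: pair[1])
print(rarest[:2])  # the two least frequent letters: N and R
```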
There are several good reasons why Central Auditory Processing Disorder, abbreviated CAPD, is difficult to diagnose correctly. The problem is not that youngsters cannot hear the words and phrases being directed at them, but that their brains lack the ability to interpret, process and grasp those words, which means that conventional hearing tests do not always identify CAPD. Furthermore, children who have CAPD frequently establish coping mechanisms to conceal or mask their condition; they can't really understand the words people are speaking, but they learn to read lips or expressions to pretend to understand.

The same characteristics that make CAPD difficult to diagnose also make it tricky to treat; anyone treating a child with CAPD needs to keep these traits in mind at all times. There is presently no sure-fire cure for CAPD, and treatments for the disorder must, out of necessity, be personalized and adapted to the limits of each CAPD patient. Nevertheless, there are a variety of treatment protocols which are greatly improving children's developmental prognosis. There are three major categories of CAPD treatments – direct treatment, compensatory strategies and environmental change.

Direct Treatment – Direct treatment means the use of one-on-one therapy sessions and computer-assisted learning to capitalize on the brain's natural plasticity, its capacity to reinvent itself, and to establish new ways of processing and thinking. These treatment options commonly consist of the use, at home, in therapy sessions or in the classroom, of Scientific Learning's "Fast ForWord" educational software or the "Simon" game by Hasbro to help learners enhance the discrimination, sequencing and processing of acoustic events. Some direct CAPD therapy uses dichotic training, which trains the brain to hear different sounds in each ear and process the combined inputs correctly. The "Earobics" program by Houghton Mifflin Harcourt is also employed by some professionals to develop phonological awareness.

Compensatory Strategies – The set of methods targeting attention, language improvement, memory and problem-solving skills is called compensatory strategies. These strategies give pupils enhanced everyday-life techniques and skills that enable them to do well at learning, and also teach them to be accountable for their own academic success. Techniques and strategies of this type include exercises in "active listening" and solving word problems.

Environmental Change – Within the category of environmental change, one strategy is minimizing the quantity of background noise via soundproofing and putting in acoustic tiles, wall hangings or curtains, because background noise is proven to make it harder for a person with Central Auditory Processing Disorder to comprehend speech. In some classrooms, the teachers wear a microphone and the CAPD students wear small receivers, so that the instructor's voice is amplified and clarified, making it distinct from other sounds or voices. Even improved lighting may help, because a dimly lit teacher's face is not as easy to scan for cues as a fully lit speaker's face.

Fortunately there are therapy possibilities for kids with CAPD. Having said that, early and accurate diagnosis is crucial to the success of many of these strategies.
Don’t forget that our skilled hearing experts are here to assist you in any way possible and to refer you to other respected area specialists for the very best Central Auditory Processing Disorder diagnostic and therapy options.
Newly discovered nerves in the mouths of massive whales can unfold, nearly doubling in length, and recoil like a bungee cord. These stretchy nerves could explain how the whales are able to eat by ballooning their mouths during dives. Researchers discovered the surprisingly elastic nerves after collecting samples from a commercial whaling station in Iceland. "This discovery was totally unexpected and unlike other nerve structures we've seen in vertebrates, which are of a more fixed length," said Wayne Vogl, a professor of cell and developmental biology at the University of British Columbia in Canada.

Rorqual whales represent the largest group among baleen whales, tipping the scales at 40 to 80 tons. They eat by ballooning their mouths, capturing prey and then slowly filtering water out through their so-called baleen plates. The volume of water brought in by a single gulp can exceed the volume of the whale itself. They're "unrivaled among any vertebrate known alive today," said study co-author Nicholas Pyenson, curator of fossil marine mammals at the Smithsonian's National Museum of Natural History in Washington, D.C. "It's actually a very interesting question, once you get to animals this scale: how do you actually maintain this nervous system?" The results could even shed light on extinct massive animals, like dinosaurs, the researchers said.

But much about rorqual whales remains a mystery. Their remoteness in the ocean's waters makes them extremely difficult to study. Occasionally, scientists will get their hands on whales that have been beached, but by then their tissue has likely already decayed, said Pyenson. Even whales in captivity are less than ideal: more often than not, these whales are unhealthy and don't represent a typical sample. "They live 99 percent of their lives away from the tools of human investigation," Pyenson told Live Science. "So the question is: How are we going to be able to learn more about them?"

Vogl, Pyenson and their colleagues had the unique opportunity to head down to one of the last commercial whaling stations in Iceland. There, they were able to collect tissue samples (less than 24 hours old) from a dozen harpooned whales. "With every carcass that we examine we find something new," Pyenson said. When the researchers first saw the whales' gigantic nerves, no one was sure exactly what he or she was looking at. Because of their stretchiness, the nerves looked like blood vessels at first. In fact, it took years of examining the samples under a microscope before the puzzle finally came together.

Next, the team plans to look at animals genetically related to rorqual whales and other animals of similar size. Pyenson is also especially interested in studying long-necked and long-tailed dinosaurs, known as sauropods. He hopes that better understanding the nervous system in massive whales will shed light on how a sauropod's nerves may have coursed from its chest, along its 50-foot-long (15 meters) neck, to its head. "I really think we're in a golden age of morphological discovery," Pyenson said. "It's not the kind of science that's necessarily been seen as cutting edge but there's so much to discover. [...] We know so much about the context that even little pieces of information like this really enhance our understanding at a very broad scale."
We often never think about our perception of time, because we see it as something universal. Throughout the world, however, various cultures do not see time in the same way. Some cultures value a fast-paced life and prefer to be punctual. Other cultures prefer to move at their own speed and don't mind being delayed. Time is a fluid concept that is not seen in the same way in every culture. Considering the perception of time and how it may differ allows us to have higher cultural intelligence and interact more easily with people from different cultures.

The measurement of time as it is done in the Western world is a concept that has existed for little more than a century. Industrialization in Western countries in the early 20th century brought a new conception of time, and standard time itself was not introduced until the 1880s, to organize railroad traffic. With the invention of standard time, people in industrializing countries became reliant on clocks and gradually came to live under the constraint of time. Other countries that did not have the same industrial revolution adopted standard time but did not use it to dictate their lives; they did not place as much importance on time as industrialized countries. This attitude towards time is still prevalent today in both scenarios.

In future-orientated cultures, people are always looking to the future and are excited to be moving forward. In the United States, for example, being busy is equated with being successful, so the culture in the US is future-orientated. Japan is another example of a highly industrialized culture that emphasizes a fast-paced lifestyle and a full schedule as the markers of success. According to social psychologist Robert Levine, people tend to move faster in places with vital economies. His studies of the perception of time around the world show that the health of the economy is the most relevant factor to consider when determining whether a culture is future- or past-orientated. This explains why countries like the United States and Japan, which enjoy a high GDP, are future-orientated.

Past-orientated cultures are ones that did not adopt standard time in the early 20th century in the same way, and today they do not value punctuality as much as future-orientated cultures. Past-orientated cultures do not look to the future but instead view time in the scope of all of its history. This makes the measurement of minutes and seconds seem insignificant, so being on time is not extremely important. Oftentimes in past-orientated cultures, things can run minutes, hours, or even days behind schedule. What is valued is the completion of the task, not the speed of completion. There are even a few cultures that have no concept of time at all, and their language only has words to describe the present, not the past or the future. Anthropologist Allen Johnson states that the level of industrialization is another important factor to consider after economic health: countries that are less industrialized are more likely to be past-orientated.

Knowing that other cultures view time differently is important when trying to diversify your business. The differing importance placed on the past or the future influences people's thinking and what is important to them. Considering these factors can be helpful when interacting with someone from a different culture, because their attitude towards time could affect you in a way you would not expect.
Although another person's perception of time does not affect their ability to contribute, it may become an obstacle if you are unaware of it. This is why cultural intelligence is a virtue in the business world.
Stars begin their lives when hydrogen fusion ignites in their dense, hot cores. Once that process starts, it's game on. The gravitational pull of all the mass of the star tries to squeeze it down into a tiny point, but the energy released by fusion pushes outward, creating a delicate balance that can persist for millions or even trillions of years.

Small stars live an incredibly long time. Because of their small stature, they don't need a lot of energy to balance the inward gravitational pull, so they only sip at their hydrogen reserves. In a bonus boost, the atmospheres of these stars constantly circulate, pulling fresh hydrogen down from the outer layers into the core, where it can fuel the continuing fire. All told, a typical red dwarf star will happily burn hydrogen in its core for trillions of years. Not too shabby. As these small stars age, they steadily become brighter until they just sort of vaguely sputter out, becoming an inert, boring lump of helium and hydrogen just hanging around the universe minding nobody's business but their own. It's a sorrowful fate, but at least it's a quiet one.

The grand finale

When the massive stars in our universe die, it's much more violent. Because of the increased bulk of these stars, fusion reactions need to happen much faster in order to sustain the balance with gravity. Despite being so much heavier than their red dwarf cousins, these stars have much shorter life spans: within only a few million years (which, given astronomical time scales, might as well be next week) they die. But when massive stars die, they go out in all their glory. Their huge size means there's enough gravitational pressure to fuse not only hydrogen, but also helium. And carbon. And oxygen. And magnesium. And silicon. A good number of the elements on the periodic table are produced inside these giant stars near the end of their lives.

But once these stars form an iron core, the music stops and the party's over. All that material surrounding the iron squeezes in on the core, but iron fusion doesn't release energy to counteract it. Instead, the core contracts to such incredible densities that electrons get shoved inside of protons, turning the entire core into a giant ball of neutrons. That neutron ball is able to (temporarily, at least) resist the crushing collapse, triggering a supernova blast. A supernova will release more energy in a week than our sun will release over the course of its entire 10-billion-year lifetime. The shock wave and material ejected during the explosion carve bubbles in the interstellar medium, disrupt nebulas, and even send material spewing out of galaxies themselves. It's one of the most spectacular sights in the entire universe. When supernovas happen in our neck of the galactic woods, the explosions are bright enough to appear during the day and can even be brighter than the full moon at night. Pretty intense, and what a way to go.

One last show

It's the medium-size stars that suffer the worst fate. Too big to just go off quietly into the night and too small to trigger a supernova blast, they instead turn into gruesome monsters before finally turning themselves inside out. For these medium stars (which includes stars like our sun), the problem is that once a ball of oxygen and carbon forms in the core, there isn't enough mass surrounding it to fuse it into anything heavier. So it just sits there, getting hotter by the day. The rest of the star reacts to that inferno in the core, swelling and turning red, producing a red giant.
When our sun turns into a red giant, its edge will reach nearly the orbit of the Earth. That red giant phase is unstable, and stars like our sun will convulse, collapsing and reinflating over and over, with each event launching winds carrying the bulk of the sun's mass out into the solar system. In its final death throes, a medium-size star spews out its guts to form an effervescent planetary nebula, thin wisps of gas and dust surrounding the now-exposed core of carbon and oxygen at the center. That core gets a new name when exposed to the vacuum of space: a white dwarf. The white dwarf illuminates the surrounding planetary nebula, energizing it for about 10,000 years before the stellar corpse cools too much to enable such light shows. While beautiful and bewildering to behold in a telescope, planetary nebulas are the products of a violent, tortured death of a star. Alluring, yes, but also haunting to contemplate. Learn more by listening to the episode "What happens when stars die?" on the Ask A Spaceman podcast, available on iTunes and on the Web at http://www.askaspaceman.com. Thanks to Mitchell L. for the questions that led to this piece! Ask your own question on Twitter using #AskASpaceman or by following Paul @PaulMattSutter and facebook.com/PaulMattSutter.
Jurassic deep-sea predators thrived as sea levels rose during the period, while those that survived in the shallows became extinct, a new study found. The change caused by climate hundreds of millions of years ago has implications for today's species as the world's oceans face the challenge of global warming. A study of fossilised teeth showed just how reptiles adapted 150 million years ago. And the study revealed for the first time that the broad structure of food chains beneath the sea has remained largely unchanged since the Jurassic era. Various species fed off different food sources so they never had to compete and so could cohabit, much like today's sea life.

For more than 18 million years, diverse reptile species lived together in tropical waters that stretched from present-day northern France to Yorkshire. However, little was known about the structure of the food chain in this region, called the Jurassic Sub-Boreal Seaway, or how it changed as sea levels rose. University of Edinburgh palaeontologists analysed the shape and size of teeth spanning this 18-million-year period when water levels fluctuated. They found species belonged to one of five groups based on their teeth, diet and which part of the ocean they inhabited. The pattern is very similar to the food chain structure of modern oceans, where many different species are able to co-exist in the same area because they do not compete for the same resources, the team says.

As sea levels rose, reptiles that lived in shallow waters and caught fish using thin, piercing teeth declined drastically. But larger species, which had broader teeth for crunching and cutting prey and inhabited deeper, open waters, began to thrive. The study suggested these deep-water species may have flourished as a result of major changes in ocean temperature and chemical make-up that also took place during the period. This could have increased levels of nutrients and prey in deep waters, benefitting species that lived there. The study offers insights into how species at the top of marine food chains today might respond to rapid environmental changes, including climate change, pollution and rising temperatures.

Dr Davide Foffa, of the University of Edinburgh's School of GeoSciences, said: "Studying the evolution of these animals was a real – and rare – treat, and has offered a simple yet powerful explanation for why some species declined as others prospered. This work reminds us of the relevance of palaeontology by revealing the parallels between past and present-day ocean ecosystems."

Dr Steve Brusatte, Reader in Vertebrate Palaeontology, also in the School of GeoSciences, added: "Teeth are humble fossils, but they reveal a grand story of how sea reptiles evolved over millions of years as their environments changed. Changes in these Jurassic reptiles parallel changes in dolphins and other marine species that are occurring today as sea levels rise, which speaks to how important fossils are for understanding our modern world."

The study, which also involved the University of Bristol, was published in the journal Nature Ecology & Evolution.
Despite the fact that the dog has long been recognized as man's best friend, scientists still do not fully understand the process of domestication of this animal. Recently, archaeologists in Italy discovered the remains of a dog which may belong to the oldest domesticated individual found so far. Scientists hope this find, which is estimated to be between 14,000 and 20,000 years old, will shed light on how dogs evolved from wild carnivores to loving companions.

One popular theory is that wolves became scavengers due to lack of food, and this forced them to live close to humans. Experts believe that this is how animals and humans developed a bond and a symbiotic relationship. Other researchers suggest that wolves and humans hunted together, and that this is what led to "friendships."

A research team from the University of Siena hopes the paper, published in Scientific Reports, can provide clues to the origins of companion dogs. In their study, the scientists describe the remains of animals found in two Paleolithic caves in southern Italy, Paglicci Cave and Romanelli Cave. Using molecular and morphological analyses, the scientists were able to establish the age of the remains. Experts believe that the find is at least 14,000 years old. "This is evidence of one of the earliest occurrences of domestic animals in the Upper Paleolithic of Europe and the Mediterranean," the authors of the work comment. Scientists note that analyses of the remains are continuing, and there is now reason to believe that the ancient dogs found may be 20,000 years old! While determining the true age of the remains is still in progress, the researchers are sure of one thing: their finds include the oldest examples of domestic dogs found in the Mediterranean.

The remains of wolves were also found in the caves. They were larger than the dogs and had distinct molars designed to rip meat apart. Molecular analysis has shown that the genetic separation of wolves and dogs began somewhere between 20,000 and 30,000 years ago. Scientists agree that the domestication of the dog dates back to the Last Glacial Maximum, a period of severe ecological crisis during which many European animal populations, and humans, sought refuge in warmer regions such as Italy. "During this period of serious crisis, the wolf found a new way of survival – food near human settlements," explained Dr. Francesco Boschin, lead author of the study. He also thinks it possible that humans tried to speed up the transition from wolf to dog by killing the most aggressive offspring, selecting for calm and obedient traits.
Mosquito Hawk, Blue Snake-hawk, Hovering Kite and Locust-eater. Breeder. Fairly common in spring, summer, and fall in Inland Coastal Plain and Gulf Coast regions. Rare and local in spring, summer, and early fall in Tennessee Valley. In Mountain region, rare but increasing in spring, summer, and fall. Low Conservation Concern.

Mississippi kites belong to Class Aves, Order Falconiformes, Family Accipitridae and Genus Ictinia. The Mississippi kite varies in length from 12 to 15 inches, weighs eight to 13 ounces, and has a wingspan of 41 to 44 inches. The adult Mississippi kite has a white to pale gray head, with black around the eye. The upper wings are slate gray and the upper secondary wing feathers white. When seen in flight and from below, the body looks light to medium gray with the wings a darker gray. The tail is square, blackish and sometimes slightly notched. Young kites have a brown streaked breast with a banded tail.

Mississippi kites are found in the southern parts of the midwestern and southeastern United States. The distribution is uneven due to colonial nesting, with the greatest numbers found in the southern Great Plains. In Alabama they can be seen in the Black Belt region down to the transition zone between the upper and lower coastal plain. The largest concentration is found along the rivers of the lower coastal plain in south Alabama. They migrate as far south as the continent of South America.

Mississippi kites nest in wooded areas. They are normally found where woodlands are adjacent to rivers, grasslands, or savanna-type habitats. They prefer riparian habitat over continuous forest. The Mississippi kite is a graceful flier that can spend hours soaring in the air. It is a gregarious bird, traveling in flocks and even nesting in small colonies. The kite feeds on insects, usually grasshoppers, dragonflies, and mosquitoes. It normally searches for prey while in flight and often eats while in the air. Mississippi kites can often be seen in large groups foraging in freshly cut hayfields, even flying behind tractors looking for insects. The kite sometimes preys on small birds, bats, reptiles, and amphibians.

LIFE HISTORY AND ECOLOGY: Mississippi kites reach sexual maturity in two years and then begin to breed. They find mates on the wintering grounds and during migration to nesting areas. They will nest singly or in colonies of four to six pairs. The nest is usually built in the top of a large mature pine or hardwood tree. They will return to an old nest and reuse it. They lay one or two eggs and both parents assist with incubation for 29 to 31 days. The fledglings can fly by 50 days and are fed by the parents until they are at least 60 days old. Sub-adult birds have been observed assisting with incubation, feeding and defending the nest.

"The Peregrine Fund: Mississippi Kite." James Altiere, Alabama Division of Wildlife and Freshwater Fisheries
Early human communities did not need to rely solely on rainwater, rivers and lakes for their water supplies. There were alternative natural or fabricated sources that they learned could be drawn upon: springs, oases, cenotes, aquifers, wells, cisterns, dams and qanats.

Water in rivers, lakes and even cisterns was often spoiled by mud and minerals or polluted by animal and human waste, so an underground spring was a much-cherished alternative source. Springs were often cooler and purer than surface water, originating in underwater aquifers and rising through cracks and fissures in the rock. Today springs are classified by the amount of water they discharge; the technical term is their 'resurgence'. This ranges from the first magnitude, which emits over 100 cu feet per second (2,800 litres/second), to the eighth magnitude, which trickles out at just a pint per minute (8 millilitres/sec). Because groundwater tends to maintain its temperature, spring water is cool in the summer and does not freeze in the winter, providing a regular supply throughout the year.

Oases are naturally-occurring aquifers and underground rivers that either reach the surface of their own accord or do so through human intervention. An oasis is a type of spring which reveals itself as a patch of vegetation located in otherwise desert areas, the greenery indicating the presence of a water supply. Date palms are often found at an oasis and these tall trees helpfully provide an upper canopy to shade other vegetation below. Migrating birds have made use of oases since they first overflew deserts, and this in turn attracted human attention to these locations. Their very nature gave oases an automatic strategic and economic value, becoming staging points for trade routes and for desert caravans.

The oasis of Awjila, in today's north-eastern Libya, was mentioned by Herodotus (fifth century BCE) as an important waypoint across the Sahara and as a source of excellent date fruits. It handled both east-west traffic from Egypt to Libya and north-south from Benghazi to Lake Chad/Darfur. Herodotus talks of nomadic Nasamones making ten-day journeys between the oases of Siwa and Awjila. The oasis of Ghadames (aka Cydamus), in today's western Libya, was settled by 4,000 BCE and its city wall is today a UNESCO World Heritage Site #362. The oasis of Kufra, in today's south-eastern Libya, is located on a strategic high point that dominates the lower-lying areas around it. It is in fact a series of six oases and featured strategically during WWII. It is also considered a holy place for one Sufi sect. Sadly, today it has reputedly become a way-point on a major people-trafficking route.

The Mayan civilisation lacked rivers, streams and wells, yet was able to flourish in the Yucatán Peninsula of Mexico from 2,000 BCE because its land was peppered with sinkholes known as cenotes, access holes to underground freshwater supplies. The cenote acts to all intents and purposes as a well, yet it is not one. A cenote is created by rain water that has been filtered through the rocks in the ground, mostly limestone or coral. The fresh water then creates what is called a 'lens aquifer', sitting on top of the underlying sea water. The different characteristics of the salt and fresh water menisci maintain a virtual diaphragm and the two do not mix. Depending on the rainfall and the location, this freshwater layer can range from almost negligible to up to seventy metres deep.
The famous Mayan monument's name, Chichén Itzá, is derived from chi or 'mouth' and ch'e'en or 'well', so it literally means 'the mouth of the well of the Itzá'. Some cenotes were naturally formed, others were hewn out by mankind. Together these provided sufficient fresh water to found a tradition of corn farming upon what is otherwise pretty sparse and arid soil. But the Mayans did more. They found natural surface recesses, or ones created by their removal of clay for house-building, and used these as aguadas – ponds or reservoirs. They also captured rainwater in chultuns, or cisterns, hewn into the limestone. Some of these had quite small entrances, then widened beneath the surface, forming a wine-bottle-shaped profile that minimised airborne corruption. In more recent times both aguadas and chultuns have been lined with stone or plaster to hold their contents more securely.

Artesian or confined aquifers are where water is located underground and under pressure, often in a defile or valley where hydrostatic equilibrium has yet to be reached, meaning the water has not yet found its own level (see later how this siphon effect was used by the Romans with their aqueducts). Digging or drilling a well into an artesian aquifer brings the water to the surface naturally. It overflows at the surface until equilibrium is attained.

BRIEFER: In the 12th century Carthusian monks in the French province of Artois created many wells using this principle; Artois is the derivation of the term 'artesian'.

Remarkably, artesian aquifers are often present under deserts. In the late 1950s the Nubian Sandstone Aquifer System was discovered beneath the Sahara; its water had accumulated there back in the ice age, and the discovery led to the Gaddafi government in Libya developing its Great Man-Made River Project (see later), supplying 6,500,000 cubic metres of fresh water each day to the cities of Benghazi, Tripoli and Sirte.

We've all heard of the Great Barrier Reef, but few have heard about Australia's Great Artesian Basin. Discovered in 1878, it is the largest and deepest such basin in the world, covering 660,000 sq miles and spreading across almost a quarter of Australia's massive land-mass. It contains 64,900 cubic kilometres (15,600 cubic miles) of groundwater and is the source of much of the fresh clean water for the continent. It has been used by Aussies to develop farmland at long distances from any river course.

Neolithic settlements have been found on Cyprus that date back to 9,000 BCE. It had always been something of a crossroads, a stopping-off place for seaborne migrants. One PPNB group settled on the island along its south-west coast at Kissonerga-Mylouthkia, showing some knowledge and sophistication by digging two wells to achieve their water supply. In 2011 these wells were uncovered and proved to be the oldest wells discovered to date anywhere in the world. They had evidently dried up and been filled with debris. Their early age (8,500 BCE) was determined by radiocarbon dating the accumulated rubbish, and particularly the skeleton of a young girl that was discovered in one of them.

The world's third-oldest well (to date) was discovered in Israel's Jezreel Valley. It was uncovered when the National Roads Company was widening a highway. It too was found to contain skeletons at its base, those of an older male and a 19-year-old female, dating to 6,500 BCE. The well was attached to an early farming community, as evidenced by flint tools, sickle-shaped flint blades and arrowheads at its base.
The quarrying work to create the well suggests a long-term community effort had been undertaken to conceive and create it.

Wells dating back to 5,000 BCE were discovered near Leipzig in Saxony, eastern Germany, making them the oldest in Europe and the oldest wooden wells in the world. The four wells led archaeologists to rethink the capabilities of our ancestors. These well-builders were part of a migration from the Great Hungarian Plain some 7,500 years ago, their trail traceable by their distinctive Linear Band Ware pottery – cups, bowls, vases, and jugs without handles (though later versions added lugs). This Linear Pottery was made between 5,600 and 4,900 BCE. The wells were created by farmers who were evidently adept at carpentry too. The scientists discovered over one hundred and fifty oak timbers used to line the seven-metre-deep wells. They were jointed and assembled in a way that has withstood seven millennia beneath the ground and kept archaeologists and dendrochronologists busy. The oak trees were shown to have been felled between 5,206 and 5,098 BCE, so it is claimed as the world's oldest discovery of any form of wooden architecture. Tool marks discovered on the wood are helping scientists to investigate the woodworking approaches that were used back then. Other remains found in the wells indicate that the local humans' staple foods were two sorts of wheat, supplemented by lentils and peas, with apples, hazelnuts, raspberries, strawberries and sloes; there were also traces of oils taken from linseed and poppies. Add a little meat and these well-diggers would have enjoyed a good and varied diet.

Cisterns and Step wells

Another approach adopted by early communities from as early as 3,000 BCE was to store water in fabricated cisterns, capturing rainwater in some form of container for later use. These would also serve a defensive purpose, because if you created cisterns inside your city walls then you could withstand a siege for longer periods. For example, the cisterns in the acropolis of Pergamon have been calculated to have been large enough to supply its 20,000 people for over a year.

The Harappan civilization in the Indus Valley of India/Pakistan (more later) developed a number of early and innovative approaches to water storage and use. Here stepped wells were relatively common, subterranean water storage systems built from c.3,000 BCE. At Dholavira in western India, excavations revealed the country's largest and most elegant reservoirs to date; one is 10 metres deep, 73 metres long and 29 metres wide. In 2014 another stepwell dating from 3,000 BCE was discovered nearby.

UNESCO has recognised Rani-ki Vav in Patan Town, Gujarat as a World Heritage Site (#922) for its remarkable stepwell, which has the sobriquet 'The Queen of Stepwells'. Patan was built during the Solanki dynasty as the Gujarat capital city from 960-1243 CE, a time when Gujarat controlled a major part of the Indian Ocean trade. It became one of the largest cities in India, able to support some 100,000 inhabitants. The stepwell is one of the biggest found in the region at 64m long x 20m wide x 27m deep. It was constructed by the orders of Rani Udayamati (1022-63 CE) of the Chalukya Dynasty in memory of her husband. She planned it as an inverted temple to water. It has four storeys, each with sculpted pillars and compartments, with staircases at the side walls connecting the levels. It is built of bricks faced with stone, the side walls decorated with some five hundred major relief sculptures and a thousand minor ones.
They depict, for example, the ten incarnations of Vishnu (including Buddha), plus Ganesh and the other gods in the Hindu pantheon. The stepwell was invaded by the Saraswati River and remained filled with silt until the 1980s, which fortunately meant that the sculptures survived intact.

Greek history during the late Bronze Age, between 1,600 and 1,100 BCE, is called Mycenaean. Mycenae was one of its major city strongholds, set high upon a rocky hill. In mythology it was claimed to be founded by Perseus, and it features in both of the influential epic poems, The Iliad and The Odyssey. At its peak the citadel of Mycenae could house 30,000 people. They built a 360-metre underground tunnel or syrinx from the Perseia spring (named for Perseus) to supply a cistern beneath the citadel that would sustain them during a siege. The city walls were later extended (c 1,200 BCE) to enclose the spring itself, and an 86-step staircase led down 18 metres to retrieve the water.

Another way of accumulating water resources was to build a dam. The world's first dam (discovered to date!) was built in the Black Desert of today's Jordan. Using earth and masonry, the Jawa Dam was built c 4,000 BCE to create a store of water for irrigation purposes. The next oldest dam discovered was at Sadd el-Kafara, some 30 kilometres south of Cairo, Egypt. This was built between 2,900 and 2,700 BCE with earth and a limestone masonry facing up to 14 metres high. It was built to protect structures in the valley below rather than to store water. This was a period when pyramid construction was still in its formative stage and their technique for this dam proved inadequate. It did not last very long: its lack of a spillway (a channel to run off excess water) meant that a flood simply carried it away. The oldest dam still in use today was built c 1,300 BCE on the Orontes River in today's Syria. Assyrians, Babylonians and Persians built dams between 700 and 250 BCE; these were used both for water supply and for irrigation. In today's Yemen, the Ma'rib dam was built in the southern Arabian Peninsula at around the same time.

The earliest type of aqueduct to be created was known as a qanat. Qanats were tunnelled manually through rock and soil from an aquifer in order to supply a community. The tunnels were usually just a little larger than the person digging them and constructed with a gradual downward slope. Every thirty metres or so a vertical shaft was sunk, which provided ventilation but was also used for lifting out the spoil during construction and was a handy access point for repairs and routine clearance. The qanat uses gravity, with no equipment required to deliver the water, and when gouged through rock the water losses along the tunnel prove minimal. Their downside is that the flow is constant, and therefore potentially wasteful at periods of low usage. The Assyrian King Sargon II refers to seeing one in Persia in 714 BCE; his son would later build a famous one at Nineveh. The city of Zarch has the oldest and longest extant qanat, being 3,000 years old and 71 kms (44 miles) long. Qanats frequently terminated at a kariz or well. The notion of qanats was transmitted along the Silk Route and again later by the Roman and Islamic empires. In quite recent times Turpan, in China, adopted the qanat principle, though the locals call it a karez, which means 'well' in the Uyghur language. The water source was the nearby mountains and some 1,000 wells were supplied by this gravity-fed approach.
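Since the qanat works purely by gravity, its geometry is fixed by two slopes: the ground falling toward the outlet and the gentler fall of the tunnel itself. A rough sketch with invented numbers follows; the thirty-metre shaft spacing comes from the text above, while the slopes and depths are illustrative assumptions:

```python
# Invented-numbers sketch of qanat geometry. The thirty-metre shaft
# spacing comes from the text; the slopes and depths are assumptions.
GROUND_SLOPE = 0.02         # ground falls 2 m per 100 m toward the outlet (assumed)
TUNNEL_GRADIENT = 0.002     # tunnel falls 0.2 m per 100 m (assumed)
SHAFT_SPACING_M = 30.0      # a vertical shaft every thirty metres or so
MOTHER_WELL_DEPTH_M = 10.0  # tunnel depth where it taps the aquifer (assumed)

distance = 0.0
depth = MOTHER_WELL_DEPTH_M
while depth > 0:
    print(f"shaft {distance:5.0f} m from the mother well: {depth:4.1f} m deep")
    distance += SHAFT_SPACING_M
    # The ground drops faster than the tunnel, so each successive shaft
    # is shallower, until the tunnel daylights at the outlet.
    depth -= (GROUND_SLOPE - TUNNEL_GRADIENT) * SHAFT_SPACING_M
```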
Self-esteem is always a concern for students with special needs. In a mainstreamed classroom, it’s not difficult to see students divide into groups. If you as a teacher are aware of this, you can take steps to ensure that the entire class is cohesive. For instance, there may not be a real peer group for the only student in class with visual impairment; therefore, you need to make certain that the entire class is a peer group. This is accomplished through classroom management.

Focus on Talents

Not all students will excel in academic skills. As a teacher, take time to ask all students what they are really good at and use those skills as much as possible. Are they artistic? Do they play an instrument? Do they have great social skills? There is nothing greater than eavesdropping on a conversation in which a student with special needs is lauded by other students for his or her skill, and students form connections based on a common interest. As the parent of a child with special needs, you can make a list of things that the teacher might do or say to help improve your child’s self-esteem. If your child does something very well, take a sample to show his or her teacher. Those teachers who are receptive will look at your child with new respect, and they may mention the skill to other students. This could be a huge self-esteem boost. Everyone struggles when they learn something new. It’s important to explain to students with special needs that they are not necessarily struggling because they have a learning disability: they may be struggling because the information is difficult. This helps to reassure students who may be sensitive to their slower rate of learning. Tackling a challenge provides a wonderful chance to gain self-esteem: if students keep trying until they accomplish a goal, their self-esteem increases. Sometimes, the harder the goal, the greater the boost to self-esteem will be. The key to helping students with special needs persevere is to break a difficult task into smaller steps to reach a larger goal.

Rejoice in What They Do Well

Students gain self-esteem when they do something well, and it’s helpful to focus on the little things they can do well. Many tasks are frustrating for students with special needs; as a parent or a teacher, be patient with what they can’t do and rejoice over what they can do. There are things each of us can’t do, and a lot depends on the standard to which we are held. Most of us would be at a loss in a room full of astrophysicists; however, while we can give ourselves a little grace, knowing that we just can’t do what these scientists can do, sometimes we have trouble translating this concept as it relates to students with disabilities. Help these students understand that everyone has things that they can’t do and things they can do. Help them discover their strengths.

Help Them Look Beyond School

While it is important that students with special needs meet the requirements of testing and the school, help them to think beyond school.
Allow them to explore careers. Look at their positive traits, keeping in mind that these can be very valuable to a potential employer. Do they always arrive early? Do they turn in their work on time? Do they clean up after themselves or others? Are they observant? Can they greet people at the door and make them feel welcome?

Involve Them in Hands-On Activities

If it is possible, enroll students with special needs in some kind of adventure or science field class in which they are exploring or collecting samples outside. This builds self-esteem by giving them a sense of connection and accomplishment. It also allows them to work in groups to solve a problem.
Two-wire signaling cures many noise problems at the cost of a second signal trace. As shown in Figure 6.4, a two-wire transmitter sends current on two wires: a first wire, which carries the main signal, and a second wire, which is provided for the flow of returning signal current. As drawn, the currents on the two wires will be equal and opposite, but the voltages will not be. This architecture provides three important benefits. Figure 6.4. Two-wire transmission provides a signal wire and return wire for each signal. First, it frees the receiver from requiring a global reference voltage. In effect, the second wire serves as a reference for the first. The receiver need merely look at the difference between the two incoming wires. Two-wire signaling renders a system immune to disturbances in the distribution of global reference voltages, provided the disturbances do not exceed the power-supply noise tolerance of the logic family or the common-mode input range of the receivers. The reference voltage for TTL, most high-speed CMOS, and ECL is ground; for PECL (positively-biased ECL), it is the power voltage. Second, the two-wire architecture eliminates shared-impedance coupling between a receiver and transmitter in the same package. In Figure 6.4, returning signal current associated with transmitter B flows through the return wire back to the battery at B without traversing z_B, and therefore without disturbing the receiver. By eliminating the shared-impedance coupling between circuits A and B, two-wire signaling conquers ground bounce locally generated within the package. Third, two-wire signaling counteracts any type of interfering noise that affects both wires equally. A good example would be the ground shifts encountered in a high-speed connector. When two systems are mated by a connector, the net flow of signal current between the systems returns to its source through the ground (or power) pins of the connector. As it does so, tiny voltages are induced across the inductance of the connector's ground (or power) pins. These tiny voltages appear as a difference between the ground (or power) voltage on one side of the connector and the ground (or power) voltage on the other side. This problem is called a ground shift, and it is yet another form of common-impedance coupling. Two-wire signaling fixes this problem. These three benefits do not depend on the use of any changing voltage on the second wire. As shown in Figure 6.4, the return wire merely carries the local reference voltage (ground, in this case) from the transmitter to the receiver, where it may be observed. This simple circuit renders the system immune to local disturbances in the power and ground voltages, ground bounce generated within a package, and ground bounce generated within a connector. That's pretty good. The performance of a two-wire signaling circuit hinges on the assumption that no current flows through impedances z_A and z_B. Under this assumption the receiver at B can directly observe (on the return wire) the local reference voltage at transmitter A, and the next receiver C can observe the local reference voltage at transmitter B. Any currents flowing through z_A or z_B change the reference voltages on the return wires, interfering with reception. The two-wire circuit must be arranged so that it limits the current through z_A and z_B to innocuous levels. Unfortunately, in a high-speed system all wires couple to the surrounding chassis and other metallic objects, whether you want them to or not.
In Figure 6.4 you can model this coupling as a collection of parasitic lumped-element connections from each wire to the reference beam. Current transmitted on the signal wire therefore has a choice of returning pathways. It can return to the source along the return wire (the intended path), or it can flow through the parasitic connection to the reference beam and from there return to the transmitter through impedance z_A. The current that flows through the parasitic pathway is called stray returning signal current. At high speeds the stray returning signal current is often significant enough to impair the effectiveness of a two-wire signaling system. Does this impairment defeat the utility of two-wire signaling for high-speed circuits? Not necessarily, provided that you pick a particular, unique signal for the second wire. The second wire must carry a signal equal in amplitude to the first, but opposite in polarity (an antipodal, or complementary, signal). If you do that, everything still works.
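To make the common-mode-rejection argument concrete, here is a minimal numerical sketch in Python (not from the original text): the transmitter drives the pair antipodally, a connector ground shift adds the same offset to both wires, and the receiver recovers each bit by looking only at the difference between the wires. The voltage values are illustrative assumptions.

    # Minimal sketch of two-wire signaling with antipodal drive.
    # All voltages are illustrative; the point is that noise common to both
    # wires cancels in the difference the receiver observes.

    def transmit(bit: int, swing: float = 0.4):
        """Drive the pair antipodally: equal amplitude, opposite polarity."""
        v = swing if bit else -swing
        return v, -v  # (first wire, second wire)

    def receive(v_pos: float, v_neg: float) -> int:
        """The receiver looks only at the difference between the two wires."""
        return 1 if (v_pos - v_neg) > 0 else 0

    ground_shift = 0.25  # common-mode noise, e.g. from connector pin inductance

    for bit in (0, 1, 1, 0):
        v_pos, v_neg = transmit(bit)
        # The ground shift corrupts both wires equally...
        v_pos += ground_shift
        v_neg += ground_shift
        # ...but the differential receiver still recovers the bit.
        assert receive(v_pos, v_neg) == bit

    print("all bits recovered despite the common-mode ground shift")

As the text notes, this immunity holds only while the common-mode offset stays within the receiver's common-mode input range; the difference operation cancels what is common to both wires, not arbitrarily large disturbances.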
While some readers prefer realistic fiction or graphic novels, others may prefer informational text or biography. It's always a good idea to keep in mind that there are many genres that students would enjoy. When children are introduced to a variety of books, they are transported to worlds that are much different than their own. They become more familiar with language and new vocabulary, and they develop a greater interest in reading and learning. Also, students who read multiple types of books tend to score higher on comprehension tests than those who stick with one type of reading material. Look at the following genres and see if you can find some that might interest your students. Children of all ages enjoy poetry—and were introduced to this genre years ago, as many early learning books feature poems. Children love humorous poems such as those by Jack Prelutsky, Eloise Greenfield, or Shel Silverstein. Poems that children can relate to personally will hold their interest. When young children hear you read poems, they begin to recognize the rhythm and cadence of language. Who doesn’t love a good fantasy story? Think of Harry Potter, Lemony Snicket, or James and his giant peach. Well-written fantasy books take readers on an adventure—almost making it all seem real. They are a great form of fun escapism for many children and adults. Also, some fantasy authors write a series of books about the same character. Once children latch onto one of the books in a series, they're often hooked and can't wait to read the entire collection. Books about the lives of famous people offer children a glimpse into what they could become or how someone was able to triumph over tragedy. This is what kids love about biographies. They can learn about themselves through the lives of others. Also, kids are curious—they want to know why famous people did what they did. - HISTORICAL FICTION Readers of these types of books are enamored with learning about the past... where people lived, what they ate, how they spent their days, and why the time period was important in history. The characters may be a mixture of real and imaginary. The events must be portrayed as if they actually could have happened. One of the most well-known examples of historical fiction is the Little House on the Prairie series. - INFORMATIONAL TEXT Find the right topic and these types of books are a hit! Informational readers like to skip around to find what piques their curiosity, and most of these books lend themselves to this type of reading. One tip: Make sure that the books are current enough to contain accurate information.
A vaccination is a treatment which makes the body stronger against a particular infection. The body fights infections using the immune system, which is made up of millions upon millions of cells, including T cells and B cells. An important part of the immune system is that it is much stronger when fighting a disease which it has already fought against before. Vaccination involves showing the immune system something which looks very similar to a particular virus or bacteria, which helps the immune system be stronger when it is fighting against the real infection.

Vaccination versus Immunization

Another word used for vaccines is immunization. These words mean things that are a little different. Vaccination is when a person is given something to make the immune system learn to fight an infectious disease. Immunization is when a person's immune system learns to fight an infection. Immunization can happen from vaccination. But immunization can also happen from getting the infection. For example, a person can be immune to hepatitis B if he gets sick with hepatitis B. After a person gets hepatitis B and then gets well, he is immunized from getting it again. A person can also be immunized against hepatitis B by getting the hepatitis B vaccination. So vaccination and immunization have meanings that are a little different. But when people say these words, they usually mean the same thing. People say immunization to mean the same thing as vaccination.

Herd immunity

Herd immunity is an important part of how vaccines work. A herd is a group of animals. Herd immunity happens when most of the animals in a group are immune to an infection. If most animals are immune, they cannot get the disease. If they do not get the disease, they cannot give it to other animals. So even one animal who is not immune is safer. If none of the other animals in a herd get the infection, they cannot give the infection to the one who is not immune. This is important in people too. If 95% of people in a place are immune to a disease, the other 5% are safer. There will just not be as much of that disease around to get. The people who are in the 5% are there for many reasons. Some got the vaccine but did not react to it. Their immune system did not learn how to fight it well. Some of them are too sick to get the vaccine. It can be children who are too sick with other diseases to get vaccines. It can be a pregnant woman who cannot get the vaccine because it could hurt her baby. It can be a person with cancer who does not have a strong immune system. It can be an older person who has a weak immune system. So if everyone in a place gets vaccinated, it protects these people too. If they are not protected by herd immunity, they can get more sick from an infection. They get the infection more easily and they get sicker from it. So it is important that people who are healthy get their vaccinations. It protects the healthy people. But it also is important to protect other people who are old, weak, or sick.
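The herd-immunity arithmetic above can be sketched with the standard textbook approximation. The basic reproduction number R0 (how many people one sick person infects when nobody is immune) is a concept borrowed from epidemiology, and the values below are illustrative assumptions, not figures from this article.

    # Herd immunity threshold: the classic approximation 1 - 1/R0, where R0
    # is the average number of people one sick person infects in a fully
    # susceptible population. The R0 values below are illustrative.
    diseases = {
        "influenza-like (R0 ~ 2)": 2.0,
        "polio-like (R0 ~ 6)": 6.0,
        "measles-like (R0 ~ 15)": 15.0,
    }

    for name, r0 in diseases.items():
        threshold = 1 - 1 / r0
        print(f"{name}: about {threshold:.0%} of people must be immune")

For a highly contagious, measles-like disease the threshold comes out near the 95% figure mentioned above, which is why such high vaccination coverage matters.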
Types of vaccines

There are different types of vaccines:
- Inactivated vaccines contain particles (usually viruses). These have been grown for the purpose. They have been killed, using formaldehyde or by other means. But the virus still looks intact; the immune system can develop antibodies against it.
- Attenuated vaccines contain live viruses that have been weakened. They will reproduce, but very slowly, making it an "easy win" for the immune system. Such vaccines cannot be used on patients with a severely weakened immune system, such as those with AIDS, as they are unable to defeat even this very weak virus.
- Subunit vaccines show antigens to the immune system, without introducing virus material.

Safety of vaccination

Today in modern countries almost all people are vaccinated, which has caused many serious diseases to become rare. However, some people argue against vaccination, as they are worried about possible side effects from the vaccination. Vaccinations do have some side effects. These include swelling and redness around the injection site, a sore arm, or fever. These effects happen because the immune system is fighting the viruses or bacteria which have been injected. Very rarely, the immune system overreacts so much to the virus that it damages other areas in the body. As well as these real side effects of vaccinations, some people believe that vaccines cause other serious problems like autism, brain damage, or diabetes. There is no evidence for this. Almost all doctors and scientists believe that vaccination does not cause any of these things. Overall, the vast majority of medical professionals and scientists believe that vaccinations are a good thing, and that the benefits of avoiding diseases are far greater than the very small risk of side effects. All medical organizations around the world, including the World Health Organization (WHO), the American Medical Association, the American Academy of Pediatrics, and the United States Centers for Disease Control, support vaccination. The word "vaccine" was created by Edward Jenner. The word comes from the Latin word vacca, meaning cow. A virus that mainly affects cows (cowpox) was used in the first scientific demonstration that giving a person one virus could protect against a related and more dangerous one.

History of vaccination

The first vaccination ever was for smallpox. In 1796 an English doctor, Edward Jenner, noticed something. He saw that people who got cowpox did not get sick from smallpox. He gave a young boy the cowpox virus to protect him from smallpox. This was done by scratching liquid from cowpox sores into the boy's skin. The same method, using liquid from sores, was also used to give people smallpox. People did this so they might get smallpox on one place on their body. Then they could pick which body part got scars from smallpox. But sometimes people who did this got very sick from smallpox. Some even died. This was a dangerous thing to do. But people did it because it was less dangerous than getting smallpox. Edward Jenner gave the boy cowpox in the same way people tried to give smallpox. Six weeks later, he scratched smallpox into the boy's skin. The boy did not get sick from smallpox. This boy was the first person ever to get a vaccination.
Poisonous plants are not all made equal. Some are more poisonous to humans than others. Some toxins are faster acting than others. Some toxins have an effect when ingested, while others have an effect on contact with the plant. This should not, however, make you afraid of plants. Plants are not out to get you. Many toxins in plants have evolved to defend the plants from insects, pests and other predators. It’s just a case of becoming educated and learning to identify plants that may cause you harm if you ingest or touch them. In general, the most important plants to learn first are the most common. And with poisonous plants, the most important ones to learn first are the most common and widespread species that have a significant ability to cause you some trouble. Below are ten plants to get you started with your knowledge of poisonous plants. All are common and widespread, with some resembling edible species.

Bittersweet, Solanum dulcamara

Also known as woody nightshade, this member of the nightshade family is much more common than its infamous relative deadly nightshade, Atropa belladonna, and as such you are much more likely to come across it. Bittersweet is part of the Solanaceae family of plants, which also includes tomatoes and potatoes. Bittersweet bears some common family resemblances to these in its flower and leaf shape. Bittersweet flowers are purple, with a yellow centre – very striking. The purple colouration is also present in other parts of the plant, most noticeably the stems of the fruits. The fruits themselves resemble tiny, ovoid tomatoes, which start green and ripen to a full red. The fruits even smell like tomatoes but are poisonous to humans. The plant grows as a climbing vine in hedgerows and woodland edges. It particularly likes damp ground and can be seen amongst plants right on the edge of rivers.

Black Bryony, Tamus communis

This plant is a twining vine growing in hedges and amongst other plants such as brambles in the woods. Black bryony is related to yams but is a long way from being edible, its starchy root being stuffed with high concentrations of toxins, including sharp calcium oxalate crystals. Black bryony is not so visible in summer but becomes more so in autumn and winter, when its attractive red berries, which can stay on the vine right into winter, are more obvious. These berries are also poisonous.

Improve Your Tree and Plant Identification Skills

Would you like to improve your ability to identify useful trees and plants? I offer an online tree and plant identification course, which flows through the seasons. Find out more about the next available course by clicking the following link: Paul Kirtley’s Tree and Plant Identification Masterclass

Dog’s Mercury, Mercurialis perennis

Dog’s mercury is a common plant of woodland floors in the UK and Europe, often forming continuous stands. It is not the most showy of plants and it is generally unknown by most people, even though they may see it every day while walking their dog in the woods, for example. Dog’s mercury is most prominent in the spring. At first glance it somewhat resembles a mint, but it has some significant differences both in the leaf arrangement and the flowers. Mercurialis is one of two genera of the large Euphorbia family that contain species native to the UK, both of which include toxic species. Dog’s mercury is by no means one of the most poisonous plants you’ll find in the woods, but it is very common and should not be ingested.
Learn it so you can differentiate it from more useful species of plants you will find in the same habitats.

Foxglove, Digitalis purpurea

Foxglove is a plant that makes its presence known. The imposing flowering stems, holding beautiful purple bell-shaped flowers, are obvious to all. What is less well known is that foxglove is a biennial, which means the plant completes its life cycle over two years. In the first year, it produces a broad rosette of leaves, close to the ground. The easy-to-spot purple flowers appear only in the plant’s second year. Hence, it is much less well recognised in the first year. You should learn to differentiate the leaves of foxglove from other useful edible or medicinal plants such as burdock, mullein and comfrey. All parts of foxglove plants contain powerful toxic alkaloids which act on the heart. Human poisoning has occurred following consumption of the leaves and flowers. Some people use the plant as toilet paper, but this is not generally to be recommended, as skin reactions have been reported following contact with the leaves.

Giant Hogweed, Heracleum mantegazzianum

Giant hogweed originates in central Asia and was introduced to the UK and western Europe as an ornamental. It contains a number of toxins, including chemicals which cause dermatological reactions on contact with human skin. These reactions can be particularly severe when they occur in bright summer sunlight, as the toxins make the skin more sensitive to UV light. This photosensitising sap has been increasingly reported in the news, as children are typically the ones harmed. That said, the reactions are not limited to children: people cutting the plant down or, even worse, strimming it and spattering sap on their unprotected skin have suffered blistering. Eyes can be damaged by getting the sap into them. Check out this article and video on my site on How To Identify Giant Hogweed, Heracleum mantegazzianum. Clearly care should be taken around this plant. When it is small, the leaves are quite light green in colour and can be differentiated from other related species, in particular common hogweed, Heracleum sphondylium. When large, the plant is similar to many imaginings of the fictitious triffid and cannot be mistaken for anything else. These plants are becoming more widespread in areas where they have been introduced and seem to favour damp ground, particularly the banks of rivers and canals.

Hemlock, Conium maculatum

This is the classic “poison hemlock”, which many have heard of but far fewer know how to identify. It is important to be able to differentiate this from other less noxious members of the carrot family, to which this highly poisonous plant also belongs. The leaves of hemlock are lacy, feather-like structures, which resemble the commonly known cow parsley. Hemlock has purple splotches on a relatively robust stem, which is round and hollow. The leaves are somewhat finer and more frilly than cow parsley, but the differences are relatively subtle. Some people say hemlock has an odd, unpleasant smell, but personally I find this unreliable. Learn the external features of the plant, including leaves and stem, for positive identification. Several of the poisonous species in this article share at least some family features with edible species, so, as with any foraging for wild foods, remember: “if in doubt, leave it out.” Parts of the plant that have caused poisoning by eating include the seeds, the leaves and the roots.
Potency of each part varies with climate and season, but if ingestion of any part of this plant is suspected, urgent medical treatment should always be sought. Signs and symptoms of hemlock poisoning are reported as burning and dryness of the mouth, followed by weakness in the muscles, eventually leading to paralysis and difficulty breathing. Pupil dilation, vomiting, diarrhoea, convulsions and loss of consciousness have also been recorded. Death occurs due to respiratory paralysis.

Hemlock Water Dropwort, Oenanthe crocata

Hemlock water dropwort is another poisonous member of the carrot family, Apiaceae. This is a plant of wet woodlands, riversides and ditches. It has multiple toxins, including oenanthotoxin and linear furanocoumarins. This plant is more common than you might think. Once you know it you see it in many places. In the UK it tends to be found more in the south as well as the west, and I have seen it extensively in mainland Europe too. Some of the toxins are photosensitising, so you should avoid contact with the plant, and the plant should also not be ingested as it is highly toxic if eaten. There are numerous documented cases of fatal plant poisonings from this plant and its relatives, which contain similar toxins. The leaves resemble flat-leaf parsley or coriander and the roots resemble parsnips. The roots contain four hollow chambers, though. Do not mistake these for other edible roots in this family such as wild carrot, Daucus carota.

Lords and Ladies, Arum maculatum

Arum is one of the first leaves to emerge in the spring, racing to collect sunshine ahead of the trees coming into leaf. Also known colloquially as cuckoo pint, this plant is in flower in the early spring, with its distinctive sheathing bract, enclosing the flower, reminiscent of a mediaeval cowl. Later in the year, from mid summer onwards, its spike of berries, ripening from green to a vibrant and attractive red, makes its presence known from a distance. Despite the attractiveness of the fruit, with some saying the berries resemble jelly bean sweets, they are not for nibbling. Like other members of the Araceae, Arum maculatum contains calcium oxalate raphides, sharp crystals that easily puncture the mucous membranes of your mouth and throat, causing intense irritation and soreness, and creating an avenue for other toxins to enter. Some cases of skin and eye reactions to contact with juices of the plant have been recorded, although handling the plant is relatively low risk. Poisonings have occurred due to consumption of the leaves and roots, but it is the berries that most commonly cause an issue. If you have children, be particularly aware of these plants, as they are low down to the ground and the berries are a very attractive colour.

Woodspurge, Euphorbia amygdaloides

Along with dog’s mercury, mentioned earlier in this article, woodspurge is the other common member of the family Euphorbiaceae native to the UK. Like other members of the genus Euphorbia, woodspurge is quite a primitive plant, with relatively simple flowers that hardly look like flowers at all. The sap of woodspurge is milky and caustic. Like that of its African relatives such as the candelabra euphorbia, the sap of this toxic plant was once used to remove warts. You do not want to be handling or ingesting this plant. In the UK the distribution of Euphorbia amygdaloides is a relatively southern one, but where it occurs it is quite common.
The lower leaves of the main stem are not dissimilar to those of the edible rosebay willowherb, or fireweed, Chamaenerion angustifolium, so if woodspurge occurs in your area, make sure you can identify it properly.

Yellow Flag Iris, Iris pseudacorus

Most often seen on the edges of ponds, although not exclusively, this plant has long strap-like leaves, which are superficially similar to those of the edible species Typha latifolia and Typha angustifolia, the cat-tails. The sap of irises can cause dermatitis on contact with the skin, and the roots, while resembling a hairy sweet potato, are toxic if eaten. The delicate and obvious yellow flowers make for an easy differentiator but clearly are not present for most of the year. The plant can also be differentiated by the leaf structure. It has a single diamond-shaped rib down the centre of the leaf, which is otherwise paper-thin. Typha species, by contrast, have a leaf cross section which is entirely crescent shaped. Well, the above is your starter for ten. These ten species are widespread and it is highly likely that some grow near you, even if you are living in a town. I have seen poison hemlock growing in central London on several occasions, and many of the other plants growing in parks. Keep an eye out for them wherever you live and start to become familiar with their size, shape and form. This will provide a good basis for differentiating more useful and edible species in the future… Let me know in the comments below what you found useful or informative in the above article…
There are various kinds of teaching methods, which may be sorted into four broad categories: teacher-centred methods, learner-centred methods, content-focused methods and interactive/participative methods.

INSTRUCTOR/TEACHER CENTRED METHODS

Here the instructor casts himself/herself in the role of master of the subject matter. The instructor is looked upon by the students as an expert or an authority. Learners, on the other hand, are presumed to be passive recipients of the knowledge held by the instructor. Examples of such methods are expository or lecture methods, which require little or no involvement of learners in the teaching process. It is also because of this lack of involvement of the learners in what they are taught that such methods are called "closed-ended".

LEARNER-CENTRED METHODS

In learner-centred methods, the teacher/instructor is both a teacher and a learner at the same time. In the words of Lawrence Stenhouse, the teacher plays a dual role as a learner as well, so that in his classroom he "expands rather than constricts his intellectual horizons". The instructor also learns new things every day, in the process of teaching, that he did not know before. The instructor "becomes a resource rather than an authority". Examples of learner-centred methods are the discussion method, the discovery or inquiry-based approach, and Hill's model of learning through discussion (LTD).

CONTENT-FOCUSED METHODS

In this category of methods, both the instructor and the learners have to fit into the content that is taught. Normally, this implies that the information and skills to be taught are regarded as sacrosanct or very important. The instructor and the learners cannot alter or be critical of anything to do with the content. An example of a method which subordinates the interests of the instructor and learners to the content is the programmed learning approach.

INTERACTIVE/PARTICIPATIVE METHODS

This fourth category borrows a little from the three other methods without necessarily laying undue emphasis on the learner, teacher or content. These methods are driven by a situational analysis of what is the most appropriate thing to learn or do, given the situation of the learners and the instructor. They require a participatory understanding of varied domains and factors. In short, these are the kinds of methods commonly used in education.

SPECIFIC TEACHING METHODS

We can now consider a number of specific methods which can be drawn on in the course of classroom instruction. It is, however, important to note that the choice of any method should not be arbitrary, but needs to be governed by the criteria we have already examined. At the same time, no method is fool-proof; each has its advantages and disadvantages. That is why I would advise the use of complementary methods rather than one method alone.

THE LECTURE METHOD

A lecture is the method of relaying factual information, including principles, concepts, ideas and all theoretical knowledge about a given topic. In a lecture the instructor tells, explains, describes or relates whatever information the trainees are required to learn through listening and understanding. It is therefore teacher-centred. The instructor is very active, doing all the talking. Despite the popularity of lectures, the lack of active participation of the trainees limits their usefulness as a method of instruction. The lecture method of instruction is recommended for trainees with very little knowledge or limited background knowledge of the topic.
It is also useful for presenting an organised body of new information to the learner. To be effective in promoting learning, the lecture should involve some discussion and a question-and-answer period to allow trainees to participate actively.

As stated earlier, during a lecture the trainees merely listen to the instructor. It is therefore very important to consider the attention span of the trainees when planning a lecture. The attention span is the period of time during which the trainees can pay full attention to what the instructor is talking about. It is estimated to be only 15-25 minutes. It is difficult to hold trainees' attention for a long period of time, so careful planning of lectures is very necessary.

The instructor should have a clear, logical plan of presentation. He/she should work out the essentials of the topic, organise them according to priorities and logical connections, and establish relationships between the various items. Careful organisation of content helps the trainees to structure it and hence to store or remember it. When developing a theme in a lecture, the instructor should use a variety of approaches. A useful principle in any instruction is to proceed from the KNOWN to the UNKNOWN, from the SIMPLE to the COMPLEX, or from PARTS to the WHOLE. For example, in explaining technical processes the instructor should look for illustrations that will be familiar to the trainees. Unfamiliar technical terms should be introduced carefully. In order to gain and focus the attention of trainees, the instructor should be adequately prepared and fluent in his/her presentation, and should use various teaching aids and illustrations such as charts, transparencies, codes and even real objects during the presentation. Question-and-answer periods should be included in the lecture.

QUALITIES OF A GOOD LECTURE

1. A good lecture should not be so long as to exceed the trainees' attention span (up to 25 minutes).
2. A good lecture should address a single theme.
3. In a good lecture, technical terms are carefully explained.
4. Familiar examples and analogies are given.
5. A good lecture demonstrates fluency in the technical content.
6. A good lecture uses examples and illustrations.
7. A good lecture builds on existing knowledge.
8. A good lecture employs a variety of approaches.

THE DISCUSSION METHOD

Discussion involves two-way communication between participants. In the classroom situation, the instructor and trainees all participate in discussion. During discussion, the instructor spends some time listening while the trainees spend some time talking. Discussion is a means by which people share experiences, ideas and attitudes. As it helps to foster trainees' involvement in what they are learning, it can contribute to desired attitudinal changes. Discussion may be used in the classroom for the purpose of lesson development, for making trainees apply what they have learnt, or for monitoring trainees' learning by means of feedback.

In areas where trainees already have some knowledge or experience, discussion may be used to develop the main points to be covered in a lesson. For example, in safety training, many of the procedures and behaviours that should be observed can be established through discussion with the trainees. Discussion can help clarify the different points of view and may help each trainee to define his or her own opinion.
Used in this way, discussion may be more effective than lectures in motivating learners. Trainees can see that some importance is attached to their contributions.

Discussion may also be used, after a lecture or demonstration, to help trainees apply what they have learnt. The instructor can ask questions that help trainees relate concepts and principles to contexts that are familiar to the trainees or in which they will eventually be needed. For example, after a lecture on "types of timber joints", the instructor may lead a discussion directing the trainees' attention to the places or pieces of furniture where each type can be found, and the reasons for using one type rather than another. Used in this way, discussion contributes to the transfer of learning.

The discussion method also provides an opportunity to monitor trainees' learning. The answers trainees supply, and the questions they ask, reveal the extent and quality of the learning taking place. Instructors can use this information to repeat or modify an explanation to improve learning. They can also give feedback to trainees, thus helping to reinforce the learning that has occurred. Discussion used in this way should follow other methods of classroom instruction, such as lectures, demonstrations or practice sessions.
Physical punishment — from hitting children with sticks to making them kneel to slapping them — occurs every day in classrooms throughout the world. Yet we know experiences of violence at school are linked to lower attendance and academic achievement as well as higher drop-out rates. In turn, lower educational attainment rates among girls can have effects across generations, leading to a weakened household economy, worse health, and higher fertility rates. “Now I have the fear of learning.” — Congolese student (after experiencing corporal punishment in school) Despite the detrimental impacts and staggering rates of corporal punishment in schools, little is known about how to shift the harmful attitudes and behaviors that uphold these practices, particularly in crisis-affected contexts. Even when interventions are proven effective — for example, the Good School Toolkit created by Raising Voices in Uganda reduced the risk of physical violence by school staff against children by 42% — we often don’t know which part of the program contributed to the improvements. In other words, there is virtually no evidence on effective ingredients and approaches to reduce physical punishment in schools in humanitarian settings. The Behavioral Insights Team (BIT) and the International Rescue Committee (IRC) are therefore exploring whether applying insights from the behavioral sciences — or the study of how people make decisions and why — can shed some light on this issue. So far, the results are promising. The first in our series of iterative trials has shown which messages are most effective at shifting teachers’ attitudes and beliefs about corporal punishment. Next, we will build on this learning to design a program aimed at improving teachers’ self-regulation, wellbeing, classroom management, and use of positive discipline strategies rather than physical punishments. In this post we outline these findings and next steps.

Why are teachers using corporal punishment?

Our work takes place in Nyarugusu Refugee Camp in Tanzania — the third largest refugee camp in the world and home to nearly 140,000 refugees from neighboring Burundi and the Democratic Republic of the Congo. Through interviews, group discussions, and observations, we learned that many teachers living in Nyarugusu viewed physical punishment as a way to prepare students for adulthood, teach them to respect their elders, and guide them to a better future. “[Harsh discipline] helps children change and become better students because they are afraid of doing it [misbehaving] again.” — Congolese teacher

What can we do about it?

A behaviorally-informed approach would typically tweak the environment in which people make decisions to encourage a change in their behavior. As people begin to change their ways, they experience cognitive dissonance (a discrepancy between their actions and beliefs) that prompts them to justify their new behavior. In this process, people tend to adjust their old beliefs. In this case, we faced teachers who seemed reluctant to change their views or their behavior. So we designed a randomized controlled trial to explore whether leveraging behavioral insights could help reduce favorable attitudes towards corporal punishment. Our trial compared three approaches: one strategy often used in the violence-reduction field (a rights-based approach) and two approaches drawing on behavioral insights.
Both of the behaviorally-informed groups were also exposed to messaging promoting a growth mindset — a belief that people’s traits, such as intelligence and abilities, can change with effort — and asked to complete an exercise contrasting their new understanding of corporal punishment with their self-image (as protectors of children, for example). Before engaging with this content, teachers in the empathy-building and clinical-evidence groups were also asked to reflect on their values and identity (by selecting two to three values from a pre-selected list and writing about how they exemplified these values in their lives). Previous studies have shown that such simple values-affirmation exercises can actually make people more open to accepting new information and to changing their views, while boosting their self-efficacy — i.e., confidence in their abilities to accomplish tasks. To assess the effectiveness of these three approaches (rights-based, empathy, and clinical messaging), we offered teachers in each group the option of signing up to receive more information about how to make their classrooms safe. We also asked teachers some questions to gauge their attitudes towards the use of corporal punishment in schools. To our surprise, on average, none of the approaches was more effective than the others at encouraging sign-ups to receive additional information. We did find, though, that speaking about the rights of children and the rules that protect them encouraged visible signs of compliance (i.e., enrollment in a program) among certain vulnerable populations. This is potentially due to the fact that a more institutional or familiar message — focused on rights, laws, and expected conduct — may prompt more people to signal that they are in compliance with such rules. This strategy of highlighting rules and rights, however, was not effective on the key outcome of reducing favorable attitudes towards the use of violence against children. Instead, empathy-building exercises that asked teachers to take the perspective of children were most effective at shifting teachers’ opinions, from supporting the use of corporal punishment in schools to disagreeing with it. In fact, engaging with the empathy-building content reduced teachers’ level of agreement with corporal punishment by 31% (on a survey measuring values) and decreased the number of situations in which teachers think that hitting children is acceptable by 26%. Sharing clinical evidence on the negative effects of violence against children was also more effective than the rules-and-rights-based approach at shifting favorable attitudes towards corporal punishment, albeit to a lesser extent than empathy-building. Figure 1: Effect of behaviorally-informed messages on teachers’ attitudes towards using corporal punishment (measured with values-based and scenario-based surveys) Lastly, reflecting on values and identity increased teachers’ sense of self-efficacy by nearly 10%, which evidence suggests could yield a number of positive outcomes for children, including improved learning outcomes. The IRC and BIT will build on the learning from this study to design a light-touch, tailored program to prevent and reduce the use of corporal punishment in Nyarugusu, targeting behaviors as a starting point.
We will invite teachers to participate in self-guided group sessions inspired by cognitive behavioral therapy — an approach that has been effectively applied to many problems, from reducing destructive behaviors to improving mental health — to help them challenge patterns of thinking and behavior related to using violence. As part of these sessions, we will provide teachers with alternatives to corporal punishment, behaviorally-informed tools — such as planning exercises and timely reminders — and the social support they need to form new habits. Based on our findings, this program will also feature empathy-building exercises to increase teachers’ willingness to change their disciplinary methods. Insights from this trial also highlight the importance of rigorously testing different approaches to learn what’s most effective. In this case, we learned that discussing rights and norms may prompt people to signal compliance (with those rules and norms); so this may be an effective way to encourage one-off actions — like signing up for a program. Once people’s attention is engaged, however, building empathy or even speaking about the potential damage of violence against children may be better strategies for starting to generate the social change needed to protect children in schools. This experience reinforces our commitment to evaluate the impact of programs as they are being designed, ensuring we craft solutions based on evidence of what works to prevent and reduce violence against children in schools. Originally posted by the IRC on Medium
BECAUSE it takes 10,000 years to create a soil but only 10 years to destroy it

Soils are critical for life, yet are vulnerable to pollution and unsustainable exploitation.
- Soils store 10 billion tonnes of the UK’s terrestrial carbon and play an important role in modulating the greenhouse gas cycles which control our climate
- Soils provide the nutrients and water to grow our food and they regulate floods and droughts
- Biologically, soil organisms recycle nutrients, clean our waste and water and provide a biodiverse resource for medical, industrial and agricultural economies
- The diversity, and often conflicting services, provided by soils demands an integrated, multidisciplinary approach to their understanding and management

Soils and changing climate

Climate change research is established in CEH science strategy. Understanding how soils respond to a changing climate is of fundamental importance.
- Quantifying changes in soil structure and function in response to climate change and climate extremes
- Modelling impacts of extremes of drought or high temperature on soils
- Improving understanding of the soil processes involved in land surface-atmosphere feedbacks

Natural capital and ecosystem services

Soil contributes to the provision of a wide range of ecosystem services. The area of natural capital and ecosystem services is still an emerging science.
- Determining landscape-scale linkages between soil and the provision of ecosystem services such as climate regulation, water filtration and storage, and the maintenance of biodiversity
- Delivering ecosystem models incorporating current knowledge of soils to improve predictions for change in biodiversity and ecosystem functions
- Framework development - mapping, quantifying and valuing the provision of services or functions

Soil contamination, risk assessment and remediation

The UK has a legacy of contamination that needs to be addressed.
- Monitoring large-scale inputs of potentially hazardous contaminants and pathogens (e.g. sewage sludge, manures, industrial wastes) on soils and determining their retention, transport and potential impact on human health and food production
- Producing UK risk maps for soil contamination by metals, soil acidification and nitrogen enrichment, including sensitivity and vulnerability analysis
- Identifying indicators and predicting recovery time from nitrogen enrichment
- Developing new physiological and functional biomarkers, metagenomic and toxicogenomic tools to quantify pollutant impacts on soil communities to support sustainable soil management

Managing land and water to protect the soil resource

Water is vital. We must ensure enough for consumption and ecological requirements, while too much leads to flooding.
- Measuring how changing catchment/riparian land management affects the condition of freshwater ecosystems and the function of soils
- Assessing available soil and water resources in a changing world based on long-term scenarios of climate, land use and demographic change
- Quantifying impacts of changing urban, peri-urban and rural land use (including energy crops) on soil function and services, e.g. water storage and reducing flood risk
- Identifying and quantifying sources, fluxes and pathways of water, chemicals and sediments including runoff and leaching from soils

Biodiversity, distribution and importance of soil organisms

Surprisingly, little is known about the biogeography and vulnerability of soil organisms, particularly microbes.
- Mapping the distribution and activity of soil organisms
- Identifying key functional groups, and the range of taxa, essential for maintaining ecosystem services
- Developing tools to enable prediction of below-ground biodiversity, e.g. molecular taxonomy
- Quantifying thresholds and predicting impact of above-ground perturbation on soil food webs and ecosystem function
- CEH delivers and coordinates the UK Countryside Survey, including the UK Land Cover Map, and coordinates the UK Environmental Change Network that provides detailed ground-based ecosystem assessments of the stock and change in our soils including stored carbon, organisms and pollutants
- CEH develops acclaimed models including the Joint UK Land Environment Simulator (JULES) that links soil processes such as soil moisture with atmospheric processes and climate models
- CEH is commissioned by DECC to report annually on inventories and projections of UK greenhouse gas emissions by sources and removals by sinks due to land use, land-use change and forestry
- CEH led the soil section of Defra’s Review of Transboundary Air Pollution (RoTAP). The review focuses on the main chemicals causing acid deposition, eutrophication, ground-level ozone and heavy metal pollution in the UK and their impact on soils and their function
- CEH published the first national survey of the distribution and diversity of soil meso- and microbiota across our landscape
The extreme weather patterns of recent weeks will become worse as ice sheets at both poles continue to melt, a new study published on Thursday says. The weather has left parts of Europe and North America covered in snow and ice while wildfires raged in a blistering hot Australia. “We will start to see more of this recent extreme weather, both hot and cold — with incredibly disruptive effects for agriculture, infrastructure, and human life itself. This is not accounted for in current global climate policies,” Nick Golledge from Victoria University of Wellington’s Antarctic Research Centre said in a statement. The research, published in the journal Nature, was led by Golledge and involved scientists from Canada, Britain, Germany and the US. It used climate models to simulate what might happen when water from melting ice sheets in Greenland and Antarctica enters Earth’s oceans. The model predictions show that in some areas of the world, ocean changes would lead to more extreme weather events and greater year-to-year variation in temperatures. In spite of the cold snap in the US, overall temperatures are warming, and under current policy settings the Earth’s temperature would increase by 3 to 4 degrees Celsius by 2100, Golledge says. Significant amounts of meltwater from both poles would cause disruption to ocean currents and change climate around the world, he added. The study is the first to use highly detailed models of both the Antarctic and Greenland ice sheets along with observations of recent ice sheet changes from satellites, which allows more accurate predictions, Golledge says.
A concussion is a brain injury resulting from a blow to the head. It’s not the kind of injury you can see on a CT scan or MRI — there are no broken bones and no squashed or visibly damaged brain. But nonetheless, the brain is damaged. Symptoms tell you immediately after a concussion that the brain has been affected. Sometimes, a person is knocked out cold, but a concussion can occur without unconsciousness. Milder symptoms can include disorientation, confusion, and problems with memory and balance. With time and rest, these symptoms will usually improve, especially after a first concussion. But sometimes concussions can cause real, lasting brain damage. After a concussion, athletes (both professional and student) can suffer from poor attention, headaches, memory problems, and depression — symptoms that may or may not get better with time. Unfortunately, young athletes may be more at risk than the pros. Young brains are still developing, and are more likely to be injured. There’s also some genetic variability — some people are more resilient than others to the effects of concussions. Repeated concussions can be dangerous to anyone, and a “second hit” after a concussion that hasn’t completely healed can be deadly. As I tell the teenagers: “Protect your brain. You may need to use it later.” What can parents and coaches do to help keep their kids safe?
- Provide good training so young athletes know how to play safely. Support coaches who teach student athletes well, and take potential brain injuries seriously.
- Make sure that athletes have good protective equipment, including helmets and mouth guards. These don’t prevent all (or even most) concussions, but using them consistently and correctly is still important.
- School systems should have mandatory, science-based concussion management systems, developed in accordance with national guidelines.
- Officials and referees need to call fouls, and discontinue play when it’s dangerous. Players who put themselves or others at risk should be sent off the field without hesitation.
- Coaches on the sidelines need to look for even subtle signs of concussion in their players, and pull them out of the game if there are any signs at all. When in doubt, players should sit out.
- Players themselves need to know that they should never tough it out — any “dinger” needs to be reported, even if that means they’ll be pulled from the game. Brains are far more important than scores.
- If your child does have a concussion, be sure to follow the guidance of his or her physician. A gradual return to sports should not begin until all signs and symptoms of concussion have resolved. And if symptoms occur with activity, you must back off again.
- If your child has had more than one concussion, or a concussion with prolonged symptoms, consider working with a neurologist to ensure that there’s no lasting damage.
Roy Benaroch is a pediatrician who blogs at The Pediatric Insider. He is also the author of Solving Health and Behavioral Problems from Birth through Preschool: A Parent’s Guide and A Guide to Getting the Best Health Care for Your Child.
What is Leukemia?

Leukemia is a type of cancer that starts in the blood-forming tissue, such as the bone marrow, and causes large numbers of blood cells to be produced and enter the blood stream very quickly. Leukemia usually starts in the white blood cells; the bone marrow starts to make a lot of abnormal white blood cells, called leukemia cells. Your white blood cells are potent infection fighters — they normally grow and divide in an orderly way, as your body needs them. But in people with leukemia, the bone marrow produces abnormal white blood cells, which don't function properly. Leukemia cells can crowd out the normal blood cells and can lead to anemia, bleeding, and infections; also, leukemia can spread to the lymph nodes or other organs and cause swelling or pain. (Sources 1, 2, 3, 6)

What are white blood cells?

White blood cells, also known as leukocytes, help fight infections and, in general, help the immune system. There are two basic types of leukocytes: granulocytes, which possess granules, and agranulocytes, which lack granules. White blood cells are formed from undifferentiated stem cells called hematopoietic stem cells. The hematopoietic stem cells are self-generating and renew themselves throughout your life. They produce extraordinary quantities of blood cells. White blood cells are your primary defense against infection and tissue damage. They are called effector cells and not only kill unwanted organisms, but also act like scavengers to get rid of damaged cells. The leukocytes (white blood cells) get around by what is called ameboid movement and can penetrate tissue to control problems and then later return to the blood stream. (Source 12)

What is Bone Marrow?

Bone marrow is the soft, spongy, inner part of bones, such as your hip and thigh bones. All of the different types of blood cells are made in the bone marrow. Bone marrow includes blood-forming cells, fat cells and tissues that aid the growth of blood cells. It is the place where new blood cells are produced. It contains immature cells, called stem cells. The stem cells can develop into the red blood cells that carry oxygen through your body, the white blood cells that fight infections, and the platelets that help with blood clotting. When you are healthy, your bone marrow makes: white blood cells, which help your body fight infection; red blood cells, which carry oxygen to all parts of your body; and platelets, which help your blood clot. When you have leukemia, the bone marrow starts to make a lot of abnormal white blood cells, called leukemia cells. (Sources 1, 6, 9)

What is the lymphatic system?

The lymphatic system is an extensive drainage network that helps keep bodily fluid levels in balance and defends the body against infections. (Source 10)

How do you get/prevent leukemia?

No one knows the cause of leukemia, but there are a few risk factors that can increase your odds of getting it:
- previous cancer treatments
- genetic disorders (such as Down syndrome)
- smoking
- radiation
- drugs with alkylating agents (common in chemotherapy)
- exposure to chemicals
- a family history of leukemia
(Sources 1, 2, 6)

How do you classify different types of leukemia?

Doctors classify leukemia based on its speed of progression and the type of cells involved. There are several different types of leukemia. In general, leukemia is grouped by how fast it gets worse and what kind of white blood cell it affects. It may be acute or chronic. Acute leukemia gets worse very fast and may make you feel sick right away.
Chronic leukemia gets worse slowly and may not cause symptoms for years. It may be lymphocytic or myelogenous. Lymphocytic (or lymphoblastic) leukemia affects white blood cells called lymphocytes. Myelogenous leukemia affects white blood cells called myelocytes.
- Chronic Lymphocytic Leukemia most often occurs in people older than age 55 and almost never in children.
- Chronic Myeloid Leukemia affects mainly adults.
- Acute Lymphocytic Leukemia is the most common type of leukemia in young children, but may also affect adults.
- Acute Myeloid Leukemia occurs in both adults and children.
(Sources 1, 2, 6)

8. What are symptoms of leukemia?
Symptoms may depend on what type of leukemia you have, but common symptoms include:
- Fever and night sweats
- Persistent fatigue or weakness
- Headaches
- Bruising or bleeding easily (bleeding gums, purplish patches in the skin, or tiny red spots under the skin)
- Bone or joint pain
- Swollen lymph nodes in the armpit, neck, or groin, or a swollen spleen or liver
- A swollen or painful belly from an enlarged spleen
- Getting a lot of infections
- Losing weight and not feeling hungry
- Tiny red spots in your skin (petechiae)
(Sources 1, 2, 6)

9. How does your doctor know if you have leukemia?
Make an appointment with your doctor if you have any persistent signs or symptoms that worry you. Leukemia symptoms are often vague and nonspecific. You may overlook early leukemia symptoms because they can resemble symptoms of the flu and other common illnesses. Rarely, leukemia is discovered during blood tests for some other condition. To find out if you have leukemia, a doctor will:
- Ask questions about your past health and symptoms.
- Do a physical exam. The doctor will look for swollen lymph nodes and check to see if your spleen or liver is enlarged.
- Order blood tests. Leukemia causes a high level of white blood cells and low levels of other types of blood cells.
If your blood tests are not normal, the doctor may want to do a bone marrow biopsy. This test lets the doctor look at cells from inside your bone. It can give key information about what type of leukemia it is, so you can get the right treatment. (Sources 1, 2)

10. How do you treat leukemia?
What type of treatment you need will depend on many things, including what kind of leukemia you have, how far along it is, and your age and overall health. If you have acute leukemia, you will need quick treatment to stop the rapid growth of leukemia cells. In many cases, treatment makes acute leukemia go into remission. Some doctors prefer the term "remission" to "cure," because there is a chance the cancer could come back. Chronic leukemia can rarely be cured, but treatment can help control the disease. If you have chronic lymphocytic leukemia, you may not need to be treated until you have symptoms. But chronic myelogenous leukemia will probably be treated right away. (Sources 1, 2)

11. Common treatments used to fight leukemia:
- Chemotherapy
- Biological therapy
- Targeted therapy
- Radiation therapy
- Stem cell transplant
- Clinical trials
(Sources 1, 2)

12. Chemotherapy
Chemotherapy is the major form of treatment for leukemia. This drug treatment uses chemicals to kill leukemia cells. Depending on the type of leukemia you have, you may receive a single drug or a combination of drugs.
These drugs may come in pill form, or they may be injected directly into a vein. (Sources 1, 2)

13. Biological therapy
Biological therapy works by helping your immune system recognize and attack leukemia cells. (Sources 1, 2)

14. Targeted therapy
Targeted therapy uses drugs that attack specific vulnerabilities within your cancer cells. For example, the drug imatinib (Gleevec) stops the action of a protein within the leukemia cells of people with chronic myelogenous leukemia. This can help control the disease. (Sources 1, 2)

15. Radiation therapy
Radiation therapy uses X-rays or other high-energy beams to damage leukemia cells and stop their growth. During radiation therapy, you lie on a table while a large machine moves around you, directing the radiation to precise points on your body. You may receive radiation in one specific area of your body where there is a collection of leukemia cells, or you may receive radiation over your whole body. Radiation therapy may also be used to prepare for a stem cell transplant. (Sources 1, 2)

16. Stem cell transplant
A stem cell transplant is a procedure to replace your diseased bone marrow with healthy bone marrow. Before a stem cell transplant, you receive high doses of chemotherapy or radiation therapy to destroy your diseased bone marrow. Then you receive an infusion of blood-forming stem cells that help rebuild your bone marrow. You may receive stem cells from a donor, or in some cases you may be able to use your own stem cells. A stem cell transplant is very similar to a bone marrow transplant. (Sources 1, 2)

17. Clinical trials
For some people, clinical trials are a treatment option. Clinical trials are research projects to test new medicines and other treatments. People with leukemia often take part in these studies. (Sources 1, 2)

18. Jonny's Diagnosis
My brother Jonny was diagnosed with Acute Lymphocytic Leukemia on July 3rd, 1995.
"What were Jonny's symptoms that made you take him to the doctor?"
"To be completely honest, I was positive that he had an ear infection. We had just moved from Brooklyn to New Providence about a month earlier and we didn't even have a pediatrician in New Jersey. It was Friday, June 30 (my wedding anniversary!) and I decided to take Jonny to the doctor because he had been running a low-grade fever for two days and seemed a little out of sorts and paler than usual. We were going into the 4th of July weekend and I felt that if I didn't get someone to see him that afternoon, he probably would not have been seen by anyone until the middle of the following week, and my thought was that he probably just needed an antibiotic." – Jamie D'Amico (mother)

19. Jonny was taken to New Providence Pediatrics
The doctor was, luckily, very familiar with pediatric cancer. She examined Jonny and found no sign of infection, but because he was so pale, she suspected he might be anemic. She ran a simple CBC (complete blood count) test in her office.

20. Jonny's CBC results
- White blood cell count: 22,000 (normal range 5,000–10,000)
- Hemoglobin: 4.9 (normal range 12–16)
- Platelets: 17,000 (normal range 150,000)

21. Because of the drastic abnormalities in Jonny's CBC results, the pediatrician contacted Dr. Steven Halpern, Pediatric Hematologist/Oncologist at Overlook Hospital, to let him know that Jonny was on his way over to the hospital to confirm that the CBC results were accurate. After 45 minutes, we were informed of the news: Jonny had cancer.
While it was fairly certain Jonny had leukemia, the type could not be determined until he had a bone marrow aspiration, in which a pathological report of the bone marrow is done. They also had to do a spinal tap, in which fluid is removed from part of the spine to check for cancer cells. The spinal tap would determine whether Jonny also needed radiation therapy as part of his treatment; luckily, he didn't. Jonny was diagnosed on July 3rd, 1995 with Acute Lymphocytic Leukemia; he was 3 years and 8 months old. Thankfully, this is the most common and most treatable cancer in children.

22. Jonny's Treatment
Jonny was given a specific treatment that was administered over years. There were 4 phases of the treatment:
- Induction
- Consolidation
- Delayed Intensification
- Maintenance

23. One of the choices given to my family by the doctor was to have Jonny put into a randomized clinical trial, where he would receive all of the same medication, but some of it might be administered differently. For example, 6M was a medication many patients took in pill form; Jonny, however, had it administered over a 10-hour period through an IV. My parents opted to have Jonny take part in the clinical trial. Within the first few days, Jonny was also given transfusions of platelets and red blood cells, and had a portacath surgically implanted to administer his medication.

25. (Most) Other Medication:
- Vincristine
- Decadron
- Colace
- Zantac
- Bactrim
- Nystatin
- Allopurinol
- Cytarabine (Ara-C)
- Doxorubicin
- Cyclophosphamide (Cytoxan)
- Tioguanine
Given every day unless otherwise noted.

26. Response to the Medication
Jonny responded well to the medication. The change in his blood counts was almost immediate. Unfortunately, he developed infections and other problems that forced our family to put his treatment on hold on numerous occasions.

27. His treatment started on July 5th, 1995. His treatment ended on August 28th, 1998.

28. Cancer Free, Now What?
Just like everyone else, Jonny has to eat right and exercise to stay healthy; unlike everyone else, there are a few things he has to do now that he is cancer free. Per his doctor, recommendations for follow-up are as follows:
- History and physical yearly
- CBC, chemistry profile, and urinalysis yearly
- Echocardiogram every 5 years
- DEXA scan (measures bone density) beginning at age 30

29. Jonny on the first day he was taken to the doctor.

30. This is Jonny; you can clearly see he has gained weight and had the portacath put in.

31. These photos are from when Jonny was first in the hospital.

32. Compared to the previous photo of Jonny in a hospital bed, he looks much heavier here because of his steroids.

33. Jonny was put on steroids on day 1, before they were even sure what type of cancer he had. When on heavy steroids, it is common to be very hungry and eat a lot; because of this, Jonny gained a lot of weight. Jonny was able to eat an entire pizza for lunch and then ask 5 minutes later what was for dinner.

34. Over time, Jonny's cancer went into remission and then was gone. He is now healthier than ever! I am so thankful that Jonny was able to get better. Because of the advances in medication and treatment facilities, I know my brother. I was so young when he was diagnosed that I would never have known him had things been different. Jonny is one of my best friends, and I can't imagine a world without him. I can thank the hospitals and doctors who kept him alive for giving me my brother back.
Neuroplasticity: the brain at work at any age – Dr. Simone

Our brains are constantly being shaped by experience. Most of us have very different behaviors and thoughts today than we did 10 or even 5 years ago. This change is related to neuroplasticity: modifications in brain structure and organization as we experience, learn, and adapt. Neuroplasticity is also called brain plasticity or brain malleability. Connections within the brain are constantly becoming stronger or weaker, depending on what is being used. Younger people change easily because their brains are very plastic; however, neuroplasticity is at work throughout life. When we learn something new, new connections are created between our neurons. We rewire our brains to adapt to new circumstances. This happens on a daily basis, but it is also something that we can harness and stimulate. Therefore, unlike computers, which are built to certain specifications and receive software updates periodically, our brains can receive hardware updates in addition to software updates. Different pathways form and fall dormant, are created and are discarded, according to our experiences and needs. This property of the brain may involve modifications in overall cognitive strategies to successfully cope with new challenges (e.g., attention, behavioral compensation), recruitment of new or different neural networks, or changes in the strength of connections or in the specific brain areas in charge of carrying out a particular task (e.g., movement, language, vision, hearing). At the cellular level, changes in membrane excitability, synaptic plasticity, and structural changes have been measured in vivo and in vitro. The study of neuroplasticity engages scientists from many different disciplines because of the profound implications it has for understanding the functional foundations of action and cognition in the healthy and lesioned brain.

Neuroplasticity in Children

Children's brains are constantly growing, developing, changing and adapting. Each new experience prompts a change in brain structure, function, or both. At birth, each neuron in an infant's brain has about 7,500 connections with other neurons; by the age of 2, the brain's neurons have more than double the number of connections in an average adult brain. These connections are slowly pruned away as the child grows up and starts forming their own unique patterns and connections, depending on what is used most frequently. These processes are stronger and more pronounced in young children, allowing them to recover from injury far more effectively than most adults. In children, profound cases of neuroplastic growth, recovery, and adaptation can be seen.

Importance of Neuroplasticity in Neurodevelopmental Disorders

Neurodevelopmental disorders are impairments of brain growth and development that affect motor, cognitive, language, learning, and behavioral development, with lifelong consequences. Infants at high risk for cerebral palsy and other neurodevelopmental disorders can be identified early, ideally in the first weeks or months of life, through careful clinical and neurological evaluation combined with specific imaging examinations, and genetic and metabolic tests when necessary. As recent scientific evidence indicates, gene abnormalities or congenital brain lesions are not the sole determinants of the neurodevelopmental outcome of affected infants.
In fact, environment and experience may, through neuroplasticity, modify brain development and improve the outcome in infants at risk for neurodevelopmental disorders. Early identification of infants at risk for cerebral palsy is a major prerequisite for effective intervention programmes. It ensures that interventions which aim to positively modify the natural history of this condition can start in the first weeks or months of life, when the brain demonstrates the greatest plasticity and potential to alter the course of development. The goal of early intervention is to prevent or minimize motor, cognitive, and emotional impairments in young children disadvantaged by biological or environmental risk factors. As stated by the World Health Organization, identification of the infant at risk for a neurodevelopmental disorder is a crucial starting point for establishing a close relationship between parents and health care providers and for providing early intervention with long-lasting positive results.

Human neuroplasticity is one of the most important medical discoveries of the past 50 years. It offers new hope to people with a wide variety of neurological problems, and it even offers hope of improving our quality of life, since we can live better by rewiring our brains to establish habits that contribute to our health, success, and well-being. Take advantage of the brain's plasticity to open new opportunities for yourself, and to allow your child to develop well and shine throughout life. A few methods to enhance or boost neuroplasticity include:

Physical exercise: Cardiovascular exercise boosts the oxygen supply to the brain and increases brain volume. The World Health Organization recommends that children and youth aged 5–17 accumulate at least 60 minutes of moderate to vigorous-intensity physical activity daily.

Reducing stress: Stress is a silent killer, and it also diminishes neuroplasticity. If it is difficult to manage the sources of stress in your life, you can change how you respond to it. An excellent way to relax is to surround yourself with nature or music, or to travel. Yoga and meditation can also help you control your stress responses.

Sleeping: Sleep improves learning and memory through the growth of connections between neurons, and helps transfer information across cells.

Learning a language or a musical instrument: This may increase connectivity between brain regions and help form new neural networks.

Traveling: Travel exposes your brain to novel stimuli and new environments, opening up new pathways and activity in the brain.

Non-dominant hand exercises: This type of activity can form new neural pathways and strengthen the connectivity between neurons.

Reading a novel: Reading increases and enhances connectivity in the brain.

Expanding your vocabulary: This activates the visual and auditory processes as well as memory processing.

Creating craftwork or artwork: Art-making enhances connectivity of the brain, which can boost introspection, memory, empathy, attention, and focus.

Dancing: Dance is an excellent way to be active and creative; it reduces the risk of Alzheimer's disease and increases neural connectivity.
‘What is Synaesthesia?’ Synaesthesia (plural "synaesthesiae") comes from the Greek "syn", meaning "union", and "aesthesis", meaning "sensation"; a person who experiences synaesthesia is referred to as a ‘synaesthete’, and their experiences are ‘synaesthetic’, or perceived ‘synaesthetically’. Synaesthesia is a "neurological condition" whereby an external experience or stimulation through one sense is spontaneously associated with an internal experience or perception through a different sense; that is, when one of the five senses is stimulated, two or more different senses respond, creating a multidimensional sensation. For example: most commonly, a synaesthete with grapheme-colour synaesthesia will associate specific colours with letters and numbers; likewise, and more rarely, a synaesthete with lexical-gustatory synaesthesia will experience tastes in the mouth when reading text. Although commonly referred to as a neurological condition, synaesthesia is not listed in either the "DSM-IV" (Diagnostic and Statistical Manual of Mental Disorders, 4th Edition) or the "ICD" (International Classification of Diseases), since synaesthesia does not, in general, interfere with normal daily functioning. Indeed, most synaesthetes report that their experiences are neutral or pleasant, and may even enhance their ability to achieve certain tasks, such as spelling and creating works of art. Rather like colour blindness or perfect pitch, synaesthesia is a difference in perceptual experience, and is therefore referred to as a neurological condition in order to reflect the brain basis of this perceptual difference. To date, no research has demonstrated a consistent association between synaesthetic experiences and other neurological or psychiatric conditions, although this is an active area of research. Neurologist Richard Cytowic identifies the following diagnostic criteria of synaesthesia:
1. Synaesthesia is involuntary and automatic.
2. Synaesthetic images are spatially extended (they often have a definite location).
3. Synaesthetic perceptions are consistent and generic (i.e. simple rather than imagistic).
4. Synaesthesia is highly memorable.
5. Synaesthesia is laden with affect.
Genuine synaesthesia is spontaneous, specific, consistent and durable. Those tested will report a perfect repetition of their experiences when tested again months later, often scrupulously detailed, particularly with regard to colour, where sometimes indefinable tones or patterns may occur. For example: the colour baby pink as seen by the non-synaesthete may appear to the synaesthete with tints of orange, speckled with grey, arranged like a spider’s web, and varying in intensity from centre to edge, as opposed to a single block of colour of a definite composition. To the synaesthete, the experience of having synaesthesia is simply a part of their individuality and personal identity, perhaps almost like a sixth sense; in other words, synaesthesia forms an important part of their daily experience, just as seeing or hearing "normally" is to the rest of the population, and it cannot be switched on or off as and when desired. Their hypersensitivity is a quality unique to each individual – no two synaesthetes will experience the same form of synaesthesia with the same intensity, nor will they report the same experiences of sensory activation from a single stimulus. Synaesthete Pat Duffy explains: “Other people don’t see what we see and they’re not convinced that we see it ourselves.
But what each of us sees is the reality we know. I am no more at liberty to change the white colour of the letter ‘O’ than I am to change its circular shape: for me, the one is as much an attribute of the letter as the other.” (2001). Until recent years, and prior to detailed research becoming publicized, synaesthesia was regarded by society as a superhuman ability, and in some cases a taboo subject. The sheer suggestion that words could have flavour and objects have personality would appear to the non-synaesthete as incredible nonsense, and could consequently isolate the synaesthete from social acceptance, with overwhelming feelings of obscurity and even abnormality. Likewise, a genuine synaesthete will find it equally incomprehensible that not everyone associates, for instance, colour with letters and numbers, as they may do. As a consequence, synaesthetes have felt compelled to withdraw and contain their experiences within themselves and their private space, and to keep their experiences hidden from a disbelieving society. Neurologist Richard E. Cytowic explains: “Synesthesia is “abnormal” only in being statistically rare. It is, in fact, a normal brain process that is prematurely displayed to consciousness in a minority of individuals. Despite keeping the experience private and hidden, it remains vivid and irrepressible, beyond any willful control.” (“Synesthesia: Phenomenology and Neuropsychology – a review of current knowledge”, 1995). The first report of synaesthesia dates to 1812, in Georg T. L. Sachs’s dissertation (Hochel & Milan, 2008): “Historiae naturalis duorum leucaetiopum: auctoris ipsius et sororis eius” (“Natural History of Two Albinos: the author himself and his sister”). The term “synaesthesia” was being used by the polymath philosopher Charles S. Peirce before 1866. However, while he used it in a sense similar to how it is used today, he did not apply it to actual medical cases. The first use in case reports may be from Mary W. Calkins, in 1894. Although synaesthesia was the topic of intensive scientific investigation during the late 1800s and early 1900s, it was largely abandoned in the mid-20th century, and has only recently been rediscovered by modern researchers. Psychological research has demonstrated that synaesthetic experiences can have measurable behavioural consequences, while functional neuro-imaging studies have identified differences in patterns of brain activation (Hubbard & Ramachandran 2005). Neurologist and author Oliver Sacks explains: “Twenty years ago, synesthesia – the automatic joining of two or more senses – was regarded by scientists (if at all) as a rare curiosity. We now must regard it as an essential, and fascinating, part of the human experience. It may well be the basis for human imagination and metaphor.” (2005). As many as one in 2,000 people experience natural synaesthesia (Baron-Cohen et al. 1996; Simner et al. 2006), and at the last report as many as 80 different forms had been recorded (Dr. Sean A. Day, April 2016), linking different senses or perceptions. Thus a synaesthete may associate texture with taste, smell with colour, and so on – any combination of the five senses is possible, and synaesthetes are reported to differ considerably in the intensity of the experience (cf. Dixon et al. 2004). Some forms of synaesthesia are more commonly reported than others – around one in 5,000 associate colours with letters; as few as one in 15,000 will associate taste with touch.
More rarely still, pain can cause taste or colour sensations for some people. (It should be noted that these statistics are not wholly reliable, and vary with reported incidence.) A small minority of synaesthetes experience multiple synaesthesia, where a single external stimulus will cause multiple internal sensory perceptions, providing them with an almost overwhelming sensory ‘identity’ for different concepts. For example: a synaesthete may experience both sound and colour when exposed to a particular texture; likewise, a synaesthete may experience both colour and tastes in the mouth when reading and writing. Psychologists and neuroscientists study synaesthesia not only for its inherent interest, but also for the insights it may give into cognitive and perceptual processes that occur in everyone, synaesthete and non-synaesthete alike. This phenomenon has been discussed by scientists for some three centuries, but only during the mid-to-late 20th century did it prompt more curiosity and thorough research. Although there are still psychologists who believe that synaesthesia does not exist as a spontaneous experience, others are recognising evidence that synaesthesia is an inbuilt neurological condition in its own right. Tests have shown increased blood flow to those parts of the brain which deal with, for instance, colour and sound perceptions, when activated by an external stimulus such as text or music. Scientists have reported synaesthesia occurring in individuals who have suffered a sensory loss during life, such as blindness or deafness; as a result of brain injury or stroke; through the use of certain drugs, such as LSD; and through neurological change, such as migraine and epilepsy. Synaesthesia is also believed to run strongly in families; since there is no evidence that it can be passed from father to son, it may be inherited as an X-linked dominant trait. Reports have shown this to be more apparent in left-handed females, although synaesthesia can be passed to either sex, and from generation to generation – “The regions of our DNA that wire some people to “see” sounds have been discovered. So far, only the general regions within chromosomes have been identified, rather than specific genes, but the work could eventually lead to a genetic test to diagnose the condition” (David Robson). Synaesthesia which arises from non-genetic events is referred to as “adventitious synaesthesia”, to distinguish it from the more common congenital forms. Adventitious synaesthesia, relating to drugs or stroke (but not blindness or deafness), apparently only involves sensory links between sound, vision, and touch. Physician Ross Quinn explains: “As a physician (who is married to a synesthete), I too think there may be a connection between cerebral vaso-dilation and intensification of synesthesia. Any fever increases metabolism and blood flow generally, and Lola correctly describes the mainstream explanation of migraine: an initial cerebral vaso-constriction, followed by overshoot with abnormal vaso-dilation / high blood flow. It seems logical to me that synesthesia would require more blood flow, as more parts of the brain become activated in synesthetes than in us “usual” people in response to everyday stimulation. Any part of the brain that functions more requires more blood flow.
Perhaps by increasing blood flow generally (e.g., fever, even anesthesia), or locally (e.g., migraine, or concussion, after which brain blood flow is often deranged), this could enhance, intensify, or even create temporarily new synesthesia. This speculation makes me wonder about another aspect of the psychedelics: they have serotonin receptor interactions, which are also involved in blood flow regulation.” (The Synaesthesia List, 2008). One of the crucial effects of synaesthesia is that it reportedly improves memory and recall (Cytowic 1995; Ward 2008), and synaesthetes appear to have exceptional photographic memory skills. For example: a synaesthete reading a book is more likely to remember a specific page number or the place on a page of a particular event; likewise, a synaesthete who has become blind is more likely to be able to visualize the layout of a familiar room. The synaesthetic experience gives additional associations for names, numbers, and sounds, which can provide a vivid link to the information, and the more forms of synaesthesia a person experiences, the better their memory and recall are likely to be, due to additional sensory receptors being activated at any one time. Although this is currently an active area of research, an experimental study of people with dyslexia who also have grapheme-colour synaesthesia showed that associating letters with a particular colour assisted them considerably in reading and spelling words, as the colour progression through a word remained consistent and appeared more memorable on the page than when they were asked to spell the word aloud. Synaesthete Liz Davies explains: “I’m sure you will get some similar responses, but I think synaesthesia helped me learn to spell as a child – I always found it very easy to remember how to spell even difficult words, by the patterns the colours made. When a word was spelt wrong, I could immediately tell because a colour stood out, or ‘didn’t go’ with the rest of the word. I don’t know if I can still attribute it to synaesthesia as I tend to suppress the colours these days, but I could still spot a spelling error a mile off, which has earned me the additional role of proofreader at work!” (UKSA newsletter, December 2007). Many memory-improving techniques recommended in self-help books use “artificial synaesthesia”, encouraging the individual to form vivid sensory associations with the information they wish to remember. For example: children’s alphabet shapes are each given a different colour, in order to help the child make the association between the colour and the letter as a long-term learning aid. For the genuine synaesthete, such techniques can simply be an encumbrance, cause confusion, and even appear irritating – the Russian novelist Vladimir Nabokov, as a child, complained to his mother that the colours on his alphabet blocks were “all wrong”. Synaesthetes perform in the superior range of the Wechsler Memory Scale (WMS, David Wechsler), whilst their maths and spatial-navigation abilities tend to suffer. Synaesthesia continues to be investigated by scientists and neurologists worldwide, particularly in the UK and USA. New aspects are being identified as advances in modern technology reveal new techniques for deeper testing, including, most recently, possible links between synaesthesia and emotion.
Evidently when m and m′ are negligible compared with M,

τ²/a³ = 4π²/GM,

which is one form of Kepler's third law.

PROBLEMS.

1. The gravitational acceleration at the surface of the earth is about 980 cm./sec.². Calculate the mass and the average density of the earth, taking 6.4 × 10⁸ cm. for the mean radius, and supposing it to attract as if all its mass were concentrated at its center.
2. The periods of revolution of the earth and of the moon are, roughly, 365¼ and 27⅓ days. Find the mass of the moon in tons. Take 6.0 × 10²⁷ gm. for the mass of the earth.
3. The periods of revolution of the earth and of the moon are 365¼ and 27⅓ days, respectively, and the semi-major axes of their orbits are, approximately, 9.5 × 10⁷ and 2.4 × 10⁵ miles. Find the ratio of the mass of the sun to that of the earth.
4. Taking the period of the moon to be 27⅓ days, and the radius of its orbit to be 3.85 × 10¹⁰ cm., show that the acceleration of the moon, due to the attraction of the earth, is equal to what would be expected from the gravitational law. Assume the gravitational acceleration at the surface of the earth, that is, at a point 6.4 × 10⁸ cm. away from the center, to be 980 cm./sec.².
5. Show that if the earth were suddenly stopped in its orbit it would fall into the sun in about 62.5 days.
6. Show that if a body is projected from the earth with a velocity of 7 miles per second it may leave the solar system.

GENERAL PROBLEMS.

1. Find the expression for the central force under which a particle describes the orbit rⁿ = aⁿ cos nθ, and consider the special cases when (a) n = ½, (c) n = 1, (e) n = 2.
2. A particle moves in a central field of force with a velocity which is inversely proportional to the distance from the center of the field. Show that the orbit is a logarithmic spiral.
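A quick numerical check of Problem 1, as a minimal sketch in CGS units (the modern value of the gravitational constant G is assumed, since the scanned page does not state one):

```python
import math

# Problem 1: mass and mean density of the earth from its surface gravity,
# treating the earth as attracting as if all its mass were at its center.
G = 6.674e-8   # gravitational constant, cm^3 g^-1 s^-2 (assumed modern value)
g = 980.0      # surface gravitational acceleration, cm/sec^2
R = 6.4e8      # mean radius of the earth, cm

M = g * R**2 / G                          # from g = G*M / R^2
rho = M / ((4.0 / 3.0) * math.pi * R**3)  # mean density = mass / volume

print(f"mass M ~ {M:.2e} g")               # ~6.0e27 gm, matching Problem 2's figure
print(f"mean density ~ {rho:.2f} g/cm^3")  # ~5.5 g/cm^3
```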
You can read an introductory post to Krijn’s series on liberal political philosophy here. Thomas Hobbes was an early modern English philosopher. In his treatise Of Liberty and Necessity, Hobbes defines individual liberty as follows: “Liberty is the absence of all impediments to action that are not contained in the nature and intrinsical quality of the agent.” From this definition, we can take two essential points about Hobbes’s understanding of liberty:
- Liberty (or freedom) is a quality that is attributed to an agent that performs an action.
- The action is free if the agent was not hindered from performing it by anything that was out of her control.
For example: imagine a situation in which the agent tries to perform an action, let’s say opening a door. If she is not prevented by anything from opening the door, she is free. If, however, the door is locked, she cannot open it, which means she is not free. So far, this seems like a clear and uncontroversial definition. However, this was not the only thing Hobbes believed about liberty. In his treatise, he also included the following statement: “I conceive that nothing takes beginning from itself, but from the action of some other immediate agent without itself. (…) So that whereas it is out of controversy that of voluntary actions the will is the necessary cause, and by this which is said the will is also caused by other things whereof it disposes not, it follows that voluntary actions have all of them necessary causes and therefore are necessitated.” What Hobbes basically says here is that even agents which act freely act the way they do necessarily. Hobbes believes that the world in which we live is ruled by the law of cause and effect. This means that effects must necessarily follow from the causes which precede them. Since (human) actions are also effects, they must also have causes which make them necessary. This, according to Hobbes, doesn’t apply only to people’s actions; it also applies to people’s thoughts. What we think about, and what we decide to do through our thinking, is a necessary result of the things that have caused us to think in that way. Whether this is true is very controversial. Most people would feel that if our actions are caused by something outside of ourselves, this means that we are not free. However, if you consider Hobbes’s definition of freedom again, you will see that it is not incompatible with his idea of necessary cause: in his idea of freedom, Hobbes only distinguishes between agents that can freely do what they want to do and agents that cannot. Because his definition of freedom only applies to doing, Hobbes can accept that people can be caused to want to do what they do by something outside of themselves. Agents are therefore free to do, but not free to decide what they want to do. As we shall see in my next post, Hobbes’s conception of individual liberty has some clear consequences for the way in which he thinks society should be organised. The quotes in this text are taken from page 38 of Hobbes and Bramhall on Liberty and Necessity (Cambridge University Press, 1999). Krijn van Eeden is a member of Liberal Youth (UK). He is originally from the Netherlands but is now studying for his Masters in Philosophy in Frankfurt.
Tracing numbers 1–50 worksheets

On this kindergarten math worksheet, kids trace the number 15, then write their own. Then they count carrots and record the information in a graph. Are you looking for ESL printable worksheets? Here you can find a good collection of worksheets, exercises, lesson plans, online games, and more. Here is a sheet designed to help preschoolers learn to write their numbers (Feb 8, 2007), along with printable alphabet tracing sheets. Tracing Numbers: permission is granted to reproduce this worksheet for non-commercial use. Number and Words (1–10) (One–Ten) (Trace and Match): download PDF. Count and Color Kittens: ordinal numbers, following directions. Teach children the numbers between 1 and 10. These printable pages are geared specifically towards preschool-age kids, and these worksheets will help you teach them. Find trace numbers 1–50 worksheets from thousands of teacher-approved worksheets by grade and subject. Quickly find worksheets that inspire student learning. Printable number tracing worksheet for preschoolers: print and let them practice tracing the numbers from 1 to 4. Preschool and kindergarten children can learn the number one by tracing, coloring and counting with this educational worksheet. Free preschool and kindergarten worksheets and printables, alphabet activities, coloring pages, graphics and anything to help children learn their ABCs. Worksheet Set 1 (Preschool Level): features acorns to count and one line of dotted numbers to trace (One to Five | Six to Ten). Free printable kindergarten and preschool math worksheets: a great resource for parents, teachers, and homeschoolers, covering the alphabet, numbers, counting, coloring, and activities.
2018-11-30 11:12:09 UTC

Stephen Hawking, "A Brief History of Time", Chapter 3: "Now imagine a source of light at a constant distance from us, such as a star, emitting waves of light at a constant wavelength. Obviously the wavelength of the waves we receive will be the same as the wavelength at which they are emitted (the gravitational field of the galaxy will not be large enough to have a significant effect). Suppose now that the source starts moving toward us. When the source emits the next wave crest it will be nearer to us, so the distance between wave crests will be smaller than when the star was stationary." http://www.fisica.net/relatividade/stephen_hawking_a_brief_history_of_time.pdf

Light pulses don't bunch up (the wavelength does not decrease): bunching up obviously violates the principle of relativity. Rather, the speed of light VARIES with the speed of the emitter, as posited by Newton's emission theory: "Emission theory, also called emitter theory or ballistic theory of light, was a competing theory for the special theory of relativity, explaining the results of the Michelson–Morley experiment of 1887. [...] The name most often associated with emission theory is Isaac Newton. In his corpuscular theory Newton visualized light "corpuscles" being thrown off from hot bodies at a nominal speed of c with respect to the emitting object, and obeying the usual laws of Newtonian mechanics, and we then expect light to be moving towards us with a speed that is offset by the speed of the distant emitter (c ± v)." https://en.wikipedia.org/wiki/Emission_theory

In future physics, the false axiom "The speed of light is invariable" will be replaced with the correct one: "The wavelength is invariable." This means that, in accordance with the formula (frequency) = (speed of light)/(wavelength), any registered change in frequency corresponds to a proportional change in the speed of light. In other words, the frequency, as measured by an observer (receiver), shifts because the speed of the light relative to him shifts.
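The arithmetic of the two pictures contrasted above can be laid side by side. A minimal sketch, assuming a source approaching the receiver at speed v (the numbers are illustrative only, not from any cited experiment):

```python
# Compare the received frequency under the two pictures contrasted above,
# for a source approaching the receiver at speed v (illustrative values).
c = 3.0e8        # speed of light relative to the source, m/s
v = 3.0e4        # speed of the source toward the receiver, m/s
lam = 500e-9     # emitted wavelength, m
f0 = c / lam     # emitted frequency, Hz

# Constant-c picture: the crests bunch up, so the received wavelength
# shrinks to (c - v)/f0 while the propagation speed stays c.
f_constant_c = c / ((c - v) / f0)

# Ballistic (c ± v) picture: the wavelength stays lam, but the light
# arrives at c + v, so f = (c + v)/lam.
f_ballistic = (c + v) / lam

# To first order in v/c both give f0 * (1 + v/c), i.e. the same
# measured frequency shift at low source speeds.
print(f_constant_c / f0, f_ballistic / f0)
```

Both pictures therefore predict essentially the same frequency shift at low source speeds; they differ over whether it is the wavelength or the arrival speed that changes.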
About air quality and health

This section provides background information about air quality, and how it affects our health.

On this page
Why is air quality important for health?
What is air pollution?
What is particulate matter (PM)?
Health effects of air pollution
Natural and human sources of air pollution
Weather conditions and topography affect air quality
Guidelines and standards for air quality in New Zealand
Monitoring air quality in New Zealand
What are airsheds?

Good air quality is fundamental to our health and wellbeing. We each breathe about 14,000 litres of air each day. Contaminants in outdoor air can adversely affect our health. Particulate matter in the air can contribute to heart (cardiovascular) and lung (respiratory) diseases, leading to hospital admissions and premature death. Outdoor air pollution can also cause cancer.

People more at risk from poor air quality include:
- young children
- older adults
- people with chronic health conditions, particularly cardiovascular or respiratory disease.

Air pollution is a complex mix of tiny particles and gases, including:
- particulate matter (such as PM10 and PM2.5)
- carbon monoxide
- nitrogen oxides
- sulphur oxides
- volatile organic compounds.

Particulate matter (PM) consists of small airborne particles, including solid matter and liquid droplets. These particles can't always be seen by the human eye. PM10 (particles with a diameter of less than 10 micrometres) is the major air pollutant monitored in New Zealand. These small particles can be breathed into the human lung, and are associated with health problems, particularly affecting the lungs and heart. Fine particulate matter (PM2.5) refers to particles with a diameter of less than 2.5 micrometres. Finer particles can reach further into the lungs than larger particles, and can cause more serious health problems.

Figure 1: Relative size of particulate matter

Air pollution can affect people's health, especially their heart and lungs, and can even lead to early death. Most of the health impacts from air pollution are associated with particulate matter. Particulate matter (PM10 and PM2.5) can reach far into people's lungs. Health effects include heart (cardiovascular) and lung (respiratory) disease, and early death. Outdoor air pollution (PM10 and PM2.5) can cause lung cancer, and is also linked to bladder cancer. Sulphur dioxide (SO2) and nitrogen dioxide (NO2) can cause health problems at certain concentrations, particularly respiratory symptoms. Carbon monoxide (CO) can cause a range of health symptoms. Lower levels of carbon monoxide can cause respiratory problems, headaches and poor concentration; higher levels can result in unconsciousness and death.

Some people are more at risk of poor health due to air pollution. Population groups most affected by air pollution include children, especially those with asthma; older adults; and people with pre-existing health conditions, particularly respiratory and heart conditions, and diabetes.

Air pollution can be produced from human activity or naturally. The main sources of air pollution in New Zealand are:
- wood and coal fires (for home heating)
- motor vehicles
- open burning
- natural sources.

Wood and coal fires produce particulate matter, carbon monoxide, nitrogen dioxide and other organic compounds. Read the latest statistics about wood and coal fires on the wood and coal fires webpage. Motor vehicles produce a range of gases and particles, including particulate matter, carbon monoxide, nitrogen dioxide and sulphur dioxide.
Air pollution from vehicles comes from vehicle exhaust and from brake and tyre wear. Diesel vehicles, older cars, and cars that are not well maintained tend to produce more emissions. Read the latest statistics on the motor vehicles webpage. Industrial sources of air pollution include major facilities like steel mills, chemical plants and coal-fired power plants. The most common pollutants from manufacturing, construction and electricity production activities are sulphur dioxide, PM10 and nitrogen oxides. Open burning (or outdoor burning) refers to burning combustible material outdoors. These materials can include household rubbish, garden clippings and agricultural waste. Natural sources of air pollution include windblown dust, pollen, volcanic ash and sea spray. Another impact of fires, vehicle emissions and industrial sources is the release of greenhouse gases (mainly carbon dioxide). Greenhouse gases contribute to climate change. Read more on the climate change webpage.

Weather and topography can influence air pollution. Air pollution levels tend to be worse during winter, particularly on cold, calm days. Weather conditions can affect the quantity, patterns and dispersal trends of air pollutants.
- On cold days, households may burn more wood and coal for home heating. Vehicles may also release more emissions due to 'cold starts'.
- Low wind speeds can prevent pollutants from dispersing.
- Low wind speeds and cold temperatures can cause temperature inversions, where a cold layer of air is trapped by a warmer layer of air above, trapping air pollution near the ground. Temperature inversions are more likely to occur in valley locations.
- Most urban air pollution occurs in winter and is worst during cold, calm conditions.
- In some cases (usually in warmer months), strong winds can lead to higher PM10 levels by raising dust, particularly during droughts.

Guidelines for annual average PM10 levels are set by the World Health Organization (WHO). The guidelines set a maximum annual average PM10 concentration of 20 μg/m³. This guideline provides a minimum level of protection against long-term health risks. However, there is no evidence of a safe threshold for PM10 below which there are no adverse health effects. New Zealand has its own guidelines (Ambient Air Quality Guidelines 2002), which are mostly drawn from the WHO. See the guideline values on the Ministry for the Environment's website. The New Zealand National Environmental Standards (NES) for Air Quality set standards for short-term levels of the following air pollutants:
- particulate matter (PM10)
- carbon monoxide (CO)
- nitrogen dioxide (NO2)
- sulphur dioxide (SO2)
- ground-level ozone (O3).
For more information, visit the Ministry for the Environment's National Environmental Standards for Air Quality webpage.

Air quality is measured at monitoring stations throughout New Zealand. Air quality data is collected by regional councils and unitary authorities, and reported to the Ministry for the Environment. In 2012, there were 54 monitoring sites for PM10, covering about 75 percent of the population. Some of these monitoring sites also monitored other air pollutants. Air quality may vary from year to year, depending on weather conditions. Colder winters may lead to households burning more wood and coal for home heating, and to higher levels of air pollution. Statistical analysis of several years of data is needed to determine long-term trends in air quality in an airshed.
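As a minimal sketch of how the guideline check described above works in practice, assume a list of daily PM10 readings in micrograms per cubic metre (the readings below are invented for illustration):

```python
# Compare an annual mean PM10 level against the WHO guideline of 20 ug/m^3.
WHO_ANNUAL_PM10 = 20.0  # ug/m^3, WHO annual-average guideline

# Hypothetical daily readings from one monitoring site (invented values).
daily_pm10 = [12.4, 18.9, 35.2, 22.1, 8.7, 15.0, 27.3]

annual_mean = sum(daily_pm10) / len(daily_pm10)
print(f"annual mean PM10: {annual_mean:.1f} ug/m^3")

if annual_mean > WHO_ANNUAL_PM10:
    print("Exceeds the WHO annual-average guideline.")
else:
    print("Within the WHO annual-average guideline.")
```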
An airshed is a legally designated (‘gazetted’) area where air quality is monitored. These areas are likely, or known, to have unacceptable levels of pollutants, or may require air-quality management. 1. World Health Organization. 2013. Review of evidence on health aspects of air pollution - REVIHAAP Project: Final technical report. Copenhagen: World Health Organization Regional Office for Europe. 2. Loomis D, Grosse Y, Lauby-Secretan B, El Ghissassi F, Bouvard V, Benbrahim-Tallaa L, et al. 2013. The carcinogenicity of outdoor air pollution. The Lancet Oncology 14(13): 1262-1263. doi: 10.1016/S1470-2045(13)70487-X 3. Ministry for the Environment and Statistics New Zealand. 2014. New Zealand's Environmental Reporting Series: 2014 Air domain report. Wellington: Ministry for the Environment. 4. Kuschel G, Metcalfe J, Wilton E, Guria J, Hales S, Rolfe K, et al. 2012. Updated Health and Air Pollution in New Zealand Study. Volume 1: Summary report. Prepared by Emission Impossible and others for Health Research Council of New Zealand, Ministry of Transport, Ministry for the Environment, and NZ Transport Agency. Available online: http://www.hapinz.org.nz/
The Cyrillic script family contains a large number of specially treated two-letter combinations, or digraphs, but few of these are used in Slavic languages. In a few alphabets, trigraphs and even the occasional tetragraph are used. In early Cyrillic, the digraphs ⟨оу⟩ and ⟨оѵ⟩ were used for /u/. As with the equivalent digraph in Greek, they were reduced to a typographic ligature, ⟨ꙋ⟩, and are now written ⟨у⟩. The modern letters ⟨ы⟩ and ⟨ю⟩ started out as digraphs, ⟨ъі⟩ and ⟨іо⟩. In Church Slavonic printing practice, both historical and modern, ⟨оу⟩ (which is considered a letter from the alphabet's point of view) is mostly treated as two individual characters, but ⟨ы⟩ is a single letter. For example, letter-spacing affects ⟨оу⟩ as if it were two individual letters, but never affects the components of ⟨ы⟩. In Old Slavonic texts, ⟨шт⟩ is a digraph that can replace the letter ⟨щ⟩, and vice versa. Modern Slavic languages written in the Cyrillic alphabet make little or no use of digraphs. There are only two true digraphs: ⟨дж⟩ for /dʒ/ (Belarusian, Bulgarian, Ukrainian) and ⟨дз⟩ for /dz/ (Belarusian, Ukrainian). Sometimes these digraphs are even considered special letters of their respective alphabets. In standard Russian, however, the letters in ⟨дж⟩ and ⟨дз⟩ are always pronounced separately. Digraph-like letter pairs include combinations of consonants with the soft sign ⟨ь⟩ (the Serbian/Macedonian letters ⟨љ⟩ and ⟨њ⟩ are derived from ⟨ль⟩ and ⟨нь⟩), and ⟨жж⟩ or ⟨зж⟩ for the uncommon and optional Russian phoneme /ʑː/. Native descriptions of the Cyrillic writing system often apply the term "digraph" to the combinations ⟨ьо⟩ and ⟨йо⟩ (Bulgarian, Ukrainian), as both correspond to the single letter ⟨ё⟩ of the Russian and Belarusian alphabets (⟨ьо⟩ is used for /ʲo/, and ⟨йо⟩ for /jo/). Cyrillic uses large numbers of digraphs only when used to write non-Slavic languages; in some languages, such as Avar, these are completely regular in formation. Many Caucasian languages use ⟨ә⟩ (Abkhaz), ⟨у⟩ (Kabardian), or ⟨в⟩ (Avar) for labialization, for instance Abkhaz ⟨дә⟩ for /dʷ/ (sometimes [d͡b]), just as many of them, like Russian, use ⟨ь⟩ for palatalization. Since such sequences are decomposable, regular forms will not be listed below. (In Abkhaz, ⟨ә⟩ with sibilants is equivalent to ⟨ьә⟩, for instance ж /ʐ/, жь /ʒ/~/ʐʲ/, жә /ʒʷ/~/ʐʲʷ/, but this is predictable phonetic detail.) Similarly, long vowels written double in some languages, such as ⟨аа⟩ for Abkhaz /aː/ or ⟨аюу⟩ for Kirghiz /ajuː/ "bear", or with glottal stop, as in Tajik аъ [aʔ~aː], are not included.

Archi: а́а [áː], аӏ [aˤ], а́ӏ [áˤ], ааӏ [aːˤ], гв [ɡʷ], гь [h], гъ [ʁ], гъв [ʁʷ], гъӏ [ʁˤ], гъӏв [ʁʷˤ], гӏ [ʕ], е́е [éː], еӏ [eˤ], е́ӏ [éˤ], жв [ʒʷ], зв [zʷ], и́и [íː], иӏ [iˤ], кк [kː], кв [kʷ], ккв [kːʷ], кӏ [kʼ], кӏв [kʷʼ], къ [qʼ], къв [q’ʷ], ккъ [qː’], къӏ [qˤʼ], ккъӏ [qːˤʼ], къӏв [qʷˤʼ], ккъӏв [qːʷˤʼ], кь [kʟ̥ʼ], кьв [kʟ̥ʷʼ], лъ [ɬ], ллъ [ɬː], лъв [ɬʷ], ллъв [ɬːʷ], лӏ [kʟ̥], лӏв [kʟ̥ʷ], о́о [óː], оӏ [oˤ], о́ӏ [óˤ], ооӏ [oːˤ], пп [pː], пӏ [pʼ], сс [sː], св [sʷ], тт [tː], тӏ [tʼ], тв [tʷ], твӏ [t’ʷ], у́у [úː], уӏ [uˤ], у́ӏ [úˤ], хх [χː], хв [χʷ], ххв [χːʷ], хӏ [ħ], хьӏ [χˤ], ххьӏ [χːˤ], хьӏв [χʷˤ], ххьӏв [χːʷˤ], хъ [q], хъв [qʷ], хъӏ [qˤ], хъӏв [qʷˤ], цв [t͡sʷ], цӏ [t͡sʼ], ццӏ [t͡sː], чв [t͡ʃʷ], чӏ [t͡ʃʼ], чӏв [t͡ʃ’ʷ], шв [ʃʷ], щв [ʃːʷ], ээ [əː], эӏ [əˤ]

Avar uses ⟨в⟩ for labialization, as in хьв /xʷ/.
Other digraphs are:
- Ejective consonants in ⟨ӏ⟩: кӏ /kʼ/, цӏ /tsʼ/, чӏ /tʃʼ/
- Other consonants based on к /k/: къ /qʼː/, кь /tɬʼː/
- Based on г /ɡ/: гъ /ʁ/, гь /h/, гӏ /ʕ/
- Based on л /l/: лъ /tɬː/
- Based on х /χ/: хъ /qː/, хь /x/, хӏ /ħ/
The ь digraphs are spelled this way even before vowels, as in гьабуна /habuna/ "made", not *гябуна.
- Gemination: кк /kː/, кӏкӏ /kʼː/, хх /χː/, цц /tsː/, цӏцӏ /tsʼː/, чӏчӏ /tʃʼː/. Note that three of these are tetragraphs. However, gemination for the 'strong' consonants in Avar orthography is sporadic, and the simple letters or digraphs are frequently used in their place.

Chechen and Ingush

Chechen uses the following digraphs:
- Vowels: аь /æ/, яь /jæ/, оь /ø/, ёь /jø/, уь /y/, юь /jy/
- Ejectives in ⟨ӏ⟩: кӏ /kʼ/, пӏ /pʼ/, тӏ /tʼ/, цӏ /tsʼ/, чӏ /tʃʼ/
- Other consonants: гӏ /ɣ/, кх /q/, къ /qʼ/, хь /ħ/, хӏ /h/
- The trigraph рхӏ /r̥/
In Ingush, there are no ejectives, so for example кӏ is pronounced /k/. Some of the other values are also different: аь /æ/ etc., уь /ɨ/ etc., кх /qχ/ (vs. къ /q/), хь /ç/. The vowel digraphs are used for front vowels in other Dagestanian languages and also in the local Turkic languages Kumyk and Nogay. ⟨Ӏ⟩ digraphs for ejectives are common across the North Caucasus, as is гӏ for /ɣ~ʁ~ʕ/.
- Slavic дж /ɡʲ/, дз /dz/
- Ejectives in ⟨ӏ⟩: кӏ /kʲʼ/ (but кӏу is /kʷʼ/), лӏ /ɬʼ/, пӏ /pʼ/, тӏ /tʼ/, фӏ /fʼ/, цӏ /tsʼ/, щӏ /ɕʼ/
- Other consonants: гъ /ʁ/, жь /ʑ/, къ /qʼ/, лъ /ɬ/ (from л /ɮ/), хь /ħ/, хъ /χ/
- The trigraph кхъ /q/
Labialized, the trigraph becomes the unusual tetragraph кхъу /qʷ/.

Tabasaran uses gemination for its 'strong' consonants, but this has a different value with г.
- Front vowels: аь /æ/, уь /y/
- Gemination for 'strong' consonants: кк /kː/, пп /pː/, тт /tː/, цц /tsʰː/, чч /tʃʰː/
- Ejectives with ⟨ӏ⟩: кӏ /kʼ/, пӏ /pʼ/, тӏ /tʼ/, цӏ /tsʼ/, чӏ /tʃʼ/
- Based on г /ɡ/: гг /ɣ/, гъ /ʕ/, гь /h/
- Other consonants based on к /kʰ/: къ /qʰː/, кь /qʼ/
- Based on х /ɦ/: хъ /qʰ/, хь /x/
It uses ⟨в⟩ for labialization of its postalveolar consonants: шв /ʃʷ/, жв /ʒʷ/, чв /tʃʰʷ/, джв /dʒʷ/, чӏв /tʃʼʷ/, ччв /tʃʷʰː/.

Tatar has a number of vowels which are written with ambiguous letters that are normally resolved by context, but which are resolved by discontinuous digraphs when context is not sufficient. These ambiguous vowel letters are е, front /je/ or back /jɤ/; ю, front /jy/ or back /ju/; and я, front /jæ/ or back /ja/. They interact with the ambiguous consonant letters к, velar /k/ or uvular /q/, and г, velar /ɡ/ or uvular /ʁ/. In general, velar consonants occur before front vowels and uvular consonants before back vowels, so it is frequently not necessary to specify these values in the orthography. However, this is not always the case. A uvular followed by a front vowel, as in /qærdæʃ/ "kinsman", for example, is written with the corresponding back vowel to specify the uvular value: кардәш. The front value of а is required by vowel harmony with the following front vowel ә, so this spelling is unambiguous. If, however, the proper value of the vowel is not recoverable through vowel harmony, then the letter ь /ʔ/ is added at the end of the syllable, as in шагыйрь /ʃaʁir/ "poet". That is, /i/ is written with a ы rather than a и to show that the г is pronounced /ʁ/ rather than /ɡ/, then the ь is added to show that the ы is pronounced as if it were a и, so the discontinuous digraph ы...ь is used here to write the vowel /i/.
This strategy is also followed with the ambiguous letters е, ю, and я in final syllables, for instance in юнь /jyn/ 'cheap'. That is, the discontinuous digraphs е...ь, ю...ь, я...ь are used for /j/ plus the front vowels /e, y, æ/. Exceptional final-syllable velars and uvulars, however, are written with simple digraphs, with ь for velars and ъ for uvulars: пакь /pak/ 'pure', вәгъдә /wæʁdæ/ 'promise'.
- ан (ян) /(j)æ̃/, он /(j)aŋ/, эр /əɻ/, etc.
In the Cyrillization of Mandarin, there are the digraphs цз and чж, which correspond to Pinyin z/j and zh. Final n is нь, while н stands for final ng. юй is yu, but ю is you, ю- is yu-, and -уй is -ui.
- гъ /ɣ/, дж /dʒ/~/dz/, къ /q/, нг /ŋ/. Нг /ŋ/ is also found in Uzbek.
- л’ /ɬ/, ч’ /tʃ/
- гъ, гь, къ, кь, кӏ, пӏ, тӏ, уь, хъ, хь, цӏ, чӏ
- Slavic дж /dʒ/, дз /dz/
- Ejectives in ⟨ъ⟩: къ /kʼ/, пъ /pʼ/, тъ /tʼ/, цъ /tsʼ/, чъ /tʃʼ/
- гъ /ʁ/, хъ /q/
- дж /dʒ/, дз /dz/, тш /tʃ/ (ч is /tsʲ/.)
- Long үй /yː/, from ү /y/.
- дь /ɟ/, нь /ɲ/
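Because digraphs must be matched before their component letters, transliteration and transcription tools typically try the longest unit first. Here is a minimal sketch, assuming a toy mapping that covers only the two true Slavic digraphs noted above plus a few single letters:

```python
# Longest-match transcription: try the two-letter digraph before falling
# back to single letters (toy Ukrainian-style mapping, for illustration).
DIGRAPHS = {"дж": "dʒ", "дз": "dz"}
SINGLES = {"д": "d", "ж": "ʒ", "з": "z", "а": "a"}

def to_ipa(text: str) -> str:
    out, i = [], 0
    while i < len(text):
        pair = text[i:i + 2]
        if pair in DIGRAPHS:            # two-letter unit wins over its parts
            out.append(DIGRAPHS[pair])
            i += 2
        else:                           # fall back to the single letter
            out.append(SINGLES.get(text[i], text[i]))
            i += 1
    return "".join(out)

print(to_ipa("джаз"))  # -> dʒaz
```

A real transcriber would also need the language-specific exceptions noted above, for instance standard Russian, where the letters in ⟨дж⟩ and ⟨дз⟩ are pronounced separately.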
Where do giant pandas come from? Of course, the proximal answer involves a male and female panda – and maybe some panda porn, if life in captivity dampens the mood – but I'm not talking about that. What I'm wondering about is the evolutionary origin of these bamboo-eating bears. Until recently, there was little to be said about the prehistory of pandas. A few skulls, mandibles, and other assorted fragments from caves and fissures in southeastern Asia were all that had turned up. Prior to the origin of the modern panda, the larger species Ailuropoda baconi lived during the past 750,000 years, and was preceded by the poorly known Ailuropoda wulingshanensis and a smaller species – Ailuropoda microta – which occupied China between 2 and 2.4 million years ago. Beyond that it gets a bit hazy. The earliest potential member of the giant panda lineage is the approximately seven-million-year-old bear Ailurarctos, but there are no solid points between it and the later pandas to draw the lineage together. Notices of most of these fossil finds were tucked away in obscure journals or were only briefly mentioned in catalogs of specimens recovered during American Museum of Natural History expeditions. From the known parts – especially the teeth – the fossil bears did not seem all that different from the modern pandas. Thanks to a single discovery, though, paleontologists have begun to piece together a better understanding of how these bears changed over time. The fossil that has spurred several new studies into the origins of pandas is the skull of the smallest and earliest giant panda species, Ailuropoda microta. Found in southwestern China's Jinyin cave, this worn skull is considerably different from those of later species and looks rather puny next to them. Nevertheless, the 2007 description of the skull by Changzhu Jin and colleagues points out that this animal shared some tell-tale characteristics associated with the modern panda's diet of coarse, fibrous bamboo. The cheek teeth of A. microta, though lacking the extra cusps seen in living pandas, were broad and well-suited to grinding, and the back of the skull was expanded for heavy chewing muscles. Overall, its skull was not as heavily built as that of the largest fossil panda, A. baconi, but it appeared that at least some unique giant panda traits were already present about two million years ago and had just been tweaked a bit since then. Exactly how these species relate to each other is unclear. The authors of the 2007 description interpreted them as a straight-line march from A. microta to A. wulingshanensis and on to A. baconi, before a size reduction culminating in the modern A. melanoleuca. (A follow-up paper by Wei Dong, based on CT scans of the brain cavities of these bears, showed that a reduction in brain size went along with the reduction in body size.) Given that we still know so little about these bears, however, an evolutionary march of the pandas cannot be confirmed, and better sampling will be needed to tell whether all these fossil species represent a lineage as straight as a bamboo stem or whether there were splits which led species to overlap in time with each other. There is much that remains unknown about the diversity of prehistoric pandas and their precise placement in time. Even if the recent history of prehistoric pandas remains a bit fuzzy, the discovery of the A. microta skull has allowed paleontologists to identify some of the evolutionary trends that shaped this peculiar group of bears.
In 2010 Borja Figueirido and co-authors looked at how many times the group of mammals which contains dogs, cats, and bears – called carnivorans – has evolved similar adaptations in their skulls to eating plants. Their hypothesis was that a combination of shared evolutionary constraints and similar pressures from natural selection determined the unique skull shapes of carnivorans that went vegetarian. One of the prime examples of this kind of convergence comes from the two distantly related modern pandas. There’s the panda bear, and then there’s the red panda (Ailurus fulgens), which last shared a common ancestor with the giant panda over 40 million years ago. Despite this distance, however, the red panda also feeds on bamboo, has enlarged molars for grinding tough food, and even has a specialized wrist bone (the sesamoid) that creates a jury-rigged, opposable “thumb”. These shared traits may have appeared independently as adaptations to a similar diet, although, as explained in a 2006 study led by Manuel Salesa, the fact that the fossil red panda Simocyon had a pseudo-thumb but lacked plant-crushing teeth suggests that red panda thumbs were initially adaptations to life in the trees and were only later co-opted for eating bamboo. The pattern of convergent evolution cannot be understood without knowing the evolutionary history of the groups being compared and which traits may have undergone a change in function thanks to natural selection. But Figueirido and co-authors were not considering whole bodies. They focused their study on similarities of the skull. What they found was that specialized, plant-eating carnivorans – that is, species that get 95 percent of their intake from plant food – have broad, short skulls with deep jaws and stout molars. This package of traits generates high bite forces, and the only carnivorans with stronger bites are the hypercarnivorous species which specialize in taking down large prey. The reason for this may be that, compared to ungulates like antelope or deer, plant-eating carnivorans are not well-suited to eating plants. They lack the complex digestive systems of the hoofed mammals for breaking down plants, and the construction of their jaws prevents them from chewing as efficiently. In order to survive, they have to eat heaping helpings of plant food to make up for their general lack of efficiency, and so they have evolved very strong jaw muscles to keep working through all that browse. The evolutionary baggage the herbivorous carnivorans carried with them constrained what was possible, and the giant panda is the most famous example of this. Just when giant pandas began to shift to an all-bamboo diet is another matter. On the basis of teeth alone it seemed that bamboo-eating was a long-held giant panda tradition, going back millions of years, but the discovery of the approximately two-million-year-old skull of A. microta has allowed paleontologists to get a better handle on the timing of the associated changes in anatomy. In a study just published in Naturwissenschaften by Figueirido, Paul Palmqvist, Juan Pérez-Claros, and Wei Dong, landmarks on the skulls of the known giant panda species were used to track changes during the group’s evolutionary history. The aim of this research was to determine whether giant pandas really have undergone minimal modification since the late Pliocene or whether the unique traits seen in their skulls developed more recently. The results of the analysis showed that A.
microta had a skull very much like that of the modern panda in profile, but it differed in some subtle ways. Its molar tooth row was shorter than in living giant pandas, its snout was comparatively longer, and its braincase was narrower, in addition to a handful of other differences. When looked at all together, the skull of A. microta was most similar to those of other giant pandas but was still intermediate between the skulls of the panda bears and those of other species of living bears. Contrary to what was reported in the initial description of the skull, giant panda head shape did not remain static over the past two million years. While it is difficult to be sure without the lower jaws and other parts of the skeleton, the skull anatomy of A. microta probably indicates that giant pandas were already bamboo specialists by two million years ago. Minor differences in their anatomy hint that they were not able to eat as much bamboo as their living relatives – their jaw forces were weaker, and they lacked an expanded second molar to grind down bamboo stems – but their skull shapes are consistent with a diet of tough plants. Frustratingly, though, paleontologists have only an extremely limited view of giant panda evolution. Of the three potential fossil species, only two are known from relatively complete skulls, and the fossil teeth of Ailurarctos appear to indicate that the fossil lineage of giant pandas goes back seven million years or more. That leaves us with a roughly five-million-year gap in panda evolution, and even the history of the more recent pandas is only partially known. In order to fill in those gaps, paleontologists will have to go back to the caves and fissures of Asia to uncover new clues. Top Image: Tai Shan the panda cub at the National Zoo in the spring of 2008. Photo by the author. Dong, W. (2008). Virtual cranial endocast of the oldest giant panda (Ailuropoda microta) reveals great similarity to that of its extant relative. Naturwissenschaften, 95(11), 1079-1083. DOI: 10.1007/s00114-008-0419-3 Figueirido, B., Palmqvist, P., Pérez-Claros, J., & Dong, W. (2010). Cranial shape transformation in the evolution of the giant panda (Ailuropoda melanoleuca). Naturwissenschaften, 98(2), 107-116. DOI: 10.1007/s00114-010-0748-x Figueirido, B., Serrano-Alarcón, F., Slater, G., & Palmqvist, P. (2010). Shape at the cross-roads: homoplasy and history in the evolution of the carnivoran skull towards herbivory. Journal of Evolutionary Biology, 23(12), 2579-2594. DOI: 10.1111/j.1420-9101.2010.02117.x Jin, C., Ciochon, R., Dong, W., Hunt, R., Liu, J., Jaeger, M., & Zhu, Q. (2007). The first skull of the earliest giant panda. Proceedings of the National Academy of Sciences, 104(26), 10932-10937. DOI: 10.1073/pnas.0704198104 Salesa, M. (2006). Evidence of a false thumb in a fossil carnivore clarifies the evolution of pandas. Proceedings of the National Academy of Sciences, 103(2), 379-382. DOI: 10.1073/pnas.0504899102
You never know when you might find yourself needing to define a word like "mizzle-shinned," one of the Oxford English Dictionary's many words of the day. When you do, it's helpful to the reader to offer both a definition and information about where that definition came from. Citing an online dictionary differs from citing other Web-based sources because dictionary entries typically don't have authors. Instead, most style guides suggest you use the word you define as the first component of your citation, followed by website and date information. Some, but not all, style guides also require URL information. Modern Language Association (MLA) Style Students writing about literature, who often use MLA format, could find themselves defining unusual words. To correctly cite a definition from an online dictionary in MLA format, include both the original source and the website information. This example shows the proper formatting: “untenable.” Merriam-Webster Dictionary. New York: Merriam-Webster, 2004. Merriam-Webster.com. Web. 13 Feb. 2015. This citation includes the publisher information for the print copy of the Merriam-Webster dictionary, as well as the URL and the date the definition was accessed. According to Merriam-Webster’s website, the following example is also an acceptable MLA style citation: “untenable.” Merriam-Webster.com. Merriam-Webster, 2015. Web. 13 Feb. 2015. The in-text citation for this Works Cited reference would read: (“untenable”). American Psychological Association (APA) Style Students writing in fields like psychology or the social sciences commonly define words whose meanings the reader may not know without consulting a dictionary. APA style also uses the entry title of the dictionary as the first item in the citation: pathology. (n.d.). In Encyclopedia Britannica Online. Retrieved from http://www.britannica.com/EBchecked/topic/446440/pathology Note that there is no period after the URL at the end of the citation. Use (n.d.) if there is no date specified in the entry; otherwise, use the year of online publication, often found at the bottom of the page, in parentheses. The in-text citation for this example would read: (“pathology,” n.d.). Chicago Manual of Style (CMS) The Chicago Manual of Style is a comprehensive guide used by many publications. According to the Purdue Online Writing Lab, scholars in the areas of history, literature and the arts tend to use Chicago's "notes-bibliography" system, while its "author-date" system is used more commonly in the social sciences. In either system, you always have the option to reference the name of the dictionary and edition along with the definition in the running text. According to the website of the Simon Fraser University Library, Chicago discourages including well-known dictionaries in the reference list or bibliography; however, including a complete bibliographic entry for less-common dictionaries is recommended. In the author-date system, an in-text citation linked to a specialized dictionary in the reference list includes the name of the dictionary and edition number in parentheses if not simply included in running text. If your professor asks you to use the notes-bibliography system, create a footnote. Chicago lists the dictionary’s name as the first item in the footnote entry, followed by the abbreviation “s.v." (for the Latin phrase "sub verbo"). Here is an example: - Merriam-Webster, s.v. “rhythm,” accessed January 2, 2015, http://www.merriam-webster.com/dictionary/rhythm.
The bibliographic entry at the end of your paper is written in a similar format, as in this example from the Simon Fraser University Library: Grove Music Online, s.v. “Sibelius, Jean,” by James Hepokoski, accessed January 3, 2005, http://www.grovemusic.com/. Associated Press (AP) Style AP style is typically used by journalists, and definitions from online dictionaries are cited within a sentence. You cite an online dictionary in AP style like this: The Oxford English Dictionary online defines “mizzle-shinned” as “having one’s legs red and blotched from sitting too near a fire.” You identify the source of the definition in the text, but you don't need to include a formal bibliographic citation.
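Since each style is really just a fixed ordering of the same handful of fields, the assembly can be sketched in a few lines of code. This is only an illustration of the simpler MLA pattern quoted above; the function name and field names are invented, not part of any citation tool.

    # Assemble an online-dictionary citation from its parts, following the
    # simpler MLA pattern shown above. Names here are illustrative only.
    def mla_dictionary_citation(word, site, publisher, year, accessed):
        return f'"{word}." {site}. {publisher}, {year}. Web. {accessed}.'

    print(mla_dictionary_citation(
        "untenable", "Merriam-Webster.com", "Merriam-Webster",
        2015, "13 Feb. 2015"))
    # -> "untenable." Merriam-Webster.com. Merriam-Webster, 2015. Web. 13 Feb. 2015.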
Infectious Disease Online Pathology of Smallpox (Variola) Before its eradication, smallpox (variola) was an acute, highly contagious, exanthematous viral infection. The virus contains double-stranded DNA and produces a typical plaque, or "pock", when cultured on the chorioallantoic membrane of embryonated chicken eggs. Since Jenner's pioneering work in 1796, a similar virus – vaccinia, a poxvirus closely related to the agent of cowpox – has been used for "vaccination" to protect against smallpox. Smallpox was evidently an ancient disease; a rash resembling smallpox was found in the mummified remains of the Egyptian Pharaoh Ramses V, who died in 1160 BC. The disease once had worldwide distribution in both urban and rural areas, afflicting persons of both genders and all ages, but particularly children. In 1967, the World Health Organization began its uniquely successful campaign to eradicate smallpox. By then, smallpox had already been controlled in most developed countries but was still endemic in the less developed world. In 10 years the vaccination campaign eradicated the disease. The successful eradication of smallpox depended on several factors, including the permanence of immunity following vaccination, the stability of the smallpox virus (in contrast to the genetic instability of influenza viruses and many others), and the lack of an animal reservoir for the virus. Smallpox was transmitted in respiratory droplets and almost always involved face-to-face contact. The virus infected the oropharynx or nasopharynx, multiplied in lymphoid tissue of the upper respiratory tract, produced a brief (about 2-day) period of viremia, and then entered a 4- to 14-day "latent" period, when it was undetectable in the blood and was assumed to be multiplying in the reticuloendothelial system. After another 1- or 2-day period of viremia, there was a 2- to 4-day prodrome of nonspecific febrile symptoms. The prodrome was followed by the characteristic eruption of smallpox, which evolved through several stages, beginning as macules, then progressing over a 1- to 2-week period through papules, vesicles, and pustules. The pustules umbilicated within 2 weeks and desiccated (crusted) to form scabs. The scabs, which contained the smallpox virus, usually sloughed from the skin, thereby creating fresh, pitted scars. Pitting or pockmarking was most common over facial areas that have numerous sebaceous glands. In the most severe form, black "hemorrhagic smallpox", which was almost always fatal, there was bleeding into the vesicles and pustules. Histologic features of the earliest stage of the rash included hyperemia, swelling of capillary endothelium, and perivascular infiltrates of lymphocytes and histiocytes in the upper dermis. Multiloculated vesicles developed by rupture of the membranes between degenerating epithelial cells. There was ballooning of cells in the lower levels of the stratum spinosum, and some degenerating cells fused into giant cells with two or more nuclei. Eosinophilic intracytoplasmic inclusion bodies (Guarnieri's bodies) were prominent in ballooned epithelial cells. Viral keratitis and secondary bacterial infections of the eyes were frequent complications. Many patients in Asia developed corneal ulcerations, and smallpox was usually the primary cause of blindness during epidemic periods. In pregnant women, the disease frequently caused abortion.
Vaccination in the pre-outbreak setting is contraindicated for persons who have the following conditions, or who have a close contact with the following conditions: 1) A history of atopic dermatitis (commonly referred to as eczema), irrespective of disease severity or activity; 2) Active acute, chronic, or exfoliative skin conditions that disrupt the epidermis; 3) Pregnancy, or the desire to become pregnant in the 28 days after vaccination; and 4) Immunocompromise as a result of human immunodeficiency virus or acquired immunodeficiency syndrome, autoimmune conditions, cancer, radiation treatment, immunosuppressive medications, or other immunodeficiencies. Additional contraindications that apply only to vaccination candidates, but not to their close contacts, are smallpox vaccine-component allergies, breastfeeding, use of topical ocular steroid medications, moderate-to-severe intercurrent illness, and age under 18 years. In addition, a history of Darier disease is a contraindication in a potential vaccinee, and a contraindication if a household contact has active disease. Reference: Generalized vaccinia, progressive vaccinia, and eczema vaccinatum are rare following smallpox (vaccinia) vaccination: United States surveillance, 2003. Clin Infect Dis. 2005 Sep 1;41(5):689-97. Epub 2005 Jul 26.
While humans have been using sonar for almost a hundred years, some marine animals have been using sound to navigate their surroundings for millions of years. Scientists may have finally unlocked the evolutionary origins of this useful ability. A team of researchers has announced the discovery of an ancient whale known as Cotylocara macei. This species is currently the earliest known example of an animal that used echolocation to navigate its watery domain. The ancient whale was a bit on the small side, only slightly larger than modern bottlenose dolphins. However, the discovery of the species indicates that toothed whales were the first marine animals to develop echolocation. Cotylocara, which swam in ancient oceans around 28 million years ago, is distantly related to modern toothed whales such as dolphins, killer whales and sperm whales. Scientists also estimate that the first marine animals that used echolocation for navigation may have existed between 32 million and 34 million years ago. "The most important conclusion of our study involves the evolution of echolocation and the complex anatomy that underlies this behavior," said Jonathan Geisler, an associate professor at the New York Institute of Technology (NYIT). "This was occurring at the same time that whales were diversifying in terms of feeding behavior, body size, and relative brain size." In order to use echolocation, certain cetaceans create a high-pitched sound using a small, constricted passage just below their blowholes. Unlike the vocalization organs found in other animals, a whale's echolocation system is more complex, using powerful muscles and air pockets to produce powerful sound waves that can propagate through relatively large distances underwater. "Its dense bones and air sinuses would have helped this whale focus its vocalizations into a probing beam of sound, which likely helped it find food at night or in muddy ocean waters," Geisler added. The unique physiology of whales and dolphins allows them to use sound to find their way through oceans, hunt food and locate other members of their species. "The anatomy of the skull is really unusual. I've not seen anything like this in any other whale, living or extinct," said Geisler. The fossilized Cotylocara remains were found in Summerville, South Carolina. The scientists were able to recover various pieces of ribs, neck vertebrae and even a 22-inch skull. The skull was particularly valuable since it provided physiological clues that can be used to study the animal's echolocation abilities. The team published their findings online in the journal Nature.
Denseness of Rational Numbers. Pre-Algebra, Mrs. Yow. What does it mean to be DENSE? Which material is more dense here? Why? Which material is more dense here? The hair! Compare rational numbers (find numbers between) using models or using common denominators. Denseness of rational numbers: When the denominators of two fractions are the same, the one with the greater numerator represents the larger rational number. The Fundamental Law of Fractions can be used to write equivalent fractions with the same denominator if the denominators of the fractions to be compared are different. The cross-product can also be used to compare fractions that have different denominators. FACE TIME (20-25 minutes):
- Determine the validity of the following statement: "If x and y are rational numbers, then x < y < 0 guarantees that x2 < y2."
- Using your calculator, find a rational number between and .
- Using your calculator, find a fraction between the rational numbers and . (DOK 3)
- Find the product of and . Then divide the product by 2. Will the answer yield a rational number between and ?
- 3.45 is a solution to the inequality 3 < x < 3 . Which statement justifies that 3.45 is a true value for x? a) 3.45 is less than 3 . b) 3.45 is greater than 3.5 and less than 3 . c) 3.45 is greater than 3 and less than 3 . d) 3 is greater than 3.45.
- Write three numbers between: -2.4 < x < -2.31
- Write a number that is greater than but less than .
- Which of the following rational numbers is not between and ?
- How many rational numbers are between 3.76 and 3.77? (See the worked argument below.)
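The fact these exercises build toward can be stated compactly with a standard midpoint argument. The particular fractions in the exercises are blank in the source, so generic fractions a/b and c/d stand in for them here:

    \text{If } \frac{a}{b} < \frac{c}{d} \text{ (both rational), then }
    m = \frac{1}{2}\left(\frac{a}{b} + \frac{c}{d}\right) = \frac{ad + bc}{2bd}
    \text{ is rational and } \frac{a}{b} < m < \frac{c}{d}.

Repeating the construction always produces a fresh rational strictly between the previous two, so between any two distinct rationals there are infinitely many others; in particular, there are infinitely many rational numbers between 3.76 and 3.77.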
The above method of procedure reaches a pitch of exactitude which in itself is very interesting. IMPRESSION OF FORM THROUGH TOUCH ALONE (EDUCATION OF THE STEREOGNOSTIC SENSE) To recognize the form of an object by feeling it all over, or rather touching it with the finger-tips (as the blind do), means something more than exercising the tactile sense. The fact is that through touch one perceives only the superficial qualities of smoothness and roughness. But, whilst the hand (and the arm) is moving all round the object, there is added to the tactile impression that of the movement carried out. Such an impression is attributed to a special sense (a sixth sense) which is called the muscular sense, and which permits many impressions to be stored up in a "muscular memory", or a memory of movements accomplished. It is possible for us to move without touching anything and to be able to reproduce and remember the movement made, with regard to its direction, the limits of extension, etc. (a pure consequence of muscular sensations). But when we touch something as we move, two sensations are mixed up together—tactile and muscular—giving rise to that sense which the psychologists call the "stereognostic sense". In this case, there is acquired not only an impression of movement accomplished, but knowledge of an external object. This knowledge may be integrated with that gained through vision, thus giving a more concrete exactness to the perception of the object. This is very noticeable in little children who seem to be possessed of greater certainty in recognizing things, and above all greater facility in remembering them when they handle them than when they only see them. This fact is made evident by the very nature of the children in their early years. They touch everything they see, obtaining the double image (visual and muscular) of the innumerable different things with which they come in contact in their environment.
No matter the type of art, children can improve writing skills as they study different forms and apply their senses and stories to writing. Help your child select a form of art and, together, complete the following tasks to enhance his or her writing skills. (This is a great activity for Thanksgiving weekend or a holiday break — art is all around you, even in holiday decorations!) Selecting the art Remember that art takes many forms, so follow your child’s interests. Look for an image or piece that inspires your child–one that he or she takes an extra minute to examine. For these activities, use artwork that your child didn’t create. (We’ll look at using your child’s own art in writing in a later article.) Places to find images of art: - photography magazine - art museum - around your house - online Once you’ve selected the art, print it, cut it out or take a picture of it so that your child will be able to look closely. If it’s 3D art, be sure to plan for time to work in front of the piece. 3 Activities to enhance writing skills using art Using the art as inspiration, help your child brainstorm words about the art. To guide her, ask “what colors do you see?” and “what feelings does this painting (picture, piece…) give you?” and write the answers on a brainstorming paper (usually a blank paper for listing thoughts will work just fine). It helps to model brainstorming, so sit with her and create your own list. Limit the time you spend brainstorming to about 5 minutes. Write a story about the art They say a picture is worth a thousand words; now it’s time to create those words. Help your child organize a story about the art, using some of the describing words. One of the hardest things for children is that they often feel they have to write the “right” story, based on what the artist was creating. Remind them that this is not the point of the exercise. The point is to create their own stories. For example, the story can be based on what Mona Lisa had for lunch before the painting was created. Or, in a picture of a dog and birds, perhaps your child will write about the conversation or thoughts the animals are having. Or what may happen next. An image of a fisherman may inspire a story about dinner that evening or a story about the man and the life that led him to that moment. What’s important is that there is no right or wrong answer–each person, each writer, will be inspired to tell a different story when he or she sees the picture. Some people can jump right in and start writing. Others may need to discuss ideas first. Find what works best for your child by asking if he’d like to discuss his ideas first. Write a letter to the artist Remind your child that the art was created by someone, and for a reason. Create a list of questions that your child wants to ask about the art: What inspired the picture? Why did the artist use the colors he used? Is the little boy in the painting/photo/sculpture a friend or a stranger? Sadly, the chances of getting a reply are slim in this case, as it’s very difficult to find a way to contact an artist. However, if the artwork is in a gallery or museum, don’t hesitate to send the letter there with a cover letter asking that it be forwarded to the artist. If the artist is deceased, discuss with your child that, because the letter won’t be answered, it may be fun to try to research the answers on the internet or through the gallery or museum where the art is displayed.
Alternatively, your child can write a different letter to the museum. Suggested questions may include: - Why did you opt to include this piece in your museum? - Is this a popular piece of art? - What have patrons commented about the art? - How does the art fit in your museum/gallery? How is it displayed? Remember, the image is inspiration–art can take us anywhere. Your job, as a parent, is to guide your child to think beyond the images he sees. © 2014 – 2015, Julie Meyers Pron. All rights reserved.
The study is part of the FLUPOL ('Host-specific variants of the influenza virus replication machinery') project, funded with EUR 1.97 million under the Policy Support budget line of the Sixth Framework Programme (FP6). Seasonal influenza epidemics kill hundreds of thousands of people every year. According to FLUPOL, the deadly H5N1 avian influenza viruses have the potential to cause a devastating pandemic if they become transmissible between humans. The goal of the three-year research project is to provide new knowledge that will enable scientists to better monitor the influenza virus and find ways to combat the emergence of deadly strains. To do this, it is crucial that the mechanisms whereby the virus can adapt itself from bird hosts to humans be fully understood. The influenza virus multiplies rapidly within its host's cells, aided by a viral enzyme called polymerase. The polymerase copies the virus's genetic material and manipulates the host cell to provide a friendly environment for the virus to multiply. The polymerase takes a piece of the host's RNA (genetic material) and adds it to its own. The result: the host cell starts to produce viral proteins. The part of the RNA that gets 'hijacked' is called the cap, a short bit of the molecule found at the beginning of messenger RNA that directs the manufacture of proteins. The viral polymerase swipes the cap and sticks it onto its own RNA. The process, referred to as 'cap snatching', had until now been unclear. The viral polymerase is known to be composed of three subunits (PA, PB1 and PB2); the question of which subunit is the cap-snatcher has been a matter of some controversy. While previous studies demonstrated that PB2 plays a role in cap binding, PB1 was believed to be the cap-snatching culprit. Now, the team led by Dr Stephen Cusack of the European Molecular Biology Laboratory (EMBL) and Dr Rob Ruigrok of the National Centre for Scientific Research (CNRS) has discovered that a different part of the polymerase, PA, is actually responsible for slicing the cap off the host's mRNA. The investigators created crystal structures of the polymerase subunits and examined them under X-ray beams at the European Synchrotron Radiation Facility (ESRF) in Grenoble, France. The resulting high-resolution image clearly showed the individual amino acids that form the site where the cap is cleaved from the mRNA. The scientists revealed that the PA subunit plays a unique role in cleaving the RNA. 'Our results came as a big surprise, because everybody thought that the cleaving activity resides in a different part of the polymerase,' said Dr Ruigrok. Dr Cusack added: 'These new insights make PA a promising antiviral drug target. Inhibiting the cleaving of the cap is an efficient way to stop infection because the virus can no longer multiply. Now we know where to focus drug-design efforts.' Their findings are supported by a second study, published in the same issue of Nature by researchers in China and the UK, which shows PA to be an important target for the design of new anti-influenza therapies.
No Educator Left Behind is a series providing answers from the U.S. Department of Education to questions about the federal No Child Left Behind Act and how it will affect educators. If you have a question about No Child Left Behind, send an e-mail to Ellen Delisio, and we will submit your question to the Department of Education. What does the phrase adequate yearly progress mean, and how will schools measure it? U.S. Department of Education: Adequate yearly progress, or AYP, refers to the growth needed in the proportion of students who achieve state standards of academic proficiency. A state's definition of AYP is based primarily on the state's academic assessments. The definition of AYP must also include graduation rates for high schools and an additional indicator for middle and elementary schools. AYP also will be based on separate reading/language arts and math achievement objectives. The new definition of AYP is diagnostic in nature and intended to highlight where schools need improvement and should focus their resources. States may calculate AYP for a school using up to three consecutive years of data, but if a state chooses to average data over two or three years, it must still determine whether a school or district made AYP each year. For a school to make AYP, each subgroup and the school overall must make AYP, and the school must test at least 95 percent of students, including 95 percent of each subgroup. Schools must report all results by subgroup, but if the number of students in a group won't produce statistically reliable results, the state need not identify the school as not making AYP based on the subgroup results. Schools that receive federal Title I funds to improve learning among disadvantaged children and fail to make AYP for two years in a row are considered in need of improvement and face a range of consequences. Read other questions and answers in our No Educator Left Behind archive.
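The participation and subgroup rules described above lend themselves to a simple schematic check. The sketch below is illustrative only: the field names, the single proficiency target, the minimum-group-size rule, and the sample numbers are all invented for the example, and the real statutory definition has many more moving parts.

    # Schematic sketch of the AYP logic described above. Thresholds and
    # data fields are illustrative, not the statutory definition.
    def makes_ayp(groups, proficiency_target, min_group_size=30):
        """groups: dicts with 'enrolled', 'tested', and 'proficient' counts."""
        for g in groups:
            if g["enrolled"] < min_group_size:
                continue  # too small to yield statistically reliable results
            if g["tested"] < 0.95 * g["enrolled"]:  # 95% participation rule
                return False
            if g["proficient"] / g["tested"] < proficiency_target:
                return False
        return True

    school = [
        {"name": "all students", "enrolled": 400, "tested": 390, "proficient": 280},
        {"name": "subgroup A",   "enrolled": 80,  "tested": 78,  "proficient": 52},
    ]
    print(makes_ayp(school, proficiency_target=0.60))  # -> True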
NGC 346, the brightest star-forming region in the neighbouring Small Magellanic Cloud galaxy, lies some 210,000 light-years away from Earth. The image was obtained at the La Silla Observatory in Chile. A dramatic new image of a star cluster in our neighboring galaxy reveals light, wind and heat flowing in dramatic spirals. The cluster, designated NGC 346, is a loosely bound collection of stars hovering in a nearby dwarf galaxy called the Small Magellanic Cloud. The bright, wispy cobweb shape is gas that has been heated up by surrounding stars until it's so warm it emits its own glowing light. This area is also a star-forming region where brand new stars are being born. Massive stars send out powerful winds that compress nearby gas until it is dense enough for nuclear fusion to ignite, creating a stellar core. Many stars in NGC 346 are only a few million years old – relative babies on a cosmic scale. When new stars come onto the scene they too emit strong winds that alter the shape of the gas and dust around them and can spark even more star formation. This hotbed of activity is about 210,000 light-years away toward the constellation Tucana (the Toucan). The Small Magellanic Cloud orbits our own Milky Way galaxy and can be seen as a hazy object with the naked eye. The newly released image was captured by the 2.2-meter (7.2-foot) diameter telescope at the La Silla Observatory in Chile, shared by the Max Planck Society and the European Southern Observatory.
Binary symmetric channel A binary symmetric channel (or BSC) is a common communications channel model used in coding theory and information theory. In this model, a transmitter wishes to send a bit (a zero or a one), and the receiver receives a bit. It is assumed that the bit is usually transmitted correctly, but that it will be "flipped" with a small probability (the "crossover probability"). This channel is used frequently in information theory because it is one of the simplest channels to analyze. The BSC is a binary channel; that is, it can transmit only one of two symbols (usually called 0 and 1). (A non-binary channel would be capable of transmitting more than 2 symbols, possibly even an infinite number of choices.) The transmission is not perfect, and occasionally the receiver gets the wrong bit. This channel is often used by theorists because it is one of the simplest noisy channels to analyze. Many problems in communication theory can be reduced to a BSC. Conversely, being able to transmit effectively over the BSC can give rise to solutions for more complicated channels. A binary symmetric channel with crossover probability p, denoted BSC_p, is a channel with binary input and binary output and probability of error p; that is, if X is the transmitted random variable and Y the received variable, then the channel is characterized by the conditional probabilities - Pr(Y = 0 | X = 0) = 1 − p - Pr(Y = 0 | X = 1) = p - Pr(Y = 1 | X = 0) = p - Pr(Y = 1 | X = 1) = 1 − p It is assumed that 0 ≤ p ≤ 1/2. If p > 1/2, then the receiver can swap the output (interpret 1 when it sees 0, and vice versa) and obtain an equivalent channel with crossover probability 1 − p ≤ 1/2. Capacity of BSC_p The capacity of the channel is 1 − H(p), where H(p) = −p log2(p) − (1 − p) log2(1 − p) is the binary entropy function. The converse can be shown by a sphere packing argument. Given a codeword, there are roughly 2^(n H(p)) typical output sequences. There are 2^n total possible outputs, and the input chooses from a codebook of size 2^(nR). Therefore, the receiver would choose to partition the space into "spheres" with 2^n / 2^(nR) = 2^(n(1 − R)) potential outputs each. If R > 1 − H(p), then the spheres will be packed too tightly asymptotically and the receiver will not be able to identify the correct codeword with vanishing probability. Shannon's channel capacity theorem for BSC_p Shannon's noisy coding theorem is general for all kinds of channels. We consider a special case of this theorem for a binary symmetric channel with an error probability p. Noisy coding theorem for BSC_p The noise e that characterizes BSC_p is a random variable consisting of n independent random bits (n is defined below), where each random bit is a 1 with probability p and a 0 with probability 1 − p. We indicate this by writing "e ∈ BSC_p". Theorem 1. For all p < 1/2, all ε such that 0 < ε < 1/2 − p, all sufficiently large n (depending on p and ε), and all k ≤ ⌊(1 − H(p + ε))n⌋, there exists a pair of encoding and decoding functions E: {0,1}^k → {0,1}^n and D: {0,1}^n → {0,1}^k respectively, such that every message m ∈ {0,1}^k has the following property: Pr over e ∈ BSC_p [D(E(m) + e) ≠ m] ≤ 2^(−δn) for some constant δ > 0. What this theorem actually implies is that a message, when picked from {0,1}^k, encoded with a random encoding function E, and sent across a noisy BSC_p, can be recovered by decoding with very high probability, as long as k/n, in effect the rate of the code, is bounded by the quantity stated in the theorem. The decoding error probability is exponentially small. We shall now prove Theorem 1.
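As a brief numerical aside before the proof, the channel model itself is easy to simulate. The following minimal Python sketch (not part of the original article; the parameter values are arbitrary) draws noise from BSC_p and evaluates the capacity formula 1 − H(p):

    import math
    import random

    def binary_entropy(p):
        # H(p) = -p*log2(p) - (1-p)*log2(1-p), with H(0) = H(1) = 0.
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def bsc(bits, p, rng=random):
        # Flip each transmitted bit independently with crossover probability p.
        return [b ^ (rng.random() < p) for b in bits]

    p, n = 0.11, 100_000
    sent = [random.randint(0, 1) for _ in range(n)]
    received = bsc(sent, p)
    flips = sum(s != r for s, r in zip(sent, received))
    print(f"empirical crossover: {flips / n:.4f} (target {p})")
    print(f"capacity 1 - H(p):   {1 - binary_entropy(p):.4f} bits per channel use")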
Proof. We shall first describe the encoding function E and the decoding function D used in the theorem. We will use the probabilistic method to prove this theorem. Shannon's theorem was one of the earliest applications of this method. Encoding function: Consider an encoding function E: {0,1}^k → {0,1}^n that is selected at random. This means that for each message m ∈ {0,1}^k, the value E(m) ∈ {0,1}^n is selected at random (with equal probabilities). Decoding function: For a given encoding function E, the decoding function D: {0,1}^n → {0,1}^k is specified as follows: given any received codeword y ∈ {0,1}^n, we find the message m such that the Hamming distance Δ(y, E(m)) is as small as possible (with ties broken arbitrarily). This kind of decoding function is called a maximum likelihood decoding (MLD) function. Ultimately, we will show (by integrating the probabilities) that at least one such choice satisfies the conclusion of the theorem; that is what is meant by the probabilistic method. The proof runs as follows. Suppose p and ε are fixed. First we show, for a fixed message m and E chosen randomly, that the probability of failure over the noise is exponentially small in n. At this point, the proof works for a fixed message m. Next we extend this result to work for all messages. We achieve this by eliminating half of the codewords from the code, with the argument that the proof for the decoding error probability holds for at least half of the codewords. The latter method is called expurgation. This gives the total process the name random coding with expurgation. A high level proof: Fix p and ε. Given a fixed message m ∈ {0,1}^k, we need to estimate the expected value of the probability that the received codeword along with the noise does not give back m on decoding. That is to say, we need to estimate E_E[ Pr over e ∈ BSC_p [D(E(m) + e) ≠ m] ]. Let y = E(m) + e be the received codeword. In order for the decoded codeword D(y) not to be equal to the message m, one of the following events must occur: - y does not lie within the Hamming ball of radius (p + ε)n centered at E(m). This condition is mainly used to make the calculations easier; by the Chernoff bound its probability is at most 2^(−Θ(ε²n)). - There is another message m′ ≠ m such that Δ(y, E(m′)) ≤ Δ(y, E(m)). In other words, the errors due to noise take the transmitted codeword closer to another encoded message. As for the second event, we note that, for a fixed y within the Hamming ball and a fixed m′ ≠ m, the probability (over the random choice of E(m′)) that Δ(y, E(m′)) ≤ Δ(y, E(m)) is at most |B(y, (p + ε)n)| / 2^n, where B(x, r) is the Hamming ball of radius r centered at the vector x and |B(x, r)| is its volume. Using the standard approximation |B(x, r)| ≈ 2^(n H(r/n)) to estimate the number of words in the Hamming ball, we have |B(y, (p + ε)n)| ≈ 2^(n H(p + ε)). Hence the above probability amounts to 2^(n H(p + ε)) / 2^n = 2^(−n(1 − H(p + ε))). Now using the union bound over the messages m′ ≠ m, we can upper bound the probability that such an m′ exists by 2^k · 2^(−n(1 − H(p + ε))), which is exponentially small, as desired, by the choice of k. A detailed proof: From the above analysis, we bound the probability of the event that the decoded codeword, given the channel noise, is not the same as the original message sent. Let p(y | E(m)) denote the probability of receiving codeword y given that codeword E(m) was sent. Splitting the failure event according to whether or not y lies in the ball B(E(m), (p + ε)n), and bounding the first part with the Chernoff bound as above, we get Pr over e [D(E(m) + e) ≠ m] ≤ 2^(−Θ(ε²n)) + Σ over y ∈ B(E(m), (p + ε)n) of p(y | E(m)) · 1[there exists m′ ≠ m with Δ(y, E(m′)) ≤ Δ(y, E(m))]. Now taking expectation on both sides over the random choice of E, and using the fact that for each fixed y the codewords E(m′), m′ ≠ m, are uniform and independent of E(m), the expected value of the indicator above is at most 2^k · 2^(−n(1 − H(p + ε))), from the analysis in the higher level proof above. Hence, taking everything together, we have E_E[ Pr over e [D(E(m) + e) ≠ m] ] ≤ 2^(−Θ(ε²n)) + 2^k · 2^(−n(1 − H(p + ε))) ≤ 2^(−δ′n), by appropriately choosing the value of δ′ > 0; this is possible since k ≤ ⌊(1 − H(p + ε))n⌋. Since the above bound holds for each message m, we also have E_E[ (1/2^k) Σ over m of Pr over e [D(E(m) + e) ≠ m] ] ≤ 2^(−δ′n). Here we changed the order of summation in the expectation with respect to the message and the choice of the encoding function E, without loss of generality. Hence, by the probabilistic method, there is some encoding function E and a corresponding decoding function D such that (1/2^k) Σ over m of Pr over e [D(E(m) + e) ≠ m] ≤ 2^(−δ′n). At this point, the error bound we have holds only on average over the messages m.
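For readability, the chain of inequalities just derived can be collected into a single display; this is only a compact restatement of the bound above, under the same assumptions and with δ′ chosen as described:

    \mathbb{E}_{E}\Big[\Pr_{e \in \mathrm{BSC}_p}\big[D(E(m)+e) \neq m\big]\Big]
    \;\le\; \underbrace{2^{-\Theta(\varepsilon^2 n)}}_{\text{atypical noise}}
    \;+\; \underbrace{2^{k}\, 2^{-n\left(1 - H(p+\varepsilon)\right)}}_{\text{union bound over } m' \neq m}
    \;\le\; 2^{-\delta' n}.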
But we need to make sure that the above bound holds for all the messages simultaneously. For that, let us sort the 2^k messages by their decoding error probabilities. Now, by applying Markov's inequality, the decoding error probability of each of the best 2^(k−1) messages can be shown to be at most twice the average, that is, at most 2 · 2^(−δ′n). Thus, in order to confirm that the above bound holds for every message, we could just trim off the worst 2^(k−1) messages from the sorted order. This essentially gives us another encoding function E′ with a corresponding decoding function D′, with a decoding error probability of at most 2 · 2^(−δ′n) for every remaining message, at essentially the same rate (the dimension drops from k to k − 1). Taking δ slightly smaller than δ′, we bound the decoding error probability by 2^(−δn). This expurgation process completes the proof of Theorem 1. Converse of Shannon's capacity theorem The converse of the capacity theorem essentially states that 1 − H(p) is the best rate one can achieve over a binary symmetric channel. Formally the theorem states: if k ≥ ⌈(1 − H(p) + ε)n⌉ for some ε > 0, then for every pair of encoding and decoding functions E and D there is some message m whose decoding error probability is bounded away from zero. For a detailed proof of this theorem, the reader is asked to refer to the bibliography. The intuition behind the proof is, however, that the number of errors grows rapidly as the rate grows beyond the channel capacity. The idea is that the sender generates messages of dimension k, while the channel BSC_p introduces transmission errors. When the capacity of the channel is 1 − H(p), the number of typical error patterns is roughly 2^(n H(p)) for a code of block length n. The maximum number of messages is 2^k. The output of the channel, on the other hand, has 2^n possible values. If there is any confusion between any two messages, it is likely that 2^k · 2^(n H(p)) ≥ 2^n. Hence we would have k ≥ (1 − H(p))n, a case we would like to avoid in order to keep the decoding error probability exponentially small. Codes for BSC_p Very recently, a lot of work has been done, and is also being done, to design explicit error-correcting codes to achieve the capacities of several standard communication channels. The motivation behind designing such codes is to relate the rate of the code to the fraction of errors which it can correct. The approach behind the design of codes which meet the channel capacity of BSC_p has been to correct a lesser number of errors with a high probability, and to achieve the highest possible rate. Shannon's theorem gives us the best rate which could be achieved over a BSC_p, but it does not give us an idea of any explicit codes which achieve that rate. In fact such codes are typically constructed to correct only a small fraction of errors with a high probability, but achieve a very good rate. The first such code was due to George D. Forney in 1966. The code is a concatenated code formed by concatenating two different kinds of codes. We shall discuss the construction of Forney's code for the binary symmetric channel and analyze its rate and decoding error probability briefly here. Various explicit codes for achieving the capacity of the binary erasure channel have also come up recently. Forney's code for BSC_p Forney constructed a concatenated code C* = C_out ∘ C_in to achieve the capacity of Theorem 1 for BSC_p. In his code, - The outer code C_out is a code of block length N and rate 1 − ε/2 over the field F_(2^k), with k = O(log N). Additionally, we have a decoding algorithm D_out for C_out which can correct up to γ fraction of worst case errors and runs in t_out(N) time. - The inner code C_in is a code of block length n, dimension k, and a rate of 1 − H(p) − ε/2. Additionally, we have a decoding algorithm D_in for C_in with a decoding error probability of at most γ/2 over BSC_p which runs in t_in(n) time. For the outer code C_out, a Reed-Solomon code would have been the first code to have come to mind. However, we would see that the construction of such a code cannot be done in polynomial time. This is why a binary linear code is used for C_out.
The rate of C* is the product of the rates of its component codes: R(C*) = R(C_in) · R(C_out) = (1 − ε/2)(1 − H(p) − ε/2) ≥ 1 − H(p) − ε, which almost meets the capacity. We further note that the encoding and decoding of C* can be done in polynomial time with respect to N: encoding amounts to encoding with C_out and then with C_in on each symbol, and the decoding algorithm described below takes time N · t_in(n) + t_out(N), which is polynomial in N as long as t_out(N) is polynomial in N and t_in(n) is at most exponential in n = O(log N). Decoding error probability for C* A natural decoding algorithm for C* is to: - Execute D_in on each of the N received blocks y_i; - Execute D_out on the resulting outer word (D_in(y_1), ..., D_in(y_N)). Note that each block of the code C_in is considered a symbol for C_out. Now, since the probability of error at any index i for D_in is at most γ/2 and the errors in BSC_p are independent, the expected number of errors for D_in is at most γN/2 by linearity of expectation. Now applying the Chernoff bound, we can bound the probability of more than γN errors occurring by e^(−γN/6). Since the outer code C_out can correct at most γN errors, this is the decoding error probability of C*. This, when expressed in asymptotic terms, gives us an error probability of 2^(−Ω(γN)). Thus the achieved decoding error probability of C* is exponentially small, as in Theorem 1. We have given a general technique to construct C*. For more detailed descriptions of C_in and D_in please read the following references. Recently a few other codes have also been constructed for achieving the capacities; LDPC codes (see Richardson and Urbanke below) have been considered for this purpose for their faster decoding time. - David J. C. MacKay. Information Theory, Inference, and Learning Algorithms. Cambridge: Cambridge University Press, 2003. ISBN 0-521-64298-1. - Thomas M. Cover, Joy A. Thomas. Elements of Information Theory, 1st edition. New York: Wiley-Interscience, 1991. ISBN 0-471-06259-6. - Atri Rudra's course on Error Correcting Codes: Combinatorics, Algorithms, and Applications (Fall 2007), Lectures 9, 10, 29, and 30. - Madhu Sudan's course on Algorithmic Introduction to Coding Theory (Fall 2001), Lectures 1 and 2. - G. David Forney. Concatenated Codes. MIT Press, Cambridge, MA, 1966. - Venkatesan Guruswami's course on Error-Correcting Codes: Constructions and Algorithms, Autumn 2006. - C. E. Shannon. A Mathematical Theory of Communication. Reprinted in ACM SIGMOBILE Mobile Computing and Communications Review. - Tom Richardson and Rüdiger Urbanke. Modern Coding Theory. Cambridge University Press.
An Analysis of the Civil Rights Act of 1964: A Legislated Response to Racial Discrimination in the U.S., by Henry A. Rhodes Guide Entry to 82.03.04: This unit deals with the origin and content of the Civil Rights Act of 1964. It examines the contributions made by the Roosevelt, Truman, Eisenhower, Kennedy and Johnson administrations to this civil rights legislation. The effect that the civil rights demonstrations of the early 1960s had on this civil rights bill is also analyzed. This unit also examines the effect that the C.R.A. of 1964 had on American society. One special feature of this curriculum unit is a taped interview with Burke Marshall, the former Assistant Attorney General in charge of the Civil Rights Division of the Justice Department during the enactment of the C.R.A. of 1964. When the students finish this unit they should be able to explain or describe the following: 1. The position and contributions of the Roosevelt, Truman, Eisenhower, Kennedy and Johnson administrations toward civil rights for the American Negro and the Civil Rights Act of 1964. 2. The need for the Civil Rights Act of 1964. 3. The effect of the Civil Rights Movement and its leaders on the enactment of the Civil Rights Act of 1964. 4. The major contributions of the Civil Rights Act of 1964 to American society with respect to equality. 5. The ten Titles which compose the Civil Rights Act of 1964. (Recommended for 7th- and 8th-grade U.S. Civics and 9th- through 12th-grade U.S. History) Keywords: Civil Rights Act of 1964; American Afro-American History; Prejudice
A road map or route map is a map that primarily displays roads and transport links rather than natural geographical information. It is a type of navigational map that commonly includes political boundaries and labels, making it also a type of political map. In addition to roads and boundaries, road maps often include points of interest, such as prominent businesses or buildings, tourism sites, parks and recreational facilities, hotels and restaurants, as well as airports and train stations. Road maps come in many shapes, sizes and scales. Small, single-page maps may be used to give an overview of a region's major routes and features. Folded maps can offer greater detail covering a large region. Electronic maps typically present a dynamically generated display of a region, with its scale, features, and level of detail specified by the user. Road maps can also vary in complexity, from a simple schematic map used to show how to get to a single specific destination (such as a business), to a complex electronic map, which may layer together many different types of maps and information – such as a road map plotted over a topographical 3D satellite image (a viewing mode frequently used within Google Earth). A road atlas is a collection of road maps covering a region as small as a city or as large as a continent, typically bound together in a book. Spiral binding is a popular format for road atlases, to permit lay-flat usage and to reduce wear and tear. Atlases may cover a number of discrete regions, such as all of the states or provinces of a given nation, or a single continuous region in high detail split across several pages. Road maps often distinguish between major and minor thoroughfares (such as motorways vs. surface streets) by using thicker lines or bolder colors for the major roads. Printed road maps commonly include an index of cities and other destinations found on the map; smaller-scale maps often include indexes of streets and other routes. These indexes give the location of the feature on the map via a grid reference. Inset maps may be used to provide greater detail for a specific area, such as a city map inset into a map of a state or province. Often a distance matrix is included showing the distance between pairs of cities. Since it is a symmetric matrix, only the upper triangle is displayed.
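The upper-triangle convention is easy to see in a small worked example. The Python sketch below prints such a table; the city names and distances are made up purely for illustration:

    # A distance matrix for three cities, printed upper-triangle only,
    # as on a printed road map. Since d(a, b) = d(b, a), the lower
    # triangle and the diagonal carry no extra information.
    cities = ["Ashford", "Borden", "Carwell"]
    dist = {
        ("Ashford", "Borden"): 120,
        ("Ashford", "Carwell"): 210,
        ("Borden", "Carwell"): 95,
    }

    print("\t" + "\t".join(cities))
    for i, a in enumerate(cities):
        row = [a]
        for j, b in enumerate(cities):
            row.append(str(dist[(a, b)]) if j > i else "")
        print("\t".join(row))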
Solar thermal systems are a promising renewable energy solution -- the sun is an abundant resource. Except when it's nighttime. Or when the sun is blocked by cloud cover. Thermal energy storage (TES) systems are high-pressure liquid storage tanks used along with a solar thermal system to allow plants to bank several hours of potential electricity. Off-peak storage is a critical component of the effectiveness of solar thermal power plants. Three primary TES technologies have been tested since the 1980s, when the first solar thermal power plants were constructed: a two-tank direct system, a two-tank indirect system and a single-tank thermocline system. In a two-tank direct system, solar thermal energy is stored in the same heat-transfer fluid that collected it. The fluid is divided into two tanks, one tank storing it at a low temperature and the other at a high temperature. Fluid stored in the low-temperature tank runs through the power plant's solar collector, where it's reheated and sent to the high-temperature tank. Fluid stored at a high temperature is sent through a heat exchanger that produces steam, which is then used to produce electricity in the generator. And once it's been through the heat exchanger, the fluid returns to the low-temperature tank. A two-tank indirect system functions basically the same as the direct system, except it works with different types of heat-transfer fluids, usually those that are expensive or not intended for use as a storage fluid. To overcome this, indirect systems pass the low-temperature fluid through an additional heat exchanger. Unlike the two-tank systems, the single-tank thermocline system stores thermal energy in a solid, usually silica sand. Inside the single tank, parts of the solid are kept at low to high temperatures, in a temperature gradient, depending on the flow of fluid. For storage purposes, hot heat-transfer fluid flows into the top of the tank and cools as it travels downward, exiting as a low-temperature liquid. To generate steam and produce electricity, the process is reversed. Solar thermal systems that use mineral oil or molten salt as the heat-transfer medium are prime candidates for TES, but unfortunately, without further research, systems that run on water/steam aren't able to store thermal energy. Other advancements in heat-transfer fluids include research into alternative fluids, using phase-change materials and novel thermal storage concepts, all in an effort to reduce storage costs and improve performance and efficiency.
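The amount of heat such a two-tank system banks follows from the sensible-heat relation Q = m · c · ΔT. The back-of-the-envelope Python sketch below uses rough literature figures for a nitrate "solar salt"; every number is an assumption chosen only for illustration, not data from the text above:

    # Back-of-the-envelope sizing of a two-tank molten-salt store.
    # All property values below are rough, assumed figures.
    mass = 1.0e6       # kg of salt (about 1,000 tonnes)
    c_p = 1500.0       # specific heat of a nitrate solar salt, J/(kg*K), approx.
    t_hot = 565.0      # hot-tank temperature, deg C
    t_cold = 290.0     # cold-tank temperature, deg C

    energy_j = mass * c_p * (t_hot - t_cold)   # sensible heat Q = m * c * dT
    energy_mwh_th = energy_j / 3.6e9           # joules -> thermal megawatt-hours
    print(f"stored heat: {energy_mwh_th:.0f} MWh (thermal)")  # ~115 MWh_th

Note that this is thermal energy; the electricity recovered would be smaller by the steam cycle's conversion efficiency.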
By combining the features of a scanning tunneling microscope (STM) and an atomic force microscope (AFM)—two of the most useful nanotech tools—in a single instrument, IBM scientists have measured the forces necessary to move single cobalt atoms and single carbon monoxide molecules across metal surfaces. A better understanding of the forces involved in using scanning probes to manipulate atoms and molecules should allow a more systematic development of some types of nanotechnology. As described by Kenneth Chang in The New York Times in the article “Scientists Measure What It Takes to Push a Single Atom”: I.B.M. scientists have measured the force needed to nudge one atom. About one-130-millionth of an ounce of force pushes a cobalt atom across a smooth, flat piece of platinum. Pushing the same atom along a copper surface is easier, just one-1,600-millionth of an ounce of force. I.B.M. scientists have been pushing atoms around for some time, since Donald M. Eigler of the company’s Almaden Research Center in San Jose, Calif., spelled “IBM” using 35 xenon atoms in 1989. Since then, researchers at the company have continued to explore how they might be able to construct structures and electronic components out of individual atoms. Knowing the precise forces required to move atoms “helps us to understand what is possible and what is not possible,” said Andreas J. Heinrich, a physicist at Almaden and an author of the new Science paper. “It’s a stepping stone for us, but it’s by no means the end goal.” In the experiment, Dr. Heinrich and his collaborators at Almaden and the University of Regensburg in Germany used the sharp tip of an atomic force microscope to push a single atom. To measure the force, the tip was attached to a small tuning fork, the same kind that is found in a quartz wristwatch. In fact, for the first prototype, Franz J. Giessibl, a scientist at Regensburg who was a pioneer in the use of atomic force microscopes, bought an inexpensive watch and pulled out the quartz tuning fork for use in the experiment. The tip vibrates 20,000 times a second until it comes into contact with an atom. As the tip pushes, the tuning fork bends, like a diving board, and the vibration frequency dips. What this increased understanding of pushing atoms on metal surfaces will mean for using scanning probe microscopes to make and break covalent bonds, as a path toward productive nanosystems, can only be determined as further work is done. One would think it should be quite useful to be able to precisely measure the forces involved in interacting with atoms and molecules. The researchers end their paper by saying “A systematic investigation of the manipulation forces on different surface-adsorbate combinations is now possible, and the driving mechanism to create future nanoscale devices can be explored in a quantitative manner.” The research was published in Science.
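Fractions of an ounce-force are hard to picture; converting the newspaper's figures to SI units makes the scale clearer. The sketch below simply takes the quoted numbers at face value (1 ounce-force is approximately 0.278 newtons); it is a unit conversion of the article's figures, not values from the Science paper itself:

    # Convert the article's ounce-force figures to SI units.
    OUNCE_FORCE_N = 0.278  # newtons per ounce-force, approx.

    cobalt_on_platinum = OUNCE_FORCE_N / 130e6    # "one-130-millionth of an ounce"
    cobalt_on_copper   = OUNCE_FORCE_N / 1600e6   # "one-1,600-millionth of an ounce"

    print(f"Co on Pt: {cobalt_on_platinum * 1e9:.2f} nN")   # ~2.1 nanonewtons
    print(f"Co on Cu: {cobalt_on_copper * 1e12:.0f} pN")    # ~170 piconewtons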
This composite of the Galilean satellites shows images of the moons taken by the Galileo spacecraft. Details of their surfaces are given in the lower two rows of the composite, including features produced through volcanism, ice, and cratering. The Galilean satellites are the four major moons of Jupiter: Io, Europa, Ganymede, and Callisto. In this picture, Io and Io’s surface are shown on the left-most end, then Europa and its surface, then Ganymede, then Callisto. Of Jupiter’s 60 moons, these four are the biggest. These moons were discovered by Galileo Galilei in 1610. Their discovery provided the key piece of evidence for Galileo's argument that the Earth was not the center of the Universe. Although Galileo initially thought they were stars, through continued observations over a couple of weeks he realized that the objects he had observed remained in the vicinity of Jupiter. He was finally able to show that these objects were orbiting Jupiter, thus proving that not all objects in the heavens orbited the Earth. Interestingly, Galileo named these natural satellites of Jupiter the "Medicean satellites", after the famous Medici family of Renaissance Italy. The colorful names we now use for these satellites can be attributed to Simon Marius (who claimed to have observed the satellites before Galileo in 1609, but did not publish his findings). Marius credited the names to a suggestion from Johannes Kepler in 1613.
The growing concern over pollution and climate change has numerous companies and individuals researching the need for a change in energy sources. Fossil fuels have been the world’s energy resource for generations. With advancing technology, new clean and climate-stabilizing resources are available in the form of renewable energy. In broad terms, renewable energy is created from natural resources that are continuously replenished. The environmental benefits of renewable energy extend beyond the creation of energy for common usage. Because it produces no pollutants, renewable energy is often referred to as green energy or clean energy.

The use of solar or photovoltaic power involves harnessing the sun’s daily energy to produce an ongoing energy resource. The collection of solar power may be done with mirrors, heat-absorbing solar panels or semiconducting chips for conversion into energy. The efficiency and reliability of solar cells have increased over the years, and the units are easier to transport and install. Numerous building sites use solar panels during initial construction when other electrical resources are not readily available. According to Water Heater Hub, solar water heaters are becoming more popular each year, especially in warmer climates. Their biggest downside is the high initial cost compared to more traditional models. Solar energy may be used throughout the average home or business for heating resources. By constructing large solar farms, the sun’s natural resource may be used to produce enough electricity to power thousands of homes.

The growth of the wind industry is proof of the success of converting a natural resource into a clean, affordable and efficient energy resource. The production of wind energy occurs when wind turbines harness the kinetic energy of the wind moving through the blades of the turbine. The turbines transfer the harnessed energy into electrical energy to be delivered to the connected grid. The use of wind power is continuously growing as large-scale wind farms are constructed. With one of the lowest impacts on the environment, wind farms are gaining in popularity. Currently, the costs are high for a large-scale wind turbine. Smaller residential units are being designed to help the average person reduce greenhouse emissions and lower electrical costs.

Biomass converts waste materials into an energy resource. Depending on the area, biomass energy has both negative and positive attributes, and numerous companies are staying away from wide-spread usage by focusing on other renewable energy resources. Biomass creates energy from corn stalks, grasses, and forest residues. As areas are cleared for biomass use, the protection of wildlife and the removal of decaying debris are two of the main environmentally friendly benefits. Initially, biomass-produced energy resources are better than fossil fuels, but the negative attributes may actually harm the environment that biomass energy is trying to protect. The main downside to biomass energy sources is the depletion or degradation of natural habitats. The reduction of natural resources threatens biodiversity and public health.

Hydro-powered electricity is created from the use of flowing water, from waves or a waterfall. The vast oceans are an ideal resource for the collection of hydroelectricity. Hydropower is beneficial for both the surrounding local habitats and the entire global environment. A properly built and maintained hydro collecting unit does not create any type of direct waste.
Smaller-scale hydro units are being installed in rural areas with small rivers or fast-flowing streams to help produce electricity. Geothermal energy is produced from the Earth’s heat. Beneath the Earth’s surface, hot water and hot rock are readily available. Converted into electricity, this high-temperature heat is a cost-effective, environmentally safe and reliable resource. Both residential areas and businesses may benefit from the use of geothermal power to produce adequate energy for their establishments.

Renewable energy resources are the future for the world’s growing electricity needs. Advancing technology has allowed companies to create efficient methods for harnessing various renewable energy resources. As efficiency rises, effectiveness increases, allowing for wide-spread use in both residential and industrial communities. Along with creating an environmentally friendly impact, renewable energy increases the need for employment within a specialized field. The renewable energy industry will continue to grow as the need for harnessing these natural resources increases.
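To make the wind-power discussion above more concrete, here is a minimal sketch of the standard kinetic-energy formula for the power available to a turbine, P = ½ρAv³ scaled by a power coefficient. The formula is textbook physics rather than anything stated in the article, and the rotor size, wind speed, and efficiency figures below are illustrative assumptions.

```python
import math

# Power extracted by a wind turbine (textbook approximation):
#   P = 0.5 * rho * A * v**3 * Cp
# rho: air density, A: rotor swept area, v: wind speed,
# Cp: power coefficient (cannot exceed the Betz limit of ~0.593).

def turbine_power_watts(rotor_radius_m: float, wind_speed_ms: float,
                        cp: float = 0.40, air_density: float = 1.225) -> float:
    swept_area = math.pi * rotor_radius_m ** 2
    return 0.5 * air_density * swept_area * wind_speed_ms ** 3 * cp

# Illustrative numbers: a 40 m rotor radius in a 10 m/s wind.
power = turbine_power_watts(rotor_radius_m=40, wind_speed_ms=10)
print(f"~{power / 1e6:.1f} MW")  # roughly 1.2 MW

# The cubic dependence on wind speed shows why siting matters so much:
print(f"At 5 m/s: ~{turbine_power_watts(40, 5) / 1e6:.2f} MW")  # ~0.15 MW
```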
A few fragmentary bones thought to be the remains of Neanderthals actually belonged to medieval Italians, new research finds. The study is a reanalysis of a tooth, which was found in a cave in northeastern Italy along with a finger bone and another tooth. Originally, researchers identified these scraps as belonging to Neanderthals, the early cousins of humans who went extinct about 30,000 years ago. Instead, the new study reveals the bones to belong to modern Homo sapiens. There's no telling who the original owner of the teeth and finger was, but the cave where they were discovered was both a hermitage, or dwelling place, and the site of a grisly medieval massacre. The teeth and the bone were found in the San Bernardino Cave in the 1980s in a rock layer dating back to Neanderthal times, approximately 28,000 to 59,000 years ago. But location alone is not enough for a firm identification, said study researcher Stefano Benazzi, a physical anthropologist at the Max Planck Institute for Evolutionary Anthropology in Germany. An analysis of the bones themselves is necessary, too. Earlier, researchers had conducted this analysis, but they lacked the high-tech tools available to scientists today. "The taxonomical discrimination of the species was based mainly on the layer the human fossil was found instead of the morphological features," or shape and size of the bones, Benazzi told LiveScience. The size and shape of the teeth were consistent with belonging to Homo sapiens, but their rock layer suggested Neanderthal. A look back at the excavations revealed murky geology — at some point in the late Middle Ages, a wall to seal off the cave had been built, potentially disturbing the rock layers and preventing the researchers from using the layers as proof of age.

Human or Neanderthal?
Benazzi and his colleagues took a direct approach, analyzing one of the teeth, a molar, found in the cave. (These analyses require the destruction of part of the bone, which is why they are often not done.) First, they took a look at the shape of the tooth using micro-computed tomography (CT), a scanning method that allows researchers to create virtual 3D models of an object. They also sampled for mitochondrial DNA, a type of DNA passed down the maternal line. Next, they used radiocarbon dating to determine the age of the tooth. Finally, they analyzed molecular traces in the tooth to determine the individual's diet. The results converged on one answer: This tooth was not Neanderthal. The shape was somewhat ambiguous, but suggestive of a Homo sapiens' tooth. The DNA looked far more human than Neanderthal. The date sealed the deal: Instead of being at least 30,000 years old, the tooth dated back to between A.D. 1420 and 1480. The diet analysis revealed that the ratio of plants and meat eaten by the tooth's owner was consistent with the diet of a medieval Italian who ate millet, a plant not even introduced to Italy until 5,000 years ago or later. "It's great that technology has advanced so far now that we can reassess these older finds," said Kristina Killgrove, a biological anthropologist at the University of West Florida who was not involved in the study. "Now we can use carbon-14 dating and ancient DNA and compare it to the Neanderthal genome."
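As a rough illustration of why the radiocarbon date was so decisive, here is a minimal sketch of the decay arithmetic behind carbon-14 dating. It uses the standard half-life of about 5,730 years and ignores the calibration curves that real radiocarbon labs apply, so the numbers are illustrative only.

```python
import math

C14_HALF_LIFE_YEARS = 5730  # modern accepted half-life of carbon-14
DECAY_CONSTANT = math.log(2) / C14_HALF_LIFE_YEARS

def c14_fraction_remaining(age_years: float) -> float:
    """Fraction of the original carbon-14 left after a given time."""
    return math.exp(-DECAY_CONSTANT * age_years)

def age_from_fraction(fraction: float) -> float:
    """Uncalibrated age implied by a measured carbon-14 fraction."""
    return -math.log(fraction) / DECAY_CONSTANT

# A medieval tooth (~550 years old) vs. a Neanderthal-era one (~30,000 years):
print(f"~550 yr:    {c14_fraction_remaining(550):.1%} C-14 remaining")     # ~93.6%
print(f"~30,000 yr: {c14_fraction_remaining(30_000):.1%} C-14 remaining")  # ~2.7%

# So a lab measuring ~94% of the modern C-14 level infers an age of:
print(f"Implied age: ~{age_from_fraction(0.94):.0f} years")  # ~512 years
```

The two regimes are unmistakably different, which is why the date "sealed the deal."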
Though the researchers did not chemically analyze the other tooth and finger bone, their sizes and close association with the molar suggest that they, too, are medieval in origin.

A grisly history
The discovery of medieval bones highlights the cave's long history. It served as a hermitage in the 1400s, and was possibly inhabited by San Bernardino of Siena, a priest and missionary who spent time in the area. In 1510, during the War of the League of Cambrai, the cave was the site of a massacre of local people by mercenary troops. Some died of asphyxiation in the cave itself, where they had fled to seek refuge. Whether the bones belong to one of those victims or to another medieval Italian is unknown, but the construction of a wall over the cave mouth in the Late Middle Ages likely pushed the bones into the deeper rock layers, where they were mistaken for Neanderthal remains. After the massacre, the site became a church. The re-categorization of the bones also shows that anthropology should not focus only on new finds, but also needs to look back at old discoveries, Benazzi said. "We show that a lot of fossils discovered in the past, San Bernardino as an example, need to be reassessed," he said. That work is ongoing, he added, and his research group is working to analyze other remains found in other caves. The findings will be reported in an upcoming issue of the Journal of Human Evolution.
Magnetic vortices could form the basis for future high-density, low-power magnetic data storage. With the ever-increasing amounts of digital information being processed, transferred and stored by computers comes a commensurate demand for increased data storage capacity. For magnetic data storage such as the ubiquitous hard disk drive, this requires not only physically smaller memory elements or bits, but also reduced switching power to avoid heat issues. Yoshinori Tokura and colleagues from the RIKEN Center for Emergent Matter Science, in collaboration with a research team from the University of Tokyo, have now shown that structural control of small magnetic vortex structures called skyrmions could lead to a compact, low-power alternative to conventional magnetic data storage.1 Skyrmions are very stable magnetic structures that can form within a chiral crystal lattice. "Each skyrmion can be considered as a single particle and could represent an information bit," says Kiyou Shibata from the University of Tokyo. "The small size of skyrmions is also of great advantage to high-density integration in devices." Skyrmions occur only rarely, in certain magnetic compounds. They began to attract interest for practical applications only when the RIKEN researchers, in previous work, were able to demonstrate that skyrmions can exist near room temperature and can be manipulated using electrical current densities about 100,000 times lower than those required for controlling conventional ferromagnetic structures. In another step toward achieving better control over the properties of skyrmions, Tokura, Shibata and their colleagues studied a range of manganese–germanium magnets, preparing compounds in which they increasingly replaced manganese with iron. Using a powerful Lorentz microscope capable of visualizing magnetic structures on a nanometer scale, they then studied skyrmions in the various magnets. The researchers observed that the size of the skyrmions changes continuously with composition and that a ratio of about 80% iron to 20% manganese changes their orientation. This finding is explained by an iron-dependent change in the coupling between the magnetic properties of the electrons in the magnet and their motion around the atomic cores. It will now be possible to consider practical schemes for designing devices based on skyrmions. For example, skyrmions with the desired size and orientation can be created by tuning the composition of the magnet. "The next stage of our research will focus on the manipulation of skyrmions. In particular, the dynamics of isolated skyrmions in confined structures have been predicted theoretically and would be a good subject for experimental research."
In a potential breakthrough for human babies born prematurely, scientists announced this year they’d successfully removed lamb fetuses from their mothers’ wombs and raised them into healthy sheep. Their survival comes thanks to an artificial placenta — called a BioBag — created by researchers at the Children’s Hospital of Philadelphia. The fake womb consists of a clear plastic bag filled with electrolytes. The lamb’s umbilical cord pulls in nutrients, and its heart pumps blood through an external oxygenator. The success caps a decades-long effort toward a working artificial placenta. The BioBag could improve human infant mortality rates and lower the chances of a premature baby developing lung problems or cognitive disorders. But there are still challenges in scaling the device for human babies, which are much smaller than lambs. The scientists are also refining the electrolyte mix and studying how to connect human umbilical cords. They expect human trials in three to five years.
This is an excerpt from Kinetic Anatomy 4th Edition With HKPropel Access by Robert S. Behnke & Jennifer L. Plant. The two major organs housed within the thorax are the heart and lungs. As with all other anatomical structures, the heart and lungs need nerves and blood vessels to accomplish their functions. The nerves of the heart are cardiac branches of the vagus nerve and fibers arising from trunks of the sympathetic nervous system. When stimulated, the sinoatrial node (SA node) of the heart, found in the area of the right atrium near the superior vena cava, sends the impulse to the myocardium of the right and left atria. Specialized tissue (myocardial cells) of the atrioventricular node (AV node; located in the lower aspect of the interatrial septum) receives the impulse after it has passed through the atria (figure 10.16). The impulse continues through the bundle of His (atrioventricular bundle), which divides into left and right branches (becoming known as myofibers of conduction, or Purkinje fibers) that enter the muscular walls (myocardium) of the ventricles and the papillary muscles. The result is atrial contraction rapidly followed by ventricular contraction. The respiratory center is a group of cell bodies located on each side of the medulla oblongata. Arising from these centers are two sets of nerves (figure 10.17): (1) the phrenic nerves, arising from the cervical plexus and leading to the diaphragm, and (2) the intercostal nerves, which innervate the intercostal muscles. When the respiratory center is stimulated by carbon dioxide, it sends an impulse over the phrenic nerves, causing the diaphragm to contract (pull downward) and increasing space in the thoracic cavity. At the same time, the intercostal nerves cause the intercostal muscles to contract, lifting the ribs and increasing the space within the thoracic cavity. This change in the capacity of the thoracic cavity creates a pressure difference relative to the atmosphere, causing air to rush in and distend the lungs (inspiration). When the lungs have been distended to a certain point, sensory nerves running from the air sacs, via the vagus nerve, send an impulse to the respiratory center to inhibit it. This stops the center’s impulses to the phrenic and intercostal nerves, causing the diaphragm and intercostal muscles to relax and resulting in a reduction in the size of the thoracic cavity, forcing air out of the lungs (expiration). Two of the largest arteries of the heart are the pulmonary artery, coming from the right ventricle, and the aorta, coming from the left ventricle (figure 10.18). These arteries have valves at their ventricular ends to prevent any backflow of blood. The pulmonary artery has a three-flap valve known as the pulmonary semilunar valve, and the aorta has a similar valve known as the aortic semilunar valve. The right and left coronary arteries come from the aorta and are located on the outer surface of the heart, supplying blood flow to the muscular walls of the heart (myocardium). The branches of the right coronary artery include the posterior (dorsal) interventricular and marginal arteries, which supply the anterior surface of the right ventricle; the aortic and pulmonary branches, which supply the aorta and pulmonary arteries; the interventricular, which supplies both ventricles; the right atrial, which supplies the right atrium surface; and the right marginal, which supplies the inferior surfaces of both ventricles.
The branches of the left coronary artery include the aortic and pulmonary branches, which supply the aorta and pulmonary arteries; the circumflex, which supplies the left atrium and ventricles; the anterior (ventral) interventricular, which supplies both ventricles; and the left atrial, which supplies the left atrium. While not the only area of concern regarding the heart, it should be noted that the right and left coronary arteries (and their branches) are often the sites of blockages that diminish or stop blood flow to areas of the heart tissue (muscle) and thus cause a “heart attack.” Frequently, one of the major causes of this condition is the accumulation of low-density lipoprotein (LDL) cholesterol within these vessels, which reduces or stops blood flow to the heart muscle. Poor diet, lack of exercise, excess weight, and smoking are some of the factors that can contribute to problems of the heart. Blood from the body drains to the heart via two major veins: The superior vena cava (and its tributaries) drains the upper extremities, head, neck, shoulders, thorax, and a portion of the abdominal wall into the heart’s right atrium. The inferior vena cava (and its tributaries) drains the lower extremities, pelvis, abdominal viscera, and a portion of the abdominal wall into the right atrium. The left atrium contains the opening for the pulmonary veins, which bring the blood from the lungs to the heart. The right and left coronary veins drain into the coronary sinus, which empties into the right atrium. Tributaries of the coronary sinus include the great cardiac vein, which drains the left atrium and both ventricles into the coronary sinus. The great cardiac vein also has a tributary: the left margin vein, which drains the left margin of the heart. Other coronary sinus tributaries include the inferior cardiac vein of the left ventricle, which drains the inferior surface of the left ventricle; the middle cardiac vein, which drains both ventricles and empties into the coronary sinus; the oblique vein of the left atrium, which drains the left atrium into the coronary sinus; and the small cardiac vein, which drains the right atrium and right ventricle into the coronary sinus (figure 10.18). A few veins of the heart do not drain into the coronary sinus. These veins include the anterior cardiac veins, which arise from the wall of the right ventricle and empty into the right atrium, and the venae cordis minimae, which are small veins in the heart walls that drain into the atria.

Respiratory Arteries and Veins
Blood flow to and from the lung tissues is accomplished through branches of the bronchial arteries and bronchial veins. More in-depth discussions of oxygen and carbon dioxide levels and changes in atmospheric pressures are found in coursework and texts in human physiology. Arterial blood supplies the body cells with oxygen and is therefore well oxygenated. Venous blood is less oxygenated as it returns to the lungs. The blood in the pulmonary veins returning from the lungs to the heart is highly oxygenated, however (figure 10.19). In other words, the pulmonary veins are the only veins in the body that carry oxygen-rich blood. The respiratory exchange of oxygen and carbon dioxide takes place in the capillaries of the pulmonary vessels in the walls of the air sacs of the lungs (figure 10.20).
Although water is composed of oxygen and hydrogen atoms, biological life in water depends upon another form of oxygen—molecular oxygen. Oxygen is used by organisms in aerobic respiration, where energy is released by the oxidation of sugar in the mitochondria. This form of oxygen can fit into the spaces between water molecules and is available to aquatic organisms. Fish, invertebrates, and other aquatic animals depend upon the oxygen dissolved in water. Without this oxygen, they would suffocate. Some organisms, such as salmon, mayflies, and trout, require high concentrations of oxygen in their water. Other organisms, such as catfish, midge fly larvae, and carp, can survive with much less oxygen. The ecological quality of the water depends largely upon the amount of oxygen the water can hold. The quality of the water can be assessed with fair accuracy by observing the aquatic animal populations in a stream. In this experiment, you will
- Use a Dissolved Oxygen Probe to measure the concentration of dissolved oxygen in water.
- Study the effect of temperature on the amount of dissolved oxygen in water.
- Predict the effect of water temperature on aquatic life.
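To connect the temperature objective above to concrete numbers, here is a minimal sketch using typical textbook saturation values for dissolved oxygen in fresh water at sea level. The exact figures vary slightly between references, so treat them as illustrative rather than as the lab's calibration data.

```python
# Typical dissolved-oxygen saturation values for fresh water at sea level
# (mg/L), drawn from standard solubility tables; illustrative values only.
DO_SATURATION_MG_L = {
    0: 14.6, 5: 12.8, 10: 11.3, 15: 10.1, 20: 9.1, 25: 8.3, 30: 7.6,
}

def do_saturation(temp_c: float) -> float:
    """Linearly interpolate saturation DO between tabulated temperatures."""
    temps = sorted(DO_SATURATION_MG_L)
    if temp_c <= temps[0]:
        return DO_SATURATION_MG_L[temps[0]]
    if temp_c >= temps[-1]:
        return DO_SATURATION_MG_L[temps[-1]]
    for lo, hi in zip(temps, temps[1:]):
        if lo <= temp_c <= hi:
            frac = (temp_c - lo) / (hi - lo)
            return DO_SATURATION_MG_L[lo] + frac * (
                DO_SATURATION_MG_L[hi] - DO_SATURATION_MG_L[lo])

# Colder water holds more oxygen, which is why trout favor cold streams:
for t in (5, 15, 25):
    print(f"{t:>2} degC: ~{do_saturation(t):.1f} mg/L at saturation")
```

The downward trend with temperature is the pattern the probe measurements should reproduce.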
In a monarchy, the Crown is an abstract concept or symbol that represents the state and its government. In a constitutional monarchy such as Canada, the Crown is the source of non-partisan sovereign authority. It is part of the legislative, executive and judicial powers that govern the country. Under Canada’s system of responsible government, the Crown performs each of these functions on the binding advice, or through the actions of, members of Parliament, ministers or judges. As the embodiment of the Crown, the monarch — currently Queen Elizabeth II — serves as head of state. The Queen and her vice-regal representatives — the governor general at the federal level and lieutenant-governors provincially — possess what are known as prerogative powers: decisions that can be made without the approval of another branch of government, though these powers are rarely used. The Queen and her representatives also fulfill ceremonial functions as head of state.
I cannot understand what my child is saying, should I be concerned? Speech development in toddlers and children. Lips, teeth, nose, palate, tongue, cheeks, lungs: these are all body parts that every person uses for speaking! Learning to speak is a crucial part of a child’s development, and the most intensive period of speech and language development happens in the first three years of life.

Babies: 0-1 year
During their first year, children develop the ability to hear and recognise the sounds of their parents’ language. They experiment with sounds by babbling (e.g. “baba”, “babamada”), and over time, their babbling begins to sound more and more like real words. Between 9-12 months, babies communicate by babbling, using more sounds (e.g. d, m, n, h, w, t), and around 12 months babies begin to use some words.

Toddlers: 1-3 years
Toddlers experience a huge development in speech sounds and triple the number of words they can say between 1 and 2 years of age. As a result, their speech becomes easier to understand. At 2 years, 50% of their speech should be understood, and at 3 years, approximately 75% of their speech should be understood by family and friends. What can most toddlers do?
- By 2 years, toddlers can say a range of speech sounds when talking (e.g. p, b, m, t, d, n, h, w)
- By 3 years, toddlers can say even more sounds (e.g. k, g, f, s, ng)

Preschool and school-aged years
The year before a child starts school is extremely important for speech development. There is a strong link between speech sound articulation and academic success. For instance, in kindergarten, children have early words to learn to spell and read. If your child is not able to say specific sounds clearly, this may affect their spelling: if the word is ‘cat’ and they say ‘tat’, they will sound out t-a-t and write what they say. By the age of 4, 90-100% of your child’s speech should be understood by an unfamiliar listener.

What can parents do to help?
Parents can continue to help their children’s speech development by modelling the correct way of saying words, particularly when children make occasional sound errors. However, if a child’s speech is very difficult for parents to understand, or if children are using gestures (and grunts) in place of words, parents should contact a speech pathologist for further advice. If parents are concerned about their child’s speech development, they are advised to have their child’s hearing checked by an audiologist, as hearing is important in learning how to say sounds correctly. If you are worried about your child’s speech, if your child sounds different to the ages and stages outlined, or if your three- or four-year-old cannot be understood by adults, you may need to seek help from a speech pathologist. A speech pathologist has been professionally trained to advise, diagnose and work with adults and children who have difficulty in communicating. Feel free to contact us here or call our Speech Pathologist, Ahlam Hussein, on 0478 940 120, if you have concerns about your child’s speech or if you would like to discuss your child’s speech, language, and literacy development.
Three new species of miniaturized tropical salamanders are already endangered. An international team of researchers has completed a decades-long study of tiny salamanders found in the high-mountain forests of Oaxaca, Mexico, and concluded that they represent three new species of the enigmatic genus Thorius. With adults smaller than a matchstick, these salamanders are the smallest tailed tetrapods—their miniaturized bodies are highly unusual for a vertebrate, with structures for feeding and reproduction being among the most prominent. Although once extremely abundant, populations of Thorius have declined precipitously over the last 30-35 years, and living Thorius are now rarely found in nature. The new species were discovered by using a combination of sophisticated molecular analyses (including DNA sequencing), digital imaging (X-ray computed tomography) and statistical analysis of external and internal anatomy. They have been named Thorius pinicola (meaning "Pine-dwelling Minute Salamander"), Thorius longicaudus ("Long-tailed Minute Salamander") and Thorius tlaxiacus ("Heroic Minute Salamander"). The findings underscore the large number of amphibian species that remain to be discovered and formally described—and hopefully saved—before they are lost. Thorius were first discovered in the 19th century, and for the next 75 years scientists believed there was only a single species. Nine additional species were discovered between 1940 and 1960, but the adults are so small that the species were hard to tell apart. A breakthrough came in the 1970s, when biologists discovered that many species, while anatomically similar, could be readily told apart by using molecular techniques, which then revealed subtle anatomical features that differentiate them. Since then, many more species have been discovered and named, and the three newly named species bring the current total to 29. This dramatic increase in the number of known species of Thorius parallels what has been happening in the study of amphibians generally. For at least the last 30 years, the number of valid, named amphibian species worldwide has increased at a rate of about 3% per year. Whereas in 1985 biologists thought there were around 4,000 species of amphibians, today they recognize more than 7,500. More new ones are being discovered almost daily. Tragically, the discovery and documentation of hidden amphibian diversity coincides with the precipitous decline of amphibians globally. Many once-abundant species have gone extinct in the last 50 years, and others are likely doomed to a similar fate barring effective steps to save them. Of the nearly 30 species of Thorius now recognized, almost all are regarded as Endangered or Critically Endangered by the International Union for the Conservation of Nature. Indeed, Thorius may be the world's most endangered genus of amphibians. There is a realistic chance that all living species could be extinct within the next 50 years.
Let’s talk about human respiration. In its simplest form, the respiratory system takes in oxygen, delivers it to the bodily tissues that need it, and then gets rid of carbon dioxide. Since carbon dioxide is a byproduct of respiration, it is predominantly thought of as a waste gas. But that’s not its only purpose. The Bohr Effect demonstrates how athletes can leverage carbon dioxide to boost their endurance.

Quick Flashback to Biology Class: Human Respiration
To understand how CO2 can work to our advantage via the Bohr Effect, we need a more detailed understanding of respiration. Let’s dive in. When air is inhaled, it passes into the lungs through the bronchial tubes and from there to the alveoli. These alveoli are tiny air sacs in the lungs that allow gases to be exchanged through tiny vessels called capillaries. After passing through the capillaries, oxygen molecules are grabbed by the hemoglobin molecules within red blood cells, thus oxygenating the blood. Hemoglobin is responsible for carrying oxygen from the lungs to the entire body, and this is fundamental to the Bohr Effect. Oxygenated blood then travels from the lungs through the left side of the heart to the rest of the body. Carbon dioxide moves the opposite way. It is created as a byproduct of cellular respiration, where glucose and oxygen are converted into ATP (energy), water, and carbon dioxide. Red blood cells then transport the CO2 from throughout the body back to the alveoli, where it can exit through the lungs. Now that we’ve refreshed our understanding of respiration, let’s understand how the Bohr Effect works.

What is The Bohr Effect?
At any given time, blood oxygen levels are around 95-99% in a healthy human. However, there is a key difference between blood oxygen levels and our ability to deliver this oxygen to our cells, also known as oxygenation. Just because we have ample oxygen stores doesn’t mean that all of it gets delivered to our cells. That is where the Bohr Effect comes in. Back in 1904, a Danish biochemist named Christian Bohr made an eye-opening discovery: the lower the partial pressure of carbon dioxide in arterial blood, the greater the affinity of hemoglobin for the oxygen it carries. More simply, an increase in CO2 concentration decreases the blood’s pH, which in turn leads to hemoglobin proteins releasing the oxygen they carry. Even more simply, more CO2 in the body allows oxygen to be delivered more efficiently.

The Bohr Effect: the lower the partial pressure of CO2 in arterial blood, the lower the amount of oxygen hemoglobin will release to cells for energy.

When carbon dioxide is dissolved in the blood, carbonic acid is formed. This is what makes the blood acidic, or low in pH, in Bohr’s account. The structure of hemoglobin changes as a result, making it harder for it to bind to oxygen, or, in other words, making it easier for oxygen to be released.

The Bohr Effect During Physical Activity
When we exercise, our demand for oxygen in the body rises. Our cellular respiration rate increases, boosting the amount of carbon dioxide in the blood, especially around highly respiring tissues, i.e. the muscles you are using. This increases the CO2 concentration in the active areas, allowing for increased oxygen delivery to the cells that need it most. As a result, the Bohr Effect allows humans to unload oxygen in an enhanced way. As hemoglobin passing through metabolically active tissues unloads its bound oxygen, oxygen delivery improves.
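For readers who want to see the dissociation-curve shift in numbers, here is a minimal sketch using the textbook Hill-equation approximation of hemoglobin saturation, with an assumed Bohr coefficient of roughly 0.48 log-units of P50 per pH unit. None of these constants come from the article, and real blood behavior is more complicated; the point is only the direction and rough size of the shift.

```python
# Hill-equation sketch of the Bohr effect (illustrative constants only).
# Saturation S = P^n / (P50^n + P^n), with P50 shifting as pH drops.

N_HILL = 2.7        # typical Hill coefficient for hemoglobin
P50_REF = 26.8      # mmHg at pH 7.4 (textbook value)
BOHR_COEFF = 0.48   # assumed shift: delta log10(P50) per unit pH drop

def p50(ph: float) -> float:
    """P50 rises as pH falls, shifting the curve right (Bohr effect)."""
    return P50_REF * 10 ** (BOHR_COEFF * (7.4 - ph))

def saturation(po2_mmhg: float, ph: float) -> float:
    return po2_mmhg ** N_HILL / (p50(ph) ** N_HILL + po2_mmhg ** N_HILL)

# At a tissue oxygen tension of ~40 mmHg:
for ph in (7.4, 7.2):
    print(f"pH {ph}: hemoglobin ~{saturation(40, ph):.0%} saturated")
# pH 7.4: ~75%; pH 7.2: ~62% -> the more acidic (CO2-rich) tissue pulls
# roughly an extra 13% of the carried oxygen off the hemoglobin.
```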
As more metabolism occurs throughout the body, the carbon dioxide partial pressure rises even further. This causes the local pH to become even lower and a larger amount of oxygen to be unloaded.

Using the Bohr Effect to Your Advantage
While the Bohr Effect naturally allows us to perform during periods of physical activity, we can actually use it to our advantage to further improve our aerobic fitness, also known as our aerobic endurance. This means that we can use the Bohr Effect to help us run, bike, and swim faster than before, for longer periods of time, with less effort. So how do we do this? Well, quite simply, if we can increase our body’s CO2 tolerance, we can increase our aerobic endurance. By increasing our body’s CO2 tolerance, we will in turn have more CO2 in our body during periods of both rest and physical exertion. This means that our body will be able to oxygenate its cells more efficiently, leading to less strain on our respiratory and cardiovascular systems. Here’s how that translates into better fitness: increasing your CO2 tolerance leads to higher CO2 levels in your body. Higher CO2 levels mean more efficient oxygenation. More efficient oxygenation means less blood is required to meet the oxygen demands of your body during physical exertion. The decreased blood requirement translates into a lower heart rate and lower blood pressure. A lower heart rate and blood pressure mean that you can maintain the same level of exercise for longer, with less effort. So the question becomes “How do we naturally, and safely, increase our CO2 tolerance?” Thankfully, the answer is simple.

Swap Out Mouth Breathing for Nose Breathing
That’s right. The simplest and most effective way to increase your CO2 tolerance and reap the performance benefits of the Bohr Effect is to stop breathing through your mouth. Mouth breathing is actually overbreathing, meaning that you are inhaling and exhaling too much air. This causes the body to get rid of too much carbon dioxide with each breath, meaning that oxygen cannot be delivered at an effective rate. This also decreases your body’s ability to tolerate carbon dioxide, making it harder to reach a solid oxygenation rate in the future. As a result, this leads to increased breathlessness and poor oxygen delivery. With nasal breathing, there is a smaller volume of air and a longer interval between breaths. This gives the lungs time to hold on to carbon dioxide, deliver oxygen where it needs to go, take care of hemoglobin, and fill up to capacity before the next breath. This allows the body to benefit from the Bohr Effect, lowering your heart rate and improving performance during exercise.

The Additional Benefits of Nose Breathing
On top of the Bohr Effect, nose breathing has a host of health benefits, no matter what we’re doing throughout the day. Whether you’re resting, sleeping, or pushing it hard at the gym, nasal breathing provides just the right amount of resistance to boost oxygenation in the body.

Maintain Homeostasis During Rest
As we have discussed, nose breathing regulates levels of oxygen and carbon dioxide in the body. We maintain a balance of the two by inhaling oxygen, absorbing it in the lungs, and exhaling carbon dioxide. When we mouth breathe, too much carbon dioxide is expelled thanks to the larger breaths and higher breathing rate. The lungs don’t have time to absorb as much of the oxygen, so the breathing rate rises. This leads to hyperventilation, lightheadedness, and a decrease in homeostasis.
By nose breathing, we can regulate the exchange of oxygen by breathing at a relaxed rate. During rest, this helps to maintain homeostasis rather than wreak havoc on it. Not to mention, nitric oxide is produced during nasal breathing, which takes stress levels down, lowers heart rate, and boosts cognitive function.

Decongest During Sleep
Chronic mouth breathing can lead to a sore throat, dry mouth, and nasal congestion. Since mouth breathing dries out the mouth by causing saliva to evaporate, we experience discomfort and even dehydration. If you wake up in the morning with chronic dry mouth, you probably mouth breathe in your sleep. If you’re often congested in the morning, it’s likely due to mouth breathing as well. Most people think that mouth breathing is caused by a congested nose, but the opposite is true: mouth breathing at night can cause a stuffy nose, as it makes us overproduce mucus, clogging our sinuses. This leads to a chain reaction that can cause heart issues, dental problems, snoring, and even sleep apnea.

Boost Performance and Recover Faster After Workouts
When you breathe through the nose during a workout, you can help to coax your body into a faster rate of recovery. Since nasal breathing helps the body operate more efficiently, your heart rate eventually syncs up to the rhythm of your breath. This tells the body and brain that you’re ready to slow down and relax back into rest mode by activating the parasympathetic nervous system. As a result, blood pressure decreases, along with stress levels, and your body will exit fight-or-flight mode. This leads to quicker recovery times after you’ve been physically active. Nasal breathing also boosts your performance during exercise. Many of us overbreathe and hyperventilate while working out, so nasal breathing during physical activity can be hard to get used to. Once you do, your performance will rise thanks to increased endurance caused by the Bohr Effect.

Nasal Breathing Doesn’t Have to Be Bohring
Now that you know how the Bohr Effect helps boost athletic performance, you’re probably eager to get started. But sometimes it’s trickier than making sure you’re breathing through the nose, especially if you’re a chronic mouth breather having a hard time breaking the habit. That’s where mouth tape comes in. SomniFix Strips are designed to inhibit mouth breathing and promote nose breathing during sleep. In turn, this increases your CO2 tolerance, rebalances oxygen levels, and boosts performance during periods of physical activity.
This survey received 285 records totaling 1,500 bees, with bees seen from Jersey to Thurso (3 bumblebees). The highest bumblebee count, 34, came from Teignmouth. Winter-active bumblebees are a recent phenomenon. Two species, the buff-tailed and white-tailed bumblebees, have started nesting in winter in the UK; the queens do this instead of hibernating, so workers can be seen through the winter gathering resources. Whether staying active over the winter will prove a successful way of weathering climate change for these 2 of the 24 resident bumblebee species remains to be seen. What is certain is that without greater connectivity of habitats, bumblebees will not be able to move north. This is why a habitat recovery network incorporating B-Lines is urgently needed, so that bumblebees and thousands of other species are able to move through the countryside and respond to climate change. Otherwise, species will become stranded and face extinction. This also raises questions about providing sufficient winter pollen sources for bees. Honeybees were also found to be very active in the survey, at a time when they too would normally be overwintering. There is an existing bumblebee survey that can provide further interest: BeeWalk is a long-term bumblebee monitoring scheme run by the Bumblebee Conservation Trust, involving volunteers walking a fixed route once a month between March and October and counting the bumblebees that they see. This lets us understand how bumblebee populations are changing, before species are lost from areas of their current range. To take part, visit. Alternatively, consider joining in with the Flower-Insect Timed Count (FIT Count), run by the UK Pollinator Monitoring Scheme (PoMS), which collects records contributing pollinator abundance data at a national scale (currently England, Wales, and Scotland). This simple citizen science survey collects data on the total number of insects that visit a particular flower. Anyone with a spare 10 minutes can volunteer and complete a FIT Count between April and September. To find out more information and get involved, you can download the materials, and watch the PoMS: Flower-Insect Timed Count video, from here.
Students who successfully complete this course will develop their French proficiencies to the Novice Mid range according to the ACTFL* Proficiency Guidelines. Students will also show proficiency in the "five C's" denoted in ACTFL's Standards for Foreign Language Learning in the 21st Century: Communication, Cultures, Connections, Comparisons, and Communities. By the end of French 1, students will be able to speak, read, and write short sentences about a variety of familiar topics. *American Council on the Teaching of Foreign Languages

Mode of Communication, Novice Mid:
- I can communicate using a number of isolated words and memorized phrases about familiar contexts.
- I can present using isolated words and memorized phrases limited by the particular context in which the language has been learned.
- I can reproduce from memory a modest number of words and phrases in context.
- I can supply limited information on simple forms and documents, and other basic biographical information, such as names, numbers, and nationality.
- I can write about practiced, familiar topics using limited formulaic language.
- I can list key actions from short stories such as personal anecdotes or familiar fairy tales.
- I can identify people and objects in their environment or from other school subjects, based on oral descriptions.
- I can report out the content of brief, written messages and short personal notes on familiar topics such as family, school events, and celebrations.
- I can recognize familiar practiced words and phrases, usually one phrase at a time, and repetition may be required.

Each unit includes cultural elements relevant to the unit.
1. Nice to Meet You (greetings, introductions, classroom expressions)
2. My Daily Life (calendars, time, school and daily activities)
3. Friends & Family (family trees, likes/dislikes, descriptive adjectives and body parts)
4. Bonne Fête (French celebrations and traditions)
5. Body Care (body parts, illness)
6. French City Life (directions, shopping, money and foods)
7. Traveling (phone, technology, transportation, hotel)
8. La Francophonie (the francophone world)

Major Grammar Concepts for the Year
- Adjective agreements
- Irregular verbs: to be, to go, to have, to do, to want, to be able to, to take...
- 3 major verb classes (classifying and conjugating verbs: ER/IR/RE)
- Verb conjugations: present tense (affirmative / negative), reflexive verbs, near future and past tense (passé composé).

Grading categories and percentage breakdowns for the course are as follows: Interpersonal 2-way Communication such as conversations and text chats (15%), Presentational 1-way Output such as speeches and writing (15%), Interpretive 1-way Input such as listening and reading (15%), Tests and Quizzes (25%), Final Exam (20%), Homework / Classwork (10%).
Sand timers provide a visual representation of a period of time, and children respond incredibly well to them in a variety of contexts. Activities in school and at home:
- Use it to time participation in games.
- To help children calm down by taking ‘time out’ when upset.
- To develop awareness of the length of two minutes.
- To motivate as the student engages in ‘beat the clock’ activities.
- Classroom management: setting a target time for pupils to enter class and settle down.
- Use to time an oral presentation.
- Use in timed tests.
- Use in teaching the analogue clock: how minutes work.
Christoph Mertz is a project scientist at Carnegie Mellon University, and he spent months gathering photographs of various hillsides. These photos showed that the areas are becoming wetter and wetter as time goes on. Mertz was determined to develop a machine that would detect signs of landslides becoming active, such as cracks in the ground or tilting trees. The current model that he and his team have developed allows these risk factors to be identified for necessary policy adjustments and budget allocation.
- Although landslides are a natural occurrence, favorable conditions for their occurrence can be manmade.
- When human activity alters a natural slope, or reroutes waterways or where rain runoff generally goes, the conditions for a landslide may be initiated.
- In 2018, the city of Pittsburgh used up the one million dollars in funds allocated to deal with landslide damage in a matter of months.
“Combined with increased rainfall rates related to climate change, landslides in the United States have become more common and more severe.”
The dissolution of the Western Roman Empire is popularly perceived as a sudden and dramatic cataclysm, with the “fall of Rome” often precisely dated to 476 CE. In that year, so the story goes, the child Emperor of the West (later derisively referred to as Romulus Augustulus) was deposed by the barbarian warlord Odoacer, who then established the first barbarian Kingdom of Italy. However, this familiar version of the fall of Rome can be charitably described as “incomplete.” By 476 CE, the political power structures that underlay the Empire had long ago shifted to Constantinople, the capital of the largely Greek-speaking Eastern half of the Empire. Over a century earlier, Constantine I had effectively made the eponymous capital the center of the Roman political world. In 324 CE, when Constantinople was founded on the existing city of Byzantium, Constantine had just defeated his former ally, the pagan Eastern Emperor Licinius, becoming sole ruler of a united Empire. Constantine’s victories in the civil wars of the Tetrarchy (against Maxentius in Rome and Licinius in the East) were contemporaneously viewed within a religious context. Though Constantine favored Christianity over the old pantheon of the state, his adoption of the Chi-Rho-adorned labarum (which would become the military standard of the late Empire) was inspired by his famous vision before the Battle of the Milvian Bridge rather than any devout adherence to existing Christian belief. Nevertheless, the end of the civil wars was perceived (and propagandized) as a spiritual triumph of Christianity and monotheism over the traditional gods of the state. After the collapse of the Roman political system in the Crisis of the Third Century, and its tenuous rebirth and regression to chaos in the civil wars, it must have been especially salient to see political stability finally return alongside the symbolic defeat of the one decrepit state institution that had so far survived—the old religion. It’s important to note that Constantine and his cultishly devoted soldiers believed in a world of many deities to whom competing sides could appeal in order to win favor on the battlefield (a blend of religious exceptionalism and henotheism was the prevailing norm of the time). So unremarkable was this belief that the ostensibly pagan Licinius instructed his troops not to lay eyes on Constantine’s labarum out of real fear that its magical power would curse them. Like Constantine, Licinius had previously had a divine vision of his own that presaged his victory in the Battle of Tzirallum against the Eastern Emperor Maximinus Daia. Constantine himself simultaneously favored the Christian God and the sun deity Sol Invictus (popularized in the previous generation during the reign of Aurelian), perhaps seeing them as manifestations of the same supreme deity. It’s worth noting that the Arch of Constantine was positioned so that it aligned with the Colossus Solis, the colossal statue of Sol Invictus that stood near the Colosseum (which derived its name from the statue). The dramatic shift in Roman semiotics that characterizes this period was undoubtedly facilitated by Constantine’s imposition of his own understanding of Christianity on the Roman political and military cultures. Beyond the defeat of Licinius and the symbolic victory of Constantine’s pseudo-monotheism over the traditional pagan religion of the old Roman state, Constantine had other reasons to move the capital to Greece.
Greek culture had shaped the adopted norms of Roman elites for centuries; it’s informative that Suetonius reported the then widely held belief that the last utterance of Julius Caesar was the Greek phrase “καὶ σὺ, τέκνον” to his friend Marcus Junius Brutus, while Caesar’s famous “Commentāriī dē Bellō Gallicō” was explicitly made to further endear him to the masses. Thus, Constantine’s transfer of Roman central power to Greece represents a logical transition in the wake of decades of destructive in-fighting and the resultant decay of long-standing political institutions and networks in and around Rome itself. While we’re on the subject of language, it’s worth noting that the word “pagan” derives from the Latin paganus, originally a descriptor used to identify people in rural areas in the countryside, away from cities. Whatever negative connotations it had (like today’s country “bumpkin”) were amplified as Christianity was increasingly adopted throughout urban communities and among educated elites. That the term eventually came to refer simply to any non-Christian reveals the stark urban/rural cultural divide of the period, in which country people presumably held onto traditional polytheistic religious norms as the new Christian religion permeated burgeoning urban power networks in the post-Constantine empire. These urban power networks—in Rome and in cities across the Empire—were destined to be dominated by the Christian Church. Internecine feuding among the various leaders of the nascent church throughout the Empire increasingly left Rome and its bishops (mostly members of the ancient families of the senatorial class) the vanguards of what would become Nicene Christianity. The city of Rome itself gradually became a merely symbolic if not spiritual capital. More often than not, ultimate political and military authority in the west would lie elsewhere: in what are now the cities of Milan, Trier, and Ravenna. For example, it was from Trier in Gaul that Valentinian I, the last Western Emperor who wasn’t a mere figurehead, ruled the West in the mid-to-late fourth century. Thus, while the sack of Rome in 410 CE by Alaric and the Visigoths during the reign of the Western Emperor Honorius amounted to a humanitarian disaster for the city’s people, it was a symbolic defeat that, in and of itself, had little real impact on the political stability of the Empire as a whole even though it led to widespread consternation among Christians. Indeed, Augustine’s seminal work, The City of God, was a direct response to the spiritual misgivings that had swept through the Empire in the wake of the sack of Rome. By then, the disastrous reign of Honorius (who became Western Emperor at the age of ten) had all but ensured that future Emperors in the West would be little more than puppets. The 400s were marked by the ascendancy of de facto rule by the Magister Militum (“Master of Soldiers”) in the Western Empire, essentially the Emperor’s top general. At this point, it’s worth emphasizing that the Roman armies of the West were increasingly populated by men who would have been perceived as only nominally Roman if not “barbarian.” The Magister Militum Flavius Stilicho, who held true power during the reign of Honorius, was himself the son of a Germanic Vandal and a provincial Roman woman.
Indeed, military service had gradually become synonymous with barbarian identity, and the Roman soldiery was eventually composed of a motley assortment of men of disparate ethnic backgrounds who came to speak a blend of Latin and Germanic languages distinctive to the Roman military. Trousers, long associated with “barbarians,” became widely adopted by Roman soldiers. Among them were “first-generation Romans” who came from migrant families that had only recently settled within the Empire or along its borders, as well as men whose families had considered themselves Roman for generations. It was largely this amalgam of military men, who often simultaneously held allegiance to the Roman state and to the various branches of Germanic tribes to which they belonged, that would ultimately come to gain complete political control of fragments of the Western Empire. Odoacer provides an instructive example of the gradual shift towards “barbarian” military rule in the West. In fact, unlike the Visigothic “King” Alaric, Flavius Odoacer did not explicitly identify with any one tribe or group of people (his ethnic origin remains hotly debated). However, like Alaric, who was in fact a Roman military commander who served under the Eastern Emperor Theodosius before he rebelled and eventually attacked Rome, Odoacer too was a Roman military officer. Shortly after the Eastern Emperor Zeno appointed Julius Nepos to be the Western Emperor in 474, Nepos was betrayed by his Magister Militum, a Roman aristocrat named Orestes who had come to notoriety as a secretary and envoy in the court of Attila the Hun. The fact that Orestes chose to remain Magister Militum, instead proclaiming his young son Romulus Augustus the new Western Emperor, is an indication that the existing military establishment in the west viewed the imperial title as little more than ceremonial by this point (importantly, this was not the case in the east, where the Eastern Emperor was no figurehead). Nevertheless, Nepos had escaped to Dalmatia, and Zeno refused to recognize Orestes or Romulus Augustus, considering them usurpers. It was in this context that the Roman officer Odoacer (who claimed no noble lineage) led the foederati (members of “barbarian” tribes from beyond the borders of the Empire proper who were bound by treaty to serve in the Roman military) to revolt against Orestes when he refused to accede to their demands for permanent land and housing within Italy. In fact, the largely Germanic foederati were joined by much of the standing Roman army (that is, ostensibly ethnic “Romans”). When Odoacer defeated Orestes and deposed the usurper Romulus Augustus, he sent a senatorial delegation to Zeno to return the regalia of the Western Roman Emperor. Zeno bestowed Odoacer with Patrician rank, and granted him authority to rule Italy in the name of Julius Nepos. While Odoacer observed political niceties, officially ruling in the name of Nepos and Zeno, he never allowed Nepos to return to Italy. Was Odoacer’s rise to power the end of Roman rule in Italy? What about the rest of the Western Empire? The better question to ask is: What would the people who lived in Western Europe have thought?
For the Romans who had survived the especially violent upheaval of the period (seemingly endless invasions, sieges, civil wars, famines, uprisings, lawlessness, and mass migrations), self-perceptions about political identity across continental Europe seemed to change little but for the fact that the small groups of men in power were foreigners who were no longer truly answerable to any Emperor. In some cases, the initial fiction of unbroken Roman political continuity in the West even included the minting of coins depicting the Eastern Emperor. Yet, while Germanic conquerors held political leadership, some portion of the bloated governmental bureaucracy that had developed across both halves of the Empire in the fourth and fifth centuries remained intact. The well-to-do continued to be educated in Latin and Greek, to study classical literature and philosophy, and to aspire to bishoprics and governmental appointments (under Franks in Gaul or Visigoths in Spain). Roman populations under Germanic rule throughout Europe and the Mediterranean more or less remained Roman, and for the most part, so did their societies and cultures, at least in the beginning. Despite popular perceptions that Romans were supplanted by Germanic populations, genetic evidence is finally settling the question of just how great the impact of barbarian migrations was on the demographics of the Western Europe that emerged from the carnage of late antiquity. It turns out that the answer is: Not very.1,2 It was only quite gradually (over hundreds of years) that the populations of Europe would come to see themselves as having socioculturally distinct ethnic identities synonymous with those of their conquerors rather than as being one pan-European “Roman” people. While the notion that there was ever a singular Roman identity may seem an oversimplification, it’s worth remembering that the cosmopolitan classical Empire had included a heterogeneous mix of peoples who were nevertheless united by a common subsuming Roman culture, so that ardent adherents of the very same mystery cults were as likely to be found in Britain as in Syria, to say nothing of the common worship of manifestations of the old Greco-Roman gods. The Romans of Western Europe weren’t replaced; instead, over generations, they adopted the identities of their new masters. However, this was a transformation that went both ways, and Germanic peoples became Romanized to varying degrees, their original languages and traditions gradually fading away or transforming so as to be ultimately unrecognizable. This paved the way for the incubation of the distinct, nascent European cultures that would develop in the ensuing centuries. On the other hand, it’s important to point out where this shift in identity didn’t happen, namely in the Eastern Roman Empire, where the people of the Greek-speaking world called themselves “Romans” (a term virtually synonymous with “Christian”) through the fall of Constantinople in 1453 and beyond. Although the seeds of a distinct pan-Greek identity were planted during the later years of the Eastern Empire, they came to full fruition as late as the 1800s during the movement for Greek independence from the Ottoman Empire. Ironically, the Greek identity movement saw the revival of the classical hellenes ethnonym, a term which their Roman ancestors had derogatorily applied to small pockets of non-Christians in Greece during late antiquity and the middle ages. Roman identity persisted elsewhere throughout Eastern Europe and the Middle East.
The name of the country Romania essentially means “land of the Romans.” Writing in 1570, the Croatian Archbishop Antun Vrančić remarked on the people of Wallachia (an exonym then used by outsiders for a major region of modern Romania), “When they ask somebody whether they can speak Wallachian, they say: do you speak Roman? and [when they ask] whether one is Wallachian they say: are you Roman?” The Romagna region in Northern Italy is so named because it was controlled by the Eastern Roman Empire until the mid-8th century. The concept of a pan-Italian self-identity didn’t emerge until the 15th-18th centuries. Under Islamic rule, the people of what was formerly Roman Anatolia (modern-day Turkey) initially came to identify their land with the Arabic word Rûm (Rome) and themselves as Rûmi (Roman). The concept of “Roman” as an ethnic identity simply didn’t persist in Western Europe as it did in the East. The new Germanic “kingdoms” of Western Europe initially began with edicts and law codes that specifically distinguished between the Germanic ethnic group that conquered a given territory and the Roman subjects within (whose lives continued to be governed by the old Roman laws). For example, in Visigothic Spain, the law distinguished between romani and gothi until around 654 CE, when the Lex Visigothorum abolished the distinction, recognizing all the kingdom’s people as hispani. During the reign of the Ostrogoths in Italy (discussed further below), intermarriage between Goths and Romans was explicitly outlawed. By the time the Germanic Lombards came to rule most of Italy, their Edictum Rothari of 643 CE (named for the Lombard King Rothari) still treated the Lombards as distinct from the Romans and therefore subject to different laws, but by the mid-8th century the legal distinction had disappeared. These cultural transformations were accompanied by truly staggering declines in civil infrastructure, technology, economic sophistication, and the dissemination of information. During the Roman Principate (the period that typically comes to mind when one thinks of “the Roman Empire,” stretching from the reign of Augustus to the Crisis of the Third Century), the European/Mediterranean economy reached heights that would be unequaled for another thousand years. This was largely facilitated by an infrastructure of ports and roads that were built, maintained, and secured by the Roman military. By the fifth and sixth centuries, not only had this transportation infrastructure fallen into disrepair and become prey to banditry, it now spanned a patchwork of territories controlled by Germanic invaders and by bands of what might once have been called Roman soldiers but who were now loyal to various competing warlords. The gradual loss of freedom of movement and travel was accompanied by declines in trade. This further accelerated the economic decline that had begun during the Crisis of the Third Century and the civil wars that followed. Greenland ice core data gives an indication of the level of global lead production, which closely tracks the fortunes of the Roman economy.3 Amazingly, lead production levels would not surpass those of the Roman Principate until mere decades before the American Revolution. It’s worth noting that the period in which lead production bottoms out was a time in which the most powerful political leader of Western Europe, Charlemagne, was illiterate, struggling to learn to read and write well into adulthood. Dark Ages indeed. And what of the Italian peninsula after 476?
Zeno ultimately sought to remove Odoacer from power, employing yet another group of foederati, this time a distinct tribe of Romanized Goths called the Ostrogoths. Their leader, Flavius Theodoricus (known to history as “Theoderic the Great”), had been made Magister Militum in Constantinople and had even served as consul. After Zeno promised Theoderic control of Italy, the Ostrogoths invaded the peninsula and defeated Odoacer’s forces. A peace treaty negotiated by the bishop of Ravenna provided that Odoacer and Theoderic would share control of Italy. At a feast meant to celebrate the peace between them, Theoderic drew his sword and personally killed Odoacer in front of the gathering. Just as Odoacer had done before him, Theoderic pledged to rule the lands of Italy in the name of Zeno, but his was less of an empty promise. The Roman political infrastructure that had existed before Odoacer’s revolt was largely restored, including the Roman legal codes, and Theoderic allowed free movement of people to and from the Eastern Empire. In return, the Eastern Emperor made the dramatic (albeit symbolic) gesture of sending the Western Imperial regalia to Theoderic, tacitly acknowledging him as Emperor of the West. Theoderic was eager to take up the role, though he never did so officially, retaining instead the pretense of subordination to the Eastern Emperor. Theoderic’s de facto reign was characterized by public works, rebuilding of infrastructure, and a deliberate effort to “restore” the symbols of power associated with principate Rome. Nevertheless, continued tensions with the Eastern Imperial court would eventually spark war in 535, ten years after Theoderic’s death. The Eastern Emperor Justinian I sent his general Belisarius to lead a massive and costly campaign to regain direct political control of Italy and Rome, the namesake of both Empires and peoples. What followed was two decades of warfare that ravaged the Italian peninsula. To finance the war, the Eastern Imperial court levied increasingly inordinate taxes on its populace. The war turned out to be a Faustian bargain for the East, driving many of its people to destroy their own buildings and property to avoid the ruinous tax burden, or to sell themselves and their family members into indentured servitude and debt bondage. Although the Eastern Empire ultimately regained direct control of Rome and Italy, the Gothic War devastated the local population and infrastructure. Crucially, this period coincided with the Plague of Justinian, one of the deadliest pandemics in human history; the plague would ultimately claim the lives of an estimated 25 million people, or more than a tenth of the total global population. Arguably more than any one event, it was this Justinianic Plague that was the death knell of the classical Roman world: a final, fatal blow against an already fractured civilization. If not for the devastation of the plague, Roman civilization and culture might have continued (perhaps even rebounded and flourished) under Germanic rulers and their assimilated peoples in much of Western Europe. Ostrogothic Italy provides a window into the stillborn beginnings of what might have been. Yet by the time the pandemic finally subsided, the major Roman cities throughout the Empire and its former territories had seen their populations bottom out. Abandoned infrastructure was now in decay, and ruined public buildings were adapted for any number of new utilitarian functions.
The dissemination of the specialized knowledge required to maintain the technology that underlay Roman urban life (e.g. indoor plumbing) had largely ceased long before and eventually faded away altogether. Once splendid villas now crumbled, becoming overgrown, dilapidated husks that dotted the countryside. By the middle of the sixth century, Rome was a largely empty city of abandoned and ruined buildings. Constantinople didn’t fare much better, with nearly half of its population wiped out. Though the Roman Empire had withstood the Antonine Plague in the second century, a pandemic that claimed as many as 5 million lives including much of the Roman military, that outbreak had taken place at the zenith of Roman power. The Antonine Plague may have paved the way for the Crisis of the Third Century roughly fifty years later (during which yet another plague unfolded, the Plague of Cyprian). Thus, pandemics had the potential to shake the foundations of even a thriving empire. Not only was the empire not thriving in the sixth century, it was undergoing total collapse and dismemberment. Beyond Italy, Britain had been long abandoned by the central Roman state, and all of Western Europe had been carved into various territories in which self-styled kings and their marauders ruled local populations (e.g. the Gallo-Romans were ruled by the Franks, the Hispano-Romans by the Visigoths). In some cases, these territories had been taken over by bands of soldiers who had simply been billeted in the region by the Roman state. On the continent, Roman ways of life ostensibly continued but for the change in management; Britain, however, was a different story: archaeological evidence shows a dramatic and rapid decline in the level of technological sophistication there, a process that would unfold more gradually (and less completely) throughout continental Europe. If any part of the Empire could fairly be described as experiencing the fall of civilization, it was Roman Britain. Indeed, the desperate state of affairs in Britain may have made the Roman population there especially susceptible to the Justinianic Plague, which further depopulated the island and may have facilitated its conquest and settlement by the diverse Germanic peoples who would come to be called the Anglo-Saxons. In this case too, new genetic evidence is finally overturning the long-held perception that Germanic foreigners completely replaced local Roman populations. Rather than experiencing a massive influx of Anglo-Saxons, it seems that Britain was conquered by relatively small bands of invaders, and as was the case in continental Europe, these people have left little genetic imprint on the population.4,5,6 Despite notions long cherished by some, the idea that the English have a common Anglo-Saxon origin is a myth that has now been conclusively upended by genetic evidence. Although Rome itself would remain under the control of the Eastern Empire for the next two centuries, the end of the Gothic Wars, the Plague of Justinian, and the subsequent invasion of Italy by the Langobards (better known as the “Lombards”) marked the completion of the gradual destruction of the Western Roman Empire, the ultimate collapse of classical civilization, and the transition into a new Medieval world.
Yet, even though Medieval Europe and the classical Roman civilization of the principate (the 1st-3rd centuries) would surely seem mutually alien to people native to each but transplanted across time, the people of the Middle Ages retained more than a patina of cultural continuity with the old civilization, living as they were within its dusty skeleton. The revival of the Western Imperial title, bestowed on Charlemagne by Pope Leo III in 800, offers some sense of the value the Medieval aristocracy placed on the pretense of political continuity with Roman civilization. It’s worth noting that the Frankish kingdom Charlemagne ruled, known to historians as the Carolingian Empire, had been known to its own people as Romanorum sive Francorum imperium, the “Empire of the Romans and Franks,” or simply Romanum imperium. Charlemagne’s title of “Emperor of the Romans” was revived in 962, with the Germanic Romanum imperium evolving into the patchwork of territories that came to be called the Holy Roman Empire by the 13th century. The Empire lasted until 1806, when it was dissolved under pressure from Napoleon. Although the Eastern Roman Empire would come to be known as the Byzantine Empire, this is a relatively modern historiographical invention; as previously mentioned, despite the fact that Greek was their lingua franca, the people of the Eastern Roman Empire referred to themselves as Romans and called their empire the “Roman Empire.” The Eastern Roman Empire would go on to face invasions by both Christian Crusaders from the west and Muslim warriors from the east and south, ultimately coming to an end with the fall of Constantinople to the Ottomans and the death of the last Eastern Roman Emperor in 1453. The Ottoman leader, Sultan Mehmed II, claimed the title “Qayser-i Rûm” (Caesar of the Roman Empire) and portrayed the Ottoman state as a direct continuation of the Roman Empire. It’s worth noting that at its zenith, the Ottoman Empire’s territory roughly coincided with that of the Eastern Roman Empire. Constantinople became the Ottoman capital (it would officially be renamed Istanbul only in 1930) and remained so until the Empire’s dissolution in the early 20th century, following the rise of the Young Turks and defeat in World War I.
References:
1. Amorim CEG, Vai S, Posth C, et al. Understanding 6th-Century Barbarian Social Organization and Migration through Paleogenomics. 2018. doi:10.1101/268250.
2. Ralph P, Coop G. The Geography of Recent Genetic Ancestry across Europe. PLoS Biology. 2013;11(5). doi:10.1371/journal.pbio.1001555.
3. Hong S, et al. Greenland Ice Core Evidence of Hemispheric Lead Pollution Two Millennia Ago by Greek and Roman Civilizations. Science. 1994;265:1842.
4. Töpf AL, Gilbert MTP, Dumbacher JP, Hoelzel AR. Tracing the Phylogeography of Human Populations in Britain Based on 4th–11th Century mtDNA Genotypes. Molecular Biology and Evolution. 2005;23(1):152-161. doi:10.1093/molbev/msj013.
5. Schiffels S, Haak W, Paajanen P, et al. Iron Age and Anglo-Saxon genomes from East England reveal British migration history. Nat Commun. 2016;7:10408.
6. Sayer D. Why the idea that the English have a common Anglo-Saxon origin is a myth. The Conversation. https://theconversation.com/why-the-idea-that-the-english-have-a-common-anglo-saxon-origin-is-a-myth-88272. Published April 26, 2018. Accessed April 28, 2018.
Nacho Matemáticas (B) is ideal for students who are learning Spanish and want to add some math to their learning experience. This book is full of math problems that can be solved in a fun and entertaining way. In addition to learning about shapes, numbers, and counting, students will be able to draw and color as they would in any other Nacho book.
Level: Advanced Spanish
Evidence of Learning:
- Expand your understanding of numbers and quantities in your everyday environment.
- Expand your understanding of simple and repeating patterns.
- Expand your understanding of comparing, ordering, and measuring objects.
- Identify and use a variety of shapes in your everyday environment.
- Identify, describe, and construct a variety of different shapes.
- Solve simple addition and subtraction problems with a small number of objects.
- Use mathematical thinking to solve problems that arise in your everyday environment.
Meets Common Core Standards for grades K-2.
Samir Saran | Vidisha Mishra
Climate change poses both direct and indirect threats to human rights: the right to food, the right to water and sanitation, access to affordable commercial energy, as well as the consequent larger right to development. Issues such as forced mass migration, the threat of climate-linked conflict, direct and indirect threats to health and healthcare systems, and the impacts on land and livelihoods all demonstrate that climate change and human rights concerns are closely interwoven. The right to a life of dignity and the right to life itself are at stake. At the heart of the problem of climate change is a twisted irony – the countries that have been least responsible for the problem are the ones likely to suffer the most. Anthropogenic greenhouse gas emissions arose from the economic activity of developed countries, but the worst impacts of climate change will be felt by poorer nations. People who are already vulnerable and marginalized will be more affected than those who have greater capacity to absorb adverse impacts. The impacts of climate change will be transnational, but they will not affect everybody equally. At present, almost a third of all yearly human deaths are due to poverty-related causes. The situation is only likely to be exacerbated in the future by the increasing impact of climate change. Women and girls make up a disproportionate number of the world’s poor, which renders them even more vulnerable. For instance, in rural India, women are predominantly responsible for providing food and water. Hence, the effects of climate change on soil fertility, water availability and food security have very direct impacts on women. Further, the 2004 earthquake and tsunami highlighted the higher vulnerability of Indian women in disaster situations, when four times as many Indian women as men died in the affected region. This is one example of how climate change widens existing inequalities, which could be lethal for India, where besides gender, caste- and class-related disparities also determine the levels of human rights enjoyed by citizens. While global climate negotiations must inevitably focus on protecting the environment and safeguarding natural resources for future generations, it is essential that they never forsake the immediate development needs of the most vulnerable populations across the globe. To do that, the debate on climate change must focus especially on equitability, access to energy, and the sharing of carbon space. Clearly, development is not just an economic and social necessity; it is also the best adaptation to climate change. Development which strengthens the response-capabilities and assets of vulnerable populations is crucial for safeguarding their basic human rights to life, health and livelihoods, as well as for successful climate change adaptation and mitigation. This is especially relevant for emerging economies like India, home to an estimated 33% of the world’s poorest 1.2 billion people. Safeguarding the right to development is crucial here, as it implicates the right to life itself. A successful approach would be one that does not view environmental protection and poverty eradication as mutually exclusive domains.
There is little morality in saving the planet when a third of all humans still do not live beyond their fourth decade, while a seventh of them live well beyond their eighth. In fact, the dominant narrative within climate negotiations of de-linking energy emissions from growth fosters an implicit narrative of possible human rights suppression in developing countries. Economist Tim Jackson has explored the popular narrative of “absolute decoupling” of emissions from economic growth. According to his findings, while it is possible to slow the growth of emissions relative to the growth rate of the economy, it is implausible to stall or reverse emissions while the economy is still expanding – the existence of carbon-saving technologies notwithstanding. India has yet to reach peak energy consumption and is still striving to provide the minimum lifeline energy of 2,000 watts per capita – that is, the per capita energy consumption with which a first-world citizen could live in 2050 without lowering their present standard of living (as per a 1998 study by the Federal Institute of Technology in Zurich). Research suggests that access to energy is essential for poverty alleviation and for improving livelihood opportunities in developing countries. Although India’s per capita energy consumption is far lower than that of China, the U.S. and the European Union, India is the world’s fourth largest energy consumer overall and the world’s third largest carbon emitter. The country’s stand at climate change negotiations is likely to be focused on the twin ambitions of economic growth and access to energy for human development, while pursuing a clean energy agenda. What concerns much of the developed world is that while developed countries have generally reduced their coal consumption in the recent past (post-financial crisis), India has increased its consumption over the same period. However, analysis indicates that this increase in consumption should not be considered reflective of the country’s ‘irresponsibility’ towards the climate. Rather, it must be emphasised that on a per capita basis, India burns roughly a fifth of the coal that the U.S. does, and a third of what the EU does. As we move towards 2050, when we seek to limit per capita emissions to 2 tonnes of CO2-equivalent for the estimated 9 billion inhabitants of planet Earth, personal energy space, carbon allowances, fuel choices and lifestyle emissions must start to converge. Here, the crucial distinction between access to lifeline energy and access to lifestyle energy needs to be strongly articulated. The former reflects the minimum energy required to fulfil what can be categorised as “basic human needs,” measured through GDP growth rate targets, HDI levels, as well as estimations of the energy required to meet a predetermined set of development goals. Even if lifeline energy is understood generously – as enough to cover the minimum needs of citizens in developed countries – anything beyond that ought to be defined as lifestyle energy. Therefore, while it will strive to move towards cleaner energy, India is likely to rely on coal consumption in order to grow its industrial base and develop its economy. Without development and poverty alleviation, India will be unable to invest in renewables or become climate-resilient. More succinctly, “India will need to grow its coal capacity if it is to successfully go green.” The existing inequitable sharing of carbon space is the point of departure for conversations around climate justice and equity.
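To make the convergence arithmetic above concrete, here is a minimal sketch of the global budget implied by equal per-capita shares. The 2-tonne CO2-equivalent target and the 9-billion population estimate come from the text; the per-capita emission figures for the example countries are invented placeholders, not sourced data.

```python
# A rough sketch (not from the article) of per-capita carbon convergence.
# Assumptions: the 2 t CO2e/person target and 9 billion population are taken
# from the text; the "current" per-capita numbers are illustrative only.

TARGET_T_PER_CAPITA = 2.0            # tonnes CO2e per person per year (from the text)
POPULATION_2050 = 9_000_000_000      # estimated 2050 inhabitants (from the text)

# Implied global emissions budget if everyone received an equal share.
global_budget_gt = TARGET_T_PER_CAPITA * POPULATION_2050 / 1e9
print(f"Implied global budget: {global_budget_gt:.0f} Gt CO2e/year")  # 18 Gt

# Hypothetical per-capita emissions (tonnes CO2e/year), for illustration only.
current_per_capita = {
    "high-income country": 16.0,
    "middle-income country": 7.0,
    "low-income country": 1.8,
}

for name, tonnes in current_per_capita.items():
    gap = tonnes - TARGET_T_PER_CAPITA
    if gap > 0:
        print(f"A {name} at {tonnes} t/person must cut {gap:.1f} t/person to converge")
    else:
        print(f"A {name} at {tonnes} t/person has {-gap:.1f} t/person of headroom")
```

The point of the sketch is the asymmetry the authors describe: under a fixed equal-share budget, high emitters face deep cuts while low emitters retain room to grow toward the lifeline threshold.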
In December this year, at the Conference of Parties (COP) 21, countries will attempt to formulate a global climate agreement by integrating the voluntary and self-determined national contributions of 193 countries. The negotiations in Paris must ensure that the agreement is not so focused on safeguarding the rights of future generations that it ends up sacrificing the lives and prospects of existing at-risk and vulnerable populations in developing countries. Notwithstanding the “creeping normalcy” of climate impact, climate-change-induced natural disasters and extreme weather events are already upon those populations and are only likely to become more extreme in the future. In this context, a rights-based approach could “analyse obligations, inequalities and vulnerabilities,” and “redress discriminatory practices and unjust distributions of power,” as specified by the United Nations Human Rights Commission. It can be established that such obligations apply to the targets and commitments of States in the context of climate change, and therefore future climate regimes should focus on protecting the rights of those most vulnerable to climate change. The Declaration on the Right to Development, proclaimed by the United Nations General Assembly, articulates these human rights principles, and the UNFCCC calls on States to address climate change in keeping with their common but differentiated responsibilities and respective capabilities, in order to benefit both present and future generations. In a still dramatically unequal world, realizing low-carbon, climate-resilient, and sustainable development in all countries is not possible without international cooperation in finance, technology, and capacity-building. It must also be acknowledged that climate change mitigation is not plausible without eradicating poverty and ensuring climate justice across and within nations. Integrating human rights into climate actions and empowering the most vulnerable populations, such as women and children in developing countries, to participate as change-makers in the adaptation and mitigation processes will expedite the mobilisation required to combat the impacts. Providing energy access is an enabler of gender equality, women’s empowerment and inclusive development. Ahead of the Paris conference, the Indian Prime Minister has urged the global community to focus on ‘climate justice’ over climate change. Under-consumption by the poor cannot subsidise the over-consumption of the rich, either across or within nations. In order for future negotiations to be sustainable and successful, States must strive to rise above rhetoric and power-play to shoulder the dual responsibilities of protecting the environment while upholding the rights to life and development – equitably, if not equally. This commentary originally appeared in Global Policy.
“Natural selection” is a unifying theory of the “how” of human beings and determines how what are called the “life sciences” are viewed. Its premise is that human reason is historical and is a product of the modifications which human beings have undergone through time. For Darwin, “life” determines which “modifications” shall succeed and which shall not, through the principle of “natural selection” embodied in the phrase “survival of the fittest”. For Nietzsche, “life” is will to power, and it is the principle upon which what exists persists in its existence. “Survival” is a persistence in the presence of beings that exercise their will to power. How is this will to power expressed? Knowledge is a grasping and retaining of what is true. Truth and the grasping of truth are “conditions of life” and are prior to what we call “experience”. Knowledge takes place when we think and make assertions about things. Such assertions are “judgements”. The thinking that represents beings/things prevails in perception and cognition “in every kind of experience and sensation”. “To perceive” means to take something in advance as being in this or that way, or as not being so, or as being different from how it is (reality). Conversely, things/beings only “reveal” themselves to such a perceiving in such and such a manner. To be the same means to belong together in essence: beings/things are not in being as beings, not present, without such perceiving, i.e. we are incapable of “seeing” them in any other way. As the Greek philosopher Parmenides would say: “Perceiving and Being are the same”. (As an aside, this is why we teach Darwinism as a “reality” rather than as a theory of reality in our classrooms, i.e. as one among many possible ways of perceiving the world.) One cannot think Parmenides’ saying in a modern way such as Schopenhauer’s in his World as Will and Representation, i.e. “representation and Being are the same”, as if the world were merely our representation and “is” nothing in itself and for itself. Nor can it be thought in Bishop Berkeley’s esse est percipi, which denies any reality to beings outside ourselves without our perceiving them, or as upheld in their presence through the perceptions of God. Rather, the saying means that Being (Life) is only where perceiving is, and perceiving is only where beings/things are. What yokes together being and perceiving is what we conceive as “truth”. For the Greeks this yoking is called nous: the thinking that we associate with Reason, which is the enjoinment of thought to beings/things. This enjoining relation was called logos by the Greeks, and it expresses how things are addressed: katagorein (the categories). The schemata of the categories (quality, quantity, relation, etc.) are how beings/things are addressed, the forms into which we address something as something. This is what is understood as species: that from which and in return to which beings/things are; what they are made of, how large or small they are, how they are related to other beings/things. Perceiving things as such unfolds in thinking, and thinking expresses itself in the assertion, in the logos. Western metaphysics determines things/beings in advance as what is conceivable and definable, i.e. what is not “imaginary” or “fantastical”. “Common sense” and metaphysical thinking rest on the “trust” that beings/things show themselves in the thinking of reason and its categories: that what is true and truth are grasped and secured in reason. This has been called the principle of reason.
Nietzsche states: “Trust in reason and its categories, in dialectic (Hegel), thus the value estimation of logic, proves only their usefulness for life, proved by experience—not their ‘truth’”. We cannot view the “trust in reason” and the dominance of logos as ratio as one-sided rationalism, or as merely rationalistic. Irrationalism too belongs within the “trust” in reason, even where irrationalism determines the “world view”: the triumphs of rationalism and of the principle of reason are celebrated within the technological alongside adherence to fundamentally irrational world views. “Trust in reason” is a basic constitution of human beings, the animal rationale. The power and the capacity that brings human beings before beings/things and that represents beings/things for human beings is delivered over to reason. Only what rational thinking represents and secures has a claim to be asserted as a being that is in being. Reason determines what is in being and what is not. Reason is the most extreme pre-decision as to what Being (Life) means. “Logic” and the “logical” are calculated on the basis of trust in reason. When physics thinks beings/things in certain categories (matter, cause, energy, potential) and in its thinking trusts these categories from the start and in its research continually attains new results, such trust in reason in the form of science does not prove that “nature” reveals its essence in anything that is objectively shaped and represented by the categories of physics. Such scientific knowledge only demonstrates that our thinking about nature is “useful” for “life”. (See the blog entry on The Natural Sciences.) What generates practical use is true, and the truth of what is true is to be estimated only according to its degree of usefulness. Here in TOK we refer to this as “robust knowledge” if that usefulness is great. That something is “useful” pertains to the conditions of “life”. What we take these conditions to be, the essential determination of these conditions, the ways of their conditioning, and the character of their conditioning all depend upon the way in which life itself is defined in its essence. That something is useful for life means that scientific knowledge, through the principle of reason, posits and has posited “nature” as being in a sense that secures modern technological success in advance through the calculations of the schemata adopted. This framing is the technological; it is why “technology” is referred to as a fate in these writings and why “choice” is placed in quotation marks. Truth and what is true: how are we to understand “truth as correctness”, or what is called “the correspondence theory of truth”, according to Nietzsche? How is “correctness” to be understood? What must be clarified is truth as a characteristic of reason (and thus of knowledge), the way this characteristic is used to assemble and represent beings/things, and why it must be used as such. In (WP #507) Nietzsche says: “that a great deal of belief must be present; that judgements may be ventured, that doubt concerning all essential values is lacking: that is the pre-condition for every living thing and its life”. What Nietzsche is saying is that truth and what is true are not determined subsequently, in terms of practical use merely accruing to life, i.e. from experience, but rather that truth must already prevail in order for what is alive to live, so that life as such can remain alive.
Accordingly, what is believed and held to be true can (“in itself”) be a deception and untrue; it suffices for it merely to be believed and, best of all, for it to be believed unconditionally and blindly. Does Nietzsche somehow support or believe the current “alternative facts” and machinations that are so much a part of modern politics and propaganda? Is Nietzsche’s conception of truth quite mad? There is the statement that the truth must exist but that it does not necessarily need to be “true”. For Nietzsche, “truth” is a necessary “value”, but it is not the highest value. Our current actual historical conditions and situations are the consequences of the hidden essence of truth, and as consequences they have no control over their ground or origin. Irrationalism and rationalism are bound together. What is essential is conceived as essential in relation to “value” and to its character as a value. “Survival of the fittest” is a value and, as a value, is a “condition of life”. The conditions of our preservation are predicates of Being (Life) for Nietzsche. The necessity of being stable in our beliefs if we are to prosper requires that we determine a “true” world that stands in opposition to a world of change and becoming. The “modification” apparent in Being, which is a product of necessity and chance, is countered by the “true”, stable world of the principle of reason grounded in Being. Being (nature) chooses which modifications will survive and which will not, and these “surviving” modifications are taken as evidence of “progress”, conceived as being “fitter” or better “fitted”. This apparent opposition between the worlds of Being (nature) and becoming (modification) has been present in the thinking of the West since its beginnings. In Platonic philosophy (Platonism, which is to be distinguished from the thinking of Plato himself) the eidos, the outward form/appearance, and the idea, the “whatness” of something, are enjoined. But in Plato, the things that are (this computer, these letters, this software) are eidola, outward appearances, only because they must show their form in sensuous appearance. They are lacking in true “substance”. Yet what is computer-like or software-like still shows itself in its presence here and now; what makes a computer be a computer and software be software is not in the things themselves but only in the eidos and the idea of the things. We say that something is when we always in advance encounter it as always at hand: what is always present and has constant stability in this presence. We call this the true world, reality. The “apparent world” is what is not in being, what is inconstant and without stability, what constantly changes and, in appearing, disappears again. The Christian faith’s distinction between the earthly and the eternal, shaped by faith in redemption and salvation, is an example of the distinction between the “true” and “apparent” worlds. Nietzsche states: “Christianity is Platonism for the masses”. Nietzsche’s thought searches for the origin of this distinction between the worlds, and he finds this origin in “value relations”. What is constant and stable is of higher value than what is changing and flowing. Why? Nietzsche understands “value” as a “condition of life”. To “condition”, to be a “condition”, signifies essence: what something is, what state it is in. Life, both of human beings and of “nature”, stands under certain conditions, and it posits and preserves these as its own and in so doing preserves itself.
Value-positing does not mean a valuation that someone gives to life from the outside; valuation is the fundamental occurrence of life itself; it is the way life brings its essence to fulfillment. Essence precedes existence. Human life will in advance direct the positing of the conditions securing its preservation (survival) according to how life itself determines its essence to itself and for itself. If life is only constantly concerned with maintaining itself and being secured in its constancy, if life means securing the constancy that has come down to it and been taken over by it, then life will make whatever is enough for securing its constancy (preservation) its most proper conditions, and these will have the highest value. Only what has the character of maintaining and securing preservation in general can be taken as a condition of life, i.e. has a value. Only this is real. Nietzsche says: “We have projected the conditions of our preservation as predicates of Being in general”. Human beings are driven to secure their own permanence (currently manifested in the drive for AI). The only condition is that life instill of itself and in itself a belief in something it can constantly hold to in all matters (this is the reasoning behind the statement made elsewhere that religion is what we bow down to or what we look up to: what we hold to be of “highest value”). The taking of something to be true is not some arbitrary activity; it is not like the machinations of the “alternative facts” charlatans who float on a sea of nihilism. It is rather the behaviour necessary for securing the permanence of life itself. The next steps are to gain insight into the metaphysical connections between life as “preservation”/constancy and the role of value in determining what gives this preservation permanence.
An attitude is a group of opinions, values and dispositions to act associated with a particular object or concept. Measuring attitude in a survey can be difficult because it requires a series of questions to evaluate it effectively. Here are some examples of subjects that an attitude survey might attempt to measure.
- Attitude on Immigration
- Attitude on Space Exploration
- Attitude on Stem Cell Research
Four factors commonly influence the responses:
(1) There is a bias to respond with ‘agree’ categories rather than ‘disagree.’
(2) There is a bias to select categories on the left side of the scale rather than the right.
(3) There is a tendency to select responses towards the center of the scale and avoid the extremes of “strongly” agree or disagree.
(4) There is a tendency for respondents to fall into a pattern of response, such as all “agree” or all “no opinion.”
Likert Scale Questions – How They Help Measure Respondent Attitude
A Likert scale is a psychometric scale, and questions based on it are among the most widely used question types in surveys. In a Likert scale survey, respondents don’t simply choose between “yes” and “no”; they select from specific choices based on degrees of “agreeing” or “disagreeing” with a given statement. Likert scale survey questions are essential in measuring a respondent’s opinion or attitude towards a given subject. A Likert scale is typically a five, seven, or nine point agreement scale used to measure respondents’ agreement with a variety of statements. Organizational psychologist Rensis Likert developed the scale to assess a respondent’s level of agreement or disagreement on a symmetric agree-disagree scale. In general, a series of statements, each designed to view a construct from a slightly different perspective, are leveraged. The power of this technique is that it works across disciplines: it is just as applicable to a social science construct as to a marketing one. Perhaps the most important part of the survey process is the creation of questions that accurately measure the opinions, experiences and behaviors of the public. Accurate random sampling and high response rates will be wasted if the information gathered is built on a shaky foundation of ambiguous or biased questions. Creating good measures involves both writing good questions and organizing them to form the questionnaire. Questionnaire design is a multistage process that requires attention to many details at once. Designing the questionnaire is complicated because surveys can ask about topics in varying degrees of detail, questions can be asked in different ways, and questions asked earlier in a survey may influence how people respond to later questions. Researchers also are often interested in measuring change over time and therefore must be attentive to how opinions or behaviors have been measured in prior surveys.
LIKERT SCALE QUESTIONNAIRE AND EXAMPLES:
Unipolar Likert Scale Examples
Unipolar scales are more contoured, asking respondents to focus on the presence or absence of a single quality. The scale measures ordinal data, but most of the time unipolar scales generate more accurate answers. An example of a unipolar satisfaction scale is: not at all satisfied, slightly satisfied, moderately satisfied, very satisfied, and completely satisfied.
Likert Scale Questions (Unipolar)
A unipolar Likert scale question asks a respondent to think about the presence or absence of a quality.
For example, a common unipolar scale includes the following choices: not at all satisfied, slightly satisfied, moderately satisfied, very satisfied, and completely satisfied. It is arranged as a five point scale, which can be labeled A to E. Unipolar question types also lend themselves to cases where there is either a maximum amount of the attitude or none of it. For instance: how helpful was the apple pie recipe? Very helpful, somewhat helpful, or not at all helpful. From there, we can safely assume there is something in between, like “sort of” helpful.
Bipolar Likert Scale Examples
A bipolar scale asks a respondent to balance two opposing qualities, defining the relative proportion of those qualities. Where a unipolar scale has one “pole,” a bipolar scale has two polar opposites. For example, a common bipolar scale includes the following choices: completely dissatisfied, mostly dissatisfied, somewhat dissatisfied, neither satisfied nor dissatisfied, somewhat satisfied, mostly satisfied, and completely satisfied. That is a scale with 0 in the middle (-3, -2, -1, 0, 1, 2, 3).
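To make the coding concrete, here is a minimal sketch of how responses on the two scales above might be mapped to numeric codes for analysis. The scale labels mirror the examples in the text; the sample responses are invented for illustration, and this mapping is one common convention rather than a prescribed standard.

```python
# A minimal sketch of coding Likert responses numerically.
# Unipolar items run 1..5 (absence -> presence of one quality);
# bipolar items run -3..+3 with 0 at the neutral midpoint.

UNIPOLAR = {
    "Not at all satisfied": 1,
    "Slightly satisfied": 2,
    "Moderately satisfied": 3,
    "Very satisfied": 4,
    "Completely satisfied": 5,
}

BIPOLAR = {
    "Completely dissatisfied": -3,
    "Mostly dissatisfied": -2,
    "Somewhat dissatisfied": -1,
    "Neither satisfied nor dissatisfied": 0,
    "Somewhat satisfied": 1,
    "Mostly satisfied": 2,
    "Completely satisfied": 3,
}

def score(responses, scale):
    """Map response labels to numeric codes; return the mean and the codes."""
    codes = [scale[r] for r in responses]
    return sum(codes) / len(codes), codes

# Invented example responses to a single bipolar item.
responses = [
    "Somewhat satisfied",
    "Mostly satisfied",
    "Neither satisfied nor dissatisfied",
    "Somewhat dissatisfied",
]
mean, codes = score(responses, BIPOLAR)
print(f"Codes: {codes}, mean = {mean:.2f}")  # mean > 0 leans toward satisfied
```

Because Likert responses are ordinal rather than interval data, summary means like this should be interpreted with caution; medians or frequency distributions are often the safer summary.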
The Five Pillars of Islam are the framework of a Muslim’s life. They are the belief or testimony of faith, prayer, giving zakat (support of the needy), fasting during the month of Ramadan, and the pilgrimage to Makkah once in a lifetime for those who are able. The testimony of faith is saying with conviction, “La Ilaha Illallah, Muhammadur Rasool Allah.” This saying means “There is no true God but Allah, and Muhammad (PBUH) is the Messenger (Prophet) of Allah.” The first part, “There is no true God but Allah,” means that none has the right to be worshipped but Allah alone, and that Allah has neither partner nor son. This belief or testimony of faith is called the Shahada. Muslims perform five prayers a day. Prayer in Islam is a direct link between the worshipper and Allah. There are no intermediaries between Allah and the worshipper. Prayers are performed at dawn, noon, mid-afternoon, sunset, and night. The original meaning of the word Zakat is both ‘purification’ and ‘growth.’ Giving Zakat means giving a specified portion, two and a half percent, of one’s qualifying property to certain classes of needy people. Every year in the month of Ramadan, Muslims fast from dawn until dusk, abstaining from food, drink and other prohibited activities. The annual pilgrimage (Hajj) to Makkah is an obligation once in a lifetime for those who are physically and financially able to perform it.
Inflection - teaching idea
In this step we consider how to use film to explore inflection in an active way. We’ve posted an audio clip from the opening of another one of our shorts for you to listen to. From the sound of the voices, what can you tell about who the characters might be, what kind of situation they are in, and what kind of exchange this is? What kinds of mood are being created? What range of emotions can you pick up on, just from the sounds of the voices? If you don’t speak French, is it possible to tell from intonation and inflection whether a character is asking or answering a question, or making an exclamation? NB: if you already speak French, it might not necessarily help! Add your comments and ideas to the comments section. Ask your learners to write a ‘story opener’ - a one-sentence beginning to a story - that sets out what they think is going on. This is a task that can be carried out in a target language, and you could specify a given tense or grammatical form. Beyond this, think about how you might scaffold or support your learners’ engagement with intonation and inflection, using a sequence of film dialogue such as this one. How could you develop or adapt this idea/activity for use with your learners?
© Annick Teninge - La Poudriere
The history of human evolution may have to be rewritten after scientists discovered evidence that Europe, not Africa, was the birthplace of mankind. Currently, most experts believe that our human lineage split from apes around seven million years ago in central Africa, where hominids remained for the next five million years before venturing further afield. But two fossils of an ape-like creature with human-like teeth have been found in Bulgaria and Greece, dating to 7.2 million years ago. The discovery of the creature, named Graecopithecus freybergi and nicknamed ‘El Graeco’ by scientists, suggests our ancestors were already starting to evolve in Europe 200,000 years before the earliest known African hominid. An international team of researchers say the findings entirely change the beginning of human history and place the last common ancestor of both chimpanzees and humans (the so-called Missing Link) in the Mediterranean region. At that time, climate change had turned Eastern Europe into an open savannah, which forced apes to find new food sources, sparking a shift towards bipedalism, the researchers believe. “This study changes the ideas related to the knowledge about the time and the place of the first steps of the humankind,” said Professor Nikolai Spassov of the Bulgarian Academy of Sciences. “Graecopithecus is not an ape. He is a member of the tribe of hominins and the direct ancestor of homo. The food of the Graecopithecus was related to the rather dry and hard savannah vegetation, unlike that of the recent great apes which are living in forests. Therefore, like humans, he has wide molars and thick enamel. To some extent this is a newly discovered missing link. But missing links will always exist, because evolution is an infinite chain of subsequent forms. Probably El Graeco’s face would have resembled a great ape’s, with shorter canines.” The team analysed the two known specimens of Graecopithecus freybergi: a lower jaw from Greece and an upper premolar tooth from Bulgaria. Using computed tomography, they were able to visualise the internal structures of the fossils and show that the roots of the premolars are widely fused. “While great apes typically have two or three separate and diverging roots, the roots of Graecopithecus converge and are partially fused, a feature that is characteristic of modern humans, early humans and several pre-humans,” said lead researcher Professor Madelaine Böhme of the University of Tübingen. The lower jaw has additional dental root features, suggesting that the species was a hominid. The species was also found to be several hundred thousand years older than the oldest African hominid, Sahelanthropus tchadensis, which was found in Chad. “We were surprised by our results, as pre-humans were previously known only from sub-Saharan Africa,” said Jochen Fuss, a Tübingen PhD student who conducted this part of the study. Professor David Begun, a University of Toronto paleoanthropologist and co-author of the study, added: “This dating allows us to move the human-chimpanzee split into the Mediterranean area.” During this period, the Mediterranean Sea went through frequent episodes of drying up completely, forming a land bridge between Europe and Africa and allowing apes and early hominids to pass between the continents.
The team believe that the evolution of hominids may have been driven by dramatic environmental changes which sparked the formation of the North African Sahara more than seven million years ago and pushed species further north. They found large amounts of Saharan sand in layers dating from the period, suggesting that the desert lay much further north than it does today. Professor Böhme added: “Our findings may eventually change our ideas about the origin of humanity. I personally don’t think that the descendants of Graecopithecus died out; they may have spread to Africa later. The split of chimps and humans was a single event. Our data support the view that this split was happening in the eastern Mediterranean – not in Africa. If accepted, this theory will indeed alter the very beginning of human history.” However, some experts were more skeptical about the findings. Retired anthropologist and author Dr Peter Andrews, formerly of the Natural History Museum in London, said: “It is possible that the human lineage originated in Europe, but very substantial fossil evidence places the origin in Africa, including several partial skeletons and skulls. I would be hesitant about using a single character from an isolated fossil to set against the evidence from Africa.” The new research was published in the journal PLOS ONE.