The central dogma describes the two-step process, transcription and translation, by which genetic information flows from genes into proteins: DNA → RNA → protein. Proteins, which are encoded by individual genes, orchestrate nearly every cell function. The central dogma sheds light on the flow of genetic information in cells from DNA through mRNA to protein. It states that genes specify the sequence of mRNA molecules, which in turn specify the sequence of proteins. The main processes all cells use to maintain their genetic information and to convert it into gene products (either RNAs or proteins, depending on the gene) are:
- Replication – the basis of biological inheritance in living organisms. In this process a cell's DNA is copied: the enzyme known as DNA polymerase copies a single parental double-stranded DNA molecule into two daughter double-stranded DNA molecules.
- Transcription – the process of making a complementary RNA copy of a DNA sequence. RNA and DNA are nucleic acids that use base pairs of nucleotides as a language that enzymes can convert back and forth between DNA and RNA. The enzyme known as RNA polymerase forms an RNA molecule that is complementary to the gene-encoding stretch of the deoxyribonucleic acid (DNA).
- Translation – the process through which messenger RNA is decoded to form a protein, otherwise referred to as a polypeptide. Using mRNA as a template, the ribosome creates a chain of amino acids, which folds to become a protein. Free amino acids are carried from the cytoplasm to the ribosome by transfer ribonucleic acid (tRNA). Amino acids are added to the end of the growing polypeptide chain until the ribosome reaches a stop codon on the mRNA, at which point it releases the completed protein into the cell.
In eukaryotic organisms (organisms that have a nucleus), replication and transcription take place within the nucleus, while translation takes place outside of the nucleus, in the cytoplasm. In prokaryotic organisms (organisms that do not have a nucleus), replication, transcription, and translation all take place in the cytoplasm.
- Ribosome: a complex of RNA and protein, found in all cells, that produces proteins by translating mRNA.
- Codon: a series of three adjacent nucleotides that encodes a specific amino acid (or a stop signal) during protein synthesis.
- Degenerate: describes the redundancy of the genetic code; more than one codon can code for the same amino acid.
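The two-step flow described above, and the degeneracy of the codon table, can be sketched in a toy Python example. The codon table below is a small, illustrative subset of the real genetic code (only enough entries to translate the sample sequence), not a complete table:

```python
# Toy sketch of the central dogma: DNA -> mRNA -> protein.

def transcribe(dna):
    """Produce the mRNA equivalent of the coding strand (T becomes U)."""
    return dna.replace("T", "U")

# Illustrative subset of the standard genetic code. Note the degeneracy:
# GGU and GGC both encode glycine (G).
CODON_TABLE = {
    "AUG": "M",              # start codon (methionine)
    "GGU": "G", "GGC": "G",  # glycine: two codons, one amino acid
    "UUU": "F",              # phenylalanine
    "UAA": None,             # stop codon
}

def translate(mrna):
    """Read the mRNA three bases (one codon) at a time until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid is None:       # stop codon: release the chain
            break
        protein.append(amino_acid)
    return "".join(protein)

dna = "ATGGGTGGCTTTTAA"    # coding strand of a tiny hypothetical gene
mrna = transcribe(dna)     # "AUGGGUGGCUUUUAA"
print(translate(mrna))     # prints "MGGF"
```

The GGU/GGC pair shows degeneracy directly: changing one codon to the other changes the DNA and mRNA but not the protein.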
In Week 3 of the course our focus turns to pedagogical issues in learning and teaching. We start with blended and online learning as different modes of teaching. No matter what mode of teaching is used, the teacher needs to make sure that assessment is well designed, as both assessment and feedback play a pivotal role in the curriculum and in supporting student learning. Therefore, this week we will also begin to investigate the issue of assessment, and the related issues of feedback and academic integrity. Historically, teaching and learning have been conducted ‘in person’, in physical spaces and places where people would come together. The activities of teaching and learning were physical. As technologies advanced, ‘distance education’ emerged, using media such as paper, post, radio, film, video, and television. Today, the Internet, computers, tablets, and smartphones have enabled further developments, integrating technology into the teaching context and opening up new ways to facilitate teaching and learning. Among other terminologies, ‘digital’, ‘online’, and ‘blended’ learning have become useful and practical concepts for integrating technology to facilitate teaching and learning. Consider a course that you have taught or studied. - What was the role of technology in the course? - How did technology help students to learn? (Keep a record for your ePortfolio) As well as continuing to teach face-to-face, teachers worldwide are engaging students online or in digital tasks for learning. Increasingly, universities ‘blend’ the face-to-face course with the online aspects of the course. Teaching in these complex environments provides both opportunities and challenges for teachers and learners. Let’s explore what these terms mean. Online learning gives students access to their courses from a remote location.
They can complete learning activities and assessments online that are equivalent to an on-campus course (UNSW, 2015). Digital learning is any type of learning facilitated by technology or by instructional practice that makes effective use of technology. Digital learning occurs across all learning areas and domains. It covers the application of a wide spectrum of practices, including blended and virtual learning (Education and Training, 2015). Blended learning uses various combinations of traditional face-to-face learning experiences with online and mobile technologies. Blended learning can, but doesn’t have to, occur at the same time as face-to-face teaching. These days many universities provide blended and online courses; blended learning is now one of the most significant technological trends driving educational change in higher education institutions (Johnson et al., 2016, p. 1). Blended learning can support student-centred approaches to learning. It enables teachers to move material that would usually be covered in class into the online or mobile environment, sharing the responsibility for learning with the student. The online components and the face-to-face time need to be integrated so that the learning experience is connected across the two modes. This requires thoughtful course design to make the best use of both contexts. What is your experience with blended learning? (Keep a record for your ePortfolio)
References
Education and Training (2015). Teaching with Digital Technologies. Victoria State Government.
Johnson, L., Adams Becker, S., Cummins, M., Estrada, V., Freeman, A., and Hall, C. (2016). NMC Horizon Report: 2016 Higher Education Edition. Austin, Texas: The New Media Consortium.
UNSW (2015). Blended and online learning.
All English courses include development of skills in reading, writing, speaking, and listening. All courses include specific study of vocabulary, spelling, usage, punctuation, and grammar; library research; and assigned outside reading in addition to the literature studied in class. All courses include work in composition. ENGLISH 9 (1 UNIT) Throughout ninth grade English, students will use literary elements (irony, figurative language, symbolism) to understand reading selections. They will also be able to distinguish differences among various forms of poetry (sonnet, lyric, narrative, epic) and engage in a variety of shared-reading experiences. Students will also analyze poetry in order to recognize the differences between poetic and everyday language. During the course of the year, the students will study the development of characters and central themes. There will be a variety of pre-writing and writing tasks for students to perform in order to demonstrate their abilities as writers who understand their audiences and the accepted conventions of the English language, as opposed to e-mail shorthand. Knowledge and the ability to use a variety of research tools (newspapers, magazines, and online data resources) in order to distinguish between provable statements and assumptions will be taught. The ISafe Program will be presented during all English 9 classes. Students will understand the consequences of plagiarism.
ENGLISH 10 (1 UNIT) Throughout tenth grade English, students will continue to work on the skills that were begun in English 9. In addition, they will be reading and evaluating short stories, novels, plays, poetry, and essays. They will be asked to read critically and write complete and insightful responses. They will need to become proficient in their grammar skills, especially punctuation, spelling, capitalization, correct pronoun use, and paragraphing. Students will also work on improving their listening, thinking, and writing skills in order to prepare for the Regents examination in grade 11. ENGLISH 11 (1 UNIT) This course emphasizes the application of writing skills through the organization of compositions and themes. An introduction to American literature is used for interpretation and critical reading. Students will successfully complete writings in persuasion and various modes of exposition about a variety of topics. Students will develop research skills and gain confidence in completing a well-documented research-based paper. Students will also continue to develop their command of the conventions of standard written English. These activities should ultimately produce improved critical thinking skills that will evidence themselves in written and oral communication. Finally, since writing is not only about clear and precise communication but also a tool for personal discovery, students should see this class as an opportunity to continue to develop their own individual "voice." This course focuses on imaginative literature – drama, poetry, and prose fiction. Students will be expected to consider how authors utilize the tools at their disposal (elements of fiction, figurative language, devices of sound and structure, etc.) in order to create their literary works. In the process, students will be exposed to works from a variety of cultures and time periods. Students will be expected to improve their abilities to write critically and analytically about drama, poetry, and prose fiction. Students will also be expected to see literature as a commentary on human experience and to view literary works from a variety of perspectives and interpretive approaches. ENGLISH 12 (1 UNIT) This course must prepare seniors for the variety of opportunities and responsibilities they will encounter after graduation. Ultimately, English 12 will help students apply their English language skills to the world beyond the classroom.
English 12 will offer reading, writing, and speaking components with units in the novel, research writing, drama and film, persuasive writing, children’s literature, and the short story. Within these units poetry will be considered and creative writing and public speaking tasks will be assigned. In some cases these components will be mixed and merged under thematic units. For example, a thematic unit entitled "Power, Authority, and Civic Responsibility" will allow students to consider drama, novels, short stories, poems, as well as non-fiction articles and essays. From these sources students can then engage in a variety of writing tasks and speaking opportunities.
By George Christian Stowe, PA Office This edition’s “Bug-of-the-Month” is a Decapod in the genus Palaemonetes (commonly referred to as the freshwater prawn, freshwater shrimp, or grass shrimp), a member of the familiar crayfish order Decapoda, which translates into “ten feet.” This is in reference to one of four types of legs found on all members of this order. Although closely related to crayfish, prawns are completely shrimp-like in appearance. What is interesting about them is that they represent a freshwater analog to marine shrimp, an environment with far more species than are found in freshwater. It is always interesting to find a freshwater version within a group that is almost exclusively marine. For example, there is a freshwater polychaete worm called Manayunkia (one of our previous Bugs-of-the-Month), there is a freshwater jellyfish, and there is a freshwater sponge. Our specimen came from a sample collected from a small coastal plain stream called Devils Brook. The sample was collected by a watershed association called The Watershed Institute, which administers a monitoring program near Pennington, New Jersey. One of the more interesting characteristics of the decapods is the number of different kinds of legs they have and the degree of specialization of each. Up front are the maxillipeds, which function like the mouthparts of most crustaceans. Prawns are omnivorous and use the maxillipeds to handle and mince their food, as well as to clean their antennae. Along the thorax are five pairs of what are called pereiopods, or “walking legs.” The first of these are chelate (adapted for grasping prey) and the remaining four are used for locomotion. This is where the “ten feet” name comes in. On the abdomen are five more pairs of what are called pleopods, or “swimmerets,” which (as implied by the name) are used to swim through the water and by the females to hold and incubate their eggs.
The last pair of appendages forms the “tail fan” of the prawn; these are called uropods. They are flattened appendages that give prawns the ability to escape predators by swiftly darting backward. So, freshwater shrimp can move by walking along the bottom, steadily swimming, or darting through the water. Palaemonetes, our Normandeau “Bug-of-the-Month.”
Toxicity of Black Walnut Trees in Ohio Black walnut trees growing in Ohio yards and landscapes can be a plus or a minus, depending on where they are located. Although black walnut trees produce edible nuts for people and wildlife, they may also harm nearby plants, gardens, and shrubbery. Shade, Sustenance, and Beauty Black walnuts (Juglans nigra) are wonderful shade trees. They can grow up to 100 feet high. Walnut trees produce food for humans and wildlife (squirrels, chipmunks, birds, and insects). These critters love them! But in Ohio (and other states), these trees release a naturally toxic antifungal chemical called juglone. Nearby grass, gardens, and other plants can die from it. Gardening experts say that not all plants are sensitive to the chemical; however, shrubs, vines, ground covers, perennials, annuals, and gardens may be affected when planted near black walnut trees. Toxicity of Juglone in Black Walnut Trees Juglone is present in all parts of black walnut trees, including buds, leaves, roots, stems, nuts, and hulls. The severity of the damage varies, but the toxic substance mostly affects other plants through root contact, fallen and decaying plant material in the soil, and rainfall (foliage may drip juglone onto vegetation). Plants, gardens, shrubbery, and other greenery growing underneath a mature walnut tree’s canopy—about 50 to 80 feet from the trunk—may be severely damaged and eventually die. Effects of Juglone on Nearby Plants Just like people, plants need oxygen to survive. When various types of vegetation come in contact with juglone, they yellow and wilt because the chemical interferes with their respiration, depriving them of the energy they need. Black walnut trees in Ohio (and other areas of the Midwest) can partially or completely kill off gardens—especially those growing tomatoes, eggplant, potatoes, and peppers. Flowers and ornamental plants are also sensitive to the effects of juglone’s toxicity.
What to Watch For Juglone causes what is known as “walnut wilt”—it may be the reason for limp and decaying greenery if your garden or landscaping is located near a black walnut tree. Yellowing, fading leaves and shriveling, discolored stems are indicators of juglone poisoning. In some plants, like tomatoes and potatoes, negative reactions can happen quickly and kill off the vegetation within a day or two. Shrubs and trees show similar symptoms – especially on new growth – and the damage may eventually kill the plant. Controlling the Effects of Juglone Black walnut tree roots exude juglone into the soil; is it possible to control the substance so that it does not kill neighboring plants? No, but, also … yes. Juglone cannot be controlled by sprays or antitoxins, say gardening experts from Ohioplants.org. The best thing for homeowners to do is to avoid planting gardens near black walnut trees, and vice versa. Separating black walnut trees from other vegetation is the best way to keep the toxicity of juglone under control. Tomatoes, apples, pears, berries, potatoes, and various landscaping shrubs are in danger of being poisoned, as are rhododendrons, lilacs, and azaleas growing too close to the tree roots. Planting Around Black Walnut Trees Having black walnut trees in your yard does not mean you cannot have a vegetable garden, but you will want to be cautious about where you place it. Dig your garden 100 feet (or more) away from mature trees. The trees' toxic zones are typically within 50 or 60 feet of the trunks, but roots typically extend 80 feet or more. What You Can Do Raised soil beds (in sunlit locations with proper pH balance) will help protect gardens in Ohio landscapes that have black walnut trees. Raised garden beds with amended soil will lessen opportunities for tree roots to grow toward the plot.
Black walnut tree twigs, nuts, hulls, leaves, stems, and branches must be kept away from the garden. Remove black walnut tree seedlings as they sprout in unwanted areas. Other Ways to Control the Spread of Juglone - Do not chop up fresh black walnut trees for mulch or wood chips. When completely dried, black walnut tree bark is useful for mulch—but it must be composted for at least six months. (Toxic chemicals break down in compost when exposed to water, bacteria, and oxygen. The toxin typically degrades within two to four weeks in compost, but can take up to two months in soil.) - Improve garden soil by adding organic matter and improving drainage; this encourages the soil life that breaks down the toxin. - Choose “tolerant” shrubbery, ground covers, flowers, grasses, and vines to plant near the black walnut tree. (Check with your local garden store.) When it comes to controlling the toxin, cutting the tree down won’t solve the problem. Juglone remains in the wood while the roots are decomposing; this could take five years or longer. What to Plant While this list is not a scientific determination, says K-State Research & Extension, these vegetable plants and fruit trees are known to thrive under black walnut trees: - Beans, beets, melons, carrots, onions, corn, squash, parsnips - Peaches, nectarines, cherry plums, pears, black raspberries, quince Annuals: begonias, violets, morning glories, impatiens, pansies, marigolds, and zinnias. Various shrubs, vines, and trees, as well as certain bulb flowers (like daffodils), can be planted near black walnut trees. What Not to Plant This vegetation is susceptible to the toxicity of juglone: - Tomatoes, rhubarb, cucumbers, eggplant, potatoes, peppers, alfalfa, asparagus, and cabbage - Blueberries, blackberries, grapes, various kinds of pears Annuals, perennials, and bulbs: petunias, coral bells (Heuchera sp.), chrysanthemum (Chrysanthemum morifolium), and Colorado columbine (Aquilegia caerulea).
Planting Advice for Gardens and Landscapes Near Black Walnut Trees in Ohio Because there are so many fruits, vegetables, vines, ground covers, and shrubs that can die from the toxic effects of juglone, the best advice for planting near black walnut trees is to consult with gardening experts or your county extension service. Keep in mind, however, that any lists available may not be all-inclusive or completely accurate, because scientific outcomes do vary; some of these experiments are based only on observation of plants in specific environments. Ohio's black walnut trees are prolific—they produce their fruits easily and can sprout up in a number of surroundings. Before planting gardens and landscapes, make note of where the black walnut trees are growing in your yard. This content is accurate and true to the best of the author’s knowledge and is not meant to substitute for formal and individualized advice from a qualified professional. Questions & Answers Could juglone be responsible for the death of Japanese maple, Hydrangea petiolaris, and porcelain vine? Depending on where these plants are located, proximity to a black walnut could be a factor, although the plants you've noted are not known to be typical victims. Other factors could also play a part in their death, such as disease, insect infestations, soil conditions, etc. © 2017 Teri Silver
March 8, 2019 In 1911, over one million people took to the streets of Austria, Denmark, Germany, and Switzerland for equal rights and suffrage. It was the first International Women’s Day—a day the world continues to celebrate more than a century later. Those inaugural participants had little reason to include heat-trapping emissions or global warming in their concerns, although American scientist Eunice Newton Foote had defined the greenhouse effect decades prior, in 1856. (A first for which more credit is due.) Ice core research shows that Earth’s atmosphere had just over 300 parts per million of carbon dioxide in 1911. In 2019, we hover around 410 parts per million. Those numbers can seem abstract, but they are deeply consequential. At 410 parts per million and rising today, we face a rapidly warming world, with emissions at an all-time high. These are planetary conditions unknown to any human beings before us—and uncharted territory for our survival. Since 1911, we have entered a new geologic age, The Anthropocene, so called because human activity is now the dominant influence shaping the planet. Our warming world is the defining backdrop for International Women’s Day in 2019. The theme of this year’s International Women’s Day—#BalanceforBetter—calls for improved gender parity to improve the world. That aspiration is entangled with climate change in two elemental ways. First, while the negative effects of climate change touch everyone, research shows they hit women and girls hardest. Simultaneously, and surprisingly, advancing key areas of gender equity can help curb the emissions causing the problem. These dual dynamics forge an inextricable link between climate change and the possibility of a more gender-balanced society. Women and girls face disproportionate harm from climate change because it is a powerful “threat multiplier,” making already tenuous situations or existing vulnerabilities worse. 
We have seen that play out in places from New Orleans after Katrina to Nairobi. Especially under conditions of poverty, women and girls face greater risk of displacement or death from natural disasters. Droughts and floods have been tied to early marriage and sexual exploitation—sometimes last-resort survival strategies. Tasks such as collecting water and fuel or growing food fall on female shoulders—sometimes literally—in many cultures. These activities are already challenging and time-consuming; climate change can deepen the burden, and with it, struggles for health, education, and financial security. In very real ways, climate change thwarts the rights and opportunities of women and girls. These realities make gender-responsive strategies for climate resilience and adaptation critical. They make centering the rights, voices, and leadership of women and girls a necessity. It turns out that gender is equally important for solutions to stem climate change. Research from Project Drawdown shows that securing the rights of women and girls can have a positive impact on the atmosphere, comparable to wind turbines, solar panels, or forests. Why? In large part because gender equity has ripple effects on the growth of our human family. When girls and women have access to high-quality education and reproductive health care, they have more agency and make different choices for their lives. Those choices often include marrying later and having fewer children. The decisions individual women and their partners make add up. Across the world and over time, they influence how many human beings live on this planet and eat, move, build, produce, consume, and waste—all of which generates emissions. To be sure, those emissions are not generated equally. The affluent produce far more than the poor. The average American produces almost 17 tons of carbon dioxide per capita each year, compared with 1.7 tons for someone in India and just one-tenth of a ton for someone in Madagascar.
Anyone who says curbing population is a silver bullet is ignoring critical variables of production and consumption. We must see the whole ecosystem, not just the trees. Both education and family planning are basic human rights, not yet reality for too many people. Around the world, 130 million school-age girls are not in the classroom. They are missing a vital foundation for life, and that fundamental right must be secured. The same is true for access to high-quality, voluntary reproductive health care. Some 45% of pregnancies in the United States are unintended, while 214 million women in lower-income countries say they want to prevent pregnancy but have “unmet need” for contraception. Policy changes made by the Trump administration are set to worsen both of those statistics, with ripple effects for the planet. Of course, girls’ and women’s leadership on climate also goes way beyond family choices. Many of the vital voices and agents of change for a liveable planet are female. Women and girls are overcoming unequal representation at decision-making tables and underinvestment in their efforts. One need look no further than the example of 16-year-old Swedish activist Greta Thunberg and the growing community of teenage girls leading school strikes for climate around the world. “The climate crisis has already been solved,” Thunberg has said. “We already have all the facts and solutions. All we have to do is to wake up and change. ... So instead of looking for hope, look for action. Then, and only then, hope will come.” I imagine today’s school strikers would find kindred spirits among the participants in International Women’s Day 1911. They are certainly building on the legacy of raising voices and asserting rights. More importantly, they need courageous comrades today. We are reckoning with a planetary challenge of unprecedented scale and severity. 
The world must mobilize climate solutions as quickly and fully as possible, remembering that gender equity is itself one. Perhaps the silver lining of The Anthropocene is that if human forces can put our planet in the balance, we can also regain equilibrium. It is our choice. That may be the truest, most crucial meaning of #BalanceforBetter. © Katharine K. Wilkinson
Karl Lashley, in full Karl Spencer Lashley, also called Karl S. Lashley, (born June 7, 1890, Davis, West Virginia, U.S.—died August 7, 1958, Paris, France), American psychologist who conducted quantitative investigations of the relation between brain mass and learning ability. While working toward a Ph.D. in genetics at Johns Hopkins University (1914), Lashley became associated with the influential psychologist John B. Watson. During three years of postdoctoral work on vertebrate behaviour (1914–17), Lashley began formulating the research program that was to occupy the remainder of his life. He cooperated with Watson on studies of animal behaviour and also gained the skills in surgery and microscopic tissue study needed for investigating the neural basis of learning. In 1920 he became an assistant professor of psychology at the University of Minnesota, where his prolific research on brain function gained him a professorship in 1924. His monograph Brain Mechanisms and Intelligence (1929) contained two significant principles: mass action and equipotentiality. Mass action postulates that certain types of learning are mediated by the cerebral cortex (the convoluted outer layer of the cerebrum) as a whole, contrary to the view that every psychological function is localized at a specific place on the cortex. Equipotentiality, associated chiefly with sensory systems such as vision, relates to the finding that some parts of a system take over the functions of other parts that have been damaged. Lashley was a professor at the University of Chicago (1929–35) and Harvard University (1935–55) and also served as director of the Yerkes Laboratories of Primate Biology, Orange Park, Florida, from 1942.
His work included research on brain mechanisms related to sense receptors and on the cortical basis of motor activities. He studied many animals, including primates, but his major work was done on the measurement of behaviour before and after specific, carefully quantified, induced brain damage in rats.
List of names of famous African American Olympians
Resource materials for the students or access to the school library
Note: A playback system will also be needed.
Introduction Ask the students to think about their favorite athlete or an athlete that they know of. Ask the students to share their knowledge of the Olympics. Tell the students that they are going to read and listen to a story about a famous American athlete named Jesse Owens. They will learn several reasons why he is special when they read the book. Show the students the cover. Using the Learning Through Listening website as a guide, introduce the POWER listening strategy to the students (or review it if the students have used the strategy before). Distribute the summarizing form found on the website within the strategy. Listen to the story with the students, stopping when appropriate to take notes and check for comprehension. Have students complete the summary forms. Allow students to share what they learned about Jesse Owens by using the information on their summary sheets. Discuss the importance of Jesse Owens and his accomplishments. Tell the students that during the next class, they will research information about other famous African American Olympians. Practice Review the information the students learned yesterday about Jesse Owens. Use the overhead projector to show students the graphic organizer they will use for individual research. Either assign students the names of athletes or have them choose a person to research from a list of names. Students should research the person selected or assigned and complete the graphic organizer. Wrap-up Allow students time to share the information about the person they researched. Discuss where and when the next Olympic games will be held and the various events that will be included. Completion of the graphic organizer during individual research.
Differentiated Instruction Lesson Tips Have students only complete the Instruction part of the lesson. Assign specific names to individual students and provide resources for them at their instructional reading level. Have students use the graphic organizer to write a report about the person they researched. Have students create oral presentations about the person they researched and bring in five objects or pictures to represent that person.
You might have a weak area, and you may wish to increase time in that section.

Bach's six Brandenburg pieces display a variety of styles, influences, and musical preoccupations and were probably not conceived of as a set.

A facsimile of Bach's Brandenburg Concerto No. 6 calls for two solo violas and no violins. Until the second half of the last century it was common to replace the two gambas with cellos, but the concerto loses that special sonority.
Mental Maps Exercise

In the 1940s, psychologist Edward Tolman observed through his study of rats that we navigate through space based on a cognitive map of our world that we keep in our heads. Unlike a reference map like Google Maps that contains a wide variety of detail, cognitive maps are more selective and reflect the things we have experienced, filtered through our own subjectivities and values. This group exercise is intended to help you examine your own mental maps in comparison to others in the group.

Take out a sheet of paper and take 15 minutes to draw your mental map of an area in your life that you know fairly well.
- This should be an area other than your home, to preserve your privacy
- This should be an area other than the Farmingdale State College campus and immediate vicinity, so we get some variety in class
- Do this from memory rather than using an online resource

On the map, clearly indicate: Cartographer: <Your Name>

Find a partner, introduce yourself, exchange maps, and compare your two mental maps. Add the following as a numbered list of comparisons between your two maps. On your partner's map, clearly indicate: Reviewer: <Your Name>

Spatial Scale. Scale in mapping is the amount of area covered by a map. Compare the difference (if any) in scale between your maps.

Detail. Detail is the level of attention to the small elements in the landscape. Compare the differences in the level of detail between your maps.

Annotations. Annotations or labels are text added to a map to describe specific elements of the map. For this response, compare your map with your partner's map and briefly describe the differences in the way you annotated your maps.

Accuracy. Accuracy is how well a map reflects the reality of the characteristics on the ground.
This reality can be two things:
- The physical reality, if you were measuring distances with physical measuring devices like rulers or surveying equipment
- The lived reality of the way that the cartographer perceives their environment as they live in it

For this response, compare your map with your partner's map and briefly describe how accurately your partner's map seems to reflect the actual physical distances on the ground.

Aesthetics. Aesthetics is about beauty. Maps are evaluated on both their usefulness and their aesthetic appeal, and the balance between those two characteristics depends on both the intention and the philosophy of the cartographer. For this response, briefly compare the artistic differences between your two maps.

Values. The choices of what to map, how to map it, and what to omit on a map reflect the values and personality of the cartographer. For this response, briefly describe what you feel your partner's map says about their values and their personality.
Photons are interesting little things. Modern physics teaches us that photons have zero mass and can therefore travel at the speed of light. It's impossible for anything with mass to travel through spacetime at the speed of light. Pure energy. No mass. Even so, pure energy is nothing to be trifled with. I've talked about radiation pressure before; radiation pressure, quite simply, is light pushing objects around. Radiation pressure from massive stars drives powerful stellar winds. If a star is too massive, radiation pressure can tear it apart. In fact, this is what happens in Wolf-Rayet stars. At the same time, the humble photon has a surprising number of ways it can affect objects with mass. The simplest is that photons carry away energy, which can lead to a reduction in mass. Indeed, this is one way in which stars lose mass — through photons. In the same way, absorbed photons increase the energy of a system, increasing its mass. The process of gravitational lensing also tells us that photons are affected by gravity. More accurately, the spacetime they move through is affected by gravity. A gravitational lens is caused by a strong gravitational field warping the path of light that travels through it. Photons are so affected by mass that black holes are (theoretically, at least) surrounded by a 'photon sphere' — a shell of photons actually captured into an orbit around the black hole. The frequency of a photon is lowered by moving to a higher gravitational potential, and this is responsible for the Integrated Sachs-Wolfe effect. The ISW effect is a nice little cosmological description of how EM radiation is affected by gravitational fields. Photons are accelerated towards a gravitational well. The resulting acceleration acts like a catapult, slinging them away from the other side of the gravity field with higher energy than they had initially. It's the ISW effect that explains many of those fluctuations in the cosmic microwave background.
Photons can also affect the momentum of objects they interact with. The Poynting-Robertson effect, for instance, causes dust grains orbiting stars to slowly spiral inwards. The Yarkovsky effect, in turn, has an impact on the orbital paths of asteroids, due to both visible photons being absorbed by and infrared photons being emitted from the asteroid. Photons even exert a gravitational attraction on other objects, which is strange for an object that supposedly has no mass. According to general relativity, photons contribute to the stress-energy tensor and hence they have their own gravity. An infinitesimally small quantum of gravity, but gravity nonetheless. And there are a lot of photons in the universe. Relativity famously says that E = mc². Or in natural units, c = 1 and therefore E = m. Interesting, wouldn't you say? Natural units also set h = 1, so the "mass" of a photon would be directly equal to its frequency. In other words, an x-ray photon is more massive than an infrared one. So, despite what we're taught, photons apparently do have mass, are affected by gravity, and exert gravitational attraction on other objects with mass. Or at least they seem to. Kinda makes you wonder if we really need dark matter, don't you think?
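To make the natural-units argument concrete in SI units: combining E = hf with E = mc² gives an energy-equivalent mass of m = hf/c². A quick sketch (the two frequencies are round illustrative values of my own choosing, not from the text):

```python
# Sketch of a photon's "effective mass" via E = h*f and E = m*c^2,
# so m = h*f / c^2.
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s

def photon_effective_mass(frequency_hz):
    """Energy-equivalent mass (kg) of a photon at the given frequency."""
    return H * frequency_hz / C**2

ir = photon_effective_mass(3e13)    # an infrared photon, ~30 THz
xray = photon_effective_mass(3e18)  # an x-ray photon, ~3 EHz
ratio = xray / ir                   # x-ray carries 100,000x the mass-equivalent
```

Since m is linear in f, the ratio of "masses" is just the ratio of frequencies, which is the blog's point about x-ray versus infrared photons.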
Teaching Kids About Money

Working with your children, teaching them the importance of savings and credit, and encouraging them to practice good money management habits are among the most valuable lessons you can provide as a parent. Early on, we must start teaching our children about the importance of using money wisely. We can teach them how to avoid mistakes such as overspending or misusing credit, and help them become wise consumers who are financially secure.
- Teach them to save money by having them contribute towards purchasing an item.
- Encourage them to earn money, for example by having a paper route, doing odd jobs for relatives or family, or working summer jobs.
- When you are grocery shopping, have them think about comparing the costs of items.
- Read or talk with them about money topics such as credit or savings.
- To encourage banking, open custodial accounts, then have them make deposits and balance their checkbooks.
- Teach them about the misuse of credit cards by helping them understand interest rates and fees.
- Encourage them to earn money through after-school and summer jobs.
- Talk with them about investment concepts such as saving for a car or college.
- Encourage them to participate in a financial education course through a nonprofit organization or bank to learn personal finance concepts such as budgeting and credit.
- Describe how credit card debts, opened while in college, may limit financial options after college.
- Have them complete a personal finance budget, identify future financial goals, and determine how they will manage their monthly living expenses.

Getting A Head $tart Program
To help children learn about money management, Trinity works with local schools and community organizations to promote youth financial education. This program teaches basic financial concepts, such as budgeting, saving, and the proper use of credit. The course introduces children to wise consumer habits and the importance of financial planning.
Request a copy of Getting A Head $tart by calling (800) 364-1086, or download a copy at trinitycredit.org.
If the thought of eating genetically modified food makes you cringe because it seems unnatural, think again. Bacteria modified the genes of plants on their own long before humans figured out how to do it, and we're still enjoying the fruits (and vegetables) of their work today. It turns out that the sweet potato, the beloved orange root vegetable frequently eaten on Thanksgiving, has been harboring DNA inserted into its genome by bacteria since long before humans started growing it for food around 8,000 years ago. A team of scientists analysed the genomes of hundreds of varieties of domestic sweet potatoes and found they had bits of DNA from a microbe commonly used in plant genetic engineering. The scientists, led by Jan Kreuze of the International Potato Center in Peru, published their results in the May 5 edition of the Proceedings of the National Academy of Sciences. One stretch of DNA — the code of which matched a species of bacteria called Agrobacterium — was present in all 291 cultivated sweet potato varieties studied, but not in closely related wild plants. That makes the scientists think the bit of DNA contains a gene that gave the sweet potato a trait humans found desirable and selected for when domesticating the plant. If that's the case, the sweet potato's genetic modification may very well be the reason why we can eat (and love) it. Unlike true potatoes, which are tubers that come from the plant's stem, the sweet potato is actually the root of the plant. These woody and fibrous bits of the plant probably needed some genetic TLC to become an edible and delicious Thanksgiving staple. "We think the bacteria genes help the plant produce two hormones that change the root and make it something edible," virologist Jan Kreuze told NPR's Goats and Soda blog.
“We need to prove that, but right now, we can’t find any sweet potatoes without these genes.” Whatever the reason the bacteria genes stuck around, the way they got there in the first place is not too mysterious to plant geneticists. “I don’t think that’s all that surprising,” Greg Jaffe, the GMO expert at the Center for Science in the Public Interest in Washington, told Goats and Soda. “Anyone who’s familiar with genetic engineering wouldn’t be surprised that the [bacteria] Agrobacterium inserted some DNA into some crops.” That’s because Agrobacterium is a bacterium that infects plants sort of like a virus does. The microbe inserts a bit of its DNA into the genome of a plant, a process known as horizontal gene transfer, the scientists explain in PNAS. The DNA insertion from Agrobacterium causes a plant’s roots to grow like crazy or grow into tumours, a condition called crown gall disease. The scientists think an ancient horizontal gene transfer happened between Agrobacterium and the plant that all of today’s sweet potatoes are descended from. So even though we haven’t known it until now, we’ve been eating genetically modified foods for thousands of years, and it hasn’t killed us yet.
CPSC100: Practical Computer Fluency (F'05)
Week 2 Lecture Notes: Computer Fundamentals and Operating Systems
- Despite the variety of computer types, models, and technologies, all are made up of four primary components: input devices or connections, a central processing unit or processor, storage devices, and output devices or connections. The following diagram illustrates the relationships between these components. Data flows in the directions of the arrows. Not all computers are connected to a network as illustrated.
- If you were to compare this to the typical human, the input would be the 5 senses, the output would mostly be muscle contractions, the storage is a person's memory, and the CPU is the brain. The "program" that runs on the brain CPU might be called your mind, your personality, or even your soul.
- Input devices provide data from the outside world to the processor: keyboards, mice, joysticks, microphones, digital cameras, scanners, and so forth, including various sensors for embedded computers that measure temperature, speed, throttle position, or whatever.
- Output devices allow the processor to communicate the results of its work to the outside world: monitors, printers, speakers, LED and LCD displays, and all sorts of different servos and actuators for embedded computers that change valve positions, angle rudders, pump the brakes--the list is endless.
- Storage devices are used by the processor to temporarily or permanently store data so that it can be retrieved at a later time. "Volatile" storage loses its contents when the computer power is turned off (the human analogy would be your own memories). The most common example of this type of storage is Random Access Memory, or RAM, or just "memory".
"Persistent", or "non-volatile", storage does not lose its contents when the power is turned off: hard disks, floppy disks, CD-ROMs, DVDs, and so forth (the human analogy would be books, cave drawings, and oral histories passed from generation to generation).
- The Central Processing Unit, or just "processor" for short, is the engine or brain of a computer. The processor executes a series of instructions that gather data from the input devices, occasionally store intermediate results using the storage devices, and then produce final results suitable for the output devices.
- The diagram above also shows a network of some sort (perhaps the Internet, or maybe a cell phone network). The computer's interface with a network involves both input and output devices, although these may be physically incorporated into a single piece of hardware (e.g. a network interface card, or NIC). This is because the computer both sends messages out to a destination computer somewhere on the network, and also listens for messages that arrive as input to the computer.
- All of the arrows on the diagram are typically realized by cables of some sort, or they may be wireless (radio or infrared) signals, or they may be "wires" built in to the computer's main circuit board (sets of these wires, or "traces", are often called "busses"). There are many, many types of these cables, each with its own type of connector: coaxial, serial, parallel, USB, FireWire, ribbon, twisted-pair, Ethernet, and so on.

The Central Processing Unit
- The Central Processing Unit (CPU) is the brains of the computer. It's only about the size of a fingernail, but is encased in a large plastic enclosure with many electrical pin connectors sticking out. Recent CPUs also have large heat sinks and sometimes even little fans to keep them cool.
- CPUs are purely digital devices. This means that they only understand discrete numerical values.
In fact, the only numerical values a CPU understands are zero and one: a numbering system called "binary". Ultimately, everything we do with a computer must be translated into those binary values.
- The instructions that CPUs understand are very simple: fetch a value from memory, perform simple calculations, store a value in memory, and compare two values and jump to another instruction depending upon the outcome. Other than these jumps, the CPU simply executes each instruction in turn, one by one.
- The CPU is hooked to a crystal clock that ticks very quickly. Each time the clock ticks, the CPU performs one of these very simple operations (often it takes multiple ticks to perform one operation, but never less than one).
- These ticks are produced at a certain frequency. Apple ][+ era computers (the MOS Technology 6502 CPU) had a frequency of about 1MHz (a million ticks per second). If you look very closely at the internal video display used by the Terminator in the first movie, it appears that the Cyberdyne Systems Model 101 is built using a MOS Technology 6502 CPU.
- The most recent CPU chips from Intel run at a frequency of over 2GHz (more than 1,000 times that of the first Apples). But frequency alone shouldn't be used as a criterion for comparing the speed of CPUs, particularly if the chips are from different product lines or from different manufacturers.
- In 1965, Gordon Moore (who co-founded Intel in '68) observed that the number of transistors on computer chips appeared to be doubling every 18 months. The trend has continued to this very day. It also loosely translates to a doubling of CPU speed every 18 months.
- Software is that part of the computer that you cannot kick.
- You may think of software as the recipe that a CPU must follow in order to perform a task.
- Computers are very literal: they will only do exactly what you tell them to do--even if you really meant to tell them to do something else (this is what we call a bug).
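The tiny instruction set described above — fetch a value from memory, do a simple calculation, store a value, and compare-and-jump — can be captured in a toy interpreter. This is my own illustration, not a real machine language: a real CPU encodes these instructions as binary numbers rather than named tuples.

```python
# A toy fetch-decode-execute loop. Each instruction is (opcode, operand).
def run(program, memory):
    acc = 0   # accumulator register
    pc = 0    # program counter
    while pc < len(program):
        op, arg = program[pc]          # fetch the next instruction
        pc += 1
        if op == "LOAD":               # fetch a value from memory
            acc = memory[arg]
        elif op == "ADD":              # perform a simple calculation
            acc += memory[arg]
        elif op == "STORE":            # store a value in memory
            memory[arg] = acc
        elif op == "JNZ":              # compare and jump (if acc != 0)
            if acc != 0:
                pc = arg
    return memory

# Sum memory[0] and memory[1] into memory[2]:
mem = run([("LOAD", 0), ("ADD", 1), ("STORE", 2)], [2, 3, 0])
```

Note that "other than these jumps, the CPU simply executes each instruction in turn" is exactly the `pc += 1` default here; `JNZ` is the only way the flow deviates.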
- Computers are also deterministic: given the same input, they will always produce the same output. This is a good thing, because otherwise it would be impossible to program computers to do the same thing twice (although quantum theory heralds the day when computers don't behave deterministically).
- A program is just a list of instructions to the CPU that, when executed, turn into what we think of as software. A program implements an algorithm (a mathematical formula or set of instructions) in the "machine language" of the CPU.

The Boot Process
- Before any computer can perform useful work, it must first initialize itself using a process known as bootstrapping, or simply the boot process. When power is first applied to a computer, it is in a random state: all of the volatile storage devices are filled with random data. To move from this chaotic state to an ordered state involves a bit of effort (entropy must be reversed). Most PCs follow a similar set of steps to bootstrap themselves, as outlined in the following points. Compare this to your own experience of first waking in a hotel room while on vacation: you need to do some work to remember where you are and why, get up, get dressed, shower, and so forth. You can't really perform useful work until all of that has been done.
- A special chip in the computer holds what is called the BIOS (Basic Input/Output System), essentially just the instructions needed by the PC to get things started. The first thing the BIOS does is execute the POST (Power On Self Test), which checks the video card (so that any errors can be displayed to the user--if the video card doesn't work, then the BIOS beeps a few times to indicate the problem).
- The BIOS then checks that the other major devices are connected and functional: keyboard, mouse, internal busses (circuit board "wires" connecting critical components).
- The next step is to verify that the RAM (volatile storage) in the computer works.
The BIOS writes to each location in memory and then reads the same data back to make sure that the memory chips are functioning correctly. This step can take a few seconds, depending upon how much memory is installed.
- The BIOS also checks the main storage devices connected to the system, including floppy drive(s), CD-ROM drive(s), and hard drive(s).
- The BIOS then starts looking for an operating system to load. It usually checks the floppy first, then the CD-ROM, and finally the hard drives. As soon as it finds something promising, it loads the first "chunk" of data from the storage device into memory and tells the processor to execute the instructions found in that chunk. The BIOS's work is now done.
- The first chunk begins to execute. It should contain just enough instructions to load the next chunk stored on the disk into memory and execute those instructions. There may be many such steps, but the end result is to load the operating system into memory and start it running.
- The operating system (OS) first initializes itself, and then loads any special device drivers (little programs that allow the OS to communicate with specific types of hardware). On Windows, little dots or a progress bar are displayed while this is going on.
- Once the drivers are loaded, the OS runs any programs that are supposed to run every time the computer is booted up (e.g. virus scans).
- The operating system then runs a program that allows a user to log on to the system with a username and password. Some operating systems skip this step entirely if they're not meant to be used by more than one person.
- The operating system then runs a program called a "shell" and loads any of your user preferences (colours and so forth). This is the program that displays the graphical desktop in Windows or Macintosh (and others), or a text-based prompt in other operating systems. This shell program runs the entire time you use the computer and allows you to interact with other programs (e.g.
word processors, web browsers, calculators).
- When sold as shrink-wrapped products, we know operating systems by such names as Microsoft Windows, Apple Macintosh, UNIX (and its variants Linux, Solaris, AIX, HP-UX, etc.), OS/2, and even ones you may never have heard of: OS/390, OS/400, Be, CP/M, VMS, TOPS, MVS, ITS, etc.
- But those products include a whole lot more than just the operating system program: web browsers, disk utilities, graphical user interface shells, file managers, calendars, e-mail programs, and so forth.
- An operating system (OS) is a program that we never interact with directly. The operating system has two main responsibilities:
  - Manage hardware and software resources so that programs may use these resources without conflict.
  - Provide a consistent and hardware-independent programming interface for software development.
- The first of these responsibilities is most important for operating systems that can run more than one program at a time (almost any OS with which you might be familiar). These programs are completely unaware that other programs are also running. Suppose that two programs that are currently running each wish to display something on the screen, save a file to the hard disk, and play a sound on the speakers. If they both tried to do so simultaneously, they would likely overwrite each other's efforts: producing a garbled screen display, a corrupted file containing interspersed bits written by both programs, and some random noise from the speakers. The operating system's job in such a scenario is to isolate each program from the hardware devices by playing a "traffic cop" role. The programs are still allowed to use the hardware, but they always do so through the intermediating OS, which prevents the programs from interfering with each other (perhaps by isolating the two programs' display into separate windows, by placing the saved files in two different locations, and by letting only one program at a time use the speakers).
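The "traffic cop" role can be sketched with a lock as the intermediary. This is an illustrative analogy only — a real OS mediates access in the kernel, not with a user-level lock — but it shows how serializing access prevents two programs' output from interleaving:

```python
# Two "programs" (threads) share one output device (the `played` list).
# The lock plays traffic cop: only one may use the device at a time.
import threading

speaker_lock = threading.Lock()
played = []

def play_sound(name, notes):
    with speaker_lock:                # request exclusive access
        for n in notes:
            played.append((name, n))  # uninterrupted: no interleaving

threads = [
    threading.Thread(target=play_sound, args=("prog_a", [1, 2, 3])),
    threading.Thread(target=play_sound, args=("prog_b", [4, 5, 6])),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Each program's notes come out contiguously, never garbled together.
```

Without the lock, the two loops could interleave arbitrarily, which is the "random noise from the speakers" scenario described above.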
The operating system does this by essentially restricting each running program to its own "virtual computer". If a program ever tries to break out of this virtual environment (usually because of a programming defect, or "bug"), the operating system can detect this condition and halt the offending program. You may get an error message (perhaps a General Protection Fault), and you will lose any unsaved data in that application, but the OS should be able to restrict the damage to only that virtual computer; all of the other programs should continue running normally.
- Because the OS must be an intermediary between programs and the computer hardware, this leads directly to the second responsibility: the OS hides the specifics of any particular piece of hardware, providing only a standard way of accessing hardware of that nature. This means that application programs (word processors, spreadsheets, games, etc.) do not have to be written for specific hardware, only for specific operating systems. In the days of yore, this was not the case, and WordPerfect (the champion word processor of its day) had to include hundreds of printer definition files for every conceivable printer make and model on the market. Today, the operating system loads just one such device driver for a particular printer (or any other device) and all programs simply ask the operating system to print something, none the wiser about the model of printer connected to the computer.
- How does a single computer run many programs simultaneously? The operating system manages this by letting each program in turn use the processor for a short period of time (a few milliseconds or less). The switch is so fast that we don't notice that only one program is really running at any one time.
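The time-slicing idea can be sketched with generators standing in for programs — an illustrative toy, not a real scheduler. Each `yield` marks the end of one time slice, after which the "OS" preempts the program and moves it to the back of the queue:

```python
# A minimal round-robin scheduler sketch.
from collections import deque

def scheduler(programs):
    ready = deque(programs)
    order = []
    while ready:
        prog = ready.popleft()        # give the CPU to the next program
        try:
            order.append(next(prog))  # run one time slice
            ready.append(prog)        # preempt: back of the queue
        except StopIteration:
            pass                      # program finished; drop it
    return order

def task(name, slices):
    for i in range(slices):
        yield f"{name}:{i}"

order = scheduler([task("A", 2), task("B", 2)])
# Slices interleave: A runs, then B, then A again, then B again.
```

The interleaving is exactly the "each program in turn" behavior: no program runs twice in a row while another is ready.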
You may not choose the Fast Fourier Transform for Lab 7. The Fast Fourier Transform is an important algorithm in digital signal processing. It provides a fast way to convert between the time-domain and frequency-domain representations of a sequence of sound samples (or other types of data). The time domain represents sound as a series of values changing over time (like the graph in the digital audio handout). The frequency domain represents sound as the sum of sine and cosine waves of different amplitudes, frequencies, and phases. For example, the graph in the digital audio handout could be represented by giving a single sine wave with amplitude 1000 and frequency 440 Hz. To learn more, see The Scientist and Engineer's Guide to Digital Signal Processing (or take EECS 216 and EECS 451). For your educational toy, the FFT can help you find or modify the frequency components of sound segments. It is possible to compute an FFT in software on the E100. However, the E100 cannot run the FFT algorithm fast enough to keep up with the audio sampling rate, so we provide a co-processor that computes an FFT in hardware. This section describes how to use the FFT co-processor.

To use the FFT co-processor, an E100 program sends a sequence of up to 1024 data points to the FFT co-processor, which then computes the FFT of that sequence. Later, the E100 program reads the result of the FFT, which is a sequence of 1024 data points. Each data point in a sequence consists of a real and an imaginary component. Each component is limited to the range [-2^15, 2^15 - 1]. For time-domain data, the real component represents the magnitude of a sound sample, and the imaginary component is not used. For frequency-domain data, the real and imaginary components represent a frequency component in the complex plane. The magnitude of this frequency component is square_root(real*real + imaginary*imaginary). The phase of this frequency component is arctangent(imaginary/real).
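The magnitude and phase formulas above can be written directly. One caveat worth noting (my own sketch, not part of the E100 toolchain): a plain arctangent of imaginary/real loses the quadrant when the real component is negative, so the two-argument arctangent is the safer choice in practice:

```python
# Convert a frequency-domain point (real, imaginary) to magnitude and phase.
import math

def magnitude(re, im):
    return math.sqrt(re * re + im * im)

def phase(re, im):
    # atan2 preserves the quadrant, unlike atan(im / re)
    # (which also divides by zero when re == 0).
    return math.atan2(im, re)

m = magnitude(3, 4)   # the classic 3-4-5 triangle: magnitude 5
p = phase(0, 1)       # purely imaginary component: phase pi/2
```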
When sending or receiving time-domain data, data points are given in order of increasing time. When sending or receiving frequency-domain data, data points are given in increasing order of frequency. Point #0 is for frequency 0 Hz, and each succeeding point is for a frequency that is 7.8125 Hz more than the last point, up until point #512 (the 513th point), which is for frequency 4000 Hz (the Nyquist frequency for our audio rate). After point #512, the frequencies go down by 7.8125 Hz between points; each of these points is the complex conjugate of the corresponding point in the first 512 samples. [This description assumes that sequences represent samples taken at the standard audio rate of 8 kHz. More generally, the frequency gap between points is the sampling frequency divided by the number of samples.]

fft_send_command and fft_send_response implement the standard I/O protocol. The command parameters are fft_send_real, fft_send_imaginary, fft_send_inverse, and fft_send_end. There are no response parameters. fft_send_real and fft_send_imaginary specify the real and imaginary components of a data point. Remember that the range of these components is [-2^15, 2^15 - 1] (note that this differs from the [-2^31, 2^31 - 1] range of the speaker and microphone). fft_send_inverse specifies the direction of the transform and is used only on the first data point in a sequence (it is ignored on other points in the sequence). fft_send_inverse=0 specifies a forward transform (time domain -> frequency domain); fft_send_inverse=1 specifies an inverse transform (frequency domain -> time domain). fft_send_end specifies whether this data point is the last point in the sequence. If the E100 program ends a sequence before sending 1024 points (by setting fft_send_end=1), the FFT co-processor will assume the remaining data points are (0, 0). The FFT co-processor will automatically end the sequence after the E100 sends 1024 points.
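The two bin-layout facts above — the frequency gap between points is fs/N, and for real time-domain input point N-k is the complex conjugate of point k — can be sanity-checked with a tiny direct DFT. This is my own sketch using N = 8 instead of 1024 so the arithmetic is easy to follow; with fs = 8000 and N = 1024 the same formula gives the 7.8125 Hz spacing quoted above:

```python
# Check bin spacing and conjugate symmetry of the DFT for real input.
import cmath

def dft(samples):
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

fs, n = 8000, 8
spacing = fs / n                                  # Hz between points
bins = [k * spacing for k in range(n // 2 + 1)]   # 0 Hz .. Nyquist

x = [1.0, 2.0, 0.5, -1.0, 0.0, 3.0, -2.0, 1.5]    # real time-domain samples
X = dft(x)
# For k = 1..n-1, X[n - k] equals the complex conjugate of X[k]
# (up to floating-point rounding), mirroring the co-processor's layout.
```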
After sending a sequence of data points to be transformed, the E100 program can read the results (another sequence of data points) from the FFT controller. fft_receive_command and fft_receive_response implement the standard I/O protocol. There are no command parameters. The response parameters are fft_receive_real and fft_receive_imaginary, which specify the real and imaginary components of a data point. If the E100 program starts sending a new sequence of points to the FFT controller, the controller will flush all remaining points in the result sequence. ase100 simulates the FFT co-processor accurately enough for you to test your device driver and to run assembly-language programs. The exact results returned by ase100's simulated FFT co-processor may differ slightly from those returned by the DE2-115's FFT co-processor. The FFT co-processor on the DE2-115 uses a time-limited module that is intended for lab use (not for final deployment). If the DE2-115 board is disconnected from USB-Blaster, the FFT co-processor will only work for 1 hour. If the DE2-115 board is connected to a computer via USB-Blaster, the FFT co-processor will work indefinitely.
In this tutorial you will learn about different layouts in Android. Android layouts are used to define the visual structure of a user interface. The UI components like labels, buttons, textboxes, etc. are defined inside a layout, so before designing the UI for an Android application you must know about the different layouts available in Android.

There are two ways to design a UI in Android:
- Using an XML file
- Using Java code at run time

Designing a layout using an XML file is better because the presentation of the app is kept separate from the code that controls its behavior. This makes debugging and alteration of the UI easier. The layout XML files are placed inside the res/layout folder.

Types of Layouts in Android
The various layouts that are available in Android are given below. Here I am giving a brief introduction of each layout; I will explain them in detail with examples in upcoming tutorials.
- LinearLayout: as its name indicates, it is used to arrange its children in a linear manner, either vertically or horizontally.
- RelativeLayout: this layout is used to place child views relative to each other. We can specify the position of each view or layout relative to a sibling or the parent.
- TableLayout: this layout is used to arrange the child views into rows and columns.
- AbsoluteLayout: this layout is used to specify the exact locations of its children in x and y coordinates.
- FrameLayout: this layout is designed to block out an area on the screen to display a single view.
- TabHost: it provides a horizontal layout to display tabs.
- ListView: it is used to display a list of vertically scrollable items.
- GridView: it is used to display items in a two-dimensional scrolling grid.

If you found any mistake or have doubts regarding the above layouts in Android tutorial then feel free to mention it by commenting below.
If global warming continues to gain momentum, lizards may be among the first living creatures to disappear from the face of the Earth. According to American scientists from Clemson University, cold-blooded reptiles will not be able to adapt to climate change.

According to the scientists, the hotter it becomes in the habitats of lizards, the harder it is for these animals to maintain an optimal body temperature. When the reptiles overheat, they need to look for shade, and the more often this happens, the more time and effort the lizards have to spend on the search. The specialists conducted a series of experiments with the animals and also built a computer model to understand how climate change will affect the lizards. In the study, the specialists found that lizards are most comfortable in territories covered with many small patches of shade (e.g. from grass), rather than where there is large, solid shading (e.g. from trees). According to the scientists, climate change will alter vegetation and its distribution in such a way that the shade suitable for lizards will become scarcer, while the demand for it will only increase. Ultimately, this could lead to the disappearance of lizards, at least those that today inhabit hot regions.

A few years ago, a number of studies suggested that by the 2080s, due to global warming, 40 percent of lizard populations would disappear and 20 percent of existing species would cease to exist entirely. However, according to the authors of the new study, their predecessors assumed that finding shade is not difficult for a lizard and did not take into account that this requires time and effort, so those results were too "optimistic".
A gradient is the outcome of blending multiple colors or channels gradually together. A rainbow is an example of a natural gradient. The most common types of gradients are linear, radial, angle, reflected and diamond. Web 2.0 gradients "Web 2.0" gradients are usually described as either: - Sharp gradients, i.e. gradients that have a color stop between two colors without any blending (see e.g. the angle gradient at 12 o'clock under "Common types"). ⇒ Used to make the object look glossy. - Subtle gradients, where colors either blend very slowly or where the blended colors are close together. ⇒ Used to give the object texture. Some examples of gradients that could be described as "web 2.0 gradients": Using transparency instead of a color gives the image/object a fading effect: (Image by @Johannes)
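As a sketch of how these effects translate to the web, the CSS fragments below show a subtle gradient and a fade to transparency (the class names and color values are arbitrary examples):

```css
/* Subtle "web 2.0" gradient: two nearby colors blended top to bottom */
.button {
  background: linear-gradient(#fdfdfd, #eaeaea);
}

/* Fading effect: blend a color into full transparency instead of a second color */
.fade {
  background: linear-gradient(to right, #3b82f6, transparent);
}
```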
The Reading Teacher This teaching tip outlines a comprehension strategy designed to support early primary students in reading to learn while learning to read. The strategy grew out of our classroom practices and is designed to support young children in reading and understanding informational texts by facilitating close interactions between text and reader. Through the steps Read, Stop, Think, Ask, Connect, the strategy supports beginning readers in recognizing and responding to the challenges that informational texts hold for reading and comprehending. The strategy is designed to be used flexibly to account for the diversity of readers and of texts within early primary classrooms, and encourages educators to consider students' prior learning, text selection, and multimodal supports when connecting beginning readers with informational texts.
Saw-scaled viper, (genus Echis), any of eight species of small venomous snakes (family Viperidae) that inhabit arid regions and dry savannas north of the Equator across Africa, Arabia, and southwestern Asia to India and Sri Lanka. They are characterized by a stout body with a pear-shaped head that is distinct from the neck, vertically elliptical pupils, rough and strongly keeled scales, and a short thin tail. On both sides of the body are several rows of obliquely arranged serrated scales. Adults range in length from 0.3 to 0.9 metre (1 to 3 feet). Echis coloration includes various shades of brown, gray, or orange with darker dorsal blotches and lateral spots. Saw-scaled vipers move by sidewinding locomotion (see sidewinders). They are nocturnal, coming out at twilight to hunt for food, which includes mammals, birds, snakes, lizards, amphibians, and invertebrates such as scorpions and centipedes. Egg-laying species, producing up to 23 eggs per female, reside in northern Africa, whereas live-bearing species, such as E. carinatus, inhabit the Middle East and southern Asia. Saw-scaled vipers are small, but their irritability, aggressive nature, and lethal venom make them very dangerous. When alarmed, saw-scaled vipers will move slowly with the body looped into S-shaped folds. The oblique scales are rubbed against each other to produce a hissing sound, which is a defensive alarm used to warn potential predators. These snakes are, however, quick to strike, and mortality rates for those bitten are high. In the regions where they occur, it is believed that saw-scaled vipers are responsible for more human deaths than all other snake species combined.
Immersive, Virtual Reality Study of Nations Engages 6th Graders Teams of Collegiate School 6th Grade students in Mike Ferry's social studies class are answering the question, "How would you help a country with a low gross domestic product per capita improve its economy and standard of living?" Students select a country to research and then leverage DollarStreet, the CIA World Factbook and Google Earth Virtual Reality to explore and understand each country's people, culture, natural resources, geography and climate. While immersed in Google Earth Virtual Reality, students experience digital replicas of these countries and fly over or walk down streets, meander through national parks and mountain ranges or investigate airports and city neighborhoods, thus creating a deeper understanding of the infrastructure and geography of these locations. The study enables students to engage with a wider world from the confines of a classroom. The exercise simultaneously boosts creativity, critical thinking, collaboration and research skills. At the conclusion of their studies, the students submit their recommendations for each country, providing ideas for how the native populations of the low-GDP countries can improve their GDP.
Antarctic waters hover just above freezing, but that doesn't stop fish from prospering in the chilly depths. The resilient fish — known as Antarctic notothenioids — keep from freezing solid thanks to a special "antifreeze protein" that prevents their bodily fluids from turning into crystals. In a new study, published in the June 19 online edition of the Proceedings of the National Academy of Sciences, scientists have revealed the source of the fishes' key to survival. Though the antifreeze glycoprotein, called AFGP for short, was first documented 35 years ago, scientists didn't know how or where the fish produced the special molecules. For many years researchers believed production occurred in the liver, in part because the organ is a well-known factory for blood proteins. But subsequent studies had trouble tracking AFGP back to the liver. The new study analyzed tissue from notothenioids and found the pancreas and stomach are the main sources of fish antifreeze. "It turns out that the liver has no role in the freezing avoidance in these fishes at all," said study co-author Christina Cheng of the University of Illinois at Urbana-Champaign. In the Southern Ocean, sea water temperatures rarely rise above about 28.5 degrees Fahrenheit, the freezing point of seawater, but fish fluids freeze at about 30.2 degrees Fahrenheit. The water is often filled with tiny ice crystals, which the fish ingest as they eat and which could potentially freeze the critter from the inside out. By secreting AFGP into the intestines, where the protein is then absorbed into the blood, the fish prevent their internal fluids from icing up. The evolution of this ability, the researchers write, was probably driven by the need to prevent the intestinal fluid from freezing, but has since allowed notothenioids to survive where many animals dare not swim. © 2012 LiveScience.com. All rights reserved.
Here's a SIMPLE drawing lesson strategy that will turn you into a profound artist overnight! Select and draw simple basic shapes to begin with, then move on to more complex shapes later, after you have learnt how to capture the basic shapes of objects. Oh, you say you can't even draw? That's rubbish! Drawing isn't hard. Anyone can draw and paint if they stop and observe things more carefully before trying to copy what they have decided to draw or paint. Okay then. How? - Do you remember when you learnt to write your ABC in grade one? How long did it take you to write your name or a sentence? - Do you remember how long it took to write a simple sum at school and add it up? Really, it didn't take long to learn the basics, did it? But perfecting your writing skills took a little bit longer, didn't it? So it is with art. Becoming a good artist means spending enough time practicing your newly acquired skill. So what are the basic drawing skills, then? First, recognizing basic shapes around you: - Look more closely: cars and bicycles have round wheels. - Houses and buildings are made up of square, rectangle and triangle shapes. - Fir trees and ice-cream cones have cone shapes. - Drinking glasses have upside-down cone shapes, with oval ellipse bases and top openings. - And the body parts of people basically consist of oval, round and triangle/wedge shapes. Drawing simple shapes gives you confidence! The next stage is to link the "dots". Have you ever filled in those exercises in the children's section of magazines, where you need to draw a line (with a pencil) from one number to another until an object is recognizable? Well, that's how you draw objects. Simplifying your drawings: - Your object may look somewhat complex at first, but once you have observed its basic outline and the simple shapes within it, it doesn't look so complex after all. - Start drawing your object with those simple basic shapes and leave out the detail.
When doing this for the first time, try drawing only bold objects at first, like balls, apples and fir trees. - Don't hold your pencil tightly and be finicky in the effort to perfect or neaten your lines. Lightly draw those shapes softly and loosely. Don't put pressure on your pencil. - Let your pencil flow 'lazily' around and over the basic shapes as you draw, joining and linking the shapes until the object's outline is recognizable. - Don't worry about defining details yet. Reiterated lines are okay for the time being. The reiterated lines allow you later to select which lines you really want to embody the shape. They also give the object an animated appearance. - At this point, your soft synopsis allows you to judge its possible position in the composition. What's so great about working lightly is that the light synopsis sketch can be eased out or adjusted before perfecting the shape or its proper position. - The human form is more complex. When it has been broken down to basic shapes, it may look somewhat like a robot at first. But once you have linked and rounded off the body parts, it starts to look more realistic. Drawing results and conclusion: - Being more observant is important. Judging what you look at by shape and tonal contrast helps to define what is important and what's unnecessary. - The Chinese recognized this principle of painting simple shapes many centuries ago. They also understood that the symbolic outlines of their brushstrokes said it all. - Like toilet and road icon signs, symbolic shapes are far more quickly recognized by people when they look at your paintings. That's why modern artists realize that bold shapes have more impact in their paintings. - Starting with soft, simple outlines reduces your composing time and also makes it easier to capture quickly moving objects. - It also proves that outer outlines are symbolically recognizable.
And if outlines are symbolic, that means internal details aren't so important. The internal section only needs a few details added, if really necessary, to create mood or if the object is the main point of interest. - So learning to draw with this simple 'ABC' method proves you can draw even the simplest of objects, if you really want to. One last word on drawing: being an artist doesn't happen by accident! If you practice often enough, you will become a good artist, in spite of what you think at the present moment!
Solving a system of equations can be done by any of several methods. On this page we discuss solving linear equations in two variables using the graphical method. 1) Solving a system of equations by graphing: When we solve equations in two variables, we can get three types of solutions: the lines intersect, the lines are parallel, or the lines are coincident. The graph of any linear equation is a straight line, so if we graph the two given lines we will obtain one of these kinds of solutions. In the first case, we have two distinct lines which intersect at exactly one point. This is called an independent system of equations, and the solution is always a single (x, y) point; the solution for this kind of system is unique. In the second case, we can see only one line. Actually it is the same line drawn twice, which shows that the lines intersect at every point along their whole length; in other words, the second line lies on the first line. This is called a dependent system, and the solution is the whole line, meaning there are infinitely many solutions. In the third case, we have two distinct lines which are parallel to each other. Since parallel lines never cross, there is no solution. This is called an inconsistent system of equations, which has no solution. Example. Solve the following equations: y = 10 - 2x and x + y/2 = 5. From the graph we see that both equations represent the same line, so the system of equations is a dependent system. These are examples of solving systems of equations.
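The three cases above can also be checked algebraically. A minimal sketch in plain Python (the function name classify_system is illustrative) classifies a pair of lines a1·x + b1·y = c1 and a2·x + b2·y = c2 by comparing determinants:

```python
def classify_system(a1, b1, c1, a2, b2, c2):
    """Classify the system a1*x + b1*y = c1, a2*x + b2*y = c2."""
    det = a1 * b2 - a2 * b1          # zero when the lines are parallel or identical
    if det != 0:
        # Independent system: exactly one intersection point (Cramer's rule).
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        return ("independent", (x, y))
    # Slopes match: it is the same line if one equation is a multiple of the other.
    if a1 * c2 == a2 * c1 and b1 * c2 == b2 * c1:
        return ("dependent", None)    # coincident lines: infinitely many solutions
    return ("inconsistent", None)     # parallel distinct lines: no solution

# The worked example: y = 10 - 2x  ->  2x + y = 10
#                     x + y/2 = 5  ->  x + 0.5y = 5  (the same line)
print(classify_system(2, 1, 10, 1, 0.5, 5))   # -> ('dependent', None)
```

Running the last line confirms graphically and algebraically that the example is a dependent system.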
Publications – journal articles: Emma Kerr with Jane Whittle Emma details the timeline of events on Shackleton's Endurance Expedition and considers ways in which these can be explored in the classroom. Geography is the glue! Building on a previous article, Emma models examples from an interconnected learning experience that uses Antarctica, Ernest Shackleton's 1914–17 'Endurance' Expedition and extreme Polar environments as the 'geography glue'. A series of 6 lessons based on Geographical Enquiry, linked to Shackleton's Endurance Expedition, has been written by Emma for the RGS. It is due to be published online in early 2015 and is entitled 'Exploring Shackleton's Antarctica'. 'Exploring Shackleton's Antarctica': The aim of the module is to develop an enquiry on the Polar region of Antarctica focusing on Shackleton's 1914–17 Endurance Expedition. This sequence of lesson plans will demonstrate geographically based, hands-on, cross-curricular activities such as role play to nurture pupils' fascination with and curiosity about this significant global locality, remote landscape and extreme environment. Proven case studies linked to this topic demonstrate how these lessons engage pupils in the geographical skills of developing knowledge within a context and defining the physical and human characteristics and processes of a locality. Pupils will progress with their atlas skills, interpreting a range of sources of geographical information, and be provided with opportunities to communicate their findings in a variety of ways. The lesson activities develop geographical and context-specific vocabulary and literacy through a series of re-iterative activities that expect pupils to develop and use language in a context-specific way. Moreover, opportunities for cross-curricular subject links will be suggested as a starting point to embed this topic and create a half term's or full term's worth of work.
In addition, children will be offered opportunities to write at length within this geographical context. Talks & Conferences: Geographical Association Annual Conference and Exhibition 2014, University of Surrey, Wednesday 16th April: Teaching about the Antarctic (KS1–3). 'Based on an enquiry focusing on Shackleton's 1914–17 Endurance Expedition, this session will demonstrate hands-on cross-curricular activities such as role play to nurture pupils' fascination with, and curiosity about, this remote landscape and extreme environment.'
"Success looks different to me with each student. For some students, just being able to talk about a book at all is a plus. And then other students are able to come up with questions that go beyond the book. They are able to talk about meanings, about interpretations...and that's success." -Latosha Rowley, 4th- and 5th-Grade Teacher, Indianapolis Public Schools Center for Inquiry, Indianapolis, Indiana
Identifying appropriate and useful assessment tools is a complicated task in any classroom. In envisionment-building classrooms, finding relevant means of assessment becomes even more complex. How do teachers fully assess students' understandings of literary texts or students' abilities to participate in discussions about those texts? How do they judge the richness of student thinking? Clearly, many quantifiable paper-and-pencil tools—true/false or multiple-choice tests, for example—provide inadequate representations of the intricate and nuanced web of knowledge and skills that students bring to literary discussion. Out of necessity, teachers devise other means of representing student progress and identifying directions for further instruction. Focused as much on students' developing understandings and interpretations of texts as on their understanding of any single text, teachers in envisionment-building classrooms rely heavily on ongoing means of recording student progress. Habitual note-taking focused on developments in student performance, areas of difficulty, and ideas for later discussion; checklists; anecdotal records; informal conferences; and portfolio collections of student work all contribute to building a richly refined portrait of each student's abilities as a reader of literature. By and large, the activities commonly a part of envisionment-building classrooms and instruction help students perform well on state standardized tests with only a modicum of explicit test preparation.
Additionally, in this video you will listen as the workshop teachers describe ways in which they have developed procedures that involve students and parents in their assessment processes. Appreciating the power of authentic assessment and valuing their own on-going professional development, several of these teachers reverse conventional patterns and ask students for feedback on their teaching as well. For a complete guide to the workshop session activities, download and print our support materials.
A breastplate is a device worn over the torso either to protect the torso from injury, or as an item of religious significance, or as an item of status. A breastplate was sometimes worn by mythological beings as a distinctive item of clothing. In Judaism, the Breastplate (Hoshen) is a sacred garment worn by the High Priest, woven out of multiple fabrics and set with twelve precious stones representing each of the tribes of Israel. Traditionally, it had a fold containing the Urim and Thummim. The Breastplate is described in some detail in the Book of Exodus. It is placed over the Mantle (Me'il). In modern Judaism, there is a breastplate, usually silver gilt, which is placed over the Torah Scroll when it is placed in the Aron Kodesh (ark). This breastplate is removed when the Torah is read during synagogue services. Christian tradition, particularly Roman Catholic and Anglican, uses a hymn entitled the "Breastplate of St. Patrick", or "Lorica" (ostensibly written by St. Patrick himself), which is a lyrical prayer to God for protection. It is found in an Old Irish text from the 8th century. The English translation was made by Cecil Frances Alexander and set to the melody "St. Patrick" by Charles Villiers Stanford. The morse (clasp) on a cope, particularly one worn by a bishop, is said to symbolize the breastplate worn by the High Priest. In the Roman Catholic Church, this was especially true of the morse on the mantum previously worn by the pope. The breastplate is also of significance in the Latter Day Saint movement, as one is believed to have been maintained anciently, along with other sacred artifacts, by Book of Mormon prophets (cf. Doctrine and Covenants 17:1, and Joseph Smith History 1:35, 42, 52). The hair-pipe breastplates of 19th-century Plains Indians were made from bones from the West Indian conch, brought to New York docks as ballast and then traded to Native Americans of the upper Missouri River.
Their popularity spread rapidly after their invention by the Comanche in 1854. They were too fragile and expensive to be considered armor, and were instead a symbol of wealth during the economic depression among Plains Indians after the buffalo were exterminated.
Infants and children with HIV infection or AIDS need the same things as other children -- lots of love and affection. Small children need to be held, played with, kissed, hugged, fed, and rocked to sleep. As they grow, they need to play, have friends, and go to school, just like other kids. Kids with HIV are still kids, and need to be treated like any other kids in the family. Kids with AIDS need much of the same care that grown-ups with AIDS need, but there are a few extra things to look out for. - Watch for any changes in health or the way the child acts. If you notice anything unusual for that child, let the doctor know. For a child with AIDS, little problems can become big problems very quickly. Watch for breathing problems, fever, unusual sleepiness, diarrhea, or changes in how much they eat. Talk to the child's doctor about what else to look for and when to report it. - Talk to the doctor before the child gets any immunizations (including oral polio vaccine) or booster shots. Some vaccines could make the child sick. No child with HIV or anyone in the household should ever take oral polio vaccine. - Stuffed and furry toys can hold dirt and might hide germs that can make the child sick. Plastic and washable toys are better. If the child has any stuffed toys, wash them in a washing machine often and keep them as clean as possible. - Keep the child away from litter boxes and sandboxes that a pet or other animal might have been in. - Ask the child's doctor what to do about pets that might be in the house. - Try to keep the child from getting infectious diseases, especially chickenpox. If the child with HIV infection gets near somebody with chickenpox, tell the child's doctor right away. Chickenpox can kill a child with AIDS. - Bandage any cuts or scrapes quickly and completely after washing with soap and warm water. Use gloves if the child is bleeding. Taking care of a child who is sick is very hard for people who love that child.
You will need help and emotional support. You are not alone. There are people who can help you get through this.
In higher organisms the eye is a complex optical system which collects light from the surrounding environment, regulates its intensity through a diaphragm, focuses it through an adjustable assembly of lenses to form an image, converts this image into a set of electrical signals, and transmits these signals to the brain through complex neural pathways that connect the eye via the optic nerve to the visual cortex and other areas of the brain. Eyes with resolving power have come in ten fundamentally different forms, and 96% of animal species possess a complex optical system. Image-resolving eyes are present in molluscs, chordates and arthropods. The simplest "eyes", such as those in microorganisms, do nothing but detect whether the surroundings are light or dark, which is sufficient for the entrainment of circadian rhythms. From more complex eyes, retinal photosensitive ganglion cells send signals along the retinohypothalamic tract to the suprachiasmatic nuclei to effect circadian adjustment.
Overview
Complex eyes can distinguish shapes and colours. The visual fields of many organisms, especially predators, involve large areas of binocular vision to improve depth perception. In other organisms, eyes are located so as to maximise the field of view, such as in rabbits and horses, which have monocular vision. Compound eyes are found among the arthropods and are composed of many simple facets which, depending on the details of anatomy, may give either a single pixelated image or multiple images per eye. Each sensor has its own lens and photosensitive cell(s). Possessing detailed hyperspectral colour vision, the mantis shrimp has been reported to have the world's most complex colour vision system. Trilobites, which are now extinct, had unique compound eyes. In contrast to compound eyes, simple eyes are those that have a single lens.
Tapetum lucidum
(Image: reflection of a camera flash from the tapetum lucidum; in darkness, eyeshine reveals a raccoon.) Similar adaptations occur in some species of spiders, although these are not the result of a tapetum lucidum. Most primates, including humans, lack a tapetum lucidum, and compensate for this by perceptive recognition methods.
Eyeshine
White eyeshine occurs in many fish, especially walleye; blue eyeshine occurs in many mammals such as horses; green eyeshine occurs in mammals such as cats, dogs, and raccoons; and red eyeshine occurs in coyotes, rodents, opossums and birds. (Image: a three-month-old black Labrador puppy with apparent eyeshine.) Despite it being present in some primates, the human eye has no tapetum lucidum, hence no eyeshine. (Image: odd-eyed cat with eyeshine, plus red-eye effect in one eye.)
Classification
A classification of anatomical variants of tapeta lucida defines four types.
Nictitating membrane
The nictitating membrane (from Latin nictare, to blink) is a transparent or translucent third eyelid present in some animals that can be drawn across the eye for protection and to moisten it while maintaining visibility. Some reptiles, birds, and sharks have full nictitating membranes; in many mammals, a small, vestigial portion of the membrane remains in the corner of the eye. Some mammals, such as camels, polar bears, seals and aardvarks, have full nictitating membranes. Often called a third eyelid or haw, it may be referred to in scientific terminology as the plica semilunaris, membrana nictitans or palpebra tertia. (Image: the nictitating membrane, mid-blink, of a bald eagle.) Unlike the upper and lower eyelids, the nictitating membrane moves horizontally across the eyeball.
Adaptation (eye)
In ocular physiology, adaptation is the ability of the eye to adjust to various levels of darkness and light. The eye takes approximately 20–30 minutes to fully adapt from bright sunlight to complete darkness, becoming ten thousand to one million times more sensitive than in full daylight. In this process, the eye's perception of color changes as well (this is called the Purkinje effect). However, it takes only approximately five minutes for the eye to adapt to bright sunlight from darkness. Dark adaptation is mediated first by the cones, which gain sensitivity over the first five minutes in the dark, and then by the rods, which take over after five or more minutes. Dark adaptation is far quicker and deeper in young people than in the elderly.
Visual response to darkness
A minor mechanism of adaptation is the pupillary light reflex, which adjusts the amount of light that reaches the retina. Above a certain luminance level (about 0.03 cd/m2), the cone mechanism mediates vision; this is photopic vision.
This British newspaper bears a tax stamp used in the British Isles. The 1765 Stamp Act required documents to be printed on taxed paper. An elaborate emblem that included royal symbols was required to be printed on, or attached to, documents and papers. The paper was stamped in Britain, sent to the colonies and sold by government-appointed officials; the emblem proved that the tax had been paid. The American colonists argued that only their local colonial assemblies could enact such a tax, and the Stamp Act became one of the catalysts for the American Revolution. The British Parliament passed the Stamp Act in March 1765, imposing direct taxes on the colonies for the first time. All official documents, newspapers, almanacs, pamphlets and decks of playing cards were required to carry the stamps. The colonists objected because they had no representation in Parliament. In 1765, the Sons of Liberty formed and used public demonstrations, boycotts and violence to ensure that the British tax laws were unenforceable. In Boston, the Sons of Liberty burned the records of the vice admiralty court and looted the home of the chief justice. Several colonial legislatures called for united action, and nine colonies sent delegates to the Stamp Act Congress in New York City in October 1765, which adopted a "Declaration of Rights and Grievances" stating that taxes passed without representation violated their rights as Englishmen. Colonists went further and started boycotting imports of British merchandise. Massachusetts was declared in a state of rebellion in 1775, and the British garrison was ordered to disarm the rebels and arrest their leaders. These orders led to the Battles of Lexington and Concord, which marked the start of the military campaign of the American Revolution.
Other Historical Items from the American Revolution include: - George Washington's War Tent - Inn Sign from the "General Wolfe" Tavern - British Newspaper with a Tax Stamp - "The March to Valley Forge" by William Brooke Thomas Trego - Name: British Newspaper with a Tax Stamp - Original Location: American Colonies - Made: 1766 - Material: Paper and Ink - Museum: Museum of the American Revolution "Be courteous to all, but intimate with few, and let those few be well tried before you give them your confidence." Photo Credit: 1) By GordonMakryllos (Own work) [CC BY-SA 4.0 (http://creativecommons.org/licenses/by-sa/4.0)], via Wikimedia Commons
Scientists at CSIRO and RMIT University have produced a new two-dimensional material that could revolutionise the electronics market, making "nano" more than just a marketing term. The material – made up of layers of crystal known as molybdenum oxides – has unique properties that encourage the free flow of electrons at ultra-high speeds. In a paper published in the January issue of materials science journal Advanced Materials, the researchers explain how they adapted a revolutionary material known as graphene to create a new conductive nano-material. Graphene was first isolated in 2004 by scientists in the UK and won its inventors a Nobel Prize in 2010. While graphene supports high-speed electrons, its physical properties prevent it from being used for high-speed electronics. The CSIRO's Dr Serge Zhuiykov said the new nano-material was made up of layered sheets – similar to the graphite layers that make up a pencil's core. "Within these layers, electrons are able to zip through at high speeds with minimal scattering," Dr Zhuiykov said. "The importance of our breakthrough is how quickly and fluently electrons – which conduct electricity – are able to flow through the new material." RMIT's Professor Kourosh Kalantar-zadeh said the researchers were able to remove "road blocks" that could obstruct the electrons, an essential step for the development of high-speed electronics. "Instead of scattering when they hit road blocks, as they would in conventional materials, they can simply pass through this new material and get through the structure faster," Professor Kalantar-zadeh said. "Quite simply, if electrons can pass through a structure quicker, we can build devices that are smaller and transfer data at much higher speeds. "While more work needs to be done before we can develop actual gadgets using this new 2D nano-material, this breakthrough lays the foundation for a new electronics revolution and we look forward to exploring its potential."
In the paper titled 'Enhanced Charge Carrier Mobility in Two-Dimensional High Dielectric Molybdenum Oxide,' the researchers describe how they used a process known as "exfoliation" to create layers of the material ~11 nm thick. The material was manipulated to convert it into a semiconductor and nanoscale transistors were then created using molybdenum oxide. The result was electron mobility values of >1,100 cm2/Vs – exceeding the current industry standard for low dimensional silicon.
Cells are the units that make up life. Multi-cellular organisms like humans or plants can be made up of millions or trillions of cells. Most cells are also specialized in function: skin cells look and do very different things from a brain neuron, and a leaf cell is much different than a root cell. Specialized (i.e. differentiated) cells started as undifferentiated stem cells. How does a cell figure out its identity? That is a big topic in developmental biology. Two recent papers in The Plant Cell on Arabidopsis root development provide some answers to this question. The broad answer is that plants (most living things, actually) run genetic programs that define the cells that make up the root. From a seed Inside a plant seed is an embryo. For this story, the relevant part is the few cells that will grow into the entire root of the plant. After germination, these few cells that are the new root start dividing and establish a root designed to find water and soil nutrients, interact with fungi and microbes in the soil, and anchor the shoot system (which has its own meristem, and a story for another day). After a population of dividing cells (also known as transit-activated cells, or TACs) is established, elongation and differentiation begin to occur. A dynamic balance of cell division and differentiation is then in place, with the in-between territory (where a cell isn't fully differentiated, but unlikely to divide) being the transition zone. Depending on conditions, cell division activity can increase, differentiation can increase, or both can be in equal balance. A population of stem cells is maintained throughout root growth, as plant growth and development occurs almost entirely post-embryonically (post-seed), a contrast with animals, where a lot of development happens before birth. This is the root meristem: a constantly maintained population of stem cells and actively dividing cells.
The meristem then ends, and cells transition to elongation and differentiation. How does a root cell know if it's a stem cell? A dividing cell? A differentiating cell? Two recent papers in The Plant Cell address this question. As cells move from dividing stem cells (which divide to maintain a stem cell plus a cell that will become a transit-amplifying cell) through the meristem and into differentiation, there are specific genetic programs that define each area and let the cells know their identity. As cells move, differing instructions cause cells to adopt their new behavior/function. This can occur across a single cell's distance, as the dividing stem cell example in this paragraph shows.

How cells know who they are

Plant cells are constrained by a rigid cell wall and don't move. Part of how a plant cell knows what to do is based on its position. In the root, a key factor is distance from the stem cell niche (SCN). Another is the internal-external root axis (i.e. across the root), consisting of files/rows of cells. Roots mostly have radial symmetry, though some of the vascular tissues, near the center of the root, adopt a two-sided (bilateral) symmetry. How does a plant cell tell its position and what it's supposed to do at any given time?

Stem Cells vs. Dividing Cells

Rodriguez et al. (1) provide evidence for one mechanism by which plants differentiate stem cells vs. the zone of cell division. The researchers looked at a gene regulatory loop that helps define the stem cell niche and demonstrated how it is altered when they manipulated one or the other of its components. One of these components is a family of transcription factors (proteins, i.e. gene products, that turn other genes up or down) dubbed "GROWTH-REGULATING FACTORS", or GRFs. The other is a microRNA, specifically microRNA396, or miR396. microRNAs are genes encoded in the DNA of the genome.
However, the gene product is a small (21-24 nucleotide) RNA molecule of specific sequence that can prevent the making of proteins whose RNA complements the microRNA's sequence (creating a double-stranded RNA that cells can cut apart). In other words, microRNAs negatively regulate genes. In the case of miR396, its target is the GRF transcription factor family. Rodriguez et al. found that miR396 is specifically expressed in the stem cell niche and keeps the GRFs from being expressed there, maintaining stem cell identity. The GRFs, expressed in the meristem/TAC cells, at least partially define those cells as actively dividing. GRFs repress a family of transcription factors that also define stem cells, the PLETHORA (PLT) genes. PLT, in turn, actually turns on miR396 in the stem cell niche. As a cell leaves the stem cell niche, PLT expression is repressed by the GRFs, promoting transit-amplifying status. The PLT-miR396-GRF genetic circuit is just one of many operating, but it is one way plants specify identity.

Cycling vs. non-cycling cells

In Otero et al. (2), the scientists looked at a rather generic type of protein: histones. Histones are proteins that are part of packaging DNA in cells. Histone octamers (groups of 8) have DNA wound around them, a bit like spools of thread. Each octamer has around 150 bp of DNA wrapped around it. This means that per chromosome (thousands to millions of base pairs), there are a lot of histone core complexes. There are also linker histones, which bind the DNA that spans the distance between histone core octamers. Histone octamers can be loosely or tightly packed together or moved along the DNA, making parts of the genome more or less accessible to things like the above-mentioned transcription factors. This matters for which genes will be active or not in any given cell.
The genome encodes histone variants, meaning several different types can comprise the octamer. One component of the octamer is histones of the family H3. Specifically, Otero et al. used tagged versions of H3.1 and H3.3 to track how H3.1 in particular gets replaced in histone octamers at specific times in root cell development. They noticed that a high H3.1/H3.3 ratio seems to mark cells for high proliferation ability (starting with the center of the stem cell niche, the Quiescent Center, which contains cells that do not divide and are therefore lacking in H3.1, but have H3.3). H3.1 seems to get replaced on the DNA when cell division is going to cease, and the scientists demonstrated this for the root meristem. Not only does a high H3.1/H3.3 ratio mark cells as actively proliferating and going through the cell cycle; when cells approach the transition zone and the cessation of cell division, the final cell division has a lengthier G2 cell cycle phase, likely because H3.1 is being completely evicted and the genetic program of the cells is changing, though that aspect is not explored in this work. A really fascinating aspect is that H3.1 gets reincorporated after the final meristem cell division for part of the differentiation process known as endocycling. Endocycling involves making more copies of the genome without going through a cell division (and seems to be a feature of many differentiating plant cells). H3.1 is incorporated during the S phase (S for synthesis, as in DNA synthesis/copying). An endocycle is basically a cycle between G2 and S phases, skipping the other stages of the cell cycle: G1 and actual mitosis, or cell division.
However, endocycles stop too, and indeed, at the end of endocycling, before a root cell is fully differentiated, H3.1 is evicted again. Thus H3.1 seems to be a marker of cycling cells, either mitotic or endocycling. This dynamic seems to exist in other plant tissues as well (guard cell differentiation, for instance).

More to the story

Both of these studies show two ways plant cells can determine their identity and behavior. That said, there are bigger networks of genes than the ones mentioned here. There are a lot of transcription factors that integrate cues from plant growth hormones, environmental cues, and signals from neighboring cells (or even cells further away). Some transcription factors actually move between one cell file and another, as one example of how plant cells talk to each other. Histone dynamics are also not just down to one variant being incorporated into the histone octamer that wraps DNA. Histones can also be marked in various ways, creating different accessibility to DNA for other proteins that would bind a region of DNA (for instance, transcription factors); this is part of epigenetic regulation. Plant cells have different genetic programs to specify cell function. Some factors activate genes, others repress gene (or protein) function. There can be hundreds of genes involved, and plant scientists are really starting to appreciate the network aspects of how genes regulate plant growth and development. These genetic programs are operating in all plants, creating the rich network of a plant's inner life below and above ground.

- Rodriguez, R.E., Ercoli, M.F., Debernardi, J.M., Breakfield, N.W., Mecchia, M.A., Sabatini, S., Cools, T., De Veylder, L., Benfey, P.N., Palatnik, J.F. 2016. MicroRNA miR396 Regulates the Switch between Stem Cells and Transit-Amplifying Cells in Arabidopsis Roots.
Plant Cell 27: 3354-3366. doi: www.plantcell.org/cgi/doi/10.1105/tpc.15.00452
- Otero, S., Desvoyes, B., Peiró, R., Gutierrez, C. 2016. Histone H3 Dynamics Reveal Domains with Distinct Proliferation Potential in the Arabidopsis Root. Plant Cell 28: 1361-1371. doi: www.plantcell.org/cgi/doi/10.1105/tpc.15.01003
Global undernourishment shouldn’t exist. Each day the world’s farmers and ranchers produce the equivalent of 2,868 calories per person on the planet – enough to surpass the World Food Programme’s recommended intake of 2,100 daily calories and enough to support a population inching toward nine billion. The world as a whole does not have a food deficit, but individual countries do. Why do 805 million people still have too little to eat? Access is the main problem. Incomes and commodity prices establish where food goes. The quality of roads and airports determines how easily it gets there. Even measuring undernourishment is a challenge. In countries with the highest historical proportions of undernourishment, it can be hard to get food in and data out. Things are slowly getting better. Since the early 1990s world hunger has dropped by 40% – that means 209 million fewer undernourished people, according to the Food and Agriculture Organization of the United Nations. Future progress may prove difficult. “It is critical to first improve overall food production and availability in places like sub-Saharan Africa,” says FAO economist Josef Schmidhuber. “Then one can focus on access.” -Daniel Stone, National Geographic magazine, December, 2014
The star in the east is Sirius, the brightest star in the night sky, which on December 24 aligns with the 3 brightest stars in Orion's Belt. These 3 bright stars are called today what they were called in ancient times: "The 3 Kings." The 3 Kings and Sirius all point to the place of the sunrise on December 25. This is why the 3 Kings "follow" the eastern star: in order to locate the sunrise, "the birth of the Sun (Son)." The Virgin Mary is the constellation Virgo, also known as the Virgin. Virgo is Latin for "virgin." The ancient glyph for Virgo is an altered "m." This is why Mary, along with other virgin mothers such as Adonis's mother Myrrha or Buddha's mother Maya, begins with an M. Virgo is also referred to as "The House of Bread," and the representation of Virgo is a virgin holding a sheaf of wheat. This House of Bread and its symbol of wheat represent August and September, the time of harvest. In turn, Bethlehem literally translates to "House of Bread." Bethlehem is thus a reference to the constellation Virgo, a place in the sky and not on Earth. There is another very interesting phenomenon that occurs around December 25, the winter solstice. From the summer solstice to the winter solstice, the days become shorter and colder. From the perspective of the northern hemisphere, the sun appears to move south and get smaller and more scarce. The shortening of the days and the expiration of the crops when approaching the winter solstice symbolized the process of death to the ancients. It was "The Death of the Sun (Son)." By December 22, the sun's demise was fully realized, for the sun, having moved south continually for 6 months, reaches its lowest point in the sky. Here a curious thing occurs: the sun stops moving south (or at least it seems to) for 3 days. During this 3-day pause, the sun resides in the vicinity of the Southern Cross, or Crux, constellation.
And after this time, on December 25, the sun moves 1 degree, this time north, foreshadowing longer days, warmth, and spring. And thus it was said: the Sun (Son) died on the cross, was dead for 3 days, only to be resurrected or born again. This is why Jesus and numerous other sun gods share the crucifixion, 3-day death, and resurrection concept. It is the sun's transition period before it shifts its direction back into the Northern Hemisphere, bringing spring, and thus salvation. However, they did not celebrate the resurrection of the Sun (Son) until the spring equinox, or Easter. This is because at the spring equinox the sun officially overpowers the evil darkness, as daytime thereafter becomes longer in duration than night, and the revitalizing conditions of spring emerge. Now, probably the most obvious of all the astrological symbolism around Jesus regards the 12 disciples. They are simply the 12 constellations of the Zodiac, which Jesus, being the Sun (Son), travels about with. Coming back to the Cross of the Zodiac, the figurative life of the sun: it was not just an artistic expression or a tool to track the sun's movements. It was also a pagan spiritual symbol. The cross is thus not a symbol of Christianity, but a pagan adaptation of the Cross of the Zodiac. This is why Jesus in early occult art is always shown with his head on the cross with the sun behind him. For Jesus is the Sun (Son), the Sun (Son) of God, the Light of the World, the Risen Savior, who will "come again," as it does every morning; the Glory of God, who defends against the works of darkness, as he is born again every morning and can be seen "coming in the clouds," "up in heaven," with his "crown of thorns," or sun rays. Furthermore, the character of Jesus, a literary and astrological hybrid, is most explicitly a plagiarization of the Egyptian sun god Horus.
For example, inscribed around 3,500 years ago on the walls of the Temple of Luxor in Egypt are images of the Annunciation, the Immaculate Conception, the Birth, and the Adoration of Horus. The images begin with Thoth announcing to the virgin Isis that she will conceive Horus, then Kneph, the "Holy Ghost," impregnating her, and then the virgin birth and adoration. This is exactly the story of Jesus: the miraculous conception. In fact, the literary similarities between Horus, Jesus, Dionysus, Mithra, Krishna, and Attis are staggering. Now, of the many astrological-astronomical metaphors in the Bible, one of the most important has to do with "the ages." Throughout the scripture there are numerous references to "the age." In order to understand this, we need to be familiar with the phenomenon known as the precession of the equinoxes. The ancient Egyptians, along with cultures long before them, recognized that approximately every 2,150 years the sunrise on the morning of the spring equinox would occur at a different sign of the Zodiac. This has to do with a slow angular wobble the Earth maintains as it rotates on its axis. It is called a precession because the constellations go backwards, rather than through the normal yearly cycle. The amount of time that it takes for the precession to go through all 12 signs is roughly 25,765 years. This is also called "The Great Year," and ancient societies were very aware of it. They referred to each 2,150-year period as an "age." From 4300 B.C. to 2150 B.C. it was the Age of Taurus, the Bull. From 2150 B.C. to 1 A.D. it was the Age of Aries, the Ram, and from 1 A.D. to 2012 A.D. it was the Age of Pisces, the Fish (the fish being a Christian holy symbol), the age from which we are currently ascending as we reach a higher consciousness and are delivered by means of a spiritual "Rapture" as we enter into a new age: the Age of Aquarius, in which we are currently living. Cupid D.
Atkins
Today, we'll continue through my occasional series on basic astronomy concepts. Previously, we've discussed the difference between solar systems, star clusters, and galaxies. We've also discussed telescopes and observatories. Today we're going to talk about the main way astronomers learn about distant planets, stars, and galaxies: the electromagnetic spectrum. Within our Solar System, we can send robots (and maybe, someday, people) to run all kinds of tests. But even the closest star is still trillions of miles further away than our most distant and speediest robot probes. To explore other stars and the Universe, we have three choices: - Electromagnetic radiation ("light"). This is by far the most common and most successful method. - Gravitational waves. Gravitational waves are like ripples of gravity propagating through the Universe. These might be detectable in the not-too-distant future, but for now they have not been definitively detected. - Cosmic rays. These are high-energy particles coming from all corners of the Universe. We can detect these on Earth and in space, but so far it is hard to identify where a particular particle came from (except for the many that come from the sun). Electromagnetic waves, which I'm just going to call "light" from now on so I don't have to keep typing "electromagnetic", seem to have a range of properties. Radio waves carry information, microwaves heat our food, infrared light carries heat and allows us to "see in the dark" with night-vision goggles, visible light is at the heart (or eyeball) of one of a human's primary senses, ultraviolet light gives us suntans and skin cancer, X-rays allow us to see inside our bodies, and gamma rays turn us into monstrously strong, large, green humanoids when we get angry (or at least that's what I've been told). Yet all of these "different" types of light are really the same thing; the differences are all due to the different energies of the light.
Radio waves are very low energy -- an AM radio tower transmitting at a few kilowatts can be heard for hundreds of miles, yet a house lit by a few kilowatts of lightbulbs is difficult to see more than a mile or two away. X-rays can be hundreds or thousands of times more energetic than visible light, which is one reason why they can be harmful to our health. Because all forms of light are the same basic thing, physicists and astronomers often refer to all these types of light as the electromagnetic spectrum. There's no official dividing line between one type of light and another; they all sort of blend like a rainbow (hence the term spectrum). Therefore, we astronomers tend to use a property of light called wavelength to identify exactly where in the spectrum we are looking. Why would astronomers study different parts of the electromagnetic spectrum? Because the light we see at different parts of the spectrum comes from different processes. Radio waves tend to come from very cold clouds of gas and from electrons caught in magnetic fields. Infrared light comes from objects we might consider "warm" or "hot". Humans glow in infrared light. So do planets and stars. Visible light comes mostly from this same heat radiation, just from very hot objects like the sun and other stars. X-rays and gamma rays come from very energetic phenomena, such as gas at temperatures of millions of degrees, collisions of fast-moving objects, and nuclear reactions. Therefore, when we look at the sky in different types of light, we see different things. Here is a slideshow I created that shows how the entire sky looks in different parts of the electromagnetic spectrum, starting with the weak radio light and moving up in energy through gamma ray light. I don't have a pretty map of ultraviolet light, and I also add in a few special wavelengths of light where hydrogen, the most common element in the Universe, likes to reveal itself.
These pictures are like maps of the sky, centered on the center of our Milky Way galaxy. In each image, the Milky Way runs from left to right across the center of the image. Each image is also labeled with the part of the electromagnetic spectrum, the wavelength of the light (shorter wavelengths = more energetic light), and an everyday object that is roughly the size of the wavelength of light. I also give credit as to where the image came from (and I put links at the end of this post). Note how different the sky looks in different parts of the spectrum! I could go on for hundreds of pages talking about all of the neat things you can see in these pictures, and maybe I'll write a few blog posts about it soon, but for now just admire how the sky changes. The different physics behind how light is created also affects how we can detect light. Optical light detectors work a lot like our eyes: light hits a pixel (or a rod or cone in the case of the eye) and is converted to a signal. This works great for visible light, infrared light, and ultraviolet light. For radio light, we need to use antennas and dishes just like satellite TV dishes to detect a signal. And gamma rays are so energetic that they can pass right through normal cameras, so we need all sorts of clever detection equipment. One last comment. Visible light is a very tiny part of the whole electromagnetic spectrum, the tiniest part that gets its own name. Even so, visible light is still the most common flavor of light used in astronomy. There are several reasons for this. We understand visible light very well. Stars emit most of their light in the visible spectrum. Most atoms have unique fingerprints that we can observe and study in visible light. The atmosphere is transparent to visible light. Perhaps the biggest reason is tradition. Visible light astronomy dates back to the first human who noticed the sun, moon, and stars.
Radio wave astronomy, on the other hand, didn't start until the 1930s, and gamma-ray astronomy started in the 1960s. All-Sky Map Image Sources (original image sources are given at each site; I list the site from which I downloaded the map): - Radio: The Multiwavelength Sky - Atomic hydrogen: The Legacy Archive for Microwave Background Data Analysis - Microwave: Wilkinson Microwave Anisotropy Probe 7 Year Data - Thermal Infrared: IPAC Cool Cosmos IRAS Gallery - Near Infrared: 2MASS All-Sky Data Release Explanatory Supplement - Hydrogen (H-alpha): H-alpha Maps of D. Finkbeiner - Optical (Visible): Gigagalaxy Zoom - X-ray: ROSAT Gallery - Gamma Ray: Fermi's Best-Ever Look at the Gamma-Ray Sky
The GROUNDHOG is also known as a Woodchuck or Whistle-pig, as it makes an unusual whistling sound. Groundhogs are stout-bodied mammals of the squirrel family. They have black feet and reddish-brown or brown fur with little to no white, except around the mouth. They range in size from 17 to 20 inches long with 4 to 6 inch long tails, and weigh between 4 and 14 pounds. Groundhogs are found from the eastern and central United States northward across Canada and into Alaska. They are animals of open fields and woodland edges, where they feed mainly on low green vegetation, and can be extremely harmful to crops. They live on the land, but are good swimmers and climbers. They feed heavily in summer, storing fat for their long winter hibernation. Groundhogs are excellent diggers, constructing a burrow with a main entrance and an escape tunnel. Groundhogs generally live alone. An excellent source of Groundhog photographs, sounds and information may be found at: Groundhogs at Hog Haven!
Ghent, Belgium -- A universal influenza vaccine pioneered by researchers from VIB and Ghent University is being tested for the first time on humans by the British-American biotech company Acambis. This vaccine is intended to provide protection against all 'A' strains of the virus that causes human influenza, including pandemic strains. Influenza is an acute infection of the bronchial tubes and is caused by the influenza virus. Flu is probably one of the most underestimated diseases: it is highly contagious and causes people to feel deathly ill. An average of 5% of the world's population is infected with this virus annually. This leads to 3 to 5 million hospitalizations and 250,000 to 500,000 deaths per year. In Belgium, an average of 1,500 people die of flu each year. A more severe flu year, such as the winter of 1989-1990, claimed 4,500 victims in our country. Besides the annual flu epidemics, there is the possibility of a pandemic, which occurs every 10 to 30 years and causes more severe disease symptoms and a higher mortality rate. During the pandemic caused by the Spanish flu in 1918-1919, the number of deaths worldwide rose to over 50 million.

Why an annual vaccine?

Today's flu vaccines need to be adapted every year and, consequently, they must also be administered again every year. The external structure of the flu virus mutates regularly, giving rise to new strains of flu. Due to these frequent mutations, the virus is able to elude the antibodies that have been built up during a previous infection or vaccination. This is why we run the risk of catching the flu each year and also why a new flu vaccine must be developed each year. A universal flu vaccine that provides broad and lifelong protection - like the vaccines we have for polio, hepatitis B or measles - is not yet available.

Universal flu vaccine

In the 1990s, VIB researchers connected to Ghent University, under the direction of Prof.
Emeritus Walter Fiers, invented a universal flu vaccine. One protein on the surface of the influenza virus, the so-called M2 protein, remains unchanged in all known human flu viruses, including the strains that caused the pandemics of the last century. On the basis of the M2 protein, they developed a vaccine and successfully tested it on mice and other laboratory animals: the M2 vaccine provided total protection against 'A' strains of flu, without side effects. Furthermore, this universal influenza vaccine is the first example of a vaccine inducing a protective immune response that normally does not occur in nature, for example following infection by a virus or a bacterium.

Clinical trials on humans

Acambis - a biotech company that specializes in the development of vaccines - has been exclusively licensed rights to VIB's flu vaccine patent portfolio and has entered into a collaboration with VIB for further development work. At the moment, Phase I clinical trials on humans are underway - that is, the candidate vaccine is being administered to a small group of healthy people in order to verify the safety of the product and to provide an initial insight into the vaccine's effect on the human immune system. Xavier Saelens, Prof. Emeritus Willy Min Jou, and Prof. Emeritus Walter Fiers are leading the fundamental research forward with respect to protection against influenza epidemics and pandemics. This involves, amongst other things, supporting research required for the planned Phase II and III clinical trials. Through their collaboration with Acambis, they hope that annual flu vaccines can ultimately be replaced by the new, universal flu vaccine. The goal for this vaccine is that two inoculations would suffice to protect people against all 'A' strains of flu.
Not all information is created equal. Just because you find information at the library does not guarantee that it is accurate or good research. In an academic setting, being able to critically evaluate information is necessary in order to conduct quality research. Each item you find must be evaluated to determine its quality and credibility in order to best support your research. To evaluate a source consider the following: - Who published the source? Is it a university press or a large reputable publisher? Is it from a government agency? Is the source self-published? What is the purpose of the publication? - Where does the information in the source come from? Does the information appear to be valid and well-researched, or is it questionable and unsupported by evidence? Is there a list of references or works cited? What is the quality of these references? - Who is the author? What are the author's credentials (educational background, past writing, experience) in this area? Have you seen the author's name cited in other sources or bibliographies? - Is the content a first-hand account or is it being retold? Primary sources are the raw material of the research process; secondary sources are based on primary sources. - When was the source published? Is the source current or out of date for your topic? - What is the author’s intention? Is the information fact, opinion, or propaganda? Is the author's point of view objective and impartial? Is the language free of emotion-rousing words or bias? - Is the publication organized logically? Are the main points clearly presented? Do you find the text easy to read? Is the author repetitive?
On this day in 1905, some 450 people attend the opening day of the world’s first nickelodeon, located in Pittsburgh, Pennsylvania, and developed by the showman Harry Davis. The storefront theater boasted 96 seats and charged each patron five cents. Nickelodeons (named for a combination of the admission cost and the Greek word for “theater”) soon spread across the country. Their usual offerings included live vaudeville acts as well as short films. By 1907, some 2 million Americans had visited a nickelodeon, and the storefront theaters remained the main outlet for films until they were replaced around 1910 by large modern theaters. Inventors in Europe and the United States, including Thomas Edison, had been developing movie cameras since the late 1880s. Early films could only be viewed as peep shows, but by the late 1890s movies could be projected onto a screen. Audiences were beginning to attend public demonstrations, and several movie “factories” (as the earliest production studios were called) were formed. In 1896, the Edison Company inaugurated the era of commercial movies, showing a collection of moving images as a minor act in a vaudeville show that also included live performers, among whom were a Russian clown, an “eccentric dancer” and a “gymnastic comedian.” The film, shown at Koster and Bial’s Music Hall in New York City, featured images of dancers, ocean waves and gondolas. Short films, usually less than a minute long, became a regular part of vaudeville shows at the turn of the century as “chasers” to clear out the audience after a show. A vaudeville performers’ strike in 1901, however, left theaters scrambling for acts, and movies became the main event. In the earliest years, vaudeville theater owners had to purchase films from factories via mail order, rather than renting them, which made it expensive to change shows frequently. 
Starting in 1902, Henry Miles of San Francisco began renting films to theaters, forming the basis of today’s distribution system. The first theater devoted solely to films, The Electric Theater in Los Angeles, opened in 1902. Housed in a tent, the theater’s first screening included a short called New York in a Blizzard. Admission cost about 10 cents for a one-hour show. Nickelodeons developed soon after, offering both movies and live acts.
I wrote this as a report for school in the spring of 2001. Accompanying it was a series of small models of various arches from different cultures and time periods. The arch is an incredible architectural discovery, dating back to ancient times but still in wide use today; up until the 19th century, it was the only known method for roofing a building without the use of beams. It comes in many shapes: semicircular (Roman), segmental (less than half a circle), or pointed (Gothic). The arch developed from the post and lintel, or possibly from the corbel, which is similar in shape and principle to the arch. Efforts to build corbeled roofs with smaller units and less weight could have eventually led to the discovery of the arch. Arches are made of wedge-shaped blocks, called voussoirs, set with their narrow side toward the opening so that they lock together. The topmost voussoir is called the keystone. Once locked into place, the arch cannot collapse under any amount of weight; the only danger is of the voussoirs crumbling under the pressure. To keep this from happening, most arches require support from other arches, walls, or buttresses. The arch has been found in many different cultures, as early as ancient Mesopotamia. The Egyptians used it in tombs and vaults but never for monumental architecture, such as temples. They apparently thought it unsuited to this purpose. The Greeks also used the arch solely for practical constructions, but many of the principles they developed were later exploited by the Romans. Overall, it was not until the time of the Etruscans that the arch was used in any kind of monument. The best example of this is the Porta Augusta, where the arch is combined with Greek architectural ideas. The Romans borrowed this combination and used it over and over again, but its invention belongs solely to the Etruscans. The Romans took many great strides in the development of the arch.
While they borrowed many techniques from earlier cultures, the Romans invented the idea of setting an arch on top of two tall pedestals to span a walkway such as a public highway. The outer wall of the Colosseum appears composed almost completely of arches. Here we see examples of the barrel vault and the more complicated groined vault, both developed by the Romans from the basic arch. The Romans also used arches for common purposes, such as in the building of bridges and aqueducts. Arches continued to be used in Medieval times, especially in cathedrals, where they helped support the great weight of the stone ceilings, especially when walls were weakened by the presence of many windows. It is here that buttresses were often used to support the arches. Sometimes called "flying buttresses" because of their height, buttresses are a simple construction of a stone pillar with a "bridge" at the top that joins onto the arch or walls of the building, giving extra support to the construction. Arches were also often found in long rows in cathedrals to help support each other. It is about this time that the pointed arch began to be developed as an alternative to the traditional rounded arch. This pointed or Gothic arch became very prevalent in the architecture of the time. Also distinctive was the Islamic arch, found around the same time in the Middle East, where many advances were made as well. While the pointed arch was used here, the Muslims also developed a horseshoe-shaped arch and "stacked arches," an arch built above an arch. It is believed that the "stacked arch" idea developed by accident, when a builder was forced to use columns too short for his purpose and so stacked them on top of each other, with arches holding the stacked columns together. Islamic arches can be found in mosques throughout the Middle East.
In this part of the lab, you must write a program in C that prints the temperature at equally spaced points on a triangular plate. Assume the plate is the triangular region enclosed by the lines x = 0, y = 0, and x + y = 100. Start by sketching this region on a piece of paper. The temperature at a point (x, y) on this plate is given by:

T(x, y) = sin(x + y) * e^(-(x + y)/100)

Your program must:
- print the temperature at points spaced 20 units apart in the x-direction (starting at x = 0) and 10 units apart in the y-direction (starting at y = 0)
- print the temperature to 5 decimal places in a field of width 9

Here is the expected output given the specification provided above:

 -0.18628
  0.36347
 -0.44658 -0.18628
  0.38430  0.36347
 -0.16728 -0.44658 -0.18628
 -0.15914  0.38430  0.36347
  0.49946 -0.16728 -0.44658 -0.18628
 -0.73195 -0.15914  0.38430  0.36347
  0.74746  0.49946 -0.16728 -0.44658 -0.18628
 -0.49225 -0.73195 -0.15914  0.38430  0.36347
  0.00000  0.74746  0.49946 -0.16728 -0.44658 -0.18628

Note that the first line of output above corresponds to the row y = 100, while the last line corresponds to y = 0. Write your program in such a way that the separation between points can be changed easily. You will find the following functions in the library math.h useful:

double sin( double angle ); /* returns the sine of angle */
double exp( double num );   /* returns e^num */

Having some trouble with this problem... anyone have some ideas for a solution?
The pileated woodpecker is monogamous and territorial, and, as it is a non-migratory bird, pairs will defend their large territory year-round (2) (3). Only when one of the pair dies will the other find a new mate, which it then allows to move into its territory (2) (3). Pileated woodpecker pairs occasionally allow non-breeding adults within their territory (2) (3), although this is more frequent during the winter (3). To defend their territory, the pair will use vocalisations and drumming, as well as chasing, striking with twigs and poking the intruder with their bills during conflicts (2). During the breeding season, the male pileated woodpecker selects a nest site and builds most of the nest, which is an oblong cavity in a tree lined with the shavings of wood that are produced during excavation. The nest can take up to six weeks to create (2) (3) (4). The excavations of the pileated woodpecker are made using its long, powerful bill, which is repeatedly drummed on the trunk of a dead tree to create an entrance hole into the hollow interior. The pileated woodpecker creates distinctive, rectangular holes which can be over 60 centimetres deep and are used for roosting and nesting (2) (3). The pileated woodpecker is an extremely important part of the forest ecosystem, as its excavations also provide shelter for many other species, including swifts, owls, bats and pine martens (2) (3). The female pileated woodpecker lays one clutch per breeding season, with four eggs being most common, although the clutch can range between one and six eggs. The eggs are white and slightly glossy (2), and are incubated for 15 to 18 days by both the male and female, after which both sexes alternately feed the young in the nest for the next 24 to 28 days (2) (3). After three to five months, the young leave the adults, but do not venture far from the natal territory (2). 
In addition to excavating holes for nest and roosting sites, the pileated woodpecker will drill holes into trees to gain access to its wood-boring insect prey, which includes carpenter ants (Camponotus spp.), termites, beetle larvae and other insects (2) (3) (4). The pileated woodpecker’s long, barbed tongue is used to extract its prey from the wood (3). This species also feeds on wild nuts and fruit (2) (3) (4).
A flame spectrometer heats the atoms of a sample to an excited state and then analyzes the resulting emitted spectra to determine the atomic makeup of the sample. Spectroscopy is a technique used to assess the concentration or amount of a given element by examining the wavelengths emitted by an excited sample. A spectrometer is an instrument used in spectroscopy, measuring the properties of light over a certain range of the electromagnetic spectrum. In the case of flame spectroscopy, the spectrum analyzed typically includes visible and ultraviolet light. In flame spectroscopy, a sample is excited by a burner or nebulizer/burner combination, which raises electrons into higher energy states. As these excited electrons fall back to lower states, they emit light at wavelengths that are characteristic of the atomic makeup of the material. The spectrometer examines the emitted spectrum to determine the sample's constituent elements. Proper identification depends on comparison of the observed spectrum pattern with indexed patterns already stored in a database. Remember: the quantity measured in flame spectroscopy is emitted or absorbed light, not a physical substance. Types and Elements Analyzed There are three types of flame spectroscopy: atomic emission, atomic absorption and atomic fluorescence. Each type requires its own particular methods. For example, in atomic absorption spectroscopy, the flame reduces the sample to ground-state atoms, which absorb light at characteristic wavelengths, with absorption increasing as the concentration of the element increases; in atomic emission spectroscopy, by contrast, the light emitted by heated atoms is measured. Regardless of the specific type, flame spectroscopy is used to measure the concentration of metallic elements. Fuel Gases Used in a Flame Spectrometer Most spectrometers require a gaseous fuel to produce a "clean" flame. Common fuels include hydrogen and acetylene.
Common oxidant gases, necessary for the burning of the fuel, are nitrous oxide, pure oxygen or air. Flame spectrometers are capable of analyzing metallic elements in the parts-per-million or parts-per-billion concentration ranges. In some cases, detection of lower concentrations is possible, depending on the element, the instrument and the methods used.
The teenage years are a unique period of growth and development, filled with energy, excitement and new experiences. No two teens are alike, and each experiences the teen years uniquely. Parental and cultural influences affect teenage development in different ways. However, all teens go through hormonal and physical changes that contribute to forming their sense of independence and identity.

Independent, Emotional and Rebellious

Typical teenage rebellion can last up to six years and can include defiant behavior and rapidly changing moods, according to Dr. Barton D. Schmitt in the article "Adolescents: Dealing with Normal Rebellion" on the Children's Physician Network's website. Although not all teenagers become rebellious, many do become more resistant to authority, often having a major impact on family dynamics and personal relationships. Teens form their self-concept and sense of identity by establishing independence from parents, sometimes engaging in emotional verbal conflict with family or other rebellious behavior.

Energetic, Adventurous and Risk-Taking

Sleep patterns may change, as teens are often full of energy and prefer to stay up later. Incomplete frontal lobe development makes it difficult for most teens to control impulses, according to the U.S. Department of Health and Human Services. Adventurous or risk-taking behavior is not uncommon. Teens often have a need for excitement and adventure, which sometimes causes them to overlook the potential dangers involved in risk-taking activities, such as unprotected sexual activity or drug experimentation.

Maturing Physically, Hormonally, Sexually and Socially

Teenagers may experience significant growth spurts between the ages of 13 and 18. Hormone levels increase, as adolescent girls begin producing more estrogen. Teen girls fill out physically, begin menstruation, gain weight and can grow almost 10 inches taller between these ages.
Teen boys also experience hormonal changes and begin producing more testosterone. Physical changes common in adolescent boys include growth of facial hair and significant weight gain. Teen boys can grow up to 20 inches taller between these ages. Physical and hormonal changes also bring about an increased sexual awareness, leading many teens to begin to experiment with their sexuality. Many teenagers begin to engage in sexual activity early in adolescence, according to a report in "Pediatrics," the official journal of the American Academy of Pediatrics. Some teens might become involved in a sexual relationship with a boyfriend or girlfriend or dedicate much of their time to socialization. Time with friends sometimes takes priority over schoolwork or time with family. Teens grow intellectually during adolescence and are able to begin making life goals. The ability to understand abstract reasoning increases, and teens begin to consider and conceptualize possibilities in hypothetical situations. Some teens might begin to question their parents' points of view, and they may enjoy debating ideas. Organizational skills tend to improve, as many teens are able to handle multiple responsibilities, including work, socialization and school, according to the Palo Alto Medical Foundation.  However, impulsivity often wins over intellectual growth, and teens often act before thinking of long-term consequences.
- Palo Alto Medical Foundation: Teenage Growth and Development: 15-17 Years
- Lucile Packard Children's Hospital at Stanford: The Growing Child: Adolescent (13-18 Years)
- Pediatrics: Executive Summary
- American Psychological Association: Developing Adolescents
- MayoClinic.com: Sexual Health
- KidsHealth: A Parent's Guide to Surviving the Teen Years
- Children's Physician Network: Adolescents: Dealing with Normal Rebellion
- Lucile Packard Children's Hospital at Stanford: Cognitive Development
- U.S. Department of Health and Human Services: Maturation of the Prefrontal Cortex
June 1 marks the beginning of the six-month-long hurricane season in the Atlantic. The season for these largely warm-season phenomena already began in the Pacific, where they are variously known as typhoons, cyclones, or just hurricanes, on May 15. Their frequency typically peaks in late summer: around early September in the Northern Hemisphere and around February or March in the Southern Hemisphere. Britannica says of these powerful storms, which are all technically tropical cyclones: Tropical cyclones are compact, circular storms, generally some 320 km (200 miles) in diameter, whose winds swirl around a central region of low atmospheric pressure. The winds are driven by this low-pressure core and by the rotation of the Earth, which deflects the path of the wind through a phenomenon known as the Coriolis force. As a result, tropical cyclones rotate in a counterclockwise (or cyclonic) direction in the Northern Hemisphere and in a clockwise (or anticyclonic) direction in the Southern Hemisphere. The wind field of a tropical cyclone may be divided into three regions, as shown in the diagram. First is a ring-shaped outer region, typically having an outer radius of about 160 km (100 miles) and an inner radius of about 30 to 50 km (20 to 30 miles). In this region the winds increase uniformly in speed toward the centre. Wind speeds attain their maximum value at the second region, the eyewall, which is typically 15 to 30 km (10 to 20 miles) from the centre of the storm. The eyewall in turn surrounds the interior region, called the eye, where wind speeds decrease rapidly and the air is often calm.
A new look at ancient craters on Mars finds five that are arrayed along an arc that's part of a giant circle around the planet. The circle may have been Mars' equator long ago. The craters might all have been formed when one giant asteroid broke apart, its fragments slamming into the planet at different times and locations around the then-equator, says Jafar Arkani-Hamed of McGill University in Montreal. If the analysis is right, it has implications for where water might lurk beneath the Martian surface today. Mars was once warmer and wetter, several lines of evidence suggest. Scientists speculate that while much of the water evaporated into space, some may have penetrated underground and remains. Water is a key ingredient for life as we know it. Like Earth, the poles of Mars have not always been where they are today. In fact, Mars seems to have transformed dramatically during its roughly 4.5-billion-year life. One striking feature known as the Tharsis Bulge — it's 5 miles high (8 kilometers) and covers a sixth of the planet — illustrates how a changing shape would have altered its axis over time, scientists say. The five basins identified in the new study are Argyre, Hellas, Isidis, Thaumasia and Utopia. They were thought to have all formed prior to the development of the Tharsis Bulge. Arkani-Hamed's calculations show the basins could all have been created by fragments of an asteroid orbiting routinely around the sun. Most asteroids, as well as all the planets except Pluto, hew roughly to the same region in space, an imaginary plane that extends outward from the sun's equator. The asteroid may have been 500 to 600 miles in diameter (800 to 1,000 kilometers). It came too close to Mars at some point and gravity yanked it apart, the thinking goes. Later, the pieces hit the planet. The craters form a circle whose center is at latitude -30 and longitude 175, which Arkani-Hamed figures was once the south pole.
"The region near the present equator was at the pole when running water most likely existed," Arkani-Hamed said in a statement Monday. "As surface water diminished, the polar caps remained the main source of water that most likely penetrated to deeper strata and has remained as permafrost, underlain by a thick groundwater reservoir." The bottom line: Future missions looking for underground water — and any possible life that might exist there — might bore into the Martian equator of today, where the ancient water could still reside. The idea is presented in the Journal of Geophysical Research (Planets). © 2013 Space.com. All rights reserved.
During the 19th century, common, roseate, arctic, and least terns nested on islands along the entire New England coast. At the turn of the century, hats adorned with tern feathers became the height of fashion, and a drastic reduction of the tern population followed. During this decline, gulls, which compete for the same habitat as terns, started to encroach on feeding and nesting sites traditionally used by terns. In the 1970s, the closing of open landfills displaced hundreds of thousands of gulls, which moved to offshore nesting habitat used by the declining tern population. Again the large, aggressive gulls kept the terns from their traditional offshore habitats. When they nest inland, terns not only have to compete with gulls but also have to deal with mainland predators, further limiting their reproductive success. In 1956 Thacher Island was the breeding ground for 1,125 pairs of arctic, common and roseate terns. There are currently no nesting terns on the island. However, Thacher Island has the potential to regain its status as a prime area for tern breeding. The refuge initiated a tern restoration program in 2001. If successful, we can look forward to the return of terns to Thacher Island in the next decade. This bird has a circumpolar distribution, breeding in temperate and sub-Arctic regions of Europe, Asia and east and central North America.
Demos and Exhibits

Radiology is the process of using X-rays to generate pictures on film. Radiographs can be used to evaluate all parts of the body for various injuries or diseases, including the abdomen, the thorax, the head and neck, and the legs. Radiographs are evaluated based on the different "opacities" of the body tissues. Air is the least opaque material; it shows up black on X-ray film. Fat, soft tissue, and fluid are increasingly more opaque. Bones show up white. Metal is even more opaque; it appears whitest on X-ray film. Ultrasound uses sound waves. Sound waves reflect off organs, and a special device called a transducer converts the reflected waves into a picture. Certain organs are more "echogenic" than others, meaning they reflect more of the waves and therefore appear whiter on the screen. For example, in the abdomen, the prostate is more echogenic (whiter) than the spleen, which is in turn more echogenic than the liver, and then the kidney. Ultrasound can be used to diagnose many diseases, such as cancer, and can also be used to evaluate pregnancies. Other imaging tools that are used at the University of Illinois Veterinary Teaching Hospital include nuclear scintigraphy, CT (computed tomography), and MRI (magnetic resonance imaging).
Working With Truth

Our third degree, as everyone knows, is the Degree of Truth. Odd Fellows are told that "truth IS the imperial virtue." But how do we know what truth is? How do we arrive at "truth"? Is there anything that can be called "THE truth"? Your Conductor's goal here is simply to raise questions about "truth" and start a dialog in your mind about it. By the way: your Conductor does not claim to be a philosopher. Here are three observations about "truth." The first one: Examine the following sentence: "This sentence is false." Can we say this sentence is true? Or is it, as the sentence says, "false"? Or is it both at the same time? Actually, the sentence is a paradox. A paradox is a statement or situation that involves a certain tension between two claims that each seem obviously true. If "This sentence is false" is true, then it must be false, just as it claims; but if it is false, then what it says holds, and it must be true. So, which is it? Is the sentence partially true, or completely true? Or is it partially false or completely false? Here's the second one: Experiences may be measured with a sliding scale. One end of the sliding scale is completely true. The other end is completely false. If we say, "Today the wind did not blow," is this a true statement, or can it be only partly true? If we think about it, the wind likely blew somewhere on earth "today." So, this statement would not be completely true, and it would fall somewhere along the sliding scale, but not at the "completely true" end. On the other hand, if we say "It did not rain today," but we collected a measurable amount of rain, we might say the sentence is false. But it may not have rained in other places, and why does it take a certain amount of water falling for us to call it rain? So, we can't put our "rain" statement on the false end of the scale either. So, at what point does true become false? At what point does false become true? At what point does true Friendship become true Love?
At what point does red become orange? Is there an exact point we can all agree upon? Some would say that true and false are expressions of the same thing. If we assume that ideas about truth follow along this sliding scale and nothing is necessarily true or false always, then why does humanity often search for a single universal truth? Which is the MOST truthful view of Van Gogh in this video? As he saw himself, or as the video maker sees him? And what is the MOST truthful view of you? As you see yourself, or as others see you? Here's the third one, and it "is" about a simple word: "is." The word "is" may be problematic when discussing the present moment. It implies that something could be total, absolute, perfect OR absolutely true in that moment. "The mountain IS beautiful." "Today IS Tuesday." "2+2 IS 4." "This IS a table." "She IS mad." "This IS art." "This IS a problem." And so on down the line. Someone once observed that the concept behind the word "is" limits our ability to search for solutions, because "is" implies something solid, firm, unchanging: something we don't need a solution for, or something we don't need to see differently. In fact, as this person observed, we might be better off using "could be" or "might be" in place of "is": "The mountain MIGHT BE beautiful," "today COULD BE Tuesday," "2+2 MIGHT BE 4," "this COULD BE a table," "she MIGHT BE mad," "this COULD BE art," "this MIGHT BE a problem." Heck, "Truth COULD BE the imperial virtue..." Viewing the word "is" in this way opens an entire world of possibilities, solutions, and, yes, more problems. It opens the mind to further inquiry, and in this we can discover more about ourselves and others.

Scott
The Conductor

Scott Moye is an award-winning history educator and collector of Arkansas folklore. He grew up on a cotton farm and is currently a museum worker.
Hobbies include: old house restoration, writing, amateur radio, Irish traditional music, archery, craft beer, old spooky movies, and street performance. He is a member of Marshall Lodge #1, in Marshall, Arkansas. He is a founder of the Heart in Hand blog.
> This course is designed to meet the requirement of students to understand a complete introduction to Biology.
> This course presents many of the topics found in O and A level biology.
> The topics are presented through PowerPoint presentations. Each body system is presented separately, and there are quizzes at the end of each section.
> In this course, students will learn the introduction to Biology.
> Understand the divisions and branches of Biology.
> Understand the relationship of Biology to other sciences.
> Understand the careers in Biology.
> Understand the history of Biology.
> Understand the scientific method of biology.

Biology is the study of living things. It encompasses the cellular basis of living things, the energy metabolism that underlies the activities of life, and the genetic basis for inheritance in organisms. Biology also includes the study of evolutionary relationships among organisms and the diversity of life on Earth. It considers the biology of microorganisms, plants, and animals, for example, and it brings together the structural and functional relationships that underlie their day-to-day activities. Biology draws on the sciences of chemistry and physics for its foundations and applies the laws of these disciplines to living things. Many subdisciplines and special areas of biology exist, which can be conveniently divided into practical and theoretical categories. Types of practical biology include plant breeding, wildlife management, medical science, and crop production. Theoretical biology encompasses such disciplines as physiology (the study of the function of living things), biochemistry (the study of the chemistry of organisms), taxonomy (classification), ecology (the study of populations and their interactions with each other and their environments), and microbiology (the study of microscopic organisms).

Who this course is for:
- Students pursuing O and A level or an intermediate course.
Year 5 Home Learning Work for Friday 26th June

Use the spreadsheet below to set yourself a times table grid to complete. Ask someone at home to test you on this week's spelling list. Today I would like you to have a go at a type of puzzle which involves both numbers and letters. A codeword puzzle is a grid of words, where each letter has been coded with a number. You are given some letters to start with, but then have to work out which numbers correspond to which letters. Here is an example: Use the link below to access a free online codeword site. Click on 'Play now in your browser' and use the tutorial at the start so that you know how to solve the puzzle on your device.

History - How can we know what life was like 1,000 years ago?

Use the PowerPoint presentation below to learn about the discovery of a famous Mayan artefact. Look at the document below and think about the discovery as a historian, asking yourself these 3 questions. Now look at the PowerPoint presentation below. STOP at slide 2 and try to guess what the Mayan objects were for (ask others at home to have a guess too). Then move on and you will find out if you were right or not. Finally, you must make a decision about some Mayan sites. Budget cuts mean you cannot afford to maintain and study all sites. Look through the presentation below and do some of your own research. Which site do you think is the most important and therefore should get the funding?
Massive human-induced freshwater redistribution with profound consequences

A 14-year (April 2002 - March 2016) NASA GRACE (Gravity Recovery and Climate Experiment) mission[i] has confirmed that a massive redistribution of freshwater is occurring across Earth, with mid-latitude belts drying and the tropics and higher latitudes gaining water supplies. The results, which are probably a combination of the effects of climate change, vast human withdrawals of groundwater, dams and natural changes, could have profound consequences if they continue. The paper[ii] is the first to use gravitational satellite data to map global trends in freshwater availability across a 14-year period. The research identifies 34 areas where water resources rose or fell significantly during the period. "The largest changes we see anywhere are the ice sheet and glacier losses; those are the fastest rates of change. But that is not typically water you would use for drinking or agriculture," says Matthew Rodell, the lead author of the report and a hydrologist at NASA.

Broader observations

The results[iii] emerge from the 2002-2016 GRACE mission, supplemented with additional data sources. The GRACE mission, which recently ended but will soon be replaced by a "GRACE Follow-On" endeavor (scheduled to be launched from California on May 22, 2018), consisted of twin satellites in orbit that detected the tug of Earth's gravity below them and monitored mass changes based on slight differences in measurements by the two satellites. The new research, led by NASA's Matthew Rodell, pulls together these findings to identify 34 global regions (they do not all have the same cause, not even close) that gained or lost more than 32 billion tons (BT) of water between 2002 and 2016.
The resulting map of the findings shows an overall pattern in which ice sheets and glaciers lose by far the most mass at the poles, while middle latitudes show multiple areas of growing dryness even as higher latitudes and the tropical belt tend to see increases in water. Mid-latitude drying with higher- and lower-latitude wetting is a common feature of both climate change models and the GRACE conclusions. There are other human-induced changes, relating not to climate change but, rather, to direct withdrawals of water from the landscape. In northern India, the northern China plain and the Caspian and Aral seas in Central Asia, among other regions, human withdrawals for agriculture have subtracted enormous amounts of water from the landscape. There are also major cases of humans increasing water storage in the landscape, particularly in China, where massive dam construction has created enormous reservoirs. What is striking about the map is the way that a combination of human-driven water withdrawals and droughts seems to be punishing the central latitudes of the Northern Hemisphere in particular, but also the Southern Hemisphere to a significant extent. However, the data remain coarse, and the causes behind the trends in many cases remain a matter of interpretation.

Consistent with the IPCC

"While the pattern of wet-getting-wetter, dry-getting-drier is predicted by the Intergovernmental Panel on Climate Change models for the end of the 21st century, we can't yet attribute the emergence of a similar pattern in the GRACE data to climate change," Famiglietti said. "But it's consistent with what the climate models project."

Big changes in 34 regions: three categories

The NASA researchers, as noted above, have identified 34 regions where major changes have occurred. They have divided these changes into three broad categories. During the 14 years of satellite measurements, nearly all of the 34 identified regions lost or gained at least 32 BT.
Eleven of the regions lost or gained 10 times that or more. "The numbers are huge. It's pretty staggering," Rodell said. "A large portion of them, either direct or indirect human impacts were factors, if not outright the major cause."
- The researchers ascribed changes in 12 regions to natural variability, including a progression from a dry period to a wet period in the northern Great Plains, a drought in eastern Brazil and wetter periods in the Amazon and tropical West Africa.[iv]
- In 14 of the areas — more than 40 percent of the hotspots — the scientists associated the water shifts partially or largely with human activity. That included groundwater depletion combined with drought in Southern California and the southern High Plains from Kansas to the Texas Panhandle, as well as in the northern Middle East, northern Africa, southern Russia, Ukraine and Kazakhstan. Many of the areas where the researchers saw direct human impacts are farming regions that have relied heavily on groundwater pumping, including northern India, the North China Plain and parts of Saudi Arabia. The scientists also identified other human-driven impacts, including water diversions that have led to declines in the Caspian Sea and the construction of the Three Gorges Dam and other reservoirs in China.
- The research identifies eight regions where the changes were mainly caused by climate.

Key regional analysis

Some key regional information and analysis:
- The study, published in the May 17, 2018 issue of the journal Nature, said: groundwater, soil moisture, surface waters, snow and ice (these five components collectively form TWS: terrestrial water storage) are dynamic components of the terrestrial water cycle. Although they are not static on an annual basis, in the absence of hydro-climatic shifts or substantial anthropogenic stresses they typically remain range-bound. The authors map TWS change rates around the globe based on 14 years of GRACE observations.
The paper used 1979–2016 precipitation data from the Global Precipitation Climatology Project version 2.3.

- On a monthly basis, GRACE can resolve TWS changes with sufficient accuracy over scales that range from approximately 200,000 km2 at low latitudes to about 90,000 km2 near the poles. However, owing to GRACE’s coarse spatial resolution, the inability to partition component mass changes and the brevity of the time series, proper attribution of the TWS changes requires comprehensive examination of all available auxiliary information and data, which has never before been performed at the global scale.
- By far the largest TWS trends occur in Antarctica (region 1; −127.6 ± 39.9 BT/yr averaged over the continent), Greenland (region 2; −279.0 ± 23.2 BT/yr), the Gulf of Alaska coast (region 3; −62.6 ± 8.2 BT/yr) and the Canadian archipelago (region 4; −74.6 ± 4.1 BT/yr). Excluding those four ice-covered regions, one of the most striking aspects of changing TWS is that freshwater seems to be accumulating in far-northern North America (region 5) and Eurasia (region 6) and in the wet tropics, whereas the greatest non-frozen-freshwater losses have occurred at mid-latitudes.

India and surrounding areas

What does the analysis say about India and surrounding areas?

- NORTH INDIA: The hotspot in northern India (region 7) was among the first non-polar TWS trends to be revealed by GRACE. It results from groundwater extraction to irrigate crops, including wheat and rice, in a semi-arid climate. Fifty-four per cent of the area is equipped for irrigation. The authors estimate the rate of TWS depletion to be 19.2 ± 1.1 BT/yr, which is within the range of GRACE-based estimates from previous studies of differently defined northern-India regions. The trend persists despite precipitation being 101% of normal (namely, the 1979–2015 GPCP annual mean for the region) during the study period, with an increasing trend of 15.8 mm/yr.
The fact that extractions already exceed recharge during normal-precipitation years does not bode well for the availability of groundwater during future droughts. The contribution of Himalayan glacier mass loss to the regional trend is minor.

- The increasing trend in central and southern India (region 8; 9.4 ± 0.6 BT/yr) probably reflects natural variability of (mostly monsoon) rainfall, which was 104% of normal with an increasing rate of 3.7 mm/yr (0.4% per year).
- The negative trend that extends across East India, Bangladesh, Burma and southern China (region 13), −23.3 ± 1.9 BT/yr, may be explained by a combination of intense irrigation (25%) and a decrease in monsoon season precipitation during the study period. The total annual precipitation was well above normal from 1998 to 2001, resulting in elevated TWS. During the GRACE period, precipitation declined at a rate of −10 mm/yr (−0.7% per year), and the annual accumulations were below average from 2009 to 2015. This is the third most heavily irrigated of the study regions, so TWS decline is likely to continue, although perhaps at a slower rate, given that rainfall should normalize eventually and a 15% increase in rainfall is predicted by 2100.
- Satellite altimetry and Landsat data indicate that the majority of lakes in the Tibetan Plateau have grown in water level and extent during the 2000s owing to a combination of elevated precipitation rates and increased glacier-melt flows, which are difficult to disentangle. From 1997 to 2001 the average annual precipitation in region 10 was 160 mm/yr, well below the 2002–2015 average of 175 mm/yr.

Millions of climate refugees in South Asia?

“Human security in South Asia is really at risk, in my opinion, due to decreasing water availability, due to disappearing glaciers, groundwater depletion, and changing extremes,” Famiglietti said.
“The disappearance of Himalayan glaciers could result in millions of climate refugees in the coming decades.”

Perilous path of unmanaged groundwater

The NASA team said other regions, including northern India, the Middle East and the area surrounding the Caspian Sea, are “on a perilous path.” “But overall, groundwater continues to be undermanaged, if managed at all. And that’s why we see the rapid disappearance,” Famiglietti said.

Do depleted aquifers lose their water-holding capacity?

After a wet winter that refilled reservoirs, California Gov. Jerry Brown declared the drought over in 2017. But the scientists said it’s “doubtful that aquifer storage will recover completely without large usage reductions.” The reason given by the NASA scientists should worry us in India. They said this is “in part because when aquifers are depleted the water-storing spaces between rocks, clay and sand can compact, permanently reducing how much water they hold.” This means that aquifers depleted of water can permanently lose a significant part of their water storage capacity.

In Conclusion

Rodell said, “One of the important things about this map is it provides information for policymakers and decision makers to think about longer-term strategies for how we’re going to make sure that the world continues to have enough water to grow food for a growing population.”

“When I sit back and look at it, I still am surprised by the human fingerprint — how strong it is, how we have really drastically altered the freshwater landscape,” said Famiglietti.

There is enough here to worry us in India and South Asia. The authors of the NASA study mince no words when they say we are on a perilous path, and if we do not wake up, we may be on a path to making millions in South Asia climate refugees. Unfortunately, this is already happening; it is not just a future prospect.

Compiled by SANDRP ([email protected])
Decision-making in the fly brain

When food smells simultaneously appealing and repulsive, the learning centre aids decision-making

For most of us, a freshly brewed cup of coffee smells wonderful. However, individual components that make up the fragrance of coffee can be extremely repulsive in isolation or in a different combination. The brain therefore weighs and evaluates the individual components of a fragrance relative to one another. Only then is an informed decision possible as to whether an odour and its source are “good” or “bad”. Scientists from the Max Planck Institute of Neurobiology in Martinsried have discovered how conflicting smells are processed in the mushroom body of the fruit fly brain. The results assign a new function to this brain region and show that sensory stimuli are evaluated in a situation-dependent context. In this way the insects are able to make an appropriate decision on the spur of the moment.

Most sensory impressions are complex. For example, a fragrant substance usually appears in combination with many other odours – like the smell of the aforementioned cup of coffee, which consists of over 800 individual odours, including some unpleasant ones. For the fruit fly Drosophila, the smell of carbon dioxide (CO2) is repellent. Among other things, the gas is released by stressed flies to warn other members of the species. When the insects smell CO2, an innate flight response is triggered. However, CO2 is also produced by overripe fruit – a coveted source of food for many insects. Foraging flies must therefore be able to ignore their innate aversion to CO2 in instances where the gas is present in combination with food odours. It is still poorly understood how the brain compares individual olfactory sensations and classifies them according to the situation at hand in order to reach a sensible decision (here: food or danger).
“The opposing significance of CO2 for fruit flies is an ideal starting point to explore how the brain correctly evaluates individual sensory impressions depending on the situation,” says Ilona Grunwald Kadow. Together with her team at the Max Planck Institute of Neurobiology, she studies how the brain processes odours and makes decisions based on the results. The scientists have now been able to show that complex or opposing sensory information is processed in the mushroom body. Until now, this brain area was thought to be a centre for learning and memory storage. The new results show that the mushroom body has an additional function: it evaluates sensory impressions independently of learned content and memory to allow instantaneous decisions. The scientists were able to show that CO2 activates neurons in the neural network that includes the mushroom body. Those neurons, in turn, trigger the flies’ flight behaviour. However, if CO2 occurs along with food odours, the food odour stimulates neurons within the mushroom body network that release the neurotransmitter dopamine. Dopamine occurs in many species, including humans, in connection with positive values. When food smells are present along with CO2, these dopaminergic neurons in fruit flies transmit this information to the mushroom body, where they suppress the innate CO2 response by inhibiting “avoidance neurons”. “Interestingly, the experience that CO2 frequently occurs together with food odours does not cause the insects to lose their aversion to CO2 forever,” says Grunwald Kadow. When the information about the simultaneous occurrence of CO2 and food odours is transmitted to the “learning centre” in the mushroom body, an immediate change of behaviour occurs, but not a permanent change with regard to the negative evaluation of CO2. This could apply to other sensory impressions as well, such as vision. The researchers speculate that the absence of a permanent change in behaviour could be vital in many situations. 
The smell of predators, for example, triggers an instinctive fear in humans. We do not lose this fear, even after experiencing caged predators and their smell at a zoo. The human brain therefore also appears to compare and draw different conclusions depending on the circumstances.
The operating system is the layer of software that drives the hardware of a computer and provides the user with a comfortable work environment. Operating systems vary, but most have a shell, or text interface. You use the GNU shell every time you type in a command that launches an email program or text editor under GNU. In the following sections of this chapter, we will explore how to create a C program from the GNU shell, and what might go wrong when you do.
European explorers were the first to encounter indigenous peoples of the New World. The first contact may have occurred when Thorvald, brother of Leif Eriksson, died in a skirmish with natives near Vinland in present-day Newfoundland. Thorvald may have been the first European to die and be buried in America. Nearly five centuries later, word that Christopher Columbus had discovered what was believed to be a western approach to the East Indies spread through Europe, which energized other nations to dispatch explorers. Like Columbus, they were searching for gold, silver, spices and other valuables. They also were looking for new lands to claim for their empires. They quickly realized that what Columbus had actually found was another world altogether.

The Europeans came to the New World with underlying assumptions, some based on what would later be called Manifest Destiny, others based on Christian beliefs. Some, including Columbus, believed it was God's design to convert non-Christians everywhere. Extracting wealth from the New World was justifiable because the peoples there were heathen. Depredations against Native Americans were sanctioned because they were Satan's own, and it followed that their cultures could be crippled as well.

By and large, indigenous peoples first welcomed European explorers, and trade was a hallmark of their relationship from the beginning. The natives introduced the Europeans to such plants as maize, potatoes, other edible plants, and tobacco. The Europeans introduced the Native Americans to horses, guns, and alcohol, among other things. Contact would wreak radical changes in Native American lifeways through the impact of trade, missionary influence, intermarriage, disease, enslavement, defeat in battle, forced relocation to reservations, and acculturation.
What is the full form of GPS?

The full form of GPS is the Global Positioning System, a satellite navigation system used to identify the ground position of an object. The US military first used GPS technology in the 1960s, and it broadened into civilian applications over the next few decades. Today, many commercial products incorporate GPS receivers, such as smartphones, automobiles, GIS devices, and fitness watches. GPS is widely used for tracking and guiding vehicles, and for providing the best route from one place to another for shipping companies, airlines, drivers and courier services.

Various Parts of GPS

GPS can be broken up into three separate segments:

- Space segment – refers to the satellites. Roughly 24 satellites are distributed across six orbital planes.
- Control segment – refers to the stations installed on Earth that manage and track the satellites.
- User segment – refers to the users, whose receivers process the navigation signals received from the GPS satellites to measure position and time.

Working Principle of GPS

- The GPS network comprises 24 satellites that are deployed approximately 19,300 kilometres above the Earth’s surface. They circle the Earth at an incredibly fast speed of around 11,200 km/h (once every 12 hours). The satellites are evenly spaced so that four satellites can be seen with a clear line of sight from anywhere on the globe.
- Each satellite is fitted with a computer, a radio and an atomic clock. With knowledge of its orbit and its clock, it constantly transmits its changing position and the time.
- GPS makes use of the triangulation method (more precisely, trilateration) to identify the user’s location. The receiver first establishes a link with 3 to 4 satellites. Each satellite transmits its own position and the exact time of transmission; from the travel time of each signal, the receiver computes its distance to that satellite, and from these distances it calculates its own location.
- If the receiver has a computer screen showing a map, then the position can be shown on the monitor.
- If a fourth satellite can be accessed, the receiver can measure both the altitude and the geographical position.
- Your receiver will also calculate your travel speed and direction if you are travelling, and give you approximate arrival times at specific locations.

Applications of GPS

GPS provides data that had never been available before, with an amount and degree of precision that only GPS makes possible. Researchers use GPS to measure shifts in Arctic ice, the movement of Earth’s tectonic plates and volcanic activity.

- It gives the exact location of an object.
- It tracks the movement of a person or object.
- It helps in the creation of world maps.
- It provides precise universal timing.
- It provides navigation during travel from one location to another.
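The distance-based positioning idea described above can be made concrete with a minimal, hypothetical sketch in two dimensions (real GPS solves the 3D problem plus a receiver clock error, which is why the fourth satellite matters). Each "satellite" is reduced to a beacon with a known position; travel time multiplied by the speed of light gives a distance, and the intersection of the distance circles fixes the receiver's position. The beacon coordinates and timings below are invented for illustration.

```python
# Minimal 2D trilateration sketch (illustrative only, not a real GPS solver).
C = 299_792_458.0  # speed of light in m/s

def trilaterate_2d(p1, r1, p2, r2, p3, r3):
    """Return (x, y) from three beacon positions and measured distances."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the three circle equations pairwise removes the x^2 and
    # y^2 terms, leaving a linear system in x and y.
    a, b = 2 * (x2 - x1), 2 * (y2 - y1)
    c = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    d, e = 2 * (x3 - x2), 2 * (y3 - y2)
    f = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    x = (c * e - f * b) / (e * a - b * d)
    y = (c * d - a * f) / (b * d - a * e)
    return x, y

# Hypothetical beacons at known positions; the receiver (at the unknown
# point (1.0, 2.0)) measures the signal travel time from each one.
beacons = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
travel_times = [5**0.5 / C, 13**0.5 / C, 5**0.5 / C]
distances = [t * C for t in travel_times]  # distance = speed of light x time

x, y = trilaterate_2d(beacons[0], distances[0],
                      beacons[1], distances[1],
                      beacons[2], distances[2])
print(round(x, 6), round(y, 6))  # recovers the point (1.0, 2.0)
```

Because light covers about 30 cm in a nanosecond, tiny timing errors translate into large position errors, which is why each satellite carries an atomic clock.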
Cotton is a soft, fluffy staple fiber that grows in a boll, or protective case, around the seeds of the cotton plants of the genus Gossypium in the mallow family Malvaceae. The fiber is almost pure cellulose. Under natural conditions, the cotton bolls will increase the dispersal of the seeds.

The Indian Origin

The origins of cotton production and use go back to ancient times. The first evidence of cotton use was found in India, and dates from about 6,000 B.C. Scientists believe that cotton was first cultivated in the Indus delta, historically a part of Hindu Civilisation. Cotton bolls discovered in a cave near Tehuacán, Mexico, have been dated to as early as 5500 BC, but this date has been challenged. More securely dated is the domestication of Gossypium hirsutum in Mexico between around 3400 and 2300 BC. In Peru, cultivation of the indigenous cotton species Gossypium barbadense has been dated, from a find in Ancon, to c. 4200 BC, and was the backbone of the development of coastal cultures such as the Norte Chico, Moche, and Nazca. Cotton was grown upriver, made into nets, and traded with fishing villages along the coast for large supplies of fish. The Spanish who came to Mexico and Peru in the early 16th century found the people growing cotton and wearing clothing made of it.

The Greeks and the Arabs were not familiar with cotton until the Wars of Alexander the Great, as his contemporary Megasthenes told Seleucus I Nicator of “there being trees on which wool grows” in “Indica”. This may be a reference to “tree cotton”, Gossypium arboreum, which is a native of the Indian subcontinent.

Cotton has been spun, woven, and dyed since prehistoric times. It clothed the people of ancient India, Egypt, and China. Hundreds of years before the Christian era, cotton textiles were woven in India with matchless skill, and their use spread to the Mediterranean countries.
In Iran (Persia), the history of cotton dates back to the Achaemenid era (5th century BC); however, there are few sources about the planting of cotton in pre-Islamic Iran. The planting of cotton was common in Merv, Ray and Pars of Iran. In Persian poets’ poems, especially Ferdowsi’s Shahname, there are references to cotton (“panbe” in Persian). Marco Polo (13th century) refers to the major products of Persia, including cotton. John Chardin, a French traveler of the 17th century who visited Safavid Persia, spoke approvingly of the vast cotton farms of Persia. During the Han dynasty (207 BC – 220 AD), cotton was grown by Chinese peoples in the southern Chinese province of Yunnan. During the late medieval period, cotton became known as an imported fiber in northern Europe, without any knowledge of how it was derived, other than that it was a plant. Because Herodotus had written in his Histories, Book III, 106, that in India trees grew in the wild producing wool, it was assumed that the plant was a tree, rather than a shrub. This aspect is retained in the name for cotton in several Germanic languages, such as German Baumwolle, which translates as “tree wool” (Baum means “tree”; Wolle means “wool”). Noting its similarities to wool, people in the region could only imagine that cotton must be produced by plant-borne sheep. John Mandeville, writing in 1350, stated as fact that “There grew there [India] a wonderful tree which bore tiny lambs on the endes of its branches. These branches were so pliable that they bent down to allow the lambs to feed when they are hungry.” Cotton manufacture was introduced to Europe during the Muslim conquest of the Iberian Peninsula and Sicily. The knowledge of cotton weaving was spread to northern Italy in the 12th century, when Sicily was conquered by the Normans, and consequently to the rest of Europe. The spinning wheel, introduced to Europe circa 1350, improved the speed of cotton spinning. 
By the 15th century, Venice, Antwerp, and Haarlem were important ports for cotton trade, and the sale and transportation of cotton fabrics had become very profitable. In India between the 16th and 18th centuries, cotton production increased, in terms of both raw cotton and cotton textiles. The largest manufacturing industry in this era was cotton textile manufacturing, which included the production of piece goods, calicos, and muslins, available unbleached and in a variety of colours. The cotton textile industry was responsible for a large part of the empire’s international trade. India had a 25% share of the global textile trade in the early 18th century. Indian cotton textiles were the most important manufactured goods in world trade in the 18th century, consumed across the world from the Americas to Japan. The most important center of cotton production was the Bengal Subah province, particularly around its capital city of Dhaka.

The worm gear roller cotton gin, which was invented in India during the early 13th–14th centuries, is still used in India to the present day. Another innovation, the incorporation of the crank handle in the cotton gin, first appeared in India. The production of cotton, which may have largely been spun in the villages and then taken to towns in the form of yarn to be woven into cloth textiles, was advanced by the diffusion of the spinning wheel across India, lowering the costs of yarn and helping to increase demand for cotton. The diffusion of the spinning wheel, and the incorporation of the worm gear and crank handle into the roller cotton gin, led to greatly expanded Indian cotton textile production. It was reported that, with an Indian cotton gin, which is half machine and half tool, one man and one woman could clean 28 pounds of cotton per day. With a modified Forbes version, one man and a boy could produce 250 pounds per day.
If oxen were used to power 16 of these machines, and a few people’s labour was used to feed them, they could produce as much work as 750 people did formerly.

In the early 19th century, a Frenchman named M. Jumel proposed to the great ruler of Egypt, Mohamed Ali Pasha, that he could earn a substantial income by growing an extra-long staple Maho (Gossypium barbadense) cotton, in Lower Egypt, for the French market. Mohamed Ali Pasha accepted the proposition and granted himself the monopoly on the sale and export of cotton in Egypt; he later dictated that cotton should be grown in preference to other crops.

Egypt under Muhammad Ali in the early 19th century had the fifth most productive cotton industry in the world, in terms of the number of spindles per capita. The industry was initially driven by machinery that relied on traditional energy sources, such as animal power, water wheels, and windmills, which were also the principal energy sources in Western Europe up until around 1870. It was under Muhammad Ali in the early 19th century that steam engines were introduced to the Egyptian cotton industry. By the time of the American Civil War, annual exports had reached $16 million (120,000 bales), which rose to $56 million by 1864, primarily due to the loss of the Confederate supply on the world market. Exports continued to grow even after the reintroduction of US cotton, produced now by a paid workforce, and Egyptian exports reached 1.2 million bales a year by 1903.

East India Company and Industrial Revolution

The English East India Company (EIC) introduced Britain to cheap calico and chintz cloth on the restoration of the monarchy in the 1660s. Initially imported as a novelty side line from its spice trading posts in Asia, the cheap colourful cloth proved popular and overtook the EIC’s spice trade by value in the late 17th century.
The EIC embraced the demand, particularly for calico, by expanding its factories in Asia and producing and importing cloth in bulk, creating competition for domestic woollen and linen textile producers. The affected weavers, spinners, dyers, shepherds and farmers objected, and the calico question became one of the major issues of national politics between the 1680s and the 1730s. Parliament began to see a decline in domestic textile sales, and an increase in imported textiles from places like China and India. Seeing the East India Company and their textile importation as a threat to domestic textile businesses, Parliament passed the 1700 Calico Act, blocking the importation of cotton cloth. As there was no punishment for continuing to sell cotton cloth, smuggling of the popular material became commonplace. In 1721, dissatisfied with the results of the first act, Parliament passed a stricter addition, this time prohibiting the sale of most cottons, imported and domestic (exempting only thread Fustian and raw cotton).

The exemption of raw cotton from the prohibition initially saw 2,000 bales of cotton imported annually, which became the basis of a new indigenous industry, initially producing Fustian for the domestic market, though more importantly triggering the development of a series of mechanised spinning and weaving technologies to process the material. This mechanised production was concentrated in new cotton mills, which slowly expanded until, by the beginning of the 1770s, 7,000 bales of cotton were imported annually, and pressure was put on Parliament by the new mill owners to remove the prohibition on the production and sale of pure cotton cloth, as they could easily compete with anything the EIC could import.
The acts were repealed in 1774, triggering a wave of investment in mill-based cotton spinning and production, doubling the demand for raw cotton within a couple of years, and doubling it again every decade into the 1840s.

Indian cotton textiles, particularly those from Bengal, continued to maintain a competitive advantage up until the 19th century. In order to compete with India, Britain invested in labour-saving technical progress, while implementing protectionist policies such as bans and tariffs to restrict Indian imports. At the same time, the East India Company’s rule in India contributed to its deindustrialization, opening up a new market for British goods, while the capital amassed from Bengal after its 1757 conquest was used to invest in British industries such as textile manufacturing and greatly increase British wealth. British colonization also forced open the large Indian market to British goods, which could be sold in India without tariffs or duties, compared to local Indian producers who were heavily taxed, while raw cotton was imported from India without tariffs to British factories which manufactured textiles from Indian cotton, giving Britain a monopoly over India’s large market and cotton resources. India served as both a significant supplier of raw goods to British manufacturers and a large captive market for British manufactured goods.

Britain eventually surpassed India as the world’s leading cotton textile manufacturer in the 19th century. India’s cotton-processing sector changed during EIC rule in the late 18th and early 19th centuries, from focusing on supplying the British market to supplying East Asia with raw cotton, as artisan-produced textiles were no longer competitive with those produced industrially, and Europe preferred the cheaper slave-produced, long-staple American and Egyptian cottons for its own materials.
Life Started From Lightning Bolts

It’s possible that life on Earth began with a bolt of lightning. No, the world’s first bacteria were not literally animated by a stray thunderbolt (sorry, Dr. Frankenstein). Trillions of lightning bolts over a billion years of Earth’s early history may have helped unlock essential phosphorus compounds that paved the way for life on Earth, according to a new study published Tuesday (March 16) in the journal Nature Communications.

Lead research author Benjamin Hess, a graduate student at Yale University’s Department of Earth and Planetary Sciences, told Live Science, “In our study, we demonstrate for the first time that lightning strikes were possibly a major source of reactive phosphorus on Earth around the time that life evolved [3.5 billion to 4.5 billion years ago].” “Lightning strikes may have thus contributed to the emergence of life on Earth by supplying phosphorus.”

Life Scattered Through Sky?

How does an out-of-the-blue event contribute to the emergence of terrestrial life? It’s all about the phosphorus atoms, or more precisely, the organic materials that phosphorus atoms can produce when combined with other bio-essential elements. Consider phosphates, which are ions made up of four oxygen atoms and one phosphorus atom and are important to all forms of life. Phosphates are major components of bones, teeth, and cell membranes, and form the backbones of DNA, RNA, and ATP (the primary source of energy for cells). However, while there was probably plenty of water and carbon dioxide in the atmosphere to work with around 4 billion years ago, both of which are important for life’s fundamental molecules, much of the planet’s natural phosphorus was wrapped up in insoluble rock, making it difficult to mix into organic phosphates. So, how did Earth come to possess these vital compounds?
According to one hypothesis, early Earth got its phosphorus from meteorites carrying a mineral called schreibersite, which is partly made of phosphorus and soluble in water; if tonnes of schreibersite meteorites crashed into Earth over millions or billions of years, enough phosphorus could be released into a concentrated region to provide the right conditions for biological life, according to the new study.

However, by the time life appeared on Earth, around 3.5 billion to 4.5 billion years ago, the rate of meteor strikes had fallen “exponentially,” Hess said, since most of our solar system’s planets and moons had largely taken shape. This complicates the hypothesis of interstellar phosphorus. Hess says that there is another way to produce schreibersite right here on Earth. It just takes a little bit of ground, a cloud, and a few trillion jolts of lightning.

Bolts in Billions

Lightning can heat surfaces to nearly 5,000 degrees Fahrenheit (2,760 degrees Celsius), forming new minerals that were not previously present. For the new research, Hess and his colleagues looked at a lightning-blasted clump of rock called fulgurite that had previously been excavated from an Illinois site. Inside the rock the team discovered tiny balls of schreibersite, as well as a number of other glassy minerals.

Now that they had preliminary proof that lightning strikes can produce phosphorus-rich schreibersite, the team had to determine whether enough lightning might have hit early Earth to release a large amount of it into the environment. Using models of Earth’s early atmosphere, the researchers calculated how many lightning strikes could have flashed over the planet each year.
Currently, approximately 560 million lightning bolts strike the Earth each year; 4 billion years ago, when Earth’s atmosphere was considerably richer in the greenhouse gas CO2 (and therefore hotter and more vulnerable to storms), the team determined that anywhere from 1 billion to 5 billion bolts struck each year. Of those, the team calculated that between 100 million and 1 billion bolts hit the ground each year (the rest discharged above the oceans). And, according to Hess, up to a quintillion (a 1 followed by 18 zeros) lightning strikes could have reached our young Earth over a billion years, each one releasing a small amount of available phosphorus. Lightning strikes alone may have provided Earth with anywhere from 250 to 25,000 pounds of phosphorus (110 to 11,000 kilogrammes) each year between 4.5 billion and 3.5 billion years ago, according to the researchers.

Phosphorus Under Consideration

That’s a wide range, with a lot of uncertainty about early Earth conditions baked in. However, Hess believes that even a small amount of phosphorus may have influenced the emergence of life. “All that is needed for life to form is a single location with the right ingredients,” Hess told Live Science. “Yes, [250 lbs.] of phosphorus per year might have been enough if it was concentrated in a single tropical island arc. However, if there are several such sites, this is more likely to occur.”

The question of whether lightning hit sufficiently exposed land on early Earth to have an effect on life will never be completely resolved. However, the latest research shows that it was mathematically feasible. According to the researchers, it’s possible that a combination of asteroid impacts and lightning strikes provided Earth with the phosphorus it required to weave the first bio-essential molecules, including DNA and RNA. Either way, future research into the origins of terrestrial life shouldn’t strike lightning from the record.
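The arithmetic behind the figures quoted above can be checked in a few lines. The numbers are the article's own ranges; pairing the low ends together and the high ends together for the per-strike estimate is my own assumption, added for illustration:

```python
# Back-of-envelope check of the lightning-phosphorus figures quoted above.
ground_strikes_per_year = (100e6, 1e9)  # bolts hitting land each year
years = 1e9                             # ~4.5 to ~3.5 billion years ago

# Cumulative land strikes: up to a quintillion (1e18), as stated.
max_cumulative_strikes = ground_strikes_per_year[1] * years
print(f"{max_cumulative_strikes:.0e}")  # 1e+18

# Annual phosphorus delivery quoted in the study, in kilograms.
p_per_year_kg = (110.0, 11_000.0)
# Implied phosphorus per land strike, in grams (low/low and high/high pairing):
per_strike_g = (1000 * p_per_year_kg[0] / ground_strikes_per_year[0],
                1000 * p_per_year_kg[1] / ground_strikes_per_year[1])
print(per_strike_g)  # on the order of a few milligrams per strike
```

A few milligrams per strike sounds negligible, but repeated over hundreds of millions of strikes every year it reproduces the study's 110 to 11,000 kg annual range.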
Passion Projects were a staple in my gifted enrichment classroom in the 1970s. Back in those days, we called them Type III enrichment activities (Renzulli Enrichment Model) or Independent Studies. But thankfully, things changed as general education has embraced the concepts of thinking skills, creative production, and talent development. Today we see these activities implemented in all types of classrooms through Genius Hour, Passion Projects, and Maker Spaces. Whatever title you choose to give them, Passion Projects promote student-centered investigations. They are examples of personalized learning and differentiation at their finest.

Why Use Passion Projects?

These projects are one of the best ways to keep students engaged and excited about learning. A bonus is that they address many of your students’ social-emotional needs. Passion Projects have an immediate buy-in from the students because they are centered on a topic each student is very interested in. They complement the core curriculum because they integrate reading comprehension skills, note-taking skills, and writing skills. More than just learning research skills, Passion Projects promote creativity and innovative thinking. Students develop self-awareness, self-management, and responsible decision making. They are learning essential life skills such as planning, decision making, persistence through overcoming obstacles, problem-solving, time management, and personal reflection.

In this blog post, I am going to give you all the steps I use when leading my students through the Passion Project process for the very first time. You will be able to shorten the steps for later projects. Be forewarned! This blog post is long, but if you read it in its entirety, you will know everything you need to know to successfully implement Passion Projects in your classroom. If you want to learn about using Passion Projects through Distance Learning, you can check out that blog post here.

Precisely What Are Passion Projects?
A Passion Project is an investigative activity. Rather than being a teacher-centered curriculum, it is a student-centered curriculum. Students identify an area of interest. Next, they develop a burning question related to that topic. The burning question is a question they want to find the answer to, but it is a little more complicated. It is what we like to call a non-Google-able question. You can’t find the answer through a simple Internet search. Students use a variety of sources to research their burning question. Next, they create a product to present to an audience of their choice. The final, often overlooked stage of the process is reflection. During reflection, students think about what went well, what they would change, and what they learned. My favorite thing about Passion Projects is that throughout the entire process, students are in charge of their learning. They get to experience the joy of discovery and learning new things. The role of the teacher changes as they become coaches, facilitators, procurers of resources, editors, and cheerleaders. Teachers coach and encourage students when they get stuck or experience failure, which some inevitably will. How Can I Fit Passion Projects Into My Schedule? I usually set aside one hour per week for students to work on their Passion Projects. We call this time Genius Hour because it is modeled after the 20% Time model used by many corporations. Setting the Stage for Passion Projects Before I even begin to introduce Passion Projects to my students, I like to share some picture books about real people who have followed their passions to achieve incredible things. There are tons of picture books out there you can use, but these are just three I have used. My favorite is The Junkyard Wonders by Patricia Polacco. This picture book is the true story of Patricia and some of her classmates in a class for special needs students. Others have dubbed the class The Junkyard Wonders. 
With the help of a forward-thinking teacher, these students take a classmate’s passion for rockets and space, and together they design, build, and launch a rocket. At the end of the book, Patricia writes an afterword letting you know all the characters are real people. She tells how the members of the group, whom everyone considered “junk,” followed their passion and went on to do incredible things. One became an aeronautical engineer for NASA, another the artistic director of a ballet company, and a third a textile designer. Patricia herself became a best-selling children’s book author. Next, we talk about the difference between a passion and a hobby. Passions are things that are all-consuming while hobbies are things you tend to do for fun. Finally, I share examples of everyday kids who used their passion to achieve great things and make a difference. Identifying Areas of Interest or Passion In this stage, students examine their interests and categorize them. Next, they narrow their list down to three interest areas. I do this because so many of my students think they are interested in and curious about everything! I use the website Wonderopolis® with students who have difficulty identifying areas of interest. Creating Burning Questions For Your Passion Project I think creating burning or non-Google-able questions may be one of the most challenging stages of the Passion Project process. So many of the kids want to develop a question that could quickly be answered by a Google™ search and then just begin their project. To help students create their burning questions, you need to give some examples. Some questions I have had students create are: - How can people earn a living by playing video games? - How have computers changed our lives? - How do you become a professional football player? - How can you build a robot? - How can you make sweet desserts that aren’t fattening?
If you notice, most of the questions begin with the word “how.” Questions that begin with “what” need to say things like What are some ways… What are some things we can do to… etc. Notice there is no one right answer. It’s fun to have a “Wonder Wall” as a bulletin board so you can display each student’s burning question. Selecting Resources to Research For Your Passion Project In this stage, during Genius Hour, students think of all the different sources they can use for information to help them answer their burning question. Of course, there are the usual resources such as books, magazines, newspaper articles, the Internet, etc. But I like for them to consider a variety of sources. Here’s a list of some great ones I have found. - Library Spot this free virtual library center has links to everything your kids might need - National Geographic Kids - School Tube - Project Gutenberg an excellent source for primary documents and out of print books – all free - Fact Monster - Your local PBS Learning website for teachers has tons of stuff for kids - Interviews with experts through ZOOM or email Selecting the Product to Create One of the most critical steps in the Passion Project process is selecting a product. The students will need to consider their burning question, their intended audience, time, and the materials they will need to create a product. For example, if a student’s burning question is, How did dinosaurs become extinct?, a brochure or a skit might not be the best product to allow him to present his research. It might be better if he made a PowerPoint presentation, a project board, or a magazine article. Students must also think about their audience when selecting a product to create. If they wish to write a magazine article about the extinction of dinosaurs, what grades would find it interesting? Would that be an appropriate product for kindergarten or first grade? Another factor when selecting your product is time and materials. 
A video will take a lot of time, and you might not have the equipment you need readily available, but you might have the resources to create a picture book. With so many factors to consider, students should think carefully about the product they wish to produce. I have had many occasions when students were overly ambitious in their choice of a product. Consequently, they became very frustrated and overwhelmed during the product creation stage. Submitting the Project Proposal I like to have my students complete a Passion Project proposal form. In their proposal, I ask for the following five things: - Name of the project - The burning question you want to answer - Resources you will use to answer your question - Your audience - Your product During Genius Hour, I meet with students individually to discuss their proposals. I sometimes offer suggestions, and we might modify the proposal. We both sign off on the proposal, and the real work begins. It may sound like I spend a lot of time upfront before the students start their research. However, my experience has been that unless you spend time planning and organizing BEFORE you get to the researching and creating stage, you are likely to encounter many problems later. For example, kids might discover their question is so obscure they can’t find information. Or, they might decide their plans were a little too ambitious, and they need to scale back a little. Researching and Creating The Product Sometimes when students have a big task such as a question to research, a product to create and a presentation, it seems overwhelming. If they only focus on the whole picture, they can become discouraged, put it off, and not even know where to begin. It is easier for them to focus on a small task rather than thinking of the entire process. I have found that the easiest way to overcome this is to have them break the process down into small steps they will complete each week during Genius Hour. 
I tell them it’s like putting together a jigsaw puzzle – they focus on one piece at a time. It helps to have each student create a checklist of all the small steps they need to take from researching to presenting. That way, when they complete each step, they can check it off. It gives them a sense of accomplishment. I also have students reflect at the end of each session and make a plan for what they want to accomplish during their next Genius Hour period. Practice, Practice, Practice, PRESENT! Your students have finished their research; their products are complete. Now it’s time to prepare to present! The key is PRACTICE. I encourage my students to rehearse their presentations in front of a mirror, friends, and family. The more they practice, the more comfortable and confident they will be during their presentations. Rehearsing is one part of the process I have them work on at home rather than limiting it to our Genius Hour time. Reflecting, the Often Forgotten Step Throughout the process, I have students reflect on what they are doing. Reflection gets them “thinking about their thinking.” At the end of the Passion Project process, students complete a rubric for both their presentation and their work during the Passion Project process. I also have students complete a form about the entire process where they reflect on what went well, what they learned, and what they would do differently. I don’t assign grades for Passion Projects. Throughout the process, I am giving feedback and also complete the same rubrics the students do. During Genius Hour, I hold a post-project conference with each student. The student shares his rubric and reflection with me and I share my copy of the rubric. If you haven’t tried Passion Projects in your classroom, I encourage you to give them a try. I have often had students tell me it was their favorite activity in school.
I have a product in my TpT store called Discover Your Passion, which includes PowerPoint Presentations, handouts, graphic organizers, lessons for each stage of the process, and rubrics. The product contains a version for traditional classroom instruction and Distance Learning.
Dr. Morehouse leads a lab that studies insects and spiders. He has a special interest in how they see the world, and how their vision influences the choices they make. He was drawn to the University of Cincinnati because the school has a strong community working on the biology of vision, philosophy of perception, and other fields related to sight. He is currently part of an effort to create a central place for this community through the Institute for Research In Sensing (IRIS). Planning is ongoing, but programming is slated to begin by Spring 2021. Dr. Morehouse is partly interested in the vision of spiders and insects because of the diversity of ways that their eyes function. Vertebrates all have eyes similar to a camera; they have a single lens in front of a cavity above a sheet of cells that receives light. Arthropods have a wider diversity of types of eyes. In insects, the most common is the compound eye, which has thousands of individual flat lenses that are all sensitive to light. The information from these lenses is pieced together into a mosaic image. They also have a lens that gives them separate information about which way is up and helps them make quick decisions important to flight. Spiders are even more complex; they have eight eyes. Six of these evolved from compound eyes, derived from a common ancestor with insects. However, these have lost their ability to create a detailed image, likely because spiders lived underground for a large part of their evolutionary history. These eyes have a very low resolution and cannot see color, much like our peripheral vision. Their other eyes collect information for a more complex color image. They form at a different stage in the spider’s development, and even connect to a different part of their brains. This pair of eyes has a single lens with a long cavity behind it, arranged like a Galilean telescope; a diverging element at the back magnifies anything they focus on.
This means that despite having eyes that are only ½ mm wide, they can see patterns as well as an elephant can and can see better than most other animals their size. One of the overarching questions Dr. Morehouse and his lab are pursuing is “why?” Spiders have three-color vision, like humans do, although the exact colors they see are different. Some can see even more than three colors. Their interactions, especially during mating, are very reliant on visual cues and color. However, it is unlikely that these displays evolved until after their vision did; after all, why show off if no one can see it? So why did they evolve such complicated vision in the first place? To help them hunt? To avoid something toxic? This research has taken them around the world. Ongoing research in the lab includes whether the male and female audacious jumping spiders see the world differently. Both sexes track each other’s movements closely during mating and develop in similar ways. One notable difference is that the females have an extra stage or two of development (instars) before maturity, which might allow their eyes to get bigger. There are some differences in the way genes linked to vision are expressed, but the physical effects of that expression are still being figured out. Dr. Morehouse also has students working on the evolution of illusions and how non-human animals discriminate faces. Such studies are possible with arthropods because the lab has technology that can track the movement of their eyes. Dr. Morehouse was inspired to study arthropods when he was three years old; he would go into his backyard and pick up bumblebees, get stung, and pick them back up. He tries to foster the curiosity of children through long-term mentoring programs. He participates in the STEM Girls programs at the Cincinnati Museum Center, afterschool programs, and summer camps.
Most recently, he ran a summer camp that allowed students to write their own superhero persona, including a disguise, personality, and power, all inspired by the natural world. At the end of the week, he showed up in disguise as a supervillain with his own powers, and challenged them to defeat him with their own creativity. Dr. Morehouse continues to be excited about his field. It has incredible implications for technology; understanding how animals process information could inspire biomedical advances, the engineering of computers that can process information as quickly as arthropods, and programming for the decision-making of autonomous cars. In his words, “the natural world has had millions of years to figure out the answers to questions that we are only beginning to ask.” But Dr. Morehouse’s main mission is more philosophical. “To be honest, those [questions] aren’t what motivate me. It’s cool, but it doesn’t drive me. I would feel like my life had been wasted if I didn’t spend it in the pursuit of curiosity. …I actually think that to be curious is an essential part of what it means to be human. If we forget …it as a basic human pursuit, we’re lost. We should encourage healthy curiosity. In part, what I’m doing is art: I want to spark the curiosity of others. Have I changed how people view their world? Is there more magic to their backyard? If I can just move people’s feet from where they were before, that’s success.”
The difference between a banjo and a ukulele may seem obvious at first glance. A ukulele is a small, guitar-like instrument, and a standard banjo is a larger instrument with strings stretched across a drum. The confusion begins with a hybrid instrument called the "banjo-ukulele" or "banjolele". Ukuleles originated in Hawaii during the late 1800s, and were adapted from the Portuguese "braguinha", a small, guitar-like instrument. Ukuleles are made in four sizes—from the traditional small soprano size, to increasingly larger concert, tenor, and baritone sizes—and produce sound with four nylon strings stretched over a hollow body. The banjo is based on traditional African counterparts, brought to the United States from the West Indies, and during the 1800s was developed into the version we know today. Banjos are made in many sizes and string configurations, including hybrids of the guitar, mandolin and ukulele. The most common are the 4-string tenor and 5-string bluegrass banjo. Banjos produce sound with steel strings stretched over a tunable drum. The banjo ukulele, or banjolele, uses elements from both instruments. Banjo ukuleles take the short-scale neck and four nylon strings from the ukulele and combine them with the tunable-drum banjo body. The instrument was popularized in the United States during the 1920s to 1940s. A major difference between banjos and ukuleles is their tuning. Banjos use numerous tunings based on musical style and player preference, but are often tuned to an "open chord tuning" of G, A or D. Standard ukulele tuning is G-C-E-A for sopranos, concerts and tenors, and D-G-B-E for baritones. Banjos, including the banjo ukulele, produce a bright tone that can be modified by tuning the banjo drum head. Ukuleles produce a mellow tone that depends on the wood and body chamber and cannot be altered. Matt McKay began his writing career in 1999, writing training programs and articles for a national corporation.
His work has appeared in various online publications and materials for private companies. McKay has experience in entrepreneurship, corporate training, human resources, technology and the music business.
At the Lawrence Berkeley National Laboratory, Janet Jansson stocks her fridge with baggies full of dirt. It comes from places as diverse as Antarctic permafrost and Kansas farmland. Her samples are the starting point for the Earth Microbiome Project, an epic effort to figure out how all the world’s microbes collectively support life. Every gram of soil contains tens of thousands of species—up to 100 terabytes of genetic data. Those critters sequester carbon, fertilize plants, decompose organic material, and do a lot of other work we barely understand. Problem is, the microbes are so interdependent that isolating the most industrious organisms is tricky. “They live together in communities,” Jansson says. “It's hard to break up those associations.” So instead the scientists are hunting DNA, isolating all the genes in soil and seawater, regardless of which organism they belong to. The plan is to build a global “gene atlas,” then to work out how nutrients and waste products migrate through the ecosystem. Eventually that understanding might allow us to engineer microbes to be ultraefficient producers of biofuel, or even take control of the carbon cycle. Three places where the Earth Microbiome Project is mapping how small things shape big ecosystems: By monitoring the balance of bacteria in the English Channel, project scientists can predict changes in the population of microscopic sea plants known as phytoplankton. Since these organisms occupy the bottom of the food chain, the researchers could forecast what types and quantities of fish to expect in the coming season. Bacterial populations on native, uncultivated plots in Kansas, Iowa, and Wisconsin are more similar to one another than to those on adjacent native and cultivated plots in those states. This shows that growing crops like corn seriously alters the microbial balance of the prairie. 
Since microbes are responsible for how carbon and nitrogen move through the ecosystem, a better understanding of them could help tell farmers which bugs might be used to boost crop production. Scientists at the project are learning how to monitor oil-eating bacteria near the wreckage of the Deepwater Horizon rig. They hope to detect seepage caused by the BP disaster and analyze its long-term environmental impact. The bacteria could provide more accurate data than even the best scientific instruments.
Fostering Listening Skills Research shows that only 25% of adults listen effectively. Learn the necessary techniques to improve your child’s listening skills, which in turn helps their overall language development. In this course, you will discover: - The definition of active listening - The role of parents in creating an environment that promotes active listening - Methods for establishing a good listening environment and promoting the development of strong listening skills Speaking and listening skills form the foundation of early literacy. Young children learn word meaning (vocabulary) and sentence structure by listening to adults. Through speaking, children put their knowledge of language and understanding of key concepts to use. Therefore, language development should be a major focus in early childhood.
Worldwide, around 300 million people are living with viral hepatitis without even knowing that they are infected. Without finding the undiagnosed and linking them to care, millions will continue to suffer, and lives will be lost. On World Hepatitis Day, 28 July, the World Health Organization calls on people from across the world to take action and raise awareness to find the “missing millions”. “Hepatitis” means inflammation of the liver. The liver is a vital organ that processes nutrients, filters the blood, and fights infections. When the liver is inflamed or damaged, its function can be affected. Heavy alcohol use, toxins, some medications, and certain medical conditions can cause Hepatitis. However, Hepatitis is most often caused by a virus. There are 5 main Hepatitis viruses, referred to as types A, B, C, D and E. These 5 types are of greatest concern because of the burden of illness and death they cause and the potential for outbreaks and epidemic spread. In particular, types B and C lead to chronic disease in hundreds of millions of people and, together, are the most common cause of liver cirrhosis and cancer. Hepatitis A and E are typically caused by ingestion of contaminated food or water. Acute infection may occur with no symptoms, or may include symptoms such as jaundice (yellowing of the skin and eyes), dark urine, extreme fatigue, nausea, vomiting and abdominal pain. Hepatitis A and E viruses are responsible for several outbreaks of sporadic viral Hepatitis in India, usually secondary to contamination of drinking water. Hepatitis A is the most common cause of acute viral Hepatitis in children. However, in recent times there has been an epidemiological shift in Hepatitis A infection in India, with increasing incidence of infection being noted in the adult and adolescent population compared with children.
Most people who get Hepatitis A and E infection feel sick for several weeks, but they usually recover completely and do not have lasting liver damage. In rare cases, Hepatitis A can cause liver failure and death; this is more common in people older than 50 and in people with other underlying liver diseases. Hepatitis A and E infections are treated with rest, adequate nutrition, and fluids. Some people will need medical care in a hospital. The best way to prevent Hepatitis A is through vaccination with the Hepatitis A vaccine. Two doses of Hepatitis A vaccine are recommended at a 6-month interval. Safe and effective vaccines to prevent Hepatitis E infection have been developed but are not widely available. Hepatitis B, C and D usually occur as a result of receipt of contaminated blood or blood products, through sexual contact, sharing needles, syringes, or other drug-injection equipment; or from mother to baby at birth. Hepatitis B is a viral infection that attacks the liver and can cause both acute and chronic disease. The World Health Organization estimates that in the year 2015, 257 million people were living with chronic hepatitis B infection. The average estimated carrier rate of Hepatitis B virus (HBV) in India is 4%, with a total pool of approximately 50 million hepatitis B infected patients (second only to China), which constitutes about 15 per cent of the entire pool of Hepatitis B in the world. India falls in the intermediate endemicity zone (prevalence of 2–7%, with an average of 4%), with a disease burden of about 50 million. Pockets of higher endemicity are found in tribal areas where the high burden is maintained through inter-caste marriages, tribal customs, illiteracy and poor exposure to health care resources. Every year, nearly 600,000 patients die from HBV infection in the Indian subcontinent. For some people, Hepatitis B is an acute, or short-term, illness but for others, it can become a long-term, chronic infection.
Risk for chronic infection is related to age at infection: approximately 90% of infected infants become chronically infected, compared with 2–6% of adults. Chronic Hepatitis B can lead to serious health issues, like liver failure or liver cancer. Hepatitis B can be prevented by vaccines that are safe, available and effective. Most individuals with chronic Hepatitis B do not have any symptoms, do not fall ill, and can remain symptom free for decades. When and if symptoms do appear, they are similar to the symptoms of acute infection, but can be a sign of advanced liver disease. About 1 in 4 people who become chronically infected during childhood and about 15% of those who become chronically infected after childhood will eventually die from serious liver conditions, such as cirrhosis (scarring of the liver) or liver cancer. Even as the liver becomes diseased, some people still do not have symptoms, although certain blood tests for liver function might begin to show some abnormalities. For acute infection with HBV, no medication is available; treatment is mainly supportive. There are several antiviral medications for people with chronic infection. People with chronic HBV infection require regular monitoring to prevent liver damage and/or liver cancer. According to World Health Organization, globally, an estimated 71 million people have chronic Hepatitis C virus infection. A significant number of those who are chronically infected will develop cirrhosis or liver cancer. WHO estimated that in 2016, approximately 399,000 people died from Hepatitis C, mostly from cirrhosis and hepatocellular carcinoma. Due to the absence of a chronic Hepatitis C virus surveillance system in India, there is a complete lack of knowledge about the actual number of people living with HCV-related liver diseases and the people who died of it. Global studies estimate that there are 8.7 million people living with chronic HCV in India. 
For some people, Hepatitis C is a short-term illness but for 70–85% of people who become infected with Hepatitis C, it becomes a long-term, chronic infection. Chronic Hepatitis C is a serious disease that can result in long-term health problems, even death. The majority of infected people might not even be aware of their infection because they are not clinically ill. Currently there is no effective vaccine against Hepatitis C; however, research in this area is ongoing. The best way to prevent Hepatitis C is by avoiding behavior that can spread the disease, especially injecting drugs. There isn’t a recommended treatment for acute Hepatitis C. People with acute Hepatitis C virus infection should be followed by a doctor and only considered for treatment if their infection remains and becomes a chronic infection. There are several medications available to treat chronic Hepatitis C. Hepatitis C treatments have gotten much better in recent years. Current treatments usually involve just 8–12 weeks of oral therapy (pills) and cure over 90% of people with few side effects. How would you know if you have Hepatitis? The only way to know if you have Hepatitis is to get tested. Blood tests can determine if a person has been infected and cleared the virus, is currently infected, or has never been infected. Who should get tested for Hepatitis B and why? - All pregnant women are routinely tested for Hepatitis B: If a woman has Hepatitis B, timely vaccination can help prevent the spread of the virus to her baby. - Household and sexual contacts of people with Hepatitis B are at risk of getting Hepatitis B: Those who have never had Hepatitis B can benefit from vaccination. - People with certain medical conditions should be tested, and get vaccinated if needed. This includes people with HIV infection, people who receive chemotherapy and people on hemodialysis.
- People who inject drugs are at increased risk for Hepatitis B: Testing can tell if someone is infected or could benefit from vaccination to prevent getting infected with the virus. - Men who have sex with men have higher rates of Hepatitis B: Testing can identify unknown infections or let a person know that they can benefit from vaccination. Who should get tested for Hepatitis C? The only way to know if you have Hepatitis C is to get tested. Early detection can save lives. - Anyone who has injected drugs, even just once or many years ago - Anyone with certain medical conditions, such as chronic liver disease and HIV or AIDS - Anyone who has received or donated blood/organs before 1992 - Anyone born between 1945 and 1965 - Anyone with abnormal liver tests or liver disease - Health and safety workers who have been exposed to blood on the job through a needle stick or injury with a sharp object - Anyone on hemodialysis - Anyone born to a mother with Hepatitis C Why is it important to get tested for Hepatitis C? - Millions of Americans have Hepatitis C, but most don’t know it. - About 8 in 10 people who get infected with Hepatitis C develop a chronic, or lifelong, infection. - People with Hepatitis C often have no symptoms. Most people can live with an infection for decades without feeling sick. - Hepatitis C is a leading cause of liver cancer and the leading cause of liver transplants. - New treatments are available for Hepatitis C that can get rid of the virus. A balanced and healthy lifestyle with controlled consumption of alcohol and tobacco is necessary to fight the disease that is an alarming public health concern in India. In addition, maintaining hygiene, avoiding roadside food and beverages, being careful in salons and tattoo parlors to avoid infections, and washing hands can help protect us from Hepatitis.
Worksheets and lesson ideas to challenge students aged 11 to 16 to think hard about bacteria (GCSE and Key Stage 3) You are composed of more bacterial cells than human cells. Your energy-producing organelles were once free-living bacteria. There are more bacteria in one gram of soil than people on earth. Yet we can grow less than 1% of bacterial species in the lab. Bacteria are amazing! The resources below will help your students appreciate this incredible group of small but very significant microbes. Why are bacteria so successful? GCSE and A Level worksheet on Bacteria. This provides students with information about bacteria. They answer some challenging questions to consider why this group of organisms has been so incredibly successful. (PDF) Bacterial cell division GCSE worksheet and activity for students to understand why bacterial growth is so dangerous. Students calculate how many bacteria are produced from one bacterium after 24 hours. They calculate the new mass of the patient and bacteria and consider why in reality this does not happen. This is a useful resource to help teach standard form and introduce the concept of limiting factors. (PDF)
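The doubling calculation in the cell-division worksheet can be sketched in a few lines of Python. The 20-minute division time and the per-cell mass of about 10⁻¹² g are illustrative assumptions (typical textbook figures for E. coli under ideal conditions), not values taken from the worksheet itself:

```python
# Sketch of the worksheet calculation: bacteria from one cell in 24 hours.
# Assumptions (not from the worksheet): a 20-minute division time and a
# cell mass of ~1e-12 g, both common textbook figures for E. coli.
MINUTES_PER_DIVISION = 20
HOURS = 24

divisions = HOURS * 60 // MINUTES_PER_DIVISION  # 72 doublings in 24 hours
population = 2 ** divisions                     # the population doubles each division

# Total mass of the colony under the assumed per-cell mass:
mass_g = population * 1e-12
print(f"{population:.2e} bacteria, roughly {mass_g / 1e6:,.0f} tonnes")
```

One cell becomes thousands of tonnes of bacteria on paper, which is exactly why the worksheet then asks students to consider limiting factors such as nutrients, space, and waste that prevent this in reality.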
Throughout this course, you’ve had the opportunity to learn about many concepts regarding guiding children’s behavior. Some of them include:
- Differences between guidance, discipline, punishment, and consequences
- Levels of Mistaken Behavior
- Rewards vs. Punishment
- Encouragement vs. Praise
- Behaviorist and Constructivist Theories
- The use of Timeouts
- Treating Children with Respect
- Building Positive Relationships
In a 2 – 3 page paper, written in APA format using proper grammar and spelling, address the following:
- Choose two (2) concepts from the course that you feel are the most important for an early childhood professional to understand and explain why. You may select from the list above or offer concepts not presented in the list.
- Explain how you can incorporate your two (2) chosen concepts into your work as an Early Childhood Professional. For each concept, describe a lesson or activity you could use with the children in your care. You can even create a game if you’d like. Be creative!
- Go back through this course and analyze the resources (videos, readings, and lectures). Choose two (2) which resonated with you the most and that you will share with colleagues and/or parents. For each resource chosen, explain why it resonated with you and how you will use it with your colleagues and/or parents of children in your care.
Ever wish you could do a quick "breath check" before an important meeting or a big date? Now researchers, reporting in ACS' journal Analytical Chemistry, have developed a sensor that detects tiny amounts of hydrogen sulfide gas, the compound responsible for bad breath, in human exhalations. According to the American Dental Association, half of all adults have suffered from bad breath, or halitosis, at some point in their lives. Although in most cases bad breath is simply an annoyance, it can sometimes be a symptom of more serious medical and dental problems. However, many people aren't aware that their breath is smelly unless somebody tells them, and doctors don't have a convenient, objective test for diagnosing halitosis. Existing hydrogen sulfide sensors require a power source or precise calibration, or they show low sensitivity or a slow response. Il-Doo Kim and coworkers wanted to develop a sensitive, portable detector for halitosis that doctors could use to quickly and inexpensively diagnose the condition. To develop their sensor, the team made use of lead(II) acetate - a chemical that turns brown when exposed to hydrogen sulfide gas. On its own, the chemical is not sensitive enough to detect trace amounts (2 ppm or less) of hydrogen sulfide in human breath. So the researchers anchored lead acetate to a 3D nanofiber web, providing numerous sites for lead acetate and hydrogen sulfide gas to react. By monitoring a color change from white to brown on the sensor surface, the researchers could detect as little as 400 ppb hydrogen sulfide with the naked eye in only 1 minute. In addition, the color-changing sensor detected traces of hydrogen sulfide added to breath samples from 10 healthy volunteers.
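A quick unit check clarifies the figures quoted above: the sensor's naked-eye detection limit (400 ppb) sits well below the trace levels of hydrogen sulfide found in halitosis breath (2 ppm or less). A minimal conversion sketch:

```python
# Comparing the sensor's detection limit with the halitosis threshold
# quoted in the article (1 ppm = 1000 ppb).

def ppb_to_ppm(ppb):
    """Convert parts-per-billion to parts-per-million."""
    return ppb / 1000

detection_limit_ppm = ppb_to_ppm(400)  # 0.4 ppm
halitosis_level_ppm = 2.0              # upper trace level in breath

print(detection_limit_ppm)                        # 0.4
print(halitosis_level_ppm / detection_limit_ppm)  # 5.0 -- the sensor
                                                  # detects levels 5x
                                                  # below the threshold
```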
Determine values for a and b that make each system of equations true (i.e., solve each system). Be sure to show your work or explain your thinking clearly. Hints: solve both equations for the same variable, set the two resulting expressions equal to each other (Equal Values Method), and multiply both sides by the common denominator of both fractions. Once you have the value for one variable, substitute it back into either equation to find the other.
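The system itself is not shown above, so here is the Equal Values Method worked on a hypothetical system with fractions, using exact rational arithmetic:

```python
# Equal Values Method on a hypothetical system (the original problem's
# equations are not given, so these two are made up for illustration):
#     b = 4 - a/2        (equation 1, already solved for b)
#     b = a/3 - 1        (equation 2, already solved for b)
from fractions import Fraction

def solve_equal_values():
    # Setting the two right-hand sides equal:  4 - a/2 = a/3 - 1.
    # Multiplying both sides by 6 (the common denominator of 2 and 3)
    # clears the fractions:  24 - 3a = 2a - 6, so 5a = 30.
    a = Fraction(30, 5)   # a = 6
    b = 4 - a / 2         # substitute back into equation 1
    return a, b

a, b = solve_equal_values()
assert 4 - a / 2 == a / 3 - 1  # both equations give the same b
print(a, b)                    # 6 1
```

Using `Fraction` keeps the arithmetic exact, which mirrors the "show your work" spirit of the problem better than floating-point would.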
Helpful Resources for First time and Seasoned investors What is mineral exploration? Mineral exploration is a sequential process of information gathering that assesses the mineral potential of a given area. It starts with an idea or geologic model that identifies lands worthy of further exploration. Suitable target areas may then be staked as mineral claims to secure the mineral rights. The next step is to carry out early exploration work to identify mineralization or geologic anomalies that may lead to a mineral discovery. As more geological knowledge of the mineral claim block is gathered, the various claims are either accepted or rejected for further work. Later, intensive drilling programs are undertaken on the most promising claims in order to provide statistically robust estimates of the extent and quality of the deposit. The intermediate product of exploration is the improved geological knowledge of a defined area. The final product of successful exploration is mineral deposits that are economically feasible to extract. How is mineral exploration conducted? The general stages that a mineral exploration project will follow are: - Planning– Exploration starts with the gathering and analysis of publicly available information on potential exploration areas. The purpose is to identify areas of potential exploration interest and to plan the following exploration stages. Public information includes government geological survey reports, maps, and company filed assessment reports on exploration projects from AANDC’s mining recorder’s offices. Information made public by other mineral exploration firms and from past and current mines in the areas of interest can also be used. - Recording of Mineral Claims– Once geologically favorable areas worthy of further exploration are identified, the explorer would secure the mineral rights. On Crown land this is achieved through the staking and recording of mineral claims with AANDC’s Mining Recorder’s Office. 
- Reconnaissance– The purpose of reconnaissance is to rapidly identify geological anomalies that indicate the presence of mineralization in the areas highlighted during the planning stage. These anomalies become the targets for further exploration. Reconnaissance activities can include: - Prospecting and geological mapping: the on-the-ground visual identification of favorable rock types, alteration and surface mineralization. - Rock sampling: when a rock sample from a showing is sent for a chemical analysis called an assay. - Geophysical surveys: a measure of the physical properties of a rock. This includes electromagnetic, gravitational, radiometric, and electrical conductivity surveys. Regional geophysical surveys can be conducted by aircraft, while detailed surveys are conducted by ground crews. - Geochemical surveys: soil, water and sediment samples from land, lakes and streams in the area of interest are examined. They can identify if indicator chemical elements or minerals are in concentrations significantly higher than normal. - Advanced Exploration– Once significant anomalies have been identified, exploration can move to a more intensive phase to determine if deposits of economic minerals are present. Advanced exploration activities can include: - Stripping and Trenching: an activity that can include using heavy equipment to remove the shallow overburden and then explosives to blast a trench in the rock to provide larger volumes of material for further sampling and assays. - Drilling: drilling produces rock cores that can be examined to determine the mineral concentration and depths at which mineralization occurs. A widely spaced pattern of drill holes provides the geologist with the information needed to estimate the size, geometry and grades of ore present in the deposit. Drilling is typically the most expensive stage of exploration and on average accounts for 50% of total exploration spending. 
- Sampling and Assaying– Sampling is the collection of a representative part of the mineral deposit. Assays are chemical tests that determine the metallic content of a sample of rock. Sampling and assaying can occur at different stages of exploration but will be concentrated at the advanced exploration stage. - Economic Evaluation– Once the size and quality of an ore deposit has been determined to a high degree of probability, an economic evaluation of developing a mine can be conducted. This evaluation, also known as a feasibility study, will estimate the capital and operating costs of a mine, the expected revenue from the ore concentrate and/or metals produced, the mine life and post closure rehabilitation costs. If the project is estimated to achieve the “hurdle rate of return” it will be economic to proceed. The hurdle rate is the rate of return required to justify the investment in a capital-intensive, high-risk investment.
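The hurdle-rate test described above can be sketched as a net present value (NPV) calculation: a project clears the hurdle if its NPV is non-negative when future cash flows are discounted at the hurdle rate. All figures below are hypothetical:

```python
# Minimal sketch of a feasibility-study hurdle-rate test.  The cash
# flows and the 15% hurdle rate are invented for illustration.

def npv(rate, cash_flows):
    """NPV of cash_flows, where cash_flows[0] is the year-0 amount
    (capital cost, negative) and later entries are annual net cash."""
    return sum(cf / (1 + rate) ** year
               for year, cf in enumerate(cash_flows))

# Hypothetical mine (figures in $M): 100 capital cost, then 30/year
# net revenue for 6 years, and a 20 post-closure rehabilitation cost.
flows = [-100, 30, 30, 30, 30, 30, 30, -20]
hurdle = 0.15  # assumed hurdle rate for a capital-intensive, high-risk project

project_npv = npv(hurdle, flows)
print(f"NPV at {hurdle:.0%} hurdle: {project_npv:.1f}M")
print("economic" if project_npv >= 0 else "not economic")
```

Note how discounting matters: undiscounted, this project nets $60M, but at a 15% hurdle rate the NPV shrinks to only a few million, so a slightly riskier cost profile could make it uneconomic.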
Did you know dehydration can influence your mental functioning, your heart rate and your ability to regulate body temperature and blood pressure? If you lose even 1% of your body weight in water, your physical performance is affected and you feel tired. If you lose 2-4%, your mental functioning is affected. In cases where more than 10% is lost, a medical emergency can result and (if not reversed) can lead to death. Infants, young children, people with certain chronic health problems and elderly adults are more susceptible to the effects of dehydration, which is why it’s important to practice safe measures to prevent dehydration in yourself and others. So how much water do you need? This depends on your age, percent of body fat, general health, diet, temperature of the air around you and your level of activity. You lose water through urine, sweat, feces and the air you exhale. The Institute of Medicine (IOM) suggests that the average healthy woman drink about 9 cups a day of liquids and the average man about 13 cups a day. How does dehydration occur? You can become dehydrated by not consuming enough fluid from foods and beverages. Other conditions that also can make dehydration more likely include: - Sweating during exercise that is not compensated by drinking extra fluids; exercise even in cold weather can cause sweating - Hot, humid weather - High altitude, which causes rapid breathing and increased urine output - Illnesses such as poorly controlled diabetes, and illnesses that cause vomiting, diarrhea or fever - Certain medications Each of these conditions alone can contribute to dehydration and a combination of them can cause it to arise more quickly. Can Other Types of Drinks Make You Dehydrated? Although caffeine does cause you to urinate more frequently, the effect is short-term and does not typically cause dehydration. 
Both caffeinated and non-caffeinated beverages can be used as sources of water. Alcoholic drinks also can make you urinate more frequently, but, like caffeine, this increase is short-term and usually does not cause dehydration if you drink in moderation. 8 Symptoms of Dehydration - A dry or sticky mouth, caused by too little saliva - Less urine than normal, or no urine for eight hours. Urine that is darker than usual may indicate dehydration; diet, medications, and vitamin supplements can also affect urine color. - Few or no tears - Sunken eyes - Dry, cool skin - Fast heart rate - Lethargy, irritability or fatigue - Listlessness or coma – this is a sign of severe dehydration How to Recover from Dehydration Drink at least 12 8-ounce glasses of fluid every day to overcome dehydration. Fluid may include water; orange juice; lemonade; apple, grape, and cranberry juice; clear fruit drinks; electrolyte replacement and sports drinks; and teas and coffee without caffeine. Be sure to follow up with your health care provider if you don't get better within 24 to 48 hours. Remember – warning signs you should seek medical attention include very dark urine, little urine output and/or dizziness, weakness, confusion and fainting.
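The body-weight percentages quoted earlier translate into surprisingly small volumes of water. A minimal sketch for a hypothetical 70 kg adult, treating 1 kg of lost water as roughly 1 litre:

```python
# Illustrative only: the 70 kg body weight is an assumption, and
# 1 kg of water is treated as approximately 1 litre.

def water_loss_litres(body_weight_kg, percent_lost):
    """Litres of water corresponding to losing `percent_lost`
    percent of body weight."""
    return body_weight_kg * percent_lost / 100

weight = 70  # kg, hypothetical adult
for pct, effect in [(1, "physical performance drops"),
                    (4, "mental functioning affected"),
                    (10, "medical emergency possible")]:
    print(f"{pct:>3}% ~ {water_loss_litres(weight, pct):.1f} L: {effect}")
```

Less than a litre of net loss is enough to degrade physical performance, which is why small, steady fluid intake matters.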
Multiple Sclerosis is an autoimmune disease that affects the central nervous system (brain, spinal cord, optic nerves). The central nervous system is surrounded and protected by a fatty tissue called myelin, which helps nerve fibers conduct electrical impulses. In people with MS, myelin is lost in many areas, leaving scar tissue (sclerosis). Because the myelin is gone or damaged in multiple sclerosis, the ability of the nerves to conduct electrical impulses to and from the brain is disrupted. This is what causes the symptoms of Multiple Sclerosis. Women are more likely to get MS than men, and it usually afflicts people between the ages of 20 and 50. There are currently about 400,000 people in the U.S. diagnosed with MS. MS can take any one of four disease courses, and depending on the patient, symptoms can be mild to severe. Relapsing-remitting MS is the form in which the patient's symptoms wax and wane, just as with CFS or FM. This is the most common form of MS diagnosed (85%). There will be episodes of worsening acute neurologic function followed by periods that are free of symptoms and disease progression. Patients with primary-progressive MS will notice that their symptoms start gradually and continually worsen. They will not have the remissions and flares seen in the first type. This form of MS is seen in only about 10% of patients. Patients with secondary-progressive MS experience an initial period of relapsing-remitting MS, followed by a steadily worsening disease course with or without occasional flare-ups, minor recoveries (remissions), or plateaus. The National MS Society reports: 50% of people with relapsing-remitting MS developed this form of the disease within 10 years of their initial diagnosis, before introduction of the “disease-modifying” drugs. Long-term data are not yet available to demonstrate if this is significantly delayed by treatment. 
In progressive-relapsing MS, patients will have a steadily worsening illness from the beginning, but they will also have acute relapses with or without recovery. The difference between this form and relapsing-remitting MS is that the patient will notice continual worsening of disease progression. This form is seen in only about 5% of MS cases. The symptoms of MS will vary depending on the patient and their individual case. The MS Society says, "One person may experience abnormal fatigue, while another might have severe vision problems. A person with MS could have loss of balance and muscle coordination making walking difficult; another person with MS could have slurred speech, tremors, stiffness, and bladder problems." While some symptoms will come and go over the course of the disease, others may be more lasting. - Walking difficulties - Bladder/bowel disturbances - Visual problems - Cognitive dysfunction - Abnormal sensations in body – numbness, tingling - Changes in sexual function/libido - Mood swings - Speech problems - Swallowing problems - Hearing impairment There is currently no single test that will diagnose Multiple Sclerosis. It takes several tests for a diagnosis to be made. Physicians will take into account the patient’s complete medical history and symptoms. Balance, reflexes, coordination and areas of numbness are tested. MRIs are used to detect lesions on the brain that are common in MS. Spinal fluid is checked for signs of MS, and evoked potential tests are done to determine how well a patient’s nervous system responds to certain stimulation. Click here to see the complete list of FDA approved medications and information about each one that is used to treat MS. 
- Avonex – interferon beta-1a - Betaseron – interferon beta-1b - Copaxone – glatiramer acetate - Rebif – interferon beta-1a - Tysabri – natalizumab - Physical Therapy - Occupational Therapy - Speech Therapy - Cognitive Rehabilitation - Vocational Rehabilitation - Alternative Therapy A small proportion of people with primary or secondary progressive MS eventually develop symptoms and disabilities that require skilled care and special equipment. Some of those individuals with very severe MS will experience a somewhat shortened lifespan. This is almost always due to a complication, such as overwhelming infection, skin breakdown, or malnutrition. The cause of MS is unknown, but most people with MS have a normal life expectancy. The vast majority of MS patients are mildly affected, but in the worst cases, MS can render a person unable to write, speak, or walk.
What is leptospirosis? Is leptospirosis contagious? What is the contagious period for leptospirosis? In general, leptospirosis is considered weakly contagious. This is because, like other animals, humans can shed leptospirosis in the urine during and after illness. Consequently, individuals exposed to the urine of humans who are infected may become infected. For example, although the bacteria are not airborne and have a low risk of being in saliva, individuals handling wet bedding or blood-soaked material from an infected person can increase their chances of getting the infection. There are a few reports of transmission between sexual partners, but the incidence of this type of spread seems very low. Unfortunately, pregnant mothers who get leptospirosis can infect their fetus. The contagious period for leptospirosis depends on how long viable organisms are shed in the urine. Most individuals will shed organisms in the urine for a few weeks, but there are reports that humans can continue to shed the organisms in urine for as long as 11 months. Some experts suggest that there is risk for up to 12 months after getting the initial infection. What are leptospirosis symptoms and signs? The symptoms and signs of leptospirosis are variable and are similar to those seen in many other diseases (dengue fever, hantavirus, brucellosis, malaria, and others). Symptoms can arise about two days to four weeks after exposure to the bacteria. Although some people have no symptoms, others may exhibit - high fever, - muscle aches, - sore throat, - abdominal pain, - pain in the joints or muscles, - rash, and - reddish eyes. How can leptospirosis disease be prevented? Scientists have developed vaccines that seem to provide some protection against leptospirosis. Vaccines for humans are only available in some countries, such as Cuba and France. However, these vaccines may only protect against certain forms of Leptospira bacteria, and they may not provide long-term immunity. 
There’s no vaccine available for humans in the United States, although vaccines are available for dogs, cattle, and some other animals. If you work with animals or animal products, you can lower your risk of infection by wearing protective gear that includes: - waterproof shoes You should also follow proper sanitation and rat-control measures to help prevent the spread of Leptospira bacteria. Rodents are one of the primary carriers of infection. Avoid stagnant water and water from farm runoffs, and minimize animal contamination of food or food waste.
The Cause & Effect Model "The Little Engine That Could" Teacher Name(s): Adriane R. Crawford Date: July 24, 2005 Grade level(s): All Content Areas: Literature & History Description/Abstract: Plot makes us aware of events not merely as elements in a temporal series but also as an intricate pattern of cause and effect... Surely our sense of the meaning of experience is closely tied to our understanding of what causes what, and it is the business of plot to clarify causal relationships. William Kenney Timeline: 1 Week Goals/Content and Cognitive: One of the primary goals of the models discussed is to have students become active participants in the learning process rather than passive recipients of information. Links to Curriculum Standards: State standards are embedded in the TEKS, published in 1998. The TEKS also serve as curriculum guidelines for elementary and middle school programs and for high school courses in the arts, career education, ELA, FACS, health and PE, languages other than English, math, science, social studies, and technology. A typical TEKS document contains lists of knowledge and skills that students should master at each grade level or in each high school course. "Why do you think the little blue engine was able to pull all those dolls and toys and presents over the mountain?" and "Why was such a little engine able to do something that bigger engines had said they could not do?" Have constructive discussions. During the discussion, have the students jot down their ideas on paper. The teacher is the facilitator. The students will be involved in ongoing assessment by having class discussions. Students will assess themselves through their ability to answer and discuss the topic. Most if not all students have read "The Little Engine That Could". The difficulties the students might have are during the class discussions (agree or disagree). 
The curriculum connection I can make in this lesson with other topics that I teach is that this model can be used in all subjects. Learning Activities or Tasks: The model will help the students with writing creatively, writing critical essays, producing themes and making predictions. The teacher will put a chart on the board. The numbers indicate sequence. My students will work in the classroom in groups. But once they feel comfortable they will then work individually. I will modify according to the students' IEPs. There will not be a lot of technology used, if any. Materials and Resources: teacher-selected and/or student-researched resources, paper, pens or pencils Lesson Evaluation and Teacher Reflection Was this lesson worth doing? In what ways was this lesson effective? What evidence do you have for your conclusion? How would you change this lesson for teaching it again? What did you observe your students doing and learning? Did your students find the lesson meaningful and worth completing?
Solar energy is one of the most cost-effective solutions to energy problems in places where there is no mains electricity. Solar energy is a completely renewable source, since we can always rely on the sun showing up the very next day as a power source. Solar energy is also clean and non-polluting. However, there are some disadvantages and limitations to current solar energy technology. For example, it requires large areas of land for collecting solar energy. Its intensity is not constant; it changes from early morning to evening, does not remain the same during the whole day, varies with the seasons of the year, and depends upon sky conditions. Now there is an innovative solar device called the Betaray that can harness solar energy from the sun, the moon, or even the gray sky of a cloudy day. Like a giant crystal ball, the device is capable of concentrating sunlight up to 10,000 times – making it significantly more efficient than traditional photovoltaic designs. This perfectly spherical glass ball is the work of a German architect named André Broessel, who began working on it three years ago with the aim of making solar power more efficient and less expensive.
Why is it important to REDUCE and REUSE? “Reduce, Reuse, Recycle” is a familiar phrase but what does it really mean? REDUCE is the most important action we can take – to reduce the amount of trash you throw away. Be a smart shopper. Choose items with less packaging. Avoid having to throw out food waste. Buy in bulk when possible rather than individually wrapped items. Then REUSE is next to extend the useful life of something by reusing it. Sell or donate to let someone else use it. Repair or repurpose the item to get the most out of its useful life. The most effective way to reduce waste is to not create it in the first place. Making a new product requires a lot of materials and energy – raw materials must be extracted from the earth, and the product must be fabricated then transported to wherever it will be sold. As a result, reduction and reuse are the most effective ways you can save natural resources, protect the environment and save money. (Source: https://www.epa.gov/recycle/reducing-and-reusing-basics#main-content ) What do we RECYCLE in Bermuda? When the trash truck comes, your household garbage is taken to the Tynes Bay Waste-to-Energy Facility where it is burned in a controlled system to generate electricity. Therefore, it is important to separate out any non-burnable items from your household waste. You should rinse out all TIN, ALUMINIUM and GLASS items and put them together into a blue recycling bag for bi-weekly curbside collection. The tin and aluminium are sorted at the processing plant located at the Government Quarry in Hamilton Parish. These items are exported and sold in the USA which earns some revenue for the Bermuda Government. The glass is crushed and used locally as a substrate for drainage in construction projects, instead of the need for imported gravel. Recycling these items means that the resource materials can be used again at a fraction of the cost of using raw materials. 
In Bermuda, you can also recycle: - household batteries (drop in the collection tubes at pharmacies and grocery stores) - vehicle batteries and window air-conditioners (at Tynes Bay Public Drop-Off) - computers and office photocopy machines (through special arrangement with [email protected] or by calling (441) 278-0560). Remember: We do not recycle plastics in Bermuda for several reasons. (1) We are not trying to divert plastics from a landfill. Other countries separate out plastics in an effort to recycle them because it is important to keep them out of landfills. Plastics do not biodegrade and can leach toxins into the soil and ground water if put into landfills. (2) Plastics recycling is flawed; it’s a complicated process that does not include many of the seven types of plastics and may involve a large carbon footprint of transportation, costs, and heavy use of other natural resources. (3) Often what the consumer sends to be recycled ends up in a foreign country being burned for energy in an uncontrolled incinerator. (4) Lastly, and most importantly, plastic has good calorific value and can be burned to generate energy (electricity) if done in a regulated, emissions-controlled incinerator, which is what we have in Bermuda. In that sense, we “reuse” plastics right here at home to support Bermuda’s energy needs. Melting or burning plastics at low temperatures (200 – 350 degrees Celsius) in a burn barrel or bonfire is very dangerous to your health because the plastic may smolder and emit toxic fumes that contain dioxins, which are known carcinogens. The Tynes Bay Facility safely burns plastics at a very high temperature (800 degrees Celsius) in a closed-loop system. Our Tynes Bay Waste-to-Energy Facility is a renewable energy source for Bermuda’s energy needs. What is the most littered item in Bermuda? Cigarette butts are the most littered item in Bermuda. Many people mistakenly think that cigarette filters are made from cotton and will disintegrate quickly. 
That is not true. The filters are made from a special type of plastic called cellulose acetate and can take about 10 years to degrade. Meanwhile, if they have been carelessly tossed onto the road, it is likely they will be swept down a storm drain the next time it rains. From there they will float out to sea, and a bird or turtle will mistake them for food. Plastic like this is killing our wild animals and marine life. The second most littered item in Bermuda is glass bottles – beer bottles. Archeologists studying ancient civilizations have found glass that is 1,000 years old. So potentially all those beer bottles lying in the woods and along our roadsides could be there for a very long time! Not just an ugly blight but a human health hazard too, they trap rainwater and are a breeding ground for mosquitoes. It only takes a tablespoon of water and 10 days to hatch a batch of mosquito larvae. Other critters like tree frogs or Bermuda skinks might crawl into the glass bottles and die when they become trapped inside. What are the best ways to help the environment? One easy way to take action to protect the environment is to join a KBB Clean Up. Everyone from age 2 to 92 is welcome to participate. Students can earn Community Service credit for school. It’s a great way to get the whole family involved in an outdoor activity. This is a rewarding volunteer activity because you can see the results of your labour right away! Of course, the best way is to never litter in the first place! But you can help to pick up litter when you see it, and join any of the monthly KBB Clean Ups that are scheduled https://www.kbb.bm/clean-ups/ Other ways to help protect the environment: - Use a reusable shopping bag for groceries and retail - Use a reusable water bottle and lunch container - Say “No thanks” to unnecessary packaging, particularly single-use plastics - Report any bad areas of littering or illegal dumping to KBB
The Centers for Disease Control and Prevention (CDC) announced, “September 28 is World Rabies Day, a global health observance started in 2007 to raise awareness about the burden of rabies and bring together partners to enhance prevention and control efforts worldwide. World Rabies Day is observed in many countries, including the United States.” Rabies is an infectious disease that affects the central nervous system. The disease is caused by transmission of the rabies virus through the bite or scratch of an infected animal. Saliva from an infected animal is infectious and can spread disease if it finds its way into the eyes, nose, mouth, or an opening in the skin of a susceptible mammal. While rabies is 100% preventable, more than 59,000 people die from the disease around the world each year. Most of these deaths occur in Africa and Asia, and nearly half of the victims are children under the age of 15. If you have been exposed to the rabies virus and are not swiftly treated with a series of post-exposure vaccines, the disease is nearly 100% fatal. The cost of post-exposure prophylaxis typically exceeds $3,000. Despite its rarity in the United States, occasional cases of rabies in humans do occur, with one to three cases reported annually. In January 2018, a 6-year-old boy from Florida died after being scratched by an infected bat. In August 2018, a woman from Kent County, DE died after being exposed from an unknown source. Since January 2018, the Division of Public Health of DE has performed rabies tests on 83 animals, nine of which were confirmed rabid. Positive tests were identified in three foxes, three raccoons, one cat, one dog, and one horse. On the east coast, raccoons are the most common carrier of the virus. It should be noted that all land mammals are susceptible to the virus; it simply occurs more commonly in some species. A surprising number of indoor cat owners are unaware or argue that a cat that never goes outside should not require a rabies vaccine. 
Cats commonly slip out an open door, inadvertently fall from a screened window, or a rabid animal could even find its way into the home. You cannot guarantee an indoor cat is completely free from risk of exposure. Rabies vaccination is required by law for all dogs, indoor cats, and ferrets. The vaccine must be given by a licensed veterinarian. Implementation of this law has dramatically reduced the amount of human exposure seen in the US. In the event an unvaccinated dog or cat is bitten by a known rabid animal, euthanasia is usually recommended. In the event an unvaccinated dog or cat is bitten by an animal with unknown history, a 6-9 month quarantine is imposed on the pet (at owner expense). It simply is not worth the heartache, worry, and aggravation of not vaccinating your pet. At Longwood Veterinary Center, we require all our patients, except those suffering from cancer or immunosuppressive disease, to remain current on their rabies vaccine. Several major health organizations, including the World Health Organization (WHO), World Organization for Animal Health (OIE), and the Food and Agriculture Organization of the United Nations (FAO), have pledged to eliminate human deaths from dog-transmitted rabies by 2030. You too can take steps to help prevent and control rabies by vaccinating your pets and learning how to keep yourself safe from the animals that commonly spread rabies in the U.S., including raccoons, bats, skunks, and foxes. Please visit cdc.gov/worldrabiesday/ for more information and call to schedule an appointment to update your pet’s vaccine! Written By: Tara Corridori, LVT Edited By: Corrina Snook Parsons VMD
You want a paper aeroplane to do more than just fall slowly and gradually through the air. You want it to move forward. You make a paper aeroplane move forward by throwing it. Usually, the harder you throw a paper aeroplane the further it will fly. The forward movement of your aeroplane is called thrust. Thrust helps to give an aeroplane lift. Here's how. Hold one end of a sheet of paper and move it quickly through the air. The flat sheet pushes against the air in its way. The air pushes upward on the free part of the moving paper. A paper aeroplane must move through the air so that it can stay up for longer flights. Here is how you can see and feel what happens when air pushes. Place a sheet of paper flat against the palm of your upturned hand. Turn your hand over and push down quickly. You can feel the air pressing against the paper. The paper stays in place against your hand. You can see the paper's edges pushed back by the air. Now hold a piece of crumpled paper in your palm. Again turn your hand over and push down. The smaller surface of the paper hits less air. You feel less of a push against your hand. Unless you push down very quickly, the paper will fall to the ground before your hand reaches it. Air is a real substance even though you can't see it. A flat sheet of paper falling downwards pushes against the air in its path. The air pushes back against the paper and slows its fall. A crumpled piece of paper has a smaller surface pushing against the air. The air doesn't push back as strongly as with the flat sheet, and the ball of paper falls faster. The spread-out wings of a paper aeroplane keep it from falling quickly down to the floor. We say the wings give a plane lift. The secret lies in the shape of the wing. The front edge of an aeroplane's wing is more rounded and thicker than the rear edge. 
Which paper falls to the ground first? What seems to keep the smooth sheet from falling quickly? We live with air all around us. Our planet Earth is surrounded by a layer of air called the atmosphere. The atmosphere extends hundreds of miles above the surface of the planet. Take two sheets of the same-sized paper. Crumple one of the papers into a ball. Hold the crumpled paper and the flat paper high above your head. Drop them both at the same time. The force of gravity drags them both downward. Have you ever flown a paper aeroplane? Sometimes it twists and loops through the air and then comes to rest, gentle as a feather. Other times a paper aeroplane climbs straight up, flips over, and dives headfirst into the ground. What keeps a paper aeroplane in the air? How can you make a paper aeroplane take a long flight? How can you make it loop or turn? Does flying a paper aeroplane on a windy day help it to stay aloft? What can you learn about real aeroplanes by making and flying paper aeroplanes? Why not experiment to discover some of the answers? The Paper Aeroplane Book: Why do paper aeroplanes soar and plummet, loop and glide? Why do they fly at all? This book will show you how to make them and explains why they do the things they do. Making paper aeroplanes is fun and, using the author's step-by-step instructions and doing the simple experiments he suggests, you will also discover what makes a real aeroplane fly. As you make and fly paper planes of different designs, you will learn about lift, thrust, drag and gravity; you will see how wing size and shape and fuselage weight and balance affect the flight of an aircraft; how ailerons, elevators and the rudder work to make a plane dive or climb, loop or glide, roll or spin. Clear diagrams and delightful drawings show each step for making the aeroplanes and illustrate the experiments suggested by the author.
The front edges of the wings of a real aeroplane are usually tilted slightly upwards. As with a kite, the air pushes against the tilted underside of the wings, giving the plane lift. The greater the angle of the tilt, the more wing surface the air pushes against. This results in a greater amount of lift. But if the angle of the tilt is too great, the air pushing against the larger wing surface presented slows down the forward movement of the aeroplane. This is called drag. Drag works to slow an aeroplane down, just as thrust works to make it move forward. At the same time, lift works to make a plane go up, while gravity tries to make it fall down. These four forces work on paper aeroplanes just as they work on real aeroplanes. There is still another way most real aeroplanes and some paper aeroplanes use their wings to increase lift. The top side as well as the bottom side of the wing can help to give the plane lift.
In the morning we read a book called I Am Not a Number. It shared the story of a First Nations girl in Canada who was taken from her family and sent to a residential school. It was shocking to hear about how cruel people can be. Although it is hard to hear these stories, it is important to learn about the past so that, as we move forward, we can learn from the mistakes that have been made. After lunch, we introduced the criteria for a powerful response and showed the students an example of the process and a final written piece. Students then had some time to get started on their response to the book they have been working on. In the afternoon, we had our science evaluation and reviewed order of operations in math. - Sign up for the potluck (see yesterday's post) - Math party Thursday - Finish the rough draft/plan of your powerful response by using the template to organize your thinking for each section. - Post several things that you learned from listening to the book I Am Not a Number. - What was the most shocking? - Post your thoughts on the idea of reconciliation (apologizing / trying to restore) with First Nations People - Why is it important? - Who is it important to? Why? - Reply to another person's post
“We are not protecting nature from people but for people,” said Yolanda Kakabadse, president of the International Union for Conservation of Nature (IUCN), at the opening session of the September 2003 World Parks Congress. Almost 3,000 delegates from 157 nations came to this meeting in Durban, South Africa, where they assessed the global status of and critical issues facing protected areas (PAs). The conference theme - Benefits beyond Boundaries - provided the framework within which these delegates developed an agenda for managing protected areas during the next decade and beyond. Achim Steiner, director general of IUCN, celebrated “... one of the world’s intergenerational gifts,” that 10% of the world’s land mass was in protected areas. Unfortunately, these areas were often “paper parks,” underfunded or fragmented. Park management had to be strengthened (capacity building), and protected areas had to be expanded so that the worldwide loss of biodiversity (extinction of plant and animal species) could be stopped. Although 25 biodiversity hotspots were already designated for protection, participants called for gap analysis to assess where biodiversity needed to be protected. Freshwater, coastal, and mountain ecosystems were in trouble; and, since only 1% of the oceans were in marine reserves, this was a major concern. Biodiversity and natural ecosystems were declining due to alien species, pollution, poverty, unsustainable consumption, armed conflict and climate change. “This could precipitate a crisis for all mankind,” wrote Kofi Annan, secretary-general of the United Nations, in a message delivered to the Congress. Why? “Because all humanity depends on biodiversity for health.” Park managers, representatives of indigenous groups and NGO environmentalists spoke of the significance of nature as a refuge, a sacred place, a beautiful place that inspired humans and sustained cultures.
But they acknowledged that most of the world’s citizens lived in cities, without easy access to wild green places. Therefore, in order to persuade government leaders and the public to conserve biodiversity, they must communicate its great economic value: - Natural areas supply natural resources (such as wood, minerals, and fuels) that are the raw materials for economic development. - Oceans sustain fish; and the grasslands serve as grazing areas for cattle and wild animals that are consumed as food. - Prairies and untamed environments provide livelihoods for indigenous and rural communities. - Wild plants, animals and microbes are the source of foods, medicines and other commodities of world trade. - Biodiversity offers genetic resources, the basis of biotechnology. That is why some speakers referred to tropical forests, with their wealth of biodiversity, as “future pharmacies.” Conference participants emphasized that, in addition to individual species, ecosystems (communities of plants, animals and microbes that interact) were vital. Ecosystems regulate the chemistry of the atmosphere, soil, and water, so they are the life support system of the planet. Unfortunately, most economists, politicians, and the public thought of ecosystem services (also called nature’s services) as “free services,” so natural areas were undervalued. However, ecosystems give humans many economic benefits: - Wetlands serve as nurseries for ocean fish, so they sustain commercial fisheries. - Wetlands also prevent flooding and clean up pollution. - Forests serve as watersheds, protecting the water supply. - Tropical forests, but also grasslands and oceans, act as carbon sinks, thus slowing global warming. - Coastal wilderness prevents disasters by shielding inland areas from storm surges and flooding. - National parks and other wilderness areas are favored destinations for tourism.
Delegates concluded that the monetary benefits of natural ecosystems would be a powerful incentive for politicians and the public to conserve biodiversity and protect natural areas. A variety of actions to conserve nature’s wealth were announced at the World Parks Congress: - BP and Shell Oil Company agreed not to drill for oil in World Heritage Sites. - Madagascar and Brazil established huge natural reserves. - Limpopo Transboundary Park - with corridors that connect national parks in South Africa, Zimbabwe, and Mozambique - would allow elephants to migrate and promote peace between these nations. - The World Bank, United Nations Environment Program and United Nations Development Program would partner on meeting Millennium Goals for sustainable development. - Former South African President Nelson Mandela called for much greater involvement of youth in programs for protected areas. - Conference documents, including “Recommendations,” “The Durban Accord,” “The Durban Action Plan,” “The Durban Consensus on African Protected Areas for the New Millennium,” the “Message to the Convention on Biological Diversity,” and “Emerging Issues,” would inform decision makers about crucial issues and offer strategies to conserve the world’s natural heritage. “Nature is most important for sustainable development. More important than infrastructure or finance. What we are doing here is not marginal, but for development and peace in the world.” Dr. Klaus Toepfer, executive director, United Nations Environment Program. World Parks Congress, September 7-17, 2003, Durban, South Africa. By Isabel Abrams
What is FAST FOOD? Fast food is characterized by being a fast, convenient and inexpensive alternative to homemade meals, as well as being high in saturated fat, sugar, salt and calories. Although junk food can satisfy your hunger, it provides very little nutrition. Consistently consuming nutrient-poor foods can shrink your appetite for more nutritious foods, increasing your risk of nutritional deficiencies. What you eat and drink every day affects your physical and mental health. Good nutrition and regular exercise will help you maintain a healthy weight while reducing the risk of chronic diseases such as heart disease. However, regular consumption of fast food and junk food can harm your health and negatively affect your body. Effects of fast food on our health: 1: Fast foods like hamburgers, fries, and shakes are often high in fat and high in calories. Most of them make you feel full yet leave you lacking energy. 2: Eating fast food can cause skin problems like acne. It is not chocolate or frying ingredients but empty carbs like simple sugars, white flour, and French fries that trigger breakouts. 3: The high calories in fast food are accompanied by low nutrients. If you consume too much, your body will start to lack the nutrients needed for normal functioning. Your body is temporarily full of empty food without nutrients, so even if you eat a lot of calories, you will not stay full for long. 4: Regular intake of soft drinks can lead to poor oral health. Drinking a lot of soda increases the amount of acid in the mouth, which eventually leads to tooth erosion and decay. 5: Dietary fiber (usually found in vegetables, fruits, whole grains, nuts, and seeds) plays a leading role in the digestive system. Fiber helps your digestive tract work properly by removing waste from the body. It can help lower cholesterol and keep blood sugar levels normal.
Unfortunately, most fast foods are low in dietary fiber, and this can lead to constipation.
How do you test vocabulary in a fun way? If you’re looking for a creative and fun vocabulary assessment, incorporate a drawing activity that students are sure to love. After giving students a list of vocabulary words to study, call out words from the target list one at a time and ask students to draw a picture that represents the word. How do I create a vocabulary quiz in Google Classroom? Go to the presentation section. Select the options to show the progress bar and to shuffle the question order. Go to the quiz section. Enable the option to make this a quiz. What is a vocabulary test? Definition of vocabulary test : a test for knowledge (as of meaning or use) of a selected list of words that is often used as part of an intelligence test. Are vocabulary tests effective? Using correlations with other vocabulary tests, Anderson and Freebody (1983) determined that the yes-no task is a reliable and valid measure of vocabulary assessment. They found that it provides a better measure of student knowledge than a multiple-choice task, particularly for younger students. How can I practice vocabulary at home? 9 Tips to Build Your Child’s Vocabulary at Home - Have Conversations. Talk with your child every day. - Involve Your Child. - Use Big Words. - Go for a Walk. - Talk About Books. - Tell Stories. - Sorting and Grouping Objects. - Keep Track of New Words. How do I take MCQ test in Google Classroom? Create a question - Go to classroom.google.com and click Sign In. Sign in with your Google Account. - Click the class. Classwork. - At the top, click Create. Question. - Enter the question and any instructions. - For short-answer questions, students can edit their answer and reply to each other. How do I make a free online quiz for students? How our quiz maker works - Log into SurveyMonkey and choose one of our free online quiz templates, or select “Start from scratch.” - Add quiz questions to your survey. 
- Select “Score this question (enable quiz mode)” for each quiz question. - Assign your answer options points with the plus or minus signs. What is the vocabulary size test? The Vocabulary Size Test is designed to measure both first language and second language learners’ written receptive vocabulary size in English. The test measures knowledge of written word form, the form-meaning connection, and to a smaller degree concept knowledge. Are online vocabulary tests accurate? No matter the results, it’s not necessarily an accurate representation of your intelligence or education level. But it may be an indication of how much you read, as one commenter pointed out. What is the vocabulary knowledge scale? The Vocabulary Knowledge Scale (VKS) is a 5-point self-report scale developed by Wesche & Paribakht (1996) that allows students to indicate how well they know items of vocabulary. It measures small gains in knowledge in order to compare the effectiveness of different vocabulary instructional techniques.
A “gene drive” occurs when a specific gene is spread at an enhanced rate through an animal or plant population. It’s something that happens in nature. Across the world, we’ve already seen examples of natural gene drives affecting gene frequencies in insects and mice, and the successful use of natural gene drives in changing mosquito populations to reduce disease transmission. But new technologies such as CRISPR are enhancing opportunities for scientists to use gene drives in an applied manner. This week, the Australian Academy of Science released a paper to trigger discussion around the scientific, practical, regulatory and ethical issues in anticipation of gene drives becoming a tool for controlling pests and diseases in Australia. What is a gene drive? Offspring normally carry two copies of a gene, one being inherited from each parent. However, this pattern of inheritance is upset by a gene drive which increases the likelihood that both copies come from only one of the parents. If we think of genes as the “selfish” elements within a chromosome, gene drives help the most selfish element to win, and eventually to take over in a population. Gene drives are present in nature. Transposons, also known as “jumping genes”, represent an example of a natural gene drive. A transposon copies itself to different parts of the genome and becomes transmitted to offspring at a rate higher than the usual 50%. However, while some types of natural gene drives have been used in suppressing disease transmission, potential applications have greatly expanded with the advent of synthetic gene drives. This creates new issues. The power of CRISPR It has recently become possible to create or synthesise gene drives via genetic engineering, using a gene editing tool known as CRISPR-Cas9. This tool is used to link up selfish genes such as homing endonucleases (which cut DNA at specific locations) with genes targeted to be spread through a population. 
When present on one chromosome, the resulting genetic construct is copied to the other chromosome through a process of being cut by the endonuclease and then repaired. This process can potentially be used to drive almost any gene through a population. It is most likely to be effective in organisms that reproduce quickly and have a short generation time. Although there are technical challenges to creating stable gene drives, scientific academies around the world including Australia are discussing potential applications of this technology. Safeguards that need to be put in place are also being considered. Genes that spread by themselves present some unique opportunities as well as challenges. Why should Australia consider gene drives? Gene drives could be especially useful in Australia for controlling pests and diseases. We are currently engaged in a losing battle with many invasive organisms. Damage to the environment and reduced agricultural output are caused by incursions of pest mammals, insects, weeds, birds and fish. Gene drives provide a way of potentially suppressing populations of these species and reducing damage. For example, the introduction of genes that alter sex ratios to become male biased can limit reproduction. Drives also could be used to introduce genes that suppress the ability of vectors such as mosquitoes, ticks and midges to transmit diseases to humans and livestock, and to introduce genes that make weeds and pests susceptible to pesticides. Use of a gene drive to eliminate a weed or pest could reduce the need for chemical spraying and potentially increase farmers’ crop yields. Safety, transparency and regulation Because gene drives are designed to spread by themselves, stringent safeguards are needed for developing, testing and using the technology. Transparency is critical, both about research on and regulation of synthetic gene drives. 
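The biased inheritance at the heart of a gene drive (heterozygotes transmitting the drive allele at more than the usual 50%) can be sketched with a toy deterministic model. This is an illustrative simplification, not a published model: it assumes random mating, no fitness cost, and a single "conversion efficiency" parameter standing in for the cut-and-repair homing step.

```python
def drive_frequency(p0, efficiency, generations):
    """Toy model of drive-allele frequency over generations.

    A heterozygote transmits the drive with probability
    t = 0.5 + 0.5 * efficiency, because homing converts the wild-type
    copy with the given efficiency; efficiency = 0 recovers the
    Mendelian 50% and the frequency never changes.
    """
    t = 0.5 + 0.5 * efficiency
    p = p0
    history = [p]
    for _ in range(generations):
        # Random mating: drive homozygotes (p^2) always pass the allele,
        # heterozygotes (2p(1-p)) pass it with probability t.
        p = p * p + 2 * p * (1 - p) * t
        history.append(p)
    return history

mendelian = drive_frequency(0.05, efficiency=0.0, generations=10)
driven = drive_frequency(0.05, efficiency=0.9, generations=10)
print(f"after 10 generations: Mendelian {mendelian[-1]:.2f}, drive {driven[-1]:.2f}")
```

With zero efficiency the allele stays at its starting 5%; with 90% conversion it sweeps to near fixation within about ten generations, which is why containment and regulation loom so large in this discussion.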
Although Australia has a well established regulatory framework for gene technology, gene drives present different issues to traditional genetically modified organisms (GMOs). That’s because they aim to spread new traits throughout a population, and hence may spread beyond geographical boundaries. Thus international harmonisation of regulation will be critical. Given the range of contexts in which gene drives might be deployed, coordination will be required across a number of Australian regulatory agencies. This includes those charged with oversight of environmental, human and animal health, quarantine, and food-related issues. Gene drives may cause public concern, particularly with regard to potential unintentional ecological and environmental effects. As has been learned from debates over GMOs and particularly about the limitations of labelling GM-free products, simply educating the public will not be sufficient. Underlying values are more important than information. It is critical to ensure that the public is engaged on an ongoing basis about potential applications, risks and benefits of gene drive technologies in alignment with best practices for science engagement, and that funding be provided for research into these issues. This is especially important for communities likely to be affected, such as the agricultural sector or those living close to areas where intentional release may occur. Funding agencies can assist by providing resources to test physical containment facilities and develop molecular containment procedures such as a daisy chain system that limits the spread of a drive system. Modelling and experiments can be used to assess the broader ecological consequences of suppressing pest populations, and research is needed to identify the risks of drives losing effectiveness due to evolution of the target species. All of these issues need to be explored on a case-by-case basis before any decisions are made about release of drives into the environment. 
The wider implications of gene drives also must be carefully assessed. For instance, a gene drive targeting pest fruit flies may be a problem for countries such as Japan, which have highly specific regulations about fruit imports. Trade relationships with countries with limits on GMOs such as many parts of the EU and Japan could be negatively affected by use of gene drives in agriculture. Domestic economic effects might include problems in obtaining organic certification for crops due to contact with organisms containing synthetic gene drives. Early engagement with various domestic and external stakeholders about these issues will be essential.
Learning to read is a skill everybody needs, as it sets you up for life, and it is one that most people start to learn at a very young age. It isn’t a natural process, and it takes some effort to learn the relationships between sounds and words and how to use them. Phonics is a system that teaches children how to read, write and spell in English, using different sounds to distinguish words from each other. Using phonemes (sounds) associated with particular graphemes (letters), you learn to decode words and to read, write and spell, and it can even help with speech. For example, to spell CAT you would first sound it out, which becomes /k, æ, t/. This makes it look like a very complex learning tool, but phonics does make it easier (as I found out when I had to use it, never having been taught it myself). Despite the English alphabet only having 26 letters, phonics uses 44 unique sounds (phonemes) to teach how different letters work within different words. Once you master the sounds, you should be able to use them to decode words and read them. There are many ways to learn and teach phonics to help with reading, and one way to keep it fun and interesting is to incorporate it into a game. Got It! Learning have created some card games, designed by an experienced specialist teacher in line with the National Curriculum, that cover phases 2-3 of Letters and Sounds. They include picture prompt cards, multi-sensory and dyslexia-friendly cards, teaching simple and high-frequency words and word endings. The Got It! Learning cards come in five different sets to cover various stages of phonics, from one-syllable CVC words (consonant, vowel, consonant) such as CAT and DOG to more complex words such as PURE and JOINED. - Set 1: Includes CVC, one-syllable words like: dad, get, his, not, but. Covers all 26 initial alphabet sounds. Focus sounds: medial short vowels – a, e, i, o, u. Includes double final consonants: ll, ss, zz, ff - Set 2: Includes words like: that, back, ship, much, long.
Focus sounds: th, ck, sh, ch, ng - Set 3: Includes words like: fail, feels, sigh, coats, too. Focus sounds: ai, ee, igh, oa, oo. Includes suffixes: s, es - Set 4: Includes words like: hood, hard, short, turn, town. Focus sounds: oo, ar, or, ur, ow. Includes suffixes: s, es, ed, ing - Set 5: Includes words like: joined, clear, pairs, pure, under. Focus sounds: oi, ear, air, ure, er. Includes suffixes: s, ed, ing Each pack of cards contains: - 5x Instruction cards - 5x Picture prompt cards - 8x Got it! cards - 40x Word cards with 5 different sounds We were sent Set 4 to try out, which focuses on words using the phonics sounds oo, ar, or, ur, ow. The cards feature 5 different games that take around 5 to 10 minutes each to play (each game can be played as a READING version or a READING and SPELLING version). The games can be played by 2 to 4 players, aged 5-95! (they are not only good for children starting their reading journey but are also good for dyslexic children and adults). - Word Switch – This game is a sound-matching game, and the winner is the first to get rid of all their cards by matching sounds to the card currently in play. - Word Match – This game also uses sound matching, and the winner is the first player to cover all of their cards. - Word Pairs – This game also uses sound matching, and the idea is to collect as many pairs as you can. - Word Race – This game requires putting your phonics skills to the test as fast as you can, with the winner being the first one to discard all of their cards by reading the word on each correctly. - Word Sets – This game sees the winner as the player that correctly collects three word sets using the blue letter sounds on the cards. The Got It! Learning cards are a fun way to reinforce phonics and put skills to the test for emerging readers, especially as there are different ways to play them. The five games (and the two ways to play each game) are quite easy to set up and play.
All the sounds are printed in blue on the cards (in each corner and within the word in the middle of each card), making it very clear what sound makes up each word. Whilst the games are fast and fun, there is also an added element with the Got It! cards to change the way a game is played, and possibly the winner of a game! The Got It! cards make things that much more interesting and enjoyable, and they help level the playing field between players of different abilities, as the best reader will not necessarily always win. We found that these cards are easy to use, the right size for small hands, and the playing side of the card is uncluttered and not too busy, so young players can easily see what they are doing. As they are just a pack of cards, they can be played at home and also make an ideal travel game, as they take up very little space (just make sure that you have pen and paper handy if you are playing the spelling versions). We can see that they would be ideal in a school setting as well. With home learning more prominent these days, the Got It! Learning cards are an excellent tool to continue with and support a child’s education in a way that everyone can manage; you do not need to be a qualified teacher to help your children with their reading and spelling. If you want to buy the cards as individual sets rather than the full set of five but are not sure which set is best for your child’s level, Got It! Learning have created an assessment sheet to help you identify the most appropriate set for your needs. A fun and easy way to support children with their reading. They take very little time to learn, and children can have fun whilst still being educated. Learning through play is always a winner. RRP: £9.95 per individual set pack or £42.50 for the full set of 1-5. For more information or to buy, visit gotitlearning.co.uk.
Treblinka was an extermination camp, built and operated by Nazi Germany in occupied Poland during World War II. It was in a forest north-east of Warsaw, 4 km (2.5 mi) south of the village of Treblinka in what is now the Masovian Voivodeship. The camp operated between 23 July 1942 and 19 October 1943 as part of Operation Reinhard, the deadliest phase of the Final Solution. During this time, it is estimated that between 700,000 and 900,000 Jews were murdered in its gas chambers, along with 2,000 Romani people. More Jews were murdered at Treblinka than at any other Nazi extermination camp apart from Auschwitz-Birkenau. Managed by the German SS with assistance from Trawniki guards – recruited from among Soviet POWs to serve with the Germans – the camp consisted of two separate units. Treblinka I was a forced-labour camp (Arbeitslager) whose prisoners worked in the gravel pit or irrigation area and in the forest, where they cut wood to fuel the cremation pits. Between 1941 and 1944, more than half of its 20,000 inmates were murdered via shootings, hunger, disease and mistreatment. The second camp, Treblinka II, was an extermination camp (Vernichtungslager), referred to euphemistically as the SS-Sonderkommando Treblinka by the Nazis. A small number of Jewish men who were not murdered immediately upon arrival became members of its Sonderkommando whose jobs included being forced to bury the victims' bodies in mass graves. These bodies were exhumed in 1943 and cremated on large open-air pyres along with the bodies of new victims. Gassing operations at Treblinka II ended in October 1943 following a revolt by the prisoners in early August. Several Trawniki guards were killed and 200 prisoners escaped from the camp; almost a hundred survived the subsequent pursuit. The camp was dismantled in late 1943. A farmhouse for a watchman was built on the site and the ground ploughed over in an attempt to hide the evidence of genocide. 
In the postwar Polish People's Republic, the government bought most of the land where the camp had stood, and built a large stone memorial there between 1959 and 1962. In 1964, Treblinka was declared a national monument of Jewish martyrdom in a ceremony at the site of the former gas chambers. In the same year, the first German trials were held regarding the crimes committed at Treblinka by former SS members. After the end of communism in Poland in 1989, the number of visitors coming to Treblinka from abroad increased. An exhibition centre at the camp opened in 2006. It was later expanded and made into a branch of the Siedlce Regional Museum.
The central feature of any democratic political system is the ability of citizens to elect their representatives (or, similarly, the capacity for self-representation). There are, however, several different means to this end, each of which produces different results. In Canada, for instance, each province, as well as the federal government, utilizes the first-past-the-post Single Member Plurality (SMP) electoral system, in which the winning candidate is required only to obtain a plurality of votes, as opposed to a majority of them. In contrast, Proportional Representation (PR) electoral systems apportion each party’s share of the legislature according to its share of the popular vote. Criticisms of the Canadian electoral system typically focus on the weaknesses of SMP compared to PR, which some feel is more democratic. While there are clear advantages to PR, it can create representational issues overlooked in contemporary debate. In principle, PR would change the role of Members of Parliament (MPs) from being representatives of their geographic constituencies, which would further diminish their ability to distinguish themselves from their party caucus. Read last week’s post, Achieving Greater Democratic Representation, which discusses this issue in detail. The SMP electoral system works by dividing a country into ridings, of which there are 308 in Canada. Each riding returns a single representative, often associated with a political party, who represents that constituency in Parliament. The problem is that the composition of the House of Commons rarely reflects each party’s share of the popular vote. For instance, the current government, formed by Stephen Harper and the Conservative Party, has a majority in the House of Commons, whereas the party received roughly 40 per cent of the popular vote. In order to correct such “artificial majorities,” as they are called, some observers propose that Canada implement a PR-based electoral system.
In this case, the electorate would vote for a political party, rather than an individual, to represent them in Parliament. For instance, if a party received 30 per cent of the popular vote in a general election, it would receive 30 per cent of the seats in the House of Commons. In most countries that utilize PR, the party chooses who fills those seats; however, some countries allow voters to rank their preferences. A PR-based system could conceivably fix the “artificial majority” issue, yet it changes the nature of the MP from being a representative of a community to being solely the representative of a political party. Although in our current system MPs still owe allegiance to the party, PR systems do not guarantee representation for smaller communities. In Canada, for instance, each constituency has a representative in Parliament. MPs have specific geographic communities that they represent and, subsequently, risk losing the support of the community if they do not represent it effectively. Switching to a PR system would diminish this relationship to geographic communities and, instead, compel political parties to focus their attention on communities with the largest share of the population. In conclusion, SMP and PR electoral systems each have their advantages and disadvantages. Despite there being many arguments in favour of both systems, it is important to scrutinize them objectively. Moving forward, it is important to delineate the objectives of the Canadian political system and measure whether they are achieved efficiently using SMP or PR. Randy Kaye is a 2013-2014 Atlantic Institute for Market Studies’ Student Fellow. The views expressed are the opinion of the author and not necessarily those of the Institute.
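The proportional apportionment described in this piece can be made concrete with the largest-remainder (Hare quota) method, one of several algorithms that real PR systems use (others include D'Hondt and Sainte-Laguë, often combined with thresholds and regional lists). The vote shares below are invented for the example.

```python
def largest_remainder(votes, seats):
    """Allocate seats in proportion to votes (Hare quota, largest remainder).

    votes: dict mapping party -> vote count. Returns party -> seats.
    Integer arithmetic keeps the floors and remainders exact.
    """
    total = sum(votes.values())
    # Floor of each party's exact proportional share of the seats.
    alloc = {p: v * seats // total for p, v in votes.items()}
    remainders = {p: v * seats % total for p, v in votes.items()}
    # Hand the leftover seats to the parties with the largest remainders.
    leftover = seats - sum(alloc.values())
    for p in sorted(remainders, key=remainders.get, reverse=True)[:leftover]:
        alloc[p] += 1
    return alloc

# Vote percentages for three hypothetical parties, 308 seats as in Canada:
print(largest_remainder({"A": 40, "B": 35, "C": 25}, seats=308))
# {'A': 123, 'B': 108, 'C': 77}
```

A party with 40 per cent of the vote ends up with roughly 40 per cent of the 308 seats (123), rather than the outright majority that SMP can deliver on the same vote share.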
In this activity students will explore the rules of Mendelian Genetics using a single trait, height, in pea plants. A monohybrid cross using a simple Punnett Square is performed. Students can randomize their parental generation using the reset button so they can explore all facets of dominant and recessive alleles as well as understand the differences between phenotype and genotype.

Before the Activity
Students should have had an introduction to terminology and discussion about why members of the same species have variation. A high-level discussion about heredity should precede this lesson.

During the Activity
During this activity students will determine the probability of alleles appearing in the F1 generation based on the parental generation genotype.
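The monohybrid cross above can also be sketched in code. This is a minimal illustration assuming the conventional notation T (tall, dominant) and t (short, recessive) for the height alleles:

```python
# A monohybrid-cross Punnett square: count the genotypes produced when
# each parent contributes one allele at random.
from collections import Counter
from itertools import product

def punnett(parent1, parent2):
    """Cross two genotypes (two-letter strings) and count offspring genotypes."""
    # Enumerate the four equally likely allele combinations. Plain sorted()
    # puts the uppercase (dominant) allele first, e.g. "tT" -> "Tt".
    offspring = ["".join(sorted(a + b)) for a, b in product(parent1, parent2)]
    return Counter(offspring)

counts = punnett("Tt", "Tt")        # cross two heterozygous parents
print(counts)                       # genotype ratio 1 TT : 2 Tt : 1 tt
tall = sum(n for g, n in counts.items() if "T" in g)
print(f"phenotype: {tall}/4 tall")  # 3/4 tall, 1/4 short
```

The Tt × Tt cross makes the phenotype/genotype distinction concrete: three of the four squares look tall, but only one is homozygous dominant.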
- We should not forget that the oxygen we breathe now is in great part the product of the photosynthetic reduction of CO2 millions of years ago.
- Mankind emits 1,022 tons of CO2 into the atmosphere every second, in other words, 32,000 million tons each year. 23,270 million tons of oxygen are bound up in these 32,000 million tons of CO2.
- If we bury these 32 Gt of CO2, 23 Gt of oxygen will disappear along with the carbon; in other words, a part of our oxygen will be buried each year.
- CCS (Carbon Capture and Storage): the operational costs of separating CO2 from the rest of the flue gases and then burying it underground are very high. This workload does not produce any revenue, and there is no way to guarantee the stability of the storage underground.
- DAC (Direct Capture of CO2 from the atmosphere): this requires a huge amount of energy, which itself implies significant CO2 emissions; the energy used will emit 70 kg of CO2 per each ton of CO2 filtered from the atmosphere.
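The link between the 32 Gt of CO2 and the roughly 23 Gt of oxygen in the bullets above is plain molar-mass arithmetic; a quick sketch of the check:

```python
# How much oxygen is bound up in a given mass of CO2, from molar masses
# (C = 12 g/mol, O = 16 g/mol).
M_C, M_O = 12.0, 16.0
M_CO2 = M_C + 2 * M_O               # 44 g/mol
oxygen_fraction = 2 * M_O / M_CO2   # 32/44, about 0.727 of CO2's mass is oxygen

co2_gt = 32.0                       # annual emissions, gigatonnes
print(f"{co2_gt * oxygen_fraction:.1f} Gt of oxygen")  # about 23.3 Gt
```

So burying 32 Gt of CO2 does indeed lock away about 23 Gt of oxygen with it, matching the figure quoted above.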
According to Newton's Laws of Motion, an unbalanced force is one that causes a change in the motion of the object to which the force is applied. An object at rest or an object in steady motion continues at rest or in unchanged motion unless it is subjected to an unbalanced force. In that case, the object accelerates in the direction of the force according to the equation: force equals mass times acceleration. An unbalanced force continues to accelerate an object until a new counterforce builds up and a new balance of forces is established. The accelerated object then maintains a steady velocity, and the previously unbalanced force is balanced by the new force. TL;DR (Too Long; Didn't Read) An unbalanced force is a force that changes the position, speed or direction of the object to which it is applied. The unbalanced force accelerates the object with the acceleration directly proportional to the size of the force and inversely proportional to the mass of the object. How Unbalanced Forces Work In a steady state situation, all forces are balanced with all objects either at rest or moving in a given direction with a fixed speed. If one force starts increasing or a new force is introduced, the situation can change, depending on the strength of the increasing or the new force. If the increasing force or new force is weak, a new balance of forces is established and nothing changes. If the increasing or new force becomes too strong for the existing balance of forces, objects will accelerate, move and change their position or speed. The situation will keep changing until a new balance of forces is achieved. For example, a car rolling in neutral on a straight, flat highway is subject to several balanced and unbalanced forces. The weight of the car pushing down is exactly balanced by the force of the pavement pushing up. The car therefore does not accelerate up or down. 
The friction of the tires rolling on the pavement and the resistance of the air are two unbalanced forces acting to decelerate the car. The inertia of the car keeps the car rolling, but the two unbalanced forces slow it down to a stop. When the car stops, all forces are in balance again and there is no new acceleration unless the driver starts the car and drives away, adding a new unbalanced force that overcomes the previous two forces. Common Unbalanced Forces Common forces that are often unbalanced include the force of gravity and applied forces. When these forces are unbalanced, objects accelerate, change their position and find new configurations for which all forces are again balanced. The weight of an object is the force exerted by gravity on that object. If an apple is hanging in a tree, the downward force of gravity is balanced by the upward force of the apple's stem attached to a branch. Once the apple is ripe, the stem becomes detached. At that moment the upward force becomes zero, and there is an unbalanced force of gravity downward. The apple falls. When it hits the ground, the Earth provides a new upward force equal to the force of gravity, and the situation is again balanced. Applied forces are important because they are used to move objects in accordance with specific purposes. For example, to move a dining room table to the other side of the room against a wall, one or more people apply a force by pushing it. Before the new force is applied, everything is in balance. At first the people may not push very hard, and the table doesn't move. Then people push on the table and their feet push on the floor with the force of friction. Similarly the table pushes back with an equal force due to the friction of its legs on the floor. Eventually the people push hard enough to create an unbalanced force to overcome the friction of the table, and the table accelerates to slide across the floor. 
When the people have pushed it against the wall, there is a new balance of forces and a new, steady-state situation. In all these cases, unbalanced forces cause a re-arrangement of objects to a new, balanced situation.
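The table example can be put into numbers with Newton's second law, F_net = m × a. The mass and force values below are hypothetical illustration, and kinetic friction is simplified to equal the static maximum:

```python
# Net (unbalanced) force and the resulting acceleration for the pushed table.
# Numbers are hypothetical; friction is simplified to a single threshold.

def acceleration(applied, max_friction, mass):
    """Acceleration (m/s^2) once the applied force exceeds friction."""
    if applied <= max_friction:
        return 0.0                  # friction balances the push: forces balanced
    net = applied - max_friction    # the unbalanced force
    return net / mass               # a = F_net / m

# A 40 kg table with 120 N of friction between its legs and the floor:
print(acceleration(100.0, 120.0, 40.0))  # 0.0 -> push too weak, table stays put
print(acceleration(200.0, 120.0, 40.0))  # 2.0 m/s^2 -> table slides
```

This mirrors the narrative above: a weak push is absorbed into a new balance of forces, while a push beyond the friction threshold leaves an unbalanced remainder that accelerates the table.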
May 17, 2018, will mark the 64th anniversary of the Brown v. Board of Education decision, the landmark civil rights case that desegregated public schools. Arguably, it's the most important legal decision ever handed down for black Americans. Without it, black students might still be relegated to dilapidated schools under the farce of the separate but equal doctrine formulated in Plessy v. Ferguson. We owe a tremendous debt of gratitude to the late Supreme Court Justice Thurgood Marshall and the other lawyers and staff at the NAACP LDF for winning the Brown case. Despite Brown's import, the country took many years to desegregate public schools. Whites were determined to undercut Brown in southern cities like Little Rock, Arkansas and northern cities like Boston, Massachusetts. They fought forced busing vehemently to keep black children out of their neighborhoods and to keep their children out of black schools. I was in the 3rd grade when my school district finally integrated in 1969, which was fifteen years after Brown was decided. In the sixty years since the Brown decision, many public schools have re-segregated due to white flight, poverty and housing discrimination. Black and Latino students are more separated today in some cities than before Brown, according to a report recently released by the Civil Rights Project at UCLA, entitled Brown at 60: Great Progress, a Long Retreat and an Uncertain Future. The report further stated: The consensus of nearly 60 years of social science research on the harms of school segregation is clear: separate remains extremely unequal. Racially and socioeconomically isolated schools are strongly related to an array of factors that limit educational opportunities and outcomes. These factors include less experienced and less qualified teachers, high levels of teacher turnover, less successful peer groups, and inadequate facilities and learning materials. This quote demonstrates exactly the import of Brown. 
It shows that black students struggle in segregated schools, which is expected. However, data show that a disproportionate percentage of black students also struggle in integrated schools. Solving this problem should be on our collective minds as we celebrate Brown and astutely continue the fight for its full implementation. To solve the problem of black academic underachievement the country must confront structural racism. For example, in my book, Education Injustice, I demonstrate unequivocally that structural racism is the number one reason public schools disproportionately suspend, expel, discipline, spank and place black males in special education. It's also the main reason black students still lag behind white and Asian students in the achievement gap. Structural racism is not the only cause of black academic underachievement. Family disintegration, joblessness, and street culture contribute too. But no challenges are insurmountable and black students must not quit on themselves. Black parents must do their part and push their kids to excel academically despite structural racism, poverty or any other challenge. We must insist that our children never squander the academic opportunities that Brown protects. That would be a slap in the face to Justice Marshall, his team and martyrs like Dr. Martin Luther King, Jr., Medgar Evers, James Chaney, Andrew Goodman, and Michael Schwerner whose deaths symbolize the spirit of Brown. As we celebrate Brown's 64th birthday, all races must recommit to the principle that Brown famously enunciated and became known for: segregation is inherently evil. Brown teaches us that diversity and integration strengthen our nation, not weaken it. Given the challenges our nation faces, both foreign and domestic, all races must be willing, ready and able to tackle these challenges. Leaving any race behind is counterproductive to the noble ideas that Brown envisioned. 
The Supreme Court unanimously birthed that vision on May 17, 1954, and we must protect it.
History (from Greek ἱστορία - historia, meaning "inquiry, knowledge acquired by investigation") is an umbrella term that relates to past events as well as the discovery, collection, organization, and presentation of information about these events. The term includes cosmic, geologic, and organic history, but is often generically implied to mean human history. Scholars who write about history are called historians. History can also refer to the academic discipline which uses a narrative to examine and analyse a sequence of past events, and objectively determine the patterns of cause and effect that shape them. Historians sometimes debate the nature of history and its usefulness by discussing the study of the discipline as an end in itself and as a way of providing "perspective" on the problems of the present. Stories common to a particular culture, but not supported by external sources (such as the tales surrounding King Arthur) are usually classified as cultural heritage or legends, because they do not support the "disinterested investigation" required of the discipline of history. Events occurring prior to written record are considered prehistory. Herodotus, a 5th-century BC Greek historian, is considered to be the "father of history", and, along with his contemporary Thucydides, helped form the foundations for the modern study of human history. Their influence has helped spawn variant interpretations of the nature of history which have evolved over the centuries and continue to change today. The modern study of history is wide-ranging, and includes the study of specific regions and the study of certain topical or thematic elements of historical investigation. 
Often history is taught as part of primary and secondary education, and the academic study of history is a major discipline in university studies. A derivation from *weid- "know" or "see" is attested as "the reconstructed etymon wid-tor ["one who knows"] (compare to English wit), a suffixed zero-grade form of the PIE root *weid- 'see'", and so is related to Greek eidénai, to know. Ancient Greek ἱστορία (historía) means "inquiry" or "knowledge from inquiry", while the related ἵστωρ (hístōr) means "judge". It was in that sense that Aristotle used the word in his Περὶ Τὰ Ζῷα Ἱστορίαι (Perì Tà Zôa Historíai, "Inquiries about Animals"). The ancestor word ἵστωρ is attested early on in the Homeric Hymns, Heraclitus, the Athenian ephebes' oath, and in Boiotic inscriptions (in a legal sense, either "judge" or "witness", or similar). The word entered the English language in 1390 with the meaning of "relation of incidents, story". In Middle English, the meaning was "story" in general. The restriction to the meaning "record of past events" arises in the late 15th century. It was still in the Greek sense that Francis Bacon used the term in the late 16th century, when he wrote about "Natural History". For him, historia was "the knowledge of objects determined by space and time", that sort of knowledge provided by memory (while science was provided by reason, and poetry was provided by fantasy). In an expression of the linguistic synthetic vs. analytic/isolating dichotomy, English like Chinese (史 vs. 诌) now designates separate words for human history and storytelling in general. In modern German, French, and most Germanic and Romance languages, which are solidly synthetic and highly inflected, the same word is still used to mean both "history" and "story". The adjective historical is attested from 1661, and historic from 1669. Historian in the sense of a "researcher of history" is attested from 1531. 
In all European languages, the substantive "history" is still used to mean both "what happened with men", and "the scholarly study of what happened", the latter sense sometimes distinguished with a capital letter, "History", or the word historiography. Historians write in the context of their own time, and with due regard to the current dominant ideas of how to interpret the past, and sometimes write to provide lessons for their own society. In the words of Benedetto Croce, "All history is contemporary history". History is facilitated by the formation of a 'true discourse of the past' through the production of narrative and analysis of past events relating to the human race. The modern discipline of history is dedicated to the institutional production of this discourse. All events that are remembered and preserved in some authentic form constitute the historical record. The task of historical discourse is to identify the sources which can most usefully contribute to the production of accurate accounts of the past. Therefore, the constitution of the historian's archive is a result of circumscribing a more general archive by invalidating the usage of certain texts and documents (by falsifying their claims to represent the 'true past'). The study of history has sometimes been classified as part of the humanities and at other times as part of the social sciences. It can also be seen as a bridge between those two broad areas, incorporating methodologies from both. Some individual historians strongly support one or the other classification. In the 20th century, French historian Fernand Braudel revolutionized the study of history, by using such outside disciplines as economics, anthropology, and geography in the study of global history. Traditionally, historians have recorded events of the past, either in writing or by passing on an oral tradition, and have attempted to answer historical questions through the study of written documents and oral accounts. 
From the beginning, historians have also used such sources as monuments, inscriptions, and pictures. In general, the sources of historical knowledge can be separated into three categories: what is written, what is said, and what is physically preserved, and historians often consult all three. But writing is the marker that separates history from what comes before. Archaeology is a discipline that is especially helpful in dealing with buried sites and objects, which, once unearthed, contribute to the study of history. But archaeology rarely stands alone. It uses narrative sources to complement its discoveries. However, archaeology is constituted by a range of methodologies and approaches which are independent from history; that is to say, archaeology does not "fill the gaps" within textual sources. Indeed, Historical Archaeology is a specific branch of archaeology, often contrasting its conclusions against those of contemporary textual sources. For example, Mark Leone, the excavator and interpreter of historical Annapolis, Maryland, USA has sought to understand the contradiction between textual documents and the material record, demonstrating the possession of slaves and the inequalities of wealth apparent via the study of the total historical environment, despite the ideology of "liberty" inherent in written documents at this time. There are a variety of ways in which history can be organized, including chronologically, culturally, territorially, and thematically. These divisions are not mutually exclusive, and significant overlaps are often present, as in "The International Women's Movement in an Age of Transition, 1830–1975." It is possible for historians to concern themselves with both the very specific and the very general, although the modern trend has been toward specialization. The area called Big History resists this specialization, and searches for universal patterns or trends. 
History has often been studied with some practical or theoretical aim, but also may be studied out of simple intellectual curiosity.

History and prehistory
The history of the world is the memory of the past experience of Homo sapiens sapiens around the world, as that experience has been preserved, largely in written records. By "prehistory", historians mean the recovery of knowledge of the past in an area where no written records exist, or where the writing of a culture is not understood. By studying painting, drawings, carvings, and other artifacts, some information can be recovered even in the absence of a written record. Since the 20th century, the study of prehistory is considered essential to avoid history's implicit exclusion of certain civilizations, such as those of Sub-Saharan Africa and pre-Columbian America. Historians in the West have been criticized for focusing disproportionately on the Western world. In 1961, British historian E. H. Carr wrote: The line of demarcation between prehistoric and historical times is crossed when people cease to live only in the present, and become consciously interested both in their past and in their future. History begins with the handing down of tradition; and tradition means the carrying of the habits and lessons of the past into the future. Records of the past begin to be kept for the benefit of future generations. This definition includes within the scope of history the strong interests of peoples, such as Australian Aboriginals and New Zealand Māori in the past, and the oral records maintained and transmitted to succeeding generations, even before their contact with European civilization. Historiography has a number of related meanings. Firstly, it can refer to how history has been produced: the story of the development of methodology and practices (for example, the move from short-term biographical narrative towards long-term thematic analysis). 
Secondly, it can refer to what has been produced: a specific body of historical writing (for example, "medieval historiography during the 1960s" means "Works of medieval history written during the 1960s"). Thirdly, it may refer to why history is produced: the Philosophy of history. As a meta-level analysis of descriptions of the past, this third conception can relate to the first two in that the analysis usually focuses on the narratives, interpretations, worldview, use of evidence, or method of presentation of other historians. Professional historians also debate the question of whether history can be taught as a single coherent narrative or a series of competing narratives.

Philosophy of history
Philosophy of history is a branch of philosophy concerning the eventual significance, if any, of human history. Furthermore, it speculates as to a possible teleological end to its development—that is, it asks if there is a design, purpose, directive principle, or finality in the processes of human history. Philosophy of history should not be confused with historiography, which is the study of history as an academic discipline, and thus concerns its methods and practices, and its development as a discipline over time. Nor should philosophy of history be confused with the history of philosophy, which is the study of the development of philosophical ideas through time.

Historical method basics
The following questions are used by historians in modern work. The first four are known as higher criticism; the fifth, lower criticism; and, together, external criticism. The sixth and final inquiry about a source is called internal criticism. The historical method comprises the techniques and guidelines by which historians use primary sources and other evidence to research and then to write history. Herodotus of Halicarnassus (484 BC – ca. 425 BC) has generally been acclaimed as the "father of history". However, his contemporary Thucydides (ca. 
460 BC – ca. 400 BC) is credited with having first approached history with a well-developed historical method in his work the History of the Peloponnesian War. Thucydides, unlike Herodotus, regarded history as being the product of the choices and actions of human beings, and looked at cause and effect, rather than as the result of divine intervention. In his historical method, Thucydides emphasized chronology, a neutral point of view, and that the human world was the result of the actions of human beings. Greek historians also viewed history as cyclical, with events regularly recurring. There were historical traditions and sophisticated use of historical method in ancient and medieval China. The groundwork for professional historiography in East Asia was established by the Han Dynasty court historian known as Sima Qian (145–90 BC), author of the Shiji (Records of the Grand Historian). For the quality of his written work, Sima Qian is posthumously known as the Father of Chinese Historiography. Chinese historians of subsequent dynastic periods in China used his Shiji as the official format for historical texts, as well as for biographical literature. Saint Augustine was influential in Christian and Western thought at the beginning of the medieval period. Through the Medieval and Renaissance periods, history was often studied through a sacred or religious perspective. Around 1800, German philosopher and historian Georg Wilhelm Friedrich Hegel brought philosophy and a more secular approach in historical study. In the preface to his book, the Muqaddimah (1377), the Arab historian and early sociologist, Ibn Khaldun, warned of seven mistakes that he thought that historians regularly committed. In this criticism, he approached the past as strange and in need of interpretation. 
The originality of Ibn Khaldun was to claim that the cultural difference of another age must govern the evaluation of relevant historical material, to distinguish the principles according to which it might be possible to attempt the evaluation, and lastly, to feel the need for experience, in addition to rational principles, in order to assess a culture of the past. Ibn Khaldun often criticized "idle superstition and uncritical acceptance of historical data." As a result, he introduced a scientific method to the study of history, and he often referred to it as his "new science". His historical method also laid the groundwork for the observation of the role of state, communication, propaganda and systematic bias in history, and he is thus considered to be the "father of historiography" or the "father of the philosophy of history". In the West, historians developed modern methods of historiography in the 17th and 18th centuries, especially in France and Germany. The 19th-century historian with the greatest influence on methods was Leopold von Ranke in Germany. In the 20th century, academic historians shifted their focus from epic nationalistic narratives, which often tended to glorify the nation or great men, to more objective and complex analyses of social and intellectual forces. A major trend of historical methodology in the 20th century was a tendency to treat history more as a social science rather than as an art, which traditionally had been the case. Some of the leading advocates of history as a social science were a diverse collection of scholars which included Fernand Braudel, E. H. Carr, Fritz Fischer, Emmanuel Le Roy Ladurie, Hans-Ulrich Wehler, Bruce Trigger, Marc Bloch, Karl Dietrich Bracher, Peter Gay, Robert Fogel, Lucien Febvre and Lawrence Stone. Many of the advocates of history as a social science were or are noted for their multi-disciplinary approach. 
Braudel combined history with geography, Bracher history with political science, Fogel history with economics, Gay history with psychology, Trigger history with archaeology while Wehler, Bloch, Fischer, Stone, Febvre and Le Roy Ladurie have in varying and differing ways amalgamated history with sociology, geography, anthropology, and economics. More recently, the field of digital history has begun to address ways of using computer technology to pose new questions to historical data and generate digital scholarship. In opposition to the claims of history as a social science, historians such as Hugh Trevor-Roper, John Lukacs, Donald Creighton, Gertrude Himmelfarb and Gerhard Ritter argued that the key to the historians' work was the power of the imagination, and hence contended that history should be understood as an art. French historians associated with the Annales School introduced quantitative history, using raw data to track the lives of typical individuals, and were prominent in the establishment of cultural history (cf. histoire des mentalités). Intellectual historians such as Herbert Butterfield, Ernst Nolte and George Mosse have argued for the significance of ideas in history. American historians, motivated by the civil rights era, focused on formerly overlooked ethnic, racial, and socio-economic groups. Another genre of social history to emerge in the post-WWII era was Alltagsgeschichte (History of Everyday Life). Scholars such as Martin Broszat, Ian Kershaw and Detlev Peukert sought to examine what everyday life was like for ordinary people in 20th-century Germany, especially in the Nazi period. Marxist historians such as Eric Hobsbawm, E. P. Thompson, Rodney Hilton, Georges Lefebvre, Eugene D. Genovese, Isaac Deutscher, C. L. R. James, Timothy Mason, Herbert Aptheker, Arno J. Mayer and Christopher Hill have sought to validate Karl Marx's theories by analyzing history from a Marxist perspective. 
In response to the Marxist interpretation of history, historians such as François Furet, Richard Pipes, J. C. D. Clark, Roland Mousnier, Henry Ashby Turner and Robert Conquest have offered anti-Marxist interpretations of history. Feminist historians such as Joan Wallach Scott, Claudia Koonz, Natalie Zemon Davis, Sheila Rowbotham, Gisela Bock, Gerda Lerner, Elizabeth Fox-Genovese, and Lynn Hunt have argued for the importance of studying the experience of women in the past. In recent years, postmodernists have challenged the validity and need for the study of history on the basis that all history is based on the personal interpretation of sources. In his 1997 book In Defence of History, Richard J. Evans, a professor of modern history at Cambridge University, defended the worth of history. Another defence of history from post-modernist criticism was the Australian historian Keith Windschuttle's 1994 book, The Killing of History. Areas of study Historical study often focuses on events and developments that occur in particular blocks of time. Historians give these periods of time names in order to allow "organising ideas and classificatory generalisations" to be used by historians. The names given to a period can vary with geographical location, as can the dates of the start and end of a particular period. Centuries and decades are commonly used periods and the time they represent depends on the dating system used. Most periods are constructed retrospectively and so reflect value judgments made about the past. The way periods are constructed and the names given to them can affect the way they are viewed and studied. Particular geographical locations can form the basis of historical study, for example, continents, countries and cities. Understanding why historic events took place is important. To do this, historians often turn to geography. Weather patterns, the water supply, and the landscape of a place all affect the lives of the people who live there. 
For example, to explain why the ancient Egyptians developed a successful civilization, studying the geography of Egypt is essential. Egyptian civilization was built on the banks of the Nile River, which flooded each year, depositing soil on its banks. The rich soil could help farmers grow enough crops to feed the people in the cities. That meant that not everyone had to farm, so some people could perform other jobs that helped develop the civilization. World history is the study of major civilizations over the last 3000 years or so. It has led to highly controversial interpretations by Oswald Spengler and Arnold J. Toynbee, among others. World history is especially important as a teaching field. It has increasingly entered the university curriculum in the U.S., in many cases replacing courses in Western Civilization, that had a focus on Europe and the U.S. World history adds extensive new material on Asia, Africa and Latin America. - History of Africa begins with the first emergence of modern human beings on the continent, continuing into its modern present as a patchwork of diverse and politically developing nation states. - History of the Americas is the collective history of North and South America, including Central America and the Caribbean. - History of North America is the study of the past passed down from generation to generation on the continent in the Earth's northern and western hemisphere. - History of Central America is the study of the past passed down from generation to generation on the continent in the Earth's western hemisphere. - History of the Caribbean begins with the oldest evidence where 7,000-year-old remains have been found. - History of South America is the study of the past passed down from generation to generation on the continent in the Earth's southern and western hemisphere. - History of Antarctica emerges from early Western theories of a vast continent, known as Terra Australis, believed to exist in the far south of the globe. 
- History of Australia starts with the documentation of the Makassar trading with Indigenous Australians on Australia's north coast. - History of New Zealand dates back at least 700 years to when it was discovered and settled by Polynesians, who developed a distinct Māori culture centred on kinship links and land. - History of the Pacific Islands covers the history of the islands in the Pacific Ocean. - History of Eurasia is the collective history of several distinct peripheral coastal regions: the Middle East, South Asia, East Asia, Southeast Asia, and Europe, linked by the interior mass of the Eurasian steppe of Central Asia and Eastern Europe. - History of Europe describes the passage of time from humans inhabiting the European continent to the present day. - History of Asia can be seen as the collective history of several distinct peripheral coastal regions, East Asia, South Asia, and the Middle East linked by the interior mass of the Eurasian steppe. - History of East Asia is the study of the past passed down from generation to generation in East Asia. - History of the Middle East begins with the earliest civilizations in the region now known as the Middle East that were established around 3000 BC, in Mesopotamia (Iraq). - History of South Asia is the study of the past passed down from generation to generation in the Sub-Himalayan region. - History of Southeast Asia has been characterized as interaction between regional players and foreign powers. Military history concerns warfare, strategies, battles, weapons, and the psychology of combat. The "new military history" since the 1970s has been concerned with soldiers more than generals, with psychology more than tactics, and with the broader impact of warfare on society and culture. History of religion The history of religion has been a main theme for both secular and religious historians for centuries, and continues to be taught in seminaries and academe. 
Leading journals include Church History, Catholic Historical Review, and History of Religions. Topics range widely from political, cultural and artistic dimensions to theology and liturgy. Every major country is covered, and most smaller ones as well. Social history, sometimes called the new social history, is the field that includes the history of ordinary people and their strategies and institutions for coping with life. In its "golden age" it was a major growth field among scholars in the 1960s and 1970s, and it is still well represented in history departments. In the two decades from 1975 to 1995, the proportion of professors of history in American universities identifying with social history rose from 31% to 41%, while the proportion of political historians fell from 40% to 30%. In the history departments of British universities in 2007, of the 5723 faculty members, 1644 (29%) identified themselves with social history, while political history came next with 1425 (25%). The "old" social history before the 1960s was a hodgepodge of topics without a central theme, and it often included political movements, like Populism, that were "social" in the sense of being outside the elite system. Social history was contrasted with political history, intellectual history and the history of great men. English historian G. M. Trevelyan saw it as the bridging point between economic and political history, reflecting that, "Without social history, economic history is barren and political history unintelligible." While the field has often been viewed negatively as history with the politics left out, it has also been defended as "history with the people put back in." The chief subfields of social history include:
Cultural history replaced social history as the dominant form in the 1980s and 1990s. It typically combines the approaches of anthropology and history to look at language, popular cultural traditions and cultural interpretations of historical experience.
It examines the records and narrative descriptions of past knowledge, customs, and arts of a group of people. How peoples constructed their memory of the past is a major topic. Cultural history includes the study of art in society as well as the study of images and human visual production (iconography).
Diplomatic history, sometimes referred to as "Rankian History" in honour of Leopold von Ranke, focuses on politics, politicians and other high rulers and views them as being the driving force of continuity and change in history. This type of political history is the study of the conduct of international relations between states or across state boundaries over time. This is the most common form of history and is often the classical and popular belief of what history should be.
Although economic history has been well established since the late 19th century, in recent years academic studies have shifted more and more toward economics departments and away from traditional history departments.
Environmental history is a new field that emerged in the 1980s to look at the history of the environment, especially in the long run, and the impact of human activities upon it.
World history is primarily a teaching field, rather than a research field. It gained popularity in the United States, Japan and other countries after the 1980s with the realization that students need a broader exposure to the world as globalization proceeds. The World History Association has published the Journal of World History quarterly since 1990. The H-World discussion list serves as a network of communication among practitioners of world history, with discussions among scholars, announcements, syllabi, bibliographies and book reviews.
A people's history is a type of historical work which attempts to account for historical events from the perspective of common people. A people's history is the history of the world that is the story of mass movements and of the outsiders.
Individuals or groups not included in the past in other types of writing about history are the primary focus; this includes the disenfranchised, the oppressed, the poor, the nonconformists, and the otherwise forgotten people. This history also usually focuses on events occurring in the fullness of time, or when an overwhelming wave of smaller events causes certain developments to occur.
Historiometry is a historical study of human progress or individual personal characteristics, using statistics to analyze references to eminent persons, their statements, behaviour and discoveries in relatively neutral texts.
Gender history is a sub-field of History and Gender studies, which looks at the past from the perspective of gender. It is, in many ways, an outgrowth of women's history. Despite its relatively short life, Gender History (and its forerunner Women's History) has had a rather significant effect on the general study of history. Since the 1960s, when the initially small field first achieved a measure of acceptance, it has gone through a number of different phases, each with its own challenges and outcomes. Although some of the changes to the study of history have been quite obvious, such as increased numbers of books on famous women or simply the admission of greater numbers of women into the historical profession, other influences are more subtle.
Public history describes the broad range of activities undertaken by people with some training in the discipline of history who are generally working outside of specialized academic settings. Public history practice has quite deep roots in the areas of historic preservation, archival science, oral history, museum curatorship, and other related fields. The term itself began to be used in the U.S. and Canada in the late 1970s, and the field has become increasingly professionalized since that time.
Some of the most common settings for public history are museums, historic homes and historic sites, parks, battlefields, archives, film and television companies, and all levels of government. Professional and amateur historians discover, collect, organize, and present information about past events. In lists of historians, historians can be grouped by the historical period in which they were writing, which is not necessarily the same as the period in which they specialized. Chroniclers and annalists, though they are not historians in the true sense, are also frequently included.
The judgement of history
Since the 20th century, Western historians have disavowed the aspiration to provide the "judgement of history." The goals of historical judgements or interpretations are separate from those of legal judgements, which need to be formulated quickly after the events and be final. A related issue to that of the judgement of history is that of collective memory.
Pseudohistory is a term applied to texts which purport to be historical in nature but which depart from standard historiographical conventions in a way which undermines their conclusions. Closely related to deceptive historical revisionism, works which draw controversial conclusions from new, speculative, or disputed historical evidence, particularly in the fields of national, political, military, and religious affairs, are often rejected as pseudohistory.
From the origins of national school systems in the 19th century, the teaching of history to promote national sentiment has been a high priority. In the United States after World War I, a strong movement emerged at the university level to teach courses in Western Civilization, so as to give students a common heritage with Europe. In the U.S. after 1980, attention increasingly moved toward teaching world history or requiring students to take courses in non-western cultures, to prepare students for life in a globalized economy.
At the university level, historians debate the question of whether history belongs more to social science or to the humanities. Many view the field from both perspectives. The teaching of history in French schools was influenced by the Nouvelle histoire as disseminated after the 1960s by Cahiers pédagogiques and Enseignement and other journals for teachers. Also influential was the Institut national de recherche et de documentation pédagogique (INRDP). Joseph Leif, the Inspector-general of teacher training, said pupils should learn about historians' approaches as well as facts and dates. Louis François, Dean of the History/Geography group in the Inspectorate of National Education, advised that teachers should provide historic documents and promote "active methods" which would give pupils "the immense happiness of discovery." Proponents said it was a reaction against the memorization of names and dates that characterized teaching and left the students bored. Traditionalists protested loudly that it was a postmodern innovation that threatened to leave the youth ignorant of French patriotism and national identity.
Bias in school teaching
In most countries history textbooks are tools to foster nationalism and patriotism, and give students the official line about national enemies. In many countries history textbooks are sponsored by the national government and are written to put the national heritage in the most favorable light. For example, in Japan, mention of the Nanking Massacre has been removed from textbooks and World War II as a whole is given cursory treatment. Other countries have complained. It was standard policy in communist countries to present only a rigid Marxist historiography. According to sociologist James Loewen, in the United States the history of the American Civil War has in some places been phrased to avoid giving offense to white Southerners and blacks.
Academic historians have often fought against the politicization of the textbooks, sometimes with success. In 21st-century Germany, the history curriculum is controlled by the 16 states, and is characterized not by superpatriotism but rather by an "almost pacifistic and deliberately unpatriotic undertone" and reflects "principles formulated by international organizations such as UNESCO or the Council of Europe, thus oriented towards human rights, democracy and peace." The result is that "German textbooks usually downplay national pride and ambitions and aim to develop an understanding of citizenship centred on democracy, progress, human rights, peace, tolerance and Europeanness."
There are two distinct regions in the pituitary gland: the anterior lobe (adenohypophysis) and the posterior lobe (neurohypophysis). The activity of the adenohypophysis is controlled by releasing hormones from the hypothalamus. The neurohypophysis is controlled by nerve stimulation.
Hormones of the anterior lobe (adenohypophysis)
Growth hormone (GH) is a protein that stimulates the growth of bones, muscles, and other organs by promoting protein synthesis. This hormone drastically affects the appearance of an individual because it influences height. If there is too little growth hormone in a child, that person may become a pituitary dwarf of normal proportions but small stature. An excess of the hormone in a child results in exaggerated bone growth, and the individual becomes exceptionally tall, or a giant. Thyroid-stimulating hormone, or thyrotropin, causes the glandular cells of the thyroid to secrete thyroid hormone. When there is a hypersecretion of thyroid-stimulating hormone, the thyroid gland enlarges and secretes too much thyroid hormone. Adrenocorticotropic hormone (ACTH) reacts with receptor sites in the cortex of the adrenal gland to stimulate the secretion of cortical hormones, particularly cortisol. Gonadotropic hormones react with receptor sites in the gonads, or ovaries and testes, to regulate the development, growth, and function of these organs. Prolactin promotes the development of glandular tissue in the female breast during pregnancy and stimulates milk production after the birth of the infant.
Hormones of the posterior lobe (neurohypophysis)
Antidiuretic hormone (ADH) promotes the reabsorption of water by the kidney tubules, with the result that less water is lost as urine. This mechanism conserves water for the body. Insufficient amounts of antidiuretic hormone cause excessive water loss in the urine. Oxytocin causes contraction of the smooth muscle in the wall of the uterus.
It also stimulates the ejection of milk from the lactating breast (see mammary gland).
Other terms related to the pituitary
Hypophysectomy is the surgical removal or destruction of the pituitary gland. The operation may be conducted by opening the skull or by the insertion of special needles that produce a very low temperature (cryosurgery). Radiotherapy (e.g., by insertion of needles of yttrium-90) can also be used to destroy parts of the pituitary.
A brief history of the pituitary
The pituitary was certainly known to Galen in 200 AD, but its role was misunderstood. Even in the 16th century Vesalius proclaimed that its function was to secrete a substance into the nose. It was not until the latter part of the 19th century that its true properties began to be elucidated. A significant breakthrough came in 1886, when the French neurologist Pierre Marie described acromegaly and gigantism, conditions caused by overactivity and enlargement of the pituitary.
Source: National Cancer Institute. Copyright © The Worlds of David Darling.
To prepare children for the next stage in education and for life by equipping them with: These aims apply to all children, regardless of age or ability. We believe children learn best by moving from concrete first-hand experience, through pictorial/diagrammatic representation, to secure abstract knowledge and understanding. This is true whenever a new concept is being introduced, whatever the age of the child, and should not be limited to the youngest children. Children should be encouraged to shift between the various stages as and when they need; for example, it may be that a child has a secure abstract understanding of addition, but still needs to represent their thinking pictorially when working with fractions - therefore practical equipment, diagrammatic models etc. must always be available throughout the school. In order to meet our aims, we believe that children must have exposure to a broad maths curriculum covering the 8 key areas set out by the National Curriculum. Whilst learning the fluency of number is essential, we believe that the learning must provide plenty of opportunities for children to explain their mathematical thinking and reasoning through: an encouragement to use practical resources, to create diagrams, to talk through thinking and to show their working out. Opportunities for application and problem solving are also essential to the children's learning. All teachers are required to have a good subject knowledge. This assumes:
Teachers at Hallgate are expected to show a commitment to providing a well thought out mathematical curriculum. This is outlined in the guidance notes and supported by the following documents:
- Whole School Curriculum map
- Curriculum coverage map
- Progression of Big Ideas
- Learning Objectives map
- Problem Solving Policy
The sun is the largest and the most massive object in the solar system, but it is just a medium-sized star among the hundreds of billions of stars in the Milky Way galaxy.
Radius, diameter & circumference
The sun is nearly a perfect sphere. Its equatorial diameter and its polar diameter differ by only 6.2 miles (10 km). The mean radius of the sun is 432,450 miles (696,000 kilometers), which makes its diameter about 864,938 miles (1.392 million km). You could line up 109 Earths across the face of the sun. The sun's circumference is about 2,713,406 miles (4,366,813 km).
Mass and volume
The total volume of the sun is 1.4 x 10^27 cubic meters. About 1.3 million Earths could fit inside the sun. The mass of the sun is 1.989 x 10^30 kilograms, about 333,000 times the mass of the Earth. The sun contains 99.8 percent of the mass of the entire solar system, leading astronomers Imke de Pater and Jack J. Lissauer, authors of the textbook "Planetary Sciences," to refer to the solar system as "the sun plus some debris." It may be the biggest thing in this neighborhood, but the sun is just average compared to other stars. Betelgeuse, a red giant, is about 700 times bigger than the sun and about 14,000 times brighter. The sun is classified as a G-type main-sequence star, or G dwarf star, or more imprecisely, a yellow dwarf. Actually, the sun — like other G-type stars — is white, but appears yellow through Earth's atmosphere. Stars generally get bigger as they grow older. In about 5 billion years, scientists think the sun will start to use up all of the hydrogen at its center. The sun will puff up into a red giant and expand past the orbit of the inner planets, including Earth. The sun's helium will get hot enough to burn into carbon, and the carbon will combine with the helium to form oxygen. These elements will collect in the center of the sun.
Later, the sun will shed its outer layers, forming a planetary nebula and leaving behind a dead core of mostly carbon and oxygen — a very dense and hot white dwarf star, about the size of the Earth. — Tim Sharp, Reference Editor
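The dimensional figures quoted above are easy to cross-check with a few lines of arithmetic. Note that the Earth diameter and mass below are standard reference values assumed for the comparison, not numbers taken from the article:

```python
import math

# Figures quoted in the article
SUN_DIAMETER_KM = 1.392e6   # ~864,938 miles
SUN_MASS_KG = 1.989e30

# Standard reference values for Earth (assumptions, not from the article)
EARTH_DIAMETER_KM = 12_742
EARTH_MASS_KG = 5.972e24

# circumference = pi * diameter, roughly 4.37 million km
circumference_km = math.pi * SUN_DIAMETER_KM

# how many Earths fit across the sun's face: ~109
earths_across = SUN_DIAMETER_KM / EARTH_DIAMETER_KM

# sun-to-Earth mass ratio: ~333,000
mass_ratio = SUN_MASS_KG / EARTH_MASS_KG
```

The circumference computed from the metric diameter comes out slightly above the article's 4,366,813 km figure only because the article rounds the diameter; the "109 Earths" and "333,000 times the mass" claims both check out.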
- Ages: 6 to 9 years - Children-Teacher Ratio: 15:1 - Schedule: Monday-Friday 8:45am-3:00pm Elementary children have questioning minds, abilities to abstract and imagine, moral and social orientation and unlimited energy for research and discovery. They are naturally curious and eager to expand their understanding in all fields of knowledge. In a research style of learning, elementary children work in small groups on a variety of projects, which spark the imagination and engage the intellect. Lessons given by a trained Montessori Directress guide children toward activities which help them to develop reasoning, problem-solving and critical-thinking skills. Children aged 6-11 years are driven to understand the universe and their place within it. Their capacity to assimilate aspects of cultures is boundless. Elementary studies include geography, biology, history, language, geometry and mathematics, science, music, and art. Exploration of each area is encouraged through exploratory trips outside the classroom to community resources such as libraries, the planetarium, botanical gardens, the Rochester Museum and Science Center, factories, hospitals, businesses, etc. This inclusive approach to education fosters a feeling of connectedness to all humanity and empathy for others. Students are encouraged to satiate their natural desire to make contributions to the world.
The inner ear serves two purposes: hearing and balance. There are mechanisms in the ear that inform the brain about your position, orientation in space and movement at all times – to keep you in balance. A false sensation of spinning or whirling, known as vertigo, can occur when the signal to the brain is blocked or misfires. In addition to the sensation of dizziness, symptoms may include headache, nausea, sensitivity to bright light, blurred vision, ringing in the ears, ear pain, facial numbness, eye pain, motion sickness, confused thinking, fainting and clumsiness. Dizziness can also be a symptom of a more serious medical problem, such as high or low blood pressure, heart problems, stroke, tumor, medication side effect or metabolic disorders. Therefore you should always seek medical attention if you experience ongoing or repetitive dizziness. Hearing loss has a lot of different causes and manifestations. It can be sudden or gradual. It can occur in one ear or both ears. It can be temporary or permanent. It happens to people of all ages and is associated with the aging process. Before discussing causes and treatments for hearing loss, it is important to understand how hearing works.
How We Hear
There are three sections of the ear: the outer ear, middle ear and inner ear. Each section helps move sound through the process of hearing. When a sound occurs, the outer ear feeds it through the ear canal to the eardrum. The noise causes the eardrum to vibrate. This, in turn, causes three little bones inside the middle ear (malleus, incus, and stapes) to move. That movement travels into the inner ear (cochlea), where it makes tiny little hairs move in a fluid. These hairs convert the movement to auditory signals, which are then transmitted to the brain to register the sound.
Causes of Hearing Loss
Hearing loss occurs when sound is blocked in any of the three areas of the ear.
The most common cause of hearing loss — and one of the most preventable — is exposure to loud noises. Infections, both in the ear and elsewhere in the body, are also a major contributor to hearing loss.
- In the Outer Ear: Earwax build-up, infections that cause swelling, a growth in the ear canal, injury or birth defects can restrict hearing in the outer ear.
- In the Middle Ear: Fluid build-up is responsible for the most common infections and blockages in the middle ear. Fluid in the middle ear prevents the bones from processing sounds properly. Tumors, both benign and malignant, can also result in hearing loss in the middle ear.
- In the Inner Ear: The natural process of aging diminishes hearing through damage to the cochlea (the mechanism for converting sound vibrations to brain signals), the vestibular labyrinth (which regulates balance), or the acoustic nerve (the nerve that sends sound signals to the brain). Additionally, inner ear infections, Meniere's disease and other nerve-related problems contribute to hearing loss in the inner ear.
The ear is made up of three sections: the outer ear, middle ear and inner ear. Each of these areas is susceptible to infections, which can be painful. Young children have a greater tendency to get earaches. While most ear pain resolves itself in a matter of days, you should get a physical examination to understand the type of infection, prevent it from spreading and obtain treatment to help alleviate the pain.
Outer Ear Infection (Otitis Externa)
Also known as Swimmer's Ear, outer ear infections result from an inflammation, often bacterial, in the outer ear. Generally, they happen when water, sand or dirt gets into the ear canal. Moisture in the air or swimming makes the ear more susceptible to this type of ear infection. Symptoms include severe pain, itching, redness and swelling in the outer ear. There also may be some fluid drainage. Often the pain is worse when chewing or when you pull on the ear.
To reduce pain and prevent other long-term effects on the ear, be sure to see a doctor. Complications from untreated otitis externa may include hearing loss, recurring ear infections and bone and cartilage damage. Typically, your doctor will prescribe eardrops that block bacterial growth. In more severe cases, your doctor may also prescribe an antibiotic and pain medication. Most outer ear infections resolve in seven to 10 days.
Middle Ear Infection (Otitis Media)
Middle ear infections can be caused by either bacterial or viral infection. These infections may be triggered by airborne or foodborne allergies, infections elsewhere in the body, nutritional deficiencies or a blocked Eustachian tube. In chronic cases, a thick, glue-like fluid may be discharged from the middle ear. Treatment depends on the cause of the infection and ranges from analgesic eardrops and medications to the surgical insertion of a tube to drain fluid from the middle ear, or an adenoidectomy.
Inner Ear Infection (Otitis Interna)
Also known as labyrinthitis, inner ear infections are most commonly caused by other infections in the body, particularly sinus, throat or tooth infections. Symptoms include dizziness, fever, nausea, vomiting, hearing loss and tinnitus. Always seek medical attention if you think you may have an inner ear infection. If you suspect you or your child may have an ear infection, please contact our office and schedule an appointment with one of our otolaryngologists.
(9 pm. – promoted by ek hornbeck) Last time we started our discussion about sodium, and tonight we shall continue it. We have pretty much covered the quantum mechanical part and the properties and uses of elemental sodium, so tonight we shall focus on some of the compounds of that element. Sodium compounds are extremely common and widespread, but not universally distributed. This is important for reasons to be seen later. The most common sodium compound is common salt, or sodium chloride, NaCl. Everyone has personal experience with salt, both as a nutrient and as a melting aid for icy surfaces. Sodium chloride occurs in mineral deposits (many of them from ancient, dried up seabed deposits), in seawater, and as the mineral halite, amongst other forms. Tremendous amounts are used in industry as a starting material or processing aid, in deicing operations, and only a little is used as food. One fascinating thing about salt is that it is the only rock that humans eat on a regular basis. It is also the only inorganic material to which humans have evolved specific taste receptors. Salt is absolutely essential for mammalian (and much other) life. It serves several functions in the body. They are all important, so my ordering of these functions is arbitrary. Sodium ions, along with potassium ions, serve to conduct electrical impulses in nerves. Without these ions we would surely die, and fast. Normally, within neurons potassium levels are higher than in the plasma, and sodium concentrations are lower. As a neuron fires, this situation is temporarily disturbed, but the equilibrium conditions are rapidly restored. If they were not, neurons would become irreversibly depolarized and thus unable to function again. Some neurotoxins operate by preventing the restoration of initial conditions. Sodium, along with chloride, also are important for maintaining the osmotic balance betwixt cells and extracellular fluids. 
If the concentration of salt gets too high outside the cells, as in dehydration, the cells begin to lose water because of osmosis. Severe dehydration is a gradual thing, in general, and must be treated gently or those shrunken cells will absorb water too fast and burst, killing them. This is also why drinking seawater in quantity is deadly, since it has over three times more salt than plasma, increasing dehydration even though it is mostly water. The deicing properties of salt are interesting insofar as how it works. Most people know that adding salt to ice makes the temperature decrease (the classic home example is making home made ice cream in a freezing device that is surrounded by an ice and salt mixture), but the connexion betwixt that and deicing roadways is often lost. Here is how it works. The melting point of brine, a saturated solution of salt in water, is zero degrees F (that is why Herr Docktor Fahrenheit used it as one of his reference points). If one puts salt onto ice, even though the initial interaction may be a solid/solid one (a slow process), as soon as a little salt is dissolved there are liquid domains, and solid/liquid interactions are much faster, especially since salt is quite soluble in water. Once that begins, the temperature of the ice/salt mixture begins to drop, but let us look at a typical example, perhaps where snow has covered a road at an ambient temperature of 20 degrees F. The temperature of the ice/salt mixture drops, but since the roadway is warmer, heat is transferred to the mixture, raising it above zero degrees. As more salt becomes dissolved, more heat transfer occurs and finally the ice or snow just melts away as brine. There is one caveat: if the road temperature approaches zero degrees F, the process stops because there is no energetic driving force to transfer heat from the road to the ice/salt mixture.
In extreme climates other materials that have lower freezing points than ice/salt mixtures are used. Now you know why ice and salt freeze ice cream in your home ice cream maker: the bucket containing the ice and salt is a pretty good insulator, but the metal one containing the goodies is an excellent conductor. Thus, the heat is extracted from the ice cream mix into the ice/salt mix and the ice cream freezes. Who would have thought that road salt and ice cream have so much in common! The other similarity is that rock salt, a rather crude but cheap material, is usually used for both. Salt is also the starting material for many basic chemicals used industrially and in consumer products. Here are a couple of examples. If one takes salt and mixes it with calcium chloride (to lower the melting point, otherwise it is inert) and makes that mixture the electrolyte in a properly designed electrolytic cell, from one electrode one gets sodium hydroxide (lye) and from the other chlorine gas. We shall discuss chlorine another time, but sodium hydroxide is extremely important. Hydrogen is also produced; to make it all work, hydrogen and oxygen have to be supplied, and they come from the cheapest of all raw materials, water. Sodium hydroxide is the archetypical base. It is extremely corrosive to most materials (hence the old name, caustic alkali or caustic soda) because it "wants" to abstract a hydrogen ion from anything that can give one to produce the very stable compound, water. So what are its uses? Hundreds, if not thousands, of uses exist for it. One that you touch every day is soap. This has been known since antiquity, but is well refined today.
It turns out that if one takes the proper amounts of sodium hydroxide and fat, like animal fat or vegetable oil, reacts them with water and allows time to work its magic, instead of something that will burn you (the lye) or something that will just make you more greasy (the fat), soap is formed that not only will not burn, but also removes grease! I have written about that a long time before in this space, but have not the energy to find it now. You used to be able to buy lye at the big box stores, but it is getting sort of hard to find because of meth cookers. The last time I looked it was available at Lowe's, but the label did not say LYE on it, but rather DRAIN OPENER, and you have to look at the label closely to see what you are getting. The liquid drain openers are concentrated sodium hydroxide solutions, many with thickening agents. You can get the "crystal" Drain-o at most stores, and it is mostly lye, but with the addition of some aluminum turnings. It turns out that sodium hydroxide gets very hot when dissolved in water (this is called a large, negative heat of solution), and that helps melt grease from plumbing. The sodium hydroxide solution reacts with the aluminum, releasing even more heat and hydrogen gas, which helps to dislodge clogs as well. Sodium hydroxide is extremely corrosive to skin and especially eyes. It is hard to wash concentrated solutions out of the eyes before the cornea is destroyed, so handle it with extreme care. The reason that lye solutions feel soapy is that they are literally making soap out of the lipids in your skin, doing damage at the same time. If you get a lye solution on you, vinegar or lemon juice is a specific treatment. Another basic sodium chemical is sodium sulfate, Na2SO4. You see it mainly in powdered detergents, and its use in detergents is waning because most liquid ones do not contain it. It is also used in the glass industry and in textiles.
A really interesting use of it takes advantage of its relatively low (about 32 degrees C, just below body temperature) melting point. Since energy is released when it reverts to a solid, it is thus useful for storing heat in some solar heating schemes. I used to use it in the laboratory to dry solutions of organic materials in organic solvents. At room temperature the stable form is the decahydrate, one sodium sulfate unit associated with ten water molecules. If the anhydrous kind is put into wet solutions of organic solvents (almost all organic solvents dissolve at least a little water, and ether around 10%), the sodium sulfate attracts the water and traps it in a solid, easily filtered out, mass. It has a further advantage of being pretty much inert to most organic substances, so solutions can be dried without degrading whatever material is being sought. Most sodium sulfate is produced by mining, but a fair amount used to come from the production of hydrochloric acid by reacting salt with sulfuric acid. Since both hydrogen and chlorine are produced by the same cells that produce sodium hydroxide, the bulk of hydrochloric acid now produced is made by burning the hydrogen in the chlorine, so the old process is not much used any longer. One sodium compound that almost everyone has seen and touched is sodium bicarbonate, or baking soda, NaHCO3. It is only weakly basic, so it is not corrosive. Some of it is mined, but most of it on the market is produced from salt, carbon dioxide, water, and ammonia. The ammonia is regenerated at the end of the process and so is recycled, otherwise the process would be too expensive. Sodium bicarbonate is used in cooking, mainly to produce carbon dioxide gas when baking cakes, biscuits, and other quick breads that are not risen with yeast. When mixed with some sort of acid, the carbon dioxide is released and the dough or batter rises.
Buttermilk is the traditional acid, but in baking powders some solid acidic material is used so that it does not react until wetted. Usually a little less acid than required to neutralize the soda is added, because when the soda gets to around 70 degrees C it begins to evolve carbon dioxide even without acid. This property makes baking soda useful for putting out kitchen fires, particularly grease fires. Remember, NEVER use water on a grease fire. The best thing to do is just to put the lid on the flaming vessel, but if this is not possible, throw some soda on it. It decomposes so fast that it does not have time to cause the grease to splatter. If you use a whole lot, though, it can stay relatively cool in the middle of the mass, so do not throw a whole box into a deep fryer! A neat trick to clean silver or copper of tarnish is to wash the item(s) thoroughly in detergent and water to remove dirt and grease, then put the items in either an uncoated aluminum pan or in any pan after first putting in a piece of aluminum foil. Add some baking soda and water, and put the pan on the heat. This sets up a battery that electrolytically reduces the tarnish (copper or silver sulfide) back to the metal. This is preferable to silver polish because the polish removes the silver sulfide, whereas the soda method regenerates the silver. This is particularly important for silver plate, since the plating is quite thin and repeated use of polishes will wear it away. Baking soda is famous for being used in tooth cleaners, and it has real merit. It is a mild abrasive, but not so much that it damages tooth enamel; a mild antiseptic that kills bacteria; and a mild base that neutralizes the acids that bacteria produce that DO attack enamel. It is also a fair whitener. My grandmum used it (with a frizzled sweet gum stick as a toothbrush) until she was around 50 or so and still had many of her natural teeth when she died at 101 and a half years!
The mild abrasive properties make soda great for cleaning items around the house without scratching them, but do not use it on really delicate items like laptop screens or lacquered surfaces. Also, do not use it on aluminum, because it is basic enough to dissolve the protective oxide layer on aluminum. By the way, aluminum is so reactive that without this oxide layer (which forms immediately when aluminum is in contact with air) aluminum items would corrode away in minutes to weeks, depending on the size and geometry of the item. Aluminum foil would not exist without this protective layer, being destroyed by the air almost instantly. Remember, aluminum is produced by dissolving aluminum oxide in molten cryolite, sodium hexafluoroaluminate, a more basic and lower melting sodium salt. Sodium bicarbonate cannot be used for that purpose because it decomposes when it gets hot, as mentioned earlier. Many people use soda as an antacid, and for me it has no peer. Since I cut way back on fatty foods I rarely use it any more, but when I got heartburn, half a teaspoon of soda in water would instantly, within 10 seconds, begin to give me relief. For those on sodium-restricted diets it should not be used unless specifically directed by a medical professional. Alka-Seltzer is a combination of soda, citric acid, and aspirin. When the tablets are put into water, the citric acid reacts with the sodium bicarbonate to form carbon dioxide, water, and sodium citrate, itself a good antacid. Some of the aspirin also reacts with the soda to form sodium acetylsalicylate, a water-soluble form of aspirin. That is why the aspirin gets into your system much faster than when taking tablets: aspirin itself has low water solubility. Another common sodium compound is sodium carbonate, Na2CO3, or washing soda. It is present in almost all laundry detergents because it not only helps to soften water but is also basic enough to saponify fat, turning it into soap and thus making it water soluble.
It is more basic than baking soda but not nearly as caustic as lye. Just do not get strong solutions in your eyes; unlike with lye, you CAN wash out washing soda solutions quickly enough to prevent corneal damage. Sodium carbonate is used in enormous quantities as a raw material for glassmaking. Common glass is essentially a mixture of sodium carbonate, calcium oxide, and white sand, heated until it melts and then molded into glasses, bottles, and similar things, floated on molten tin to make window glass, or rolled into thicker plates for plate glass. All three of the raw materials are incredibly cheap, and that makes glass cheap as well. Now, there are hundreds of different glass formulations, but unless special properties are required, soda lime glass is used because of its low cost. The next time you are at the hardware store, look at the edge of a piece of window glass. You will see that it is quite green! That is because the glass is contaminated with trace amounts of iron, and iron in the 2+ oxidation state is green. Thin sections of the glass show no perceptible tint, but at the edge of a piece a couple of feet wide it is quite noticeable. For some applications even those traces are unacceptable, so specially purified starting materials are used, increasing the cost of the glass considerably. Another commonly used sodium compound is sodium borate, or borax, Na2B4O7. It is an excellent laundry booster, and I use some in every load. It is also germicidal, and the former Mrs. Translator and I used it in the diaper pail to reduce the odor of rinsed, soiled diapers. It is also an excellent and relatively nontoxic (to mammals) insecticide and is safe to use around children and pets. People building or renovating houses are wise to put borax in the framing before hanging drywall, to act as a long-lasting, nontoxic roach preventative. Yet another commonly encountered sodium salt is monosodium glutamate, or MSG.
This is used as a flavor enhancer in many salty and savory dishes. One of the reasons that soy sauce tastes like soy sauce is that soybeans are very high in the amino acid glutamic acid. As the protein is hydrolyzed, glutamic acid is released and combines with sodium, forming this material. Contrary to popular belief, real sensitivity to MSG is almost unknown, and it IS a natural product. Almost any cheese has lots of it, except for unripened ones like cottage cheese and cream cheese. As a matter of fact, the scientific consensus is that MSG is the specific trigger for the fifth human taste, umami (roughly translated from the Japanese as "delicious"). So go ahead and do not feel bad about using it. It really does make savory foods taste better, but it works best with foods that are a little salty in the first place. There are thousands of compounds with sodium in them, and we can only scratch the surface here. Therefore, we shall consider only one more. Chlorine bleach is a water solution of sodium hypochlorite, NaClO. It is an oxidizing agent that is used to render stains on fabric colorless (and often water soluble). It is made by reacting chlorine with sodium hydroxide, both products of the chloralkali process mentioned earlier. Bleach also contains a little extra sodium hydroxide to keep the pH high, because hypochlorite is unstable in neutral or acidic conditions, releasing chlorine. That is why it bleaches: when you add bleach to laundry, the pH decreases and chlorine, the real bleaching agent, is liberated. It is important to rinse garments thoroughly after using chlorine bleach, because if the hydroxide is not removed it will damage the fabric. Weak bleach solutions are also excellent for disinfecting hard surfaces that are not damaged by them. Well, you have done it again! You have wasted many more einsteins of perfectly good photons reading this salty piece.
And even though Huckabee makes jokes about Chris Matthews wetting his pants when he reads me say it, I always learn much more than I could possibly hope to teach by writing this series, so please keep those comments, questions, corrections, and other feedback coming. Tips and recs are also always quite welcome. Remember, no science or technology issue is off topic here. I shall stay around tonight for as long as comments warrant, since A***** is out of town. Tomorrow I shall return around 9:00 PM Eastern for Review Time. Doc, aka Dr. David W. Smith
The Amazon Basin is the epicenter of the world's hydropower plants; the same gushing rains that give the region its lush foliage make it a prime destination for developers seeking to capitalize on this allegedly renewable energy source. But the long-term sustainability of these projects, which use the natural flow of water to generate electricity, is now under scrutiny. A new study of the Belo Monte Dam, one of the world's largest hydropower complexes, currently under construction on the Xingu River in the eastern region of the basin, found that large-scale deforestation in the Amazon poses a significant threat to a dam's energy-generating potential. Although many studies have examined the impacts of deforestation on the immediate vicinity of hydropower projects, less attention has been paid to its effects on a regional scale. In fact, earlier studies found that a loss of trees within the watershed of a hydropower site increased the energy-generating capacity of the dam in the short term, because fewer trees were available to draw water from the ground and export it outside the watershed in a process known as evapotranspiration. But across an entire region, less foliage means less rainfall, so rivers flow less powerfully. In their study, published in Proceedings of the National Academy of Sciences, researchers in the U.S. and Brazil found that large-scale deforestation in the Amazon had profound effects on the region's water cycle, and on its climate. A loss of 40 percent of Amazonian rainforest, the scientists predicted, would reduce regional precipitation by up to 43 percent between July and October, prolonging the area's dry season. Deforestation would thereby reduce river water discharge. Assuming zero forest loss, the river's flow surges for five months, between February and June; but if 40 percent of the region's trees were cleared, that window of heavy flow would narrow, running only from March until about May.
Essentially, “the peaks get tighter,” says Michael Coe, a senior scientist at the Woods Hole Research Center’s Amazon Program in Falmouth, Mass., who worked on the study. Further, the April peak in river discharge would fall by approximately 33 percent. So, regardless of whether hydropower developers push for increased conservation in the Xingu Basin, the study suggests they will have to take into account the effects of regional deforestation on the energy-generating capacity of their projects. “You can do a really good job conserving forest in one location,” Coe says, “but you might be undermined by activities occurring elsewhere.” Researchers estimate that if tree-clearing practices continue as projected, the Belo Monte project could see its energy generation potential slashed by as much as 38 percent.
Grapevine Water Use From a Distance

Technology to remotely measure thermal energy in vineyards may help with irrigation management

An example: Mendez noted that Napa Valley generally starts with saturated soil in the spring, which then dries to a threshold at which growers start irrigating. The actual irrigation start date and in-season management should reflect the targets set for yield and for grape and wine quality. Too much water, of course, encourages vegetative growth. Therefore, the correct amount of water depends on the yield and quality desired, based on the cultivar and style of wine. "Deficit irrigation is widely used in later stages of ripening, but cutting water off before harvest isn't the best solution," Mendez said. It can stress the plants unnecessarily and reduce yield, but it doesn't always improve flavors. On the other hand, water may be wasted if applied where or when it's not needed. Unfortunately, growers may not have complete flexibility to irrigate. The availability of water, both for vine roots and for protecting vines from frost by overhead spraying, increasingly looms as the largest issue facing grapegrowers in California. Grapevines will grow and produce grapes in many areas on natural rainfall alone, but much of it falls at the wrong time (winter), varies by year, and doesn't necessarily provide optimum yield and quality.

Measuring water use traditionally

Mendez, who has a Ph.D. focusing on irrigation management and how it influences grape yield and grape and wine quality, discussed new techniques for remotely measuring vineyard water use and ways to correlate that to actual plant needs. He started with a reminder of all the standard tools available to growers. Traditionally, these have included visual clues like cessation of tip growth, soil moisture sensors, and plant-based measurements like leaf water potentials (measured using a pressure chamber), sap-flow sensors, or leaf porometers that measure leaf stomatal conductance.
These tools often have been used to estimate plant evapotranspiration (ET) using reference coefficients, a process that invites controversy because it is not tied to the specific site and therefore does not always reflect the intrinsic characteristics of each vineyard. Moreover, it's difficult to find representative leaves or vines, as vineyards are highly variable in terms of soil texture, rooting depth, and other characteristics that affect vine growth, yield, and grape and wine quality.

Remote sensing becomes practical

More recently, remote sensing using thermography and energy balances has become practical for assessing large areas instead of individual leaves or vines. It operates on a simple principle: transpiring plants have lower leaf temperatures than those that aren't giving off water. In fact, Mendez said that there is a linear correlation between stomatal conductance and temperature; you can calibrate measurements with a leaf covered with water vs. one covered with Vaseline, which inhibits transpiration, although a more practical approach is to look at the difference between air and leaf temperature on a larger scale. An expensive thermographic camera flown over the vineyard, or even tied to a balloon or installed in a satellite, can measure temperature and locate warm and cool areas (i.e., those with less or more water being transpired). Another approach to remote sensing is vegetation indices, perhaps the most commonly used being NDVI (Normalized Difference Vegetation Index), which can be used to estimate plant leaf area based on the different reflectance of light from plants at different wavelengths. One limitation of remote sensing is the impact of bare ground or cover crops, which can skew readings of the measured indices, though a solar array on the ground can provide an indication of solar activity for compensation.
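As an illustration of the NDVI idea mentioned above, here is a minimal sketch in plain Python. The band values and pixel layout are invented for illustration; real workflows use co-registered red and near-infrared reflectance rasters from the camera or satellite:

```python
# NDVI = (NIR - Red) / (NIR + Red). Dense, transpiring canopy reflects
# strongly in the near infrared and absorbs red light, pushing NDVI
# toward 1; bare soil between vine rows sits much closer to 0.

def ndvi(nir_pixels, red_pixels):
    """Per-pixel NDVI for two equally sized sequences of reflectances."""
    values = []
    for nir, red in zip(nir_pixels, red_pixels):
        denom = nir + red
        # Guard against division by zero on completely dark pixels.
        values.append((nir - red) / denom if denom else 0.0)
    return values

# Hypothetical reflectances: two canopy pixels, then two bare-ground pixels.
nir_band = [0.50, 0.45, 0.30, 0.20]
red_band = [0.08, 0.10, 0.25, 0.18]
print([round(v, 2) for v in ndvi(nir_band, red_band)])  # [0.72, 0.64, 0.09, 0.05]
```

Note that an index like this only flags relative canopy vigor; as the article explains, thermal imagery is still needed to say anything about how much water the vines are actually transpiring.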
It's also important to differentiate actual vine evapotranspiration (ETa) from potential evapotranspiration (ETp), which is more commonly referenced. One approach is SEBAL (Surface Energy Balance Algorithm for Land), which combines spectral radiance readings from satellite-based sensors with actual meteorological data to calculate the energy balance at the earth's surface. It produces estimates of water consumption (actual evapotranspiration) and of biomass production for agricultural crops and native vegetation. Another energy-balance model is METRIC (Mapping Evapo-Transpiration at high Resolution with Internalized Calibration), developed by Dr. Rick Allen of the University of Idaho. In sum, Mendez noted that methods of evaluating proper irrigation levels are evolving from visual clues and vine- and soil-based measurements to large-scale monitoring via remote sensing. This can give a better indication of vineyard health and account for differences between and within vineyards, while avoiding the uncertainty caused by extrapolating a vineyard's health from perhaps unrepresentative samples.
The surface temperature of a planet is mainly controlled by two factors. The first is heating of the planet as it absorbs energy from sunlight. Just as sunlight warms your face on a warm day, it heats the surface of a planet as well. The second is cooling by way of radiation. This is sort of like how a really hot stovetop cools off after you turn off the gas or electricity. So, when it comes to a planet, there has to be some relationship between the absorption and emission of radiation, one that determines the surface temperature of the planet. To figure out how this process of heating and cooling occurs, we first have to understand how electromagnetic (EM) radiation (that is to say, light) is absorbed and emitted by matter itself. Electromagnetic radiation (light) is a form of energy, having wave-like properties, that consists of rapidly varying electric and magnetic fields. To understand all of this, we're going to use two examples: snow and a blacktop surface (that is to say, asphalt concrete). Light that hits an object will be either absorbed or reflected by it, and the portion of light that is absorbed by an object will heat it up. Snow is a bad absorber of light but a good reflector of it. That's why it's cold and blindingly bright at the same time. It's also why you can get a sunburn in the wintertime when you're around a lot of snow, even though it's cold outside: snow reflects radiation, including sunburn-causing UV light, right at you. But a blacktop is totally the opposite of snow. It's a very good absorber of light and a poor reflector of it.
That's why you can touch a blacktop on a sunny day and feel that it's very hot. But unlike snow blindness, you've never heard of blacktop blindness, have you? That's because blacktop is a poor reflector of light. Nevertheless, even a blacktop cannot perfectly absorb all wavelengths of light. A hypothetical, idealized object that is a perfect absorber of all wavelengths of radiation that fall on it is called a blackbody. It's called a blackbody because it doesn't reflect any light, and since it doesn't reflect any light, it's totally black. An object that is a good absorber of radiation is also a good emitter of radiation. This means that a blackbody is the best emitter of radiation. Why must it also be the best emitter? In very simple but direct terms, any body emits radiation at a given temperature and frequency precisely as well as it absorbs it. So a perfect absorber has to be a perfect emitter, and vice versa. For a bit more detail, consider this. We know energy must flow from a hot object to a cold object or to its cooler surroundings (thanks to the second law of thermodynamics). Simply put, this is why ice cubes melt in the sun. If a blackbody, being a perfect absorber of radiation, were anything less than a perfect emitter of radiation at the same time, it could theoretically continue to absorb net energy from its surroundings (meaning it would keep getting hotter) even if it were already hotter than its surroundings to begin with. If there were no such balance between absorption and emission, water placed near the sun could turn into ice instead of evaporating away as the sun grew even hotter. This impossible scenario clearly contradicts the laws of thermodynamics. In short, this is why a blackbody must be the best emitter of radiation. Although the blacktop we discussed before isn't a true blackbody, we can create an object that comes as close as possible to one.
This is done by using a hollow sphere with a very small hole in it. The inside of the sphere, as you can imagine, is blackened and roughened. When light passes through the hole, it strikes the opposite side of the sphere. Most of the light is absorbed, but some of it is reflected. However, unlike a blacktop exposed to the wider world, the light here has nowhere to go but to reflect off other blackened points inside the sphere. This way, as the remaining light bounces around inside the sphere, almost all of it is absorbed. It's basically a light trap: virtually all the light that enters the sphere is absorbed before it has a chance to be reflected back out of the hole. As a result, the hole in this sphere behaves like a blackbody, because essentially all of the radiation passing through the hole is absorbed. Radiation generated inside the sphere is also emitted through the hole. The electromagnetic radiation emitted by a theoretically perfect radiator (a blackbody) is known as blackbody radiation. This lesson's most important concepts are tied to Wien's Law and the Stefan-Boltzmann Law, which are covered elsewhere, but let's simply review what we've learned already. Electromagnetic (EM) radiation (light) is a form of energy, having wave-like properties, that consists of rapidly varying electric and magnetic fields. Light that hits an object will be either absorbed or reflected by it, and the portion of light that is absorbed by an object will heat it up. A hypothetical, idealized object that is a perfect absorber of all wavelengths of radiation that fall on it is called a blackbody. An object that is a good absorber of radiation is also a good emitter of radiation, which means that a blackbody is the best emitter of radiation. This radiation, termed blackbody radiation, is defined as the electromagnetic radiation emitted by a theoretically perfect radiator (that is to say, a blackbody).
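The absorption-emission balance described in this lesson can be made concrete with the Stefan-Boltzmann law mentioned above (and covered elsewhere). The following Python sketch computes a planet's blackbody equilibrium temperature; the solar flux and albedo figures are standard approximate values for Earth, used here purely for illustration:

```python
# A planet absorbs (1 - albedo) * S over its cross-sectional disk and,
# as a blackbody, radiates sigma * T^4 over its entire spherical surface.
# Setting absorbed power equal to emitted power gives:
#   T_eq = ((1 - albedo) * S / (4 * sigma)) ** 0.25
# Assumed inputs: solar flux S ~ 1361 W/m^2 at Earth, albedo ~ 0.30.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def equilibrium_temp(solar_flux: float, albedo: float) -> float:
    """Blackbody equilibrium temperature of a planet, in kelvin."""
    return ((1.0 - albedo) * solar_flux / (4.0 * SIGMA)) ** 0.25

print(round(equilibrium_temp(1361.0, 0.30)))  # about 255 K
```

The result, roughly 255 K, is well below Earth's actual average surface temperature of about 288 K; the gap is due to the greenhouse effect, which this simple absorption-equals-emission balance deliberately ignores.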
Controversial issues: guidance for schools

7. The importance of citizenship education

A consideration of the issues surrounding the teaching of controversial issues serves only to underline the importance of good citizenship education from an early age. If children become accustomed to discussing their differences in a rational way in the primary years, they are more likely to accept it as normal in their adolescence. Citizenship education helps to equip young people to deal with situations of conflict and controversy knowledgeably and tolerantly. It helps to equip them to understand the consequences of their actions, and those of the adults around them. Pupils learn how to recognize bias, evaluate arguments, weigh evidence, and look for alternative interpretations, viewpoints, and sources of evidence; above all, to give good reasons for the things they say and do, and to expect good reasons to be given by others.
The history of federalism in Nigeria can be traced to the division of the country into three provinces (Northern Province, Western Province, and Eastern Province) by Governor Bernard Bourdillon in 1939. Governor Bourdillon (1935-1943) recommended the replacement of the provinces by regions, which Arthur Richards' Constitution later implemented in 1946. It was the Richards Constitution that introduced the idea of a federal structure, though it did not carry it through to completion. In 1953, Governor Macpherson's constitution improved on Richards' by creating a House of Representatives with powers to make law for the country and Regional Houses of Assembly to make law for the regions. Later, in 1954, the Lyttleton constitution introduced a federal system of government for the country. This was a result of the constitutional conference held in London in 1953 (the 1953 London Constitutional Conference), where it was decided that Nigeria should become a federal state. Federalism is a system of government whereby power is constitutionally shared between the central government and other component units, e.g. state/regional and local governments, but in 1954 there were only the central and regional governments in Nigeria. Their powers and functions were allocated to them by the constitution: exclusive legislative functions were reserved for the central government, concurrent legislative functions were shared by both the central and regional governments, and residual legislative functions were left to the regions. Here are some reasons why federalism was introduced in Nigeria:
* Cultural diversity
* Fear of domination by the minorities
* The size of the country
* Geographical factors
* Bringing government nearer to the people
* British colonial policy
* Economic factors
* Effective administration

Reference: C. C. Dibie; Essential Government for Senior Secondary Schools; 3rd edition; Lagos; Tonad Publishers; 2008