Introduction

According to the Merriam-Webster Dictionary (2016), one definition of language is "the system of words or signs that people use to express thoughts and feelings to each other". Language, then, is used to express thoughts and feelings, and vocabulary is necessary to express them, so learning the meanings of words is fundamental. Many teachers believe that defining words before reading a text is an effective instructional technique because it supports vocabulary growth and helps students comprehend what they read; however, research indicates otherwise. Teaching English vocabulary can be challenging, and it takes a dedicated teacher to do it well.

The beneficiaries of this study are the Ministry of Education, teachers of English as a second language, and learners. The Ministry can conduct in-service training workshops to help teachers acquire knowledge and skills. When English teachers are well equipped to pronounce English words, they will be in a position to help learners improve their own pronunciation, and learners will acquire this knowledge in English lessons and use the methods that help them improve.

The ability to speak is thus a crucial part of the second language learning and teaching process. Mastery of speaking in English is a priority for many second or foreign language students. Although periods of focusing on language form and building vocabulary are important in English language learning, developing the students' ability to really communicate in English in the classroom is the main goal of English language teaching. At the end of the study, the students should be able to communicate effectively in English for study, work, and leisure outside the classroom. It is therefore essential that English teachers pay close attention to teaching speaking.

The Aural-Oral approach is very effective in English language teaching for building students' communicative competence: it strengthens listening and speaking and increases students' vocabulary. The aim of this essay is to give a realistic picture of how the Aural-Oral approach can be used during English learning to improve both listening and speaking and so develop students' communicative competence. The first focus of the Aural-Oral approach is to teach English for students.

Concerning the importance of the speaking skill, Gammidge (2004, p. 7) claims that "speaking is a highly challenging yet essential skill for most learners to acquire." In addition, Renandya and Richards (2002) state that "a large percentage of the world's language learners study English in order to develop proficiency in speaking" (p. 201). Many learners of English as a foreign language consider mastery of speaking a priority, and they evaluate their success according to their spoken language proficiency (Richards, 2008, p. 19). For many teachers, teaching speaking is therefore very important. A lack of proper pronunciation causes problems for students in real-life communication, and most students believe that better pronunciation would make them more confident in English. Yet pronunciation is generally neglected in classrooms, and even when considerable time is devoted to it, students still need to practice individually; practicing only in the classroom is not enough to achieve the desired results.

The U.S. is known as the land of opportunity, and education is the main way to accomplish one's dreams, but for someone who moves to the U.S. from another country, not being fluent in English can be a huge barrier to accomplishing what one wants and succeeding in the educational system. The people who run the education system have therefore set specific standards for teaching students who are learning the English language. The standard for students who don't speak English is supposed to help them learn English while keeping up with the subjects that English-speaking students are learning and being tested on. The idea is sound and should promote both content retention and the development of academic English, but that is not what is happening. In theory this program would be very beneficial to many English language learners, but as the courses are set up, the main focus is not teaching the students content but rather teaching them English.

To help learners develop certainty and confidence, teachers must examine the purposes of reading instruction and enable learners to build declarative, procedural, and conditional knowledge of these cognitive strategies, thereby promoting learners' metacognitive control of particular learning strategies. The linguistic foundations of reading and writing development rest on the view that the writer or reader uses knowledge of the world and of the structure of language to make sense of reading or writing content. According to research linguists, all cultures try to represent key aspects of their spoken language in their written languages. Based on major developments and contributions, "letters and letter units correspond to particular sounds (phonemes); spaces in between words represent junctures in spoken language; and typographical features represent other linguistic properties (emphasis, the end of the sentence, etc.)"

Reading about CLT made me conscious of its potential for addressing the difficulty in communication that my students had, and this is what led me to research the principles that I have chosen. I will give my own perspective on the use of authentic language, the use of games, and the expression of thoughts and ideas in EFL classrooms, and I will offer evidence from the literature to support my position. My choice of principles is shown in Appendix 1. I believe that understanding these principles will allow me to support my learners effectively in their attempts to speak English.
Geological time scale

The vast expanse of geological time has been separated into eras, periods, and epochs. The numbers included below refer to the beginnings of the division in which the title appears, and are in millions of years. The named divisions of time are for the most part based on fossil evidence and principles for relative dating developed over the past two hundred years. Only with the application of radiometric dating have numbers been obtained for the divisions observed from field observations. Adapted from Lutgens and Tarbuck, who cite the Geological Society of America as the source of the data.

There is another kind of time division used - the "eon". The entire interval of the existence of visible life is called the Phanerozoic eon. The great Precambrian expanse of time is divided into the Proterozoic, Archean, and Hadean eons in order of increasing age. The names of the eras in the Phanerozoic eon (the eon of visible life) are the Cenozoic ("recent life"), Mesozoic ("middle life") and Paleozoic ("ancient life"). The further subdivision of the eras into 12 "periods" is based on identifiable but less profound changes in life-forms. In the most recent era, the Cenozoic, there is a further subdivision of time into epochs. Lutgens & Tarbuck Ch 2, 18

Geologic Time and the Geologic Column

This approach to the sweep of geologic time follows that in "The Grand Canyon", C. Hill et al., eds., to organize the different periods of life since the beginning of the Cambrian period. The time data from radiometric dating is taken from that source. The times are in millions of years. For examples that cover most of these time periods, see the outline of the Grand Canyon and Grand Staircase. Some descriptive information about the different divisions of geologic time is given below.

Lutgens & Tarbuck take on the task of surveying Earth history in one chapter, Chapter 19 of Essentials of Geology. The brief outline below draws from that material and elsewhere to provide a brief sketch of Earth history. Note that the dates in millions of years are representative values. Research publications would give error bars for such division dates - it is not implied here that these boundaries are known to 3 or 4 significant digits. The division of the geologic column into different periods is largely based upon the varieties of fossils found, taken as indicators of a time period in Earth's history.

In the time scale of Lutgens & Tarbuck, the Quaternary Period is further divided into the Pleistocene Epoch from 1.8 to 0.01 Myr and the most recent Holocene Epoch from 0.01 Myr to the present. By the beginning of the Quaternary Period, most of the major plate tectonic movements which formed the North American continent had taken place, and the main modifications after that were those produced by glacial action and erosion processes. Human beings emerged during this period.

The Paleogene Period (or the early part of the Tertiary Period) represents the time period after the major extinction that wiped out the dinosaurs and about half of the known species worldwide. Lutgens & Tarbuck further subdivide this time period into the Paleocene Epoch (65-54.8 Myr), the Eocene Epoch (54.8-33.7 Myr), and the Oligocene Epoch (33.7-23.8 Myr).

The Cretaceous Period is perhaps most familiar because of the major extinction event which marks the Cretaceous-Tertiary boundary. It is typically called the K-T extinction, using the first letter of the German spelling of Cretaceous, and it marked the end of the dinosaurs.
There is a large body of evidence associating this extinction with the large impact crater at Chicxulub, Yucatán Peninsula, Mexico. The Cretaceous, Jurassic and Triassic Periods are collectively referred to as the "age of reptiles". The first flowering plants appeared near the beginning of the Cretaceous Period. Evidence suggests that a vast shallow sea invaded much of western North America, and the Atlantic and Gulf coastal regions, during the Cretaceous Period. This created great swamps and resulted in Cretaceous coal deposits in the western United States and Canada.

The distinctive fossil progression characteristic of the Jurassic Period was first found in the Jura Mountains on the border of France and Switzerland. Dinosaurs and other reptiles were the dominant species, and the Jurassic saw the first appearance of birds. It appears that a shallow sea again invaded North America at the beginning of the Jurassic Period, but next to that sea vast continental sediments were deposited on the Colorado plateau. This includes the Navajo Sandstone, a white quartz sandstone that appears to be windblown and reaches a thickness near 300 meters. The early Jurassic Period at about 200 Myr saw the beginning of the breakup of Pangaea, and a rift developed between what is now the United States and western Africa, giving birth to the Atlantic Ocean. The westward-moving North American plate began to override the Pacific plate, and the continuing subduction of the Pacific plate contributed to the western mountains and to the igneous activity that resulted in the Rocky Mountains. Dinosaurs became the dominant species in the Triassic Period.

The Permian Period is named after the Perm region of Russia, where the types of fossils characteristic of that period were first discovered by geologist Roderick Murchison in 1841. The Permian, Pennsylvanian and Mississippian Periods are collectively referred to as the "age of amphibians". By the end of the Permian Period the once dominant trilobites were extinct, along with many other marine animals. Lutgens & Tarbuck label this extinction "The Great Paleozoic Extinction" and comment that it was the greatest of at least five major extinctions over the past 600 million years. The modeling of plate tectonics suggests that at the end of the Permian Period the continents were all together in the form called Pangaea, and that the separations that have created today's alignment of continents have all occurred since that time. There is much discussion about the causes of the dramatic biological decline of that time. One suggestion is that having just one vast continent may have made seasons much more severe than today.

The Pennsylvanian Period saw the emergence of the first reptiles. This period saw the development of large tropical swamps across North America, Europe and Siberia, which are the source of great coal deposits. The period is named after the area of fine coal deposits in Pennsylvania.

The Devonian and Silurian Periods are referred to as the "age of fishes". In the Devonian Period fishes were dominant, and primitive sharks developed. Toward the end of the Devonian there is evidence of insects, with the first insect fossils. From finger-sized earlier coastal plants, land plants developed and moved away from the coasts; by the end of the Devonian, fossil evidence suggests forests with trees tens of meters high. The Devonian Period is named after Devon in the west of England. By the late Devonian, two groups of bony fishes, the lungfish and the lobe-finned fish, had adapted to land environments, and true air-breathing amphibians developed.
The amphibians continued to diversify with abundant food and minimal competition and became more like modern reptiles. The Ordovician and Cambrian Periods are referred to as the "age of invertebrates", with trilobites abundant. In the Ordovician, brachiopods became more abundant than the trilobites, but all but one species of them are extinct today. Large cephalopods also developed in the Ordovician as predators of size up to 10 meters; they are considered to be the first large organisms. The later part of the Ordovician saw the appearance of the first fishes.

The beginning of the Cambrian is the time of the first organisms with shells. Trilobites were dominant toward the end of the Cambrian Period, with over 600 genera of these mud-burrowing scavengers. The Cambrian Period marks the time of emergence of a vast number of fossils of multicellular animals, and this proliferation of the evidence for complex life is often called the "Cambrian Explosion". Models of plate tectonic movement suggest a very different world at the beginning of the Cambrian, with the plate which became North America largely devoid of life as a barren lowland. Shallow seas encroached and then receded.

Near the end of the Precambrian, there is fossil evidence of diverse and complex multicelled organisms. Most of the evidence is in the form of trace fossils, such as trails and worm holes. It is judged that most Precambrian life forms lacked shells, making the detection of fossils more difficult. Plant fossils were found somewhat earlier than animal fossils. There is no coal, oil or natural gas in Precambrian rock. Rocks from the middle Precambrian, 1200 - 2500 Myr, hold most of the Earth's iron ore, mainly as hematite (Fe2O3). This can be taken as evidence that the oxygen content of the atmosphere was increasing during that period, and that it was abundant enough to react with the iron dissolved in shallow lakes and seas. The process of oxidizing all that iron may have delayed the buildup of atmospheric oxygen from photosynthetic life. There is an observable end to this formation of iron ore, so the increase in atmospheric oxygen would have been expected to accelerate at that time.

Fossilized evidence for life is much less dramatic in the Precambrian time frame, which amounts to about 88% of Earth's history. The most common Precambrian fossils are stromatolites, mounds of material deposited by algae, which become common about 2000 Myr in the past. Bacteria and blue-green algae fossils have been found in Gunflint Chert rocks at Lake Superior, dating to 1700 Myr; these represent prokaryotic life. Eukaryotic life has been found at about 1000 Myr at Bitter Springs, Australia, in the form of green algae. Evidence for prokaryotic life such as bacteria and blue-green algae has been found in southern Africa, dated to 3100 Myr. Banded iron formations have been dated to 3700 Myr; presuming that this requires oxygen, and that the only source of molecular oxygen in this era was photosynthesis, this makes a case for life in this time period. There are also stromatolites dated to 3500 Myr.

The age of the Earth is estimated to be about 4500 Myr from radiometric dating of the oldest rocks and meteorites. There is evidence of a time of intense bombardment of the Earth in the time period from about 4100 to 3800 Myr, in what is called the "late heavy bombardment". There is ongoing discussion about what may have caused this time of intense impacts.
There is no evidence for life in this eon, the Hadean, whose name translates to "hellish".

Hill, C., Davidson, G., Helble, T., & Ranney, W., eds., The Grand Canyon, Monument to an Ancient Earth. Lutgens & Tarbuck Ch 18, 19

Principles for Relative Dating of Geological Features

From over two hundred years of careful field explorations by geologists, a number of practical principles for determining the relative dates of geologic features have emerged. The assignment of numerical ages to these relative dates had to await the development of radiometric dating. Lutgens & Tarbuck
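As a brief aside on how numerical ages are attached to these relative dates: radiometric dating rests on the standard exponential-decay law (a general textbook relation, not a formula quoted from the sources above). If $N$ parent atoms remain in a closed mineral and $D$ daughter atoms have accumulated from decay, then

$$N(t) = N_0 e^{-\lambda t}, \qquad \lambda = \frac{\ln 2}{t_{1/2}},$$

and the age of the sample follows as

$$t = \frac{t_{1/2}}{\ln 2}\,\ln\!\left(1 + \frac{D}{N}\right).$$

For instance, a mineral in which the daughter and parent isotopes are present in equal amounts ($D = N$) is one half-life old.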
Food Protein-Induced Enterocolitis Syndrome (FPIES) Defined

Food Protein-Induced Enterocolitis Syndrome (FPIES), sometimes referred to as a delayed food allergy, is a severe condition causing vomiting and diarrhea. In some cases, symptoms can progress to dehydration and shock brought on by low blood pressure and poor blood circulation. Much like other food allergies, FPIES allergic reactions are triggered by ingesting a food allergen. Unlike a typical food allergy, FPIES allergic reactions are delayed, occurring within hours after eating the trigger allergen. Most children with FPIES have only one or two food triggers, but it is possible to have FPIES reactions to multiple foods. FPIES often develops in infancy, usually when a baby is introduced to solid food or formula.
Three scientists have jointly earned the Nobel Prize in physics for their work on blue LEDs, or light-emitting diodes. Why blue in particular? Well, blue was the last — and most difficult — advance required to create white LED light. And with white LED light, companies are able to create smartphone and computer screens, as well as light bulbs that last longer and use less electricity than any bulb invented before.

LEDs are basically semiconductors that have been built so they emit light when they're activated. Different chemicals give different LEDs their colors. Engineers made the first LEDs in the 1950s and 60s. Early iterations included laser-emitting devices that worked only when bathed in liquid nitrogen. At the time, scientists developed LEDs that emitted everything from infrared light to green light… but they couldn't quite get to blue. That required chemicals, including carefully created crystals, that they weren't yet able to make in the lab.

Once they did figure it out, however, the results were remarkable. A modern white LED lightbulb converts more than 50 percent of the electricity it uses into light. Compare that to the 4 percent conversion rate for incandescent bulbs, and you have one efficient bulb. Besides saving money and electricity for all users, white LEDs' efficiency makes them appealing for getting lighting to folks living in regions without an electricity supply. A solar installation can charge an LED lamp to last a long time, allowing kids to do homework at night and small businesses to continue working after dark. LEDs also last up to 100,000 hours, compared to 10,000 hours for fluorescent lights and 1,000 hours for incandescent bulbs. Switching more houses and buildings over to LEDs could significantly reduce the world's electricity and materials consumption for lighting.

A white LED light is easy to make from a blue one. Engineers use a blue LED to excite some kind of fluorescent chemical in the bulb. That converts the blue light to white light.

Two of this year's prize winners, Isamu Akasaki and Hiroshi Amano, worked together on producing high-quality gallium nitride, a chemical that appears in many of the layers in a blue LED. The previous red and green LEDs used gallium phosphide, which was easier to produce. Akasaki and Amano discovered how to add chemicals to gallium nitride semiconductors in such a way that they would emit light efficiently. The pair built structures with layers of gallium nitride alloys. The third prize-winner, Shuji Nakamura, also worked on making high-quality gallium nitride. He figured out why gallium nitride semiconductors treated with certain chemicals glow. He built his own gallium nitride alloy-based structures. Both Nakamura's and Akasaki's groups will continue to work on making even more efficient blue LEDs, the committee for the Nobel Prize in physics said in a statement.

Nakamura is now a professor at the University of California, Santa Barbara, although he began his LED research at a small Japanese chemical company called Nichia Chemical Corporation. Akasaki and Amano are professors at Nagoya University in Japan. In the future, engineers may make white LEDs by combining red, green, and blue ones, which would make a light with tunable colors, the Nobel Committee wrote.
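A quick arithmetic check using the article's own figures shows the size of the saving: for the same light output, the power drawn scales inversely with the conversion efficiency, so

$$\frac{P_{\text{incandescent}}}{P_{\text{LED}}} \approx \frac{0.50}{0.04} \approx 12.5.$$

On these numbers, an incandescent bulb draws roughly twelve times the power of a white LED for the same amount of light.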
Pulmonary function tests are a group of tests that measure how well the lungs take in and release air and how well they move gases such as oxygen from the atmosphere into the body's circulation.

How the Test is Performed

Spirometry measures airflow. By measuring how much air you exhale, and how quickly, spirometry can evaluate a broad range of lung diseases. In a spirometry test, while you are sitting, you breathe into a mouthpiece that is connected to an instrument called a spirometer. The spirometer records the amount and the rate of air that you breathe in and out over a period of time. If the test is done standing, some of the numbers may be slightly different; the most important point is to always perform the test in the same position. For some of the test measurements, you can breathe normally and quietly. Other tests require forced inhalation or exhalation after a deep breath. Sometimes you will be asked to inhale a substance or a medicine to see how it changes your test results.

Lung volume measurement can be done in two ways. The most accurate way is to sit in a sealed, clear box that looks like a telephone booth (body plethysmograph) while breathing in and out into a mouthpiece. Changes in pressure inside the box help determine the lung volume. Lung volume can also be measured when you breathe nitrogen or helium gas through a tube for a certain period of time. The concentration of the gas in a chamber attached to the tube is measured to estimate the lung volume.

To measure diffusion capacity, you breathe a harmless gas, called a tracer gas, for a very short time, often for only one breath. The concentration of the gas in the air you breathe out is measured. The difference in the amount of gas inhaled and exhaled measures how effectively gas travels from the lungs into the blood. This test allows the doctor to estimate how well the lungs move oxygen from the air into the bloodstream.
Imagine that you are a nutritionist trying to explore the nutritional content of food. What is the best way to differentiate food items? By vitamin content? Protein levels? Or perhaps a combination of both?

Knowing the variables that best differentiate your items has several uses:

1. Visualization. Using the right variables to plot items will give more insights.

2. Uncovering Clusters. With good visualizations, hidden categories or clusters could be identified. Among food items for instance, we may identify broad categories like meat and vegetables, as well as sub-categories such as types of vegetables.

The question is, how do we derive the variables that best differentiate items? Principal Components Analysis (PCA) is a technique that finds underlying variables (known as principal components) that best differentiate your data points. Principal components are dimensions along which your data points are most spread out.

A principal component can be expressed by one or more existing variables. For example, we may use a single variable – vitamin C – to differentiate food items. Because vitamin C is present in vegetables but absent in meat, the resulting plot will differentiate vegetables from meat, but meat items will be clumped together. To spread the meat items out, we can use fat content in addition to vitamin C levels, since fat is present in meat but absent in vegetables. However, fat and vitamin C levels are measured in different units. To combine the two variables, we first have to normalize them, meaning to shift them onto a uniform standard scale, which would allow us to calculate a new variable – vitamin C minus fat. Combining the two variables helps to spread out both vegetable and meat items. The spread can be further improved by adding fiber, of which vegetable items have varying levels. This new variable – (vitamin C + fiber) minus fat – achieves the best data spread yet.

While in this demonstration we tried to derive principal components by trial-and-error, PCA does this by systematic computation. Using data from the United States Department of Agriculture, we analyzed the nutritional content of a random sample of food items. Four nutrition variables were analyzed: Vitamin C, Fiber, Fat and Protein. For fair comparison, food items were raw and measured per 100 g.

Among food items, the presence of certain nutrients appears correlated: fat and protein levels seem to move in the same direction as each other, and in the opposite direction from fiber and vitamin C levels. To confirm our hypothesis, we can check for correlations (tutorial: correlation analysis) between the nutrition variables. As expected, there are large positive correlations between fat and protein levels (r = 0.56), as well as between fiber and vitamin C levels (r = 0.57). Therefore, instead of analyzing all 4 nutrition variables, we can combine highly-correlated variables, leaving just 2 dimensions to consider. This is the same strategy used in PCA – it examines correlations between variables to reduce the number of dimensions in the dataset. This is why PCA is called a dimension reduction technique.

Applying PCA to this food dataset results in principal components whose entries are the weights used in combining the original variables.
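To make the workflow concrete, here is a minimal R sketch of the same steps: standardizing the variables, checking correlations, and running PCA. The `food` data frame below is a randomly generated stand-in for the USDA sample used in the post (only the column names are carried over from the text):

```r
# Hypothetical data frame: one row per food item, raw nutrient amounts per 100 g
set.seed(1)
food <- data.frame(VitaminC = rexp(50), Fiber = rexp(50),
                   Fat = rexp(50), Protein = rexp(50))

# Normalize: shift each variable onto a standard scale (mean 0, sd 1)
scaled <- scale(food)

# Check pairwise correlations between the nutrition variables
round(cor(scaled), 2)

# PCA; prcomp can also standardize internally via center = TRUE, scale. = TRUE
pca <- prcomp(scaled)
pca$rotation                    # weights (loadings) combining variables into PCs
summary(pca)                    # proportion of variance explained per component
screeplot(pca, type = "lines")  # scree plot for choosing the number of PCs

# PC scores for plotting items on the top 2 principal components
scores <- pca$x[, 1:2]
plot(scores, xlab = "PC1", ylab = "PC2")
```

With real data, the signs and magnitudes in `pca$rotation` correspond to the pairings discussed next, and `summary(pca)` gives the percentages read off a scree plot.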
For example, to get the top principal component (PC1) value for a particular food item, we add up the amount of Fiber and Vitamin C it contains, with slightly more emphasis on Fiber, and then from that we subtract the amount of Fat and Protein it contains, with Protein negated to a larger extent. We observe that the top principal component (PC1) summarizes our findings so far – it has paired fat with protein, and fiber with vitamin C. It also takes into account the inverse relationship between the pairs. Hence, PC1 likely serves to differentiate meat from vegetables. The second principal component (PC2) is a combination of two unrelated nutrition variables – fat and vitamin C. It serves to further differentiate sub-categories within meat (using fat) and vegetables (using vitamin C).

Using the top 2 principal components to plot food items results in the best data spread thus far: meat items (blue) have low PC1 values, and are thus concentrated on the left of the plot, on the opposite side from vegetable items (orange). Among meats, seafood items (dark blue) have lower fat content, so they have lower PC2 values and are at the bottom of the plot. Several non-leafy vegetarian items (dark orange), having lower vitamin C content, also have lower PC2 values and appear at the bottom.

Choosing the Number of Components. As principal components are derived from existing variables, the information available to differentiate data points is constrained by the number of variables you start with. Hence, the above PCA on food items only generated 4 principal components, corresponding to the original number of variables in the dataset. Principal components are also ordered by their effectiveness in differentiating data points, with the first principal component doing so to the largest degree. To keep results simple and generalizable, only the first few principal components are selected for visualization and further analysis. The number of principal components to consider is determined by a scree plot, which shows the decreasing effectiveness of subsequent principal components in differentiating data points. A rule of thumb is to use the number of principal components corresponding to the location of a kink in the plot. In our example, the kink is located at the second component. This means that even though having three or more principal components would better differentiate data points, this extra information may not justify the resulting complexity of the solution. As we can see from the scree plot, the top 2 principal components already account for about 70% of data spread. Using fewer principal components to explain the current data sample better ensures that the same components can be generalized to another data sample.

Maximizing Spread. The main assumption of PCA is that dimensions that reveal the largest spread among data points are the most useful. However, this may not be true. A popular counter-example is the task of counting pancakes arranged in a stack, with pancake mass representing data points. To count the number of pancakes, one pancake is differentiated from the next along the vertical axis (i.e. height of the stack). However, if the stack is short, PCA would erroneously identify a horizontal axis (i.e. diameter of the pancakes) as a useful principal component for our task, as it would be the dimension along which there is the largest spread.
Interpreting Components. If we are able to interpret the principal components of the pancake stack, with intelligible labels such as "height of stack" or "diameter of pancakes", we might be able to select the correct principal components for analysis. However, this is often not the case. Interpretations of generated components have to be inferred, and sometimes we may struggle to explain the combination of variables in a principal component. Nonetheless, having prior domain knowledge could help. In our example with food items, prior knowledge of major food categories helps us to comprehend why nutrition variables are combined the way they are to form principal components.

Orthogonal Components. One major drawback of PCA is that the principal components it generates must not overlap in space; in other words, they are orthogonal components, always positioned at 90 degrees to each other. This assumption is restrictive, as informative dimensions may not necessarily be orthogonal to each other. To resolve this, we can use an alternative technique called Independent Component Analysis (ICA). ICA allows its components to overlap in space, so they do not need to be orthogonal. Instead, ICA forbids its components from overlapping in the information they contain, aiming to reduce the mutual information shared between components. Hence, ICA's components are independent, with each component revealing unique information about the data set.

Information has thus far been represented by the degree of data spread, with dimensions along which data is more spread out being more informative. This may not always be true, as seen from the pancake example. However, ICA is able to overcome this by taking into account other sources of information apart from data spread. Therefore, ICA may be a backup technique to use if we suspect that components need to be derived based on information beyond data spread, or that components may not be orthogonal.

PCA is a classic technique for deriving underlying variables, reducing the number of dimensions we need to consider in a dataset. In our example above, we were able to visualize the food dataset in a 2-dimensional graph, even though it originally had 4 variables. However, PCA makes several assumptions, such as relying on data spread and orthogonality to derive components. ICA, on the other hand, is not subject to these assumptions. Therefore, when in doubt, one could consider running an ICA to verify and complement the results from a PCA.
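For comparison with the PCA sketch earlier, here is a minimal ICA sketch in R. The post does not name a package, so this uses the third-party fastICA package from CRAN as one reasonable choice; `scaled` is the standardized matrix from the PCA sketch:

```r
# install.packages("fastICA")  # third-party CRAN package, assumed installed
library(fastICA)

# Extract 2 independent components from the standardized nutrition data.
# Unlike PCA, the recovered directions need not be orthogonal; instead the
# components are made as statistically independent as possible.
ica <- fastICA(scaled, n.comp = 2)

ica$S[1:5, ]  # independent component scores for the first few items
ica$A         # estimated mixing matrix (how components combine into variables)
```

Plotting `ica$S` side by side with the PCA scores is a simple way to check whether the two techniques tell a consistent story about the data.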
Helaine Camélia | September 10, 2021 | Math Worksheets

Using the easy-to-use features available on your computer, you can create a worksheet that will aid you in your teaching, and your students will greatly enjoy learning with it. With a few simple steps, you can definitely enhance the learning experiences of your students.

What to Consider When Using a Writing Worksheet

Parents and teachers should always take into consideration the child or student they are teaching. It is good to customize the worksheet based on the profile of the learner. For example, if it is a preschooler you are teaching, it is best to choose worksheets that have colorful graphics so that they do not lose interest. Additionally, the use of simple words is also necessary to promote understanding, especially for young kids. Older kids can very well benefit from worksheets that bring out their creative thinking abilities and those that will help them widen their vocabulary.

Using pictures – While your child is still learning to recognize the letters of the alphabet, you can use pictures (or the actual item) to help them practice their sounds. Find pictures of a bird, a ball, a bat, a bath, a book, and so forth to practice the letter 'B'. Choose a letter for the day and encourage your child to find items that start with that letter around the house. Printable worksheets should have nice exercises for this as well.

Before creating the worksheet for children, it is important to understand why the worksheet is being made. Is there a message to be conveyed? Can students record information that can be understood later? Is it being created to just teach a basic concept to little children? A well designed worksheet will make its objective clear. The different aspects that should influence the design of the worksheet are the age, ability and motivation of the students.

There are many types of writing worksheets. There are the cursive writing worksheets and the kindergarten worksheets. The latter is more on letter writing and number writing. This is typically given to kids aged four to seven to first teach them how to write. Through these worksheets, they learn muscle control in their fingers and wrist by repeatedly following the strokes of writing each letter.

Read, read and read some more – You don't need worksheets for this one either, but you may want to join the local library rather than spend a fortune on books that your child outgrows as quickly as they outgrow their clothes! The more you read to your child, with your child, and in front of your child, the quicker they will learn to read, and learn how to enjoy it too.
Scientists find evidence that the early solar system harbored a gap between its inner and outer regions. The cosmic boundary, perhaps caused by a young Jupiter or an emerging wind, likely shaped the composition of infant planets.

In the early solar system, a "protoplanetary disk" of dust and gas rotated around the sun and eventually coalesced into the planets we know today. A new analysis of ancient meteorites by scientists at MIT and elsewhere suggests that a mysterious gap existed within this disk around 4.567 billion years ago, near the location where the asteroid belt resides today. The team's results, published on October 15, 2021, in Science Advances, provide direct evidence for this gap.

"Over the last decade, observations have shown that cavities, gaps, and rings are common in disks around other young stars," says Benjamin Weiss, professor of planetary sciences in MIT's Department of Earth, Atmospheric and Planetary Sciences (EAPS). "These are important but poorly understood signatures of the physical processes by which gas and dust transform into the young sun and planets."

Likewise, the cause of such a gap in our own solar system remains a mystery. One possibility is that Jupiter may have been an influence. As the gas giant took shape, its immense gravitational pull could have pushed gas and dust toward the outskirts, leaving behind a gap in the developing disk. Another explanation may have to do with winds emerging from the surface of the disk. Early planetary systems are governed by strong magnetic fields. When these fields interact with a rotating disk of gas and dust, they can produce winds powerful enough to blow material out, leaving behind a gap in the disk.

Regardless of its origins, a gap in the early solar system likely served as a cosmic boundary, keeping material on either side of it from interacting. This physical separation could have shaped the composition of the solar system's planets. For instance, on the inner side of the gap, gas and dust coalesced into terrestrial planets, including the Earth and Mars, while gas and dust relegated to the farther side of the gap formed in icier regions, as Jupiter and its neighboring gas giants did.

"It's pretty hard to cross this gap, and a planet would need a lot of external torque and momentum," says lead author and EAPS graduate student Cauê Borlina. "So, this provides evidence that the formation of our planets was restricted to specific regions in the early solar system."

Weiss and Borlina's co-authors include Eduardo Lima, Nilanjan Chatterjee, and Elias Mansbach of MIT; James Bryson of Oxford University; and Xue-Ning Bai of Tsinghua University.
A split in space

Over the last decade, scientists have observed a curious split in the composition of meteorites that have made their way to Earth. These space rocks originally formed at different times and locations as the solar system was taking shape. Those that have been analyzed exhibit one of two isotope combinations. Rarely have meteorites been found to exhibit both — a conundrum known as the "isotopic dichotomy." Scientists have proposed that this dichotomy may be the result of a gap in the early solar system's disk, but such a gap has not been directly confirmed.

Weiss' group analyzes meteorites for signs of ancient magnetic fields. As a young planetary system takes shape, it carries with it a magnetic field, the strength and direction of which can change depending on various processes within the evolving disk. As ancient dust gathered into grains known as chondrules, electrons within chondrules aligned with the magnetic field in which they formed. Chondrules can be smaller than the diameter of a human hair, and are found in meteorites today. Weiss' group specializes in measuring chondrules to identify the ancient magnetic fields in which they originally formed. In previous work, the group analyzed samples from one of the two isotopic groups of meteorites, known as the noncarbonaceous meteorites. These rocks are thought to have originated in a "reservoir," or region of the early solar system, relatively close to the sun. Weiss' group previously identified the ancient magnetic field in samples from this close-in region.

A meteorite mismatch

In their new study, the researchers wondered whether the magnetic field would be the same in the second isotopic, "carbonaceous" group of meteorites, which, judging from their isotopic composition, are thought to have originated farther out in the solar system. They analyzed chondrules, each measuring about 100 microns, from two carbonaceous meteorites that were discovered in Antarctica. Using a superconducting quantum interference device, or SQUID, a high-precision microscope in Weiss' lab, the team determined each chondrule's original, ancient magnetic field. Surprisingly, they found that the field strength of these chondrules was stronger than that of the closer-in noncarbonaceous meteorites they previously measured. As young planetary systems are taking shape, scientists expect that the strength of the magnetic field should decay with distance from the sun. In contrast, Borlina and his colleagues found the far-out chondrules had a stronger magnetic field, of about 100 microteslas, compared to a field of 50 microteslas in the closer chondrules. For reference, the Earth's magnetic field today is around 50 microteslas.

A planetary system's magnetic field is a measure of its accretion rate, or the amount of gas and dust it can draw into its center over time. Based on the carbonaceous chondrules' magnetic field, the solar system's outer region must have been accreting much more mass than the inner region. Using models to simulate various scenarios, the team concluded that the most likely explanation for the mismatch in accretion rates is the existence of a gap between the inner and outer regions, which could have reduced the amount of gas and dust flowing toward the sun from the outer regions. "Gaps are common in protoplanetary systems, and we now show that we had one in our own solar system," Borlina says.
"This gives the answer to this weird dichotomy we see in meteorites, and provides evidence that gaps affect the composition of planets."

Reference: "Paleomagnetic evidence for a disk substructure in the early solar system" by Cauê S. Borlina, Benjamin P. Weiss, James F. J. Bryson, Xue-Ning Bai, Eduardo A. Lima, Nilanjan Chatterjee and Elias N. Mansbach, 15 October 2021, Science Advances.

This research was supported, in part, by NASA and the National Science Foundation.
Between the vertebrae in the spinal column are intervertebral discs that act like shock absorbers. Each consists of a disc (like a cushion filled with nucleus pulposus, which has the consistency of jelly) and ligaments that surround and anchor it in place.

What you should know about disc disorders

- There are many types of disc disorders.
- Disc disorders do not always cause pain.
- Disc disorders can occur in anyone at any time, but they are more common in older people.
- A disc disorder can be caused by an injury or accident, but it may also happen over time due to degeneration ("wear and tear").

Some main types of disc disorders

For a physician, a disc disorder is any problem with the disc(s) between the vertebrae. The main types of disc disorders are:

- Herniated or ruptured disc
- Degenerative disc disease
- Thinning disc
- Bulging disc

Herniated or ruptured disc

A herniated or ruptured disc is a "noncontained" disc disorder because the disc has partly or completely broken open and the nucleus pulposus is no longer contained in the disc. When the nucleus pulposus leaks out, two things happen:

- This jelly-like substance can spread out and put pressure on the spinal cord or nerves.
- The disc loses some of its ability to be a shock absorber.

Sometimes people call a herniated or ruptured disc a "slipped disc," but the disc does not literally slip. It breaks open.

Degenerative disc disease and thinning discs

Degenerative disc disease occurs when the disc starts to wear out. The disc may shrink, lose its shape, lose its flexibility, wear out or get very thin. Degenerative disc disease (DDD) is technically not a disease at all, but would be called a "condition" by medical experts. Disc degeneration is a natural part of aging, and it is more common in older people. Whether or not disc degeneration produces symptoms, and how severe those symptoms are, can vary among individuals. DDD is the general term for all disorders involving the progressive degeneration of the disc. Thinning discs are a particularly common form of DDD. DDD is considered a "contained" disc disorder because the nucleus pulposus does not leak out.

Bulging disc

A bulging disc is a "contained" disc disorder because the disc has not yet broken open. However, a bulging disc is likely to rupture. The bulge may extend in any direction and may press on a nerve or cause pain. However, some people with bulging discs have no symptoms.

Diagnosing disc disorders

University Spine Center specializes in the diagnosis of disc disorders. Disc disorders can be challenging to diagnose because the symptoms vary among patients and do not necessarily match the severity of the condition. A person with a severe problem may have mild symptoms and, vice versa, a person with a mild problem may have severe symptoms. Typical symptoms of disc disorders include:

- Back pain
- Leg pain
- A sense of numbness, tingling or "pins and needles"
- Leg weakness
In this example you will learn how to check if a number is prime or composite in R programming. You will also print a list of prime numbers from 2 to a given number.

A prime number is defined as any positive number which is only divisible by 1 and itself. Any number which is not prime is called composite. 1 is considered neither prime nor composite. Examples of prime numbers are 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, ...

If you want to check whether a number is prime or not, simply look at its factors. If it has only two factors, i.e. 1 and the number itself, then it is a prime number.

Check if 10 is a prime number: the factors of 10 are 1, 2, 5, 10. Hence it is not a prime number.

Check if 17 is a prime number: the factors of 17 are 1, 17. Hence it is a prime number.

To find if a number is prime or not, we define a function isprime(). First a variable lim is created which is half of the original number; this cuts the for loop iterations in half, since no factor of a number (other than the number itself) can be greater than half of it. The variable prime initially contains TRUE (T), and if the number turns out not to be prime, it is changed to FALSE (F).
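The post describes isprime() without showing the code; the sketch below follows that description (the names lim and prime come from the text, the rest is a minimal reconstruction):

```r
# Check whether a number is prime, following the description above
isprime <- function(num) {
  if (num <= 1) return(FALSE)   # 1 is neither prime nor composite
  prime <- TRUE                 # assume prime until a factor is found
  lim <- num %/% 2              # no factor (other than num) exceeds num/2
  if (lim >= 2) {
    for (i in 2:lim) {
      if (num %% i == 0) {      # i divides num evenly: not prime
        prime <- FALSE
        break
      }
    }
  }
  prime
}

isprime(10)  # FALSE: factors are 1, 2, 5, 10
isprime(17)  # TRUE: factors are 1 and 17

# Print all primes from 2 to a given number
n <- 30
print(Filter(isprime, 2:n))
```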
We think of tornadoes as a spring and summer phenomenon, but all it takes is instability (weather instability, that is), and that is what winter tornadoes are all about. Tornadoes form in unusually violent thunderstorms when there is sufficient (1) instability and (2) wind shear present in the lower atmosphere. Instability refers to unusually warm and humid conditions in the lower atmosphere, and possibly cooler than usual conditions in the upper atmosphere.

This past week, a massive storm system spawned dozens of tornadoes and caused extensive damage across a swath of the southern United States, from Texas to Florida. At least 19 deaths have been blamed on the storms so far, as emergency crews and first responders are still searching through wreckage for survivors. The worst of the system appeared to have moved out to sea on January 23, as residents who sheltered from the storm returned to their damaged homes to assess the damage and salvage what they could.

If you live in a tornado-prone region, pay attention to these weather changes and be aware that tornadoes can strike at any time of the year. It just requires a bit of weather instability. Remember tornado safety rules:

- The safest place to be is an underground shelter, basement or safe room.
- If no underground shelter or safe room is available, a small, windowless interior room or hallway on the lowest level of a sturdy building is the safest alternative.
- Mobile homes are not safe during tornadoes or other severe winds.
Collagen – An Overview

The word collagen is derived from the Greek word for glue. The protein collagen is the main substance of connective tissue and is present in all multicellular organisms. It is found in many different tissues and organs, such as bones, tendons, cartilage, skin, blood vessels, teeth, the cornea, intervertebral disks, vitreous bodies, and the placenta. The main function of collagen is mechanical reinforcement of the connective tissues of vertebrates. It enwraps the organs and holds specialized cells together in discrete units, preventing organs and tissues from tearing or losing their functional shape when exposed to sudden or rough movements. In addition to this structural role in mature tissues, collagen plays a regulating role in developing tissues, influencing the proliferation and differentiation of unspecialized cells.

Collagen is a highly conserved protein, preserving its amino acid sequence and typical triple-helix structure across species lines. At present, over fifteen different types of collagen have been identified. All collagens contain the unique triple-helix structure; however, the length and nature of the helix, as well as the size of the non-helical portions of the molecule, vary from type to type. The predominant collagen of the skin, tendon and bone is type I collagen; type II collagen is essentially unique to cartilage; type III collagen occurs in adult skin (5-10%), blood vessels and internal organs; type V is found in bone, skin, tendons, ligaments, and cornea; types IV and VIII are network-forming collagens.

Aging and Maturation of Collagen

The aging and maturation of collagen in the body is controlled by two main processes: cross-linking and degradation.

Collagen Cross-Linking Process

There are two distinct mechanisms of collagen cross-linking in the body. One is an enzyme-mediated process; the second is a non-enzymatic reaction called glycation, which is mediated by a reducing sugar, glucose. Non-enzymatic glycation is the process by which sugars in the body, mainly glucose, act as a reducing agent to cross-link native collagen, leading to stiffness of the tissues. Over time, the initial products of the glycation reaction slowly undergo further rearrangements, resulting in the irreversible formation of a family of cross-linked structures. This glycation-induced cross-linking is considered to be the major mechanism extending the biological half-life of native collagen. Indeed, in studies examining the effect of glycation on collagen scaffolds and tissue, glycation has been shown to decrease the rate of degradation of the scaffold matrix. Additionally, it has been demonstrated that changes in glycation levels are associated with alterations in mechanical strength, solubility, ligand binding and conformation.

Collagen Degradation Process

Native collagen is degraded in two steps. First, the triple-helical molecule is specifically split into two parts by specific enzymes called collagenases. Second, the α-chains of the split fragments unfold and become susceptible to digestion by gelatinases and non-specific proteinases. This activity initiates a breakdown of the collagen molecules, leading ultimately to harmless absorption by the body.
At the start of their research, paleobiologists Christine Janis and Borja Figueirido simply wanted to determine the hunting style of an extinct marsupial called the thylacine (also known as the "marsupial wolf" or the "Tasmanian tiger"). In the end, the Australian relic, which has a very dog-like head but both cat- and dog-like features in the skeleton, proved to be uniquely unspecialized. But what emerged from the effort is a new classification system that can capably predict the hunting behaviors of mammals from measurements of just a few forelimb bones.

"We realized what we are also doing was providing a dataset or a framework whereby people could look at extinct animals because it provides a good categorization of extant forms," said Janis, professor of ecology and evolutionary biology at Brown University, and co-author of a paper describing the framework in the Journal of Morphology.

For example, the scapulas (shoulder blades) of leopards (ambush predators who grapple with rather than chase their prey) and those of cheetahs (pursuit predators who chase their prey over a longer distance) measure very differently. So do their radius (forearm) bones. The shapes of the bones, including areas where muscles attach, place the cheetahs with other animals that evolved for chasing (mainly dogs), and the leopards with others that evolved for grappling (mostly other big cats). "The main differences in the forelimbs really reflect adaptations for strength versus adaptations for speed," Janis said. In plots of the data in the paper, cheetahs and African hunting dogs appear to be brethren by their scapular proportions even though one is a cat and one is a dog. But the similar scapulas don't lie: both species are acknowledged by zoologists to be pursuit predators.

In all, Janis and Figueirido of the Universidad de Malaga in Spain made 44 measurements on five forelimb bones in 62 specimens of 37 species, ranging from the Arctic fox to the thylacine. In various analyses the data proved helpful in sorting out the behaviors of the bones' owners. Given measurements from all of the forelimb bones of an animal, for example, they could accurately separate ambush predators from pursuit predators 100 percent of the time, and ambush predators from pouncing predators 95 percent of the time. Results were similar for analyses based on the humerus (upper arm bone). They were always able to make correct classifications between the three predator styles more than 70 percent of the time, even with just one kind of bone.

The elusive thylacine

The thylacine has not been known from mainland Australia in recorded human history, and by official accounts it disappeared from the Australian island of Tasmania by 1936 (although some locals still believe some may be around). In a similar vein, the beasts evaded Janis and Figueirido's attempts at a neat classification of their mode of carnivory. By some bones they were ambushers. By others they were pursuers. In the end, they weren't anything but thylacines. Janis notes that they could do just fine as generalists, given their relative lack of competition. Historically, Australia has hosted less predator diversity than the Serengeti, for example. "If you are one of the few predators in the ecosystem, there's not a lot of pressure to be specialized," she said. In the thylacine's case, the evidence from forelimb bone measurements supports their somewhat unusual status, by the standards of the rest of predatory mammals, as generalists.
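The classification described above is, in spirit, a discriminant analysis on bone measurements. The article does not specify the authors' exact statistical method, so the following R sketch uses linear discriminant analysis from the MASS package purely as an illustration, with a randomly generated, hypothetical data frame of forelimb measurements:

```r
library(MASS)  # provides lda(); included with standard R distributions

# Hypothetical data: one row per specimen, a few forelimb measurements
# plus the known hunting style ("ambush", "pounce", "pursuit")
set.seed(42)
bones <- data.frame(
  scapula = rnorm(60), humerus = rnorm(60), radius = rnorm(60),
  style = sample(c("ambush", "pounce", "pursuit"), 60, replace = TRUE)
)

# Fit a linear discriminant model: hunting style from bone measurements
fit <- lda(style ~ scapula + humerus + radius, data = bones)

# Classify the training specimens and check agreement with known styles
pred <- predict(fit)$class
mean(pred == bones$style)  # proportion classified correctly

# A new specimen (e.g., an extinct animal) can then be scored the same way
newdata <- data.frame(scapula = 0.1, humerus = -0.3, radius = 0.5)
predict(fit, newdata)$class
```

With real measurements in place of the random numbers, the agreement figure plays the role of the 70-100 percent classification rates reported in the study.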
For other extinct predators, the framework will support other conclusions based on these same standards. "One thing you tend to see is that people want to make extinct animals like living ones, so if something has a wolf-like head with a long snout as does the thylacine, although its skull is more delicate than that of a wolf, then people want to make it into a wolf-like runner," she said. "But very few extinct animals actually are as specialized as modern day pursuit predators. People reconstruct things in the image of the familiar, which may not reflect reality." But Janis said she hopes the framework will provide fellow paleobiologists with an empirical basis for guiding those determinations. The Bushnell Foundation supported the study with a research and teaching grant. The Museum of Comparative Zoology at Harvard University, the American Museum of Natural History in New York, and Australia's Museum Victoria and Queensland Museum provided access to specimens for measurement.
Preservation metadata is information that supports and documents acts of preservation on digital materials. A specific type of metadata, preservation metadata works to maintain a digital object's viability while also ensuring continued access, by providing contextual information as well as details on usage and rights. It describes both the context of an item and its structure.

As an increasing portion of the world's information output shifts from analog to digital form, preservation metadata is an essential component of most digital preservation strategies, including digital curation, data management, digital collections management and the long-term preservation of digital information and information objects. It is an integral part of the data lifecycle and helps to document a digital object's authenticity while maintaining usability across formats.

Metadata surrounds and describes physical, digitized, and born-digital information objects. Preservation metadata is external metadata (related to an object, and typically created after a resource has been separated from its original creator), value-added, item-level data that stores technical details on the format, structure and uses of a digital resource, as well as the history of all actions performed on the resource, including changes and decisions regarding digitization, migration to other formats, authenticity information (such as technical features or custody history), and rights and responsibilities information. In addition, preservation metadata may include information on the physical condition of a resource.

Preservation metadata is dynamic and access-centered and should accomplish four goals:
- include details about files and instructions for use;
- document all updates or actions that have been performed on an object;
- show provenance and demonstrate current and future custody;
- list details on the individual(s) responsible for the preservation of the object and the changes made to it.

Preservation metadata often includes the following information:
- Provenance: Who has had custody/ownership of the digital object?
- Authenticity: Is the digital object what it purports to be?
- Preservation activity: What has been done to preserve the digital object?
- Technical environment: What is needed to render, interact with and use the digital object?
- Rights management: What intellectual property rights must be observed?

Methods of metadata creation include:
- Automatic (internal)
- Manual (often created by a specialist)
- Created during digitization

Uses of Preservation Metadata

Digital materials require constant maintenance and migration to new formats to accommodate evolving technologies and varied user needs. In order to survive into the future, digital objects need preservation metadata that exists independently from the systems which were used to create them. Without preservation metadata, digital material will be lost. "While a print book with a broken spine can be easily re-bound, a digital object that has become corrupted or obsolete is often impossible (or prohibitively expensive) to repair." Preservation metadata provides the vital information that will make "digital objects self-documenting across time." Data maintenance is considered a key piece of collections maintenance, ensuring the availability of a resource over time, a concept detailed in the Reference Model for an Open Archival Information System (OAIS).
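Before turning to OAIS-based element sets, here is a toy illustration of what an item-level record covering the five categories above might look like. All field names and values are invented for the sketch; they are not PREMIS semantic units or any standard's vocabulary:

```python
# A hypothetical, minimal item-level preservation metadata record,
# organized around the five categories listed above.
# Every field name and value is illustrative only.
preservation_record = {
    "provenance": [
        # who has had custody/ownership, and when
        {"custodian": "University Archives", "from": "2012-05-01", "to": None},
    ],
    "authenticity": {
        # fixity information lets future custodians verify the object
        "fixity_algorithm": "SHA-256",
        "fixity_value": "9f2b...",  # placeholder checksum
    },
    "preservation_activity": [
        # every action performed on the object, appended over time
        {"event": "format migration", "from_format": "WordPerfect 5.1",
         "to_format": "PDF/A-2b", "date": "2019-03-14"},
    ],
    "technical_environment": {
        "rendering_software": "any PDF/A-conformant viewer",
    },
    "rights_management": {
        "copyright_status": "in copyright",
        "access_conditions": "on-site use only",
    },
}
```

Note how the preservation-activity list is append-only: each migration or repair adds an event rather than overwriting history, which is what lets the object "self-document across time."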
OAIS is a broad conceptual model which many organizations have followed in developing new preservation metadata element sets and Archival Information Packages (AIPs). Early projects in preservation metadata in the library community include CEDARS, NEDLIB, the National Library of Australia and the OCLC/RLG Working Group on Preservation Metadata. The ongoing work of maintaining, supporting, and coordinating future revisions to the PREMIS Data Dictionary is undertaken by the PREMIS Editorial Committee, hosted by the Library of Congress.

Preservation metadata provides continuity and contributes to the validity and authenticity of a resource by providing evidence of changes, adjustments and migrations. The importance of preservation metadata is further indicated by its required inclusion in many Data Management Plans (DMPs), which are often key pieces of applications for grants and government funding.

Considered by the National Information Standards Organization (NISO) to be a subtype of administrative metadata, preservation metadata is used to promote:
- Digital object management
- Preservation (often in conjunction with technical metadata)

Complications of Preservation Metadata

Concern over the poor management of digital objects has raised the possibility of a "digital dark age". Many institutions, including the Digital Curation Center (DCC) and the National Digital Stewardship Alliance (NDSA), are working to create access to digital objects while ensuring their continued viability. In the NDSA's Version 1 of the Levels of Digital Preservation, preservation metadata is grouped under Level Four, or "Repair your metadata", part of the macro preservation plan intended to make objects available over the long term.

The differing uses of digital resources across space, time and institutions require that one object or set of information be accessible in a variety of formats, with the creation of new preservation metadata in each iteration. Anne Gilliland notes that these variations create the need for wider data standards that can be used within and across industries, which would then result in further use and interoperability. The value of interoperability is further underlined by the expense, both temporal and financial, of metadata creation.

The creation of preservation metadata by multiple users or institutions can complicate issues of ownership, access and responsibility. Depending on an institution's mission, it may be difficult or outside the scope of responsibility to perform preservation while providing access. Further research into cross-institutional collaboration may provide greater insight into where data should be stored and who should be managing it. Scholar Maggie Fieldhouse notes that the creation of metadata is shifting from collections managers to suppliers and publishers, while Jerome McDonough identifies the collaborative potential of multiple partners working together to enhance the metadata records around an object, with preservation metadata as a key piece of cross-institutional communication. Sheila Corrall notes that the creation and management of preservation metadata represent the intersection of libraries, IT management and archival practice.

Current Developments in Preservation Metadata

Preservation metadata is a new and developing field. The OAIS Reference Model is a broad conceptual model which many organizations have followed in developing new preservation metadata element sets.
See also
- Digital preservation
- Preservation Metadata: Implementation Strategies (PREMIS)
- Digital library
- Content management systems

References
- Mitchell, E. (2015). Metadata Standards and Web Services in Libraries, Archives and Museums: An Active Learning Resource. Santa Barbara: Libraries Unlimited. ISBN 978-1610694490.
- Gilliland, A.J. (2016). Setting the Stage: An Introduction to Metadata, Third Edition. Los Angeles: Getty Research Institute.
- Poole, A.H. (2016). "The conceptual landscape of digital curation". Journal of Documentation. 72 (5): 961–986. doi:10.1108/JD-10-2015-0123.
- "Preservation Metadata". National Library of Australia. 2011-08-24. Retrieved May 2, 2019.
- Woodyard, D. (April 2002). "Metadata and preservation". Information Services & Use. 22 (2–3): 121–125. doi:10.3233/ISU-2002-222-311.
- "PREMIS: Preservation Metadata Maintenance Activity". Library of Congress. Retrieved May 2, 2019.
- Lavoie, B.; Dempsey, L. (2004). "Thirteen Ways of Looking at...Digital Preservation". D-Lib Magazine. Vol. 10, No. 7/8.
- Corrall, S. (2012). In Fieldhouse, M.; Marshall, A. (eds.). Collection Development in the Digital Age. Facet Publishing. pp. 3–25. ISBN 978-1856047463.
- Lee, C. (2009). "Open Archival Information System (OAIS) Reference Model". In Bates, M.J.; Maack, M.N. (eds.). Encyclopedia of Library and Information Sciences, Third Edition. CRC Press. pp. 4020–4030. ISBN 9780203757635.
- Cook, T. (Spring 1997). "What is Past is Prologue: A History of Archival Ideas Since 1898, and the Future Paradigm Shift". Archivaria. 43: 17–63.
- "Understanding Metadata: What is Metadata, and What is it For?: A Primer". NISO. Retrieved May 2, 2019.
- "Levels of Digital Preservation". NDSA. Retrieved May 2, 2019.
- Borgman, C.L. (2015). Big Data, Little Data, No Data. London: MIT Press. ISBN 9780262529914.
- Noonan, D.; Chute, T. (2014). "Data Curation and the University Archives". The American Archivist. 77 (1): 201–240. doi:10.17723/aarc.77.1.m49r46526847g587.
- Fieldhouse, M. (2012). "The Process of Collection Management". In Fieldhouse, M.; Marshall, A. (eds.). Collection Development in the Digital Age. Facet Publishing. pp. 27–43. ISBN 978-1856047463.
- McDonough, J.P. (2010). "Packaging Video Games for Long-Term Preservation". Journal of the American Society for Information Science and Technology. 62 (1): 171–184. doi:10.1002/asi.21412.
- Dappert, A.; Guenther, R.S.; Peyrard, S. (2016). Digital Preservation Metadata for Practitioners. doi:10.1007/978-3-319-43763-7. ISBN 978-3-319-43761-3.
- Gartner, R.; Lavoie, B. (2013). "Preservation Metadata (2nd Edition)". doi:10.7207/twr13-03.
- Conway, P. (2010). "Preservation in the Age of Google: Digitization, Digital Preservation, and Dilemmas" (PDF). The Library Quarterly. 80 (1): 61–79. doi:10.1086/648463. hdl:2027.42/85223. JSTOR 10.1086/648463.
- Altman, M.; Adams, M.; Crabtree, J.; Donakowski, D.; Maynard, M.; Pienta, A.; Young, C. (2009). "Digital Preservation through Archival Collaboration: The Data Preservation Alliance for the Social Sciences". The American Archivist. 72 (1): 170–184. doi:10.17723/aarc.72.1.eu7252lhnrp7h188.
Further reading
- Australian National Data Services (ANDS) Data Sharing Verbs
- Capability Maturity Model for Scientific Data Management
- CEDARS (2000), "Metadata for Digital Preservation: The CEDARS Project Outline Specification"
- Controlled LOCKSS
- Data Curation Profiles
- Data Documentation Initiative (DDI)
- DataONE Data Lifecycle
- Dublin Core Metadata Initiative Preservation Community
- Digital Curation Center (DCC) Digital Curation Lifecycle Model
- I2S2 Idealized Scientific Research Activity Lifecycle Model
- Lots of Copies Keeps Stuff Safe (LOCKSS)
- Merritt Repository
- National Digital Stewardship Alliance (NDSA)
- National Library of Australia, Preserving Access to Digital Information
- National Library of New Zealand Metadata Standards Framework — Preservation Metadata
- NEDLIB (2000), "Metadata for Long Term Preservation"
- NISO Primer, "Understanding Metadata"
- Reference Model for an Open Archival Information System (OAIS)
- Research360 Institutional Research Lifecycle
- UK Data Archive Data Lifecycle
Otitis media is an infection that occurs predominantly in the middle ear, behind the eardrum. This condition occurs mostly in children and toddlers and is quite common, with over 10 million cases diagnosed annually in India. This infection is not a dangerous one and is usually treated quite easily. Parents need not be anxious when their child comes down with an ear infection. However, they should certainly know the common otitis media symptoms so that they can get their children the care they need.

Otitis Media – The Symptoms to Watch Out For

Some of the common otitis media symptoms in toddlers and children include:
- Fullness in the ear
- Fluid drainage from the ear
- Pulling on the ears (and then wincing in pain)
- Neck pain
- Ear pain
- Hearing loss
- Lack of balance

Causes of Otitis Media

To understand otitis media causes, you must understand exactly where this infection occurs. The eustachian tube is a canal that reaches from the back of the throat to the middle ear. When this tube swells up and fluid collects, an infection can develop. Because this tube tends to be a bit shorter in children than in adults, children are more likely to suffer from the infection. Some of the reasons why this infection occurs include:
- A cold
- The flu
- Exposure to cigarette smoke
- Sinus infections
- Enlarged or infected adenoids
- Drinking milk while lying down (this is one of the main otitis media causes in infants)

Who is at Risk of Suffering from Otitis Media?

Children and toddlers are the most at risk of developing otitis media, meaning parents of small children need to be aware that their child is prone to this infection. However, certain children are more at risk than others. This includes children who are:
- Between the ages of 6 and 36 months
- Attending day care
- Using pacifiers for soothing
- Exposed to cigarettes and their smoke
- Drinking while lying down
- Bottle fed instead of breastfed
- Living in very polluted cities
- Living in an area that has a cold climate
- Suffering from a cold or flu

How is Otitis Media Diagnosed?

A doctor uses an otoscope to look inside the ear of the child and identify any signs of otitis media. These visible otitis media symptoms can include pus, perforation of the eardrum, blood, air bubbles, swelling, redness, and fluid present in the middle ear. Alternatively, a doctor may use tympanometry to check the air pressure within the child's ears. A doctor can also use a reflectometer, an instrument that makes a sound near the child's ear, to determine whether fluid is trapped inside. If the child ends up experiencing hearing loss, the doctor will also conduct a hearing test.

How is this Infection Treated?

The doctor will prescribe specific otitis media antibiotics in order to clear out the infection and relieve your child of pain. However, if the ear infections are recurring in nature and the antibiotics don't help, the doctor may recommend surgery. Before resorting to antibiotics, though, parents can consider otitis media management at home. Certain home remedies can help your child tremendously. These include the following:
- Apply a warm, moist washcloth to the infected ear
- You can use OTC ear drops to help clear out the infection
- You can also give your child a few OTC painkillers that are not too strong
- Adding drops of brandy to the ear is a common home remedy too

How to Prevent This Infection?

Your child is bound to have ear infections, especially when very young.
In order to prevent this from happening too often, you can do the following:
- Avoid giving your child pacifiers for soothing
- Ensure your child gets regular vaccines, including seasonal ones recommended by your doctor
- Breastfeed your child instead of relying on a bottle
- Do not expose your child to cigarette smoke

Otitis media symptoms can be challenging to spot in infants; however, increased fussiness and intense crying are two red flags that should prompt you to see your paediatrician sooner rather than later. An ear infection can be very painful, which is why it is best to take your child to the doctor if you are not sure why your child is crying so much.
Fracking doesn't consist of large, stationary pollution sources like U.S. Steel's metallurgical coke plant in Clairton, where emissions are monitored daily, the pollutants they contain are known, and the health effects of those pollutants are well understood from decades of research. Fracking's effect on regional air quality and health is a puzzle of factors that in many cases are still poorly documented and understood. But researchers are starting to put together the pieces. Nearly 80 published studies have investigated fracking's impact on air quality and health, and more than 8 in 10 report that fracking poses risks to both. Still unclear, however, is the level of risk.

Air Quality Impact

The range of emissions typically released from fracking operations is largely known and includes major pollutants regulated by the U.S. Environmental Protection Agency, such as fine particulates, as well as chemicals identified as hazardous air pollutants, which are known or suspected to cause cancer and other health conditions. Those pollutants tend to vary depending on the source.

Diesel emissions, for example, are usually greatest when heavy trucks and machinery are used to prepare well sites. Diesel emissions contain many toxic chemicals, most notably fine diesel particulate matter, which can increase the risk of asthma attacks, heart and respiratory disease, adverse birth outcomes and premature death. Diesel emissions from fracking were significant enough that the National Institute for Occupational Safety and Health identified diesel particulates as a hazard for oil and gas workers, based on readings at oil and gas sites in five states, including Pennsylvania.

Hazardous air pollutants can be emitted from fracking sources such as condensate tanks, diesel trucks and wastewater impoundment pits. Exposure to this class of pollutants, which includes benzene and toluene, is linked to effects ranging from cancer to high-risk pregnancies and asthma. The precise health risks the pollutants pose depend on a number of factors, including concentration, proximity to the pollution source and length of exposure. A recent study commissioned by the West Virginia Department of Environmental Protection, for example, found that in several cases benzene concentrations 625 feet from gas drilling activity were above the "minimum risk level" for the pollutant set by the Centers for Disease Control and Prevention to identify potential health concerns.

Some fracking emissions can have broader impact, particularly those that contain nitrogen oxides and volatile organic compounds, two ingredients of ground-level ozone, a far-traveling pollutant. Ozone levels in southwestern Pennsylvania exceed EPA limits. Studies suggest exposure to ozone raises the risk of respiratory conditions, cardiovascular disease and other health problems. Fracking operations are the major source of volatile organic compounds emitted in Butler, Fayette, Greene and Washington counties, which hold the majority of natural gas wells in southwestern Pennsylvania, according to state Department of Environmental Protection data.

Some 46 studies that looked at how fracking affects air quality were published from 2009 to 2015, according to a 2016 survey of peer-reviewed research in the journal PLoS ONE. Of those, 87 percent linked fracking to higher levels of air pollution. The rest found no evidence that fracking resulted in higher emissions or concentrations of air pollutants.
Fewer studies have examined the public health impact of fracking using epidemiological data, self-reported symptoms and other sources. Of the 31 studies that did, 84 percent reported findings that suggest a link between fracking and elevated health risks and poor health outcomes. In most cases, research suggests, the people most vulnerable to fracking's public health impacts are those closest to the emission sources. But the variety of emission sources, some of which are mobile, and other factors make identifying the at-risk population difficult.

"What we don't know is how people may be exposed," said Sam Rubright of Fractracker Alliance, a nonprofit that studies oil and gas production. "We're talking about citizens and residents who live near the well site and workers. We don't know exactly how people are exposed to these compounds because exposure is going to vary so much."

Research does, however, identify areas of concern that warrant further investigation. Johns Hopkins University researchers, for example, used data from the Geisinger Health System to compare birth weight to the distance between pregnant women's homes and well pads. The women were a representative sample based on age, sex, race/ethnicity, and rural residence, and the sample was adjusted for other factors, including the women's socio-economic status. Their babies were delivered at two hospitals in Pennsylvania's North Central region. Women who lived in the areas with the highest fracking activity were 40 percent more likely to give birth before 37 weeks than women in the surrounding zip codes, and 30 percent more likely to have a high-risk pregnancy.

University of Pittsburgh researchers reported that babies born in the areas with the greatest exposure to fracking activities were 34 percent more likely to be small for gestational age than those born in places with the least exposure. The findings are based on birth records in Washington, Westmoreland and Butler counties from 2007 to 2010.

And last year, the federal Occupational Safety and Health Administration revised standards to limit workers' exposure to silica, based on a study that found that gas and oil workers are at high risk of inhaling the compound when working near the frack sand used to hold open the cracks created in the fracking process. Inhaling silica can cause lung disease, including silicosis, and cancer.

The degree of risk people face from air pollutants depends on many factors, including the kind of pollutant, the exposure level and duration, and the person's age and health.

The Known Unknowns

With fracking operations, a shortage of key data has hampered efforts to more precisely measure the level of risk emissions pose to public health. One reason for uncertainty surrounding exposure levels is that vast differences in emissions are found from one well site to the next, even among those owned and operated by the same company. And unconventional natural gas wells are not considered major sources of air pollution by the EPA. There is no required air quality monitoring and reporting for the industry, even for fixed sources, such as the compressor stations used to keep pipelines pressurized and to separate the gases.

"Risk is a function of exposure and toxicity," said Bernard Goldstein, professor emeritus at the University of Pittsburgh School of Public Health. "We know about the toxicity of the chemicals, the hazard of the chemicals.
But without understanding exposure, we don't know risk." In the absence of industry-wide monitoring, researchers and local public health projects have begun collecting their own data. In Washington County, for example, the Southwest Pennsylvania Environmental Health Project collects air quality data, mostly using at-home SPECK air monitors spread across more than 400 households, to better understand exposure and the risk to communities.

"What does an exposure pattern look like?" asked Raina Rippel, director of the project. "What is it like if you're within one kilometer of a fracking well? If you're within one kilometer of multiple installations, pipelines, compressor stations, metering stations? People are being exposed to emissions from the traffic to and from well pads, to compressor stations, to chemicals used during fracking and to various operations that happen before and after — drilling the borehole, the flow-back. There's this whole spectrum of activity and there seem to be different health symptoms that happen during those different periods."

Research is increasing. And major ongoing studies, such as the Johns Hopkins investigation of links to asthma and pregnancy outcomes, are expected to shed more light on the impact of fracking on nearby communities. In the meantime, uncertainty remains for those in the shale gas fields of southwestern Pennsylvania.

"One of the major issues is that studies that look at emissions show marked variability from site to site. And it's not very clear as to why that is," Goldstein said. "What do you tell someone who is worried? Be lucky and be near the site that's hardly emitting anything and not the one that's emitting a lot?"
The first law of thermodynamics is basically the law of conservation of energy as applied to thermodynamic systems: the change in internal energy of a system, ΔU, is equal to the sum of the heat and the work of the system (q + w).

The second law of thermodynamics describes whether or not a change is spontaneous, expressing it in terms of entropy. Entropy (S) is the thermodynamic quantity that describes the disorder (randomness) in a system. The entropy is related to the number of states available to a molecule. A molecule at high temperature has more vibrational states available than one at a lower temperature, and therefore has a higher entropy. A crystal locks molecules into a certain configuration, whereas molecules in a gas are free to move about and therefore have higher entropy.

The second law of thermodynamics states that the total entropy of a system and its surroundings always increases for a spontaneous process. Generally, we refer to this as the entropy of the universe: the sum of the entropies of the system and surroundings must increase, though one of the two may decrease and the process may still be spontaneous.

ΔS_universe = ΔS_system + ΔS_surroundings

We can restate this law so that it refers only to the system; as heat flows into or out of the system, entropy goes with it. At a given temperature, the entropy change is associated with the heat q:

ΔS > q/T for a spontaneous process

Therefore, for a spontaneous process at a certain temperature, the change in entropy must be greater than the heat divided by the absolute temperature. For systems that are at equilibrium, the entropy change is equal to the heat over the temperature (ΔS = q/T).

The entropy of a phase change follows from the equation above:

ΔS = ΔH/T

where ΔH is the heat (enthalpy) of the phase change and T is the temperature at which the phase change occurs.

Looking at entropy and enthalpy together, we can determine whether or not a process is spontaneous. This leads to the concept of free energy (sometimes called Gibbs free energy, ΔG), which is equal to:

ΔG = ΔH – TΔS

For a spontaneous process, ΔG must be negative (ΔG < 0). So even if a reaction is endothermic (ΔH positive), it will still proceed spontaneously provided the TΔS term is larger than the ΔH term, making ΔG negative.
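A quick numerical sketch of ΔG = ΔH – TΔS, assuming rounded literature values for the melting of ice (ΔH_fus ≈ +6010 J/mol, ΔS_fus ≈ +22.0 J/(mol·K)); the function name is ours, not from any textbook:

```python
# Minimal sketch: sign of Gibbs free energy change, dG = dH - T*dS,
# for melting ice at three temperatures (approximate literature values).
def delta_g(delta_h, delta_s, temp_k):
    """Gibbs free energy change in J/mol for dH (J/mol), dS (J/(mol*K))."""
    return delta_h - temp_k * delta_s

for T in (263.0, 273.0, 298.0):  # -10 C, 0 C, 25 C
    dG = delta_g(6010.0, 22.0, T)
    verdict = "spontaneous" if dG < 0 else "non-spontaneous / equilibrium"
    print(f"T = {T:5.1f} K: dG = {dG:+7.1f} J/mol -> {verdict}")
```

Running this gives ΔG ≈ +224 J/mol at 263 K (ice does not melt at −10 °C), ΔG ≈ 0 at 273 K (equilibrium at the melting point), and ΔG ≈ −546 J/mol at 298 K (melting is spontaneous), matching everyday experience.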
Mollisols (from Latin mollis, "soft") are the soils of grassland ecosystems. They are characterized by a thick, dark surface horizon. This fertile surface horizon, known as a mollic epipedon, results from the long-term addition of organic materials derived from plant roots. Mollisols are among the most important and productive agricultural soils in the world and are extensively used for this purpose. They are divided into eight suborders: Albolls, Aquolls, Rendolls, Gelolls, Cryolls, Xerolls, Ustolls and Udolls. Mollisols primarily occur in the middle latitudes and are extensive in prairie regions such as the Great Plains of the U.S. Globally, they occupy approximately 7.0 percent of the ice-free land area. In the U.S., they are the most extensive soil order, accounting for approximately 21.5 percent of the land area.
Digital X-rays are performed similarly to conventional X-rays but use a special imaging detector that "reads" the body part rather than exposing it on film. This is the same technique used by digital cameras. The images produced by digital X-rays can be viewed on a computer, which allows for faster results and convenient delivery to other doctors. X-rays are one of the most common procedures used to diagnose a wide variety of conditions in nearly every area of the body. Although X-rays are usually effective in identifying abnormalities, their method for doing so is somewhat outdated: despite an increasingly digital world, conventional X-rays still use sheets of film that require processing, much like film in a regular camera. Digital radiography also exposes patients to less radiation, a minor but real risk of conventional X-rays. The speed and safety of digital X-rays frequently make them a preferred type of imaging test.
"The more we look at the behavior of insects, birds, and mammals, including man, the more we see a continuum of complexity rather than any dramatic difference in kind." ~ American ethologists Carol Gould & James Gould

Animals originated during the Tonian period; the last common ancestor of animals arose nearly 800 million years ago. The term animal comes from the Latin animalis, meaning "having breath." But breath does not distinguish animals from plants: plant pores have a regulated cycle of breathing in carbon dioxide and exhaling water vapor. Animals evolved centralized processing centers for digestion and cognition. In later-evolved animals, identifiable brains exhibit electrochemical activity that correlates, moment by moment, with mental processing.

"Every mental sequence runs side by side with the physical aspect." ~ Scottish philosopher Alexander Bain in 1885

Perception takes sensory input, which is rendered symbolically, and turns it into meaningful patterns: it is a multi-stage process of differentiation and interpretive imagining of relations between discerned objects. Perception occurs in synchronic waves; attention temporally quantizes on symbolic objects.

"Our understanding of the world goes through cycles. The senses are not constant but are processed via rhythmic functions. Humans make decisions at the rate of about 1/6th of a second, which is in line with these sensory oscillations." ~ Australian psychologist David Alais
The legs, church windows, or tears of wine. Many a wine drinker and scientist has swirled the wine in their glass and watched this phenomenon with amazement. What causes the tears? And what do they say about the wine? For a long time science could not explain exactly how the tears come about, but surface tension, temperature, ridge instability and shock waves (!) now seem to give the long-awaited complete answer.

Wine is primarily a mixture of water and alcohol. In a wine glass, due to capillary action, the wine creeps up slightly at the side of the glass. The curved wine surface that is formed is called the meniscus (see figure below). The alcohol evaporates faster in the meniscus than in the rest of the glass. This is because the meniscus has a larger surface area in proportion to the volume underneath. As a result, the wine in the meniscus contains less alcohol, and relatively more water, than the wine in the rest of the glass. Water has a higher surface tension than alcohol and therefore "pulls" harder on the surrounding liquid. The gradient from the high-alcohol wine in the glass to the low-alcohol wine in the meniscus causes a difference in surface tension. The higher surface tension of the watery wine in the meniscus means that more wine is pulled up from the glass, resulting in a film layer of wine on the side of the glass. In 1865 the Italian physicist Carlo Marangoni described this effect in his thesis, and a few years later Josiah Willard Gibbs gave a theoretical thermodynamic description in a series of articles entitled "On the Equilibrium of Heterogeneous Substances". Since then, the formation of the film layer of wine on the side of a wine glass has been known as the Marangoni-Gibbs effect.

[Figure: The formation of a wine film layer on the side of a wine glass due to the Marangoni-Gibbs effect. The evaporation of alcohol produces a gradient in the surface tension (λ) and a gradient in the temperature (T) of the film layer on the wall of the glass. The tears of the wine form under the influence of gravity on the ridge of the film layer.]

Recent research shows that the Marangoni-Gibbs effect cannot be attributed solely to the evaporation of the alcohol and the resulting difference in surface tension. Venerus et al. showed in 2015 that the evaporation of the alcohol also cools the wine in the film layer. The importance of this temperature difference for the Marangoni effect had always been overlooked. The resulting temperature gradient in the film layer causes the wine to "flow", just as the warm Gulf Stream carries warm sea water from the Gulf of Mexico to the northern part of the Atlantic. In the same way, the temperature gradient in the film layer drives wine up the side of the glass. As such, the temperature difference contributes to the Marangoni-Gibbs effect. The flow rate of the wine in the film layer therefore depends on the gradient in surface tension AND the temperature gradient, and both are caused by the evaporation of alcohol [1].

[Figure: Infrared photo of the tears of wine. The color scale indicates the temperature of the film layer of wine on the glass. The white arrow indicates the direction of the flow of the wine caused by the Marangoni-Gibbs effect. Adapted from Venerus, 2015, via CC BY 4.0.]

This information further clarifies the Marangoni-Gibbs effect and explains how it is that wine flows up the side of the glass.
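The driving mechanism can be written compactly. The following is a schematic, textbook thin-film balance (a sketch, not taken from the cited papers), reusing the article's symbol λ for surface tension:

```latex
% Schematic Marangoni balance at the free surface of the film:
% a gradient of surface tension \lambda along the wall (coordinate x,
% pointing up) drags the liquid, and viscosity \mu resists it.
\[
  \mu \left. \frac{\partial u}{\partial y} \right|_{\text{surface}}
  = \frac{\mathrm{d}\lambda}{\mathrm{d}x}
  = \frac{\mathrm{d}\lambda}{\mathrm{d}c}\,\frac{\mathrm{d}c}{\mathrm{d}x},
\]
% where u is the velocity along the wall, y the coordinate normal to it,
% and c the alcohol concentration. Alcohol lowers surface tension
% (d\lambda/dc < 0) and evaporation depletes alcohol higher up the wall
% (dc/dx < 0), so the product is positive: the film is dragged upward.
```

Both factors in the product are negative, so the net surface-tension gradient points up the glass, which is exactly the upward pull described above.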
But what makes the wine drip down again? Gravity? Yes, but that is a very simplistic representation of reality. How is it that the wine forms drops at regular intervals that flow back into the glass as tears? The ridge on top of the film layer falls apart into tears under the influence of gravity. In a 2018 study, Nikolov and his colleagues showed that the way in which the ridge of the film layer falls apart corresponds to the theory of the Plateau-Rayleigh-Taylor instability [2]. This is a mathematical description of the instability that occurs when a liquid with a lower density pushes against a liquid with a higher density. In the case of our wine film layer, the film layer is the less dense liquid, pushing against the denser wine that forms the ridge. At a certain point, the downward pull of gravity on the heavier liquid (which is pulled down more strongly than the lighter one) outweighs the upward force of the lighter liquid. The heavier liquid breaks through the barrier created by the upward flow of the lighter liquid and forms tears that stream down into the glass. The stability of the ridge is therefore a balance between gravity and the upward pressure of the lighter liquid.

[Figure: A simplified representation of the Plateau-Rayleigh-Taylor instability by which the wine tears form according to Nikolov et al. The ridge of the film layer (dark red) has a higher density than the film layer (light red) that is formed, and flows upward, under the influence of the Marangoni effect.]

The last question that remains is: what gives the film layer two different densities, creating a ridge that falls apart into tears? The film layer is created by the Marangoni effect, and the tears are caused by the instability of the ridge of the film layer. But how does this ridge arise at the top of the film layer? According to Dukler et al. of the University of California, a shock wave through the film layer causes the formation of the ridge and the subsequent tears [3]. A "reverse undercompressive shock wave", that is. Dukler and his colleagues developed a theoretical model that shows how this shock wave arises (in theory) from the evaporation of alcohol in the film layer and moves from the meniscus to the ridge of the film layer. A characteristic of this atypical shock wave is that the density of the liquid behind the wave is lower than that ahead of it. A situation thus arises in which the ridge of the film layer has a higher density than the film layer below it. And this is precisely the starting point for the Plateau-Rayleigh-Taylor instability described above.

"After removing the cover, [from a glass filled with port wine] evaporation quickly increases, inciting a "reverse" front to climb out of the meniscus, followed by the formation of wine tears falling back into the bulk. The forming front is characterized by a depression, i.e. the film ahead of the front is thicker than the film behind it. It is in a sense, a "dewetting" front that leaves a thinner layer behind it." [Dukler, 2019]

The researchers tested their theory by looking at the formation of tears in a stemless Martini glass with port wine. The shock wave was perceptible (see the figure below) and preceded the tears of the wine. The researchers applied some simplifications to the model, but also to the experiment. For example, a Martini glass is used because this glass has a constant angle, and the model assumes a constant gradient in the surface tension (caused by the evaporation of the alcohol).
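The instability invoked above can also be stated compactly in its classical form (a textbook sketch; the Plateau-Rayleigh-Taylor analysis in the cited paper is more involved):

```latex
% Classical Rayleigh-Taylor criterion: with a denser layer (the ridge,
% density \rho_r) supported against gravity by a lighter layer (the film,
% density \rho_f), small perturbations of the interface grow whenever
% the Atwood number A is positive:
\[
  A = \frac{\rho_r - \rho_f}{\rho_r + \rho_f} > 0 ,
\]
% so a heavy ridge resting on the lighter, up-flowing film is unstable
% and breaks up into regularly spaced droplets: the tears.
```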
The effect in ordinary wine glasses with a convex surface (and in particular the theory behind it) needs to be investigated further.

[Figure: The ridge of the film layer, and therefore also the tears of the wine, is caused by a shock wave. The top four photos show, from left to right, how a wave front forms out of the meniscus and settles into the tears of the wine after 10 seconds. The lower left photo shows a close-up of the shock waves that run toward the ridge at the top of the film layer. The term "rarefaction" indicates the zone in which the density of the wine is lower than at the rim, due to the shock wave. The figure at the lower right shows the stemless Martini glass used by Dukler et al. to test their theory. Adapted from Dukler, 2019, via non-exclusive distribution license.]

What do the tears say?

As the theory above shows, the speed with which the alcohol evaporates is particularly important for the formation of tears. This evaporation depends on the amount of alcohol in the wine, but also, for example, on room temperature, wine temperature, humidity and air pressure. In general, the more alcohol the wine contains, the more tears there are. However, it is difficult to keep all these conditions constant, and in a mountain village, or on a rainy day, the formation of tears will proceed differently than on a beautiful sunny day at the beach.

The speed with which the tears flow back into the glass says something about the viscosity of the wine. The slower the tears drip down, the more syrupy the wine. This syrupiness depends, among other things, on the amount of alcohol, sugar and glycerol in the wine. Alcohol, but especially glycerol and sugar, increases the viscosity.

All these different variables make it very difficult to conclude anything about the contents of the wine glass from its tears. Beyond a rough hint at the alcohol content, the tears of the wine say nothing at all about the quality of the wine. The tears are a secondary effect of the evaporation of alcohol from the wine. With the evaporation of the alcohol, however, all kinds of aromas evaporate as well, and the wine derives its scent (bouquet) from the evaporation of these aroma substances. Knowledge about the formation of tears, and therefore about the evaporation of alcohol in the wine glass, can contribute to the development of wine glasses. Wine glasses that, through their shape or perhaps even a coating, optimally support the film layer on the wall of the glass would aid the release of aroma components and therefore the bouquet of the wine. The knowledge gained from the tears of the wine may therefore eventually lead to happy faces.

1. Venerus DC, Nieto Simavilla D. Tears of wine: new insights on an old phenomenon. Sci Rep. 2015 Nov 9;5:16162. https://doi.org/10.1038/srep16162
2. Nikolov A, Wasan D, Lee J. Tears of wine: The dance of the droplets. Adv Colloid Interface Sci. 2018 Jun;256:94-100. https://doi.org/10.1016/j.cis.2018.05.001
3. Dukler Y, Hangjie J, Falcon C, Bertozzi AL. A theory for undercompressive shocks in tears of wine. arXiv 2019 Sept. https://arxiv.org/abs/1909.09898
<The Detail of This Chapter.pdf>

An elementary sketch of Newtonian mechanics: the mechanics of point particles.

Space and time

At present, when we say that something occurs, we mean that it has the properties of time and space; that is, it occurs at a specific place in space-time. We never speak of a time event or a space event occurring alone, and we do not consider such events here.

No matter what time and space really are, we can give a description of them sufficient for our physical concepts, because we can always describe phenomena definitely even without knowing their essence (ref: Geometry):

1. an affine space;
2. a translation group acting on the affine space;
3. a Euclidean structure on the translation vector space;
4. a Galilean structure on the affine space.

Each step implies some physical presuppositions:

1. Points: time points and space points. A bare point is indistinguishable from any other.

2. There is a distance between two different points. How do we know, or describe, this distance? There are four forms of distance: the time distance at the same space point; the time distance at different space points; the space distance at the same time point; and the space distance at different time points. To measure these, we have to introduce physical presuppositions: the time distance at the same space point requires translation invariance of time; the time distance at different space points requires simultaneity; the space distance at the same time point requires rigid bodies; the space distance at different time points requires something further still.

3. Space and time have the property of direction. We can ask: what does direction mean? Just as with distance, since we cannot consider a pure space event or a pure time event, we must consider occurrences that embody the direction of space or time. Let us study distance and direction through the simplest occurrence: a point x(x, t_1) and a point y(y, t_2).

x .
        . y

We say: "at the time point t_1 we find a particle at the space point x, and at the time point t_2 we find the same particle at the space point y, provided we can identify them as the same particle." Equivalently, we can describe the same occurrence as "a particle moves from x to y." This has now become a typical physical occurrence! It means we can define the direction of time and space in such an occurrence, and furthermore we can ask the first physical question: how does the particle move from x to y? Only experience can answer that!

If time and space admit our elementary rational-number description, we can use interpolation to probe the process of the particle's movement. Repeat that occurrence, but add a screen that isolates x from y completely, and drill a hole z in the screen. No matter where z is, whenever the occurrence is repeated successfully, we can claim that the point z is an intermediate occurrence between x and y. This is required by our basic notion of causality.

x .
   | z
        . y

From this experiment we get Newtonian mechanics when we realize such an occurrence with a bullet, and quantum mechanics when we realize it with a photon.

4. Reference systems.

5. Velocity and acceleration. Velocity is the derivative of position with respect to time. Acceleration is the derivative of velocity with respect to time.

Momentum and force :: the conservation of momentum

In Newton's second definition, momentum measures the quantity of motion of a body; in his fourth definition, force describes the cause of changes in that motion.
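Written out in standard notation (a restatement of the definitions just given, not part of the original note):

```latex
% Kinematics and Newton's definitions, in standard vector notation.
\[
  \vec v = \frac{\mathrm{d}\vec x}{\mathrm{d}t}, \qquad
  \vec a = \frac{\mathrm{d}\vec v}{\mathrm{d}t}
         = \frac{\mathrm{d}^2 \vec x}{\mathrm{d}t^2}, \qquad
  \vec p = m\vec v, \qquad
  \vec F = \frac{\mathrm{d}\vec p}{\mathrm{d}t}.
\]
% When the net force vanishes, d\vec p/dt = 0, so the total momentum
% of the system is conserved: this is the conservation of momentum
% named in the section heading.
```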
The conservation of energy :: kinetic energy and potential energy

Why is Newton right?
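A standard way to spell out the kinetic/potential bookkeeping named in the energy stub above (a one-dimensional textbook sketch, assuming a force derived from a potential V; none of this is in the original note):

```latex
% Kinetic energy T, a force from a potential V, and energy conservation.
\[
  T = \tfrac{1}{2} m v^2, \qquad
  F = -\frac{\mathrm{d}V}{\mathrm{d}x}, \qquad
  \frac{\mathrm{d}}{\mathrm{d}t}\bigl(T + V\bigr)
  = v\,(m\dot v - F) = 0 ,
\]
% so along any motion obeying Newton's law m\dot v = F, the total
% energy E = T + V stays constant: the conservation of energy.
```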
Why teach using Nursery Rhymes?

Research shows that children who have memorized nursery rhymes become better readers because they develop an early sensitivity to the sounds of language (Marie Clay). Nursery rhymes naturally help young children develop phonemic awareness skills, the necessary building blocks that children need explicit instruction in before they can begin to read.

Skills you can teach using Nursery Rhymes
- Sound/Word Discrimination
- Word segmentation (syllables)
- Phoneme manipulation

Nursery Rhymes Can…
- Enrich young children's vocabulary
- Provide opportunities for oral language development
- Introduce children to basic story structure such as problem and solution, cause and effect
- Be easily integrated into already existing themes
- Be FUN and engaging for young children

For specific activities for each nursery rhyme, see the list below:
- Printable Nursery Rhyme Books, Charts, and Songs from Dr. Jean: download the free Jack and Jill printable book in black and white and color, the song chart with picture support for emergent readers, and the mp3 song by Dr. Jean!
- Free Nursery Rhyme Printables
- Nursery Rhyme Links
- Nursery Rhyme Resources
- More Literacy Resources from Pre-K Pages
The Excel CHAR function returns the character corresponding to a supplied number (from 1 to 255) within the character set used by your computer. The character encoding depends on the operating system: Windows uses the ANSI character set, whereas Macintosh uses the Macintosh character set. Therefore, the characters returned by CHAR for specific number codes may differ across operating environments.

The syntax of the function is:

=CHAR(number)

where the number argument is a number from 1 to 255, supplied to the function either directly or as a reference to a cell containing a number.

The following spreadsheet uses the CHAR function to return the characters associated with different supplied numeric values. The format of the function is shown in the spreadsheet on the left and the results are shown in the spreadsheet on the right. Note that these results are from the ANSI character set (used on the Windows operating system).

One handy use of the CHAR function is inserting line breaks into text. (Note that, in the ANSI character set, the line break is given by the number code 10.) To display the result with the line break, you will need to ensure that text wrapping is enabled for the cell: select the cell and turn on Wrap Text (on the Home tab of the ribbon).

Further information and examples of the Excel CHAR function are provided on the Microsoft Office website.

If you get an error from the Excel CHAR function, it is likely to be the #VALUE! error, which occurs if the supplied number argument is not recognised as a numeric value, or is a number outside of the permitted range 1 to 255.
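For readers who want to experiment outside a spreadsheet, here is a minimal Python analogue of the behaviour described above; the helper name excel_char is ours, and this is an illustration of the idea, not Microsoft's implementation:

```python
# A toy analogue of Excel's CHAR on Windows: interpret a code 1-255
# as a byte in the ANSI (Windows-1252) character set.
def excel_char(number: int) -> str:
    """Return the character for a code in CHAR's permitted range 1-255."""
    if not 1 <= number <= 255:
        raise ValueError("#VALUE! - number must be between 1 and 255")
    # Note: a few codes (e.g. 129, 141) are undefined in cp1252 and
    # will raise UnicodeDecodeError.
    return bytes([number]).decode("cp1252")

print(excel_char(65))          # 'A'
print(repr(excel_char(10)))    # '\n' -- the line-break code mentioned above
```

For codes 32 to 126 the ANSI and ASCII/Unicode tables agree, so results in that range match what CHAR returns on Windows.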
Neurotransmitters are chemicals located and released in the brain that allow an impulse from one nerve cell to pass to another nerve cell. Approximately 50 neurotransmitters have been identified. There are billions of nerve cells located in the brain, and they do not directly touch each other; nerve cells communicate messages by secreting neurotransmitters. Neurotransmitters can excite or inhibit neurons (nerve cells). Some common neurotransmitters are acetylcholine, norepinephrine, dopamine, serotonin and gamma-aminobutyric acid (GABA). Acetylcholine and norepinephrine are excitatory neurotransmitters, while dopamine, serotonin, and GABA are inhibitory. Each neurotransmitter can directly or indirectly influence neurons in a specific portion of the brain, thereby affecting behavior.

Mechanism of impulse transmission

A nerve impulse travels through a nerve in a long, slender cellular structure called an axon, and it eventually reaches a structure called the presynaptic membrane, which contains neurotransmitters to be released into a free space called the synaptic cleft. Freely flowing neurotransmitter molecules are picked up by receptors (structures that appear on cellular surfaces and pick up molecules that fit into them like a "lock and key") located on the postsynaptic membrane.

Once the neurotransmitter is released from the neurotransmitter vesicles of the presynaptic membrane, the normal movement of molecules should be directed to receptor sites located on the postsynaptic membrane. However, in certain disease states the flow of the neurotransmitter is defective. For example, in depression the flow of the inhibitory neurotransmitter serotonin is defective, and molecules flow back to their originating site (the presynaptic membrane) instead of to receptors on the postsynaptic membrane that will transmit the impulse to a nearby neuron.

The mechanism of action and localization of neurotransmitters in the brain has provided valuable information concerning the cause of many mental disorders, including clinical depression and chemical dependency, and has guided research into medications that allow the normal flow and movement of neurotransmitter molecules.

Neurotransmitters, mental disorders, and medications

Impairment of dopamine-containing neurons in the brain is implicated in schizophrenia, a mental disease marked by disturbances in thinking and emotional reactions. Medications that block dopamine receptors in the brain, such as chlorpromazine and clozapine, have been used to alleviate the symptoms and help patients return to a normal social setting.

In depression, which afflicts about 3.5% of the population, there appears to be abnormal excess or inhibition of signals that control mood, thoughts, pain, and other sensations. Depression is treated with antidepressants that affect norepinephrine and serotonin in the brain; these help correct the abnormal neurotransmitter activity. A newer drug, fluoxetine (Prozac), is a selective serotonin reuptake inhibitor (SSRI) that appears to establish the level of serotonin required to function at a normal level. As the name implies, the drug inhibits the re-uptake of the serotonin neurotransmitter from synaptic gaps, thus increasing neurotransmitter action. In the brain, then, the increased serotonin activity alleviates depressive symptoms.

Alzheimer's disease, which affects an estimated four million Americans, is characterized by memory loss and the eventual inability for self-care.
The disease seems to be caused by a loss of cells that secrete acetylcholine in the basal forebrain (the region of the brain that is the control center for sensory and associative information processing and motor activities). Some medications that alleviate the symptoms have been developed, but presently there is no known treatment for the disease.

Generalized anxiety disorder

People with generalized anxiety disorder (GAD) experience excessive worry that causes problems at work and in the maintenance of daily responsibilities. Evidence suggests that GAD involves several neurotransmitter systems in the brain, including norepinephrine and serotonin.

People affected by attention-deficit/hyperactivity disorder (ADHD) experience difficulties in the areas of attention, overactivity, impulse control, and distractibility. Research shows that dopamine and norepinephrine imbalances are strongly implicated in causing ADHD. Substantial research evidence also suggests a correlation of neurotransmitter imbalance with disorders such as borderline personality disorder, schizotypal personality disorder, avoidant personality disorder, social phobia, histrionic personality disorder, and somatization disorder.

Cocaine and crack cocaine are psychostimulants that affect neurons containing dopamine in the areas of the brain known as the limbic and frontal cortex. When cocaine is used, it generates a feeling of confidence and power. However, when large amounts are taken, people "crash" and suffer from physical and emotional exhaustion as well as depression.

Opiates, such as heroin and morphine, appear to mimic naturally occurring peptide substances in the brain that act as neurotransmitters with opiate activity, called endorphins. Natural endorphins of the brain act to kill pain, cause sensations of pleasure, and cause sleepiness. Endorphins released during extensive aerobic exercise, for example, are responsible for the "rush" that long-distance runners experience. It is believed that morphine and heroin combine with the endorphin receptors in the brain, resulting in reduced natural endorphin production. As a result, the drugs are needed to replace the naturally produced endorphins, and addiction occurs. Attempts to counteract the effects of the drugs involve using medications that mimic them, such as nalorphine, naloxone, and naltrexone.

Alcohol is one of the most widely used depressant drugs, and is believed to cause its effects by interacting with the GABA receptor. Initially anxiety is controlled, but greater amounts reduce muscle control and delay reaction time due to impaired thinking.

Tasman, Allan; Kay, Jerald, MD; Lieberman, Jeffrey A., MD, eds. Psychiatry. 1st ed. Philadelphia: W. B. Saunders Company, 1997.

Laith Farid Gulli, M.D.
Solving Another Equation with Two Variables

Date: 1/30/96 at 20:28:1
From: McKellar Clan
Subject: how to do algebra

Dear Dr. Math,

I am in the 7th grade and our math class is learning algebra. I do not understand it and my teacher is no help in trying to explain it. My mother doesn't understand algebra and my father is too busy to help me. I hope you can. I have included a sample problem from my book: 6y = 3D = 54. I am supposed to solve the equation, but I don't even understand it! PLEASE HELP!!!! I really need to get a handle on this stuff.

My name is Brandon.

Date: 2/1/96 at 14:44:17
From: Doctor Elise
Subject: Re: how to do algebra

Hi! You sound really frustrated. Algebra makes a lot of people confused, especially at the beginning, but there's no magic to it. You really can learn it! I'll try to help by talking about algebra in general, and your sample problem in particular.

Algebra is the next step in math after you can pretty much add, subtract, multiply, divide, and do anything else you want with actual numbers. In algebra, instead of using a specific number, like '5', we start using a letter, like 'y', to represent a number we don't know yet. At this point, we also start leaving out the "times" sign when we write equations if we're multiplying a number "times" a letter. What '6y' really means is "six times y, which is a number I don't know yet." The goal of almost every algebra problem is to find out what number (or numbers) the letter could equal. The math books usually call it "solving for y", and you've done it when your equation finally looks like "y = some number".

Your example is:

   6y = 3D = 54

This looks like a pretty funny equation, doesn't it? That's because... Surprise! It's really three entirely different problems. They are:

   6y = 3D
   3D = 54
   6y = 54

Does that make a little more sense? Let's start with the third one.

   6y = 54

In words, that's "six times y equals 54". The goal is to figure out what number we can plug in for 'y' that works. If you just think about it for a minute, you know the answer already: 6 * 9 = 54, right? Here's how you get there using algebra.

The way you solve any algebra problem is by putting all the letters on one side of the equals sign, and all the numbers on the other. The big rule is that you can do anything you want to the equation as long as you do the same thing to both sides of the equals sign. Think about it. If I start with 2 = 2, I know this is true. What if I add 4 to both sides:

   2 + 4 = 2 + 4

I get:

   6 = 6

This is also true. What if I multiply both sides by 3:

   6 * 3 = 6 * 3

I get:

   18 = 18

Still works. Okay, what if I multiply both sides by some unknown number 'x':

   18 * x = 18 * x

This is still always true. Remember, in algebra we write this as:

   18x = 18x

Okay, what if I add 4 * x to both sides? Can I do that? Sure!

   18x + 4x = 18x + 4x

Works just fine. Now. Remember how you used to factor your plain old vanilla numbers? As in, 6 = 3 * 2? And if you wanted to add 6 + 4 you could write it like:

   6 + 4
   3 * 2 + 2 * 2
   2 * (3 + 2)
   2 * 5
   10

See, you get 10 this way, too. Well, you can do the same thing with this silly 'x' number:

   18x + 4x
   18 * x + 4 * x     write it out the long way
   x * (18 + 4)       pull out the 'x'
   x * 22             add the numbers
   22x                here's the answer.

Of course, without an "equals" sign, we can't solve it any further. Anyhow, if you have "6y = 54", and what you want is "y = something", you just have to divide both sides by 6, right?
   6y = 54
   6y / 6 = 54 / 6

and we can write 6y/6 in a bunch of different ways, but it basically boils down to the exact same thing you used to do with fractions. The same way you can reduce 10/15 by factoring it into 5 * 2 / 5 * 3 and then crossing out both 5's, we can write 6y/6 as 6 * y / 6 * 1 and cross out the 6's to get plain old y/1, which is 'y'. And, of course, 54/6 = 9, so we have:

   y = 9

Yay! So, for 3D = 54 we use the same approach:

   3D / 3 = 54 / 3
   D = 18

We can even use 6y = 3D to check our work. If you substitute 9 for y and 18 for D, is the equation true?

I hope this helps. Good luck!

-Doctor Elise, The Math Forum
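For a modern cross-check of the three equations hidden in "6y = 3D = 54" (a sketch using the sympy library, which of course postdates the original 1996 exchange):

```python
# Verify y = 9 and D = 18, and the cross-equation 6y = 3D.
from sympy import symbols, Eq, solve

y, D = symbols("y D")
sol_y = solve(Eq(6 * y, 54), y)  # -> [9]
sol_D = solve(Eq(3 * D, 54), D)  # -> [18]
print(sol_y, sol_D)

# Both sides of the original chained equation agree: 6*9 = 3*18 = 54.
assert 6 * sol_y[0] == 3 * sol_D[0] == 54
```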
“Fluoride causes more human cancer, and causes it faster, than any other chemical.” – Dean Burk, Chief Chemist Emeritus, US National Cancer Institute

Fluoride is commonly found in tap water thanks to a process called fluoridation – which the U.S. government has been repeatedly telling us is a safe and effective way to protect teeth from decay, according to Global Research. But a recent study published in The Lancet, the world’s most renowned medical journal, has actually classified fluoride as a neurotoxin – something which has a negative impact on brain development – alongside other extremely toxic compounds such as arsenic, lead and mercury. A large number of cities within the U.S. are pumping their drinking water systems full of fluoride, and the government is claiming that there are no health risks to this unnecessary practice.

Why is fluoride added to water?

Fluoride is added to water because of an outdated notion that it prevents dental decay, according to Scientific American. The U.S. is being led to believe that, if applied frequently in low concentrations, it increases the rate of growth and size of tooth enamel – which helps to reverse the formation of small cavities. But modern studies show that dental decay rates are so low in the U.S. that the effects of water fluoridation cannot actually be measured. This means that the benefit of water fluoridation is not clinically relevant, according to Fluoride Alert.

There are many safer ways to provide fluoride for your teeth if they are deficient. In fact, there is a very easy one-time gel treatment that takes only 15 minutes and lasts a lifetime, according to Ye Olde Journalist. Teenagers and young adults that have dental fluoride deficiency can easily be tested – although it is likely that their dentist will have noticed the problem and already suggested ways to improve the condition. What little benefit fluoridated water might actually provide comes entirely through its topical application – meaning that fluoride does not need to be swallowed to benefit teeth, as reported by Fluoride Alert.

So what are the dangers of drinking fluoride?

Fluoride is an industrial-grade chemical that is commonly contaminated with trace amounts of toxic heavy metals – such as lead, arsenic and radium. These trace elements accumulate in our bodies over time, and are associated with causing cancer, among other health complications, according to Fluoride Alert. Toothpaste labels commonly contain warnings regarding the dangers of swallowing too much toothpaste at one time; this is due to its fluoride content. Meanwhile, fluoridation isn’t actually a common practice in Europe, with countries such as Austria, Belgium, Finland, Germany, Denmark, Sweden, Norway and The Netherlands opting out of adding fluoride to drinking water, according to Alt Health Works.
As noted in the latest study, published in The Lancet and reported on by Ye Olde Journalist, “A systematic review identified five industrial chemicals as developmental neurotoxicants: lead, methylmercury … arsenic and toluene … Six additional developmental neurotoxicants have also now been identified: manganese, fluoride …” The study notes that neurodevelopmental disabilities, including attention-deficit hyperactivity disorder, dyslexia and other cognitive impairments, are now affecting millions of children worldwide – a “pandemic of developmental neurotoxicity.”

Over recent years, more and more people have joined the movement to remove industrial fluoride from the world’s water supply, thanks to an increased awareness of the dangers to human health caused by drinking this toxin for a prolonged period of time. If you are concerned about the amount of fluoride you are putting into your body, you can switch to fluoride-free toothpastes and help spread awareness about the dangers of fluoridation.
Status: Least Concern

Range & Habitat
Swift foxes are primarily found in the Southwest United States and Western Canada. They live primarily in short grass prairies and deserts. They often form dens in sandy soils on open prairies, along fences or in plowed fields.

The Swift fox is one of the smallest foxes in the world and is the smallest of the North American wild dogs. They can reach speeds of over 50 km/h. In the wild, Swift foxes usually live between 3 and 6 years, but may live up to 14 years in captivity. Unlike many other types of foxes, Swift foxes use dens year round, not just while rearing their young. They are presently considered endangered in the United States, mainly due to habitat loss.

Reproduction & Growth
Swift foxes sometimes pair for life, but may not mate with the same partner each year. Male swift foxes mature and mate at one year, while females may wait until their second year before breeding. The breeding season varies depending on location, but is typically late spring/early summer. The gestation period is 50-60 days and pups are born in mid-May. For individuals farther south in the United States, the breeding season begins in late December or early January, with pups born in March and early April. Swift foxes have only one litter annually, with a litter size ranging anywhere from 2 to 6, and pups are usually born in underground dens.

In the Wild: Their diet varies seasonally and they typically eat whatever live prey they can catch. Their diet includes small mammals, birds, reptiles, amphibians, fish and insects, but also includes berries and grasses.

In the Zoo: Their diet consists of dry dog food, chunk meat and a special mixture of meat called feline diet. They receive this special diet twice daily.
- Killer whales have a well-developed, acute sense of hearing. A killer whale’s brain and nervous system appear physiologically able to process sounds at much higher speeds than humans, most likely because of their echolocation abilities.
- Soft tissue and bone conduct sound to a toothed whale’s middle and inner ears. In particular, fat lobes in the whale’s lower jaw appear to be an adaptation for conveying sound to the ears.
- In killer whales, the ear bone complex (ootic capsule) isn’t attached to the skull. Ligaments hold each ear bone complex in a cavity outside the skull. This separation of the ear bone complex allows a killer whale to localize sound (directional capacity), which is important for echolocation.
- Hearing range:
- Early studies published in 1972 suggested that the hearing range of killer whales was about 0.5 to 31 kHz. More recent studies show killer whales can hear sounds at frequencies as high as 120 kHz. Greatest sensitivity ranged from 18 to 42 kHz, with the least sensitivity to frequencies from 60 to 120 kHz.
- In comparison, the range of hearing of a young, healthy human is 15 to 20,000 Hz (0.015–20 kHz). Human speech falls within the frequency band of 100 to 10,000 Hz (0.1–10 kHz), with the main, useful voice frequencies within 300 to 3,400 Hz (0.3–3.4 kHz). This mainly falls within a killer whale’s hearing range.
- Killer whale vision is well developed.
- Studies in marine life parks have shown that killer whales have acute vision both in and out of water. In these studies, killer whales visually discriminated among similar objects. During more than one hundred trials, a killer whale was shown an object and cued to find a matching object. When given two choices, the whale chose the matching object with 92% accuracy, and when three choices were presented the whale’s accuracy was about 82%. Researchers did not determine whether the whale was responding to shape, size or color. Future studies may provide more detailed information on the visual abilities of killer whales. The eyes are located in front of and below the eye spot.
- The lens of a marine mammal's eye is stronger than that of a land mammal.
- In the eye of a land mammal, the cornea focuses light rays toward the lens, which further focuses the light rays onto the retina. Underwater, the cornea isn't able to adequately focus waves into the lens because the refractive index of water is similar to that of the interior of the eye.
- The eye of a marine mammal compensates for this lack of refraction at the cornea interface by having a much stronger, spherical lens. It is more similar to the lens of a fish's eye than the lens of a land mammal's eye.
- In air, a marine mammal's eye compensates for the added refraction at the air-cornea interface. At least in bright light, constricting the pupil helps, but it doesn't fully explain how a whale achieves visual acuity in air. Research is ongoing.
- DNA analysis of several other species of toothed whales indicated that the eyes of these whales do not develop pigment cells called short-wave-sensitive (S-) cones, which are sensitive to blue light. Researchers theorize that all modern cetaceans, including killer whales, lack these visual pigments and therefore aren’t able to discriminate color in the blue wavelengths.
- Anatomical studies and observations of behavior indicate that a killer whale's sense of touch is well developed.
Studies of closely related species (common dolphins, bottlenose dolphins, and false killer whales) suggest that the most sensitive areas are the blowhole region and areas around the eyes and mouth.
- In zoological parks, killer whales show strong preferences for specific types of fish. Overall, however, little is known about a whale’s sense of taste.
- Behavioral evidence suggests that bottlenose dolphins, a closely related species, can detect three if not all four primary tastes. The way they use their ability to “taste” is unclear.
- Scientists are undecided whether dolphins have taste buds like other mammals. Three studies indicated that taste buds may be found within 5 to 8 pits at the back of the tongue. One of those studies found them in young dolphins and not adults. Another study could not trace a nerve supply to the taste buds. Regardless, behavioral studies indicate bottlenose dolphins have some type of chemosensory capacity within the mouth.
- Olfactory lobes of the brain and olfactory nerves are absent in all toothed whales, indicating that they have no sense of smell. As air-breathing mammals that spend the majority of their time under water, killer whales would make little use of a sense of smell.
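To make the frequency comparison in the hearing notes above concrete, here is a minimal sketch in Python (added for illustration only; the band limits are simply the figures quoted above, and the function name is mine):

    # Frequency bands in kHz, taken from the figures quoted above.
    orca_hearing = (0.5, 120.0)   # reported killer whale hearing range
    human_speech = (0.1, 10.0)    # band of human speech

    def overlap(band_a, band_b):
        """Return the overlapping portion of two (low, high) bands, or None."""
        low = max(band_a[0], band_b[0])
        high = min(band_a[1], band_b[1])
        return (low, high) if low < high else None

    print(overlap(orca_hearing, human_speech))  # (0.5, 10.0)

Everything in human speech above 0.5 kHz falls inside the whale's reported range, which is why the text says speech "mainly" falls within it: the lowest speech frequencies (0.1 to 0.5 kHz) sit below the reported lower limit.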
Telephone communication, or telecommunication, refers to the practice of communication over a telephone. Although other forms of communication are also possible over the same transmission lines, voice communication is the most common.

Telephone communication was first made possible in 1876 by Alexander Graham Bell, and it was subsequently improved upon by many others. The typical components that make voice telecommunication possible are a microphone for capturing the person's voice, a speaker to reproduce the other person's voice, a dial pad to initiate a call, and a ringer to announce an incoming call.

Telephones and telephone calls were initially too expensive for the majority of households. As a result, only businesses and the very wealthy had access to them. Telephone communication revolutionized the way businesses performed work. It was no longer necessary for long-distance communication to occur over days or weeks because a phone call could be made in an instant. Since 1876, many advances have built upon the capability that was initially introduced with the first telephone. Telephone lines have also changed greatly to handle the consistently increasing variety and amount of communication traversing them. Telephones, which were originally only capable of voice communication, perform such a variety of functions that entire guidebooks have been written on how to make full use of them.
Adventist Youth Honors Answer Book/Nature/Moths & Butterflies

Moths & Butterflies
Skill Level 2
Year of Introduction: 1933

The Moths & Butterflies Honor is an optional component of the Naturalist Master Award. It is also an optional component of the Zoology Master Award (available only in the South Pacific Division).

1. What is the distinction between moths and butterflies?

The division of Lepidopterans into moths and butterflies is a popular taxonomy, not a scientific one. The distinctions listed here are not absolute. There are many butterflies with some of the characteristics of moths and many moths with some of the characteristics of butterflies.

The most obvious difference is in the feelers, or antennae. Most butterflies have thin slender filamentous antennae which are club shaped at the end. Moths, on the other hand, often have comb-like or feathery antennae, or filamentous and unclubbed ones.

Most moth caterpillars spin a cocoon made of silk within which they metamorphose into the pupal stage. Most butterflies, on the other hand, form an exposed pupa, which is also termed a chrysalis.

Coloration of the wings
Most butterflies have bright colours on their wings. Nocturnal moths, on the other hand, are usually plain brown, grey, white or black, often with obscuring patterns of zigzags or swirls which help camouflage them as they rest during the day. However, many day-flying moths are brightly colored, particularly if they are toxic. A few butterflies are also plain-colored, like the Cabbage White butterfly.

Time of activity
Most moths are nocturnal while most butterflies are diurnal. There are, however, exceptions, including the diurnal Gypsy moth and the spectacular "Uraniidae" or Sunset moths.

Moths usually rest with their wings spread out to their sides. Butterflies frequently fold their wings above their backs when they are perched, although they will occasionally "bask" with their wings spread for short periods.

2. Define the following terms: antennae, cocoon, pupa, larva, chrysalis.

- Antennae are paired appendages connected to the front-most segments of an insect. Antennae are jointed, at least at the base, and generally extend forward from the head.
They are sensory organs, although the exact nature of what they sense and how they sense it is not the same in all groups, nor always clear. Functions may variously include sensing touch, air motion, heat, vibration (sound), and especially smell or taste.
- A cocoon is a casing spun of silk by many moth caterpillars and numerous other insect larvae as a protective covering for the pupa.
- A pupa is the life stage of some insects undergoing transformation.
- A larva is a juvenile form of animal with indirect development, undergoing metamorphosis (for example, insects or amphibians). The larva can look completely different from the adult form; for example, a caterpillar differs from a butterfly. Larvae often have special (larval) organs which do not occur in the adult form.
- A chrysalis or nympha is the pupal stage of butterflies. Because chrysalids are often showy and are formed in the open, they are the most familiar examples of pupae. Most chrysalids are attached to a surface by a Velcro-like arrangement: a silken pad spun by the caterpillar and a set of hooks at the tip of the pupal abdomen.

3. Be able to identify three moths and/or butterflies by their cocoons.

4. What causes colored powder to come off on your hands when you handle the wings of a butterfly or moth? Examine the powder of a butterfly or moth with a magnifying lens and describe your findings.

This powder is made from tiny scales which cover the butterfly's wings. These scales give the wings their color, as the membrane beneath the scales is nearly transparent. The scales detach when abraded by a finger, much as the skin on a person's knee is abraded when it contacts a sidewalk.

5. Name three harmful tree moths and one harmful house moth and tell during what stage of their lives they each do their damage.

The gypsy moth was introduced into the United States in 1868 by a French scientist, Leopold Trouvelot, living in New Bedford, Massachusetts. The native silk-spinning caterpillars were proving to be susceptible to disease, so Trouvelot brought over gypsy moth eggs to try to make a disease-resistant caterpillar hybrid. When some of the moths escaped from his lab, they started to multiply, and they eventually grew to be the gypsy moths we know today. The gypsy moth is now one of the most notorious pests of hardwood trees in the Eastern United States. Since 1980, the gypsy moth has defoliated over 1,000,000 acres (4,000 km²) of forest each year. In 1981, a record 12,900,000 acres (52,200 km²) were defoliated. This is an area larger than Rhode Island, Massachusetts, and Connecticut combined. In wooded suburban areas, during periods of infestation when trees are visibly defoliated, gypsy moth larvae crawl up and down walls, across roads, over outdoor furniture, and even inside homes. During periods of feeding they leave behind a mixture of small pieces of leaves and frass, or excrement. During outbreaks, the sound of chewing and frass dropping is a continual annoyance. Gypsy moth populations usually remain at very low levels, but occasionally populations increase to very high levels which can result in partial to total defoliation of host trees for 1-3 years. Gypsy moths eat only during their larval stage.

"Tent Caterpillars" are moderately sized species in the genus Malacosoma in the moth family Lasiocampidae. Species occur in North America, Mexico, and Eurasia. Twenty-six species have been described, six of which occur in North America.
Although most people consider tent caterpillars only as pests due to their habit of defoliating trees, they are among the most social of all caterpillars and exhibit many noteworthy behaviors. Tent caterpillars are readily recognized because they are social, colorful, diurnal and build conspicuous silk tents in the branches of host trees. Some species, such as the eastern tent caterpillar, Malacosoma americanum, build a single large tent which is typically occupied through the whole of the larval stage, while others build a series of small tents that are sequentially abandoned. The forest tent caterpillar, Malacosoma disstria, is exceptional in that the larvae build no tent at all, aggregating instead on silken mats that they spin on the leaves or bark of trees. Tents facilitate aggregation and serve as focal sites of thermoregulatory behavior. They also serve as communication centers where caterpillars are alerted to the discovery of new food finds.

Lesser wax moth
Wax moths were first seen in North America in 1806. People believe they came over with honeybees from Europe. The lesser wax moth is very common all over the world, except in the colder regions. The larvae are the only ones that eat; the adults do not. Their diet typically consists of honey, beeswax, stored pollen, bee shell casings, and, in some cases, bee brood. While tunneling through honeycombs to reach food, these moths are also protecting themselves from their main enemy, the honeybee.

Codling moths are known as an agricultural pest, their larva being the common apple worm or maggot. The codling moth is native to Europe and was introduced to North America, where it has become one of the regular pests of apple orchards. It is found almost worldwide. It also attacks pears, walnuts, and other tree fruits. This larva is the famous "worm in the apple" of cartoon and vernacular fame.

The Clothing Moth (Tineola bisselliella) is a winged insect which develops from a caterpillar. It is recognized as a serious pest. Like most moth caterpillars, it can (and will) derive nourishment not only from clothing but also from many other sources. Eggs hatch into larvae, which then begin to feed. Once they get their fill, they pupate and undergo metamorphosis to emerge as adults. Adults do not eat: male adults look for females, and adult females look for places to lay eggs. Once their job is done, they die. Contrary to what most people believe, adult clothing moths do not eat or cause any damage to clothing or fabric. It is the larvae which are solely responsible for this, spending their entire time eating and foraging for food.

The White Shouldered House Moth (Endrosis sarcitrella) is a very common moth that occurs regularly inside buildings, mainly entering via open doors, windows, etc. Being continuously brooded, it can be found at any time of year. It is a widely distributed species whose larvae infest stored grain.

6. What famous butterfly follows the birds southward every winter and comes northward in the spring?

Monarch butterflies are especially noted for their lengthy annual migration. They make massive southward migrations starting in August until the first frost. A northward migration takes place in the spring. Female Monarchs deposit eggs for the next generation during these migrations. By the end of October, they reach their overwintering grounds. The length of these journeys exceeds the normal lifespan of most Monarchs, which is less than two months for butterflies born in early summer.
The last generation of the summer enters into a non-reproductive phase known as diapause and may live up to 7 months. During diapause, butterflies fly to one of many overwintering sites. The generation that overwinters generally does not reproduce until it leaves the overwintering site sometime in February and March. It is thought that the overwintering population may reach as far north as Texas and Oklahoma during the spring migration. It is the second, third and fourth generations that return to their northern locations in the United States and Canada in the spring. How the species manages to return to the same overwintering spots over a gap of several generations is still a subject of research; the flight patterns appear to be inherited, based on a combination of circadian rhythm and the position of the sun in the sky.

7. Identify in the field, then draw, photograph or collect 25 species of moths and butterflies, with not more than two specimens of any one variety. When collecting, specimens should be anesthetized by using carbon tetrachloride or another chemical in the collecting jar. In either project correctly label and include the following information:
a. Name
b. Date observed
c. Location
d. Time of day
e. Plant on which the insect was feeding or the material on which it was perched

For identification purposes, you will need to get a copy of a good field guide. As with many other identification tasks, it is best to find a specimen first and then attempt to identify it, rather than going out to look for a particular specimen. Pathfinders are encouraged to draw or photograph specimens rather than collect them. Identification should be done in the field. If going as a group, it is OK to bring one camera (perhaps the instructor's) and then have the rest of the participants make their drawings from the pictures taken. If you have access to a digital camera and a projector, you could project a slide show and have the Pathfinders make drawings as the slides are shown. You may also print the photos and pass them around.

8. Describe the life cycle of a butterfly or moth. What lesson can be learned in connection with the resurrection of the righteous?

Butterflies and moths are notable for their unusual life cycle, with a larval caterpillar stage, an inactive pupal stage, and a spectacular metamorphosis into a familiar and colorful winged adult form. Unlike many insects, butterflies do not experience a nymph period, but instead go through a pupal stage which lies between the larva and the adult stage. The four stages of a butterfly's life cycle are:
- Butterfly eggs are fixed to a leaf with a special glue which hardens rapidly. As it hardens it contracts, deforming the shape of the egg. This glue is easily seen surrounding the base of every egg. Eggs are usually laid on plants. Each species of butterfly has its own hostplant range, and while some species of butterfly are restricted to just one species of plant, others use a range of plant species, often including members of a common family.
- Larvae, or caterpillars, are multi-legged eating machines. They consume plant leaves and spend practically all of their time in search of food. Caterpillars mature through a series of stages, called instars. At the end of each instar, the larva moults the old cuticle, and the new cuticle rapidly hardens and pigments. Development of butterfly wing patterns begins by the last larval instar.
- When the larva is fully grown, hormones are produced.
At this point the larva stops feeding and begins "wandering" in search of a suitable pupation site, often the underside of a leaf. The larva transforms into a pupa (or chrysalis) by anchoring itself to a substrate and moulting for the last time. The chrysalis is usually incapable of movement, although some species can rapidly move the abdominal segments or produce sounds to scare potential predators.
- The adult, sexually mature, stage of the insect is known as the imago. After it emerges from its pupal stage, a butterfly cannot fly until the wings are unfolded. A newly emerged butterfly needs to spend some time inflating its wings with blood and letting them dry, during which time it is extremely vulnerable to predators. Some butterflies' wings may take up to three hours to dry while others take about one hour.

The life cycle of the butterfly has parallels to the life cycle of a Christian. During the larval stage, a butterfly lacks the beauty of an adult. It spends all of its time feeding itself, often causing major damage to plants. This stage of a butterfly's life can be likened to the untranslated state of man, in that it is unlovely and often ignorant of the damage it causes. Eventually, the caterpillar pupates. This stage is parallel to death. While inside its chrysalis, the butterfly is transformed. This is not something it is conscious of doing. At the resurrection of the righteous, the Christian will also be transformed by Christ into a new person, free from all defects. Gone are the selfish desires and the ugliness. The resurrected believer, like the butterfly, emerges a beautiful creature free from sin.
In the horned beetle world there is a bizarre evolutionary trade-off: the bigger the horn on the head, the smaller the male genitalia on the other end of the animal, and vice versa. As horns evolve to be larger, genitalia become smaller, eventually limiting sexual compatibility and creating a new species of horned beetles.

[Photo caption: Photos courtesy Armin Moczek/Indiana University Bloomington]

The separate beetle populations have diverged significantly in the size of the male copulatory organ, “and natural selection operating on the other end of the animal — horns atop the beetles’ heads — seems to be driving it,” they say.

“Biologists have known that in these beetles there is an investment trade-off between secondary sexual characters and primary sexual characters,” Moczek said. “As horns get bigger, copulatory organs get smaller, or vice versa. What was not known was how frequently and how fast this can occur in nature, and whether this can drive the evolution of new species.”

Structures directly involved in mating, the genitalia, are known as primary sexual characters. Combat structures like horns — or seductive attributes like a cardinal’s vibrant plumage or a bullfrog’s deeply resonant baritone — are known as secondary sexual characters, the scientists explained.

[Photo caption: Shown are males of four of the 10 Onthophagus species examined in the study. From top to bottom: O. watanabei (North Borneo), O. taurus (Mediterranean), O. gazella (South Africa), and O. sagittarius (Indonesia).]

Evolutionary biologists believe changes in copulatory organ size and shape can spur speciation by making individuals from different populations sexually incompatible. The notion that genital size is related to the origin of species is not new. An early “lock and key” model of reproductive isolation was first proposed by L. Dufour 160 years ago to explain why some pairs of species, outwardly identical in every way, are unable to mate. But how genital morphology is related to the creation of new species puzzles biologists.

“Individuals of most species do not choose mates according to the size and shape of genitalia,” Moczek and Parzer said in their statement. “Indeed, genitalia may not be relevant until the latter stages of courtship, if at all.”

This is where the latest research on the horned beetle Onthophagus taurus may shed some light. Native to Italy, the horned beetle exists in other parts of the world only because of recent human activity. This means, Moczek and Parzer say, that the marked divergences they observed in O. taurus’s horn and copulatory organ size must have occurred over an extremely short period of time — 50 years or less.

The four O. taurus populations Moczek and Parzer studied in the U.S. (North Carolina), Italy, and western and eastern Australia exhibit substantial changes in both horn and genitalia length — as much as 3.5 times, in terms of an “investment” index the scientists devised that takes body size into account.

The scientists examined 10 other Onthophagus species, and as expected, they found vast differences between the species regarding horn and male copulatory organ size. Moczek says this suggests that trade-offs between primary and secondary sexual traits continue to shape the way species diverge well after speciation has occurred.

[Photo caption: Males from most horned beetle species, such as the Onthophagus nigriventris seen here, have faced an evolutionary trade-off to ensure their reproductive success, an earlier study suggested. Photograph courtesy PNAS]

The speed and magnitude of divergence within O. taurus presents something of a paradox, the scientists say. “How is it that copulatory organ size can be so rigorously maintained within the populations of a single species, yet appear so restless to change?”

“In terms of the integrity of a species, it’s important for these things not to change too much,” Moczek explains. “So there is a lot of evidence suggesting that within species or within the populations of species, natural selection maintains genital characters. But if these primary sex characters are linked to other characters that can change readily, then you’ve got what we think is a very exciting mechanism that could prime populations for reproductive isolation.”

Horn length and shape can change for many reasons, Moczek says. Among densely populated species, fighting (which favors large horns) may not be an effective strategy for winning mates. “As combative males fight each other, a diminutive, smaller-horned male could simply employ a sneaking strategy to gain access to unguarded females. Under these circumstances, reduced investment in horns seems to result in larger copulatory organs.”

Alternately, in lower density populations, most male beetles spend a great deal of time fighting. Longer, bigger horns could serve these males well — and also lead to smaller genitalia.

“If this is all it takes to change genitalia, it may be easier to make new species than we thought,” Moczek said.
Some of the accomplishments made by the Sumerian civilization include creating the sexagesimal system, developing a set of laws and creating the cuneiform system of writing. Sumerians have also been credited with developing the potter's wheel, although there is evidence an earlier version may have been invented in Egypt.

The Sumerians developed the sexagesimal, or base 60, system in approximately 2000 B.C. This system influenced the creation of the 60-minute hour and the 60-second minute. No one knows why the Sumerians chose 60 as a base, but the system was very easy to use because 60 is evenly divisible by 2, 3, 4, 5, 6, 10, 12, 15, 20 and 30.

The Sumerians had their own system of law to govern legal disputes, using information recorded on tablets to settle civil matters. The civilization even made use of arbitrators to try to settle disputes amicably. If a dispute could not be settled by an arbitrator, it was brought before a panel of judges. The Sumerians are also credited with developing the first written set of laws, the code of Ur-Nammu.

The first written language, cuneiform script, was developed by Sumerians. The script was based on pictograms, or visual representations, of objects. The marks of the cuneiform system had a wedge-shaped appearance, making them very different from many of the languages used in modern times.
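To see the convenience of base 60 in action, here is a small sketch in Python (a modern illustration, not a reconstruction of Sumerian notation; the function name is mine) that converts a sequence of base-60 digits to a decimal value, exactly as hours, minutes and seconds are converted to seconds:

    def sexagesimal_to_decimal(digits):
        """Interpret a list of base-60 digits, most significant digit first."""
        value = 0
        for digit in digits:
            value = value * 60 + digit
        return value

    # 1,30 in base 60 is 1*60 + 30 = 90, just as 1 minute 30 seconds is 90 seconds.
    print(sexagesimal_to_decimal([1, 30]))     # 90
    # 2,0,15 is 2*3600 + 0*60 + 15 = 7215, like 2 h 0 min 15 s expressed in seconds.
    print(sexagesimal_to_decimal([2, 0, 15]))  # 7215

The many divisors of 60 are what make halves, thirds, quarters, fifths and sixths of an hour all come out as whole numbers of minutes.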
Pigeonholing is any process that attempts to classify disparate entities into a small number of categories (usually, mutually exclusive ones).

Common failings of pigeonholing schemes include:
- Categories are poorly defined (often because they are subjective).
- Entities may be suited to more than one category. Example: rhubarb is both 'poisonous' and 'edible'.
- Entities may not fit into any available category. Example: asking somebody from Washington, DC which state they live in.
- Entities may change over time, so they no longer fit the category in which they have been placed. Example: certain species of fish may change from male to female during their life.
- Attempting to discretize properties that would be better viewed as a continuum. Example: attempting to sort people into 'introverted' and 'extroverted'.
- Criteria used to categorize entities do not accurately predict the properties ascribed to those categories. Example: relying on astrological sign as a guide to someone's personality.

The term is also used of "pigeonholing" a bill in Congress, where a committee sets a bill aside and takes no further action on it.
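The failure modes listed above are easy to demonstrate in software. Here is a minimal sketch in Python (the items and categories are invented purely for illustration) contrasting a one-category-per-entity scheme with a tag set that tolerates overlap and non-membership:

    # One pigeonhole per entity: rhubarb cannot be stored truthfully.
    pigeonholes = {"apple": "edible", "hemlock": "poisonous"}
    # pigeonholes["rhubarb"] = ?  # its stalks are edible, its leaves poisonous

    # A set of tags per entity handles overlap and non-membership cleanly.
    tags = {
        "apple": {"edible"},
        "hemlock": {"poisonous"},
        "rhubarb": {"edible", "poisonous"},  # suited to more than one category
        "granite": set(),                    # fits no food category at all
    }
    print(tags["rhubarb"])  # prints both tags

The dictionary-of-sets design sidesteps the first three failings in the list; the continuum and poor-predictor failings are properties of the categories themselves and no data structure can repair them.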
Lymph nodes are small glands that filter lymph, the clear fluid that circulates through the lymphatic system. They become swollen in response to infection and tumors.

Lymphatic fluid circulates through the lymphatic system, which is made of channels throughout your body that are similar to blood vessels. The lymph nodes are glands that store white blood cells. White blood cells are responsible for killing invading organisms. The lymph nodes act like a military checkpoint. When bacteria, viruses, and abnormal or diseased cells pass through the lymph channels, they are stopped at the node. When faced with infection or illness, the lymph nodes accumulate debris, such as bacteria and dead or diseased cells.

Lymph nodes are located throughout the body. They can be found underneath the skin in many areas, including:
- in the armpits
- under the jaw
- on either side of the neck
- on either side of the groin
- above the collarbone

Lymph nodes swell from an infection in the area where they are located. For example, the lymph nodes in the neck can become swollen in response to an upper respiratory infection, like the common cold.

What causes the lymph nodes to swell?

Lymph nodes become swollen in response to illness, infection, or stress. Swollen lymph nodes are one sign that your lymphatic system is working to rid your body of the responsible agents. Swollen lymph glands in the head and neck are normally caused by illnesses such as:
- ear infection
- the cold or flu
- sinus infection
- HIV infection
- infected tooth
- mononucleosis (mono)
- skin infection
- strep throat

More serious conditions, such as immune system disorders or cancers, can cause the lymph nodes throughout the body to swell. Immune system disorders that cause the lymph nodes to swell include lupus and rheumatoid arthritis. Any cancers that spread in the body can cause the lymph nodes to swell. When cancer from one area spreads to the lymph nodes, the survival rate decreases. Lymphoma, which is a cancer of the lymphatic system, also causes the lymph nodes to swell. Some medications and allergic reactions to medications can cause the lymph nodes to swell. Antiseizure and antimalarial drugs can also cause lymph nodes to swell. Other causes of swollen lymph nodes include:
- cat scratch fever
- ear infections
- Hodgkin's disease
- metastasized cancer
- mouth sores
- non-Hodgkin's lymphoma

Detecting swollen lymph nodes

A swollen lymph node can be as small as the size of a pea and as large as the size of a cherry. Swollen lymph nodes can be painful to the touch, or they can hurt when you make certain movements. Swollen lymph nodes under the jaw or on either side of the neck may hurt when you turn your head in a certain way or when you're chewing food. They can often be felt simply by running your hand over your neck just below your jawline. They may be tender. Swollen lymph nodes in the groin may cause pain when walking or bending. Other symptoms may accompany swollen lymph nodes. If you experience such symptoms, or if you have painful swollen lymph nodes and no other symptoms, consult your doctor. Lymph nodes that are swollen but not tender can be signs of a serious problem, such as cancer. In some cases, the swollen lymph node will get smaller as other symptoms go away. If a lymph node is swollen and painful or if the swelling lasts more than a few days, see your doctor.

At the doctor's office

If you've recently become ill or had an injury, make sure to let your doctor know.
This information is vital in helping your doctor determine the cause of your symptoms. Your doctor will also ask you about your medical history. Since certain diseases or medications can cause swollen lymph nodes, giving your medical history helps your doctor find a diagnosis. After you discuss the symptoms with your doctor, they will perform a physical examination. This consists of checking the size of your lymph nodes and feeling them to see if they're tender. After the physical examination, a blood test may be administered to check for certain diseases or hormonal disorders. If necessary, the doctor may order an imaging test to further evaluate the lymph node or other areas of your body that may have caused the lymph node to swell. Common imaging tests used to check lymph nodes include CT scans, MRIs, X-rays, and ultrasound. In certain cases, further testing is needed. The doctor may order a lymph node biopsy. This is a minimally invasive test that consists of using thin, needle-like tools to remove a sample of cells from the lymph node. The cells are then sent to a laboratory where they are tested for major diseases, such as cancer. If necessary, the doctor may remove the entire lymph node.

How are swollen lymph nodes treated?

Swollen lymph nodes may become smaller on their own without any treatment. In some cases, the doctor may wish to monitor them without treatment. In the case of infections, you may be prescribed antibiotics or antiviral medications to eliminate the condition causing the swollen lymph nodes. Your doctor might also give you medications such as aspirin and ibuprofen (Advil) to combat pain and inflammation. Swollen lymph nodes caused by cancer may not shrink back to normal size until the cancer is treated. Cancer treatment may involve removing the tumor or any affected lymph nodes. It may also involve chemotherapy to shrink the tumor. Your doctor will discuss which treatment option is best for you.
Partial lunar eclipses explained

A partial lunar eclipse occurs when the Earth moves between the Sun and the Moon, but the Sun, Earth and Moon are not precisely aligned. When this occurs, only a fraction of the Moon's visible surface moves into the Earth's shadow. Although the Moon is a dark object, it can be seen in the sky most of the time because its surface reflects the Sun's rays back to Earth.

When the Earth moves between the Sun and Moon but the three celestial bodies do not form a perfectly straight line, a fraction of the Moon moves into the darkest, central part of the Earth's shadow (umbra) and does not receive any direct sunlight. The other part of its visible surface is within the shadow's much brighter outer part (penumbra).

Unlike solar eclipses, which can only be seen along a narrow path on Earth, partial eclipses of the Moon can be observed all across the night side of Earth, because observers are situated on the same celestial body that casts the shadow. For this reason, the probability of witnessing a lunar eclipse from any one point on Earth is much higher than for solar eclipses, even though both occur at similar intervals.

Upcoming Partial Lunar Eclipses
- Apr 4, 2015
- Aug 7, 2017
- Jul 16 / Jul 17, 2019
- May 26, 2021
- Nov 19, 2021
- Oct 28, 2023
(The visibility maps showing the path of each eclipse are not reproduced here.)

Visualizing a partial lunar eclipse

During the eclipse, the Earth's shadow slowly grows across the Moon's surface until it reaches its greatest magnitude. After this high point, the shadow diminishes again. The eclipsed part of the Moon is still visible as a dark yellow, orange or brown entity. Although the Earth blocks all direct sunlight from that part of the Moon's surface, some rays still find their way via the Earth's atmosphere.

When do partial lunar eclipses happen?

A partial lunar eclipse can be observed at night and during Full Moon when
- the Moon is near one of its orbital nodes, so Sun, Earth and Moon roughly form a straight line,
- and the observer is located on the night side of Earth.

The Moon's orbit and lunar nodes

The Earth revolves around the Sun and the Moon circles the Earth. During Full Moon, the Earth passes roughly between Moon and Sun. However, in most cases the three celestial bodies do not form a completely straight line, so the Moon is not eclipsed. The reason why lunar eclipses do not happen every Full Moon is that the lunar orbital plane - the imaginary flat surface whose outer rim is formed by the Moon's path around Earth - runs at an angle of approximately 5 degrees to the Earth's orbital plane around the Sun (ecliptic). The points where the two orbital planes meet are called lunar nodes. Only if the Moon appears near one of the two lunar nodes during Full Moon can a lunar eclipse be observed from the Earth's night side. The type and magnitude of the lunar eclipse depend on how precisely Sun, Earth and Moon line up. A partial lunar eclipse can be observed if the three form an almost straight line.

The Earth's shadow

Like any other object's shadow, the Earth's shadow consists of three different areas: the innermost and darkest part (umbra), the lighter, outer part (penumbra), and a partly shaded area beyond the umbra (antumbra). During a partial lunar eclipse, parts of the Moon pass through the Earth's umbra, while the remaining portion of its visible surface is within the penumbra.

Did you know...?
The size of the eclipsed portion of the Moon's surface (magnitude) is the same irrespective of the observer's location on the night side of Earth. However, because observers on the southern hemisphere stand “upside-down” compared to observers on the northern hemisphere, they also see the Moon “upside-down”. The orientation of a lunar eclipse and the direction in which the shadow appears to move across the Moon's surface can therefore vary according to latitude.
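The node-alignment condition described above can be put into rough numbers. The sketch below (Python, added for illustration) is a deliberately simplified model: it takes the approximately 5 degree inclination mentioned above, assumes round values for the apparent radii of the umbra and the Moon, and ignores the Moon's varying distance from Earth.

    import math

    INCLINATION = 5.1     # degrees: tilt of the Moon's orbit against the ecliptic
    UMBRA_RADIUS = 0.75   # degrees: rough apparent radius of the umbra at Moon distance
    MOON_RADIUS = 0.26    # degrees: rough apparent radius of the Moon

    def full_moon_touches_umbra(deg_from_node):
        """Rough test: can part of a full Moon this far from a node reach the umbra?"""
        # The Moon's ecliptic latitude grows with its angular distance from the node.
        latitude = INCLINATION * math.sin(math.radians(deg_from_node))
        return abs(latitude) < UMBRA_RADIUS + MOON_RADIUS

    for d in (0, 5, 10, 15, 20):
        print(d, "degrees from node:", full_moon_touches_umbra(d))

With these round numbers the test succeeds out to roughly 11 degrees from a node and fails beyond that, which is why most Full Moons pass above or below the Earth's shadow and no eclipse occurs.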
One of the most compelling questions in the field of pidgins and creoles consists in identifying the linguistic sources and cognitive forces that shape a given creole: why does a particular creole look and sound the way it does? Where do its linguistic properties come from? What are the original populations and languages that contributed to its genesis? The investigation of such questions hopes to shed light on how the mind pulls together linguistic materials from distinct sources to form a creole, and to reveal the nature of the cognitive processes involved in creole formation. Recent developments in language contact studies combined with the findings in other disciplines like developmental psychology are contributing to a better understanding of how creole languages emerge and develop. Topics in this course will include: - socio-historical contexts of creole genesis and how a distinct history of population contact results in distinct structural outcomes; - examination of the morpho-syntactic properties of a set of creole languages; - findings in experimental psychology regarding language learning and development; - identification of the cognitive processes (L1 and L2 acquisition) that contributed to the emergence of specific features. On this issue, we focus particularly on the process of convergence in creole formation and demonstrate how such a hypothesis can be experimentally tested.
Aspects of Task-Based Syllabus Design

Introduction and overview

Syllabus design is concerned with the selection, sequencing and justification of the content of the curriculum. Traditional approaches to syllabus design were concerned with selecting lists of linguistic features such as grammar, pronunciation, and vocabulary, as well as experiential content such as topics and themes. These sequenced and integrated lists were then presented to the methodologist, whose task it was to develop learning activities to facilitate the learning of the prespecified content. In the last twenty years or so a range of alternative syllabus models have been proposed, including a task-based approach. In this piece I want to look at some of the elements that a syllabus designer needs to take into consideration when he or she embraces a task-based approach to creating syllabuses and pedagogical materials. Questions that I want to explore include: What are tasks? What is the role of a focus on form in language learning tasks? Where do tasks come from? What is the relationship between communicative tasks in the world outside the classroom and pedagogical tasks? What is the relationship between tasks and language focused exercises?

Task-based syllabuses represent a particular realization of communicative language teaching. Instead of beginning the design process with lists of grammatical, functional-notional, and other items, the designer conducts a needs analysis which yields a list of the target tasks that the targeted learners will need to carry out in the real world outside the classroom. Examples of target tasks include communicative acts such as asking for and giving directions, the function illustrated in the two conversations below.

Any approach to language pedagogy will need to concern itself with three essential elements: language data, information, and opportunities for practice. In the rest of this piece I will look at these three elements from the perspective of task-based language teaching.

By language data, I mean samples of spoken and written language. I take it as axiomatic that, without access to data, it is impossible to learn a language. Minimally, all that is needed to acquire a language is access to appropriate samples of aural language in contexts that make transparent the relationship between form, function and use. In language teaching, a contrast is drawn between authentic and non-authentic data. Authentic data are samples of spoken or written language that have not been specifically written for the purposes of language teaching. Non-authentic data are dialogues and reading passages that HAVE been specially written. Here are two conversations that illustrate the similarities and differences between authentic and non-authentic data. Both are concerned with the functions of asking for and giving directions. I needn't spell out which is which, because it is obvious.

A: Excuse me please. Do you know where the nearest bank is?
B: Well, the city bank isn't far from here. Do you know where the main post office is?
A: No, not really. I'm just passing through.
B: Well, first go down this street to the traffic light.
B: Then turn left and go west on Sunset Boulevard for about two blocks. The bank is on your right, just past the post office.
A: All right. Thank you.
B: You're welcome.

A: How do I get to Kensington Road?
B: Well, you go down Fullarton Road
A: what, down Old Belair Road and around?
B: Yeah. And then you go straight
A: past the hospital?
B: Yeah, keep going straight, past the racecourse to the roundabout. You know the big roundabout?
B: And Kensington Road's off to the right.
A: What, off the roundabout?

Proponents of task-based language teaching have argued for the importance of incorporating authentic data into the classroom, although much has been made of the fact that authenticity is a relative matter, and that as soon as one extracts a piece of language from the communicative context in which it occurred and takes it into the classroom, one is de-authenticating it to a degree. However, if learners only ever encounter contrived dialogues and listening texts, the task of learning the language will be made more difficult (Nunan, 1999). The reality is that, in EFL contexts, learners need both authentic AND non-authentic data. Both provide learners with different aspects of the language.

In addition to data, learners need information. They need experiential information about the target culture, they need linguistic information about target language systems, and they need process information about how to go about learning the language. They can get this information either deductively, when someone (usually a teacher) or a textbook provides an explicit explanation, or they can get it inductively. In an inductive approach, learners study examples of language and then formulate the rule. Here is an example of an inductive exercise I use to review contrasting points of grammar. It is followed by the inductive reasoning of five of my students who carried out the tasks.

In small groups, study the following dialogues. What's the difference between what Person A says and what Person B says? When do we use one form and when do we use the other?

A: I've seen Romeo and Juliet twice.
B: Me too. I saw it last Tuesday and again on the weekend.

A: Want to go to the movies?
B: No, I'm going to study tonight. We have an exam tomorrow, you know.
A: Oh, in that case, I'll study as well.

Student A: A use present perfect because something happened in the past, but affecting things happening now.
Student B: Present perfect tense is used only to describe a certain incidence in the past without describing the exact time of happening. However, it is necessary to describe the time of happening when using the simple past tense.
Student C: Simple past is more past than have seen.
Student D: We use present perfect tense when the action happen many times. B. focus on actual date and use past.
Student E: A use present perfect to show how many times A have seen the film. B use simple past to show how much he love the film.

Student A: A is talking about a future action which has no planning. For B, the action has already planned.
Student B: A is expressing something he want to do immediately. B is expressing something he want to do in the future.
Student C: For A, the action will do in a longer future. For B, the action should be done within a short time.
Student D: A doesn't tell the exact time. B confirms the studying time will be tonight. We use the verb to be plus going means must do something.
Student E: A is more sure to study than B tonight.

From these comments, you can see that learners, even those at roughly the same proficiency level, will be at very different stages in their understanding of grammatical principles and rules. Some proponents of task-based pedagogy argue that an explicit, deductive approach is unnecessary and that it does not work. Although I am biased in favour of an inductive approach, I believe learners can benefit from both.

The third and final essential element is practice.
Unless you are extraordinarily gifted as a language learner, it is highly unlikely that you will get very far without extensive practice. In designing practice opportunities for my learners, I distinguish between tasks, exercises and activities. A task is a communicative act that does not usually have a restrictive focus on a single grammatical structure. It also has a non-linguistic outcome. An exercise usually has a restrictive focus on a single language element, and has a linguistic outcome. An activity also has a restrictive focus on one or two language items, but has a communicative outcome as well. In that sense, activities have something in common with tasks and something in common with exercises.

I distinguish between real-world or target tasks, which are communicative acts that we achieve through language in the world outside the classroom, and pedagogical tasks, which are carried out in the classroom. I subdivide pedagogical tasks into those with a rehearsal rationale and those with an activation rationale. These different elements are further defined and exemplified below.

Real-world or target task: A communicative act we achieve through language in the world outside the classroom.

Pedagogical tasks: A piece of classroom work which involves learners in comprehending, manipulating, producing or interacting in the language while their attention is principally focused on meaning rather than form. They have a non-linguistic outcome, and can be divided into rehearsal tasks or activation tasks.

Rehearsal task: A piece of classroom work in which learners rehearse, in class, a communicative act they will carry out outside of the class.

Activation task: A piece of classroom work involving communicative interaction, but NOT one in which learners will be rehearsing for some out-of-class communication. Rather, these are designed to activate the acquisition process.

Enabling skills: Mastery of language systems (grammar, pronunciation, vocabulary, etc.) which ENABLE learners to take part in communicative tasks.

Language exercise: A piece of classroom work focusing learners on, and involving learners in manipulating, some aspect of the linguistic system.

Communication activity: A piece of classroom work involving a focus on a particular linguistic feature but ALSO involving the genuine exchange of meaning.

Examples of pedagogical tasks, communicative activities and language exercises from Expressions:

Language exercise
Write the past tense form of these verbs: go, is, are, do, have, work, study, buy, pick, make, put, read. Now think of four things you did yesterday. Write sentences in the blanks. First I got up and _____________________________________________

Communication activity
Write three hobbies or activities you like / like doing. Ask each person in your group what they like / like doing. Decide on a suitable gift for each person.

Pedagogical task (rehearsal)
Write your resume. Now, imagine you're applying for one of these jobs. Your partner is applying for the other. (Students have two job advertisements.) Compare your partner with other applicants for the job. Who is the best candidate?

Pedagogical task (activation)
List three things you're thinking about doing this week. Group work. Tell your partners what you're thinking about doing. For each activity, get a recommendation and a reason from three different people. Then write the best recommendations in the chart.

The essential difference between a task and an exercise is that a task has a non-linguistic outcome.
Target or real-world tasks are the sorts of things that individuals typically do outside of the classroom. Pedagogical tasks are designed to activate acquisition processes.

Steps in designing a task-based program

Having specified target and pedagogical tasks, the syllabus designer analyzes these in order to identify the knowledge and skills that the learner will need in order to carry out the tasks. The next step is to sequence and integrate the tasks with enabling exercises designed to develop the requisite knowledge and skills. As I have already indicated, one key distinction between an exercise and a task is that exercises have purely language-related outcomes, while tasks have non-language-related outcomes as well as language-related ones. These are the steps that I follow in designing language programs:

1. Select and sequence real-world / target tasks
2. Create pedagogical tasks (rehearsal / activation)
3. Identify enabling skills: create communicative activities and language exercises
4. Sequence and integrate pedagogical tasks, communicative activities and language exercises

[A diagram showing how these various elements fit together is not reproduced here.]

If you would like further information on the ideas set out here, I suggest that you look at one (or both!) of the following books, both of which were written by me:

Designing Tasks for the Communicative Classroom. Cambridge: Cambridge University Press.
Second Language Teaching and Learning. Boston: Heinle & Heinle / Thomson Learning.

Additional papers can be found on my website.
Articles from Britannica encyclopedias for elementary and high school students:

- apartheid - Children's Encyclopedia (Ages 8-11): Apartheid was a system for keeping white people and nonwhites separated in South Africa. It lasted from about 1950 to the early 1990s. The word apartheid means "apartness" in Afrikaans, a language spoken in South Africa.
- apartheid - Student Encyclopedia (Ages 11 and up): An Afrikaans word for "apartness," apartheid is the name that South Africa's white government applied to its policy of racial, political, and economic discrimination against the country's nonwhite majority in the second half of the 20th century. From the 1960s the government often referred to apartheid as "separate development."
During a speech before the second Virginia Convention, Patrick Henry responds to the increasingly oppressive British rule over the American colonies by declaring, “I know not what course others may take, but as for me, give me liberty or give me death!” Following the signing of the American Declaration of Independence on July 4, 1776, Patrick Henry was elected governor of Virginia.

The first major American opposition to British policy came in 1765, after Parliament passed the Stamp Act, a taxation measure to raise revenues for a standing British army in America. Under the banner of “no taxation without representation,” colonists convened the Stamp Act Congress in October 1765 to vocalize their opposition to the tax. When the act took effect on November 1, 1765, most colonists called for a boycott of British goods, and some organized attacks on the customhouses and homes of tax collectors. After months of protest, Parliament voted to repeal the Stamp Act in March 1766.

Most colonists quietly accepted British rule until Parliament’s enactment of the Tea Act in 1773, which granted the East India Company a monopoly on the American tea trade. Viewed as another example of taxation without representation, militant Patriots in Massachusetts organized the “Boston Tea Party,” which saw British tea valued at some £10,000 dumped into Boston Harbor. Parliament, outraged by the Boston Tea Party and other blatant destruction of British property, enacted the Coercive Acts, also known as the Intolerable Acts, in the following year. The Coercive Acts closed Boston to merchant shipping, established formal British military rule in Massachusetts, made British officials immune to criminal prosecution in America, and required colonists to quarter British troops. The colonists subsequently called the first Continental Congress to consider a united American resistance to the British.

With the other colonies watching intently, Massachusetts led the resistance to the British, forming a shadow revolutionary government and establishing militias to resist the increasing British military presence across the colony. In April 1775, Thomas Gage, the British governor of Massachusetts, ordered British troops to march to Concord, Massachusetts, where a Patriot arsenal was known to be located. On April 19, 1775, the British regulars encountered a group of American militiamen at Lexington, and the first volleys of the American Revolutionary War were fired.
Definition of Terms
For this discussion:
- mediated means active participation by a teacher, tutor or facilitator throughout the learning process,
- collaborative means students combining their experiences and capabilities to bring both breadth and depth to their learning, and
- online learning means delivery of education materials and associated communication and administration via a learning management system such as Moodle or Blackboard.

Consistent with online education in general, an obvious benefit of mediated and collaborative online learning is flexibility, especially in terms of a student’s geographical location and competing time demands from employment, family and social obligations. In addition, it is flexible in terms of a student’s intellectual and professional interests and their (probable) need for individualised support. Finally, and perhaps most importantly, it has the benefit of an overarching, externally imposed study discipline.

As with all forms of learning and modes of education, mediated collaborative online learning has characteristics that must be carefully weighed and balanced. First is the process of student socialisation, which is to say, the process of introducing students to, and helping them feel comfortable with, the learning environment. The five stages of Salmon’s Model of Online Learning describe this well, beginning with Access and Motivation and then progressing through Online Socialisation, Information Exchange and Knowledge Construction, until reaching the final stage of Development, where students take personal responsibility for their learning. A second important consideration is learning style, which is to say, the need to provide appropriate, or at least multiple, media and delivery methodologies (eg text, audio, audio-visual, contemplative, reflective, interactive etc). Variety in the learning environment raises the potential to maintain the attention of students with differing learning preferences. Academic capability is another issue: the poorer a student’s academic capabilities, the greater the need for an imposed study discipline (eg tasks with follow-up assessment by specific dates). Similarly, the poorer a student’s academic capabilities, the greater the need for continuous feedback on performance, focussing not just on absolute performance but also on performance change (eg performance improvements since previous feedback). Structure is an issue sometimes not recognised as important by non-educators. Online learning, typically, can be structured either by topic or time, but to be effective, mediated and collaborative online learning should be structured by time (perhaps weekly). Finally, there is assessment, perhaps the most challenging, but sometimes the least considered, issue in education. In formal academic and professional environments, assessment processes are not just for students and not just about achievement of their learning objectives; they should also be used to identify the effectiveness of an education methodology and its application in a particular context. In an online environment, assessment, whether formative or summative, can be visible or hidden. Where a student produces and submits a piece of work for assessment it is visible, but assessment can also use hidden activities such as whether a student accessed a particular learning object and the time the student spent reading postings to a discussion forum.
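As a concrete illustration of such hidden measures (this sketch is not part of the original discussion; the log format, field names and threshold below are hypothetical assumptions, since real learning management systems such as Moodle or Blackboard each expose activity data in their own formats), the short Python example aggregates per-student time-on-task from access-log records and flags students whose engagement suggests a need for follow-up:

```python
from collections import defaultdict

# Hypothetical LMS access-log records: (student_id, object_id, seconds_spent).
# The structure is an illustrative assumption, not any particular LMS's schema.
access_log = [
    ("s001", "week3_reading", 540),
    ("s001", "week3_forum", 1260),
    ("s002", "week3_forum", 45),
]

def engagement_summary(log, min_seconds=300):
    """Aggregate time-on-task per student and flag low engagement.

    This is a 'hidden assessment' in the sense used above: it never
    feeds a grade, but can guide remedial intervention.
    """
    totals = defaultdict(int)
    objects_seen = defaultdict(set)
    for student, obj, seconds in log:
        totals[student] += seconds
        objects_seen[student].add(obj)
    return {
        student: {
            "total_seconds": totals[student],
            "objects_accessed": sorted(objects_seen[student]),
            "flag_for_follow_up": totals[student] < min_seconds,
        }
        for student in totals
    }

if __name__ == "__main__":
    for student, summary in engagement_summary(access_log).items():
        print(student, summary)
```

Consistent with the point developed next, output like this is a guide to intervention rather than an input to grading.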
Hidden assessments, to be hidden, cannot be defined in a subject outline or similar public document, and therefore cannot be used to justify a student’s grade. They can, however, be used to understand a student’s work rate and method, and thus can be a guide to the possible depth of their understanding and can identify opportunities for remedial intervention. Hidden assessment methods, possible in online learning environments, are most useful in subjects where the expected academic capabilities and the quality of assessable artefacts are relatively low.

To illustrate aspects of mediated and collaborative online learning, take as examples two semester-length subjects: a university first year undergraduate subject for recent school-leavers and a postgraduate subject for professionals in full-time employment. The undergraduate subject might be built around a textbook (ideally available in both electronic and hard-copy formats), whereas the postgraduate subject might have no textbook but numerous online readings, the total of which is too great for a student to read, meaning that they must select just those readings relevant to their personal interests. Associated with the undergraduate textbook could be pointers on how to take notes and identify key points, whereas in the postgraduate subject the readings are followed up with questions such as “what and how can you apply what you have just learnt to your current work environment” or “how can you use what you have just learnt to progress your career as a …” It is sometimes useful for students to find materials themselves, which they must then introduce and justify to their student colleagues. This has the benefit of introducing the students to self-directed study, which they can continue after completing their formal coursework. Both types of subject are supported by audio-visual materials (perhaps published through YouTube or similar), though the undergraduate videos are typically how-to instruction videos whereas the postgraduate materials might be interviews and conference presentations published by universities and other research organisations.

In terms of Bloom’s Taxonomy in the Cognitive Domain, the intellectual skills developed in the undergraduate subject are probably limited to knowledge recall and comprehension, perhaps sometimes reaching the level of concept application; whereas the postgraduate subject might consistently involve skills for meaning synthesis and the evaluation of ideas. Both subjects will use regular discussion forums (perhaps weekly), which should be assessed, both for formative reasons and also to ensure students post regularly and appropriately. In both subjects the discussion forums are the principal tool for student collaboration, so their seeding (in the form of meaningful questions) and support (by providing mid-stream direction to conversations or motivating participants) are critically important. The difference between the subjects is primarily in the types of question used to seed conversations and the intellectual quality of the subsequent discussions. In the undergraduate subject, discussion forums might be supported with elementary, perhaps multiple choice, quizzes focussing on knowledge recall and precision.

This has been a very brief discussion of mediated and collaborative online learning. It has argued that a key benefit of the method is flexibility in terms of student location, time and interests. In addition, and most importantly, it has the benefit of an externally imposed study discipline.
The most important take-away message from this discussion is that mediated and collaborative online learning, like all formal education, is not a trivial activity. To be effective it requires serious consideration by professional educators familiar with different types of learner, the significance of assessment, the methodology itself and the technologies over which it can be delivered.
Lizards are reptiles of the order Squamata, which they share with the snakes (Ophidians). They are usually four-legged, with external ear openings and movable eyelids. Species range in adult length from a few centimeters (some Caribbean geckos) to nearly three meters (the Komodo dragon).

Some lizard species called "glass snakes" or "glass lizards" have no functional legs, though there are some vestigial skeletal leg structures. They are distinguished from true snakes by the presence of eyelids and ears. The tail of glass lizards, like that of many other lizards, will break off as a defense mechanism.

Many lizards can change color in response to their environments or in times of stress. The most familiar example is the chameleon, but more subtle color changes occur in other lizard species as well (most notably the anole, also known as the "house chameleon").

Lizards typically feed on insects or rodents. A few species are omnivorous or herbivorous; a familiar example of the latter is the iguana, which is unable to properly digest animal protein.

Until very recently, it was thought that only two lizard species were venomous: the Mexican beaded lizard and the closely related Gila monster, both of which live in northern Mexico and the southwestern United States. However, recent research at the University of Melbourne, Australia, and Pennsylvania State University has revealed that many lizards in the iguanian and monitor lizard families in fact have venom-producing glands. None of these poses much danger to humans, as their poison is introduced slowly by chewing, rather than injected as with venomous snakes. Nine toxins previously thought to occur only in snakes have been discovered, as well as a number of previously unseen chemicals. These revelations are prompting calls for a complete overhaul of the classification system for lizard species to form a venom clade. "These papers threaten to radically change our concepts of lizard and snake evolution, and particularly of venom evolution," says Harry Greene, a herpetologist at Cornell University in New York.

Most other lizard species are harmless to humans (most species native to North America, for example, are incapable even of drawing blood with their bites). Only the very largest lizard species pose a threat of death; the Komodo dragon, for example, has been known to attack and kill humans and their livestock. The Gila monster and beaded lizard are venomous, however, and though their bites are rarely deadly, they can be extremely painful and powerful. The chief impact of lizards on humans is positive: they are significant predators of pest species; numerous species are prominent in the pet trade; some are eaten as food (for example, iguanas in Central America); and lizard symbology plays important, though rarely predominant, roles in some cultures (e.g. Tarrotarro in Australian mythology).

Most lizards lay eggs, though a few species are capable of live birth. Many are also capable of regenerating lost limbs or tails. Lizards in the Scincomorpha family, which includes skinks (such as the blue-tailed skink), often have shiny, iridescent scales that appear moist. However, like all other lizards, they are dry-skinned and generally prefer to avoid water. All lizards are able to swim if needed, however, and a few (such as the Nile monitor) are quite comfortable in aquatic environments.

All text is available under the terms of the GNU Free Documentation License.
‘The Art of Teaching’ – Mary MacKillop

Mary was an outstanding educator, in many ways ahead of her time. She saw teaching and learning as a reciprocal process whereby the teacher ‘must understand what she is about’ and the children ‘must also know their duty’. She:
- instilled a culture of order and self-discipline through praise and encouragement rather than corporal punishment
- rewarded exemplary behaviour and achievement with daily marks, coloured ribbons and boiled lollies
- appointed the most advanced, courteous and punctual as monitors, a position of trust and responsibility
- knew from experience that rote learning without oral instruction was ‘useless’, because children need to ‘understand what they learn’
- asserted that most subjects ‘should be taught orally’, with maps and science charts, for example, carefully explained
- secured the children’s attention by insisting that their eyes be ‘fixed upon herself’ and ‘their hands and feet in their proper position’ before she began a lesson
- organised feasts, bush picnics and games for special occasions, also enjoyed by parents.

‘A good teacher makes good children and a good school where punishment is rarely required.’

Intelligent, warm and personable, Mary developed positive relationships with her pupils. She:
- respected their personal dignity and treated them fairly
- was patient and tolerant, but very firm
- was compassionate and reassuring
- had a sense of humour and laughed readily.

Above all, she LOVED them and was KIND.

Josephite Education exhibition text: Margaret Muller, Mary MacKillop Penola Centre, 2010.
Sources: 1. MacKillop to Woods, Penola, 13.4.1867; 2. Woods, ‘St Joseph’s Schools, Rules for Teachers’, Adelaide, c1870; 3. MacKillop, ‘Timetable Explained’, 1875; 4. MacKillop, ibid, and Woods, ‘Rules for the Institute of St Joseph’, October 1867; 5. MacKillop, ‘Timetable Explained’, 1875.
Early Christian Art and Architecture

Two important moments played a critical role in the development of early Christianity. The first was the decision of the Apostle Paul to spread Christianity beyond the Jewish communities of Palestine into the Greco-Roman world, and the second was the moment when the Emperor Constantine, at the beginning of the fourth century, accepted Christianity and became its patron. The creation and nature of Christian art were directly impacted by these moments.

As implicit in the names of his Epistles, Paul spread Christianity to the Greek and Roman cities of the ancient Mediterranean world. In cities like Ephesus, Corinth, Thessalonica, and Rome, Paul encountered the religious and cultural experience of the Greco-Roman world. This encounter played a major role in the formation of Christianity. Christianity in its first three centuries was one of a large number of mystery religions that flourished in the Roman world. Religion in the Roman world was divided between the public, inclusive cults of civic religions and the secretive, exclusive mystery cults. The emphasis in the civic cults was on customary practices, especially sacrifices. Since the early history of the polis, or city-state, in Greek culture, the public cults had played an important role in defining civic identity. Rome, as it expanded and assimilated more peoples, continued to use the public religious experience to define the identity of being a citizen in the Roman world. The polytheism of the Romans allowed them to assimilate the gods of the peoples they conquered. Thus, when the Emperor Hadrian created the Pantheon in the early second century, the building's dedication to all the gods signified the Roman ambition of bringing cosmos, or order, to the gods, just as peoples were brought into political order through the spread of Roman imperial authority. The order of Roman authority on earth is a reflection of the divine cosmos.

For most adherents of mystery cults there was no contradiction in participating in both the public cults and a mystery cult. The different religious experiences appealed to different aspects of life. In contrast to the civic identity which was the focus of the public cults, the mystery religions appealed to the participant's concerns for personal salvation. The mystery cults focused on a central mystery that would only be known by those who had become initiated into the teachings of the cult. These are characteristics Christianity shares with numerous other mystery cults. In early Christianity, emphasis is placed on Baptism, which marked the initiation of the convert into the secrets or mysteries of the faith. The Christian emphasis on the belief in salvation and an afterlife is consistent with the other mystery cults. The monotheism of Christianity, though, was a crucial difference from the other cults. The refusal of the early Christians to participate in the civic cults due to their monotheistic beliefs led to their persecution. Christians were seen as anti-social.

The beginnings of an identifiable Christian art can be traced to the end of the second century and the beginning of the third century. Considering the Old Testament prohibitions against graven images, it is important to consider why Christian art developed in the first place. The use of images would be a continuing issue in the history of Christianity. The best explanation for the emergence of Christian art in the early church is the important role images played in Greco-Roman culture.
As Christianity gained converts, these new Christians had been brought up on the value of images in their previous cultural experience, and they wanted to continue this in their Christian experience. For example, there was a change in burial practices in the Roman world away from cremation to inhumation. Outside the city walls of Rome, adjacent to major roads, catacombs were dug into the ground to bury the dead. Families would have chambers, or cubicula, dug to bury their members. Wealthy Romans would also have sarcophagi, or marble tombs, carved for their burial. The Christian converts wanted the same things. Christian catacombs were frequently dug adjacent to non-Christian ones, and sarcophagi with Christian imagery were apparently popular with the richer Christians.

A striking aspect of the Christian art of the third century is the absence of the imagery that will dominate later Christian art. We do not find in this early period images of the Nativity, Crucifixion, or Resurrection of Christ, for example. This absence of direct images of the life of Christ is best explained by the status of Christianity as a mystery religion. The story of the Crucifixion and Resurrection would be part of the secrets of the cult. While not directly representing these central Christian images, the theme of death and resurrection was represented through a series of images, many of which were derived from the Old Testament, that echoed the themes. For example, the story of Jonah, who was swallowed by a great fish and then, after spending three days and three nights in the belly of the beast, was vomited out on dry ground, was seen by early Christians as an anticipation or prefiguration of the story of Christ's own death and resurrection. Images of Jonah, along with those of Daniel in the Lion's Den, the Three Hebrews in the Fiery Furnace, and Moses Striking the Rock, among others, are widely popular in the Christian art of the third century, both in paintings and on sarcophagi. All of them can be seen to allegorically allude to the principal narratives of the life of Christ. The common subject of salvation echoes the major emphasis in the mystery religions on personal salvation. The appearance of these subjects frequently adjacent to each other in the catacombs and sarcophagi can be read as a visual litany: save me Lord as you have saved Jonah from the belly of the great fish, save me Lord as you have saved the Hebrews in the desert, save me Lord as you have saved Daniel in the Lion's Den, etc. One can imagine how early Christians, who were rallying around the nascent religious authority of the Church against the regular threats of persecution by imperial authority, would find meaning in the story of Moses striking the rock to provide water for the Israelites fleeing the authority of the Pharaoh on their exodus to the Promised Land.

One of the major differences between Christianity and the public cults was the central role faith plays in Christianity and the importance of orthodox beliefs. The history of the early Church is marked by the struggle to establish a canonical set of texts and the establishment of orthodox doctrine. Questions about the nature of the Trinity and Christ would continue to challenge religious authority. Within the civic cults there were no central texts and there were no orthodox doctrinal positions. The emphasis was on maintaining customary traditions. One accepted the existence of the gods, but there was no emphasis on belief in the gods.
The Christian emphasis on orthodox doctrine finds its closest parallel in the Greek and Roman world in the role of philosophy. Schools of philosophy centered around the teachings or doctrines of a particular teacher, and proposed specific conceptions of reality. Ancient philosophy was influential in the formation of Christian theology. For example, the opening of the Gospel of John, which begins "In the beginning was the word and the word was with God...," is unmistakably based on the idea of the "logos," going back to the philosophy of Heraclitus (ca. 535-475 BCE). Christian apologists like Justin Martyr, writing in the second century, understood Christ as the Logos or the Word of God who served as an intermediary between God and the World.

An early representation of Christ found in the Catacomb of Domitilla shows the figure of Christ flanked by a group of his disciples or students. Those experienced with later Christian imagery might mistake this for an image of the Last Supper, but this image does not tell any story. It conveys rather the idea that Christ is the true teacher. Christ, draped in classical garb, holds a scroll in his left hand while his right hand is outstretched in the so-called ad locutio gesture, or the gesture of the orator. The dress, scroll, and gesture all establish the authority of Christ, who is placed in the center of his disciples. Christ is thus treated like the philosopher surrounded by his students or disciples. Comparably, an early representation of the apostle Paul, identifiable by his characteristic pointed beard and high forehead, is based on the convention of the philosopher, as exemplified by a Roman copy of a late fourth century BCE portrait of the fifth century BCE playwright Sophocles.

A third century sarcophagus in the Roman church of Santa Maria Antiqua was undoubtedly made to serve as the tomb of a relatively prosperous third century Christian. At the center appears a seated, bearded male figure holding a scroll and a standing female figure. The male philosopher type is easily identifiable with the same type in another third century sarcophagus, in this case a non-Christian one. The female figure, who holds her arms outstretched, combines two different conventions. The outstretched hands in Early Christian art represent the so-called "orant" or praying figure. This is the same gesture found in the catacomb paintings of Jonah being vomited from the great fish, the Hebrews in the Furnace, and Daniel in the Lion's Den. The juxtaposition of this female figure with the philosopher figure also associates her with the convention of the muse, or source of inspiration for the philosopher, as illustrated in an early sixth century miniature showing the figure of Dioscurides, an ancient Greek physician, pharmacologist, and botanist.

A curious detail about the male and female figures at the center of the Santa Maria Antiqua sarcophagus is that their faces are unfinished. This suggests the possibility that this tomb was not made with a specific patron in mind, but rather was made on a speculative basis, with the expectation that a patron would buy the sarcophagus and have his, and presumably his wife's, likenesses added. If this is true, it says a lot about the nature of the art industry and the status of Christianity at this period. To produce a sarcophagus like this meant a serious commitment on the part of the maker. The expense of the stone and the time taken to carve it were considerable.
A craftsman would not have made a commitment like this without a sense of certainty that someone would purchase it.

On the left-hand side is represented Jonah sleeping under the ivy after being vomited from the great fish. The pose of the reclining Jonah, with his arm over his head, is based on the sleeping figure conventional in Greek and Roman art. A popular subject of non-Christian sarcophagi was the sleeping figure of Endymion being approached by Selene. Endymion's wish to sleep for ever, and thus remain ageless and immortal, explains the popularity of this subject on non-Christian sarcophagi. A third century sarcophagus in the Metropolitan Museum of Art shows Endymion in the same pose as Jonah on the Santa Maria Antiqua sarcophagus. On the right-hand side of the Santa Maria Antiqua sarcophagus appears another popular Early Christian image, the Good Shepherd. While echoing the New Testament parable of the Good Shepherd and the Psalms of David, the motif had clear parallels in Greek and Roman art, going back at least to Archaic Greek art, as exemplified by the so-called Moschophoros, or calf-bearer, from the early sixth century BCE. On the very right appears an image of the Baptism of Christ. This relatively rare representation of Christ is probably included to refer to the importance of the sacrament of Baptism, which signified death and rebirth into a new Christian life.

By the beginning of the fourth century Christianity was a growing mystery religion in the cities of the Roman world. It was attracting converts from different social levels. Christian theology and art were enriched through the cultural interaction with the Greco-Roman world. But Christianity would be radically transformed through the actions of a single man. In 312, the Emperor Constantine defeated his principal rival Maxentius at the Battle of the Milvian Bridge. Accounts of the battle describe how Constantine had seen a sign in the heavens portending his victory. Eusebius, Constantine's principal biographer, describes the sign as the Chi Rho, the first two letters in the Greek spelling of the name Christos. After that victory Constantine became the principal patron of Christianity. In 313 he issued the Edict of Milan, which granted religious toleration. Although Christianity would not become the official religion of Rome until the end of the fourth century, Constantine's imperial sanction of Christianity transformed its status and nature. Neither imperial Rome nor Christianity would be the same after this moment. Rome would become Christian, and Christianity would take on the aura of imperial Rome.

The transformation of Christianity is dramatically evident in a comparison between the architecture of the pre-Constantinian church and that of the Constantinian and post-Constantinian church. During the pre-Constantinian period, there was not much that distinguished the Christian churches from typical domestic architecture. A striking example of this is presented by a Christian community house from the Syrian town of Dura-Europos. Here a typical house has been adapted to the needs of the congregation. A wall was taken down to combine two rooms; this was undoubtedly the room for services. It is significant that the most elaborate aspect of the house is the room designed as a baptistry. This reflects the importance of the sacrament of Baptism to initiate new members into the mysteries of the faith. Otherwise this building would not stand out from the other houses.
This domestic architecture obviously would not meet the needs of Constantine's architects. Emperors for centuries had been responsible for the construction of temples throughout the Roman Empire. We have already observed the role of the public cults in defining one's civic identity, and Emperors understood the construction of temples as testament to their pietas, or respect for the customary religious practices and traditions. So it was natural for Constantine to want to construct edifices in honor of Christianity. He built churches in Rome, including the Church of St. Peter; he built churches in the Holy Land, most notably the Church of the Nativity in Bethlehem and the Church of the Holy Sepulcher in Jerusalem; and he built churches in his newly constructed capital of Constantinople.

In creating these churches, Constantine and his architects confronted a major challenge: what should be the physical form of the church? Clearly the traditional form of the Roman temple would be inappropriate, both because of its associations with pagan cults and because of the difference in function. Temples served as treasuries and dwellings for the cult; sacrifices occurred on outdoor altars, with the temple as a backdrop. This meant that Roman temple architecture was largely an architecture of the exterior. Since Christianity was a mystery religion that demanded initiation to participate in religious practices, Christian architecture put greater emphasis on the interior. The Christian churches needed large interior spaces to house the growing congregations and to mark the clear separation of the faithful from the unfaithful. At the same time, the new Christian churches needed to be visually meaningful. The buildings needed to convey the new authority of Christianity. These factors were instrumental in the formulation during the Constantinian period of an architectural form that would become the core of Christian architecture to our own time: the Christian basilica.

The basilica was not a new architectural form. The Romans had been building basilicas in their cities and as part of palace complexes for centuries. A particularly lavish one was the so-called Basilica Ulpia, constructed as part of the Forum of the Emperor Trajan in the early second century, but most Roman cities would have had one. Basilicas had diverse functions, but essentially they served as formal public meeting places. One of the major functions of the basilicas was as a site for law courts. These were housed in an architectural form known as the apse. In the Basilica Ulpia, these semi-circular forms project from either end of the building, but in some cases the apses would project off the length of the building. The magistrate who served as the representative of the authority of the Emperor would sit in a formal throne in the apse and issue his judgments. This function gave the basilicas an aura of political authority. Basilicas also served as audience halls as part of imperial palaces. A well-preserved example is found in the German town of Trier. Constantine built a basilica as part of a palace complex in Trier, which served as his northern capital. Although a fairly simple architectural form, and now stripped of its original interior decoration, the basilica must have been an imposing stage for the emperor. Imagine the emperor, dressed in imperial regalia, marching up the central axis as he makes his dramatic adventus, or entrance, along with other members of his court.
This space would have humbled an emissary who approached the enthroned emperor seated in the apse. It is this category of building that Constantine's architects adapted to serve as the basis for the new churches.

The original Constantinian buildings are now known only in plan, but an examination of a still extant early fifth century Roman basilica, the Church of Santa Sabina, helps us to understand the essential characteristics of the early Christian basilica. Like the Trier basilica, the Church of Santa Sabina has a dominant central axis that leads from the entrance to the apse, the site of the altar. This central space is known as the nave, and is flanked on either side by side aisles. The architecture is relatively simple, with a wooden truss roof. The nave wall is broken by clerestory windows that provide direct lighting. The wall does not contain the traditional classical orders articulated by columns and entablatures. Now plain, the walls were apparently originally decorated with mosaics. This interior would have had a dramatically different effect from that of a classical building. As exemplified by the interior of the Pantheon, constructed in the second century by the Emperor Hadrian, the wall in the classical building was broken up into different levels by the horizontals of the entablatures. The columns and pilasters form verticals that tie together the different levels. Although this decor does not physically support the load of the building, the effect is to visualize the weight of the building. The thickness of the classical decor adds solidity to the building. In marked contrast, the nave wall of Santa Sabina has little sense of weight. The architect was particularly aware of the light effects in an interior space like this. The glass tiles of the mosaics would create a shimmering effect and the walls would appear to float. Light would have been understood as a symbol of divinity; light was a symbol for Christ. The emphasis in this architecture is on the spiritual effect and not the physical.

The opulent effect of the interior of the original Constantinian basilicas is brought out in a Spanish pilgrim's description of the Church of the Holy Sepulcher in Jerusalem: "The decorations are too marvelous for words. All you can see is gold, jewels and silk...You simply cannot imagine the number and sheer weight of the candles, tapers, lamps and everything else they use for the services...They are beyond description, and so is the magnificent building itself. It was built by Constantine and...was decorated with gold, mosaic, and precious marble, as much as his empire could provide."

Another striking contrast to the traditional classical building is evident in looking at the exterior of Santa Sabina. The classical temple, going back to Greek architecture, had been an architecture of the exterior articulated by the classical orders, while the exterior of the church of Santa Sabina is a simple, unarticulated brick wall. This reflects the shift to an architecture of the interior.

(Image: Missorium of Theodosius, 388, Madrid, Academia de la Historia.)

The opulent interior of the Constantinian basilicas would have created an effective space for increasingly elaborate rituals. Influenced by the splendor of the rituals associated with the emperor, the liturgy placed emphasis on dramatic entrances and the stages of the rituals. The introit, or entrance of the priest into the church, was influenced by the adventus, or arrival, of the emperor.
The culmination of the entrance and the focal point of the architecture was the apse. It was here that the sacraments would be performed, and it would be here that the priest would proclaim the word. In Roman civic and imperial basilicas, the apse had been the seat of authority. In the civic basilicas this is where the magistrate would sit, adjacent to an imperial image, and dispense judgment. In the imperial basilicas, the emperor would be enthroned. These associations with authority made the apse a suitable stage for the Christian rituals. The priest would be like the magistrate proclaiming the word of a higher authority.

A late fourth century mosaic in the apse of the Roman church of Santa Pudenziana visualizes this. We see in this image a dramatic transformation in the conception of Christ from the pre-Constantinian period. In the Santa Pudenziana mosaic, Christ is shown in the center, seated on a jewel-encrusted throne. He wears a gold toga with purple trim, both colors associated with imperial authority. His right hand is extended in the ad locutio gesture conventional in imperial representations. Holding a book in his left hand, Christ is shown proclaiming the word. This is dependent on another convention of Roman imperial art, the so-called traditio legis, or the handing down of the law. A silver plate made for the Emperor Theodosius in 388 to mark the tenth anniversary of his accession to power shows the Emperor in the center handing down the scroll of the law. Notably, the Emperor Theodosius is shown with a halo, much like the figure of Christ. While the halo would become a standard convention in Christian art to demarcate sacred figures, the origins of this convention can be found in imperial representations like the image of Theodosius.

Behind the figure of Christ appears an elaborate city. In the center appears a hill surmounted by a jewel-encrusted Cross. This identifies the city as Jerusalem and the hill as Golgotha, but this is not the earthly city but rather the heavenly Jerusalem. This is made clear by the four figures seen hovering in the sky around the cross. These are identifiable as the four beasts that are described as accompanying the lamb in the Book of Revelation. The winged man, the winged lion, the winged ox, and the eagle became in Christian art symbols for the Four Evangelists, but in the context of the Santa Pudenziana mosaic they define the realm as outside earthly time and space, that is, as the heavenly realm. Christ is thus represented as the ruler of the heavenly city. The cross has become a sign of the triumph of Christ.

This mosaic finds a clear echo in the following excerpt from the writings of the early Christian theologian St. John Chrysostom: "You will see the king, seated on the throne of that unutterable glory, together with the angels and archangels standing beside him, as well as the countless legions of the ranks of the saints. This is how the Holy City appears....In this city is towering the wonderful and glorious sign of victory, the cross, the victory booty of Christ, the first fruit of our human kind, the spoils of war of our king." The language of this passage shows the unmistakable influence of the Roman emphasis on triumph. The Cross is characterized as a trophy or victory monument. Christ is conceived of as a warrior king. The order of the heavenly realm is characterized as being like the Roman army, divided up into legions. Both the text and mosaic reflect the transformation in the conception of Christ.
Both text and mosaic document the merging of Christianity with Roman imperial authority. It is this aura of imperial authority that distinguishes the Santa Pudenziana mosaic from the painting of Christ and his disciples from the Catacomb of Domitilla: Christ in the catacomb painting is simply a teacher, while in the mosaic Christ has been transformed into the ruler of heaven. Even his long flowing beard and hair construct Christ as being like Zeus or Jupiter. The mosaic makes clear that all authority comes from Christ. He delegates that authority to his flanking apostles. It is significant that in the Santa Pudenziana mosaic the figure of Christ is flanked by the figure of St. Paul on the left and the figure of St. Peter on the right. These are the principal apostles. By the fourth century, it was already established that the Bishop of Rome, or the Pope, was the successor of St. Peter, the founder of the Church of Rome. Just as power descends from Christ through the apostles, so at the end of time that power will be returned to Christ. The standing female figures can be identified as personifications of the major division of Christianity between the church of the Jews and that of the Gentiles. They can be seen as offering up their crowns to Christ, just as the 24 Elders are described as returning their crowns in the Book of Revelation. The meaning is clear: all authority comes from Christ, just as in the Missorium of Theodosius, which shows the transmission of authority from the Emperor to his co-emperors.

This emphasis on authority should be understood in the context of the religious debates of the period. When Constantine accepted Christianity, there was not one Christianity but a wide diversity of different versions. A central concern for Constantine was the establishment of Christian orthodoxy in order to unify the church. In 325, Constantine called the First Council of Nicaea. The Christian Bishops were charged with coming up with a consensus as to the nature of Christian doctrine. This ecumenical, or worldwide, council promulgated the so-called Nicene Creed.

Christianity underwent a fundamental transformation with its acceptance by Constantine. The imagery of Christian art before Constantine appealed to the believer's desires for personal salvation, while the dominant themes of Christian art after Constantine emphasized the authority of Christ and His church in the world. Just as Rome became Christian, Christianity and Christ took on the aura of Imperial Rome. A dramatic example of this is presented by a mosaic of Christ in the Archiepiscopal Palace in Ravenna. Here Christ is shown wearing the cuirass, or breastplate, regularly depicted in images of Roman Emperors and generals. The staff of imperial authority has been transformed into the cross.

Useful Websites: The episode entitled The Legitimization of Christianity, part of the Frontline series From Jesus to Christ, gives a very valuable introduction to the religious context of Early Christianity. For more information about religion in the ancient world, see in this same series Marianne Bonz's Religion in the Roman World and the excerpts from different scholars entitled The Empire's Religions.
Physics states that “two objects cannot occupy the same space simultaneously”. Can two languages occupy the same space simultaneously? Do brain cells compete to keep their language? Do multiple languages increase or diminish our ability to learn? In an attempt to answer some of these questions, I will briefly discuss some of the cognitive theories related to bilingualism.

First, the Balance Theory promoted the idea that two languages had to exist in balance. If the careful balance between the first language (L1) and the second language (L2) shifted in favor of one language, the other would suffer. Two languages could not occupy the same space without taking space from the other: either the L1 would decrease due to an increase in L2, or vice versa. Cummins characterized this belief as the Separate Underlying Proficiency Model of Bilingualism (SUP).

On the other hand, the Common Underlying Proficiency Model of Bilingualism (CUP) is an alternative to the SUP model. In this model the external characteristics of L1 and L2 are different, but the internal underlying processes of comprehension are the same. While speech, grammar, and writing may be completely different in L1 and L2, the processes by which we understand and internalize concepts lie in the same area. Baker (1996) states that the CUP model operates under the following six tenets:
1. There is one integrated source of thought regardless of language.
2. People have the capacity to store and function in two or more languages with ease.
3. Information processing and educational skills may be developed through two languages with the same success as through one language.
4. The language of instruction must be sufficiently well developed for the student to manage the challenges presented in the classroom.
5. Learning language skills in either L1 or L2 helps the whole cognitive system grow, as long as both are sufficiently developed.
6. Academic and cognitive performance depend on both languages functioning at full capability.

In addition to the SUP and CUP models, the Thresholds Theory, proposed by Cummins, Toukomaa, and Skutnabb-Kangas, attempts to explain the relationship between cognition and bilingualism. There are two thresholds in this theory; each is a level of bilingual competence with its own set of negative or positive consequences. Visualize a three-story house with ladders representing L1 and L2 on either side. Each floor of the house represents a different competence of bilingualism with its positive or negative consequences. The ceilings represent the two thresholds to surpass in the theory. Once students break through the second threshold and reach the third story, they can easily compete with peers in both L1 and L2. Curriculum may be taught in either language and the student will be able to grasp the concepts easily. These fully bilingual students usually surpass monolingual students in their cognitive development.

Have you noticed that some students speak L2 very well but fail to perform in the classroom? Cummins's Developmental Interdependence hypothesis explains this phenomenon. It states that a child’s ability in L2 depends on the competence achieved in L1. Cummins distinguished between the skills required to communicate in everyday life and those required to succeed in the academic arena. He called the ability to hold simple, everyday conversations basic interpersonal communicative skills (BICS).
He labeled the proficiency required to meet the academic demands of the classroom cognitive/academic language proficiency (CALP). BICS are highly contextual and rely on non-verbal cues. CALP relates to higher-order thinking skills and is neither context-embedded nor concrete. The student cannot rely on body language or contextual clues in order to perform at the CALP level. The student must be aware of the subtle nuances of language and must be able to discern between idioms, accents, and unusual usages of seemingly common words. Conversational proficiency usually precedes cognitive and academic proficiency. Students may be able to hold conversations easily in L1 and L2, fooling teachers into thinking that they are fully bilingual, yet not be able to perform adequately in the context-reduced, cognitively demanding academic environment. Thus our job as teachers is to foster the development of L2 at the CALP level.

The different theories presented suggest that in fact two, or even more, languages can occupy the same space. Not only can they reside within the brain, but the languages can feed off each other to help the multilingual person reach a deeper meaning and understanding of the world.

Baker, C. (1996). Cognitive theories of bilingualism and the curriculum. In Foundations of Bilingual Education and Bilingualism (pp. 145-161). Clevedon: Multilingual Matters.
According to Nanowerk, physicists at ETH Zurich have developed the smallest electrically pumped laser in the world, one that could revolutionize chip technology ("Microcavity Laser Oscillating in a Circuit-Based Resonator"). The laser is 30 micrometers long and eight micrometers high, and it emits at a wavelength of 200 micrometers. This makes the laser considerably smaller than the wavelength of the light it emits – difficult, as lasers normally can’t be smaller than their wavelength! So instead of using a resonant cavity, the researchers used an electrical resonant circuit made up of an inductor and two capacitors, in which the light is effectively “captured” and induced into self-sustaining electromagnetic oscillations on the spot using an optical amplifier. This means the size of the resonator is no longer limited by the wavelength of the light and can in principle be scaled down to any size. This makes the microlaser very interesting for chip manufacturers as an optical alternative to the transistor. “If we manage to approximate the transistors in terms of size using the microlasers, one day they could be used to build electro-optic chips with an extremely high concentration of electronic and optic components”, says researcher Christoph Walther.
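To put the scale argument in numbers (a back-of-the-envelope sketch, not taken from the Nanowerk report; the component values below are illustrative assumptions, not the published device parameters): an ideal LC circuit resonates at f = 1/(2π√(LC)), a frequency set by the inductance and capacitance rather than by any cavity length. The short Python calculation below picks an LC product that lands on the reported 200-micrometer emission wavelength (about 1.5 THz):

```python
import math

C_LIGHT = 3.0e8  # speed of light, m/s

def resonant_frequency(L, C):
    """Resonant frequency of an ideal LC circuit: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

def free_space_wavelength(f):
    """Free-space wavelength of radiation at frequency f."""
    return C_LIGHT / f

# Target: the reported emission wavelength of 200 micrometers (~1.5 THz).
target_wavelength = 200e-6                      # meters
target_f = C_LIGHT / target_wavelength          # ~1.5e12 Hz
required_LC = 1.0 / (2.0 * math.pi * target_f) ** 2

# Illustrative (not reported) component values hitting that LC product:
L = 1.0e-13                                     # henries (0.1 pH), assumed
C = required_LC / L                             # farads (~0.11 pF)

f = resonant_frequency(L, C)
print(f"f = {f / 1e12:.2f} THz, wavelength = {free_space_wavelength(f) * 1e6:.0f} um")
```

Because f depends only on the product LC, shrinking the inductor and capacitors leaves the resonance unchanged so long as LC is preserved, which is the sense in which the resonator "can in principle be scaled down to any size."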
Emergent Literacy Design

Rationale: This lesson will help students learn about the long e sound. The lesson will provide visual cues, which will help them remember the /E/ sound.

Materials: Primary paper, pencils, a chart with the tongue twister "Eager eazy eats easy ewalabee," and the book Lee and the Team.

1) Introduce the lesson by saying that sounds are really hard to learn, but we are going to make them easier by giving the letter visual cues to help us remember. Today we will begin by talking about eager e.
2) Ask the students: did you ever hear someone say they were eager to go and play outside? Well, let's pretend that we are someone who is eager to do something. Every time you feel your mouth move the same way it does when you say eager, we are going to act like we are eager. We will do this by waving our hands over our heads very eagerly.
3) Let's try to find all the /E/ sounds in this sentence: Eager eazy eats easy ewalabee. Let's practice saying the tongue twister a few times first so that when the time comes we can really listen for the /E/ sound. Do you hear all those /E/ sounds? Let's drag out the /E/ sound when we hear it.
4) Let's practice writing our E's again. Model how we say the /E/ sound and then slowly show the students how to write an E. Provide descriptions as you are forming the E that will help them remember how to write the letter that makes the /E/ sound. After we have all written down our E's, we will write them five more times for extra practice.
5) Do you hear the /E/ sound in leave or stay? birch or tree? easy or hard? Pass out match games that have two decks. One deck has pictures of /E/ words; the match of each picture will be the word. The children will play the game by matching the pictures and the words. There should be enough for each individual child.
6) Read Lee and the Team to the class. Read the book again and, every time they hear the /E/ sound, have the children draw a picture of a bee and write a message about the bee.
7) Make a copy of their drawings and give it to them to help them remember the /E/ sound. Have the students find all the /E/ sounds in their stories. Post their stories up on the wall.
8) Assessment: To make sure the students know how to write the letter for the /E/ sound, have them come to the desk one at a time and show you how they write their E.

Lee and the Team by Educational
An estimated 29 species of amphibians, from the orders Caudata (salamanders) and Anura (frogs and toads), exist within the borders of Buffalo National River. Efforts to locate and document all of the amphibian species, and to discover new species, are currently underway. Since amphibians, with some exceptions, require a watery environment to reproduce, most of these species can be found in and near the river's edge. Some ephemeral aquatic habitats may be found high on the mountaintops, creating a unique environment resembling the lowland habitats found in other ecoregions of Arkansas. These habitats exist only in spring, so species which rely on them for reproduction must act very quickly. Toads do not require hydrated skin to supplement respiration, so they tend to range further from the mesic conditions required by their close relatives, the frogs.
Table of Contents

Preface.
Acknowledgements.

PART ONE: FOUNDATIONS OF COMMUNICATION
Introduction.
1. The World of Communication.
2. Perception and Communication.
3. Communication and Personal Identity.
4. Communication and Cultures.
5. The Verbal Dimension of Communication.
6. The Nonverbal Dimension of Communication.
7. Effective Listening.

PART TWO: CONTEXTS OF COMMUNICATION
8. Foundations of Interpersonal Communication.
9. Communication in Personal Relationships.
10. Foundations of Group and Team Communication.
11. Effective Communication in Task Groups.
12. Interviewing.
13. Planning Public Speaking.
14. Researching and Developing Support for Public Speeches.
15. Organizing and Presenting Public Speeches.
16. Analyzing Public Speeches.

CLOSING: Pulling Ideas Together.
APPENDIX: Annotated Sample Speeches.
GLOSSARY. REFERENCES. INDEX.
SCHOOL CORPORAL PUNISHMENT ALTERNATIVES

The best way of dealing with school misbehavior is by preventing it. Schools with good discipline not only correct misbehavior but also teach appropriate behavior and coping skills. Prevention strategies include:
- Establishing clear behavior expectations and guidelines.
- Focusing on student success and self-esteem.
- Seeking student input on discipline rules.
- Using a "systems approach" for prevention, intervention and resolution, and developing levels of incremental consequences.
- Enforcing rules with consistency, fairness, and calmness.
- Planning lessons that provide realistic opportunities for success for all students.
- Monitoring the classroom environment continuously to prevent off-task behavior and student disruptions, and providing help to students who are having difficulty and supplemental tasks to students who finish work early.

There are a number of programs that have proven effective:

Social Skills Instruction
There are many commercially available programs that teach social skills. These programs help students learn how to make good choices and teach them the social skills they need to behave appropriately, such as listening, asking questions politely, cooperation and sharing. Social skills are described in behavioral terms. The skills are modeled and practiced. Students are provided reinforcement and feedback and are taught self-monitoring skills.

Character Education Program
The curriculum includes teaching children to think about how their actions affect others, how to manage anger, and how to make good choices. Example: Community of Caring Program (Joseph P. Kennedy, Jr. Foundation, 1994).

Student Recognition Program
Commonly held values are taught and recognized, including pride, respect, responsibility, caring and honesty. An awards assembly is held periodically to honor students who demonstrate these values, and an attempt is made to make sure all students are honored sometime during the year.

Peer Mediation Program
Students are given specific instruction in active listening, restating problem situations from their own and disputants' perspectives, anger management, identifying feelings, brainstorming and developing solutions to problems. Peer mediators are trained to help disputants solve problems that might otherwise escalate into conflict and result in punitive actions against the disputants.

Internet Resource: OSEP Technical Assistance Center on Positive Behavioral Interventions and Supports (PBIS)
This program gives schools assistance in identifying, adapting and sustaining effective schoolwide disciplinary practices. http://www.pbis.org

Second Step Violence Prevention Program
"The award-winning SECOND STEP violence prevention program integrates academics with social and emotional learning. Kids from preschool through Grade 8 learn and practice vital social skills, such as empathy, emotion management, problem solving, and cooperation. These essential life skills help students in the classroom, on the playground, and at home. The SECOND STEP program is research-based and approved for funding on many federal agency lists. It has been shown to reduce discipline referrals, improve school climate by building feelings of inclusiveness and respect, and increase the sense of confidence and responsibility in students.
The program includes teacher-friendly lessons, training for educators, and parent-education tools." http://www.cfchildren.org/

FAST Track Program
"FAST Track is a comprehensive and long-term prevention program that aims to prevent chronic and severe conduct problems for high-risk children. It is based on the view that antisocial behavior stems from the interaction of multiple influences, and it includes the school, the home, and the individual in its intervention. FAST Track's main goals are to increase communication and bonds between these three domains, enhance children's social, cognitive, and problem-solving skills, improve peer relationships, and ultimately decrease disruptive behavior in the home and school. FAST Track is an intervention that can be implemented in rural and urban areas for boys and girls of varying ethnicity, social class, and family composition (i.e., the primary intervention is designed for all youth in a school setting). It specifically targets children identified in kindergarten for disruptive behavior and poor peer relations." http://www.fasttrackproject.org/

Other alternatives and punishments

Restorative Justice Conferences
This is part of a process developed by the Colorado School Mediation Project that helps students learn to be accountable for their actions. These conferences typically bring together the offender, the persons offended, the parents, and school representatives, who have an opportunity to tell the offender how they were affected and what they need in order to move on. The object is for the offender to act to correct the situation: restore relationships, apologize, pay back, clean up, do community service, and so on.

Other alternatives include discipline codes that are fair and consistently enforced; an emphasis on students' positive behaviors; the use of school psychologists and school counselors; and the use of community mental health professionals and agencies. Schools also rely on in-school and out-of-school suspension programs, expulsion, Saturday Schools, restitution, detention, and parent pick-up programs.

Resources for school discipline and classroom management information:
National Association of School Psychologists, "Fair and Effective Discipline for All Students: Best Practice Strategies for Educators."
Regional Educational Laboratory, School Improvement Research Series (Research You Can Use), "Schoolwide and Classroom Discipline" by Kathleen Cotton.
Creative tools for peaceful conflict resolution, teachers' resources, books for peace, a traveling children's museum, and more.
Conflict Research Consortium: a multidisciplinary program of research, teaching, and application focused on finding more constructive ways of addressing difficult, long-term, and intractable conflicts, and on getting that information to the people involved in these conflicts so that they can approach them more constructively.
Online Journal of Peace and Conflict Resolution: intended as a resource for students, teachers, and practitioners in fields relating to the reduction and elimination of destructive conflict. It aims to be a free yet valuable source of information for anyone working toward a less violent and more cooperative world.
Sustainable Schoolwide Social and Emotional Learning (SEL): a social classroom management program and curriculum based on current brain research, child development information, and developmentally appropriate practices.
Art History sites offer teachers, students, and the general public unique ways of thinking and learning about art and its development through the ages. These sites include animations, photographs, maps, timelines, and interactive games and activities. Included: a dozen great sites for exploring art history.

From prehistoric drawings on cave walls to modern artistic eras, people have always expressed their response to life through art. Art History sites offer visitors the opportunity to explore history by studying the development of art. Many of the featured sites include artifacts from museums around the world, educational resources for the classroom, interactive activities, multimedia presentations, and virtual tours that will help enhance student learning and deepen student understanding of both art and history.

The Cave of Lascaux
The Cave of Lascaux was closed to the public in 1963, but this Web site offers a virtual tour of the Paleolithic wall paintings and major features of the cave. Visitors can learn how perspective was created on the rock surface, the techniques used to interpret the paintings and engravings, the materials used by artists of the time, how to date a cave painting, and more.

Leonardo Da Vinci
This interactive Web site explores Leonardo Da Vinci: his scientific projects and inventions, his artwork, and the time in which he lived. With lessons in science, art, history, and language arts, students can experience the thread of creativity that makes learning and exploring all subjects fun and exciting. The site includes unique lesson plans, as well as areas that encourage students to participate online.

Impressionism: Paintings Collected by European Museums
Impressionism: Paintings Collected by European Museums explores the major themes of the technique and offers interdisciplinary lesson plans and classroom resources geared to grades 1-8. The guided tour leads visitors on a fun trip through France to learn about the impressionists and their work, and to experience impressionism through beautiful photographs.

National Gallery of Art Kid's Page
The site is filled with beautiful paintings, animations, sounds, and virtual reality pages that give students the opportunity to experience art and some of the stories behind it in a unique format. Users can view the individual paintings, as well as interact with them, enhancing understanding and appreciation of the art.

SmARTkids
SmARTkids offers kids and adults new ways of thinking and learning about art. The four main areas of the site each give a different way to think and learn about art. Included are hands-on activities, curriculum integration ideas, tips for parents using the site with their children, and an artwork-of-the-month page featuring art from the museum's collection, focusing questions, and a related activity.
The growth of our civilization is changing the ocean in ways that are deadly for corals. If we don't act soon, it may be too late. We live a big life on a small planet. The human population has grown from 5 to over 7 billion in one generation, and consumption has escalated too. Building homes, factories, and roads often leads to a better quality of life, but it has also increased the pollution dumped into the air and water. Greenhouse gases like carbon dioxide have raised the global temperature. Our collective impact is so large it is even dramatically changing the chemistry of the ocean. Activities such as burning fossil fuels release carbon dioxide, or CO2, into the air, but a large part is absorbed by the ocean. CO2 in the water causes the ocean to become more acidic, and a more acidic ocean means it's harder for creatures -- like reef-building corals and shellfish -- to create and maintain their calcium carbonate skeletons and shells. And that weakens them. Locally, trash, boats anchoring on the reef, destructive fishing practices, and the wrong sunscreen worn by swimmers can also sicken corals. These local issues compound the global threat of a warming and acidifying ocean. The world's largest reef, the Great Barrier Reef of Australia, can be seen from space. It is home to more than fifteen hundred species of fish and estimated to be five hundred thousand years old -- one of the wonders of the natural world. But in 2016, most of the Great Barrier Reef's corals bleached, and almost 25% died from an astonishingly long period of intensely high water temperatures driven by global warming. None of the central and northern sections of the reef were spared from the heat stress that causes bleaching. The northern third was hit the hardest, and two thirds of its corals died. And tragically, mass die-offs are again happening on the Reef for an unprecedented second year in a row in 2017. Even worse, it was not an isolated incident. All of the world's corals are in danger. We have lost, already, approximately 50 percent of the world's reefs. This has occurred, really, within the last three decades. So fast, when we're talking about an organism that has been on the planet, and been in symbiosis with its microalgae, for over 50 million years. And the incidence of bleaching is increasing in frequency and intensity as the planet warms. It's not every year that water temperatures stay high enough, long enough, for corals to bleach. In fact, extreme high temperatures used to come in cycles, often driven by climate events like El Niño. Due to climate change, the higher temperatures are becoming more frequent, more intense, and more widespread. The underwater heatwaves that have devastated the corals on the Great Barrier Reef are part of the third global coral bleaching event. This one has now lasted 3 years - by far the longest, most widespread, and most damaging yet - and it continues even now. It's feared that corals could now face bleaching so often that whole reefs and even species will be lost forever. NOAA's Coral Reef Watch program, and others, use satellites to monitor the heat stress that can cause coral bleaching and uses climate models to predict where bleaching may occur on the world's tropical coral reefs in the next few months. But what can we do to reduce the risks? Can corals make a comeback? Watch our next video in this series to find out.
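As background to the acidification mechanism described above, here is the standard seawater carbonate chemistry; this summary is an added aside, not part of the original video narration:

\[ \mathrm{CO_2 + H_2O \rightleftharpoons H_2CO_3 \rightleftharpoons H^+ + HCO_3^-} \]
\[ \mathrm{H^+ + CO_3^{2-} \rightleftharpoons HCO_3^-} \]
\[ \mathrm{Ca^{2+} + CO_3^{2-} \rightleftharpoons CaCO_3} \]

Dissolved CO2 forms carbonic acid, which releases hydrogen ions (the acidity). Those extra hydrogen ions bind free carbonate ions, leaving fewer available for the calcium carbonate (CaCO3) that corals and shellfish need to build their skeletons and shells.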
What is that ringing in my ears?

Tinnitus is abnormal noise perceived in one or both ears or in the head. Tinnitus (pronounced either "TIN-uh-tus" or "tin-NY-tus") may be intermittent, or it might appear as a constant or continuous sound. It can be experienced as a ringing, hissing, whistling, buzzing, or clicking sound and can vary in pitch from a low roar to a high squeal.

Tinnitus is very common. Most studies indicate the prevalence in adults as falling within the range of 10% to 15%, with a greater prevalence at higher ages, through the sixth or seventh decade of life.1 Gender distinctions are not consistently reported across studies, but tinnitus prevalence is significantly higher in pregnant than non-pregnant women.2

The most common form of tinnitus is subjective tinnitus, which is noise that other people cannot hear. Objective tinnitus can be heard by an examiner positioned close to the ear. This is a rare form of tinnitus, occurring in less than 1% of cases.3

Chronic tinnitus can be annoying, intrusive, and in some cases devastating to a person's life. Up to 25% of those with chronic tinnitus find it severe enough to seek treatment.4 It can interfere with a person's ability to hear, work, and perform daily activities. One study showed that 33% of persons being treated for tinnitus reported that it disrupted their sleep, with a greater degree of disruption directly related to the perceived loudness or severity of the tinnitus.5,6

Causes and related factors

Most tinnitus is associated with damage to the auditory (hearing) system, although it can also be associated with other events or factors: jaw, head, or neck injury; exposure to certain drugs; nerve damage; or vascular (blood-flow) problems. With severe tinnitus in adults, coexisting factors may include hearing loss, dizziness, head injury, sinus and middle-ear infections, or mastoiditis (infection of the spaces within the mastoid bone). Significant factors associated with mild tinnitus may include meningitis (inflammation of the membranous covering of the brain and spinal cord), dizziness, migraine, hearing loss, or age.7 Forty percent of tinnitus patients have decreased sound tolerance, identified as the sum of hyperacusis (perception of over-amplification of environmental sounds) and misophonia/phonophobia (dislike/fear of environmental sounds).8 While most cases of tinnitus are associated with some form of hearing impairment, up to 18% of cases do not involve reports of abnormal hearing.9

Hearing loss from exposure to loud noise: Acute hearing depends on the microscopic endings of the hearing nerve in the inner ear. Exposure to loud noise can injure these nerve endings and result in hearing loss. Hearing damage from noise exposure is considered to be the leading cause of tinnitus.

Presbycusis: Tinnitus can also be related to the general impairment of the hearing nerve that occurs with aging, known as presbycusis. Age-related degeneration of the inner ear occurs in 30% of persons age 65–74, and in 50% of persons 75 years or older.10

Middle-ear problems: Tinnitus is reported in 65% of persons who have preoperative otosclerosis (stiffening of the middle-ear bones),11 with the tinnitus sound typically occurring as a high-pitched tone or white noise rather than as a low tone.12 Otitis media (middle-ear infection) can be accompanied by tinnitus, which usually disappears when the infection is treated.
If repeated infections cause a cholesteatoma (benign mass of skin cells in the middle ear behind the eardrum), hearing loss, tinnitus, and other symptoms can result.13 Objective tinnitus has been associated with myoclonus (contraction or twitching) of the small muscles in the middle ear.14,15 Conductive hearing loss resulting from an accumulation of earwax in the ear canal can sometimes cause tinnitus.

Vestibular disorders: Hearing impairment and related tinnitus often accompany dysfunction of the balance organs (vestibular system). Some vestibular disorders associated with tinnitus include Ménière's disease and secondary endolymphatic hydrops (resulting from abnormal amounts of a fluid called endolymph collecting in the inner ear) and perilymph fistula (a tear or defect in one or both of the thin membranes between the middle and inner ear).

Vestibulo-cochlear nerve damage and central auditory system changes

The vestibulo-cochlear nerve, or eighth cranial nerve, carries signals from the inner ear to the brain. Tinnitus can result from damage to this nerve. Such damage can be caused by an acoustic neuroma, also known as a vestibular schwannoma (benign tumor on the vestibular portion of the nerve), vestibular neuritis (viral infection of the nerve), or microvascular compression syndrome (irritation of the nerve by a blood vessel). The perception of chronic tinnitus has also been associated with hyperactivity in the central auditory system, especially in the auditory cortex.16 In such cases, the tinnitus is thought to be triggered by damage to the cochlea (the peripheral hearing structure) or the vestibulo-cochlear nerve.

Head and neck trauma

Compared with tinnitus from other causes, tinnitus due to head or neck trauma tends to be perceived as louder and more severe. It is accompanied by more frequent headaches, greater difficulties with concentration and memory, and a greater likelihood of depression.17 Somatic tinnitus is the term used when the tinnitus is associated with head, neck, or dental injury—such as misalignment of the jaw or temporomandibular joint (TMJ)—and occurs in the absence of hearing loss. Characteristics of somatic tinnitus include intermittency, large fluctuations in loudness, and variation in the perceived location and pattern of its occurrence throughout the day.18

Many drugs can cause or increase tinnitus. These include certain non-steroidal anti-inflammatory drugs (NSAIDs, such as Motrin, Advil, and Aleve), certain antibiotics (such as gentamicin and vancomycin), loop diuretics (such as Lasix), aspirin and other salicylates, quinine-containing drugs, and chemotherapy medications (such as carboplatin and cisplatin). Depending on the medication dosage, the tinnitus can be temporary or permanent.3

Pulsatile tinnitus is a rhythmic pulsing sound that sometimes occurs in time with the heartbeat. This is typically a result of noise from blood vessels close to the inner ear. Pulsatile tinnitus is usually not serious. However, sometimes it is associated with serious conditions such as high or low blood pressure, hardening of the arteries (arteriosclerosis), anemia, vascular tumor, or aneurysm.

Other possible causes

Other conditions have been linked to tinnitus: high stress levels, the onset of a sinus infection or cold, autoimmune disorders (such as rheumatoid arthritis or lupus), hormonal changes, diabetes, fibromyalgia, Lyme disease, allergies, depletion of cerebrospinal fluid, vitamin deficiency, and exposure to lead.
In addition, excessive amounts of alcohol or caffeine exacerbate tinnitus in some people. Examination by a primary care physician will help rule out certain sources of tinnitus, such as blood pressure or medication problems. This doctor can also, if necessary, provide a referral to an ear, nose, and throat specialist (an otolaryngologist, otologist, or neurotologist), who will examine the ears and hearing, in consultation with an audiologist. Their evaluations might involve extensive testing that can include an audiogram (to measure hearing), a tympanogram (to measure the stiffness of the eardrum and help detect the presence of fluid in the middle ear), otoacoustic emissions testing (to provide information about how the hair cells of the cochlea are working), an auditory brainstem response test (to measure how hearing signals travel from the ear to the brain and then within parts of the brain), electrocochleography (to measure how sound signals move from the ear along the beginning of the hearing nerve), vestibular-evoked myogenic potentials (to test the functioning of the saccule and/or inferior vestibular nerve), blood tests, and magnetic resonance imaging (MRI). Neuropsychological testing is also sometimes included to screen for the presence of anxiety, depression, or obsessiveness—which are understandable and not uncommon effects when tinnitus has disrupted a person’s life. If a specific cause of the tinnitus is identified, treatment may be available to relieve it. For example, if TMJ dysfunction is the cause, a dentist may be able to relieve symptoms by realigning the jaw or adjusting the bite with dental work. If an infection is the cause, successful treatment of the infection may reduce or eliminate the tinnitus. Many cases of tinnitus have no identifiable cause, however, and thus are more difficult to treat. Although a person’s tolerance of tinnitus tends to increase with time,19 severe cases can be disturbing for many years. In such chronic cases, a variety of treatment approaches are available, including medication, dietary adjustments, counseling, and devices that help mask the sound or desensitize a person to it. Not every treatment works for every person. A masking device emits sound that obscures, though does not eliminate, the tinnitus noise. The usefulness of maskers is based on the observation that tinnitus is usually more bothersome in quiet surroundings20 and that a competing sound at a constant low level, such as a ticking clock, whirring fan, ocean surf, radio static, or white noise produced by a commercially available masker, may disguise or reduce the sound of tinnitus, thus making it less noticeable. Some tinnitus sufferers report that they sleep better when they use a masker. In some users, maskers produce residual inhibition—tinnitus suppression that lasts for a short while after the masker has been turned off. Hearing aids are sometimes used as maskers. If hearing loss is involved, properly fitted hearing aids can improve hearing and may reduce tinnitus temporarily. However, tinnitus can actually worsen if the hearing aid is set at an excessively loud level. Cochlear implants, used for persons who are profoundly deaf or severely hard-of-hearing, have been shown to suppress tinnitus in up to 92% of patients.21,22 This is likely a result of masking due to newly perceived ambient sounds or from electrical stimulation of the auditory nerve. Other devices under development may eventually prove effective in relieving tinnitus. 
For example, the recently introduced acoustics-based Neuromonics device involves working with an audiologist who matches the frequency spectrum of the perceived tinnitus sound to music that overlaps this spectrum. This technique aims to stimulate a wide range of auditory pathways, the limbic system (a network of structures in the brain involved in memory and emotions), and the autonomic nervous system such that a person is desensitized to the tinnitus. Assessing the true effectiveness of this device will require further scientific study, although observations from an initial stage of clinical trials indicate that the device can reduce the severity of symptoms and improve quality of life.23

Tinnitus retraining therapy

Tinnitus retraining therapy (TRT) is designed to help a person retrain the brain to avoid thinking about the tinnitus. It employs a combination of counseling and a non-masking sound that decreases the contrast between the sound of the tinnitus and the surrounding environment.24 The goal is not to eliminate the perception of the tinnitus sound itself, but to retrain a person's conditioned negative response (annoyance, fear) to it. In one comparison of the effectiveness of tinnitus masking and TRT as treatments, masking was found to provide the greatest benefit in the short term (three to six months), while TRT provided the greatest improvement with continued treatment over time (12–18 months).25

Chronic tinnitus can disrupt concentration, sleep patterns, and participation in social activities, leading to depression and anxiety. In addition, tinnitus tends to be more persistent and distressful if a person obsesses about it. Consulting with a psychologist or psychiatrist can be useful when the emotional reaction to the perception of tinnitus becomes as troublesome as the tinnitus itself19 and when help is needed in identifying and altering negative behaviors and thought patterns.

No drug is available to cure tinnitus; however, some drugs have been shown to be effective in treating its psychological effects. These include anti-anxiety medications in the benzodiazepine family, such as clonazepam (Klonopin) or lorazepam (Ativan); antidepressants in the tricyclic family, such as amitriptyline (Elavil) and nortriptyline (Aventyl, Nortrilen, Pamelor); and some selective serotonin reuptake inhibitors (SSRIs), such as fluoxetine (Prozac).26,27,28,29

Other drugs have been anecdotally associated with relief of tinnitus. These include certain heart medications, anesthetics, antihistamines, statins, vitamin or mineral supplements, vasodilators, anticonvulsants, and various homeopathic or herbal preparations. Scientific evidence is lacking to support the effectiveness of many of these remedies.27,30,31 Some appear to be placebos, while some are possibly mildly or temporarily effective but with potential side effects that are serious. Examples of recent research studies on some of these anecdotal treatments follow, although this list is not exhaustive:
- In assessing the effectiveness of atorvastatin (Lipitor) in the treatment of tinnitus, scientists observed a trend toward relief of symptoms; however, this trend was not statistically significant when compared with results produced by administration of a placebo.32
- The relationship between low blood zinc levels and subjective tinnitus was examined in a small placebo-controlled study.
Administration of oral zinc medication produced results that prompted the researchers to note that additional tests were needed to investigate whether duration of treatment might be a significant factor.33
- Immediate suppression of subjective tinnitus has been observed in patients administered intravenous lidocaine,34 although such relief has been shown to be very short term.35 The effect of such tinnitus treatment is thought to occur in the central auditory pathway rather than in the cochlea.36
- Scientists demonstrated that the anticonvulsant gabapentin (Neurontin) is no more effective than placebo in the treatment of tinnitus.37,38
- When scientists reported their finding that Ginkgo biloba extracts and placebo treatments produce very similar results, they also noted that use of the extract could lead to adverse side effects, especially if used unsupervised and with other medications.39,40

Some alternative approaches may eventually yield helpful options in tinnitus treatment. However, most scientists agree that additional well-constructed research is needed before any anecdotally associated preparation can be applied as a proven and effective treatment option.

Surgical treatment of tinnitus is generally limited to cases in which the source of the tinnitus is identified (such as acoustic neuroma, perilymph fistula, or otosclerosis) and surgery is required to treat that condition; relief of the tinnitus is then a possible secondary outcome.41

Other proposed treatments

Stress-reduction techniques are often advocated for improving general health, as they can help control muscle groups and improve circulation throughout the body. Such relaxation training, the use of biofeedback to augment relaxation exercises, and hypnosis have been suggested as treatments for tinnitus. Limited research is available on the effectiveness of these methods. Acupuncture, electrical stimulation, application of magnets, electromagnetic stimulation, and ultrasound have been found to be placebo treatments for tinnitus or to have limited scientific support for their effectiveness.27,30,42,43 Recent and ongoing research studies have attempted to assess whether transcranial magnetic stimulation could be an effective tinnitus treatment. This application is based on the thought that tinnitus is associated with an irregular activation of the temporoparietal cortex (a part of the brain), and thus that disturbing this irregular activation could result in transient reduction of tinnitus.44,45,46

Precautionary measures to help lessen the severity of tinnitus or help a person cope with it are related to some of the causes and treatments listed above. Avoiding exposure to loud sounds (especially work-related noise) and getting prompt treatment for ear infections have been identified as the two most important interventions for reducing the risk of tinnitus.47 Wearing ear protection against loud noise at work or at home and avoiding listening to music at high volume can both help reduce risk.48 Other important factors are exercising daily, getting adequate rest, and having blood pressure monitored and controlled, if needed. Additional precautionary measures include limiting salt intake, avoiding stimulants such as caffeine and nicotine, and avoiding ototoxic drugs known to increase tinnitus (some of which are listed above under "Causes and related factors").

Tinnitus is a common condition that can disrupt a person's life.
Our understanding of the mechanisms of tinnitus is incomplete, and many unknown factors remain. These limitations contribute to the lack of medical consensus about tinnitus management, stimulate continued research efforts, and motivate anecdotal and commercially based speculation about potential but unproven treatments. Prior to receiving any treatment for tinnitus or head noise, it is important for a person to have a thorough examination that includes an evaluation by a physician. Understanding the tinnitus and its possible causes is an essential part of its treatment.
- Henry JA, Dennis KC, Schechter MA. General review of tinnitus: prevalence, mechanisms, effects, and management. Journal of Speech, Language, and Hearing Research 2005;48(5):1204–1235.
- Gurr P, Owen G, Reid A, Canter R. Tinnitus in pregnancy. Clinical Otolaryngology & Allied Sciences 1993;18(4):294–297.
- Folmer L, Hal Martin W, Shi Y. Tinnitus: Questions to reveal the cause, answers to provide relief. Journal of Family Practice 2004;53(7):532–540.
- Seidman MD, Jacobson GP. Update on tinnitus. Otolaryngologic Clinics of North America 1996;29:455–465.
- Folmer RL, Griest SE. Tinnitus and insomnia. American Journal of Otolaryngology 2000;21(5):287–293.
- Hiller W, Goebel G. Factors influencing tinnitus loudness and annoyance. Archives of Otolaryngology—Head & Neck Surgery 2006;132(12):1323–1330.
- Sindhusake D, Golding M, Wigney D, Newall P, Jakobsen K, Mitchell P. Factors predicting severity of tinnitus: a population-based assessment. Journal of the American Academy of Audiology 2004;15(4):269–280.
- Jastreboff PJ, Jastreboff MM. Chapter 22: Tinnitus and Hyperacusis. In: Snow JB, Ballenger JJ, eds. Ballenger's Otorhinolaryngology Head and Neck Surgery. 16th ed. Hamilton, Ontario: BC Decker; 2003:456–475.
- Stouffer JL, Tyler RS. Characterization of tinnitus by tinnitus patients. Journal of Speech and Hearing Disorders 1990;55(3):439–453.
- Blackwell DL, Collins JG, Coles R. Summary health statistics for U.S. adults: National Health Interview Survey, 1997. Vital Health Statistics 2002;10:1–109.
- Gristwood RE, Venables WN. Otosclerosis and chronic tinnitus. Annals of Otology, Rhinology, & Laryngology 2003;112(5):398–403.
- Sobrinho PG, Oliveira CA, Venosa AR. Long-term follow-up of tinnitus in patients with otosclerosis after stapes surgery. International Tinnitus Journal 2004;10(2):197–201.
- Falcioni M, et al. Pulsatile tinnitus as a rare presenting symptom of residual cholesteatoma. Journal of Laryngology & Otology 2004;118(2):165–166.
- Golz A, Fradis M, Martzu D, Netzer A, Joachims HZ. Stapedius muscle myoclonus. Annals of Otology, Rhinology, & Laryngology 2003;112(6):522–524.
- Howsam GD, Sharma A, Lambden SP, Fitzgerald J, Prinsley PR. Bilateral objective tinnitus secondary to congenital middle-ear myoclonus. Journal of Laryngology & Otology 2005;119(6):489–491.
- Lockwood AH, Salvi RJ, Coad ML, Towsley ML, Wack DS, Murphy BW. The functional neuroanatomy of tinnitus: evidence for limbic system links and neural plasticity. Neurology 1998;50(1):114–120.
- Folmer RL, Griest SE. Chronic tinnitus resulting from head or neck injuries. Laryngoscope 2003;113(5):821–827.
- Levine RA. Somatic Tinnitus. In: Snow JB, ed. Tinnitus: Theory and Management. Lewiston, NY: BC Decker; 2004:108–124.
- Andersson G, Vretblad P, Larsen H, Lyttkens L. Longitudinal follow-up of tinnitus complaints. Archives of Otolaryngology—Head & Neck Surgery 2001;(127):175–179.
- Tucker DA, Phillips SL, Ruth RA, Clayton WA, Royster E, Todd AD.
The effect of silence on tinnitus perception. Otolaryngology—Head & Neck Surgery 2005;132(1):20–24.
- Ruckenstein MJ, Hedgepeth C, Rafter KO, Montes ML, Bigelow DC. Tinnitus suppression in patients with cochlear implants. Otology and Neurotology 2001;22(2):200–204.
- Yonehara E, Mezzalira R, Porto PR, Bianchini WA, Calonga L, Curi SB, Stoler G. Can cochlear implants decrease tinnitus? International Tinnitus Journal 2006;12(2):172–174.
- Davis PB, Paki B, Hanley PJ. Neuromonics tinnitus treatment: third clinical trial. Ear and Hearing 2007;28(2):242–259.
- Jastreboff PJ, Gray WC, Gold SL. Neurophysiological approach to tinnitus patients. American Journal of Otology 1996;17(2):236–240.
- Henry JA, Schechter MA, Zaugg TL, Griest S, Jastreboff PJ, Vernon JA, Kaelin C, Meikle MB, Lyons KS, Stewart BJ. Clinical trial to compare tinnitus masking and tinnitus retraining therapy. Acta Oto-laryngologica Supplementum 2006;(556):64–69.
- Dobie RA, Sakai CS, Sullivan MD, Katon WJ, Russo J. Antidepressant treatment of tinnitus patients: report of a randomized clinical trial and clinical prediction of benefit. American Journal of Otology 1993;14(1):18–23.
- Dobie RA. A review of randomized clinical trials in tinnitus. Laryngoscope 1999;109(8):1202–1211.
- Ganança MM, Caovilla HH, Ganança FF, Ganança CF, Munhoz MS, da Silva ML, Serafini F. Clonazepam in the pharmacological treatment of vertigo and tinnitus. International Tinnitus Journal 2002;8(1):50–53.
- Folmer RL, Shi YB. SSRI use by tinnitus patients: interactions between depression and tinnitus severity. Ear, Nose, & Throat Journal 2004;83(2):107–108, 110, 112 passim.
- Dobie RA. Clinical trials and drug therapy for tinnitus. In: Snow JB, ed. Tinnitus: Theory and Management. Lewiston, NY: BC Decker; 2004:266–277.
- Seidman MD, Babu S. Alternative medications and other treatments for tinnitus: facts from fiction. Otolaryngologic Clinics of North America 2003;36(2):359–381.
- Olzowy B, Canis M, Hempel JM, Mazurek B, Suckfüll M. Effect of atorvastatin on progression of sensorineural hearing loss and tinnitus in the elderly: results of a prospective, randomized, double-blind clinical trial. Otology and Neurotology 2007;28(4):455–458.
- Arda HN, Tuncel U, Akdogan O, Ozluoglu LH. The role of zinc in the treatment of tinnitus. Otology and Neurotology 2003;24(1):86–89.
- Otsuka K, Pulec JL, Suzuki M. Assessment of intravenous lidocaine for the treatment of subjective tinnitus. Ear, Nose, & Throat Journal 2003;82(10):781–784.
- Kalcioglu MT, Bayindir T, Erdem T, Ozturan O. Objective evaluation of the effects of intravenous lidocaine on tinnitus. Hearing Research 2005;199(1–2):81–88.
- Baguley DM, Jones S, Wilkins I, Axon PR, Moffat DA. The inhibitory effect of intravenous lidocaine infusion on tinnitus after translabyrinthine removal of vestibular schwannoma: a double-blind, placebo-controlled, crossover study. Otology and Neurotology 2005;26(2):169–176.
- Piccirillo JF, Finnell J, Vlahiotis A, Chole RA, Spitznagel E Jr. Relief of idiopathic subjective tinnitus: is gabapentin effective? Archives of Otolaryngology—Head & Neck Surgery 2007;133(4):390–397.
- Witsell DL, Hannley MT, Stinnet S, Tucci DL. Treatment of tinnitus with gabapentin: a pilot study. Otology and Neurotology 2007;28(1):11–15.
- Drew A, Davies E. Effectiveness of Ginkgo biloba in treating tinnitus: double blind, placebo controlled trial. BMJ 2001;322(7278):73.
- Smith PF, Zheng Y, Darlington CL. Ginkgo biloba extracts for tinnitus: More hype than hope?
Journal of Ethnopharmacology 2005;22(1–2):95–99.
- House JW, Brackmann DE. Tinnitus: surgical treatment. Ciba Foundation Symposium 1981;85:204–216.
- Park J, White AR, Ernst E. Efficacy of acupuncture as a treatment for tinnitus: A systematic review. Archives of Otolaryngology—Head & Neck Surgery 2000;126(4):489–492.
- Ghossainni SN, Spitzer B, Mackins CC, Zschommler A, Diamond BE, Wazen JJ. High-frequency pulsed electromagnetic energy in tinnitus treatment. Laryngoscope 2004;114(3):495–500.
- Plewnia C, Bartels M, Gerloff C. Transient suppression of tinnitus by transcranial magnetic stimulation. Annals of Neurology 2003;53(2):263–266.
- Folmer RL, Carroll JR, Rahim A, Shi Y, Hal Martin W. Effects of repetitive transcranial magnetic stimulation (rTMS) on chronic tinnitus. Acta Oto-laryngologica Supplementum 2006;556:96–101.
- Smith JA, Mennemeier M, Bartel T, Chelette KC, Kimbrell T, Triggs W, Dornhoffer JL. Repetitive transcranial magnetic stimulation for tinnitus: a pilot study. Laryngoscope 2007;117(3):529–534.
- Sindhusake D, Golding M, Newall P, Rubin G, Jakobsen K, Mitchell P. Risk factors for tinnitus in a population of older adults: the Blue Mountains Hearing Study. Ear and Hearing 2003;24(6):501–507.
- Schmuziger N, Patscheke J, Probst R. Hearing in nonprofessional pop/rock musicians. Ear and Hearing 2006;27(4):321–330.
The Scientific Revolution

The Scientific Revolution was a period in history beginning in the late 1500s when scientific ideas began to be consciously put to use by European society. It is generally thought to have begun with a book, On the Revolutions of the Heavenly Spheres by Nicolaus Copernicus, published in 1543. This book was the first to postulate that the Earth was not the center of the Universe. It was such a striking change from past beliefs that it made many realize that not everything there was to know had yet been learned. This was made abundantly clear by discoveries in the New World, pioneered by Christopher Columbus, which showed that even on Earth there were vast unknowns.

Yet science and the improvement of machines had quietly been going on throughout the late Middle Ages. Great thinkers had devised new ways to look at scientific questions. William of Occam, for example, noted that the most likely explanation for a phenomenon is the simplest one. This rule we now call Occam's Razor. Advances had been made in agriculture and transportation (especially with the development of the caravel, the compass, and the astrolabe). Another major factor, and perhaps the true spark of the Scientific Revolution, may have been gunpowder.

The Gunpowder Revolution
The advent of gunpowder in Europe caused a revolution in warfare. First cannons were developed, and then hand-held weapons that, in effect, swept the aristocratic knight from the field of battle. This did not so much destroy the aristocracy of western Europe as force aristocrats to become part of the regular force of a more centralized power, taking away their independence. With the invention of the cannon, they could no longer shut themselves up in a castle to avoid the wrath of their king. On a larger scale, innovations in warfare often proved the decisive factor in victory or defeat and controlled the fate of vast territories. It thus became vital for monarchs to sponsor technical experimentation in weapons. Meanwhile, monarchs, such as Czar Peter the Great of Russia, and their advisors began to realize that advances in other areas could be used to help the state. They gave monopolies to people who created new products, and then taxed the proceeds. They rewarded inventors and scientists and focused science by setting goals. By the mid-1600s scientists and inventors were vying with each other to make discoveries and advance science. It was because of a prize awarded by the British Parliament to the first person to develop a means for determining longitude at sea that the chronometer was invented.

The Scientific Method
Sir Francis Bacon was the first to enunciate a method for making the technological innovations that were beginning to change European life. The ancient Greeks had felt that deduction was sufficient to access all important information. Bacon criticized this notion. He put forth the idea that valid information about a subject could only be obtained through scientific experimentation. Under Bacon's method, phenomena were observed and hypotheses made based on those observations. Tests would then be conducted to check the hypotheses. If the tests produced reproducible results, conclusions could be drawn.
These conclusions would spur additional questions, and the process would begin again. The scientific method began to be applied to all technical areas, from astronomy to farming. These advances generally made life easier and understanding broader.

Printing Press: The Spread of Knowledge
All of this scientific ferment was made possible by another technical innovation, the printing press. The moveable type press was invented in Europe by Johannes Gutenberg (1400-1468). In 1456 he produced the first European book from a press, the Bible. Though Gutenberg himself did not prove a great success, his printing press was. Soon it was copied all over Europe. Within 30 years an additional 350 presses were producing books, pamphlets, and broadsheets. With the printing press, knowledge, especially scientific knowledge, suddenly could be much more easily spread. When documents had to be copied one at a time by human hand, they were rare and expensive. The printing press made books relatively inexpensive. It could be compared to the advent of the internet, where today a vast field of knowledge is accessible by the average person from their own home. People do not have to visit a university library to access scientific information.

Scientific Societies and Universities
Universities had been around for a long time. The University of Bologna was founded in 1088. These institutions were vital in helping to develop curious minds. Kings also saw the value of encouraging scientists by creating scientific societies, where great minds could meet and discuss ideas, research, and new developments. These acted as think tanks that could develop useful ideas. The Royal Society in Great Britain, founded in 1662, is probably the most famous. Both Sir Isaac Newton (the father of modern physics and inventor of calculus) and Robert Boyle (the father of modern chemistry) were early members.

Probably the greatest figure of the Scientific Revolution was Sir Isaac Newton (1642-1727), an English professor at Cambridge and noted natural philosopher. A true Renaissance man, he investigated optics, discovered the laws of gravity, and invented calculus (simultaneously with Leibniz). He was both a scientist and a deeply religious man. He felt his investigations were a way to view and understand "the mind of God". His view of the universe informed and directed philosophers and scientists until the 20th century.

The Effects of the Scientific Revolution
The Scientific Revolution would make Europeans the most powerful peoples in the world. It made individuals much more productive by creating machines that could do the drudge work and utilize multiple sources of power, from wind and water to coal and steam. More people could be fed, clothed, and housed with less manpower. More wealth could be created in less time for more people. Innovations in military machines and tactics made Europeans a force to be reckoned with. New methods of trade and commerce made trade with other nations more advantageous, spreading even more knowledge. Perhaps the most important aspect of the Scientific Revolution was its self-perpetuating nature. Once it was truly underway, Luddite movements could hardly stand in its way. The answer to one scientific question spawned a dozen more. Scientists found that the rewards of scientific research were great on an individual, national, and worldwide level. The Scientific Revolution would spawn the Industrial Revolution.
The Scientific Revolution is often thought of as a period that occurred in the distant past, but in many ways we continue to be a part of it to this day.
Weld Defects - Cracks
Causes and Types of Cracking that may occur in Welds

Cracking in Welds
We know that several types of imperfections may occur in a weld or the heat-affected zone. Welds may contain porosity (see last month's HERA News), slag inclusions, lack of fusion or cracks. Cracks in a weld are almost certainly the most unwanted of all weld imperfections. Because of the various materials and applications used in welding today, cracking is a complex subject. The base material's crack sensitivity may be associated with its chemistry and its susceptibility to conditions that reduce its ductility. The welding operation itself can produce stresses in and around the weld, introducing extreme localized heating, expansion, and contraction that may also cause cracking.

Hot and Cold Cracks: Cracks can be classified as hot or cold. Hot cracks develop at elevated temperatures, propagate between the grains of a material, and commonly form during solidification of weld metal. Cold cracks develop after solidification of the weld, as a result of stresses, and propagate both between grains and through grains. Cold cracks in steel are sometimes called delayed cracks and are often associated with hydrogen embrittlement.

Base Material Cracks: Heat-affected zone (HAZ) cracking most often occurs with base material that has high hardenability. High hardness and low ductility in a HAZ are often the result of a metallurgical response to welding thermal cycles. In ferritic steels, hardness increases and ductility decreases with an increase in carbon content and a faster cooling rate. The HAZ hardness depends on the base material's ability to be hardened, which in turn depends on the base material's chemical composition. Carbon has a predominant effect on steel's hardenability, along with other elements. For instance, material with a carbon equivalent (CE) over 0.4 may suffer from base material cracking unless precautions are taken during welding, such as careful electrode choice and attention to cooling rates and residual stress (a short calculation sketch follows at the end of this article).

Heat-affected zone (HAZ) and underbead cracks (sometimes called toe cracks or delayed cracking) are generally cold cracks that form in the heat-affected zone of the parent material. The following need to be considered:
• hydrogen
• a microstructure of relatively low ductility
• high residual stress
• temperature below 200 deg C
Heat-affected zone and underbead cracks are normally longitudinal. They may be found in the weld toe area of the heat-affected zone or under the weld bead, where residual stresses are highest.

Transverse cracks are perpendicular to the direction of the weld. These are generally the result of longitudinal shrinkage stresses acting on weld metal of low ductility.

Crater cracks occur in the crater when the welding arc is terminated prematurely. Crater cracks are normally shallow hot cracks, usually forming single or star cracks. These cracks usually start at a crater pipe and extend longitudinally in the crater. However, they may propagate into longitudinal weld cracks in the rest of the weld.

Solidification cracks are longitudinal cracks in the weld face in the direction of the weld axis. They are generally hot cracks. These cracks are typically caused by excessive transverse stress, a depth-to-width ratio in excess of 2:1, or high sulphur and phosphorus content.

Toe cracks are generally cold cracks. They initiate and propagate from the weld toe, where shrinkage stresses are concentrated.
Toe cracks initiate approximately normal to the base metal surface. These cracks are generally the result of thermal shrinkage stresses acting on a weld heat-affected zone. Some toe cracks occur because the transverse tensile properties of the base metal cannot accommodate the shrinkage stresses that are imposed by welding.

Root cracks are longitudinal cracks at the weld root. They may be hot or cold forms of cracks.

Stress corrosion cracking in stainless steel is due to caustic or chloride contaminants. The cracking is predominantly inter-crystalline.

Reheat cracking is almost exclusively restricted to creep-resistant steels and must be considered a very serious form of cracking. Reheat cracking can be caused by the generation of excessive thermal stress during post-weld heat treatment, leading to the initiation of cracking from existing very small hot or cold cracks. This can be controlled by correct heating rates, control of temperature variations, and avoiding stress concentrations where possible. Another form of reheat cracking occurs at high temperatures in the material's creep range, where inter-crystalline cracking in the larger-grained heat-affected zone results from insufficient creep ductility. This occurs during post-weld heat treatment or during high-temperature service.

Miscellaneous cracks include other forms of cracking such as lamellar tearing. Generally, the following guidelines can be applied:
(d) Chevron cracking occurs in high-strength weld metals in ferritic steels only.
(e) Lamellar tearing is in principle possible in any material, but in practice is restricted to structural and pressure vessel ferritic steels.

Cracks are unacceptable defects and are detrimental to weld performance. A crack, by its nature, is sharp at its extremities, so it acts as a stress concentration. The stress concentration effect of a crack is greater than that of most other discontinuities. Cracks have a tendency to propagate, contributing to weld failure under stress. Regardless of their size, cracks (except for crater cracks in class GP welds of AS/NZS 1554.1) are not permitted in welds governed by most fabrication standards. They must be removed by grinding or gouging, and the excavation filled with sound weld metal. Successful welding procedures for the materials being joined include the controls that are necessary to overcome the tendency for crack formation. Such controls are preheating temperature, interpass temperature, consumable type and preparation, and post-weld heat treatment.
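To make the carbon equivalent threshold quoted earlier concrete, here is a minimal sketch (an illustration added to this article, not HERA's own material) of the widely used IIW carbon equivalent formula, CE = C + Mn/6 + (Cr + Mo + V)/5 + (Ni + Cu)/15, applied to a hypothetical steel composition:

def carbon_equivalent_iiw(c, mn, cr, mo, v, ni, cu):
    # IIW carbon equivalent for ferritic steels; all inputs in weight percent
    return c + mn / 6 + (cr + mo + v) / 5 + (ni + cu) / 15

# Hypothetical ladle analysis (weight %) for a structural steel
ce = carbon_equivalent_iiw(c=0.18, mn=1.40, cr=0.05, mo=0.02, v=0.00, ni=0.04, cu=0.08)
print(f"CE = {ce:.2f}")  # prints CE = 0.44

A result above 0.4, as the article notes, signals that welding precautions such as preheat, low-hydrogen consumables, and controlled cooling rates should be considered.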
4th Grade Skills, when mastered, open a new world of math to your child. This year is when your child moves from recognizing fractions to actually doing computations with them. The decimal point will no longer be a mysterious dot on the page, but will bring new meaning to the word "number"! Geometry will no longer be simply shapes on paper, but the knowledge behind everything they look at or touch. This is a very exciting time in your child's elementary math education.

Many teachers no longer issue homework to their classes. My question is, how is a student to reinforce what is being learned if home is taken out of the equation? You (if you are a parent) know the value of 'home' and practice - so congratulations to you! Use this page to access all the printable math worksheets you need to help keep your child on task and interested in their math.

This list of skills is based on the 4th grade level expectations from Washington State OSPI and is organized by basic math subject area. If you would rather have a basic math skills requirements list, organized by subject area, that includes every requirement from Kindergarten to 6th grade in one downloadable document, please visit my Grade Level Expectations section.

4th Grade Skills by Subject Area
- Solve two-step word problems
- Solve equations in the form of ____ x 9 = 63; 81 ÷ ___ = 9
- Solve problems with more than one operation, as in (72 ÷ 9) x (36 ÷ 4) = _____
- Properties: know that equals added to equals are equal; equals multiplied by equals are equal
- Use letters to stand for any number, as in working with a formula, eg area of a rectangle: A = L x W

Numbers and Operations
- Read and write numbers (in digits and words) up to nine digits
- Recognize place value up to hundred millions
- Order and compare numbers to 999,999,999 using the signs <, >, =
- Write numbers in expanded form
- Use a number line; locate positive and negative whole numbers on a number line
- Round to the nearest ten, hundred and thousand
- Identify perfect squares (and square roots) to 144; recognize the square root sign
- Identify Roman numerals from 1 to 1000 and identify years as written in Roman numerals
- Know the meanings of multiple, factor, prime number and composite number
- Recognize fractions to one-twelfth
- Identify numerator and denominator
- Write mixed numbers; change improper fractions to mixed numbers and vice versa
- Recognize equivalent fractions (eg ½ = 2/4)
- Put fractions in lowest terms
- Rename fractions with unlike denominators to fractions with a common denominator
- Compare fractions with like and unlike denominators using the signs <, >, =
- Solve problems in the form of 2/3 = ½ x
- Add and subtract fractions with like denominators
- Express simple outcomes as fractions (eg 3 out of 4 as ¾)
- Read and write decimals to the nearest thousandth
- Read and write decimals as fractions (eg 0.39 = 39/100)
- Write decimal equivalents for halves, quarters, and eighths
- Compare fractions to decimals using the signs <, >, =
- Write decimals in expanded form
- Round decimals to the nearest tenth, to the nearest hundredth
- Compare decimals, using the signs <, >, =
- Read and write decimals on a number line
- Add and subtract with decimal numbers to two places.
- Review and reinforce basic multiplication facts to 10 x 10
- Mentally multiply by 10, 100, 1000
- Identify multiples of a given number, common multiples of two numbers
- Multiply by two-digit and three-digit numbers
- Write numbers in expanded form using multiplication
- Estimate a product
- Use mental computation strategies for multiplication, such as breaking a problem into partial products, eg 3 x 27 = (3 x 20) + (3 x 7) = 60 + 21 = 81
- Check multiplication by changing the order of the factors
- Multiply three factors in any given order
- Solve word problems involving multiplication
- Understand multiplication and division as inverse operations
- Review the meaning of dividend, divisor and quotient
- Review and reinforce basic division facts to 100 ÷ 10
- Identify different ways of writing division problems: 28 ÷ 7; 7)28; 28/7
- Identify factors of a given number, common factors of two numbers
- Review: you cannot divide by 0; any number divided by 1 equals that number
- Estimate the quotient
- Divide dividends up to four digits by one-digit and two-digit divisors
- Solve division problems with remainders
- Check division by multiplying and adding the remainder (a short example follows this list)

Measurement
- Make linear measurements in yards, feet and inches (to 1/8th inch) and in meters, centimeters and millimeters
- Measure weight in pounds and ounces; grams and kilograms
- Measure liquid capacity in teaspoons, tablespoons, cups, pints, quarts, gallons and in milliliters and liters
- Know equivalences among US customary units of measurement and solve problems involving changing units of measurement
- Solve problems on elapsed time

Geometry
- Locate points on a coordinate plane (grid) using ordered pairs of positive whole numbers
- Identify and draw points, segments, rays and lines
- Identify and draw lines: horizontal; vertical; perpendicular; parallel; intersecting
- Identify angles as right, acute, or obtuse
- Identify polygons: triangle, quadrilateral, pentagon, hexagon and octagon (regular); parallelogram, trapezoid, rectangle, square
- Identify and draw diagonals of quadrilaterals
- Identify radius (plural: radii) and diameter; radius = ½ diameter
- Identify similar and congruent figures
- Know the formula for the area of a rectangle (Area = length x width) and solve problems involving finding area in a variety of square units (such as mi2; yd2; ft2; in2; km2; m2)
- Find the volume of rectangular prisms in cubic units (cm3, in3)
- Interpret bar graphs and line graphs

When your child has mastered their 3rd Grade Skills, it's time to move on and up to Fourth Grade Skills!
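For parents who enjoy a programmatic check, here is a tiny sketch (my own illustration, not part of the worksheet set) that mirrors the "check division by multiplying and adding the remainder" skill above:

# Divide 1,234 by 7, then verify the answer the way a 4th grader is taught
dividend, divisor = 1234, 7
quotient, remainder = divmod(dividend, divisor)   # 176 remainder 2
print(f"{dividend} divided by {divisor} = {quotient} remainder {remainder}")
assert divisor * quotient + remainder == dividend  # multiply back and add the remainder

The assert line is exactly the paper-and-pencil check: 7 x 176 = 1,232, plus the remainder 2, returns the original 1,234.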
Atomic clocks are the most accurate timekeeping devices in existence; they use the oscillation of atoms stimulated by an electromagnetic field as a frequency standard for keeping time. There are various types of atomic clock, the most common being caesium clocks and rubidium clocks, and they are most commonly found in research facilities. Atomic clocks are utilised as the primary standard for time distribution services worldwide. Most commercially available atomic clock time synchronisation systems use a radio or GPS time broadcast which is connected to a precise time reference. Here we discuss atomic clock applications in relation to time synchronisation of PCs and computer networks using NTP server systems, covering:
- GPS Atomic Clocks
- Radio Atomic Clocks
- MSF 60
- Time Synchronisation of PCs
- Time Synchronisation of Computer Networks

GPS Atomic Clocks
The Global Positioning System, more commonly known as GPS, began as a US military application with the purpose of attaining highly accurate positioning information for global navigation. Operating via 24 satellites that orbit the Earth, each individual satellite is equipped with a highly accurate atomic clock, synchronised to Coordinated Universal Time (UTC). The satellites supply a constant stream of time and positioning information which can be received anywhere in the world using a GPS antenna and receiver, demonstrating its global application. Additionally, there are no set-up or subscription fees for utilising the GPS system or the information it provides. GPS is considered to be one of the better external reference clocks available, providing a higher degree of accuracy than its radio alternative, with the result that the GPS system is referenced by computer timing systems and NTP server systems worldwide.

Radio Atomic Clocks
Radio time broadcasts transmit precise time information from a radio transmitter. The broadcasts are copied from an atomic clock time reference and can be obtained with the use of a relatively low-cost radio receiver. There are numerous radio time broadcasts worldwide; however, geographical location can influence signal strength and consistency.

MSF-60 Time Signal
The MSF 60 radio broadcast is a UK transmission which began in the 1950s and was first transmitted from Rugby, Warwickshire. Due to its location it was more commonly known as the 'Rugby Time Signal' or 'Rugby Clock'. In 2007 the transmission was moved to Anthorn, Cumbria, and it serves the whole of the British Isles and parts of north-western Europe. Operating on a frequency of 60 kHz, the MSF 60 signal is a long-wave radio time broadcast controlled by caesium atomic clocks located at the National Physical Laboratory (NPL). The signal can be decoded by a range of radio-controlled clocks and can act as a very accurate time reference for NTP time servers, reference clocks and other computer timing devices.

DCF-77 Time Signal
DCF-77 is a German long-wave radio time signal broadcast, transmitted at 77.5 kHz. Unlike the UK, which operates a lone transmitter, Germany runs two transmitters, which operate as a primary and a backup, both broadcasting from Mainflingen near Frankfurt. The DCF-77 signal began on 1 January 1959 and is generated from local atomic clocks that are linked to master clocks located at the Physikalisch-Technische Bundesanstalt (PTB), Germany's national physics laboratory. Maintained by Media Broadcast GmbH, the DCF-77 signal covers a large area of Europe and can be utilised as a precise time reference for timing equipment.
WWVB Time Signal
Transmitted from Fort Collins in Colorado, WWVB is the US radio time broadcast and is transmitted at 60kHz. The transmission began in 1962 and is maintained by the US National Institute of Standards and Technology (NIST). While most broadcasts signal the local time of the broadcasting nation, the WWVB signal broadcasts time in UTC, Coordinated Universal Time. This is because the US broadcast covers multiple time zones, so time-zone offsets have to be applied as needed.

Time Synchronisation of PCs
Accurate computer time has become an increasingly important feature of commercial computer applications. The synchronisation of computers to accurate time can be attained by combining a GPS or radio timing receiver with an RS232 or USB interface. With the installation of software drivers on the host computer, it is possible for the PC to acquire accurate time and synchronise its internal system time. Generally the host computer's system time can be synchronised to within a few microseconds of the correct time, and popular operating systems such as Microsoft Windows 2000, 2003, XP, Linux, UNIX and Novell can be synchronised.

Time Synchronisation of Computer Networks
Network Time Protocol, more commonly referred to as NTP, is a computer standard developed to distribute accurate time to computers and computer networks. NTP is a client-server based protocol utilised for computer time synchronisation throughout the Internet and local networks. NTP was originally developed by Dr David Mills, of the University of Delaware, who acknowledged the need to provide a standard means of synchronising time across the Internet. Stratum 1 NTP servers reference an external reference clock, such as GPS or radio time and frequency broadcasts, in order to synchronise their system time. This accurate time stamp can then be distributed by the NTP server to network time clients over an IP network. NTP is organised hierarchically: primary servers, known as Stratum 1, secondary servers (Stratum 2) and time clients. The accuracy of NTP server systems can be to within a few microseconds of precise time, resulting in NTP time clients being able to synchronise to within a few milliseconds of an NTP server.

Andrew Everett specialises in the continued development of accurate time synchronisation equipment for computers and computer networks, encompassing GPS clocks and NTP time servers. Andrew contributes to the development of dedicated time servers, NTP synchronised clock applications and atomic clock synchronisation.
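The NTP client-server exchange described above can be tried in a few lines of Python. This is a minimal sketch, assuming the third-party ntplib package is installed (pip install ntplib); pool.ntp.org is a public server pool chosen here purely as an example:

```python
# Minimal NTP query sketch (assumes: pip install ntplib).
import ntplib
from time import ctime

client = ntplib.NTPClient()
# Ask a public pool server for the current time (example server choice).
response = client.request('pool.ntp.org', version=3)

# tx_time: the server's transmit timestamp (seconds since the Unix epoch).
# offset: estimated local-clock error relative to the server.
print('Server time :', ctime(response.tx_time))
print('Clock offset: %.6f s' % response.offset)
```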
Profile of the Day: Susan B. Anthony
On February 15, 1820, Susan B. Anthony, an abolitionist and prominent civil rights leader, was born. Anthony played a pivotal role in the 19th-century women's rights movement in the U.S., advocating for women's suffrage. With the help of Elizabeth Cady Stanton, Anthony drafted the 19th Amendment to the U.S. Constitution, which prohibits denying any citizen the right to vote on the basis of sex. Fourteen years after her death, the 19th Amendment was finally ratified, giving women the right to vote. Check out her family tree and see how you're related!
Native to California and found as far north as Anchorage, Alaska, the Western crab apple, also known as the Oregon crab apple (Malus fusca), displays clusters of delicate white flowers that fade into a deep purple and eventually into a small oblong apple. The fruit, which is tart and not particularly palatable, is sometimes used in jams and jellies. While these ornamental trees create a colorful backdrop in your garden, varying weather conditions tend to make them susceptible to pests and disease, including root rot caused by the soil-borne Armillaria mellea (Armillariella) fungus.

Found along the California coast, the Oregon crab apple thrives in full sun and moist, well-drained, and more acidic soils. Changes in weather such as extended droughts or wetter-than-usual rainy seasons, frost, pest infestation, pollution and incorrect pruning methods cause the trees to become susceptible to the Armillaria fungus, and ultimately root rot. While most crab apple trees have a natural resistance or tolerance to pests and diseases, they become susceptible to root rot when planted in an area already containing the fungus spores. Armillaria fungus grows quickly and occurs most often during the summer when it's warm and humid. Trees infected by root rot succumb quickly, as no known cure exists.

Identifying the Disease
This fungus causes visible deep brown lines, which look similar to brown shoelaces, of a spongy consistency running along the root system. In advanced stages, you will see the actual spores from the Armillaria fungus in the soil surrounding the base of the tree and on the exposed root system. Leaves eventually become discolored, as though they have been burned, and die as the fungus begins to slowly creep up into the tree trunk and limbs.

Avoiding the Problem
Once diseased, the tree, and as much of the root system as possible, should be removed. Never plant new trees in an area where a diseased tree has been removed. When pruning these trees, clean your equipment after each use to prevent spreading the fungus from tree to tree. Fungicides will not change the course of this disease.
In the fifteenth century progress in metallurgy made possible the production of springs, which ultimately led to the development of portable clocks powered by a coiled spring rather than a weight. The origins of the spring-driven clock are almost as obscure as those of the weight-driven clock. Evidence suggests that the idea came from Italy. In the early 1400s, Filippo Brunelleschi and others designed spring-driven devices that made the invention of the portable timekeeper possible. One of the devices was the fusee, a cone-shaped spindle that equalizes the diminishing force of a coiled spring as it unwinds. Increasingly ornate and always expensive, these early clocks were regarded as objects of curiosity, owned by a few wealthy individuals. One of the earliest spring-driven clocks to have survived is a table clock most likely made in Aix-en-Provence about 1530 by Pierre de Fobis. Its complex movement is set into a typical sixteenth-century French clock case, inspired by classical architecture and ornaments rediscovered during the Renaissance. By the 1560s spring-driven clocks were produced throughout France, Flanders, and Germany as exemplified by works in this case.
Marine biologists in Japan have discovered how squid are able to move across the oceans so quickly. For years, fishermen and sailors have reported seeing squid 'flying' across the surface of the sea, and every now and again someone gets lucky and manages to nab a few photographs of the cephalopods in action. It's only now, though, that marine biologists from Hokkaido University have discovered exactly how these squid squirt water out fast enough to propel themselves through the air at up to 11.2 metres per second, faster than Usain Bolt's top speed of 10.31 metres per second.

Jun Yamamoto and his team had been sailing around the northwest Pacific Ocean, 600km off the coast of Japan, looking for schools of squid. They spotted about 100 squid, each around 20cm long, swimming just below the surface of the ocean, but as they approached, around 20 of the squid launched themselves into the air, gliding around 30m in ten seconds. That the squid took flight as the researchers' boat approached has led Yamamoto to speculate that flying is a safety mechanism to help them escape predators.

Written by Ian Steadman (wired.co.uk)
What is... Packets/MTU?
By John Holstein, Cotse Help Desk Coordinator

Imagine, if you will, an envelope stuffed with several pages of an important 50-page document. This envelope contains 22 of those pages: the first page is the cover page, the second page is the table of contents and the remaining 20 pages are randomly selected excerpts of the entire document. This is an example of a packet. A packet contains the TCP header, the IP header and the actual data, or Maximum Segment Size (MSS). This combination may come in various sizes, depending on the operating system, connection type and user preferences, and represents a portion of the file being transferred. Continuing the 50-page report analogy, the TCP and IP headers serve as the cover letter and table of contents: they tell the recipient the order in which to present the information to the computer system and the user. All packets sent carry the TCP and IP headers; this must be true in order for the MSS, the actual data, to be recreated on the target machine.

A packet is the unit of data that is routed between an origin and a destination on the Internet or any other packet-switched network. When any file (e-mail message, HTML file, Graphics Interchange Format file, Uniform Resource Locator request, and so forth) is sent from one place to another on the Internet, the Transmission Control Protocol (TCP) layer of TCP/IP divides the file into "chunks" of an efficient size for routing. Each of these packets is separately numbered and includes the Internet address of the destination. The individual packets for a given file may travel different routes through the Internet. When they have all arrived, they are reassembled into the original file (by the TCP layer at the receiving end).

A packet-switching scheme is an efficient way to handle transmissions on a connectionless network such as the Internet. An alternative scheme, circuit switching, is used for networks allocated for voice connections. In circuit switching, lines in the network are shared among many users as with packet switching, but each connection requires the dedication of a particular path for the duration of the connection.

IP is responsible for moving packets of data from node to node. IP forwards each packet based on a four-byte destination address (the IP number). The Internet authorities assign ranges of numbers to different organizations. The organizations assign groups of their numbers to departments. IP operates on gateway machines that move data from department to organization to region and then around the world.

TCP is responsible for verifying the correct delivery of data from client to server. Data can be lost in the intermediate network. TCP adds support to detect errors or lost data and to trigger retransmission until the data is correctly and completely received.

Sockets is the name given to the package of subroutines that provide access to TCP/IP on most systems.
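To make the sizes concrete, here is a small, illustrative Python sketch of the arithmetic described above. The 20-byte IP and TCP header sizes are the minimal, option-free values, and the 1500-byte MTU is a typical Ethernet figure; real stacks negotiate the MSS during connection set-up, so treat this as a sketch rather than a faithful TCP implementation:

```python
# Illustrative packetization sketch: MSS = MTU - IP header - TCP header.
MTU = 1500          # typical Ethernet MTU, in bytes (assumed example value)
IP_HEADER = 20      # minimal IPv4 header, no options
TCP_HEADER = 20     # minimal TCP header, no options
MSS = MTU - IP_HEADER - TCP_HEADER   # 1460 bytes of payload per packet

def packetize(data: bytes, mss: int = MSS):
    """Yield (sequence, chunk) pairs, mimicking how TCP numbers each
    segment so the receiving end can reassemble the original file."""
    for seq, start in enumerate(range(0, len(data), mss)):
        yield seq, data[start:start + mss]

document = b'x' * 50_000   # stand-in for the 50-page document
packets = list(packetize(document))
print(f'{len(packets)} packets, first carries {len(packets[0][1])} bytes')
```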
Sea level rise
21 April 2008, by Jonathan Gregory
[Image: The disappearing Muir Glacier, Alaska, in 2004]

How much will global sea levels rise this century? Jonathan Gregory, lead author of the chapters covering past and future sea level rise in the Intergovernmental Panel on Climate Change Fourth Assessment Report, investigates.

Sea level rise is an important consequence of climate change because of its impacts on coastal populations and ecosystems. The UN's Intergovernmental Panel on Climate Change Fourth Assessment Report (IPCC AR4) stated that, "Many millions more people are projected to be flooded every year due to sea level rise by the 2080s. Those densely populated and low-lying areas where adaptive capacity is relatively low, and which already face other challenges such as tropical storms or local coastal subsidence, are especially at risk."

The physical science basis for predicting sea level change is an interesting subject because it involves many effects and some unanswered questions. But in the face of an urgent practical need to assess the impacts, the incomplete state of scientific knowledge can be a frustrating obstacle.

When the last ice age peaked 21,000 years ago, global average sea level was about 120m lower than now. As the meltwater from the massive North American and European ice sheets returned to the ocean, sea level rose at rates of over a metre per century, or 10mm per year (mm/yr). Could the same happen in the future? Not in the same way, because those ice sheets no longer exist. However, 125,000 years ago, during the last interglacial (between ice ages), Greenland was 3-5°C warmer than now and its ice sheet was substantially smaller.

Is the Greenland ice sheet viable?
Model experiments suggest that there is a threshold of global warming, somewhere in the range 1·9-4·6°C, beyond which the Greenland ice sheet is not viable. This range is similar to the likely warming by the end of the 21st century under one of the IPCC's commonly used future emissions scenarios, A1B, which assumes rapid economic growth, population peaking mid-century and energy demands met by a balance between fossil and non-fossil fuels. If such a warm climate were maintained, the ice sheet would eventually disappear, raising sea level by 7m. The pressing question is: how quickly could it happen? Current ice-sheet models suggest sea level rise of a few millimetres per year at most.

On average during the last few years, both the Greenland and Antarctic ice sheets have been losing mass. Their combined contribution to sea level rise is 0·1-0·8mm/yr, while the global average rate of rise has been about 3mm/yr. The ice-sheet contribution is hence relatively small at present, but there has been recent acceleration in ice flow speeds, producing increased discharge of ice into the ocean as icebergs. The acceleration could have been caused by recent climate change, through various possible mechanisms, such as ocean warming leading to thinning of ice shelves, and surface melting providing meltwater to lubricate the ice flow. Unfortunately we do not yet have sufficient empirical or theoretical knowledge of the relevant processes to say whether the recent effects are transient variations or the first signs of larger future changes, often labelled ice-sheet collapse.

For other contributions to sea level change, our understanding is better. The main contribution is the thermal expansion of sea water as the ocean warms up.
This effect can be calculated from observed changes in ocean temperature, and simulated by the three-dimensional global climate models that are the main tool for making climate predictions. There is uncertainty in the observational estimate because of the sparse sampling of large volumes of the deep ocean and remote areas, especially the Southern Ocean, and difficulties with instrumental calibration, but models and observations agree on 1-2mm/yr of thermal expansion in recent years.

[Figure: Snapshot of sea level rise relative to the global average for the decades 2080-2099 minus 1980-1999, from the HadCM3 climate model. A negative number means sea level rises less than on average, not that sea level falls.]

Glaciers worldwide have made a larger contribution than the ice sheets recently, despite having only one percent of the total mass of ice on land. This is because they are in warmer climates, making them more sensitive to climate change. There is uncertainty in their contribution because there is a very large number of glaciers (over 100,000), of which scientists have monitored just a few hundred, and care is needed in treating these as representative. But there is reasonable agreement between observed and simulated changes in global glacier mass balance.

When we add up the estimated contributions to sea level for recent years (1993-2003), the total agrees with the observed rate of rise within the uncertainties. If we look at a longer period - the last four decades (1961-2003) - both the observed rate of rise, of about 1·8mm/yr, and the sum of contributions from ice sheets, glaciers and thermal expansion are smaller than in recent years, but, crucially, they don't tally. Smaller thermal expansion and loss of land ice during the longer period is consistent with the cooler average climate of earlier decades. However, for 1961-2003 the best estimate of observed sea level rise is 60 percent larger than the sum of the estimated contributions.

This discrepancy indicates a deficiency in our scientific knowledge. It is not specifically a problem with models, because models and observations agree for the main contributions. But, still, the budget is not balanced. To close the budget, one or more of the following must be true: the rate of sea level rise is an overestimate, or one of the terms is underestimated, or there is a missing term. This is not a new puzzle; it has been called the 'enigma' of sea level rise. Without a closed budget for sea level we cannot satisfactorily account for the increase in rate over recent years. Moreover, the record of 20th century sea level rise indicates large variability in the rate on decadal timescales; the 3mm/yr of recent years is unusually high but not unprecedented. So it is unclear if it is a fluctuation or a longer-term acceleration.

Using the same models with which we study the past, we can make projections of sea level change in the future. It is very likely that the rate of sea level rise will be greater in the 21st century than it has been on average in recent decades. For the emissions scenario A1B, the most recent IPCC report gives projections of 0·21-0·48m by the end of the century. Smaller or larger greenhouse gas emissions lead to smaller or larger sea level rise. These latest IPCC projections are quite similar to those of the IPCC Third Assessment Report, but the upper bounds are lower, and the lower bounds higher.
This is because better observational datasets have reduced uncertainties in our methodologies. The projections include the effects of future changes in precipitation and melting on ice sheets and the recent acceleration in ice flow, but exclude future rapid accelerations (discussed above) for which we do not have sufficient understanding to make predictions. Therefore sea level rise could be substantially larger than the range given.

Accelerating ice flows
As an example, if the contribution to sea level rise from accelerated ice flow were to grow linearly with global warming, it would add up to 0·2m to sea level this century, but there is no consensus on whether that is an underestimate or an overestimate. The IPCC AR4 stated the problem thus: "Understanding of these effects is too limited... to provide a best estimate or an upper bound for sea level rise" - however much the authors would have liked to be able to do that!

Some researchers have suggested that, as an alternative method for projections, we can use the empirical evidence that the rate of sea level rise has generally increased during the last century at the same time as global temperature has been rising. Assuming the former is proportional to the latter and using climate model projections of 21st-century warming gives projections of sea level rise of about one metre.

[Figure: Sea level change, from the UN's IPCC Fourth Assessment Report]

The reason this method gives larger projections is that it implicitly makes an assumption about the discrepancy in the sea level budget for recent decades, namely that this extra contribution will scale up with global temperature. (Note that the missing amount cannot be due to rapid changes in the ice sheets if, as the current assessment suggests, they were not a large contributor in the past few decades.) The IPCC fourth assessment projections, on the other hand, take no account of the discrepancy in the budget. The contrast between the results underlines the need to resolve this problem.

Regional projections: climate models don't agree
On top of the uncertainty about global average sea level rise comes the issue of its regional pattern. Unfortunately we cannot give confident regional projections because climate models do not agree, except for the conclusion that sea level rise will not be geographically uniform. Some places will see more than average, others less, the spread being tens of percent of the global mean, but the only common feature among all models is smaller sea level rise than average in the Southern Ocean. The reason for the general disagreement is that the climate models differ in regard to the processes which take up and redistribute heat within the ocean.

One solid qualitative conclusion is that, even if climate change is stabilised in the next century or two, it will take much longer for the sea to reach its final level. It will take many centuries for the deep ocean to catch up with surface warming, but when it does, thermal expansion could generate a one metre or more rise, depending on the level of stabilisation of greenhouse gases. Despite the controversy over possible rapid changes, the ice sheets have timescales of millennia for full adjustment to climate change, with sea level changes of metres being possible. Policy decisions made in coming decades therefore could have a profound influence for much longer into the future.
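As a back-of-the-envelope illustration (not an IPCC calculation), the 1961-2003 budget discrepancy described earlier can be expressed in two lines of Python: if the observed rise of about 1·8mm/yr is 60 percent larger than the sum of the estimated contributions, the unexplained residual follows directly.

```python
# Illustrative arithmetic only: numbers are the article's rounded values.
observed = 1.8                # mm/yr, observed rise 1961-2003
implied_sum = observed / 1.6  # observed is ~60% larger than the summed terms
print(f'implied sum of contributions ~ {implied_sum:.2f} mm/yr')
print(f'unexplained residual        ~ {observed - implied_sum:.2f} mm/yr')
```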
Professor Jonathan Gregory is a climate and sea level researcher at NERC's collaborative centre the National Centre for Atmospheric Science, and a Met Office Fellow at the Hadley Centre for Climate Change. He was a lead author of chapter five (Observations: Oceanic climate change and sea level) and chapter 10 (Global climate projections) of the IPCC's Fourth Assessment Report. All scientists involved in the IPCC reports were awarded the Nobel Peace Prize in 2007.
A good night's sleep is extremely important for children. Not enough sleep can cause daytime sleepiness, moodiness and an inability to concentrate, all of which are detrimental to a child's learning and social development. It's recommended that toddlers get between 12 and 14 hours of sleep each day and that primary-school-aged children get 9 to 10 hours of sleep each night. Many kids, however, do not get this much sleep.

Beyond just affecting learning and social outcomes, research has found an association between later bedtimes in childhood and risk of obesity in adolescence. A study that has been following a group of children since 1991 has plotted the association between bedtime and obesity over time. At five years of age, a quarter of the children tracked went to bed before 8pm, half went to bed between 8pm and 9pm and the final quarter typically stayed up beyond 9pm. When these participants were followed up years later, at age 17, the rates of obesity in the three bedtime groups were 10%, 16% and 23% respectively. So those who regularly went to bed after 9pm had more than double the risk of obesity of those who went to bed before 8pm. This study highlights yet another reason why it's so important for kids to get a good night's sleep.

The study didn't measure when children actually fell asleep, but rather when they went to bed; it's likely, though, that those who went to bed earlier commenced the falling-asleep process earlier.

If you have trouble getting your kids to bed, talk to a health professional for advice or visit the Better Health Channel, where there are good, evidence-based tips on sleep habits in childhood.

Reference: Anderson, S. et al. Bedtime in preschool-aged children and risk for adolescent obesity. The Journal of Pediatrics. Epub online July 5, 2016. doi: 10.1016/j.peds.2016.06.005.
2.2: Subatomic Particles

Dalton was only partially correct about the particles that make up matter. All matter is composed of atoms, and atoms are composed of three smaller subatomic particles: protons, neutrons, and electrons. These three particles account for the mass and the charge of an atom.

The Discovery of the Electron
The first clue about subatomic structure came at the end of the 19th century, when J.J. Thomson discovered the electron using a cathode ray tube. This apparatus consisted of a sealed glass tube, from which almost all the air had been removed, and which contained two metal electrodes. When high voltage was applied across the electrodes, a visible beam called a cathode ray appeared between them. This beam was deflected toward the positive charge and away from the negative charge, and was produced in the same way with identical properties when different metals were used for the electrodes. In similar experiments, the ray was simultaneously deflected by an applied magnetic field. Measurements of the extent of deflection and the magnetic field strength allowed Thomson to calculate the charge-to-mass ratio of the cathode ray particles. The results of these measurements indicated that these particles were much lighter than atoms. Based on his observations, Thomson proposed the following:
- The particles are attracted by positive (+) charges and repelled by negative (−) charges, so they must be negatively charged (like charges repel and unlike charges attract);
- The particles are less massive than atoms and indistinguishable, regardless of the source material, so they must be fundamental, subatomic constituents of all atoms.

Thomson's cathode ray particle is an electron, a negatively charged, subatomic particle with a mass more than 1,000 times smaller than that of an atom. The term "electron" was coined in 1891 by Irish physicist George Stoney, from "electric ion." In 1909, Robert A. Millikan calculated the charge of an electron with his "oil drop" experiments. Millikan created microscopic oil droplets, which could be electrically charged by friction as they formed or by using X-rays. These droplets initially fell due to gravity, but their downward progress could be slowed or even reversed by an electric field lower in the apparatus. By adjusting the electric field strength and making careful measurements and appropriate calculations, Millikan was able to determine the charge on individual drops to be 1.6 × 10^−19 C (coulomb). Millikan concluded that this value must, therefore, be the fundamental charge of a single electron. Since the charge of an electron was now known from Millikan's research, and the charge-to-mass ratio was already known from Thomson's research (1.759 × 10^11 C/kg), the mass of the electron was determined to be 9.107 × 10^−31 kg.

Rutherford's Nuclear Model
Scientists had now established that the atom was not indivisible as Dalton had believed, and due to the work of Thomson, Millikan, and others, the charge and mass of the negative, subatomic particles (the electrons) were known. Scientists knew that the overall charge of an atom was neutral. However, the positively charged part of an atom was not yet well understood. In 1904, Thomson proposed the "plum pudding" model of atoms, which described a positively charged mass with an equal amount of negative charge in the form of electrons embedded in it, since all atoms are electrically neutral.
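As an aside, the electron mass quoted above is just the ratio of the two measured quantities; a one-line check, written here in Python purely for illustration:

```python
# Electron mass from Millikan's charge and Thomson's charge-to-mass ratio.
e = 1.602e-19        # elementary charge, in coulombs (Millikan)
e_over_m = 1.759e11  # charge-to-mass ratio, in C/kg (Thomson)
print(f'electron mass = {e / e_over_m:.3e} kg')  # ~9.107e-31 kg
```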
A competing model had been proposed in 1903 by Hantaro Nagaoka, who postulated a Saturn-like atom, consisting of a positively charged sphere surrounded by a halo of electrons.

The next major development in understanding the atom came from Ernest Rutherford. He performed a series of experiments using a beam of high-speed, positively charged alpha particles (α particles) that were produced by the radioactive decay of radium. He aimed a beam of α particles at a very thin piece of gold foil and examined the resultant scattering of the α particles using a luminescent screen that glowed briefly when hit by an α particle. He observed that most particles passed right through the foil without being deflected at all. However, some were diverted slightly, and a very small number were deflected almost straight back toward the source. Rutherford deduced the following from these observations: Because most of the fast-moving α particles passed through the gold atoms undeflected, they must have traveled through essentially empty space inside the atom. Alpha particles are positively charged, and like charges repel one another, so the few α particles that changed paths abruptly must have hit, or closely approached, another body that also had a highly concentrated positive charge. Since the deflections occurred only a small fraction of the time, this charge occupied only a small amount of the space in the gold foil.

Analyzing this series of experiments, Rutherford drew two important conclusions:
- The volume occupied by an atom must consist of a large amount of empty space.
- A small, relatively heavy, positively charged body, the nucleus, must be at the center of each atom.

This analysis led Rutherford to propose a model in which an atom consists of a very small, positively charged nucleus, in which most of the mass of the atom is concentrated, surrounded by negatively charged electrons, so that the atom is electrically neutral. After many more experiments, Rutherford also discovered that the nuclei of other elements contain the hydrogen nucleus as a "building block," and he named this more fundamental particle the proton, the positively charged, subatomic particle found in the nucleus.

The Structure of an Atom
Protons are found in the nucleus of an atom and have a positive charge. The number of protons is equal to the atomic number on the periodic table and determines the identity of the element. Neutrons are also found in the nucleus. They have no charge, but they have approximately the same mass as protons and thus contribute to the atomic mass of an atom. Electrons orbit around the nucleus in clouds. They have a negative charge and negligible mass, so they contribute to the overall charge of an atom, but not to its mass.

The nucleus was known to contain almost all of the mass of an atom, with the number of protons only providing half, or less, of that mass. Different proposals were made to explain what constituted the remaining mass, including the existence of neutral particles in the nucleus. It was not until 1932 that James Chadwick found evidence of neutrons, uncharged, subatomic particles with a mass approximately the same as that of protons. The existence of the neutron also explained isotopes: they differ in mass because they have different numbers of neutrons, but they are chemically identical because they have the same number of protons.
Atomic Mass Unit (amu) and the Fundamental Unit of Charge (e)
The nucleus contains the majority of an atom's mass because protons and neutrons are much heavier than electrons, whereas electrons occupy almost all of an atom's volume. The diameter of an atom is on the order of 10^−10 m, whereas the diameter of the nucleus is roughly 10^−15 m, about 100,000 times smaller. Atoms, and the protons, neutrons, and electrons that compose them, are extremely small. For example, a carbon atom weighs less than 2 × 10^−23 g, and an electron has a charge of less than 2 × 10^−19 C. When describing the properties of tiny objects such as atoms, appropriately small units of measure, such as the atomic mass unit (amu) and the fundamental unit of charge (e), are used.

The amu is defined with regard to the most abundant isotope of carbon, atoms of which are assigned masses of exactly 12 amu. Thus, one amu is exactly 1/12 of the mass of one carbon-12 atom: 1 amu = 1.6605 × 10^−24 g. The Dalton (Da) and the unified atomic mass unit (u) are alternative units that are equivalent to the amu. The fundamental unit of charge (also called the elementary charge) equals the magnitude of the charge of an electron, with e = 1.602 × 10^−19 C.

A proton has a mass of 1.0073 amu and a charge of 1+. A neutron is a slightly heavier particle with a mass of 1.0087 amu and a charge of zero; as its name suggests, it is neutral. The electron has a charge of 1− and is a much lighter particle with a mass of about 0.00055 amu. For reference, it would take about 1800 electrons to equal the mass of one proton. The properties of these fundamental particles are summarized in the following table.

|Subatomic particle|Charge (C)|Unit charge|Mass (g)|Mass (amu)|
|---|---|---|---|---|
|Electron|−1.602 × 10^−19|1−|0.00091 × 10^−24|0.00055|
|Proton|1.602 × 10^−19|1+|1.67262 × 10^−24|1.00727|
|Neutron|0|0|1.67493 × 10^−24|1.00866|
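The table lends itself to a small worked example. The sketch below (illustrative Python, using the table's rounded values) totals the mass and net charge of an atom from its particle counts; note that the naive sum for carbon-12 comes out slightly above the defined 12 amu because it ignores the nuclear binding energy (mass defect) that the amu definition absorbs.

```python
# Approximate atomic mass and net charge from particle counts,
# using the rounded values from the table above.
PROTON_AMU, NEUTRON_AMU, ELECTRON_AMU = 1.00727, 1.00866, 0.00055
AMU_IN_GRAMS = 1.6605e-24

def atom_properties(protons: int, neutrons: int, electrons: int):
    mass_amu = (protons * PROTON_AMU + neutrons * NEUTRON_AMU
                + electrons * ELECTRON_AMU)
    net_charge = protons - electrons   # in units of e
    return mass_amu, mass_amu * AMU_IN_GRAMS, net_charge

# Carbon-12 (6p, 6n, 6e): the sum is slightly over 12 amu because this
# naive addition ignores the mass defect (nuclear binding energy).
mass_amu, mass_g, charge = atom_properties(6, 6, 6)
print(f'{mass_amu:.5f} amu = {mass_g:.4e} g, net charge {charge:+d}e')
```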
For the past several years, a group of researchers has been observing a seemingly impossible wood ant colony living in an abandoned nuclear weapons bunker in Templewo, Poland, near the German border. Completely isolated from the outside world, these members of the species Formica polyctena have created an ant society unlike anything we've seen before.

The Soviets built the bunker during the Cold War to store nuclear weapons, sinking it below ground and planting trees on top as camouflage. Eventually a massive colony of wood ants took up residence in the soil over the bunker. There was just one problem: the ants built their nest directly over a vertical ventilation pipe. When the metal covering on the pipe finally rusted away, it left a dangerous, open hole. Every year when the nest expands, thousands of worker ants fall down the pipe and cannot climb back out. The survivors have nevertheless carried on for years underground, building a nest from soil and maintaining it in typical wood ant fashion. Except, of course, that this situation is far from normal.

Polish Academy of Sciences zoologist Wojciech Czechowski and his colleagues discovered the nest after a group of other zoologists found that bats were living in the bunker. Though it was technically not legal to go inside, the bat researchers figured out a way to squeeze into the small, confined space and observe the animals inside. Czechowski's team followed suit when they heard that the place was swarming with ants. What they found, over two seasons of observation, was a group of almost a million worker ants whose lives are so strange that they hesitate to call them a "colony" in the observations they just published in the Journal of Hymenoptera Research.

Because conditions in the bunker are so harsh, constantly cold, and mostly barren, the ants seem to live in a state of near-starvation. They produce no queens, no males, and no offspring. The massive group tending the nest is entirely composed of non-reproductive female workers, supplemented every year by a new rain of unfortunate ants falling down the ventilation shaft.

Like most ant species, wood ants are tidy animals who remove waste from their colony. In the case of the bunker ants, most of this waste is composed of dead bodies. The researchers speculate that mortality in the "colony" is likely much higher than under normal circumstances. "Flat parts of the earthen mound [of the nest] and the floor of the adjacent spaces ... were carpeted with bodies of dead ants," write Czechowski and colleagues. This "ant cemetery" was a few centimeters thick in places, and "one cubic decimeter sample contained [roughly] 8,000 corpses," which led the researchers to suggest that there were likely 2 million dead ants piled around the nest mound. The sheer numbers of dead bodies suggest that this orphaned wood ant nest has been active for many years.

The ant graveyard is also host to a tiny ecosystem, where mites and a few other invertebrates feed on the bodies of the dead wood ants. The question is, what are the wood ants eating? It's possible they have figured out how to eat the creatures who feast in their cemeteries, essentially making them cannibals at one remove. But Czechowski and his team dismiss this as unlikely. It's also possible that there are nutrients growing in the bat guano from the ants' only living neighbors in the bunker. But in their years of observation, the scientists still haven't figured out for certain what the ants' source of food is.
Wood ants are known for surviving in harsh conditions, and they have been found on remote islands as well as living in small, closed boxes. And it's not impossible that this underworld colony could bloom into something more. In a previous experiment, Czechowski showed that orphaned wood ant colonies will adopt queens from related species. So if a queen ant fell down the pipe, she might join this colony and start reproducing. Unfortunately, however, without a steady food supply the ants probably wouldn't have enough energy to raise a new generation and keep the nest warm for them. So the only way this nest carries on is by waiting for a new rain of ants from the free colony above ground.

The paper's conclusion reads like a dystopian science fiction scene from the 1970s:

The wood-ant 'colony' described here – although superficially looking like a functioning colony with workers teeming on the surface of the mound – is rather an example of survival of a large amount of workers trapped within a hostile environment in total darkness, with constantly low temperatures and no ample supply of food. The continued survival of the 'colony' through the years is dependent on new workers falling in through the ventilation pipe. The supplement of workers more than compensates for the mortality rate of workers such that through the years the bunker workforce has grown to the level of big, mature natural colonies.

Life in an abandoned nuclear weapons bunker is nightmarish, even for the humble ant. It appears that the legacy of the Soviet occupation of Poland doesn't just haunt the country's human population. It has affected the social structures of insects too.

Journal of Hymenoptera Research, 2016. DOI: 10.3897/jhr.51.9096
The ability to use and interpret language is, as far as we are aware, a characteristic unique to the human species. It is an ability to express thoughts and feelings by spoken sounds or written symbols and for others to recognize and understand the meanings of these sounds and symbols. The term 'language' may also refer to the rules associated with the spoken word or written language, such as parts of speech and sentence construction.

The ability to use and interpret language is associated with a particular area of the brain. In most individuals the left side of the brain is dominant for language. It has been suggested that in people with schizophrenia the language area of the non-dominant right side of the brain is still active, and this can lead to psychotic symptoms.

Usually someone has an inner thought voice which may or may not be the same as their speaking voice. They will also usually have a self-image picture and a set of life pictures in the mind that is the memory of life experiences; kind of like a video that they can move forwards or backwards through and access a frame at any time. This will obviously grow longer with time and shape character. Together these help form a person's conscious identity. If the inner voice changes, or any of the inner pictures change, so will the sense of identity.

When reading a fantasy novel, some people will read the words 'inside their mind' with their own inner voice (or they may absorb the words into their head without any conscious knowledge of any voice type at all). Others will role-play the voices of the different characters in the book inside their mind, kind of like soap opera characters. Others may even pick up the inner character of the creator of the story, almost as if the author were reading to them. Would it be a great assumption to make that the least likely to suffer from schizophrenia-type symptoms would be those without the creativity and imagination necessary to put them at risk, i.e. the first type of readers?

Thoughts, pictures and words are inextricably linked. Words lead to thoughts, thoughts lead to words. One person reading a book vividly imagines the scene as described by the words, while another will see no pictures at all. Some famous authors even claim that their characters became alive inside their minds and almost wrote the stories themselves.

Research using brain scanning equipment shows the changes that occur in the speech area of the brain in people with schizophrenia when they hear voices. The brain reacts as if the voices are real (i.e. the brain mistakes thoughts for real voices). This causes confusion in the patient, who may develop irrational beliefs such as their thoughts being controlled, people talking through the television, or a microchip being implanted inside them.

In a recent study comparing patients with psychotic symptoms to a control group, brain activity during verbal fluency tests was measured. Decreased lateralization and greater activity in the right superior lateral lobe were found in the patients. However, when a group of non-psychotic subjects who suffered from isolated auditory hallucinations was tested, they showed no significant difference in brain activity from the control group. Thus, there is no established link between auditory verbal hallucinations and language lateralization. It would be interesting to try a different approach: scanning the brains of different types of reading and visualization personalities to see if the brain sometimes puts out the signal for hearing a real voice.
I would hate to say it could be so simply put, but sometimes a vivid imagination can lead to a problem distinguishing fantasy from reality in creative people. It is a very subtle beginning, almost as if (for a tiny instant) the brain enters a parallel reality, and fantasy and reality at a single time point become confused and interchanged in the mind ('inside the video of live memory'). The false memory may then grow like a seed inside the mind and eat away at the video-type memory like a virus until it becomes totally impossible to distinguish fantasy from reality. In the end fantasy takes over and there is no real sense of the correct reality or identity left. As I said before, words, thoughts and pictures are closely interwoven. The creation of auditory hallucinations may be described as the same kind of subtle inter-parallel-universe split. For an instant the thought voice, whatever its origin may be, is created and heard as real.

It is a well-known psychological rule that when presented with a random pattern and asked what they might see, a person will try to form it into a familiar shape such as a face. In the same way, random sounds can be converted into words with a little stretch of the imagination. This is also commonly used in NLP, where words may be chosen that sound like another phrase.

Another point of interest would be to look at the brain, linked to state of mind, when people create characters' voices in their mind. Small children often have irrational fears such as the closet monster. When lying in bed alone in the dark, a voice from the closet may clearly be heard. It is a simple trick of the mind; they aren't actually mentally ill. Even adults, caught alone in the woods at night, may experience the Wind in the Willows 'wild wood' syndrome. Every shadowed tree is a face, every rustling leaf becomes a follower. Could this brainwave pattern create a susceptibility that (along with a natural ability to create role plays in the mind) can create and maintain a tormenting voice? Also, are 'nice' voices created when positive emotional areas of the brain are activated at the same time as words are read about pleasant characters, and 'nasty' voices created when negative areas of the brain are activated at the same time as reading about horrible characters?

When reading, creating a fantasy voice in the mind does not normally register as a real voice, but can it reach a level where this fantasy voice registers in the brain as real? Frightened people in the woods can turn natural sounds into voices because they are in a susceptible frame of mind; are these voices registered in the brain as real voices? Presumably yes, even though it is just a sound, in the same way mentally ill people will change a dog barking or a train in the distance into a voice. Could a link be established between these two different types of voice creation?

Beware of the uncertainty principle! This may be better known in particle physics, but trying to study someone's natural inner voice is very difficult simply because the monitoring itself can change it.

Joe recently helped develop the British Woodlands food webs educational simulation for Newbyte and is donating his share of the profits from The Last Tiger (available on Amazon Kindle), a children's fantasy novel, to the Animals on the Edge conservation project.
Mental Health: A Holistic Perspective
By Beatriz Martinez, reviewed by Juliana Ascolani

Health is the greatest wealth one can achieve, and when our hopes for accessible and sustainable health become reality, we benefit from richer, fuller lives. This series of articles connects the six dimensions of holistic health to understand how prevention through lifestyle and empowerment can improve our overall wellness.

Prevention, access to health care, and a support system are key to addressing mental health. In the United States, statistics from 2019 showed that 20.6% of adults aged 18 or older experienced a mental, behavioral, or emotional disorder, and 44.8% of those adults received mental health support. Also, 5.2% of adults experienced a serious mental illness, causing serious functional impairment. According to the World Health Organization (WHO), depression is a common mental disorder, accounting for 264 million people affected in 2018 alone, a number that increased during the COVID-19 pandemic.

What is mental health?
Mental health is a dynamic state of emotional, psychological, and social balance. It involves a person's ability to cope with the psychosocial demands of everyday life and live in harmony with the values of society, which are: respect and care for others and ourselves, understanding of the importance of social life, and respect for the environment. The concept of mental health takes into consideration the changes and challenges present in life and the fact that mentally healthy people experience a whole range of human emotions, from joy and happiness to sadness and grief, as well as the possibility to find a balance and restore wellbeing. A state of optimal mental health allows people to be aware of their potential to adapt, persevere, and integrate effectively in their world. It's also fundamental to think, socially interact, have an occupation, and enjoy life. This is why promoting and protecting mental health is the basis for communities and societies worldwide.

What are the benefits of optimal mental health?
- Capacity to face or adapt to different situations;
- More positive moods and behaviors, increasing creativity, improving learning, and being more open to trying new things;
- Awareness of one's potential and increased self-esteem;
- Healthier relationships;
- Feelings of calm and satisfaction;
- Nurturing of balanced pursuits that help cope with life stressors;
- Decluttering to be mindful of life's pleasures.

What are the key components of mental health?
- Psychological wellness: An individual attribute, psychological wellness includes basic cognitive skills as well as the flexibility and ability to cope with challenges and stress while experiencing a healthy mind-body connection. It also entails the importance of self-acceptance, self-determination, and boundaries. By deepening our cognitive skills, we are able to solve problems, make decisions, and pay attention, among other healthy actions. By adopting healthier habits and avoiding substance abuse and other harmful behaviors, we can prevent and decrease the prevalence of mental health disorders such as depression, bipolar disorder, dementia, or even neurological and developmental challenges, like autism.
- Emotional wellness: This facet of mental health involves exploring our own thoughts and feelings, both positive and negative, as well as experiencing empathy towards others. It allows us to understand and find the best way to handle our emotions and, at the same time, regulate stress.
Emotional wellness empowers us to successfully manage our feelings and to adapt to unexpected changes and difficult situations. Also, empathy allows us to understand what others feel, improving communication and interaction with our surroundings, in order to decrease or avoid harmful behaviors. With strong emotional wellness, we can avoid insomnia, depression, or anxiety, among other health disorders.
- Social wellness: This component refers to our relationships and the ways we interact with others. Healthy, nurturing, and supportive relationships can positively influence our overall mental wellness. By improving our social wellness, we can enhance our self-confidence and self-esteem, as well as gain resilience against distress. Having healthy relationships helps us feel more secure when making important decisions and be aware of our own potential; they will always bring out the best in us.
- Environmental wellness: Body and mind are extremely interrelated with the environment. The overall experience of existing in this world includes how we feel in our surroundings, and finding a balance is key to improving our wellbeing. Interaction with our neighborhood and community can affect our mental health. A safe and engaging neighborhood will enable us to avoid stress and worries, and it will bring joy to our lives. Bike lanes, green spaces, healthy restaurants, community activities, and a low (or nonexistent) crime rate, among other factors, can contribute to a healthy neighborhood and, therefore, can improve our mental health.

How does mental health connect with the other dimensions of holistic wellness?
- Physical: The mind-body connection is a powerful tool for us, and working on it can help us enjoy life more. Stomachaches or headaches, for example, aren't only physical disorders; these physical symptoms can be caused by our mental state, and the other way round as well. While experiencing distress and physical challenges, we might feel judged and unacknowledged by others. Consequently, we may develop stress and lower self-esteem, factors that can worsen our mental health.
- Spiritual: Spiritual practices are helpful when dealing with stress and worries, promoting better mental health. Furthermore, they're also a great tool for preventing emotional or psychological disorders, as they can become a way of understanding and managing our emotions and feelings.
- Social: Our mood and behavior don't only have an impact on our own mental health but also on our relationships. For example, psychological wellbeing is influenced by the emotional support provided by social ties, which, in turn, may reduce the risk of unhealthy behaviors and poor physical health. Being surrounded by supportive people may help us cope with and reduce the effects of stress and, at the same time, may help us find meaning and purpose in life, improving our mental wellbeing. Consequently, our overall mental health will be affected when we feel less supported and more insecure, which can lead to feelings of loneliness.
- Environmental: Good mental wellness allows us to be more aware of our personal environment, such as work and home, as well as the planet. At the same time, research shows that the physical environment may influence the way we cope with stress and fatigue and how we have relationships with others. For example, poor-quality housing appears to increase psychological distress and increase loneliness.
If we feel empowered, it's more likely that we will adopt positive habits related to mindful consumption and, therefore, enhance a healthier home and work environment, contributing to the betterment of the world around us. Most of all, good mental health promotes self-awareness, so we can appreciate and control our own wellness and environment.
- Economic: Having well-balanced mental health allows us to boost our productivity by focusing on our priorities and successfully finishing our tasks. On the contrary, if our mental health is not optimal, it might contribute to presenteeism and negative work performance, leading to decreased possibilities of promotion and employment stability. Research establishes a clear relationship between mental and economic health, considering that mental health problems are related to, for example, deprivation, poverty, and inequality. At the same time, when people are going through economic crises, mental wellbeing is challenged. By focusing on the connection between economic and mental health, we can avoid health disorders such as sleeping issues, depression, and anxiety, a lack of interest in favorite activities, tiredness, behavioral shifts, or mood swings.

SolaVieve is dedicated to making a difference in your overall health and wellbeing. We invite you to explore your mental dimension by finding out what works best for you, building on your potential, and continuing your holistic journey with us.

I am a journalist specialised in international relations, and writing is my absolute passion. I translate my knowledge and feelings into words, a process that has become my profession and at the same time my personal healing practice. I believe that being curious about what surrounds us is the key to educating ourselves and to further being able to express it to others. I love reading and am mostly interested in politics, human rights, social movements, and the passionate world of health.

I am a health researcher who bridges data science and health research, with direct experience in healthcare and university institutions, passionately and collaboratively pursuing the integration and synergy of all key areas of health and wellness. I believe in inclusion as the main pillar of our society, especially when it comes to health. Promotion and prevention in health empower people to adopt healthy decisions, and thus I have been working during the last years on the development of inclusive and holistic health systems. What do I enjoy the most about my job? Realizing how we are making a difference in people's lives, and seeing the result in their health journeys. I enjoy the challenge of questioning new paradigms and creating debate around them.

References:
Canadian Mental Health Association (2021). Benefits of Good Mental Health. Retrieved 3 March 2021, from https://toronto.cmha.ca/documents/benefits-of-good-mental-health/
Galderisi, S., Heinz, A., Kastrup, M., Beezhold, J., & Sartorius, N. (2015). Toward a new definition of mental health. World Psychiatry: Official Journal of the World Psychiatric Association (WPA), 14(2), 231–233. https://doi.org/10.1002/wps.20231
NIMH (2021). Mental Illness. Retrieved 3 March 2021, from https://www.nimh.nih.gov/health/statistics/mental-illness.shtml
Umberson, D., & Montez, J. K. (2010). Social relationships and health: a flashpoint for health policy. Journal of Health and Social Behavior, 51 Suppl(Suppl), S54–S66. https://doi.org/10.1177/0022146510383501
Rutter, M. (2005). How the environment affects mental health.
British Journal of Psychiatry, 186(1), 4-6. doi:10.1192/bjp.186.1.4
World Health Organization (2011). Impact of economic crisis on mental health. Retrieved 3 March 2021, from https://www.euro.who.int/__data/assets/pdf_file/0008/134999/e94837.pdf
World Health Organization (2019). Mental disorders. Retrieved 3 March 2021, from https://www.who.int/news-room/fact-sheets/detail/mental-disorders
The 52 noisemakers in the exhibition were collected by Colleen and Richard Fain over 20 years from around the world and were shown publicly for the very first time. The diverse styles of the noisemakers reflect different regions and cultures and are as varied as the materials they are made from.

Grogger, the Yiddish word for rattle, is associated in popular culture most often with the noisemaker used during the Jewish holiday of Purim. In addition, groggers, rashanim in Hebrew, have a long history spanning different communities and were used for different purposes. Medieval Christians used a noisemaker in place of bells during what were known as the 'silent days', the three days before Easter. These noisemakers are known today as the crotalus, which means rattle in Greek.

Whatever purpose groggers served throughout history, the idea was the same: to make noise. They were used as an alarm by patrolmen in the mid-17th century in what was then New Amsterdam (today New York), and those patrolmen became known as the Rattle Watch. Boston used the rattle for the same purpose, and for the next two centuries rattles were supplied to the police force. Portable, inexpensive, loud and easy to use, the rattles were used in both the Americas and Britain.
You can use 'to do' in almost every English sentence to emphasize the main verb. Adding 'to do' strengthens the meaning of the main verb or signals a contrast; for example, 'He works hard' becomes 'He does work hard.' This grammatical structure is called the emphatic do in English. For a full overview of what this is and how to form it, click here.

Emphatic Do Exercise 3
You can find the exercise below: Complete the following sentences by using the emphatic do and the verb in brackets.
1. I (to like – present) beer.
2. We (to have – present) a dog.
3. She (to watch – present) TV sometimes.
4. They (to like – present) rice pudding.
5. I (to eat – past) a large pizza all by myself.
6. It (to escape – past) from its cage.
7. He (to use – present) his AirPods a lot.
8. Yes, I (to break – past) my arm.
9. They (to listen – present) to the radio.
10. The teacher (to scream – past) at us.
Special Education services are for kids from ages 3 to 22. If your child qualifies, they can get support from the school system even before they start school. Different kinds of therapy When we talk about therapies for children with developmental disabilities, we don't mean just psychological counseling. There are many types that address developmental skills. If your child has trouble with development, there are different types of therapies that can help. These can build skills like communication, using their hands and bodies, building good relationships and managing challenging behaviors. This list includes the most common ones, but there are others. Types of therapies: Behavioral Therapy: Uses positive reinforcement and an understanding of each child's learning and motivation. It has been shown to be very effective in helping kids to learn new skills and positive behaviors. Applied Behavior Analysis (ABA) is one of the most common ones for kids with autism. Developmental Therapy (DT): Uses relationships and play to support all areas of a child's development: cognitive skills, language and communication, social-emotional skills and behavior, gross and fine motor skills, and self-help skills. RDI, SCERTS and DIRFloortime are examples of Developmental Therapy. Speech and Language Therapy: Helps with communication skills at all levels. It addresses both verbal and non-verbal ways to communicate. Occupational Therapy (OT): Helps with fine motor coordination: using hands and fingers to do things like hold onto a toy, use a spoon, or write with a pencil. More broadly, it supports the motor, cognitive and emotional skills needed for the things children do, like play, academic activities, self-care (such as eating and dressing) and social engagement. Sensory Integration Therapy: A type of OT which helps kids adapt to sensory experiences that may be hard for them, like the feel of clothing, noisy environments or being held. Physical Therapy (PT): Helps with large muscle movements (gross motor coordination), like sitting, crawling and walking. You and the school team will develop a specific program that describes the services your child needs. Remember, these are just some examples! If your child is in preschool and needs some therapy, they can usually get this at preschool if that works for you. Sources: The Center for Autism and Related Disorders (CARD), Autism Consortium, Early Intervention Specialists, Federation for Children with Special Needs, MA DOE, Casey Family Programs
Mayan Civilization INTRODUCTION The Mayan Civilization was an ancient Native American civilization that grew to be one of the most advanced civilizations in the Americas. The people known as the Maya lived in the region that is now eastern and southern Mexico, Guatemala, Belize, El Salvador, and western Honduras. The Maya built massive stone pyramids, temples, and sculpture and accomplished complex achievements in mathematics and astronomy, which were recorded in hieroglyphs. After 900 the Maya mysteriously disappeared from the southern lowlands of Guatemala. They later reappeared in the north on the Yucatan Peninsula and continued to dominate the area until the Spanish conquest. Descendants of the Maya still form a large part of the population of the region. Although many have acquired Spanish ways, a significant number of modern Maya maintain ancient ethnic customs. PRE-CLASSIC PERIOD The Pre-classic period is the span of time in which the foundation of the later Mayan civilization was formed. The people went through huge developments in society and built up strength. Early Mayans were farmers and helped the community in keeping up the fields. They first used sticks to punch holes in the ground but later adopted more advanced farming techniques. Their main crops included maize (corn), beans, squash, avocados, chili peppers, pineapples, papayas, and cacao, which was made into a chocolate drink with water and hot chilies. Hunting and fishing were also a source of food for the early Mayans. They often hunted rabbits, deer, and turkeys, which were made into stews. When they were not hunting, fishing, or working in the fields, Mayan men and women took part in crafting useful items, such as stone tools, clay figurines, jade carvings, ropes, baskets, and mats. Women specialized in making clothing, such as ponchos, loincloths, and skirts. Like other ancient farming peoples, the early Maya worshipped agricultural gods, such as the rain god and, later, the corn god. Eventually they developed the belief that gods controlled events in each day, month, and year, and that they had to make offerings to win the gods' favor. Maya astronomers observed the movements of the sun, moon, and planets, made astronomical calculations, and devised almanacs. The astronomers' observations were used to divine auspicious moments for many different kinds of activity, from farming to warfare. Rulers and nobles directed the commoners in building major settlements. Pyramid-shaped mounds of rubble topped with altars or thatched temples sat in the center of these settlements, and priests performed sacrifices to the gods on them. As the Pre-classic period progressed, the Maya increasingly used stone in building. Both nobles and commoners lived in extended family compounds. During the Pre-classic period the basic patterns of ancient Maya life were established. However, the period was not simply a rehearsal for the Classic period but a time of spectacular achievements. CLASSIC PERIOD Classic Maya civilization became more complex as the population increased and centers in the highlands and the lowlands engaged in both cooperation and competition with each other. Trade and warfare were very important to cultural growth and development. Societies became more complex, with distinct social classes developing. Under the direction of their kings, who also performed as priests, the centers of the lowland Maya became densely populated jungle cities with vast stone and masonry temple and palace complexes.
During the Classic period, warfare was conducted on a fairly limited, primarily ceremonial scale. Maya rulers, who were often depicted carrying weapons on carved stone monuments, attempted to capture and sacrifice one another for ritual and political purposes. The rulers often destroyed parts of some cities, but the destruction was directed mostly at temples in the ceremonial precincts; it had little or no impact on the economy or population of a city as a whole. Some city-states did occasionally conquer others, but this was not a common occurrence until very late in the Classic period, when lowland civilization had begun to disintegrate. Until that time, the most common pattern of Maya warfare seems to have consisted of raids employing rapid attacks and retreats by relatively small numbers of warriors, most of whom were probably nobles. Lowland Maya centers were true cities with large resident populations of commoners who sustained the ruling elites through payments of tribute in goods and labor. They built temples, palaces, courtyards, water reservoirs, and causeways. Sculptors carved stelae, which recorded information about the rulers, their family and political histories, and often included exaggerated statements about their conquests of other city-states. RELIGION Mayan religion consisted of a wide range of diverse and varied supernatural beings or deities. They considered Hunab Ku to be the chief god and creator of the world, followed by other varied gods, including Itzamna, the lord of the heavens; Yum Kaax, the god of maize; and the four Chacs, the cardinal rain gods. They also worshipped Ix Chel, the rainbow goddess associated with mothers, and Ixtab, the goddess of suicide. The Maya performed many rituals and ceremonies to communicate with their deities. At pre-arranged events, such as the Maya New Year in July, or in emergencies, such as famine, epidemics, or a great drought, the people gathered in ritual plazas to honor the gods. People would dress in elaborate costumes and dance, take hallucinogenic drugs, take ritual steam baths, and play ritual games. Offerings such as corn, blood drawn by ritual piercing, children, slaves, or prisoners of war would be sacrificed or burned for the gods. SCIENCE AND WRITING Although the Maya were mechanically skilled, most of their major achievements were in abstract mathematics and astronomy. One of their greatest intellectual achievements was a pair of interlocking calendars, which was used for such purposes as the scheduling of ceremonies. Maya astronomers could make difficult calculations, such as finding the day of the week of a particular calendar date many thousands of years in the past or in the future. They also used the concept of zero, an extremely advanced mathematical concept. Although they had neither decimals nor fractions, they made accurate astronomical measurements by dropping or adding days to their calendar. The Maya developed a complex system of hieroglyphic writing to record not only astronomical observations and calendrical calculations, but also historical and genealogical information. Scribes carved hieroglyphs on stone stelae, altars, wooden lintels, and roof beams, or painted them on ceramic vessels and in books made of bark paper. COLLAPSE OF A CIVILIZATION From about AD 790 to 889, Classic Maya civilization in the lowlands collapsed. Construction of temples and palaces ceased, and monuments were no longer erected.
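As an aside (an illustration of the interlocking calendars, not part of the original text): the two cycles were the 260-day ritual tzolk'in and the 365-day solar haab', and a given pairing of dates in the two cycles repeats only after their least common multiple, a period known as the Calendar Round. A minimal Python sketch of that arithmetic:

```python
import math

TZOLKIN_DAYS = 260  # ritual calendar cycle
HAAB_DAYS = 365     # solar calendar cycle

# A combined (tzolk'in, haab') date repeats only after the least
# common multiple of the two cycle lengths.
calendar_round = math.lcm(TZOLKIN_DAYS, HAAB_DAYS)

print(calendar_round)               # 18980 days
print(calendar_round // HAAB_DAYS)  # 52 haab' years: the Calendar Round
```

This is why a span of 52 years recurs throughout Maya timekeeping: only after 18,980 days do both cycles return to the same alignment.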
The Maya abandoned the great lowland cities, and population levels declined drastically, especially in the southern and central lowlands. Scholars debate the causes of the collapse, but they are in general agreement that it was a gradual process of disintegration rather than a sudden dramatic event. A number of factors were almost certainly involved, and the precise causes were different for each city-state in each region of the lowlands. Among the factors that have been suggested are natural disasters, disease, soil exhaustion and other agricultural problems, peasant revolts, internal warfare, and foreign invasions. Whatever factors led to the collapse, their net result was a weakening of lowland Maya social, economic, and political systems to the point where they could no longer support large populations. Another result was the loss of inestimable amounts of knowledge relating to Maya religion and ritual.
When does respiration occur in plants? It is a common misconception that photosynthesis occurs during the day and respiration only happens at night. In fact, respiration in plants occurs all the time, both day and night; respiration in plants is like breathing in humans. And although parts of the process of photosynthesis require energy from the sun, other steps are light-independent. In other words, photosynthesis and respiration are not simply opposite processes in plants: even though the overall chemical equations for the two processes are the reverse of each other, this does not tell the whole story of the relationship between them. When Do Plants Respire? Respiration is the process of burning sugars to produce energy for living and growing. Plants respire, humans respire and all other forms of life on earth respire. The word equation for respiration is: glucose + oxygen → energy + carbon dioxide + water; in balanced chemical form, C6H12O6 + 6O2 → 6CO2 + 6H2O + energy. During respiration, plants convert the sugars produced by photosynthesis back to energy to fuel essential metabolic processes. They burn sugars in order to fuel cellular processes like repair and reproduction. Water and carbon dioxide are produced and released by plants as they respire, and sugars and oxygen are used. Respiration occurs in the mitochondria of plant cells, and it is a process that does not require light. While energy from the sun fuels parts of the process of photosynthesis, the energy needed for respiration comes from sugars, which are the products of photosynthesis. A related process, photorespiration, can also occur in the light; it is sometimes loosely contrasted with ordinary (dark) respiration, but it is chemically distinct from the cellular respiration that runs both day and night. Respiration Compared to Photosynthesis Both respiration and photosynthesis are essential processes for plants. Although glucose, one of the products of photosynthesis, is burned during respiration, these two processes are not exactly the reverse of one another. In fact, photosynthesis and respiration can and do happen at the same time in plants. Both life processes are necessary to support the basic metabolic functioning of plants. Respiration happens in the mitochondria of plant cells, while photosynthesis occurs in the chloroplasts. Chloroplasts are organelles in plants that contain chlorophyll, a green pigment that is needed to convert sunlight into energy for the first step of photosynthesis. There are two main chemical reactions that occur during photosynthesis; the first part of the process requires light, and the second part, the Calvin cycle, is light-independent. This is why it is false to say that photosynthesis only occurs during the day. Parts of the process can happen in the dark of night, fueled by energy that is stored in adenosine triphosphate (ATP), which is a product of the light-dependent step of photosynthesis. Why Plants Respire Just like humans breathe all night when they are sleeping, plants respire in the dark. Respiration, like photosynthesis, also produces adenosine triphosphate (ATP). ATP is a molecule used for storing and transferring energy in all living cells. Needed for biochemical reactions, ATP provides the energy for cellular repair and reproduction, among other vital functions. Plants also release water vapor through a related process called transpiration, and they exchange carbon dioxide and oxygen with the atmosphere through cellular structures called stomata.
Located on the surfaces of leaves, stomata are like pores that are regulated by 'guard cells' to open and close when a plant needs to take in carbon dioxide or release oxygen or water. Respiration, photosynthesis and transpiration are the three main processes involved in plant growth and development. Essential and interconnected, all three of these processes are concerned with the production or consumption of sugars and the conversion and exchange of carbon dioxide and oxygen.
A coupling capacitor, sometimes known as a blocking capacitor, is used in electrical circuits to pass the signal from one stage of the circuit to the next. Coupling capacitors typically serve the purpose of blocking direct current (DC) while letting alternating current (AC) go through. The main purposes of a coupling capacitor are to block interference, to make sure the signal is carried through to the next stage, and to keep the DC levels of the two connected stages independent of each other. Capacitors are made up of two conductors that are separated by an insulating component known as a dielectric. A capacitor can act not only as a bridge and filter, as is the case with a coupling capacitor, but also as a battery. When a voltage is applied across a capacitor, an electric field is created in the dielectric and the resulting energy can be stored. In this way, capacitors can act like batteries. They are often used in this way to store power while batteries for a device are being recharged or used. A capacitor can also act like a buffer, absorbing power from a circuit so that fluctuations in the signal are reduced. This makes for a smoother signal and causes fewer problems with the circuitry. Capacitors can also be used to modify and correct power signals so that the maximum amount of power generated is used; this is known as power factor correction. In the case of a coupling capacitor, the signal is filtered so that the AC current passes through while the DC current is blocked, and interference is therefore minimized. This helps to preserve what is known as the DC bias. The DC bias is a steady voltage that sets the operating point of each stage; although the capacitor blocks it from passing between stages, each stage still needs its own bias for the signal to be moved through the circuit. The coupling capacitor keeps the DC levels of the two circuits separated so that their signals do not offset or otherwise conflict with each other. Occasionally, a circuit will contain an unintentionally created coupling capacitance, often called stray or parasitic capacitance. One example of this is found in two wires that are routed too close together: the signal from one wire can couple into the other and produce interference, or signal noise. This is usually taken care of by putting some type of grounding material between the wires to break up, or ground, the coupled signal. This type of problem is seen in items like printed circuit boards, where signals and wires can get quite close.
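To make the AC-pass/DC-block behaviour concrete: a coupling capacitor and the input resistance of the following stage form a simple high-pass filter with cutoff frequency f_c = 1/(2πRC). The sketch below is illustrative only; the component values are assumptions, not taken from the article. It picks a capacitor value so that audio frequencies pass while DC (0 Hz) is blocked:

```python
import math

def cutoff_hz(r_ohms: float, c_farads: float) -> float:
    """Cutoff frequency of the RC high-pass formed by a coupling
    capacitor C feeding a stage with input resistance R."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

def coupling_cap_farads(r_ohms: float, f_cutoff_hz: float) -> float:
    """Capacitor value needed for a desired cutoff frequency."""
    return 1.0 / (2.0 * math.pi * r_ohms * f_cutoff_hz)

# Assumed example values: the next stage has a 10 kilo-ohm input
# resistance, and we want everything above 20 Hz (audio) to pass.
r = 10_000.0
c = coupling_cap_farads(r, 20.0)
print(f"C   = {c * 1e6:.2f} uF")          # ~0.80 uF
print(f"f_c = {cutoff_hz(r, c):.1f} Hz")  # ~20.0 Hz
```

Frequencies well above f_c pass almost unattenuated, while the response falls away below it, reaching zero at DC, which is exactly the coupling behaviour described above.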
Neurons are cells that send information through the brain and nervous system. They are made up of the cell body, dendrites, and axons. Dendrites carry a message to the cell body, and axons conduct the message away from the cell body. There are three types of neurons: interneurons, motor neurons, and sensory neurons. The neurons' communication method is called the action potential. It is an electrochemical signal that comes from a nerve impulse, and it starts with an electrical change that moves from the cell body through the axon. The movement is caused by positive ions triggering a chain reaction along the axon. This is called depolarization: sodium ions (Na+) flow into a segment of the axon and change that segment's charge from negative to positive, and the action potential then travels down the axon segment by segment. Afterwards the axon repolarizes, when channels let K+ ions leave the axon and the charge returns to its original resting value. A synapse consists of the tips of the terminal branches of the axon, the dendrites of the receiving neuron, and the synaptic gap, a tiny gap between the two neurons. The axon's terminal buttons produce neurotransmitters (NT), store them in synaptic vesicles, and recycle them. When the action potential reaches the end of the axon, it causes the vesicles to release their neurotransmitters into the synaptic gap, where they reach the receptors on the receiving neuron. The receiving neuron's dendrites then determine whether the NT functions as excitatory or inhibitory: an excitatory NT pushes the receiving neuron toward firing its own action potential, while an inhibitory NT suppresses it.
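To tie these pieces together, here is a toy 'integrate-and-fire' sketch. This is an illustration only, not from the original text: the millivolt values are typical textbook figures, and real neurons are far more complex. Excitatory inputs push the membrane potential toward the firing threshold, inhibitory inputs push it away, and when the threshold is reached an action potential fires and the potential resets:

```python
RESTING_MV = -70.0    # typical resting membrane potential
THRESHOLD_MV = -55.0  # depolarization level that triggers an action potential
RESET_MV = -70.0      # potential after repolarization (K+ outflow)

def count_spikes(inputs_mv):
    """Sum excitatory (+) and inhibitory (-) inputs; fire when the
    membrane potential reaches threshold, then reset."""
    v = RESTING_MV
    spikes = 0
    for delta in inputs_mv:
        v += delta
        if v >= THRESHOLD_MV:
            spikes += 1   # action potential travels down the axon
            v = RESET_MV  # repolarization restores the resting charge
    return spikes

# Three excitatory inputs (+6 mV) outweigh one inhibitory input (-4 mV):
print(count_spikes([6.0, -4.0, 6.0, 6.0, 6.0]))  # -> 1 spike
```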
This course will include expository writing as well as the development and revision of paragraphs in essays. Various lessons will be taught, covering rhetorical strategies, reading, and discussion of selected essays. This course will also focus on establishing skills in documented critical writing, and it will give students a background in fiction, drama, and poetry.
- Students will learn critical thinking, problem-solving, and decision making to identify, understand, and evaluate arguments.
- Learn reading and writing as essentials in developing well-organized and coherent ideas in written form.
- Learn to express the main ideas of readings through the use of summary, paraphrase, and quotations.
- Develop original ideas in response to readings.
- Emphasize key problem areas for proofreading, i.e., formality, tone, sentence punctuation, grammar, and spelling. Use correct citation and reference styles.
CLASSIC DESIGN DATA FLOW DIAGRAM Before UML was created (the current paradigm of diagrams that are turned into programming code), computer flow charts were used to visually represent the flow of data through information processing systems. They were the beginnings of diagrams in computer science. A flow chart describes what operations, and in what sequence, are required to solve a given problem; a flowchart or organizational chart is a representation that illustrates the sequence of operations to be performed to arrive at the solution of a problem. The Data Flow Diagram (DFD) is one of the most important tools used by systems analysts. The use of data flow diagrams as a modeling tool was popularized by De Marco (1978) and Gane and Sarson (1979), through their structured methodologies for systems analysis. They suggested that the data flow diagram be used as a first tool for analysts to model the components of a system. These components are the system's processes, the data used by those processes, the external entities that interact with the system, and the information flows of the system. Let us now describe each of the four parts that form a DFD; a small programmatic sketch follows at the end of this section. a) Processes are what the system does. Each process has one or more data inputs and produces one or more data outputs. Processes are represented by circles. b) A file or data store is a repository of data. Processes can store data into it or retrieve data from it. Each data store is represented by a thin line and has a unique name. c) External entities are outside the system, but they provide data to it or use data from it. They are entities over which the designer has no control. d) Most important of all are the data flows, which model the movement of information in the system and are represented by lines connecting the components. The direction of flow is indicated by an arrow, and the line is labeled with the name of the data flow. It must be recognized that the classic flow charts and DFDs were a bit abstract, and fortunately for designers they evolved into clearer and more concise forms, so that fewer errors arise when they are translated into a programming language. As a preview of the next type of diagram, I will mention the Entity-Relationship Model, also known as ER diagrams, which is what is being used now to design applications; it was a key concept when 'classes' and 'objects' appeared, but I will leave that breakthrough for a future installment.
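As a modern aside (my addition, not part of the original text), the four DFD components map neatly onto a graph-drawing tool. Here is a minimal sketch using the Python graphviz package for a hypothetical order-processing example; the process is a circle, the external entity a box, the data store a labelled node, and the labelled arrows are the data flows:

```python
from graphviz import Digraph  # pip install graphviz (plus the Graphviz binaries)

dfd = Digraph("order_dfd")

# External entity: outside the system; provides data to it or uses data from it.
dfd.node("customer", "Customer", shape="box")
# Process: what the system does; one or more data inputs, one or more outputs.
dfd.node("process", "Process Order", shape="circle")
# Data store: a repository the process can store data into or retrieve it from.
dfd.node("orders", "Orders file", shape="cylinder")

# Data flows: movement of information, arrows labelled with the flow's name.
dfd.edge("customer", "process", label="order details")
dfd.edge("process", "orders", label="new order record")
dfd.edge("process", "customer", label="confirmation")

dfd.render("order_dfd", format="png", cleanup=True)  # writes order_dfd.png
```

Note that strict De Marco or Gane-Sarson notation draws the data store differently (the cylinder here is just a convenient stand-in), but the four component types and their connections are the same.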
The musket-wars period The musket wars were a series of Māori tribal battles involving muskets (long-barrelled muzzle-loaded guns, brought to New Zealand by Europeans). Most took place between 1818 and 1840, although one of the first such encounters was around 1807–8 at Moremonui, Northland, between Ngāti Whātua and Ngāpuhi. While Ngāti Whātua had only traditional weapons, their well-executed ambush defeated Ngāpuhi, who were armed with muskets. There were also intertribal wars involving muskets after 1840. Before and after The musket wars were preceded by traditional warfare between tribes, involving hand-to-hand fighting with traditional stone or wood weapons. The introduction of muskets meant fighting could be done at a distance. The change in weaponry and strategy was not immediate, but developed over a few decades. The musket wars were followed by the New Zealand wars. Rather than intertribal warfare, fighting now pitted tribal groups against the Crown and, at times, the Crown's tribal allies. Geographical spread and effect The musket wars were New Zealand’s most geographically widespread conflict. Almost all parts of the North and South islands, as well as the Chatham Islands, saw battles. One of the most significant results of the wars was the redrawing of tribal boundaries. These redrawn boundaries were later codified by the Native Land Court, which decreed that tribal boundaries should be determined as they were in 1840, after the musket wars, when the Treaty of Waitangi was signed. The death toll from the musket wars was significant, although the actual number of casualties is not known. It is likely that there were around 20,000 deaths from direct and indirect causes. The high numbers reflect the decades of war and the fact that warfare affected all parts of the population, civilians and combatants. While the toll from the wars was considerable, the Māori population was to be affected much more by disease in the following decades. Traditionally men had been both warriors and cultivators of the soil, and warfare was confined to summer months. The ritual aspects of growing kūmara (sweet potato) meant it had to be cultivated by men. Potatoes, introduced by Pākehā, did not have the same ritual needs and could be grown by slaves and women, allowing men more time for warfare. Potatoes also provided more food per hectare than kūmara. Surplus potatoes were used to purchase muskets, or could be carried by travelling war parties. Better economic production and surplus food allowed taua (war parties) to travel much greater distances. Also, a number of significant battles saw Māori using Pākehā ships to travel to distant places. North Island iwi travelled as far as the Chathams and the southern South Island. Two significant expeditions were known as Amiowhenua (circle the land). One involved Hokianga and Kāwhia tribes who travelled down the west coast of the North Island and around to Wairarapa in 1820–21. Another left Hauraki in 1821 and was led by Kaipara and Waikato–Maniapoto chiefs to Rotorua, across to Hawke's Bay and Wairarapa and around the west coast to Taranaki, finishing up back in Waikato and Kaipara. Ngāti Tama chief Te Pūoho also travelled the length of the South Island.
Natural Resources Institute Finland (Luke) and an international consortium of researchers from 39 other institutions report the first whole-genome sequence of rye. This will guide future rye breeding and provide immediate benefit in managing the trade-offs of using rye as a genetic resource in wheat crop improvement. The work was published in Nature Genetics in March. The genome sequence will be indispensable for the use of untapped genetic diversity in breeding improved disease resistance, such as to ergot or leaf rust, and other sustainability traits in rye. “The rye genome will enable us to understand rye’s adaptive potential and efficiently use germplasm resources to meet future climatic challenges”, says Alan Schulman, professor at Luke and a principal investigator in the study. Rye as a crop is tolerant of northern conditions, disease, and difficult climate. Currently, rye is produced on 4.1 million ha worldwide. Over 80% of the world’s rye production is in North-Eastern Europe, an area that will see increasing challenges from climate change and disease over the coming decades. However, with its origins in the region of Turkey, rye was earlier cultivated in much of Western and Southern Europe on marginal soils, and today it is also grown in fairly large quantities in Spain, North America and Australia. All these regions are threatened by the effects of climate change. ”The genome sequence will both help find key genes and allow breeders to efficiently access and apply genetic diversity to accelerate the production of new varieties suited to changing conditions.” Rye genome sequence can help improve wheat yields Rye is a source of many useful traits even for wheat breeding. Rye chromatin is commonly introgressed into bread wheat varieties to improve yield. “Rye is closely related to wheat, and crosses can be made, but the key is to move the genes without traits that would be undesirable in wheat. The genome sequence will greatly aid this process.” Rye has been cultivated in Finland for over 2000 years, is mentioned in the Kalevala, and is a key part of Finnish food culture today as well. Rye is an exceptionally climate-resilient cereal crop and the key ingredient in rye bread, which was voted Finland's national food (“kansallisruoka”) in 2016. “We want future generations also to have good rye bread, with its many health-promoting properties, on their dinner tables”, Schulman says.
Children learn proper conversational style from their interactions with caring adults. Adults model pronunciation, diction, tone, and sentence structure. A nanny should speak clearly and articulately when the children are in her care to foster a preschooler’s language development. During this stage of development, children are beginning to recognize letters and numbers, detect and make rhymes, can listen longer and retell stories, and start learning their letter names and sounds. Nannies can help to build language and literacy development in preschoolers by building oral language skills through talk using a robust vocabulary, helping children to gain phonemic awareness, fostering familiarity with letters and numbers, and encouraging children’s attempts at reading and spelling. Nannies should also regularly read aloud to preschoolers and talk with the children about the books they are reading. A nanny can track the words with her finger to help the child follow along in the text. Nannies should also encourage the child to write his name and other familiar words and to engage in alphabet activities. Dramatic play activities, like playing restaurant and taking orders, provide opportunities for children to learn how written language works. Nannies can foster language development by being attentive listeners when a preschooler is speaking. By modeling good listening skills and showing interest in what a child has to say, a nanny can help to boost a preschooler’s confidence in communicating and his language development.
AS and A Level: History of the USA, 1840-1968 Assess the significance of the role of individuals in reducing racial discrimination in the USA throughout the period 1877-1981. This helped to relieve some who were less fortunate. On the other hand, Du Bois took a route which directly campaigned for civil rights for African Americans; like Washington, he achieved little due to the already widespread racial situation in the USA. It is noticeable that these individuals had no meaningful short-term effect on reducing racial discrimination; however, much was achieved in the long term, as they created the path for the future civil rights movement. This was also aided by the work of the NAACP, which raised awareness of the racial discrimination situation in America. - Word count: 2023 of racial equality but it was clearly not enough for the cause, and attitudes like this among top politicians slowed down any progress in the development overall. Any additional help that could come through government needed the placement of politicians willing to help racial equality, especially in the Deep South, but a lack of black voters in these states left clearly racist politicians with no intention of changing the racist laws that governed their state. The increase in voters during this period was not enough to sway the vote away from racist politicians, and any progress in this way was clearly going to be a slow process. - Word count: 1232 To what extent did the US president hinder rather than help the development of African American civil rights in the period from 1865-1992? Although most hindered the development and were passive, by 1992 presidents had created civil rights for African Americans. In this essay I will be discussing both sides of the argument, including the presidents who helped the development of African American civil rights and those who hindered it. Presidents hindered the development of African Americans' civil rights because at the start of the time period they frequently held white supremacist views. For example, Johnson at the start of the period can be seen as actively hindering the development of African American civil rights. Johnson clearly opposed civil rights legislation, as the civil war was his priority. - Word count: 1235 This brought the beginning of the term 'Red Power', coined by the younger generation for the more militant side, which increased the popularity and support that the Native Americans had. It came from the influence of the progress that Black Power was making, and they wanted to have the same impact in publicity. Also in 1972, over the trial of broken treaties, they took over the Bureau's offices and revealed that the government could be giving $400 to each family. - Word count: 2238 To what extent was the 1920s a major turning point in the development of labour and trade union rights in the USA from 1865-1992? For example, workers saw a rise in real wages and employers taking action to improve working conditions by reducing working hours and introducing insurance benefits and pension plans. Henry Ford was an example of the "welfare capitalism" which characterised the 1920s; Ford Motor Company was the first big business to double the daily wage and introduce the 8-hour working day.
Representatives were even able to meet with employers to discuss grievances over production and plant safety. These developments were clearly significant for labour rights, as the fundamental rights of working in a safe environment and negotiating conditions were established. - Word count: 1243 Use sources A, B and C and your own knowledge. How far was the outbreak of the war of American Independence due to the lack of willingness of the American colonies to compromise in the years 1770 to 1775? Because of this, by 1770 relations between British authorities and the leaders of the colonial legislatures had broken down. Moreover, events such as the Gaspee incident worsened relations between the American colonies and the British, and it showed that the colonists had no respect for the British policies and were not willing to compromise with British ideas to improve relations. In addition, source A suggests that due to the American colonies not abiding by the British policies throughout the 1760s, the British felt that they couldn't trust the colonies to obey various regulations and restrictions that were needed for the colonists to have more freedom. - Word count: 1062 Assess the view that the Supreme Court was the most important branch of federal government in assisting African Americans to achieve their civil rights in the period 1865-1992. began as early as the 1870s, when cases like the Slaughterhouse case effectively undid the work of Congress, thanks to rulings by the Supreme Court. This case ruled that states were permitted to make laws affecting the rights of the citizens - a ruling that would allow southern states to make laws segregating black and white citizens. In fact, it almost went so far as to completely disregard the 14th Amendment, claiming it protected a person's individual rights, but not their civil rights - with one ruling, the Supreme Court effectively removed de jure equality for African Americans, and moved de facto equality further out of their reach. - Word count: 4179 As alcohol became a luxury item, its appeal and demand among young people increased. Non-drinkers were also targeted as a means of improving sales due to the obvious profits to be made. This meant that by 1922 consumption began to rise steadily, reaching the amount of 1.2 gallons of alcohol per capita in 1923, a huge leap compared to the 0.8 gallons consumed in 1919 before prohibition. Driven by the opportunity to satisfy demand and make a profit, a network of illegal bootleggers and speakeasies emerged. - Word count: 1330 The increase in federal power supported people through the recession, restored the national morale and avoided the feeling of isolation, particularly for farmers. Increasing the confidence and hope of the American people was crucial for quick and successful economic restoration in the U.S., and it is therefore possible to view the New Deal as a success. However, Source C challenges this idea by presenting Roosevelt's New Deal policies as tyranny and a ploy through use of the classical mythology of the Trojan Horse. - Word count: 1648 The emphasis on "more public works schemes" suggests that Hoover was not reluctant to help, and he wanted to ease America during times of hardship. Furthermore, Hoover secured an additional $500 million from Congress in 1931 to help agencies around the USA to provide relief. In hindsight, it is clear that Hoover did much to try and ease America through the depression, but whether his aid was in time or consistent is arguable.
However, it can also be suggested that Hoover's interventions did not do a sufficient amount during the depression; hence the depression merely stood at a halt. - Word count: 785 To what extent were Malcolm X and the subsequent Black Power Movement the 'Evil Twin' of the Civil Rights Movement in the late twentieth century in the United States of America? "Although the CRM of 1954-65 effected change in the South, it did nothing for the problems in the North, Midwest and West." The squalid living conditions in the ghettos of cities such as New York that resulted from economic hardship were a key issue for the ensuing movement, and their improvement made up a great part of the movement's agenda. A notable statistic is that although African Americans constituted around 10% of the population, almost a third of all those living below the poverty line were African Americans. The first reason that may cause the analogy of an 'evil twin' to be associated with Malcolm X is his promotion of separatism at a time of primarily integrationist thinking. - Word count: 5947 He also passed a few protective tariffs in an attempt to help the American economy. Even with these few accomplishments, I feel that President Jackson was not a very effective president. President Jackson made many choices based upon his political goals, not for the American people. He also fought against the second bank of the United States, causing more problems for the nation. Jackson may have felt that he knew what was best for the nation's future, but he made many poor choices. - Word count: 404 How effective was the early civil rights movement in advancing Black Civil Rights in the period 1880-1945? In reaction to the Black Codes, the 14th Amendment was passed in 1868. This stated that all free men shall be protected and enjoy equal treatment under the law. The idea was to protect the African American population, making them citizens and thus forcing the federal government to be responsible for them. If rights were denied by any state, the state in question would lose all representation in Congress. Yet to many Southern states this threat carried little weight and posed no real threat. The states were prepared to accept the loss of representation in order to continue discriminating against the black population. - Word count: 1123 Moreover, the latter was upheld by a radical group known as the Ku Klux Klan, made up of different individuals, some possessing a great deal of power, such as governors and police officers. They felt that segregation of Black and White Americans was correct and used violent means such as 'lynching' to enforce it. It is ironic, then, that Black Americans played a huge part in providing the financial support for some of the Southern states, yet only 5% were registered to vote. - Word count: 1535 The gentlemen's agreement between Roosevelt and the Japanese government halted the influx of Japanese immigrants. "Yellow Peril" is another situation in which Roosevelt demonstrated his ability to protect foreign relations and at the same time get what he wanted in terms of what was best for America and himself. However, even though Roosevelt dealt with the problem, it was still an extremely unsuccessful aspect of foreign policy. One success of Roosevelt's was the 'Spanish - American' war. America defeated a weakening Spanish army fairly easily after 10 undefeated battles, and this gave them a strong reputation.
- Word count: 966 How important was public opinion in the years 1965 to 1968 to putting President Johnson under pressure to withdraw US forces from Vietnam? By 1969, at least two million Americans were drawn into public protest. Johnson understood that opposition would continue to grow as the US was becoming more involved day by day, and that the US casualty rate was continuously rising. The public opinion that was demonstrated as a result of the huge human and financial cost (there was a $25.3 billion deficit by 1968) was a great pressure on Johnson, and he had to acknowledge what the protesters were saying before things became too out of hand. - Word count: 1338 In 1961 military aid for South Vietnam to expand the ARVN rose from $220 million to $262 million. There were fewer than 1,000 US military personnel in South Vietnam when Eisenhower handed over to Kennedy in January 1961, and even this was a breach of the Geneva Accords of 1954, which stated there were to be no foreign troops in Vietnam. By the end of 1963 there were approximately 16,300 US military personnel in South Vietnam, which shows growing commitment on the part of the USA. - Word count: 1666 It showed the unity of the Civil Rights groups and their power within society, but it also showed the strength of White support that was growing within America and internationally, as the media picked up on the dignity with which the Black protestors acted whilst faced with the violence of the white extremists. This links in with the factor of media attention, which was also significant in the passing of this act. The media played a huge part in the civil rights movement. - Word count: 1369 Assess the view that Hoover's policies and attitudes in the years 1929-33 merely prolonged the depression. Source 7 describes Hoover as a very stubborn person who "remained convinced that he was right". Hoover's attitudes towards agriculture did not help American farmers at all. The Agricultural Marketing Act, established in 1929, artificially purchased farmers' surpluses at prices above the market price. Hoover gave the Federal Farm Board $500M, yet Hoover still did not think through exactly what he was doing. The agricultural market was in significant decline in America during the 1930s, and he only encouraged farmers to produce more as the Federal Farm Board was purchasing their surpluses. - Word count: 1446 How far do you agree with the view that McCarthyism had little impact on US society in the years 1950-54? Therefore, people had a reason to fear Communism, as Americans were living a better and more affluent life and they did not want their privileges to be taken away from them. McCarthy took advantage of this and manipulated the press, and released as many accusations as possible in order to decrease the spreading of Communism. He also used radio to suggest who he thought was a communist; anyone who he saw to be left wing or radical was a communist in the eyes of McCarthy. - Word count: 993 In considering the development of the USA in the years 1815-1917, how far can the union victory in the civil war be seen as a turning point? In practice, African American political involvement was also progressing: by the late 1870s, 64 out of 140 representatives in the state of Mississippi were Black Americans. This shows that the union victory opened opportunities and was a turning point for former slaves; additionally, the 13th Amendment was passed in 1865, abolishing slavery, which supported black Americans' freedom.
Yet it was a limited turning point, as many former slaves still feared to break away and develop their lives. In contrast, in the long term this provoked brutal racist reactions from radical white Americans. - Word count: 2089 This illustrates the desperation of people and how significant Black slaves had become to both North and South. Many would argue that in the short term this emboldened the majority of black slaves to become more courageous and fight for their freedom to escape. An extract from 'Free at Last' illustrates the hope for black Americans: 'the slaves took full advantage, chipping away at their bondage from within while Union armies pounded it from without'; this shows their determination to break away even though their former slaveholders had tightened their discipline in ever more brutal ways. - Word count: 2249 neck and exited his throat, the other of which entered the back of his head and terminated the life of this victim. Within this analysis, many key groups and individuals challenge and protest the authenticity and accuracy of the official conclusions; the events of the assassination, along with the lack of hard evidence and instances of allegedly tampered evidence, force a sequence of misconceptions, conveying the impression that the government is disguising a cover-up or conspiracy. Given the idea that the American government withheld the essential truth from its citizens and the international public, the inadequacies in the exposure of evidence and the alteration of information, photographic material and films surrounding the JFK assassination have been interpreted from many professional perspectives. - Word count: 2857 US History. How would you characterize the positions of the North at the time leading up to the Compromise of 1850? The dilemma did occur again when California applied for statehood due to the Gold Rush of 1849 (Jordan 326). The same situation occurred in Utah when Utah had enough people to join the Union. Neither the South nor the North wanted the territories to become like the other. A compromise was necessary. No side would back down, and it seemed that if a compromise was not reached, there would be either a southern secession or a civil war (Jordan 326). The Compromise of 1850, or the Great Compromise, was adopted, in which Stephen Douglas took Henry Clay's compromise and broke it up into individual bills to be voted on separately (Jordan 328). - Word count: 1422 Spies had to be perfect at their guises and tactics for their success. According to Peggy Caravantes, Albert A. Nofi and Bryna Stevens, Sarah Emma Edmonds was the best at her guises. In her childhood, Edmonds often had to help her father with chores regularly done by boys, and she built a strong, lean body. Later, her body and her guises helped her disguise herself as Franklin Thompson, "Cuff" (a plantation slave), a peddler, a widow and a Kentuckian Confederate soldier (also named Frank Thompson). Due to these guises, Edmonds could pass through enemy lines again and again, successfully. A few times, her guises were so close to perfection that she was mistaken for the character she was disguised as. - Word count: 1158
But one of those physical limits may have just been stretched: heat loss. Nanosize crystals of semiconducting material, in this case a mixture of lead and selenium, move electrons fast enough to channel some of them faster than they can be lost as heat, according to new work from researchers at the universities of Minnesota and Texas. Solar cells employ semiconducting material because when a photon of sunlight of the right wavelength strikes that material, it knocks loose an electron, which can then be harvested as electrical current. But many of those loosened electrons dissipate as heat rather than being funneled out of the photovoltaic cell. Previous work in 2008 had shown that nanocrystals of semiconducting material can, in effect, slow down such "hot" electrons. As a result, these nanocrystals, also known as quantum dots, might be able to boost the efficiency of a solar cell. The new research published in Science June 18 shows that is indeed the case: Not only can quantum dots capture some of the "hot" electrons but they can also channel them to a typical electron-accepting material—the same titanium dioxide used in conventional solar cells. In fact, that transfer takes place in less than 50 femtoseconds (a femtosecond is one quadrillionth of a second, or really, really, really fast). Because that transfer is so fast, fewer of the excited electrons are lost as heat, thus boosting the theoretical efficiency to as high as 66 percent. Unfortunately, that's not all that's required to build such a highly efficient solar cell. The next step would be to show that the captured electrons and transferred current can be carried away on a wire, as in a conventional solar cell. The challenge will be making a wire small enough to connect to a solar cell incorporating a quantum dot no bigger than 6.7 nanometers in diameter—and one that won't lose much of the current as heat. And it would be years if not decades before such quantum dot-based solar cells might be manufactured. But these chemists have lit at least one path to a more efficient solar future.
Many pharmaceutical drugs in the form of complex proteins require 3D structures that are important for their functions. Animal cells have the unique machinery to make these special structures. Genetically engineered (transgenic, GMO) animals/animal cells are created so they serve as “bioreactors” to produce these drugs at an industrial scale. Animal products such as milk, egg white, blood, urine, and silkworm cocoons have been used to produce complex drugs that can’t be made by chemical synthesis. The first drug produced by GMO animals, anti-thrombin III from the milk of transgenic goats, prevents the formation of small blood clots that could break loose and plug other vessels. It was approved by the FDA in 2009. Animal cells and simple bacteria, however, had been used to produce protein drugs much earlier than that. For example, Activase® (r-tPA), produced by Chinese hamster ovary (CHO) cells, has been approved by the FDA to treat stroke since 2001. The first bacterially produced drug, Humulin (human insulin) from Eli Lilly, has been used by millions if not billions since 1982. Today, many cancer drugs such as monoclonal antibody therapeutics are produced by animal cell cultures after human genes are introduced to these cells. Pharmaceuticals from GMO animals/GMO cells/GMO bacteria will continue to be developed to save lives. By Xiuchun (Cindy) Tian, Professor, UConn Department of Animal Science
Xylella: What Is It? What Problems Is It Causing? How Can We Solve Them? First detected in Europe in Italy, Xylella fastidiosa is a bacterium which causes disease in a variety of commercial plants, including citrus plants and several species of broadleaf trees widely grown in the UK. While Xylella isn’t currently present in our country, the UK is still on high alert given the significance of the threat. In this article we’ll be taking an in-depth look at Xylella, detailing what can be done to combat some of the key problems we face as a result of it. What is it? Classed as one of the most harmful plant pathogenic bacteria in the world, Xylella lives in the water-conducting vessels (xylem) of plants, moving both upstream and downstream. By doing this, Xylella restricts the movement of water and nutrients through the plant, starving it. In nature, the bacteria are exclusively transmitted by insects from the Cicadellidae and Cercopidae families, such as leafhoppers and spittlebugs, which feed on plants’ xylem fluid. Although these sorts of insects usually only fly short distances of around 100 metres, the wind can carry them much further than that, making infection more widespread. What problems is it causing? Detected in Italy in 2013, when a large group of olive trees in Lecce became diseased, Xylella has since spread throughout Corsica, France and other areas of mainland Europe over the following years thanks to a plethora of insect vectors. Given the sheer prevalence of such insects in certain countries, Xylella can spread through woodland areas at an alarming rate, to devastating effect. In the wild, this sort of widespread infection tends to occur during warmer seasons, when insect vector populations are at their highest. Infection by Xylella can result in a number of symptoms, such as leaf scorch, stunted growth, reduction in fruit quality and size, and dieback. However, many infected plants demonstrate no symptoms. This can be particularly dangerous, as these plants can provide a reservoir for reinfection of other plants. This cycle can make Xylella extremely difficult to detect and control. What can we do about it? Given how difficult it is to identify different instances of Xylella, infection control isn’t always easy; however, it’s certainly very important. Currently, Xylella is subject to EU emergency measures. The control strategy here primarily aims to keep the bacterium out of the EU's member states with very rigorous inspections. In the UK, landings of Xylella host species, such as elm and oak trees, must be reported to plant health authorities as quickly as possible. This way, thorough inspections can be made before they make their way into the mainland. Other regulations include restricted movements of specified host plants from the infected region of Apulia in southern Italy, and from third-party countries outside the EU. This further reduces the risk of Xylella spreading unchecked.
Ecology is a branch of biology that examines living organisms and their relations with their environment. The ecological cycle is the continuous circulation in nature of materials such as water, minerals, nitrogen, oxygen and carbon, which are converted between various forms as they move between water, air and soil. The living space available to organisms, from a depth of a thousand meters in the oceans to an atmospheric altitude of six thousand meters above sea level, is called the biosphere in biology. The biosphere is composed of water, air and soil and is the living space for living things. Biologically, the animals living in an area are called its fauna and all its plants are called its flora. In the biosphere, living things form living communities. These communities, together with the physical environment, in other words their relationship with the inanimate environment, form ecosystems. The ecosystem is a community of life, and it contains three groups of living things: producers, consumers and decomposers. Producers are photosynthetic organisms. Consumers are usually carnivores and herbivores. Decomposers consist of bacteria and fungi. Producers carry out photosynthesis, consumers respire, and decomposers break down organic residues. Ecosystems form an energy and food chain whose main source is the sun. Energy and substances circulate within the ecosystem. Some vital substances must be replenished at the rate they are consumed in order to sustain life in nature. Nitrogen is very important for all living things; every organism must obtain nitrogen in organic or inorganic form. Likewise, water is an indispensable substance for all living things. These substances must have a cycle. Put simply, the ecological cycle is the process by which substances used in nature by living beings are made available for reuse, over and over again. However, ecological cycles are adversely affected in various ways, especially by human interventions. For example, the rapid rise of the population, the development of technology, urbanization and the advancement of industry have increased societies' demand for water. Overuse of water, urbanization and population growth, the increase of water use in industry, the construction of new dams and canals, and the destruction of vegetation are all factors that disrupt the ecological cycle of water. The ECO Label © Standard, prepared by our organization and authorized by a foreign accreditation body, is important for preventing further destruction of the ecological cycle. Our company allows nature-sensitive production companies to use the label. Different information may be needed on the ecological cycle. For more information about this subject and the ECO Label © Label, please contact our company.
Decimal means base 10 (note the prefix dec). For any number system, if you know that number system's base (also called the radix), then you know how many digits can be used in creating written numbers in that system. Base 10 (decimal) has 10 digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. For any number system that has a base b, the first b non-negative numbers are represented by the digits themselves. For base b=10 (decimal), the first 10 numbers are 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. After the first b numbers, you run out of digits. What do you do? Numbers beyond the first b numbers are represented by writing multiple digits and associating each digit with a place value. As you know, the number that follows 9 is 10, and writing 10 uses two digits: '1' and '0'. The '1' occupies the "tens" place and the '0' occupies the "ones" place. That's 1 "ten" and 0 "ones". Counting continues with 11, 12, 13, ..., 18, 19, and finally 20, which is 2 "tens" and 0 "ones." Eventually, the number 99 is reached (9 tens and 9 ones). The next number in the sequence will require adding a third place, the "hundreds" place, to the left, for the number 100: 1 "hundreds," 0 "tens", and 0 "ones." Each place in a decimal number is associated with a power of ten, as we have seen. The right-most position is associated with "ones" or 10^0. The position to the left is associated with "tens" or 10^1. The position to the left of that is associated with "hundreds" or 10^2. You'll notice that the powers of ten start with the 0th power and count upward as you move to the left. Any number can be written as a sum of products of powers of 10. This sounds complicated, but it really isn't. A "sum" is things added together. The things we will add together are "products", which are simply multiplications. What we will multiply is a single digit times 10 raised to a power. This is easiest understood by using an example. Consider the number '327'. Laid out by place value, the '3' falls in the hundreds (10^2) place, the '2' in the tens (10^1) place, and the '7' in the ones (10^0) place. We can write the number '327' as a sum of products of powers of ten as follows. For each digit place, take the digit and multiply it by 10 to the power associated with that digit's place, then add all of them together. Let's concentrate first on writing the product of a digit and a power of ten. Later, we'll worry about adding them together. We'll work with the '3' of '327' first. The product is built by taking the '3' and multiplying it by 10 raised to the 2nd power: (3 x 10^2). If we do this for the other digits, we get (2 x 10^1) and (7 x 10^0). You can see that we get one product for each digit. Now we form the sum by simply adding them all together, like this: (3 x 10^2) + (2 x 10^1) + (7 x 10^0). And there we have our sum of products of powers of ten. This is the number '327' written in an expanded notation. It means exactly the same thing as writing '327' - it's just longer. This kind of notation is much more specific than just writing '327' because the base is given. When you see the number '327' written, you assume the base is 10. With computers, you can't always assume the base is 10. In fact, many times the base will be 2, 8, or even 16! The expanded notation is one way to be sure we know exactly what number it is that we are considering. This is fine, as long as you don't have any numbers with digits to the right of a decimal point. So how do you handle those? First, let's extend the place values to include a decimal point and places to the right of it.
With places to the right of the decimal point, the place values become negative powers of ten, counting down from zero. This is logical and easy to remember. The rule about writing any number in expanded notation still works, too, without any modification. For example, the number '43.57' would be written as: 4 x 10^1 + 3 x 10^0 + 5 x 10^-1 + 7 x 10^-2. Notice that we didn't use parentheses in this one (as we did in our first example) because they are not necessary: multiplication has a higher precedence than addition. Everyone agrees that, when no parentheses are given and you have a choice between addition and multiplication, you should do multiplication first.
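To make the idea concrete, here is a minimal sketch in Python; the helper name expand() and its output format are my own invention, not anything from the original text.

```python
# A rough sketch: rewrite a decimal string as a sum of products of
# powers of ten, including digits to the right of the decimal point.

def expand(number: str) -> str:
    """Write a string like '43.57' as a sum of products of powers of 10."""
    whole, _, frac = number.partition(".")
    terms = []
    # Digits left of the point take powers len(whole)-1 down to 0.
    for power, digit in zip(range(len(whole) - 1, -1, -1), whole):
        terms.append(f"{digit} x 10^{power}")
    # Digits right of the point take powers -1, -2, ...
    for power, digit in enumerate(frac, start=1):
        terms.append(f"{digit} x 10^{-power}")
    return " + ".join(terms)

print(expand("327"))    # 3 x 10^2 + 2 x 10^1 + 7 x 10^0
print(expand("43.57"))  # 4 x 10^1 + 3 x 10^0 + 5 x 10^-1 + 7 x 10^-2
```

Running it on '327' and '43.57' reproduces the two expansions discussed above (parentheses are omitted, since multiplication binds more tightly than addition anyway).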
Editor's note: This article was originally published in ANZSOG News and is reproduced with permission.

Indigenous ways of learning include deep listening and community-oriented activities, with family being a prominent element of First Peoples' education. In contrast, Eurocentric teaching methods can result in a lack of engagement for Indigenous STEM students. Carissa Lee Godwin, Editor of APO's First Peoples & Public Policy Collection, explores how the Australian curriculum can become more culturally inclusive.

Elizabeth McKinley, Professor of Indigenous Education at the University of Melbourne, outlines the need for Indigenous knowledge systems in teaching practices in her paper 'STEM and Indigenous Students', included in APO's First Peoples & Public Policy Collection. Although this paper is from 2016, the concepts and recommendations remain valuable to today's educators seeking to improve their teaching practices.

Key research findings

McKinley surveys the state of science, technology, engineering and mathematics (STEM) education in Australia, and shows where it falls short of engaging with the cultural complexities and needs of Indigenous students at primary and secondary levels. There are three main takeaways from her assessment of the system:
- Although Indigenous and non-Indigenous students show an equal level of interest in STEM subjects such as science, some Indigenous students disengage from these subjects.
- Although there has been roughly three decades of research examining Indigenous knowledge systems in the context of education, these styles of learning and teaching might not be effective for teaching STEM subjects to Indigenous students.
- There is a mismatch between what is valued in the classroom and what is valued at home or in the student's community. As a consequence, Indigenous students find STEM subjects less accessible, because teaching rarely connects with their cultural learning styles, such as collaborative and community-oriented learning.

Key policy recommendations

McKinley provides three key recommendations for education and curriculum providers to teach STEM subjects to Indigenous students in a more culturally inclusive way:
- STEM educators need to provide an individual and responsive approach to teaching that encompasses the interplay of family, social and cultural contexts, and how these may influence individual students.
- Education providers should take into account each individual student's cultural knowledge systems, as well as the importance of cultural identity to Indigenous communities, to understand how these may influence individual learning experiences.
- Educators need to understand that the often-separated cultural elements of home and classroom sometimes need to be intertwined: 'Bridging between home and school culture thus provides an underlying cultural approach for teachers to support learners who come from different cultural backgrounds,' McKinley said.

McKinley suggests that this can be achieved by integrating cultural context within learning materials, by respecting and seeing individuals as important, and by encouraging teachers to establish a more caring and understanding rapport with students. McKinley makes it clear that Indigenous students may benefit from culturally relevant teaching methods. To my mind, these methods may also benefit non-Indigenous learners.
By presenting STEM subjects through a different lens, non-Indigenous students can learn how those subjects fit into another person's cultural worldview. Additionally, neurodiverse students may be exposed to alternative teaching methods that they find helpful. We all learn differently, and Indigenous students should not be deprived of achieving their highest potential because cultural ways of learning have not been considered. As McKinley points out, there are a number of ways to achieve this. It is time that our education system starts implementing these changes in a significant way.

About the First Peoples & Public Policy Collection

As part of its mission to improve Indigenous policy in Australia and Aotearoa-New Zealand, ANZSOG is working to increase knowledge of Indigenous culture and history. Part of this is through supporting the APO First Peoples & Public Policy Collection, launched at ANZSOG's Reimagining Public Administration conference in February 2019. The Collection is curated from a broad selection of key Indigenous policy topics and provides a valuable resource on Indigenous affairs, with a focus on diverse Indigenous voices.
Robots either have legs, or they run on something like treads or wheels ... right? Well, not in the case of Carnegie Mellon University (CMU)'s new CHIMP robot. The humanoid 'bot does have arms and legs, allowing it to stand and carry out tasks on a human scale. When it's time to move, however, it can hunker down on all fours and roll along on rubberized treads built into its feet and forearms, not unlike a slower, all-terrain form of buggy rollin'. CHIMP (CMU Highly Intelligent Mobile Platform) is Carnegie Mellon's entry in the upcoming DARPA Robotics Challenge, in which the various teams' robots will ultimately have to complete a sort of obstacle course that requires them to do things like drive a car, travel through rubble, open doors, climb ladders, and manipulate tools. The competition begins later this year and will proceed through to the end of 2014. While CHIMP's design does keep it naturally balanced in a standing position, why not just have it walk like C-3PO? For one thing, bipedal walking (or even dynamically balanced standing) demands considerable mechanical complexity, computational power, and energy. It's also not a particularly stable way of traversing uneven terrain, compared to CHIMP's approach of lowering its center of gravity and then rolling along like a tank. That said, CHIMP can also move while standing, using only the treads on its feet. Among other things, this will allow it to move while grasping objects with its three-fingered manipulators (aka "hands"). During the competition, CHIMP will operate via supervised autonomy. This means that a human operator will remotely control the big things, such as the path taken by the robot, while CHIMP's own onboard systems handle functions such as collision avoidance and maintaining stability. Those latter functions will be made possible by a variety of onboard sensors that create a texture-mapped 3D model of the robot's surroundings. That same model is used by the operator to visualize CHIMP's location and orientation. Using an interface consisting of a big-screen monitor, keyboard and mouse, the operator can then decide how to proceed within the various challenges. Making things considerably easier for them, tasks such as tool-grasping and steering-wheel-turning will also be pre-programmed into the robot. This will also allow CHIMP to perform such activities considerably faster. "Humans provide high-level control, while the robot provides low-level reflexes and self-protective behaviors," said team leader Tony Stentz. "This enables CHIMP to be highly capable without the complexity associated with a fully autonomous robot."
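The article does not describe CHIMP's actual software, but the division of labour Stentz describes can be sketched in a few lines. Everything below (the FakeRobot stub, its method names, and the 0.5 m threshold) is invented purely for illustration; this is a toy model of supervised autonomy in which the operator supplies a goal and the robot's own loop retains veto power through its reflexes.

```python
# A toy sketch of supervised autonomy: the human picks the goal, the
# robot's control loop keeps its own reflexes. All names and numbers
# here are hypothetical, not CHIMP's real interfaces.
from dataclasses import dataclass

SAFE_DISTANCE_M = 0.5  # assumed minimum clearance before the robot overrides

@dataclass
class FakeRobot:
    obstacle_m: float = 5.0   # distance to nearest obstacle (from sensors)
    stable: bool = True       # stability estimate (from an IMU, say)

    def stop(self):            print("stopping")
    def drop_to_treads(self):  print("lowering onto treads")
    def move_toward(self, wp): print(f"rolling toward {wp}")

def control_step(robot: FakeRobot, operator_waypoint: str) -> str:
    """One tick of the robot's low-level loop."""
    if robot.obstacle_m < SAFE_DISTANCE_M:  # self-protective reflex
        robot.stop()
        return "collision-avoidance override"
    if not robot.stable:                    # stability reflex
        robot.drop_to_treads()
        return "stability override"
    robot.move_toward(operator_waypoint)    # otherwise obey the human
    return "following operator"

print(control_step(FakeRobot(), "door A"))                 # following operator
print(control_step(FakeRobot(obstacle_m=0.2), "door A"))   # collision-avoidance override
```

The design point is simply that the human's command is the lowest-priority input: the reflexes run first on every tick, which is what lets the operator stay "high-level" without the robot needing full autonomy.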
Hello! In this blog post, I will talk about and analyze Newton's Three Laws of Motion and apply them to real-life situations. This post will be divided into three parts: First Law, Second Law, and Third Law.

Section I: Newton's First Law of Motion

Newton's first law of motion states that a body at rest will remain at rest, and a body in motion will remain in motion, unless acted upon by an outside force. This law is often called the law of inertia. So what is inertia? It is the tendency of all objects to resist change in their state of motion, and it depends on mass. So what does this all mean? It means that every object tends to keep doing whatever it is doing, whether that is moving or staying at rest. If an object is at rest, it will remain unmoving, and if it is moving, it will keep moving, unless it is overcome by an outside force. Let's take a look at an example to understand this a little better: a pushed, moving cart. Recall what inertia is: the tendency of an object to resist change in motion. As we can see in the video, the cart was at rest; then it was pushed by an outside force and moved; after that, it came to rest again. So how exactly does the law apply to this situation? While the cart sat still, it stayed still, and once pushed, it kept moving. But wait, didn't the law state that if an object is moving, it will continue to move? Then why did it stop? Well, the law describes what happens when no unbalanced force acts, and here one was acting: friction. An object can't keep moving forever on a surface with friction, because friction opposes its movement. As we saw in the video, the cart stopped moving exactly because there was a force opposing it. Hence the law: the object will continue what it is doing unless an unbalanced force acts upon it.

Section II: Newton's Second Law of Motion

Newton's second law of motion states that if an unbalanced force acts on a body, the body will accelerate. This law should be easy to comprehend, because we already know it unconsciously. We know that we should not kick a big rock the way we kick a soccer ball, right? Why? Because it's heavy. So what does this mean? The greater the mass of an object, the greater the force needed to accelerate it. Mathematically: acceleration is directly dependent on the unbalanced (net) force and inversely dependent on the mass of the object, that is, a = Fnet / m. **Don't forget, Fnet is the sum of all forces acting upon an object!** So basically, a is acceleration, m is mass, and Fnet is the unbalanced force; we can also write Fnet = ma. For a deeper understanding, let's look at this video. Here we have two objects: a mouse and a popsicle stick. The mouse is a lot heavier than the stick, and as said earlier, more mass means more force is needed to accelerate it. As seen in the video, when both of them were flicked, the popsicle stick moved away faster and farther than the mouse. So how does the law apply? Since the popsicle stick had less mass than the mouse, the same flick gave it a greater acceleration, while the mouse needed more net force to accelerate as much.
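A quick numeric sketch of the second law makes the comparison concrete. The masses and the flick force below are made-up values, chosen only for illustration:

```python
# Newton's second law: a = F_net / m. The numbers are invented; the
# point is that the lighter object accelerates more under the same push.

def acceleration(f_net: float, mass: float) -> float:
    """Acceleration in m/s^2 from net force in newtons and mass in kg."""
    return f_net / mass

FLICK_N = 0.2  # assumed net force of a finger flick, in newtons

for name, mass_kg in [("popsicle stick", 0.005), ("computer mouse", 0.1)]:
    print(f"{name}: a = {acceleration(FLICK_N, mass_kg):.1f} m/s^2")

# popsicle stick: a = 40.0 m/s^2
# computer mouse: a = 2.0 m/s^2
```

With twenty times the mass, the mouse gets one-twentieth the acceleration from the same force, which is exactly what the video shows.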
Note that an object moving uniformly (at constant speed in a straight line) has no acceleration. This doesn't mean no forces act on it; it means the forces acting on it are balanced, so the net force is zero. You can also see this from the formula: since Fnet = ma, if the acceleration is 0, then Fnet is 0.

Section III: Newton's Third Law of Motion

Newton's third law of motion states that for every action, there is an equal and opposite reaction. The law is pretty much self-explanatory: every action force is paired with a reaction force that is equal in size and opposite in direction. Now, let's take a look at this video of a ball dropping and bouncing off the floor. When the ball is dropped, it pushes against the floor, while the floor pushes back against the ball with an equal amount of force. That's the "equal" part of the law; the "opposite" part is that the two forces are the same size but point in opposite directions. Action: the ball pushing down against the floor. Reaction: the floor pushing up against the ball, making the ball bounce. Thank you! 🙂
Weekly Learning 27th April

To keep you busy and to ensure you are all continuing to learn ready for when we do return to school, Mrs Martin and Mr Creagh-Barry would like you to have a go at the learning outlined below. The daily activities will also be uploaded onto Class Dojo throughout the week.

For Maths, we would like you to try the daily Maths Home Learning Lessons and Activities on the White Rose Learning Hub. Continue with the learning from last week and start at 'Week 2' (NOT Summer Week 2!), complete one lesson and activity per day, and record your learning in your red learning journals. If you can print the learning activities out, then please do; if you can't, then just do your best through discussions with adults and notes in your red learning journal. The link for the learning is below:

In addition to this, and your other Daily Learning, we have some other challenges to keep you busy:

Write about a day in the life of a noblewoman who lived in a castle. Use the video below to help with your thinking. Think about these questions:
- How did they dress?
- Where did they live in the castle?
- How should they behave?
- How did they eat?
- What were their duties in a castle?

Collect a range of natural resources from outside such as leaves, twigs, flowers etc. Using the natural resources, can you make a collage of your own choice? Below are some pictures to help you with your thinking.

For grammar, we would like you to try the activities and learning on BBC Bitesize. To start your grammar learning at home, we would like you to build upon your learning of the 'ing' and 'ed' suffixes. Watch the video and play the games using this link:

After you have completed the activities online, please look at the powerpoint provided on ClassDojo. Follow the instructions on the powerpoint and change the words by adding the 'ing' or 'ed' suffix. Then put the words into sentences, recording these in your red journal.

Last week we asked you to go on a minibeast hunt. If you have not done this already, here is an example tally sheet to give you an idea of how to record your results. Once you have gone on your hunt, see if you can classify the insects pictured using the flow chart model shown below. Answer the Yes/No questions and follow the arrows. See whether you can get the correct insects in the correct boxes. As an extra challenge, can you create your own flow chart for your minibeasts? Could you do one for trees/flowers/birds or animals? Upload your examples onto Class Dojo!

Friday: As an on-going, weekly challenge, we would like you to continue with your learning of another language. Mrs Martin and Mr Creagh-Barry would like you to have a go at Spanish as we will continue this when we return to school. See if you can count to 10 and say "Hello, my name is _____" by the end of next week! Here are some links to help you out! Have fun! Your challenge this week is to count to 10 in Spanish and upload a video of your counting on ClassDojo.
On 13 May 1960, a NASA Thor-Delta rocket carried the agency's new Echo 1 satellite into a 1,000 mile orbit around the Earth. It was a 156.995 pound metalized sphere 100 feet in diameter, essentially an enormous, shiny balloon made of the same mylar as party balloons of today. Forty thousand pounds of air would have been required to fully inflate the sphere at sea level, but in the rarefied conditions of orbit only a few pounds of gas were needed. Echo 1 was a passive satellite, used to reflect transcontinental and intercontinental telephone, radio, and television signals. It was so large and so reflective that it was easily visible to the naked eye from much of the Earth. It was expected to remain in orbit until sometime in 1964, but it survived much longer, and did not burn up in the atmosphere until 24 May 1968, eight years after its launch. Its sister "satelloon" Echo 2 was even larger at 135 feet in diameter, and therefore even more conspicuous while it was in orbit from 1964-1969. Both balloons were sufficiently large and lightweight that they experienced detectable pressure from sunlight, providing support for the concept of a solar sail. They also secretly served as an early, rudimentary GPS-like system, with the balloons' positions and instruments used to calculate the exact location of Moscow for America's intercontinental ballistic missiles.
The manipulation of living organisms underwent a veritable revolution with the advent of genetic engineering, which allows part of the genetic make-up (DNA) to be isolated and artificially manipulated. As such, genetic engineering has given rise to the appearance of "Genetically Modified Organisms": GMOs. The most common types of GMO are genetically modified plant species, which include varieties of maize, soybean, rapeseed and cotton. These varieties have essentially been genetically modified to resist certain insects and to tolerate specific herbicides. For a GMO to be placed on the market in the EU, it must pass through an approval system in which its safety for humans, animals and the environment is carefully assessed. The products in question are the following:
- GMOs destined for human food or animal feed, for example GM maize seeds;
- Human food and animal feed which contain GMOs, or which consist of such organisms;
- Food produced from GMOs or which contains ingredients produced from GMOs, and animal feed produced from GMOs, for example GM soybean oil, maize flour, etc.

Regulation (EC) N° 1830/2003 provides for comprehensive information through the labelling of all products intended for human food and animal feed which consist of GMOs, or which contain them, as well as food and animal feed produced from GMOs. For example, oils produced from genetically modified soybean or maize, and all products containing them such as biscuits, crisps, etc., must be labelled in a way that signals the presence of GMOs. Regulation (EC) N° 1830/2003 concerns the traceability and labelling of GMOs and their derivatives, and defines the traceability of GMOs as "the ability to trace GMOs and products produced from GMOs at all stages of their placing on the market through the production and distribution chains". The general objective is to facilitate:
- the control and verification of claims made on labels,
- the targeted monitoring of potential effects on the environment, where appropriate,
- the removal of products which contain GMOs or consist of GMOs if an unexpected risk for human health or the environment is identified.

The FASFC has the power to control compliance with legal requirements regarding the labelling and traceability of genetically modified products intended for the food chain, as well as to detect unauthorised GMOs. The controls are intended firstly to ensure that food and animal feed which contain GMOs, or GMO derivatives, are authorised for commercialisation for a given use, and secondly to ensure the conformity of the labelling. The controls consist of documentary inspection and/or analysis of samples, intended to verify that any GMO present in a product is authorised and that its presence is indicated on the label and/or the product's accompanying documents.
Observations reveal new 'shape' for coronal mass ejections Radiation signatures produced by giant solar storms more complex than previously thought. Phil Dooley reports. Astronomers using one of the most sensitive arrays of radio telescopes in the world have caught a huge storm erupting on the sun and observed material flung from it at more than 3000 kilometres a second, a massive shockwave and phenomena known as herringbones. In the journal Nature Astronomy, Diana Morosan from the University of Helsinki in Finland and her colleagues report detailed observations of the huge storm, a magnetic eruption known as a coronal mass ejection (CME). Unlike the herringbones a biologist might find while dissecting, well, a herring, the team found a data-based version while dissecting the radio waves emitted during the violent event. The shape of the fish skeleton emerged when they plotted the frequencies of radio waves as the CME evolved. The spine is a band of emission at a constant frequency, while the vertical offshoot “bones” on either side were sudden short bursts of radiation at a much wider range of frequencies. Herringbones have been found in the sun’s radio-wave entrails before, but this is the first time that such a sensitive array of radio telescopes has recorded them. The detailed data enabled Morosan and colleagues for the first time to pin down the origin of the radiation bursts. To their surprise, the bones were being created in three different locations, on the sides of the CME. “I was very excited when I first saw the results, I didn’t know what to make of them,” Morosan says. As the CME erupted, the astronomers were already monitoring the sun, using 24 radio telescopes from the Low Frequency Array (LOFAR) distributed around an area of about 320 hectares near the village of Exloo in The Netherlands. “We had seen this really complicated active region – really big ugly sunspots, that had already produced three X-class flares, so we thought we should point LOFAR at it and see if it produces any other eruptions,” explains Morosan. A last minute request to the LOFAR director was rewarded with an eight-hour slot on the following Sunday, during which the active region erupted again, emitting X-rays so intense that it was classified as an X-class flare, the most extreme category. Flares are caused by turbulence in the plasma that makes up the sun. Plasma is gas that is so hot that the electrons begin to be stripped from the atoms, forming a mixture of charged particles. As it swirls around in the sun the charged particles create magnetic fields. When the turbulence rises the magnetic field lines can get contorted and unstable, a little like a tightly coiled and tangled spring. Sometimes the tangled magnetic field suddenly rearranges itself in a violent event called magnetic reconnection, a bit like a coiled spring breaking and thus releasing a lot of trapped energy. It is this energy that powers the flare and propels the plasma out into space to form the CME. “The CME is still connected to the solar atmosphere via the magnetic field, so it looks like a giant bubble expanding out,” Morosan says. The extreme energy in the CME – the second largest during the sun’s most recent 11-year cycle – accelerated matter away from the sun’s surface to over 3000 kilometres per second, or 1% of the speed of light. Because it was so fast the CME formed a shockwave as it travelled through the heliosphere – the atmosphere around the sun. 
Similar to the sonic boom created by a supersonic aircraft, the shockwave accelerated electrons to extreme speeds and caused them to emit the radio waves that Morosan and her colleagues recorded. The exact frequency of the radio waves emitted by the electrons depends on the density of their environment. Close to the sun, the density of the solar atmosphere is higher, which produces higher frequency radio waves; the further the electrons are from the sun, the lower the frequency of the radio emission. So the shape of the herringbones in a plot of frequencies shows where the accelerated electrons are in the sun's atmosphere. The spine represents a constant frequency emission originating from electrons trapped in the shockwave. These escape in bursts from the shock and get funneled along the magnetic field lines on the surface of the CME bubble. Some bursts of electrons are funneled back towards the sun; these are the herringbone offshoots to higher frequency. The ones that get funneled the other way, out into space, create the offshoots to lower frequency. The sensitivity of the array of radio telescopes allowed the team to clearly identify three sources of herringbone radiation, all of them on the flanks of the CME, not at the front of it, as had been proposed. However, the success of the observation was cut short when the timeslot on the LOFAR array came to an end while the CME was still in full swing. "We don't know what happened after the flare peaked," Morosan notes. "So we were lucky, and unlucky!"
International Day for the Abolition of Slavery

Woman from Morocco trapped in forced labour. "Whenever the lady of the house left, she would lock me up for hours in the veranda, with only one small bottle of water." © PAG-ASA, Massimo Timosi

Slavery is not merely a historical relic. According to the International Labour Organisation (ILO), more than 40 million people worldwide are victims of modern slavery. Although modern slavery is not defined in law, it is used as an umbrella term covering practices such as forced labour, debt bondage, forced marriage, and human trafficking. Essentially, it refers to situations of exploitation that a person cannot refuse or leave because of threats, violence, coercion, deception, and/or abuse of power. In addition, more than 150 million children are subject to child labour, accounting for almost one in ten children around the world.

Facts and figures:
- An estimated 40.3 million people are in modern slavery, including 24.9 million in forced labour and 15.4 million in forced marriage.
- There are 5.4 victims of modern slavery for every 1,000 people in the world.
- 1 in 4 victims of modern slavery are children.
- Of the 24.9 million people trapped in forced labour, 16 million are exploited in the private sector, in areas such as domestic work, construction or agriculture; 4.8 million are in forced sexual exploitation; and 4 million are in forced labour imposed by state authorities.
- Women and girls are disproportionately affected by forced labour, accounting for 99% of victims in the commercial sex industry, and 58% in other sectors.

The ILO has adopted a new legally binding Protocol designed to strengthen global efforts to eliminate forced labour, which entered into force in November 2016. The 50 for Freedom campaign aims to persuade at least 50 countries to ratify the Forced Labour Protocol by the end of 2019.

Textile industry workers. It is vital to fight human trafficking in all its forms, but labour trafficking is the most common. @ILO/A.Khemka

The International Day for the Abolition of Slavery, 2 December, marks the date of the adoption, by the General Assembly, of the United Nations Convention for the Suppression of the Traffic in Persons and of the Exploitation of the Prostitution of Others (resolution 317(IV) of 2 December 1949). The focus of this day is on eradicating contemporary forms of slavery, such as trafficking in persons, sexual exploitation, the worst forms of child labour, forced marriage, and the forced recruitment of children for use in armed conflict.

Main Forms of Modern Slavery

Slavery has evolved and manifested itself in different ways throughout history. Today some traditional forms of slavery still persist in their earlier forms, while others have been transformed into new ones. The UN human rights bodies have documented the persistence of old forms of slavery that are embedded in traditional beliefs and customs. These forms of slavery are the result of long-standing discrimination against the most vulnerable groups in societies, such as those regarded as being of low caste, tribal minorities and indigenous peoples. Alongside traditional forms of forced labour, such as bonded labour and debt bondage, there now exist more contemporary forms of forced labour involving migrant workers, who have been trafficked for economic exploitation of every kind in the world economy: work in domestic servitude, the construction industry, the food and garment industry, the agricultural sector and in forced prostitution.
Globally, one in ten children works. The majority of child labour that occurs today is for economic exploitation. That goes against the Convention on the Rights of the Child, which recognizes "the right of the child to be protected from economic exploitation and from performing any work that is likely to be hazardous or to interfere with the child's education, or to be harmful to the child's health or physical, mental, spiritual, moral or social development." According to the Protocol to Prevent, Suppress and Punish Trafficking in Persons Especially Women and Children, trafficking in persons means the recruitment, transportation, transfer, harbouring or receipt of persons, by means of the threat or use of force or other forms of coercion, for the purpose of exploitation. Exploitation includes the prostitution of others or other forms of sexual exploitation, forced labour or services, slavery or practices similar to slavery, servitude or the removal of organs. The consent of the person trafficked for exploitation is irrelevant, and if the trafficked person is a child, it is a crime even without the use of force.
4.3. Vulnerabilities and Potential Impacts for Key Sectors

Summary: In responding to climate change, Australasia's biota may face a greater rate of long-term change than ever before. They also must respond in a highly altered landscape fragmented by urban and agricultural development. There is ample evidence for significant potential impacts. Alterations in soil characteristics, water and nutrient cycling, plant productivity, species interactions (competition, predation, parasitism, etc.), and composition and function of ecosystems are highly likely responses to increases in atmospheric CO2 concentration and temperature and to shifts in rainfall regimes. These changes would be exacerbated by any increases in fire occurrence and insect outbreaks. Aquatic systems will be affected by the disproportionately large responses in runoff, riverflow and associated nutrients, wastes and sediments that are likely from changes in rainfall and rainfall intensity, and by sea-level rise in estuaries, mangroves, and other low-lying coastal areas. Australia's Great Barrier Reef and other coral reefs are vulnerable to temperature-induced bleaching and death of corals, in addition to sea-level rise and weather changes. However, there is evidence that the growth of coral reef biota may be sufficient to adapt to sea-level rise. Our knowledge of climate change impacts on aquatic and marine ecosystems is relatively limited, and prediction of climate change effects is very difficult because of the complexity of ecosystem dynamics. Although Australasia's biota and ecosystems are adapted to the region's high climate variability (exemplified in arid and ENSO-affected areas), it is unclear whether this will provide any natural adaptation advantage. Many species will be able to adapt through altered ecosystem relationships or migration, but such possibilities may not exist in some cases, and reduction of species diversity is highly likely. Climate change will add to existing problems such as land degradation, weed infestations, and pest animals and generally will increase the difficulties and uncertainty involved in managing these problems. The primary human adaptation option is land-use management: for example, modification of animal stocking rates in rangelands, control of pests and weeds, changed forestry practices, and plantings along waterways. Research, monitoring, and prediction, both climatic and ecological, will be necessary foundations for human adaptive responses. Active manipulation of species generally will not be feasible in the region's extensive natural or lightly managed ecosystems, except for rare and endangered species or commercially valuable species. In summary, it must be concluded that some of the region's ecosystems are very vulnerable to climate change.

Climate is a primary influence not only on the individual plant, animal, and soil components of an ecosystem but also on water and nutrient availability and cycling within the ecosystem, on fire and other disturbances, and on the dynamics of species interactions. Changes in climate therefore affect ecosystems both by directly altering an area's suitability for the physiological requirements of individual species and by altering the nature of ecosystem dynamics and species interactions (Peters and Darling, 1985). In addition, biota face an environment in which the rising atmospheric CO2 concentration also will directly affect plants and soils.
The rate of climatic change may exceed any that the biota have previously experienced (IPCC 1996, WG II, Chapter A and Section 4.3.3). This rate of change poses a potentially major threat to ecosystem structure and function and possibly to the ability of evolutionary processes, such as natural selection, to keep pace (Peters and Darling, 1985). Although many of the biota and ecosystems in the region have adapted to high climate variability (exemplified in the region's arid and ENSO-affected areas), it is unclear whether this will provide any advantage in adapting to the projected changes in climate. Furthermore, in contrast to the case of climate change over geological time scales, today the region's biota must respond in a landscape that has been highly modified by agricultural and urban development and introduced species (Peters, 1992). Considerable fragmentation of habitat has occurred in Australasia's forests, temperate woodlands, and rangelands. In the short term, land-use changes such as vegetation clearance are likely to have a much greater bearing on the maintenance of conservation values than the direct effects of climate change on biodiversity (Saunders and Hobbs, 1992). In the longer term, however, climate change impacts are likely to become increasingly evident, especially where other processes have increased ecosystem vulnerability (Williams et al., 1994). Australasia's isolated evolutionary history has led to a very high level of endemism (plants and animals found only in the region). For example, 77% of mammals, 41% of birds, and 93% of plant species are endemic (see Annex D). As one of the 12 recognized "mega-diversity" countries (and the only one that is an OECD member), Australia has a particular stewardship responsibility toward an unusually large fraction of the world's biodiversity. Many of New Zealand's endemic bird species are endangered. Species confined to limited areas or habitat, such as Australia's endangered Mountain Pygmy Possum (Burramys parvus), which is found only in the alpine and subalpine regions of southeast Australia (Dexter et al., 1995), may be especially vulnerable to climate change. Certain ecosystems have particular importance to the region's indigenous people, both for use as traditional sources of food and materials and for their cultural and spiritual significance. Selected climate change impacts on Australian Aborigines and New Zealand Maori are considered later in this chapter.
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer. 2014 November 20 Explanation: Obscuring the rich starfields of northern Cygnus, dark nebula LDN 988 lies near the center of this cosmic skyscape. Composed with telescope and camera, the scene is some 2 degrees across. That corresponds to 70 light-years at the estimated 2,000 light-year distance of LDN 988. Stars are forming within LDN 988, part of a larger complex of dusty molecular clouds along the plane of our Milky Way galaxy sometimes called the Northern Coalsack. In fact, nebulosities associated with young stars abound in the region, including variable star V1331 Cygni shown in the inset. At the tip of a long dusty filament and partly surrounded by a curved reflection nebula, V1331 is thought to be a T Tauri star, a sun-like star still in the early stages of formation. Authors & editors: Jerry Bonnell (UMCP) NASA Official: Phillip Newman Specific rights apply. A service of: ASD at NASA / GSFC & Michigan Tech. U.
Kachin, tribal peoples occupying parts of northeastern Myanmar (Burma) and contiguous areas of India (Arunachal Pradesh and Nagaland) and China (Yunnan). The greatest number of Kachin live in Myanmar (roughly 590,000), but some 120,000 live in China and a few thousand in India. Numbering about 712,000 in the late 20th century, they speak a variety of languages of the Tibeto-Burman group and are thereby distinguished as Jinghpaw, or Jingpo (Chingpaw [Ching-p’o], Singhpo), Atsi, Maru (Naingvaw), Lashi, Nung (Rawang), and Lisu (Yawyin). The majority of Kachin are Jinghpaw speakers, and Jinghpaw is one of the officially recognized minority languages of China. Under the British regime (1885–1947), most Kachin territory was specially administered as a frontier region, but most of the area inhabited by the Kachin became after Burmese independence a distinct semiautonomous unit within the country. Traditional Kachin society largely subsisted on the shifting cultivation of hill rice, supplemented by the proceeds of banditry and feud warfare. Political authority in most areas lay with petty chieftains who depended upon the support of their immediate patrilineal kinsmen and their affinal relatives. The Kachin live in mountainous country at a low population density, but Kachin territory also includes small areas of fertile valley land inhabited by other peoples of Myanmar. The traditional Kachin religion is a form of animistic ancestor cult entailing animal sacrifice. As a result of the arrival of American and European missionaries in Burma beginning in the late 19th century, a majority of the Kachin are Christian, mainly Baptist and Roman Catholic. Among the Kachin in India, Buddhism predominates.
This is not a general Lisp glossary. It only has terms used in the text of this site. The ANSI Standard for Common Lisp has an extensive glossary.

- Common Lisp Object System (CLOS) - The object-oriented features of Common Lisp. These are introduced on this site and defined in the Common Lisp and Meta Object Protocol references.
- Condition System - An object-oriented exception handling mechanism.
- First Class - A language supports some data type as first-class when objects of the type can be created and used as data at run time. First-class data can be kept in variables, and passed to and returned from functions. In dynamically typed languages, first-class data can also have its type examined at run time.
- Heterogeneous - A set of objects which are of different types. For example, any element of a heterogeneous array might contain integers or floating point numbers, or another array, or any kind of data.
- Homogeneous - A set of objects that are required to all be of the same type. For example, a homogeneous array of integers can contain only integers.
- Iteration - The mechanism for repeatedly performing a series of actions. Common Lisp provides a number of constructions for performing iteration, as well as support for recursion.
- Meta Object Protocol (MOP) - In general, an object-oriented protocol, accessible to programmers, which defines the internals of a supported system in such a way as to allow programmers to tailor the system to better meet their particular needs. Such needs might include efficiency, among others. In particular, a MOP has been defined for the Common Lisp Object System, which is implemented using this particular MOP. Often, when people refer to "the MOP" as opposed to "a MOP", they are referring to the CLOS MOP. However, AMOP is short for The Art of the Metaobject Protocol, which defines "the MOP". See the MOP references.
- Pretty Printing - An object-oriented, programmer-definable, formatted printing system.
- Symbolic Processing - The processing of information, as opposed to the mere crunching of numbers. Information may indeed be numeric, but may also include arbitrary, heterogeneous objects. Usually, it is convenient in such processing to assign names to things which can themselves be accessed as first-class data, and to be able to determine the type and internal representation of objects at run time.
- Signature - A specification for the number and types of arguments to a function, and the number and types of return values. A function with optional or named (keyword) arguments can still have a signature.
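For readers newer to these ideas, here is a tiny illustration of two of the glossary's terms. It is sketched in Python rather than Lisp only to keep a single language across this page's examples; the distinctions carry over directly to Common Lisp.

```python
from array import array

# "First class": functions are ordinary run-time data that can be
# stored in variables, passed around, and inspected like anything else.
ops = {"double": lambda x: 2 * x, "square": lambda x: x * x}
f = ops["square"]
print(f(7), type(f))  # 49 <class 'function'>

# "Heterogeneous": a list may freely mix element types.
mixed = [1, 2.5, "three", [4]]

# "Homogeneous": an array is restricted to a single element type.
ints = array("i", [1, 2, 3])   # signed integers only
# array("i", [1.5]) would raise TypeError
```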
NASA is asking everyone to celebrate Earth Day by sharing videos and pictures on social media of their favorite places on Earth using the hashtag #NoPlaceLikeHome. Earth Day is celebrated annually on April 22 to inspire appreciation for, and awareness of, Earth's environment. NASA scientists have discovered more than 1,800 planets beyond our solar system, but there is a lot to love right here on Earth.

Earth Day is celebrated around the world with outdoor events where groups, individuals and organizations perform acts of service to enhance the earth. Some plant trees, clean up litter, recycle containers and the like. Many are encouraged to sign petitions about reversing environmental destruction and putting a halt to global warming. Earth Day was first organized in 1970 and founded by Senator Gaylord Nelson. The hope for the celebration was to promote respect for life and ecology on the planet and to heighten awareness of the continued problems of soil, water and air pollution. People unite all over to appreciate and display respect for Earth's environment.

In honor of Earth Day, John McConnell designed the Earth Flag as a "flag for all people". It was made from weather-resistant, recyclable polyester and includes a two-sided image of the Earth from space on a dark blue field. People describe Earth Day with symbols including the recycling symbol, an image of a flower, leaves or a tree depicting growth, or of planet earth. Typical colors for the day include green, blue and brown.

Earth not only has life, but an atmosphere, forests, oceans, deserts, snow, ice sheets and rain. All of these are things that NASA's 20 Earth-orbiting missions measure and observe in their quest to build the most comprehensive understanding possible of this dynamic planet called Earth. NASA has already begun sharing its views of Earth from scientists at work in the field, research aircraft and satellites. The agency is posting on social media platforms such as Instagram, Twitter, Facebook, Vine and others. All of its posts contain the hashtag #NoPlaceLikeHome.

While views from space can be exciting and awe-inspiring, NASA wants everyone to join in and share their personal images. It wants Earth lovers to show the world from their viewpoint, displaying what makes their part of the Earth special. The vantage point of space is NASA's channel for increasing understanding of Earth, safeguarding its future and improving life on the planet. The agency does this by sharing the knowledge it acquires and working with other institutions around the world to exchange new insights into how the Earth is changing.

NASA monitors the Earth's vital signs from space, air and land using a fleet of satellites and ambitious airborne and ground-based observation campaigns. It is constantly developing new ways to study and observe the planet's interconnected natural systems with long-term data records. Last year the agency asked for pictures during its Global Selfie campaign for Earth Day. This year it wants to include videos by using platforms such as Instagram and Vine. As people join the "movement" by sharing views of their corner of the Earth, the question NASA is asking is: What is your favorite place on Earth? Whichever social media platform one favors, the National Aeronautics and Space Administration is asking everyone to join the fun. In addition to the social media outlets already referenced, Earth lovers can join the Google+ and Facebook events.
Earth Day is a time to celebrate the place over seven billion people call home. On April 22, NASA is asking everyone to get involved by pledging to spend one day sharing videos and pictures of their favorite places on Earth using the hashtag #NoPlaceLikeHome. by Cherese Jackson (Virginia) Top Image Courtesy of NASA Goddard Space Flight Center – Creativecommons Flickr License Inside Image Courtesy of Kate Ter Haar – Creativecommons Flickr License Inside Image Courtesy of NASA’s Marshall Space Flight Center – Creativecommons Flickr License Featured Image Courtesy of NASA (Created for Earth Day)
Get Your Essay Structure Straight to Ensure a Proper Flow to the Writing

Essay structure is, in fact, the spine of your essay, and if you fail to get it straight, the quality of your essay can suffer. The structure adds cohesion and logical flow to the writing. Adhering to the basic structure of the essay is an expectation of essay writing assignments, and you need to be aware of the basics that apply here. Let us take a closer look at the elements of the essay structure in the order of their appearance.

Importance of Essay Structure

When teachers mark essays, one of the most common mistakes they detect is haphazard presentation of information. Sometimes the same information is repeated in a number of places in the body. Other students make the mistake of introducing information that falls outside the key points being discussed within the scope of the essay. Any subject offers many possible points, but depending on your essay length, you will need to select the key points and state them in your introduction. Then, the body paragraphs are dedicated to each of these points. Do not make the mistake of presenting information that you did not select to include in the essay, even if it is relevant. A proper structure helps present information in a logical and organized manner.

Elements of Essay Structure

There are a number of elements in an essay which come together to form the proper essay structure. Here are some of the critical elements you need to know when learning how to write an essay.
- Essay title: The title is the first part of your essay structure. It should be concise, reflective of your essay and hint at your thesis. Depending on the essay type, the title may change: a controversial essay can have a controversial title, while an expository essay will have an informative title. The structural rules for titles include capitalizing the first word and all principal words. No period appears at the end of a title, although question marks and exclamation marks are permitted. The title should be centered and followed by double spacing.
- Introduction: This is the first paragraph of the essay and introduces its key aspects. Start with an introductory line which needs to be interesting, and follow it with a brief outline of your topic. The last line in this paragraph is generally the thesis statement.
- Body paragraphs: The body accounts for the major part of the essay, and structuring its paragraphs properly adds to the cohesion and logical flow of your writing. The body is where you discuss all the key points used to prove or establish your thesis statement. Each body paragraph should open with a topic sentence; the rest of the paragraph then discusses the point being cited at length. Evidence such as examples, data, testimonies and quotes can be used in the latter part of the paragraph to support the arguments being made or the central point being presented. The paragraph should end with a transitional line leading into the next paragraph. An example of a transition line is: "The next section will explore aspects of sample selection, which is another key factor in a successful research study."
- The conclusion: This is where you summarize the writing and bring it to an end. A conclusion is not a place to repeat the same text that appears in the body or introduction.
It is your chance to reiterate and reestablish your points and arguments and to tell the reader your final verdict on the subject. In some essays, such as expository essays, you do not provide your own opinions; there, you should conclude by restating the most relevant and important information that pertains to the topic. A cardinal rule to note is that no new ideas are introduced at this juncture. Following this structure helps keep unwanted and irrelevant information from creeping into the essay at various points and distracting the reader from its main flow. Remember, once you learn the basics of essay writing, the whole process becomes more manageable and hopefully more enjoyable.