If you have been teaching for a while, you know that relationships are important in the classroom. In fact, we have talked about it over and over on the podcast. So, in this episode, we decided to dive deeper into this concept. Let’s find out why it’s important to build relationships, especially in the science classroom.

Building relationships is especially important in the science classroom. The shifting expectations that come with the NGSS mean the science classroom is more challenging for students. The rigor is higher because students apply the knowledge they learn. Previously, students were mostly expected to answer knowledge-based questions without much application. Often, high-achieving students have the most difficult time with this shift. This happens because they know how to excel in a traditional classroom setting. You are changing the rules that they are familiar with, creating discomfort. Also, many students perceive that they aren’t good at science. This is very similar to trends that we see in math courses. Students believe that they lack ability when this is not necessarily the case. Most often, students have this perception because they haven’t had adequate access to science education in lower grade levels.

The brain science behind building relationships. Nicole discusses some of the things she learned by reading the book Culturally Responsive Teaching and the Brain by Zaretta L. Hammond. We highly recommend this book to dive deeper into the brain science behind building relationships in the classroom. Our brains are constantly searching for two things: threats and relevance. This is one of the reasons that using phenomena in your lessons is so important. Phenomena make content relevant. Physical, social, and emotional threats are perceived by the brain in the same way. When any of these threats occur, the brain shuts down higher-order thinking in order to focus on survival. In contrast, when we feel safe, the body releases hormones that prevent a fight-or-flight response. Students are able to learn when they are in a relaxed state. When people experience trauma, the mechanism that recognizes threats doesn’t work in the same way. For example, students with past trauma have difficulty determining when a threat is occurring. Therefore, they are more likely to perceive a situation as threatening.

Students struggle to learn when their basic needs aren’t being met. Your students want to do the right thing in your class. They care about doing well. However, they aren’t always able to. When students are having trouble with behavior, make sure that their basic needs are being met. When students are hungry, tired, or scared, they probably won’t be able to perform correctly in the classroom. Developing strong relationships with students helps them feel secure. This also increases the likelihood that they will communicate with you when they need something. For example, a student may communicate that they are very tired when they have a good relationship with a teacher. This empowers the teacher to provide support. And this further strengthens the relationship.
strong and bold The strong and bold tags are HTML elements that are used to give emphasis to a word or phrase in a web page. The strong tag is used to indicate that its contents are of greater importance than the surrounding text. The strong tag is typically displayed in bold text by web browsers. Here's an example: <p>This is a normal paragraph. <strong>This text is important.</strong></p> On the other hand, the bold tag is used to make its contents appear in bold text. Here's an example: <p>This is a normal paragraph. <b>This text is bold.</b></p> Difference between strong and bold Both the strong and bold tags can be used to give emphasis to a word or phrase, but the strong tag carries more semantic meaning than the bold tag. In other words, the strong tag is used to indicate that its contents are important in the context of the surrounding text, while the bold tag is used solely for visual purposes. Strong and bold tags for SEO The strong and bold elements can be relevant for search engine optimization (SEO) because they can help to convey the meaning and structure of the content on a web page. When optimizing a web page for search engines, it's important to ensure that the page's content is well-structured, relevant, and useful to users. The strong and bold elements can be used to give emphasis to important words and phrases, which can help to draw the reader's attention to key points and can make the content easier to scan and understand. However, it's important to use these elements sparingly and only to highlight truly important content. Overuse of the strong and bold elements, or using them for spammy or irrelevant content, can actually hurt the page's SEO performance. It is generally recommended to use the strong element to semantically mark important content, rather than the bold element. This helps to ensure that the meaning of the content is conveyed to users and to search engines, even if the bold styling is not applied.
Spectrophotometer calibration is a process in which a scientific instrument known as a spectrophotometer is calibrated to confirm that it is working properly. This is important, as it ensures that the measurements obtained with the instrument are accurate. The procedure varies slightly for different instruments, with most manufacturers providing a detailed calibration guide in the owner's manual so that people know how to calibrate the equipment properly. When this process is performed, the person doing it must make a note in the log attached to the equipment and in their experimental notes, so that people know when the device was last calibrated and handled, and by whom. A spectrophotometer is capable of both transmitting and receiving light. The device is used to analyze samples of test material by passing light through the sample and reading the intensity of the wavelengths. Different samples impact the light in different ways, allowing a researcher or technician to learn more about the materials in the test sample by seeing how the light behaves as it passes through the sample. Spectrophotometer calibration is necessary to confirm that the results are accurate. In spectrophotometer calibration, a reference solution is used to zero out the equipment. This solution provides a base or zero reading. The device is calibrated by placing the reference solution inside the spectrophotometer, zeroing out the settings, and running the instrument. Then, samples of an actual test material can be subjected to spectrophotometry in confidence that the machine has been calibrated and is working properly. In a single beam spectrophotometer, a single beam of light is generated, and the device must be recalibrated for each use. In a double beam spectrophotometer, beams can be sent through a test sample and a reference sample at the same time to generate two sets of results which can be used for reference and calibration. In either case, spectrophotometer calibration can be done in the lab by someone working with the machine. If the machine develops serious problems, it may be sent to the manufacturer for maintenance, repair, and potential replacement. In order for a spectrophotometer to work properly, it must be allowed to warm up before use. Many devices take around 10 minutes to warm up. It is important to avoid performing spectrophotometer calibration during the warmup phase as this will throw the settings off. It is also important to be aware that for certain types of wavelengths, special filters and attachments may be needed for the device to function.
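The zeroing step described above amounts to recording the light intensity through the reference (blank) solution and treating it as the 100% transmittance baseline; every later sample reading is then expressed relative to it. As a rough conceptual sketch of that arithmetic (not taken from any instrument's manual; the detector readings and function names below are invented for illustration):

import math

def absorbance(sample_intensity, blank_intensity):
    # Convert raw detector readings to absorbance, using the blank
    # (reference solution) reading as the 100% transmittance baseline.
    transmittance = sample_intensity / blank_intensity
    return -math.log10(transmittance)

# Hypothetical detector readings at one wavelength.
blank_reading = 950.0    # intensity through the reference solution ("zeroing" the instrument)
sample_reading = 475.0   # intensity through the actual test sample

print(round(absorbance(sample_reading, blank_reading), 3))  # 0.301 -- half the light got through

In practice the instrument performs this correction internally once it has been zeroed, and a double beam instrument measures the blank and the sample at the same time.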
Frederick Douglass's intended audience was white people, mainly in the north, as he wanted to convince them of the damaging effects of slavery and to convince them that slavery should be abolished. The two prefatory letters in the book, one written by Wendell Phillips and one written by William Lloyd Garrison, were intended to make sure the white readership of the book knew that Frederick Douglass was trustworthy. William Lloyd Garrison and Wendell Phillips were white abolitionists who had a lot of credibility among white audiences in the north, and their blessing to Douglass went a long way in making sure white northern readers took Douglass seriously. Douglass's autobiography was intended to let his white audience know about the damage that slavery not only inflicted on slaves but the damage it also inflicted on white slave owners. For example, in telling the story of his slave owner, Sophia Auld, Douglass illustrated how slavery degraded a woman who was formerly kind to slaves (as she had never held slaves before). The white audience that read this account would, Douglass hoped, determine that slavery was contrary to their religious values and that slavery made the white people associated with it amoral. These arguments would, Douglass hoped, convince people to push for the abolition of slavery.
New conservation research has discovered that up to 74% of current orang-utan habitat in Borneo could become unsuitable for this endangered species due to 21st century climate or land-cover changes. However, the research has also identified up to 42,000 km² of land that could serve as potential orang-utan refuges on the island and could provide relatively safe new habitats in which the great ape could reside. Published as ‘Anticipated climate and land-cover changes reveal refuge areas for Borneo’s orang-utans’ in Global Change Biology, the research was conducted by scientists including LJMU’s Professor Serge Wich from the School of Natural Sciences and Psychology, with Dr Matthew Struebig from the University of Kent’s Durrell Institute of Conservation and Ecology (DICE) and the Leibniz Institute for Zoo and Wildlife Research (IZW). Further contributions were made by conservation scientists from Australia and Indonesia, in consultation with leading orang-utan experts based in the Malaysian and Indonesian parts of Borneo. The study was supported by UNEP's GRASP (Great Apes Survival Partnership). Part of the work, conducted by the Centre for International Forestry Research (CIFOR) in Indonesia, used satellite images to map deforestation and estimate areas of forest change expected in the future. The researchers also mapped land unsuitable for oil palm agriculture, one of the major threats to orang-utans, and used this alongside information on orang-utan ecology and climate to identify environmentally stable habitats for the species this century. The research demonstrates that continued efforts to halt deforestation could mediate some orang-utan habitat loss, and this is particularly important in Borneo’s peat swamps, which are home to large numbers of orang-utans and are vital for climate change mitigation. Focusing conservation actions on these remote areas now would help to minimise orang-utan losses in the future. Professor Serge Wich commented: "Even though the overall results of this study appear bleak, it is important to stress that humans can influence future climate change and that curbing deforestation and in particular deforestation of forests on peat swamps is vital for climate change mitigation." It is hoped that, since the relocation of endangered species is an expensive process, this research will contribute to conservationists' understanding of how to identify appropriate areas which are safe from development as well as from the effects of climate change. The article is now available online.
by Mary Ann LoFrumento, MD, pediatrician Children ages five to 12 will have very different needs. The younger the child, the more you should keep discussions direct and simple. The older the child, the more they will need to ask their questions and share their fears with you. In the smartphone era it is difficult to limit this age group’s access to the news. Focus on limiting overexposure and make sure that the information they get is accurate. Remember, children do not need all the details, but they can be informed that there may have to be changes within the family and need assurance that you will keep them involved. Social isolation for this age group can be very challenging. Explain why we are doing this. Explain how we are helping the doctors and nurses take care of the patients they have and making sure we protect the elderly and friends or neighbors who have issues that might make them more vulnerable. Disappointments are inevitable regarding sporting events, parties, graduations, vacations, etc. Be honest and open with your children. They may surprise you with their understanding and resilience. Let them know that we will get past this and the world has survived difficult times before. For older kids, discussing historical examples, and highlighting times when the world joined together and overcame great challenges might be helpful. Tell stories about their grandparents and great-grandparents and things that they faced: from polio, to the Great Depression, and to World War II and emphasize hope. Tips for School-Age Children: - Get the facts straight. Start any conversation by asking your child what they have heard or learned and always ask, “Where did you hear that?" Provide information in a clear and direct manner and ask if they have any questions. Let them know they can come and ask you about anything that they hear. It’s ok to acknowledge some of this information might be scary, especially for kids ages nine to 12. - Try to keep opinions out of it. Children of this age will often adopt the opinions of their parents when discussing a topic and share them with others, so pay attention to what you express in front of your children. Try to emphasize that we are looking to the scientists and doctors to come up with solutions. And there is every hope that we will get this under control soon. (Children trust scientists and doctors like they trust their parents and teachers). - Share your feelings with your children but don’t overwhelm them. Let them know that it is okay to feel sad, angry, or even worried. It's important for them to see that these feelings are normal. Practice active listening and don’t dismiss things that make them frustrated or sad, like missing soccer or dance. - Be a role model. Children this age want to feel protected and safe and they will look to you for this. As with younger children, if you appear in control and assure them that they are safe, they will feel secure. Also keeping to the family routine as much as possible is very helpful. On this same note – make sure they are doing all their required schoolwork. It’s hard to do this and do your own work. But they need to stay engaged and learning. - Enlist their help! Ask them to come up with an idea or a project that helps others. Find out what community projects they can get involved with. Join forces and communicate with other parents. - Keep an eye on social media. Social dynamics with kids can worsen with too much down time and frustration at being cooped up. 
Ask your kids to tell you about what’s happening online. - Go outside! Get kids outside whenever it is possible and have them play games or exercise. This is good for the whole family. - Find creative outlets. For some school age kids, it is still difficult to share all their feelings in words. Play, art, and music can be helpful for expressing their feelings.
A gene’s location on a chromosome plays a significant role in shaping how an organism’s traits vary and evolve, according to findings by genome biologists at NYU and Princeton University. A gene’s location on a chromosome plays a significant role in shaping how an organism’s traits vary and evolve, according to findings by genome biologists at New York University’s Center for Genomic and Systems Biology and Princeton University’s Lewis-Sigler Institute for Integrative Genomics. Their research, which appears in the latest issue of the journal Science, suggests that evolution is less a function of what a physical trait is and more a result of where the genes that affect that trait reside in the genome. Physical traits found in nature, such as height or eye color, vary genetically among individuals. While these traits may differ significantly across a population, only a few processes can explain what causes this variation—namely, mutation, natural selection, and chance. In the Science study, the NYU and Princeton researchers sought to understand, in greater detail, why traits differ in their amount of variation. But they also wanted to determine the parts of the genome that vary and how this affects expression of these physical traits. To do this, they analyzed the genome of the worm Caenorhabditis elegans (C. elegans). C. elegans is the first animal species whose genome was completely sequenced. It is therefore a model organism for studying genetics. In their analysis, the researchers measured approximately 16,000 traits in C. elegans. The traits were measures of how actively each gene was being expressed in the worms’ cells. The researchers began by asking if some traits were more likely than others to be susceptible to mutation, with some physical features thus more likely than others to vary. Different levels of mutation indeed explained some of their results. Their findings also revealed significant differences in the range of variation due to natural selection—those traits that are vital to the health of the organism, such as the activity of genes required for the embryo to develop, were much less likely to vary than were those of less significance to its survival, such as the activity of genes required to smell specific odors. However, these results left most of the pattern of variation in physical traits unexplained—some important factor was missing. To search for the missing explanation, the researchers considered the make-up of C. elegans’ chromosomes—specifically, where along its chromosomes its various genes resided. Chromosomes hold thousands of genes, with some situated in the middle of their linear structure and others at either end. In their analysis, the NYU and Princeton researchers found that genes located in the middle of a chromosome were less likely to contribute to genetic variation of traits than were genes found at the ends. In other words, a gene’s location on a chromosome influenced the range of physical differences among different traits. The biologists also considered why location was a factor in the variation of physical traits. Using a mathematical model, they were able to show that genes located near lots of other genes are evolutionarily tied to their genomic neighbors. Specifically, natural selection, in which variation among vital genes is eliminated, also removes the differences in neighboring genes, regardless of their significance. In C. 
elegans, genes in the centers of chromosomes are tied to more neighbors than are genes near the ends of the chromosomes. As a result, the genes in the center are less able to harbor genetic variation. The research was conducted by Matthew V. Rockman, an assistant professor at New York University’s Department of Biology and Center for Genomics and Systems Biology as well as Sonja S. Skrovanek and Leonid Kruglyak, researchers at Princeton University’s Lewis-Sigler Institute for Integrative Genomics, Department of Ecology and Evolutionary Biology, and Howard Hughes Medical Institute. The study was supported by grants from the National Institutes of Health. Kitta MacPherson, Princeton University
Greece has a history stretching back almost 4,000 years. The people of the mainland, called Hellenes, organised great naval and military expeditions, and explored the Mediterranean and the Black Sea, going as far as the Atlantic Ocean and the Caucasus Mountains. One of those expeditions, the siege of Troy, is narrated in the first great European literary work, Homer's Iliad. Numerous Greek settlements were founded throughout the Mediterranean, Asia Minor and the coast of North Africa as a result of travels in search of new markets. During the Classical period (5th century B.C.), Greece was composed of city-states, the largest being Athens, followed by Sparta and Thebes. A fierce spirit of independence and love of freedom enabled the Greeks to defeat the Persians in battles which are famous in the history of civilization - Marathon, Thermopylae, Salamis and Plataea. In the second half of the 4th century B.C., the Greeks, led by Alexander the Great, conquered most of the then known world and sought to Hellenize it. In 146 B.C. Greece fell to the Romans. In 330 A.D. Emperor Constantine moved the capital of the Roman Empire to Constantinople, founding the Eastern Roman Empire, which was renamed the Byzantine Empire, or Byzantium for short, by western historians in the 19th century. Byzantium transformed the linguistic heritage of Ancient Greece into a vehicle for the new Christian civilization. The Byzantine Empire fell to the Turks in 1453 and the Greeks remained under the Ottoman yoke for nearly 400 years. During this time their language, their religion and their sense of identity remained strong. On March 25, 1821, the Greeks revolted against the Turks, and by 1828 they had won their independence. As the new state comprised only a tiny fraction of the country, the struggle for the liberation of all the lands inhabited by Greeks continued. In 1864, the Ionian islands were added to Greece; in 1881 parts of Thessaly. Crete, the islands of the Eastern Aegean and Macedonia were added in 1913 and western Thrace in 1919. After World War II the Dodecanese islands were also returned to Greece. During World War II, Greece was occupied by Bulgaria, Germany and Italy. A government in exile was established in 1944. By 1945 the World War was over, but the internal struggle between left- and right-wing factions raged. When the dust settled, Alexandros Papagos and Konstantinos Karamanlis were the leaders of new conservative coalition parties; with the aid of the Marshall Plan, political and economic conditions stabilized, but the country was left dependent on American aid. In the election of 1950, no fewer than forty-four parties contended for 250 parliament seats. The Populists, Liberals, and National Progressive Center Union finally formed a coalition government headed by General Plastiras. At this time, Greece still had a constitutional monarchy under King Paul. Social conditions declined despite economic growth: crowding into the cities caused demands for social welfare and better income distribution, and the resulting political turmoil culminated in a successful military coup in 1967. "The Colonels" remained in power until 1973, heading a reign of terror. A Turkish invasion of Cyprus in 1974 provided an opportunity to unseat the military dictatorship. A new constitution was written, and once again, elections were held. This time, the Greeks elected to abolish the monarchy, and Greece once again established a republic, the type of government it enjoys today.
What is Education and Examples? Education is a process of acquiring knowledge, skills, values, and attitudes through various formal and informal means. It is an essential tool that empowers individuals to lead a meaningful and productive life by providing them with the necessary tools and resources to succeed in various spheres of life. Education can take many forms, including formal education such as attending schools, colleges, and universities, as well as informal education such as learning through life experiences, reading, and online courses. It plays a crucial role in personal growth, career development, and social mobility, and can lead to higher levels of achievement, better employment opportunities, and a higher standard of living. There are various types of education, such as: Formal Education: Formal education is a structured form of education that takes place in schools, colleges, and universities. It provides students with a structured curriculum that covers various subjects, including math, science,
When you save text in a constant or a variable, you are saving a string in Swift. Furthermore, you can see a string as a series of characters. For that reason, you can access the content of a string in various ways, for example as a collection (such as an array) of characters.
Initialize a String
A string literal is text written with a double quote at the beginning and at the end. So, if you need to save text in a constant or a variable, you only have to assign a string literal to it.
let coach = "Ted Lasso"
var team = "AFC Richmond"
NOTE: It is fundamental to remember that strings are case sensitive. So, "a simple string" is different from "A simple string".
If you need to save the value of two strings in another constant or variable, you can concatenate them. It is as easy as using the + operator between the two values when you assign them.
let name = "Michael"
let lastName = "Scott"
let funnyBoss = name + lastName
Another way is to use the += operator to add text to a previously initialized string:
var spy = "Bond"
spy += ", James Bond"
In the above example, the final value of spy is "Bond, James Bond".
And, as a String is a collection of characters, you can also use the append method:
var greet = "Hello, world"
greet.append("!")
In the last code, the value of greet is "Hello, world!".
But maybe the most used feature in strings is string interpolation. It allows you to use constant or variable values inside a string. To do it, you write the constant or variable name inside parentheses "()", preceded by a backslash "\". That is:
let name = "Forest"
let fullName = "Forest Gump"
let introduction = "Hello, I am \(name), \(fullName)"
The final value of introduction is "Hello, I am Forest, Forest Gump".
Finally (although it is not often used in real-world apps), sometimes you may need to save larger texts in a string, and it could be difficult for other programmers, or for yourself, to read the value. In this case, you can use a multiline string. To create a multiline string, you only need to write three quotation marks at the beginning and finish with another three on their own line. For example:
let text = """
To be, or not to be, that is the question:
Whether 'tis nobler in the mind to suffer
The slings and arrows of outrageous fortune,
Or to take arms against a sea of troubles,
And by opposing end them? To die, to sleep;
"""
The opening triple quotes share a line with the assignment; the text itself begins on the following line, which helps you focus on the text value.
You can see more useful methods in the post Useful String methods in Swift. Happy coding! 👨🏻💻
Read and write numerals up to 100. 0106.2.1 Links verified on 1/2/2015
- Count Us In - 12 activities that help with understanding basic math: number recognition, ordinal numbers, sorting, patterns, addition, subtraction, time
- Cookie Dough - type the word that corresponds to the numeral, or do the reverse and type in the number that corresponds with the word
- Find the Missing Number - find the butterfly that holds the missing number
- Numeracy Games - many games dealing with numeration
- Really Big Numbers - enter a number, then click the Click here button to see how to write it
- Word Game - [click Numbers and then click Numbers 5 to play the game] select and combine words that spell out a given number; numbers from 21 to 99 used
- Write Numbers up to 100 - type the missing numeral
Online gaming has become an increasingly popular pastime for people of all ages, and educators are beginning to take notice of the potential benefits that it can offer for learning. Gamification is the process of incorporating game elements into non-game contexts, and it is being used in a variety of ways to make education more engaging and effective. Benefits of gamifying the classroom There are a number of potential benefits to gamifying the classroom, including: - Increased engagement and motivation: When students are engaged in a learning activity, they are more likely to retain information and be motivated to learn more. Gamification can help to increase engagement by making learning more fun and challenging, and by providing students with opportunities to earn rewards and recognition. - Improved learning outcomes: Research has shown that gamification can lead to improved learning outcomes in a variety of subjects, including math, science, and language arts. This is likely due to the fact that gamified learning activities are more engaging and motivating for students, and that they provide students with opportunities to practice skills and receive feedback in a fun and supportive environment. - Enhanced critical thinking and problem-solving skills: Many games require players to think critically and solve problems in order to succeed. By incorporating game elements into the classroom, educators can help students to develop these important skills. - Improved collaboration and teamwork skills: Many games are designed to be played collaboratively, which can help students to develop their teamwork skills. In addition, gamified learning activities can provide students with opportunities to interact with each other and learn from each other. - Increased self-confidence and self-efficacy: When students succeed in gamified learning activities, they earn rewards and recognition, which can boost their self-confidence and self-efficacy. This can lead to improved academic performance and a more positive attitude towards learning. How to gamify the classroom There are a number of ways to gamify the classroom. Here are a few ideas: - Use leaderboards and badges: Leaderboards and badges are two common game elements that can be used to motivate students and encourage competition. Leaderboards can be used to track students’ progress and rank them against their peers. Badges can be awarded to students for completing tasks, achieving goals, or demonstrating mastery of a skill. - Create missions and challenges: Missions and challenges are another way to make learning more engaging and challenging. Missions can be individual or group-based, and they can be designed to take place inside or outside of the classroom. Challenges can be used to test students’ knowledge and skills, or to encourage them to be creative and think outside the box. - Use rewards and recognition: Rewarding students for their achievements is a great way to motivate them and boost their self-confidence. Rewards can be tangible (such as candy, stickers, or gift cards) or intangible (such as praise, extra credit, or special privileges). - Provide feedback: Feedback is essential for helping students to learn and grow. Gamified learning activities can provide students with immediate feedback on their performance, which can help them to identify their strengths and weaknesses and make necessary adjustments.
Examples of gamified learning activities Here are a few examples of gamified learning activities that can be used in the classroom: - Math games: There are a number of online and offline math games that can be used to help students learn and practice math skills. For example, students can play games to learn about addition, subtraction, multiplication, division, fractions, decimals, and geometry. - Science games: There are also a number of online and offline science games that can be used to help students learn about science concepts. For example, students can play games to learn about the solar system, the human body, the water cycle, and the food chain. - Language arts games: There are also a number of online and offline language arts games that can be used to help students learn and practice language skills. For example, students can play games to learn about vocabulary, grammar, spelling, and punctuation. - History games: There are also a number of online and offline history games that can be used to help students learn about historical events and figures. For example, students can play games to learn about the American Revolution, World War II, and the Civil Rights Movement. Tips for gamifying the classroom Here are a few tips for gamifying the classroom: - Start small: Don’t try to gamify your entire classroom overnight. Start by incorporating a few game elements into your lessons or assignments. - Get feedback from students: Ask your students for feedback on the gamified learning activities that you are using. This will help you to identify what is working and what is not. - Balance fun and learning: Gamification should be used to make learning more fun and engaging, but it should not be the only focus of your lessons or assignments. Make sure that
A cell’s nucleus can be thought of as the master control room of a factory, and the DNA is similar to the factory manager. The DNA helix controls every aspect of cellular life, and we didn’t even know its structure until the 1950s. Ever since that discovery, the fields of genetics, molecular biology and biochemistry have rapidly expanded, and now simply knowing the sequence of a chromosome provides a wealth of information about the inner workings of the cell.

Every Possible Gene in the Sequence
Scientific research has determined that every three DNA base pairs -- called a codon -- encodes an amino acid in the eventual protein. One of the key pieces of information gleaned from the code is that every gene starts with an adenine-thymine-guanine codon -- ATG on the DNA sequence. Because DNA is double-stranded, every CAT -- or cytosine-adenine-thymine -- found in the sequence is the beginning of a possible gene on the opposite strand. In addition, all genes end with TAA, TAG, or TGA codons. In other words, a quick examination of the sequence will reveal every possible location for a gene, although some short sequences are not actively transcribed by the organism.

Messenger RNA Sequences
In addition, the genetic code allows us to translate possible genes directly into messenger RNA sequences. This information is important to research scientists utilizing a technique called RNA interference to block gene expression in target cells. Most eukaryotic and some prokaryotic organisms process mRNA transcripts by splicing, or removing, portions of the sequence called introns. If an organism does not splice RNA, the DNA sequence can be directly translated into a protein sequence. Even for those organisms that do, splice sites are generally known, which means that the protein sequence can be guessed or determined experimentally.

If an organism’s genome has already been mapped, an individual’s DNA sequence can be analyzed for mutations -- this concept is the basis for human genetic testing. Doctors can now determine with reasonable accuracy a person’s vulnerability to diseases caused by DNA mutations. For example, women with a family history of breast cancer can get checked for mutations in the BRCA genes, which would indicate a high risk for future breast cancer.

Most species of bacteria produce enzymes called restriction endonucleases because the cells are vulnerable to viruses that can insert harmful foreign DNA. Restriction enzymes combat the tactic by cleaving double-stranded DNA at specific sequences. Molecular biologists and microbiologists can use purified enzymes to cut DNA in the lab. Restriction digests are powerful tools at the disposal of research scientists, so if the DNA sequence is known, the restriction sites on that sequence are also known.

- National Cancer Institute: BRCA1 and BRCA2: Cancer Risk and Genetic Testing
- "Human Anatomy and Physiology"; Elaine N. Marieb; 2012

About the Author
Robert Mullis is a graduate of Liberty University with a bachelor's degree in biochemistry and a second degree in accounting. As a writer, he specialized in math, biology, chemistry, literature, and business.
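Because start and stop codons are fixed three-letter sequences, scanning a DNA string for every possible gene location, as described above, is straightforward to express in code. The following is a simplified Python sketch, not part of the original article: the example sequence is made up, and real gene finding also considers the opposite strand, all reading frames, splicing, and other signals.

START = "ATG"
STOPS = {"TAA", "TAG", "TGA"}

def candidate_genes(dna):
    # Yield (start, end, codons) for every in-frame ATG...stop stretch
    # on the given strand -- a simplified open-reading-frame scan.
    dna = dna.upper()
    for i in range(len(dna) - 2):
        if dna[i:i + 3] != START:
            continue
        # Walk forward codon by codon in the same reading frame.
        for j in range(i + 3, len(dna) - 2, 3):
            if dna[j:j + 3] in STOPS:
                yield i, j + 3, dna[i:j + 3]
                break

# Made-up example sequence.
sequence = "CCATGAAATTTTAGGGATGCCCTGACC"
for start, end, orf in candidate_genes(sequence):
    print(start, end, orf)   # prints: 2 14 ATGAAATTTTAG  and  16 25 ATGCCCTGA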
What is Linker?
Suppose you are leading a project, and the project is divided into multiple parts on the basis of the resources available and the expertise of each individual. Each of these small portions of the project is equivalent to a small project in itself, each having its own set of inputs and outputs. But for you to publish the entire project as a single product, you need a way to combine all these pieces of the puzzle into one. In computing terms, this is what linking is, and the component which does it is the linker. So, combining all the multiple files into a single executable is what linking is. The program that takes the files generated by the compiler and makes them into one executable is called a linker.
Why do we Use Linker?
Now, before we jump into understanding the use of a linker, we need to be well versed with a few terminologies so that we will be able to appreciate the use of a linker. First, we will look at the terminology of symbols. In the source code, all the variables and functions are referred to by names. For example, suppose you declare a variable int x. In that case, we are essentially informing the compiler to set aside the memory required by an int; from here on, anywhere we reference “x,” we are referring to the memory location where this variable is stored. Hence, if we mention a variable or a function, it is nothing more than a symbol to the program. For us, it is easy to understand symbols, but for the processor, a symbol means nothing more than a specific command it should execute. These commands must be translated into machine language from the programming language a user writes. For example, if we use x += 8, the compiler should convert this to something like “increase the value at the memory location of ‘x’ by 8 units”.
The next term is symbol visibility. This matters most often when two or more functions need to share a variable. For this, we take the help of symbols that have a specific meaning in the programming language. For example, in C, we can use the extern keyword to reference a variable defined in another file, and the extern declaration makes sure the reference is resolved to the variable in that other file, increasing the visibility of a variable which is defined in only one file.
The entire linking process is carried out in two parts: the first is collecting all the files into a single executable file, i.e., reading all the symbol definitions and noting the unresolved symbols; then, in the next step, going through each of the unresolved symbols and fixing them up to point to the right places. After this process, there should be no unresolved symbols, or else the linker will fail. This process of linking is known as static linking, because all these checks are performed while the program is being compiled and built. An advanced concept is dynamic linking, where variables and symbols are linked at run time. The executable image then depends on additional files: shared libraries.
Now let us understand the necessity of using linkers. Suppose you are working on a huge project consisting of millions of lines of code and, due to customer requirements, you have to change only a portion of one file and then compile the code again. If we compiled all the millions of lines of code again, it would result in an unnecessary loss of time. Modern optimizing compilers perform heavy code analysis and memory allocation and can take even longer.
This is where linkers come into play: most of the third-party libraries are rarely affected, and only a few files are affected by a given code change. Hence, when the code is compiled, the compiler creates an object file for each source file. With each file change, only the corresponding object file gets updated. The job of the linker is then to gather these object files and use them to generate the final executable. Here the third-party libraries are used via shared libraries. In recent times, not every OS creates a single monolithic executable; rather, programs run on components that keep related functions together. For example, Windows uses DLLs for this task. This helps in reducing the size of the executable, but in turn increases the dependency on the existing DLLs. DOS used .OVL files.
Importance of Linker
In recent times, with compilers evolving constantly, we have begun writing more optimized code every day. Though linkers come at the very end, gluing together the object files, one might have the notion that the technology hasn’t changed much. But with recent advancements, linkers have taken on even higher importance. With linker hardening, linkers are able to remap non-dynamic relocation sections to read-only once they are resolved, which makes the running program more robust. Another advancement is that linkers can use intermediate representation in the object files to identify sites where inlining may prove beneficial and thus help link-time optimization.
With the discussion above, we have an idea of what advantages linkers bring for developers, and here we put them formally in points.
- There need not be any duplication of the libraries required, as frequently used libraries are available in one location. An example of such an advantage is having standard system libraries.
- Libraries used dynamically by various other code will easily get corrected or upgraded by changing the library in a single place. However, the ones linked by static linking have to be manually re-linked.
- With the use of static linking, we can avoid “DLL hell,” which is a problem when code depends on conflicting DLLs loaded in a single memory space.
In conclusion, the crux of the entire article is that the linker is an indispensable part of the process of building code. And with the recent developments, we have seen that the level of optimization of the code has increased multi-fold. Linkers are the pieces on the chessboard which glue the different source files into a single unified program to solve a business problem!
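To make the two-pass static-linking idea described above more concrete, here is a minimal toy sketch in Python (not from the original article; the "object file" format, symbol names, and library contents are invented for illustration, and real linkers work on machine code, sections, and relocation records rather than dictionaries):

# Toy model of two-pass static linking: resolve symbol references across "object files".
object_files = [
    {"defines": {"x": 8, "helper": "code-for-helper"}, "references": ["main"]},
    {"defines": {"main": "code-for-main"}, "references": ["x", "helper", "printf"]},
]

# A stand-in for a shared system library providing common symbols.
system_library = {"printf": "code-for-printf"}

def link(objs, libs):
    # Pass 1: collect every symbol definition and note all references.
    symbol_table = {}
    unresolved = []
    for obj in objs:
        symbol_table.update(obj["defines"])
        unresolved.extend(obj["references"])

    # Pass 2: fix up each reference, pulling from libraries if needed.
    image = dict(symbol_table)
    for name in unresolved:
        if name in image:
            continue                      # already defined by another object file
        if name in libs:
            image[name] = libs[name]      # "link in" the library symbol
        else:
            raise RuntimeError(f"undefined reference to '{name}'")  # link error
    return image

executable = link(object_files, system_library)
print(sorted(executable))  # ['helper', 'main', 'printf', 'x']

A real linker would also handle relocation, adjusting addresses once the final layout of the executable is known, which this sketch omits.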
8th Grade Math Learning Targets ALT 1 - Communication Communicates clearly and explains reasoning so others can follow how a problem is solved. AST.1.1 - Language : Uses appropriate mathematical language. AST.1.2 - Representations : Uses appropriate forms of mathematical representations to present information correctly. AST.1.3 - Transitions : Moves between different forms of mathematical representations. AST.1.4 - Lines of Reason : Communicates through lines of reasoning that are complete and coherent. ALT 2 - Modeling Reasons mathematically to solve problems in real-life context. AST 2.1 - Relevant Elements : Identifies the relevant elements of the authentic real-life situation. AST 2.2 - Strategies : Selects adequate mathematical strategies to model the authentic real-life situation. AST 2.3 - Reaches a Solution : Applies the selected mathematical strategies to reach a valid solution to the authentic real-life situation. AST.2.4 - Degree of Accuracy : Explains the degree of accuracy of the solution. AST 2.5 - Making Sense : Explains whether the solution makes sense in the context of the authentic real-life situation. ALT 3 - Patterns Recognizes patterns and describes them as relationships or general rules. AST.3.1 - Pattern ID : Selects and applies mathematical problem-solving techniques to correctly identify the pattern. AST.3.2 - Description : Pattern is described as relationship or general rule. AST 3.3 - Verification : Verifies the validity of these general rules. AST.3.4 - Conclusions : Conclusions are consistent with the correct findings. ALT 4 - Exponents Works with expressions and equations using integer exponents. AST 4.1 - Exponent Properties : Knows and applies the properties of integer exponents to generate equivalent numerical expressions. AST 4.2 - Scientific Notation : Uses numbers expressed in the form of a single digit times an integer power of 10 to estimate very large and very small quantities, and to express how many times as much one is than the other. AST 4.3 - Operations with Scientific Notation : Performs operations with numbers expressed in scientific notation, including problems where both decimal and scientific notation are used. AST 4.4 - Applications : Uses scientific notation and choose units of appropriate size for measurements of very large or very small quantities. AST 4.5 - Uses Technology : Interprets scientific notation that has been generated by technology. ALT 5 - Equations and Systems Analyzes and solves linear equations and systems of linear equations. AST 5.1 - One Variable Equations : Solves linear equations in one variable. AST 5.2 - Systems : Analyzes and solves pairs of simultaneous linear equations. ALT 6 - Linear Functions Defines, evaluates, compares and uses linear functions to model relationships between quantities. AST 6.1 - Direct Variation : Graphs proportional relationships, interpreting the unit rate as the slope of the graph. Compares two different proportional relationships represented in different ways. AST 6.2 - Writing Linear Equations : Uses similar triangles to explain why the slope m is the same between any two distinct points on a non-vertical line in the coordinate plane; derive the equation y = mx for a line through the origin and the equation y = mx + b for a line intercepting the vertical axis b. AST 6.3 - Functions : Understands that a function is a rule that assigns to each input exactly one output. 
The graph of a function is the set of ordered pairs consisting of an input and the corresponding output (function notation not required). AST.6.4 - Compare Properties of Linear Functions : Compares properties of two functions each represented in a different way (algebraically, graphically, numerically in tables, or by verbal descriptions). AST 6.5 - Slope-Intercept Form (y = mx + b) : Interprets the equation y = mx + b as defining a linear function, whose graph is a straight line; give examples of functions that are not linear. AST 6.6 - Derive an Equation of a Line on a Graph : Constructs a function to model a linear relationship between two quantities. Determines the rate of change and initial value of the function from a description of a relationship or from two (x, y) values, including reading these from a table or from a graph. Interprets the rate of change and initial value of a linear function in terms of the situation it models, and in terms of its graph or a table of values. AST 6.7 - Describing Functions : Describes qualitatively the functional relationship between two quantities by analyzing a graph. Sketch a graph that exhibits the qualitative features that has been described verbally. ALT 7 - Bivariate Data Investigates patterns of association in bivariate data. AST 7.1 - Scatter Plots : Constructs and interprets scatter plots for bivariate measurement data to investigate patterns of association between two quantities. Describes patterns such as clustering, outliers, positive or negative association, linear association, and nonlinear association. AST 7.2 - Lines of Fit : Knows that straight lines are widely used to model relationships between two quantitative variables. AST 7.3 - Using Model Equations : Uses the equation of a linear model to solve problems in the context of bivariate measurement data, interpreting the slope and intercept. AST 7.4 - Categorical Data : Understands that patterns of association can also be seen in bivariate categorical data by displaying frequencies and relative frequencies in a two-way table. AST 7.5 - Two-Way Tables : Constructs and interprets a two-way table summarizing data on two categorical variables collected from the same subjects. AST 8.6 - Relative Frequencies : Uses relative frequencies calculated for rows and columns to describe possible association between the two variables. ALT 8 - Geometry Understands congruence and similarity using transformational geometry, triangle-angle relationships, and parallel lines cut by transversals, as well as, the volume of cylinders, cones and spheres. AST 8.1 - Rigid Transformations : Verifies experimentally the properties of rotations, reflections and translations. AST 8.2 - Congruence : Understands that a two-dimensional figure is congruent to another if the second can be obtained from the first by a sequence of rotations, reflections, and translations; find two congruent figures, describes a sequence that exhibits the congruence between them. AST 8.3 - Coordinate Transformations : Describes the effect of dilations, translations, rotations, and reflections on two-dimensional figures using coordinates. AST 8.4 - Angle Relationships : Uses informal arguments to establish facts about the angle sum and exterior angle of triangles, about the angles created when parallel lines are cut by a transversal, and the angle-angle criterion for similarity of triangles. 
AST 8.5 - Volume Formulas : Knows the formulas for the volume of cones, cylinders, and spheres and uses them to solve real-world and mathematical problems. ALT 9 - Pythagorean Theorem Understands and applies the Pythagorean Theorem using rational and irrational numbers. AST 9.1 - Identify Irrational Numbers : Knows that numbers that are not rational are called irrational. Understands informally that every number has a decimal expansion. AST 9.2 - Estimate, Compare, Order Irrationals : Uses rational approximations of irrational numbers to compare the size of irrational numbers, locates them approximately on a number line diagram, and estimates the value of expressions. AST 9.3 - Square and Cube Roots : Uses square and cube root symbols to represent solutions to equations of the form x² = p and x³ = p, where p is a positive rational number. Evaluates square roots of small perfect squares and cube roots of small perfect cubes. Knows that √2 is irrational. AST 9.4 - Prove the Pythagorean Theorem : Explains the proof of the Pythagorean Theorem and its converse. AST 9.5 - Find Length in 2D and 3D : Applies the Pythagorean Theorem to determine unknown side lengths in right triangles in real-world and mathematical problems in two and three dimensions. AST 9.6 - Distance and Coordinates : Applies the Pythagorean Theorem to find the distance between two points in a coordinate system.
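As a worked illustration of AST 9.5 and AST 9.6 (this example is not part of the standards list above): for a right triangle with legs 3 and 4, the hypotenuse is √(3² + 4²) = √25 = 5; likewise, the distance between the points (1, 2) and (4, 6) in the coordinate plane is √((4 − 1)² + (6 − 2)²) = √(9 + 16) = 5.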
Bookmarks are a simple way to save the address of a web page. A bookmark is a shortcut that stores the address of a web page. When clicked, it opens the web page automatically without having to type in the URL. Storing favorite places on the Internet is a basic skill. However, in many education settings the steps to complete this task can be complex. This is because many school networks prevent students from saving files to the local computer. In addition, student profiles often are set up to permit access only to a designated location on the server, prohibiting the storage of files to a Favorites folder, where Internet bookmarks would typically be stored.
Why should your students bookmark? The four main reasons to bookmark a web page are:
- SPEED: Many young children are slow typists, so it is faster to create a bookmark that stores a web page address than to type the URL into the address bar of the web browser.
- ACCURACY: Many young children are inaccurate typists, which can cause them to access unwanted web pages because the URL they entered into the address bar is incorrect. With a bookmark they can always view the correct web page. This is especially helpful when the URL is lengthy and complex.
- IMPROVES WORKFLOW: When gathering facts for a research project, students may need to return to a web page for additional information. A bookmark makes it easy for children to return to a web page repeatedly.
- CITE THE SOURCE: Teachers will often ask students to create a bibliography that states where the information for an assignment was collected. Bookmarks are a great way to store sources of information if they need to be included in a report, presentation, or other publication.
Bookmarking can be complicated in some education settings! How a school’s network is set up can transform the basic Internet skill of bookmarking from a simple task into a complicated set of steps. This is exactly the problem I encountered this week when teaching Assignment 4 from TechnoJourney. The students cannot add bookmarks into the Favorites folder by clicking the Add to Favorites button in Internet Explorer. Instead, they must create a shortcut in their student folder to the web page. Since the students are only eight or nine years old, I was worried about teaching this skill. UPDATE: TechnoJourney was replaced with TechnoInternet. The activities are similar. One of my concerns was that students needed to know TWO methods of bookmarking. They would need to know the home method, which is the standard step of clicking the Add to Favorites button in Internet Explorer, plus the school method, which is creating a shortcut in a student folder. I had worried that it might be confusing to teach them two methods of bookmarking in the same class. My other concern was that the school method of creating a shortcut has several steps, transforming the basic Internet skill of bookmarking into a complex task. It was NOT AN OPTION to skip teaching bookmarking. The classroom teacher wanted her students to be better at Internet research. For this reason, the ability to store bookmarks was considered an essential Internet skill that would allow students to easily access valuable online resources and information. The students surprised me with their ability. I started by demonstrating how they would bookmark resources at home. Students then practiced this skill. Several in the group who had older siblings were already familiar with the task, and the rest of the students caught on quickly. Next, we had to learn the school method of bookmarking.
It had TEN STEPS: - Minimize the web browser window. - Click My Documents to open the student folder. - Inside the student folder create a new folder called bookmarks. - Keep the student folder OPEN. Maximize the web browser window. - Find a web page. - Select the web address. Right click the mouse and select COPY. - Click on the bookmarks folder in the status bar. - Right click inside the bookmarks folder. Select NEW from the menu. Click SHORTCUT. - Right click inside the location box. Select PASTE. Click NEXT. - Type a name for the shortcut. Click FINISH. I have an overhead projector, so it is easy to have students follow along. We did each step listed above, one at a time. Once we had completed all ten steps, I asked students to create another shortcut in their bookmarks folder on their own. I walked around the room to remind students about the steps. To make my life easier, I asked those children who had successfully created a second bookmark to assist their neighbor. It did not take long before everyone had two bookmarks in their bookmarks folder. Students then practiced clicking on their bookmarks to display favorite web pages. Creating bookmarks is a basic Internet skill that needs to be practiced repeatedly to remember. For this reason, we are going to spend the next class period bookmarking web pages. The goal is to make the children experts at this task. The classroom teacher is very pleased because it is her hope for students to become familiar with bookmarking in Grade 3, so that they can use this skill when conducting internet research in Grades 4, 5, 6, 7, and 8. About Teaching Basic Internet Skills Bookmarking is a basic internet skill. Do you teach how to create a bookmark? At what grade do you introduce this skill? Is bookmarking a simple task at your school or is it more complex? Other Articles about Teaching Internet Skills using TechnoJourney Now the Students’ Turn: Reflecting on TechnoJourney A Teacher Speaks Out: Yes, you should teach Internet skills! Peer to Peer Teaching – Students Become the Teachers Internet Tour Guide Activity Use YouTube Videos in your Classroom Students Love Google Maps Review How to Sort Google Images with Your Students Teaching Internet Skills – The Trust Test Wikipedia in the Classroom Bookmarking is a Basic Internet Skill that can be Complex Metacognition and Teaching about the Internet 4 Strategies for Reviewing Internet Search Results When Should Students Start Using the Internet? Should you Teach Internet Skills?
Emissions Advantages of Gasification
Gasification-based processes for power production characteristically result in much lower emissions of pollutants compared to conventional coal combustion. This can be traced to the fundamental difference between gasification and combustion: in combustion, air and fuel are mixed, combusted and then exhausted at near atmospheric pressure, while in gasification oxygen is normally supplied to the gasifiers and just enough fuel is combusted to provide the heat to gasify the rest. Since air contains a large amount of nitrogen along with trace amounts of other gases which are not necessary in the combustion reaction, the volume of combustion flue gas is much greater than that of the syngas produced from the same fuel. Pollutants in the combustion exhaust are therefore at much lower concentrations than in the syngas, making them more difficult to remove. Moreover, gasification is usually operated at high pressure (compared to combustion at near ambient pressure). The inherent advantages in removing syngas contaminants prior to utilization of the syngas1 emerge as follows:
- Relatively high concentration of pollutant species and pollutant species precursors (most notably hydrogen sulfide (H2S) in syngas, which would form sulfur oxides (SOx) upon syngas combustion), versus the much lower concentration that would be found in the combustion flue gas, improves removal;
- High-pressure gasifier operation significantly reduces the gas volume requiring treatment;
- Conversion of H2S into elemental sulfur (or sulfuric acid) is technically much easier and more economical than capture and conversion of SO2 into salable by-products;
- The higher temperature and pressure process streams involved in gasification allow for easier removal of carbon dioxide (CO2) for geological storage or for sale as a byproduct;
- The oil and gas industries already have significant commercial experience with efficient removal of acid gases (H2S and CO2) and particulates from natural gas;
- Removal of corrosive and abrasive species prevents potential damage to conversion devices such as gas turbines, resulting from contamination, corrosion, or erosion of materials.
The Clean Air Act, enacted by Congress in 1963, requires the United States Environmental Protection Agency (EPA) to create National Ambient Air Quality Standards (NAAQS) for any pollutants which affect public health and welfare. As of 2007, the EPA had established standards for ozone, carbon monoxide, sulfur dioxide, lead, nitrogen dioxide, and coarse and fine particulates. These standards are reviewed and updated every five years. These NAAQS, known as Title I, are administered by each state in conjunction with the EPA. Each state must submit a State Implementation Plan (SIP) to the EPA for approval which details how the state will comply with the NAAQS. The SIP may be more stringent than the Federal requirements, but must meet them at a minimum. The complications of varying state and local implementation plans generally translate into great variation in the permitting process for new power plants based on their proposed sites. Various state and local regulations, and whether or not those areas meet the NAAQS, play a large role in the negotiation process for emissions requirements at new plants. Also, the future of emissions regulation is cloudy, and more stringent regulations, along with the inevitable increase in worldwide electrical demand, could play a substantial role in determining the eventual market penetration of gasification technology for electrical production.
NETL Comparison of Pulverized Coal Combustion and IGCC Pollutant Emissions

The National Energy Technology Laboratory (NETL) published a detailed performance comparison of three different IGCC technologies along with subcritical and supercritical pulverized coal (PC) power plants, entitled Cost and Performance Baseline for Fossil Fuel Plants1, in 2007. (Natural Gas Combined Cycle (NGCC) was also included; however, since coal is not the feedstock in that scenario, it is not discussed here.) Design principles for the IGCC systems were based on best current design practices listed in the Electric Power Research Institute's CoalFleet User Design Basis Specification for Coal-Based Integrated Gasification Combined Cycle (IGCC) Power Plants: Version 4, while the PC plants were modeled based on incorporating the best commercially available technology that could be implemented in a plant to start operation in 2010. Those comparisons illustrated the typical magnitude of emissions reductions possible for the main pollutants/emissions of concern for IGCC-based systems. The three IGCC technologies far outperformed both subcritical and supercritical PC plants in minimizing these criteria emissions. More detailed discussion of individual emission types can be found on the pages specific to the species in question.

In summary, gasification has inherent advantages over combustion for emissions control. Emission control is simpler in gasification than in combustion because the syngas produced in gasification is at higher temperature and pressure than the exhaust gases produced in combustion. These higher temperatures and pressures allow for easier removal of sulfur and nitrogen oxides (SOx and NOx) and of volatile trace contaminants such as mercury, arsenic, selenium and cadmium. Gasification systems can achieve almost an order of magnitude lower criteria emissions levels than typical current U.S. permit levels and 95+% mercury removal with minimal cost increase.2

1. Simbeck, D., et al., "Coal Gasification Guidebook: Status, Applications, and Technologies," Report prepared for EPRI by SFA Pacific, Inc., TR-102034, Dec 1993.
2. The Future of Coal - An MIT Interdisciplinary Study (Mar 2007)
Gregory Fiete grew up playing outside in the woods, lakes, and swamps of the Deep South and Midwest. When he was a fourth grader in South Carolina, a friend suggested that they go fishing because it was the perfect time. It was late spring, and there was a full moon, which is when the fish would be more likely to bite as they spawned. Fiete, a professor of physics at Northeastern, didn’t call it science at the time. But it was an experience that sparked scientific wonder—a realization that some barren rock in space could influence life on Earth. “It was this idea that there’s a lot of connectedness in nature, a lot of patterns,” Fiete says. “The patterns can be understood and used to do something useful, which for me was catching fish.” Fiete now looks for other kinds of patterns: the ones hidden within electrons of solid matter. He is leading a group of theoretical physicists who are working to understand and predict subatomic mechanisms within materials that could lead to better and faster technologies based on quantum systems. The inner workings of these materials can be difficult to understand because some of the laws of quantum physics aren’t exactly intuitive for most people. Even some physicists would say that certain rules of quantum mechanics can be spooky and make people uncomfortable. Fiete, who spent 10 years at the University of Texas at Austin before joining Northeastern in 2019, thinks that exploring the possibilities hidden within quantum materials through theories and calculations isn’t that much different from recognizing the patterns of activity that helped him make big catches based on moon phases. Now, the question of what is possible (quantum mechanically speaking) is at the heart of Fiete’s research. His team focuses on the fundamental behaviors and characteristics that move electrons to produce new properties within special materials, such as superconductors. “You need to understand how something works, which is about recognizing the patterns in it,” Fiete says. “That’s essentially what science amounts to—we’re just trying to understand how things work.” Physicists over the last century have developed a robust understanding of the particles that make up a material. And scientists like Fiete are pushing the field to dig deeper and build on this knowledge to fully understand the mechanics that control the collective behavior of electrons within solid matter. These motions can have important implications for the ability of a material to conserve energy or transmit heat. Fiete suspects that in the next 80 years, the fundamental understanding that he and other scientists are building will bring the power of technologies based on quantum physics within a closer reach for researchers in several fields. “Many medical technologies now rely on physics, like magnetic resonance imaging machines, and there are various radiation treatments and laser surgeries,” Fiete says. “These technologies are all based on quantum physics, and they are a part of our medical care now.” Unlocking the power of quantum materials could also catalyze a new era of technology that will rely on quantum computers. Such computers could calculate in minutes what would take the supercomputers of today thousands of years. But to do that, researchers need to extrapolate the understanding that theoretical physicists already have of electrons to harness the hidden powers of quantum materials. In 2019, Fiete showed that it is possible to use lasers to enhance materials used in electronics. 
As a laser is pumped through a material, the arrangement of its electrons changes in a way that it supercharges their ability to move electricity. Shooting materials with lasers could enhance their properties in other different ways, including changing magnetic properties or conducting electricity without losing any energy. It could also make for a bundle of multiple materials in one—or generate completely new properties. “There are whole zoos of different types of properties that physicists are interested in,” Fiete says. “An even more intriguing question is, what kinds of new matter can we realize when we keep the laser on?” In 2015, Fiete provided a new theoretical way to find properties emerging from materials known as topological insulators, which are known in the scientific community for their superior ability to conduct electricity. “We keep coming up with a finer and finer comb to distinguish one material property from another,” Fiete says. “And the downstream are these kind of quantum technologies that have the capability to potentially change so many things in commerce, national security, and basic science itself.” Just as internal forces drive quantum materials, Fiete says, the intangible aspects of being a physicist will advance his field. “What matters is that I had a great discussion with a student or another faculty member, and that we understood something and helped move science forward,” says Fiete, whose list of recognitions includes a Presidential Early Career Award for Scientists and Engineers, which he received from President Obama. “It matters that [our] idea will then move the science forward.”
When writing an essay, don't be tempted to simply summarise other writers' ideas. It is your discussion of the topic and your analysis of their ideas that should form the backbone of your essay. What is an essay? An essay is a type of assignment in which you present your point of view on a single topic through the analysis and discussion of academic sources. Usually, an essay has the format of an introduction, body paragraphs and a conclusion. Critical analysis is essential to essay writing. One way you can demonstrate this is by summarising and paraphrasing other writers, by comparing, contrasting and evaluating their ideas. You can use this analysis to construct your own opinions, questions or conclusions. When writing an essay, you need to have a clear position on a topic (sometimes called a thesis statement) in the introduction. You then support your thesis statement in the body of the essay, using relevant ideas and evidence from appropriate sources. It is important that you present your own ideas, opinions and analyses throughout your essay. When you use someone else’s ideas, you must correctly acknowledge it through referencing. Citations are included within the text and a reference list or bibliography at the end of the text, both according to the referencing style required by your unit. What will my marker be looking for in my essay? If in doubt, ask early! Your lecturer and tutor are there to help – and you can always ask for further advice from a Writing Mentor or a Language and Learning Adviser. In general, your marker will be looking for evidence that you have: - answered the essay question directly - met the assignment criteria - drawn on discussions from weekly seminars and classes (your unit’s weekly topics should be your guide for all of your assessments) - provided a position on, and shown understanding of, the topic - completed the set and recommended readings - discussed and analysed sources, and formatted them in the required referencing style - planned your essay so that is readable, clear and logically sequenced, and with a distinct introduction, body and conclusion - kept within the set word limit. How much should I write? Again, always consult your unit guide and assessment instructions for exact details of your assignment. These should clearly state the required word count for your assignment. Do not go dramatically under or over this amount. Usually about 10% over or under is acceptable – but always check with your lecturer first. Planning your essay well before the due date will result in less stress and also less time writing, as you will know exactly how many words you need for each section. If you use the introduction, body and conclusion model, it is recommended to have one main idea per body paragraph. For example, if you have to write a 1000-word essay you might have three body paragraphs of approximately 250 words each, leaving 125 words for both the introduction and the conclusion. A reference list or bibliography – formatted according to your referencing style – on a separate page at the end of your essay is also usually required. Normally this is not included in the word count, but check with your lecturer or tutor to be sure. Use the Guide to essay paragraph structure and the Essay paragraph planner on this page to plan your next essay. Here are some ideas for structuring your essay. Always check the assignment criteria and other information in your unit site for specific requirements. If you are not sure, ask your lecturer or tutor. 
You can also get further advice from a Writing Mentor or a Language and Learning Adviser. For further details and examples, download the Guide to essay paragraph structure (PDF) from this page. Try to begin and end each paragraph with your own thoughts rather than quoting or paraphrasing someone else’s words. Remember that your marker will be looking for your opinion, your discussion and your analysis of ideas. Remember that these are the first words your marker will read, so always try to make a great first impression, to ensure that you provide your marker with a clear and accurate outline of what is to follow in your essay. Don’t go into too much detail in the introduction. Save the detail for the body of your essay. - Provide background information about the topic. Introduce and define some of the key concepts discussed in the essay. - Respond directly to the essay question and clearly state what your essay intends to achieve. - Provide an overview of some of the main points, or direction, of the essay. - Be sure to revise the introduction in your final draft, so that it accurately reflects any changes you may have made to the body and conclusion of your essay - Start each paragraph with a topic sentence. This is the main point of your paragraph and everything within this paragraph should relate back to it. - Each main point should be relevant to your essay question or thesis statement. - Integrate evidence and examples into your paragraph from your readings to support your point. Do not simply present evidence, but analyse it at each stage, always relating it back to your assignment question. - Be formal, objective and cautious in your writing. - All sources must be cited in text in the referencing style required by your unit. (Citations are also listed in a bibliography or reference list at the end of the essay.) - Consider how you conclude your paragraph and how you might link it to the following paragraph. Conclusions are primarily for summing up what you have presented in the body of your essay. No new information is presented in the conclusion. Use synonyms and paraphrasing so that you do not repeat all your main points word for word. - Summarise your argument and draw on some of the main points discussed in the body of the essay, but not in too much detail. - Tell your reader how your essay has successfully responded to the essay question. - You may return to discuss the background/context of the topic, if relevant. - Where you see a gap in knowledge, you might provide suggestions for further research (optional). Reference list or bibliography Linking words clarify for the reader how one point relates to another. An essay flows cohesively when ideas and information relate to each other smoothly and logically. Here are some common linking words used to: - Introduce and add ideas firstly, secondly, finally, also, another, too, moreover, furthermore, as well as - Illustrate ideas for example, to demonstrate this - Show a result or effect accordingly, therefore, as a result, thus, in order for this to occur - Compare ideas - Contrast ideas in contrast, however, but, in comparison, despite, on one hand ... on the other hand ... - Restate and clarify in other words, to put this another way, this could also be defined as - Sum up or conclude therefore, so, to summarise, to conclude, in conclusion, finally You should also avoid repeating key names and words too many times. Instead, use pronouns that refer back to earlier key words. 
For example: it, they, their, this, these, that, those

Further examples of linking words in academic writing:

The writing process

Planning and researching
- Use the Deakin Assignment Planner to get a better idea of the time required to complete your essay.
- Analyse the assignment question.
- Stuck? Ask your tutor or Study Support.
- Gather relevant information and supporting evidence from class notes and readings.
- Make further notes about any questions you have.
- Researching involves sourcing texts appropriate to your task.
- Use a variety of reading strategies.
- Take notes always with the assignment question in mind.
- Brainstorm the most significant relevant issues/points using lists or a mind map.
- It is important to begin writing as soon as possible – think of writing as a process rather than a goal.
- Write an answer to the question in just one or two sentences – this can form the basis of your thesis statement or argument.
- Plan and structure the body paragraphs of your essay into topic sentences with bullet points for each paragraph.
- Expand on each bullet point to build paragraphs based on evidence, which will also require citations.
- Be formal, objective and cautious in your writing.
- Integrate your sources with your own analysis.
- After reviewing the plan and draft of body paragraphs, write the introduction and conclusion.

Drafting, reviewing and proofreading
- Take a break for at least a day and come back with a fresh pair of eyes.
- Review the marking criteria and assignment instructions again. Ask yourself: Have I done everything required?
- Draft and re-draft your essay.
- Read the paper aloud to find errors in sentence structure and word choice and refine it so there is a more natural flow.
- Save a back-up copy of each draft – and in more than one place!
- Get help with writing and referencing from Study Support.
- Don't leave adding citations and references until the final draft – it can be very time consuming.
- Proofread your essay and make sure it follows any formatting requirements required by the unit.
- Ensure your referencing is correct and consistent.
- Save a back-up copy of your final essay before submitting your assignment!
- Submit your assignment according to your unit's instructions.
Work and energy problems worksheet

Work and Energy Problem F – POWER PROBLEM: Martinus Kuiper of the Netherlands ice skated for 24 h with an average speed of 6.3 m/s. Suppose Kuiper's mass was 65 kg. If Kuiper provided 520 W of power to accelerate for 2.5 s, how much work did he do? SOLUTION Given: P = 520 W, ∆t = 2.5 s. Unknown: W = ? Use the equation for power and rearrange it ...

Energy Work Problems – Some of the worksheets for this concept are: Physics work work and energy, Work energy problems, Name period date, Physics work and energy work solutions, Kinetic energy work, Work word problems, Topic 5 work and energy.

Physics Worksheet Work and Energy, Section: Name: Mr. Lin. 20. A skater of mass 60 kg has an initial velocity of 12 m/s. He slides on ice where the frictional force is 36 N. How far will the skater slide before he stops? 21. A diver of mass m drops from a board 10.0 m above the water surface.

Work, Energy and Power: Problem Set. Problem 1: Renatta Gass is out with her friends. Misfortune occurs and Renatta and her friends find themselves getting a workout. They apply a cumulative force of 1080 N to push the car 218 m to the nearest fuel station. Determine the work done on the car. Audio Guided Solution

Worksheet: Work, Power and Energy, Page 1 of 2. Write the equation and units for work: 1. How much work does Bobby perform in pushing a 35 N crate a distance of 4 meters? (list known values, formula, substitution, answer & units) 2. How far will a 70 N crate be moved if 3500 J of work are accomplished?

Additional Worksheets: Work worksheet, Creating Work Problems worksheet, Energy worksheet, Creating Energy Problems worksheet, Momentum worksheet, Creating Momentum Problems worksheet, Impulse worksheet, Power worksheet, Labs & Activities Directory, Determining the Velocity of a Steel Ball Using a B..., Work, Energy & Momentum Demonstration List

Mechanics: Work, Energy and Power Worksheet. Work Questions: A tugboat pulls a ship with a constant net horizontal force of 5.00 x 10^3 N and causes the ship to move through a harbor. How much work is done on the ship if it moves a distance of 3.00 km?

Determine how to approach the problem. We apply the work-energy theorem. We know that all the car's kinetic energy is lost to friction. Therefore, the change in the car's kinetic energy is equal to the work done by the frictional force of the car's brakes.
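Several of the problems quoted above can be solved in a couple of lines once the relevant definitions (W = F·d, P = W/Δt, KE = ½mv²) are written down. The short Python sketch below works through four of them (the power problem, the skater, the tugboat and the car push) as worked examples; the numbers come straight from the problem statements above.

```python
# Worked solutions for four of the problems quoted above,
# using W = F*d, P = W/dt and the work-energy theorem.

# 1) Power problem: W = P * dt
P, dt = 520.0, 2.5                 # W, s
work_skater_accel = P * dt
print(f"Kuiper's work while accelerating: {work_skater_accel:.0f} J")   # 1300 J

# 2) Skater stopped by friction: all KE is removed by friction work, f*d = 1/2 m v^2
m, v, f = 60.0, 12.0, 36.0         # kg, m/s, N
kinetic_energy = 0.5 * m * v**2
stopping_distance = kinetic_energy / f
print(f"Skater slides {stopping_distance:.0f} m before stopping")       # 120 m

# 3) Tugboat: W = F * d with the distance converted to metres
F_tug, d_tug = 5.00e3, 3.00e3      # N, m (3.00 km)
print(f"Work done on the ship: {F_tug * d_tug:.2e} J")                  # 1.50e7 J

# 4) Renatta's car: W = F * d
F_car, d_car = 1080.0, 218.0       # N, m
print(f"Work done pushing the car: {F_car * d_car:.0f} J")              # 235440 J
```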
The early childhood curriculum, Te Whāriki, provides a framework for early childhood services to implement a curriculum that supports children's competence and confidence as learners. Developing social competence enables children to relate to others in ways that enrich and extend their learning. Educators have a key role in nurturing children's emotional wellbeing and helping children to develop an understanding of appropriate behaviour. The Education Review Office (ERO) evaluated how effectively early childhood services helped children to develop social competence, emotional wellbeing and an understanding of appropriate behaviour. ERO gathered data for this evaluation from 310 early childhood services during their regular scheduled education reviews in Terms 2 and 3, and part of Term 4, 2010. This report discusses the areas of strength, and areas for development, that ERO found. It also describes the practices of specific service types - Playcentres, kindergartens and education and care services - in supporting children's social competence and understanding of appropriate behaviour.

Early childhood services were generally very good at helping children to learn alongside other children and adults, and to understand the limits and boundaries of acceptable behaviour. In 45 percent of the services reviewed, educators used practices that were highly effective in assisting children to develop social and emotional competence. In a further 38 percent, practices were mostly effective. Fourteen percent had somewhat effective practices and three percent were not effective.

In services with highly effective practice, educators acknowledged and valued children's cultural background and the experiences and perspectives they brought to their learning. Interactions with children were sensitive, caring and respectful. Educators had high expectations for children and they took account of parents' aspirations in setting these expectations. They were attuned to younger children, responding sensitively to their body language and including them in conversations. Learning environments were calm and unhurried, allowing time for rich conversations and opportunities for educators to work alongside children, supporting their interactions with others.

Socially and emotionally competent children were observed by ERO to be:

Common features of highly effective practice in supporting children's developing social competence and understanding of appropriate behaviour, across the service types, included:

Where practice was not effective, this was largely due to educators' limited understanding of policy expectations and an associated lack of consistency. Turnover of educators, and/or a lack of professional leadership and support, also contributed to poor practice. Other issues related to curriculum implementation, especially educators not being responsive to children's needs and having poor quality interactions with them. In some services, the learning environment did not support children. Children's behaviour, learning and development were not helped by their limited access to resources and the poor management of group times.
Types of Monomers | Sciencing

When combined with other monomers, polymers are formed. Food in the forms of carbohydrates, proteins and fats derives from the linkage of several monomers. Homopolymers are polymers made by joining together monomers of the same kind, while heteropolymers are polymers composed of more than one kind of monomer.

Monomers of Carbohydrates: Cells use glucose for cellular respiration. Glucose forms the basis of many carbohydrates. Other simple sugars include galactose and fructose, and these also bear the same chemical formula but are structurally different isomers. The pentoses are simple sugars such as ribose, arabinose and xylose. Combining the sugar monomers creates disaccharides (made from two sugars) or larger polymers called polysaccharides. For example, sucrose (table sugar) is a disaccharide that derives from adding two monomers, glucose and fructose. Other disaccharides include lactose (the sugar in milk) and maltose (a product of starch breakdown). An enormous polysaccharide made from many monomers, starch serves as the chief storage of energy for plants, and it cannot be dissolved in water. Starch is made from a huge number of glucose molecules as its base monomer. Starch makes up seeds, grains and many other foods that people and animals consume. The enzyme amylase works to revert starch back into the base monomer glucose. Glycogen is a polysaccharide used by animals for energy storage. Glycogen differs from starch by having more branches. When cells need energy, glycogen can be broken down via hydrolysis back into glucose. Long chains of glucose monomers also make up cellulose, a linear, flexible polysaccharide found around the world as a structural component in plants. Many animals cannot fully digest cellulose, with the exception of ruminants and termites. Another example of a polysaccharide, the more brittle macromolecule chitin, forges the shells of many animals such as insects and crustaceans. Simple sugar monomers such as glucose therefore form the basis of living organisms and yield energy for their survival.

Monomers of Fats: Fats are a type of lipid, hydrophobic (water-repellent) molecules. The base monomer for fats is the alcohol glycerol, which contains three carbons with hydroxyl groups, combined with fatty acids. Fats yield twice as much energy as the simple sugar glucose. For this reason fats serve as a kind of energy storage for animals. Fats with two fatty acids and one glycerol are called diacylglycerols, or phospholipids. Lipids with three fatty acid tails and one glycerol are called triacylglycerols, the fats and oils. Fats also provide insulation for the body and the nerves within it, as well as plasma membranes in cells.

Monomers of Proteins: An amino acid is a subunit of protein, a polymer found throughout nature. An amino acid is therefore the monomer of protein. Proteins provide numerous functions for living organisms.
Several amino acid monomers join via peptide (covalent) bonds to form a protein. Two bonded amino acids make up a dipeptide. Three amino acids joined make up a tripeptide, and four amino acids make up a tetrapeptide. With this convention, proteins with more than four amino acids also bear the name polypeptides. Proteins are built from a set of 20 standard amino acids, each of which contains a central carbon bonded to a carboxyl group and an amine group. The amino acids form chains as a primary structure, and additional secondary forms occur with hydrogen bonds leading to alpha helices and beta pleated sheets. Folding of the chain leads to active proteins in the tertiary structure. Additional folding and bending yields stable, complex quaternary structures such as collagen. Collagen provides structural foundations for animals. The protein keratin provides animals with skin and hair and feathers. Proteins also serve as catalysts for reactions in living organisms; these are called enzymes. Proteins serve as communicators and movers of material between cells. For example, the cytoskeletal protein actin helps move material within the cells of most organisms. The varying three-dimensional structures of proteins lead to their respective functions. Changing the protein structure leads directly to a change in protein function.

Nucleotides as Monomers: Nucleotide sequences serve as the blueprint for the construction of amino acid chains, which in turn comprise proteins. Nucleotides store information and transfer energy for organisms. Nucleotides are the monomers of the natural, linear polymers called nucleic acids, such as deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). Nucleotide monomers are made of a five-carbon sugar, a phosphate and a nitrogenous base. Bases include adenine and guanine, which are derived from purine, and cytosine and thymine (for DNA) or uracil (for RNA), which are derived from pyrimidine. The combined sugar and nitrogenous base yield different functions. Nucleotides form the basis for many molecules needed for life. One example is adenosine triphosphate (ATP), the chief delivery system of energy for organisms. Adenine, ribose and three phosphate groups make up ATP molecules. Phosphodiester linkages connect the sugars of nucleic acids together. These linkages possess negative charges and yield a stable macromolecule for storing genetic information. RNA, which contains the sugar ribose and the bases adenine, guanine, cytosine and uracil, performs a variety of roles inside cells. RNA exists as a single strand. DNA is the more stable molecule, forming a double helix configuration, and is therefore the prevalent polynucleotide for cells. DNA contains the sugar deoxyribose and the four nitrogenous bases adenine, guanine, cytosine and thymine, which make up the nucleotide bases of the molecule.

Polysaccharides are excellent energy storage molecules because they are easily built and broken down by enzymes. Forming fairly compact structures, polysaccharides allow energy storage without the space required by a pool of free glucose monomers. Other polysaccharides form strong fibers that provide protection and structural support in both plants and animals. With small differences in the bond between monomers, polymers can function as compact energy storage units in starch and glycogen or as strong, protective fibers in cellulose and chitin.

- What is the relationship between monomers and polymers? Give an example using proteins.
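The review question above ("give an example using proteins") can be made concrete with a short calculation. The sketch below is a minimal Python example, with average monomer masses rounded for illustration; it assembles a small polypeptide from amino acid monomers and shows the condensation bookkeeping: each peptide bond formed releases one water molecule, so the polymer's mass is the sum of the monomer masses minus one water per bond.

```python
# Minimal illustration of monomers -> polymer using amino acids.
# Average monomer masses (g/mol) are rounded; treat them as approximate.

MONOMER_MASS = {
    "Gly": 75.07,   # glycine
    "Ala": 89.09,   # alanine
    "Ser": 105.09,  # serine
    "Leu": 131.17,  # leucine
}
WATER = 18.02  # g/mol lost for every peptide bond formed

def polypeptide_mass(residues):
    """Mass of a peptide built by condensation of the listed amino acids."""
    if not residues:
        return 0.0
    bonds = len(residues) - 1            # one peptide bond per pair of neighbours
    return sum(MONOMER_MASS[r] for r in residues) - bonds * WATER

peptide = ["Gly", "Ala", "Ser", "Leu"]   # a tetrapeptide, per the text's naming
print(f"{'-'.join(peptide)}: {polypeptide_mass(peptide):.2f} g/mol "
      f"({len(peptide) - 1} peptide bonds, {len(peptide) - 1} waters released)")
```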
Understanding the structure, synthesis, and breakdown of carbohydrate polymers provides a framework for understanding their function in living cells. Animals, including humans, create glucose polymers called glycogen. The position of the glycosidic linkage between glucose monomers causes glycogen polymers to coil into spiral shapes. Glycogen polymers are significantly branched, with several monomers in the primary chain containing a second glycosidic linkage to a different glucose. The second attachment sites allow shorter glucose chains to branch away from the main chain, packing more glucose units into the compact coiled structure. Animals initiate enzyme-driven hydrolysis reactions to break down glycogen when energy is needed. For quick access to energy, glycogen is stored primarily in two locations in humans: the liver, for easy delivery into the bloodstream, and muscles, for direct use as needed. Plants synthesize two types of polysaccharides, starch and cellulose. The glycosidic bonds between glucose units in plant starch are similar to those in animal glycogen. Accordingly, starch molecules are structurally similar, forming compact coils, and play a similar role in energy storage for plants. Unlike glycogen, starch molecules vary widely in the level of branching. Most plants form a mixture of starch polymers with little to no branching and polymers with extensive branching. In addition to providing energy for the plants that synthesize them, starches serve as the main food source for many animals. Humans and other animals produce enzymes that degrade starch molecules into small fragments during digestion. In humans, this digestion begins in the mouth with an enzyme called amylase, which degrades starch polymers into the disaccharide maltose. To experience starch digestion yourself, try chewing an unsalted cracker for a long time.
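As a companion to the condensation example above, the following sketch runs the bookkeeping in the other direction: hydrolysis of a starch chain by amylase, as described in the passage. It is a simplified model (it assumes a straight, unbranched chain with an even number of glucose units and complete digestion to maltose), intended only to show that water is consumed as glycosidic bonds are broken and that n glucose residues yield n/2 maltose molecules.

```python
# Simplified model of amylase digestion: an unbranched starch chain of
# n glucose residues is hydrolysed completely to maltose (2 glucose units).
# Assumes n is even and ignores branching and intermediate dextrins.

GLUCOSE = 180.16   # g/mol
WATER   = 18.02    # g/mol

def starch_mass(n):
    """Mass of a linear chain of n glucose residues (n-1 glycosidic bonds)."""
    return n * GLUCOSE - (n - 1) * WATER

def digest_to_maltose(n):
    """Return (number of maltose molecules, water consumed, product mass)."""
    maltose_count = n // 2
    bonds_broken = (n - 1) - maltose_count     # bonds left intact inside the maltose units
    maltose_mass = 2 * GLUCOSE - WATER
    return maltose_count, bonds_broken * WATER, maltose_count * maltose_mass

n = 10
count, water_used, product_mass = digest_to_maltose(n)
print(f"Starch chain of {n} glucose units: {starch_mass(n):.2f} g/mol")
print(f"Digestion yields {count} maltose, consuming {water_used:.2f} g/mol of water")
print(f"Mass check: {starch_mass(n) + water_used:.2f} == {product_mass:.2f}")
```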
Delegate: Owen Bishop Topic: Access to Water The issue of access to safe and clean water is one of the most pressing that the world currently faces. Water is a human right and necessary for survival for everyone on earth as humans can only survive for three days without water. The UN needs to take action to aid developing nations in being able to provide water for their people. Additionally, it is necessary for developed nations to ensure that their water supplies are not merely available but also clean as to avoid the spread of disease and other issues. Belgium heavily recognizes the importance of water as a human right by both providing financial support to those who can’t pay water bills as well as through a minimum supply of guaranteed water. This has allowed Belgian people to have a reliable source of clean water allowing them to be able to live without fear of not having access to this critical supply. Belgium would be in favor of potential resolutions passed to encourage other nations to enact similar policies to better ensure that people are not failing to receive water even in wealthy countries with access to it. The other pressing issue is ensuring that the water people have access to is clean. In 2003, Belgium was labeled as having one of the worst water supplies in the world. Since then, Belgium has improved infrastructure to better clean their water supply. However, the majority of the water used in Belgium comes largely from the southern regions as the north is lacking a clean supply. Through this, the committee can see that it is critical that nations have the ability to improve their infrastructure as well as revealing the importance of sharing water both inside of a nation and between multiple nations to ensure that everyone has access to clean, safe drinking water. - Owen Bishop
Why did Britain help Jews establish the Jewish state? (21 March 2017, 23 Adar, 5777) The Balfour Declaration in 1917 and the British Establishment of what Became the State of Israel The British issued the Balfour Declaration in 1917 and proceeded to establish a Mandate over the area of Palestine. This laid the foundations for the State of Israel. The British did much more to bring about the creation of modern Israel than they themselves admit and much, much more than is commonly realized. # The British kept their promises to the Zionists. They opened up the country to mass Jewish immigration; by 1948, the Jewish population had increased by more than tenfold. The Jews were permitted to purchase land, develop agriculture, and establish industries and banks. The British allowed them to set up hundreds of new settlements, including several towns. They created a school system and an army; they had a political leadership and elected institutions; and with the help of all these they in the end defeated the Arabs, all under British sponsorship, all in the wake of that promise of 1917. Contrary to the widely held belief of Britain's pro-Arabism, British actions considerably favored the Zionist enterprise. # Tom Segev, "One Palestine Complete", USA, 2001, p.5. All this is a subject for another time. At present, granted that they did do something the question is why did they do it? The Bible and the British The British are identifiable as Israelites because they were located at Ends of the Earth (Isaiah 24:16, 26:15, 41:8-9 43:6 49:6); dwelt in Islands (Isaiah 24:15 49:6 60:9 Jeremiah 31: 9-10); were associated with Tarshish meaning the Atlantic Ocean area (Isaiah 60:9); were in the west (Isaiah 24:14, Hosea 1:10); Located to the Northwest of the Land of Israel (Isaiah 49:12 ) and numerous other proofs. Their national coat of arms included a Lion and Unicorn which were destined to be symbols of Israel in the End Times (Numbers 24:7-9). They would also Rule Over OTHER Peoples (Genesis 27:29 48:19); be Recognizable as a "Brit-Am" (Isaiah 42:6 49:8) i.e. a Covenant of the People or Commonwealth; They would be Seafarers (Isaiah 42:10); They would be the Dominant World Power  (Numbers 24:7-9 Micah 5:7-9);  the Battle-Axe of the Almighty or "Police-Man of the Globe" (Jeremiah 51:20 Zechariah 10:7); possess the GATE(s) OF YOUR ENEMIES i.e. International Strategic Points (Genesis 22:17 24:60); and so on. Anglo-Hebrews. Britain, and its Offshoots and the USA as Israelites. Biblical Proofs We believe that the British did what they did to help the Jews because many of them are descended from the Ten Tribes of Israel and amongst those Israelites Joseph is predominant especially from the Tribe of Ephraim. This however is the real explanation on the spiritual subconscious level. This may have been their destined task as being who they were BUT they were not conscious of it. Every such spiritual foundational causative principle has more prosaic interfaces. In other words even though the real reason is Hebrew Ancestry (the British are descended from Israelites especially Joseph) expression may given unto it by other forces. 
The moving elements in the British Psyche leading to British Zionism that may be measured in physical historical terms include: The Restoration Movement from the 1600s or earlier; a Literal Bible Heritage; Moral Imperative; a Need to Leave a Mark on the World; the Instinct to Remain Involved; a tradition that involved Defiance of the World and have often been a Protector of the Jews as exemplified by Henry-viii. An explanation for the above points is briefly given below. Restoration Movement from the 1600s or earlier. In England there were scholars, philanthropists, military adventurers, mystics, writers, politicians, and religious innovators who were all of the opinion that restoring the Jews to their homeland was necessary. Some of these saw this as a first step to converting the Jews to Christianity but most viewed it as something desirable in itself. From the 1600s to 1917, individuals with similar views were to be found in France, Germany, Spain, the USA, and probably elsewhere as well. Nevertheless, the Restoration Movement was overwhelmingly British. Franz Kobler, "THE VISION WAS THERE", UK 1956 pp.7-9, wrote: #"Nowhere more than in Britain has the idea of the Restoration of the Jews been developed into a doctrine and become the object of a movement extending over more than three centuries. Only in Britain the leading spokesmen of many generations have been inspired by the vision of a revived Israel. Only there the creation of a Jewish National Home has been a serious and almost continuous political issue which was finally translated into reality" # "The movement [i.e. Restoration of the Jewish Independent Kingdom] [is].. an integral part of British religious, social and political history forming a parallel, not an annex, of the histories of Jewish Messianism and Zionism .... The recognition of Israel's Restoration as an organic part of British political ideas... a genuine religious, humanitarian and political trend within British history." Franz Kobler, "THE VISION WAS THERE", UK 1956 pp.7-9. Literal Bible Heritage People of the British Isles had a tendency to take the Bible literally. This was exemplified by the religion of the Plymouth Brethren who although small in number had some influence. They understand the Bible to require the Restoration of the Jews for the fulfillment of prophecy. The British were never wholly selfish. They freed the slaves. They introduced some reforms. The Jews in Russia and elsewhere were suffering from pogroms and persecution. They wanted to help. If help was to be given they wished to be the ones giving it. By analogy someone pointed out that with the fall of France in 1940 there were people in Britain who felt relieved. It was good to be alone at last to do what was needed to be done. To Leave a Mark on the World The world was changing. There were independence movements in the Empire. Socialism and libertarian ideas were becoming popular. New forces were coming up. It was only a matter of time before Britain would need to adapt. It would be good to have created a Jewish Home, to leave a legacy while they were still able to do so. To Remain Involved Britain was going down, becoming smaller. Nevertheless no matter what its size it would need to remain a player. Inserting an extra card or two might make things interesting at some point. Defiance of the World: Protector of the Jews Henry-viii (1491-1547) is known for having executed two of his six wives and doing other deeds of dubious probity. Nevertheless Henry was not all bad. 
He accomplished much of value. The attitude of Henry towards the Jews tells us something of the English character. The Jews had been expelled from England in 1290. Prince Arthur, the brother of Henry, was the heir presumptive. Arthur as a boy was married off to Catherine of Aragon, the daughter of the King of Spain. The marriage agreement included a clause that the exclusion of Jews would remain in effect. Arthur died prematurely; Henry later married Catherine, the widow of his brother, and became king. Marranos were descendants of Jews from Spain who had been forced to convert to Christianity. They were often suspected of secretly remaining attached to Judaism. Henry had Jewish Marrano musicians from Italy at his court. The Papal legate protested. Henry sent them out and then brought them back again, one by one. Henry successfully intervened with the King of Spain to lighten the persecution of Marrano Jews in the Netherlands, which was then ruled by Spain. Henry tried to get his marriage annulled on legal and religious grounds. His experts appealed to Jewish Rabbinical sources and consulted with Rabbis in Italy. David S. Katz has written on this. This was not so much a love of Jews as a refusal to relinquish the use of them: a stance that was, to be fair, practical and innovative. We must also acknowledge the presence of Jewish Zionism and the influence of Jews and their sympathizers everywhere. This too was a determinative element and perhaps the one that had overwhelming importance. Nevertheless the other factors existed and are not to be dismissed.
A world first, the A-Train is a constellation of six Franco-American satellites flying in formation a few minutes apart in a sun-synchronous orbit that crosses the equator at 13:30 local time. This space rendezvous is designed to make almost simultaneous use of all the observation techniques currently available to scrutinise the Earth's atmosphere. Independent but complementary, the different A-Train missions are all concerned with the climate and the study of the interactions between radiation, clouds, aerosols and the water cycle. The matrix of measurements from their fifteen instruments will provide the scientific community with an unprecedented wealth of data, which will be used to test and improve numerical forecasting models for weather, climate and pollution. Aqua (NASA), Aura (NASA), Calipso (NASA/CNES), Cloudsat (NASA/ASC), Parasol (CNES) and OCO (NASA) together make an exceptional space observatory, combining all the active and passive measurement techniques to gain a better understanding of how the climate system works. In particular, the A-Train identifies the various types of aerosol and helps to explain their direct and indirect effects on the climate; the role of polar stratospheric clouds in the hole in the ozone layer is also being studied. AQUA, which has been in orbit since 4 May 2002, is considered to be the "leader" of the constellation because it is the first to cross the equator every day (at 13:30 local time for ascending orbits) and night (at 01:30 for descending orbits), but also because it is the largest. Its mission is focussed on the water cycle.
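To give a feel for what "flying in formation a few minutes apart" means in distance, here is a small orbital back-of-the-envelope calculation in Python. The altitude (roughly 705 km, Aqua's nominal orbit) and the example separation of two minutes are assumptions made for illustration; the text itself only says "a few minutes".

```python
import math

# Rough along-track geometry of the A-Train formation.
# Assumptions: circular orbit at ~705 km altitude (Aqua's nominal orbit)
# and an example along-track separation of 2 minutes between satellites.

MU = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0    # mean Earth radius, m
ALTITUDE = 705_000.0     # assumed orbit altitude, m

r = R_EARTH + ALTITUDE
speed = math.sqrt(MU / r)                 # circular orbital speed, m/s
period = 2 * math.pi * r / speed          # orbital period, s

separation_s = 2 * 60                     # "a few minutes" -> take 2 min as an example
separation_km = speed * separation_s / 1000

print(f"Orbital speed:   {speed/1000:.2f} km/s")
print(f"Orbital period:  {period/60:.1f} minutes")
print(f"A 2-minute gap corresponds to ~{separation_km:.0f} km along the orbit")
```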
Millions of people experience acid reflux and heartburn. The most frequently used treatment involves commercial medications, such as omeprazole. However, lifestyle modifications may be effective as well. Simply changing your dietary habits or the way you sleep may significantly reduce your symptoms of heartburn and acid reflux, improving your quality of life. Acid reflux is when stomach acid gets pushed up into the esophagus, which is the tube that carries food and drink from the mouth to the stomach. Some reflux is totally normal and harmless, usually causing no symptoms. But when it happens too often, it burns the inside of the esophagus. An estimated 14–20% of all adults in the US have reflux in some form or another ( The most common symptom of acid reflux is known as heartburn, which is a painful, burning feeling in the chest or throat. Researchers estimate that around 7% of Americans experience heartburn daily (2). Of those who regularly experience heartburn, 20–40% are diagnosed with gastroesophageal reflux disease (GERD), which is the most serious form of acid reflux. GERD is the most common digestive disorder in the US ( In addition to heartburn, common symptoms of reflux include an acidic taste at the back of the mouth and difficulty swallowing. Other symptoms include a cough, asthma, tooth erosion and inflammation in the sinuses ( So here are 14 natural ways to reduce your acid reflux and heartburn, all backed by scientific research. Where the esophagus opens into the stomach, there is a ring-like muscle known as the lower esophageal sphincter. It acts as a valve and is supposed to prevent the acidic contents of the stomach from going up into the esophagus. It naturally opens when you swallow, belch or vomit. Otherwise, it should stay closed. In people with acid reflux, this muscle is weakened or dysfunctional. Acid reflux can also occur when there is too much pressure on the muscle, causing acid to squeeze through the opening. One step that will help minimize acid reflux is to avoid eating large meals. Summary: Avoid eating large meals. Acid reflux usually increases after meals, and larger meals seem to make the problem worse. The diaphragm is a muscle located above your stomach. In healthy people, the diaphragm naturally strengthens the lower esophageal sphincter. As mentioned earlier, this muscle prevents excessive amounts of stomach acid from leaking up into the esophagus. However, if you have too much belly fat, the pressure in your abdomen may become so high that the lower esophageal sphincter gets pushed upward, away from the diaphragm's support. This condition is known as hiatus hernia. Several observational studies show that extra pounds in the abdominal area increase the risk of reflux and GERD ( Controlled studies support this, showing that weight loss may relieve reflux symptoms ( Losing weight should be one of your priorities if you live with acid reflux. Summary: Excessive pressure inside the abdomen is one of the reasons for acid reflux. Losing belly fat might relieve some of your symptoms. Growing evidence suggests that low-carb diets may relieve acid reflux symptoms. Scientists suspect that undigested carbs may be causing bacterial overgrowth and elevated pressure inside the abdomen. Some even speculate this may be one of the most common causes of acid reflux. Studies indicate that bacterial overgrowth is caused by impaired carb digestion and absorption. Summary: Acid reflux might be caused by poor carb digestion and bacterial overgrowth in the small intestine. 
Low-carb diets appear to be an effective treatment, but further studies are needed. Drinking alcohol may increase the severity of acid reflux and heartburn. Summary: Excessive alcohol intake can worsen acid reflux symptoms. If you experience heartburn, limiting your alcohol intake might help ease some of your pain. However, one study that gave participants caffeine in water was unable to detect any effects of caffeine on reflux, even though coffee itself worsened the symptoms. These findings indicate that compounds other than caffeine may play a role in coffee's effects on acid reflux. The processing and preparation of coffee might also be involved ( Nevertheless, although several studies suggest that coffee may worsen acid reflux, the evidence is not entirely conclusive. One study found no adverse effects when acid reflux patients consumed coffee right after meals, compared to an equal amount of warm water. However, coffee increased the duration of reflux episodes between meals ( Additionally, an analysis of observational studies found no significant effects of coffee intake on the self-reported symptoms of GERD. Yet, when the signs of acid reflux were investigated with a small camera, coffee consumption was linked with greater acid damage in the esophagus ( Whether coffee intake worsens acid reflux may depend on the individual. If coffee gives you heartburn, simply avoid it or limit your intake. Summary: Evidence suggests that coffee makes acid reflux and heartburn worse. If you feel like coffee increases your symptoms, you should consider limiting your intake. Gum that contains bicarbonate appears to be especially effective ( These findings indicate that chewing gum — and the associated increase in saliva production — may help clear the esophagus of acid. However, it probably doesn't reduce the reflux itself. Summary: Chewing gum increases the formation of saliva and helps clear the esophagus of stomach acid. One study in people with acid reflux showed that eating a meal containing raw onion significantly increased heartburn, acid reflux and belching compared with an identical meal that didn't contain onion ( Raw onions might also irritate the lining of the esophagus, causing worsened heartburn. Whatever the reason, if you feel like eating raw onion makes your symptoms worse, you should avoid it. Summary: Some people experience worsened heartburn and other reflux symptoms after eating raw onion. Patients with GERD are sometimes advised to limit their intake of carbonated beverages. One observational study found that carbonated soft drinks were associated with increased acid reflux symptoms ( The main reason is the carbon dioxide gas in carbonated beverages, which causes people to belch more often — an effect that can increase the amount of acid escaping into the esophagus ( Summary: Carbonated beverages temporarily increase the frequency of belching, which may promote acid reflux. If they worsen your symptoms, try drinking less or avoiding them altogether. In a study of 400 GERD patients, 72% reported that orange or grapefruit juice worsened their acid reflux symptoms ( The acidity of citrus fruits doesn't appear to be the only factor contributing to these effects. Orange juice with a neutral pH also appears to aggravate symptoms ( Since citrus juice doesn't weaken the lower esophageal sphincter, it is likely that some of its constituents irritate the lining of the esophagus ( While citrus juice probably doesn't cause acid reflux, it can make your heartburn temporarily worse. 
Summary: Most patients with acid reflux report that drinking citrus juice makes their symptoms worse. Researchers believe citrus juice irritates the lining of the esophagus. GERD patients are sometimes advised to avoid or limit their consumption of chocolate. However, the evidence for this recommendation is weak. One small, uncontrolled study showed that consuming 4 ounces (120 ml) of chocolate syrup weakened the lower esophageal sphincter ( Another controlled study found that drinking a chocolate beverage increased the amount of acid in the esophagus, compared to a placebo ( Nevertheless, further studies are needed before any strong conclusions can be made about the effects of chocolate on reflux symptoms. Summary: There is limited evidence that chocolate worsens reflux symptoms. A few studies suggest it might, but more research is needed. Peppermint and spearmint are common herbs used to flavor foods, candy, chewing gum, mouthwash and toothpaste. They are also popular ingredients in herbal teas. One controlled study of patients with GERD found no evidence for the effects of spearmint on the lower esophageal sphincter. Yet, the study showed that high doses of spearmint may worsen acid reflux symptoms, presumably by irritating the inside of the esophagus ( If you feel like mint makes your heartburn worse, then avoid it. Summary: A few studies indicate that mint may aggravate heartburn and other reflux symptoms, but the evidence is limited. Some people experience reflux symptoms during the night ( This may disrupt their sleep quality and make it difficult for them to fall asleep. One study showed that patients who raised the head of their bed had significantly fewer reflux episodes and symptoms, compared to those who slept without any elevation ( Additionally, an analysis of controlled studies concluded that elevating the head of the bed is an effective strategy to reduce acid reflux symptoms and heartburn at night ( Summary: Elevating the head of your bed may reduce your reflux symptoms at night. People with acid reflux are generally advised to avoid eating within the three hours before they go to sleep. Although this recommendation makes sense, there is limited evidence to back it up. One study in GERD patients showed that having a late evening meal had no effects on acid reflux, compared to having a meal before 7 p.m. ( However, an observational study found that eating close to bedtime was associated with significantly greater reflux symptoms when people were going to sleep ( More studies are needed before solid conclusions can be made about the effect of late evening meals on GERD. It may also depend on the individual. Summary: Observational studies suggest that eating close to bedtime may worsen acid reflux symptoms at night. Yet, the evidence is inconclusive and more studies are needed. The reason is not entirely clear, but is possibly explained by anatomy. The esophagus enters the right side of the stomach. As a result, the lower esophageal sphincter sits above the level of stomach acid when you sleep on your left side ( When you lay on your right side, stomach acid covers the lower esophageal sphincter. This increases the risk of acid leaking through it and causing reflux. Obviously, this recommendation may not be practical, since most people change their position while they sleep. Yet resting on your left side might make you more comfortable as you fall asleep. Summary: If you experience acid reflux at night, avoid sleeping on the right side of your body. 
Some scientists claim that dietary factors are a major underlying cause of acid reflux. While this might be true, more research is needed to substantiate these claims. Nevertheless, studies show that simple dietary and lifestyle changes can significantly ease heartburn and other acid reflux symptoms.
A research group — coordinated by Giuseppe Legname from The Laboratory of Prion Disease, SISSA, Trieste, Italy — has found a way to build artificial prions, assembling the proteins in a serial manner. The study was performed in collaboration with the Carlo Besta Neurological Institute in Milan and the results were published in the journal PLOS Pathogens. The idea behind the study, as described in a press release, was that building something from scratch is sometimes the best way to understand the properties of an object — in this case, prions. With the help of the newly synthesized prions the team hopes to further its understanding of prion-based diseases. Prions are misfolded proteins that can induce normally folded copies of the same protein to misfold in turn, driving transmissible neurodegenerative disease. A difficulty in prion research is that natural prions, being a heterogeneous and sometimes poorly characterized group, can be difficult to control. Synthesizing the prions allowed the team to exert minute control over their pathogenic behavior in experimental animals. “When we ‘characterized’ them, we also observed that they were very similar to the ones responsible for Mad Cow and Creutzfeldt-Jakob disease, the human form of the illness,” said Dr. Legname, adding that, “The synthetic ones we created ourselves, however, are easier to control, homogeneous and structurally defined. And yet they still show the same consequences as biological ones. Our ultimate goal, of course, is to identify mechanisms which can block the pathogenic effect, in order to develop treatments for disease.” The artificial prions retained the activity of their biological counterparts, and the team managed to induce prion disease-associated changes in mice using the synthetic prions. The research has implications for neurodegenerative disorders such as amyotrophic lateral sclerosis (ALS), where prion-like mechanisms have been observed and are believed to contribute to disease development. “Naturally, our line of research is already evolving. We will be working with human prions, and we have other projects as well,” Dr. Legname said. “We are thinking about the molecules responsible for Alzheimer’s, like amyloid beta, or Parkinson’s, or even amyotrophic lateral sclerosis. In these cases as well, having synthetic molecules available could be an important step forward.” Dr. Legname concluded, “It is the first time that something like this has been done, and the consequences for research are significant.”
Most people don't spend much time pondering the diameter of their pupils. The fact is that we don't have much control over our pupils, the openings in the center of the irises that allow light into the eyes. Short of chemical interventions—such as the eyedrops ophthalmologists use to widen their patients' pupils for eye exams—the only way to dilate or shrink the pupils is by changing the amount of available light. Switch off the lamp, and your pupils will widen to take in more light. Step out into the sun, and your pupils will narrow. Mechanical though they may be, the workings of pupils are allowing researchers to explore the parallels between imagination and perception. In a recent series of experiments, University of Oslo cognitive neuroscientists Bruno Laeng and Unni Sulutvedt began by displaying triangles of varying brightness on a computer screen while monitoring the pupils of the study volunteers. The subjects' pupils widened for dark shapes and narrowed for bright ones, as expected. Next, participants were instructed to simply imagine the same triangles. Remarkably, their pupils constricted or dilated as if they had been staring at the actual shapes. Laeng and Sulutvedt saw the same pattern when they asked subjects to imagine more complex scenes, such as a sunny sky or a dark room. Imagination is usually thought of as “a private and subjective experience, which is not accompanied by strongly felt or visible physiological changes,” Laeng says. But the new findings, published in Psychological Science, challenge that idea. The study suggests that imagination and perception may rely on a similar set of neural processes: when you picture a dimly lit restaurant, your brain and body respond, at least to some degree, as if you were in that restaurant. The new experiments complement popular methods for studying consciousness by providing visual stimulation to participants without their awareness. Joel Pearson, a cognitive neuroscientist at the University of New South Wales in Australia, explains that mental imagery research takes the opposite approach, allowing subjects conscious awareness of a mental image without the accompanying stimulation. Perhaps by combining the two approaches, scientists can better understand how consciousness works.
Hidden in a quiet forest clearing in Croatia, a 65-foot memorial tower marks the spot where Yugoslavia began its organized resistance against the invading Nazi forces.

In Soviet history books, World War II began in June 1941 when Nazi Germany broke the Molotov–Ribbentrop Pact (the treaty of non-aggression between Germany and the Soviet Union) and began advancing towards Moscow on the Eastern Front. Soviet allies and other Communist groups across Eastern Europe began preparing for the worst, and in Sisak, modern-day Croatia, members of the Yugoslav Communist Party formed the region’s first anti-fascist fighting brigade: the 1st Sisak Partisan Detachment.

According to the stories, these early Yugoslav partisans met in secret in the forest, beneath the shadow of an old elm tree. As the Nazis advanced on the Balkans, the group would go on to play an important role in Yugoslavia’s National Liberation Movement, destroying bridges and ammo stores and disrupting Nazi supply lines.

Immediately after the war, a commemorative plaque was set beneath the elm tree outside Sisak. In time the tree died, but in the 1970s plans were made to create a larger, more impressive memorial complex commemorating the spot where it had stood. The architect Želimir Janeš designed the main memorial structure, a 65-foot tower whose abstract shape was intended to resemble an elm tree. The memorial park was completed in 1981, with the main structure surrounded by smaller monuments, stones and memorial panels.

During the last decade of socialist Yugoslavia, the Sisak Memorial Park was a popular destination, visited by domestic tourists and educational school trips alike. This all came to an end in the 1990s, however, when Yugoslavia disintegrated and the region was ravaged by bloody wars fought along ethnic and religious fault lines. Since then, the site has fallen into neglect. Many of the smaller memorial features have disappeared, and plaques around the clearing show signs of damage by gunfire. The central tower still stands, however—and fresh white paint on one side of the monument shows that at least someone in these parts still cares for the place.
Compare and contrast. Think about the Tin Forest and the forests you have learnt about so far. Make a list of things you can see, hear and smell in a real forest. Now write about what you might see, hear and smell in the Tin Forest. Do they share any similarities?

What did you learn yesterday? Why is it important to represent data in a graph? (It’s easier to read and easier to compare.) What kinds of graphs or charts can you name? Today, we are going to be learning about scaled pictograms. What is a pictogram?
1. Read the presentation
2. Interpret the data on the activity sheets

Re-read your summary from yesterday to remind yourself about the text. Read the Question Booklet and work through questions 1-7. Remember to find the answer in the text for every question – even if you think that you remember it!

Topic - RE
Today’s RE lesson is about temptations. Read the story and answer the question in a creative way in your book. Finally, act out the scenario with someone at home. Is it easy or hard to resist temptation? How do you decide what is right and wrong? How can you live a good life?
As a second-generation Mexican-American, I witness the cultural appropriation of Dia De Los Muertos on Halloween. For instance, what many people might not know is that Halloween is a part of Dia De Los Muertos. Dia De Los Muertos is a three-day celebration of life, love, and joy meant to remember the dead. It translates as “Day of the Dead.” The holiday starts on Halloween, October 31st, and ends on November 2nd, on All Souls Day.

Day of the Dead originates in Mexico but incorporates aspects of indigenous culture over the three-day celebration. Indigenous tribes of Mexico such as the Aztec, the Toltec, and the Nahua people might have celebrated the dead on Dia De Los Muertos. The natives consider mourning the dead disrespectful but want to commemorate their ancestors respectfully. Today, Latino countries all over the world celebrate Dia De Los Muertos.

On Halloween night, some people dress up as La Catrina Skull. La Catrina Skull is a Dia De Los Muertos tradition in which the face painting of the skeleton is used during the celebrations to evoke La Catrina. Women, men, and children paint their faces as La Catrina. Some people who identify as White might paint their faces to resemble La Catrina. This is a form of cultural appropriation. White people with no Latino heritage claim La Catrina as their own by wearing it during Halloween. Dressing up as La Catrina is a way for Latinos to connect with their ancestors.

On Dia De Los Muertos, families decorate altars, called ofrendas, with pan de muertos (which translates to “bread of the dead”). Pan de muertos is a form of pan dulce (“sweet bread”) meant to feed the dead when they come and visit their living relatives for the festivities. (Think of the Pixar movie, Coco.) Pictures of loved ones are accented with marigold petals on the ofrenda. Many people dress up as La Catrina in Mexico to evoke her. People congregate together to march towards the cemeteries on Halloween. Some families even decorate their ancestors’ graves with marigold petals. They bring food and other offerings for the two-day celebration.

In America, the commercialization of Dia De Los Muertos has boomed in recent years. Companies such as Party City sell Dia De Los Muertos merchandise during the Halloween season. An article written by the Los Angeles Times in 2017 observes the boom in sales and the treatment of Dia De Los Muertos as a commercial opportunity. The holiday’s popularity is soaring in California because of the growing Latino population. Latinos are the largest ethnic group in the state of California.

To me, Dia De Los Muertos is the time to remember my ancestors. My mother creates our family ofrenda and tells us stories about our ancestors that have passed on as we celebrate the holiday together. In some cases, I go to the cemetery with my family and bring flowers to decorate my grandparents’ grave. From time to time, I do dress up as La Catrina for fun, but I also remember to use it as a tool to respect my ancestors and my culture.
Shielded metal-arc welding (SMAW) is one of the simplest and most widely used joining processes. More than 50% of industrial and maintenance welding currently is performed by this welding process. In this welding operation, an electric arc is generated by touching the tip of a coated electrode against the workpiece and withdrawing it quickly to a distance sufficient to maintain the arc, as shown in the picture below. The electrodes are thin, long rods that are held manually.

The heat generated melts a portion of the electrode tip, its coating and the base metal in the immediate arc area. The molten metal consists of a mixture of the base metal, the electrode metal, and substances from the coating on the electrode; this mixture forms the weld when it solidifies. The electrode coating de-oxidizes the weld area and provides a shielding gas to protect it from oxygen in the surroundings.

A bare section at the end of the electrode is clamped to one terminal of the power source, while the other terminal is connected to the workpiece being welded. The current may be either DC or AC, usually in the range of 50 to 300 A. For sheet-metal welding, DC is suitable because of the steady arc it produces. Power requirements are generally less than 10 kW.

The shielded metal-arc welding process has the advantages of being relatively simple and requiring a smaller variety of electrodes. The equipment consists of a power supply, cables, and an electrode holder. The shielded metal-arc welding process commonly is used in general construction, shipbuilding, pipelines and other maintenance work. It is mainly used for work in remote areas where a portable fuel-powered generator can be used as the power supply.

Shielded metal-arc welding is best suited for workpieces with thicknesses of 3 to 19 mm, although this range can be extended easily by skilled operators using multiple-pass techniques, as shown in the picture. The multiple-pass approach requires that the slag be removed after each weld bead. Unless it is removed fully, the solidified slag can cause severe corrosion of the weld area and lead to failure of the weld; it also prevents the fusion of welded layers. Before applying another weld, the slag should be removed completely by wire brushing or chipping.
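As a rough sanity check on the figures quoted above, arc power is simply arc voltage times welding current. The arc voltage used below is an assumed typical value for SMAW (roughly 20 to 30 V is commonly cited), not a number taken from this article; the sketch only shows that currents of 50 to 300 A are consistent with a power requirement under 10 kW.

    # Hedged arithmetic sketch: P = V * I, with an assumed arc voltage.
    arc_voltage = 25.0                     # volts, assumed typical SMAW arc voltage
    for current in (50, 150, 300):         # amperes, the range quoted in the text
        power_kw = arc_voltage * current / 1000.0
        print(f"{current:3d} A at ~{arc_voltage:.0f} V  ->  about {power_kw:.1f} kW")
    # Even at the top of the range (300 A), the arc draws roughly 7.5 kW,
    # in line with the "generally less than 10 kW" figure above.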
A toxic substance is capable of causing death or serious injury -- hence the skull and crossbones on labels of poisonous materials. Many people wear antiperspirant to avoid odor and wetness, but some people worry that using antiperspirant could be toxic to the body. Antiperspirants are a common defense against embarrassing underarm wetness and the unpleasant aroma caused by odor-producing bacteria. By temporarily blocking sweat ducts, antiperspirants are able to reduce the amount of perspiration produced, which means less sweat in the underarm area [source: International Hyperhidrosis Society]. One of the sweat-blocking ingredients found in many antiperspirants is aluminum, an element that's also found in drinking water, cooking utensils, antacids and beer [source: WebMD]. In recent years, questions have been raised about whether the aluminum in antiperspirants can be absorbed into the body and contribute to the development of breast cancer. Aluminum compounds, such as aluminum zirconium and aluminum chloride, have estrogen-like properties. Because estrogen can promote the growth of breast cancer tissue, there's concern that aluminum may have the same effect when absorbed through the skin [source: National Cancer Institute]. A 2007 study analyzed the breast tissue of 17 breast cancer patients to measure aluminum content. The researchers found that the outer regions of the breast, close to where antiperspirant would be applied, held higher levels of aluminum. While this may sound alarming, it's important to note that the aluminum content of healthy breast tissue wasn't tested [source: Medical News]. More studies are under way, but so far, researchers haven't found a definite link between aluminum and breast cancer [source: American Cancer Society]. Aluminum has also been fingered as a potential contributor to Alzheimer's disease. While it's true that aluminum has been found in the bodies of some people with the disease, it hasn't been found in every person with Alzheimer's [source: WebMD]. And although aluminum has been tied to brain problems common in people with Alzheimer's, there's no proof that aluminum causes the disease [source: Alzheimer's Society]. The dangers of aluminum, however, are still very real. People without fully functioning kidneys should be wary of using antiperspirant, which is why the U.S. Food and Drug Administration requires a warning label on all antiperspirants stating that people with kidney disease should consult a doctor before using them [source: American Association of Kidney Patients]. Another common concern about antiperspirants is that they may plug sweat glands and prevent toxins from leaving the body through underarm lymph nodes. However, there's no connection between your sweat glands and your lymph nodes [source: American Cancer Society]. In addition, your body doesn't get rid of waste through the sweat glands -- your kidneys and liver filter out toxins. If you don't feel comfortable using an antiperspirant, simply use an aluminum-free deodorant -- it won't stop the sweat, but it will take care of the odor [source: Global Healing Center].
Tooth enamel is the strongest substance in the human body. Its function is to protect teeth from the daily wear and tear of biting and chewing and from temperature extremes, and to guard against the erosive effects of acids and chemicals.

Causes of enamel erosion include:
- Consumption of excessive amounts of soft drinks or fruit drinks. Bacteria thrive on sugar and produce high acid levels that can destroy enamel.
- Eating an abundance of sour foods and candies. Acidic foods can erode tooth enamel.
- Dry mouth or low saliva volume. Saliva helps prevent decay by neutralizing acids and washing away leftover food debris in your mouth.
- Acid reflux disease (GERD), or heartburn. Acid reflux brings stomach acids up to the mouth, where acids can erode enamel.
- Bulimia, alcoholism, or binge drinking, in which frequent vomiting exposes teeth to stomach acids.
- Certain drugs or supplements with high acid content (aspirin, vitamin C), which can also erode enamel.
- Friction and wear and tear from brushing teeth too vigorously or grinding teeth, which can erode enamel.

Signs of enamel erosion include:
- Sensitive teeth or tooth pain when eating hot, cold or sweet foods and drinks.
- Rough or irregular edges on the teeth, which can become cracked or chipped when enamel is lost.
- Smooth, shiny surfaces on the teeth: enamel erosion causes mineral loss in these areas.
- Yellowing of teeth from thin enamel.
- Cupping (dents in the enamel) that is evident on the biting/chewing surfaces of the teeth.
- When teeth erode, they are more susceptible to cavities and decay.

Protection from tooth erosion:
- Decrease the amount of acidic drinks and foods, such as carbonated drinks and citrus fruits. If you do drink them, do so at mealtimes to minimize the effects on enamel.
- Switch to modified products, such as low-acid orange juice.
- Rinse your mouth with water right after having acidic foods and drinks.
- Drink sodas and fruit juices with a straw, which eliminates bathing the teeth in an excessive amount of acid.
- Finish a meal with a glass of milk or a piece of cheese to neutralize acids.
- Chew sugar-free gum with xylitol, which reduces acids from foods and drinks. This also increases saliva flow, which helps strengthen the teeth by depositing key minerals into the enamel.
- Drink plenty of water if you have dry mouth or decreased saliva flow.
- Use a soft-bristle toothbrush and avoid aggressive tooth brushing.
- Wait at least one hour to brush teeth after they have been exposed to acids in foods or drinks. Acid leaves the enamel softened and more prone to erosion if teeth are brushed immediately after. A mouth rinse is a good option if you need to have fresh breath.
- Use fluoride/tartar control toothpaste to strengthen your teeth and reduce the amount of bacteria.
- Ask your dental professional about using commercial toothpastes to reduce tooth sensitivity and/or to protect against erosion.
- Get medical treatment for disorders (bulimia, alcoholism, or GERD) that can produce an acidic oral environment.

There are several ways to repair teeth that are damaged by erosion. The correct approach depends on your particular problem. Tooth bonding (tooth-colored filling material) can protect a tooth with enamel erosion and can also improve the appearance of teeth that are worn down, chipped and/or discolored. Porcelain laminates (thin porcelain facings) can also be used. These are a more expensive alternative, but they last a lot longer than bonding and don’t chip or stain. If enamel loss is significant, a cap (crown) may be indicated.
In this situation the entire tooth may have to be covered in order to protect it from further damage.
A: Native Pollen Bees are also called Solitary Pollen Bees or Wild Bees. Mason Bees are one type of Solitary Pollen Bee. They do not live in colonies with a Queen or worker bees, and they do not produce honey. They do the bulk of the pollination in our gardens, parks & forests. They are also responsible for helping pollinate the crops, fruit and vegetables we eat: 1 in every 3 mouthfuls of food we eat requires pollination to grow! Native Pollen Bees work more efficiently than Honey Bees at pollinating flowers. They don't travel far, and so focus their pollination efforts on fewer plants. They fly quickly, visiting more plants in a shorter amount of time. Both males and females pollinate flowers, and Native Bees begin earlier in the spring than Honey Bees.
A CT scan (also called a CAT scan, or computerized tomography scan) is an X-ray technique that gives doctors information about the body’s internal organs in 2-dimensional slices, or cross-sections. During a CT scan, you lie on a moving table and pass through a doughnut-shaped machine that takes X-rays of the body from many different angles. A computer puts these X-rays together to create detailed pictures of the inside of the body. Before the test, you need to have a contrast solution (dye) injected into your arm through an intravenous line. Because the dye can affect the kidneys, your doctor may perform kidney function tests before giving you the contrast solution.

Right now, CT scans are not used routinely to evaluate the breast. If you have a large breast cancer, your doctor may order a CT scan to assess whether or not the cancer has moved into the chest wall. This helps determine whether or not the cancer can be removed with mastectomy. Your doctor might order CT scans to examine other parts of the body where breast cancer can spread, such as the lymph nodes, lungs, liver, brain, and/or spine. Generally, CT scans wouldn’t be needed if you have an early-stage breast cancer. If your symptoms or other findings suggest that the cancer could be more advanced, however, you may need to have CT scans of the head, chest, and/or abdomen. If advanced breast cancer is found, your doctor may order more CT scans during treatment to see whether or not the cancer is responding. After treatment, CT scans may be used if there is reason to think the breast cancer has spread or recurred outside the breast.

Some researchers are investigating whether breast CT scans could be a better screening tool than traditional mammography. During breast CT, you lie face down on a table while a CT scanner rotates around the breast. The total dose of radiation is the same as in a conventional mammogram. Research on breast CT for screening is still in the early stages, however.
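To make the idea of combining X-rays from many angles a little more concrete, here is a minimal numpy/scipy sketch of the underlying principle, using a toy image and simple unfiltered back-projection. It is only an illustration of the concept; real scanners use more sophisticated reconstruction (filtered back-projection or iterative algorithms), and the phantom and angles here are made up for the example.

    import numpy as np
    from scipy.ndimage import rotate

    # Toy "patient": a 64x64 image with one bright square inside it.
    image = np.zeros((64, 64))
    image[24:40, 24:40] = 1.0

    # Forward step: one summed X-ray profile per acquisition angle.
    angles = np.arange(0, 180, 10)                 # degrees
    sinogram = np.array([
        rotate(image, a, reshape=False, order=1).sum(axis=0)
        for a in angles
    ])

    # Reconstruction step (unfiltered back-projection): smear each profile
    # back across the plane at its angle and average the results.
    recon = np.zeros_like(image)
    for profile, a in zip(sinogram, angles):
        recon += rotate(np.tile(profile, (64, 1)), -a, reshape=False, order=1)
    recon /= len(angles)

    print(recon[32, 32] > recon[5, 5])   # the square shows up brighter than the background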
In a solar system far, far away, around a small and dim star, orbits a small planet, just three times the size of Earth. The astronomers who discovered the small planet don’t know much else about it yet, but the basics are enough to get them excited. Extraterrestrial life is thought to have the best chance of surviving on planets with a similar mass to that of Earth, orbiting small stars. This “exoplanet,” which goes by the romantic name MOA-2007-BLG-192Lb, is the second smallest planet ever spotted outside our solar system. The very smallest planet yet discovered is believed to be sterile, as it orbits a neutron star that emits blasts of radiation.

Lead astronomer David Bennett of the University of Notre Dame cautioned that the newly discovered planet is not an ideal candidate for extraterrestrial life. The star it orbits appears so small and dim that it’s likely a brown dwarf, a “failed” star which can’t maintain the nuclear fusion reaction that makes our own sun a source of warmth and light. At the announcement at an American Astronomical Society meeting, Bennett said that the surface of the planet is probably rather dark and could be frozen solid, like Pluto on the edge of our solar system.

However, a slight chance remains that it could be habitable, researchers said. Nicholas Rattenbury, a co-author from the University of Manchester and Jodrell Bank, told BBC News: “Our best ideas about how planets form suggests the planet could have quite a thick atmosphere. This atmosphere could act like a big blanket, keeping the planet warm. So even though there’s very little energy coming from its host star, hitting the planet and warming it up that way, internal heat coming from within the planet could be warming up the surface. This has led to some speculation that there could, possibly, be a liquid ocean on the surface of this planet” [BBC News].

Whether or not the orb has ever been home to liquid oceans and some squiggly form of life, Bennett said the discovery of the small planet around the small star has encouraged his team to hunt for similar systems. “The fact that we’re finding outer planets around low-mass objects is an indication that planets are forming in these low-mass systems,” the University of Notre Dame physicist said. Many such systems are relatively nearby and would be easy places to take the next step—the search for life [National Geographic News].

The research team used telescopes in New Zealand and Chile to make their observations, and used a technique called gravitational microlensing to identify the planet. The gravitational microlensing technique, which came from Einstein’s General Theory of Relativity, relies upon observations of stars that brighten when an object such as another star passes directly in front of them (relative to an observer, in this case on Earth). The gravity of the passing star acts as a lens, much like a giant magnifying glass. If a planet is orbiting the passing star, its presence is revealed in the way the background star brightens [National Science Foundation].

Image: NASA’s Exoplanet Exploration Program
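For readers who want to see what "the way the background star brightens" looks like quantitatively, the standard single-lens magnification formula can be sketched in a few lines. The formula itself is the well-known point-source, point-lens result; the event parameters below (impact parameter, Einstein crossing time) are arbitrary illustrative values, not the measured values for MOA-2007-BLG-192Lb.

    import numpy as np

    def magnification(u):
        """Point-source, point-lens magnification; u is the lens-source
        separation in units of the lens's Einstein radius."""
        return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

    def light_curve(t, t0, tE, u0):
        """Brightening of the background star as the lens drifts past.
        t0 = time of closest approach, tE = Einstein crossing time,
        u0 = impact parameter (all hypothetical here)."""
        u = np.sqrt(u0**2 + ((t - t0) / tE)**2)
        return magnification(u)

    t = np.linspace(-40, 40, 9)                      # days, illustrative sampling
    print(np.round(light_curve(t, t0=0.0, tE=15.0, u0=0.1), 2))
    # The magnification peaks sharply near t0; a planet orbiting the lens star
    # adds a brief extra blip on top of this smooth curve.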
There is no one hard and fast defining characteristic which delineates the South from other portions of the nation; there are rather several defining factors, all of which differ in some detail. The traditional definition of "the South" would be that area of the United States situated below the Mason-Dixon Line. The definition came about because when slavery was still extant in the U.S., the line became de facto the dividing line between slave and free states. Another traditional definition is those states which purportedly left the Union and fought against it during the Civil War. (Under the latter definition, Maryland would not be part of the South; whereas it would be in the former.) Geographically, it might be described as that area bordered on the north by the Potomac River and on the west by the Mississippi River. Still another definition might encompass the "Black Belt," because of the rich soils which previously defined this largely agricultural area. Finally, it is often characterized as the "Bible Belt," because of the conservative, often evangelical Protestant religion which is dominant in the southern states.
A functional disorder refers to a disorder or disease where the primary abnormality is an altered physiological function (the way the body works), rather than an identifiable structural or biochemical cause. It characterizes a disorder that generally cannot be diagnosed in a traditional way; that is, as an inflammatory, infectious, or structural abnormality that can be seen by commonly used examination, x-ray, or blood test. In this context, "functional" means that the symptoms occur within the expected range of the body's behavior. (Examples: Shivering after a cold swim is a symptom, but not due to disease. Or, a runner's leg cramp is very painful, but the muscle is healthy.) Functional disorders are characterized by symptoms.

Childhood functional gastrointestinal disorders include a variable combination of often age-dependent, chronic or recurrent symptoms not explained by structural or biochemical abnormalities. Examples of these symptoms may include:
- Abdominal pain or bellyaches
- Abdominal distention
- Chronic diarrhea or constipation
- Fecal soiling
- Food refusal

Examples of functional GI disorders in kids and teens include:
- Infant regurgitation
- Cyclic vomiting syndrome
- Functional dyspepsia
- Irritable bowel syndrome (IBS)
- Functional abdominal pain
- Abdominal migraine
- Functional diarrhea
- Functional constipation
- Functional fecal retention
- Functional non-retentive fecal soiling

Childhood functional GI disorders, while distressing, are not dangerous when symptoms and parental concerns are addressed appropriately. However, failed diagnosis and inappropriate treatments may be the cause of needless physical and emotional suffering. We encourage you to find out more. Specific information about functional GI and motility disorders in infants, kids and teens can be found at this IFFGD web site: www.aboutkidsgi.org.
Note - This article originally was published in Percussive Notes, Vol. 33, No. 6, December 1995, pages 32-45. Reprinted by permission of the Percussive Arts Society, Inc., 701 N.W. Ferris, Lawton, OK 73507

The tabla is a well known percussive instrument from the Indian subcontinent, yet the nature of compositional theory for this instrument is little known. This is unfortunate because the theory is remarkably advanced and the tabla has become a source of inspiration to modern percussionists throughout the Western world (Bergamo 1981). There are only two approaches to Indian rhythm: cyclic and cadential (Stewart 1974). The cadential form requires a resolution while the cyclic form rolls along and does not resolve. The cyclic form includes such common examples as theka, rela, or kaida. These will be covered in this paper.

It is necessary to go over a little background before we delve into our discussion of the cyclic form. First, there are different criteria used for the nomenclature. We also need to bear in mind the relationship between tabla and its progenitor, the pakhawaj. We need to be aware of the stylistic schools (gharanas). There are a few concepts of Indian rhythm which must be mastered. Finally, we should know what the cyclic form is, and how it relates to the cadential form.

Tabla is derived from an ancient barrel-shaped drum known as pakhawaj. This drum supplies a large body of compositions for the tabla. Additionally, the pedagogy, the system of bols (mnemonic syllables) (Courtney 1993), and musical tradition has been taken almost without change from the pakhawaj. The system of pedagogy has a special significance for tabla. Over the millennia, musical material has passed from the guru (teacher) to the shishya (disciple) in an unbroken tradition. This has created stylistic schools which are known as gharanas. These gharanas are marked by common compositional forms, repertoire, and styles (Courtney 1992).

The fundamentals of the Indian system of rhythm are important. This system, known as tal, is based upon three units. These are the matra, vibhag, and avartan, which refer to the beat, measure and rhythmic cycle respectively. The vibhag (measure) is important because it is the basis of the timekeeping. In this method, each measure is specified by either a clap or wave of the hands. The Indian concept of a beat is not very different from the Western, except for the first beat. This first beat, known as sam, is pivotal for all of north Indian music. Aesthetically, it marks a place of repose. It also marks the spot where transitions from one form to another are likely to occur.

Although there are many compositional forms, there are really only two overall classes: cyclic and cadential. These mutually exclusive classes are based upon simple philosophies. The cadential class has a feeling of imbalance; it moves forward to an inevitable point of resolution, usually on the sam. It is a classic case of tension/resolve. Common cadenzas are the tihai, mukhada, and paran. In contrast, the cyclic class comprises material which rolls along without any strong sense of direction. One may generally ascribe a feeling of balance and repose to this class. These include our basic accompanying patterns (theka and prakar); formalized theme and variation (kaida); and a host of others which we will discuss in greater detail later in this paper. The alternation between the cyclic and the cadential material is the aesthetic dynamo which drives Indian music forward.
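Since the matra, vibhag, avartan and sam are easier to grasp with a concrete cycle in front of you, here is a small Python sketch that models one tal as a set of measures with their clap/wave marks. Tintal is used as the illustration, with the commonly cited 16-beat layout (clap on beats 1, 5 and 13, wave on beat 9); this layout is supplied for the example and is not taken from the article's own notation.

    # A minimal model of a tal: an avartan (cycle) split into vibhags (measures),
    # each marked by a clap or a wave; the first matra (beat) is the sam.
    tintal = {
        "name": "Tintal",
        "vibhags": [4, 4, 4, 4],          # beats per measure, 16 matras in all
        "marks": ["X", "2", "0", "3"],    # X = sam (clap), 2/3 = claps, 0 = wave (khali)
    }

    def show_cycle(tal):
        beat = 1
        for mark, length in zip(tal["marks"], tal["vibhags"]):
            beats = " ".join(str(beat + i) for i in range(length))
            print(f"{mark:>2} | {beats}")
            beat += length
        print(f"{tal['name']}: {beat - 1} matras per avartan; beat 1 is the sam")

    show_cycle(tintal)

The same structure could be filled in for the other tals named later in the article simply by swapping the vibhag lengths and clap/wave marks.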
The cyclic material is the groove or rhythmic foundation upon which the main musician builds the performance. The stability of the cyclic form makes it suitable for providing the musical framework for either tabla solos or accompaniment. Conversely, the tension and instability of the cadenza provides the energy to keep the performance moving.

The conceptual basis of the terminology is important. The nomenclature can be confusing until we realize that terms may be based upon unrelated criteria. This is illustrated with a simple analogy. Imagine a Martian suddenly appearing in human society, whose job is to categorize the various types of people. On different occasions, he may see the same individual being referred to as a Republican, Catholic, male, middle executive, or a host of other labels that we apply to people everyday. The situation is very confusing until our Martian realizes that these labels are based upon unrelated criteria. This is the same type of confusion which is present in the terminology of tabla.

There are several criteria used to define compositions. These criteria are: bol (mnemonic syllables), structure, the function, and in rare cases the technique. The bols are the mnemonic syllables; cyclic material cuts across the spectrum, so any and every bol of tabla may be found. The structure is the internal arrangement of patterns. There are a number of possible structures used in cyclic material but a binary/quadratic approach is especially common. In this method, the first half is commonly referred to as bhari while the second half is referred to as khali. It is interesting to note that while our cyclic material is commonly based upon a quadratic/binary structure, our cadential material is usually triadic. The function of cyclic material is the actual usage within the performance. Material may function as an introduction, a simple groove, a fast improvisation, or any other function. The technique is the rarest criterion. Sometimes the technique is one-handed, two-handed, or verbal.

These are the six points which should be remembered from this brief introduction.
1) The nomenclature is based upon different criteria, therefore it is usual to find a single composition bearing different names.
2) Much of the material and philosophy has been derived from an ancient two-headed drum called pakhawaj.
3) Indian rhythm uses the concepts of cycle (avartan), measure (vibhag), and beat (matra).
4) The measures are represented by a style of timekeeping based upon the clapping and waving of hands.
5) The first beat of the cycle, called the sam, is a pivotal point for the music.
6) There are two overall philosophies for the material: cyclic and cadential. The cadenza is a tension/resolve mechanism while the cyclic form is the basic "groove" characterized by a feeling of balance.

Some of our readers may have a difficult time absorbing all of these concepts at once. The unfamiliar terms are especially difficult for the newcomer. We are including a glossary at the end of this article to make the subject more accessible.

We may now proceed to the discussion of cyclic compositions. There are a number of compositional forms which may be considered cyclic. The theka, prakar, kaida, rela, gat, laggi and a few other forms will be discussed. Although these terms may be new to the average reader their importance will become clear.

Theka is the accompaniment pattern used for Indian music and is the most basic cyclic form. The word "theka" literally means "support" or "a place of rest" (Pathak 1976).
Whenever one is accompanying a vocalist, dancer, or instrumentalist, one will spend most of the time playing this. Theka is defined entirely by its function. It is the major accompaniment pattern for north Indian music. Any bol may be found, but Dha, Na, Ta, Tin and Dhin are common. Any structure imaginable may be found, but a binary structure (i.e., bhari khali) is quite common.

Theka has become inextricably linked to the fundamental concepts of tal. In northern India, when one speaks of tintal, rupak, or any other tal, one is generally speaking of the theka. It is common for several north Indian tals to have the same number of beats, same arrangement of the vibhags, and the same timekeeping (i.e., clap/wave patterns), yet be distinguished by their thekas. This is unthinkable in south Indian music. This link between the performance (e.g., theka) and the theoretical (e.g., tal) can make an in-depth discussion difficult. Many of the points which are often raised in discussions of theka should more correctly be discussed in a general discussion of north Indian tal. It is for this reason that we will not go into greater detail about vibhag, avartan, etc. Here are a few common thekas.
1. Tintal theka
2. Rupak theka
3. Kaherava theka
4. Dadra theka

The prakar is the variation or improvisation upon the theka. When a musician refers to "playing the theka" he is actually referring to the prakars. This is because a basic theka is too simple and dull to be used in any degree on stage. There are a number of ways to create these variations; yet the most widespread are the ornamentation and alteration of the bols.

Ornamentation is the most common process for generating prakars. This keeps the performance varied and maintains the interest of the audience. The basic theka is a mere skeleton, while the prakar puts the flesh onto it. We can illustrate this with these two examples of dadra:

Basic Dadra (theka)

Prakar of Dadra

The difference in moods between these two examples is clear. The first example has a childlike simplicity and becomes monotonous after a while. Conversely, the second example is more lively. It is important to keep in mind that this is nothing more than the original theka with some ornamentation. On stage, this prakar would be mixed in with an indefinite number of similar improvisations to keep the performance moving at a lively pace.

Ornamentation is not the only process, for many times a prakar is formed by a complete change in the bols. This is usually done for stylistic reasons. Compare the basic kaherava with a prakar which is sometimes referred to as bhajan ka theka.

Basic Kaherava (theka)

Prakar of Kaherava (Bhajan Ka Theka)

The relationship between this pair of kaheravas is very different from the relationship seen in our dadra examples. The basic bols of kaherava are not contained in bhajan ka theka. This prakar represents a totally different interpretation. When there is a restructuring of the bols it is sometimes called a kisma.

We have seen that prakar is the variation upon the theka. This may be a simple ornamentation or it may be a totally different interpretation of the tal. There are numerous processes behind the generation of these patterns but we are not able to go into them here. An in-depth discussion may be found elsewhere (Courtney 1994b).

Kaida is very important for both the performance and pedagogy of tabla solos. The word kaida means "rule" (Kapoor, no-date). It implies an organized system of rules or formulae used to generate theme and variations.
It originated in the Delhi style (i.e., Dilli gharana) but has spread to all the other gharanas. In the Benares style it is referred to as Bant or Banti (Stewart 1974). Attempts are occasionally made to distinguish kaida from bant. Such attempts usually are motivated by a chauvinistic attitude toward particular gharanas and are not based upon any objective musical criteria. The results of these efforts have been musically insupportable.

Kaida is defined by its structure. It is a process of theme and variation. Any bol may be used, so the bol has no function in its definition. It is also hard to consider function as a defining criterion. Kaida may be thought of as a process by which new patterns may be derived from old. We will illustrate this with a well known beginner's kaida. (Most kaidas are excruciatingly long so this short one will suffice.)

Theme (full tempo)

It has already been stated that the word "kaida" means rule, so it is convenient for us to go over the rules. This last example will serve to illustrate it.

The first rule of kaida is that the bols of the theme must be maintained. In other words, whatever bols are contained in the main theme are the only ones that can be used in the variations. A brief glance at our example easily bears this out. However, let us go beyond a mere glance. Close examination reveals that the syllable Ti suddenly appeared in the third variation. It is clearly a variation of Ti, which was present from the beginning. If one thinks in English then this subtlety will be missed, but if one thinks from the standpoint of North Indian languages this becomes a major alteration. Tabla bols show a tremendous tolerance in their vowels (i.e., swar) but show very little tolerance in their consonants (i.e., vyanjan). Although this is an interesting topic it is not possible to go into it in any depth in this paper.

Another rule of kaida concerns its overall structure. It must have an introduction, a body and a resolving tihai. The introduction is usually the theme played at half tempo, yet one may hear introductions which involve complex counter-rhythms (i.e., layakari) and even basic variations upon the theme. The body consists of our main theme played at full tempo and the various variations. It must finally be resolved with a tihai. The tihai is essentially a repetition of a phrase three times so that the last beat of the last iteration falls on the first beat of the cycle (i.e., sam). The tihai is discussed in much greater detail elsewhere (Courtney 1994a).

It is also a rule that everything must exhibit a bhari/khali arrangement. This means that everything must be played twice. The first time should emphasize the open, resonant strokes of the left hand while the second iteration should emphasize its absence. Only the tihai is exempt from this restriction because the tihai is not really a part of the kaida but rather a device used to resolve and allow a transition.

It is also a rule that the variations must follow a logical process. Kaidas have a number of variations, which may be called bal, palta or prastar. (There are many languages in use in northern India so terminology may vary.) The particulars of a logical process often vary with the gharana (stylistic school) and individual artistic concepts. Therefore the process illustrated in the previous example is typical but not the only possible approach. In our main theme, both slow and full tempo, we find a rhyming scheme being built up.
Dha Dha Ti Ta and Ta Ta Ti Ta will be assigned a code which we can arbitrarily call "A", while Dha Dha Tun Na and Dha Dha Dhin Na we can call "B". Therefore, the main theme has the rhyming scheme of AB-AB. If we move to the first variation we see that it takes the form of AAAB-AAAB. In a similar manner the second variation has the form of ABBB-ABBB. One could continue to build up other reasonable structures such as AABB-AABB or any other reasonable permutation. Notice that each iteration (i.e., bhari/khali) usually ends with the B structure, therefore the B begins to function as a mini-theme. This too is subject to some variation because in some gharanas, particularly the Punjabi gharana, it is not the entire B but a fraction thereof which functions as the mini-theme.

Mathematical permutations based upon only two elements are limited, so other processes need to be included. One approach is to double the size of our structure. Instead of working with structures like AAAB-AAAB we could work with AAAAAAAB-AAAAAAAB. Doubling the size certainly increases the possible permutations, but can quickly become unmanageable; therefore many gharanas do not do this. A more universal approach is to take the A and B patterns and fragment them to create smaller structures. Fragmentation may be seen in the third variation of our example. We have derived the expressions Dha Ti Ta and Dha Ti from Dha Dha Ti Ta. For convenience we will call them "C" and "D" respectively. Therefore, variation number three may be expressed as CCDAB-CCDAB. Now that it has been fragmented, we can generate patterns like CDCAB-CDCAB, DCCAB-DCCAB, ACDCB-ACDCB, etc. The use of fragmentation to derive new structures and their subsequent recombination is a far more flexible process. It is not surprising that this process is used throughout northern India.

The fact that kaida is defined by structure has interesting ramifications. It gives rise to a whole family of subdivisions. If the bols of rela are used, a form known as kaida-rela is created. In the same manner kaida-laggis, kaida-peshkars, and kaida-gats are also produced by the use of the appropriate bols.

We may summarize our discussion of kaida by saying that it is a structural process of theme and variation. This process is governed by rules which may be briefly summarized as follows:
1) an overall structure of introduction, body, and tihai,
2) a binary (i.e., bhari/khali) and quadratic (i.e., AB-AB) structure,
3) maintenance of the bols of the theme,
4) an organized process of permutation.
This process may be applied to any bol. With these processes understood we may move on to other material.

The word rela means a "torrent", "an attack" (Pathak 1976) or "a rush" (Kapoor no-date). It has been suggested that the word is derived from the sound that a railroad train makes; however, this is generally not accepted in academic circles. Rela is defined by the bol. One normally finds pure tabla bols used, as opposed to bols from the pakhawaj. Here is a representative, but certainly not exhaustive, list of the bols used in rela. These bols function as basic building blocks from which larger patterns are assembled. Structure is not a criterion for rela's definition, therefore the bols may be assembled in many ways. If we develop it according to the rules of kaida it is usually referred to as kaida-rela. If we assemble them in a freeform manner it is sometimes referred to as swatantra rela. The concepts of swatantra and kaida may be viewed as two extremes of a continuum.
The performance of rela is usually somewhere in between these two extremes. In other words, some of the rules of kaida may be followed but not all. This is up to the individual artist and is not specified by the concept of rela.

The gat originated in the purbi styles (e.g., Lucknow, Farukhabad and Benares gharanas), but today it is played throughout India. It is defined both by function and bol. Functionally, it is a fixed composition rather than any improvisation (Shepherd 1976). Viewed from the standpoint of the bol, it shows a moderate influence of pakhawaj, as do most purbi compositions.

Gat is a very difficult topic to discuss because it is so poorly defined. The word gat literally means motion; however, the musical meaning implies a fixed composition of either cadential or cyclic form. A survey of the Hindi literature shows that virtually any tabla composition of the purbi class can be called a gat. It is perhaps easier to say what a gat is not. It is not a pakhawaj composition (i.e., paran, fard, sath, etc.), nor is it a light style (e.g., laggi), nor is it an accompanying style (e.g., theka or prakar), nor can it be improvised. This does not narrow the definition very much. Gat is a broad class of compositions rather than a single compositional form. We will now look at some of these forms.

The kaida-gat is a common form. As the name implies, it is the use of purbi bols in a theme and variation process which follows the rules of kaida. Therefore, the AB-AB structure is central to the process. The kaida has already been discussed, so the aforesaid rules need not be restated.

An extremely common form of gat uses a quadratic structure but cannot be considered a kaida. This follows an ABCB structure. This is occasionally referred to as domukhi, or "two faced", in reference to the two B patterns. Some gharanas will also call it a dupalli, yet many dupallis are cadential rather than cyclic. Unlike the kaida-gat, there need be no introduction nor do there have to be any variations. One may play the same gat any number of times. A tihai is usually used, but this is merely a reflection of universal custom rather than anything inherent to the gat. Here is one example (Saksena 1978:59):

There are also gats which have a repetition of a phrase three times. The cyclic version usually follows an ABCBCB or ABCBDB structure. This type is sometimes called tinmukhi or tipalli. However, it should be noted that the term tipalli usually refers to a cadential form and is thus outside the scope of this paper (Courtney 1994a). If a similar approach is taken but the "B" structure is repeated four times it may be called a chaupalli. Sometimes it need not be an entire structure but a single stroke (e.g., Dha Dha Dha Dha) (Sharma 1973). Again, many chaupallis are cadential.

The lom-vilom is another fascinating form of a gat. It is a musical palindrome that is the same whether played forwards or backwards. It is a characteristic of the palindrome that there are two halves. The first and second halves must be mirror images of each other. The first half of the lom-vilom is called the aroh (ascending), while the second half is called the avaroh (descending). Here is an example (Shankar 1967:145).

There are other forms which are considered to be gats by many musicians but will not be discussed here. These are the chakradar gats, tipalli, chaupalli, and dupalli. We will not discuss them because they are cadential forms and do not fall within the topic of this paper.
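Two of the structural ideas described above lend themselves to short sketches in code. The first expands a kaida rhyming scheme such as AAAB into its bhari and khali halves, using the A and B building blocks named earlier; it is a deliberately simplified model (it ignores tempo, the introduction, the closing tihai, and the finer points of how performers return to the sam). The second checks the defining property of a lom-vilom: the bol sequence must read the same forwards and backwards. The sample phrase is a made-up illustration, not the notated example cited in the article.

    # Bhari and khali forms of the building blocks called "A" and "B" above.
    BHARI = {"A": "Dha Dha Ti Ta", "B": "Dha Dha Tun Na"}
    KHALI = {"A": "Ta Ta Ti Ta",  "B": "Dha Dha Dhin Na"}

    def expand(scheme):
        """Turn a rhyming scheme like 'AAAB' into a bhari half and a khali half."""
        bhari = "  ".join(BHARI[c] for c in scheme)
        khali = "  ".join(KHALI[c] for c in scheme)
        return f"{bhari}   |   {khali}"

    for scheme in ("AB", "AAAB", "ABBB", "AABB"):
        print(f"{scheme:5} -> {expand(scheme)}")

    # Lom-vilom: the avaroh must mirror the aroh, i.e. the bol sequence
    # is a palindrome.
    def is_lom_vilom(bols):
        return bols == bols[::-1]

    phrase = "Dha Ti Ta Kat Ta Ti Dha".split()   # hypothetical illustration
    print(is_lom_vilom(phrase))                  # True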
David Courtney and Ernesto Leon
very low density lipoprotein receptor The VLDLR gene provides instructions for making a protein called a very low density lipoprotein (VLDL) receptor. This protein is active in many different organs and tissues, including the heart, muscles used for movement (skeletal muscles), fatty (adipose) tissue, and the kidneys. The VLDL receptor appears to play a particularly important role in the developing brain. The VLDL receptor works together with a protein called reelin. Reelin fits into the VLDL receptor like a key in a lock, which triggers a series of chemical reactions within the cell. During early brain development, the reelin signaling pathway helps to guide the movement of immature nerve cells (neuroblasts) to their appropriate locations in the brain. At least six mutations in the VLDLR gene have been found to cause VLDLR-associated cerebellar hypoplasia. These mutations prevent cells from producing any functional VLDL receptor protein. Without this protein, neuroblasts cannot reach the parts of the brain where they are needed. These problems with brain development predominantly affect the cerebellum, which is the part of the brain that coordinates movement. People with VLDLR-associated cerebellar hypoplasia have an unusually small and underdeveloped cerebellum, which leads to problems with balance and coordination (ataxia) and impaired speech. Other regions of the brain are also affected, resulting in intellectual disability and the other major features of this condition. - VLDL receptor
Tracing the Footsteps: A Look at Theories on the Origin of HIV

Where Did HIV Originate?
Since the discovery of HIV in the early 1980s, one question has yet to be answered: Where did this virus originate? How did it go from being one of the smallest forms of life on record to the cause of the global AIDS pandemic? Many theories have been proposed, everything from the preposterous (HIV was manufactured by the CIA or the KGB and then introduced for the purpose of population control), to the unlikely (HIV originated from a tainted oral polio vaccine in the 1950s), to the academic (that HIV originated by zoonotic transmission from an animal host). Growing evidence supports the theory that sometime in the not-so-recent past, partners in crime, HIV-1 and HIV-2, jumped the species barrier from primates to humans. The origin of AIDS remains steeped in controversy, due in part to the early stigma attached to HIV and to the racist overtones implicit in linking black Africans to a sexual disease and, indirectly, to primates.

Identifying the natural source of HIV is a difficult process. First, an animal host for the virus needs to be found. Second, the geographic distribution of that animal needs to mirror the initial distribution of the disease, in this case West Africa. The virus in the host also needs to have genetic and structural relatedness to the human virus, and there needs to be a plausible route of transmission of the virus from the animal host to humans.

Evolutionary biologists suggest that simian immunodeficiency viruses (SIV) have lived in parts of Africa for thousands of years. This theory of AIDS as a zoonosis (an infection that can "jump" from species to species) has also been around for quite some time. However, Beatrice Hahn and her virus detectives from the University of Alabama in Birmingham have recently identified a subspecies of chimpanzee called Pan troglodytes troglodytes that appears to be the "Adam" of HIV-1. Pan troglodytes troglodytes carries a strain of SIV classified as SIVcpz, the closest living relative of HIV.

Why Is This Newsworthy All of a Sudden?
HIV-1 is a rapidly changing virus that mutates as it reproduces at least a billion times a day in a person's body. Its immediate family tree includes three major groups (M, N and O) and 10 subtypes. New hybrids or "recombinant" viruses mix traits of strains from the two parents, yet like rebellious teens, these viruses develop their own personalities. Each subtype has a distinct genetic profile, so for a vaccine or drug to work, it must know what it's fighting.

There are other issues that arise from a scientific perspective, outside of finding a vaccine. Research in the area of xenotransplantation (using animal organs, such as pig heart valves, for human transplants) should proceed with caution. Could you imagine such research leading to the accidental emergence of an animal virus and a new human pandemic? This would not only be a tragedy, but it would end the study of animal transplant research. I question what awaits us in the jungles of West Equatorial Africa or the rain forests in South America. Have we learned our lesson from HIV? How about Ebola virus? West Nile virus? As we plow through her jungles and strip away natural resources, maybe Mother Nature is telling us "Get out!" Maybe, by our very presence, we're affecting the natural balance of certain ecosystems.

Let's Look at Measles, for Example
Each year, hundreds of tourists travel to the mountains of Rwanda in hopes of catching a glimpse of the great ape.
According to a report in Primatology, in 1988 the animals began to sneeze and cough, and then die. Scientists scrambled for an explanation. Eventually, blood and tissue samples were taken that showed the telltale signs of measles infection. The outbreak appeared to be isolated (tell that to the six dead gorillas and 27 who were left sick), but in retrospect one has to view this as a lesson for impending dangers if we do not proceed with caution. Again, the lesson is that infections can jump from primates to people, or for that matter, from people to primates.

Polio Vaccine Theory
In The River, Edward Hooper postulates that AIDS was inadvertently brought on by humans in the early testing of a polio vaccine in Africa in the 1950s. This theory seemed farfetched when it was originally introduced to public attention in a 1992 issue of Rolling Stone. The River suggests that an oral polio vaccine might have been manufactured from contaminated chimpanzee kidney tissue that was subsequently introduced to the African population. While Hooper's theory has not been proven to have any scientific merit, the time and place of the earliest cases of AIDS and the testing of the vaccine do coincide. From 1957 to 1960, the polio vaccine was given to a million people in what are now Rwanda, Burundi and Congo.

This theory challenges Hahn's Pan troglodytes troglodytes zoonotic theory of origin. If Hooper's theory is correct, the simian ancestor of HIV might have grown in the batches of vaccine used in the experimental trial. When the oral vaccine was administered to humans, the simian virus would have passed through a sore and entered the human bloodstream, evolving into HIV-1. From there it would have been transmitted through sexual or blood contact.

In September of this year, Hooper's theory suffered a significant blow. Tests on samples of the vaccine, in storage for over 40 years, showed no trace of HIV or SIV. Studies of mitochondrial DNA from the samples failed to provide any evidence to support the allegations that the polio vaccine had been prepared using chimpanzee tissue. These results, presented at the Royal Society of London, were a compilation of data from tests conducted at three laboratories in the United States, Germany and France. Evidence from these tests also revealed that the tissue used to prepare this particular vaccine was of monkey origin, not chimpanzee. The Wistar Institute (the corporation that manufactured the oral polio vaccine delivered to Africa in the 1950s) claims that the results not only vindicate its own role in the issue, but also soothe public concern over the safety of vaccines, which had been called into question by the allegations. It should also be noted that the Wistar Institute was responsible for launching these trials independently.

As stated above, there are three major groups of HIV and 10 subtypes that Hahn and her colleagues have documented back to Pan troglodytes troglodytes SIVcpz: an almost exact genetic map to HIV-1. The researchers sequenced a strain of SIVcpz found in a chimpanzee with a natural infection and found that all HIV-1 strains known to infect man are closely related to this SIVcpz lineage. These researchers also found that HIV-1 group N is a collage of SIVcpzUS and sequences related to HIV-1. This tells us that some "recombination" event happened in the ancestors of a host chimpanzee. Also, HIV-2 has been conclusively proven to have jumped from the sooty mangabey, a primate. HIV is a member of the lentivirus family.
A lentivirus is a "slow" virus characterized by a long interval between infection and the beginning of symptoms. This similarity in genome structure made SIVcpz a strong candidate for the origin of HIV-1. Other characteristics of the virus raised doubts about its role as the precursor to HIV-1. These characteristics included an unexpectedly distant relationship between SIVcpz and HIV-1, a low prevalence of SIVcpz infection in wild-living chimpanzees, uncertain geography between chimpanzee habitats and early AIDS cases, and questions concerning transmission.

In a recent publication, Hahn and her colleagues described SIVcpzUS, a new strain of the SIVcpz virus. When analyzing this strain and comparing previous data, they concluded that the HIV-1 pandemic had arisen as a consequence of SIVcpz transmission from a particular chimpanzee subspecies (Pan troglodytes troglodytes) and presented evidence for a higher number of natural SIVcpz infections based on the discovery of viruses that had recombined. This suggests that the genetic material from strains of SIVcpz recombined or integrated different genetic material into a new set of genes. They also described the geographic convergence of all groups of HIV-1 (M, N, and O) and SIVcpz from Pan troglodytes troglodytes. The common West Central African practice of "hunting and field-dressing chimpanzees" (i.e., killing, skinning and preparing the meat) was proposed as a likely means of transmission.

Following the Virus
The race is on now to comprehend how and when these viruses made their disastrous leap, and what paths they are currently traveling. Ask yourself this: If SIVcpz made the leap from chimp to human, what's to stop other viruses from rearing their ugly heads? Think about the impact of emerging retroviruses with a similar latency period and a more virulent clinical manifestation. Combine that possibility with the increased prevalence of "bare-backing," crystal-methamphetamine use, and HIV and hepatitis C co-infection, along with a shift in the demographics of HIV to an underserved minority community. Emerging infections will not go away. They are a major threat to global health.

Chris Fritzen represented AIDS Project Los Angeles at the recent 40th Interscience Conference on Antimicrobial Agents and Chemotherapy. He is a corporate development officer at APLA. This article has been reprinted at The Body with the permission of AIDS Project Los Angeles (APLA), and is a part of the publication Positive Living.
Von Neumann architecture The Von Neumann architecture, also known as the Von Neumann model and Princeton architecture, is a computer architecture based on that described in 1945 by the mathematician and physicist John von Neumann and others in the First Draft of a Report on the EDVAC. This describes a design architecture for an electronic digital computer with parts consisting of a processing unit containing an arithmetic logic unit and processor registers, a control unit containing an instruction register and program counter, a memory to store both data and instructions, external mass storage, and input and output mechanisms. The meaning has evolved to be any stored-program computer in which an instruction fetch and a data operation cannot occur at the same time because they share a common bus. This is referred to as the Von Neumann bottleneck and often limits the performance of the system. The design of a Von Neumann architecture is simpler than the more modern Harvard architecture which is also a stored-program system but has one dedicated set of address and data buses for reading data from and writing data to memory, and another set of address and data buses for fetching instructions. A stored-program digital computer is one that keeps its program instructions, as well as its data, in read-write, random-access memory (RAM). Stored-program computers were an advancement over the program-controlled computers of the 1940s, such as the Colossus and the ENIAC, which were programmed by setting switches and inserting patch leads to route data and to control signals between various functional units. In the vast majority of modern computers, the same memory is used for both data and program instructions, and the Von Neumann vs. Harvard distinction applies to the cache architecture, not the main memory. The earliest computing machines had fixed programs. Some very simple computers still use this design, either for simplicity or training purposes. For example, a desk calculator (in principle) is a fixed program computer. It can do basic mathematics, but it cannot be used as a word processor or a gaming console. Changing the program of a fixed-program machine requires rewiring, restructuring, or redesigning the machine. The earliest computers were not so much "programmed" as they were "designed". "Reprogramming", when it was possible at all, was a laborious process, starting with flowcharts and paper notes, followed by detailed engineering designs, and then the often-arduous process of physically rewiring and rebuilding the machine. It could take three weeks to set up a program on ENIAC and get it working. With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. A stored-program design also allows for self-modifying code. One early motivation for such a facility was the need for a program to increment or otherwise modify the address portion of instructions, which had to be done manually in early designs. This became less important when index registers and indirect addressing became usual features of machine architecture. Another use was to embed frequently used data in the instruction stream using immediate addressing. Self-modifying code has largely fallen out of favor, since it is usually hard to understand and debug, as well as being inefficient under modern processor pipelining and caching schemes. 
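To make the stored-program idea concrete, here is a minimal sketch of a toy von Neumann-style machine in Python. The opcodes, the opcode*100+address encoding, and the program layout are invented for illustration and do not correspond to any historical instruction set. A single list serves as memory for both instructions and data, every instruction fetch and operand access goes through that one structure, and the example program sums a small array by rewriting the address field of its own ADD instruction, the self-modifying-code technique described above.

```python
# Toy von Neumann-style machine: one memory array holds instructions AND data.
# Hypothetical encoding for this sketch only: instruction = opcode * 100 + address.

LOAD, ADD, STORE, JUMP_IF_POS, HALT = 1, 2, 3, 4, 9

def run(memory, pc=0):
    acc = 0                              # single accumulator register
    while True:
        instr = memory[pc]               # instruction fetch: a memory access
        opcode, addr = divmod(instr, 100)
        pc += 1
        if opcode == LOAD:
            acc = memory[addr]           # data read: same memory, same "bus"
        elif opcode == ADD:
            acc += memory[addr]
        elif opcode == STORE:
            memory[addr] = acc           # data write
        elif opcode == JUMP_IF_POS:
            if acc > 0:
                pc = addr
        elif opcode == HALT:
            return memory
        else:
            raise ValueError(f"unknown opcode {opcode} at address {pc - 1}")

program = [
    LOAD * 100 + 14,        #  0: acc = running total
    ADD * 100 + 11,         #  1: acc += memory[11]  (this instruction rewrites itself)
    STORE * 100 + 14,       #  2: running total = acc
    LOAD * 100 + 1,         #  3: fetch the ADD instruction *as data*
    ADD * 100 + 15,         #  4: add 1 so it points at the next array element
    STORE * 100 + 1,        #  5: write the modified instruction back
    LOAD * 100 + 16,        #  6: acc = remaining loop count
    ADD * 100 + 17,         #  7: acc += -1
    STORE * 100 + 16,       #  8
    JUMP_IF_POS * 100 + 0,  #  9: repeat while the count is positive
    HALT * 100,             # 10
]
memory = program + [
    7, 11, 20,              # 11-13: the data to be summed
    0,                      # 14: running total
    1,                      # 15: the constant 1
    3,                      # 16: loop count
    -1,                     # 17: the constant -1
]

print("sum =", run(memory)[14])          # -> sum = 38
```

Running it prints sum = 38; the point is not the arithmetic but that the cell holding the ADD instruction is read and written exactly like the cells holding the data.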
On a large scale, the ability to treat instructions as data is what makes assemblers, compilers, linkers, loaders, and other automated programming tools possible. One can "write programs which write programs". On a smaller scale, repetitive I/O-intensive operations such as the BITBLT image manipulation primitive or pixel & vertex shaders in modern 3D graphics, were considered inefficient to run without custom hardware. These operations could be accelerated on general purpose processors with "on the fly compilation" ("just-in-time compilation") technology, e.g., code-generating programs—one form of self-modifying code that has remained popular. There are drawbacks to the Von Neumann design. Aside from the Von Neumann bottleneck described below, program modifications can be quite harmful, either by accident or design. In some simple stored-program computer designs, a malfunctioning program can damage itself, other programs, or the operating system, possibly leading to a computer crash. Memory protection and other forms of access control can usually protect against both accidental and malicious program modification. Development of the stored-program concept The mathematician Alan Turing, who had been alerted to a problem of mathematical logic by the lectures of Max Newman at the University of Cambridge, wrote a paper in 1936 entitled On Computable Numbers, with an Application to the Entscheidungsproblem, which was published in the Proceedings of the London Mathematical Society. In it he described a hypothetical machine which he called a "universal computing machine", and which is now known as the "Universal Turing machine". The hypothetical machine had an infinite store (memory in today's terminology) that contained both instructions and data. John von Neumann became acquainted with Turing while he was a visiting professor at Cambridge in 1935, and also during Turing's PhD year at the Institute for Advanced Study in Princeton, New Jersey during 1936 – 37. Whether he knew of Turing's paper of 1936 at that time is not clear. Independently, J. Presper Eckert and John Mauchly, who were developing the ENIAC at the Moore School of Electrical Engineering, at the University of Pennsylvania, wrote about the stored-program concept in December 1943. In planning a new machine, EDVAC, Eckert wrote in January 1944 that they would store data and programs in a new addressable memory device, a mercury metal delay line memory. This was the first time the construction of a practical stored-program machine was proposed. At that time, he and Mauchly were not aware of Turing's work. Von Neumann was involved in the Manhattan Project at the Los Alamos National Laboratory, which required huge amounts of calculation. This drew him to the ENIAC project, during the summer of 1944. There he joined into the ongoing discussions on the design of this stored-program computer, the EDVAC. As part of that group, he wrote up a description titled First Draft of a Report on the EDVAC based on the work of Eckert and Mauchly. It was unfinished when his colleague Herman Goldstine circulated it with only von Neumann's name on it, to the consternation of Eckert and Mauchly. The paper was read by dozens of von Neumann's colleagues in America and Europe, and influenced the next round of computer designs. Jack Copeland considers that it is "historically inappropriate, to refer to electronic stored-program digital computers as 'von Neumann machines'". 
His Los Alamos colleague Stan Frankel said of von Neumann's regard for Turing's ideas: I know that in or about 1943 or '44 von Neumann was well aware of the fundamental importance of Turing's paper of 1936 ... Von Neumann introduced me to that paper and at his urging I studied it with care. Many people have acclaimed von Neumann as the "father of the computer" (in a modern sense of the term) but I am sure that he would never have made that mistake himself. He might well be called the midwife, perhaps, but he firmly emphasized to me, and to others I am sure, that the fundamental conception is owing to Turing— in so far as not anticipated by Babbage ... Both Turing and von Neumann, of course, also made substantial contributions to the "reduction to practice" of these concepts but I would not regard these as comparable in importance with the introduction and explication of the concept of a computer able to store in its memory its program of activities and of modifying that program in the course of these activities. At the time that the "First Draft" report was circulated, Turing was producing a report entitled Proposed Electronic Calculator which described in engineering and programming detail, his idea of a machine that was called the Automatic Computing Engine (ACE). He presented this to the Executive Committee of the British National Physical Laboratory on February 19, 1946. Although Turing knew from his wartime experience at Bletchley Park that what he proposed was feasible, the secrecy surrounding Colossus, that was subsequently maintained for several decades, prevented him from saying so. Various successful implementations of the ACE design were produced. Both von Neumann's and Turing's papers described stored-program computers, but von Neumann's earlier paper achieved greater circulation and the computer architecture it outlined became known as the "von Neumann architecture". In the 1953 publication Faster than Thought: A Symposium on Digital Computing Machines (edited by B.V. Bowden), a section in the chapter on Computers in America reads as follows: The Machine of the Institute For Advanced Studies, Princeton In 1945, Professor J. von Neumann, who was then working at the Moore School of Engineering in Philadelphia, where the E.N.I.A.C. had been built, issued on behalf of a group of his co-workers a report on the logical design of digital computers. The report contained a fairly detailed proposal for the design of the machine which has since become known as the E.D.V.A.C. (electronic discrete variable automatic computer). This machine has only recently been completed in America, but the von Neumann report inspired the construction of the E.D.S.A.C. (electronic delay-storage automatic calculator) in Cambridge (see page 130). In 1947, Burks, Goldstine and von Neumann published another report which outlined the design of another type of machine (a parallel machine this time) which should be exceedingly fast, capable perhaps of 20,000 operations per second. They pointed out that the outstanding problem in constructing such a machine was in the development of a suitable memory, all the contents of which were instantaneously accessible, and at first they suggested the use of a special vacuum tube — called the "Selectron" – which had been invented by the Princeton Laboratories of the R.C.A. These tubes were expensive and difficult to make, so von Neumann subsequently decided to build a machine based on the Williams memory. 
This machine, which was completed in June, 1952 in Princeton has become popularly known as the Maniac. The design of this machine has inspired that of half a dozen or more machines which are now being built in America, all of which are known affectionately as "Johniacs."' In the same book, the first two paragraphs of a chapter on ACE read as follows: Automatic Computation at the National Physical Laboratory One of the most modern digital computers which embodies developments and improvements in the technique of automatic electronic computing was recently demonstrated at the National Physical Laboratory, Teddington, where it has been designed and built by a small team of mathematicians and electronics research engineers on the staff of the Laboratory, assisted by a number of production engineers from the English Electric Company, Limited. The equipment so far erected at the Laboratory is only the pilot model of a much larger installation which will be known as the Automatic Computing Engine, but although comparatively small in bulk and containing only about 800 thermionic valves, as can be judged from Plates XII, XIII and XIV, it is an extremely rapid and versatile calculating machine. The basic concepts and abstract principles of computation by a machine were formulated by Dr. A. M. Turing, F.R.S., in a paper1. read before the London Mathematical Society in 1936, but work on such machines in Britain was delayed by the war. In 1945, however, an examination of the problems was made at the National Physical Laboratory by Mr. J. R. Womersley, then superintendent of the Mathematics Division of the Laboratory. He was joined by Dr. Turing and a small staff of specialists, and, by 1947, the preliminary planning was sufficiently advanced to warrant the establishment of the special group already mentioned. In April, 1948, the latter became the Electronics Section of the Laboratory, under the charge of Mr. F. M. Colebrook. Early von Neumann-architecture computers The First Draft described a design that was used by many universities and corporations to construct their computers. Among these various computers, only ILLIAC and ORDVAC had compatible instruction sets. - Manchester Small-Scale Experimental Machine (SSEM), nicknamed "Baby" (University of Manchester, England) made its first successful run of a stored-program on June 21, 1948. - EDSAC (University of Cambridge, England) was the first practical stored-program electronic computer (May 1949) - Manchester Mark 1 (University of Manchester, England) Developed from the SSEM (June 1949) - CSIRAC (Council for Scientific and Industrial Research) Australia (November 1949) - EDVAC (Ballistic Research Laboratory, Computing Laboratory at Aberdeen Proving Ground 1951) - ORDVAC (U-Illinois) at Aberdeen Proving Ground, Maryland (completed November 1951) - IAS machine at Princeton University (January 1952) - MANIAC I at Los Alamos Scientific Laboratory (March 1952) - ILLIAC at the University of Illinois, (September 1952) - BESM-1 in Moscow (1952) - AVIDAC at Argonne National Laboratory (1953) - ORACLE at Oak Ridge National Laboratory (June 1953) - BESK in Stockholm (1953) - JOHNNIAC at RAND Corporation (January 1954) - DASK in Denmark (1955) - WEIZAC in Rehovoth (1955) - PERM in Munich (1956?) - SILLIAC in Sydney (1956) Early stored-program computers The date information in the following chronology is difficult to put into proper order. 
Some dates are for first running a test program, some dates are the first time the computer was demonstrated or completed, and some dates are for the first delivery or installation. - The IBM SSEC had the ability to treat instructions as data, and was publicly demonstrated on January 27, 1948. This ability was claimed in a US patent. However it was partially electromechanical, not fully electronic. In practice, instructions were read from paper tape due to its limited memory. - The Manchester SSEM (the Baby) was the first fully electronic computer to run a stored program. It ran a factoring program for 52 minutes on June 21, 1948, after running a simple division program and a program to show that two numbers were relatively prime. - The ENIAC was modified to run as a primitive read-only stored-program computer (using the Function Tables for program ROM) and was demonstrated as such on September 16, 1948, running a program by Adele Goldstine for von Neumann. - The BINAC ran some test programs in February, March, and April 1949, although was not completed until September 1949. - The Manchester Mark 1 developed from the SSEM project. An intermediate version of the Mark 1 was available to run programs in April 1949, but was not completed until October 1949. - The EDSAC ran its first program on May 6, 1949. - The EDVAC was delivered in August 1949, but it had problems that kept it from being put into regular operation until 1951. - The CSIR Mk I ran its first program in November 1949. - The SEAC was demonstrated in April 1950. - The Pilot ACE ran its first program on May 10, 1950 and was demonstrated in December 1950. - The SWAC was completed in July 1950. - The Whirlwind was completed in December 1950 and was in actual use in April 1951. - The first ERA Atlas (later the commercial ERA 1101/UNIVAC 1101) was installed in December 1950. Through the decades of the 1960s and 1970s computers generally became both smaller and faster, which led to some evolutions in their architecture. For example, memory-mapped I/O allows input and output devices to be treated the same as memory. A single system bus could be used to provide a modular system with lower cost. This is sometimes called a "streamlining" of the architecture. In subsequent decades, simple microcontrollers would sometimes omit features of the model to lower cost and size. Larger computers added features for higher performance. Von Neumann bottleneck The shared bus between the program memory and data memory leads to the Von Neumann bottleneck, the limited throughput (data transfer rate) between the CPU and memory compared to the amount of memory. Because program memory and data memory cannot be accessed at the same time, throughput is much smaller than the rate at which the CPU can work. This seriously limits the effective processing speed when the CPU is required to perform minimal processing on large amounts of data. The CPU is continually forced to wait for needed data to be transferred to or from memory. Since CPU speed and memory size have increased much faster than the throughput between them, the bottleneck has become more of a problem, a problem whose severity increases with every newer generation of CPU. Surely there must be a less primitive way of making big changes in the store than by pushing vast numbers of words back and forth through the von Neumann bottleneck. 
Not only is this tube a literal bottleneck for the data traffic of a problem, but, more importantly, it is an intellectual bottleneck that has kept us tied to word-at-a-time thinking instead of encouraging us to think in terms of the larger conceptual units of the task at hand. Thus programming is basically planning and detailing the enormous traffic of words through the von Neumann bottleneck, and much of that traffic concerns not significant data itself, but where to find it. The performance problem can be alleviated (to some extent) by several mechanisms. Providing a cache between the CPU and the main memory, providing separate caches or separate access paths for data and instructions (the so-called Modified Harvard architecture), using branch predictor algorithms and logic, and providing a limited CPU stack or other on-chip scratchpad memory to reduce memory access are four of the ways performance is increased. The problem can also be sidestepped somewhat by using parallel computing, using for example the Non-Uniform Memory Access (NUMA) architecture—this approach is commonly employed by supercomputers. It is less clear whether the intellectual bottleneck that Backus criticized has changed much since 1977. Backus's proposed solution has not had a major influence. Modern functional programming and object-oriented programming are much less geared towards "pushing vast numbers of words back and forth" than earlier languages like Fortran were, but internally, that is still what computers spend much of their time doing, even highly parallel supercomputers. As of 1996, a database benchmark study found that three out of four CPU cycles were spent waiting for memory. Researchers expect that increasing the number of simultaneous instruction streams with multithreading or single-chip multiprocessing will make this bottleneck even worse. Non-von Neumann processors Perhaps the most common kind of non-von Neumann structure used in modern computers is content-addressable memory (CAM). |Wikimedia Commons has media related to Von Neumann architecture.| - CARDboard Illustrative Aid to Computation - Harvard architecture - Interconnect bottleneck - Little man computer - Modified Harvard architecture - Random-access machine - Turing machine - von Neumann, John (1945), First Draft of a Report on the EDVAC, archived from the original on March 14, 2013, retrieved August 24, 2011 - Ganesan 2009 - Markgraf, Joey D. (2007), The Von Neumann bottleneck, retrieved August 24, 2011 - Copeland 2006, p. 104 - MFTL (My Favorite Toy Language) entry Jargon File 4.4.7, retrieved 2008-07-11 - Turing, A.M. (1936), "On Computable Numbers, with an Application to the Entscheidungsproblem", Proceedings of the London Mathematical Society, 2 (1937) 42: 230–65, doi:10.1112/plms/s2-42.1.230 (and Turing, A.M. (1938), "On Computable Numbers, with an Application to the Entscheidungsproblem. 
A correction", Proceedings of the London Mathematical Society, 2 (1937) 43 (6): 544–6, doi:10.1112/plms/s2-43.6.544) - "Electronic Digital Computers", Nature 162, September 25, 1948: 487, doi:10.1038/162487a0, retrieved 2009-04-10 - Lukoff, Herman (1979), From Dits to Bits...: A Personal History of the Electronic Computer, Robotics Press, ISBN 978-0-89661-002-6 - ENIAC project administrator Grist Brainerd's December 1943 progress report for the first period of the ENIAC's development implicitly proposed the stored program concept (while simultaneously rejecting its implementation in the ENIAC) by stating that "in order to have the simplest project and not to complicate matters" the ENIAC would be constructed without any "automatic regulation". - Copeland 2006, p. 113 - Copeland, Jack (2000), A Brief History of Computing: ENIAC and EDVAC, retrieved January 27, 2010 - Copeland, Jack (2000), A Brief History of Computing: ENIAC and EDVAC, retrieved 27 January 2010 which cites Randell, B. (1972), Meltzer, B.; Michie, D., eds., "On Alan Turing and the Origins of Digital Computers", Machine Intelligence 7 (Edinburgh: Edinburgh University Press): 10, ISBN 0-902383-26-4 - Copeland 2006, pp. 108–111 - Bowden 1953, pp. 176,177 - Bowden 1953, p. 135 - "Electronic Computer Project". Institute for Advanced Study. Retrieved May 26, 2011. - James E. Robertson (1955), Illiac Design Techniques, report number UIUCDCS-R-1955-146, Digital Computer Laboratory, University of Illinois at Urbana-Champaign - F.E. Hamilton, R.R. Seeber, R.A. Rowley, and E.S. Hughes (January 19, 1949). "Selective Sequence Electronic Calculator". US Patent 2,636,672. Retrieved April 28, 2011. Issued April 28, 1953. - Herbert R.J. Grosch (1991), Computer: Bit Slices From a Life, Third Millennium Books, ISBN 0-88733-085-1 - C. Gordon Bell; R. Cady; H. McFarland; J. O'Laughlin; R. Noonan; W. Wulf (1970), "A New Architecture for Mini-Computers—The DEC PDP-11", Spring Joint Computer Conference: 657–675. - Linda Null; Julia Lobur (2010), The essentials of computer organization and architecture (3rd ed.), Jones & Bartlett Learning, pp. 36,199–203, ISBN 978-1-4496-0006-8 - Backus, John W.. "Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs". doi:10.1145/359576.359579. - Dijkstra, Edsger W.. "E. W. Dijkstra Archive: A review of the 1977 Turing Award Lecture". Retrieved 2008-07-11. - Richard L. Sites, Yale Patt. "Architects Look to Processors of Future". Microprocessor report. 1996. - "COP8 Basic Family User’s Manual". National Semiconductor. Retrieved 2012-01-20. - "COP888 Feature Family User’s Manual". National Semiconductor. Retrieved 2012-01-20. - Bowden, B.V., ed. (1953), Faster Than Thought: A Symposium on Digital Computing Machines, London: Sir Isaac Pitman and Sons Ltd. - Rojas, Raúl; Hashagen, Ulf, eds. (2000), The First Computers: History and Architectures, MIT Press, ISBN 0-262-18197-5 - Davis, Martin (2000), The universal computer: the road from Leibniz to Turing, New York: W W Norton & Company Inc., ISBN 0-393-04785-7 republished as: Davis, Martin (2001), Engines of Logic: Mathematicians and the Origin of the Computer, New York: W. W. Norton & Company, ISBN 978-0-393-32229-3 - Can Programming be Liberated from the von Neumann Style?, John Backus, 1977 ACM Turing Award Lecture. Communications of the ACM, August 1978, Volume 21, Number 8 Online PDF - C. Gordon Bell and Allen Newell (1971), Computer Structures: Readings and Examples, McGraw-Hill Book Company, New York. 
Massive (668 pages) - Copeland, Jack (2006), "Colossus and the Rise of the Modern Computer", in Copeland, B. Jack, Colossus: The Secrets of Bletchley Park's Codebreaking Computers, Oxford: Oxford University Press, ISBN 978-0-19-284055-4 - Ganesan, Deepak (2009), The Von Neumann Model, retrieved October 22, 2011 - McCartney, Scott (1999). ENIAC: The Triumphs and Tragedies of the World's First Computer. Walker & Co. ISBN 0-8027-1348-3. - Goldstine, Herman H. (1972). The Computer from Pascal to von Neumann. Princeton University press. ISBN 0-691-08104-2. - Shurkin, Joel (1984). Engines of the Mind - a history of the computer. New York, London: W.W. Norton & Company. ISBN 0-393-01804-0. - Harvard vs von Neumann - A tool that emulates the behavior of a von Neumann machine - JOHNNY – A simple Open Source simulator of a von Neumann machine for educational purposes
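Returning to the Von Neumann bottleneck discussed in this article: because every instruction fetch and every operand reference crosses the same processor-memory path, the interesting quantity is total bus traffic rather than arithmetic. The rough model below is a hedged illustration only; the figures of ten instruction fetches and nine data accesses per array element are assumptions matching the toy accumulator machine sketched earlier, not measurements of any real CPU, and real machines with index registers, caches and wide buses need far fewer main-memory transfers. Its point is Backus's: most of the traffic is bookkeeping (fetches and address handling), not the payload data itself.

```python
# Rough bus-traffic model for summing n words one at a time on a simple
# accumulator machine. The per-element figures are assumptions taken from the
# toy machine sketched earlier in this article, not real hardware measurements.

def bus_traffic(n_elements, instr_fetches_per_element=10, data_accesses_per_element=9):
    fetches = n_elements * instr_fetches_per_element   # every instruction crosses the bus
    data = n_elements * data_accesses_per_element      # loop counters, totals, rewritten code...
    payload = n_elements                               # ...but only one access touches the input data
    total = fetches + data
    return {
        "instruction fetches": fetches,
        "data reads/writes": data,
        "total bus transfers": total,
        "share that is payload data": payload / total,
    }

for name, value in bus_traffic(1_000_000).items():
    if isinstance(value, int):
        print(f"{name}: {value:,}")
    else:
        print(f"{name}: {value:.1%}")   # roughly 5% of the traffic is the data being summed
```

Caches, split instruction and data caches in the Modified Harvard style, and on-chip registers all help by keeping most of these transfers from ever reaching main memory.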
Cicadas are members of the Hemiptera, then the Homoptera; the Homoptera is often considered an order in its own right these days, but in some books you will find it designated as a suborder of the Hemiptera. They then belong to the superfamily Cicadoidea and the family Cicadidae. There are about 1600 species of cicada in the world; some of the largest are in the genera Pomponia and Tacua. Cicadas are mainly warm-temperate to tropical in habitat. There are around 200 species in Australia, compared with about 100 species in the Palaearctic and only 1 species in the UK. The British species is Melampsalta montana (formerly placed in Cicadetta), which is widespread outside of the UK. Generally speaking, cicadas have life cycles that last from one to several years; most of this time is spent as a nymph under the ground, feeding on the xylem fluids of plants by piercing their roots and sucking out the fluids. Some species take a very long time to develop, and the periodical cicadas of the genus Magicicada of North America are well known because some of them have a 17-year life cycle. These periodical cicadas can be found in Tennessee, Ohio, Virginia, Kentucky, Illinois and other parts of the US, including parts of the Mississippi valley. There are 3 species of periodical cicada, each of which has two forms, a 17-year form and a 13-year form; they are Magicicada septendecim, Magicicada septendecula and Magicicada cassini. Some authorities claim that the 13-year form of each species should be a species in its own right, in which case they are named Magicicada tredecim, Magicicada tredecassini and Magicicada tredecula. Of the three species (called Decim, Cassini and Decula for short), Decim is the most common in the north of their range, Decula is rare all over, and Cassini is most common around the Mississippi valley. The common names for cicadas vary widely around the world. In Australia, children were the first to coin the common name for many cicadas - names that have been dutifully passed down from generation to generation of cicada hunters. Probably the best known and most mysterious is the Black Prince (Psaltoda plaga), followed closely by the Green Grocer (Cyclochila australasiae). Other popular names include the Double Drummer (Thopha saccata), Redeye (Psaltoda moerens), Floury Baker (Abricta curvicosta) and Cherrynose (Macrotristria angularis). Two other common names becoming more widely accepted are Hairy Cicada (Tettigarcta tomentosa and T. crinita) and Bladder Cicada (Cystosoma saundersi). The exact origin of most of these names is unclear, but the Yellow Monday and the Green Grocer are among those that have long been in popular use.
Background
While everyone is at risk of being exposed to environmental hazards, those who are marginalized are typically at higher risk. Specifically, populations such as the elderly and children are more susceptible to harms associated with environmental hazards such as air pollution, lead exposure, and heat waves because of age and weaker immune systems. Further, many First Nations communities are at an increased risk of being exposed to environmental hazards, such as unsafe drinking water, as evidenced by the drinking water advisories that exist in many First Nations communities across B.C. While nursing's contributions to environmental health date back to influential nursing leaders such as Florence Nightingale, who recognized that the environment was a tool that could be manipulated to improve health outcomes, as well as Lillian Wald and Mary Breckinridge, who addressed issues of sanitation and clean water, many nurses are often unaware of nursing's role in environmental health. With emerging issues such as greenhouse gas emissions, the use of toxic substances in daily care and cleaning products, and the incineration of medical waste, many British Columbians continue to be exposed to a variety of environmental hazards that may cause adverse health outcomes. Further, nurses are presented on a daily basis with patients whose health conditions have been highly impacted by environmental exposures. The precautionary principle, which is based on a duty to prevent harm, indicates that preventive measures must be taken when there is a threat of serious damage, even if scientific evidence is not conclusive. While the precautionary principle has been used as a form of prevention, this principle is not always adhered to. Nurses across B.C. need to utilize their position and voice to advocate for policies that protect British Columbians from hazardous environmental exposures that threaten their health and well-being.
Key messages
- Environmental factors play a significant role in contributing to both positive and negative health outcomes.
- Various environmental exposures exist throughout B.C. All British Columbians, especially those who are marginalized, are at risk of experiencing adverse health outcomes as a result of these exposures.
- Addressing external physical and social contributors to health is a part of a comprehensive nursing assessment.
- Nurses are well positioned to identify the factors that contribute to both good and poor health, and to advocate for healthy environmental policies.
- As stated in the CNA Code of Ethics for Registered Nurses, "nurses should endeavor as much as possible, individually and collectively to advocate for and work towards eliminating social inequities by supporting environmental preservation and restoration and advocating for initiatives that reduce environmentally harmful practices in order to promote health and well-being" (p. 20).
- When an environmental exposure poses a serious threat, but scientific evidence is not conclusive, preventive measures must still be taken to protect the health and well-being of society.
Further Reading
- Canadian Nurses Association. (2016). Nursing and Environmental Health.
An aneurysm is a weakening of the wall of a blood vessel, usually an artery. The weakening in the vascular wall can result in a bulging, blood-filled sac. The blood in this sac can clot, break free and form an obstruction that can lodge elsewhere and block necessary blood flow. Aneurysms usually occur in the aorta, the body's largest artery, but they also frequently occur in the brain. A stroke occurs when brain cells start to die from a lack of oxygen-rich blood. Sudden bleeding in the brain can also cause a stroke if brain cells are damaged. Stroke symptoms can include sudden weakness, paralysis or numbness of the face, arms, or legs, and trouble speaking or seeing. Both a stroke and an aneurysm are serious medical conditions that require emergency care.
Stars emit X-rays. Our sun, for example, has a hot corona entwined with magnetic fields, and emits X-rays. Astronomers have discovered that young stars emit considerably more X-rays than older, sun-like stars, but it is not clear why, nor when in the life of a new star the X-ray emission begins, nor how the emission subsequently changes. For example, it has been established that those young stars with accretion and winds emit X-rays -- perhaps the interaction between the ionized outflows and magnetic fields in the disks contribute to the strong X-rays. SAO astronomer Scott Wolk was part of a team of twelve astronomers that combined data from the Chandra X-ray Observatory and four other observatories using four wavelength regions to investigate the mystery of X-ray emission from young stars. They chose the rich, young stellar cluster in the Orion nebula, a relatively nearby site of star formation that includes members in a wide range of evolutionary stages, including forty-five very young stars that are probably less than a million years old. The scientists used their optical and infrared data to more accurately determine the ages and properties of the candidates, and the Chandra data to characterize the X-rays. This is the first such research effort with a statistically adequate sample to determine the X-ray properties of young stars. The astronomers conclude that X-rays begin at a very early stage in the life of a star; depending on the star's mass, this can be less than one hundred thousand years. The X-ray emission then increases with age, even beyond the age at which the star's accretion is thought to cease. The results imply that there is only one mechanism that produces moderate-strength X-rays in these young stars, and that this mechanism arises from stellar magnetic activity and does not depend on either accretion or circumstellar disks.
Drawing in Adobe Photoshop involves creating vector shapes and paths. In Photoshop, you can draw with any of the shape tools, the Pen tool, or the Freeform Pen tool. Options for each tool are available in the options bar. Before you begin drawing in Photoshop, you must choose a drawing mode from the options bar. The mode you choose to draw in determines whether you create a vector shape on its own layer, a work path on an existing layer, or a rasterized shape on an existing layer.
Vector shapes are lines and curves you draw using the shape or pen tools. (See Draw shapes and Draw with the Pen tools.) Vector shapes are resolution-independent—they maintain crisp edges when resized, printed to a PostScript printer, saved in a PDF file, or imported into a vector-based graphics application. You can create libraries of custom shapes and edit a shape’s outline (called a path) and attributes (such as stroke, fill color, and style).
Paths are outlines that you can turn into selections, or fill and stroke with color. You can easily change the shape of a path by editing its anchor points. A work path is a temporary path that appears in the Paths panel and defines the outline of a shape. You can use paths in several ways:
- Use a path as a vector mask to hide areas of a layer. (See About layer and vector masks.)
- Convert a path to a selection. (See Convert paths to selection borders.)
- Fill or stroke a path with color. (See Fill paths with color.)
- Designate a saved path as a clipping path to make part of an image transparent when exporting the image to a page-layout or vector-editing application. (See Create transparency using image clipping paths.)
When you work with the shape or pen tools, you can draw in three different modes. You choose a mode by selecting an icon in the options bar when you have a shape or pen tool selected.
- Shape Layers: Creates a shape on a separate layer. You can use either the shape tools or the pen tools to create shape layers. Because they are easily moved, resized, aligned, and distributed, shape layers are ideal for making graphics for web pages. You can choose to draw multiple shapes on a layer. A shape layer consists of a fill layer that defines the shape color and a linked vector mask that defines the shape outline. The outline of a shape is a path, which appears in the Paths panel.
- Paths: Draws a work path on the current layer that you can then use to make a selection, create a vector mask, or fill and stroke with color to create raster graphics (much as you would using a painting tool). A work path is temporary unless you save it. Paths appear in the Paths panel.
- Fill Pixels: Paints directly on a layer—much as a painting tool does. When you work in this mode, you’re creating raster images—not vector graphics. You work with the shapes you paint just as you do with any raster image. Only the shape tools work in this mode.
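As a rough, hedged illustration of the vector-versus-raster distinction described above (plain Python rather than Photoshop scripting; the circle, grid sizes and helper names are invented for the example), the sketch below stores a shape only as a mathematical description and samples it onto pixels at output time. Re-rasterizing the description at a larger size stays smooth, which is what keeping a shape layer or saved path allows, while enlarging an already-rasterized bitmap merely repeats its pixels, which is what happens to Fill Pixels output.

```python
# Illustration only (not Photoshop code): why a vector description stays crisp
# when resized while a rasterized shape does not. The "vector shape" is just a
# circle defined in normalized coordinates; "rasterizing" samples it onto a grid.

def rasterize_circle(size):
    """Sample a circle of radius 0.4 (normalized coordinates) onto a size x size grid."""
    cx = cy = 0.5
    grid = []
    for y in range(size):
        row = []
        for x in range(size):
            # centre of the pixel in normalized [0, 1) coordinates
            px, py = (x + 0.5) / size, (y + 0.5) / size
            row.append((px - cx) ** 2 + (py - cy) ** 2 <= 0.4 ** 2)
        grid.append(row)
    return grid

def upscale_nearest(grid, factor):
    """Enlarge an existing bitmap by repeating pixels (what scaling raster data does)."""
    return [[value for value in row for _ in range(factor)]
            for row in grid for _ in range(factor)]

def show(grid, label):
    print(label)
    for row in grid:
        print("".join("#" if v else "." for v in row))
    print()

low = rasterize_circle(12)                                   # raster version, committed at 12x12
show(upscale_nearest(low, 3), "raster 12x12, scaled up 3x (blocky)")
show(rasterize_circle(36), "vector description, re-rasterized at 36x36 (smooth)")
```

The same logic is why the shape and pen tools keep the outline as an editable path for as long as possible and only commit to pixels when you rasterize or flatten.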
An Overview of Land Based Sources of Marine Pollution
The major sources of coastal and marine pollution originating from the land vary from country to country. The nature and intensity of development activities, the size of the human population, and the state and type of industry and agriculture are but a few of the factors contributing to each country's unique pollution problems. Pollution is discharged either directly into the sea, or enters the coastal waters through rivers and by atmospheric deposition. In order to mitigate and control the impact of pollution on coastal and marine resources, it is essential that the type and load of pollutants be identified. This involves determination of the sources and their location, and the volume and concentration of the pollutants. Point sources of pollution are sources that can be traced to one location, such as industrial and sewage treatment plants. Point sources, though easy to identify, account for only a fraction of the land-based sources of pollution affecting coastal and marine environments. Non-point sources are harder to identify, and include urban storm water run-off and overflow discharges, as well as runoff from forestry and agriculture. Pollution sources can be located relatively far away from coastal areas and still have an impact. Pollutants from sources and activities within a drainage area can be carried to the coast by rivers. Pollution from distant sources can also enter the marine environment through atmospheric deposition. Based on current information, the land-based pollutants constituting the greatest threat to coastal and marine ecosystems and to public health in the Wider Caribbean Region are sewage, oil hydrocarbons, sediments, nutrients, pesticides, litter and marine debris, and toxic wastes.
Sewage is one of the most significant pollutants affecting the coastal environments of the Wider Caribbean Region, especially in the developing nations. In 1993, the Pan American Health Organization (PAHO) indicated that only 10% of the sewage generated in the Central American and Caribbean island countries was properly treated. A more recent survey conducted in eleven CARICOM countries by PAHO reported that the percentage of the population served by sewage systems varied from 2 to 16%. The inadequate number of sewage treatment plants in operation, combined with the poor operating condition of available treatment plants and the practice of discharging mostly untreated wastewater, is likely to have an adverse effect on the quality of coastal waters. The population of coastal dwellers in most of the countries in the region continues to grow steadily, thus increasing the amounts of poorly treated or untreated sewage waste waters being discharged into the coastal waters. The discharge of sewage can cause public health problems, either from contact with polluted waters or from consumption of contaminated fish or shellfish. The discharge of untreated sewage effluents also produces long-term adverse impacts on the ecology of critical coastal ecosystems in localized areas due to the contribution of nutrients and other pollutants.
Pollution due to inadequate sewage disposal causes nutrient enrichment around population centers, and high nutrient levels and even eutrophication near treatment facilities and sewage outfalls. Increased nutrient concentrations promote increased algal and bacterial growth, degradation of seagrass and coral reef ecosystems, decreased fisheries production, along with risks to human health. The past decade has also witnessed an increasing growth in the regions tourism, an industry dependent on the quality of the natural environment. Estimates provided by Caribbean Tourism Organization (CTO) indicate that the total stayover tourist arrival to the Caribbean region is close to 12 million visitors per year, a figure which does not include the tourists visiting coastal areas of the Gulf of Mexico, Central America, the Mexican Caribbean and the northern coast of South America. In addition, CTO statistics for daily cruise ship visitors for the period of 1991 and 1992 indicated close to 8 million visitors per year. In response to the increasing flux of tourists, hotels and recreational facilities are being built in the region. Because of the lack of the necessary municipal sewerage systems, hotels are placed in the position of operating their own treatment plants. According to current reports, only 25% of the treatment plants operated by hotels and resort complexes are in good operating condition. There are considerable efforts underway in the Caribbean region to increase the proportion of population served by communal sewerage systems, in spite of the high costs involved. The prohibitively high costs of building and maintaining traditional sewage treatment plants are frequently given as a reason for not treating the sewage before its disposal. There are however several biological methods of treatment available for sewage not contaminated with wastes of industrial origin, which would be suitable to the tropical and sub-tropical character of the Caribbean region. Unfortunately, in most instances, sewage does not only contain human excreta, but also various environmentally unfriendly compounds used in households, such as detergents. The problem is further exacerbated by the common practice of discharging untreated or inadequately treated industrial waste water into the domestic waste water stream. As a result, most sewers contain a variety of toxic and nonbiodegradable substances, which make their treatment less effective and more costly. It is estimated that less than 2% of the urban sewage is treated before its disposal, and that the proportion of treated sewage from rural communities is probably even lower. The outfalls of the sewerage systems are usually very short, contributing to the pollution of nearshore waters. An additional source of sewage is from the increasing number of ships and recreational vessels within the region. Larger ships have holding tanks for sewage, which, according to Annex IV of MARPOL, they are not permitted to discharge within four miles of the nearest land, unless they have approved treatment plants on board. Coastal cargo vessels and recreational boats do not have holding tanks, and are likely to discharge their waste waters in marinas and nearshore coastal areas due to the lack of port reception facilities for sewage wastes in most of the countries in the region. 
The alleviation of the sewage problem and the creation of a long-term viable economy will necessitate a political commitment to develop and enforce legislation relevant to the management of residential and tourism development in the coastal zone, as well as adherence to planning policies taking into account the potential environmental impacts of development. Improving existing sewage disposal facilities, or building new ones where necessary, is important, as is ensuring that individual houses and resorts have sewage disposal systems, such as septic tanks. Larger resorts should use existing municipal sewage systems, where available, or install and manage their own package treatment plants.
The Wider Caribbean region is one of the largest oil producing areas of the world, with a production of approximately 170 × 10^6 tons per year. The main oil producing countries are Colombia, Mexico, Trinidad and Tobago, the USA, and Venezuela. Most of the oil produced within the Wider Caribbean region is shipped within the region, resulting in an intricate network of distribution routes. The sites most vulnerable to accidents are areas where tankers move through restricted channels and in the vicinity of ports. In addition to tankers, a number of tank barges also operate in the region in support of extensive oil refineries and petrochemical industries. In spite of regulations established in Annex I of MARPOL 73/78, tankers and barges do not always use port facilities for the disposal of bilge and tank-washing wastes, and a significant amount of oil is discharged into the coastal areas of the Wider Caribbean region this way. This deliberate release far exceeds the amount of oil entering the sea from accidental oil spills. Offshore oil and gas exploitation can also become a source of pollution, either in the form of accidental oil spills or from the release of "produced water" brought up from the oil-bearing strata with the oil and the gas at the time of production. The "produced water" is discharged into the marine environment together with waste drilling chemicals and mud, and may contain substances that exert high oxygen demand, together with toxic polycyclic aromatic hydrocarbons (PAHs), benzene, ethylbenzene, xylene and heavy metals, such as lead, copper, nickel and mercury. Accidental oil spills from offshore operations are often caused by pipeline breakage, well blowouts, platform fires, overflows and equipment malfunctioning. In addition to the accidental oil spills, there is also a significant amount of natural seepage of petroleum hydrocarbons from submarine oil deposits, which contributes to marine pollution. Unlike the previously described sources of oil pollution, natural oil seepages are very difficult to estimate.
Much of the information on oil pollution levels in coastal and marine waters of the Wider Caribbean Region comes from the UNEP-IOC/IOCARIBE CARIPOL (Caribbean Oil Pollution Database) Program initiated in 1979. The data gathered by CARIPOL indicated that the concentrations of dissolved/dispersed petroleum hydrocarbons (DDPHs) are generally low in offshore waters, while relatively high levels are found in enclosed coastal areas. Oil refineries and petrochemical plants were also seen as the major sources of coastal oil pollution within the region. The NOAA Status and Trends Programme has been gathering information about the accumulation of petroleum hydrocarbons, particularly toxic compounds such as PAHs, in sediments and marine organisms along the U.S. Gulf Coast.
The CARIPOL Programme has also obtained similar information along the Mexican Gulf coast and the coastal areas of the Caribbean region. The impact of oil pollution on the ecology of coastal and marine ecosystems and the species that inhabit them is particularly destructive following massive oil spills caused by maritime accidents. However, the information required to completely understand the ecological and health risks caused by long-term chronic oil discharges into the coastal marine environment of the Wider Caribbean Region is very limited. Corals do not die from oil remaining on the surface of the water. However, gas exchange between the water and the atmosphere is decreased, with the possible result of oxygen depletion in enclosed bays where surface wave action is minimal. Coral death results from smothering when submerged oil directly adheres to coral surfaces, and oil slicks affect sea birds and other marine animals. In addition, tar accumulation on beaches reduces the tourism potential of coastal areas.
Rivers bring a considerable amount of sediment into the coastal and marine ecosystems of many Wider Caribbean region nations. Natural geochemical processes control most of the suspended and dissolved materials carried by these rivers. However, human activities can increase the amount of sediment in the rivers. These activities include erosion of the river basin watershed caused by deforestation, urbanization and agricultural activities, and a variety of pollutants discharged into the waters. Most of the rivers discharge sediment loads ranging between 100 and 1000 mg/l into the coastal waters of the Wider Caribbean region. The yearly sediment load in the region can be estimated at 10^9 tons per year, which is approximately 12% of the global sediment input from rivers, estimated at 8 × 10^9 tons/year. Most land in the Caribbean region, especially on the small islands, is relatively near the ocean, making the coastal and marine environments especially vulnerable to sedimentation caused by human activities. In addition, the coastal areas are under increasing development pressure, while the shortage of land on small islands forces development activities onto steeper, erosion-prone terrain. In many Caribbean countries, intensive mining of beach sand, as well as inappropriate coastal engineering, such as the construction of breakwaters and seawalls, has led to increased coastal erosion. All these activities combined can have serious ecological impacts. In the Wider Caribbean Region, deforestation of the river basin watersheds is likely the biggest human activity contributing to sediments entering the coastal zone. Continued economic growth in the region has brought about changes in the traditional uses of land. Increased agricultural development has taken place at the expense of forestlands. There is a limited amount of information available about the long-term effects of siltation in coastal waters, most of which has been gathered from remote sensing sources and coral reef surveys. Long-term data is needed to establish a time series of patterns and consequences of land use changes in drainage basins. The increased turbidity of coastal waters places a continuous stress on critical coastal ecosystems, such as coral reefs. The negative effect of siltation on coral reefs has been confirmed by studies conducted on the coasts of Panama, Costa Rica and Nicaragua, among other locations. Increased sedimentation can cause a variety of negative impacts on coral reefs.
These include screening out light needed for photosynthesis, scouring of coral by sand and other transported sediments, poor survival of juvenile coral due to loss of suitable substrata, and the direct smothering of coral in cases of extreme sedimentation. Mining and dredging operations can also be a direct source of siltation. The mining of bauxite is particularly important for the economies of Jamaica, Suriname, Guyana, and to a lesser degree the Dominican Republic and Haiti. In the case of Jamaica, bauxite wastes are not discharged into rivers or coastal areas but into deposition ponds. There is little information about the final disposal of wastes from bauxite operations for the other mentioned countries. Other mining operations within the Wider Caribbean Region include the mining and processing of ores for the production of nickel oxide in Cuba and the Dominican Republic. The mining activities take place in areas close to the coast. Again, little information exists about the disposal of mine tailings in rivers of adjacent coastal waters. Dredging is another contributor to the siltation of coastal waters. Dredge materials are generally contaminated sediments containing toxic heavy metals, organic pollutants etc. originating from domestic and industrial point discharges and non-point sources. Dredging of shallow coastal waters to keep open shipping lanes, while not producing pollution, causes serious re-suspension of sediments and resulting decrease of water clarity. Increased water turbidity decreases the productivity of coral reefs and seagrass beds, which rely on light for photosynthesis. In cases of high sediment load, physical smothering of coral reefs, seagrasses, and associated filter feeders and other benthic organisms is also possible. A related problem is the transport of pesticides and herbicides bound to sediments to the marine environment. The discharge of nutrients into coastal waters is a major cause of eutrophication, especially in areas of limited water circulation. Nutrient enrichment is an increasing concern in the Wider Caribbean Region. The main nutrients are nitrogen and phosphorus compounds, and they enter coastal waters from point and non-point sources. Eutrophication may cause algal blooms, changes in the aquatic community structure, decreased biological diversity, fish kills and oxygen depletion events. The presence of nutrients in the water column enhances the growth of plants, and in some cases may cause algae to overgrow the corals or seagrasses that were previously present. Habitat degradation will in turn cause decreased fisheries production and loss of recreational and tourism potential Fertilizers used in agriculture are one source of nutrients reaching the coastal zone. Continued economic growth and development has drastically changed the traditional land use patterns of the Wider Caribbean Region. Agricultural development has been rapid, and, in addition, coastal areas have seen increased population growth together with changes in adjacent land use, increasing the pressures on the marine and coastal areas. Sewage from coastal settlements is also a major source of nutrients in coastal waters. In addition, nutrients, especially nitrogen, enter the marine environment via atmospheric deposition. Traffic is an important source of these atmospheric nutrients. 
To control the sources of nutrient enrichment and to reverse the adverse effects of eutrophication, it will be necessary to improve the effectiveness of nutrient reduction in sewage treatment plants and to control the runoff from non-point sources by improving management practices in agriculture. In addition, practices that promote long-term benefits and cause the least damage to interrelated ecosystems should be encouraged. Tourism, which is of great importance to the economies of the Wider Caribbean Region, is directly dependent on the quality of the coastal environment. When eutrophication occurs, the ecological and aesthetic quality of the environment is altered and, in severe cases, recreational use is prevented.
Pesticides (insecticides, herbicides, fungicides, etc.) are extensively used in conjunction with agriculture within the Wider Caribbean Region. Pesticides reach the coastal and marine environment via rivers and by atmospheric transport. Pesticides in the marine environment may affect living organisms and, through contamination of seafood, may become a public health problem. It has been estimated that 90% of the pesticides that are applied do not reach the targeted species. Pesticides are highly toxic and tend to accumulate in the coastal and marine biota, making pesticide contamination a serious concern. The negative effects of pesticides in the marine and coastal environments include changes in reef community structure, such as decreases in live coral cover and increases in algae and sponges, and damage to seagrass beds and other aquatic vegetation from herbicides. Marine organisms may be affected either directly, as the pesticide moves through the food chain and accumulates in the biota, or by loss or alteration of their habitat. This, in turn, will lead to decreased fisheries production. Pesticides may cause fish kills in areas of poor water circulation, and groundwater and drinking water supplies may become contaminated. Areas under particular threat are those with little water exchange and circulation, where pesticide residues don't get flushed out quickly. Many of the monitoring programs developed to determine the presence of pesticide residues accumulated in sediments and the marine biota in the Wider Caribbean Region have concentrated on a limited number of pesticides of known long-term environmental impact and toxicity. These pesticides include DDTs, chlordanes, dieldrin, endrin, aldrin, HCBs, heptachlor and its epoxides, and endosulfan, among others. NOAA's Status and Trends Mussel Watch Program (MWP) has conducted most of the published surveys of pesticides in sediments and marine organisms of the Wider Caribbean Region. In 1986 and 1987, DDTs were still the most abundant compounds, and their levels were considerably higher in oyster tissues than in sediments. Until recently, the MWP has concentrated its work on the U.S. Gulf Coast, and limited data are as yet available for most Caribbean nations. Efforts to reduce pesticide pollution will depend on a change in agricultural practices and in the handling of pesticides. The environmental effects depend on the chemicals used, the quantities applied, and the biophysical layout of the farm, including the amount of vegetation cover, the slope, drainage and the presence of riparian buffer zones along rivers and streams. Land can be set aside as buffer zones to limit erosion, and newer pesticides with much lower application rates can be introduced.
Some pesticides and many insecticides are sediment-binding, and the amount reaching the coastal environment could be reduced by controlling soil erosion in agricultural areas. Water-soluble pesticides are potentially more damaging because they easily enter the coastal environment. The end of the rainy season is a time of particular threat to surface water contamination because of potential overflow from catchment areas to nearby rivers and streams. There is not yet very much data available about the behaviour of these pesticides in the marine environment when applied in tropical coastal zones, including degradation rates, fractionation and partitioning, biological uptake, and transfer through the food chain to humans. Data on the presence of second and third generation pesticides has been obtained from the Caribbean coasts of Costa Rica, Nicaragua and Panama. It was determined that only residues of the pesticide chlorpyrifos showed widespread distribution in the analyzed sediments. Frequent fish kills were also observed after the application of the pesticides, indicating high toxicity to non-target organisms. It is clear that a modification of agricultural practices is necessary in order to reduce the impact of pesticides, as well as their transfer to the aquatic environment.
Increasing amounts of solid wastes are generated within the Wider Caribbean region, coupled with deficient collection systems and inadequate disposal practices. Additionally, disposal of solid wastes originating from ships and other offshore sources is impacting the coastal areas of the region. The increasing amounts of solid wastes in the coastal zone are detrimental to the economies of many countries, especially those dependent on the tourist trade. Some objects, such as glass and hypodermic needles, can pose a health risk to those coming into contact with them. Scientists have documented an increasing number of injuries and deaths among marine mammals, fish, sea turtles, and birds due to entanglement. Furthermore, animals can mistake plastic items and pelagic tar for food. Some marine animals accidentally feeding on plastic may feel a sense of fullness and, as a result, slowly starve to death. Land-based solid waste pollution has its origin in inadequate disposal practices, such as using rivers, streams and mangrove swamps as dumpsites. Poorly managed landfills in coastal areas can also become sources of debris, especially in the rainy season, when runoff may wash wastes out to sea. At present there is little published information available about the amount of solid wastes generated in the Wider Caribbean region, and about how these wastes are handled prior to final disposal. Solid wastes dumped at sea come from shipping, commercial fisheries, and other offshore activities. The disposal of solid wastes by ships in nearshore coastal areas is regulated by Annex V of the MARPOL 73/78 Convention. In July 1991, the Marine Environment Protection Committee of the International Maritime Organization (IMO) designated the Wider Caribbean region as a "Special Area" under these regulations. However, in order to comply with Annex V of MARPOL, most countries in the region will need to provide port reception facilities for Annex V wastes generated by shipping activities. At present, many countries in the region lack such facilities.
The lack of adequate port reception facilities could result in solid wastes being disposed of at sea and transported by wind and currents to shore, often in locations distant from the original source of the material. Ship-generated wastes account for approximately 80% of solid wastes in the coastal areas. Beach cleanups are performed in many countries of the region. Plastics are generally the most common items found, while glass, metal containers, paper products and other materials are also commonly seen. The most effective way to reduce this pollution is to stop it at the source. To this end, increasing public awareness, strengthening local legislation, and promoting proper garbage collection, transportation, and disposal systems, including the development of port reception facilities to comply with Annex V of MARPOL, are some potential solutions for the problem of ship-generated pollution.

Toxic pollutants are organic and inorganic compounds, either synthesized compounds or chemically transformed natural substances. When released into the marine environment, they can have severe adverse effects on marine ecosystems. Many compounds are very persistent in the aquatic environment, bio-accumulate in marine organisms, and are highly toxic to humans via the consumption of seafood. The sources of toxic pollutants are primarily industrial point sources, such as the petroleum industry (oil refineries and petrochemical plants), chemical industries (organic and inorganic), wood/pulp plants, pesticide production and formulation, and metal and electroplating industries. Toxic substances also enter the marine environment from non-point sources via rivers and streams and through the atmosphere. Toxic substances are generally released as a result of manufacturing operations, effluent discharges, and accidental spills. The wastes generated may contain heavy metals, carcinogenic hydrocarbons, dioxins, different types of pesticides, and other noxious organic and inorganic substances. With increasing industrial development within the Wider Caribbean region, the discharge of toxic pollutants is a potential problem for every country in the region. Major industrial activity centers within the region are concentrated in a few areas, including the Texas and Louisiana region of the U.S. Gulf Coast, the industrial area of Lake Maracaibo in Venezuela, the El Mamonal industrial complex in Cartagena Bay, Colombia, Kingston Harbour in Jamaica, and Havana Bay in Cuba. The extent of industrial toxic substances released into the environment depends on the location of the sites and the measures that companies are taking to reduce their waste flow. The potential effects of toxic substances in the marine and coastal environments include the destruction of fish and other wildlife leading to a loss of biodiversity, decreases in the productivity of mangrove, seagrass and coral reef ecosystems, negative economic impacts relating to tourism and recreation, and human health risks through contaminated food. Limiting the amount of toxic substances entering coastal and marine ecosystems usually involves a legislative approach. The legislation will have to be not only implemented but actively enforced to be effective. In future planning efforts, special attention should be paid to the location of industrial sites in order to limit their effects on important coastal and marine ecosystems. Finally, each industry producing hazardous wastes needs to have an effluent and receiving-environment monitoring program in place for compliance control. 
Many of the land-based sources of pollution degrading the coastal and marine environment reach the sea through waterborne, airborne or direct discharges. Any pollution from a confined and discrete conveyance, such as a pipe, ditch, channel, tunnel, well, or fissure, is considered point source pollution. Examples of point source pollution include sewage effluents and various industrial discharges. Domestic sewage is a significant contributor to marine pollution in the Wider Caribbean Region. Typical pollutants in sewage effluents are suspended solids, oxygen-demanding substances, nitrogen, phosphorus, oil, grease, and pathogens. Industrial wastewater carries a wider range of pollutants, which depend on the type of industry producing the waste. Oil refinery wastewater contains high levels of oxygen-demanding substances, dissolved salts, phenols, and sulfur compounds. Wastewater from the food processing industry, distilleries and soft drink industries is also high in oxygen-demanding substances, as is chemical industry wastewater, which also frequently contains toxic substances.

Non-point source pollution is more difficult to recognize than point source pollution. Non-point source pollution emanates from unconfined or unchannelled sources, including agricultural runoff, drainage or seepage, and atmospheric deposition. These pollutants reach the marine environment by surface water, through groundwater flows, or by air. Pollutants from non-point sources include sediments, nutrients, pesticides, pathogens and solid waste. They result from activities such as tillage, fertilising, manure spreading, pesticide use, irrigation and clear-cutting. The effectiveness of existing wastewater collection and treatment facilities in the region, whether domestic or industrial, is usually constrained by limited capacity, poor maintenance practices, process malfunction, and a lack of experienced or properly trained staff. Existing agricultural and forestry practices are often characterised by the absence of consistent requirements for best management practices relating to non-point sources of pollution. Environmental management therefore demands improved management practices and controls for the different point as well as non-point sources of pollution. Attention also needs to be given to land and water use in the surrounding environment. Most of the countries in the Wider Caribbean Region have adopted legal instruments to control various aspects of domestic and industrial wastewater disposal to coastal and marine waters. The degree to which these legal instruments are applied in the practical management and control of environmental pollution by governmental agencies varies from country to country, but is usually very weak. In many cases the legislation does not include systems for integrated permitting (which integrates all aspects of the environment, such as air, water, noise, waste, and risk), compliance control, and enforcement. Integrated permitting aims to co-ordinate the time schedules and information required for sectoral permitting procedures, and to establish a mechanism in charge of co-ordinating the different sectoral permits. Environmental planning and management in this regard is a sectoral issue. However, the environmental sector is not the only sector with demands on the use and quality of coastal and marine waters. Other interests and demands need to be identified as well. 
The use and protection of water therefore necessitates an integrated approach to planning and management. The quality of surface water, as well as of coastal and marine waters, is inter-linked with the use of the land and the sea. Because of this, planning and management activities will have to be performed in a multi-sectoral way. Coastal and marine planning and management should be seen as processes which embrace environmental, socio-economic and demographic considerations, including issues such as land-sea interaction, interdisciplinary co-operation, participation of public and private sector organisations, balance between protection and development, and public participation. An efficient planning and management process cannot take place without multi-sectoral participation and a co-ordinating body powerful enough to co-ordinate the different sector agencies. The co-ordinating body will decide how the integration and co-ordination between different interests should be dealt with. It should then be the responsibility of the sector agencies to implement the decisions of the co-ordinating body. Briefly, the aims of these planning activities are:
- To collect general information regarding existing conditions and overlapping development trends;
- To provide a basis for discussion and decisions regarding the priority to be given to different interests;
- To stimulate further in-depth planning or special studies; and
- To provide a basis for decisions in connection with permit applications and the implementation of various measures.
Regionally, the Land Based Sources (LBS) Protocol of the Cartagena Convention is an instrument for dealing with environmental pollution reaching the marine environment from land-based sources. The Protocol is supported by a special subprogramme of the Caribbean Environment Programme, the Assessment and Management of Environmental Pollution (AMEP) Sub-programme. A description of AMEP, programme updates, and related technical reports can be found on our AMEP page. The following reports were used as sources for the text above:
CEP Technical Report No. 32, Guidelines for Sediment Control Practices in the Insular Caribbean
CEP Technical Report No. 33, Regional Overview of Land-Based Sources of Pollution in the Wider Caribbean Region
More information about CEP Technical Reports
The following sites are listed in no particular order.
MARINE POLLUTION IN THE CARIBBEAN: This site of the Regional Marine Pollution Emergency, Information and Training Center for the Wider Caribbean Region contains information about oil pollution, training, regional focal points, and contingency plans.
Tar and Oil Pollution Data for Stations in the Gulf of Mexico and Caribbean Sea (1979-89): This page contains data collected by NOAA's Atlantic Oceanographic and Meteorological Laboratory from the Caribbean Sea and Gulf of Mexico as part of the Caribbean Oil Pollution Database (CARIPOL) Project.
ECLAC/CDCC Waste Management Links: This site contains national contingency plans, information about waste management projects, technical documents and the oil spill protocol. 
GENERAL MARINE POLLUTION
Smithsonian Institution Ocean Planet Exhibition Page on Marine Pollution: This very educational page has information about cross-country sources of pollution, raw sewage, alien species, and America's watersheds.
Ocean News Issue #4, Marine Pollution: Ocean News is published by the Bamfield Marine Station Public Education Programme. The pollution issue has information about sources and solutions of pollution, as well as red tides, and is very educational.
Consortium for International Earth Science Information Network (CIESIN) page on Environmental Treaties and Resource Indicators: The site contains a summary of the Convention on the Prevention of Marine Pollution by Dumping of Wastes and Other Matter.
SITES WITH LISTS OF POLLUTION-RELATED WEB SITES: A compilation of interesting sites with ocean pollution-related information.
Butterflies' sensory systems help them find food and mates, avoid predators, and choose appropriate host plants for their eggs. Their senses may be divided into four basic categories: touch, hearing, sight, and taste/smell. The last two categories are usually the most well-developed systems in butterflies. The information below introduces important organs associated with sensory systems at different life stages and explains how a butterfly uses its senses to navigate through its world. Butterfly sensory systems are very different from those of humans (for example, butterflies can see ultraviolet light and hear ultrasound). These differences can make it hard to study butterfly senses. Butterflies probably use their senses in many ways we just don't know about yet, because we perceive the world through mammalian senses.
TOUCH. Touch is important in different ways during the larval and adult stages of a butterfly's life. Butterflies sense touch through hairs that extend through sockets in the exoskeleton. These hairs, called tactile setae, are attached to nerve cells, which relay information about the hairs' movement to the butterfly. In larvae, tactile setae are scattered fairly evenly over the whole body. You can see these setae on Monarch larvae with a simple magnifying lens or under a microscope. Larvae have a variety of responses to touch, and these responses may change over time. (This could be an interesting area of experimentation for students!) Many students notice that larvae often curl up into a ball when lightly touched. Adults have tactile setae on almost all of their body parts. In both adults and larvae, the setae play an important role in helping the butterfly sense the relative position of many body parts (e.g., where the second segment of the thorax is in relation to the third). This is especially important for flight, and there are several collections of specialized setae and nerves that help the adult sense wind, gravity, and the position of head, body, wings, legs, antennae, and other body parts. In monarchs, setae on the adult's antennae sense both touch and smell.
HEARING. In general, butterflies appear to have poor hearing. Larvae perceive sound through tactile setae, but they seem to mainly respond to sudden noises. This is easy to observe in Monarch larvae, which will rear up if you clap loudly near them. This reaction is often called a startle response, a behavior that probably evolved to protect the larvae from predators that make noise. If you clap repeatedly, the larvae get used to the noise and stop responding, a phenomenon of learning called habituation. Habituation happens throughout the animal kingdom, including in humans. For example, people who live in cities often stop noticing the noise of traffic until they go out into the country and "hear" its absence again. Adult butterflies often sense sound through veins in their wings, but scientists have only studied this in a limited number of species. A few species of moths and butterflies (not monarchs) make sound by rubbing or clicking together parts of their bodies (e.g., wing veins, legs, clapping their wings together, etc.). In some species this may be a means of communication between individuals and can play an important role in finding mates. In other species it seems to serve as a way to scare off predators such as birds. We're not sure how many species of butterflies and moths use sound because humans often can't hear the noises they produce.
SIGHT. 
Butterflies see very differently during different stages of their lives. Larval vision is limited and poor. Larvae see through their 12 ocelli, which have only a couple of cells each (compared to the thousands of cells associated with adult insect eyes or human eyes). Larvae can still see the same range of light as adults, however, from red all the way through ultraviolet. Adults see through compound eyes made up of thousands of ommatidia. Each ommatidium gathers light and processes visual information through its own lens and nerve system. The small plastic "Dragonfly eye" available in many science, toy, and photography stores can give students an idea of how the world looks through a compound eye. Compound eyes give butterflies excellent perception of color and motion over a wide range; butterflies can see up, down, forward, backward, and to the sides at the same time. On the other hand, they are not very good at judging distance or perceiving patterns, and the images are not united into one continuous picture. Butterflies apparently see the world as a series of still photos rather than a movie. Butterflies can also perceive polarized light (light waves that vibrate in only one plane) and can sense the direction of polarization. Bees use their perception of polarized light to navigate to and from their hives, and some people suggest that butterflies may use it in similar ways, both to move around their habitat and to migrate.
TASTE & SMELL. Butterflies get much of their information about the world through chemoreceptors scattered across their bodies. In butterflies, chemoreceptors are nerve cells that open onto the surface of the exoskeleton and react to the presence of different chemicals in the environment. They operate on a system similar to a lock and key. When a particular chemical runs into a chemoreceptor, it fits into a "lock" on the nerve. This fit sends a message to the nerve cell, telling the butterfly that it has encountered the chemical. For example, organs on the back of butterfly tarsi sense dissolved sugar; when the dissolved sugar touches these chemoreceptors, the butterfly extends its proboscis to drink the nectar its tarsi have sensed. Humans also have chemoreceptors, which are concentrated on the tongue (taste buds) and in the nose. Adult butterflies sense most smells through their antennae, which are densely covered with chemoreceptors, especially on the clubs. In monarchs, chemoreceptors on the antennae sense the honey odor associated with nectar and feeding, as well as special chemicals released by the male, called pheromones. In general, pheromones help males and females of the same species find each other to mate. Monarch males can produce pheromones, which they secrete through special glands on the wings. In contrast to their close relatives, however, monarchs do not require pheromones for successful mating. Scientists are still studying what role, if any, pheromones play in Monarch mating rituals. Female butterflies often have important chemoreceptors on their legs to help them find appropriate host plants for their eggs. These chemoreceptors are at the base of spines on the back of the legs, and they run up along the spine to its tip. Females drum their legs against the plant, which releases plant juices. The chemoreceptors along the spines tell the butterfly whether it is standing on the correct host plant. Monarch females test host plants with all six legs before laying eggs. They also probably have chemoreceptors on their ovipositor. 
Monarchs invest a lot of time in finding the correct host plant for their eggs because it is essential for the eggs' survival.
Temporal range: early Oligocene – Recent
[Image: California ground squirrel (Spermophilus beecheyi) in the man-made rocky shoreline of the Berkeley Marina. The numerous crevices offer safety and shelter.]
Most marmots live in mountains, like the Sierra Nevada or the Alps. Marmots dig holes in the ground and live in burrows, underground. They hibernate, that is, they sleep through the winter. Marmots are very social animals. They group together easily. They also like to communicate with each other with whistles, especially when they sense danger. They vary in size and habits. Most are able to rise up on their hind legs and stand fully erect comfortably for long periods. This is how they watch for predators. They tend to live together more than other squirrels, and many live in colonies with complex social structures. Most Marmotini are rather short-tailed and large squirrels. The Alpine marmot (Marmota marmota) is the largest living member of the Sciuridae, at 53–73 cm in length and weighing 5–8 kg.
Ways to prevent a stomach virus include washing hands frequently, staying away from infected people, disinfecting contaminated surfaces and using separate personal items, according to Mayo Clinic. Travelers to international locations can prevent stomach issues by only drinking bottled water, avoiding ice cubes and staying away from raw or undercooked foods. Many stomach viruses, such as norovirus, spread quickly and easily through direct contact with an infected person or contact with a contaminated surface, notes the Centers for Disease Control and Prevention. A person with norovirus is contagious from the time she gets sick until after she recovers. Taking precautions around someone who is currently sick or was recently sick reduces the chances of catching the virus. Staying away from someone who is sick reduces the risk of contact with the virus. Mayo Clinic recommends keeping some distance from a person with the virus when possible. Using separate towels, utensils and plates helps limit contact with the virus. Because avoiding all contact with viruses is nearly impossible, washing hands helps remove the virus before it can cause infection. An effective way to wash hands is to rub soap onto the hands for at least 20 seconds while using warm water. Cleaning in the kitchen, both surfaces and food, reduces the spread of stomach viruses, states the CDC. Anyone recovering from a stomach virus should wait two days or longer after recovering before preparing food for others. Cleaning contaminated surfaces in the kitchen and other areas of the home with a bleach-based cleaner helps disinfect them and reduce the spread of the virus.
June 13, 2014
Astronomers Discover Nearly 200 Previously Unknown ‘Red’ Galaxies
John P. Millis, Ph.D. for redOrbit.com - Your Universe Online
One of the greatest aspects of NASA's astronomical research program is that the data accumulated from virtually all of its instruments – X-ray satellites, infrared detectors, gamma-ray satellites – is available to the public. This means professional and amateur astronomers alike have the ability to make breakthrough discoveries. For instance, in 2007 Dutch school teacher Hanny van Arkel discovered a peculiar blob in an image of the spiral galaxy IC 2497 while participating in the scientific crowdsourcing project Galaxy Zoo. Now known as Hanny's Voorwerp (Dutch for Hanny's object), the source of the radiation is a hot topic in the astronomical community. But beyond citizen science, professionals are getting in on the action as well. Astronomers Ivana Damjanov, Margaret Geller, Ho Seong Hwang, and Igor Chilingarian of the Harvard-Smithsonian Center for Astrophysics (CfA) combed through archival data from the Sloan Digital Sky Survey, looking for dense red galaxies known in the scientific community as "red nuggets". These galaxies are characterized by masses some 10 times greater than that of the Milky Way but volumes 100 times smaller. They are also important to cosmology, because most models predict the early Universe would have been filled with these types of galaxies. However, previous attempts to find examples of these galaxies in the nearby regions of the Universe yielded few results. This is puzzling because low-mass red stars – the types that would be abundant in these galaxies – are long-lived, with lifetimes even longer than the age of the Universe. So, if these galaxies were truly absent nearby, theories of galaxy evolution would be in question. The challenge is that in optical images these galaxies appear as red stars, making them difficult to pick out. "These red nugget galaxies were hiding in plain view, masquerading as stars," says Damjanov. But, using the archival survey data, the team was able to identify candidate objects for their study. Using various other telescopes – such as the Hubble Space Telescope and the Canada-France-Hawaii Telescope – the team then turned their attention to spectroscopically analyzing these candidates, which would assist them in determining which objects were stars and which were red nuggets. The study revealed that about 200 of the objects were, in fact, the target galaxies. This was an important confirmation that these galaxies are indeed present in our neighborhood of the Universe. "Now we know that many of these amazingly small, dense, but massive galaxies survive. They are a fascinating test of our understanding of the way galaxies form and evolve," explains Geller. This result can now be used to refine models of how galaxies in the Universe evolved, and how the cosmos itself may have progressed over billions of years. "Many processes work together to shape the rich landscape of galaxies we see in the nearby universe," says Damjanov. The research was presented on Wednesday, June 11 at a meeting of the Canadian Astronomical Society (CASCA) in Quebec, QC.
Image 2 (below): This series of photos shows three "red nugget" galaxies at a distance of about 4 billion light-years, and therefore seen as they were 4 billion years ago. At left, a lonely one without companion galaxies. The one in the middle is alone as well, although it appears to be next to a larger spiral galaxy. 
That blue spiral is actually much closer to us, only one billion light-years away. Finally, the red nugget on the right might have some companion galaxies residing nearby. Credit: Ivana Damjanov & CFHT MegaCam Team
Chlortetracycline was the first tetracycline discovered, in 1948. Since then five additional tetracyclines have been isolated or derived (oxytetracycline, tetracycline, demeclocycline, doxycycline and minocycline), but only the last four are available for systemic use in the United States. Of these four agents, doxycycline and minocycline are the most frequently prescribed. Research to find tetracycline analogues led to the development of the glycylcyclines. Tigecycline is the first of this new class of agents and exhibits broad-spectrum antibacterial activity similar to that of the tetracyclines. Doxycycline is one of the most active tetracyclines and is the one most often used clinically, since it possesses many advantages over traditional tetracycline and minocycline. Doxycycline can be administered twice daily, has both intravenous (IV) and oral (PO) formulations, achieves reasonable concentrations even if administered with food, and is less likely to cause photosensitivity. Doxycycline may be an alternative for use in children since it binds calcium to a lesser extent than tetracycline; calcium binding can cause tooth discoloration and bony growth retardation.
MECHANISM OF ACTION
The tetracyclines enter bacterial cells in two ways: passive diffusion and an energy-dependent active transport system, which is probably mediated in a pH-dependent fashion. Once inside the cell, tetracyclines bind reversibly to the 30S ribosomal subunit at a position that blocks the binding of the aminoacyl-tRNA to the acceptor site on the mRNA-ribosome complex. Protein synthesis is ultimately inhibited, leading to a bacteriostatic effect. In contrast to many other antibiotics, tetracyclines are infrequently inactivated biologically or altered chemically by resistant bacteria. Resistance to these agents develops primarily by preventing accumulation of the drug inside the cell, either by decreasing influx or increasing efflux. Once resistance develops to one of the drugs in this class, it is typically conferred to all tetracyclines. However, there are differences in resistance among species of bacteria. Resistance genes to tetracyclines often occur on plasmids or other transferable elements such as transposons. Bacteria carrying a ribosome protection type of resistance gene produce a cytoplasmic protein that interacts with the ribosomes and allows the ribosomes to proceed with protein synthesis even in the presence of high intracellular levels of the drug [4,5].
Botkin describes the chambered nautilus (Nautilus pompilius Linnaeus), one of “the humblest and most obscure creatures” which dwells in the southwestern Pacific. It is “a cryptic creature with nocturnal habits, living in the depths of the ocean, as much as 1,000 feet below the surface, and rarely seen alive by human beings.” Its oldest fossil ancestors date back 420 million years. The nautilus lives in the outermost chamber of its shell. As it grows, it needs a larger protective shield, and the chambers grow in size, the shell “coiling into the convoluted shape of a logarithmic spiral, following a simple but elegant mathematical formula.” Along the opening of the outer chamber of its shell, small deposits of calcium carbonate are laid down in groups of three to five, separated from the others by a ridge or “growth line.” On average there are 30 growth lines per chamber, one for each day in the lunar cycle, “suggesting that a new chamber is put down each lunar month and a new growth line each day.” From this it can be inferred “that the chambered nautilus contains in its shell two clocks: one timed to the sun, the other to the moon.” “Strangely, the number of growth lines per chamber has increased over time,” writes Dr. Botkin. Older fossil shells have only nine growth lines per chamber. Modern shells have thirty. This, in turn, suggests “that the lunar month has grown longer and that the moon used to revolve faster around Earth than it does now.” Ergo, “the moon must have been closer to Earth, since the closer a satellite is to a planet, the faster it must revolve to remain in orbit.” It would seem that the revolution of the moon 420 million years ago took only nine days. Since then there has been a loss of energy from friction of the tides, causing the moon to recede slowly, as it continues to do. This loss of energy from friction increased when the continents emerged about 600 million years ago. “Thus in the chambered nautilus, the solar system, the physical Earth, and life on Earth are linked,” writes Daniel Botkin. An interesting question arises: If the moon’s orbit continues its drift, what might its impact be on inter-tidal life at some future point in time? As Heraclitus said, “All is flux.” Professor Botkin sees in the image of the chambered nautilus, one informed by careful observation and modern science, a model for a new unity in which human beings can live and thrive within a wondrous, unsettling, and dynamic ecosystem. No longer can nature be characterized in terms of a “balance of nature” which assumes that nature, undisturbed by human beings, achieves a steady state or equilibrium, a kind of constancy in terms of maximum biomass and diversity. It is the deconstruction of this old view of nature which takes up most of the book. The Moon In The Nautilus Shell is an expanded and updated version of Botkin’s 1990 landmark book Discordant Harmonies: A New Ecology for the 21st Century, which challenged the idea that nature maintains its equilibrium indefinitely as long as people just leave it alone, since they can only have negative effects on it. Take the case of the Boundary Waters Canoe Area in northern Minnesota, a remarkable place which would meet most people’s conventional idea of wilderness and “could persist with the least direct human intervention.” According to Dr. 
Botkin it has, from the end of the last ice age until the time of European colonization, “passed from the ice and tundra to spruce and jack pine forest.” From there it shifted to paper birch and alder, and then back to spruce, jack pine and white pine, driven by a variable climate. Botkin asks: “Which of these forests represented the natural state?” “If natural means simply before human intervention, then all these habitats could be claimed as natural, contrary to what people really mean and really want,” claims Botkin. “What people want in the Boundary Waters Canoe Area is a wilderness as seen by the voyageurs and a landscape that gives the feeling of being untouched by people.” Nothing wrong with that, but a choice must be made. Botkin offers numerous case histories where failure to appreciate the role of change and disturbance led to bad management decisions, such as the total suppression of fire in forests. This resulted, for instance, in the decline of the giant sequoia trees on the west coast of the United States. However, letting nature take its course, without any human intervention, can also be fatal, as in the case of the elephant herd in Kenya’s Tsavo National Park, in which game managers ignored a population explosion, stood by, and waited for “the attainment of a natural ecological climax.” That elephant population crashed and did not recover for years. Another contributor to this conversation is Emma Marris, a writer for the science journal Nature and author of Rambunctious Garden: Saving Nature in a Post-Wild World. She uses the metaphor of an unruly garden to illustrate the dynamic, changing nature of, well, nature, and the predictably unpredictable role of human beings in it. “We are already running the whole Earth, whether we admit it or not,” asserts Marris. “But from the point of view of a geologist or paleo-ecologist, ecosystems are in a constant dance, as their components compete, react, evolve, migrate, and form new communities,” writes Marris. It may be impossible to restore conditions, say, in an Australian sanctuary to those of 1770 by trapping and killing thousands of introduced rabbits. Yet, it might be possible to manage it for something achievable, such as avoiding the extinction of rare or endangered species. Humans have lived in Australia for 50,000 years. “Aborigines increased the amount of flammable plant material…This, combined with their fire-setting ways, may have changed the dominant species in many parts of the country,” argues this writer. She quotes one authority who describes “virtually all the continent’s ecosystems as being in some sense man-made.” “A consequence of throwing out the ‘pristine wilderness’ ideal is that conservationists, and society at large, now have to formulate alternative goals for conservation,” says Marris. 
“In a nutshell give up romantic notions of a stable Eden, be honest about goals and costs, keep land from mindless development, and try just about everything.” Marris echoes Botkin, who observes, “Nature in the twenty-first century will be a nature that we make; the question is the degree to which this molding will be intentional or unintentional, desirable or undesirable.” Botkin recognizes that abandoning the belief in the constancy of nature is very discomforting, leaving us in “an extreme existential position.” Environmental historian and Bancroft prize winner William Cronon wrote a challenging essay in 1995, “The Trouble with Wilderness; Or, Getting Back to the Wrong Nature,” in which he claimed it was time “to rethink wilderness.” Wilderness is “profoundly a human creation,” and the “flight from history that is very nearly the core of wilderness represents the false hope of an escape from responsibility, the illusion that we can somehow wipe clean the slate of our past and return to the tabula rasa that supposedly existed before we began to leave our mark on the world.” This results in a dualism that sets humanity and nature at opposite poles. “We thereby leave ourselves little hope of discovering what an ethical, sustainable, honorable, human place in nature might actually look like,” wrote Cronon. Still, many resist this admittedly disturbing revisionism. Tulane environmental law professor Oliver Houck demurs on the question of deconstructing nature and ecology. In a 1998 article he declared, “While ecosystems contain humans, human actions are not their measure. The best available measures of ecosystems are representative species that indicate their natural conditions.” “This measure taken, the role of human beings is to manage ecosystems, and themselves, toward that goal,” maintains Houck. The environmental historian Donald Worster worries about a new “era of agnosticism” in which the very idea of the ecosystem or of nature is treated as nothing more than fiction. Is the idea of “some comprehensive order in organic nature” now totally suspect? The famous Jesuit palaeontologist Pierre Teilhard de Chardin, writing in The Phenomenon of Man (1952), said, “The order and the design do not appear except in the whole. The mesh of the universe is the universe itself.” One must recognize great variability and discontinuities in the here and now, in the realm of contingency so to speak, with chance and randomness playing their part within a broader context of probabilities and logic. Even Botkin pulls back from describing this dynamism in nature as chaotic. Chance and randomness, yes. But not chaos. Is this new, dynamic view of nature really disorder, or just a more sophisticated, non-static version of order? There is irony in the image of the moon in the nautilus shell in a book on discontinuities and disequilibrium in nature. Just how random is the interaction of life, tides, lunar orbits and continents rising from the sea? Neither Botkin nor Marris would, nor should they, engage in a discussion of final causes, at least in these books. But the philosopher or theologian might, understandably, be drawn into such a stimulating and important discussion of questions which science can only inform but not answer. This article was originally published by Mercatornet under a Creative Commons License here.
Before the Ch’ing dynasty was overthrown in 1911 and China became a republic, the Chinese emperor was considered the head of state. The ruler, titled Wang (king), was head of state from the Zhou dynasty until the Qin. Thereafter, however, Wang became merely the highest of the noble ranks. The first emperor was Ying Zheng. He was born in 259 B.C. His father was Zhuang Xiang, at that time the king of Qin, one of the seven feudal states of China. In 246 B.C., when Zheng was 13 years old, his father died, and Zheng inherited the throne. Zheng was a ruthless dictator. He tried to control his people and censor their minds by killing scholars and poets, even burning important books of China and books of some past regimes. Zheng proclaimed himself Shi Huangdi (the First August God, or First Emperor). It was Zheng’s desire to have his future generations occupy the throne forever. In those days, the emperor was referred to as the Son of Heaven. Thus, the reigning emperor held enormous power and had the final say on every matter. The emperor’s words were considered to be sacred edicts. The title of the emperor passed from father to son. During the reign of the Han dynasty, the eldest son generally inherited the throne. However, on some occasions, rebel leaders overthrew the emperors. For instance, Zhu Yuan Zhang and Hong Xiu Quan were rebel leaders who ruled with absolute power; Hong Xiu Quan, leader of the Taiping Rebellion, took the title "Heavenly King." When referring to a sovereign in the Western sense of the word, the sovereign’s personal name is used, e.g. Queen Victoria or George V. In China, however, the emperor was referred to as Huang-di-Bi-xia (His Majesty the Emperor) or Dang-jin Huang-shang (The Imperial Highness of the Present Time). Thus, emperors were addressed in the third person. A Chinese emperor was known either by the family name, a temple name, or one of several honorary names.
Ecological Succession and the Lehigh Gap
by Dan R. Kunkle
The following article is from the Summer 2004 issue of Wildlife Activist. Succession is an important ecological concept. It refers to the gradual, predictable changes that occur in the flora and fauna of a given place over time as an ecosystem develops. We are all familiar with succession in an abandoned farm field or lawn. In our part of the continent, the end result of succession is a Temperate Deciduous Forest – a forest consisting primarily of a variety of broad-leaved trees that lose their leaves in winter. This final stage of succession is called the climax community. In the North Woods of Maine, the climax community is a coniferous forest; in Nebraska, it is a grassland community. When a climax forest remains undisturbed for a long period of time, we call it an “old growth” ecosystem. Only a few small stands of old growth remain in Pennsylvania. Ecologists often refer to two kinds of succession: primary and secondary. Primary succession occurs when new land is formed, such as a volcanic island rising out of the ocean or the shifting sand of a barrier island creating new land. Most succession is secondary – occurring in a habitat that has been altered by humans or nature. Examples include the abandoned farm fields noted above, and areas where natural disturbances such as fires, hurricanes, or floods have occurred. Succession has just begun on the degraded areas of the Lehigh Gap Wildlife Refuge. While a natural forest occurred on the slopes of the Kittatinny at Lehigh Gap in the past, those forests had been altered over time by lumbering, charcoaling, and fires set to encourage the growth of huckleberries at the higher elevations. But the most severe impact was that of the air pollution from the zinc smelter. Life on the ridge at places was so completely destroyed that the succession that is beginning is much like primary succession – and primary succession takes a long time. Time is needed to rebuild organic soils and a community of bacteria, fungi, worms, and other invertebrates. The leaders of the Wildlife Center have decided to take the slow, but ecologically sound, approach of restoration by succession on our refuge. We are using native warm-season grasses to start the process. These are the same grasses that followed the glaciers north and began the soil-building process tens of thousands of years ago following the ice ages. We will accelerate succession somewhat by introducing a variety of flowering plants and scattered trees into the mix over time, but eventually, natural succession will take over, and a mature, old growth forest will return to the ridge in a few centuries. We are documenting this succession scientifically and photographically, and will be using the refuge to teach about succession and a variety of other ecological concepts to visitors and school groups. Succession will be a core message, relating to the human and natural history of the gap, to geology and pollution, and to the flora and fauna, past, present, and future.
Pioneers on the Refuge
When you visit the refuge, you will notice some of the early successional species that are pioneering the new growth on the refuge. Following are some profiles of these pioneers. We are introducing some of these species, while the seeds of others are coming in on their own via wind or animals. The major species in our reintroduction efforts are the native warm-season grasses. 
These are all deeply rooted perennial grasses that tend to grow in bunches rather than forming sod, especially in poor growing conditions such as those on the refuge. Their deep roots allow them to grow all summer, and some species reach heights of 8 or 9 feet. They flower and produce seeds in summer or fall. Their ability to colonize barren, mineral soils and avoid taking up heavy metals makes them the plants of choice for our restoration efforts. While eight different species are contained in our experimental planting mix, we feature three of these grasses that have pioneered the refuge on their own. The first, Little Bluestem, was once the most common grass in the mixed grass prairies of the Great Plains and is a common invader of abandoned farm fields in our region. Both bluestems are blue only in the early stages of growth each year. Little bluestem colonized long sections along the side of the LNE (upper) railroad bed. Some of this was lost during construction to allow truck access to the test plot area. It turns a beautiful amber color (a mixture of tan and wine coloring) in the fall. Its seeds are preferred by finches and sparrows, which cling to a stalk in fall or winter, pin the grass to the ground, and strip it of seeds. The seeds are "bearded," meaning they have a feathery attachment that allows them to blow in the wind. A similar species, Broomsedge (Andropogon virginicus), is also found on the refuge. This species tends to grow in large clumps up to two feet in diameter around the Osprey House area and along the lower railroad bed (D&L Trail). Big Bluestem was a major species in the tallgrass prairie and produces clusters of seeds on finger-like seed heads that gave it the common name "turkey foot." Big bluestem is one of the species we are counting on most to create valuable wildlife habitat on our restoration area. Its seeds are sought out by many birds and small mammals. Look for this 4 to 6-foot species on the bank at the Osprey House, where clumps are easy to spot. The third featured grass, a relative of the crop plant sorghum, can grow up to 9 feet tall and produces large, golden brown, flag-like seed heads. It is found growing primarily along the lower railroad bed (D&L Trail) in the center of our refuge. The beautiful seed heads are often seen waving amidst those of Big bluestem in this area of the refuge, where both plants grow to be 6 or 7 feet tall. This was another major species of the tallgrass prairies of the Midwest. It provides excellent food for wildlife, both as forage and from its abundant seeds. The Gray Birch is the next pioneer: the 2 to 3.5 inch triangular-shaped leaves and chalky white bark with black markings of this small (up to 30 foot) tree are familiar to many people. It is one of the first tree species to colonize abandoned farmland or burn areas in the Northeastern U.S., and in Lehigh Gap it is re-colonizing some of the steep slopes below the Devil’s Pulpit area and around the Osprey House. Its 2-3 inch reproductive structures, called catkins, hang like tassels from the twigs and produce seeds that are eaten in winter by a variety of birds. Of concern to the agencies overseeing the Superfund process is the fact that these trees take up the heavy metals, especially zinc, and deposit them in their leaves. This mobilization of metals is of concern because the toxic metals are now available to the food chain through leaf-eating caterpillars. Also, fallen leaves bring the metals to the surface of the ground in an organic layer fed upon by worms and arthropods. 
For this reason, we will not encourage the colonization of test plot areas with gray birch seedlings. Sassafras’s aromatic bark and roots were once used to brew a tea. Typically growing to about 20 to 30 feet, Sassafras is easily recognizable by the three different shapes of leaves present on the same tree. It produces purple berries that are relished by birds. This tree colonizes when birds spread the seeds in their droppings, and it can then form clusters of trees as new trunks grow up from the roots. Groups of Sassafras (and also Black Gum) are found in the most degraded areas of the refuge and stand out as islands among the barren areas. Bigtooth and Quaking Aspen are two different species that have appeared on the refuge. Seen mostly along the D&L Trail and at the Osprey House, we have also spotted some seedlings in the test plot areas. Quaking Aspens are noticeable when there is a gentle breeze, as their leaves quiver (hence the name) at the slightest movement of air. These softwood trees grow rapidly and can reach 60 feet, but often die prematurely. In the riparian zone along the Lehigh River, many young River Birches are growing. Their leaves are similar to those of the Gray Birch, though more rounded, but the peeling, pinkish to reddish-brown bark is the easiest field mark. These trees grow along stream banks and flood plains throughout the eastern U.S.
Invasive Alien Species
When a species is introduced to a new habitat, it frequently dies, but occasionally a species finds its new habitat free of disease and predators and it flourishes. These aliens often crowd out the native species, forming large, dense stands of only the alien. Such an area is useless to most native species of animals. Invasive aliens are a major cause of species becoming endangered in the U.S., second only to habitat loss. While some invasive aliens, such as Zebra Mussels or Gypsy Moths, are animals, many are plants such as Kudzu and Purple Loosestrife. Here are three alien invasive plants that have gained a foothold on the refuge and need to be controlled. Japanese Barberry and Japanese Knotweed are two more that we have found in smaller numbers.
Butterfly Bush (Buddleia davidii)
This ornamental is often planted by gardeners to attract butterflies, which it does, but at Lehigh Gap it has become an invasive species. For nearly two miles, the abandoned rail line that will be the D&L Trail was a solid corridor of butterfly bush. It is also prominent around the Osprey House and is moving up the slopes and into the test plots. It produces lilac-like clusters of flowers that yield huge numbers of seeds, which allow the plant to spread easily. We fear it could take over at the test plots and outcompete the native grasses, so it must be controlled, which is easily done by pulling out the young plants.
Common Reed (Phragmites australis)
A dense stand of Phragmites greets visitors along the entry road to the Osprey House. Stands of this 8-12 foot tall plant with their large plumy flower heads are picturesque, but as the reed overtakes an area, biodiversity is eliminated. Few species survive in or make use of a dense stand of Phragmites. This is one of the more difficult species to control. Small stands of Phragmites also are cropping up at springs and seeps along the D&L Trail and near the Osprey House.
Tree-of-Heaven
This tree, imported from eastern Asia, has pinnately compound leaves like the various sumacs, and grows in profusion at the Osprey House and along the railroad bed near the ponds. 
This tenacious tree spreads easily from root suckers and responds to cutting by sending up new stems that grow as much as ten feet in a year. It is native to eastern Asia and was brought to America as an ornamental tree; here it is often called Tree-of-Heaven or Chinese Sumac. On your next hike through the refuge, take a look at the pioneering species that are reclaiming the barren landscape. Understanding their ecological role will enhance your experience.
Inside XSL-T (3/4) - exploring XML
Attribute sets, variables, and parameters
The xsl:attribute-set element defines a named set of attributes. These attribute sets can be attached to new elements using xsl:element, or copied from element to element using xsl:copy or xsl:use-attribute-sets. The value of the use-attribute-sets attribute is a whitespace-separated list of names of attribute sets, so it is a convenient way for inserting the same set of attributes in many places. A variable is a name that may be bound to a value. The value to which a variable is bound can be an object of any of the types that can be returned by expressions, namely node-sets, booleans, strings, and numbers. There are two elements that can be used to bind variables: xsl:variable and xsl:param. The difference is that the value specified on the xsl:param variable is only a default value for the binding; when the template or stylesheet within which the xsl:param element occurs is invoked, parameters may be passed that are used in place of the default values. A variable-binding element can specify the value of the variable in three alternative ways. If the variable-binding element has a select attribute, then the value of the attribute must be an expression and the value of the variable is the object that results from evaluating the expression. In this case, the content must be empty. If the variable-binding element does not have a select attribute and has non-empty content (i.e. the variable-binding element has one or more child nodes), then the content of the variable-binding element specifies the value. The content of the variable-binding element is a template, which is instantiated to give the value of the variable. If the variable-binding element has empty content and does not have a select attribute, then the value of the variable is an empty string.
Templates
Templates are the main mechanism for transforming the source into the result document. They consist of two parts, an XPath expression that is supposed to match elements in the source tree, and a fragment that gets inserted into the result document by creating new elements and copying and modifying elements from the source.
Matching source and result elements: XPath
The primary purpose of XPath is to address parts of an XML [XML] document. Supporting this, it also provides basic facilities for manipulation of strings, numbers and booleans. XPath gets its name from its use of a path notation as in URLs for navigating through the hierarchical structure of an XML document, and a natural subset can be used for matching (testing whether or not a node matches a pattern) in XSLT. Location paths select one node or a set of nodes in the document tree. They exist in normal and abbreviated syntax, where the latter mimics directory navigation behavior. 
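Before turning to location paths, here is a minimal stylesheet sketch that ties the attribute-set, variable, and parameter mechanics above together. It is illustrative only: the attribute-set name warning-style, the parameter heading, the variable para-count, and the HTML-like output elements are hypothetical choices, while the para elements and their type attribute follow the examples used elsewhere in this article.

<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- A named set of attributes that several result elements can share -->
  <xsl:attribute-set name="warning-style">
    <xsl:attribute name="class">warning</xsl:attribute>
    <xsl:attribute name="style">color:red</xsl:attribute>
  </xsl:attribute-set>

  <!-- A top-level parameter: callers may pass a value to replace the default -->
  <xsl:param name="heading" select="'Release notes'"/>

  <!-- A variable bound via its select attribute (its content must then be empty) -->
  <xsl:variable name="para-count" select="count(//para)"/>

  <xsl:template match="/">
    <!-- xsl:element with use-attribute-sets attaches the named attribute set -->
    <xsl:element name="div" use-attribute-sets="warning-style">
      <h1><xsl:value-of select="$heading"/></h1>
      <p>This document contains <xsl:value-of select="$para-count"/> paragraphs.</p>
      <xsl:apply-templates select="//para[@type='warning']"/>
    </xsl:element>
  </xsl:template>

  <!-- A literal result element pulls in the same set via xsl:use-attribute-sets -->
  <xsl:template match="para">
    <p xsl:use-attribute-sets="warning-style">
      <xsl:value-of select="."/>
    </p>
  </xsl:template>

</xsl:stylesheet>

Note how the two uses differ: on xsl:element the attribute is written plainly as use-attribute-sets, while on a literal result element it must be written as xsl:use-attribute-sets so the processor does not treat it as an ordinary output attribute.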
Here are some examples of location paths using abbreviated syntax:
- /doc/chapter[3]/section[4] selects the fourth section of the third chapter of the doc
- //section/para selects all the para elements in the same document as the context node that have a section parent
- ../@lang selects the lang attribute of the parent of the context node
- para[@type="warning"] selects all para children of the context node that have a type attribute with value warning
- para[@type="warning"][5] selects the fifth para child of the context node that has a type attribute with value warning
- para[5][@type="warning"] selects the fifth para child of the context node if that child has a type attribute with value warning
- chapter[title="Introduction"] selects the chapter children of the context node that have one or more title children with string-value equal to Introduction
- chapter[title] selects the chapter children of the context node that have one or more title children
- para[@lang and @annotation] selects all the para children of the context node that have both a lang and an annotation attribute
XPath supports the data types node-sets, booleans, numbers, and strings. Node-set functions include last(), position(), count(), name(), id() and key(). Sub-string and concatenation functions are included for strings, as well as the expected basic boolean operations and numerical calculations. That way complex matching expressions can be formed; for details, see the XPath core library. Finally, something about creating result elements, and the inevitable conclusion.
Created: Aug 13, 2000
Revised: Aug 13, 2000
Collagen, the most abundant protein in the human body, serves as an important component in structural tissues such as the support structure underneath the skin. However, free radicals can attack collagen and weaken this structure, causing wrinkles and sagging. Read on to learn about the types of collagen, its partner protein keratin, and how free radicals damage both.
29 types of collagen exist inside the body. However, over 90% of the body's collagen consists of types I, II, III, IV, or V.
- Collagen I occurs in the skin, veins, ligaments, organs, and bones.
- Collagen II makes up most of cartilage.
- Collagen III comprises reticular fibers and is commonly found with type I.
- Collagen IV forms the base of cell membranes.
- Collagen V is located in cell surfaces and the hair.
To maintain skin's strength, flexibility, and resilience, collagen works together with keratin. Keratin, a strong protein, serves as a major component in skin, hair, nails, and teeth. Depending on the levels of amino acids that form keratin, its texture can vary from soft (as in skin) to hard (as in teeth). Keratin is formed by keratinocytes, living cells that make up a large part of the skin (as well as the hair and nails). These cells push upwards, eventually dying as newer cells form below. If these dead cells are in good condition, they serve as an insulating layer to protect the new keratin below them. However, free radicals can damage both keratin and collagen. Free radicals are small particles that have at least one unpaired electron, which makes the particle unstable. In an attempt to stabilize itself, the particle takes electrons from nearby substances. Now the free radical is stable, but the other substance becomes a free radical. Since collagen and keratin are abundant in the skin, they tend to be the nearby substances that free radicals attack. If a free radical takes an electron from a protein in a strand of collagen, it changes the chemical structure of the collagen at that specific point. This causes a break in the collagen strand, resulting in damage. Once a strand of collagen has accumulated multiple points of damage over the years, that strand loses its elasticity, and the skin begins to sag. This damage is progressive because, as each free radical stabilizes itself, it creates at least one new free radical. Aging is the progression of years of free radical damage. Keratin cells are exposed 5,000 times a day to free radicals. While free radicals result from normal biological processes, such as energy production, they also result from environmental stresses. These stresses include:
- Unprotected sun exposure
- Bad cosmetics
To slow the process of aging, reduce your exposure to these unnecessary causes of free radical activity. Even if you address just one source of free radical activity, the progression of damage to collagen, and therefore skin aging, will slow. Additionally, you can incorporate antioxidants into your diet, skin care, and supplement routine. Antioxidants are substances that can donate an electron to a free radical, stabilizing it without themselves becoming unstable. Because of this, antioxidants are important in slowing the aging process. Supplements are a quick way to consume antioxidants for beautiful skin. They require very little effort in comparison to the powerful antioxidant benefit they deliver to your cells to keep collagen from breaking down. Perricone MD's Skin & Total Body supplement fights free radical damage with its potent active ingredients to protect collagen and therefore diminish the appearance of wrinkles.
Phase Space: a Framework for Statistics
Statistics involves the counting of states, and the state of a classical particle is completely specified by the measurement of its position and momentum. If we know the six quantities x, y, z, px, py, pz, then we know its state. It is often convenient in statistics to imagine a six-dimensional space composed of the six position and momentum coordinates. It is conventionally called "phase space". The counting tasks can then be visualized in a geometrical framework where each point in phase space corresponds to a particular position and momentum. That is, each point in phase space represents a unique state of the particle. The state of a system of particles corresponds to a certain distribution of points in phase space. The counting of the number of states available to a particle amounts to determining the available volume in phase space. One might presume that for a continuous phase space, any finite volume would contain an infinite number of states. But the uncertainty principle tells us that we cannot simultaneously know both the position and momentum, so we cannot really say that a particle is at a mathematical point in phase space. So when we contemplate an element of "volume" in phase space, the smallest "cell" in phase space which we can consider is constrained by the uncertainty principle to be

Δx Δy Δz Δpx Δpy Δpz = h³
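As a rough illustration of this counting (not part of the original text), the sketch below estimates how many cells of volume h³ fit inside a given region of phase space; the box size and momentum cutoff are arbitrary example values.

import math

h = 6.626e-34                                  # Planck's constant, J*s

# Arbitrary example: a particle confined to a 1 cm^3 box with momentum
# bounded by the thermal momentum scale of a proton at 300 K.
V_position = (1.0e-2) ** 3                     # m^3
m, kT = 1.67e-27, 1.38e-23 * 300.0             # kg, J
p_max = math.sqrt(2.0 * m * kT)                # rough momentum scale, kg*m/s
V_momentum = (4.0 / 3.0) * math.pi * p_max**3  # momentum-space sphere

# Each state occupies one cell of volume h^3 in the six-dimensional phase space
n_states = V_position * V_momentum / h**3
print(f"approximate number of available states: {n_states:.2e}")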
by Jonathan Brisendine – Field & Lab Archaeologist
During the excavation process, one of the common artifacts found on historic sites is glass. So what does this tell us archaeologically, other than that there was glass here? We classify glass into two major categories: flat and curved. Flat glass usually pertains to window glass, while curved or hollow glass usually pertains to, you guessed it, bottles. Once we collect all the artifacts in the field, they are brought back to the lab where they are cleaned and then cataloged. From this we can get an accurate count of each type of glass that was found. Because of the detailed records we keep of the location of our testing units, both the vertical and horizontal location of each find is known. With this information we can draw meaningful interpretations from even the most ubiquitous artifacts. Seen below is an imagined example of a historic foundation and the results of the statistical mapping that we could produce once we have gathered information on the type and frequency of glass after cataloguing. Blue indicates the locations where flat glass was found, while green indicates where the curved, hollow glass was found. What conclusions would you draw from this information? First note the brick structure. Think about structures and where flat glass would most likely be found and why. Also, think about areas of activity inside such a structure. Where would a person store and use hollow glass such as bottles? To see how we might interpret the location of the artifacts, see the image below.
Lab discovery gives glimpse of conditions found on other planets Scientists have recreated an elusive form of the material that makes up much of the giant planets in our solar system, and the sun. Experiments have given a glimpse of a previously unseen form of hydrogen that exists only at extremely high pressures - more than 3 million times that of Earth's atmosphere. Hydrogen - which is among the most abundant elements in the Universe - is thought to be found in this high-pressure form in the interiors of Jupiter and Saturn. Researchers around the world have been trying for years to create this form of the element, known as the metallic state, which is considered to be the holy grail of this field of physics. It is believed that this form of hydrogen makes up most of the interiors of Jupiter and Saturn. The metallic and atomic form of hydrogen, formed at elevated pressures, was first theorised to exist 80 years ago. Scientists have tried to confirm this in lab experiments spanning the past four decades, without success. In this latest study from a team of physicists at the University of Edinburgh, researchers used a pair of diamonds to squeeze hydrogen molecules to record pressures, while analysing their behaviour. They found that at pressures equivalent to 3.25 million times that of Earth's atmosphere, hydrogen entered a new solid phase - named phase V - and started to show some interesting and unusual properties. Its molecules began to separate into single atoms, while the atoms' electrons began to behave like those of a metal. The team says that the newly found phase is only the beginning of the molecular separation and that still higher pressures are needed to create the pure atomic and metallic state predicted by theory. The study, published in Nature, was supported by a Leadership Fellowship from the Engineering and Physical Sciences Research Council. Professor Eugene Gregoryanz, of the University of Edinburgh's School of Physics and Astronomy, who led the research, said: "The past 30 years of the high-pressure research saw numerous claims of the creation of metallic hydrogen in the laboratory, but all these claims were later disproved. "Our study presents the first experimental evidence that hydrogen could behave as predicted, although at much higher pressures than previously thought. The finding will help to advance the fundamental and planetary sciences."
This Day in World History September 19, 1991 5,000-year-old mummy found in Alps While hiking through the Alps on the Italian-Austrian border, Erika and Helmut Simon, a German couple, spotted a brown shape in a watery gully below them. Scrambling down to investigate, they realized that they were looking at a human head and shoulder. Assuming the body was a climber who had been killed in a fall, they reported their find to authorities. The body was removed with a jackhammer and tourists made off with some of its clothing and the tools that were found with it. In fact, though, the Simons had stumbled upon an amazing find. The “Iceman,” as he was quickly dubbed, was a mummified corpse from Europe’s prehistory, about 5,300 years old. A wealth of information has been gleaned about what his life during the late Neolithic period (8000-3000 BCE) was like. The Iceman, now renamed Ötzi, after the location where he was found, was about 5 feet 5 inches tall and weighed about 110 pounds. Scientists studied his clothes and bearskin-bottomed shoes; his copper axe, flint knife, and arrows; and the container holding embers wrapped in leaves, which they concluded was a fire-starting kit. They determined his age (about mid-forties), identified what he ate (wheat, barley, other plants, goat, and deer), and diagnosed his ailments: x-rays, for example, indicated he suffered from arthritis. Then, in 2007, a chance discovery during a scan of the body revealed what may have been the cause of his death: he had been murdered. A tiny arrowhead was lodged beneath one of Ötzi’s shoulder blades, where it had severed an artery. When he was struck, he pitched forward onto a granite slab and bled to death. At some point, the body was covered by ice, preserving it—until, a few thousand years later, when because of a warming world enough ice had melted to reveal him once again.
The city of Tiwanaku, capital of a powerful pre-Hispanic empire that dominated a large area of the southern Andes and beyond, reached its apogee between 500 and 900 AD. Its monumental remains testify to the cultural and political significance of this civilization, which is distinct from any of the other pre-Hispanic empires of the Americas. Tiwanaku is located near the southern shores of Lake Titicaca on the Altiplano, at an altitude of 3,850 m (12,630 ft), in the Province of Ingavi, Department of La Paz. Most of the ancient city, which was largely built from adobe, has been overlaid by the modern town. However, the monumental stone buildings of the ceremonial center survive in the protected archaeological zones. Tiwanaku was the capital of a powerful empire that lasted several centuries, and it was characterized by the use of new technologies and materials in its architecture, pottery, textiles, metalwork, and basket-making. It was an epicenter of knowledge ('saberes'), and it expanded its sphere of influence to the inter-Andean valleys and the coast. Its politics and ideology had a religious character, and its sphere of influence incorporated different ethnic groups living in different regions. This multiethnic character takes form in the stylistic and iconographic diversity of its archaeological materials. The monumental buildings of this administrative and religious center bear witness to the economic and political force of the capital city and of its empire. The public and religious space of the city is shaped by a series of architectural structures that correspond to different periods of cultural accretion: the Semi-subterranean Temple, the Kalasasaya Temple, the Akapana Pyramid, and the Pumapunku Pyramid. In addition, the political and administrative area is represented by structures such as the Palace of Putuni and Kantatallita. This architectural complex reflects the complex political structure of the period and its strong religious nature. The most imposing monument at Tiwanaku is the Pyramid of Akapana. It is a pyramid originally with seven superimposed platforms with stone retaining walls rising to a height of over 18 m (59 ft). Only the lowest of these and part of one of the intermediate walls survive intact. Investigations have shown that it was originally clad in sandstone and andesite and surmounted by a temple. It is surrounded by very well-preserved drainage canals. The walls of the small semi-subterranean temple (Templete) are made up of 48 pillars in red sandstone. There are many carved stone heads set into the walls, doubtless symbolizing an earlier practice of exposing the severed heads of defeated enemies in the temple. To the north of the Akapana is the Kalasasaya, a large rectangular open temple, believed to have been used as an observatory. It is entered by a flight of seven steps in the center of the eastern wall. The interior contains two carved monoliths and the monumental Gate of the Sun, one of the most important specimens of the art of Tiwanaku. It was made from a single slab of andesite cut to form a large doorway with niches (hornacinas) on either side. Above the doorway is an elaborate bas-relief frieze depicting a central deity, standing on a stepped platform, wearing an elaborate head-dress, and holding a staff in each hand. The deity is flanked by rows of anthropomorphic birds, and along the bottom of the panel there is a series of human faces. The ensemble has been interpreted as an agricultural calendar.
The settlers of this city perfected the technology of carving and polishing different stone materials for construction, which, together with their architectural technology, enriched the monumental spaces. The economic base of the city is evidenced by its agricultural fields, known locally as Sukakollos, characterized by an irrigation technology that allowed the different crops to adapt easily to the climatic conditions. The artificial terraces constitute an important contribution to agriculture and made possible a sustained form of farming and, consequently, the cultural evolution of the Tiwanaku Empire. These innovations were subsequently taken up by succeeding civilizations and were extended as far as Cuzco. The social dynamics of this highland-plateau population were sustained by strong religious components, which are expressed in a diverse iconography of stylized zoomorphic and anthropomorphic images. The political and ideological power, represented in a variety of material media, extended to the borders of the empire, reaching the valley populations and more remote coastal areas. Many towns and colonies were set up in the vast region under Tiwanaku rule. The political dominance of Tiwanaku began to decline in the 11th century, and its empire collapsed in the first half of the 12th century.
Coroutines are special functions that differ from usual ones in four aspects:
- They expose several entry points. An entry point is the line of code inside the function where it takes control of the execution.
- They can receive a different input at every entry point while the coroutine is executing.
- They can return different outputs in response to the different entry points.
- They can save control state between entry-point calls.
Python implements coroutines starting in Python 2.5 by reusing the generator syntax, as defined in PEP 342 - Coroutines via Enhanced Generators. Generator syntax is defined in PEP 255 - Simple generators. I briefly covered the generator functionality in a previous post. The basic usage of generators is creating an iterator over a data source. For example, the function in the snippet below returns a generator that iterates from a specific number down to 0, decreasing by one unit in every iteration. In the example, the keyword yield is used to return a new value in every iteration while the generator is consumed. It's interesting to note that a generator can be consumed only once, as opposed to a list, which can be consumed/iterated as many times as needed. A generator is considered exhausted once it has been consumed the first time. PEP 342 takes advantage of the keyword yield for pointing out entry points where the function/coroutine will receive inputs while being executed. Let's see a very simple example of a coroutine that concatenates every string inserted by the user from the command line. What is really interesting is how the coroutine execution is suspended and resumed by means of the yield keyword, allowing the program flow to be moved from the coroutine to the external program and back to it. As a side project I have implemented a tiny library called washington that exposes a chainable API for building a stream of coroutines. I had a lot of fun digging into the implementation, even though the real-world usage of the library is expected to be very limited.
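The code snippets referred to above are not reproduced in this excerpt. The sketches below are plausible reconstructions of what the text describes, a countdown generator and a coroutine that concatenates the strings sent to it; they are not the author's original code, and the names countdown and concatenate are invented here.

def countdown(n):
    """Generator that yields from n down to 0, decreasing by one unit each iteration."""
    while n >= 0:
        yield n
        n -= 1

for value in countdown(3):
    print(value)              # 3, 2, 1, 0


def concatenate():
    """Coroutine that concatenates every string sent to it."""
    result = ""
    while True:
        text = yield result   # entry point: receives input, hands back the state so far
        result += text

coro = concatenate()
next(coro)                    # prime the coroutine up to its first yield
print(coro.send("hello "))    # hello
print(coro.send("world"))     # hello world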
In the ancient Egyptian period, scrolls were made from materials such as papyrus, a paper-like material extracted from the papyrus plant. These scrolls were used for writing, drawing, and painting. The purpose of such manuscripts was either to record all sorts of information or to serve as ornamentation. During the ancient Egyptian period, hieroglyphs were the common pictographic style of writing, used on parchment-based scrolls, on tombs, and in religious texts. The scrolls used in the early Egyptian era were divided into pages, which were at times attached at the edges. To use a scroll, it had to be held so that one sheet of paper was exposed for writing or reading, while the other pages remained securely wound on each side of the sheet in use. Scrolls differed in the manner of their use: a parchment designed for repeated use could be kept as a whole roll and used over a long period. The rolls were stored as single windings of paper, or the manuscripts had wooden rollers at each end. The rollers were used very strictly. One side of the scroll was marked with horizontal lines, which allowed the user to write in a precise manner, and the side on which the lines were drawn was the same for all scrolls. Scrolls allowed a continuous curve, which prevented any specific area of the paper from being emphasized during use. The Rhind papyrus is a scroll from this time that records Egyptian mathematical tables and problems.
A group of scientists has put the world on alert that a massive solar flare could happen within the next two years, one that could harm power grids, communications, and satellites around the world. The scientists say that the risk of a massive flare that could harm systems on Earth increases as the sun reaches the peak of its roughly 11-year activity cycle. The scientists say "governments are taking it very seriously." According to scientist Mike Hapgood, who specializes in space weather at the Rutherford Appleton Laboratory, solar storms are more commonly being placed on national risk registers used for disaster planning, along with events such as tsunamis and volcanic eruptions. Hapgood warns that while solar flares are rare, when they happen the consequences on Earth could be catastrophic. Magnetically charged plasma thrown from the surface of the sun can have a significant impact on Earth. The chance of a massive solar storm is about 12% for every decade. According to the scientists, the last major solar storm was over 150 years ago, and the odds say that a massive solar storm occurs approximately once in every 100 years. The fear is that these massive solar storms could melt transformers within national power grids, destroy or damage satellites, knock out radio communications, and more. The largest solar storm ever recorded happened in 1859. British astronomer Richard Carrington observed a large solar eruption, and the geomagnetic storms caused by the eruption took 17 hours to reach the Earth. According to reports from 1859, the solar storm was so massive that the aurora borealis was seen as far south as the Caribbean. Had such an event happened in modern times with satellites in orbit, the consequences could have been disastrous.
Infrared (IR) light is electromagnetic radiation with a wavelength longer than that of visible light, measured from the nominal edge of visible red light at 0.74 micrometres (µm) and extending conventionally to 300 µm. These wavelengths correspond to a frequency range of approximately 1 to 400 THz, and include most of the thermal radiation emitted by objects near room temperature. Microscopically, IR light is typically emitted or absorbed by molecules when they change their rotational-vibrational movements. Sunlight at zenith provides an irradiance of just over 1 kilowatt per square meter at sea level. Of this energy, 527 watts is infrared radiation, 445 watts is visible light, and 32 watts is ultraviolet radiation.
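As a quick check of the quoted ranges (an illustrative calculation, not part of the original text), the wavelength bounds convert to frequencies via c = λν:

c = 2.998e8                       # speed of light, m/s

for wavelength_um in (0.74, 300.0):
    wavelength_m = wavelength_um * 1e-6
    freq_thz = c / wavelength_m / 1e12
    print(f"{wavelength_um} um -> {freq_thz:.0f} THz")

# 0.74 um -> ~405 THz and 300 um -> ~1 THz, roughly the 1-400 THz range quoted above.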
Long-distance migration: evolution and determinants
Long-distance migration has evolved in many organisms moving through different media and using various modes of locomotion and transport. Migration continues to evolve or become suppressed, as shown by ongoing dynamic and rapid changes of migration patterns. This great evolutionary flexibility may seem surprising for such a complex attribute as migration. Even if migration in most cases has evolved basically as a strategy to maximise fitness in a seasonal environment, its occurrence and extent depend on a multitude of factors. We give a brief overview of different factors (e.g. physical, geographical, historical, ecological) likely to facilitate and/or constrain the evolution of long-distance migration and discuss how they are likely to affect migration. The basic driving forces for migration are ecological and biogeographic factors like seasonality, spatiotemporal distributions of resources, habitats, predation and competition. The benefit of increased resource availability will be balanced by costs associated with the migratory process in terms of time (incl. losses of prior-occupancy advantages), energy and mortality (incl. increased exposure to parasites). Furthermore, migration requires genetic instructions (allowing substantial room for learning in some of the traits) about timing, duration and distance of migration as well as about behavioural and physiological adaptations (fuelling, organ flexibility, locomotion, use of environmental transport, etc.) and control of orientation and navigation. To what degree these costs and requirements put constraints on migration often depends on body size according to different scaling relationships. From this exposé it is clear that research on migration warrants a multitude of techniques and approaches for as complete an understanding as possible of a very complex evolutionary syndrome. In addition, we also present examples of migratory distances in a variety of taxa.
- Biology and Life Sciences
- ISSN: 0030-1299
Photo Credit: Crystal McMichael Pre-Columbian Human Impacts on Amazonian Rainforest Ecosystems How natural are the Amazonian rainforest ecosystems? A growing case is being made among anthropologists and archaeologists that prior to European contact in 1492, native people manipulated much, perhaps most, of Amazonia. If fire was used across the Amazon basin to clear land for agriculture then Amazonian rainforest ecosystems that ecologists have assumed were mature may in fact be only one-to-several generations removed from intensive management. Fire does not occur naturally in the western Amazon and so charcoal in the soil is a sure sign of human activity. Given that the Amazonian rainforest ecosystems support Earth's largest rainforest and are home to unparalleled biodiversity, understanding the extent to which wildlife and people have interacted in the past is vital for effective planning and management. A further aspect of this debate is that if much of Amazonia is truly the product of disturbance, the forest must be considered to be relatively young and is probably not at equilibrium with respect to carbon cycling. To test the hypothesis of widespread disturbance, this project will conduct the first systematic survey of soils in Amazonian rainforest ecosystems for charcoal. Fires in Amazonia are almost always human-induced, and each burn leaves ash and charcoal that become incorporated into soil. Over 400 soil pits have been sampled on transects across western Amazonia, to determine the distribution and age of buried charcoal. Prior soil descriptions also will be used to determine where other scientists have located charcoal and data gathered in this project will be compared with a new model for pre-Columbian settlement of Amazonia. Through these analyses, the collaborating team from Florida Tech, Wake Forest, University of Florida, The National History Museum and Guarulhos University, Brazil, hope to inject real data into the policy arena of Amazonian development and conservation. Other indicators of human presence in Amazonian rainforest ecosystems will also be mapped such as the presence of pot fragments or black earths (soils created by mixing charcoal into the soil). The study will be based on a randomized design, but care will be taken to collect data from areas known to have supported pre-Columbian populations as well as those where there is nothing known of past occupation. The first results from this study of sparse human occupation in Amazonian rainforest ecosystems have now been published in Science.
Revealing the secrets of sand Brandeis scientist receives $1 million to explore the behaviors of macroscopic assemblies Next time you’re at the beach, scoop up a handful of sand and watch how it flows like water through your fingers. Then, look at the rolling sand dunes beyond the towels and umbrellas. The sand in your hand and the sand in the dunes is the same material. But one acts like a liquid and the other, a solid. How is that possible? The truth is, scientists don’t know. Physicists use a fundamental theoretical framework to predict how a large collection of small molecules can flow as a liquid or freeze into a solid. Yet they have little understanding of the forces that create sand dunes or trigger an avalanche. Scientists lack fundamental concepts to describe how assemblies of macroscopic objects — sand, snow, grains, rocks or jellybeans — form and respond to stress. Bulbul Chakraborty, the Enid and Nate Ancell Professor of Physics, hopes to change that. Chakraborty, along with Corey O’Hern from Yale University and Robert Behringer from Duke University, have received a three-year, $1 million grant from the W.M. Keck Foundation to develop the first predictive theoretical framework to characterize behaviors of macroscopic assemblies. The team will focus on jamming, a common — and sometimes deadly — behavior of macroscopic assemblies, from traffic snarls to corn silo bottlenecks. Even jellybeans in a bottle turned upside down flow like water until they jam. “The transition from the flowing to the jammed state is not understood well enough to be predictable,” Chakraborty says. “The process of jamming and the properties of jammed states are at the heart of all granular science. Jamming affects all industrial granular processing and has a huge impact on natural events.” To transport large quantities of grain, for example, agricultural engineers rely on machinery that keeps the grain flowing; otherwise, jammed grain can lead to a burst silo and millions of dollars in lost crop. On the other hand, deadly avalanches can result when snow fails to jam. “This research has broad implications for energy, such as in transporting coal; for agriculture, and for designing for avalanche and earthquake prevention systems,” Chakraborty says. Keep that in mind, next time you’re at the beach.
Whether your child is a toddler or already school-aged, it is never too early or late to invest time in building literacy skills, especially during these hot summer months! Most children enjoy being read to, and early exposure to reading provides opportunities to learn about language, grow in imagination and social-emotional maturity. Words are made up of individual sounds, and children learn the relationship between sounds, letters, and words before being able to decode printed words. This phonological awareness is a strong predictor of literacy. Children with auditory processing challenges often demonstrate delays with both language processing and literacy development, as both visual and auditory systems are accessed for successful reading. Reading aloud strengthens these neurological pathways and is still the best way to put a child on the road to becoming a reader. Some easy tips while reading to your child: - Young children are active and may need to move while you read to them. Encourage them to help flip pages, point to / name pictures, fill in rhymes! - Instead of reading exact words on a page, try adding in sound effects, movement, and adjusting your language so that it meets your child’s comprehension level. - Take turns choosing books and allow repetition, thereby building increased participation and memory skills. The non-profit, Reading Rockets, produces several great resources supporting families around promoting early literacy in children. A few highlights include: - Top 10 Resources on Reading Together (includes specific tips for parents of infant children to those with 3rd graders) - Activities to Encourage Speech and Language Development (ASHA resource) - Launching Young Readers series: Sounds & Symbols (video) -Iris Lee, MS, CCC-SLP
The International Relations Council is pleased to offer supplemental global education resources covering a variety of international topics currently taught for the use of teachers and students. It is important to keep in mind that the Global Education Resources should serve as a starting point for your global education. This is not an all-encompassing list of activities; instead, it provides you and other students with enough information and resource to begin learning about the world. As an apolitical, nonpartisan organization, the International Relations Council does not endorse any of the organizations, associations, universities, bodies, or websites cited in this guide. This page contains resources for students in grades K-5. Simply click the resource title to be linked to the source. If there’s a resource you don’t see here, or if you have additional resources you would like to share or request, please click here. Key to Resource Standards Standard 1: Choices and Consequences | Standard 2: Rights and Responsibilities Standard 3: Culture, Values, & Diversity | Standard 4: Change and Continuity Standard 5: Dynamic Relationships The Changing Face of America: Two activities where students interpret tables and census data to understand the implications of changing patterns in immigration at the national, state, and local levels. (4) Objectives: Understand historical and contemporary patterns of immigration to the United States, observe immigration markers in cultural landscapes, and identify trends in population changes in the state and community Follow the Food: Various strategies educators can use to design lesson plans that use cooking as an instructional tool to develop young people’s understanding of people and culture. (3) Objectives: Expand student’s worldview which can include reinforcing geographic concepts, connecting food’s origins to history, exploring the impact of food on rituals and religions, describing cultural characteristics of your family and class members Holiday Culture Collage: A project where students create a collage that showcases their family’s culture and how they spend the holidays. At the end, they will give a brief oral presentation to the class. (3) Objectives: Explore different cultures, increase oral communication skills Multicultural Restaurant Chef: Students role-play as prospective chefs of a new multicultural restaurant. They will research the foods of multiple cultures and create an advertisement depicting their favorite foods. (3) Objectives: Improve research skills, learn about different cultures Why People Move: Students take a migration poll and discuss a new place they might like to move to and why people might move to the area. (1) Objectives: Explain the pros and cons of moving to a new place, explain why people move, define migration A World at Peace: Students will brainstorm the basic rights of people everywhere by first looking at the Bill of Rights, then they will look at the Universal Declaration of Human Rights and UNICEF’s Committee on the Rights of the Child. Finally, they will create a multi-media creative writing assignment imagining a world at peace. (2) Objectives: Know how and why people compete for control of Earth’s surface, understand factors that contribute to cooperation or conflict, understand efforts to improve political and social conditions, know ways in which conflicts about diversity can be resolved in a peaceful manner that respects individuals rights and promotes the common good
Although cold weather still happens in a warming climate, the winter season is less cold than it was half a century ago and is the fastest-warming season in a majority of U.S. states. An analysis of winter temperatures indicates that 98% (236) of 242 cities had an increase in average winter temperatures since 1970, with the largest increases around the Great Lakes and Northeast region. Plus, 74% (179) recorded at least seven additional days of above-normal temperatures between 1970 and 2020. Akin to a warming fall season, a warmer winter can have negative impacts. For instance, less snow accumulation in certain areas can threaten skiing, snowmobiling, and other winter sports-based economies. Also, the winter's chill period is shrinking, hurting the development of fruit trees like peaches and cherries.
Your toddler may show every sign of good eyesight including the ability to see objects in the distance, however that doesn't necessarily mean that he or she doesn't have a vision problem. Amblyopia is one common eye condition that is often hidden behind the appearance of good eyesight. Also known as "lazy eye" it usually occurs when the brain begins to ignore the signals sent by one eye, often because that eye is weaker and doesn't focus properly. Sometimes it can occur in both eyes, in which case it's called bilateral amblyopia. This eye condition is especially common in preemies, and tends to run in families as well, so it's important to provide your eye doctor with a complete medical and family history. There are several factors that can cause amblyopia to develop. These include: - high nearsightedness or farsightedness, - uneven eye development as an infant, - congenital cataract (clouding of the lens of the eye), - strabismus (where the eyes are misaligned or "cross-eyed") However in many cases of amblyopia there may be no obvious visible structural differences in the eye. In addition to the fact that the eyes may look normal, vision often appears fine as the brain is able to compensate for the weaker eye by favoring the stronger one. Because of this, many children live with their eye condition for years before it is diagnosed. Unfortunately, as a person ages, the brain loses some of its plasticity (how easy it is to train the brain to develop new skills), making it much harder - if not impossible - to treat amblyopia in older children and adults. That's why it's so important for infants and young children to have a thorough eye exam. Are There Any Signs of Amblyopia? If you notice your child appears cross-eyed, that would be an indication that it's time for a comprehensive eye exam to screen for strabismus and amblyopia development. Preschoolers with amblyopia sometimes show signs of unusual posture when playing, such as head tilting, clumsiness or viewing things abnormally close. However, often there are no signs or symptoms. The child typically does not complain, as he or she does not know what normal vision should look like. Sometimes the condition is picked up once children begin reading if have difficulty focusing on the close text. The school nurse may suggest an eye exam to confirm or rule out amblyopia following a standard vision test on each eye, though it might be possible to pass a vision screening test and still have amblyopia. Only an eye doctor can make a definitive diagnosis of the eye condition. So How Do You Know If or When To Book a Pediatric Eye Exam? Comprehensive eye and vision exams should be performed on children at an early age. That way, hidden eye conditions would be diagnosed while they're still more easily treatable. An eye exam is recommended at 6 months of age and then again at 3 years old and before entering first grade. The eye doctor may need to use eye drops to dilate the pupils to confirm a child's true refractive error and diagnose an eye condition such as amblyopia. Treatment for Amblyopia Glasses alone will not completely correct vision with amblyopia in most cases, because the brain has learned to process images from the weak eye or eyes as blurred images, and ignore them. There are several non-surgical treatment options for amblyopia. While your child may never achieve 20/20 vision as an outcome of the treatment and may need some prescription glasses or contact lenses, there are options that can significantly improve visual acuity. 
Patch or Drops In order to improve vision, one needs to retrain the brain to receive a clear image from the weak eye or eyes. In the case of unilateral amblyopia (one eye is weaker than the other), this usually involves treating the normal eye with a patch or drops to force the brain to depend on the weak eye. This re-establishes the eye-brain connection with the weaker one and strengthens vision in that eye. If a child has bilateral amblyopia, treatment involves a regimen of constantly wearing glasses and/or contact lenses with continual observation over time. Your eye doctor will prescribe the number of waking hours that patching is needed based on the visual acuity in your child's weak eye; however, the periods of time that you chose to enforce wearing the patch may be flexible. During patching the child typically does a fun activity requiring hand eye coordination to stimulate visual development (such as a favorite video game, puzzle, maze etc) as passive activity is not as effective. The earlier treatment starts, the better the chances are of stopping or reversing the negative patterns formed in the brain that harm vision. Amblyopia treatment with patches or drops may be minimally effective in improving vision as late as the early teen years (up to age 14) but better results are seen in younger patients. Many optometrists recommend vision therapy to train the eyes using exercises that strengthen the eye-brain connection. While success rates tend to be better in children, optometrists have also seen improvements using this occupational therapy type program to treat amblyopia in adults. The key to improvement through any non-surgical treatment for amblyopia is compliance. Vision therapy exercises must be practiced on a regular basis. Children that are using glasses or contact lenses for treatment, must wear them consistently. Your eye doctor will recommend the schedule of the patching, drops, or vision therapy eye exercise and the best course of treatment. Amblyopia: Take-home Message Even if your child is not showing any signs of vision problems, and especially if they are, it is important to have an eye examination with an eye doctor as soon as possible, and on a regular basis. While the eyes are still young and developing, diagnosis and treatment of eye conditions such as amblyopia are greatly improved.
Brown-rot fungi break down the hemicellulose and cellulose that form the wood structure. Cellulose is broken down by hydrogen peroxide (H2O2) that is produced during the breakdown of hemicellulose. Because hydrogen peroxide is a small molecule, it can diffuse rapidly through the wood, leading to a decay that is not confined to the direct surroundings of the fungal hyphae. As a result of this type of decay, the wood shrinks, shows a brown discoloration, and cracks into roughly cubical pieces, a phenomenon termed cubical fracture. Because these fungi remove the cellulose compounds from the wood, the wood takes on a brown color. Brown rot in a dry, crumbly condition is sometimes incorrectly referred to as dry rot in general. The term brown rot replaced the general use of the term dry rot, as wood must be damp to decay, although it may become dry later. Dry rot is a generic name for certain species of brown-rot fungi. Soft-rot fungi secrete cellulase from their hyphae, an enzyme that breaks down cellulose in the wood. This leads to the formation of microscopic cavities inside the wood, and sometimes to a discoloration and cracking pattern similar to brown rot. Soft-rot fungi need fixed nitrogen in order to synthesize enzymes, which they obtain either from the wood or from the environment. Examples of soft-rot-causing fungi are Chaetomium, Ceratocystis, and Kretzschmaria deusta. Kretzschmaria deusta, commonly known as brittle cinder, is a fungus and plant pathogen found in temperate regions of the Northern Hemisphere. It is common on a wide range of broadleaved trees including beech (Fagus), oak (Quercus), lime (Tilia), horse chestnut and maple (Acer). It also causes serious damage to the base of rubber, tea, coffee and palms. It causes a soft rot, initially and preferentially degrading cellulose and ultimately breaking down both cellulose and lignin, and it colonises the lower stem and/or roots of living trees through injuries or by root contact with infected trees. It can result in sudden breakage in otherwise apparently healthy trees. The fungus continues to decay wood after the host tree has died, making K. deusta a facultative parasite. The resulting brittle fracture can exhibit a ceramic-like fracture surface. Black zone lines can often be seen in cross-sections of wood infected with K. deusta. The actively conducting portion of the stem in which tree cells are still alive and metabolically active is referred to as sapwood. In the living tree, sapwood is responsible not only for conduction of sap but also for storage and synthesis of biochemicals. An important storage function is the long-term storage of photosynthate. Carbon that must be expended to form a new flush of leaves or needles must be stored somewhere in the tree, and parenchyma cells of the sapwood are often where this material is stored. The primary storage forms of photosynthate are starch and lipids. Starch grains are stored in the parenchyma cells and can be easily seen with a microscope. Heartwood functions in long-term storage of biochemicals of many varieties depending on the species. These chemicals are known collectively as extractives.
In the past, heartwood was thought to be a disposal site for harmful byproducts of cellular metabolism, the so-called secondary metabolites. This led to the concept of the heartwood as a dumping ground for chemicals that, to a greater or lesser degree, would harm living cells if not sequestered in a safe place. We now know that extractives are a normal part of the plant’s system of protecting its wood. Extractives are responsible for imparting several larger-scale characteristics to wood. For example, extractives provide natural durability to timbers that have a resistance to decay fungi. In the case of a wood like teak (Tectona grandis), known for its stability and water resistance, these properties are conferred in large part by the waxes and oils formed and deposited in the heartwood. Many woods valued for their colors, such as mahogany (Swietenia mahagoni), African blackwood (Diospyros melanoxylon), and Brazilian rosewood (Dalbergia nigra),owe their value to the type and quantity of extractives in the heartwood. For these species, the sapwood has little or no value, because the desirable properties are imparted by heartwood extractives. White-rot fungi break down the lignin in wood, leaving the lighter-colored cellulose behind; some of them break down both lignin and cellulose. As a result, the wood changes texture, becoming moist, soft, spongy, or stringy; its color becomes white or yellow. Because white-rot fungi are able to produce enzymes, such as laccase, needed to break down lignin and other complex organic molecules, they have been investigated for use in mycoremediation applications. There are many different enzymes that are involved in the decay of wood by white-rot fungi, some of which directly oxidize lignin. White-rot fungi are grown all over the world as a source of food – for example the shiitake mushroom, which in 2003 comprised approximately 25% of total mushroom production.
10.2 Geostrophic Equations
The geostrophic balance requires that the Coriolis force balance the horizontal pressure gradient. The equations for geostrophic balance are derived from the equations of motion assuming the flow has no acceleration, du/dt = dv/dt = dw/dt = 0; that horizontal velocities are much larger than vertical, w << u, v; that the only external force is gravity; and that friction is small. With these assumptions (7.12) become

∂p/∂x = ρ f v,   ∂p/∂y = −ρ f u,   ∂p/∂z = −ρ g

where f = 2 Ω sin φ is the Coriolis parameter. These are the geostrophic equations. The equations can be written:

u = −(1/(f ρ)) ∂p/∂y,   v = (1/(f ρ)) ∂p/∂x    (10.7a)
p = p0 + ∫_z^ζ g ρ(z) dz    (10.7b)

where p0 is atmospheric pressure at z = 0, and ζ is the height of the sea surface. Note that we have allowed for the sea surface to be above or below the surface z = 0; and the pressure gradient at the sea surface is balanced by a surface current u_s. Substituting (10.7b) into (10.7a) gives:

u = −(g/(f ρ)) ∫_z^ζ (∂ρ/∂y) dz − (g/f) ∂ζ/∂y    (10.8)

where we have used the Boussinesq approximation, retaining full accuracy for ρ only when calculating pressure. In a similar way, we can derive the equation for v. If the ocean is homogeneous and density and gravity are constant, the first term on the right-hand side of (10.8) is equal to zero; and the horizontal pressure gradients within the ocean are the same as the gradient at z = 0. This is barotropic flow described in §10.4. If the ocean is stratified, the horizontal pressure gradient has two terms, one due to the slope at the sea surface, and an additional term due to horizontal density differences. These equations include baroclinic flow also discussed in §10.4. The first term on the right-hand side of (10.8) is due to variations in density ρ(z), and it is called the relative velocity. Thus calculation of geostrophic currents from the density distribution requires the velocity (u0, v0) at the sea surface or at some other depth.
Department of Oceanography, Texas A&M University Robert H. Stewart, [email protected] All contents copyright © 2005 Robert H. Stewart, All rights reserved Updated on October 17, 2005
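As a rough numerical illustration (not from the original text), the sketch below evaluates only the sea-surface slope term of the geostrophic relation, u_s = −(g/f) ∂ζ/∂y; the latitude and the assumed slope (1 m over 1000 km) are arbitrary example values.

import math

g = 9.81                           # gravitational acceleration, m/s^2
omega = 7.2921e-5                  # Earth's rotation rate, rad/s
lat = math.radians(30.0)           # example latitude
f = 2.0 * omega * math.sin(lat)    # Coriolis parameter, 1/s

dzeta_dy = 1.0 / 1.0e6             # assumed sea-surface slope: 1 m per 1000 km

u_s = -(g / f) * dzeta_dy          # surface geostrophic current from the slope term
print(f"f = {f:.2e} 1/s, u_s = {u_s:.2f} m/s")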
What Is Gresham’s Law?
Gresham’s law is a monetary principle stating that “bad money drives out good.” It is primarily used for consideration and application in currency markets. Gresham’s law was originally based on the composition of minted coins and the value of the precious metals used in them.
What is an example of Gresham’s law?
In economics, Gresham’s law is a monetary principle stating that “bad money drives out good”. For example, if there are two forms of commodity money in circulation, which are accepted by law as having similar face value, the more valuable commodity will gradually disappear from circulation.
What are the main limitations of Gresham’s law?
Limitations of Gresham’s law:
- If the total money in circulation, including both good and bad money, exceeds the actual monetary demand of the public.
- If the public is prepared to accept and circulate bad money.
- If the good money is full-bodied legal tender whose face value equals its intrinsic value.
What is the meaning of bad money drives out good money?
If there is counterfeit or inflated currency in circulation, people will hoard their genuine currency and only use the counterfeits in order to preserve the thing of true value. This phenomenon is also known as Gresham’s law.
Where is Gresham’s law not applicable?
If, for example, the government of a country that is operating under a bimetallic system declares the overvalued currency as legal tender, the public will hoard, export, or melt the undervalued currency type into bullion (Rothbard 1980). Gresham’s law is not only applicable to bimetallic currency systems.
How does Gresham’s law work?
Gresham’s law is the observation in economics that “bad money drives out good.” More exactly, if coins containing metal of different value have the same value as legal tender, the coins composed of the cheaper metal will be used for payment, while those made of more expensive metal will be hoarded or exported and thus tend to disappear from circulation.
Who made Gresham’s law?
The expression “Gresham’s Law” dates back only to 1858, when British economist Henry Dunning Macleod (1858, p. 476-8) decided to name the tendency for bad money to drive good money out of circulation after Sir Thomas Gresham (1519-1579).
What is Monometallism?
British Dictionary definitions for monometallism: monometallism / (ˌmɒnəʊˈmɛtəˌlɪzəm) / noun. The use of one metal, esp gold or silver, as the sole standard of value and currency; the economic policies supporting a monometallic standard.
What do you mean by gold standard?
The gold standard is a monetary system where a country’s currency or paper money has a value directly linked to gold. With the gold standard, countries agreed to convert paper money into a fixed amount of gold. In the U.S., for instance, the dollar is fiat money, and for Nigeria, it is the naira.
What is fiat money?
Fiat money is a government-issued currency that is not backed by a physical commodity, such as gold or silver, but rather by the government that issued it.
Who told bad money drives good money?
The principle of bad money taking over good is explained by Mr. Gresham and is popularly known as Gresham’s law, which states “bad money drives out good money…” Let me tell you a historical story of the Indian ruler Muhammad Bin Tughlaq. He ruled from 1324 to 1351.
What is seigniorage revenue?
Seigniorage may be counted as revenue for a government when the money it creates is worth more than it costs to produce. This revenue is often used by governments to finance portions of their expenditures without having to collect taxes.
What is the meaning of hot money?
“Hot money” refers to funds that are controlled by investors who actively seek short-term returns. These investors scan the market for short-term, high interest rate investment opportunities. A typical short-term investment opportunity that often attracts “hot money” is the certificate of deposit (CD).
What is the Gresham effect?
In this post I’ll explore the idea that the use of credit cards in payments is driving a modern Gresham effect, the result of which is a displacement of cash and an inflationary race to the bottom of sorts. First, we need to revisit the idea of Gresham’s law, or the idea that bad money drives out the good.
Are old coins legal tender?
Although the older notes cannot be used as legal tender, the Bank of England will accept them. A spokesperson told the BBC: “All genuine Bank of England banknotes that have been withdrawn from circulation retain their face value for all time.”
What does seigniorage mean?
Seigniorage refers to the profit made by a government from minting currency. Seigniorage is determined by the difference between the face value of the currency and the cost of producing it.
Give the correct meaning of the following medical term: phagophobia.
A phobia is a fear that is generally perceived as irrational. This can include literally anything. For example, there is a fear of peanut butter sticking to the roof of one's mouth, and this is referred to as arachibutyrophobia.
Answer and Explanation: 1
Phagophobia is the fear of swallowing and is considered psychological. This fear is generated by the irrational concern of choking. This can be caused by many things, including a bad experience with a certain food. These individuals typically avoid swallowing.
Learn more about this topic: from Chapter 11 / Lesson 3
A fearful over-reaction to an object or situation is labelled a phobia. Learn the definitions of two kinds of phobias, specific and social, as well as their causes and treatment.
Servo control systems require accurate control of motion parameters such as acceleration, velocity, and position. This requires a controller that can apply current (torque) to accelerate a motor in a given direction, as well as provide an opposing current to decelerate it. When this application of aiding and opposing torque can be carried out in both directions, it is referred to as four quadrant motor control (Figure 1). In four quadrant electric actuation systems, energy changes its form from electrical current flow to mechanical motion and vice versa. This conversion of energy is performed by an electric motor. An electric motor can be modeled electrically as a resistor, an inductor, and a voltage source. The resistor represents the resistance of the windings and internal wiring. The inductance is created from the turns of the wire that make up the windings. The voltage source is a result of the back electromotive force (EMF) created by the rotation of the motor shaft. When an electric motor shaft rotates, it produces an opposing voltage proportional to the motor’s angular velocity. When the applied voltage exceeds the back EMF voltage, motoring occurs. When the back EMF voltage is greater than the applied voltage, braking occurs and the motor generates energy. In steady state, the difference between the applied voltage and the motor’s back EMF, divided by the circuit’s resistance, gives the current flowing in the motor windings. A motor’s current is directly proportional to its mechanical output torque. Figure 2 depicts the conversion of energy from electrical input to mechanical output. Electrical energy is input to a power supply. The power supply converts the input energy into a form that can be used by the motor drivers [i.e., alternating current (AC) to direct current (DC)]. The motor driver applies the energy from the supply to the motor as necessary to obtain the intended motion. The electric motor then converts the electrical energy into mechanical energy. The output of the motor is typically mated with some form of mechanical actuator that converts the motor’s output to the intended motion. At each point in the conversion process, some energy is lost due to inefficiencies in the system. A moving object possesses kinetic energy. When a motor decelerates a moving object, the energy returned to the system has to go somewhere. Similarly, potential energy in the form of gravitational forces, springs, etc., can be returned to the system as objects move. The energy is passed to the motor, which converts the mechanical energy back to electrical energy. The motor driver converts the electrical energy from the motor and returns it to the power bus between it and the power supply. At this point, something must be done with the remaining energy (Figure 3). Similar to the motoring scenario, the conversion process is not 100% efficient, and a portion of the regenerated energy conversion energy is lost in the system as heat. There are several methods that can be used to handle the remaining energy. In some cases, it can be returned back to the power source (batteries, grid, etc.). If the energy is not removed from the system, the supply voltage will rise as the energy charges the bus capacitance. If the voltage rises too high, it could exceed voltage ratings of components and cause damage. This work was done by Joshua Stapp for the Army Armament Research, Development and Engineering Center. ARDEC-0006 This Brief includes a Technical Support Package (TSP). 
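As an illustrative sketch of these relationships (not from the original article), the function below computes the steady-state current, torque, and operating mode of a simple DC motor model; the motor constants, resistance, and operating points are invented example values.

def motor_state(v_applied, omega, k_e=0.05, k_t=0.05, resistance=0.5):
    """Steady-state behaviour of a simple DC motor model (illustrative values only)."""
    back_emf = k_e * omega                          # V, proportional to shaft speed
    current = (v_applied - back_emf) / resistance   # A, steady-state winding current
    torque = k_t * current                          # N*m, proportional to current
    power_in = v_applied * current                  # W, positive when flowing into the motor
    mode = "motoring" if torque * omega >= 0 else "braking (regenerating)"
    return current, torque, power_in, mode

# Applied voltage exceeds back EMF -> motoring (torque aids rotation)
print(motor_state(v_applied=12.0, omega=100.0))
# Back EMF exceeds applied voltage -> braking; energy flows back toward the supply
print(motor_state(v_applied=2.0, omega=100.0))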
Calculating Electrical Requirements for Direct Current Electric Actuators (reference ARDEC-0006) is currently available for download from the TSP library.
Term originally introduced in the late 1930s by Matthes (1939) to describe a broad interval of the late Holocene during which significant glacial advances were observed. In the climatological literature the LIA has now come to be used to characterize a more recent, shorter interval from around A.D. 1300 to 1450 until A.D. 1850 to 1900, during which regional evidence in Europe and elsewhere suggests generally cold conditions. Variations in the literature abound with regard to the precise definition, and the term is often used by paleoclimatologists and glaciologists without formal dates attached. The attribution of the term at regional scales is complicated by significant regional variations in temperature changes due to the influence of modes of climate variability such as the North Atlantic Oscillation and the El Nino/Southern Oscillation. Indeed, the utility of the term in describing past climate changes at regional scales has been questioned in the literature.

The Medieval Warm Period ("MWP") refers to a period of relative warmth in some regions of the Northern Hemisphere in comparison with the subsequent several centuries. Also referred to as the Medieval Warm Epoch (MWE). As with the ‘Little Ice Age’ (LIA), no well-defined precise date range exists. The dates A.D. 900–1300 cover most ranges generally used in the literature. Origin is difficult to track down, but it is believed to have been first used in the 1960s (probably by Lamb in 1965). As with the LIA, the attribution of the term at regional scales is complicated by significant regional variations in temperature changes, and the utility of the term in describing regional climate changes in past centuries has been questioned in the literature. As with the LIA, numerous myths can still be found in the literature with regard to the details of this climate period. These include the citation of the cultivation of vines in Medieval England.

A Microwave Sounding Unit (“MSU”) is a device that has been installed on polar orbiting satellites to measure, from space, the intensity of microwave radiation emitted by earth’s atmosphere. Different “channels” of the MSU measure different frequencies of radiation which can, in turn, be related to temperature averages of the atmosphere over different vertical regions. Channel 2 measurements provide a vertically-weighted temperature estimate that emphasizes the mid-troposphere (with small contributions from the stratosphere), while Channel 4 largely measures temperatures in the lower stratosphere. Information from MSUs has been used to generate the “MSU Temperature Record”. More information on MSU can be found here.

This is a somewhat outdated term used to refer to a sub-interval of the Holocene period from 5000-7000 years ago during which it was once thought that the earth was warmer than today. We now know that conditions at this time were probably warmer than today, but only in summer and only in the extratropics of the Northern Hemisphere. This summer warming appears to have been due to astronomical factors that favoured warmer Northern summers, but colder Northern winters and colder tropics, than today (see Hewitt and Mitchell, 1998; Ganopolski et al, 1998). The best available evidence from recent peer-reviewed studies suggests that annual, global mean warmth was probably similar to pre-20th century warmth, but less than late 20th century warmth, at this time (see Kitoh and Murakami, 2002).
Combinations of different channels of individual Microwave Sounding Unit ("MSU") measurements have been used to generate a record of estimated atmospheric temperature change back to 1979, the "MSU Temperature Record". The complex vertical weighting functions relating the various channels of the MSU to atmospheric temperatures complicate the interpretation of the MSU data. Moreover, while MSU measurements are available back to 1979, a single, continuous long record does not exist. Rather, measurements from different satellites have been combined to yield a single long record, further complicating the interpretation of the MSU record. Direct comparisons of the MSU Temperature Record with the surface temperature record are therefore difficult. More information on the MSU Temperature Record can be found here.

Pacific Decadal Oscillation ("PDO"): A pattern of variability in the ocean and atmosphere that appears to be centered in the extratropical North Pacific, which emphasizes decadal, rather than interannual, timescales. The term was introduced by Mantua et al. (1997). More information on the PDO can be found here.

Principal Component (PC) time series: The time history tied to a particular mode of time/space variance in a spatiotemporal data set (see "Principal Components Analysis").

Southern Oscillation Index ("SOI"): A measure of the difference in sea level pressure between the western (e.g., Darwin, Australia) and central/eastern (e.g., Tahiti) equatorial Pacific, representative of the east-west changes in atmospheric circulation associated with the El Nino/Southern Oscillation phenomenon. The term was introduced by Sir Gilbert Walker (Walker and Bliss, 1932). More information on the SOI can be found here.
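To make the last definition concrete, here is a minimal sketch of how an SOI-style index can be computed from monthly sea level pressure at Tahiti and Darwin. It is an illustration only: the input series below are synthetic, the function names are invented for the example, and operational products such as the Troup SOI use their own base periods and scaling conventions.

import numpy as np

def standardized_anomalies(slp, months_per_year=12):
    """Remove the monthly climatology and divide by the monthly standard deviation."""
    slp = np.asarray(slp, dtype=float).reshape(-1, months_per_year)  # years x months
    clim = slp.mean(axis=0)
    std = slp.std(axis=0, ddof=1)
    return ((slp - clim) / std).ravel()

def soi_like_index(slp_tahiti, slp_darwin):
    """Difference of standardized SLP anomalies (Tahiti minus Darwin),
    re-standardized so the resulting index has roughly unit variance."""
    diff = standardized_anomalies(slp_tahiti) - standardized_anomalies(slp_darwin)
    return diff / diff.std(ddof=1)

# Synthetic 30-year monthly records (hPa), purely for demonstration.
rng = np.random.default_rng(0)
n_months = 30 * 12
tahiti = 1012 + rng.normal(0, 1.5, n_months)
darwin = 1010 + rng.normal(0, 1.5, n_months)
index = soi_like_index(tahiti, darwin)
print(index[:6])

Sustained negative values of such an index correspond to El Nino conditions and sustained positive values to La Nina, consistent with the east-west circulation changes described above.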
The final part of the accounting process is the preparation of the financial statement. This final stage is complex, and students working on accounts often seek help with it. The financial statement provides information about the financial position of the business and its profitability. Section 129(1) of the Companies Act 2013 states that the financial statement of a company:
- gives a fair view of the company's state of affairs, and
- should follow the accounting standards notified under Section 133.
Objectives of Preparing Financial Statements:
- It gives a fair view of the financial position and the profit or loss of the business.
- It shows the true nature of the business's performance, that is, its assets and liabilities.
Characteristics of Financial Statements:
- Financial statements are historical documents, as they relate to a past period.
- They are expressed in terms of money.
- Financial statements reflect the financial position through the Balance Sheet and the company's profitability through the Statement of Profit and Loss.
Nature of Financial Statements:
- Recorded facts: Recorded facts refer to the data used to prepare financial statements, drawn from the accounting records. For example, the cost of fixed assets, cash at bank, trade receivables and cash in hand are recorded facts. Financial statements do not include facts that are not recorded in the books, however significant they may be. Assets purchased at different prices or at different times are shown in the Balance Sheet at their actual cost rather than at their replacement cost, because the accounting records treat the cost price as the recorded fact.
- Accounting conventions: Accounting conventions are fundamental accounting principles that have gained wide acceptance and must be kept in mind while preparing financial statements. For example, because of 'Conservatism', expected profits are ignored and provision is made in the books for expected future losses. Similarly, closing inventory is valued at realizable value or at cost, whichever is lower. This means that the actual financial position can be better than the position disclosed by the financial statements.
- Personal judgment: In accounting, personal judgment plays a decisive role. For example, the accountant decides whether an asset is to be depreciated on the written-down value method, the straight-line method or some other method, and also decides the rate of depreciation. Similarly, inventory may be valued at cost or realizable value using methods such as Standard Cost, Average Cost, First In First Out (FIFO) or Last In First Out (LIFO), and the accountant may choose any of these methods. Likewise, the accountant exercises judgment in classifying expenditure as capital or revenue, in fixing the period for writing off intangible assets, and in setting the rate of provision for doubtful debts.
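Because the choice of depreciation method is one of the personal judgments described above, a small worked sketch may help. The figures below (cost, salvage value, rate, useful life) are purely hypothetical, and the functions are illustrative rather than a prescribed accounting procedure.

def straight_line(cost, salvage, life_years):
    """Equal charge each year: (cost - salvage) spread evenly over the useful life."""
    annual = (cost - salvage) / life_years
    return [round(annual, 2)] * life_years

def written_down_value(cost, rate, life_years):
    """A fixed percentage of the opening book value each year,
    so the charge is front-loaded and declines over time."""
    charges, book_value = [], cost
    for _ in range(life_years):
        charge = book_value * rate
        charges.append(round(charge, 2))
        book_value -= charge
    return charges

# Hypothetical asset: cost 100,000, salvage 10,000, 5-year life, 25% WDV rate.
print(straight_line(100_000, 10_000, 5))      # [18000.0, 18000.0, 18000.0, 18000.0, 18000.0]
print(written_down_value(100_000, 0.25, 5))   # [25000.0, 18750.0, 14062.5, ...]

The same point applies to inventory valuation: FIFO, LIFO and average cost can yield different profit figures from identical transactions, which is exactly why the accountant's judgment shapes the financial statements.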
Some general instructions are to be followed for the preparation of the financial statement, in particular the Balance Sheet (a classification sketch follows this list):
- An asset shall be classified as current when it satisfies any of the following criteria:
  - it is intended for consumption or sale in, or is expected to be realized in, the normal operating cycle of the company;
  - it is held primarily for the purpose of being traded;
  - it is expected to be realized within twelve months after the reporting date.
  All other assets shall be classified as non-current.
- The operating cycle refers to the time between the acquisition of assets for processing and their realization in cash or cash equivalents. Where the company's normal operating cycle cannot be identified, it is assumed to have a duration of twelve months.
- A liability shall be classified as current only when it satisfies any of the following criteria:
  - it is expected to be settled in the company's normal operating cycle;
  - it is held primarily for the purpose of being traded;
  - it is due to be settled within twelve months after the reporting date.
  All other liabilities are classified as non-current.
- Receivables are called trade receivables if they are in respect of amounts due for goods sold or services rendered in the normal course of business. They may include Bills Receivable and Sundry Debtors.
- Payables in respect of amounts due for goods sold or services rendered in the normal course of business are called trade payables.
- No debit balance of profit and loss under assets: the removal of the debit balance of the Statement of Profit and Loss from the assets side is an important change in Schedule 3. It is now presented as a negative balance under the sub-heading 'Reserves and Surplus'.
- Miscellaneous expenditure such as Discount on Issue of Shares/Debentures, Preliminary Expenses, Loss on Issue of Debentures and Underwriting Commission should be written off against Securities Premium Reserve (if it exists), or against Surplus (that is, the balance in the Statement of Profit and Loss within 'Reserves and Surplus') or General Reserve, in the year in which it is incurred.
- The elimination of 'Schedules' under Schedule 3 means that such information is now furnished in the 'Notes to Accounts'.
These are a few basic points that students of accounting should remember and can refer to while preparing financial statements.
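As a rough illustration of the current/non-current tests listed above, the sketch below encodes them as a single rule-of-thumb function. It simplifies the actual Schedule 3 requirements, and the item fields and names are invented for the example.

from dataclasses import dataclass

@dataclass
class BalanceSheetItem:
    name: str
    realized_in_operating_cycle: bool   # consumed/sold/realized in the normal operating cycle
    held_for_trading: bool              # held primarily for the purpose of being traded
    months_to_realization: float        # expected months until realization or settlement

def classify(item: BalanceSheetItem) -> str:
    """Return 'current' if the item meets any of the criteria listed above, else 'non-current'.
    (Where the operating cycle cannot be identified, twelve months is assumed.)"""
    is_current = (
        item.realized_in_operating_cycle
        or item.held_for_trading
        or item.months_to_realization <= 12.0
    )
    return "current" if is_current else "non-current"

print(classify(BalanceSheetItem("inventory", True, False, 4)))             # current
print(classify(BalanceSheetItem("loan to a supplier", False, False, 36)))  # non-current

In practice the classification also depends on the entity's documented operating cycle, so the twelve-month figure is only the fallback described above.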
Minerals And Energy Resources - NCERT Solutions for Class 10 Social science
NCERT Solutions for Class 10 Social science Chapter 5, Minerals And Energy Resources, are provided here with simple step-by-step explanations. All questions and answers from the NCERT Book of Class 10 Social science Chapter 5 are given below.
Page No 63: Multiple choice questions
(i) Which one of the following minerals is formed by decomposition of rocks, leaving a residual mass of weathered material?
(ii) Koderma, in Jharkhand, is the leading producer of which one of the following minerals?
(c) iron ore
(iii) Minerals are deposited and accumulated in the strata of which of the following rocks?
(a) sedimentary rocks (b) metamorphic rocks (c) igneous rocks (d) none of the above
(iv) Which one of the following minerals is contained in the Monazite sand?
Answers: (i) (b) bauxite; (ii) (b) mica; (iii) (a) sedimentary rocks; (iv) (c) thorium
Page No 64: Answer the following questions in about 30 words.
(i) Distinguish between the following in not more than 30 words.
(a) Ferrous and non-ferrous minerals
(b) Conventional and non-conventional sources of energy.
(ii) What is a mineral?
(iii) How are minerals formed in igneous and metamorphic rocks?
(iv) Why do we need to conserve mineral resources?
Answers:
(i) (a) Minerals containing iron are called ferrous minerals, e.g., iron ore and manganese. Minerals which do not contain iron are called non-ferrous minerals, e.g., bauxite, lead and gold.
(b) Conventional sources of energy are generally exhaustible and polluting, e.g., firewood, coal and petroleum. Non-conventional sources of energy are usually inexhaustible and non-polluting, e.g., solar, wind, tidal and atomic energy.
(ii) A mineral is a homogeneous, naturally occurring substance with a definable interior structure. Minerals are formed by a combination of elements, and the mining of some minerals is very profitable.
(iii) In igneous and metamorphic rocks, molten/liquid and gaseous minerals are forced upwards into the cracks. They then solidify and form veins or lodes.
(iv) Mineral resources need to be conserved because they are limited. It takes billions of years for them to be replenished in nature. Continued extraction of ores leads to increasing costs of extraction and a decrease in quality as well as quantity.
Page No 64: Answer the following questions in about 120 words.
(i) Describe the distribution of coal in India.
(ii) Why do you think that solar energy has a bright future in India?
Answer:
(i) Coal in India is more abundant on the eastern side of the country. It occurs in rock series of two main geological ages—Gondwana and tertiary. While Gondwana coal is about 200 million years old, tertiary deposits are approximately 55 million years old. The major resources of Gondwana (metallurgical) coal are located in the Damodar valley (West Bengal, Jharkhand), where Jharia, Raniganj and Bokaro are important coalfields. The Godavari, Mahanadi, Son and Wardha valleys also contain coal deposits. Tertiary coals occur in the north-eastern states of Meghalaya, Assam, Arunachal Pradesh and Nagaland.
In this section on platypus ecology and behaviour you can read about:
- Foraging behaviour
- Diet and food consumption
- Home range, movements and dispersal
- Courtship, mating and nest-building
- Social communication
- Mortality factors
Foraging behaviour
Platypus feed only in the water. They find their small invertebrate prey by searching along shallow riffles, gleaning items from submerged logs and branches, digging under banks, and diving repeatedly to the bottom of pools. Animals most commonly feed for one extended session in each 24-hour period, typically remaining active for 8-16 (though occasionally up to 30) hours and completing up to 1600 foraging dives per session.
Platypus foraging behaviour in a pool begins with an animal doing a neat, quiet duck dive (as shown above). The animal swims to the bottom and uses its bill to find and seize prey. The platypus doesn't swallow food immediately, instead storing its prey in special cheek pouches located at the back of the jaw. It returns to the surface as its oxygen supply becomes depleted (usually within 30-60 seconds of when it dived, though unforced dives of up to 138 seconds have been recorded) and then typically spends 10-20 seconds chewing and swallowing its food before diving again.
Although platypus cheek pouches sometimes hold minor amounts of mud or sand, such material is presumably ingested by accident. In particular, there's no reason to believe that gritty sediment is retained on purpose to help grind up prey. Instead, inedible material is probably routinely expelled (along with surplus water) through grooves located along the edge of the lower jaw (as shown at left).
An insulating air layer trapped in the platypus's fur helps to provide positive buoyancy, increasing the amount of energy needed to dive deeply. A study conducted along the Manning River in New South Wales (which has a maximum depth of about 8 metres) found that about 80% of platypus foraging dives reached a depth of 1.6-4.9 metres, with the deepest descending to 6.1 metres. At Lake Lea in Tasmania (which has a maximum depth of more than 10 metres), 98% of platypus dives did not exceed 3 metres, though one dive descended to nearly 9 metres.
The use of data loggers has confirmed that platypus feed mainly but by no means exclusively at night, with around 25% of the animals tracked along a small Victorian stream and 40% of those tracked in a Tasmanian lake often recorded to be active during daylight hours.
Photos courtesy of APC (above) and Ann Killeen (below)
Diet and food consumption
The platypus typically has a varied diet dominated by bottom-dwelling (or "benthic") insects (as shown below). Although several studies have concluded that larval mayflies and caddisflies are particularly important dietary items, this may largely reflect the wide availability of these groups at the sites where the studies were conducted. The platypus also dines on water bugs, water beetles, and larval damselflies, dragonflies, dobsonflies, midges, craneflies and blackflies. Other prey includes freshwater shrimps, snails, "pea shell" mussels, seed-shrimps (or ostracods) and worms. Burrowing crayfish have been found to be an important part of the platypus diet in a Tasmanian lake, and trout eggs were often consumed along the Thredbo River in winter when fish were spawning. The platypus's ability to prey on fish or other vertebrates is restricted by its lack of true teeth as an adult.
Remains of a small frog (which may have been eaten as carrion) have been found in one platypus cheek pouch sample from the Shoalhaven River in New South Wales.
A young platypus is equipped with a set of shallow-rooted premolar and molar teeth located at the back of the bill, but these fall out around the time that a juvenile begins to eat solid prey. The teeth are replaced by rough grinding pads which grow continuously to offset natural wear – a very handy feature given that abrasive material such as sand may often accidentally enter the platypus's mouth when it snaps up its prey from the bottom.
Reflecting the fact that the platypus diet consists of small, soft-bodied prey items that are masticated quite finely even before they are swallowed, the platypus's stomach is small and lacks the ability to secrete digestive enzymes or hydrochloric acid. However, the platypus's stomach does contain Brunner's glands, which produce a mucus-rich secretion to help lubricate the intestinal walls and assist efficient nutrient uptake there.
Because the platypus is a relatively small, warm-blooded animal, it needs a lot of food to serve as fuel. Studies in captivity have shown that adult males need to consume the equivalent of around 15-28% of their body weight in food each day to maintain good physical condition. Similarly, the average daily food intake of animals occupying a Tasmanian lake has been estimated to be 19% of body mass. Not surprisingly, the energy requirements of lactating females increase substantially as their offspring grow. For example, daily food consumption of a mother in captivity was found to increase from around 14% of her body weight in the first month of lactation to slightly more than 36% in the final month of lactation.
Photos courtesy of https://www.mdfrc.org.au/bugguide/resources/howtouse.htm
Home range, movements and dispersal
Based on mark-recapture studies conducted along creeks in Victoria, a male platypus's home range typically measures 6-11 kilometres in length. By comparison, a female's home range is generally 2-4 kilometres long. The difference reflects the fact that a male tries to encompass as many female home ranges as possible within his own to improve his prospects for mating. Though adult males try to avoid coming into direct contact with each other, especially just before and during the breeding season, male home ranges often overlap to some extent. Female home ranges also typically overlap, in a well-ordered manner that ensures that each female has enough room to raise her young. The longest platypus home ranges described to date stretched respectively 15.1 kilometres (male) and 6.0 kilometres (female). Most adults occupy stable home ranges for periods of at least several years.
In a Victorian radio-tracking study, males and females typically visited 24-70% of their home range on any given day. Adults have been documented to travel up to 4.0 kilometres (female) or 10.4 kilometres (male, including backtracking) along a creek or river channel in a single activity period. At Lake Lea in Tasmania, daily male activity areas encompassed 3-35 hectares (up to 25% of the total surface area) with females using 2-58 hectares (up to 41% of the surface area).
Subsurface swimming has been calculated to be most efficient when a platypus travels at a speed of 0.4 metre/second (1.4 kilometres per hour). However, a feeding platypus typically proceeds at a more leisurely rate of 0.1-0.7 kilometres per hour as it dives and resurfaces.
Juvenile dispersal is believed to be an important mechanism to reduce inbreeding and enable vacant habitat to be repopulated. It's well established that young males move farther on average than young females, and that females are more likely than males to settle near where they grew up. Because the number of juveniles captured in live-trapping studies in Victorian streams drops quite sharply in late autumn, it's also believed that many juveniles initiate dispersal at this time of year. Young dispersing males have been known to travel more than 50 kilometres in the Yarra River system and nearly 45 kilometres in the Wimmera River system, and undoubtedly may sometimes venture much farther.
Photos courtesy of L. Berzins (above), B. Catherine (below)
Platypus mainly sleep in burrows located near the water's edge, though they may also occasionally shelter in a handy hollow log or (in Tasmania) within a dense clump of low-growing vegetation. Platypus burrows are divided into two types: nesting burrows and camping burrows.
A nesting burrow provides shelter for a mother and her offspring for several months. It's typically 3-6 metres long (measured in a straight line from the entrance to the nesting chamber), though it may be much longer, particularly along rivers prone to major flooding. The entrance is roughly oval in outline and just large enough to allow an adult platypus to enter (as shown at right). Whenever a mother of young juveniles enters or exits her burrow, she blocks the entry tunnel with a series of 2-9 compacted soil plugs (or "pugs"). It's believed that the pugs both deter predators from entering and help to protect juveniles from drowning if flooding occurs.
Camping burrows are occupied by animals that are not caring for eggs or young. They are typically 1-2 (though can be 4 or more) metres long. Research has shown that camping burrow entrances are sometimes located underwater, with others typically well hidden beneath an undercut bank or overhanging vegetation as illustrated below (entrance locations marked by red arrows). A platypus will normally occupy two or more camping burrows in a period of a few weeks, including some that may be used by other individuals. For example, a radio-tracking study carried out along a stream in southern Victoria found that between 6 and 12 burrows were occupied by each of five animals (3 males, 2 females) monitored for 28 to 38 days. Three animals occupied at least one burrow known to be used by another animal during the study, though only one burrow was ever occupied by two individuals at the same time.
Platypus have been recorded mating in late winter and spring (peaking in about September) in Victoria and New South Wales; breeding is believed to occur a few weeks earlier on average in Queensland and a few weeks later in Tasmania. Mates do not form lasting pair bonds: males court as many females as possible, and females rear their young without male assistance. A clutch of 1-3 whitish, leathery-shelled eggs (15-17 millimetres long) is laid approximately 2-3 weeks after mating. The eggs are then incubated for 10-11 days in an underground nesting burrow, clasped between a female's curled-up tail and her belly. The young are tiny and very immature when they hatch. Their exit from the egg is assisted by a prominent bump (or "caruncle") at the end of the snout, an inwardly curving egg tooth and tiny claws on the front feet.
After hatching, the babies develop in the nesting burrow for several months before entering the water for the first time in summer. Throughout this period, they feed only on milk. Because a female platypus doesn't have nipples, a baby sweeps its stubby bill rhythmically from side to side to slurp up milk secreted directly onto the mother's belly from two round patches of skin. Platypus milk is thick and rich, containing on average about 39% solids (as compared to 12% solids in cow milk), 22% fat (about six times the average value for cow milk) and 8% protein (more than double the average value for cow milk).
Juveniles are fully furred, well-coordinated and about 80% of their adult length when they first enter the water (as shown above). They aren't taught to swim or to find food by their mother, but have to master these skills on their own through trial and error.
Males and females both mature at the age of two years, although some females may not raise young until they are four years old or more. A long-term study conducted along the upper Shoalhaven River in New South Wales concluded that less than half of all females breed on average in any given year (range = 18-80%). Along both the Shoalhaven River and urban streams near Melbourne, more females raise young in years when water flow has been plentiful in the five months before mating begins, suggesting that this is a crucial period for a female to store fat before her body decides to breed. Reproductive success can also be greatly reduced if major flooding occurs in the period when juveniles occupy nesting burrows, presumably because young animals drown when their burrow is inundated.
Photos courtesy of Ann Killeen (above), APC (below)
Courtship, mating and nest-building
Based on studies in captivity, a female platypus decides when courtship occurs. This often results in the male grasping the tip of her tail in his bill and swimming with her in a tight circle (as shown above) or being towed behind her as she twists and turns near the water surface. The actual mating event typically lasts 3-4 minutes, and can occur either while both animals are supported in shallow water by a structure (such as a partly submerged log) or while they're floating in deeper water.
A number of mating postures have been described. A relatively large male (1.6 kg) mounted his partner from above and behind, wrapping his tail beneath her body and grasping her hind feet and back with his front feet to maintain his position. In contrast, a smaller male (1.1 kg) lay on his side next to his partner while using his bill to grip her neck and his hind feet to grip her body. Pairs that mate while floating in the water may end up facing in opposite directions and upside down relative to one another, forcing them to rotate around their long axis so each can breathe in turn.
A female starts gathering material to build a nest about 1-2 weeks after mating. She continues this activity for 2-5 nights, finishing shortly before she retires to the burrow to lay and incubate her clutch of eggs. In captivity, females were observed using the bill to gather floating grass and leaves from the water surface. This material was then passed under the body to the tail, which was curled forward to hold the bundle firmly against the female's belly as she swam to the nesting burrow entrance. The finished nest takes the form of a hollow sphere or cup. Because wet materials are used to build a platypus nest, it's unlikely that the nest serves to keep the eggs and young warm.
Instead, its main role is probably to maintain humidity in the burrow so eggs and small hairless juveniles don't dry out when their mother has to leave to find food or carry out other duties.
Photo courtesy of M. Kirton
Social communication
There is no evidence that platypus use sound to communicate with each other, apart from occasionally producing a querulous growl (similar to the noise of complaint made by a broody hen) when they feel threatened or annoyed.
Although aquatic mammals typically don't rely much on their sense of smell, the platypus has an exceptionally large number of genes coding for special smell receptors located in the vomeronasal (or Jacobson's) organ found in the roof of its mouth. In other mammals, the vomeronasal organ is mainly used to detect odours produced by members of the same species. In the case of the platypus, both sexes have scent glands located at the base of the neck. The glands are much larger in adult males than females and become more active in both sexes during the breeding season. When males are handled at this time of year, the glands often release small drops of a pale yellow fluid with a strong, musky odour. A male held in captivity has also been seen releasing a yellow, mucilaginous liquid from his cloaca after swimming to the bottom of a pool and then pausing above a stone or other object. The person reporting this sequence of events considered it likely to be a form of marking behaviour.
Photo courtesy of APC
Mortality factors
An analysis of 183 platypus mortalities reported to the APC from the 1980s to 2009 in Victoria found that animals died most often after drowning in illegal nets or traps set to capture fish or crayfish/yabbies (56% of all mortalities). Only 18% of victims were killed by more or less natural causes (such as predators, drought and flooding). However, this figure undoubtedly underestimates the actual impact of natural factors, given that many victims of predation are presumably eaten entirely, and those dying from starvation, disease or heat stress are undoubtedly less likely to be found (and the cause of death accurately identified) than those killed directly by human activities. It's also worth noting that use of enclosed crayfish/yabby traps in Victoria was completely banned beginning in July 2019.
Apart from nets and traps, mortality factors identified in the Victorian study (in decreasing order of importance) were as follows:
- Predation by dogs, foxes or birds of prey – 13% of victims
- Irrigation pumps, mini-hydro turbines or other infrastructure – 10% of victims
- Embedded fishing hooks or discarded fishing line – 5% of victims
- Entanglement in other sources of litter – 4% of victims
- Flooding – 3% of victims
- Run over by a car – 3% of victims
- Shot or bludgeoned by humans – 2% of victims
- Drought – 2% of victims
- Other (such as juveniles dug up during earth-moving works) – 2% of victims
By comparison, a study of factors contributing to 23 platypus mortalities in the mid-1990s in Tasmania concluded that the most common cause of death was attack by domesticated dogs (43% of victims). This was followed by being run over by a car (30% of victims), starvation or exposure due to natural causes such as flooding (17% of victims) and infection by the ulcerative fungus Mucor amphibiorum (9% of victims).
Differences in the Victorian and Tasmanian findings reflect the fact that:
- Use of crayfish/yabby traps was undoubtedly much less widespread in Tasmania than in Victoria (due to differences in fishing regulations).
- Foxes were not present in Tasmania in the mid-1990s.
- As compared to mainland platypus, Tasmanian animals generally spend more time travelling across land and therefore are more at risk of being attacked by pet dogs or hit by cars.
- There are no known cases of platypus on the mainland becoming sick due to infection by Mucor.
Photo courtesy of B. McNamara
Coal makes energy in six fairly simple steps. 1) The coal or natural gas is first burned for mainly thermal (heat) energy. 2) That thermal energy is used to boil water and to produce steam. Like natural gas, coal in the Southwest Power Pool is cycled to accommodate wind power.
How does coal generate electricity? Step 1: coal is mined from the ground and sent to coal power plants; there are approximately 1,100 coal mines in North America. Step 2: coal is crushed into a fine powder. Because there is a growing shortage of fresh water globally, Eskom's power stations are designed to make optimum use of this scarce resource. Using coal to generate electricity is not ideal because, no matter how carefully it is burnt, there are gaseous and solid emissions. More than half of the electricity generated in the world is produced using coal as the primary fuel. The function of a coal-fired thermal power plant is to convert the energy available in the coal into electricity. The Fisk and Crawford power plants have a combined generating capacity of nearly 900 megawatts of electricity – enough to provide power for more than half a …
How a coal plant works: coal-fired plants produce electricity by burning coal in a boiler to produce steam. The steam, under tremendous pressure, flows into a turbine, which spins a generator to create electricity. The turbines are connected to the generators and spin them at 3,600 revolutions per minute to make alternating current. Electricity is made at a generating station by huge generators; generating stations can use wind, coal, natural gas, or water, and the current is sent through transformers to increase the voltage to push the power …
Power generated in one hour = 1 kWh. One kWh = 860 kcal. The conversion efficiency from fuel to power generation is taken as 30%, and the calorific value of coal is taken as 5000 kcal/kg. Coal consumption for one kWh = (1 x 860 / 0.3) / 5000 = 0.573 kg. To get the correct consumption, substitute the correct calorific value of coal and the correct conversion efficiency (a short calculation sketch is given below).
Air pollution from coal-fired power plants is linked with asthma, cancer, heart and lung ailments, neurological problems, acid rain, global warming, and other …
Coal, like other fossil fuel supplies, takes millions of years to create, but releases its stored energy within only a few moments when burned to generate electricity. Because coal is a finite resource and cannot be replenished once it is extracted and burned, it cannot be considered a renewable resource. Coal-fired plants generated 30 percent of the nation's electricity last year; coal was the chief source of electrical generation in 19 states and the second most common source in another nine.
How much does it cost to generate electricity with different types of power plants? The U.S. Energy Information Administration (EIA) has historical data on the average annual operation, maintenance, and fuel costs for existing power plants by major fuel or energy source types in Table 8.4. As a frame of reference, 1 trillion Btu is equal to about 51,000 tons of coal.
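The 0.573 kg-per-kWh figure quoted above follows directly from the stated assumptions (860 kcal per kWh, 30% conversion efficiency, 5000 kcal/kg coal). The short sketch below simply re-runs that arithmetic so the assumed calorific value and efficiency can be varied; the defaults are the values quoted in the text, not measured plant data.

KCAL_PER_KWH = 860.0  # heat equivalent of 1 kWh, in kilocalories

def coal_per_kwh(calorific_value_kcal_per_kg=5000.0, plant_efficiency=0.30):
    """Kilograms of coal burned per kWh of electricity delivered."""
    heat_required_kcal = KCAL_PER_KWH / plant_efficiency   # fuel heat needed for 1 kWh
    return heat_required_kcal / calorific_value_kcal_per_kg

kg = coal_per_kwh()
print(round(kg, 3), "kg of coal per kWh")       # ~0.573
print(round(kg * 1000), "kg of coal per MWh")   # ~573

At these assumptions the same function gives roughly 570 kg of coal per MWh, the same order of magnitude as the ~500 kg/MWh figure used in the passage that follows.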
Coal is the largest domestically produced source of energy in America and is used to generate a significant chunk of the nation's electricity. The Energy Department is working to develop technologies that make coal cleaner, so it can play a part in a clean energy future. Coal, a nonrenewable fossil fuel, is used to generate approximately 54 percent of the electricity in Ohio. Coal is burned to produce heat, which converts water into high-pressure steam. Coal-fired power plants, which produce almost half of the country's electricity, have significant impacts on water quantity and quality in the United States. Water is used to extract, wash, and sometimes transport the coal; to cool the steam used to make electricity in the power plant; and to control pollution from the plant.
Coal is Australia's second largest export commodity, accounting for about 13 percent of Australia's total exports in 2012-2013 and worth about $38 billion. Burning coal to generate electricity produces greenhouse gases, which cause climate change.
Types of power plants: steam turbine. Most traditional power plants make energy by burning fuel to release heat. For that reason, they're called thermal (heat-based) power plants. Coal and oil plants burn fuel with oxygen to release heat energy, which boils water and drives a steam turbine. This basic design is sometimes called a simple cycle.
How much coal is required to generate 1 MWh of electricity? It takes roughly 500 kg of coal to generate 1 MWh of electricity. If you really want to know power (MW, not MWh), then use a 1-second period to determine 1 MW: 500 kg / 3600 s = 138.9 g/sec to produce 1 MW, roughly.
PGE meets its area's growing energy demands with a diverse mix of generation facilities that includes water power, wind, coal and natural gas combustion. Electricity generation is the process of generating electric power from sources of primary energy. For electric utilities in the electric power industry, it is the first stage in the delivery of electricity to end users, the other stages being transmission, distribution, energy …
Coal burnt as a solid fuel in coal power stations to generate electricity is called thermal coal. Coal is also used to produce very high temperatures through combustion. Efforts around the world to reduce the use of coal have led some regions to switch to natural gas and electricity from lower-carbon sources. Natural gas is a cleaner and more efficient source of electrical generation compared to coal; it is more environmentally friendly because it produces less carbon and other harmful emissions than coal. Internationally, coal is currently the most widely used primary fuel, accounting for approximately 36% of the world's electricity production.
This situation is likely to remain until at least 2020. South Africa's infrastructure to generate electricity from coal is well established.
How does coal make electricity? Simply put, coal-fired electricity generation is a five-step process, the first step of which is that thermal coal (either black or brown) that has been pulverised to a fine powder is burned. Steam coal, also known as thermal coal, is used in power stations to generate electricity. Coal is first milled to a fine powder, which increases the surface area and allows it to burn more quickly.
Electricity is made from many different resources, including non-renewable and renewable sources. Non-renewable sources include uranium and fossil fuels, like coal and natural gas. When they're used to generate power, these sources produce waste, and will eventually run out.
Adolescence is a period that begins with puberty and ends with the transition to adulthood (approximately ages 10–20). Physical changes associated with puberty are triggered by hormones. Cognitive changes include improvements in complex and abstract thought, as well as development that happens at different rates in distinct parts of the brain and increases adolescents' propensity for risky behavior because increases in sensation-seeking and reward motivation precede increases in cognitive control. Adolescents' relationships with parents go through a period of redefinition in which adolescents become more autonomous, and aspects of parenting, such as distal monitoring and psychological control, become more salient. Peer relationships are important sources of support and companionship during adolescence yet can also promote problem behaviors. Same-sex peer groups evolve into mixed-sex peer groups, and adolescents' romantic relationships tend to emerge from these groups. Identity formation occurs as adolescents explore and commit to different roles and ideological positions. Nationality, gender, ethnicity, socioeconomic status, religious background, sexual orientation, and genetic factors shape how adolescents behave and how others respond to them, and are sources of diversity in adolescence.
- Describe major features of physical, cognitive, and social development during adolescence.
- Understand why adolescence is a period of heightened risk taking.
- Be able to explain sources of diversity in adolescent development.
Adolescence is a developmental stage that has been defined as starting with puberty and ending with the transition to adulthood (approximately ages 10–20). Adolescence has evolved historically, with evidence indicating that this stage is lengthening as individuals start puberty earlier and transition to adulthood later than in the past. Puberty today begins, on average, at age 10–11 years for girls and 11–12 years for boys. This average age of onset has decreased gradually over time since the 19th century by 3–4 months per decade, which has been attributed to a range of factors including better nutrition, obesity, increased father absence, and other environmental factors (Steinberg, 2013). Completion of formal education, financial independence from parents, marriage, and parenthood have all been markers of the end of adolescence and beginning of adulthood, and all of these transitions happen, on average, later now than in the past. In fact, the prolonging of adolescence has prompted the introduction of a new developmental period called emerging adulthood that captures these developmental changes out of adolescence and into adulthood, occurring from approximately ages 18 to 29 (Arnett, 2000).
This module will outline changes that occur during adolescence in three domains: physical, cognitive, and social. Within the social domain, changes in relationships with parents, peers, and romantic partners will be considered. Next, the module turns to adolescents' psychological and behavioral adjustment, including identity formation, aggression and antisocial behavior, anxiety and depression, and academic achievement. Finally, the module summarizes sources of diversity in adolescents' experiences and development.
Physical changes of puberty mark the onset of adolescence (Lerner & Steinberg, 2009). For both boys and girls, these changes include a growth spurt in height, growth of pubic and underarm hair, and skin changes (e.g., pimples).
Boys also experience growth in facial hair and a deepening of their voice. Girls experience breast development and begin menstruating. These pubertal changes are driven by hormones, particularly an increase in testosterone for boys and estrogen for girls. Major changes in the structure and functioning of the brain occur during adolescence and result in cognitive and behavioral developments (Steinberg, 2008). Cognitive changes during adolescence include a shift from concrete to more abstract and complex thinking. Such changes are fostered by improvements during early adolescence in attention, memory, processing speed, and metacognition (ability to think about thinking and therefore make better use of strategies like mnemonic devices that can improve thinking). Early in adolescence, changes in the brain’s dopaminergic system contribute to increases in adolescents’ sensation-seeking and reward motivation. Later in adolescence, the brain’s cognitive control centers in the prefrontal cortex develop, increasing adolescents’ self-regulation and future orientation. The difference in timing of the development of these different regions of the brain contributes to more risk taking during middle adolescence because adolescents are motivated to seek thrills that sometimes come from risky behavior, such as reckless driving, smoking, or drinking, and have not yet developed the cognitive control to resist impulses or focus equally on the potential risks (Steinberg, 2008). One of the world’s leading experts on adolescent development, Laurence Steinberg, likens this to engaging a powerful engine before the braking system is in place. The result is that adolescents are more prone to risky behaviors than are children or adults. Although peers take on greater importance during adolescence, family relationships remain important too. One of the key changes during adolescence involves a renegotiation of parent–child relationships. As adolescents strive for more independence and autonomy during this time, different aspects of parenting become more salient. For example, parents’ distal supervision and monitoring become more important as adolescents spend more time away from parents and in the presence of peers. Parental monitoring encompasses a wide range of behaviors such as parents’ attempts to set rules and know their adolescents’ friends, activities, and whereabouts, in addition to adolescents’ willingness to disclose information to their parents (Stattin & Kerr, 2000). Psychological control, which involves manipulation and intrusion into adolescents’ emotional and cognitive world through invalidating adolescents’ feelings and pressuring them to think in particular ways (Barber, 1996), is another aspect of parenting that becomes more salient during adolescence and is related to more problematic adolescent adjustment. As children become adolescents, they usually begin spending more time with their peers and less time with their families, and these peer interactions are increasingly unsupervised by adults. Children’s notions of friendship often focus on shared activities, whereas adolescents’ notions of friendship increasingly focus on intimate exchanges of thoughts and feelings. During adolescence, peer groups evolve from primarily single-sex to mixed-sex. 
Adolescents within a peer group tend to be similar to one another in behavior and attitudes, which has been explained as being a function of homophily (adolescents who are similar to one another choose to spend time together in a “birds of a feather flock together” way) and influence (adolescents who spend time together shape each other’s behavior and attitudes). One of the most widely studied aspects of adolescent peer influence is known as deviant peer contagion (Dishion & Tipsord, 2011), which is the process by which peers reinforce problem behavior by laughing or showing other signs of approval that then increase the likelihood of future problem behavior. Peers can serve both positive and negative functions during adolescence. Negative peer pressure can lead adolescents to make riskier decisions or engage in more problematic behavior than they would alone or in the presence of their family. For example, adolescents are much more likely to drink alcohol, use drugs, and commit crimes when they are with their friends than when they are alone or with their family. However, peers also serve as an important source of social support and companionship during adolescence, and adolescents with positive peer relationships are happier and better adjusted than those who are socially isolated or have conflictual peer relationships. Crowds are an emerging level of peer relationships in adolescence. In contrast to friendships (which are reciprocal dyadic relationships) and cliques (which refer to groups of individuals who interact frequently), crowds are characterized more by shared reputations or images than actual interactions (Brown & Larson, 2009). These crowds reflect different prototypic identities (such as jocks or brains) and are often linked with adolescents’ social status and peers’ perceptions of their values or behaviors. Adolescence is the developmental period during which romantic relationships typically first emerge. Initially, same-sex peer groups that were common during childhood expand into mixed-sex peer groups that are more characteristic of adolescence. Romantic relationships often form in the context of these mixed-sex peer groups (Connolly, Furman, & Konarski, 2000). Although romantic relationships during adolescence are often short-lived rather than long-term committed partnerships, their importance should not be minimized. Adolescents spend a great deal of time focused on romantic relationships, and their positive and negative emotions are more tied to romantic relationships (or lack thereof) than to friendships, family relationships, or school (Furman & Shaffer, 2003). Romantic relationships contribute to adolescents’ identity formation, changes in family and peer relationships, and adolescents’ emotional and behavioral adjustment. Furthermore, romantic relationships are centrally connected to adolescents’ emerging sexuality. Parents, policymakers, and researchers have devoted a great deal of attention to adolescents’ sexuality, in large part because of concerns related to sexual intercourse, contraception, and preventing teen pregnancies. However, sexuality involves more than this narrow focus. For example, adolescence is often when individuals who are lesbian, gay, bisexual, or transgender come to perceive themselves as such (Russell, Clarke, & Clary, 2009). Thus, romantic relationships are a domain in which adolescents experiment with new behaviors and identities. 
Behavioral and Psychological Adjustment
Theories of adolescent development often focus on identity formation as a central issue. For example, in Erikson's (1968) classic theory of developmental stages, identity formation was highlighted as the primary indicator of successful development during adolescence (in contrast to role confusion, which would be an indicator of not successfully meeting the task of adolescence). Marcia (1966) described identity formation during adolescence as involving both decision points and commitments with respect to ideologies (e.g., religion, politics) and occupations. He described four identity statuses: foreclosure, identity diffusion, moratorium, and identity achievement. Foreclosure occurs when an individual commits to an identity without exploring options. Identity diffusion occurs when adolescents neither explore nor commit to any identities. Moratorium is a state in which adolescents are actively exploring options but have not yet made commitments. Identity achievement occurs when individuals have explored different options and then made identity commitments. Building on this work, other researchers have investigated more specific aspects of identity. For example, Phinney (1989) proposed a model of ethnic identity development that included stages of unexplored ethnic identity, ethnic identity search, and achieved ethnic identity.
Aggression and antisocial behavior
Several major theories of the development of antisocial behavior treat adolescence as an important period. Patterson's (1982) early versus late starter model of the development of aggressive and antisocial behavior distinguishes youths whose antisocial behavior begins during childhood (early starters) versus adolescence (late starters). According to the theory, early starters are at greater risk for long-term antisocial behavior that extends into adulthood than are late starters. Late starters who become antisocial during adolescence are theorized to experience poor parental monitoring and supervision, aspects of parenting that become more salient during adolescence. Poor monitoring and lack of supervision contribute to increasing involvement with deviant peers, which in turn promotes adolescents' own antisocial behavior. Late starters desist from antisocial behavior when changes in the environment make other options more appealing. Similarly, Moffitt's (1993) life-course persistent versus adolescent-limited model distinguishes between antisocial behavior that begins in childhood versus adolescence. Moffitt regards adolescent-limited antisocial behavior as resulting from a "maturity gap" between adolescents' dependence on and control by adults and their desire to demonstrate their freedom from adult constraint. However, as they continue to develop, and legitimate adult roles and privileges become available to them, there are fewer incentives to engage in antisocial behavior, leading to desistance in these antisocial behaviors.
Anxiety and depression
Developmental models of anxiety and depression also treat adolescence as an important period, especially in terms of the emergence of gender differences in prevalence rates that persist through adulthood (Rudolph, 2009). Starting in early adolescence, compared with males, females have rates of anxiety that are about twice as high and rates of depression that are 1.5 to 3 times as high (American Psychiatric Association, 2013).
Although the rates vary across specific anxiety and depression diagnoses, rates for some disorders are markedly higher in adolescence than in childhood or adulthood. For example, prevalence rates for specific phobias are about 5% in children and 3%–5% in adults but 16% in adolescents. Anxiety and depression are particularly concerning because suicide is one of the leading causes of death during adolescence. Developmental models focus on interpersonal contexts in both childhood and adolescence that foster depression and anxiety (e.g., Rudolph, 2009). Family adversity, such as abuse and parental psychopathology, during childhood sets the stage for social and behavioral problems during adolescence. Adolescents with such problems generate stress in their relationships (e.g., by resolving conflict poorly and excessively seeking reassurance) and select into more maladaptive social contexts (e.g., “misery loves company” scenarios in which depressed youths select other depressed youths as friends and then frequently co-ruminate as they discuss their problems, exacerbating negative affect and stress). These processes are intensified for girls compared with boys because girls have more relationship-oriented goals related to intimacy and social approval, leaving them more vulnerable to disruption in these relationships. Anxiety and depression then exacerbate problems in social relationships, which in turn contribute to the stability of anxiety and depression over time. Adolescents spend more waking time in school than in any other context (Eccles & Roeser, 2011). Academic achievement during adolescence is predicted by interpersonal (e.g., parental engagement in adolescents’ education), intrapersonal (e.g., intrinsic motivation), and institutional (e.g., school quality) factors. Academic achievement is important in its own right as a marker of positive adjustment during adolescence but also because academic achievement sets the stage for future educational and occupational opportunities. The most serious consequence of school failure, particularly dropping out of school, is the high risk of unemployment or underemployment in adulthood that follows. High achievement can set the stage for college or future vocational training and opportunities. Adolescent development does not necessarily follow the same pathway for all individuals. Certain features of adolescence, particularly with respect to biological changes associated with puberty and cognitive changes associated with brain development, are relatively universal. But other features of adolescence depend largely on circumstances that are more environmentally variable. For example, adolescents growing up in one country might have different opportunities for risk taking than adolescents in a different country, and supports and sanctions for different behaviors in adolescence depend on laws and values that might be specific to where adolescents live. Likewise, different cultural norms regarding family and peer relationships shape adolescents’ experiences in these domains. For example, in some countries, adolescents’ parents are expected to retain control over major decisions, whereas in other countries, adolescents are expected to begin sharing in or taking control of decision making. 
Even within the same country, adolescents' gender, ethnicity, immigrant status, religion, sexual orientation, socioeconomic status, and personality can shape both how adolescents behave and how others respond to them, creating diverse developmental contexts for different adolescents. For example, early puberty (that occurs before most other peers have experienced puberty) appears to be associated with worse outcomes for girls than boys, likely in part because girls who enter puberty early tend to associate with older boys, which in turn is associated with early sexual behavior and substance use. For adolescents who are ethnic or sexual minorities, discrimination sometimes presents a set of challenges that nonminorities do not face.
Finally, genetic variations contribute an additional source of diversity in adolescence. Current approaches emphasize gene X environment interactions, which often follow a differential susceptibility model (Belsky & Pluess, 2009). That is, particular genetic variations are considered riskier than others, but genetic variations also can make adolescents more or less susceptible to environmental factors. For example, the association between the CHRM2 genotype and adolescent externalizing behavior (aggression and delinquency) has been found in adolescents whose parents are low in monitoring behaviors (Dick et al., 2011). Thus, it is important to bear in mind that individual differences play an important role in adolescent development.
Adolescent development is characterized by biological, cognitive, and social changes. Social changes are particularly notable as adolescents become more autonomous from their parents, spend more time with peers, and begin exploring romantic relationships and sexuality. Adjustment during adolescence is reflected in identity formation, which often involves a period of exploration followed by commitments to particular identities. Adolescence is characterized by risky behavior, which is made more likely by changes in the brain in which reward-processing centers develop more rapidly than cognitive control systems, making adolescents more sensitive to rewards than to possible negative consequences. Despite these generalizations, factors such as country of residence, gender, ethnicity, and sexual orientation shape development in ways that lead to diversity of experiences across adolescence.
- Podcasts: Society for Research on Adolescence website with links to podcasts on a variety of topics, from autonomy-relatedness in adolescence, to the health ramifications of growing up in the United States.
- Study: The National Longitudinal Study of Adolescent to Adult Health (Add Health) is a longitudinal study of a nationally representative sample of adolescents in grades 7-12 in the United States during the 1994-95 school year. Add Health combines data on respondents' social, economic, psychological and physical well-being with contextual data on the family, neighborhood, community, school, friendships, peer groups, and romantic relationships.
- Video: This is a series of TED talks on topics from the mysterious workings of the adolescent brain, to videos about surviving anxiety in adolescence.
- Web: UNICEF website on adolescents around the world. UNICEF provides videos and other resources as part of an initiative to challenge common preconceptions about adolescence.
- What can parents do to promote their adolescents' positive adjustment?
- In what ways do changes in brain development and cognition make adolescents particularly susceptible to peer influence?
- How could interventions designed to prevent or reduce adolescents' problem behavior be developed to take advantage of what we know about adolescent development?
- Reflecting on your own adolescence, provide examples of times when you think your experience was different from those of your peers as a function of something unique about you.
- In what ways was your experience of adolescence different from your parents' experience of adolescence? How do you think adolescence may be different 20 years from now?
- Crowds - Adolescent peer groups characterized by shared reputations or images.
- Deviant peer contagion - The spread of problem behaviors within groups of adolescents.
- Differential susceptibility - Genetic factors that make individuals more or less responsive to environmental experiences.
- Foreclosure - Individuals commit to an identity without exploration of options.
- Homophily - Adolescents tend to associate with peers who are similar to themselves.
- Identity achievement - Individuals have explored different options and then made commitments.
- Identity diffusion - Adolescents neither explore nor commit to any roles or ideologies.
- Moratorium - State in which adolescents are actively exploring options but have not yet made identity commitments.
- Psychological control - Parents' manipulation of and intrusion into adolescents' emotional and cognitive world through invalidating adolescents' feelings and pressuring them to think in particular ways.
- American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Arlington, VA: American Psychiatric Publishing.
- Arnett, J. J. (2000). Emerging adulthood: A theory of development from the late teens through the twenties. American Psychologist, 55, 469–480.
- Barber, B. K. (1996). Parental psychological control: Revisiting a neglected construct. Child Development, 67, 3296–3319.
- Belsky, J., & Pluess, M. (2009). Beyond diathesis-stress: Differential susceptibility to environmental influences. Psychological Bulletin, 135, 885–908.
- Brown, B. B., & Larson, J. (2009). Peer relationships in adolescence. In R. M. Lerner & L. Steinberg (Eds.), Handbook of adolescent psychology (pp. 74–103). New York, NY: Wiley.
- Connolly, J., Furman, W., & Konarski, R. (2000). The role of peers in the emergence of heterosexual romantic relationships in adolescence. Child Development, 71, 1395–1408.
- Dick, D. M., Meyers, J. L., Latendresse, S. J., Creemers, H. E., Lansford, J. E., … Huizink, A. C. (2011). CHRM2, parental monitoring, and adolescent externalizing behavior: Evidence for gene-environment interaction. Psychological Science, 22, 481–489.
- Dishion, T. J., & Tipsord, J. M. (2011). Peer contagion in child and adolescent social and emotional development. Annual Review of Psychology, 62, 189–214.
- Eccles, J. S., & Roeser, R. W. (2011). Schools as developmental contexts during adolescence. Journal of Research on Adolescence, 21, 225–241.
- Erikson, E. H. (1968). Identity, youth, and crisis. New York, NY: Norton.
- Furman, W., & Shaffer, L. (2003). The role of romantic relationships in adolescent development. In P. Florsheim (Ed.), Adolescent romantic relations and sexual behavior: Theory, research, and practical implications (pp. 3–22). Mahwah, NJ: Erlbaum.
- Lerner, R. M., & Steinberg, L. (Eds.). (2009). Handbook of adolescent psychology. New York, NY: Wiley.
- Marcia, J. E. (1966).
Development and validation of ego-identity status. Journal of Personality and Social Psychology, 3, 551–558. - Moffitt, T. E. (1993). Adolescence-limited and life-course-persistent antisocial behavior: A developmental taxonomy. Psychological Review, 100, 674–701. - Patterson, G. R. (1982). Coercive family process. Eugene, OR: Castalia Press. - Phinney, J. (1989). Stages of ethnic identity in minority group adolescents. Journal of Early Adolescence, 9, 34–49. - Rudolph, K. D. (2009). The interpersonal context of adolescent depression. In S. Nolen-Hoeksema & L. M. Hilt (Eds.), Handbook of depression in adolescents (pp. 377–418). New York, NY: Taylor and Francis. - Russell, S. T., Clarke, T. J., & Clary, J. (2009). Are teens “post-gay”? Contemporary adolescents’ sexual identity labels. Journal of Youth and Adolescence, 38, 884–890. - Stattin, H., & Kerr, M. (2000). Parental monitoring: A reinterpretation. Child Development, 71, 1072–1085. - Steinberg, L. (2013). Adolescence (10th ed.). New York, NY: McGraw-Hill. - Steinberg, L. (2008). A social neuroscience perspective on adolescent risk-taking. Developmental Review, 28, 78–106. - Jennifer E. Lansford is a Research Professor at the Duke University Center for Child and Family Policy and Social Science Research Institute. Her work focuses on the development of aggression and other behavior problems in children and adolescents, emphasizing how family, peer, and cultural contexts affect these outcomes.
Some examples of bone-strengthening activities include: hopping, skipping, jumping rope, running, gymnastics, lifting weights, volleyball, tennis and basketball. Muscle-strengthening physical activities increase skeletal muscle strength, power, endurance and mass. What are the muscle strengthening activities? Examples of muscle-strengthening activities include: - lifting weights. - working with resistance bands. - heavy gardening, such as digging and shovelling. - climbing stairs. - hill walking. - push-ups, sit-ups and squats. What is the best example of muscular and bone activity? This force is commonly produced by impact with the ground. Examples of bone-strengthening activity include jumping jacks, running, brisk walking, and weight-lifting exercises. As these examples illustrate, bone-strengthening activities can also be aerobic and muscle strengthening. What is the important of muscle and bone activity? Your bones and muscles work together to support every movement you make on a daily basis. When you are physically active you strengthen your muscles. Your bones adapt by building more cells and as a result both become stronger. Strong bones and muscles protect against injury and improves balance and coordination. What are the difference between aerobic and muscle and bone strengthening activities? Activities such as running, biking, or stair climbing are examples of aerobic types of exercise. Aerobic exercise stresses the cardiorespiratory (heart and lungs) system, whereas strength training places emphasis on the musculoskeletal (muscles, bones, joints) system. What are 3 bone-strengthening activities? Examples of bone-strengthening activity include jumping jacks, running, brisk walking, and weight-lifting exercises. As these examples illustrate, bone-strengthening activities can also be aerobic and muscle strengthening. What are the 5 basic strength training exercises? “There are five basic moves: squat, hinge, push, pull, and core work. There are many variations of each of those movements, but for beginners, I tend to gravitate toward a bodyweight squat, glute bridges, push-ups (on an incline if needed), inverted rows, and planks.” What are some examples of muscular strength? Practically speaking, you use muscular strength when you lift yourself out of a chair, pick up a heavy object, or push a piece of furniture. In the gym, a single repetition at a given weight is an example of muscular strength. How many days a week should you do muscle and bone strengthening activities? They should partake in vigorous-intensity aerobic activity on at least three days of the week, and include muscle-strengthening and bone strengthening activities on at least three days of the week. Is dancing a bone strengthening activity? Dance is a form of weight-bearing activity because your legs must support the entire weight of your body. The National Osteoporosis Foundation reports that high-impact exercises, such as dance, not only keep bones strong but help build bone mass. Why is it important to strengthen muscles? Muscular strength and endurance are important for many reasons: Increase your ability to do activities like opening doors, lifting boxes or chopping wood without getting tired. Reduce the risk of injury. Help you keep a healthy body weight. What are the benefits of bone-strengthening activity? Bone-strengthening activities produce an impact or tension force on the bones that promotes bone growth and strength. 
Examples of bone-strengthening activities suitable for children include: activities that require children to lift their body weight or to work against a resistance. What are the benefits of muscle-strengthening exercise? Increased muscle mass: Muscle mass naturally decreases with age, but strength training can help reverse the trend. Stronger bones: Strength training increases bone density and reduces the risk of fractures. Joint flexibility: Strength training helps joints stay flexible and can reduce the symptoms of arthritis. What is the best exercise for bone density? Weight-bearing and resistance exercises are the best for your bones. Weight-bearing exercises force you to work against gravity. They include walking, hiking, jogging, climbing stairs, playing tennis, and dancing. Resistance exercises – such as lifting weights – can also strengthen bones. How do you maintain bone strength? 10 Natural Ways to Build Healthy Bones - Eat Lots of Vegetables. Vegetables are great for your bones. … - Perform Strength Training and Weight-Bearing Exercises. … - Consume Enough Protein. … - Eat High-Calcium Foods Throughout the Day. … - Get Plenty of Vitamin D and Vitamin K. … - Avoid Very Low-Calorie Diets. … - Consider Taking a Collagen Supplement. … - Maintain a Stable, Healthy Weight. What are some examples of aerobic exercises? - Walking or hiking. - Jogging or running. - In-line skating. - Cross-country skiing. - Exercising on a stair-climber or elliptical machine.
Processed: Food Science and the Modern Meal In 1912 Louis Maillard was the first to describe the chemical reactions in grilled, sautéed, and baked foods that make them so delicious and, we know now, a little unhealthy. (Flickr user jypsygen) Normally the Maillard reaction is described with appetizing adjectives, and rightly so. In 1912 French chemist Louis-Camille Maillard published a paper that describes what happens when amino acids react with sugars at elevated temperatures: the creation of many delightful flavors and odors, such as the smell of popcorn and the taste and flavor of roasted coffee. The Maillard reaction is also responsible for the brown of barbecued steak, baked bread, and soy sauce. (Maillard reactions differ from but are often mistaken for caramelization, which is a browning of sugars exposed to heat.) “There wasn’t much of what you could call flavor chemistry before Maillard,” remarks historian Alan Rocke. “In the 1800s the German chemist Justus von Liebig published ideas about the importance of protein extracts of beef, and a lawyer, Jean Anthelme Brillat-Savarin, published heavily cited anecdotal ponderings on taste, but Maillard was the first to tackle serious food chemistry.” The Maillard reaction proceeds through three thorny, intricate steps, which can each produce hundreds of different products. In addition, most foods have many different kinds of amino acids and sugars, creating a cornucopia of possible participants in the reaction. Not until 1953 did the chemistry community get a handle on how all these flavor compounds could be produced, says food chemist Vincenzo Fogliano. That year a chemist at the U.S. Department of Agriculture, John E. Hodge, established a mechanism for the Maillard reaction. “Maillard discovered the reaction, but Hodge understood it,” Fogliano says. Fortuitously, developments in gas chromatography and protein mass spectrometry that same decade permitted food scientists to measure Maillard products in food, notes Floros. From that point the food industry had the tools to control the chemistry of cooking amino acids and sugars, both to orchestrate the production of pleasing flavors and odors and to avoid the offensive ones. However, this task is complicated by the fact that the Maillard reaction can produce thousands of different molecules with even slight changes to temperature, moisture levels, or pH, says food chemist Thomas Hofmann. Sometimes a Maillard product will be universally pleasant, such as the 2,3-butanedione found in popcorn and grilled steak. Other times a product that is desirable in some dishes is less welcome in others, Hofmann explains. For example, the compound 2-acetyl-1-pyrroline gives crusty bread and basmati rice a pleasant odor and flavor but produces a strange aftertaste when found in ultra-high-temperature pasteurized milk. Maillard reactions can also change the texture and consistency of proteins in food, making yogurt more gelatinous or cheese softer and creamier, says food chemist Thomas Henle. Then there are the negative products, such as the loss of vitamin C and B1 in Maillard reactions during cooking, states food scientist Cathy Davies. And there’s the production of acrylamide. When Tareke and Törnqvis, the toxicologists in Sweden, were asked to examine the exposure of the sick construction workers to acrylamide, they did what any reputable scientists would do: they compared the levels of acrylamide in the sick workers to those in the general population. 
To their surprise they found unexpectedly high levels of acrylamide in the control group. Tareke was also simultaneously comparing acrylamide levels in wild animals and domesticated pets for her doctoral work, and to her further surprise found high levels in pets. Given that a major difference between wild and domesticated animals is the amount of processed food they consume, Törnqvis and Tareke suspected that the acrylamide in the human controls might be attributable to their eating highly processed food. In 2002 they showed that processed food, especially potato chips but also common baked bread, did in fact contain acrylamide. Facing public outcry, food-industry associations banded together to fund research on exactly how the Maillard reaction led to acrylamide production and how it might be thwarted. One of the most promising techniques developed for acrylamide prevention is using an enzyme called asparaginase to break down the amino acid asparagine, says food scientist Monica Anese. Because acrylamide is created when asparagine reacts with sugar, removing the amino acid at the outset decreases acrylamide levels in the final foodstuff. Another strategy, she says, is to lower cooking temperature since acrylamide is produced under high heat. The downside is less browning of cookies and bread, an unpopular option with consumers.
No, rainwater is not some kind of crop. However, the harvesting of rainwater can be one of the most valuable things for human beings. While Earth itself is majority water, most of that water is not usable for human consumption, unless you plan on building a plethora of desalination plants. Most of the world’s freshwater is locked up in glaciers and ice sheets. Rivers and lakes serve as sources of freshwater. And there is one more source: the weather. In particular, the rain. Human beings have been capturing and using the rain as a water resource for thousands of years. In times when municipal water systems were not available, and water was needed, wherever it rained, that rain could be a source of water for drinking, washing up, irrigation, or even fountains. In a world where water is becoming more valuable than ever, rainwater capture still proves to be a beneficial source of water. In many places, rain that would have simply become storm water runoff is put to good use in ponds and fountains. Harvested rain is often used as a source of drinking water. In Myanmar’s Irrawaddy Delta, there is salt water among the ground water. In a land where rainfall is plentiful, rainwater harvesting is a major source of water. In places with a decent water supply, rainwater can be a supplement to the existing water supply. It can also be a set-aside in the event that a drought takes place. And in that case, captured rainwater is beneficial in agriculture, as it serves as an irrigation source. Rainwater is often used to recharge the groundwater in some areas. For developing nations, harvesting rainwater is often seen as a solution for combating a scarcity of potable water. In many places, there is plenty of water, but much of it is not drinkable. Capturing rainwater is nothing complicated. It can be done through simple means. It is often collected in vessels, from rooftops, and it can be harvested from rivers or reservoirs. It is simple, but it will have an impact on the water supply worldwide. With more technological innovations, this form of capturing water for human consumption could play a major role in the future. Of course, it has to be conducted the right way. In many places, rainwater is collected from rooftops. There isn’t a guarantee that such water is safe. Birds often land there, and defecate in many cases. Consider this. Many geopolitical issues in the world are related to the water supply. Water is a major issue in the Israeli-Palestinian conflict. Water rights are a major issue in the Middle East. Considering the desert geography of the region, this will continue to play a role in geopolitics. Rainwater is not equal everywhere. Many places have low rainfall totals. Technological innovations can often be a response to bridge the gap between humans and the environments they live in. It will not be just a matter of harvesting rainwater. It will also be about where that rainwater will go. In theory, rainwater can be collected in one place, and sent to another place. In fact, this is already being done in several places. In California, much of the population lives in areas that get rainfall totals that qualify them as semiarid or desert. The California Aqueduct collects water from the Sierra Nevada, and through an elaborate network of canals, pipes, and tunnels, that water is distributed to places such as Southern California. Aqueducts have been used to distribute water for ages. It was done in the Roman Empire. 
It can be done today, with even more advanced technology than in antiquity. And even in the Middle East, rainwater can be harvested from high elevation areas. Turkey and Iran have some of the rainiest places in the Middle East. Lebanon has high elevation regions where snowfall is commonplace. However, water issues go beyond California. How will geopolitics play a role in water distribution, if measures such as distributing collected rainwater take place? Will there be peace as a result? Or will the existing geopolitical problems hinder such solutions? And will it be enough? Where will it come from? Many questions, no easy answers. In the present, rainwater harvesting still has its benefits locally.
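For a rough sense of what rooftop capture can actually yield, the back-of-the-envelope arithmetic is simply roof area multiplied by rainfall depth, discounted for losses. The short sketch below is not from the original article; all of the figures in it are assumptions chosen purely for illustration.

# Rough sketch: estimating how much rain a rooftop could capture in a year.
# Formula: volume = roof area * annual rainfall * runoff coefficient.
# All of the numbers below are assumptions for illustration only.

roof_area_m2 = 100.0          # plan area of the roof
annual_rainfall_m = 0.8       # 800 mm of rain per year
runoff_coefficient = 0.8      # fraction actually captured (losses to splash,
                              # evaporation, first-flush diversion)

harvest_m3 = roof_area_m2 * annual_rainfall_m * runoff_coefficient
harvest_litres = harvest_m3 * 1000

print(f"Potential harvest: about {harvest_litres:,.0f} litres per year")

On these assumed figures the roof would collect roughly 64,000 litres per year, which illustrates why even modest rooftops are worth connecting to storage in rainy regions.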
Septic systems are more than just a system of pipes and drains. They are living, breathing environments. The microbial system that lives within a septic systems includes bacteria, enzymes, and yeast that play an active role in maintaining your systems. Their purpose is to digest any solids that have settled to the bottom of your septic tank to get the decomposition process started. A typical septic system consists of a septic tank and a drainfield. The system is basically an underground wastewater treatment system that uses a combination of nature and technology to treat wastewater from household plumbing produced by kitchen drains, bathrooms and laundry. Bacteria is naturally present in all septic tanks and originates from the organic waste that’s flushed into the septic tank. However, not all bacteria is “good,” meaning that it doesn’t have the ability to quickly break down the waste. Also, not all bacteria has the ability to break down grease, toilet paper and other waste. Some substances that are flushed into septic tanks kill “good” bacteria, such as laundry detergents, bleach, chemical drain cleaners and other products, so they need to be replenished. These are the reasons you need to add “good” bacteria to a septic tank. The septic tank digests organic matter and separates floatable matter (such as oils and grease) and solids from the wastewater. Soil-based systems discharge the effluent, or wastewater, from the septic tank into a series of perforated pipes buried in a leach field that has been designed to slowly release the effluent into the soil. Since solid materials need to remain within the septic tank to prevent clogging the drainage field and causing serious backup, they must be removed with the use of septic pumping trucks. What may be surprising is how infrequently pumping is needed (typically only once every 3-5 years). This infrequency is all thanks to the vast colonies of microorganisms living within the tank. These work non-stop to break down waste materials, converting much of the solids into liquids that join the stream of effluent and gases that simply dissipate through the soil or leach field. A healthy bacterial environment is vital to maintaining septic system health. Without it, you would be faced with frequent maintenance and nasty, inconvenient issues.
Every day, information washes over the world like so much weather. From casual conversations, tweets, texts, emails, advertisements and news stories, humanity processes countless discrete pieces of socially transmitted information. Anthropologists call this process cultural transmission, and there was a time when it did not exist, when humans or more likely their smaller brained ancestors did not pass on knowledge. Luke Premo, an associate professor of anthropology at Washington State University, would like to know when that was. Writing in the October issue of Current Anthropology, he and three colleagues challenge a widely accepted notion that cultural transmission goes back more than 2 million years. Exhibit A in this debate is the Oldowan chopper, a smooth, fist-sized rock with just enough material removed to make a crude edge. Writing in Nature in 1964, the prominent paleoanthropologist Louis Leakey connected the tools with what he said was the first member of the human genus, Homo habilis, or “handy man.” Leakey and his colleagues did not explicitly say Homo habilis learned how to make the tool through cultural transmission, but the word “culture” alone implies it, said Premo. “All of their contemporaries figured that any stone tool must be an example of culture because they thought that humans are the only animals that make and use tools and humans rely on cultural transmission to do so,” said Premo. “It made sense to them at the time that this ability might in fact distinguish our genus from all others.” More than half a century later, Premo and colleagues at the University of Tubingen, George Washington University and the Max Planck Institute for Evolutionary Anthropology are asking for better evidence that the technique for making early stone tools was culturally transmitted. Writing in the journal Current Anthropology, they say the tools could have been what lead author Claudio Tennie calls “latent solutions” that rely on an animal’s inherent skill rather than cultural transmission. Homo habilis could have learned to make the Oldowan tool on his or her own, much as wild chimps use sticks to fish for termites. “Our main question is: How do we know from these kinds of stone tools that this was a baton that somebody passed on?” said Premo, hefting an Oldowan tool in his hand. “Or was it just like the chimp case, where individuals could figure out how to do this on their own during the course of their lifetimes?” The Oldowan tool may look “cool and new and like it would require a lot of brain power.” But the animal world has complicated creations, like beehives, beaver lodges and spider webs, that don’t require cultural transmission. This type of tool also changed little for more than 1 million years, suggesting that the individuals who made them had the same mental and motor abilities. Techniques that are culturally transmitted, said Premo, tend to undergo at least slight changes, if not the more frequent churn of innovations we see in contemporary society. Some hominin technologies, like the Mousterian stone tools used by Neanderthals and others 160,000 to 40,000 years ago, require many steps to prepare, increasing the likelihood that they had to be passed on. If cultural transmission is so recent, said Premo, it could explain why too much information can overwhelm us. Clearly, our ability to transmit our culture has helped us pass on the techniques we need to thrive in a wide range of environments across the planet. “It does explain our success as a species,” Premo said. 
“But the reason we are successful might be much more recent than what many anthropologists have traditionally thought.” Moreover, the human system of transmitting information “can be hijacked. If you’ve got this system in which you receive information that can affect your behaviors… all it takes is somebody broadcasting information to you that makes you act in a way they prefer. And if you’re getting hundreds of messages every day, it can be difficult to discern what is important for you from what is important for somebody else.”
This article is part of our special report Can space technologies improve drinking water quality?. Digital tools, which provide water utility operators with accurate and timely information, can improve drinking water quality and ensure better water safety planning, experts say. The quality of drinking water currently faces two main challenges: algae blooms and turbidity. Algae blooms can be caused by eutrophication – water pollution caused by excessive plant nutrients from agricultural practices. It can also deteriorate during drought conditions and with rising temperatures. Turbidity can result from normal weather phenomena but has increased in time due to extreme events related to climate change, such as floods. According to the World Health Organisation (WHO) Guidelines for drinking water quality, water safety plans (WSPs) are recommended as the most effective means of consistently ensuring the safety and acceptability of a drinking-water supply. “WSPs require a risk assessment including all steps in water supply from catchment to consumer, followed by implementation and monitoring of risk management control measures, with a focus on high priority risks,” the WHO notes. The WHO says that where risks cannot be immediately addressed, the WSP approach allows for improvements to be implemented systematically over time. “WSPs should be implemented within a public health context, responding to clear health-based targets and quality-checked through independent surveillance,” the WHO says. The risk assessment module suggested by the WHO uses open data for watershed management and provides information for the control of nutrients, erosion and sediment. It can also identify other pressures such as from industry, changing land use, and climatic risks such as floods and droughts. Experts suggest that if water managers are quickly informed about potential algae blooms or turbidity, they can take the necessary preventive actions accordingly in order to ensure water quality. The debate over the digitisation of water utilities has emerged as a way for the industry to adopt preventive planning and avoid excessive costs. “Water professionals are often considered, rightly, to be conservative, cautious or late adopters. Yet several potent trends make digital water no longer optional, but rather inevitable,” said Kala Vairavamoorthy, executive director of the International Water Association (IWA). “Water professionals must update our analytic strategies used for planning investments, with constant, real-time observations of water quantity and quality data,” he added. He said that digital water would help connect the water sector with related industries and resource issues, such as energy, health and agricultural production and ecosystems. “Connected utility assets help unlock the seamless integration of information and operational technology (IT/OT), creating ‘silent running’ systems, innovations that improve water extraction through smart pumps, or treatment through real-time performance monitoring […] Real impact in engagement and efficiency will come through the interaction of big data, clear analytics, smart devices and user-friendly applications,” he noted. The role of Space-O For the IWA, in order for a risk-based approach for water management to be possible, water managers and utility operators need to have sufficient and timely information. 
“Space-O provides this using a catchment-based approach, which enables utility operators to act timely (with information up to 10 days ahead), that is accurate and comprehensive as it provides an overview and typology of risk within the catchment area, connecting service provision with other water uses like agriculture,” IWA told EURACTIV.com by email. The SPACE-O project, which is funded by the EU’s Horizon 2020 research and innovation programme, focuses on better management of drinking water. Through satellite imagery from Copernicus and advanced models, the researchers have created a water information system, which produces precise and short-term forecasting (for up to 10 days) about water quality and quantity in lakes and reservoirs. According to the IWA, the water treatment plant (WTP) optimisation system uses modelled (forecasting) data to increase managers’ preparedness as well as to inform customers. “Short-term forecasting to identify stressors on water quality like algae or turbidity in advance can save resources,” the IWA noted, adding that this optimises performance by reducing costs of chemical and energy consumption.
Roughly 15% of the marks in your GCSE science papers will be based on the practical work which you carry out in Year 10 and Year 11. In this blog, we look in more detail at the data that can be measured in a practical investigation. A categoric variable has values which can be given a label or a name (think ‘category’). Examples of categoric variables include type of car, favourite film, colours, type of material, and so on. A continuous variable can take on any numerical value. Continuous data can be collected by counting or by measuring something. Examples of continuous variables include mass, temperature and length. A length might be measured to be 1.2 m, for example, or 1.3 m, or 1.227 m, or any value in between. An interval is the difference between one value in a set of continuous data and the next. If we want to investigate how the temperature of a cup of hot tea changes with time, we need to measure its temperature at appropriate time intervals. An interval of one day would be unsuitable, as the tea would cool to room temperature long before we take our second measurement. At the other extreme, an interval of one second would be overkill – measuring the temperature every second would certainly tell us how the temperature of the tea decreased with time, but it would give us far more information than we need. (It would also be very tricky to do unless we had access to a data logger). A time interval of 30 seconds between readings, or one minute between readings, would be more suitable in this case. If we want to measure the value of a continuous variable, we have to select an appropriate measuring device (instrument). The resolution of a device is the smallest change that the device can detect (in the variable being measured). If the resolution of a digital watch is one second, that means one second is the smallest amount of time it can measure. The stop-clocks in your science lab might have a resolution of 0.01 seconds, which means that they can measure times of 0.01, 1.25 or 10.87 seconds (or any other time to two decimal places). Does that mean a time that is measured using a stop-clock is always more accurate than one that’s measured using a digital watch? Well, not necessarily. Read on! A measured value (or calculation or estimate) which is accurate is one which is close to its actual value. Let’s say you wanted to measure the temperature at which pure ice melts in your kitchen. If your first measurement of the temperature of the ice is 0.2 °C, and your second is 10.2 °C, then your first measurement is clearly the more accurate of the two, since it is much closer to the known value of 0 °C. Two or more values which are precise are close to one another, but that doesn’t mean that accuracy and precision are the same thing. We could have measured the temperature of the melting ice three times and recorded values of 6.4, 6.2 and 6.3 °C. This data is precise, because the values are close to one another, but they are inaccurate, as they are quite far from the actual value of the temperature at which pure ice should melt in your kitchen. Check out this graphic, which helps to illustrate the difference between accuracy and precision. In science, when we say that there’s an error in a measurement, it doesn’t necessarily mean that we’ve made a mistake. It means that the measurement which we’ve made is not exactly equal to the true value. There are a number of different types of errors which you have to know about for your GCSEs. Random errors are unpredictable. 
Let’s say we measure how long it takes for a coin to drop from a height of 2 m to the ground. We might measure a time of 0.68 s on the first attempt, then values of 0.58 s and 0.64 s on the second and third attempts. Due to the limitations of our method of measurement, variations in the air flow in the room, the angle at which the coin was held and so on, the measurements which we take vary randomly. To reduce the effect of random error on the accuracy of a measurement, we should take multiple readings and then work out the mean (average) value. On a scatter graph, random error is what causes data points to be scattered about both sides of the best-fit line. The greater the random error in an investigation, the further the points will be scattered from the best-fit line. When a measurement has a systematic error, it means that it is always ‘out’ (higher or lower than the true value) by the same amount. In other words, the error is consistent between readings. A systematic error is normally caused by a fault in the measuring device. For example, if 2 cm has been broken off one end of a 30 cm ruler, then it could (if you weren’t paying close attention) cause all measurements of length carried out using that ruler to be 2 cm greater than they really are. When there’s a systematic error in an investigation, the best-fit line on a graph of the data will be higher or lower than it should be. An anomaly is a measurement which does not fit the expected trend. If an anomalous measurement was due to a mistake by the experimenter or a problem with the equipment, then the measurement should be repeated if possible (or ignored if it’s not possible to repeat it). Finally, a zero error is a special type of systematic error which occurs when a measuring device gives a reading when the true value should be zero. For example if an electronic balance reads ‘– 29 g’ when nothing is sitting on top of it, then all mass measurements taken using the balance will be 29 g lower than their true values.
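To make the advice about repeat readings and zero errors concrete, here is a minimal Python sketch. It is not part of the original blog and the numbers in it are invented; it simply averages a set of repeat readings to reduce the effect of random error, and then corrects a balance reading for a zero error.

# Minimal sketch (invented data): averaging repeat readings and
# correcting a zero error.

# Three repeat timings of the falling coin, in seconds. Random error
# scatters them around the true value.
readings = [0.68, 0.58, 0.64]

# Taking the mean of repeat readings reduces the effect of random error.
mean_time = sum(readings) / len(readings)
print(f"Mean time: {mean_time:.2f} s")

# A zero error is a systematic error: the instrument reads a non-zero
# value when the true value is zero. If a balance shows -29 g with
# nothing on it, every reading is 29 g too low, so 29 g must be added back.
zero_error_g = -29.0          # reading with nothing on the balance
measured_mass_g = 171.0       # raw reading for a sample
corrected_mass_g = measured_mass_g - zero_error_g
print(f"Corrected mass: {corrected_mass_g:.1f} g")

Note that averaging only helps with random error; a systematic error such as the zero error above shifts every reading by the same amount, so it has to be identified and corrected separately.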
Dropping Mentos® candies into a bottle of soda causes a foamy jet to erupt. Although science fair exhibitors can tell you that this geyser results from rapid degassing of the beverage induced by the candies, the precise means by which bubbles form hasn't been well characterized. Now, researchers reporting in ACS' Journal of Chemical Education used experiments in the lab and at various altitudes to probe the mechanism of bubble nucleation. During production, soda is carbonated by sealing it under carbon dioxide pressure that is about four times the total air pressure. This causes carbon dioxide to dissolve in the beverage. When someone opens the container, carbon dioxide escapes from the space above the liquid, and the dissolved carbon dioxide slowly enters the gas phase, eventually causing the soda to go "flat." Mentos® greatly speed up this process: Carbon dioxide flows into tiny air bubbles on the rough surface of the candies, allowing the gas to rapidly jet to the surface of the soda. Thomas Kuntzleman and Ryan Johnson wondered if atmospheric pressure plays a role in carbon dioxide bubble formation. They reasoned that the answer could reveal more details of the process. In the lab, the researchers added a Mentos® candy to water carbonated at different pressures and measured the mass lost from the liquid over time. They fit these data to an equation that allowed them to estimate that the bubbles on the surface of the candy were about 6 μm in diameter. In contrast to other candies, Mentos® could have a fortuitous balance between bubble size and the number of bubble sites that allows them to produce excellent fountains, the researchers suggest. Then, the researchers left the lab and examined the extent of soda foaming after candy addition at different altitudes, ranging from Death Valley (43 feet below sea level) to Pikes Peak (14,108 feet above sea level). They observed increased foam production at higher elevations; however, this effect could not be explained by the simple application of gas laws. Similar experiments could form the basis of classroom projects for students in general science through physical chemistry courses, the researchers say. 
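The paper's actual model is not reproduced in this summary, but as a rough illustration of the curve-fitting step described above, the sketch below fits made-up mass-loss measurements to a simple first-order degassing curve using SciPy. Both the model form and the data are assumptions for illustration, not the authors' analysis.

# Illustrative only: fit invented mass-loss data to a simple first-order
# degassing model m(t) = m_max * (1 - exp(-k * t)). This is NOT the model
# from the paper, just a sketch of the curve-fitting step.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0, 10, 20, 30, 60, 90, 120], dtype=float)            # seconds (hypothetical)
mass_lost = np.array([0.0, 1.1, 1.9, 2.5, 3.6, 4.1, 4.4])          # grams (hypothetical)

def first_order(t, m_max, k):
    # m_max: total mass of CO2 lost at long times; k: rate constant (1/s)
    return m_max * (1 - np.exp(-k * t))

popt, pcov = curve_fit(first_order, t, mass_lost, p0=[5.0, 0.02])
m_max, k = popt
print(f"Fitted m_max = {m_max:.2f} g, rate constant k = {k:.3f} 1/s")

In the actual study, parameters recovered from a fit like this were used to infer the size of the nucleation bubbles on the candy surface; the example here stops at the fitting step.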
As planetary ice recedes, it’s revealing secrets about human history. The latest example of that is in Norway, where a group of researchers has reconstructed 6,000 years of history based on more than 2,000 artifacts that have melted out of ice patches. The new timeline, published this week in Royal Society Open Science, provides insights into how people responded to past climate shocks. Ice patches are similar to glaciers in that they’re long-living hunks of ice replenished by snow each winter. But they differ in that they don’t move. That means that any artifacts left on them are simply entombed in the ice rather than ground to a fine dust, which is what happens to artifacts trapped in glaciers as they slide down the mountain. Now that climate change is causing ice to melt, those artifacts are once again seeing the light of day after thousands of frozen years. In case of Oppland, Norway, some artifacts have been dated back to 6,000 years ago. The wealth of artifacts recovered in Oppland (or any ice patch for that matter) are delicate and after centuries of life without air, they degrade and can be destroyed by the elements in a matter of days if nobody finds them. That makes the scientists’ work equal parts detective and EMT. “We have a responsibility to rescue what is being exposed and thus destroyed,” James Barrett, an ice archeologist from the University of Cambridge who worked on the study, told Earther. Saving the artifacts has allowed Barrett and his colleagues to reconstruct history. That’s yielded some pretty fascinating and at times counterintuitive theories on how people lived, and how climate and society are intimately tied together. The earliest artifacts date to 6,000 years ago, which Barrett said are unique in their own right. But the artifact record that allows the scientists to spin their historical yarn begins to pick up steam in the third century. That’s a period when agriculture and economic activity started to take hold in the valleys populated by Nordic people, according to a blog post by Lars Pilo, another study author who works with the Oppland County Council Department of Cultural Heritage. But hunting was still a large part of society, and with comparably mild weather and small glaciers and ice patches, it was relatively easy to hunt and travel over the mountains. The weather took a turn for the worse during a period called the Late Antique Little Ice Age. From 536 AD until around 660 AD, a series of volcanic eruptions ushered in cold weather around the globe. The chill meant ice once again grew, making the mountains less hospitable. Yet the researchers found that during that period, the number of artifacts continued to increase, perhaps because conditions at lower elevations were so dire. “There are hints that use of the high mountains continued, perhaps to buffer losses from poor harvests in the valley farms,” Barrett said. “This last issue is one we need to explore further; it’s a direction for future research.” The number of artifacts peaked in the Viking Age, which lasted from around the late eighth century until the early 10th century. During this period, exploration was the name of the game. Ships were setting out across the sea, contributing to a larger trading economy that was in part driven by natural resources brought down from the mountains. “The high mountains were part of this story, and ice-patch finds may even have increased because of distant demand for products such as furs and antler,” Barrett said. 
After that, artifacts drop off owing to new hunting methods that relied more on trapping herds of reindeer, and less on shooting arrows. The Black Plague also likely played a role as society became more closed. There are almost certainly more pieces of history still lodged in the ice, including artifacts from a 1,600-year period starting in 3,800 BC that is still a complete blank spot in the archeologists’ logs. It could be that artifacts from that period are rare, or that they’ve melted out and been destroyed already. Filling in this and other gaps provides a richer glimpse of the past, which is a major win for archeologists, historians and really anyone who wants to understand how society evolved. But Barrett said the fact that we’re acquiring this new knowledge because of climate change is an “important cautionary tale” for everyone living on Earth right now.
Silicosis is a lung disease that is caused by inhaling tiny bits of silica dust. Respirable crystalline silica is the portion of crystalline silica dust that is small enough to enter the gas exchange regions of the lung if inhaled. Silica dust causes fluid build-up and scar tissue in the lungs that impairs the ability to breathe. Inhalation of silica is also associated with lung cancer, pulmonary tuberculosis, and other airway diseases, as well as with the development of autoimmune disorders and chronic kidney disease. Crystalline silica is a common mineral that is a part of sand, rock, and mineral ores like quartz. Crystalline silica occurs primarily as quartz and is a component of the sand and stone materials used to make everyday products such as concrete, brick and glass. At least 1.7 million U.S. workers are exposed to respirable crystalline silica in a variety of industries and occupations including construction, mining, sandblasting, brick making and hydrofracturing (fracking). Silicosis is not reversible, but it is preventable by mitigating exposure. The Occupational Safety and Health Administration (OSHA) has developed a standard to protect the workforce from this hazard. The current standard for permissible exposure limits (PELs) was adopted in 1971, shortly after the creation of OSHA, and has not been updated since. The standard is currently under revision to reflect the increased level of concern related to this very real occupational risk to workers. In fact, there are current recommendations that PELs be reduced by 50 percent. OSHA's proposal is based on requirements of the Occupational Safety and Health Act (OSH Act) and court interpretations of the Act. For health standards issued under Section 6(b)(5) of the OSH Act, OSHA is required to promulgate a standard that reduces significant risk to the extent that it is technologically and economically feasible to do so. The summary of the proposed rule, published in September 2013, states: "The Occupational Safety and Health Administration (OSHA) proposes to amend its existing standard for occupational exposure to crystalline silica. The basis for issuance of this proposal is a preliminary determination by the Assistant Secretary of Labor for Occupational Safety and Health that employees exposed to respirable silica face a significant risk to their health at the current permissible exposure limits (PELs) and that promulgating these proposed standards will substantially reduce that risk." The new permissible exposure limit (PEL), calculated as an 8-hour, time-weighted average, is 50 micrograms of respirable crystalline silica per cubic meter of air (50 µg/m³). OSHA also proposes other ancillary provisions for employee protection such as preferred methods for controlling exposure, respiratory protection, medical surveillance, hazard communication and record keeping. OSHA is proposing two separate regulatory texts - one for general industry and maritime, and the other for construction - in order to tailor requirements to the circumstances found in these sectors. Generally, engineering controls to mitigate dust exposure offer the best protection. However, a combination of engineering controls, work practices, protective equipment, and product substitution where feasible, along with worker training, is needed to protect workers. 
In hydrofracturing, for example, engineering and work practice controls may include capping unused fill ports on sand movers, thus reducing the amount of dust released, or reducing the drop height between the sand transfer belt and blender hoppers. Limiting the number of workers and the time spent in high dust areas, or performing some operations remotely, physically limits the amount of exposure. Simply making sure that fresh water is applied to the site around the well reduces the amount of free dust in the air. When engineering and work practice controls are not feasible or are not enough to reduce silica levels below OSHA PELs, employers must provide workers with respirators. When respirators are used, the employer must have a respiratory protection program that meets the requirements of OSHA's Respiratory Protection Standard (29 CFR 1910.134, which is currently under review). This program must include proper respirator selection, fit testing, medical evaluations and training. Workers must also be provided with information about the hazards of silica and other chemicals found in the workplace. And, as part of its National Emphasis on Silica Program, OSHA recommends that employers medically monitor all workers who may be exposed to silica dust levels at or above half the PEL. Recommended medical tests include: - A medical exam that focuses on the respiratory system and includes a medical and work history - A chest X-ray, evaluated by a qualified professional. OSHA recommends that these tests be repeated every three years if the employee has less than 15 years of silica exposure, every two years if the employee has 15-20 years of exposure, and every year if the employee has 20 or more years of exposure. The new recommendations are currently in the draft phase, but it is expected that specific, enforceable mandates will be forthcoming. These new initiatives may have a significant impact on employees, employers, and occupational health care providers. The occupational health team at Mount Nittany Physician Group is constantly monitoring changes in the industry that may impact workforce health. We are here as your resource to keep your workforce healthy, productive, and compliant with new industry mandates.
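As a concrete illustration of the 8-hour time-weighted average (TWA) on which the proposed PEL is based, here is a short Python sketch. The sample concentrations and durations are hypothetical, and this is not an official OSHA calculation; it simply combines partial-shift samples into a TWA and compares the result with the 50 µg/m³ limit and the half-PEL action level mentioned above.

# Sketch: compute an 8-hour time-weighted average (TWA) silica exposure
# from shift samples and compare it to the proposed PEL of 50 µg/m³.
# The sample values below are hypothetical.

PEL_UG_M3 = 50.0  # proposed permissible exposure limit, 8-hour TWA

# (concentration in µg/m³, duration in hours) for each sampling period
samples = [(30.0, 3.0), (90.0, 2.0), (40.0, 3.0)]

# TWA = sum(C_i * T_i) / 8 hours
twa = sum(conc * hours for conc, hours in samples) / 8.0
print(f"8-hour TWA: {twa:.1f} µg/m³")

if twa > PEL_UG_M3:
    print("Exposure exceeds the proposed PEL; additional controls are needed.")
elif twa > PEL_UG_M3 / 2:
    print("Above half the PEL; medical monitoring is recommended.")
else:
    print("Below half the PEL.")

With the assumed values, the TWA works out to about 48.8 µg/m³: below the proposed PEL, but above the half-PEL threshold at which medical monitoring is recommended.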
Crystallization is a naturally occurring or artificially initiated process for the solidification of substances or materials into a structured form, where the atoms or molecules are highly organized into a structure known as a crystal. Some materials, such as drug substances in pharmaceuticals, can form several different crystal structures, called crystal modifications, crystal forms or polymorphs. Crystallization occurs at a certain temperature and is accompanied by a certain amount of energy released (an exothermic process), which is known as the heat or enthalpy of crystallization. Furthermore, the crystallization procedure also has a kinetic component (the rate of crystal growth), which should be taken into account in crystallization studies. Crystallization as a thermally initiated process often occurs in two major steps: the first step is nucleation, whereas the second step is known as crystal growth; the latter is also dependent upon the conditions of heat treatment. With the help of DSC (differential scanning calorimetry), the crystallization temperatures and the energy released can be determined with a high degree of reliability. It should be mentioned that the temperature at which crystallization takes place in DSC experiments might shift to lower values due to cooling or supercooling effects, which may occur during the crystallization process. In the indium measurement illustrated below, crystallization occurred at 155°C, which is approximately 2 K lower than the melting temperature of 156.6°C.
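In practice, the energy released is usually obtained by integrating the exothermic DSC peak over time and normalising by the sample mass. The sketch below illustrates that step in Python on synthetic stand-in data; the peak shape, sample mass and baseline are assumptions for illustration, not a real indium measurement.

# Sketch: estimating an enthalpy of crystallization by integrating a DSC
# exotherm. The Gaussian "peak" below is synthetic stand-in data.
import numpy as np

sample_mass_mg = 10.0                      # hypothetical sample mass
time_s = np.linspace(0, 120, 601)          # time axis of the scan, in seconds
baseline_mW = 0.0                          # assumed flat baseline
# synthetic exothermic peak centred at 60 s
heat_flow_mW = baseline_mW + 8.0 * np.exp(-((time_s - 60.0) / 6.0) ** 2)

# Integrate (heat flow - baseline) over time: mW * s = mJ
peak_area_mJ = np.trapz(heat_flow_mW - baseline_mW, time_s)

# Normalise by sample mass: mJ / mg is numerically equal to J / g
enthalpy_J_per_g = peak_area_mJ / sample_mass_mg
print(f"Estimated enthalpy of crystallization: {enthalpy_J_per_g:.1f} J/g")

Real DSC software performs the same integration with a fitted (often sloping) baseline and a calibrated heat-flow signal; the sketch only shows the principle of converting peak area into an enthalpy per gram.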
Ever since Visual Basic evolved into the VB.NET framework, the shape controls of VB6 are no longer available. In VB2013, the programmer needs to write code to create various shapes and drawings. Visual Basic 2013 offers various graphics capabilities that allow us to write code that can create all kinds of shapes and even fonts. In this lesson, you will learn how to write code to draw lines and shapes in the Visual Basic 2013 IDE. Before you can draw anything on a form, you need to create a Graphics object in Visual Basic 2013. A Graphics object is created using the CreateGraphics() method. You can create a Graphics object that draws to the form itself or to a control. To draw graphics on the default form, you can use the following statement: Dim myGraphics As Graphics = Me.CreateGraphics To draw in a picture box, you can use the following statement: Dim myGraphics As Graphics = PictureBox1.CreateGraphics You can also use a text box as a drawing surface; the statement is: Dim myGraphics As Graphics = TextBox1.CreateGraphics The Graphics object that is created does not draw anything on the screen until you call the methods of the Graphics object. In addition, you need to create a Pen object as the drawing tool. We shall examine the code that creates a pen in the following section. A Pen can be created using the following code: myPen = New Pen(Brushes.Color, LineWidth) where myPen is a Pen variable. You can use any variable name instead of myPen. The first argument of the Pen object defines the color of the drawing line and the second argument defines the width of the drawing line. For example, the following code creates a pen that draws a dark magenta line with a width of 10 pixels: myPen = New Pen(Brushes.DarkMagenta, 10) You can also create a Pen using the following statement: Dim myPen As Pen myPen = New Pen(Drawing.Color.Blue, 5) where the first argument defines the color (here it is blue; you can change that to red or whatever color you want) and the second argument is the width of the drawing line. Having created the Graphics and Pen objects, you are now ready to draw graphics on the screen, as we will show you in the following section. In this section, we will show you how to draw a straight line on the form. First of all, launch Visual Basic 2013 Express. In the startup page, drag a button onto the form. Double-click on the button and key in the following code.

Private Sub BtnDraw_Click(sender As Object, e As EventArgs) Handles BtnDraw.Click
    Dim myGraphics As Graphics = Me.CreateGraphics
    Dim myPen As Pen
    myPen = New Pen(Brushes.DarkMagenta, 20)
    myGraphics.DrawLine(myPen, 60, 180, 220, 50)
End Sub

The second line creates the Graphics object, and the third and fourth lines create the Pen object. The fifth line draws a line on the form using the DrawLine method. The first argument uses the Pen object created by you, the second and third arguments define the coordinates of the starting point of the line, and the fourth and fifth arguments define the coordinates of the ending point. The syntax of the DrawLine method is object.DrawLine(Pen, x1, y1, x2, y2) For the above example, the starting coordinate is (60, 180) and the ending coordinate is (220, 50). Figure 25.1 shows the line created by the program.
Symptoms and Signs In seedlings, symptoms appear after the pathogen infects the stems at the base of the developing cotyledons near the soil line. The fungus produces black, sunken cankers that have sharp margins and often contain concentric rings. The plant's growing tip may be killed or the stem broken where it is weakened by the canker. Infection may continue into the hypocotyl and root region or the primary leaf petioles. Root infection causes a brown to black necrosis. If plants are grown under dry conditions, young plants can be killed. Infection of older seedlings and plants may cause stunting, leaf chlorosis, early defoliation, and plant death. On older plants, "charcoal dust" often appears on the surface of roots and stems, mainly near the soil line, and is diagnostic evidence for this disease. This charcoal dust effect is caused by the production of small, black microsclerotia just below the epidermis and in the vascular tissue. This symptom is also called ashy stem blight. Comments on the Disease Charcoal rot can affect common beans, blackeyes, and limas. The fungus is pathogenic on many crops including corn and sorghum. The pathogen can survive in both the seed and the soil. The disease occurs mainly under high temperatures and drought stress conditions– especially if irrigation is delayed. It is capable of infecting plants at all stages of growth. Soils that are high in organic matter, such as those along the Sacramento River-San Joaquin Delta region, tend to have more problems with this disease. In addition, uneven, unleveled land (such as beans grown in newly planted orchards) and fields where the plants are stressed (such as from too much or too little irrigation water) tend to have increased risk for getting this disease. Avoid drought stress, especially during periods of high temperature. A 3-year rotation with a cereal crop (except corn or sorghum) may help reduce soil inoculum.
What Should I Put Into My Composter ? Composting is a simple, natural process. Once you put organic material into your composter, the decomposition process starts. What you place into the composter and how you layer the material, affects the speed of decomposition. For best results, perform the following steps: - Add 4 parts carbon-based material (brown material such as dry leaves, shredded newspaper, straw etc.) to 1 part nitrogen (green material such as fruit and vegetable scraps, weeds, flowers, grass clippings etc.). - Vary the materials that go into the composter. The micro-organisms in your composter thrive on a variety of foods. - If possible, layer wet and dry material in your composter. For instance, when you add kitchen scraps or grass, also add dry leaves. Layers should be no more than 15 cm (6 inches) thick. - Sprinkle a little earth over your organic material. This will keep flies away from your composter. - Chop or shred materials into small pieces to make the composting process go faster.
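If it helps to turn the 4-to-1 guideline above into numbers, here is a tiny Python sketch. It treats "parts" as equal volumes (for example, buckets), which is an informal assumption rather than a rule, and simply works out how much brown material to add for a given amount of greens.

# Quick sketch of the 4-parts-brown to 1-part-green guideline.
# "Parts" here are treated as equal volumes (e.g. buckets); adjust to taste.

BROWN_TO_GREEN_RATIO = 4  # carbon-rich : nitrogen-rich, by volume

green_buckets = 2.0       # e.g. kitchen scraps and grass clippings this week
brown_needed = green_buckets * BROWN_TO_GREEN_RATIO

print(f"For {green_buckets:g} buckets of greens, add about "
      f"{brown_needed:g} buckets of browns (dry leaves, shredded paper, straw).")

For two buckets of greens the sketch suggests roughly eight buckets of browns, which is why it pays to stockpile dry leaves in autumn for use throughout the year.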