The full influence of the Chavín cult, whatever its origin and however it rose to power, covered most of the area of Peru by about 400 B.C. Between about 400 and 200 B.C. the momentum of the Chavín cult faded; the reasons for its eventual end are still unknown. Richard Burger suggests the Chavín religion may have appeared early on as a "crisis cult" on the coast, a response to a natural disaster such as a tidal wave that would leave its survivors willing to assimilate into a belief system to restore order in their lives. Archaeological work continues in the Andean area in hopes of revealing clearer pictures of ancient life and culture. Current and past fieldwork offers a glimpse of what life might have looked like in Chavín settlements some 2,500 years ago.
A neck frill is the relatively extensive margin seen on the back of the heads of reptiles, with either a bony support, such as those present on the skulls of dinosaurs of the suborder Marginocephalia, or a cartilaginous one, as in the frill-necked lizard. In technical terms, the bone-supported frill is composed of an enlarged parietal bone flanked by elongated squamosals and sometimes ringed by epoccipitals, bony knobs that gave the margin a jagged appearance. In the early 1900s, the parietal bone was known among paleontologists as the dermosupraoccipital. The feature is now referred to as the parietosquamosal frill. In some genera, such as Triceratops, Pentaceratops, Centrosaurus and Torosaurus, this extension is very large. Although most neck frills are made of hard bone, some are made of skin, as is the case with the frill-necked lizard that lives in Australia today. The use of the neck frill in dinosaurs is uncertain; it may have served for thermoregulation or simply as a defense mechanism. During battles for territory, competing Triceratops crashed heads together with their elongated horns, and the neck frill may have been employed as a kind of shield, protecting the rest of the animal from harm. The usage of the neck frill in modern reptiles is better documented. Two chief and disparate examples are the horned lizards (genus Phrynosoma), with a bony frill, and the frill-necked lizard (genus Chlamydosaurus), with a cartilaginous frill. The frill-necked lizard's frill is mainly made up of flaps of skin, usually coloured pink, supported by cartilaginous spines. Much as the dinosaur Dilophosaurus is portrayed in Steven Spielberg's Jurassic Park, the frill-necked lizard puffs out the frill on either side of its head when threatened.
The lizards often raise their frills when battling for territory or when coming into contact with another lizard, especially during mating season. There is, however, no evidence that Dilophosaurus actually had such a frill; many of its features in the Jurassic Park film were fictional. Numerous other animals of both modern and prehistoric times use skin or bone protrusions to make themselves seem more threatening, to attract mates, or to thermoregulate; examples include the dewlaps and crests of lizards, dinosaurs and birds. - Weldon Owen Pty Ltd. (1993). Encyclopedia of Animals: Mammals, Birds, Reptiles, Amphibians. Reader's Digest Association, Inc. ISBN 1-875137-49-1.
History of arithmetic The history of arithmetic covers the period from the emergence of counting to the formal definition of numbers and of arithmetic operations over them by means of a system of axioms. Arithmetic, the science of numbers, their properties and their relations, is one of the main mathematical sciences. It is closely connected with algebra and the theory of numbers. The practical need for counting, elementary measurement and calculation was the reason for the emergence of arithmetic. The first authentic data on arithmetic knowledge are found in the historical monuments of Babylon and Ancient Egypt from the third and second millennia BC. A large contribution to the development of arithmetic was made by the ancient Greek mathematicians, in particular the Pythagoreans, who tried to define all regularities of the world in terms of numbers. In the Middle Ages, trade and approximate calculation were the main scope of arithmetic. Arithmetic developed first of all in India and the countries of Islam, and only then came to Western Europe. In the seventeenth century the needs of astronomy, mechanics, and more complex commercial calculation posed new challenges for methods of calculation and gave an impetus to further development. Theoretical justifications of the idea of number are connected first of all with the definition of the natural numbers and Peano's axioms, formulated in 1889. They were followed by rigorous definitions of the rational, real, negative and complex numbers. Further expansion of the concept of number is possible only if one of the arithmetic laws is rejected. The appearance of arithmetic If each element of one set of objects corresponds to exactly one element of another set, and vice versa, the two sets are in one-to-one correspondence.
Such direct comparison, with objects laid out in two rows, was used by primitive tribes in trade. This approach makes it possible to establish quantitative ratios between groups of objects without requiring the concept of number. Later, natural standards for counting appeared, for example the fingers of a hand, and then sets of standards, such as a pair of hands. The advent of standards symbolizing concrete numbers is also connected to the emergence of the concept of number. Thus the number of things being counted was compared to the Moon in the sky, the number of eyes, or the number of fingers on a hand. Later, these numerous standards were replaced by one of the most convenient, usually the fingers of the hands and/or feet.
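The pairwise matching described above can be sketched in code. The following is a minimal illustration (the function name and details are hypothetical, not from any source): it decides whether two collections are in one-to-one correspondence by pairing items off one at a time, never computing a count.

```python
def same_size(goods, tokens):
    """Decide whether two collections are in one-to-one correspondence
    by pairing items off one at a time, the way objects laid out in
    two rows can be compared, without ever counting either row."""
    goods, tokens = list(goods), list(tokens)
    while goods and tokens:
        goods.pop()   # remove one item from each row...
        tokens.pop()  # ...pairing them off
    # One-to-one exactly when both rows run out together.
    return not goods and not tokens
```

For example, `same_size(["goat", "goat"], ["shell", "shell"])` is true, while pairing two goats against three shells leaves a shell unmatched, so the comparison fails.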
The Leading Current Definition The U.S. Census Bureau in the United States Department of Commerce defines race as follows, in definitions often used in other areas of the law and in research studies: What is race? The Census Bureau defines race as a person’s self-identification with one or more social groups. An individual can report as White, Black or African American, Asian, American Indian and Alaska Native, Native Hawaiian and Other Pacific Islander, or some other race. Survey respondents may report multiple races. What is ethnicity? Ethnicity determines whether a person is of Hispanic origin or not. For this reason, ethnicity is broken out in two categories, Hispanic or Latino and Not Hispanic or Latino. Hispanics may report as any race. Thus, ultimately the question is how you sincerely self-identify, but guidance is provided regarding what is meant by each term, which is framed in terms of "origins" from various large geographic regions of the world. The Definitions Have Changed Over Time The definitions of race and ethnicity, and the terms used by the U.S. Census, have changed somewhat with almost each new decennial census. In reality, our culture and society are such that most people have a self-identity corresponding to the predominant place of their remote genetic ancestors, and very few people whose remote genetic ancestors are predominantly from one place will self-identify otherwise, because it is not what is socially expected. But there are plenty of edge cases and ambiguous cases. For example, the famous golfer Tiger Woods has deep genetic ancestry from Africa, Southeast Asia and Europe in large proportions. One of the leading scholarly works showing the empirical relationship between racial and ethnic self-identification and deep regional genetic ancestry is a 2014 article in the American Journal of Human Genetics that used data from the consumer genetic testing firm 23andMe.
Historical "one drop rule" notwithstanding, empirically, in the United States, people who are at least 75% European in deep genetic ancestry and whose remaining ancestry is African usually identify as "white". But people whose remote genetic ancestors are, for example, 38% traceable to Africa and 62% traceable to Europe tend to identify as black or African-American, even though they are mostly European in genetic ancestry. The vast majority of people who identify as African-American in the U.S. have, on average, about 75% deep genetic African ancestry, with most of the rest of their ancestry being Northern European, and could in theory, on a formal reading of the "origins" idea, legitimately identify as of more than one race. But few people actually identify as more than one race unless they have parents of different races or at least grandparents with different racial identities. The African American designation, like most of the formal categories, also includes a multitude of subgroups. Descendants of U.S. slaves make up most African-Americans in the United States, but the category also usually includes recent immigrants from sub-Saharan Africa and Afro-Caribbean immigrants. The Intersection Of Race and Hispanic Origins Many ambiguous cases regarding race arise among people who identify as Hispanic, in part because socially accepted and bureaucratic understandings of race are different in the U.S. than in Latin America. Most Hispanic-identifying people in the U.S. would identify their race in Latin America as "mestizo", which usually means people with both European ancestry and a large component of pre-Columbian indigenous Central and South American ancestry.
But that isn't an option on Census Bureau forms, so many Hispanic people mark "Other" as their race, while many mark "white", some mark "black" (even in cases where they have little African ancestry), and a few mark Native American (even though this would be a logical choice for someone who self-identifies as "mestizo"). Hispanic people in the U.S. and people in Latin America identifying as "white" tend to have less European ancestry than non-Hispanic people in the U.S. who identify as "white." Hispanic people in the U.S. and people in Latin America identifying as "black" tend to have less African ancestry than non-Hispanic people in the U.S. who identify as "black." Due to this muddle, many statistical and research studies distinguish between non-Hispanic whites and non-Hispanic blacks while lumping all Hispanic-identifying people into one category regardless of their self-identified racial identity. People in the U.S. tend to identify as exclusively "Native American" at a fairly low percentage of pre-Columbian indigenous American deep genetic ancestry (many people who have 15% Native American ancestry, for example, would identify as exclusively Native American), and usually tend to do so only if their Native American ancestors are from north of the U.S.-Mexico border, even though indigenous Americans from the territory of the continental U.S. and indigenous Americans from Latin America are genetically very similar. Many recognized Indian Tribes in the United States have blood quantum rules for membership that require members to have a certain proportion of ancestors who were members of the tribe; this is a genealogical test rather than a genetic one, because Indian Tribes, like countries, can make someone a member via naturalization or adoption in addition to by birth. But one does not have to be affiliated with a specific Indian Tribe to be legitimately considered Native American under the law.
And a large share of people who do not physically appear to be Native American but claim a Native American ancestor many generations back as part of family lore, like Presidential candidate Elizabeth Warren and like many African Americans in the U.S., actually do have a distant Native American ancestor. Asian American is one of the most heterogeneous U.S. racial categories. People who have origins in India, Southeast Asia, East Asia and Central Asia, who look very different from each other physically and have very different cultures, are currently lumped into one presumptive group. Historically, the U.S. distinguished between "Hindus" (meaning "South Asian" rather than a religious identifier) and East Asians such as people from China and Japan, although it no longer does so. This also illustrates the complication that while race is mostly a function of social and ethnic background, genetic ancestry usually influences one's self-identification. For example, in the U.S., someone adopted into a white family as a baby from China (there are probably hundreds of thousands of such people, at least, in the United States) usually self-identifies as Asian-American, despite having no linguistic or first-hand cultural connection to anyplace in China or Asia. The self-identification rule is the powerful blunt instrument by which the edge cases are resolved in a basically unreviewable manner. As a result, the U.S. is not plunged into the arcane debates of racial identity that prevailed in the 19th century in U.S. courts. Blatant attempts to defy social convention by claiming a self-identification different from the one that would be ascribed socially to a person, in what are not edge cases, are sometimes resolved on the grounds that the judge or jury, as the case may be, does not believe that the person's expression of self-identification in court is sincere, based upon circumstantial evidence.
An example of the issues that judges and scholars sought to avoid with the self-identification rule is the famous case Plessy v. Ferguson, 163 U.S. 537 (1896), which established that "separate but equal" satisfied the equality requirements of the 14th Amendment to the United States Constitution. It started out as a test case, but once it became a criminal prosecution it turned in part into a racial classification case over whether the mixed-race plaintiff (who could "pass" as white and would probably identify as "white" in the U.S. today) was black or white. Homer Plessy, a free man who was seven-eighths white and one-eighth of African descent, agreed to participate in a test case to challenge a Louisiana law known as the Separate Car Act. This law required that railroads provide separate cars and other accommodations for whites and African-Americans. The Comite des Citoyens (Committee of Citizens) was a group of New Orleans residents from a variety of ethnic backgrounds that sought to repeal this law. They asked Plessy, who was technically African-American under Louisiana law, to sit in a whites-only car. He bought a first-class ticket and boarded the whites-only car of an East Louisiana Railroad train. The railroad cooperated in the test case because it viewed the law as imposing unnecessary additional costs through the purchase of more railroad cars. It knew about the intention to challenge the law, and the Committee of Citizens also enlisted a private detective to detain Plessy on the train so that he could be charged under the Separate Car Act. When Plessy was told to vacate the whites-only car and sit in the African-American car, he refused and was arrested by the detective. The train was stopped so that he could be removed. At trial, Plessy's lawyers argued that the Separate Car Act violated the Thirteenth and Fourteenth Amendments.
Their theory failed, and the judge found that Louisiana could enforce this law insofar as it affected railroads within its boundaries. Plessy was convicted. The self-identification rule has been popular with judges and scholars in part out of a desire to avoid returning to having courts decide those issues, which in light of contemporary American sensibilities seems unseemly. There is also a sense that accuracy in classification isn't an important feature of a definition of race, because for many purposes, such as the purposes of discrimination laws in the U.S., the beliefs, intent and motives of the person discriminating are what matter legally, not some absolute Platonic truth regarding the person's genetic makeup or culture.
Calcium is a Key Nutrient, a Mineral, That Babies Need We all know how important calcium is to the good health of our bones and teeth, but did you know that of all the minerals in our bodies, calcium is found at a higher level than any other? Calcium is stored in our bones, teeth, and even in muscle and other tissue. It is very important to the growth of healthy bones and of those teeth that start coming in around the age of 6 months. Calcium also helps to build permanent teeth. Your baby will get enough calcium from formula and/or breast milk, and it is only when he starts to wean from these liquids that you need to pay closer attention to calcium needs. Throughout infancy, breast milk will always contain the right amount of calcium that your growing baby needs, and while breast milk contains less calcium than formula, it is better absorbed (as with iron), so baby does not need as much. Calcium is found in so many foods that if a baby does not drink milk after the age of 12 months it’s not necessarily a bad thing. Just check the labels of any foods you buy and you will be surprised to see all the places that calcium lurks: yogurt, cottage cheese, plain cheeses, and even cream cheese are wonderful sources of calcium and Vitamin D and are very nutritious for your little one. The calcium needs of babies and children are as follows: Babies 0-11 months of age: - Babies younger than 6 months old need 200 mg of calcium a day. - Babies 6 to 11 months old need 260 mg of calcium a day. The only types of milk babies should have are breast milk or formula. Don’t give cow’s milk or any other kind of milk to babies younger than 1 year old. Kids and Teens Kids need more calcium as they get older to support their growing bones: - Kids 1 to 3 years old need 700 mg of calcium a day (2–3 servings). - Kids 4 to 8 years old need 1,000 mg of calcium a day (2–3 servings). - Kids and teens 9 to 18 years old need 1,300 mg of calcium a day (4 servings).
Babies can eat yogurt and cheese prior to the age of 1 year. Cow’s milk, however, should not be offered as a drink before then, because it does not contain enough nutrients for babies to grow and develop properly. Drinking too much milk can also hinder iron absorption, and iron is a crucial element in baby’s growth! * Calcium requirements via Kid’s Health Check out some other calcium-containing foods that baby might like: - certain types of legumes, such as kidney beans - whole wheat bread - fortified orange juice
There are many ways to measure deforestation in the world today. For example, researchers can use satellite imagery, or LiDAR, to detect changes in forest density and growth around the world. Deforestation is affecting forest growth around the world and contributes to the way climate change affects various locations. Researchers have estimated that when Europeans first set foot on North American shores, the forests were so dense that a squirrel could have travelled from the Atlantic to the Mississippi without having to touch the ground. Human settlement, logging, and other industries have cut this forested area down to a fraction of what it was. Deforestation doesn’t happen evenly, which makes estimating the extent of deforestation in the United States difficult. New research has given scientists a new way to track forest changes using a mix of satellite data and information gathered in the field. Using satellite images, researchers established a method of calculating the distance between any point in the continental United States and the nearest forested area. They computed this metric using data from 1992, and again from 2001. These two data sets showed that the average distance to the nearest forest increased by a third of a mile. This new measure is called the forest attrition distance. As human settlements and industry grow, smaller patches of forest are lost to attrition and the distance to the next forested area increases. This metric takes into account not only the quantity of forest but its quality as well. Researchers hope that increasing knowledge of how deforestation affects people, animals, and plants will continue to slow the practice of clear-cutting and increase awareness of the conservation of America’s forested lands. - Yang, S., & Mountrakis, G. (2017). Forest dynamics in the US indicate disproportionate attrition in western forests, rural areas and public lands. PLoS ONE, 12(2), e0171383.
- How Far to the Next Forest? A New Way to Measure Deforestation, NY Times
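The distance-to-nearest-forest idea behind the attrition metric can be illustrated with a toy computation. The sketch below works over a small boolean land-cover grid; the function name, the brute-force nearest-cell search, and the choice to average the distances are assumptions for illustration, not the exact method of the cited paper.

```python
import math

def mean_forest_distance(grid, cell_size_km=1.0):
    """Average straight-line distance from each non-forested cell to
    the nearest forested cell in a boolean raster (True = forested).
    Tracking this average across years shows attrition: as forest
    patches disappear, the mean distance grows."""
    # Collect coordinates of every forested cell.
    forest = [(r, c) for r, row in enumerate(grid)
              for c, v in enumerate(row) if v]
    distances = []
    for r, row in enumerate(grid):
        for c, is_forest in enumerate(row):
            if is_forest:
                continue
            # Brute-force search for the nearest forested cell.
            nearest = min(math.hypot(r - fr, c - fc) for fr, fc in forest)
            distances.append(nearest * cell_size_km)
    return sum(distances) / len(distances)
```

Running the same computation on rasters from two different years, as the researchers did with the 1992 and 2001 data, would reveal whether the average distance to forest has grown.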
The National Science Teachers Association website is a useful tool for teachers as they assist students in preparing for their local Terra Fair. On the NSTA website teachers can find over 800 classroom resources (including lesson plans, book chapters, videos, simulations and more) on a variety of topics for every grade level from K-12. Many of these classroom resources are free to download and implement in your own classroom. Each resource is vetted by NSTA curators and easily adaptable to be in line with the Next Generation Science Standards (NGSS). In addition to classroom resources, NSTA also publishes a variety of newsletters, including a monthly publication called Science and the STEM Classroom that highlights important topics in STEM education and provides a variety of supplemental resources. There are three versions of this newsletter published for different grade levels: one each for elementary school, middle school, and high school/college. Each issue is archived and available online to read. This website can prove to be a useful resource as you guide students through fair preparation. Science Buddies and our Pinterest page are good resources for them to use as they create their projects and begin project boards. Our blog posts also have resources for students, including video guides on judging and creating project boards.
Getting an accurate diagnosis of any neurological disorder is key for proper treatment. With that in mind, what are the characteristics of autism? Ultimately, the most important thing is understanding how to help a child with their specific needs after their autism diagnosis, and identifying the characteristics of their case can be key to this. There are three core deficits of autism spectrum disorder (ASD), and each of these core deficits has a direct effect on the examples described below. These are often some of the first characteristics that will present in children with autism. However, because development varies from person to person, it can be a while before parents or therapists are willing to consider these problems as indicators of a child on the spectrum. Some of these social communication deficits include issues with back-and-forth conversation, difficulty in understanding social cues, or trouble with developing relationships. In these examples, a child might refuse to answer questions when prompted, or avoid eye contact when it is attempted, and then have subsequent difficulty making friends. Once you begin to notice these patterns in a child, it is not cause for panic or concern. Instead, make sure to have them evaluated by a psychologist, and if there is a diagnosis of autism, begin educating yourself and your family on how best to adjust. Put simply, this is when your child adversely reacts to a particular environment; this can include the way things sound, smell, taste, look or feel. This can happen for a number of reasons, including sensory overload in the way the brain processes different stimuli. This is often called hypersensitivity, and when coupled with other characteristics, it can be a leading indicator for an autism diagnosis.
It is important not to reprimand your child for reacting this way to a stimulus; by overtly reacting to certain behaviors you run the risk of inadvertently reinforcing them, resulting in more frequent occurrences. This is where a measured approach through therapy and education will best help your child adjust to hypersensitivity. Another characteristic of autism is a fixation on objects, topics, or certain activities. This is where the ‘savant syndrome’ of autism might present itself (it is present in less than 10 percent of people with autism), though fixation is realistically more likely to manifest as obsessive behavior. Often the focus will be an object of comfort like a toy, an interesting topic learned about in school, or an activity that brings comfort. People with autism thrive on routine, and a break in their repetitive behavior can be upsetting to them. If your child is exhibiting obsessive behaviors, that does not necessarily point to an autism diagnosis, but it is worth considering. Sometimes other undiagnosed issues can lead to similar behaviors, so make sure you speak with a specialist about the possible range of outcomes and diagnoses before enrolling your child in a program. There is no standard for how autism affects a child; each case is unique. While some children will have perfectly normal speech development, up to 40 percent of children with an ASD are unable to speak verbally. Similarly, a child with autism may have trouble remembering words or might have a delay in their ability to speak. Phrase repetition, talking to themselves, or attempting to create and use their own language might be symptoms as well. Because speech is such a nuanced part of all human activity, making assumptions based on early patterns is difficult.
However, if there are persistent issues along with other cues, it may be time to look into how ABA therapy, speech-language therapy, or other therapies can help your child communicate more easily. The good news is that even if early speech development is inhibited, many children with autism develop the skills to communicate effectively. Most importantly, the quality of life for a child will almost always improve if there is effective treatment for their language skills and early communication issues. Several other quick behavioral cues, when paired with the characteristics above, can also point toward an autism diagnosis. There are many characteristics of autism, and being aware of the most prevalent ones will help you get an accurate diagnosis for your child earlier. For any and all of their behavioral tendencies, it is important to recognize how you can work constructively with your family, teachers, and specialists to improve them. At Ally Pediatric Therapy, our mission is in the name: to assist children with autism and their families on the pathway to a better life. If you believe your child is exhibiting some of the common symptoms of autism, please reach out today. A diagnosis and treatment plan can make all the difference in the world, and we’d love to help.
Data Trends & Use Explore these resources for academic, social, and other trends related to all students, both in general and within different populations. You will also find guidance on how to use data to inform decision-making in schools and districts. The Path Forward: Improving Opportunities for African-American Students [This report offers] a portrait of the performance of African-American students in the United States today. [...] Over the past 25 years, the performance of African-American students on key academic success indicators has improved, in some cases markedly. [...] In absolute terms, though, there is much more room for improvement. [...] By staying the course on accountability, promoting school choice, working with industry, and focusing on students in the greatest need, we can work to turn around these distressing numbers in ways that will significantly benefit African-American students who deserve a high-quality education experience that will prepare them for the path forward. A Climate for Academic Success The study analyzes reported differences in school climate by students in successful versus unsuccessful schools and how school climate and personnel resources are related to academic success. It concludes with practical implications of these findings for improving the academic performance of schools. How Black and White Students in Public Schools Perform in Mathematics and Reading on the National Assessment of Educational Progress This report addresses the following questions: 1) How do gaps in 2007 compare to the gaps in the initial and most recent prior years of the NAEP national and state assessment series? And 2) How do states compare to the nation in 2007? The current report presents these results in graphs that show the NAEP achievement gaps in a format that makes it possible to see at a glance the national and state gap results for all available years.
How Hispanic and White Students in Public Schools Perform in Mathematics and Reading on the National Assessment of Educational Progress This report addresses the following questions: 1) How do score gaps in 2009 mathematics and reading performance compare to the gaps in the initial and most recent prior years of the NAEP national and state assessment series? And 2) How do Hispanic and White scores and gaps in mathematics and reading at the state level compare to the national scores and gaps in 2009? This website features resources based on National Assessment of Educational Progress (NAEP) data addressing: 1) School Composition and the Black-White Achievement Gap; 2) Hispanic-White Achievement Gap Performance; and 3) Black-White Achievement Gap Performance. An executive summary and full report are included for each report. Additional resources are also offered, such as data tables, data highlights, and FAQs. Research on the Achievement Gap REL West staff have searched selected databases for relevant resources and developed a references list of resources that are relevant to the request. This memo includes: 1) Research reports and policy-oriented articles about factors associated with, as well as interventions for closing, the achievement gap, and 2) Relevant organizations that focus on the issue of the achievement gap in K–12 settings. Citations include a link to a free online version when available. Citations are accompanied by an abstract, excerpt, or summary written by the author or publisher of the article. We have not done an evaluation of these resources or organizations, but rather provide them for your information only. School Size, Achievement, and Achievement Gaps In order to examine the relationship between school size and achievement, a study was conducted using longitudinal achievement data from North Carolina for three separate cohorts of public school students (one elementary, one middle and one high school).
Results revealed several interactions between size and student characteristics, all of which indicated that the achievement gaps typically existing between certain subgroups (i.e., more versus less-advantaged, lower versus higher-achieving) were larger in larger schools. Results varied across the grade level cohorts and across subjects, but in general effects were more common in mathematics than in reading, and were more pronounced at the high school level. Study results are discussed in the context of educational equity and cost-... It takes more than testing: Closing the achievement gap. A report of the Center on Education Policy This report summarizes the research on the achievement gap between white students and black and Hispanic students; highlights possible remedies for the gap; and suggests an approach that policymakers can use to weigh the various proposals for closing the gap. Closing the Achievement Gap for Economically Disadvantaged Students? Analyzing Change Since No Child Left Behind Using State Assessments and the National Assessment of Educational Progress A critical state-level indicator of progress in public education is student achievement: annual performance and change over time. The Council of Chief State School Officers (CCSSO) has been very active in tracking and reporting on student achievement results and using state assessment scores and other data to analyze achievement trends. A central goal of the No Child Left Behind (NCLB) Act was to close the gap in student achievement between students from different social and economic backgrounds. A principal objective of the federal funding mandated under NCLB, the design for program initiatives, and the accountability provisions of the federal law was to reduce the extent of disparity in performance of students from different demographic groups within schools as well as differences in...
Making Sure All Children Matter: Getting School Accountability Signals Right School accountability systems have the potential to be a powerful tool to help close the long-standing gaps in achievement that separate low-income students and students of color from their peers. Making Sure All Children Matter breaks down how accountability systems can do this. Black Male Achievement Lowest in Schools with Highest Levels of Black Students This blog posting summarizes a study conducted by the American Institutes for Research that explores the achievement gap between Black and White students. Key findings: young Black females fared better than young Black males in this analysis of achievement in relation to school segregation; the achievement gap for males was 25 points in schools with 60% or greater Black populations, compared to 17 points in schools with less than 20% Black populations; for young women, the Black-White achievement gap was not statistically wider in schools with the highest percentages of Black students (15 points) than in schools with the lowest percentages (13 points). What Works Clearinghouse The goal of the WWC is to be a resource for informed education decision making. To reach this goal, the WWC identifies studies that provide credible and reliable evidence of the effectiveness of a given practice, program, or policy (referred to as "interventions"), and disseminates summary information and free reports on the WWC website. With over 700 publications available and more than 10,500 reviewed studies in the online searchable database, the WWC aims to inform researchers, educators, and policymakers as they work toward improving education for students. National Assessment of Educational Progress (NAEP) The National Assessment of Educational Progress (NAEP) is the largest nationally representative and continuing assessment of what America's students know and can do in various subject areas.
Michigan School Data MI School Data is the State of Michigan's official public portal for education data to help citizens, educators and policy makers make informed decisions that can lead to improved success for our students. The site offers multiple levels and views for statewide, intermediate school district, district, school, and college level information. Data are presented in graphs, charts, trend lines and downloadable spreadsheets to support meaningful evaluation and decision making. Black Lives Matter: The Schott 50 State Report on Public Education and Black Males, 2015 Schott Foundation's biennial report reflecting national data on the four-year graduation rates for Black males compared to other sub-groups. Achievement Gap Highlights: How Black and White Students in Public Schools Perform on the National Assessment of Educational Progress Achievement gaps between Black and White students are featured in every major National Assessment of Educational Progress (NAEP) report card. The report, Achievement Gaps: How Black and White Students in Public Schools Perform in Mathematics and Reading on the National Assessment of Educational Progress, examines achievement gaps more closely, and provides a detailed portrait of how achievement gaps and Black and White students' performance have changed over time at both the national and state levels. This report uses data from two assessments: main NAEP and Long-Term Trend (LTT). While both programs assess reading and mathematics, they have three major differences: (1) main NAEP assesses performance of fourth and eighth graders, while LTT assesses performance of 9- and 13-year-olds; (2...
This material must not be used for commercial purposes, or in any hospital or medical facility. Failure to comply may result in legal action. WHAT YOU SHOULD KNOW: - Lyme disease is an infection caused by a bacterium (germ) called Borrelia burgdorferi. Ixodes ticks (deer ticks) are carriers of the bacteria, and may infect you by biting through your skin. Deer ticks are most common in the Northeastern and North Central United States. Symptoms of Lyme disease may appear up to one month after you are bitten by a tick. Lyme disease may cause a target, or bull's-eye, rash on your skin. Symptoms include a fever, sore throat, headache, stiff neck, feeling tired, and pain in your muscles and joints. Lyme disease may also lead to problems with your nerves, brain, and heart. You may have trouble thinking clearly, and you may not be able to move areas of your face. Lyme disease may also cause you to have abnormal heartbeats. - Your caregiver may know you have Lyme disease by looking at your rash. You may also need blood tests to check for the bacterium that causes Lyme disease. Caregivers may also test for bacteria in the fluid around your spinal cord, or the fluid around your joints. Treatment includes medicines to kill the germ causing Lyme disease. Medicines may also be used to decrease any pain and swelling in your joints. You may also need treatment to remove swollen joint tissue, or treatment to correct abnormal heartbeats. Treatment for Lyme disease may prevent or decrease symptoms such as joint pain and swelling. Treatment may also help stop the disease from spreading to your organs. AFTER YOU LEAVE: Take your medicine as directed: Call your primary healthcare provider if you think your medicine is not helping or if you have side effects. Tell him if you are allergic to any medicine. Keep a list of the medicines, vitamins, and herbs you take. Include the amounts, and when and why you take them. Bring the list or the pill bottles to follow-up visits.
Carry your medicine list with you in case of an emergency. - Antibiotics: Antibiotics are germ-killing medicines. You may be given antibiotics to kill the germ that causes Lyme disease. - Nonsteroidal anti-inflammatory medicine: This family of medicine is also called NSAIDs. NSAIDs may help decrease pain and inflammation (swelling) in your joints. Some NSAIDs may also be used to decrease a fever (high body temperature). This medicine can be bought with or without a doctor's order. This medicine can cause stomach bleeding or kidney problems in certain people. Always read the medicine label and follow the directions on it before using this medicine. - Steroids: You may be given steroids to reduce pain, redness, and swelling in your joints. Ask your caregiver when to return for a follow-up visit. If you had surgery to remove swollen joint tissue, you may need to have stitches removed. If you have a temporary pacemaker, talk to your caregiver about how long you need it. You may need follow-up visits often to check your heart while the pacemaker is in place. Keep all appointments. Write down any questions you may have. This way you will remember to ask these questions during your next visit. Ways to decrease your risk for getting Lyme disease: - Avoid areas outdoors where you know there are many ticks. Check your skin and scalp for ticks when you return from areas where ticks live. If you find a tick, remove it carefully from your skin with tweezers, and clean the area. Watch the area where the tick attached itself over the next month for any redness or a rash. - Remove dead leaves and brush from around your home and yard. If you live in a wooded area, make a border between your grass and the woods near your house. Wood chips and gravel can be used to make the border. Spray the grass and tree areas where you live with repellent to keep ticks away. - Spray your clothing and exposed skin with a tick repellent.
Your caregiver may suggest a repellent that has low levels of DEET in it. Wear protective clothing while in areas where there may be ticks. Wear long sleeves and tuck your pants into your socks. Wear light-colored clothing so you can see a tick that gets on you. If you had surgery to treat your symptoms of Lyme disease, you will have bandages that cover your wound. Talk to your caregiver about how to care for your wound. For more information: Contact the following: - Centers for Disease Control and Prevention (CDC) 1600 Clifton Road Atlanta, GA 30333 Phone: 1-800-232-4636 Web Address: http://www.cdc.gov/ CONTACT A CAREGIVER IF: - Your red target rash grows or spreads to other areas of your body. - You suddenly have trouble falling or staying asleep. - You have new or worsening pain and swelling in your joints. - You have new or worsening weakness and muscle pain. - You have changes in your mood, such as feeling depressed (deep sadness), anxious (worry), or easily angered. - You think or know you are pregnant. - You get a new tick bite. - You have questions or concerns about your disease or treatment. SEEK CARE IMMEDIATELY IF: - You suddenly have headaches, or your neck becomes stiff and is painful to move. - You have new pain in your chest or trouble breathing. - You have new or worsening trouble with your memory, concentration, or thinking clearly. - You suddenly cannot talk or see well, or you have trouble moving an area of your body. - You have new numbness in your arms or legs, or you have new trouble walking. © 2017 Truven Health Analytics Inc. Information is for End User's use only and may not be sold, redistributed or otherwise used for commercial purposes. All illustrations and images included in CareNotes® are the copyrighted property of A.D.A.M., Inc. or Truven Health Analytics. The above information is an educational aid only. It is not intended as medical advice for individual conditions or treatments.
Talk to your doctor, nurse or pharmacist before following any medical regimen to see if it is safe and effective for you.
The Language of Disasters and Incidents Words have meaning and power, and as such, it is important to understand the precise meanings of terms used in conversation and writing. This is especially true when people are dealing with situations that are vague, uncertain, complex, ambiguous, and possibly lethal. The language of disasters and incidents is different from the military terminology we use day-to-day. When a unit is mobilized to assist with a domestic operation, it is extremely important to understand the language used by first responders and incident commanders. This chapter will introduce you to this language by identifying and explaining key terminology. Disasters, Hazards, and Incidents The terms disaster, hazard, and incident reached their current definitions through two sources, one dating from before 9/11 and the other after 9/11. The older terms in the Robert T. Stafford Disaster Relief and Emergency Assistance Act - major disaster, natural disaster, and domestic disaster - are more familiar to laymen, while the newer ones in the National Response Framework (NRF) - incident or catastrophic incident - are elements of the more specialized vocabulary of emergency responders. Both older and newer terms are used, and the staff officer should understand how to use all of them. Major disaster is defined by Title 42 U.S. Code Section 5122(2) as follows: Any natural catastrophe (including any hurricane, tornado, storm, high water, wind driven water, tidal wave, tsunami, earthquake, volcanic eruption, landslide, mudslide, snowstorm, or drought) or, regardless of cause, any fire, flood, or explosion, in any part of the United States, which in the determination of the President causes damage of sufficient severity and magnitude to warrant major disaster assistance under this Act to supplement the efforts and available resources of States, local governments, and disaster relief organizations in alleviating the damage, loss, hardship, or suffering caused thereby. 
Emergency is defined by the Stafford Act as follows: Any occasion or instance for which, in the determination of the President, Federal assistance is needed to supplement State and local efforts and capabilities to save lives and to protect property and public health and safety, or to lessen or avert the threat of a catastrophe in any part of the United States. Incident is defined by Joint Publication 3-28, Civil Support, as follows: An occurrence, caused by either human action or natural phenomena, that requires action to prevent or minimize loss of life or damage to property and/or natural resources. A disaster has already occurred and caused significant damage, while a hazard, as defined by the NRF, is simply "something that is potentially dangerous or harmful, often the root cause of an unwanted outcome." The Northridge earthquake was a disaster, while earthquakes in general are hazards. All disasters or hazards fall into two general categories (natural or man-made) and most fall into one of a number of subcategories. Table 1-1a. Types of natural disasters and hazards Table 1-1b. Types of man-made disasters and hazards Stafford Act Declarations The Stafford Act commits federal resources to responding to damaging, life-threatening disasters when state and local efforts cannot handle them. The federal government reacts to formal state requests for assistance in three principal ways, the first two requiring a presidential declaration: 1. Major disaster declaration: In response to a request from the governor of a state, the president makes this declaration, opening the way to a large federal commitment of resources, including the potential deployment of Department of Defense (DOD) personnel and resources. The frequency of major disasters and the costs to the federal government are on the rise because of: a. Increasing population density. Because of these circumstances, one disaster can cause additional disasters.
For example, an earthquake may rupture gas lines, causing fires and chemical spills. 2. Emergency declaration: At the request of a governor, this presidential declaration authorizes a lesser federal commitment, limited to $5 million. 3. Fire management assistance declaration: Authorizes the use of federal funds to mitigate, manage, and control fires burning on publicly or privately owned forests or grasslands. At the request of a governor, the regional Federal Emergency Management Agency (FEMA) director makes the declaration, not the president. Figure 1-1 shows presidential disaster declarations, January 2000 through March 2007. FEMA posts basic information about each of the individual declarations of major disasters, emergencies, and fires on its website at "http://www.fema.gov/hazard/index.shtm". Figure 1-2 shows the relationship between the severity of an event and the level of response to the event. Facts about declarations: From the Stafford Act to the National Response Framework The Stafford Act dates from a time when there was little expectation of a terrorist attack. Since 1988 only four terrorist attacks have merited major disaster declarations, but the four were of such magnitude and impact that they reshaped the national approach to all disasters. After the World Trade Center explosion and the Oklahoma City Alfred P. Murrah Federal Building bombing in the 1990s, new terminology not found in the Stafford Act began to emerge relating to tools at the disposal of terrorists. In the new terminology, terrorists employ weapons of mass destruction (WMD) to cause death, destruction, and fear. Destruction encompasses everything from physical wreckage and loss of life to damage to society, the economy, national security, and national well-being. The DOD has used a general definition of WMD: "weapons that are capable of a high order of destruction and/or of being used in such a manner as to destroy large numbers of people."
The DOD also uses the term "chemical, biological, radiological, nuclear or high-yield explosives" (CBRNE or CBRN-E) to encompass the full range of WMD. The NRF uses a precise definition of WMD that is spelled out in U.S. laws. Title 18, U.S. Code, paragraph 2332a defines WMD as: Incidents in the NRF The NRF employs a new term, incident, which is intended to be broader and more inclusive than the terms disaster and emergency. An incident is "an occurrence or event, natural or human-caused, that requires an emergency response to protect life or property." Facts about incidents: Catastrophic incidents are comparable to presidentially declared major disasters. The terms suggest natural and man-made events that do significant harm and overwhelm the response capabilities of local and state governments. The definition of catastrophic incident differs from that of major disaster only in that it fits more neatly within the framework of the war against terrorism. Facts about catastrophic incidents The NRF includes a Catastrophic Incident Annex (NRF-CIA). Only the secretary of Homeland Security or his designee can implement this annex. Incidents covered under the annex are "any natural or man-made incident, including terrorism, that results in extraordinary levels of mass casualties, damage, or disruption severely affecting the population, infrastructure, environment, economy, national morale, and/or government functions." Disaster response and incident management Responses to terrorist WMD attacks differ from responses to natural disasters. First responders need to deal with the effects of WMD, which may be different from effects of natural disasters. At the same time, the responders may have to deal with further terrorist attacks and with bringing the terrorists to justice. Consequence management and crisis management emerged to describe the manner in which to handle the needed responses. 
Consequence management and crisis management The requirements for consequence management and crisis management are combined in the NRF. The DOD definition of consequence management is problematic, given that it encompasses both natural and man-made disasters and does not focus exclusively on terrorist actions. At the same time, the NRF uses the terms "consequences" and "effects" interchangeably when considering the outcomes for both natural disasters and man-made disasters, including those caused by terrorists. If the staff officer encounters the term consequence management, he should ask for a definition. The NRF replaces consequence management and crisis management as separate functions with a single term, incident management. Incident management aims to remove the boundaries between consequence management and crisis management. The goal of incident management is to orchestrate "the prevention of, preparedness for, response to, and recovery from terrorism, major natural disasters, and other major emergencies." Incident management includes prevention, preparedness, response, and recovery phases. This handbook will use the term "disaster response" when discussing DOD participation in incident management for a number of reasons: The United States: the Homeland Hurricane Katrina was a domestic disaster, meaning that it took place within the United States. When the Stafford Act and the NRF use the term "United States," they mean more than just the 50 states. The United States, which we can also call the homeland, consists of the following, together with contiguous coastal zone and air space: In terms of the NRF and disaster response, the District of Columbia, the nonstate possessions, and the freely associated states are states with the same rights and responsibilities accorded to the 50 states. The state in this broad sense is the basic geographic unit in disaster response. 
The state's chief executive, usually the governor, must make the case for and request a federal response to a disaster. Within each state, local chief executive officers (for example, mayors and county commissioners) and tribal chief executive officers must request state and, if necessary, federal disaster assistance through the governor. The local and tribal officers rely on their own law enforcement, firefighting, and other resources to make the first response to an incident. The first responders always take the initial action, whether the incident is a routine, small-scale emergency or a major disaster that will eventually require the presence of the DOD. Disasters and emergencies can quickly exhaust or overwhelm the resources of a single jurisdiction, whether at the state or local level. Two primary types of mutual aid, intrastate and interstate, exist. Throughout the United States, numerous regional assistance compacts exist and governors can apply to them for immediate help if state resources are exhausted. Nationally, the Emergency Management Assistance Compact (EMAC) exists to coordinate the arrival of help so that inappropriate and unlicensed assistance is prevented. EMAC is approved by Congress and administered by the states. During a disaster, governors have at their disposal a crucial state resource in the National Guard. To deploy the National Guard effectively, governors need to understand the role the Guard plays in their emergency response systems and to recognize other military assets that are available through the DOD. Military assistance can come in several varieties: Governors of states are absolutely responsible for everything that happens (and fails to happen) within the borders of their states during a disaster. Reflecting their leading role in disaster response, governors are granted emergency powers to fulfill their responsibilities in extraordinary circumstances. These powers are established legislatively and vary from state to state. 
The powers generally include: Field Manual 3-28, Civil Support Operations (Final draft version 6.2), Department of the Army, 16 January 2010. Joint Publication 3-28, Civil Support, Department of Defense, 14 September 2007. Lee, Erin; Logan, Christopher; Mitchell, Jeffrey; and Trella, Joe A Governor's Guide to Homeland Security, National Governor's Association Center for Best Practices, January 2007. National Response Framework, Department of Homeland Security, January 2008. National Incident Management System, Department of Homeland Security, December 2008. National Infrastructure Protection Plan, Department of Homeland Security, 2009. Strategy for Homeland Defense and Civil Support, Department of Defense, June 2005.
- Armstrong Flight Research Center - Exploring Space - 050 min(s) - 060 min(s) How do NASA nutritionists record, analyze, and interpret data to prepare astronaut food for a mission to Mars? This module is appropriate for video conference AND web conference presentation. Someday, humans will travel to the planet Mars or near-Earth asteroids. Their journey will be the most ambitious space mission ever. Going to Mars will be far more challenging than going to the Moon. In order to feed a crew of four for the six- to nine-month trip, every bit of food needed will have to be packed onboard. No matter how desperate the crews become in transit, they can't send out for a pizza delivery. In this module students will analyze food contamination, determine serving sizes based on food labels and recommended servings per astronaut, and calculate the percent change between estimated and recommended serving sizes. Learners will identify how microbial contamination of food relates to preparing food for a seven-month space flight. Learners will estimate serving size and calculate the percent of change between estimated and recommended serving size of different foods. Learners will complete a data table and generate a report based on learned information. Learners will present their data via photos, video, PowerPoint, etc. Learners will discuss results and explain their methods for packaging food accurately and safely. Sequence of Events Among the thousands of questions that need to be answered before astronauts travel to distant planets and asteroids are questions related to the astronauts themselves. How much food will they need and what foods can they take? Fortunately, both water and air can be recycled; food, however, is another matter. Every bit of food needed will have to be packed on board. Through this series of lessons, your students will gain a better understanding of what it takes to prepare and package food for a long space voyage.
Click on the links below to download the following activity and videos to use with your class as pre-conference activities: During the event, students will estimate serving sizes of different foods and compare their estimates to serving size information provided on "Nutrition Facts" food labels. For the program you will need the following materials: three food items (e.g., loose M&Ms, dry breakfast cereal, popped popcorn), calculators, paper plates, a 2-cup measuring cup, and optional scales. The teacher may substitute food items in case of food allergies. If you do not have scales or measuring cups, you can have students count the individual items. You will also need to divide your students into small groups. Additionally, students will calculate the percent of change between their estimated serving size and the recommended serving size. The coordinator will ask about students' understanding of "What will it take to prepare and package food for a long space voyage?" Click on the links below to download the following activity and videos to use with your class: Engage students in discussions regarding which foods should be taken to Mars. Use the following activities to: - Construct and use calorimeters to measure the kilocalories (energy) contained in several food samples. - Learn and practice safety procedures during their testing.
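The percent-of-change step above is ordinary arithmetic, but a short sketch can help when checking student work. This is a hypothetical helper, not part of the NASA module materials; it assumes the recommended serving size is used as the base of the comparison:

```python
def percent_change(estimated, recommended):
    """Percent change from an estimated to a recommended serving size.

    The recommended (label) value is used as the base, so a positive
    result means the student overestimated the serving size.
    """
    return (estimated - recommended) / recommended * 100

# Example: a student estimates 30 pieces of cereal per serving,
# but the nutrition label recommends 25 pieces.
print(round(percent_change(30, 25), 1))  # -> 20.0
```

If the module instead treats the student's estimate as the base, the denominator would change accordingly; the worksheet's own formula should take precedence.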
Click the following links to access the activity and related video: Science Content Standard A: Science as Inquiry - Abilities necessary to do scientific inquiry - develop understandings about scientific inquiry Science Content Standard C: Life Science - Structure and function of living organisms - Regulation and behavior Common Core Mathematics Content Standard: Numbers and Operations - Perform operations with multi-digit whole numbers and with decimals to hundredths - Compute fluently and make reasonable estimates Common Core Mathematics Content Standard: Operations and Algebraic Thinking - Write and interpret numerical expressions - Analyze patterns and relationships
Central processing unit A central processing unit (CPU), or sometimes simply processor, is the component in a digital computer that interprets computer program instructions and processes data. CPUs provide the fundamental digital computer trait of programmability, and are among the essential components in computers of any era, along with primary storage and input/output capabilities. A CPU manufactured as a single integrated circuit is usually known as a microprocessor. Beginning in the mid-1970s, microprocessors of ever-increasing complexity and power gradually supplanted other designs, and today the term "CPU" is usually applied to some type of microprocessor. The term "central processing unit" is a description of a certain class of logic machines that can execute computer programs. This broad definition can easily be applied to many early computers that existed long before "CPU" ever came into widespread usage. However, the term itself has been in use in the computer industry at least since the early 1960s. The form, design and implementation of CPUs have changed dramatically since the earliest examples, but their fundamental operation has remained much the same. Early CPUs were custom-designed as a part of a larger, usually one-of-a-kind, computer. However, this costly method of designing custom CPUs for a particular application has largely given way to the development of mass-produced processors that are suited for one or many purposes. This standardization trend generally began in the era of discrete transistor mainframes and minicomputers and has rapidly accelerated with the popularization of the integrated circuit (IC). The IC has allowed increasingly complex CPUs to be designed and manufactured in very small spaces (on the order of millimeters). Both the miniaturization and standardization of CPUs have increased the presence of these digital devices in modern life far beyond the limited application of dedicated computing machines.
Modern microprocessors appear in everything from automobiles to cell phones to children's toys. Prior to the advent of machines that resemble today's CPUs, computers such as the ENIAC had to be physically rewired in order to perform different tasks. These machines are often referred to as "fixed-program computers," since they had to be physically reconfigured in order to run a different program. Since the term "CPU" is generally defined as a software (computer program) execution device, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer. The idea of a stored-program computer was already present during ENIAC's design, but was initially omitted so the machine could be finished sooner. On June 30, 1945, before ENIAC was even completed, mathematician John von Neumann distributed the paper entitled "First Draft of a Report on the EDVAC." It outlined the design of a stored-program computer that would eventually be completed in August 1949. EDVAC was designed to perform a certain number of instructions (or operations) of various types. These instructions could be combined to create useful programs for the EDVAC to run. Significantly, the programs written for EDVAC were stored in high-speed computer memory rather than specified by the physical wiring of the computer. This overcame a severe limitation of ENIAC, which was the large amount of time and effort it took to reconfigure the computer to perform a new task. With von Neumann's design, the program, or software, that EDVAC ran could be changed simply by changing the contents of the computer's memory. While von Neumann is most often credited with the design of the stored-program computer because of his design of EDVAC, others before him such as Konrad Zuse had suggested similar ideas.
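The stored-program idea described above can be made concrete with a toy interpreter: instructions and data share one memory, so "reprogramming" the machine means rewriting memory cells rather than rewiring hardware. This is a hypothetical minimal sketch in Python, not any historical instruction set such as EDVAC's:

```python
# Toy stored-program machine: instructions and data live in the same
# memory (von Neumann style). Changing the program = changing memory.

def run(memory):
    acc = 0          # single accumulator register
    pc = 0           # program counter
    while True:
        op, arg = memory[pc]          # fetch the instruction at pc
        pc += 1
        if op == "LOAD":              # decode and execute
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory

# Cells 0-3 hold instructions; cells 4-6 hold data.
program = [
    ("LOAD", 4),     # acc = memory[4]
    ("ADD", 5),      # acc += memory[5]
    ("STORE", 6),    # memory[6] = acc
    ("HALT", None),
    2, 3, 0,
]
result = run(program)
print(result[6])  # -> 5
```

Replacing the tuple at cell 1 with, say, `("LOAD", 5)` changes the machine's behavior with no physical change at all, which is exactly the advance over ENIAC's rewiring that the text describes.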
Additionally, the so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, also utilized a stored-program design using punched paper tape rather than electronic memory. The key difference between the von Neumann and Harvard architectures is that the latter separates the storage and treatment of CPU instructions and data, while the former uses the same memory space for both. Most modern CPUs are primarily von Neumann in design, but elements of the Harvard architecture are commonly seen as well. Being digital devices, all CPUs deal with discrete states and therefore require some kind of switching elements to differentiate between and change these states. Prior to commercial acceptance of the transistor, electrical relays and vacuum tubes (thermionic valves) were commonly used as switching elements. Although these had distinct speed advantages over earlier, purely mechanical designs, they were unreliable for various reasons. For example, building direct current sequential logic circuits out of relays requires additional hardware to cope with the problem of contact bounce. While vacuum tubes do not suffer from contact bounce, they must heat up before becoming fully operational and eventually stop functioning altogether. Usually, when a tube failed, the CPU would have to be diagnosed to locate the failing component so it could be replaced. Therefore, early electronic (vacuum tube based) computers were generally faster but less reliable than electromechanical (relay based) computers. Tube computers like EDVAC tended to average eight hours between failures, whereas relay computers like the (slower, but earlier) Harvard Mark I failed very rarely. In the end, tube-based CPUs became dominant because the significant speed advantages afforded generally outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs (see below for a discussion of clock rate).
Clock signal frequencies ranging from 100 kilohertz (kHz) to 4 megahertz (MHz) were very common at this time, limited largely by the speed of the switching devices they were built with. Discrete transistor and IC CPUs The design complexity of CPUs increased as various technologies facilitated building smaller and more reliable electronic devices. The first such improvement came with the advent of the transistor. Transistorized CPUs during the 1950s and 1960s no longer had to be built out of bulky, unreliable, and fragile switching elements like vacuum tubes and electrical relays. With this improvement more complex and reliable CPUs were built onto one or several printed circuit boards containing discrete (individual) components. During this period, a method of manufacturing many transistors in a compact space gained popularity. The integrated circuit (IC) allowed a large number of transistors to be manufactured on a single semiconductor-based die, or "chip." At first only very basic non-specialized digital circuits such as NOR gates were miniaturized into ICs. CPUs based upon these "building block" ICs are generally referred to as "small-scale integration" (SSI) devices. SSI ICs, such as the ones used in the Apollo guidance computer, usually contained transistor counts numbering in multiples of ten. To build an entire CPU out of SSI ICs required thousands of individual chips, but still consumed much less space and power than earlier discrete transistor designs. As microelectronic technology advanced, an increasing number of transistors were placed on ICs, thus decreasing the quantity of individual ICs needed for a complete CPU. MSI and LSI (medium- and large-scale integration) ICs increased transistor counts to hundreds, then thousands. In 1964, IBM introduced its System/360 computer architecture, which was used in a series of computers that could run the same programs with different speed and performance. 
This was significant at a time when most electronic computers were incompatible with one another, even those made by the same manufacturer. To facilitate this improvement, IBM utilized the concept of a microprogram (often called "microcode"), which still sees widespread usage in modern CPUs. The System/360 architecture was so popular that it dominated the mainframe computer market for the next few decades and left a legacy that is still continued by similar modern computers like the IBM zSeries. Shortly thereafter, in 1965, Digital Equipment Corporation (DEC) introduced another influential computer aimed at the scientific and research markets, the PDP-8. DEC would later introduce the extremely popular PDP-11 line that originally was built with SSI ICs but was eventually implemented with LSI components once these became practical. In stark contrast with its SSI and MSI predecessors, the first LSI implementation of the PDP-11 contained a CPU composed of only four LSI integrated circuits. Transistor-based computers had several distinct advantages over their predecessors. Aside from facilitating increased reliability and lower power consumption, transistors also allowed CPUs to operate at much higher speeds because of the short switching time of a transistor in comparison to a tube or relay. Thanks to both the increased reliability as well as the dramatically increased speed of the switching elements (which were almost exclusively transistors by this time), CPU clock rates in the tens of megahertz were obtained during this period. Additionally, while discrete transistor and IC CPUs were in heavy usage, new high-performance designs like SIMD (Single Instruction Multiple Data) vector processors began to appear. These early experimental designs later gave rise to the era of specialized supercomputers like those made by Cray Inc. The introduction of the microprocessor in the 1970s significantly affected the design and implementation of CPUs.
Since the introduction of the first microprocessor (the Intel 4004) in 1971 and the first widely used microprocessor (the Intel 8080) in 1974, this class of CPUs has almost completely overtaken all other central processing unit implementation methods. Mainframe and minicomputer manufacturers of the time launched proprietary IC development programs to upgrade their older computer architectures, and eventually produced instruction set compatible microprocessors that were backward-compatible with their older hardware and software. Combined with the advent and eventual vast success of the now ubiquitous personal computer, the term "CPU" is now applied almost exclusively to microprocessors. Previous generations of CPUs were implemented as discrete components and numerous small integrated circuits (ICs) on one or more circuit boards. Microprocessors, on the other hand, are CPUs manufactured on a very small number of ICs; usually just one. The overall smaller CPU size as a result of being implemented on a single die means faster switching time because of physical factors like decreased gate parasitic capacitance. This has allowed synchronous microprocessors to have clock rates ranging from tens of megahertz to several gigahertz. Additionally, as the ability to construct exceedingly small transistors on an IC has increased, the complexity and number of transistors in a single CPU has increased dramatically. This widely observed trend is described by Moore's law, which has proven to be a fairly accurate predictor of the growth of CPU (and other IC) complexity to date. While the complexity, size, construction, and general form of CPUs have changed drastically over the past 60 years, it is notable that the basic design and function have not changed much at all. Almost all common CPUs today can be very accurately described as von Neumann stored-program machines.
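Moore's law can be made concrete with a quick back-of-the-envelope projection. The two-year doubling period and the 4004's roughly 2,300-transistor count are the conventional figures; this is an illustration of the exponential trend, not a precise model:

```python
def transistors(start_count, start_year, year, doubling_period=2):
    """Project transistor count under Moore's law (doubling every ~2 years)."""
    return start_count * 2 ** ((year - start_year) / doubling_period)

# Starting from the Intel 4004's ~2,300 transistors in 1971,
# thirty years (15 doublings) later:
print(round(transistors(2300, 1971, 2001)))  # 75366400, roughly 75 million
```

Real CPU transistor counts of the early 2000s were indeed in the tens of millions, which is why the law is described as a fairly accurate predictor.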
As the aforementioned Moore's law continues to hold true, concerns have arisen about the limits of integrated circuit transistor technology. Extreme miniaturization of electronic gates is causing the effects of phenomena like electromigration and subthreshold leakage to become much more significant. These newer concerns are among the many factors causing researchers to investigate new methods of computing such as the quantum computer, as well as to expand the usage of parallelism and other methods that extend the usefulness of the classical von Neumann model. The fundamental operation of most CPUs, regardless of the physical form they take, is to execute a sequence of stored instructions called a program. Discussed here are devices that conform to the common von Neumann architecture. The program is represented by a series of numbers that are kept in some kind of computer memory. There are four steps that nearly all von Neumann CPUs use in their operation: fetch, decode, execute, and writeback. The first step, fetch, involves retrieving an instruction (which is represented by a number or sequence of numbers) from program memory. The location in program memory is determined by a program counter (PC), which stores a number that identifies the current position in the program. In other words, the program counter keeps track of the CPU's place in the current program. After an instruction is fetched, the PC is incremented by the length of the instruction word in terms of memory units. Often the instruction to be fetched must be retrieved from relatively slow memory, causing the CPU to stall while waiting for the instruction to be returned. This issue is largely addressed in modern processors by caches and pipeline architectures (see below). The instruction that the CPU fetches from memory is used to determine what the CPU is to do. In the decode step, the instruction is broken up into parts that have significance to other portions of the CPU. 
The way in which the numerical instruction value is interpreted is defined by the CPU's instruction set architecture (ISA). Often, one group of numbers in the instructions, called the opcode, indicates which operation to perform. The remaining parts of the number usually provide information required for that instruction, such as operands for an addition operation. Such operands may be given as a constant value (called an immediate value), or as a place to locate a value: a register or a memory address, as determined by some addressing mode. In older designs the portions of the CPU responsible for instruction decoding were unchangeable hardware devices. However, in more abstract and complicated CPUs and ISAs, a microprogram is often used to assist in translating instructions into various configuration signals for the CPU. This microprogram is sometimes rewritable so that it can be modified to change the way the CPU decodes instructions even after it has been manufactured. After the fetch and decode steps, the execute step is performed. During this step, various portions of the CPU are connected so they can perform the desired operation. If, for instance, an addition operation was requested, an arithmetic logic unit (ALU) will be connected to a set of inputs and a set of outputs. The inputs provide the numbers to be added, and the outputs will contain the final sum. The ALU contains the circuitry to perform simple arithmetic and logical operations on the inputs (like addition and bitwise operations). If the addition operation produces a result too large for the CPU to handle, an arithmetic overflow flag in a flags register may also be set (see the discussion of integer range below). The final step, writeback, simply "writes back" the results of the execute step to some form of memory. Very often the results are written to some internal CPU register for quick access by subsequent instructions. 
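Decoding amounts to splitting the instruction word into fields defined by the ISA. The hypothetical layout below, a 4-bit opcode followed by two 6-bit operand fields in a 16-bit word, is an assumption for illustration and not any real instruction set:

```python
def decode(word):
    """Split a 16-bit instruction word into opcode and operand fields
    (hypothetical ISA: 4-bit opcode, two 6-bit operand fields)."""
    opcode = (word >> 12) & 0xF
    op_a = (word >> 6) & 0x3F
    op_b = word & 0x3F
    return opcode, op_a, op_b

# 0x1083 = 0b0001_000010_000011: opcode 1 (say, "ADD"), operands r2 and r3
print(decode(0x1083))  # (1, 2, 3)
```

In hardware this splitting is done by wiring and decoding logic rather than shifts and masks, but the field extraction is the same idea.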
In other cases results may be written to slower, but cheaper and larger, main memory. Some types of instructions manipulate the program counter rather than directly produce result data. These are generally called "jumps" and facilitate behavior like loops, conditional program execution (through the use of a conditional jump), and functions in programs. Many instructions will also change the state of digits in a "flags" register. These flags can be used to influence how a program behaves, since they often indicate the outcome of various operations. For example, one type of "compare" instruction considers two values and sets a number in the flags register according to which one is greater. This flag could then be used by a later jump instruction to determine program flow. After the execution of the instruction and writeback of the resulting data, the entire process repeats, with the next instruction cycle normally fetching the next-in-sequence instruction because of the incremented value in the program counter. If the completed instruction was a jump, the program counter will be modified to contain the address of the instruction to which was jumped, and program execution continues normally. In more complex CPUs than the one described here, multiple instructions can be fetched, decoded, and executed simultaneously. This section describes what is generally referred to as the "Classic RISC pipeline," which in fact is quite common among the simple CPUs used in many electronic devices (often called microcontrollers).

Design and implementation

The way a CPU represents numbers is a design choice that affects the most basic ways in which the device functions. Some early digital computers used an electrical model of the common decimal (base ten) numeral system to represent numbers internally. A few other computers have used more exotic numeral systems like ternary (base three).
Nearly all modern CPUs represent numbers in binary form, with each digit being represented by some two-valued physical quantity such as a "high" or "low" voltage. Related to number representation is the size and precision of numbers that a CPU can represent. In the case of a binary CPU, a bit refers to one significant place in the numbers a CPU deals with. The number of bits (or numeral places) a CPU uses to represent numbers is often called "word size," "bit width," "data path width," or "integer precision" when dealing with strictly integer numbers (as opposed to floating point). This number differs between architectures, and often within different parts of the very same CPU. For example, an 8-bit CPU deals with a range of numbers that can be represented by eight binary digits (each digit having two possible values), that is, 2^8, or 256, discrete numbers. In effect, integer size sets a hardware limit on the range of integers the software run by the CPU can utilize. Integer range can also affect the number of locations in memory the CPU can address (locate). For example, if a binary CPU uses 32 bits to represent a memory address, and each memory address represents one octet (8 bits), the maximum quantity of memory that CPU can address is 2^32 octets, or 4 GiB. This is a very simple view of CPU address space, and many designs use more complex addressing methods like paging in order to locate more memory than their integer range would allow with a flat address space. Higher levels of integer range require more structures to deal with the additional digits, and therefore more complexity, size, power usage, and generally expense. It is not at all uncommon, therefore, to see 4- or 8-bit microcontrollers used in modern applications, even though CPUs with much higher range (such as 16, 32, 64, even 128-bit) are available.
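The two calculations above, the value range of a word and the reach of a flat address space, are simple powers of two and can be checked directly:

```python
def unsigned_range(bits):
    """Number of distinct values an unsigned integer of a given width can hold."""
    return 2 ** bits

def addressable_gib(address_bits, unit_bytes=1):
    """Maximum memory reachable with a flat address space, in GiB
    (each address names one unit of unit_bytes; one octet by default)."""
    return unsigned_range(address_bits) * unit_bytes / 2 ** 30

print(unsigned_range(8))    # 256 distinct values for an 8-bit integer
print(addressable_gib(32))  # 4.0 (GiB, the familiar 32-bit limit)
```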
The simpler microcontrollers are usually cheaper, use less power, and therefore dissipate less heat, all of which can be major design considerations for electronic devices. However, in higher-end applications, the benefits afforded by the extra range (most often the additional address space) are more significant and often affect design choices. To gain some of the advantages afforded by both lower and higher bit lengths, many CPUs are designed with different bit widths for different portions of the device. For example, the IBM System/370 used a CPU that was primarily 32 bit, but it used 128-bit precision inside its floating point units to facilitate greater accuracy and range in floating point numbers. Many later CPU designs use similar mixed bit width, especially when the processor is meant for general-purpose usage where a reasonable balance of integer and floating point capability is required. Most CPUs, and indeed most sequential logic devices, are synchronous in nature. That is, they are designed and operate on assumptions about a synchronization signal. This signal, known as a clock signal, usually takes the form of a periodic square wave. By calculating the maximum time that electrical signals can move in various branches of a CPU's many circuits, the designers can select an appropriate period for the clock signal. This period must be longer than the amount of time it takes for a signal to move, or propagate, in the worst-case scenario. In setting the clock period to a value well above the worst-case propagation delay, it is possible to design the entire CPU and the way it moves data around the "edges" of the rising and falling clock signal. This has the advantage of simplifying the CPU significantly, both from a design perspective and a component-count perspective. However, it also carries the disadvantage that the entire CPU must wait on its slowest elements, even though some portions of it are much faster.
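The relationship between worst-case propagation delay and the usable clock rate amounts to a one-line calculation. The 10 ns critical path and 20 percent safety margin below are made-up numbers for illustration:

```python
def max_clock_hz(worst_case_delay_ns, margin=1.2):
    """Highest usable clock rate given the slowest signal path: the clock
    period must exceed the worst-case propagation delay plus a safety margin."""
    period_s = worst_case_delay_ns * margin * 1e-9
    return 1.0 / period_s

# A hypothetical 10 ns critical path caps the clock at roughly 83 MHz:
print(round(max_clock_hz(10) / 1e6, 1))  # 83.3
```

This is why the entire CPU "waits on its slowest elements": one long path anywhere in the design limits the clock for everything.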
This limitation has largely been compensated for by various methods of increasing CPU parallelism (see below). Architectural improvements alone do not solve all of the drawbacks of globally synchronous CPUs, however. For example, a clock signal is subject to the delays of any other electrical signal. Higher clock rates in increasingly complex CPUs make it more difficult to keep the clock signal in phase (synchronized) throughout the entire unit. This has led many modern CPUs to require multiple identical clock signals to be provided in order to avoid delaying a single signal significantly enough to cause the CPU to malfunction. Another major issue as clock rates increase dramatically is the amount of heat that is dissipated by the CPU. The constantly changing clock causes many components to switch regardless of whether they are being used at that time. In general, a component that is switching uses more energy than an element in a static state. Therefore, as clock rate increases, so does heat dissipation, causing the CPU to require more effective cooling solutions. One method of dealing with the switching of unneeded components is called clock gating, which involves turning off the clock signal to unneeded components (effectively disabling them). However, this is often regarded as difficult to implement and therefore does not see common usage outside of very low-power designs. Another method of addressing some of the problems with a global clock signal is the removal of the clock signal altogether. While removing the global clock signal makes the design process considerably more complex in many ways, asynchronous (or clockless) designs carry marked advantages in power consumption and heat dissipation in comparison with similar synchronous designs. While somewhat uncommon, entire CPUs have been built without utilizing a global clock signal. Two notable examples of this are the ARM compliant AMULET and the MIPS R3000 compatible MiniMIPS. 
Rather than totally removing the clock signal, some CPU designs allow certain portions of the device to be asynchronous, such as using asynchronous ALUs in conjunction with superscalar pipelining to achieve some arithmetic performance gains. While it is not altogether clear whether totally asynchronous designs can perform at a comparable or better level than their synchronous counterparts, it is evident that they do at least excel in simpler math operations. This, combined with their excellent power consumption and heat dissipation properties, makes them very suitable for embedded computers. The description of the basic operation of a CPU offered in the previous section describes the simplest form that a CPU can take. This type of CPU, usually referred to as subscalar, operates on and executes one instruction on one or two pieces of data at a time. This process gives rise to an inherent inefficiency in subscalar CPUs. Since only one instruction is executed at a time, the entire CPU must wait for that instruction to complete before proceeding to the next instruction. As a result the subscalar CPU gets "hung up" on instructions which take more than one clock cycle to complete execution. Even adding a second execution unit (see below) does not improve performance much; rather than one pathway being hung up, now two pathways are hung up and the number of unused transistors is increased. This design, wherein the CPU's execution resources can operate on only one instruction at a time, can only possibly reach scalar performance (one instruction per clock). However, the performance is nearly always subscalar (less than one instruction per cycle). Attempts to achieve scalar and better performance have resulted in a variety of design methodologies that cause the CPU to behave less linearly and more in parallel. When referring to parallelism in CPUs, two terms are generally used to classify these design techniques.
Instruction level parallelism (ILP) seeks to increase the rate at which instructions are executed within a CPU (that is, to increase the utilization of on-die execution resources), and thread level parallelism (TLP) aims to increase the number of threads (effectively individual programs) that a CPU can execute simultaneously. Each methodology differs both in the ways in which they are implemented, as well as the relative effectiveness they afford in increasing the CPU's performance for an application.

Instruction level parallelism

One of the simplest methods used to accomplish increased parallelism is to begin the first steps of instruction fetching and decoding before the prior instruction finishes executing. This is the simplest form of a technique known as instruction pipelining, and is utilized in almost all modern general-purpose CPUs. Pipelining allows more than one instruction to be executed at any given time by breaking down the execution pathway into discrete stages. This separation can be compared to an assembly line, in which an instruction is made more complete at each stage until it exits the execution pipeline and is retired. Pipelining does, however, introduce the possibility for a situation where the result of the previous operation is needed to complete the next operation; a condition often termed data dependency conflict. To cope with this, additional care must be taken to check for these sorts of conditions and delay a portion of the instruction pipeline if this occurs. Naturally, accomplishing this requires additional circuitry, so pipelined processors are more complex than subscalar ones (though not very significantly so). A pipelined processor can become very nearly scalar, inhibited only by pipeline stalls (an instruction spending more than one clock cycle in a stage). Further improvement upon the idea of instruction pipelining led to the development of a method that decreases the idle time of CPU components even further.
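The assembly-line arithmetic of pipelining can be checked directly: with S stages, N instructions take roughly N + S - 1 cycles instead of N x S. Stalls and hazards are ignored in this sketch:

```python
def cycles(n_instructions, n_stages, pipelined):
    """Cycle count for n instructions on an n_stages-deep execution path,
    with and without pipelining (ideal case: no stalls, no hazards)."""
    if pipelined:
        # Assembly line: once the pipeline is full, one instruction retires
        # every cycle, after an initial fill of n_stages - 1 cycles.
        return n_stages + n_instructions - 1
    # Subscalar: each instruction runs start to finish before the next begins.
    return n_stages * n_instructions

print(cycles(100, 5, pipelined=False))  # 500
print(cycles(100, 5, pipelined=True))   # 104
```

For long instruction streams the pipelined count approaches one instruction per cycle, which is exactly the "very nearly scalar" behavior described above.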
Designs that are said to be superscalar include a long instruction pipeline and multiple identical execution units. In a superscalar pipeline, multiple instructions are read and passed to a dispatcher, which decides whether or not the instructions can be executed in parallel (simultaneously). If so they are dispatched to available execution units, resulting in the ability for several instructions to be executed simultaneously. In general, the more instructions a superscalar CPU is able to dispatch simultaneously to waiting execution units, the more instructions will be completed in a given cycle. Most of the difficulty in the design of a superscalar CPU architecture lies in creating an effective dispatcher. The dispatcher needs to be able to quickly and correctly determine whether instructions can be executed in parallel, as well as dispatch them in such a way as to keep as many execution units busy as possible. This requires that the instruction pipeline is filled as often as possible and gives rise to the need in superscalar architectures for significant amounts of CPU cache. It also makes hazard-avoiding techniques like branch prediction, speculative execution, and out-of-order execution crucial to maintaining high levels of performance. By attempting to predict which branch (or path) a conditional instruction will take, the CPU can minimize the number of times that the entire pipeline must wait until a conditional instruction is completed. Speculative execution often provides modest performance increases by executing portions of code that may or may not be needed after a conditional operation completes. Out-of-order execution somewhat rearranges the order in which instructions are executed to reduce delays due to data dependencies. In the case where a portion of the CPU is superscalar and part is not, the part which is not suffers a performance penalty due to scheduling stalls. 
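Branch prediction can be illustrated with the classic two-bit saturating counter, a common textbook scheme; it is shown standalone here, whereas real predictors index tables of such counters by branch address:

```python
class TwoBitPredictor:
    """Two-bit saturating-counter branch predictor: two mispredictions in a
    row are needed to flip the prediction, which tolerates the single
    'loop exit' outcome of a mostly-taken branch."""
    def __init__(self):
        self.state = 0  # 0-1: predict not taken; 2-3: predict taken

    def predict(self):
        return self.state >= 2

    def update(self, taken):
        self.state = min(3, self.state + 1) if taken else max(0, self.state - 1)

p = TwoBitPredictor()
hits = 0
for taken in [True] * 9 + [False] + [True] * 10:  # a loop branch: mostly taken
    hits += p.predict() == taken
    p.update(taken)
print(hits)  # 17 correct predictions out of 20
```

The single not-taken outcome costs only one misprediction; the counter stays in the "taken" half of its range, so the pipeline keeps flowing through the loop body.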
The original Intel Pentium (P5) had two superscalar ALUs which could accept one instruction per clock each, but its FPU could not accept one instruction per clock. Thus the P5 was integer superscalar but not floating point superscalar. Intel's successor to the Pentium architecture, P6, added superscalar capabilities to its floating point features, and therefore afforded a significant increase in floating point instruction performance. Both simple pipelining and superscalar design increase a CPU's ILP by allowing a single processor to complete execution of instructions at rates surpassing one instruction per cycle (IPC). Most modern CPU designs are at least somewhat superscalar, and nearly all general purpose CPUs designed in the last decade are superscalar. In later years some of the emphasis in designing high-ILP computers has been moved out of the CPU's hardware and into its software interface, or ISA. The strategy of the very long instruction word (VLIW) causes some ILP to become implied directly by the software, reducing the amount of work the CPU must perform to boost ILP and thereby reducing the design's complexity.

Thread level parallelism

Another strategy of achieving performance is to execute multiple programs or threads in parallel. This area of research is known as parallel computing. In Flynn's taxonomy, this strategy is known as Multiple Instructions-Multiple Data or MIMD. One technology used for this purpose was multiprocessing (MP). The initial flavor of this technology is known as symmetric multiprocessing (SMP), where a small number of CPUs share a coherent view of their memory system. In this scheme, each CPU has additional hardware to maintain a constantly up-to-date view of memory. By avoiding stale views of memory, the CPUs can cooperate on the same program and programs can migrate from one CPU to another.
To increase the number of cooperating CPUs beyond a handful, schemes such as non-uniform memory access (NUMA) and directory-based coherence protocols were introduced in the 1990s. SMP systems are limited to a small number of CPUs while NUMA systems have been built with thousands of processors. Initially, multiprocessing was built using multiple discrete CPUs and boards to implement the interconnect between the processors. When the processors and their interconnect are all implemented on a single silicon chip, the technology is known as chip-level multiprocessing (CMP). It was later recognized that finer-grain parallelism existed within a single program. A single program might have several threads (or functions) that could be executed separately or in parallel. Some of the earliest examples of this technology treated input/output processing, such as direct memory access, as a separate thread from the computation thread. A more general approach to this technology was introduced in the 1970s when systems were designed to run multiple computation threads in parallel. This technology is known as multi-threading (MT). This approach is considered more cost-effective than multiprocessing, as only a small number of components within a CPU are replicated to support MT, as opposed to the entire CPU in the case of MP. In MT, the execution units and the memory system including the caches are shared among multiple threads. The downside of MT is that the hardware support for multithreading is more visible to software than that of MP, and thus supervisor software like operating systems has to undergo larger changes to support MT. One type of MT that was implemented is known as block multithreading, where one thread is executed until it is stalled waiting for data to return from external memory. In this scheme, the CPU would then quickly switch to another thread which is ready to run, the switch often done in one CPU clock cycle.
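Block multithreading's switch-on-stall policy can be sketched as a toy scheduler. Here threads are just lists of operations, with "S" marking a memory stall; the switch cost is taken as zero, in the spirit of the one-cycle switches described above:

```python
def schedule(threads):
    """Block-multithreading sketch: execute a thread until it issues a memory
    stall ('S'), then hand the pipeline to the next ready thread, round-robin."""
    cursors = [0] * len(threads)
    order, current = [], 0
    while any(c < len(t) for c, t in zip(cursors, threads)):
        if cursors[current] < len(threads[current]):
            op = threads[current][cursors[current]]
            cursors[current] += 1
            order.append((current, op))
            if op == "S":  # stalled on memory: switch to another thread
                current = (current + 1) % len(threads)
        else:  # this thread is finished: move on
            current = (current + 1) % len(threads)
    return order

trace = schedule([["C", "S", "C"], ["C", "C"]])
print(trace)  # [(0, 'C'), (0, 'S'), (1, 'C'), (1, 'C'), (0, 'C')]
```

The execution units stay busy during thread 0's stall because thread 1's work fills the gap, which is the whole point of the technique.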
Another type of MT is known as simultaneous multithreading, where instructions of multiple threads are executed in parallel within one CPU clock cycle. For several decades from the 1970s to the early 2000s, the focus in designing high performance general purpose CPUs was largely on achieving high ILP through technologies such as pipelining, caches, superscalar execution, out-of-order execution, etc. This trend culminated in large, power-hungry CPUs such as the Intel Pentium 4. By the early 2000s, CPU designers were thwarted from achieving higher performance from ILP techniques due to:
- the growing disparity between CPU operating frequencies and main memory operating frequencies
- the escalating CPU power dissipation that was needed for more esoteric ILP techniques

CPU designers then borrowed ideas from commercial computing markets such as transaction processing, where the aggregate performance of multiple programs, also known as throughput computing, was more important than the performance of a single thread or program. This reversal of emphasis is evidenced by the proliferation of dual and multi-core CMP designs and notably, Intel's newer designs resembling its less superscalar P6 architecture. Late designs in several processor families exhibit CMP, including the x86-64 Opteron and Athlon 64 X2, the SPARC UltraSPARC T1, IBM POWER4 and POWER5, as well as several video game console CPUs like the Xbox 360's triple-core PowerPC design. A less common but increasingly important paradigm of CPUs (and indeed, computing in general) deals with data parallelism. The processors discussed earlier are all referred to as some type of scalar device. As the name implies, vector processors deal with multiple pieces of data in the context of one instruction. This contrasts with scalar processors, which deal with one piece of data for every instruction.
Using Flynn's taxonomy, these two schemes of dealing with data are generally referred to as SISD (single instruction, single data) and SIMD (single instruction, multiple data), respectively. The great utility in creating CPUs that deal with vectors of data lies in optimizing tasks that tend to require the same operation (for example, a sum or a dot product) to be performed on a large set of data. Some classic examples of these types of tasks are multimedia applications (images, video, and sound), as well as many types of scientific and engineering tasks. Whereas a scalar CPU must complete the entire process of fetching, decoding, and executing each instruction and value in a set of data, a vector CPU can perform a single operation on a comparatively large set of data with one instruction. Of course, this is only possible when the application tends to require many steps which apply one operation to a large set of data. Most early vector CPUs, such as the Cray-1, were associated almost exclusively with scientific research and cryptography applications. However, as multimedia has largely shifted to digital media, the need for some form of SIMD in general-purpose CPUs has become significant. Shortly after floating point execution units started to become commonplace in general-purpose processors, specifications for and implementations of SIMD execution units also began to appear for general-purpose CPUs. Some of these early SIMD specifications like Intel's MMX were integer-only. This proved to be a significant impediment for some software developers, since many of the applications that benefit from SIMD primarily deal with floating point numbers. Progressively, these early designs were refined and remade into some of the common, modern SIMD specifications, which are usually associated with one ISA. Some notable modern examples are Intel's SSE and the PowerPC-related AltiVec (also known as VMX).
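The scalar-versus-vector contrast reduces to this: one "instruction" touching every element at once rather than a loop issuing a separate instruction per element. A sketch in ordinary Python stands in for hardware SIMD lanes here; on a real CPU the lanes operate in parallel within a single instruction:

```python
def simd_add(a, b):
    """One vector 'instruction': the same add applied to every lane at once
    (software stand-in for what SIMD hardware does in a single operation)."""
    return [x + y for x, y in zip(a, b)]

# A scalar CPU would fetch, decode, and execute four separate adds;
# a vector CPU performs this as a single operation:
print(simd_add([1, 2, 3, 4], [10, 20, 30, 40]))  # [11, 22, 33, 44]
```

The payoff grows with the data set: summing or scaling thousands of pixels or audio samples takes one instruction per vector-width chunk instead of one per element.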
- Addressing mode - Computer bus - Computer engineering - CPU cooling - CPU core voltage - CPU design - CPU power dissipation - CPU socket - Floating point unit - Instruction pipeline - Instruction set - Notable CPU architectures - Wait state - Ring (computer security) - Stream processing - ↑ While EDVAC was designed a few years before ENIAC was built, ENIAC was actually retrofitted to execute stored programs in 1948, somewhat before EDVAC was completed. Therefore, ENIAC became a stored program computer before EDVAC was completed, even though stored program capabilities were originally omitted from ENIAC's design due to cost and schedule concerns. - ↑ Vacuum tubes eventually stop functioning in the course of normal operation due to the slow contamination of their cathodes that occurs when the tubes are in use. Additionally, sometimes the tube's vacuum seal can form a leak, which accelerates the cathode contamination. See vacuum tube. - ↑ Since the program counter counts memory addresses and not instructions, it is incremented by the number of memory units that the instruction word contains. In the case of simple fixed-length instruction word ISAs, this is always the same number. For example, a fixed-length 32-bit instruction word ISA that uses 8-bit memory words would always increment the PC by 4 (except in the case of jumps). ISAs that use variable length instruction words, such as x86, increment the PC by the number of memory words corresponding to the last instruction's length. Also, note that in more complex CPUs, incrementing the PC does not necessarily occur at the end of instruction execution. This is especially the case in heavily pipelined and superscalar architectures (see the relevant sections below). - ↑ Because the instruction set architecture of a CPU is fundamental to its interface and usage, it is often used as a classification of the "type" of CPU. For example, a "PowerPC CPU" uses some variant of the PowerPC ISA. 
Some CPUs, like the Intel Itanium, can actually interpret instructions for more than one ISA; however this is often accomplished by software means rather than by designing the hardware to directly support both interfaces. (See emulator) - ↑ Some early computers like the Harvard Mark I did not support any kind of "jump" instruction, effectively limiting the complexity of the programs they could run. It is largely for this reason that these computers are often not considered to contain a CPU proper, despite their close similarity as stored program computers. - ↑ This description is, in fact, a simplified view even of the Classic RISC pipeline. It largely ignores the important role of CPU cache, and therefore the access stage of the pipeline. See the respective articles for more details. - ↑ The physical concept of voltage is an analog one by its nature, practically having an infinite range of possible values. For the purpose of physical representation of binary numbers, set ranges of voltages are defined as one or zero. These ranges are usually influenced by the operational parameters of the switching elements used to create the CPU, such as a transistor's threshold level. - ↑ While a CPU's integer size sets a limit on integer ranges, this can be (and often is) overcome using a combination of software and hardware techniques. By using additional memory, software can represent integers many magnitudes larger than the CPU can. Sometimes the CPU's ISA will even facilitate operations on integers larger than it can natively represent by providing instructions to make large integer arithmetic relatively quick. While this method of dealing with large integers is somewhat slower than utilizing a CPU with higher integer size, it is a reasonable trade-off in cases where natively supporting the full integer range needed would be cost-prohibitive. See Arbitrary-precision arithmetic for more details on purely software-supported arbitrary-sized integers.
- ↑ In fact, all synchronous CPUs use a combination of sequential logic and combinatorial logic. (See boolean logic) - ↑ One notable recent CPU design that uses clock gating is that of the IBM PowerPC-based Xbox 360. It utilizes extensive clock gating in order to reduce the power requirements of the aforementioned video game console it is used in. - ↑ Neither ILP nor TLP is inherently superior to the other; they are simply different means by which to increase CPU parallelism. As such, they both have advantages and disadvantages, which are often determined by the type of software that the processor is intended to run. High-TLP CPUs are often used in applications that lend themselves well to being split up into numerous smaller applications, so-called "embarrassingly parallel problems." Frequently, a computational problem that can be solved quickly with high TLP design strategies like SMP takes significantly more time on high ILP devices like superscalar CPUs, and vice versa. - ↑ Best-case scenario (or peak) IPC rates in very superscalar architectures are difficult to maintain since it is impossible to keep the instruction pipeline filled all the time. Therefore, in highly superscalar CPUs, average sustained IPC is often discussed rather than peak IPC. - ↑ Earlier, the term scalar was used to compare the IPC (instructions per cycle) count afforded by various ILP methods. Here the term is used in the strictly mathematical sense to contrast with vectors. See scalar (mathematics) and vector (spatial). - ↑ Although SSE/SSE2/SSE3 have superseded MMX in Intel's general purpose CPUs, later IA-32 designs still support MMX. This is usually accomplished by providing most of the MMX functionality with the same hardware that supports the much more expansive SSE instruction sets. - Amdahl, G. M., G. A. Blaauw, and F. P. Brooks, Jr. 1964. Architecture of the IBM System/360. IBM Research. Retrieved January 30, 2009. - Brown, Jeffery. 2005. Application-customized CPU design.
IBM developerWorks. Retrieved January 30, 2009. - Digital Equipment Corporation. 1975. LSI-11 Module Descriptions in LSI-11, PDP-11/03 user's manual 2nd edition. Maynard, Massachusetts: Digital Equipment Corporation. - Garside, J. D., Furber, S. B., & Chung, S-H. 1999. AMULET3 Revealed. University of Manchester Computer Science Department. Retrieved January 30, 2009. - Hennessy, John A., Goldberg, David. 1996. Computer Architecture: A Quantitative Approach. San Francisco, CA: Morgan Kaufmann Publishers. ISBN 1558603298. - MIPS Technologies, Inc. 2005. MIPS32® Architecture For Programmers Volume II: The MIPS32® Instruction Set. MIPS Technologies, Inc. Retrieved January 31, 2009. - Smotherman, Mark. 2005. History of Multithreading. Retrieved January 31, 2009. - von Neumann, John. 1945. First Draft of a Report on the EDVAC. Moore School of Electrical Engineering, University of Pennsylvania. Retrieved January 31, 2009. - Weik, Martin H. 1961. A Third Survey of Domestic Electronic Digital Computing Systems. Ballistic Research Laboratories. Retrieved January 31, 2009. All links retrieved April 28, 2013. - Processor Design: An Introduction – (Detailed introduction to microprocessor design. Somewhat incomplete and outdated but still worthwhile.) - How Microprocessors Work. - Pipelining: An Overview – (Good introduction to and overview of CPU pipelining techniques by the staff of Ars Technica.) - SIMD Architectures – (Introduction to and explanation of SIMD, especially how it relates to personal computers, by Ars Technica.) New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution.
Credit is due under the terms of this license that can reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation. Note: Some restrictions may apply to use of individual images, which are separately licensed.
By the end of the lesson, students will have been introduced to possessive forms of nouns and will have practiced using them. By the end of the lesson, students will have had a chance to speak about their families using possessives. Procedure (34-41 minutes) The teacher asks the students about their family members' names and in this way reviews the previous lesson, possessive adjectives. 1. The teacher uses a picture of a cat and writes two sentences on the board - a correct and an incorrect sentence: "Sandy is Parinaz's cat." and "Sandy is Parinaz cat." 2. She asks the students to choose the correct sentence about the owner of the cat. The teacher asks Ss why they have chosen that answer. 3. The teacher uses a picture of a house and asks: "Do you think it is my house or my parents' house?" After eliciting the answers, she writes two sentences on the board: "It is my parent's house." and "It is my parents' house." and asks the students to choose the correct form. 4. The teacher drills the correct sentences. 5. She writes down the rules on the board. 1. The teacher uses a HO in which there are six sentences. The students should choose the correct possessive form of the noun in each sentence. The aim of this exercise is to help students distinguish the correct form of possessives - 's or s'. The students compare their answers in pairs. Then, the teacher asks some Ss randomly to read out their answers and circles the correct answers on the board. 2. The teacher changes the grouping pattern before having the Ss do the next activity. 3. The teacher asks the students to read a text (reading handout) and answer 5 questions. The questions have all been designed to focus students' attention on the possessive forms of the nouns. The students compare their answers with their partners, and some Ss are asked by the teacher to come to the board and write the answers. 1. The teacher writes a set of questions about family on the board.
2. She puts the students in groups of three and gives them each a piece of paper. 3. The teacher has the students ask each other the questions and write down the answers.
The district curriculum guide suggests that we delve into sequences and series. Since there are no required standards, I have the opportunity to structure more exploratory activities in the unit. Students will study the typical sequences and series concepts/skills in their next math course (precalculus). I borrowed from Jo Boaler's YouCubed Week of Inspirational Math, Henri Picciotto's work, and an NCTM Student Explorations piece (What Shapes Do You See, Jan 2012) to blend together five exploratory lessons. - Number Patterns - Growing Shapes - Patterns in a Triangle - Staircase Sums - Averages and Sums As we process students' explorations each day, we will address these concepts: - The difference between an arithmetic and geometric sequence - The difference between a sequence and a series - The difference between convergent and divergent series We will leave the formulas and details for next year's unit. Today was day 1 ... and it was a lot of fun! I wish I had an audio recording or pictures, but I don't. First, students completed a short exam review learning check with partners. I loved hearing the buzz in the room as partners convinced one another how to find the solutions. (We are reviewing a small set of questions in each class for the next 2 weeks.) Then I gave a brief introduction to our new unit. I gave them copies of the visual numbers from Boaler's lessons. Students got excited when they realized prime numbers were represented by a circle in the visual number display. Other students were stymied at first - one said, "I just see a bunch of little circles." I overheard students arguing over how to represent the number 36. They did a great job with the consecutive number sums. They caught on quickly when they were allowed to add negative numbers and zero to the patterns. Only the hundreds chart patterns slowed them down a bit, but it was hilarious to watch light bulbs ignite as they realized the patterns. Then they argued about how to write those patterns algebraically.
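For anyone who wants to replay the consecutive-sum part of the staircase-sums exploration at home, here is a rough sketch; the function name is mine and the search is brute force, which is all the exploration needs:

```python
# Find every run of two or more consecutive positive integers
# (a "staircase") that sums to a target number n.

def consecutive_sums(n):
    """Return each staircase of 2+ consecutive positive integers summing to n."""
    runs = []
    for start in range(1, n):
        total, k = 0, start
        while total < n:       # keep climbing the staircase
            total += k
            k += 1
        if total == n and k - start >= 2:
            runs.append(list(range(start, k)))
    return runs

print(consecutive_sums(9))   # [[2, 3, 4], [4, 5]]
print(consecutive_sums(16))  # [] -- powers of 2 have no staircase
```

The empty result for 16 is the pattern students usually argue their way toward: the powers of 2 are exactly the numbers with no staircase of positive integers (which is why allowing negatives and zero opened things up).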
I am pumped to see what happens in the future lessons! Love that we get to wrap up the year with explorations!
Clinical Trials 101 Understanding the Basics of CCFA Studies and Clinical Trials What is a clinical trial? Clinical trials are simply biomedical or health-related research studies in patients that follow a pre-defined protocol (a set of rules to follow). Outcomes of these trials are measured by the researchers or investigators who initiated the study. Why should you participate? By participating in a medical study or clinical trial, you can have a more active role in your own health care, gain access to new research treatments before they become widely available, and help others by contributing to medical research. The process of drug approval can be a lengthy one. According to the Tufts Center for the Study of Drug Development at Tufts University, it takes an average of 15 years for an experimental drug to go from lab to patient. Your participation in a trial can last over a year (though you may withdraw at any time) and it may take even longer to analyze the results. What happens during a clinical trial? Researchers begin by testing compounds (drug preparations involving several ingredients), adding them to enzymes, cell cultures, or cellular substances to see which combinations improve the compound's performance. If a compound succeeds in the test tube, it's time to test it on animals. Researchers generally test drugs on two or more species, because the agent might affect each differently. These tests determine any toxic side effects and the drug's safety at various doses. Animal testing also helps researchers to see how a drug is absorbed into the bloodstream, and how quickly it is excreted. According to Tufts, 5 in 5,000 compounds that undergo preclinical testing make it to human testing (and one of those five is approved). Phase I (Initial Safety Trials): The drug is first tested in a small group of patients to determine: - Appropriate dose range - How the drug is metabolized (broken down and cleared by your body) in order to determine the best way to administer treatment
Phase II (Pilot Trials to establish the best dose): In this phase, a slightly larger group of patients will be tested to determine: - Whether the treatment works - The best doses to test in a larger population. Phase III (Larger Trials): If the pilot trials are promising, the Phase III trials demonstrate: - Safety and effectiveness in larger trials involving hundreds of patients. In the U.S., if two large Phase III trials are positive, the drug can be presented to the FDA in a New Drug Application (NDA). If approved, the drug can then be marketed for the specific indication, such as to maintain remission in ulcerative colitis. However, once approved, doctors can prescribe it for any indication. Phase IV: In some studies, a Phase IV is necessary. In this phase, the researchers are sometimes required to extend the prior indication to different doses, dose schedules, or formulations. What types of protocols do clinical investigators need to follow? The rules and regulations of a clinical trial are designed by the sponsor. This is usually a pharmaceutical company, although research institutions and health organizations also fund investigations. The sponsor designs a protocol - a detailed description of how the trial should be conducted - for clinical investigators, the physicians who conduct the trial at various locations. The protocol will describe the type of trial involved. Often, preliminary trials are open label, meaning that patients receive the experimental medication at a dose that is known to the patient and the doctor. In controlled trials, some participants receive the trial drug while others receive a placebo (a harmless substance that looks identical to the drug on trial). Many trials are randomized, meaning patients are randomly assigned to receive either drug or placebo. Studies also can be double-blinded, meaning that neither patient nor physician knows whether drug or placebo is being taken, or at what dosage.
Blinding is important to prevent bias in the results. Sometimes if a patient or physician knows a drug is being given, depending on their feelings about the drug, they may subjectively think things are improving or worsening. If they don't know the treatment being given, then any response to treatment reported is more likely to be accurate. According to FDA regulations, an institutional review board (IRB), made up of healthcare professionals and lay people from the facility and community where the trial is taking place, keeps tabs on that entity's involvement in the trial. The IRB makes sure that all FDA and protocol regulations are adhered to, and reviews patient recruitment, advertising, and potential risks. What should you ask if you are thinking about participating? Enrolling in a trial is a careful process for the patient as well. CCFA plays a part in bringing trials to the attention of potential participants. However, the patient should discuss participation with his or her personal physician, and that physician should be kept posted about the patient's progress in the trial. Patients enrolling in a trial are closely monitored throughout by the clinical investigators. On the first visit, the patient should be ready with a list of questions and concerns: - What is the purpose of the trial and how long will it last? - Who is the sponsor? - Who has reviewed and approved the trial? - What kinds of tests and treatments are involved? - Will there be any pain or discomfort? - What are my treatment options, other than this medication? - What are the advantages and disadvantages? - How often will I be examined? - What side effects may occur? - Will the treatment be free? - If I am harmed, what treatment would I be entitled to? The FDA protects patient rights by demanding that people enrolled in trials sign an informed consent form. This requirement mandates that the researcher provides adequate information about the study and responds fully to participants' questions.
The investigators must be certain that the patient understands all risks and responsibilities and is aware of other treatment options. For further information, call Crohn's & Colitis Foundation's IBD Help Center: 888.MY.GUT.PAIN (888.694.8872). The Crohn's & Colitis Foundation provides information for educational purposes only. We encourage you to review this educational material with your health care professional. The Foundation does not provide medical or other health care opinions or services. The inclusion of another organization's resources or referral to another organization does not represent an endorsement of a particular individual, group, company or product. About this resource Published: January 1, 2016 Outside Research Resources Track upcoming and ongoing IBD studies and trials, find answers to many common research participant questions, and learn more about the world of medical research with these links. - ClinicalTrials.gov from the U.S. National Institutes of Health - CenterWatch, Inc. lists ongoing trials for Crohn’s and colitis - FDA Office of Orphan Products promotes studies of less common diseases - Pharmaceutical Research and Manufacturers of America (PhRMA) represents more than 100 pharmaceutical companies engaged in research - Warren Grant Magnuson Clinical Center lists research studies led by the U.S. National Institutes of Health - Clinical Trials 101 - Patients in Clinical Trials: Peggy Shares Her Experience - Clinical Trials Glossary - New Treatments
By: Robin Donaldson, Chief Operating Officer, Indiana Youth Services Association & NSPN Advisory Board member Adolescence is defined as the transition from childhood to adulthood and encompasses the broad developmental tasks of establishing a unique identity and developing one's own autonomy and independence. The brain also undergoes unique changes during adolescence that can explain many behaviors specific to this developmental period. Brain development continues well into the 20s, and the last area to develop is the prefrontal cortex, responsible for higher cognitive and emotional functioning. Prefrontal cortex development is largely influenced by experience, and this allows us to directly impact adolescent brain development. After a preadolescent cellular growth spurt in the brain, a pruning process begins in adolescence. The adolescent loses approximately three percent of gray matter in the prefrontal lobes. This pruning works on a "use it or lose it" principle, so it is important to repeatedly expose young people to the skills and knowledge needed to become successful adults. Repeated use and exposure will strengthen the neural connections that support these skills so they are not lost during the pruning process. The speed and efficiency of neural communication is determined by neural sensitivity known as long-term potentiation. Under normal conditions, long-term potentiation is highest during adolescence. However, adolescents face many factors that can inhibit long-term potentiation. Alcohol and substance use, chronic stress, sleep deprivation, and stimulants all interfere with long-term potentiation and slow neural communication. We can help teens minimize exposure to these risk factors through education. Due to the pattern of brain development, teenagers have greater difficulty reading the emotions of others, a function of the prefrontal lobes, and experience emotions at greater intensity than adults due to a reactive limbic system.
We can help develop the neural pathways between the limbic center and the prefrontal lobes by helping teens examine and identify emotions (their own and others') and by helping them learn to "put a brake" on their emotions and stop and think before reacting. Activities such as "emotion charades" allow teens the opportunity to both recognize and express emotions. Research now demonstrates a significant link between exercise and brain functioning and development. Exercise increases blood flow and oxygen levels in the brain, which is necessary for optimal cellular growth and function. Exercise also impacts neurotransmitter levels in the brain, which can help teens better regulate their emotions. Knowledge is power, and teens should be educated about how the choices that they make during adolescence can have lifelong impacts. This blog is the final post in a three-part series on adolescent brain development. Click on the links below to read parts one and two:
Black radicalism has taught that any serious "conversation about race" must address the systemic racism that results in patterns of racial inequality in the judicial system, the national and global economies, policing, the education system, religion, popular culture and a war machine that predominantly kills non-Europeans around the world. The acquittal of George Zimmerman, the half-white/half-Peruvian neighborhood watchman who shot and killed unarmed black teenager Trayvon Martin, by a predominantly white jury in Florida in July 2013 sparked calls in the media for a national "conversation about race." However, what passes for most "conversations about race," particularly in corporate media, which shape public perception, is narrow or wrong. Right-wing commentators such as Bill O'Reilly blame black people's problems on "the disintegration of the African-American family" and other cultural pathologies, while liberal pundits typically point to conservatives as the sole racists in the country. Left out are black radical critiques of systemic racism. The marginalization of black radicalism has made honest conversations about race difficult to initiate - and erases a key piece of American history. Defining Black Radicalism Racism is a system of power that oppresses people of African descent and other non-European peoples within the United States and around the world. Systemic racism manifests itself in the judicial system, the national and global economies, policing, the education system, religion, popular culture and a war machine that predominantly kills non-European peoples around the world. The foundation of this system as it exists in the United States was laid down by the trans-Atlantic slave trade, in which black African people were stolen from Africa by European colonizers to work as slaves.
Slaves worked in mines, rice fields or construction or on plantations. Their labor would be used to produce commodities that were later sold in international markets for profit, which helped create modern global capitalism. Slavery was protected by robust political and legal systems that designated slaves as property to be bought and sold, rather than human beings. The system curtailed the rights of all African-Americans, including those who were not enslaved. Slaves were brutally treated with torture, lynchings, whippings, rape and other forms of cruelty inflicted upon them. This created a system of racial hierarchy that put whites on top and blacks - free and slave - on bottom. Slavery transferred wealth from black labor to white property owners because African slaves were not paid for their work. For centuries, slavery allowed whites - including those who did not own slaves - to amass wealth for their communities, while blacks were politically and economically oppressed. This laid the foundation for a massive wealth gap between blacks and whites that persists to this day, more than a century and a half after slavery's demise. A 2013 study by the Urban Institute found that in 2010, white families' average wealth was $632,000, black families' $98,000 and Latinos' $110,000. Redlining (the practice of denying or making it difficult for residents in poor, non-white communities to receive financial services like getting a mortgage or insurance or borrowing money), gentrification, discriminatory lending practices, no access to credit, low incomes and the recent recession have all prevented - and continue to prevent - African-Americans from accumulating wealth in their communities. Moreover, slavery had dismal repercussions for the African continent. A 2007 Harvard study by Dr. Nathan Nunn analyzed the impact of the trans-Atlantic and the older but smaller trans-Saharan, Indian Ocean and Red Sea slave trades on Africa's economic development. 
Nunn found that "the slave trade caused political instability, weakened states, promoted political and social fragmentation and resulted in a deterioration of domestic legal institutions." Additionally, the "countries from which the most slaves were taken (taking into account differences in country size) are today the poorest in Africa." Nunn concluded, "if the slave trades had not occurred, then 72% of the average income gap between Africa and the rest of the world would not exist today and 99% of the income gap between Africa and the rest of the underdeveloped world would not exist." After slavery ended in the 1860s, racism still persisted through the establishment of Jim Crow laws, a system that legalized racial segregation in the United States. This lasted for about a century. Jim Crow has been replaced by a mass incarceration system that disproportionately imprisons black people for nonviolent drug offenses, even though blacks and whites use drugs at roughly the same rates. Oppressive policing reflects similar entrenched racism: Every 28 hours, a black person is extrajudicially killed by a police officer, a security guard or a self-appointed vigilante such as Zimmerman. Systemic racism manifests itself in multiple facets of society. Patterns of racial inequality exist in the judicial system, the national and global economies, policing, the education system, religion, popular culture and a war machine that predominantly kills non-European peoples around the world. As a political tradition, black radicalism would look at these phenomena and diagnose them as consequences of a racist power structure that oppresses black people. Its critique of white supremacy is radical in that it does not look at individual bigots, prejudiced beliefs, individual privileges or one political party as the root cause of black people's suffering.
The root cause of black people's misery, to the black radical, is a racist power system, the purpose and design of which is to keep their people miserable. Reforming, improving or integrating into the racist power system is not enough for a black radical because the system is irredeemably rotten at its core. That is why Dr. Martin Luther King Jr., near the end of his life, worried that black people were "integrating into a burning house." Black radicalism is more of a collective political tradition than a coherent ideology. It encompasses ideologies such as Pan-Africanism, black nationalism, Black Marxism and black internationalism with varying beliefs and goals among them. What unites the black radical tradition is the challenging of systemic racism, the liberation of African peoples, and the goal of achieving fundamental change. If anything, black radicalism is a tradition of African peoples' resistance and self-determination. Roots of Black Radicalism The roots of black radicalism trace back to African resistance against European enslavement. Professor Cedric Robinson, in his book Black Marxism: The Making of the Black Radical Tradition, writes about not just the well-known slave-led Haitian revolution but also slave rebellions in Brazil, the United States and other colonies. Some slaves ran away and formed maroon communities. Harriet Tubman, the famous African-American abolitionist who escaped slavery, helped hundreds of slaves escape to freedom. Even on the plantations, slaves resisted in subtler ways, such as refusing to do work, pretending to be sick, working slow, stealing from their masters or damaging property. 
Robinson also explains that "for the period between the mid-sixteenth and mid-nineteenth centuries, it was an African tradition that grounded collective resistance by Blacks to slavery and colonial imperialism" and "it had been as an emergent African people and not as slaves that Black men and women had opposed enslavement."1 While slavery and colonialism worked to rob slaves of their African culture, they still retained parts of it. Slaves told folktales and fables that reflected various African oral traditions, incorporating symbols and themes rooted in African cultures. Slaves sang and danced with "field hollers" and "call and response" based on African musical forms. Enslaved women made quilts, rugs and baskets with African patterns. In addition, slaves fashioned gourds into musical instruments, such as drums and banjos, similar to those used in parts of Africa. Drumming also served as a secret method of communication for slaves, just as African drumming was used for religious and ceremonial functions, thus becoming a tool of resistance. African rhythms, drumming and oral traditions strongly influenced musical genres like blues, jazz, rock, R&B, samba, reggae and rap/hip-hop. Retaining bits of their African culture provided a strong sense of collective self that formed the basis of black resistance against their oppression. Within black radicalism is the tradition of black internationalism. This tendency emerged as a strategic response to the transnational nature of slavery's oppression. African slaves were brought to European colonies in the United States, the Caribbean and throughout much of Central and South America. A slave rebellion in one colony inspired slaves elsewhere to follow suit.
The successful slave-led Haitian revolution inspired African slaves in the United States. Black internationalism views African-Americans and other members of the African diaspora as a transnational people. It is true that there are cultural and experiential differences between African-Americans, Afro-Latinos, Afro-Caribbeans and Black Europeans. Even continental Africans have tribal and ethnic differences, which outside powers have exploited and which have contributed to horrific conflicts. But they do share obvious racial features, such as dark skin and kinky hair; cultural similarities, particularly in music; African ancestral heritage; and shared collective oppression under European slavery, colonialism and racism. African peoples' transnational identity was recognized by the international community when the United Nations proclaimed 2011 as the International Year for People of African Descent. The year's event page states: "In proclaiming this International Year, the international community is recognising that people of African descent represent a distinct group whose human rights must be promoted and protected. People of African descent are acknowledged in the Durban Declaration and Programme of Action as a specific victim group who continue to suffer racial discrimination as the historic legacy of the transatlantic slave trade. Even Afro-descendants who are not directly descended from slaves face the racism and racial discrimination that still persist today, generations after the slave trade ended." (emphasis added) Thus, African peoples throughout the diaspora, despite their differences, share not just ancestral heritage and culture but political fates. It is this internationalist impulse that forms the basis of black political ideologies like Pan-Africanism, black opposition to imperialism and black support for Third World struggles.
Civil War Victory and Subsequent Repression of Black Radicals The end of slavery in the United States was one important victory for African-Americans and abolitionists. Reconstruction's abrupt end after the Civil War and the inauguration of Jim Crow created new political challenges for African-Americans. One challenge was addressing economic oppression experienced by African-Americans after slavery. Civil rights groups not only challenged legalized racial segregation but also incorporated economic justice in their agendas, as professor Risa L. Goluboff explains in her book The Lost Promise of Civil Rights. According to Goluboff, it was a "particular combination of racial subordination and economic exploitation that made the political economy of the rural South unique."2 The tenancy system in the South during the late 1800s involved black farmers and sharecroppers, who wanted economic independence, living on the land of usually white landowners, who wanted subordinate black laborers. Workers paid landowners with money made from the crop or a share of it. If not, they worked as wage laborers. But wage workers and farmers were kept in debt by landowners.3 The point of Jim Crow segregation, along with vagrancy and other laws, was to "keep African Americans subordinate and to keep labor cheap."4 Thus, black workers in the South were not just concerned about racial segregation but also about economic disenfranchisement. Additionally, blacks were terrorized by whites through lynchings and other forms of brutal violence. Ida B. Wells, a black radical journalist, used muckraking journalism and her rhetorical skills to expose and speak out against lynching. In the Northern and Western industrial economies, there was no legal regime of racial segregation as in the South, but blacks were marginalized in other ways. Black workers often were not adequately compensated for their work. 
Businesses avoided hiring blacks, and white managers were often indifferent to the concerns of black workers. Racism from white workers made work environments hostile to black workers. Black workers and organizers demanded not just legal nondiscrimination but also better wages and full employment. The economic plight of African-Americans, along with organizing by the Communist Party, drew many black thinkers and activists to communism, including Paul Robeson and Langston Hughes. Famed black intellectual W.E.B. Du Bois embraced communism and fathered the Pan-Africanist movement after co-founding the NAACP. In 1920, Du Bois advocated "the careful, steady increase of public democratic ownership of industry, beginning with the simplest type of public utilities and monopolies" in a collection of essays called Darkwater, thereby supporting a core tenet of socialism - workers' control of production. He also wrote in the same piece, "Perhaps the finest contribution of current Socialism to the world is neither its light nor its dogma, but the idea back of its one mighty word - Comrade!" Another issue was the link between the oppression of African-Americans in the United States and European colonialism abroad. Generations of enslavement, racial discrimination and other forms of domestic oppression made many African-Americans empathize with other dark-skinned, colonized peoples as fellow oppressed comrades - especially their brothers and sisters in Africa. Thus, many black people viewed themselves as a Third World people and questioned American nationalism. Indeed, slavery not only built American capitalism but also allowed the American empire to take off more quickly than others. Black internationalism politicized this sentiment.
During the Philippine-American War that followed the 1898 Spanish-American War, many black soldiers in the Philippines befriended the natives and were angered when white troops called the Filipinos "nigger."5 Some of those black soldiers defected from the US Army and joined the Filipino rebels in their fight for independence from American imperialism. Jamaican political leader, entrepreneur, orator and journalist Marcus Garvey advocated black nationalism, in which people of African descent throughout the diaspora would return to Africa to set up their own independent nation. He founded the Universal Negro Improvement Association in 1914 to promote black political and economic independence and the shipping company Black Star Line in 1919 to foster commerce between black communities. The Black Star Line shut down in 1922 as the organization struggled with poor management, financial troubles, charges of mail fraud against Garvey and sabotage by J. Edgar Hoover's Bureau of Investigation, predecessor to the modern FBI. In 1946, Du Bois, the NAACP, the National Negro Congress (NNC) and others petitioned the United Nations to redress governmental oppression of African-Americans as a human rights violation.6 African-American author Richard Wright attended the 1955 Bandung Conference, where newly independent Asian and African countries pledged mutual cooperation and opposition to colonialism and neocolonialism by any nation. This led to the creation of the Non-Aligned Movement. African-American revolutionaries were inspired by Third World liberation movements, including successful revolutions in Cuba, Algeria and Ghana. Malcolm X, whose black nationalism fused with Third World liberation, visited Kwame Nkrumah in Ghana and Gamal Abdel Nasser in Egypt and met Che Guevara. He also publicly opposed the Vietnam War before it was popular to do so. Shortly before his assassination in February 1965, Malcolm X delivered a speech linking the struggles of African-Americans with Third World liberation movements: "There's a worldwide revolution going on. ...
What is it revolting against? The power structure. The American power structure? No. The French power structure? No. The English power structure? No. Then what power structure? An international Western power structure. An international power structure consisting of American interests, French interests, English interests, Belgian interests, European interests. These countries that formerly colonized the dark man formed into a giant international combine. A structure, a house that has ruled the world up until now. And in recent times there has been a revolution taking place in Asia and in Africa, whacking away at the strength or at the foundation of the power structure." Black activists like Bayard Rustin opposed US militarism, including the war in Vietnam. King opposed the Vietnam War later in his life. The Black Panther Party, known for its breakfast program and armed self-defense, maintained an office in Algeria to which many black revolutionaries retreated. In the 1930s and '40s and during the late-'60s Black Power movement, black radicalism thrived as a political force to be reckoned with. Black activists tied their claims for civil rights to economic justice and Third World liberation. Debates about black nationalism, communism, internationalism, reformism and Third World revolution were common among black thinkers and activists. Black radicalism was a vibrant element within left-wing and African-American politics because it provided radical fuel to the civil rights and antiwar movements. However, a multitude of factors decapitated the movement. Cold War politics played a considerable role. UC Irvine professor Sohail Daulatzai, author of Black Star, Crescent Moon: The Muslim International and Black Freedom Beyond America, explained to Truthout that the Cold War was a "coded race war. It was about the darker peoples being subject to US and Soviet Cold War aims."
While the United States and Soviet Union never directly attacked each other, the Third World was their proxy battlefield. Fearing newly independent Asian and African nations would embrace communism or go their own route, the United States projected power in those regions through direct and indirect military interventions. The United States also pursued a diplomatic war. The Soviet Union and the Third World lambasted the United States for its racist treatment of African-Americans. To counter, the United States made concessions to the civil rights movement and passed desegregation laws in return for black support for its anti-communism efforts. Thus came desegregation in the armed forces and diplomatic corps. Groups that petitioned the UN to redress America's violations of African-Americans' human rights were branded "communist and un-American" by the FBI.7 The United States defeated the petition in the UN. Daulatzai said this was "part of the chess game that the United States played to project out to the rest of the world, especially the Third World, that it was racially progressive." He added, "Ultimately, US expansion into the Third World was about undermining real national liberation." That also played out domestically by weakening the black liberation movement. McCarthyist witch hunts during the 1950s targeted outspoken leftists of all colors, including African-Americans. Hughes, for example, had an FBI file. Thus, the mainstream civil rights movement strategically dropped economic justice and internationalist demands to focus on eliminating legalized racial discrimination, a fight it had a better chance of winning. In 1964 and 1965, the movement did win, with the passage of the Civil Rights Act and the Voting Rights Act, respectively. But even King became frustrated with integrationist efforts because they did not tackle poverty or militarism. Through surveillance and infiltration, the FBI and its counterintelligence program, known as COINTELPRO, severely repressed the black radical movement. The FBI spied on and amassed long files on black leaders such as King and Malcolm X. Black political groups like the Black Panther Party and black student unions were infiltrated by FBI spies. Agents wiretapped phones and sent false letters to those in the movement, including one to King encouraging him to commit suicide. Informants and provocateurs sowed division, distrust and paranoia among black radical groups. Government surveillance covered the entire African-American community, extending even to studies of what music black people listened to. The repression reached the level of political assassination when Chicago police, under FBI direction, shot and killed Black Panther organizers Fred Hampton and Mark Clark on December 4, 1969. While COINTELPRO decimated the Black Panther Party, militant offshoots sprang up, such as the Black Liberation Army, which committed violent acts, like robberies and murders, for insurrectionist purposes. However, these actions achieved little. Many black revolutionaries were imprisoned or forced to flee the country. Notable examples are Mumia Abu-Jamal, who remains in prison after being convicted - under highly contested circumstances - of the 1981 murder of a police officer, and Assata Shakur, a Black Panther leader who escaped prison and fled to Cuba, where she has lived in political asylum since 1984. In 1977, Shakur was convicted of the 1973 murder of a New Jersey state trooper. However, she was shot in the altercation, and her role in the murder is still heavily disputed. The Obama administration recently placed her on the FBI's most wanted terrorists list.
As black revolutionary leaders and organizations were successfully repressed, many of those groups' foot soldiers returned to their communities with little hope. Deindustrialization decimated the manufacturing jobs that black workers relied on. As a result, former foot soldiers of black radical groups formed street gangs, which is how the notorious Crips and Bloods came to be.8 The rise of crack-cocaine trafficking in the inner cities during the late 1970s and 1980s made gang life more lucrative. Those drugs, however, did not originate in black communities. In the 1980s, CIA-backed Contras funneled cocaine from South America into America's inner cities to raise extra funds for their war against the leftist Sandinista government in Nicaragua. This fueled the 1980s crack epidemic that ravaged black communities. Under the guise of the "War on Drugs," the prison system and draconian police tactics expanded, leading to the arrest, harassment, incarceration and murder of large numbers of predominantly black and brown people. "Our dismal economic condition has silenced us," Margaret Kimberley, a columnist for Black Agenda Report, said in an interview. "The jobs black people depend on are gone." Deindustrialization is not the only phenomenon that hurt black labor. Recent austerity measures have slashed many government jobs that African-Americans rely on; African-Americans are 30 percent more likely to hold public-sector jobs than the general workforce. As of August 2013, the official black unemployment rate is 13.4 percent, compared with 6.7 percent for whites. Meanwhile, African-Americans make up nearly 40 percent of the prison population, even though they are 13 percent of the national population. Kimberley added, "We don't even have the stability to marshal our forces because if half your people are in jail and no one has jobs, that really just decimates the population. It makes it very difficult to galvanize around anything substantive."
Ajamu Baraka, a human rights defender with roots in the Black Liberation Movement, explained to Truthout that after the 1972 National Black Political Convention in Gary, Indiana, "there was a real move toward electoral politics by elements of the black petit bourgeoisie. That split itself off from the more radical elements of the black liberation movement that were under severe repression from the state." Professional and upper-middle-class African-Americans composed much of the mainstream civil rights movement. As a class, they were more concerned with eliminating legal barriers to integration within mainstream society than with tackling deeper problems like inequality, poverty or militarism.9 Tactically, they focused on legal battles and, later, electoral strategy rather than radical grassroots organizing. As black radicalism was crushed, this reformist element of black politics won. Mainstream black politics hitched itself to the Democratic Party. Integration, narrow multicultural diversity and piecemeal reform became the dominant goals. Now black people overwhelmingly vote Democrat, and most mainstream black commentators support the Democratic Party. But this also meant that black politicians, including those in the Congressional Black Caucus, embraced corporate money and neoliberal policies. As a result, the country's first black president, Barack Obama, embraces many policies that harm African peoples domestically and abroad - cutting food stamps, privatizing education, domestic surveillance, police militarization, globalized extrajudicial killing and perpetual war. The decimation of black radicalism has made a national conversation about race difficult in the age of Obama. Obama and the Democrats are held up as anti-racist vanguards, even as they implement policies that hurt the black community. This is no accident.
It is the inevitable consequence of the repression of black radicalism, a tradition that has long opposed imperialism, systemic racism and capitalism. But it's not all doom and gloom. The black radical tradition continues to exist in outlets like Black Agenda Report and Pambazuka and in elements of the movement against mass incarceration and other struggles. It's weakened, but it continues. The prospects of increasing black radicalism's impact are dim. The assimilation of black political leaders such as Obama into the power structure has led many to confuse black representation with black liberation, even though they are not the same. That makes systemic racism harder to challenge. But as long as black intellectuals, activists, journalists and others keep their radical tradition, culture and history alive, black resistance politics will not go away.

1. Robinson, Cedric J., Black Marxism: The Making of the Black Radical Tradition, The University of North Carolina Press, 1983, 2000, pp. 169-171.
2. Goluboff, Risa L., The Lost Promise of Civil Rights, Harvard University Press, Cambridge, Massachusetts, 2007, p. 77.
3. Ibid., pp. 58-59.
4. Ibid., p. 79.
5. Zinn, Howard, A People's History of the United States, First Perennial Classics edition, HarperCollins Publishers Inc., New York, 2001, p. 319.
6. Normand, Roger and Zaidi, Sarah, Human Rights at the UN: The Political History of Universal Justice, Indiana University Press, 2008, p. 162.
7. Ibid., p. 163.
8. "Bastards of the Party" (documentary).
9. Goluboff, The Lost Promise of Civil Rights, pp. 175-176.
Earth is a planet of unfathomable biodiversity. Scientists have already identified nearly 2 million individual species, and even conservative estimates state that more than 9 million more remain undiscovered [source: O'Loughlin]. The planet's amazing variety of life is more than just an academic curiosity; humans depend on it. For instance, farmers rely on worms, bacteria and other organisms to break down organic waste and keep soil rich in nitrogen, processes vital to modern agriculture. Pharmaceutical companies use a wide array of plants and animals to synthesize medications, and we can only guess how many medicinal breakthroughs reside in Earth's undiscovered species. A stable food supply and a source for pharmaceuticals are only a couple of the benefits Earth's biodiversity provides. Earth's plant life mitigates the effect of global warming by absorbing carbon dioxide, yet 90 percent of those plants (and nearly two-thirds of all food crops) depend on the nearly 190,000 species of pollinating insects [sources: New York Times, U.S. Forest Service]. Scientists from Cornell even went so far as to add up the value of the different services Earth's plants and animals provide, and after factoring everything from ecotourism to biological pest control, they arrived at a grand total of $2.9 trillion -- and that was back in 1997 [source: Science Daily]. Clearly, the planet would be a much different place without its rich and diverse ecosystems, and while it's hard to imagine what that place would look like, we may not have to if we can't protect the planet from the looming threats to biodiversity. Climate change is increasingly forcing species away from their habitats in search of more favorable temperatures, and scientists fear not all species will survive the change. Overhunting, which famously led to the extinction of the passenger pigeon, continues to endanger animals like the rhino. 
Invasive species like kudzu and the brown tree snake, introduced by humans to non-native environments, can rapidly drive native species to extinction. In the United States, invasive species cause between $125 billion and $140 billion in damage every year, and they are thought to have played a part in nearly half of all extinctions worldwide since the 1600s [sources: Thomas, University of Michigan]. The greatest of all threats to Earth's biodiversity, however, is deforestation. While deforestation threatens ecosystems across the globe, it's particularly destructive to tropical rainforests. In terms of Earth's biodiversity, rainforests are hugely important; though they cover only 7 percent of the Earth, they house more than half the world's species [sources: NASA, University of Michigan]. Through logging, mining and farming, humans destroy approximately 2 percent of the Earth's rainforests every year, often damaging the soil so badly in the process that the forest has a difficult time recovering [source: University of Michigan]. As their habitats disappear, plants and animals are forced to compete with one another for the remaining space, and those that can't, go extinct. In recent history, deforestation has led to approximately 36 percent of all extinctions, and as habitat loss accelerates, that number is bound to increase [source: University of Michigan]. Deforestation is particularly difficult to stop because it has so many causes. While it's easy to blame irresponsible logging and mining companies for the devastation, their reckless practices are in some ways a symptom of larger problems. For instance, many rainforests are located in developing countries that lack the resources to enforce environmental regulations. These countries also benefit greatly from the economic activity that the companies generate, giving them even less incentive to discourage deforestation.
What's more, the indigenous people who make their homes in the rainforests regularly clear the land to make room for plantations and cattle pastures, and efforts to stop this activity directly impair those people's livelihoods. Fortunately, hope remains for the Earth's rainforests. In Brazil, satellite imagery revealed that the rate of deforestation fell by 49 percent compared to the previous year, thanks in part to stricter environmental regulations and increased enforcement. Recent studies have also shown that as a country's economic conditions improve, its deforestation rate slows considerably as the indigenous populations rely less on the rainforest's resources for survival. Finally, nonprofit groups like the World Wildlife Fund and the Sierra Club continue to raise awareness about the importance of Earth's rainforests. One such nonprofit, the Nature Conservancy, has even started working with local Brazilian municipalities to help landowners register their plots of the rainforest, a practice that will help hold them accountable to Brazil's environmental regulations. The collective efforts of governments, nonprofits and indigenous peoples may be enough to stop the destruction before it's too late.
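The loss figures quoted above compound year over year. A back-of-the-envelope sketch (my own illustration, using the roughly 2 percent annual loss rate cited earlier) shows how quickly that adds up:

```python
# Hypothetical sketch: cumulative rainforest loss at a constant
# ~2 percent annual rate (the figure cited in the article).

def fraction_remaining(annual_loss, years):
    """Fraction of the original forest left after compounding annual losses."""
    return (1.0 - annual_loss) ** years

# At 2 percent per year, roughly half the forest is gone in about 35 years:
print(fraction_remaining(0.02, 35))  # ~0.49
```

The point of the compounding is that a seemingly small annual rate halves the remaining forest in a few decades.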
Belted kingfishers are among the most widely distributed birds in North America. In Ohio, the kingfisher can be found year-round wherever open, fish-occupied waters are available. Belted kingfishers can often be seen perching or hovering over water in search of their primary prey, small fish. Once a kingfisher locates a small fish, it dives head first, vertically or at an angle, aiming right for the prey. After catching a fish in its long, thick bill, it flies back to its perch and bangs the fish against the branch or trunk of the tree. When the fish is stunned or dead, the kingfisher gives it a little toss in the air, catches it and swallows it whole. Belted kingfishers also prey on crayfish, frogs, tadpoles and other aquatic dwellers. Later, kingfishers, like owls, regurgitate pellets containing the bones and other indigestible materials. Belted kingfishers are solitary except during the breeding season, early April to mid-July. During this time males will defend their territory against other kingfishers. When an unfamiliar kingfisher intrudes on an occupied territory, the resident male becomes very aggressive, giving chase in a rattling, calling flight that continues until the trespasser vacates. An average territory can be a little over half a mile long. It can take belted kingfishers three days to three weeks to excavate their nesting tunnel in a riverbank or lakeside bluff. While building the tunnel, males and females chip away at the dirt with their long, thick bills, then use their feet, on which two of the toes are fused together, as a plow to push the loose dirt out of the tunnel. The tunnel entrance slopes upward, is 3.5 to 4 inches wide and can be up to six feet long. At the end of the tunnel is a small, almost perfectly spherical chamber for egg laying. The female kingfisher usually lays six to eight glossy white eggs. Both the male and the female incubate the eggs for 23-24 days. The young are altricial and naked.
The chicks' bristly feather quills grow in about a week, and their eyes open in about two weeks. The young are tended by both parents, which feed them regurgitated food. Young leave the nest 30-35 days after hatching. Surprisingly, human activity, such as digging gravel pits and building roads, has helped the belted kingfisher by creating banks where kingfishers can build nests and expand their breeding range. Kingfishers appear to be less susceptible to environmental pollutants than other fish-eating birds; however, excellent water quality must be protected to sustain the fish populations that the belted kingfisher relies on for survival and reproduction. What can you do? Protect our watersheds. A watershed is a region draining into a river, river system or other body of water. Anything harmful, such as home toxins, automotive fluids and fertilizers, that runs off the land can eventually get into part or all of a watershed. These pollutants can have life-threatening effects on organisms in or around the river.
Spherical coordinate systems

On the Earth (Terrestrial): Latitude (measured N and S from the Equator); Longitude (measured E and W from the Prime Meridian — fixed by international agreement in 1884 as passing through Greenwich, England).

Note to self: NEED DIAGRAMS SHOWING/COMPARING: Terrestrial coordinate system; Terrestrial vs Equatorial; Terrestrial vs Ecliptic; Ecliptic vs Equatorial; Horizon; Horizon vs Equatorial; Galactic?

In the Sky (Celestial): Several coordinate systems, each named by the circle which corresponds to the Equator in the Earth-based system. The Equatorial system is based on the Celestial Equator (and the Celestial Poles). Circles parallel to the Equator are like parallels of latitude on the Earth, and we measure N and S from the Equator to the 'parallel' that a star is on to measure its DECLINATION (from zero at the Celestial Equator to 90 degrees N or S at the Celestial Poles). Circles perpendicular to the Equator are like meridians of longitude on the Earth, and we measure from the 'Prime Meridian' of the sky to the 'meridian' that a star is on to measure its RIGHT ASCENSION. EXCEPT — we don't measure E and W, but only TO THE EAST, and we measure it in time units, not degrees. We measure right ascension to the East in TIME units so that as the stars move to the West they can serve as a clock. If a star with a right ascension of 6h 45m is on "The Meridian" (the arc running from the North point on the horizon through the Celestial Pole, through the Zenith, to the South point on the horizon), it is 6:45 on a star clock. If a star with a right ascension of 12h is on the Meridian, it is 12:00 on a star clock. And if a star with a right ascension of 18h 40m is on The Meridian, it is 18:40 on a star clock. IN THIS SYSTEM every star has a particular declination and right ascension, and we could, on a celestial globe, plot the positions of all the stars in the sky and use that to see where they are relative to each other.
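The time-unit convention can be made concrete with a short sketch (the function names are my own, not from any library): right ascension converts to degrees at 15 degrees per hour, and a star's RA is the sidereal "star clock" reading at the moment that star crosses the Meridian.

```python
# Right ascension is measured eastward in time units: the sky turns
# 360 degrees in about 24 hours, so 1h of RA = 15 degrees.

def ra_to_degrees(hours, minutes=0, seconds=0):
    """Convert right ascension from time units to degrees."""
    return 15.0 * (hours + minutes / 60.0 + seconds / 3600.0)

def star_clock(ra_hours, ra_minutes):
    """The 'star clock' reads a star's RA when that star is on the Meridian."""
    return f"{ra_hours}:{ra_minutes:02d}"

print(ra_to_degrees(6, 45))  # 101.25 degrees east of the zero point
print(star_clock(6, 45))     # 6:45, as in the example above
```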
These numbers — RA (right ascension) and Dec (declination) — are almost constant for a given star, because the stars are so far away that any motion they have relative to us (or vice-versa) is too small to see without tremendous effort over times as short as a human lifetime. As a result, the positions of the stars relative to each other seem absolutely fixed to a casual observer (leading to the term the 'fixed' stars). (There are very small changes in these coordinates over long periods of time, because of proper motion and precession. In 2009 a 'quick and dirty' summary of these motions was added to The Many Motions of the Stars. The discussion of precession, although lacking diagrams, detail, and historical context, serves as an introduction to the topic, which will be fleshed out at a later date.) However, there are seven objects — the Πλανητες ('planetes'), or "wanderers" — which MOVE relative to the stars in 'short' periods of time: the Moon, the Sun, Mercury, Venus, Mars, Jupiter, Saturn (see The Wanderers).
"On the seventh space shuttle flight, astronaut Robert Crippen noticed a pit in the windshield. After landing, the windshield had to be replaced. Using x-rays, scientists figured out that the window was the victim of a paint fleck no bigger than a pencil point." -Marianne Dyson, Space Station Science To demonstrate the difference speed can make in terms of impact craters, you can use a penny, a few eggs, a bowl and a ruler (yardstick would be even better, but we don't have one.) Put the egg in the bowl and place it on a table or on the floor. Hold a penny 4 inches above the egg and drop it. Double the impact by doubling the height, from which you drop the penny (8 inches.) Keep adding 4 additional inches each time until the penny penetrates the shell. |It is hard to see in this photo, but the egg began to get dimples and cracks in it when the penny was 16-24 inches away.| You can continue to experiment. Are the dents or cracks larger as you hold the penny further away from the egg before dropping it? Now crack another egg and place part of it's shell over another egg. Drop the penny on this combination as you did before. Did this help protect the egg underneath? The astronaut and physicist Fred Whipple used this concept when he invented the "Whipple Bumper" to protect space stations. It consists of an outer aluminum wall, a layer of material between the walls and then the wall to the space station. The outer wall will take the damage without it affecting the actual wall of the space station. - Space Station Science, Marianne Dyson
Permeability in fluid mechanics and the earth sciences (commonly symbolized as κ, or k) is a measure of the ability of a porous material (often, a rock or unconsolidated material) to allow fluids to pass through it. The SI unit for permeability is m². A traditional unit for permeability is the darcy (D), or more commonly the millidarcy (mD) (1 darcy ≈ 10⁻¹² m²). The unit of cm² is also sometimes used (1 m² = 10⁴ cm²). For a rock to be considered as an exploitable hydrocarbon reservoir without stimulation, its permeability must be greater than approximately 100 mD (depending on the nature of the hydrocarbon - gas reservoirs with lower permeabilities are still exploitable because of the lower viscosity of gas with respect to oil). Rocks with permeabilities significantly lower than 100 mD can form efficient seals (see petroleum geology). Unconsolidated sands may have permeabilities of over 5000 mD. Permeability is part of the proportionality constant in Darcy's law, which relates discharge (flow rate) and fluid physical properties (e.g. viscosity) to a pressure gradient applied to the porous media:

v = (k / μ) (ΔP / Δx)

- v is the superficial fluid flow velocity through the medium (i.e., the average velocity calculated as if the fluid were the only phase present in the porous medium) (m/s)
- k is the permeability of a medium (m²)
- μ is the dynamic viscosity of the fluid (Pa·s)
- ΔP is the applied pressure difference (Pa)
- Δx is the thickness of the bed of the porous medium (m)

In naturally occurring materials, permeability values range over many orders of magnitude (see table below for an example of this range).

Relation to hydraulic conductivity

The proportionality constant specifically for the flow of water through a porous media is called the hydraulic conductivity; permeability is a portion of this, and is a property of the porous media only, not the fluid.
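As a numerical illustration of Darcy's law (my own sketch; the rock and fluid values are assumptions, not from the text), the superficial velocity through a 100 mD rock under a modest pressure gradient is tiny:

```python
# Darcy's law: v = (k / mu) * (dP / L), using SI units throughout.

MILLIDARCY = 9.869233e-16  # 1 mD in m^2 (1 darcy is about 1e-12 m^2)

def darcy_velocity(k_m2, mu_pa_s, dP_pa, L_m):
    """Superficial flow velocity (m/s) through a porous bed."""
    return (k_m2 / mu_pa_s) * (dP_pa / L_m)

# Assumed example: 100 mD rock, water (mu ~ 1e-3 Pa*s),
# a 1 MPa pressure drop across a 10 m thick bed:
v = darcy_velocity(100 * MILLIDARCY, 1.0e-3, 1.0e6, 10.0)
print(v)  # ~9.9e-6 m/s, i.e. on the order of a meter per day
```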
Given the value of hydraulic conductivity for a subsurface system, K, the permeability can be calculated as:

k = K μ / (ρ g)

- k is the permeability, m²
- K is the hydraulic conductivity, m/s
- μ is the dynamic viscosity, kg/(m·s)
- ρ is the density of the fluid, kg/m³
- g is the acceleration due to gravity, m/s².

Permeability is typically determined in the lab by application of Darcy's law under steady state conditions or, more generally, by application of various solutions to the diffusion equation for unsteady flow conditions. Permeability needs to be measured, either directly (using Darcy's law) or through estimation using empirically derived formulas. However, for some simple models of porous media, permeability can be calculated (e.g., random close packing of identical spheres).

Permeability model based on conduit flow

Based on the Hagen–Poiseuille equation for viscous flow in a pipe, permeability can be expressed as:

k = C d²

- k is the intrinsic permeability [length²]
- C is a dimensionless constant that is related to the configuration of the flow-paths
- d is the average, or effective, pore diameter [length].

Intrinsic and absolute permeability

The terms intrinsic permeability and absolute permeability state that the permeability value in question is an intensive property (not a spatial average of a heterogeneous block of material), that it is a function of the material structure only (and not of the fluid), and explicitly distinguish the value from that of relative permeability.
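The conductivity-to-permeability relation above can be sketched numerically (my own illustration; the water properties and the sample conductivity are assumed values):

```python
# k = K * mu / (rho * g): permeability from hydraulic conductivity,
# assuming water near 20 C as the working fluid.

MU_WATER = 1.0e-3   # dynamic viscosity of water, kg/(m*s)
RHO_WATER = 998.0   # density of water, kg/m^3
G = 9.81            # acceleration due to gravity, m/s^2

def permeability_from_conductivity(K_m_per_s):
    """Intrinsic permeability (m^2) from hydraulic conductivity (m/s)."""
    return K_m_per_s * MU_WATER / (RHO_WATER * G)

# Assumed example: a clean sand with K = 1e-4 m/s:
k = permeability_from_conductivity(1.0e-4)
print(k)  # ~1.0e-11 m^2, i.e. on the order of 10 darcys
```

Note that the conversion depends on the fluid: the same rock has one permeability but a different hydraulic conductivity for each fluid.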
Vol. 4 No. 6 September 11, 2000

Solar Power Satellites Desirable but Years Away

On September 7, 2000, the House Science Committee's Subcommittee on Space and Aeronautics held a hearing on Solar Power Satellites (SPS). First proposed in 1968, the concept involves capturing solar energy using gigantic photovoltaic arrays placed in geostationary orbit. The collected energy could be converted into electricity and beamed to Earth via a microwave beam for worldwide use. According to Subcommittee chairman Dana Rohrabacher (R-CA), the Subcommittee convened this hearing because energy has once again become a major concern. Oil prices have soared. Rohrabacher indicated that those in his home state of California are experiencing frequent electrical shortages. In many parts of the world, people are burning off clean energy resources such as natural gas simply because they do not have a way to get these resources to markets. Rohrabacher insisted that "we should be looking for new, cleaner sources of energy." Rohrabacher believed that SPS is an alternative whose utility and economy were worth exploring. The Subcommittee last considered SPS in a 1997 hearing, after NASA had completed a study of the concept. Four witnesses supplied testimony on SPS at the hearing. Their input is summarized below.

John C. Mankins, Manager, Advanced Concepts Studies, Office of Space Flight, NASA. Dr. Peter Glaser of Arthur D. Little invented the SPS concept in 1968, and since that time, NASA, industry, and the Department of Energy (DOE) have all conducted studies on SPS. A study conducted by DOE and supported by NASA between 1976 and 1980 indicated that a major SPS system, consisting of 60 satellites, would cost over $275 billion (FY 2000 dollars). According to Mankins, NASA revisited the SPS concept in 1995 "to determine whether or not technology advances since the 1970s might enable new SSP [space solar power] systems concepts that were more viable--both technically and programmatically."
This "Fresh Look Study" concluded that since the 1970s, several promising SSP concepts had emerged due to recent technology advances, making space power systems far more viable than they had been in 1980. At the suggestion of the House Science Committee, NASA completed an SSP Concept Definition Study in 1998. Testing the results of the Fresh Look Study, the new study affirmed the improved viability of SSP and led to NASA's development of strategic research and technology roadmaps for SSP technology development. Since 1999 NASA has been conducting the Space Solar Power Exploratory Research and Technology Program to further define both new systems concepts and the technical challenges of SSP. Led by NASA's Marshall Space Flight Center and costing $22 million, the program should be completed by the end of this year, Mankins said. The preliminary results of NASA's latest study indicate a number of challenges ahead for SSP systems in areas such as systems integration; solar power generation; power management and distribution; robotic assembly, maintenance, and servicing; platform and ground systems; transportation from Earth to orbit; and in-space transportation. Mankins also said that further study of terrestrial markets for SPS is required. Mankins said that NASA currently believes that the technology advances and technology flight experiments and demonstrations needed to make space-based power a reality could occur by 2007. By the 2011-2012 time frame, NASA could be ready to demonstrate the technology for a 1-megawatt class SSP platform bus. Technology needed for a 1-2-gigawatt power SSP platform could be demonstrated between 2025 and 2035. Only after 2050, NASA predicts, would large-scale, in-space SSP platforms more powerful than 10 gigawatts become viable. Mankins acknowledged that the development cost of such systems would be substantial. NASA's FY 2001 budget request does not include funding for SPS work.

Ralph H.
Nansen, President, Solar Space Industries

Nansen pointed to program size, cost, the uncertain safety of wireless energy transmission, and international implications as the chief obstacles to SPS development. He further emphasized that SPS would not become a reality in the present absence of low-cost, heavy-lift space transportation. "The existing space transportation market has not been large enough to justify the huge development cost of a reusable heavy lift launch vehicle system," Nansen said. Citing the increasing world demand for energy, Nansen suggested that the U.S. government and industry should form a partnership to develop SPS. According to Nansen, the government's primary role in such a partnership should be to provide leadership and funding to start the program, coordinate international agreements, support the development of high-technology, multi-use infrastructure, and purchase the first operational satellite. Several government agencies, such as DOE, NASA, and the Departments of State and Commerce, would play a role in this effort. Industry would supply most developmental funding and design and develop the system. Nansen noted that the government should take several initial steps to aid the commercial development of SPS. Grey summarized the findings of an AIAA assessment of NASA's recent SSP studies. He noted that the AIAA's study comprised three aspects of SSP work. Grey indicated that the AIAA found these areas worth examining for international cooperation: computer modeling; solar array technology development; wireless power transmission; research facilities; innovative concepts and technologies; multiple-use applications; and demonstrations. Grey noted that SPS was the topic of a workshop at the UNISPACE III meeting held in Vienna in 1999, where participants reached the conclusion that the concept could not be realized without international cooperation and worldwide acceptance.
The AIAA recommended that an international organization such as the United Nations work on resolving global concerns regarding SPS, including health and safety requirements for an SPS system, frequency and orbital allocations for SSP satellites, and economic and market issues. The AIAA explored the use of SSP technologies in a number of applications, including solar power generation, wireless power transmission, power management and distribution, and in-space transportation. According to Grey, SSP-enabling technologies could be used in a range of activities such as human space exploration, science and robotic exploration, national security missions, commercial space development, and terrestrial applications. The AIAA applauded NASA's SSP study efforts for identifying and defining key technologies for SPS despite the program's modest funding. "Perhaps the most important [result of NASA's study] was the emergence and validation of a viable alternative to microwave power transmission: laser power beaming, at intensities that comply with current health regulations and at acceptable projected overall system efficiencies," Grey said. Like Nansen, Grey believes that the lack of low-cost, reliable space transportation is the major barrier to a space-based power system.

John Fini, Senior Associate, Strategic Insight, Ltd., (on behalf of Molly Macauley, Senior Fellow, Resources for the Future)

Fini submitted for the record the prepared statement of Macauley, who was unable to present her testimony due to illness. Fini made no comment on Macauley's written statement. The National Space Society is a pro-space advocacy organization whose 20,000 members worldwide are working to create a spacefaring civilization. To learn more about NSS and its programs, call 202-543-1900 or go to http://www.nss.org
Biomedical engineers at the University of Illinois at Urbana-Champaign, with colleagues from Washington University in St. Louis and other institutions in the U.S., Korea, and China, developed tiny light-emitting diode (LED) devices that can be injected deep in the brain to study neural functions. The team, led by Illinois's John Rogers, published its findings in this week's issue of the journal Science (paid subscription required). The technology devised by the researchers enables highly miniaturized biocompatible electronics to be implanted deep into tissue, including brain tissue, with a thin, releasable micro-injection needle. An ultrathin ribbon connects the implanted device to a wireless antenna and an energy-harvesting circuit that powers the devices. The team demonstrated its first application of the technology with an optogenetic device that uses light to stimulate targeted neural pathways in the brain and study precise, isolated brain functions in ways that were previously impossible. In the Science paper, Rogers and colleagues reported on genetically programming specific neurons to respond to light. The researchers designed tiny LED circuits about the size of individual human cells at the tip of a flexible plastic ribbon thinner than a human hair and narrower than the eye of a needle. The tiny circuitry, say the researchers, can be inserted deep into the brain with very little stress to the surrounding tissue. The researchers genetically engineered the neurons of lab mice to respond to stimuli from light. The light given off by the implanted LEDs stimulated the altered neurons in the test mice, causing them to release dopamine and other neurotransmitters. Dopamine is a chemical that helps control the brain's reward and pleasure centers, as well as movement and emotional responses to fulfill those rewards.
The ability to study the brain in this way opens potential uses in developing treatments for neurological disorders such as Alzheimer's disease, Parkinson's disease, and depression. The results of this study suggest the technology could be expanded to electronic circuits that could be injected deep in tissue elsewhere in the body. Rogers says, for example, his lab developed devices for stimulating peripheral nerves in the leg as a technology that could help manage pain. "Many cases, ranging from fundamental studies to clinical interventions, demand access directly into the depth," notes Rogers. "This is just the first of many examples of injectable semiconductor microdevices that will follow." * * *
When meningitis strikes, it can be confusing and difficult to identify the symptoms because they are very similar to symptoms one would experience with the flu or a cold. This Understanding Meningitis section is a guide to help you learn about the various types of meningitis, how they affect the body, and how meningitis can be contracted, treated and prevented. WHAT IS MENINGITIS? Meningitis is the inflammation of the membranes (meninges) surrounding a person's brain and spinal cord. The inflammation is typically caused by an infection of the cerebrospinal fluid. Meningitis is typically caused by bacteria, viruses, fungi or parasites that lead to an infection. Meningitis can also be caused by injury, illness or certain substances. When meningitis occurs, the membranes (meninges) become inflamed. Meninges are a collection of membranes that cover the brain and spinal cord. Their primary purpose is to protect the central nervous system. Inflammation of the meninges is caused by an infection of the fluid (cerebrospinal fluid) surrounding the brain and spinal cord. The most common form of meningitis is viral meningitis. The severity of meningitis varies depending on its form. There are 5 categories of meningitis – bacterial, viral, parasitic, fungal and non-infectious meningitis. Knowing the cause of meningitis is important because the infectiousness, spread, danger and treatment can differ. TYPES OF MENINGITIS Viral meningitis is the most common type of infectious meningitis in the United States. Viral meningitis is generally less severe and resolves without specific treatment. Viral meningitis is rarely fatal, but it can be debilitating and have long-term after-effects. Some people only feel the symptoms for 7-10 days while others may have symptoms lasting for 3-4 months, which can lead to hospitalization and prolonged absence from school or work.
Viral meningitis is most often caused by enteroviruses, and the risk of transmission is generally highest during the summer and fall seasons. Enteroviruses are a group of viruses associated with several syndromes and diseases. Enterovirus exposure is extremely common, but fewer than 1 out of 1,000 infections become viral meningitis. Not all people with enteroviruses develop meningitis. Neonates, infants, and adults are all at risk of contracting viral meningitis. Viral meningitis is spread through the exchange of respiratory and throat secretions (kissing, coughing, sneezing, and sharing a cup, utensil, lip gloss, or cigarettes). Viral meningitis can also be contracted by coming in physical contact with the bodily fluids of another person who has meningitis, most likely through ingestion. Viral meningitis is also found in one's stool. Herpes simplex and genital herpes can cause viral meningitis, as can chicken pox, rabies and HIV. The incubation period of viral meningitis may range from a few days to several weeks from the time of infection until the development of symptoms. Risk factors for development are exposure to someone with a recent viral infection or a suppressed immune system. Viral meningitis is often referred to as spinal meningitis, aseptic meningitis and sterile meningitis interchangeably. Mollaret's meningitis is a form of viral meningitis that is recurring. Mollaret's meningitis is considered rare. However, recent research and studies have categorized it as being more common than initially thought. Mollaret's meningitis has the same characteristics as other forms of meningitis, except that it is recurring and often accompanied by long-term irregularity of the nervous system. Mollaret's meningitis has been suggested to be caused by the herpes simplex viruses HSV-2 and HSV-1. Bacterial meningitis can be quite severe and may result in brain damage, hearing loss, limb loss or learning disability.
For bacterial meningitis, it is also important to know which type of bacteria is causing the meningitis because specific antibiotics need to be administered. Bacterial meningitis is extremely dangerous and can be life-threatening. Bacterial meningitis is caused by bacteria instead of a virus, as with viral meningitis. Age plays a large factor in the type of bacteria that causes meningitis. Group B Streptococci, Listeria monocytogenes, meningococcus and Streptococcus pneumoniae are all causes of bacterial meningitis. Bacterial meningitis is especially dangerous because it can spread quickly, causing an epidemic. College students living in dormitories are at increased risk. Weakened immune systems from diseases, medication and surgical procedures can cause an individual to be considered high risk for bacterial meningitis. Travelers to regions such as sub-Saharan Africa can be susceptible to meningitis. Head trauma can also potentially lead to meningitis if nasal bacteria are able to enter the meningeal space. Symptoms can appear quickly, within 3-7 days. Seizures and comas are often symptoms of severe bacterial infection. Healthy people may carry the bacteria that cause meningitis in their nasal cavity and throat without becoming ill. Meningococcal disease is the combination of meningococcal meningitis (bacterial infection of the meninges of the brain and spinal cord) and meningococcemia (a blood infection). Meningococcal bacteria (Neisseria meningitidis) are the cause of meningococcal meningitis infections. Meningococcal meningitis requires immediate attention as it can cause severe damage and even death within 24-48 hours. Meningococcal meningitis survivors often suffer severe long-term effects. Everyone is susceptible to meningococcal meningitis unless vaccinated. However, there are cases that have not been preventable through vaccination.
Pneumococcal meningitis is caused when pneumonia bacteria (Streptococcus pneumoniae) have infected the bloodstream and infect the meninges surrounding the brain and spinal cord. Pneumococcal meningitis may cause septicemia, leading to severe damage to the organs. Like other forms of meningitis, pneumococcal meningitis is carried in the back of the nasal cavity and throat. It can be transmitted through coughing, saliva and the exchange of respiratory fluids within close quarters. If suspected, pneumococcal meningitis should be treated quickly. 1 in 5 people who become sick with pneumococcal meningitis will die. 25-50% will experience long-term brain and neurological complications. Vaccinations are available. Upon the recommendation of a physician, those at risk such as children, the elderly and those susceptible to pneumococcus infections should be vaccinated. Fungal meningitis develops after a fungus has spread through the bloodstream. The most common form of fungal meningitis is cryptococcal meningitis. Fungal meningitis is often prevalent in those with weakened immune systems, such as those with cancer and AIDS. Fungal meningitis is not transmittable from person to person. Fungal meningitis can occur when fungus has been introduced to the body through medications administered via injections, such as steroids. Fungal meningitis is also thought to be contracted through inhalation in environments heavily contaminated with bird feces. Although not contagious, fungal meningitis carries the same symptoms as other forms of meningitis, and its diagnosis will also need to be done by lumbar puncture. Parasitic meningitis is caused by Naegleria fowleri. Naegleria fowleri is found in warm bodies of freshwater and can enter the body through the nose. Naegleria fowleri causes primary amebic meningoencephalitis (PAM). PAM is a brain infection that destroys brain tissue. Naegleria fowleri is found worldwide.
Parasitic meningitis (PAM) is rare, and little is known about the treatment and after-effects of parasitic meningitis as most infections have been fatal. Non-infectious meningitis is a form of meningitis that is not spread person to person. Non-infectious meningitis can be caused by disease, medication, drugs, head injury or surgery. Cancer and lupus are common causes of non-infectious meningitis. The symptoms of non-infectious meningitis are similar to other forms of meningitis and may include nausea, headaches, photophobia and vomiting. Chemical meningitis is also classified as non-infectious meningitis. Neoplastic meningitis (meningitis carcinomatosa, leptomeningeal carcinomatosis) is directly related to cancerous cells. Symptoms & Prevention
Causes of pressure sores in the elderly include sustained pressure on a bed or wheelchair and friction caused by shifting or changing positions, according to Mayo Clinic. Pressure sores can also be the result of shear, which occurs when the skin drags against a surface such as a bed. Shear may develop on tilted or elevated beds. Elderly people generally have skin that is thinner, drier, more fragile and less elastic than that of younger people, increasing the likelihood of developing pressure sores, notes Mayo Clinic. Elderly people produce new skin cells at a slower pace, which also contributes to pressure sores. Pressure sores may occur when skin and tissues become trapped between a bone and a surface such as a wheelchair or bed, explains Mayo Clinic. When this happens, the pressure may prevent capillaries from delivering oxygen and essential nutrients to the tissues, causing tissues and skin cells to die. Skin that is moist or exceptionally dry is more likely to develop pressure sores. Other factors that may contribute to pressure sores include poor nutrition, dehydration, weight and medical conditions that affect blood flow. A lack of sensory perception, bowel incontinence, smoking and muscle spasms may also increase the risk of developing pressure sores, reports Mayo Clinic.
Check out our website now at Loci Controls. Thanks for your interest! In the last post we looked at the basic composition of landfill gas. Now let's use that information to calculate the density. The density of a gas is a critical factor when measuring its flow rate. Density is mass divided by volume, so let's first calculate the volume of a quantity of landfill gas. As you may remember from high school chemistry, the properties of a gas change dramatically with temperature and pressure, as described by the ideal gas law, PV = nRT (P is the gas pressure, V is the volume, n is the number of moles of the gas, R is a constant called the Universal Gas Constant, and T is temperature). So where do we start? Well, we can easily search online and find that R has a value of 8.314 J/(mol·K), and we can calculate everything assuming we are working with 1 mole of gas. Because that leaves three real variables (pressure, volume and temperature), we'll need to fix two and solve for the third. This requires some additional information about landfill gas extraction systems. After talking with a bunch of landfills, I've found that the pressure in the gas extraction system is typically around 40-50" of water (vacuum, measured relative to the atmospheric pressure). 40" of water is equivalent to 99.6 millibar, so for simplicity I'll assume the landfill gas is at 900 mbar (since 40" of vacuum means -99.6 mbar relative to the atmosphere, which is usually around 1 bar). In SI units, this is 90,000 pascal. Because landfill gas is a byproduct of anaerobic digestion, it is usually around 40 °C, or 313.15 K (in SI units). Now that I've specified values for P and T, I can calculate the volume of 1 mole of hypothetical gas: V = nRT/P = (1 × 8.314 × 313.15) / 90,000 ≈ 0.0289 m³, or about 28.9 liters. There are tons of good online calculators to do basic calculations with the ideal gas law, like this one. In the next post, we'll combine this result with the information we found last time about landfill gas compositions in order to calculate some density ranges.
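The molar-volume calculation above is easy to reproduce; this is just the ideal gas law solved for V, at the assumed 900 mbar / 40 °C extraction-system conditions.

```python
# Ideal gas law V = nRT/P at assumed landfill-gas extraction conditions.
R = 8.314        # universal gas constant, J/(mol*K)
n = 1.0          # moles of gas
T = 313.15       # 40 C expressed in kelvin
P = 90_000.0     # 900 mbar expressed in pascal

V = n * R * T / P
print(f"{V:.4f} m^3")   # about 0.0289 m^3, i.e. roughly 28.9 L per mole
```

Note how much this differs from the familiar 22.4 L/mol at standard conditions: the slight vacuum and elevated temperature both push the molar volume up.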
I recently set about designing a flow meter for landfill gas (LFG). My search for a good online reference about LFG was futile, so I decided to create one myself. This article and the next several in this series will be dedicated to calculating some basic properties of landfill gas. The ultimate objective is to arrive at the density and the Reynolds number, which is a fundamental parameter of fluids flowing in a pipe. But first we need to establish some more basic properties.

Composition of landfill gas

The first step is to identify the major components of landfill gas. Because LFG is created as a result of anaerobic digestion, it is hot and very humid. The gas composition also varies as a function of the age of the landfill, temperature, rainfall, vacuum pressure in the collection system, and the underlying solid waste composition. After talking with many landfill gas to energy plants and landfill operators, it seems that the normal composition of LFG is roughly in the following ranges:

CH4 = 35-55%
CO2 = 15-35%
O2 = 0-4%

The remainder of the gas (called "balance gas") is primarily N2 and water vapor, along with trace compounds like hydrogen sulfide (H2S) and other contaminants (benzene, refrigerants, etc.) that depend highly on the composition of the solid waste. Since my primary goal is to calculate the fluid properties, the trace components do not have a significant impact. H2S is highly toxic and has a very distinct odor, so it is a major concern for landfill operators. The smell is detectable at very low concentrations (a few parts per million!), and it poses significant health and environmental problems at concentrations above 50 ppm or so. H2S seems to be created mainly by decomposing drywall, and hence is more of a problem at sites that take a lot of construction waste. Our next post will explore some of the chemical and physical properties of landfill gas in order to help calculate the fluid properties.
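Looking ahead to the fluid properties, the bridge from these composition ranges to density is the mole-fraction-weighted molar mass, since ρ = PM/(RT) for an ideal gas. The two example mixtures below are illustrative points within the stated ranges (not measured data), with the balance assumed to be pure N2 and water vapor ignored.

```python
# Sketch: mean molar mass of a gas mixture, and density via rho = P*M/(R*T).
MOLAR_MASS = {"CH4": 16.04, "CO2": 44.01, "O2": 32.00, "N2": 28.01}  # g/mol

def mean_molar_mass(fractions):
    """Mole-fraction-weighted molar mass of the mixture, in kg/mol."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-9  # fractions must sum to 1
    return sum(MOLAR_MASS[gas] * x for gas, x in fractions.items()) / 1000.0

def density(fractions, P=90_000.0, T=313.15, R=8.314):
    """Ideal-gas density in kg/m^3 at the extraction-system conditions."""
    return P * mean_molar_mass(fractions) / (R * T)

# Methane-rich and CO2-rich ends of the stated ranges, balance assumed N2:
rich = {"CH4": 0.55, "CO2": 0.15, "O2": 0.00, "N2": 0.30}
lean = {"CH4": 0.35, "CO2": 0.35, "O2": 0.04, "N2": 0.26}
print(density(rich), density(lean))   # roughly 0.82 and 1.02 kg/m^3
```

The spread between the two endpoints shows why composition matters for flow metering: the density of the same gas stream can vary by some 20-25% across the normal composition range.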
Landfill Gas to Energy (LFGTE) projects seem like a very logical thing to pursue: take a waste material and turn it into energy, then sell it for money. However, one of the major concerns holding back many landfills from rushing to support this idea is capital recovery. Installing a turbine for electricity generation (or pipes for selling methane directly) is not a small investment. If a project is not sized properly, there's the risk that there won't be sufficient methane, and hence revenue, to support the LFGTE project. A popular tool is the LandGEM model, which uses a first-order decomposition rate to estimate annual emissions. It is extremely important to pick the two input parameters, methane yield and decay rate, very carefully. The methane yield is dependent on the waste composition. The decay rate is based primarily on environmental factors. Traditionally, these factors were derived based on laboratory experiments. However, it's difficult for this kind of research to take into consideration environmental factors that could affect methane generation, such as atmospheric pressure, humidity, and temperature. There are proposals now to collect data in the field and to derive a model of landfill gas generation stochastically. Such a model would create a great check and balance with the existing LandGEM model to predict landfill gas generation. Having better resolution of a landfill's gas production may also help landfill managers better tune the wellheads for methane extraction. We'll be blogging more about research related to this concept hereon!
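To see why the two input parameters matter so much, here is a simplified sketch of the first-order decay idea behind LandGEM. This is not the actual LandGEM implementation, and the k and L0 values below are illustrative placeholders, not recommended defaults.

```python
import math

def first_order_methane(acceptance, k=0.05, L0=100.0, years=40):
    """Simplified first-order decay estimate of annual methane generation
    (m^3/yr), in the spirit of the LandGEM model.

    acceptance: dict mapping year-index -> waste accepted that year (Mg)
    k:  decay rate (1/yr) -- illustrative placeholder value
    L0: methane yield (m^3 CH4 per Mg of waste) -- illustrative placeholder
    """
    results = []
    for t in range(years):
        # Each year's waste contributes k*L0*M*exp(-k*age) once in place.
        q = sum(k * L0 * m * math.exp(-k * (t - ti))
                for ti, m in acceptance.items() if t >= ti)
        results.append(q)
    return results

# 1000 Mg of waste accepted in year 0, nothing afterwards:
gen = first_order_methane({0: 1000.0}, years=5)
print(gen[0])   # peak generation in year 0: k*L0*M = 5000 m^3/yr
```

Because generation decays exponentially after waste placement, small errors in k compound over a project's multi-decade revenue horizon, which is exactly the sizing risk described above.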
Focus: Faster Crunching Thanks to Einstein Relativity theory insists that no matter what speed you choose for your spaceship–snail-like or close to light speed–the laws of physics always look the same. Yet in the 30 March Physical Review Letters, a theorist reports that the complexity of physics calculations is not the same at all speeds. Merely imagining particle interactions from a speeding spaceship’s point-of-view could dramatically accelerate computer calculations, especially for phenomena involving particles moving close to light speed. The discovery of such a simple but unnoticed effect of relativity theory is surprising to many researchers, in part because the theory is so well-studied. Researchers studying strongly relativistic physics–the interaction of high-energy particle beams, for example, or the passage of intense laser pulses through matter–routinely rely on computer simulations to follow the complex dynamics. These simulations chop a system into many small parts, and calculate what happens to those parts through a sequence of steps in time. For accuracy, the discrete parts and time steps must be smaller than the finest details important to the physics. For example, for a dense beam of protons passing at nearly the speed of light through a cloud of electrons, the program must cut up the system into pieces smaller than 10 centimeters across, which is the typical length of a “pulse” of protons in an accelerator. But the program must also calculate results on the scale of the accelerator length, perhaps 5 kilometers. As a result, the number of computational steps required to simulate a problem grows not only with its overall size and duration, but also with the fineness of its details; it depends on the full range of scales involved, from small to large. If the laws of physics don’t change with the speed–or “reference frame”–of the observer, you might think that the range of scales shouldn’t change either. 
But a simple calculation shows otherwise, according to Jean-Luc Vay of the Lawrence Berkeley National Laboratory in California. Vay imagined two objects that interact as they pass by one another at relativistic speeds. Each object has a certain overall length and also a certain size for its finest details. Relativity theory says that an observer in a different reference frame would see these scales altered by relativistic length contraction and time dilation. Vay calculated that the overall range of scales would change too, and potentially quite dramatically for the most high-speed interactions. Moreover, there’s always an optimal frame in which the range becomes smallest. He says that researchers may be able to speed up some of their calculations immensely, merely by recasting them in the best frame of reference. For the proton beam example, Vay found that the calculation in the optimal frame required about 5000 time steps, compared to more than five million in a frame fixed on the electron cloud. On the computer he used, the former calculation finished in less than 30 minutes, while the latter ran for over a week. For other simulations involving the intense interaction of laser light with matter, as in free-electron lasers, Vay says the simulation speed may be improved by a million times or more. “This comes as a surprise to most physicists,” says Vay. “Most feel that the complexity of a system should be invariant” as viewed from different reference frames. “This is a remarkable observation,” says plasma physicist Alex Friedman of the Lawrence Livermore National Laboratory in California, “especially in view of the number of years that have passed since special relativity was developed. I think it will make a big difference for lots of practical simulations.” Mark Buchanan is a freelance science writer who splits his time between Wales, UK, and Normandy, France.
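As a toy illustration of the idea (not Vay's actual derivation), take the proton-beam example: a 10 cm pulse resolved over a 5 km accelerator. If, in a frame boosted along the beam with Lorentz factor g, the structure at rest in the lab contracts by g while the near-luminal pulse stretches by roughly g, the range of scales to resolve shrinks until the two lengths meet.

```python
import math

# Toy model: large scale L (accelerator, at rest in the lab) contracts
# as L/g in the boosted frame, while the small scale l (proton pulse,
# moving near c in the lab) dilates as roughly l*g. The range of scales
# the simulation must resolve then falls off as L/(l*g**2).
L = 5000.0   # accelerator length, metres
l = 0.10     # proton pulse length in the lab frame, metres

def scale_range(g):
    """Ratio of largest to smallest scale in a frame boosted by g."""
    return (L / g) / (l * g)

g_opt = math.sqrt(L / l)          # boost at which the two scales are equal
print(round(g_opt, 1))            # optimal Lorentz factor, ~223.6
print(round(scale_range(1.0)))    # lab frame: 50000
print(round(scale_range(g_opt)))  # optimal frame: 1
```

In this cartoon the range of scales, and hence the step count, improves by a factor of g_opt squared, which is the same order-of-magnitude flavour as the 5000-versus-five-million time steps quoted above.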
It is caused by a bacterium and cannot be treated by any current product. Symptoms usually start at the leaf edges: conspicuous V-shaped, yellow lesions develop as the bacteria move further into the leaf. If you snap the leaf just below one of these lesions, you will notice a blackening within the vein. The disease is systemic and eventually spreads to most of the plant. Many farmers spray copper-based products to prevent the disease or slow its spread. This simply does not work. I have carried out many trials and found no difference between sprayed and unsprayed areas. Farmers would do far better by trying to understand the disease. For one thing, a cabbage leaf has pores (or vein openings) called hydathodes along its edge. In wet, humid conditions, water pressure from within the plant causes drops of sap to be exuded from these pores. When the weather warms up, the sap is sucked back into the leaf together with any airborne bacteria that have landed on it. Although copper kills bacteria, the concentration of copper in each droplet is far too low to kill the bacteria. I have seen farmers cause burn on the crop by increasing the copper concentration in a futile effort to control the disease. The bacteria can also enter the plant through wounds caused by caterpillars and hail. In the latter case, applying a copper spray immediately after the hail might do some good, as the bacteria can enter through the open wounds. It also means that you should not have any diamondback moth larvae on the crop when black rot conditions are present.

Control measures that work

Ultimately, the most effective control measure is to plant resistant varieties. They don't provide total immunity, but will usually reduce the level of infection to a manageable level. I say 'usually', because resistance to bacteria trying to invade the plant through the hydathodes is stronger. It is less effective with open wounds on the leaves, especially at higher temperatures.
The fact that there are differing forms of black rot further complicates the process of breeding for resistance. Researchers around the world are testing for resistance genes in every brassica in the family in order to incorporate these into cabbage breeding lines. This process will take some years. Most books on the subject claim that crop rotation will help control black rot, but this has not been my experience. If conditions are right and the cultivar is susceptible, black rot will occur even with heat-treated seed in virgin soil. On the other hand, it will spread faster when planted near infected plants, as the concentration of bacteria in the air will be higher. Wet patches on the land are also good starting points for the bacteria. Another strain of X. campestris occurs sporadically in very wet, warm conditions and stops developing when these conditions change. It does not enter through the hydathodes, but the usual signs will appear on the leaves. Many a farmer will panic on spotting this, then be surprised when the symptoms simply stop.

Increasing the gap between plants

If you farm in an area where black rot is a greater hazard, avoid planting late-maturing varieties and widen the spacing. Increasing the gap between plants allows better movement of air and a slightly drier microclimate. Wider spacing will also bring the crop to harvest earlier, which will help to further reduce the danger of black rot. The heads can be harvested before the black rot spreads from the outer leaves to the head and cover leaves. This can sometimes mean the difference between success and severe damage.
This eerie patch of blackness in the middle of a busy star cluster may look like a rather misshapen black hole, but it's actually something even stranger. It's also quite possibly the loneliest, darkest, coldest place in the entire cosmos. This is Barnard 68, and it's what's known as a dark molecular cloud. Basically, the dust and gas that make up Barnard 68 are so tightly packed together that they block out all the light behind them. The result might look like some alien civilization tore apart the fabric of the universe and opened up a gateway to the howling void, but thankfully - or unfortunately, I guess, depending on how you feel about the howling void - it's just gas. Make that a lot of gas. Here's some additional info on this particular patch of darkness: The eerily dark surroundings help make the interiors of molecular clouds some of the coldest and most isolated places in the universe. One of the most notable of these dark absorption nebulae is a cloud toward the constellation Ophiuchus known as Barnard 68, pictured above. That no stars are visible in the center indicates that Barnard 68 is relatively nearby, with measurements placing it about 500 light-years away and half a light-year across. It is not known exactly how molecular clouds like Barnard 68 form, but it is known that these clouds are themselves likely places for new stars to form. In fact, Barnard 68 itself has been found likely to collapse and form a new star system. It should be pointed out that this molecular cloud only looks pitch black in the optical wavelengths that are familiar to us. Venture into different wavelengths, such as infrared, and you can see the stars behind Barnard 68 just fine, as you can see here.
Reaching for the stars

As a proposal to build a real-life Starship Enterprise gains attention online, we look at previous engineering studies of how we can go boldly where no-one has gone before.

An ambitious space-lover has captured media attention with a proposal to design and construct first an interplanetary spacecraft and ultimately an interstellar vessel superficially resembling the USS Enterprise from the original incarnation of Star Trek. The idea is that such an iconic form will attract enough public support to secure the funding required to build a spacecraft capable of crossing the final frontier. Its need to resemble a TV prop may reduce its practicality, however, and such a plan may be, as Spock might say, illogical. So how does this conceptual outline compare to previous engineering studies of interplanetary and interstellar spacecraft? Starting in 1958 at America's Los Alamos National Laboratory in New Mexico, the scientists behind Project Orion proposed to power the vehicles used to explore strange new worlds with nuclear pulse propulsion. Running nuclear reactions within the structure of the spacecraft, as fuel is burned in the combustion chamber of a conventional rocket, would produce temperatures high enough to risk damaging that structure. Instead, Orion proposed to drop "pulse units" behind the ship at regular intervals. Each would contain a nuclear weapon and a propellant to be blasted back towards the spacecraft, where it would hit a pusher plate, transferring momentum to the vessel via a shock-absorption system. Models were successfully tested with conventional explosives, but there were concerns about the safety of launching nuclear weapons into space. The 1963 Partial Test Ban Treaty, which forbade nuclear detonations except those conducted underground, effectively killed the project. Orion could have been built with the technology available at the time.
It was calculated that Orion could reach up to a tenth of the speed of light and, built at different sizes, could be used for hopping around the solar system or even for an interstellar mission that would put it into orbit around a distant star. It was estimated that a large version of Orion would cost the equivalent of a year's US gross national product, or $3.67 trillion at 1968 prices.

Daedalus and Icarus

From 1973–8 the British Interplanetary Society (BIS) carried out Project Daedalus, a feasibility study based on the idea of sending an unmanned probe on a flyby of Barnard's Star six lightyears away, where it would seek out new life and new civilisations after a journey of 50 years. It was to be a two-stage probe, with the first stage accelerating it to about 7% of the speed of light over the first few years of its journey and the second stage taking it to 12% of lightspeed. Propulsion was to be by inertial confinement fusion, in which nuclear fuel is compressed by a laser until it begins a fusion reaction. Although this is possible with contemporary technology, it would likely mean mining the atmosphere of Jupiter for helium-3. BIS and the Tau Zero Foundation began a new five-year mission in 2009, Project Icarus, aimed at updating the original study in light of the past 30 years of technological advancement, while taking into account that many of the original experts in the physics and engineering communities had since retired or died. The two projects are named for characters in Greek mythology: Daedalus, the master craftsman who created the Cretan labyrinth where the minotaur was kept, and his son Icarus, who attached artificial wings to himself with wax but flew too close to the Sun.

The voyages of the Enterprise

The suggestion for the new Enterprise programme is explicitly intended to carry humans on board, using the saucer-shaped front section to generate artificial gravity by rotating – although the axis around which it would spin is not ideal.
The timescale given is for a first-generation Enterprise to be finished within 20 years. It would use nuclear-electric propulsion and would be capable of reaching Mars within 90 days. The next generation's ships, completed every 33 years, would be increasingly advanced until one is built that can maintain a constant acceleration of 1 g all the way to the Sun's nearest star system, Alpha Centauri. The programme would require 0.27% of American GDP every year – at around $40 billion in 2012, a little over half the US Department of Education budget. Ultimately, some proposal such as these will have to be made to work. For as Stephen Hawking and others have suggested, colonising the galaxy is the only way that humankind will live long and prosper.
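As a rough sanity check, the headline figures quoted in this piece hold together arithmetically. A few lines of Python illustrate this (the 2012 US GDP value is my own approximation, not a figure from the article):

```python
# Daedalus: Barnard's Star is about 6 light-years away and the second
# stage cruises at about 12% of lightspeed.
distance_ly = 6.0
cruise_speed_c = 0.12
cruise_years = distance_ly / cruise_speed_c
print(cruise_years)  # 50.0 -- matches the quoted 50-year journey

# Enterprise: 0.27% of US GDP per year. Assuming 2012 US GDP of roughly
# $16 trillion (my assumption, not a figure from the article):
us_gdp_2012 = 16e12
annual_cost = 0.0027 * us_gdp_2012
print(annual_cost / 1e9)  # 43.2 (billion dollars), close to the quoted "$40 billion"
```

Both results land within rounding of the numbers the proposals themselves quote.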
Boil Water Response - Information for the Public Health Professional

Boil Water Orders and Notices are often used by health agencies and drinking water utilities in response to conditions that create a potential for biological contamination in drinking water. Common reasons for a boil water response include loss of pressure in the distribution system, loss of disinfection and other unexpected water quality problems. Often these result from other events such as water line breaks, treatment disruptions, power outages, floods and other severe weather. The standard recommendation for boiling water is a FULL ROLLING BOIL for ONE MINUTE and COOL BEFORE USE. The term rolling boil facilitates communication and assures that an effective pasteurization temperature is reached to kill or inactivate waterborne pathogens. Some agencies recommend boiling for longer periods, but this extra time is not necessary and can cause unnecessary power demand and increase safety concerns. Because some users (e.g. immunocompromised individuals) may be more susceptible to illness from waterborne pathogens, public health officials need to react swiftly to address potential water quality problems. However, public health officials must also be conscious of unnecessarily alarming the public, causing undue economic disruption, and eroding the public perception of safe tap water. Whenever possible, alternate methods to address water quality concerns, such as isolating problem water and opening interconnections with neighboring systems, should be used to avoid unnecessary boil water responses. More specific directions on these steps and when a boil water response may be necessary are provided in Department guidance and regulations. A boil water response is NOT appropriate when chemical contamination is present. Boiling may increase exposure to chemicals such as nitrates and solvents by concentrating them in the boiled water or by volatilizing them into the breathing zone.
Boiling water is also NOT appropriate to address gross levels of contamination (e.g. raw sewage or high turbidity), when particulate matter can impair the effectiveness of boiling. Under these conditions, alternate water sources must be used. There are many disease-causing organisms that consumers could be exposed to through ingestion of and contact with contaminated drinking water. The more common pathogens that can be found in drinking water are as follows:

Protozoa: Protozoa are microorganisms that can live in animals, people and the environment. Many protozoa have life cycle stages that include cysts and oocysts. The cysts and oocysts are generally resistant to normal residual chlorine levels, but are more readily deactivated by ultraviolet (UV) disinfection. Most protozoa, including cyst and oocyst stages, will be removed by water filtration devices capable of removing 1 micron particles (i.e. microfiltration). In New York State, diseases caused by species of Giardia, Cryptosporidium, and amoebae must be reported to the NYSDOH.

Bacteria: Bacteria are usually killed by normal chlorine residual levels. Most bacteria will be removed by microfiltration (<1 micron) and most will be effectively deactivated by ultraviolet (UV) disinfection, although some species may require increased UV doses. Bacterial spores can be resistant to normal chlorine disinfectant levels and some are resistant to UV. Small bacteria and spores may pass through filters at the microfiltration level. Bacteria that can cause waterborne illness include Escherichia coli and species of Salmonella, Vibrio, Shigella, and Campylobacter.

Viruses: Viruses are rapidly inactivated by normal chlorine residual levels. But their small size, typically less than 0.01 microns, allows viruses to pass through 1 micron filters. In addition, some viruses are resistant to inactivation by exposure to UV light.
Hence, ordinary water filtration and UV disinfection may not provide adequate viral treatment, and viruses are usually controlled with chemical disinfection. Viruses that can cause waterborne illnesses include: Hepatitis A, Adenoviruses, Hepatitis E, Enteroviruses (including Polio-, Echo- and Coxsackie viruses), Rotaviruses, and Caliciviruses.

BOILING AND PASTEURIZATION

Boiling water kills or inactivates viruses, bacteria, protozoa and other pathogens by using heat to damage structural components and disrupt essential life processes (e.g. denature proteins). Boiling is not sterilization and is more accurately characterized as pasteurization. Sterilization kills all the organisms present, while pasteurization kills those organisms that can cause harm to humans. Cooking food is also a form of pasteurization. For pasteurization to be effective, water or food must be heated to at least the pasteurization temperature for the organisms of concern and held at that temperature for a prescribed interval. The effectiveness of pasteurization is directly related to temperature and time. Milk is commonly pasteurized at 149°F/65°C for 30 seconds, or 280°F/138°C for at least two seconds. A study of the effectiveness of pasteurization of milk intentionally contaminated with Cryptosporidium found that five seconds of heating at 161°F/72°C rendered the oocysts non-infectious. Although some bacterial spores not typically associated with waterborne disease are capable of surviving boiling conditions (e.g. Clostridium and Bacillus spores), research shows that waterborne pathogens are inactivated or killed at temperatures below boiling (212°F or 100°C). In water, pasteurization is reported to begin at temperatures as low as 131°F/55°C for protozoan cysts. Similarly, it is reported that one minute of heating at 162°F/72°C and two minutes of heating at 144°F/62°C will render Cryptosporidium oocysts non-infectious.
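The paired Fahrenheit/Celsius values quoted in this section are easy to cross-check with a simple conversion. A minimal sketch (the helper name is mine, not from the guidance):

```python
def f_to_c(deg_f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (deg_f - 32) * 5.0 / 9.0

# (deg F, deg C) pairs quoted in the text above
pairs = [(131, 55), (144, 62), (149, 65), (162, 72), (212, 100)]
for f, c in pairs:
    # each Fahrenheit value should round to the paired Celsius value
    assert round(f_to_c(f)) == c
print("all quoted temperature pairs are consistent")
```

The same conversion confirms the Mt. Marcy figure further on: 203°F works out to 95°C.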
Other studies report that water pasteurized at 150°F/65°C for 20 minutes will kill or inactivate those organisms that can cause harm to humans. These include Giardia, Cryptosporidium, Entamoeba, the eggs of worms, Vibrio cholerae, Shigella, Salmonella bacteria (including those that cause typhoid), the enterotoxigenic strains of E. coli, Hepatitis A and rotaviruses. It is also reported that a 99.999% kill of waterborne microorganisms can be achieved at 149°F/65°C in five minutes of exposure. Water will boil at different temperatures under different conditions (e.g. lower temperatures at higher elevations, higher temperatures in pressure vessels); however, these differences are not a significant factor for boil water responses. Water in an open vessel will boil at about 212°F/100°C in New York. Even on the top of Mt. Marcy, NY, where the elevation is more than one mile above sea level, water boils at about 203°F/95°C, which is adequate for disinfecting water.

In cases where boiling water is not possible or practical and alternate water sources are not available, chemical disinfection may be a viable substitute. Chemical disinfection may be appropriate when boiling is not possible due to power outages, and is also an appropriate way to prepare water for non-ingestion uses such as washing dishes and personal hygiene. However, chemical disinfection by itself may not be as effective as boiling for pathogen control, as some protozoans, such as Cryptosporidium in the oocyst form, are resistant to both chlorine- and iodine-based disinfectants. Chemical disinfection should not be relied on to produce water for ingestion when gross levels of contamination or high levels of protozoans or turbidity may be present (e.g. raw sewage contamination). Under these conditions, alternate sources must be used for any water to be ingested or used in food preparation. Some chemical disinfectants are readily available as household chemicals (e.g.
regular unscented chlorine bleach) or by purchase from pharmacies and outdoor stores (e.g. iodine tincture). Chemical disinfection can be accomplished on site by adding a specific amount of chemical to each gallon of questionable water and allowing the water to sit for a sufficient contact period before use. If the water is very cold, it should be warmed first or the contact time should be increased. To help reduce the taste and smell of chemical disinfectants, water can be aerated after the contact time is reached by pouring it back and forth between a pair of clean containers. Disinfection methods using ordinary household chemicals can be found in several publications, including the State Department of Health pamphlet "Don't Be Left in the Dark". Disinfection with bleach should use regular, unscented bleach. Bleach that is scented, splash-free or splash-less should not be used, due to additives in the bleach. Additionally, Clorox regular unscented bleach is certified in conformance with National Sanitation Foundation (NSF) Standard 60, which regulates the quality and purity of chemicals used for drinking water applications.

WATER TREATMENT DEVICES

Many water treatment devices are available for use in homes and commercial buildings, but few of them can be considered effective for pathogen removal. Many of these devices will have little or no effect on pathogens. An improperly maintained or ignored treatment device may actually add biological contamination to the water that passes through it. It is impractical to assess all of the treatment systems on the market, due to their sheer number and the proprietary nature of some of the processes. The following information is provided as a general overview for the public health professional.

Point-of-use treatment units are manufactured and installed to treat water for use at a single location.
Typical of point-of-use units are kitchen devices that treat only the water that comes out of the kitchen tap or water supplied to a nearby ice maker. There are also hand-held treatment units, such as water pitchers with a small integral filtration or carbon unit. Point-of-use devices installed in the kitchen will have no effect on potential exposures to water contaminants from bathroom sinks, showers, outside faucets, etc. Often treatment systems are installed on part of a building's plumbing (e.g. a water softener on the hot water side), and these too are considered point-of-use. Point-of-entry treatment units are applied where water enters a home or commercial building and are installed to treat all of the water used at that location. Specific types of treatment are discussed below.

Water Softeners & Ion Exchange Units - Water softeners and other ion exchange devices are not effective for removing pathogens and should never be used as a substitute for disinfection by boiling.

Carbon Treatment Units - Carbon treatment provides effective removal of many chemicals, but is not effective for removing pathogens and should not be used as a substitute for disinfection by boiling. Improperly maintained carbon units in particular can actually increase the biological contamination in the water that passes through them.

Aerators - Aeration and oxidation units are often found in homes to treat water that has objectionable tastes and odors, like sulfur compounds and chlorine, and to control nuisance minerals such as iron and manganese. Aerators are also used to remove radon. These provide no pathogen control and should never be used as a substitute for disinfection by boiling.

Green Sand Filtration - Green sand units are chemical treatment devices designed to remove inorganic chemicals by oxidation.
Though these units are called "filters" and have a sand media, they cannot be relied on to remove pathogens and should never be used as a substitute for disinfection by boiling.

Physical / Mechanical Filtration - Physical filtration can be capable of effective pathogen removal and is used widely by water utilities for this purpose. Reverse osmosis is a form of filtration that uses specialized membranes and is addressed below. Many water filtration devices are marketed for home and commercial building use. Most of the available filter units use replaceable filter cartridges or bags, and some use membranes. The ability of a filter to remove pathogens is directly related to the size of the pores in the filter material, the quality of the unit, and the operation and maintenance of the unit. Filters rated for removal of particles that are one micron (a.k.a. micrometer, or 10^-6 meter) or less in diameter are often referred to as microfilters. Filters of this size can remove the majority of waterborne pathogens (protozoans and most bacteria); however, viruses are much smaller than one micron and may not be adequately removed by microfilter units. Public water systems in New York State that utilize cartridge filters use cartridges that are rated for one micron absolute by a third party and often utilize a chlorine disinfectant to inactivate viruses. The absolute rating means the filter removes 99.99% of the particulates of the rated size, and certification by a third party (e.g. NSF, WQA or UL) to this level of performance increases the certainty of the performance, as well as the quality of the equipment and materials. Nominal ratings and other rating criteria vary from manufacturer to manufacturer and often do not meet this standard.

Reverse Osmosis - Reverse Osmosis (RO) is a form of filtration that works by forcing water under pressure through a specialized membrane.
The pores in the membranes are sized so that water molecules pass through, but all particulates as well as larger molecules are removed. This type of filter is often rated by molecular size rather than by microns. A RO unit is capable of removing all waterborne pathogens and could be considered an acceptable substitute for disinfection by boiling if it is certified under ANSI/NSF Standard 058 for "Cyst Removal" and it is under the control and operation of a certified water treatment plant operator or qualified nephrology technician (i.e. dialysis technician). However, because RO units are prone to fouling if turbidity levels are elevated, continuous operation during a boil water event may be difficult to accomplish without appropriate pretreatment. It should be noted that most RO units are also equipped with carbon pre-filters to protect the membranes from chlorine and large particulates.

Advance preparation is key to effectively implementing a boil water response as a public health protection measure. To assist with this, the Bureau of Water Supply Protection has prepared a series of checklists and Frequently Asked Questions (FAQs) that address issues that arise when boil water events occur. These documents were prepared for different target audiences and should be used by public health staff to answer questions and as informational handouts for the public. Some water customers will have issues that are addressed in more than one of these FAQs (e.g. hospitals that are also food service establishments). Other advance preparation items that can help both utilities and public health professionals ensure effective implementation of a boil water response include:

- Accurate identification and mapping of service areas
- Pre-identification of critical users (e.g.
hospitals, schools, daycare centers, nursing homes/assisted living facilities, medical offices)
- Contact information for critical users (valid for off hours/24 hours a day)
- Contact information for public media (radio, newspaper, television)
- Water system emergency contacts (valid for off hours/24 hours a day)
- Up to date water supply emergency response plans
- Contact information for certified bulk haulers in the area

ALTERNATE WATER SOURCES

Boiling is the most reliable method the public can use to disinfect their drinking water and should be the first option for on-site disinfection. However, it may not always be possible or practical to boil water. Power outages may leave consumers unable to boil, and boiling may not be practical to meet some water needs. If needs are critical and cannot be discontinued, alternate water sources or other disinfection methods may be necessary. Generally, water used by the public for drinking and food preparation during a boil water event should be obtained in the following order of preference, depending on the scope of the affected area and incident-specific conditions:

- Boiled (and then cooled) tap water
- Bottled water (certified for distribution in NY)
- Alternate public water supply (water from another public water supply that is not operating under a boil water notice)
- Bulk water arranged by a water utility or emergency agency
- Water chemically disinfected on-site

Roadside springs are not a sure source of safe drinking water, since they are seldom monitored and no one is in charge of keeping them safe. Roadside spring water that is used for drinking or food preparation should be boiled (and then cooled) before use. Chemical disinfection is limited in effectiveness and is not appropriate for very turbid (muddy) water, or where raw sewage or other fecal matter may be present. In such cases, use only an alternate source of water. Chemical disinfection is discussed in greater detail in a previous section.
When a boil water response has ended, recovery actions needed at consumer locations are often overlooked. Contaminated water may remain in plumbing lines, tanks, ice makers, and other equipment and can sicken consumers. Information should be provided to consumers to inform them of the need to flush and/or disinfect pipes, tanks and equipment. No single set of recommendations for flushing or disinfection can apply to all users; however, checklists and fact sheets are available from the Department to help consumers implement the final protective steps needed to assure the return to potable water.

REFERENCES

1. Ciochetti, D. A., and R. H. Metcalf. 1984. Pasteurization of naturally contaminated water with solar energy. Appl. Environ. Microbiol. 47:223-228.
2. Fayer, R. 1994. Effect of high temperature on infectivity of Cryptosporidium parvum oocysts in water. Appl. Environ. Microbiol. 60:2732-2735.
3. Harp, J. A., R. Fayer, B. A. Pesch, and G. J. Jackson. 1996. Effect of pasteurization on infectivity of Cryptosporidium parvum oocysts in water and milk. Appl. Environ. Microbiol. 62:2866-2868.
4. Metcalf, R. H. 1995. Unpublished data.
5. New York State Department of Health, Center for Environmental Health. Environmental Health Manual Item - WSP 22, Boil Water Orders and Notices.
6. New York State Department of Health, Center for Environmental Health. Boil Water Orders Notices - Fact Sheet for Public Water Suppliers.
7. Centers for Disease Control and Prevention. A Guide to Drinking Water Treatment and Sanitation for Backcountry & Travel Use. Available from: http://www.cdc.gov/healthywater/drinking/travel/backcountry_water_treatment.html
8. New York State Department of Health, Center for Environmental Health. Flood Preparedness. Available from: http://www.health.state.ny.us/environmental/emergency/flood/
In this article, we will continue learning important IP addressing and subnetting information, and we will learn how to apply this valuable information to some real-world scenarios. Before we move on, I'd like to review some information that was covered in part one of this series:

- IP addresses must be unique on the Internet (when using public IP addresses) and on a private network (when using private IP addresses).
- DHCP is commonly used to hand out IP addresses. This helps to keep addresses unique, provides a database of addresses assigned and prevents administrators from having to assign addresses statically.
- IP addresses are 32 bits (made up of four octets of 8 bits each).
- A subnet mask is what tells the computer what part of the IP address is the network and what part is for the host computers on that network.
- Subnetting is the process of breaking a large network into smaller networks by adding 1s to the subnet mask.
- Today, classless IP addresses are used almost exclusively, and classful IP addresses are used only for certification testing or older routing protocols.
- A default gateway is where a device sends packets that are destined for a device not on the local LAN. Again, the device knows what is and what is not on the local LAN by the subnet mask.
- Private IP addresses are used by most networks today, and these special, non-routable IP addresses are translated to public Internet IP addresses when those devices need to talk to the Internet.

Now, let's learn more important IP address and subnetting information and how it applies to your real-world network.

Using the host's formula

A common real-world question when laying out your network is: "What subnet mask do I need for my network?" To answer this question, let's learn how to use the "host's formula." The host's formula will tell you how many hosts will be allowed on a network that has a certain subnet mask. The host's formula is 2^n - 2.
The "n" in the host's formula represents the number of 0s in the subnet mask, if the subnet mask were converted to binary (we will talk about this soon). To use the host's formula, let's first look at a simple example. Say that you have the IP address space 192.168.0.0. Currently, you have a small network with 20 hosts. This network may grow to 300 hosts within the next year, however, and you may have multiple locations in time and need to allow for them to communicate using this address space. With only 20 hosts, the simplest thing to do would be to use 255.255.255.0 as your subnet mask. This would mean that you would have 192.168.0.x as your network and x.x.x.0-255 for your hosts. Before you decide to use this subnet mask, however, let's apply the host's formula to it. To use the host's formula in this scenario, you take the subnet mask (255.255.255.0) and convert it to binary (for more information, see my binary-to-decimal conversion article). This would give you: 11111111 11111111 11111111 00000000. The host's formula is 2^n - 2, where "n" is the number of zeros in the subnet mask. As you can count, there are eight zeros in the subnet mask. To use this with the host's formula, you would calculate 2^8 - 2. This comes to 256 minus 2, or 254. So, with the subnet mask specified, you will get 254 usable hosts. This would suit your 20-user network now but won't support your future network host expectations of 300 hosts. It is in your best interest to plan ahead and choose the best subnet mask the first time. This prevents you from having to come back later and change all the IP addresses on this network. If you remember from part one, adding 1s to the subnet mask means that you get fewer hosts per network but more networks. If you take away 1s from the subnet mask, you get more hosts per network but fewer networks. The latter is what we need to do.
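The binary counting above can be automated. Here is a minimal Python sketch of the host's formula (the function is my own illustration, not from the article):

```python
def usable_hosts(subnet_mask):
    """Apply the host's formula 2**n - 2, where n is the number of
    0 bits in the dotted-decimal subnet mask."""
    # Convert each octet to an 8-bit binary string and join them
    bits = ''.join(f'{int(octet):08b}' for octet in subnet_mask.split('.'))
    zeros = bits.count('0')
    return 2 ** zeros - 2

print(usable_hosts('255.255.255.0'))  # 254
```

You can call the same function on any candidate mask to run the "what-if" comparisons this article walks through by hand.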
To do this, let's take away one of the 1s to make our subnet mask: 11111111 11111111 11111110 00000000. In decimal, this is 255.255.254.0. This means that you have nine 0s in the subnet mask. To apply the host's formula with this subnet mask, we would calculate 2^9 - 2. This comes to 512 minus 2, or 510. So, with the subnet mask specified, you will get 510 usable hosts. This would definitely suit your 20-user network now and your future network host expectations of 300 hosts. Considering that information, we know that the most efficient subnet mask for our network is 255.255.254.0. Our valid hosts are 192.168.0.1-255 and 192.168.1.0-254. That is how you arrive at the total of 510 usable hosts. You can use the host's formula to calculate "what-if" scenarios to determine the most efficient subnet mask for the size of your networks. Continue reading the next part of this series, where we discuss using the subnet's formula.

About the author: David Davis (CCIE #9369, CWNA, MCSE, CISSP, Linux+, CEH) has been in the IT industry for 15 years. Currently, he manages a group of systems/network administrators for a privately owned retail company and authors IT-related material in his spare time. He has written more than 50 articles, eight practice tests and three video courses and has co-authored one book. His Web site is HappyRouter.com. This was first published in November 2006.
Cohesion is a word that we use to describe unity or togetherness. In other words, things that are cohesive fit together. Think about an army. For an army to be effective, it must have cohesion. Good soldiers, a strategic general, and strong communication all mesh to create a united, cohesive organization. Cohesive sentences are a lot like cohesive armies. They have good soldiers (like a noun and a verb), a strategic general (a writer who carefully places words in all the right spots), and strong communication (sentence and paragraph transitions make sense). One of the greatest challenges of the English language is writing strong sentences. It takes practice. In this lesson, you'll learn about the components of cohesive sentences so that you can practice cohesive writing. Every cohesive sentence in English must have a noun, or a subject (like a person, place, or thing), and a verb, or an action. It should be an independent sentence, meaning the sentence is full and complete, not a fragment or a partial sentence. These are the good soldiers. Let's take a look at an example of a cohesive sentence: This sentence has a noun, 'I,' and a verb, 'am.' It's cohesive, but it's not a great sentence because it doesn't give much information other than the condition of your existence. Let's keep going and add more information to our sentence: Our sentence is beginning to make more sense. So not only does a cohesive sentence have a noun and a verb, a cohesive sentence makes sense. This is the strategic general giving direction. This sentence makes sense, but what happens next? Let's take our cohesive sentence and add other sentences: These sentences no longer make sense.
There is no strong communication between the sentences. These sentences are not cohesive because they are not coherent. A cohesive sentence must have a noun and a verb, but it also must make sense and it must flow with other sentences. Without this flow, a cohesive sentence will not fit into a longer paragraph. So while a cohesive sentence must be an independent sentence with a noun and a verb, it must also agree with other sentences around it to be both cohesive and coherent. Now let's look at our sentence and make it cohesive and coherent: This sentence is part of a quote from Alexander the Great, one of the most celebrated and famous leaders in human history. It's cohesive, coherent, and can stand alone. By repeating the words and the main idea, Alexander said that a weak army with a strong leader is much better than a strong army with a weak leader. While both parts of the sentence could be independent (they have a noun and verb), the parts of the sentences combined make a powerful cohesive sentence. In addition to being an independent sentence, having cohesion and flowing with the sentences around it, a cohesive sentence can include a cohesive pronoun, transition word, correlative conjunction, and conjunctive adverb usage. In a cohesive sentence, a pronoun, a word that takes the place of a noun, must agree with the subject. For example: Someone, the subject, is singular, and therefore the pronoun 'he' must be singular as well. Most people are uncomfortable writing 'he' as it sounds non-inclusive of 'she' and write 'they' instead. Transition words are signals to the reader that a shift is coming in a sentence. There are additive transitions (like 'also,' 'for example,' and 'with regards to'), adversative transitions (like 'however' and 'on the other hand'), causal transitions ('due to,' 'consequently,' and 'thus') and sequential transitions ('first of all,' 'next,' and 'in summary'). 
By using these transitions, the writer shows the reader movement in thought, the flow of the writing, and even the conclusion of the entire written piece.

Correlative conjunctions in cohesive sentences connect two ideas or subjects and include both/and, neither/nor, either/or, and not only/but also. They are essential to cohesive sentences because they connect different subjects and express relationships between the two. Be careful: there must be parallelism, matching grammatical structures, and the verbs in the sentence must agree with the subjects.

Conjunctive adverbs (and there are many, like moreover, nevertheless, and also) are words that modify and connect verbs in a cohesive sentence. In a way, they function like transition words that modify verbs. They usually, but not always, follow this pattern: main clause + semicolon (;) + conjunctive adverb + comma (,) + main clause.

To review, a cohesive sentence must be able to stick together:
- It must have cohesion: the ability to stand alone as an independent sentence.
- It always has a noun and a verb.
- It must make sense.
- It must flow with the sentences around it, making it coherent.
- Its pronouns (I, you, he, she, it, we, they) must agree with their subjects.
- It can use transition words, correlative conjunctions, and conjunctive adverbs.
Hemoglobin A1C (HbA1C or A1C) is a simple blood test that measures your average blood sugar level over the past 2 to 3 months. Because it is an average over several months, you do not have to be fasting when you get your blood drawn to check it. It is one of the tests commonly used to diagnose prediabetes (sometimes called insulin resistance) as well as type 1 and type 2 diabetes. It is also the main test used to help manage diabetes. If you have diabetes or prediabetes, your healthcare provider will likely order this test every three to six months, depending on how your blood sugar is doing.

What is an A1C? When sugar enters the bloodstream, it attaches to hemoglobin, the protein in red blood cells that carries oxygen. Everyone has some sugar attached to their hemoglobin (called glycosylated hemoglobin); however, people with higher blood sugar levels have more. The A1C test measures the percentage of the hemoglobin that is coated with sugar. The more hemoglobin that is coated with sugar, the higher the A1C will be. The higher the A1C, the more dangerous the blood sugar becomes. Controlling your blood sugar lowers the A1C and lessens the danger.

A normal A1C level is at or below 5.6%. A level of 5.7% to 6.4% indicates prediabetes, and a level of 6.5% or more indicates diabetes. Within the 5.7% to 6.4% prediabetes range, the higher your A1C, the greater your risk of developing diabetes. Higher A1C levels are linked to diabetes complications including vision trouble or blindness, kidney problems and failure, heart attacks and circulation problems, and nerve problems such as numbness of the toes and feet. The goal for most people with diabetes is an A1C of 7% or less. However, your personal goal will depend on many things, such as your age and other medical conditions. Ask your healthcare provider to help you set a personal goal.
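The diagnostic cut-offs above are simple threshold comparisons. As an illustration only (the function name is ours, and an actual diagnosis is a clinical decision), they can be encoded like this:

```python
def classify_a1c(a1c_percent: float) -> str:
    """Map an A1C percentage to the category described above.

    Thresholds: at or below 5.6% is normal, 5.7-6.4% indicates
    prediabetes, and 6.5% or more indicates diabetes.
    Illustrative only -- not a clinical tool.
    """
    if a1c_percent >= 6.5:
        return "diabetes"
    if a1c_percent >= 5.7:
        return "prediabetes"
    return "normal"

print(classify_a1c(5.4))  # normal
print(classify_a1c(6.0))  # prediabetes
print(classify_a1c(7.2))  # diabetes
```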
There are several factors that can falsely increase or decrease A1C results: 1) kidney and liver disease, as well as severe anemia; 2) certain medications, including opioids and some HIV medications; 3) blood loss or blood transfusion; and 4) early or late pregnancy.

The American Diabetes Association (ADA) recommends that everyone 45 years of age and older have the A1C test as a method of screening for blood sugar problems. If results are normal, the A1C can be repeated every three years. If results suggest prediabetes, the A1C should be checked at least annually. Those who were diagnosed with diabetes during pregnancy (called gestational diabetes) that resolved after the baby was born should be tested every three years thereafter to make sure they do not develop diabetes in the future. For those who are overweight or obese and have one or more of the risk factors for developing type 2 diabetes, the recommendation is to have the A1C tested as part of an annual medical exam. Typical risk factors include: a parent or sibling with diabetes, being physically inactive, having high blood pressure, having high triglycerides or a low HDL, having a history of heart disease, or being a member of a high-risk ethnic group, including Native American, African American, Latino, or Asian American. Ask your healthcare provider if you should be screened for blood sugar problems. Early treatment can postpone or even eliminate potential complications of high blood sugar.

Zuzana Fletcher is a Nurse Practitioner Specialist at the Pocatello Health West Clinic. She graduated with honors in 2013 and has more than 6 years of diverse experience in her field.
What effect does HIV have on the body? HIV attacks a specific type of immune system cell in the body, known as the CD4 helper cell or T cell. When HIV destroys these cells, it becomes harder for the body to fight off other infections. When HIV is left untreated, even a minor infection such as a cold can be much more severe, because the body has difficulty responding to new infections.

Not only does HIV attack CD4 cells, it also uses the cells to make more of the virus. HIV destroys CD4 cells by using their replication machinery to create new copies of the virus. This ultimately causes the CD4 cells to swell and burst. When the virus has destroyed a certain number of CD4 cells and the CD4 count drops below 200, a person has progressed to AIDS. However, it's important to note that advancements in HIV treatment have made it possible for many people with HIV to live longer, healthier lives.

HIV is transmitted through contact with the following bodily fluids, from most likely to lead to HIV transmission to least likely:
- blood
- semen
- vaginal fluid
- breast milk

Sex without a condom and sharing needles, even tattoo or piercing needles, can result in the transmission of HIV. However, if an HIV-positive person is able to achieve viral suppression, they'll be unable to transmit HIV to others through sexual contact.

HIV is classified into 3 stages: acute HIV, chronic HIV, and AIDS. HIV doesn't always multiply rapidly. If left untreated, it can take years for a person's immune system to be affected enough to show signs of immune dysfunction and other infections. Even without symptoms, HIV can still be present in the body and can still be transmitted. Receiving adequate treatment that results in viral suppression stops the progression of immune dysfunction and AIDS. Adequate treatment also helps a damaged immune system to recover.

Once a person contracts HIV, the acute infection takes place immediately.
Symptoms of the acute infection may take place days to weeks after the virus has been contracted. During this time, the virus is multiplying rapidly in the body, unchecked. This initial HIV stage can result in flu-like symptoms, such as fever, chills, sore throat, and swollen lymph nodes. However, not all people with HIV experience initial flu-like symptoms. The flu-like symptoms are due to the rapid increase in copies of HIV and widespread infection in the body. During this time, the number of CD4 cells starts to fall very quickly. The immune system then kicks in, causing CD4 levels to rise once again; however, the CD4 levels may not return to their pre-HIV levels.

In addition to potentially causing symptoms, the acute stage is when people with HIV have the greatest chance of transmitting the virus to others, because HIV levels are very high at this time. The acute stage typically lasts between several weeks and months.

The chronic HIV stage is also known as the latent or asymptomatic stage. During this stage, a person usually won't have as many symptoms as they did during the acute phase, because the virus doesn't multiply as quickly. However, a person can still transmit HIV if the virus is left untreated and they continue to have a detectable viral load. Without treatment, the chronic HIV stage can last for many years before advancing to AIDS.

Advances in antiretroviral treatments have significantly improved the outlook for people living with HIV. With proper treatment, many people who are HIV-positive are able to achieve viral suppression and live long, healthy lives.

A normal CD4 count ranges from approximately 500 to 1,600 cells per cubic millimeter of blood (cells/mm3) in healthy adults, according to HIV.gov. A person receives an AIDS diagnosis when they have a CD4 count of fewer than 200 cells/mm3. A person may also receive an AIDS diagnosis if they've had an opportunistic infection or another AIDS-defining condition.
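The CD4 ranges just cited lend themselves to a simple sketch (the function name is ours; note that an actual AIDS diagnosis also depends on clinical criteria such as opportunistic infections):

```python
def interpret_cd4(count_per_mm3: int) -> str:
    """Interpret a CD4 count against the ranges cited above.

    Roughly 500-1,600 cells/mm3 is typical for healthy adults;
    below 200 cells/mm3 meets the CD4 criterion for an AIDS
    diagnosis. Illustrative only -- not a clinical tool.
    """
    if count_per_mm3 < 200:
        return "meets CD4 criterion for AIDS diagnosis"
    if count_per_mm3 < 500:
        return "below typical healthy range"
    return "within typical healthy range"

print(interpret_cd4(650))  # within typical healthy range
print(interpret_cd4(350))  # below typical healthy range
print(interpret_cd4(150))  # meets CD4 criterion for AIDS diagnosis
```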
People with AIDS are vulnerable to opportunistic infections and common infections that may include tuberculosis, toxoplasmosis, and pneumonia. People with weakened immune systems are also more susceptible to certain types of cancer, such as lymphoma and cervical cancer. The survival rate for people with AIDS varies depending on treatment and other factors. The most important factor affecting HIV progression is the ability to achieve viral suppression. Taking antiretroviral therapy regularly helps many people slow the progression of HIV and reach viral suppression. However, a variety of factors affect HIV progression, and some people progress through the phases of HIV more quickly than others. Factors that affect HIV progression can include: - Ability to achieve viral suppression. Whether someone can take their antiretroviral medications and achieve viral suppression is the most important factor by far. - Age when symptoms start. Being older can result in faster progression of HIV. - Health before treatment. If a person had other diseases, such as tuberculosis, hepatitis C, or other sexually transmitted diseases (STDs), it can affect their overall health. - Timing of diagnosis. Another important factor is how soon a person was diagnosed after they contracted HIV. The longer between their diagnosis and treatment, the more time the disease has to progress unchecked. - Lifestyle. Practicing an unhealthy lifestyle, such as having a poor diet and experiencing severe stress, can cause HIV to progress more quickly. - Genetic history. Some people seem to progress more quickly through their disease given their genetic makeup. Some factors can delay or slow the progression of HIV. 
These include:
- taking antiretroviral medications and achieving viral suppression
- seeing a healthcare provider, as recommended, for HIV treatments
- stopping the use of substances such as ethanol, methamphetamine, or cocaine
- taking care of one's health, including having sex with condoms to prevent the acquisition of other STDs, trying to minimize stress, and sleeping regularly

Living a healthy lifestyle and seeing a healthcare provider regularly can make a big difference in a person's overall health.

Treatments for HIV typically involve antiretroviral therapy. This isn't a specific regimen, but instead a combination of three or four drugs. The U.S. Food and Drug Administration has currently approved nearly 50 different medications to treat HIV.

Antiretroviral therapy works to prevent the virus from copying itself. This maintains immunity levels while slowing the progression of HIV. Before prescribing medication, a healthcare provider will take the following factors into consideration:
- a person's health history
- the levels of the virus in the blood
- possible side effects
- any pre-existing allergies

There are seven classes of HIV drugs, and a typical treatment regimen involves medications from different classes. Most healthcare providers will start people with HIV on a combination of three medications from at least two different drug classes. These classes, from the most commonly prescribed to the least commonly prescribed, are:
- nucleoside/nucleotide reverse transcriptase inhibitors (NRTIs)
- integrase strand transfer inhibitors (INSTIs)
- non-nucleoside/non-nucleotide reverse transcriptase inhibitors (NNRTIs)
- protease inhibitors (PIs)
- CCR5 antagonists (CCR5s)
- fusion inhibitors
- post-attachment inhibitors, a new drug class not in significant use yet

HIV doesn't cause a lot of outward or noticeable symptoms until the disease has progressed. For this reason, it's important to understand how HIV is transmitted and the ways to prevent transmission.
HIV can be transmitted by: - having sex, including oral, vaginal, and anal sex - sharing needles, including tattoo needles, needles used for body piercing, and needles used for injecting drugs - coming into contact with body fluids, such as semen, vaginal fluid, blood, and breast milk HIV is not transmitted by: - breathing the same air as a person living with HIV - getting bitten by a mosquito or other biting insect - hugging, holding hands with, kissing, or touching a person living with HIV - touching a door handle or toilet seat that’s been used by an HIV-positive person Keeping this in mind, some of the ways a person can prevent HIV include: - practicing the abstinence method by refraining from oral, anal, or vaginal sex - always using a latex barrier, such as a condom, when having oral, anal, or vaginal sex - avoiding sharing needles with other people Healthcare providers usually recommend that people get an HIV test at least once a year if they’ve had sex without condoms or shared needles with anyone in the past. People with past exposure to HIV would also benefit from episodic testing. If a person has been exposed to HIV within the past 72 hours, they should consider post-exposure prophylaxis, otherwise known as PEP. People with ongoing exposure to HIV may benefit from pre-exposure prophylaxis (PrEP) and regular testing. PrEP is a daily pill, and the US Preventive Services Task Force (USPSTF) recommends a PrEP regimen for everyone at increased risk of HIV. Symptoms can take years to appear, which is why it’s so important to get tested regularly. Advances in HIV treatments mean that people are living longer with the condition than ever before. Getting tested regularly and taking good care of one’s health can reduce transmission. If HIV is contracted, getting early treatment can prevent further transmission to others as well as progression of the disease. Treatment is vital to prevent the disease from progressing to AIDS.
Airborne viruses and relative humidity

In the range of 40-60% relative humidity, the time aerosolized virus droplets remain airborne is significantly shorter, as is the duration of a virus's viability. This range also minimizes risks to human health from other biological contaminants and chemical interactions. We explain how this works below.

Not so mysterious after all

Actually, much about the role humidity plays in the transmission of viruses is quite logical. In our INSIGHTS section there is an excellent short explanatory podcast, recorded by Dave Marshall-George, Sales Director at Condair, with BusinessNet Explorer, on how indoor humidity mitigates the spread of airborne viruses and how we can better manage our indoor humidity for the health of the nation.

For instance, the higher the relative humidity, the less inclined airborne droplets are to remain suspended in the air. Instead, the increased volume and weight of absorbed moisture cause them to fall to the floor quickly, where they are less likely to invade our respiratory systems. The ideal indoor humidity level of between 40-60% RH has been scientifically shown to combat airborne flu infections.

In addition, the same ideal humidity level shortens the time airborne flu remains infectious, due to a more complex phenomenon that happens inside the virus. Physiochemical reactions can disarm viruses from within, and these are more likely to occur in airborne droplets that contain higher levels of moisture. Studies revealed that at room temperature, the flu virus's survival rate is lowest at around 50% relative humidity, because the salt concentration of the host droplet is most damaging to the virus itself at this level. Several medical studies revealing these fascinating natural mechanisms for airborne infection control can be found here.
Knowing this, it should come as no surprise that it is during the cold winter months, when heating systems run constantly and cause relative humidity to fall below 40%, that you come down with the flu and other respiratory infections. This infographic shows the three mechanisms by which healthy humidity prevents airborne infection.

Furthermore, clinical findings published in 1985 by Sterling EM, Arundel A, and Sterling TD, in a study confirmed by several further studies over the following decades, reveal correlations relevant for comfort and health protection at different room humidity levels. The risk posed by undesired microorganisms and the occurrence of specific symptoms of illness are minimal within the optimal range between 40 and 60% relative humidity. The well-known Scofield/Sterling Diagram reflects their findings.

The authors reviewed scientific literature that focused on the effects of humidity on biological contaminants (viruses, bacteria and fungi) causing respiratory disease, on chemical interactions, and on the possible impacts on human health and comfort. 74 references are listed in the paper. Their conclusion is that the optimal humidity range for minimizing risks to human health from biological contaminants and chemical interactions is the narrow range between 40-60% RH, at normal room temperatures.

Conclusions in a nutshell:
- viruses and bacteria prefer either low or high humidity, while fungi prefer humidities above 80% RH for optimal survival on surfaces
- for airborne microbes, midrange humidity was least favourable for survival
- off-gassing of formaldehyde and other chemical interactions increase above 40% RH, while the concentration of irritating ozone decreases

"Criteria for Human Exposure to Humidity in Occupied Buildings" by Sterling EM, Arundel A, Sterling TD
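The 40-60% RH rule discussed above can be sketched as a simple range check (the function and its messages are ours, purely illustrative):

```python
def humidity_status(rh_percent: float) -> str:
    """Classify indoor relative humidity against the 40-60% RH optimum.

    Below 40% RH favours airborne virus survival; above the optimum,
    conditions increasingly favour surface microbes (fungi thrive
    above roughly 80% RH). Illustrative only.
    """
    if rh_percent < 40:
        return "too dry: airborne viruses survive longer"
    if rh_percent <= 60:
        return "optimal range (40-60% RH)"
    return "too humid: favours fungi and other surface microbes"

print(humidity_status(30))  # too dry: airborne viruses survive longer
print(humidity_status(50))  # optimal range (40-60% RH)
print(humidity_status(75))  # too humid: favours fungi and other surface microbes
```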
Simplify Symbolic Expressions

Simplification of a mathematical expression is not a clearly defined subject. There is no universal idea as to which form of an expression is simplest. The form of a mathematical expression that is simplest for one problem can turn out to be complicated or even unsuitable for another problem. For example, the following two mathematical expressions present the same polynomial in different forms:

(x + 1)(x - 2)(x + 3)(x - 4),
x^4 - 2x^3 - 13x^2 + 14x + 24.

The first form clearly shows the roots of this polynomial. This form is simpler for working with the roots. The second form serves best when you want to see the coefficients of the polynomial. For example, this form is convenient when you differentiate or integrate polynomials.

If the problem you want to solve requires a particular form of an expression, the best approach is to choose the appropriate simplification function. See Choose Function to Rearrange Expression. Besides specific simplifiers, Symbolic Math Toolbox™ offers the general simplifier simplify. If you do not need a particular form of expressions (expanded, factored, or expressed in particular terms), use simplify to shorten mathematical expressions. For example, use this simplifier to find a shorter form for a final result of your computations.

simplify works on various types of symbolic expressions, such as polynomials and expressions with trigonometric, logarithmic, and special functions. For example, simplify these polynomials.

syms x y
simplify((1 - x^2)/(1 - x))
simplify((x - 1)*(x + 1)*(x^2 + x + 1)*(x^2 + 1)*(x^2 - x + 1)*(x^4 - x^2 + 1))

ans = x + 1
ans = x^12 - 1

Simplify expressions involving trigonometric functions.

simplify(cos(x)^(-2) - tan(x)^2)
simplify(cos(x)^2 - sin(x)^2)

ans = 1
ans = cos(2*x)

Simplify expressions involving exponents and logarithms. In the third expression, use log(sym(3)) instead of log(3). If you use log(3), then MATLAB® calculates log(3) in double precision, and then converts the result to a symbolic number.
simplify(exp(x)*exp(y))
simplify(exp(x) - exp(x/2)^2)
simplify(log(x) + log(sym(3)) - log(3*x) + (exp(x) - 1)/(exp(x/2) + 1))

ans = exp(x + y)
ans = 0
ans = exp(x/2) - 1

Simplify expressions involving special functions.

simplify(gamma(x + 1) - x*gamma(x))
simplify(besselj(2, x) + besselj(0, x))

ans = 0
ans = (2*besselj(1, x))/x

You also can simplify symbolic functions by using simplify.

syms f(x,y)
f(x,y) = exp(x)*exp(y)
f = simplify(f)

f(x, y) = exp(x)*exp(y)
f(x, y) = exp(x + y)

Simplify Using Options

By default, simplify uses strict simplification rules and ensures that simplified expressions are always mathematically equivalent to initial expressions. For example, it does not combine logarithms for complex values in general.

syms x
simplify(log(x^2) + log(x))

ans = log(x^2) + log(x)

You can apply additional simplification rules which are not correct for all values of parameters and in all cases, but which return shorter results. For this approach, use the IgnoreAnalyticConstraints option. For example, simplifying the same expression with IgnoreAnalyticConstraints set to true, you get the result with combined logarithms.

simplify(log(x^2) + log(x),'IgnoreAnalyticConstraints',true)

ans = 3*log(x)

IgnoreAnalyticConstraints provides a shortcut allowing you to simplify expressions under commonly used assumptions about values of the variables. Alternatively, you can set appropriate assumptions on variables explicitly. For example, combining logarithms is not valid for complex values in general. If you assume that x is a real value, simplify combines logarithms without IgnoreAnalyticConstraints.

assume(x,'real')
simplify(log(x^2) + log(x))

ans = log(x^3)

For further computations, clear the assumption on x by recreating it using syms x. Another approach that can improve simplification of an expression or function is the syntax simplify(f,'Steps',n), where n is a positive integer that controls how many simplification steps simplify takes. Specifying more simplification steps can help you simplify the expression better, but it takes more time. By default, n = 1. For example, create and simplify this expression.
The result is shorter than the original expression, but it can be simplified further.

syms x
y = (cos(x)^2 - sin(x)^2)*sin(2*x)*(exp(2*x) - 2*exp(x) + 1)/...
((cos(2*x)^2 - sin(2*x)^2)*(exp(2*x) - 1));
simplify(y)

ans = (sin(4*x)*(exp(x) - 1))/(2*cos(4*x)*(exp(x) + 1))

Specify the number of simplification steps for the same expression. First, use 25 steps.

simplify(y,'Steps',25)

ans = (tan(4*x)*(exp(x) - 1))/(2*(exp(x) + 1))

Use 50 steps to simplify the expression even further.

simplify(y,'Steps',50)

ans = (tan(4*x)*tanh(x/2))/2

Suppose you already simplified an expression or function, but you want to see other equivalent forms of the same expression. To do this, you can set the 'All' option to true. The syntax simplify(f,'Steps',n,'All',true) shows other equivalent results of the same expression found during the simplification steps.

syms x
y = cos(x) + sin(x)
simplify(y,'Steps',10,'All',true)

ans =
2^(1/2)*sin(x + pi/4)
2^(1/2)*cos(x - pi/4)
cos(x) + sin(x)
2^(1/2)*((exp(- x*1i - (pi*1i)/4)*1i)/2 - (exp(x*1i + (pi*1i)/4)*1i)/2)

To return even more equivalent results, increase the number of steps to 25.

simplify(y,'Steps',25,'All',true)

ans =
2^(1/2)*sin(x + pi/4)
2^(1/2)*cos(x - pi/4)
cos(x) + sin(x)
-2^(1/2)*(2*sin(x/2 - pi/8)^2 - 1)
2^(1/2)*(exp(- x*1i + (pi*1i)/4)/2 + exp(x*1i - (pi*1i)/4)/2)
2^(1/2)*((exp(- x*1i - (pi*1i)/4)*1i)/2 - (exp(x*1i + (pi*1i)/4)*1i)/2)

Simplify Using Assumptions

Some expressions cannot be simplified in general, but become much shorter under particular assumptions. For example, simplifying this trigonometric expression without additional assumptions returns the original expression.

syms n
simplify(sin(2*n*pi))

ans = sin(2*pi*n)

However, if you assume that the variable n is an integer, the same trigonometric expression simplifies to 0.

assume(n,'integer')
simplify(sin(2*n*pi))

ans = 0

For further computations, clear the assumption. You can use the general simplification function, simplify, to simplify fractions.
However, Symbolic Math Toolbox offers a more efficient function specifically for this task: simplifyFraction. simplifyFraction(f) represents the expression f as a fraction, where both the numerator and denominator are polynomials whose greatest common divisor is 1. For example, simplify these expressions.

syms x y
simplifyFraction((x^3 - 1)/(x - 1))

ans = x^2 + x + 1

simplifyFraction((x^3 - x^2*y - x*y^2 + y^3)/(x^3 + y^3))

ans = (x^2 - 2*x*y + y^2)/(x^2 - x*y + y^2)

simplifyFraction does not expand expressions in the numerator and denominator of the returned result. To expand the numerator and denominator in the resulting expression, use the 'Expand' option. For comparison, first simplify this fraction without 'Expand'.

simplifyFraction((1 - exp(x)^4)/(1 + exp(x))^4)

ans = (exp(2*x) - exp(3*x) - exp(x) + 1)/(exp(x) + 1)^3

Now, simplify the same expression with 'Expand' set to true.

simplifyFraction((1 - exp(x)^4)/(1 + exp(x))^4,'Expand',true)

ans = (exp(2*x) - exp(3*x) - exp(x) + 1)/(3*exp(2*x) + exp(3*x) + 3*exp(x) + 1)
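Results like these can also be sanity-checked numerically outside MATLAB. As a quick sketch using only the Python standard library (the sample points are arbitrary), here is a spot-check of two of the trigonometric identities from the examples above:

```python
import math

# Numerically spot-check two identities produced by simplify above:
#   1/cos(x)^2 - tan(x)^2 == 1
#   cos(x)^2 - sin(x)^2  == cos(2x)
def check_identities(xs):
    for x in xs:
        # sec^2(x) - tan^2(x) should equal 1 for any x where cos(x) != 0
        assert math.isclose(1 / math.cos(x)**2 - math.tan(x)**2, 1.0,
                            abs_tol=1e-9)
        # double-angle identity for cosine
        assert math.isclose(math.cos(x)**2 - math.sin(x)**2, math.cos(2 * x),
                            abs_tol=1e-9)
    return True

print(check_identities([0.1, 0.7, 1.3, -2.4]))  # True
```

This is no substitute for symbolic simplification, but it is a cheap way to catch a wrong "simplified" form before relying on it.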
Physical or Chemical?

In this activity, students will experience physical and chemical changes. Through their observations, students will note the differences between these processes and use their understanding of these differences to classify five different processes as chemical or physical.

- Old pennies
- Small dish
- Vinegar
- Glass cup
- Warm water
- Sugar
- Stirring stick/spoon
- 3 deep plates/pans (for the milk, vinegar, and soda water)
- Milk (preferably homogenized)
- Food colouring
- Dish detergent
- 2 medium-sized pop bottles
- Baking soda
- Carbonated water (or soda pop, but this is sticky)

This activity is best done towards the end of your unit on physical and chemical changes, because students are asked to draw upon their knowledge of the changes to determine which of the processes are physical and which are chemical. While introducing the activity, it is useful to remind students that chemical changes involve processes in which the substances present at the beginning of the change are not present at the end, and new substances are formed. A chemical change cannot be "undone". Common indications of a chemical change are a colour change, bubbles, the formation of a new substance, or the emission of a gas. In a physical change, the material itself is the same before and after the change, although some of its properties (like shape or phase) change. The change can be "undone."

Part 1 is a chemical change. The rust on the penny reacts with the vinegar, which is why it is removed from the penny. Part 2 is physical. Substances that are small and light enough can dissolve in water and remain suspended between the water molecules, so that it seems they have disappeared. As the solution dries up, however, the sugar will reappear unchanged. Part 3 is a physical change as well, though it may not look like it. The soap breaks the surface tension of the milk, which causes currents.
These milk currents carry the food colouring with them, which is why we witness the colours spreading out. Part 4 is a chemical change. Vinegar is acidic and baking soda is basic; when you mix the two, they react to form gaseous carbon dioxide. The bubbles observed are due to carbon dioxide escaping. Part 5 seems like it might be a chemical change as well, since bubbles are produced, but it is actually a physical change. Carbon dioxide is already dissolved in the carbonated water (hence the name), and shaking the closed bottle causes pressure to build up inside. When you open the bottle, that pressure is released, and so is the carbon dioxide.

In your discussion of this activity, you should try to clear up any troubles the students are having. For example, a student may wonder why the penny is a chemical change, since it appears that we start and end with a penny and vinegar. However, the rust is actually a different substance than the penny, and it has reacted and mixed with the vinegar, which is why it is gone.

Shiny New Penny. Place an old rusted penny into a small dish of vinegar and wait for 30 seconds. Pull it out, and it should be shiny and look just like new. Physical or chemical?

Disappearing Sugar? Pour some sugar into a cup of warm water and start to stir. As the sugar dissolves into the water, it starts to disappear. Physical or chemical? Make note of reasoning on the worksheet.

Milk Fireworks. Put 2-3 drops of different coloured food colouring onto different spots on a plate with some milk in it. Then, drop some dish detergent in or around your drops and watch the colours spread! Physical or chemical? Make note of reasoning on the worksheet.

Bubbles! Fill a 700 ml pop bottle with 100 ml of vinegar. Make sure there is a decent-sized plate or dish underneath the bottle to catch overflow! Now add a spoon of baking soda, close the cap, and give it one little shake. Now open the pop bottle. Physical or chemical? Make note of reasoning on the worksheet.
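For teacher reference, the reaction behind the vinegar and baking soda stations can be written out as a balanced equation (sodium acetate is the dissolved product; the carbon dioxide gas is what produces the bubbles and overflow):

```latex
\mathrm{NaHCO_3} \;(\text{baking soda}) + \mathrm{CH_3COOH} \;(\text{vinegar})
  \longrightarrow \mathrm{CH_3COONa} + \mathrm{H_2O} + \mathrm{CO_2}\uparrow
```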
Bubbles again? This time, fill a 700 ml pop bottle about half way with carbonated water. With the lid on, give it a quick shake, then open it up. Have a plate or dish ready as well. Physical or chemical? Make note of reasoning on the worksheet. - Was Shiny New Penny a physical or chemical change? Why? - Was Disappearing Sugar a physical or chemical change? Why? - Was Milk Fireworks a physical or chemical change? Why? - Was Bubbles! a physical or chemical change? Why? - Was Bubbles again? a physical or chemical change? Why? - What are the differences between physical and chemical changes? - What signs can we look for to determine whether a change is physical or chemical?
World Mental Health Day is observed on October 10 every year, with the aim of raising awareness of mental health issues, creating positive changes that support people, improving mental health outcomes, and eliminating stigma. Each of us can make an effort to ensure that people dealing with mental health problems can live better lives with dignity.

One of the biggest barriers to seeking treatment is stigma. Stigma is the perception that a certain attribute makes a person unacceptably different from others, leading to prejudice and discrimination against them. Mental health stigma and discrimination prevent people from seeking help, which can delay treatment and impair recovery. Stigma also causes isolation, excluding people from day-to-day activities and making it hard to build new relationships or sustain current ones.

Taking care of yourself includes taking care of your mental and physical well-being. Together these make up your overall well-being and are integral to leading a happy and fulfilling life. Fortunately, many people experience improved mental well-being with the right help. There are always options, there is always hope, and there is always a healthy way to manage your symptoms. World Mental Health Day recognizes the importance of self-care and nurturing mental well-being. Below are a few tips to start each day with a positive perspective:
- Develop healthy physical habits. Healthy eating, physical activity, and regular sleep can improve your physical and mental health.
- Remember your good deeds. Give yourself credit for the good things you do for others each day.
- Forgive yourself. Everyone makes mistakes. Learn from what went wrong but don't dwell on it.
- Spend more time with friends. Surround yourself with positive, healthy people.
- Explore your beliefs about the meaning and purpose of life. Think about how to guide your life by the principles that are important to you.
- Practice healthy thinking.
Build your emotional resilience by adopting positive habits or thoughts. You can do this by practicing gratitude or reframing negative situations in a more positive light. Your Employee Assistance Program (EAP) benefit can help parents and children deal with bullying. If you need assistance, contact your Care Coordinator at 800-245-1150 to discuss your available options.
Breast thermography, or thermal imaging, is a noninvasive and painless test that doctors sometimes use to monitor for early breast changes that could indicate breast cancer. It works by detecting increases in temperature. Thermography does not involve radiation. Instead, it uses an ultra-sensitive camera to produce high-resolution, infrared photographs, or heat images, of the breast. Thermography first appeared in the 1960s, but it has struggled to gain ground as a diagnostic tool for breast cancer due to concerns about poor sensitivity and inaccurate results. The authors of a 2018 study noted that the sensitivity of infrared imaging technology had improved drastically in recent years. They concluded that it may show promise for the future but that, for now, people should only use it alongside other screening methods. Health authorities, including the Food and Drug Administration (FDA), have issued similar recommendations. Read on to find out more about thermography, including what it involves and its benefits and risks. Thermography uses digital infrared imaging to detect subtle changes in the breast by revealing areas of heat and cold. In the body, areas of high or fast blood flow will show on a thermograph as being warmer than other areas. Cancerous tissue typically demands an increased blood supply to sustain its growth. When blood flow increases for this purpose, the skin in that area will become warmer. A tumor will, therefore, appear as a hot spot in thermography images. According to the American College of Clinical Thermology, thermography can detect changes that may indicate various conditions, such as:
- fibrocystic disease
- an infection
- vascular disease
The test cannot confirm that cancer is present. It can only show that there are changes that may need further investigation. For this reason, the FDA do not recommend using thermography without another screening method.
They stress that “thermography is not an effective alternative to mammography and should not be used in place of mammography for breast cancer screening or diagnosis.” Thermography should always take place in a doctor’s office or another healthcare setting. It will involve the following:
- The person will stand about 6–8 feet away from the camera.
- They will have a painless, noninvasive test that does not involve compressing the breast.
- The procedure will last approximately 15 minutes.
The practitioner will look for clear differences between the breasts. For this reason, thermography might not be suitable for a person who has undergone a mastectomy or other breast surgery. The FDA note that other facilities, such as spas and homeopathic clinics, are also carrying out thermography services. The FDA express concern that these providers may be giving “false information that can mislead patients into believing that thermography is an alternative or better option than mammography.” This incorrect information may result in people not obtaining a correct diagnosis in the early stages of breast cancer, when treatment is usually most effective. Anyone who opts for thermography should ask a doctor to recommend a provider and also attend mammogram screening as the doctor recommends.
What thermographs detect
A thermograph will not detect a lump, but it will show changes in body and skin temperature, which may be a sign of increased metabolic activity or blood flow in one particular area. These changes happen as the cancer cells strive to maintain themselves and grow. If the results show something unusual, this may not necessarily be cancer. The cause could be mastitis, a benign tumor, fibrocystic breast disease, or another issue. If the thermography detects any abnormalities, the person should seek further screening, which may include a mammogram.
If a mammogram confirms that a lump is present, the doctor may recommend an ultrasound or MRI scan and a biopsy. Only a biopsy can confirm whether cancer is present. As a screening option for breast cancer, thermography offers the following benefits:
- It is not painful.
- It is not invasive.
- It does not involve radiation.
Thermography itself does not appear to pose any physical risk to a person, but there can be other risks. The authors of a review article noted both that thermography produces a high number of false-positive and false-negative results and that estimates of its sensitivity vary widely. They concluded that, overall, thermography was “not sufficiently sensitive” to use as a diagnostic tool. False-positive results can cause anxiety and unnecessary follow-up procedures. They could occur if there is another issue, such as mastitis. False-negative results can give the impression that breast cancer is not present when it is, which may result in late diagnosis and a lower chance of effective treatment. The FDA echo these concerns. Some organizations that provide thermography may not give a person all of the information that they need, potentially resulting in a false sense of security. They may give the impression that they are monitoring the person’s health when, in fact, they are not making the person aware of the whole picture. Some people say that thermography is better than mammography because it is a “natural” method that avoids exposure to radiation. Mammography screening guidelines try to balance the risk of the small amount of radiation a person will receive with that of finding breast cancer when it is too late to treat it effectively. Consequently, they recommend more frequent screening for people who have a higher risk of breast cancer.
Lack of scientific evidence
The authors of a systematic review concluded that there was not enough evidence to support the use of thermography as a screening method for breast cancer, either alone or in combination with other screening methods. The authors were unable to find enough suitable data to assess the tool effectively. They noted that some studies receive sponsorship from industries supporting the use of thermography, which can lead to biased results. Health authorities do not currently recommend using thermography to replace mammogram screening. If a person undergoes thermography, doctors urge them also to have a mammogram. Mammography remains the “gold standard” for screening for early signs of breast cancer. Although it is not always accurate, more scientific evidence supports mammography than thermography. Breastcancer.org note that researchers are looking into new types of thermography that may, one day, prove reliable. Until then, however, it is best to choose a screening method that has scientific evidence to support its effectiveness.
Some women in our family had breast cancer at an early age, and they did not survive. I am concerned about my daughter, who is 18 years old. I don’t want her to start having mammograms — even if the insurance would cover them — and I was thinking about thermography. What do you suggest?
Anyone with a family history of breast cancer should consider genetic testing to check for mutations in the BRCA gene, which can increase breast cancer susceptibility. The results will allow a doctor to provide more information on options to reduce the risk of breast cancer through surveillance and surgical methods. Mammography is a screening method for women with an average risk of breast cancer.
Due to the lack of scientific evidence to support thermography, experts do not recommend it as a screening method, even for women at average risk.
Christina Chun, MPH
Answers represent the opinions of our medical experts. All content is strictly informational and should not be considered medical advice.
During the dog days of summer, it can get so hot that you might think you could fry an egg on the sidewalk. If you're a budding scientist, you may have even tried it. But have you ever WONDERed if you could really cook food using the Sun? Guess what? You can! And many people around the world do so every day. For example, people who live in rural areas of India, China, and Sudan rely heavily on the Sun because they don't always have electricity in their homes. To cook effectively using the Sun's rays, you need a special piece of cooking equipment called a solar oven or solar cooker. Solar cookers work by focusing sunlight onto a small area. That area is used as the cooking surface, and it can become very hot when the Sun's heat energy is concentrated upon it. Solar cookers have reflective panels on top that are usually made of a shiny metal, like aluminum, silver, or chromium. The panels are built at an angle or are curved so that, when sunshine hits these metal surfaces, the sunlight gets reflected downward. In this way, the panels focus the sunlight in a small area to concentrate the Sun's heat energy. If you've ever walked barefoot on blacktop on a hot summer day, you know that asphalt can feel a lot hotter than the air. That's because of its color. Darker colors absorb light, which converts to heat, so any dark surface sitting out in the Sun will be a lot hotter than a lighter one. Solar cookers use this scientific principle to cook food, too. Underneath the reflective panels, solar cookers usually have a box where the food goes. The walls and floor of the box are painted black, and the cook uses a black dish or pot. The dark color makes the food get hotter than it would in other cookware. Have you ever noticed that your home stays warm even when it's cold outside? That's because of the scientific principle of heat retention. Your home is insulated, and the insulation keeps the heated air inside. Solar cookers rely upon heat retention, too.
The box holding the food has insulated walls and a clear lid that allows concentrated sunlight in but keeps the heat from escaping. These features improve heat retention and make the temperatures inside the box stay high. How high? A solar cooker can reach temperatures as high as 250° Fahrenheit (121° Celsius). Have you ever been told to wash your hands before and after handling food? That's because uncooked food can sometimes carry bacteria that can make us sick. Bacteria like temperatures that are nice and warm — around 140° Fahrenheit (60° Celsius). Cooks use ovens to raise the temperature of food in order to eliminate harmful bacteria. For example, most meats need to reach an internal temperature between 145-165° Fahrenheit (about 63-74° Celsius) in order to be safe to eat. Most people who use solar cookers use them to cook foods that are less likely to contain bacteria, such as beans, rice, soups, and stews. Solar cookers aren't like the microwave or the oven in your kitchen. They don't get as hot and they need a long time to cook food. However, in many places in the world, solar cookers are an important tool for cooking food.
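If you'd like to check those Fahrenheit-to-Celsius pairs yourself, the standard formula is C = (F − 32) × 5/9. Here is a quick sketch (the function name is our own, not from the article):

```python
def fahrenheit_to_celsius(f):
    """Convert a temperature from degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

# Temperatures mentioned above, rounded to the nearest degree:
print(round(fahrenheit_to_celsius(250)))  # solar cooker maximum -> 121
print(round(fahrenheit_to_celsius(140)))  # bacteria-friendly warmth -> 60
print(round(fahrenheit_to_celsius(145)))  # safe meat range, low end -> 63
print(round(fahrenheit_to_celsius(165)))  # safe meat range, high end -> 74
```

Running this confirms the conversions quoted in the text: 250°F is about 121°C, and the 145-165°F safe range for meat works out to roughly 63-74°C.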
Generally, there are two types of relative clauses: restrictive and non-restrictive relative clauses. Restrictive relative clauses are also called defining relative clauses. Non-restrictive relative clauses are also called non-defining relative clauses. In both restrictive and non-restrictive relative clauses, the relative pronoun can act as the subject or object. It can also act as a possessive pronoun (e.g. whose).
Relative pronouns in restrictive relative clauses
Relative pronouns used to introduce restrictive relative clauses are not separated from the main clause by a comma. A restrictive relative clause adds essential information which is crucial for understanding the meaning of the sentence. If the relative clause is removed from the sentence, the sentence will have a different meaning. Examples of restrictive relative clauses are given below.
- I like people who are honest about their intentions.
Here the relative clause ‘who are honest about their intentions’ adds essential information. Consider removing the relative clause from the sentence. Now we have the simple sentence I like people. Although this sentence still makes complete sense, its meaning is different from that of the original sentence.
- He who works hard will succeed.
Here again the relative clause adds essential information. He will succeed does not mean the same as He who works hard will succeed. More examples of restrictive relative clauses are given below.
- This is the house that my grandfather built.
- This is the boy who won the first prize.
- The boy who broke the window was punished.
- The girl whose brother serves in the army is my classmate.
- The dishwasher that I bought for my wife was really expensive.
- The woman whom you met on the train is my colleague.
A cochlear implant is a small electronic device that helps someone hear. Most of the people who receive one are deaf or have trouble hearing. The device has a piece that goes inside the ear and a piece that goes outside the ear. The part that is outside fits snugly behind the ear, and the part that is inside is surgically placed under the skin. The implant consists of a microphone, a speech processor, a transmitter and receiver, and an electrode array. While the cochlear implant does not give a deaf person their full hearing back, it allows them to make out different sounds and helps them understand better when someone is talking to them. Unlike a hearing aid, this implant works past the damaged part of the ear and goes straight to stimulating the auditory nerve. The signals from the auditory nerve are sent to the brain and are recognized by the brain as sound. It takes some getting used to because it is not like normal hearing. Both children and adults who are deaf can get this implant. For a young child, it is better to get it as early as possible, because it will expose them to sounds during a time when they are developing their speech and language skills. Studies have shown that it is more effective for a deaf child to get a cochlear implant before 18 months of age. Adults who lost their hearing later in life can relearn the sounds that they remember by relating them to the sounds that come through the implant. The cochlear implant has allowed science to essentially reverse a defect that can come with age, or one that could otherwise cause someone to live a life in silence.
MLA style is used to cite sources within English, international languages, theater, cultural studies, and other humanities. It is a set of rules for publications, including research papers. In MLA style, you must cite sources that you have paraphrased, quoted or otherwise used to write your research paper. Cite your sources in two places: in the body of your paper (in-text citations) and in the list of works cited at the end. When deciding how to cite your source, start by consulting the list of core elements. If you are a high school or college student, there will be a time when you find yourself in a position where you need to cite a research paper or dissertation, or create an annotated bibliography. In this article, writers of our paper writing service will teach you how to cite a research paper using MLA format correctly. MLA is the formatting style of the Modern Language Association, used in areas such as English studies, comparative literature, foreign language and literature, and cultural studies. This academic style guide is used extensively in the United States, Canada, and other countries. Has another task from a professor left you devastated and lost? Of course it has, because it's something that no one likes to do, especially a young student who obviously has dozens of other important things to do. What task are we talking about? A research paper is a piece of academic writing that provides analysis, interpretation, and argument based on in-depth independent research. Research papers are similar to academic essays, but they are usually longer and more detailed assignments, designed to assess not only your writing skills but also your skills in scholarly research. Writing a research paper requires you to demonstrate a strong knowledge of your topic, engage with a variety of sources, and make an original contribution to the debate.
This step-by-step guide takes you through the entire writing process, from understanding your assignment to proofreading your final draft.
Table of contents
- Understand the assignment
- Choose a research paper topic
- Conduct preliminary research
- Develop a thesis statement
- Create a research paper outline
- Write a first draft of the research paper
- Write the introduction
- Write a compelling body of text
- Write the conclusion
- The second draft
- The revision process
- Research paper checklist
Activities for Practice: Food on the Shelf (excerpted and adapted from Tutoring ESL: A Handbook for Volunteers) Purpose: Practice nouns and prepositions. Materials: Actual refrigerator or model of refrigerator, and/or actual or model of food storage cabinet, actual food or play food. If you decide to use models, you can acquire them at a thrift store. Or, during an earlier lesson, you and students can work together to make the models of the refrigerator and/or food storage cabinet and the models of the food items. You can give the directions and ask the questions in this activity, or two students can do it independently. One person directs the other in placing the “food” on the “shelves,” using directional vocabulary. Example: Student 1: Put the milk on the top shelf. Put the lettuce on the second-from-the-bottom shelf, next to the bread. Then, the students can practice questions and answers. Example: Student 1: Where is the milk? Student 2: It’s on the top shelf. Start out with a few items and simple directions, then gradually increase the complexity. If you use models, be sure that students understand the relationship between real shelves and the symbolic ones. If you decide to use models, you could still begin by looking at a real refrigerator and doing some questions and answers about the food that’s in there. From Tutoring ESL: A Handbook for Volunteers. Reproduced with permission from the publisher, Tacoma Community House Training Project, Tacoma, WA 98405. Excerpted and adapted, with permission, by
New Technology in the Classroom
New technology in the classroom not only provides the teacher with a wealth of supportive tools but also provides interest and variety for the student, making learning more interesting and relevant to today's society. The World Wide Web has made communication readily available and, for the English learner, offers a wealth of opportunities to supplement learning. A key asset of new technology in the classroom is the interactive whiteboard, a large touch-sensitive board which can be connected to a digital projector and a computer; it displays images from the computer screen onto the board and allows for more varied, creative and seamless use of teaching materials. It provides electronically all the familiar features of a traditional classroom blackboard or roller whiteboard, but unlike a traditional whiteboard, information added to an interactive whiteboard can be saved for future use. Furthermore, it is possible to highlight and annotate key points using the marker pens. Anything on the screen can be saved as a ´snapshot´, making it easy to review and summarise key teaching points, and allowing teachers not only to prepare materials beforehand but also to save and/or print features of the whiteboard and share or re-use materials. Whereas the number of pupils that can practicably be accommodated around a standard computer set-up is limited, whole classes may comfortably participate in whiteboard presentations, which, because of the ability to interact with materials on the board, encourage learner participation. Lessons can be enhanced by easily integrating video, animation, graphics, text and audio, including CD-ROMs, websites, DVDs, VHS tapes or television, and both teacher and student are able to draw or write on the board using different coloured pens.
The use of computers by the students is a further enhancement to English language learning, though accessibility is a major problem, as ideally each student should have access to a machine of their own; their use should also be controlled to avoid losing the impact or effectiveness of the technology. That aside, the opportunities computers can provide are boundless. Word-processing facilities allow students the opportunity, in group work, in pairs or individually, to present text in an attractive way, and self-correction of spelling and grammar errors using the built-in facilities gives the student confidence to experiment with the language without 'losing face'. Using a word processor can help motivate students to write, and printing their work out and sticking it on the classroom wall will motivate students further, especially if the student's photo is pasted onto the work, as well as recording progress. It makes the classroom look better and gets students involved with their own learning. Help with writing is available through features in both Microsoft Word and Publisher, and tasks could be to make calendars, a CV, a business card, a menu, a poster (No Smoking, Only Speak English, Switch Off Your Mobile etc.) or perhaps a class newsletter, ideal for developing group skills. Interactive CD-ROMs can also be utilised, in groups, in pairs or by the individual, to extend and supplement learning. These structured discs give immediate feedback, are colourful and fun to use, are produced to follow recognised curricula (e.g. Folens Key Stage 3 and 4) and reinforce learning points. It should also be noted that many course books have accompanying CD-ROMs whose use again supplements the taught text.
There are hundreds of research tasks that students can undertake utilising the Internet, such as finding out the local or international weather, finding a business or person in the yellow pages, finding a job, or looking at maps, 'Yahoo' (www.yahoo.com) and 'Google' (www.google.com) being but two Internet search engines. Students could be asked to write a biography about a famous person they admire, for which they will have to find out information about them (www.biography.com), or they could write or talk about a country they would like to visit or their own country. Email is another tool that can be used to enhance and develop English communication skills. With access to an email account (e.g. www.yahoo.com or www.hotmail.com), students can sign up for information about jobs or talk to other students of English around the world using ESL online talk communities (www.rong-chang.com). They can send greeting cards (www.yahoo.americangreetings.com) and can also write to you or other teachers about anything they want to if you give them your email address. Simple short emails in response (Do they like the city they live in? How many brothers and sisters do they have?), using a large font so they can be read easily, will encourage the student to experiment more. The Internet also provides a wealth of resources for the teacher, the BBC in particular having good activities on its EFL website (www.esl-lab.com) and also some listening tasks on its basic skills site (www.bbc.co.uk/skillswise/words/listening). TV advertisements can be downloaded at www.absolutelyandy.com/tvadverts, and there is a plethora of interactive learner sites available, a good starting point being http://iteslj.org/, whilst Randall's Cyber Lab (www.esl-lab.com) has good free activity resources.
It is important, however, to bear in mind that IT is not a replacement for face-to-face teaching; some students do get bored quickly if they are directed to games for an hour, and as such the use of the Internet in the classroom must have a beginning, a middle and an end.
Many researchers and science writers seem to think so. Just recently, we were told that recent research shows that differences between human neurons and those of other species “may contribute to the enhanced computing power of the human brain”: Using hard-to-obtain samples of human brain tissue, MIT neuroscientists have now discovered that human dendrites have different electrical properties from those of other species. Their studies reveal that electrical signals weaken more as they flow along human dendrites, resulting in a higher degree of electrical compartmentalization, meaning that small sections of dendrites can behave independently from the rest of the neuron… “It’s not just that humans are smart because we have more neurons and a larger cortex. From the bottom up, neurons behave differently,” says Mark Harnett, the Fred and Carole Middleton Career Development Assistant Professor of Brain and Cognitive Sciences. “In human neurons, there is more electrical compartmentalization, and that allows these units to be a little bit more independent, potentially leading to increased computational capabilities of single neurons.” “Electrical properties of dendrites help explain our brain’s unique computing power” at ScienceDaily It gets better: From the same research we learn, “Each of our brain cells could work like a mini-computer, according to the first recording of electrical activity in human cells at a super-fine level of detail: Compared with mice, the dendrites of human neurons turn out to have fewer ion channels, molecules studded in the cell’s outer membrane that let electricity flow along the dendrite. While this might sound bad, it could give greater computing powers to each brain cell. Imagine a mouse neuron: if a signal starts down one dendrite, there are so many ion channels to conduct electricity that the signal will probably continue into the main trunk of the neuron. 
In a human neuron, by contrast, it’s less certain that the signal will conduct into the main trunk: whether it does will probably depend on activity in other dendrites, says Harnett. Clare Wilson, “Your brain is like 100 billion mini-computers all working together” at New Scientist The researchers are suggesting, it seems, that we are smarter than mice because our neurons have more independence. One wonders, would a mouse whose neurons had more independence be smarter, or just dysfunctional? Seeing the brain as a computer doesn’t tell us as much as we might think. When human beings build computers, we design them in a way that we can understand and use. So we think our brains must be like that too. Sure enough, in the vast complexity of our brains, we can surely find some elements that remind us of a computer. Others won’t. And the mystery of human consciousness floats above it all, untouched… Faced with the fact that human intelligence and consciousness is unique in our world, in recent years, researchers have come up with a remarkable variety of ad hoc explanations, whole or partial, for how it came to exist. The explanations usually center on claims about the evolution of the human brain. “The brain is a computer” is especially popular just now but many others are on offer. Some say we evolved large brains alongside small guts, but another research team found no such correlation. Alternatively, fluid societies (relative to chimps) explains it. And, according to some, mental illness helped. Chimpanzees’ improved skills throwing excrement are also said to provide hints about human brain development. (The ability to throw projectiles at very high speeds is apparently unique to humans.) Our ancestors had to grow bigger brains anyway, we are told, to make axes and hunt something besides elephants. Collective intelligence (“ideas having sex”), whatever that means, has been really important to human evolution as well. 
Denyse O’Leary, “Human Origins: The War of Trivial Explanations” at Evolution News & Science Today Amid all the speculation, we’re not even sure how important brain size is. We learn variously that social challenges decreased our brain size, that large brains helped Neanderthals to go extinct, and that Homo naledi’s small but sophisticated brain challenges belief in “an inevitable march towards bigger, more complex brains.” There doesn’t seem to be a thread here. One constant theme does run through all these stories, however: Researchers seek a way that the human brain evolved immaterial consciousness as a mere fluke of evolution. But the sheer number of flukey explanations offered makes it seem increasingly unlikely that the development was an accident at all. One might wonder, in that case, why researchers who are looking for a purely natural, material cause of human intelligence and consciousness keep adding to the stack. One problem is that more daring approaches to the problem are even less likely. Philosopher Daniel Dennett insists that consciousness is an evolved illusion but, if so, science is grounded in an illusion. Another philosopher, Bernardo Kastrup holds that human consciousness is simply a dissociated fragment of the universal consciousness of all life forms. Panpsychist philosophers like Philip Goff argue that all particles are conscious: “Panpsychism is crazy, but it’s also most probably true.” How can it probably be true? Because all these philosophers deny that human beings are unique and that human consciousness is immaterial. Dennett and Kastrup seek an explanation for human consciousness that would apply equally to a gorilla and an amoeba. Goff applies it to a coffee mug as well, for a fully naturalist universe. In that scheme, a computer would, of course, be conscious, just like the coffee mug, at least to the extent that it is an assemblage of particles that are held to be conscious. 
So, in the end, we may be able to use a computer image to explain some aspects of how our brains work, but the idea will not really bear much more weight than that. It will, however, give rise to many pop science stories, easy to write and read but not very informative in the long run. See also: Reconciling mind with materialism twenty-five years on Do big brains matter to human intelligence? The brain is not a meat computer (Michael Egnor)
There are several types of snowflakes, but they all have 6 sides or points when they are formed. The question is, why? Well, it comes down to chemistry and their molecular structure. Snowflakes form from molecules of water and, as we all learned in chemistry class, the formula of water is H2O... 2 atoms of hydrogen and 1 atom of oxygen. When water freezes, the arrangement of the bonds (hydrogen bonds) between the molecules takes a hexagonal shape, the same shape as most snowflakes. Now the question arises, what makes snowflakes different from regular ice, from a refrigerator for example. The main thing is, the ice that is formed on the ground or in your fridge is not entirely pure. However, in a cloud, when snowflakes form, they need to start out as a supercooled water droplet. This is a water droplet that is 100% pure, with not even a speck of dust or dirt in it. These droplets are the most important ingredient in forming dendrites and precipitation in general. To explain, regular water (tap water, ground water, etc.) freezes at 32 degrees F, or 0°C. But supercooled water will not freeze until it is -40°F or colder! This is critical to the growth of snowflakes. Dendrites typically are formed in the Dendritic Growth Zone (DGZ), which is around -10 to -20°C, or 14 to -4°F. If there are no supercooled water droplets, then snow growth will not take place. However, if there are regular water droplets, they will turn to ice and the cloud will be "glaciated," which means it turns into ice crystals. So, how do the individual snowflakes form? Well, when there is a supercooled water droplet within the DGZ, all you need is a trigger object, or a "condensation nucleus". Basically, it is dust, dirt, or any tiny particle for the droplet to "catch" so it can freeze and crystallize. Once the droplet connects with a condensation nucleus, it is no longer a pure droplet and instantly freezes, so the creation of a snowflake has now begun.
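The Celsius figures for the Dendritic Growth Zone can be double-checked with the conversion formula F = C × 9/5 + 32. A quick sketch (the function name is our own):

```python
def celsius_to_fahrenheit(c):
    """Convert a temperature from degrees Celsius to degrees Fahrenheit."""
    return c * 9 / 5 + 32

# Dendritic Growth Zone bounds:
print(celsius_to_fahrenheit(-10))  # -> 14.0
print(celsius_to_fahrenheit(-20))  # -> -4.0
# The point where even supercooled water freezes (the scales cross here):
print(celsius_to_fahrenheit(-40))  # -> -40.0
```

So the DGZ of -10 to -20°C corresponds to 14 to -4°F, and -40 is the one temperature where the two scales agree.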
Where a flake is in the cloud, the temperature, and how much lift there is in the atmosphere (along with a few other things) will determine the size and what type of snowflake it ultimately becomes. Snowflakes can come in 35 different shapes as well as an infinite number of different sizes. A few of these shapes are: Stellar Dendrite, Needle, and Prism. These develop at different temperatures in the atmosphere and have their own distinctive atmospheric conditions which help them grow. Without getting into too much detail, the snowflake that everyone knows and loves is the "Stellar Dendrite". This is the classic 6-branched snowflake with many other appendages coming out of each branch. This photo is from WeatherWorks Meteorologist Simon Wachholz. With over 50 snowflakes in this picture, you can see that none of them are the same and they also come in many different shapes and sizes. One thing to note about this image is that these snowflakes are heavily rimed. This occurs when a snowflake passes through a supercooled cloud and accretes tiny droplets, making the flakes look fuzzy. So, if there are only 35 different shapes of snowflakes, does that mean it is a myth that all snowflakes are different? Not necessarily. Snowflakes can form in the same atmospheric conditions, then continue to grow and aggregate. While each individual snowflake has 6 sides, they can continue growing different branches from each side. Also, they can merge with each other and create "dendritic clumps". We can see these clumpy snowflakes when it's a "wetter" snow and temperatures are closer to freezing. Now, it is nearly impossible to say that no snowflake in history has ever been recreated. There is just no simple way to know, unless a person is lucky enough to examine every single snowflake to ever fall to the ground. But that, of course, will never happen. Snowflakes are unique in their own way and most of them are different from each other. 
The odds that you will see 2 exactly identical snowflakes are nearly 0%. With the trillions upon trillions that fall during every snowstorm, good luck finding 2 of the same snowflakes: they will all have a different sequence to them, different shaped branches, or even different shapes altogether. *But* if you want to have some fun and experience the flakes for yourself, the next time it snows, freeze a black towel or cloth, go outside, and catch them! Look at the beauty of Mother Nature; you never know what you will find.
Your Firebox uses slash notation, also known as CIDR (Classless Inter-Domain Routing) notation, for many purposes, such as policy configuration. You use slash notation differently for IPv4 and IPv6 addresses.

Slash notation is a compact way to show or write an IPv4 subnet mask. When you use slash notation, you write the IP address, a forward slash (/), and the subnet mask number. To find the subnet mask number:
- Convert the decimal representation of the subnet mask to a binary representation.
- Count each “1” in the subnet mask. The total is the subnet mask number.

For example, to write the IPv4 address 192.168.42.23 with a subnet mask of 255.255.255.0 in slash notation:
- Convert the subnet mask to binary. In this example, the binary representation of 255.255.255.0 is 11111111.11111111.11111111.00000000.
- Count each 1 in the subnet mask. In this example, there are twenty-four (24).
- Write the original IP address, a forward slash (/), and then the number from Step 2. The result is 192.168.42.23/24.

This table shows common network masks and their equivalents in slash notation.

|Network Mask||Slash Equivalent|
|255.0.0.0||/8|
|255.255.0.0||/16|
|255.255.255.0||/24|
|255.255.255.128||/25|
|255.255.255.192||/26|
|255.255.255.224||/27|
|255.255.255.240||/28|
|255.255.255.248||/29|
|255.255.255.252||/30|

In IPv6, slash notation is used to represent the network identifier prefix for an IPv6 network. The prefix is expressed as a slash (/) followed by the prefix size, which is a decimal number between 1 and 128. The CIDR notation works exactly the same as with IPv4: if you have a /48, the first 48 bits of the address are the prefix.

This table shows common IPv6 network prefixes and the number of IPv6 subnets and IPv6 addresses they support.

|Prefix||Number of Subnets|
|/64||1 IPv6 subnet with up to 18,446,744,073,709,551,616 IPv6 host addresses|
|/56||256 /64 subnets|
|/48||65,536 /64 subnets|

A network site that is assigned a /48 prefix can use prefixes in the range /49 to /64 to define valid subnets.
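The conversion steps above can be sketched in a few lines of Python. This is a minimal illustration, not part of the Firebox documentation; the function names are mine. It also shows the IPv6 subnet arithmetic from the second table: each bit added to the prefix doubles the number of subnets, so a /48 contains 2^(64-48) = 65,536 /64 subnets.

```python
def mask_to_prefix(mask: str) -> int:
    """Convert a dotted-decimal IPv4 subnet mask to its slash-notation number.

    Step 1: convert each octet to binary; Step 2: count the 1 bits.
    """
    return sum(bin(int(octet)).count("1") for octet in mask.split("."))

def to_slash_notation(ip: str, mask: str) -> str:
    """Write the IP address, a forward slash, and the subnet mask number."""
    return f"{ip}/{mask_to_prefix(mask)}"

def subnets_in_prefix(prefix: int, subnet_size: int = 64) -> int:
    """Number of subnets of the given size inside an IPv6 prefix."""
    return 2 ** (subnet_size - prefix)

print(to_slash_notation("192.168.42.23", "255.255.255.0"))  # 192.168.42.23/24
print(subnets_in_prefix(48))  # 65536
```

In practice, Python's standard-library `ipaddress` module does this validation and conversion for you (for example, `ipaddress.ip_network("192.168.42.0/255.255.255.0")` yields a network with `prefixlen` 24); the sketch above just makes the bit-counting explicit.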
Probably due to the mechanisation of harvesting cereal crops, the tradition of making straw shapes and figures (‘corn dollies’) largely died out by the early years of the 20th century in Britain. From about the 1960s, the craft was revived, particularly for tourist souvenirs. The term ‘corn’ referred to cereal crops such as wheat, though nowadays corn can also mean ‘corn on the cob’ or maize. Many of the traditional and revived ‘corn dollies’ bore no resemblance to dolls or figures.

THE LAST STRAW STANDING
Before the 20th century, local harvest customs were widespread throughout Britain, and although they differed from place to place, various elements were common to most. When the final patch of corn was cut, some straw with the ears of grain still attached (as in the picture above) was saved. Usually by plaiting or weaving, a figure or shape was made from this straw. It was then hung up in a building, often a farmhouse, barn or church, and was kept until the following year. Gathering the last stalks of straw was straightforward when cereal crops were cut by a scythe or sickle, but more difficult with a horse-drawn harvesting machine or huge modern combine harvester, so it is easy to see why such traditions faded away.

KILLING THE KING
Sir James George Frazer (1854–1941) was an anthropologist best known for his work called The Golden Bough. The third edition (the largest and most complete) was published in 12 volumes and contained a mass of data from archaeological, anthropological and folklore sources. Frazer used this information to support his contention that all societies across the world had experienced similar stages of magical and/or religious beliefs during their evolution. Such cross-cultural studies were much in vogue during the late 19th century, but tended to ignore or play down the historical changes within different societies. His research is therefore often overlooked and his conclusions treated as invalid. 
One of his main ideas was that the annual cycle of vegetational growth and decay was represented as a sacred king who had to be killed and replaced when he grew old, because his strength was linked to the life-force of crops. Some customs of the cutting of the last standing corn could certainly be interpreted in this way. In 1938 one ritual observed in Lancashire was reported in the Yorkshire Evening Post newspaper as being one of the few remaining harvest customs: ‘The reapers leave an uncut patch in the centre of the last cornfield to be cut … Then a sickle is thrown at the patch in an endeavour to cut it in this way. The corn is bound into an image, which is wrapped in white linen and tied with coloured ribbons and called a “doll”. This “doll” is carried into the farmer’s house where it is kept for luck till the following year.’ These images were interpreted by Frazer as representations of a ‘corn spirit’, but in the 19th and 20th centuries, the reason for keeping a corn dolly was generally said to be ‘for luck’. Any deeper significance was long forgotten. A ‘doll’ or ‘dolly’ is usually a child’s toy that represents a baby, young child or adolescent, but corn dollies were not always doll-like or small, and some were so large that they were carried on top of a pole as an emblem or used to adorn the last load of the harvest. Even if Frazer’s theories have any validity, these corn dollies and their associated customs undoubtedly had more than one meaning over time. Frazer discovered that similar harvest customs took place in many other countries, not just Britain. He attempted to reconcile his theories with ancient Greek myths, and The Golden Bough actually referred to one myth, but with the Greek goddesses Persephone and Demeter, Frazer was faced with a problem. Persephone was the daughter of Demeter and Zeus, who was chief of all the gods. 
When Persephone was snatched by Hades, the god of the underworld, to be his queen, her mother Demeter searched the world in vain. Eventually Zeus released her, but not fully. Bizarrely, because Persephone had eaten some pomegranate seeds in the underworld, she could spend only eight (in some versions six) months of the year above ground. The remaining time had to be spent with Hades. This myth was understood as an allegory of the cycle of seed corn being planted in the soil and its emergence, growth and harvest, but it contradicted Frazer’s idea of a male corn spirit that needed to be ritually killed each year, especially as Demeter was also the central goddess of the Eleusinian Mysteries. Frazer’s convoluted explanation was less than convincing, but he was scrupulously honest when he ended his argument with the words: ‘It must not, however, be forgotten that this proposed explanation of such pairs of deities as Demeter and Persephone or Isis and Osiris is purely conjectural, and is only given for what it is worth.’ Although some of Frazer’s arguments are flimsy, his research remains an invaluable resource. FROM EGYPT TO SANTORINI Corn dollies are not especially durable, nor were they common, because only one figure was made each year in a small community. No really old ones are known, but there are possible representations in ancient art, such as in wall paintings within the Tomb of Nakht at Luxor in Egypt (Tomb 38). Another possible representation of a corn dolly has been noted on a pottery jar from the excavations of Akrotiri on the Aegean Greek island of Santorini, also known as Thera or Thira. This island, 120 miles south-east of mainland Greece, is the fragmentary remains of a volcano that blew up about 3,600 years ago. It was one of the largest volcanic eruptions in recorded history, leaving behind deep deposits of ash. 
It possibly triggered the collapse of the Minoan civilisation through the devastating tsunami and may well have given rise to the legend of the submergence of Atlantis. The island today has three main parts: the eastern half of the rim of the volcanic crater (the mainland of Santorini), a small central island formed by the remains of the volcanic vent, which is still active, and a fragment of the western rim. The importance for Greek archaeology is that the fall of ash covered ancient settlements, and a Bronze Age settlement was discovered beneath the ash in the south of the mainland. Large-scale excavations began in 1967, taking the name of Akrotiri, the nearest modern village. Extensive remains of buildings and their contents have been uncovered, leading to comparisons with Pompeii and Herculaneum in Italy, but it seems as if the inhabitants escaped before everything was engulfed. A large roof structure was built to protect the uncovered ruins, but it collapsed in 2005, and the site was closed for many years. The finds continued to be studied, and in 2009 Anaya Sarpaki published a paper called ‘Harvest Rites and Corn Dollies in the Bronze Age Aegean’ (pages 59–67 in Hesperia Supplements vol. 42, published by the American School of Classical Studies at Athens). Sarpaki draws attention to two designs on a pot that portray an object made of straw in the form of a circular wreath with a central cross and projecting ears of grain around the periphery. It looks something like a modern catherine-wheel firework. Sarpaki interprets these designs as representations of corn dollies and evidence of ancient beliefs related to the annual cycle of agriculture. If correct, corn dollies have a very long history indeed.
For the next few weeks, I’d like to explore some key points of child development from three years of age to eighteen. We must know what we are dealing with in order to deal with it effectively. The same goes for teaching. We would never walk into a room of kindergarten students expecting them to do algebra, or hand a class of high schoolers the Dolch pre-primer list and have them read it expecting growth in literacy. Unfortunately, with all of the paperwork, curriculum, district evaluation procedures, standardized testing and new Common Core implementation, very few teachers have time to brush up on their child development. Child development is a process every child goes through. This process involves learning and mastering skills like sitting, walking, talking, skipping and tying shoes. Most children learn these skills, called developmental milestones, during predictable time periods. Milestones develop in a sequential fashion. This means that a child will need to develop some skills before he or she can develop other skills. For example, children must first learn to crawl and to pull up to a standing position before they are able to walk. Each milestone that a child acquires builds on the last milestone developed. There are five main areas of development in which children develop skills:
- Cognitive Development: This is the child’s ability to learn and solve problems.
- Social and Emotional Development: This is the child’s ability to interact with others, which includes being able to help themselves and exercise self-control.
- Speech and Language Development: This is the child’s ability to both understand and use language.
- Fine Motor Skill Development: This is the child’s ability to use small muscles, specifically the hands and fingers, to pick up small objects, hold a spoon, turn pages in a book, or use a crayon to draw.
- Gross Motor Skill Development: This is the child’s ability to use large muscles. 
Through extensive research, we now know that neurons can continue to make connections into adulthood. However, the fact remains that the brain grows very rapidly, with billions of neurological connections being made during the first three years of life, so it is very important that children get adequate exposure early on to the five areas previously listed. Although the digital age has expanded the abilities and knowledge of young children, it should never act as a replacement for providing the exposure children need in order to reach these milestones. Each child is an individual and may meet developmental milestones a little earlier or later than his peers. However, there are definitely blocks of time when most children will meet a milestone. And developmental milestones don’t just end once kids are six or seven. All five areas continue to develop up to the age of 21 for most children, especially boys. Although gross motor, fine motor and speech and language development will have reached a plateau by then, cognitive and social development will continue to snowball. If we go into a classroom completely unprepared for whom we are teaching, it will be very difficult to see progress, and there will be tremendous frustration for the students and for us. Our expectations need to be high, but not higher than what the child is developmentally able to give us. In the next few weeks, I’d like to provide checklists in each area for each age of development. They will by no means be an end in themselves, but more a springboard for teachers to use in order to evaluate and work from. We must also remember that children are individuals and will not develop in the five areas at the same rate. This is where the importance of differentiated classrooms comes into play. All classrooms are differentiated by definition, meaning that not every student is in the same place as the others. 
And even though it’s so very difficult in today’s world of education to find any extra time to evaluate outside of the box, let alone teach all over the board, if we do our homework beforehand it becomes easier to identify what we are dealing with in our classrooms.
Types of Learning Disabilities
Below you’ll find details about different types of learning disabilities we see here at MindWare Academy. If you feel your child exhibits symptoms or signs you see here, please get in touch. We can help your child thrive.

Dyslexia is defined as an inherited, neurologically-based condition, varying in degrees of severity, that makes it extremely difficult to read, write and spell in your native language, despite at least average intelligence. This definition is full of important information that must be looked at piece by piece if we are to understand dyslexia and how it affects both children and adults. First of all, dyslexia is inherited. It is in fact the most heritable of the reading disabilities, affecting 1 in 5 people. It is carried on up to three chromosomes. The main marker is chromosome #6, but it can also be carried on #2 and #15. Chromosome #6 is the one responsible for phonemic awareness, the number one factor involved in dyslexia. Dyslexia is neurologically based. You are born dyslexic and will be dyslexic your entire life. Although you can develop coping strategies that help, dyslexia does not go away. Dyslexia also affects more than reading. Dyslexics have difficulty with many tasks including:
- Language processing difficulties: both receptive and expressive
- Rote memorization
- Time Concepts and Management
- Organization of Physical Space
- Mechanics of Math
- May have ADHD-like symptoms

About Autism Spectrum Disorder (ASD)
Autism Spectrum Disorder (ASD) is a complex developmental condition that often leads to challenges in social interaction, speech and nonverbal communication. At times, individuals on the Spectrum may have restricted or repetitive behaviors. It is important to note that each individual with ASD is unique and the severity of symptoms will vary from person to person. Autism Spectrum Disorder may impact a person’s ability to navigate the complexities in our society. 
While individuals with ASD may exhibit a range of symptoms to different degrees, a common characteristic is a significant impairment in social interaction. Although many children and adults with ASD have average to above average intelligence, some may find it difficult to function well in school, make and maintain friendships, or find stable employment. Individuals on the Spectrum are at a higher risk for developing anxiety, depression, and poor self-esteem, often resulting from bullying and social isolation. Early intervention and support can make a significant and positive difference in both present and later life successes. Many individuals on the Spectrum may:
- Not pick up on social cues and may lack inborn social skills
- Dislike any changes in routines
- Appear to lack empathy or struggle to understand others’ perspectives
- Be unable to recognize subtle differences in speech tone, pitch, and accent that alter the meaning of others’ speech
- Avoid eye contact or stare at others
- Have unusual facial expressions or postures
- Be preoccupied with only one or a few interests, which he or she may be very knowledgeable about
- Talk a lot, usually about a favorite subject (one-sided conversations are common and internal thoughts are often verbalized)
- Have delayed motor development
- Have heightened sensitivity and become overstimulated by loud noises, lights, or strong tastes or textures
- Need support with emotional regulation

About Nonverbal Learning Disabilities
The very name is misleading. Despite the name, these children have superior verbal skills. Their difficulties lie in interpreting nonverbal cues and the subtleties of speech such as facial expression, body language, inferences and sarcasm. On a WISC there will be a marked discrepancy between the verbal score and the performance score. 
Children with a nonverbal learning disability often struggle with writing and note taking, social skills, and some aspects of math such as geometry and pattern replication, and often have poor coordination, difficulties with fine motor skills, and difficulty adjusting to new situations and making transitions. Unlike language-based learning disabilities that can be detected in the early years, most children with NLD are not diagnosed until Grade 3 or later. Another difference is that NLD gets worse with age, as more of our language becomes nonverbal and more emphasis is placed on social skills.

Social difficulties consistent with NLD:
- Difficulty making and keeping friends.
- Inappropriate social behaviours that are seen as “weird”.
- Unsuitable conversation.
- Lack of understanding of personal space, boundary and privacy issues.
- Difficulty maintaining social conversation.
- Fixation on certain topics or interests that are not “normal” for their age.
- Often humour is lost on them as they interpret language literally.
- Sarcasm and threats are lost on them.
- Difficulty seeing someone else’s perspective, which is often seen as a lack of empathy.
- Naively trusting of others.
- Does not embrace the concept of dishonesty.
- Has trouble recognizing lying and deception in other children.
- Sees everything in black and white – true and false.
- Needs a strict routine and has difficulty with change.

About Gifted L.D.
Many children at our school have a dual diagnosis of gifted L.D. They are extremely bright but have deficits in other areas. Our small class sizes and experienced teachers allow us to individualize the program so that this group is challenged in the areas they excel at but feels supported in areas where they struggle. It is not enough to just feed the passion; we must close the gap. Likewise, we cannot be so focused on the deficit that this group becomes bored and begins to hate learning. 
Also, it is important to know that gifted kids do not need more work – they need different work. They need enrichment activities that allow them to explore their interests, not more questions for homework!
The “necklaces” are tiny: beads of animal teeth, shells, and ivory no more than a centimeter long. But they provoked an outsized debate that has raged for decades. Found in the Grotte du Renne cave at Arcy-sur-Cure in central France, they accompanied delicate bone tools and were found in the same layers as fossils from Neandertals—our archaic cousins. Some archaeologists credited the artifacts—the so-called Châtelperronian culture—to Neandertals. But others argued that Neandertals were incapable of the kind of symbolic expression reflected in the jewelry and insisted that modern humans must have been the creators. Now, a study uses a new method that relies on ancient proteins to identify and directly date Neandertal bone fragments from Grotte du Renne and finds that the connection between the archaic humans and the artifacts is real. Ross MacPhee, a paleontologist at the American Museum of Natural History in New York City, who has worked with ancient proteins in other studies, calls it “a landmark study” in the burgeoning field of paleoproteomics. And others say it shores up the picture of Neandertals as smart, symbolic humans. Unearthed between 1949 and 1963, the controversial artifacts were made during a transitional time, when modern humans were sweeping across Europe and the Neandertals who had lived there for hundreds of thousands of years were dying out. Although the artifacts were reportedly from the same layer as Neandertal fossils, many researchers suspected that artifacts and bones from different layers got mixed up in the investigation, as dating expert Thomas Higham of the University of Oxford in the United Kingdom suggested in 2010. Matthew Collins, a bioarchaeologist at the University of York in the United Kingdom, was an early pioneer in the study of ancient proteins, and he decided to turn the new method on some of the unidentified bones associated with the Grotte du Renne cultural artifacts. 
But the fragments were so small that he couldn’t even tell what species they came from. DNA, increasingly used to identify fossils, was scarce in the fragments, so Collins and his colleagues turned to proteins. When the protein analysis came back, it left little room for doubt: The bone was human. But was it an archaic human, like a Neandertal, or a modern human? Which species was really associated with the artifacts? Had the earlier discovery of Neandertal fossils in the Châtelperronian layer been an illusion based on sloppy digging and compromised evidence? To answer that question, Collins and his team compared the chemical composition of the collagen in the fragments with the collagen produced by modern and archaic humans. Modern human collagen contains high amounts of an amino acid called aspartic acid, but the ancient collagen was once rich in a different amino acid, asparagine—and previously sequenced Neandertal DNA includes a collagen-producing gene that likely resulted in an asparagine-rich version. To double-check their finding, they sequenced the fragments’ mitochondrial DNA as well, finding that the bones came from individuals with Neandertal ancestry on their mothers’ side. “[The bone fragments] weren’t useful 10 years ago, but now we realize they’re a great molecular record,” Collins says. In addition to being archaic, the collagen was a form found only in bone that is still growing. The bone fragment also contained a high proportion of certain nitrogen isotopes, which is associated with children who are breast-feeding. Those two lines of evidence led the researchers to conclude that at least some of the bone fragments likely came from the skull of a Neandertal infant. Direct radiocarbon dating of the sample shows that it’s about 42,000 years old—just when the Châtelperronian beads and tools were made. The team published its results online today in the Proceedings of the National Academy of Sciences. “You can invent all sorts of stories. 
But the simplest explanation is that this assemblage was made at least in part by Neandertals,” says co-author Jean-Jacques Hublin, a paleoanthropologist at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany. He believes that Neandertals likely picked up the ideas behind the tools and the ornaments from their new modern human neighbors but fashioned the artifacts themselves. Chris Stringer, an anthropologist at the Natural History Museum in London, wonders whether modern humans could have had a genetic influence on the last Neandertals as well. Because scientists know Neandertals and modern humans mated with each other, “is it possible that the ‘modern’ DNA these late Neandertal groups picked up included genes for enhanced cognitive abilities?” he wonders. But other researchers who have long argued that Neandertals had sophisticated cognitive abilities, including João Zilhão at the University of Barcelona in Spain, doubt that they had any help, genetic or otherwise, from the new arrivals. Higham, who led the study that cast doubt on the integrity of the Grotte du Renne layers, is convinced by the new work. “This paper is the first time that a Neandertal bone has been dated from this key site, and it … provides additional data to support a Châtelperronian-Neandertal link,” he says. “I think it is quite possible that Neandertals were capable of making and using personal ornaments.”
What is a Matrix? A matrix is an arrangement of elements or numbers in rows and columns. Horizontal lines of elements are called rows of the matrix, while vertical lines of elements are called columns of the matrix. The elements of a matrix are enclosed by brackets ( ) or [ ]. A matrix having m rows and n columns is called an m x n matrix, containing mn elements. The order of this matrix is m x n. It is denoted by A = [aij].

Addition & Subtraction of Matrices
Addition and subtraction of two matrices is possible only when the two matrices have the same order, and the result is also a matrix of that same order.

Multiplication of Matrices
Multiplication of two matrices A and B is defined only when the number of columns of A is equal to the number of rows of B. If the number of columns of A is different from the number of rows of B, then the product AB is not defined. Each entry of the product is a row-by-column dot product, for example: (2, 1, 4) • (4, 2, 3) = 2×4 + 1×2 + 4×3 = 22.

Transpose of a Matrix
A matrix obtained by interchanging the rows and columns is called the transpose of a matrix. If A is any matrix of order m x n, then its transpose, denoted by A’ or AT, is of order n x m. Also, the (i,j)th element of A = the (j,i)th element of A’ or AT.
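The three operations above can be sketched with plain Python lists of lists (a minimal illustration for clarity; in real work a library such as NumPy would be used, and the function names here are mine):

```python
def mat_add(A, B):
    """Entry-wise sum; defined only when A and B have the same order."""
    return [[a + b for a, b in zip(row_a, row_b)] for row_a, row_b in zip(A, B)]

def mat_mul(A, B):
    """Product AB; defined only when the number of columns of A
    equals the number of rows of B. Each entry is a row-by-column
    dot product, as in the (2, 1, 4) . (4, 2, 3) example."""
    if len(A[0]) != len(B):
        raise ValueError("AB is not defined: columns of A != rows of B")
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    """Interchange rows and columns: the (i, j)th element of A
    becomes the (j, i)th element of A'."""
    return [list(col) for col in zip(*A)]

# The dot product from the text, as a 1x3 matrix times a 3x1 matrix:
print(mat_mul([[2, 1, 4]], [[4], [2], [3]]))  # [[22]]
```

Note that an m x n matrix times an n x p matrix yields an m x p matrix, which is why the 1x3 by 3x1 product above collapses to a single number.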
An updated radiocarbon dating technique has thrown our conceptions of when Neanderthals died out into question. A new paper in PNAS suggests that Neanderthals disappeared considerably earlier than previously thought — a resetting of the anthropological clock that calls into question whether our human ancestors interacted and interbred with the now-extinct species. But as the limited nature of the study suggests, we might want to hold off for a bit before we rewrite the history books. For the past 20 years, anthropologists have assumed that the Neanderthals made their final stand in the southern part of the Iberian Peninsula — a time when humans occupied the same space. But now, an international study involving researchers from the Spanish National Distance Education University (UNED), Oxford University, the Australian National University, and many other institutions is claiming that this is highly improbable — that there's a slim chance Neanderthals were still alive in this region 30,000 years ago. According to co-author Jesús F. Jordá, a researcher at the Department of Prehistory and Archaeology of the UNED, the Neanderthals disappeared from Iberia at least 45,000 years ago. And in fact, 50,000 years ago is the most plausible date. If the researchers are right, it means that humans and Neanderthals never interacted in this part of Europe, or if they did, it would have been for a short period of time (about 3,000 to 4,000 years). The team reached this conclusion by applying a new radiocarbon dating method — one that utilizes an ultrafiltration protocol. It's a new technique that purifies the collagen of the bone samples, including the removal of unwanted contaminant molecules like amino acids. "The problem with radiocarbon dating alone is that it does not provide reliable dates older than 50,000 years," explained Jordá in a statement. 
Contamination is another problem; older samples collect more residues, leading to incorrect analysis. And with the new technique came new date ranges — changes that are altering our conceptions of the Neanderthal story. But before the history books get rewritten, it's important to note that the team's samples were quite limited. The research team analyzed 215 fossil bones from 11 Iberian sites. But in the end, they were only able to identify 27 specimens, with only six of them producing a usable date. The problem was that the other samples contained insufficient collagen. In addition to the small sample size, the new technique may have resulted in a selection effect in which only older samples were isolated and dated. Moreover, this research will have to be reconciled against other evidence in support of the idea that Neanderthals and humans crossed paths and interbred. While it now appears that Neanderthals and humans may not have made contact in Iberia, anthropologists are still fairly certain that the two species did interact at other locations, and at other times. A study from 2011, for example, suggested that modern humans were living in Italy and the UK as far back as 41,000 to 45,000 years ago. Other studies suggest that the Neanderthals hung on until 28,000 years ago — thus prolonging the potential "overlap" time with humans. And not only that, there's the genetics to consider. There's enough evidence from DNA studies to suggest that interbreeding occurred between humans and Neanderthals (some 1 to 4 percent of the DNA in modern humans is of Neanderthal origin). That said, it's quite possible that this mixing happened much longer ago, as much as 80,000 to 90,000 years ago in the eastern Mediterranean and Middle Eastern regions (what's referred to as the Levant Region). The entire study can be read at PNAS.
You might not have given much thought to predatory bacteria before, but a new study reveals that the behavior of these microorganisms plays a crucial part in the balance of nutrients and carbon capture in soil. These predatory bacteria – bacteria that eat other bacteria – grow at a faster rate and consume more resources than non-predatory bacteria, and have more of an influence on their surroundings than scientists have previously realized. In fact, the team behind the study describes the actions of the predatory bacteria as being very much like a wolf pack: They use enzymes and even fang-like filaments to devour other types of bacteria, giving them an outsized influence on their environment. “We’ve known predation plays a role in maintaining soil health, but we didn’t appreciate how significant predator bacteria are to these ecosystems before now,” says Bruce Hungate, a soil ecologist at Northern Arizona University. The team analyzed a total of 82 sets of data containing hundreds of bacterial species, from 15 sites across a range of ecosystems (including one stream). About 7 percent of the bacteria were found to be predatory. When extra carbon was added to the soil, the predatory bacteria were better able to use it to spur their growth. A recently developed technique called quantitative Stable Isotope Probing (qSIP) was used for the analysis. It uses labeled isotopes to track the activity of bacteria, almost like you might track social media comments with a hashtag, and it enables scientists to see the habits and reach of predatory bacteria. Two types of predatory bacteria were highlighted in the study: Bdellovibrionales and Vampirovibrionales, which are both obligate predator bacteria. They grew 36 percent faster and captured carbon 211 percent more quickly than non-predatory bacteria. That’s pretty important to know, considering how critical our soil is for storing carbon and keeping it out of our atmosphere. 
Insight into how carbon and other nutrients move through soil is going to play a vital role in climate change modeling – soil ecosystems currently contain more carbon than is stored in all the plants on Earth. As well as shedding light on how microbial food chains work and are kept together, the study of predatory bacteria might eventually become valuable in the development of therapeutic drugs, the researchers say. “Until now, predatory bacteria have not been a part of that soil story, but this study suggests that they are important characters who have a significant role determining the fate of carbon and other elements,” says Hungate. “These findings motivate us to take a deeper look at predation as a process.” The research has been published in mBio.
Update: March 10, 2020 What is novel coronavirus? 2019 Novel Coronavirus (COVID-19) is a virus (more specifically, a coronavirus) identified as the cause of an outbreak of respiratory illness first detected in Wuhan, China. Early on, many of the patients in the outbreak in Wuhan, China reportedly had some link to a large seafood and animal market, suggesting animal-to-person spread. However, a growing number of patients reportedly have not had exposure to animal markets, indicating person-to-person spread is occurring. At this time, it’s unclear how easily or sustainably this virus is spreading between people. How does novel coronavirus spread? Much is unknown about how COVID-19, a new coronavirus, spreads. Current knowledge is largely based on what is known about similar coronaviruses. Coronaviruses are a large family of viruses that are common in many different species of animals, including camels, cattle, cats, and bats. Rarely, animal coronaviruses can infect people and then spread between people such as with MERS, SARS, and now with COVID-19. Most often, spread from person-to-person happens among close contacts (about 6 feet). Person-to-person spread is thought to occur mainly via respiratory droplets produced when an infected person coughs or sneezes, similar to how influenza and other respiratory pathogens spread. These droplets can land in the mouths or noses of people who are nearby or possibly be inhaled into the lungs. It’s currently unclear if a person can get COVID-19 by touching a surface or object that has the virus on it and then touching their own mouth, nose, or possibly their eyes. Typically, with most respiratory viruses, people are thought to be most contagious when they are most symptomatic (the sickest). With COVID-19, however, there have been reports of spread from an infected patient with no symptoms to a close contact. It’s important to note that how easily a virus spreads person-to-person can vary.
Some viruses are highly contagious (like measles), while other viruses are less so. There is much more to learn about the transmissibility, severity, and other features associated with COVID-19, and investigations are ongoing. This information will further inform the risk assessment. Read the latest 2019 Novel Coronavirus, Wuhan, China situation summary. What are the symptoms? For confirmed COVID-19 infections, reported illnesses have ranged from people with little to no symptoms to people being severely ill and dying. Symptoms can include:
- Shortness of breath
CDC believes at this time that symptoms of COVID-19 may appear in as few as 2 days or as long as 14 days after exposure. This is based on what has been seen previously as the incubation period of MERS viruses. The latest situation summary updates are available on CDC’s web page 2019 Novel Coronavirus. How can I protect myself? There is currently no vaccine to prevent COVID-19 infection. The best way to prevent infection is to avoid being exposed to this virus. However, as a reminder, CDC always recommends everyday preventive actions to help prevent the spread of respiratory viruses, including:
- Wash your hands often with soap and water for at least 20 seconds, especially after going to the bathroom; before eating; and after blowing your nose, coughing, or sneezing.
- If soap and water are not readily available, use an alcohol-based hand sanitizer with at least 60% alcohol. Always wash hands with soap and water if hands are visibly dirty.
- Avoid touching your eyes, nose, and mouth with unwashed hands.
- Avoid close contact with people who are sick.
- Stay home when you are sick.
- Cover your cough or sneeze with a tissue, then throw the tissue in the trash.
- Clean and disinfect frequently touched objects and surfaces using a regular household cleaning spray or wipe.
For information about handwashing, see CDC’s Handwashing website For information specific to healthcare, see CDC’s Hand Hygiene in Healthcare Settings These are everyday habits that can help prevent the spread of several viruses. CDC does have specific guidance for travelers. Have there been cases of nCoV in the US? The Centers for Disease Control and Prevention is tracking the U.S. cases of COVID-19. For the most up-to-date information: What should we know about travel from China? For additional travel information, visit the link below: Where can I learn more? The Ohio Department of Health has a Call Center at 1-833-4-ASK-OHIO. Posted by: Amanda Carter
leetcode Water and Jug Problem
You are given two jugs with capacities x and y litres. There is an infinite amount of water supply available. You need to determine whether it is possible to measure exactly z litres using these two jugs.
Operations allowed:
- Fill any of the jugs completely.
- Empty any of the jugs.
- Pour water from one jug into another till the other jug is completely full or the first jug itself is empty.
Example 1:
Input: x = 2, y = 6, z = 4
Output: True
Example 2:
Input: x = 2, y = 6, z = 5
Output: False
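One standard way to solve this (not stated in the problem itself) uses Bézout's identity: the amounts reachable by filling, emptying, and pouring are exactly the non-negative multiples of gcd(x, y) that do not exceed x + y. A minimal sketch in Python (the function name is ours):

```python
from math import gcd

def can_measure_water(x: int, y: int, z: int) -> bool:
    """Return True if exactly z litres can be measured with jugs of size x and y.

    By Bezout's identity, the reachable amounts are the non-negative
    multiples of gcd(x, y) that fit in the two jugs combined.
    """
    if z == 0:
        return True
    if z > x + y:
        return False
    g = gcd(x, y)
    return g != 0 and z % g == 0
```

This reproduces both examples above: `can_measure_water(2, 6, 4)` is `True` (fill the 6, pour into the 2, leaving 4), while `can_measure_water(2, 6, 5)` is `False` since 5 is not a multiple of gcd(2, 6) = 2.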
- What is our responsibility to ourselves, our parents, and society?
- Where is the line between sanity and madness?
- What causes us to act? What causes us to delay or be inert?
- What is the nature of revenge? What is the nature of justice? Are they the same thing?
- How does corruption infect the individual and society? How do we rid ourselves of corruption?
Submit drafts of sites for Lit Circle books
Introduction to the Play with scene cards
- What does the line mean?
- How can you make the words mean something more by how you use your voice and body?
- Rules of the Game:
- Only use the words on the card
- Must add gestures and movement
- May add props, costumes, chairs, etc.
- Rules of the Game:
- Find a partner and create a scene with just the two lines and perform
- What are some of the inferences we can make about the play based on these lines?
- Do we see any patterns?
HW: Reading excerpt from Aristotle’s Poetics Due Wednesday, snow day or no snow day
- Main Ideas
- Key terms
- Confusing Information
- What does this mean for the reading of Hamlet?
Reading full 1.1 and 1.2 with notes on Hamlet vs. Claudius and Hamlet’s mental state
- How does Claudius’s language in his speech reveal character?
- In the lines between Claudius and Hamlet (1.2.66-96), what is each character’s subtext to their lines?
Close reading of Claudius’s opening speech and their exchange
- KING CLAUDIUS 1-2
- Acting out the exchange between Claudius & Gertrude and Hamlet
- Hamlet’s soliloquy
HW: Reading 1.3-1.5
Reviewing the first act
- Family relationships between Polonius, Laertes, and Ophelia
- What can we infer from Polonius’s choice of words and the sentence structure of “but” in lines 60-87 (44-45)?
- Understanding the ghost
- What is significant about Hamlet’s speech after his encounter with the ghost?
Reading and annotating the introduction of Hamlet for lines of inquiry
HW: Journal #1 on Google Classroom
2.1 In class
- Playing what isn’t seen in lines 2.1.84-134
- One student reads and the others mime the actions described
- What does the scene demand of Hamlet?
- What is Hamlet up to in this scene?
- Why is he treating Ophelia this way?
- Why Ophelia of all people?
- Does he love her? If not, how does he show this? If yes, what possible reasons could he have for putting on this show for her? What are you basing this on from the play so far?
- Does Ophelia love Hamlet? What is her reaction? What is the support for her feelings?
Working with Act 2 Scene 2
- Hamlet’s 2nd soliloquy
HW: Journal #2
Reading Act 3 Scenes 1 & 2
HW: Reading Act 3 Scenes 3 & 4
The famous soliloquy
Revising Hamlet’s famous soliloquy
- What purpose does a soliloquy serve?
- What happens in a soliloquy (usually)?
Hamlet and Ophelia’s exchange
- Hamlet knows from the beginning of the scene that Polonius and Claudius are watching him
- Hamlet does not know until later in the scene. When do you think this happens based on the text?
- Hamlet never knows he’s being watched
- What is his objective?
- What specific gestures, inflections, movements, or pauses should the actor use to support this objective?
- How does the objective inform the subtext?
Converting the soliloquy into an argument
- Two students read
- Two groups read using pitch, tone, inflection, and stress to emphasize the meaning of words and lines, reading in unison (practice & reading)
HW: Soliloquy assignment
Work day for the soliloquy assignment
- Submit by end of period
HW: Revise the Critical Lens site for Tuesday’s class
Reading 3.3 – 3.4
- What is Hamlet’s state going into this scene?
- What position is Gertrude now in?
HW: Journal 3
Reading Act 4
HW: Finish reading Act 4
Act 4 & Cutting Lines with your group
- How does Ophelia’s madness compare to Hamlet’s?
- Olivier (1.42.58)
- Tennant (2.13.48)
HW: Enjoy the break!
Writing the journal entry for Act 4
Reading Act 5 Scene 1 and answering questions
HW: Act 5 Scene 2 and questions on Classroom
RSC Hamlet (2.35.25-2.47.37)
HW: Read Act 5.2
Reading journal for Act 5
HW: Review Aristotle’s Poetics
In-class reading of criticism
- T.S. Eliot, Coleridge, Tolstoy
Criticism on Hamlet
- THE REAL OR ASSUMED MADNESS OF HAMLET by Simon Augustine Blackmore
- “Hearing Ophelia: Gender and Tragic Discourse in Hamlet” by Sandra K. Fischer
- “The Psychoanalytic Solution” from Hamlet and Oedipus by Ernest Jones
- “Hamlet: A Love Story” by Joshua Rothman (a review of Stay, Illusion! by Simon Critchley and Jamieson Webster)
HW: Final journal entry on Hamlet criticism. Complete journal due Monday at beginning of class
In-class essay – Passage Response
In-class essay – Open Response using Hamlet
I have created additional resources for the vocabulary workbook "Wordly Wise" (Book 8). I split each lesson into two parts: the first eight vocabulary words are in the form of word squares, and the last seven vocabulary words are in the form of a creative writing activity. This product is in both .DOCX and .PDF formats. GREAT FOR SUB PLANS or HOMEWORK, since both of these worksheets are independent assignments! FOR FULL BUNDLE (LESSONS 1-15): https://www.teacherspayteachers.com/Product/BUNDLE-Wordly-Wise-Book-8-Lessons-1-15-Additional-Resources-3923600 Fully aligned to Common Core Grade 8: Acquire and use accurately grade-appropriate general academic and domain-specific words and phrases; gather vocabulary knowledge when considering a word or phrase important to comprehension or expression.
Language, Logic, and Discrete Mathematics Course Composition and Objectives
- Sets, Relations, Functions, Numbers
- Students will understand set operations, applications of relations, equivalence relations, function composition, inverse functions, logarithms, the exponential function, number systems, and applications of number theory.
- Students will apply understandings of sets, relations, functions, and numbers to mathematical data types (integers, fractions, real numbers, tuples, function spaces); exponential growth; non-feasible algorithms; and/or public key encryption.
- Logic and Boolean Algebra
- Students will understand predicates, quantifiers, formulas, interpretations, syllogisms, logical consequence, the tableau method, Boolean connectives, Boolean functions, valuations, truth tables, and logic gates.
- Students will apply understandings of logic and Boolean algebra to database query languages, specification languages, switching circuits, and/or Boolean search expressions.
- Combinatorics and Probability
- Students will understand combination, permutation, and discrete probability.
- Students will apply understandings of combinatorics and probability to lexicographic ordering, combinatorial explosions, lower bounds of algorithms, and/or reliability of computer systems.
- Graphs and Trees
- Students will understand directed and undirected graphs, weighted graphs, walks, paths, matrix representations, graph algorithms, spanning trees, rooted and structured trees, combining trees to form new trees, inserting nodes in trees, sorting, and searching.
- Students will apply understandings of graphs and trees to flow diagrams, task scheduling, critical paths, network connectivity, finite state machines, parsing, derivation, and/or trees as data structures for storing information.
- Induction and Recursion
- Students will understand induction and recursion on the natural numbers and other structures such as trees.
- Students will apply understandings of induction and recursion to recursive evaluation of mathematical and Boolean expressions, recursive searching and sorting algorithms, and/or asymptotic analysis of algorithms.
- Grammars, Languages, and Finite State Machines
- Students will understand alphabets, strings, grammars, languages, regular languages, regular expressions, finite state machines, and language recognizers.
- Students will apply understandings of grammars, languages, and finite state machines to regular expression search and/or efficient pattern matching using finite-state machines.
- Instructor's Choice: Instructors may choose topics and learning objectives that meet the spirit of the course as defined here. Instructors may choose to devote more time to the learning objectives listed above or to add additional, complementary objectives. Supplementary material and objectives should not overlap with the defined content of other courses in the curriculum.
IST 230 is one of the five introductory core courses for the baccalaureate degree program in Information Sciences and Technology. The purpose of IST 230 is to provide students with an understanding of an array of mathematical concepts and methods which form the foundation of modern information science, in a form that will be relevant and useful for IST students. Exams and assignments will be used to assess that understanding. IST 230 will draw some of its material from several mathematical disciplines: formal language theory, mathematical logic, and discrete mathematics. In-depth treatments of each of these subjects are offered elsewhere in the University as advanced mathematics and computer science courses. The difference is that IST 230 will present these concepts in a more elementary way, with much more emphasis on IST applications, and in a more eclectic, web-based format.
Remember that mathematicians love to abbreviate things (RTMLTAT, for short). To write "3 multiplied by 4" in symbols, we could write 3 · 4, 3 × 4, or (3)(4). To write "3 multiplied by x" we could also write 3 · x, 3 × x, or (3)(x). However, there is a much shorter way: write 3x. When multiplying a number by a variable, we can write the number and the variable side by side. They get along swimmingly, so there is no need to separate them with a symbol. We can't do the same when multiplying numbers together, because if we write 2 next to 4, for example, we get 24. If you think that 2 times 4 is 24, then you may have taken a 2 x 4 to the back of the head. When multiplying two (or more) variables, we also write the variables next to each other to show that they are being multiplied. For example, xy means "x times y." This is another reason that we go with such rarely used letters as our variables. If we used a and b most of the time, you might see ab and think we are talking about somebody's six-pack. The mathematical convention (the usual way of doing things) is to write the number before the variable when multiplying numbers by variables. In other words, we write 3x, not x3. If you do write x3 people will probably know what you mean, but you likely won't be invited back to the convention. Observe that xy = yx since multiplication of real numbers is commutative. When multiplying variables together, it can be helpful to write the variables in alphabetical order (xy or xyz), so we have a standard order in which to write them. Writing yx instead of xy isn't nearly as bad as writing x17 in place of 17x, but it is still frowned upon in certain circles. Generally the circles frequented by us math nerds. You scoff, but our frowns can be intimidating. When we multiply a variable by itself several times—almost like cloning, but much less controversial—we can use exponent notation. For example, x · x · x = x³.
We may read x³ as "three copies of x," since x³ is an abbreviation for three copies of x multiplied together. It is too bad we don't need 100 copies, as then we would receive a price break. When dividing a variable by a number, there are a couple of different ways to write the division in symbols. We can write "x divided by 4" either as the fraction x/4 or as x ÷ 4, since both mean the same thing. In this expression, the x could not possibly stand for the United States of America, because our nation is indivisible. Pledge of allegiance, represent. Be careful: It is safer to write division using fraction notation than it is to write division using the slash. Not that you will be in any real physical danger if you do the latter, unless you encounter a real fraction bully, but it isn't advisable and here's why. The expression 1/4x is ambiguous, as it could mean either (1/4)x or 1/(4x). Avoid the problem by simply not writing 1/4x. No, your solution of avoiding the problem by skipping algebra altogether is not a valid one. Nice try.
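The notation conventions above are easy to check in a programming language, where grouping is always explicit. A small illustrative sketch in Python (the variable names and the value of x are ours):

```python
x = 8

# "3 times x" -- written 3x in math notation, with the number first.
three_x = 3 * x          # 24

# "three copies of x multiplied together" -- written x^3 in math notation.
x_cubed = x * x * x      # 512, the same as x**3

# The ambiguous 1/4x: the two possible readings give different results,
# which is exactly why the notation is best avoided.
reading_one = (1 / 4) * x    # one quarter of x
reading_two = 1 / (4 * x)    # one over 4x
```

With x = 8, the first reading gives 2.0 and the second gives 0.03125, so the slash really is ambiguous in a way the fraction bar is not.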
Previously on this blog, we’ve covered two major kinds of algebraic objects: the vector space and the group. There are at least two more fundamental algebraic objects every mathematician should know something about. The first, and the focus of this primer, is the ring. The second, which we’ve mentioned briefly in passing on this blog, is the field. There are a few others important to the pure mathematician, such as the $R$-module (here $R$ is a ring). These do have some nice computational properties, but in order to even begin to talk about them we need to know about rings.

A Very Special Kind of Group

Recall that an abelian group is a set $G$ paired with a commutative binary operation $+$, where $G$ has a special identity element called 0 which acts as an identity for $+$. The archetypal example of an abelian group is, of course, the integers $\mathbb{Z}$ under addition, with zero playing the role of the identity element. The easiest way to think of a ring is as an abelian group with more structure. This structure comes in the form of a multiplication operation which is “compatible” with the addition coming from the group structure.

Definition: A ring $R$ is a set which forms an abelian group under $+$ (with additive identity 0), and has an additional associative binary operation $\cdot$ with an element 1 serving as a (two-sided) multiplicative identity. Furthermore, $\cdot$ distributes over $+$ in the sense that for all $a, b, c \in R$, $a(b + c) = ab + ac$ and $(b + c)a = ba + ca$.

The most important thing to note is that multiplication is not commutative both in general rings and for most rings in practice. If multiplication is commutative, then the ring is called commutative. Some easy examples of commutative rings include rings of numbers like $\mathbb{Z}, \mathbb{Q}, \mathbb{R}, \mathbb{C}$, which are just the abelian groups we know and love with multiplication added on. If the reader takes anything away from this post, it should be the following: Rings generalize arithmetic with integers. Of course, this would imply that all rings are commutative, but this is not the case.
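For a small finite ring such as the integers mod 6, the ring axioms can be verified exhaustively. A brute-force sketch in Python (our own illustration, not part of the original post):

```python
n = 6
elements = range(n)
add = lambda a, b: (a + b) % n   # addition in Z/6Z
mul = lambda a, b: (a * b) % n   # multiplication in Z/6Z

# Abelian group under addition, with identity 0 and additive inverses.
assert all(add(a, b) == add(b, a) for a in elements for b in elements)
assert all(add(a, 0) == a for a in elements)
assert all(any(add(a, b) == 0 for b in elements) for a in elements)

# Associative multiplication with two-sided identity 1.
assert all(mul(mul(a, b), c) == mul(a, mul(b, c))
           for a in elements for b in elements for c in elements)
assert all(mul(a, 1) == a and mul(1, a) == a for a in elements)

# Multiplication distributes over addition (on both sides).
assert all(mul(a, add(b, c)) == add(mul(a, b), mul(a, c)) and
           mul(add(b, c), a) == add(mul(b, a), mul(c, a))
           for a in elements for b in elements for c in elements)
```

Every assertion passes, so $\mathbb{Z}/6\mathbb{Z}$ is a (commutative) ring; the same loop with a different `n` checks any $\mathbb{Z}/n\mathbb{Z}$.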
More meaty and tempestuous examples of rings are very visibly noncommutative. One of the most important examples is rings of matrices. In particular, denote by $M_n(\mathbb{R})$ the set of all $n \times n$ matrices with real valued entries. This forms a ring under addition and multiplication of matrices, and has as a multiplicative identity the $n \times n$ identity matrix $I_n$.

Commutative rings are much more well-understood than noncommutative rings, and the study of the former is called commutative algebra. This is the main prerequisite for fields like algebraic geometry, which (in the simplest examples) associate commutative rings to geometric objects.

For us, all rings will have an identity, but many ring theorists will point out that one can just as easily define a ring to not have a multiplicative identity. We will call these non-unital rings, and will rarely, if ever, see them on this blog.

Another very important example of a concrete ring is the polynomial ring in $n$ variables with coefficients in $\mathbb{Q}$ or $\mathbb{R}$. This ring is denoted with square brackets denoting the variables, e.g. $\mathbb{R}[x_1, x_2, \dots, x_n]$. We rest assured that the reader is familiar with addition and multiplication of polynomials, and that this indeed forms a ring.

Let’s start with some easy properties of rings. We will denote our generic ring by $R$. First, the multiplicative identity of a ring is unique. The proof is exactly the same as it was for groups, but note that identities must be two-sided for this to work. If $1, 1'$ are two identities, then $1 = 1 \cdot 1' = 1'$. Next, we prove that $a \cdot 0 = 0$ for all $a \in R$. Indeed, by the fact that multiplication distributes across addition, $a \cdot 0 = a(0 + 0) = a \cdot 0 + a \cdot 0$, and additively canceling $a \cdot 0$ from both sides gives $a \cdot 0 = 0$. An identical proof works for $0 \cdot a$. In fact, pretty much any “obvious property” from elementary arithmetic is satisfied for rings. For instance, $-(-a) = a$, $(-a)b = a(-b) = -(ab)$, and $(-1)a = -a$ are all trivial to prove. Here is a list of these and more properties which we invite the reader to prove.
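The noncommutativity of the matrix ring is easy to witness with a single pair of 2×2 matrices. A minimal sketch in plain Python (the helper `matmul` is our own):

```python
def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 1],
     [0, 1]]
B = [[1, 0],
     [1, 1]]

AB = matmul(A, B)   # [[2, 1], [1, 1]]
BA = matmul(B, A)   # [[1, 1], [1, 2]]
assert AB != BA     # multiplication in M_2(R) is not commutative

I = [[1, 0], [0, 1]]
assert matmul(A, I) == A and matmul(I, A) == A   # I is the identity
```

So even in the smallest interesting case, $n = 2$, the ring $M_n(\mathbb{R})$ fails to be commutative.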
Zero Divisors, Integral Domains, and Units

One thing that is very much not automatically given in the general theory of rings is multiplicative cancellation. That is, if I have $ac = bc$ then it is not guaranteed to be the case that $a = b$. It is quite easy to come up with examples in modular arithmetic on integers; if $R = \mathbb{Z}/8\mathbb{Z}$ then $2 \cdot 2 = 6 \cdot 2 = 4 \pmod 8$, but $2 \neq 6$. The reason for this phenomenon is that many rings have elements that lack multiplicative inverses. In $\mathbb{Z}/8\mathbb{Z}$, for instance, $2$ has no multiplicative inverse (and neither does 6). Indeed, one is often interested in determining which elements are invertible in a ring and which elements are not. In a seemingly unrelated issue, one is interested in determining whether one can multiply any given element $a$ by some $b$ to get zero. It turns out that these two conditions are disjoint, and closely related to our further inspection of special classes of rings.

Definition: An element $a$ of a ring $R$ is said to be a left zero-divisor if there is some $b \neq 0$ such that $ab = 0$. Similarly, $a$ is a right zero-divisor if there is a $c \neq 0$ for which $ca = 0$. If $a$ is a left and right zero-divisor (e.g. if $R$ is commutative), it is just called a zero-divisor.

Definition: Let $a, b \in R$. The element $b$ is said to be a left inverse to $a$ if $ba = 1$, and a right inverse if $ab = 1$. If there is some $b$ for which $ba = ab = 1$, then $b$ is said to be a two-sided inverse and is called the inverse of $a$, and $a$ is called a unit.

As a quick warmup, we prove that if $a$ has a left and a right inverse then it has a two-sided inverse. Indeed, if $ba = 1 = ac$, then $b = b(ac) = (ba)c = c$, so in fact the left and right inverses are the same. The salient fact here is that having a (left- or right-) inverse allows one to do (left- or right-) cancellation, since obviously when $ba = ca$ and $a$ has a right inverse $a'$, we can multiply $baa' = caa'$ to get $b = c$. We will usually work with two-sided inverses and zero-divisors (since we will usually work in a commutative ring). But in non-commutative rings, like rings of matrices, one-sided phenomena do run rampant, and one must distinguish between them. The right way to relate these two concepts is as follows.
If $a$ has a right inverse, then define the right-multiplication function $\rho_a : R \to R$ which takes $x$ and spits out $xa$. In fact, this function is an injection. Indeed, we already proved that (because $a$ has a right inverse) if $xa = ya$ then $x = y$. In particular, there is a unique preimage of $0$ under this map. Since $0 \cdot a = 0$ is always true, it must be the case that the only way to multiply $a$ on the left by something to get zero is $0 \cdot a$. That is, $a$ is not a right zero-divisor if right-multiplication by $a$ is injective. On the other hand, if the map is not injective, then there are some $x \neq y$ such that $xa = ya$, implying $(x - y)a = 0$, and this proves that $a$ is a right zero-divisor. We can do exactly the same argument with left-multiplication.

But there is one minor complication: what if right-multiplication is injective, but $a$ has no inverses? It’s not hard to come up with an example: 2 as an element of the ring of integers $\mathbb{Z}$ is a perfectly good one. It’s neither a zero-divisor nor a unit.

This basic study of zero-divisors gives us some natural definitions:

Definition: A division ring is a ring in which every nonzero element has a two-sided inverse.

If we add the condition that multiplication is commutative, we get something even better (and more familiar: $\mathbb{Q}, \mathbb{R}, \mathbb{C}$ are the standard examples of fields).

Definition: A field is a nonzero commutative division ring.

The “nonzero” part here is just to avoid the case when the ring is the trivial ring (sometimes called the zero ring) with one element; i.e., the set $\{0\}$ is a ring in which zero satisfies both the additive and multiplicative identities. The zero ring is excluded from being a field for silly reasons: elegant theorems will hold for all fields except the zero ring, and it would be messy to require every theorem to add the condition that the field in question is nonzero.

We will have much more to say about fields later on this blog, but for now let’s just state one very non-obvious and interesting result in non-commutative algebra, known as Wedderburn’s Little Theorem.

Theorem: Every finite division ring is a field.
That is, simply having finitely many elements in a division ring is enough to prove that multiplication is commutative. Pretty neat stuff. We will actually see a simpler version of this theorem in a moment.

Now as we saw, units and zero-divisors are disjoint, but not quite opposites of each other. Since we have defined a division ring as a ring where all (non-zero) elements are units, it is natural to define a ring in which the only zero-divisor is zero. This is considered a natural generalization of our favorite ring $\mathbb{Z}$, hence the name “integral.”

Definition: An integral domain is a commutative ring in which zero is the only zero-divisor.

Note the requirement that the ring is commutative. Often we will simply call it a domain, although most authors allow domains to be noncommutative.

Already we can prove a very nice theorem:

Theorem: Every finite integral domain is a field.

Proof. Integral domains are commutative by definition, and so it suffices to show that every non-zero element has an inverse. Let $R$ be our integral domain in question, and $a \neq 0$ the element whose inverse we seek. By our discussion above, right multiplication by $a$ is an injective map $R \to R$, and since $R$ is finite this map must be a bijection. Hence $a$ must have some $b$ so that $ba = 1$. And so $b$ is the inverse of $a$.

We could continue traveling down this road of studying special kinds of rings and their related properties, but we won’t often use these ideas on this blog. We do think the reader should be familiar with the names of these special classes of rings, and we will state the main theorems relating them.

Definition: A nonzero element $p \in R$ is called prime if whenever $p$ divides a product $ab$ it either divides $a$ or divides $b$ (or both). A unique factorization domain (abbreviated UFD) is an integral domain in which every element can be written uniquely as a product of primes.

Definition: A Euclidean domain is a ring in which the division algorithm can be performed.
That is, there is a norm function $N : R \setminus \{0\} \to \mathbb{N}$, for which every pair $a, b$ with $b \neq 0$ can be written as $a = qb + r$ with $r$ satisfying either $r = 0$ or $N(r) < N(b)$.

Paolo Aluffi has a wonderful diagram showing the relations among the various special classes of integral domains. This image comes from his book, Algebra: Chapter 0, which is a must-have for the enterprising mathematics student interested in algebra. In terms of what we have already seen, this diagram says that every field is a Euclidean domain, and in turn every Euclidean domain is a unique factorization domain. These are standard, but non-trivial theorems. We will not prove them here.

The two big areas in this diagram we haven’t yet mentioned on this blog are PIDs and Noetherian domains. The reason for that is because they both require a theory of ideals in rings (perhaps most briefly described as a generalization of the even numbers). We will begin next time with a discussion of ideals, and their important properties in studying rings, but before we finish we want to focus on one main example that will show up later on this blog.

Let us formally define the polynomial ring.

Definition: Let $R$ be a commutative ring. Define the ring $R[x]$ to be the set of all polynomials in $x$ with coefficients in $R$, where addition and multiplication are the usual addition and multiplication of polynomials. We will often call $R[x]$ the polynomial ring in one variable over $R$.

We will often replace $x$ by some other letter representing an “indeterminate” variable, such as $t$ or $y$, or multiple indexed variables as in the following definition.

Definition: Let $R$ be a commutative ring. The ring $R[x_1, \dots, x_n]$ is the set of all polynomials in the variables $x_1, \dots, x_n$ with the usual addition and multiplication of polynomials.

What can we say about the polynomial ring in one variable $R[x]$? Its additive and multiplicative identities are clear: the constant 0 and 1 polynomials, respectively. Other than that, we can’t quite get much more.
There are some very bizarre features of polynomial rings with bizarre coefficient rings, such as multiplication decreasing degree. However, when we impose additional conditions on $R$, the situation becomes much nicer.

Theorem: If $R$ is a unique factorization domain, then so is $R[x]$.

Proof. As we have yet to discuss ideals, we refer the reader to this proof, and recommend the reader return to it after our next primer.

On the other hand, we will most often be working with polynomial rings over a field. And here the situation is even better:

Theorem: If $k$ is a field, then $k[x]$ is a Euclidean domain.

Proof. The norm function here is precisely the degree of the polynomial (the highest power of a monomial in the polynomial). Then given $f, g$ with $g \neq 0$, the usual algorithm for polynomial division gives a quotient $q$ and a remainder $r$ so that $f = qg + r$. In following the steps of the algorithm, one will note that all multiplication and division operations are performed in the field $k$, and the remainder always has a smaller degree than the divisor $g$. Indeed, one can explicitly describe the algorithm and prove its correctness, and we will do so in full generality in the future of this blog when we discuss computational algebraic geometry.

For multiple variables, things are a bit murkier. For instance, it is not even the case that $k[x, y]$ is a Euclidean domain. One of the strongest things we can say originates from this simple observation:

Lemma: $R[x, y]$ is isomorphic to $(R[x])[y]$.

We haven’t quite yet talked about isomorphisms of rings (we will next time), but the idea is clear: every polynomial in two variables $x, y$ can be thought of as a polynomial in $y$ whose coefficients are polynomials in $x$ (gathered together by grouping the terms by powers of $y$). Similarly, $R[x_1, \dots, x_n]$ is the same thing as $(R[x_1, \dots, x_{n-1}])[x_n]$ by induction. This allows us to prove that any polynomial ring is a unique factorization domain:

Theorem: If $R$ is a UFD, so is $R[x_1, \dots, x_n]$.

Proof. $R[x]$ is a UFD as described above. By the lemma, $R[x_1, \dots, x_n] = (R[x_1, \dots, x_{n-1}])[x_n]$, so by induction $R[x_1, \dots, x_{n-1}]$ being a UFD implies $R[x_1, \dots, x_n]$ is as well.
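The division algorithm in $k[x]$ can be sketched concretely with $k = \mathbb{Q}$, representing a polynomial as its list of coefficients from lowest to highest degree. This is our own illustration of the degree-decreasing step, not code from the original post:

```python
from fractions import Fraction

def polydiv(f, g):
    """Divide f by g in Q[x]. Polynomials are coefficient lists, lowest
    degree first; g's leading coefficient must be nonzero. Returns
    (quotient, remainder) with deg(remainder) < deg(g)."""
    f = [Fraction(c) for c in f]
    g = [Fraction(c) for c in g]
    q = [Fraction(0)] * max(len(f) - len(g) + 1, 1)
    r = f[:]
    while len(r) >= len(g) and any(r):
        while r and r[-1] == 0:      # strip trailing zeros: true degree
            r.pop()
        if len(r) < len(g):
            break
        shift = len(r) - len(g)
        c = r[-1] / g[-1]            # the division happens in the field Q
        q[shift] = c
        for i, gc in enumerate(g):   # subtract c * x^shift * g from r
            r[i + shift] -= c * gc
    return q, r
```

For example, dividing $x^2 + 3x + 2$ by $x + 1$ gives quotient $x + 2$ and remainder 0, while dividing $x^2 + 1$ by $x + 1$ gives quotient $x - 1$ and remainder 2; in both cases the remainder's degree dropped below the divisor's, exactly the Euclidean norm condition.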
We’ll be very interested in exactly how to compute useful factorizations of polynomials into primes when we start our series on computational algebraic geometry. Some of the applications include robot motion planning, and automated theorem proving. Next time we’ll visit the concept of an ideal, see quotient rings, and work toward proving Hilbert’s Nullstellensatz, a fundamental result in algebraic geometry.
The Symptoms You Should Know
Children’s Actions Relating to Hearing Loss
Here is a list of general warning signs regarding children. As a parent suspecting a hearing issue, please be very observant and aware of these signs:
- The child seems to respond inconsistently to sound, sometimes hearing and sometimes not.
- The child intently watches the speaker’s face.
- The child often says “What?” when spoken to.
- The child exhibits behaviors that seem to favor one ear, such as tilting the head to the left or right when listening.
- There is a history of hearing loss in the family.
- The child’s mother had rubella (German measles) during pregnancy.
- There is a history of blood incompatibility or difficulty in pregnancy.
- The child has had frequent high fevers.
- The child has a history of chronic ear infections.
- The child frequently complains of hurting ears.
- The child seems to respond better to low- or high-pitched sounds.
- There is a change in how loudly or how much the child babbles or talks.
If you suspect a hearing loss, examine the child’s speech and language development. The speech of children who have a hearing loss may sound different or less clear because they will be imitating a distorted signal. Many children have had a hearing impairment since birth and have therefore not heard speech and language of the same quality as that experienced by children with normal hearing. As a result, their language acquisition is an ongoing, effort-filled sequence instead of a gradual, easy, natural process. Consider the scores of times small children hear a word before they can learn to actually say it. Children with an impairment do not hear as many words in their surroundings as easily, and consequently they may build a vocabulary at a much slower pace. Improving the vocabulary of children with hearing loss is so important. Expanding children’s hearing opportunities is a big plus as well.
Their words may also be missing word endings (e.g., s, ing), and short words (e.g., the, is, it) may be missing from their speech. The children’s written work may also reflect their inability to hear. These specific age-related behaviors can signal a hearing loss in infants and toddlers: BEFORE SIX MONTHS: - The child DOESN’T startle in some way, such as a blink of the eyes or a jerk of the body or a change of activity in response to sudden, loud sounds. - The child DOESN’T initiate sounds such as cooing or babbling. - The child shows NO RESPONSE to noise-making toys. - The child DOESN’T respond to or is not soothed by the sound of his or her mother’s voice. BY SIX MONTHS: - The child DOESN’T search for sounds by shifting eyes or turning the head from side to side. BY TEN MONTHS: - The child DOESN’T show some kind of response to his or her name. - The child REDUCES vocal behaviors, such as babbling. BY TWELVE MONTHS: - The child shows NO RESPONSE to common household sounds, such as pots banging, running water, or footsteps from behind. - The child yells when imitating sounds. - The child DOESN’T respond to someone’s voice by turning his or her head or body in all directions to search for the source. BY FIFTEEN MONTHS: - The child ISN’T beginning to imitate many sounds or ISN’T attempting to say simple words. - In order to get the child’s attention, you consistently have to raise your voice. What should you do if you or your child seems to have some hearing loss? If you suspect that your child has a hearing loss, or if you feel that sounds are not as loud as you need them to be or that speech is muffled, it is a good idea to first have your family physician check for wax in the ear canals, infection, or a treatable disease. If the problem can be treated medically or surgically, pursue that treatment.
If this is not possible, or if after treatment you or your child still has some difficulty hearing, investigate hearing help with an audiologist. To begin, ask your physician for a signed statement or form called a “medical clearance” saying that the hearing loss has been medically evaluated and that you or your child may be considered a candidate for hearing aids. This form is required by law before a hearing aid dispenser can provide you with a hearing aid. (Adults over eighteen may sign a waiver of this regulation, but for your best hearing health you should obtain a medical check-up instead.) Then arrange for a hearing test to determine how much hearing loss there is. Get a complete hearing evaluation from a licensed audiologist who is a Fellow in the American Academy of Audiology (FAAA) and/or one with a Certificate of Clinical Competence in Audiology (CCC-A) issued by the American Speech-Language-Hearing Association (ASHA). Do not confuse the FAAA or CCC-A certification with the description used by many hearing aid dealers of “Board Certified,” which is granted by the National Hearing Aid Society (NHAS). NHAS is a trade association of hearing aid dealers. Audiologists can measure hearing ability and identify the degree of loss. They can design and direct a rehabilitation program, recommend and fit the most appropriate hearing aids, and measure the hearing improvement from the use of hearing aids. They will provide guidance and training on how to use the new hearing aids and recommend the use of other assistive technology, if appropriate. They can also teach speech reading. They can help you and your child to find solutions that reduce the effects of hearing loss by working with your spouse, family, employer, teacher, caregiver, or other medical specialist. In addition, audiologists evaluate balance, vertigo and dizziness disorders.
If a hearing aid is recommended, be certain to arrange for a trial of at least thirty days through a facility that assists you and your child in becoming oriented to the new experience of hearing with amplification. Remember, it is a learning experience that requires time, practice, and patience.
Editor’s Note: This post was originally published on January 3, 2017 and has been completely revamped and updated for accuracy and comprehensiveness.
We are going to analyse what happens when routing occurs on a network (the IP routing process). When I was new to the networking area, I thought that all you needed was the IP address of the machine you wanted to contact, but little did I know: you actually need a bit more information than just the IP address! The process we are going to explain is fairly simple and doesn't really change, no matter how big your network is. In our example, we have 2 networks, Network A and Network B. Both networks are connected via a router (Router A) which has 2 interfaces: E0 and E1. These interfaces are just like the interface on your network card (RJ-45), but built into the router. Now, we are going to describe step by step what happens when Host A (on Network A) wants to communicate with Host B (on Network B), which is on a different network. 1) Host A opens a command prompt and enters >Ping 192.168.1.2 (the IP address of Host B). 2) IP determines which network this packet is destined for by comparing the destination IP address with Host A's own IP address and subnet mask. Since this is a request for a remote host - that is, it is not destined for a host on the local network - the packet must be sent to the router (the gateway for Network A) so that it can be routed to the correct remote network (Network B). 3) Now, for Host A to send the packet to the router, it needs to know the hardware address of the router's interface which is connected to its network (Network A). In case you didn't realise, we are talking about the MAC (Media Access Control) address of interface E0. To get the hardware address, Host A looks in its ARP cache - a memory location where these MAC addresses are stored for a short time. 4) If it doesn't find it in there, it means that either a long time has passed since it last contacted the router, or it simply hasn't yet resolved the IP address of the router (192.168.0.1) to a hardware (MAC) address. So it then sends an ARP broadcast.
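The local-versus-remote decision in step 2 can be sketched in Python with the standard `ipaddress` module. This is a minimal illustration of the comparison a host performs, not actual network-stack code; the addresses are illustrative.

```python
import ipaddress

def is_local(src_ip: str, dst_ip: str, netmask: str) -> bool:
    """Return True if dst_ip is on the same subnet as src_ip (deliver directly);
    False means the packet must be handed to the default gateway (the router)."""
    subnet = ipaddress.ip_network(f"{src_ip}/{netmask}", strict=False)
    return ipaddress.ip_address(dst_ip) in subnet

# Suppose Host A is 192.168.0.2 with mask 255.255.255.0 and its gateway is 192.168.0.1.
print(is_local("192.168.0.2", "192.168.0.5", "255.255.255.0"))  # True: same network, send directly
print(is_local("192.168.0.2", "192.168.1.2", "255.255.255.0"))  # False: remote, send via the router
```

This comparison is what decides whether the host ARPs for the destination itself or for the gateway.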
This broadcast contains the following: "What is the hardware (MAC) address for IP 192.168.0.1?". The router identifies that IP address as its own and must answer, so it sends back to Host A a reply, giving it the MAC address of its E0 interface. This is also one of the reasons why the first "ping" will sometimes time out: it takes some time for the ARP request to be sent and for the requested machine to respond with its MAC address, and by the time all that happens, the ping utility's reply timer for the first echo request has already expired, so it reports a timeout. 5) The router responds with the hardware address of its E0 interface, to which the 192.168.0.1 IP is bound. Host A now has everything it needs in order to transmit a packet out on the local network to the router. Now, the Network Layer hands down to the Datalink Layer the packet it generated with the ping (ICMP echo request), along with the hardware address of the router. This packet includes the source and destination IP addresses as well as the ICMP echo request which was specified in the Network Layer. 6) The Datalink Layer of Host A creates a frame, which encapsulates the packet with the information needed to transmit it on the local network. This includes the source and destination hardware addresses (MAC) and the type field, which specifies the Network Layer protocol, e.g. IPv4 (the IP version we use) or ARP. At the end of the frame, in the FCS portion, the Datalink Layer adds a Cyclic Redundancy Check (CRC) so the receiving machine (the router) can figure out whether the frame it received has been corrupted. To learn more about how the frame is created, visit the Data Encapsulation - Decapsulation page. 7) The Datalink Layer of Host A hands the frame to the Physical Layer, which encodes the 1s and 0s into a digital signal and transmits this out on the local physical network. 8) The signal is picked up by the router's E0 interface, which reads the frame.
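The frame construction in step 6 and the CRC check in step 8 can be sketched as follows. This is a simplified model: real Ethernet framing also has a preamble and minimum-size padding, and the FCS is computed by NIC hardware; the MAC addresses here are made up.

```python
import struct
import zlib

def build_frame(dst_mac: bytes, src_mac: bytes, ethertype: int, payload: bytes) -> bytes:
    """Build a simplified Ethernet frame: header + payload, with a 4-byte FCS (CRC-32) appended."""
    header = dst_mac + src_mac + struct.pack("!H", ethertype)  # type field, e.g. 0x0800 = IPv4
    body = header + payload
    fcs = struct.pack("<I", zlib.crc32(body))
    return body + fcs

def frame_ok(frame: bytes) -> bool:
    """Receiver-side check: recompute the CRC over everything except the trailing FCS."""
    body, fcs = frame[:-4], frame[-4:]
    return struct.pack("<I", zlib.crc32(body)) == fcs

frame = build_frame(b"\xaa\xbb\xcc\xdd\xee\xff",  # destination MAC: router's E0 interface (made up)
                    b"\x11\x22\x33\x44\x55\x66",  # source MAC: Host A's network card (made up)
                    0x0800, b"ICMP echo request bytes")
print(frame_ok(frame))  # True: frame arrived intact

corrupted = bytearray(frame)
corrupted[12] ^= 0xFF   # flip one byte of the frame body "in transit"
print(frame_ok(bytes(corrupted)))  # False: CRC mismatch, the frame is discarded
```

A single corrupted byte is a burst error well under 32 bits, which CRC-32 is guaranteed to detect.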
It will first run a CRC check and compare the result with the CRC value Host A added to the frame, to make sure the frame is not corrupt. 9) After that, the destination hardware address (MAC) of the received frame is checked. Since this is a match, the type field in the frame is checked to see what the router should do with the data packet. IP is in the type field, and the router hands the packet to the IP protocol running on the router. The frame is stripped, and the original packet that was generated by Host A is now in the router's buffer. 10) IP looks at the packet's destination IP address to determine if the packet is for the router itself. Since the destination IP address is 192.168.1.2, the router determines from its routing table that 192.168.1.0 (Network B) is a directly connected network on interface E1. 11) The router places the packet in the buffer of interface E1. The router needs to create a frame to send the packet to the destination host. First, the router looks in its ARP cache to determine whether the hardware address has already been resolved from a prior communication. If it is not in the ARP cache, the router sends an ARP broadcast out E1 to find the hardware address of 192.168.1.2. 12) Host B responds with the hardware address of its network interface card with an ARP reply. The router's E1 interface now has everything it needs to send the packet to the final destination. 13) The frame generated from the router's E1 interface has the source hardware address of the E1 interface and the destination hardware address of Host B's network interface card. However, the most important thing here is that even though the frame's source and destination hardware addresses changed at every interface the packet was sent to and from, the IP source and destination addresses never changed. The packet was never modified at all; only the frame changed. 14) Host B receives the frame and runs a CRC. If that checks out, it strips off the frame and hands the packet to IP.
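The routing decision in steps 10 and 11 boils down to a longest-prefix match against the routing table. A toy sketch for Router A follows; the prefixes and interface names mirror this example rather than any real device, and the default route is an assumption added for completeness.

```python
import ipaddress

# Router A's routing table: two directly connected networks plus an assumed default route.
ROUTING_TABLE = [
    (ipaddress.ip_network("192.168.0.0/24"), "E0"),  # Network A, directly connected
    (ipaddress.ip_network("192.168.1.0/24"), "E1"),  # Network B, directly connected
    (ipaddress.ip_network("0.0.0.0/0"), "E0"),       # default route (hypothetical upstream)
]

def lookup(dst_ip: str) -> str:
    """Longest-prefix match: of all routes containing dst_ip, pick the most specific one."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [(net, iface) for net, iface in ROUTING_TABLE if dst in net]
    best_net, best_iface = max(matches, key=lambda m: m[0].prefixlen)
    return best_iface

print(lookup("192.168.1.2"))  # E1: a packet for a host on Network B is queued on interface E1
```

The /24 route wins over the /0 default because it has the longer prefix, which is exactly why the router knows Network B is reachable directly out of E1.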
IP will then check the destination IP address. Since the IP destination address matches the IP configuration of Host B, it looks in the protocol field of the packet to determine the purpose of the packet. 15) Since the packet is an ICMP echo request, Host B generates a new ICMP echo-reply packet with a source IP address of Host B and a destination IP address of Host A. The process starts all over again, except that it goes in the opposite direction. However, the hardware address of each device along the path is already known, so each device only needs to look in its ARP cache to determine the hardware (MAC) address of each interface. And that just about covers our routing analysis. If you found it confusing, take a break and come back later on and give it another shot. It's really simple once you grasp the concept of routing. Back to the Routing Section
Thirteenth Amendment to the United States Constitution
The Thirteenth Amendment (Amendment XIII) to the United States Constitution abolished slavery and involuntary servitude, except as punishment for a crime. In Congress, it was passed by the Senate on April 8, 1864, and by the House on January 31, 1865. The amendment was ratified by the required number of states on December 6, 1865. On December 18, 1865, Secretary of State William H. Seward proclaimed its adoption. It was the first of the three Reconstruction Amendments adopted following the American Civil War. Slavery had been tacitly enshrined in the original Constitution through provisions such as Article I, Section 2, Clause 3, commonly known as the Three-Fifths Compromise, which detailed how each state's total slave population would be factored into its total population count for the purposes of apportioning seats in the United States House of Representatives and direct taxes among the states. Though many slaves had been declared free by President Abraham Lincoln's 1863 Emancipation Proclamation, their post-war status was uncertain. On April 8, 1864, the Senate passed an amendment to abolish slavery. After one unsuccessful vote and extensive legislative maneuvering by the Lincoln administration, the House followed suit on January 31, 1865. The measure was swiftly ratified by nearly all Northern states, along with a sufficient number of border and "reconstructed" Southern states, to cause it to be adopted before the end of the year. Though the amendment formally abolished slavery throughout the United States, factors such as Black Codes, white supremacist violence, and selective enforcement of statutes continued to subject some black Americans to involuntary labor, particularly in the South.
In contrast to the other Reconstruction Amendments, the Thirteenth Amendment was rarely cited in later case law, but has been used to strike down peonage and some race-based discrimination as "badges and incidents of slavery". The Thirteenth Amendment applies to the actions of private citizens, while the Fourteenth and Fifteenth Amendments apply only to state actors. The amendment also enables Congress to pass laws against sex trafficking and other modern forms of slavery. Section 1. Neither slavery nor involuntary servitude, except as a punishment for crime whereof the party shall have been duly convicted, shall exist within the United States, or any place subject to their jurisdiction. Section 2. Congress shall have power to enforce this article by appropriate legislation. Slavery in the United States The institution of slavery existed in all of the original thirteen British North American colonies. Prior to the Thirteenth Amendment, the United States Constitution (adopted in 1789) did not expressly use the words slave or slavery but included several provisions about unfree persons. The Three-Fifths Clause (in Article I, Section 2) allocated Congressional representation based "on the whole Number of free Persons" and "three fifths of all other Persons". This clause was a compromise between Southerners, who wished slaves to be counted as 'persons' for congressional representation, and Northerners, who rejected this out of concern that it would give the South too much power, because representation in the new Congress would be based on population in contrast to the one-vote-for-one-state principle in the earlier Continental Congress.
Under the Fugitive Slave Clause (Article IV, Section 2), "No person held to Service or Labour in one State" would be freed by escaping to another. Article I, Section 9 allowed Congress to pass legislation outlawing the "Importation of Persons", but not until 1808. However, for purposes of the Fifth Amendment—which states that, "No person shall... be deprived of life, liberty, or property, without due process of law"—slaves were understood as property. Although abolitionists used the Fifth Amendment to argue against slavery, it became part of the legal basis for treating slaves as property with Dred Scott v. Sandford (1857). Stimulated by the philosophy of the Declaration of Independence, every Northern state provided for the immediate or gradual abolition of slavery between 1777 and 1804. Most of the slaves involved were household servants. No Southern state did so, and the slave population of the South continued to grow, peaking at almost 4 million people in 1861. An abolitionist movement headed by such figures as William Lloyd Garrison grew in strength in the North, calling for the end of slavery nationwide and exacerbating tensions between North and South. The American Colonization Society, an alliance between abolitionists who felt the races should be kept separated and slaveholders who feared the presence of freed blacks would encourage slave rebellions, called for the emigration and colonization of both free blacks and slaves to Africa. Its views were endorsed by politicians such as Henry Clay, who feared that the main abolitionist movement would provoke a civil war. Proposals to eliminate slavery by constitutional amendment were introduced by Representative Arthur Livermore in 1818 and by John Quincy Adams in 1839, but failed to gain significant traction. As the country continued to expand, the issue of slavery in its new territories became the dominant national issue.
The Southern position was that slaves were property and therefore could be moved to the territories like all other forms of property. The 1820 Missouri Compromise provided for the admission of Missouri as a slave state and Maine as a free state, preserving the Senate's equality between the regions. In 1846, the Wilmot Proviso was introduced to a war appropriations bill to ban slavery in all territories acquired in the Mexican–American War; the Proviso repeatedly passed the House, but not the Senate. The Compromise of 1850 temporarily defused the issue by admitting California as a free state, instituting a stronger Fugitive Slave Act, banning the slave trade in Washington, D.C., and allowing New Mexico and Utah self-determination on the slavery issue. Despite the compromise, tensions between North and South continued to rise over the subsequent decade, inflamed by, amongst other things, the publication of the 1852 anti-slavery novel Uncle Tom's Cabin; fighting between pro-slave and abolitionist forces in Kansas, beginning in 1854; the 1857 Dred Scott decision, which struck down provisions of the Compromise of 1850; abolitionist John Brown's 1859 attempt to start a slave revolt at Harpers Ferry and the 1860 election of slavery critic Abraham Lincoln to the presidency. The Southern states seceded from the Union in the months following Lincoln's election, forming the Confederate States of America, and beginning the American Civil War. Proposal and ratification Crafting the amendment Acting under presidential war powers, Lincoln issued the Emancipation Proclamation on January 1, 1863, which proclaimed the freedom of slaves in the ten states that were still in rebellion. However, it did not affect the status of slaves in the border states that had remained loyal to the Union. 
That December, Lincoln again used his war powers and issued a "Proclamation for Amnesty and Reconstruction", which offered Southern states a chance to peacefully rejoin the Union if they abolished slavery and collected loyalty oaths from 10% of their voting population. Southern states did not readily accept the deal, and the status of slavery remained uncertain. In the final years of the Civil War, Union lawmakers debated various proposals for Reconstruction. Some of these called for a constitutional amendment to abolish slavery nationally and permanently. On December 14, 1863, a bill proposing such an amendment was introduced by Representative James Mitchell Ashley. Representative James F. Wilson soon followed with a similar proposal. On January 11, 1864, Senator John B. Henderson of Missouri submitted a joint resolution for a constitutional amendment abolishing slavery. The Senate Judiciary Committee, chaired by Lyman Trumbull, became involved in merging different proposals for an amendment. Radical Republicans led by Senator Charles Sumner and Representative Thaddeus Stevens sought a more expansive version of the amendment. On February 8, 1864, Sumner submitted a constitutional amendment stating: All persons are equal before the law, so that no person can hold another as a slave; and the Congress shall have power to make all laws necessary and proper to carry this declaration into effect everywhere in the United States. Sumner tried to promote his own more expansive wording by circumventing the Trumbull-controlled Judiciary Committee, but failed. On February 10, the Senate Judiciary Committee presented the Senate with an amendment proposal based on drafts of Ashley, Wilson and Henderson. 
The Committee's version used text from the Northwest Ordinance of 1787, which stipulates, "There shall be neither slavery nor involuntary servitude in the said territory, otherwise than in the punishment of crimes whereof the party shall have been duly convicted." Though using Henderson's proposed amendment as the basis for its new draft, the Judiciary Committee removed language that would have allowed a constitutional amendment to be adopted with only a majority vote in each House of Congress and ratification by two-thirds of the states (instead of two-thirds and three-fourths, respectively). Passage by Congress The Senate passed the amendment on April 8, 1864, by a vote of 38 to 6; two Democrats, Reverdy Johnson of Maryland and James Nesmith of Oregon, voted "aye." However, just over two months later on June 15, the House failed to do so, with 93 in favor and 65 against, thirteen votes short of the two-thirds vote needed for passage; the vote split largely along party lines, with Republicans supporting and Democrats opposing. In the 1864 presidential race, former Free Soil Party candidate John C. Frémont threatened a third-party run opposing Lincoln, this time on a platform endorsing an anti-slavery amendment. The Republican Party platform had, as yet, failed to include a similar plank, though Lincoln endorsed the amendment in a letter accepting his nomination. Frémont withdrew from the race on September 22, 1864 and endorsed Lincoln. With no Southern states represented, few members of Congress pushed moral and religious arguments in favor of slavery. Democrats who opposed the amendment generally made arguments based on federalism and states' rights. Some argued that the proposed change so violated the spirit of the Constitution that it would not be a valid "amendment" but would instead constitute "revolution". Some opponents warned that the amendment would lead to full citizenship for blacks.
Republicans portrayed slavery as uncivilized and argued for abolition as a necessary step in national progress. Amendment supporters also argued that the slave system had negative effects on white people. These included the lower wages resulting from competition with forced labor, as well as repression of abolitionist whites in the South. Advocates said ending slavery would restore the First Amendment and other constitutional rights violated by censorship and intimidation in slave states. White Northern Republicans, and some Democrats, became excited about an abolition amendment, holding meetings and issuing resolutions. Many blacks, particularly in the South, focused more on landownership and education as the key to liberation. As slavery began to seem politically untenable, an array of Northern Democrats successively announced their support for the amendment, including Representative James Brooks, Senator Reverdy Johnson, and Tammany Hall, a powerful New York political machine. President Lincoln had had concerns that the Emancipation Proclamation of 1863 might be reversed or found invalid after the war. He saw constitutional amendment as a more permanent solution. He had remained outwardly neutral on the amendment because he considered it politically too dangerous. Nonetheless, Lincoln's 1864 party platform resolved to abolish slavery by constitutional amendment. After winning the election of 1864, Lincoln made the passage of the Thirteenth Amendment his top legislative priority, beginning his efforts while the "lame duck" session was still in office. Popular support for the amendment mounted, and Lincoln urged Congress on in his December 6 State of the Union speech: "there is only a question of time as to when the proposed amendment will go to the States for their action. And as it is to so go, at all events, may we not agree that the sooner the better?" Lincoln instructed Secretary of State William H. Seward, Representative John B.
Alley and others to procure votes by any means necessary, and they promised government posts and campaign contributions to outgoing Democrats willing to switch sides. Seward had a large fund for direct bribes. Ashley, who reintroduced the measure into the House, also lobbied several Democrats to vote in favor of the measure. Representative Thaddeus Stevens commented later that "the greatest measure of the nineteenth century was passed by corruption, aided and abetted by the purest man in America"; however, Lincoln's precise role in making deals for votes remains unknown. Republicans in Congress claimed a mandate for abolition, having gained in the elections for Senate and House. The 1864 Democratic vice-presidential nominee, Representative George H. Pendleton, led opposition to the measure. Republicans toned down their language of radical equality in order to broaden the amendment's coalition of supporters. In order to reassure critics worried that the amendment would tear apart the social fabric, some Republicans explicitly promised that the amendment would leave patriarchy intact. In mid-January, 1865, Speaker of the House Schuyler Colfax estimated the amendment to be five votes short of passage. Ashley postponed the vote. At this point, Lincoln intensified his push for the amendment, making direct emotional appeals to particular members of Congress. On January 31, 1865, the House called another vote on the amendment, with neither side being certain of the outcome. Every Republican supported the measure, as well as 16 Democrats, almost all of them lame ducks. The amendment finally passed by a vote of 119 to 56, narrowly reaching the required two-thirds majority. The House exploded into celebration, with some members openly weeping. Black onlookers, who had only been allowed to attend Congressional sessions since the previous year, cheered from the galleries. 
While, under the Constitution, the President plays no formal role in the amendment process, the joint resolution was sent to Lincoln for his signature. Under the usual signatures of the Speaker of the House and the President of the Senate, President Lincoln wrote the word "Approved" and added his signature to the joint resolution on February 1, 1865. On February 7, Congress passed a resolution affirming that the Presidential signature was unnecessary. The Thirteenth Amendment is the only ratified amendment signed by a President, although James Buchanan had signed the Corwin Amendment that the 36th Congress had adopted and sent to the states in March 1861. Ratification by the states When the Thirteenth Amendment was submitted to the states on February 1, 1865, it was quickly taken up by several legislatures. By the end of the month it had been ratified by eighteen states. Among them were the ex-Confederate states of Virginia and Louisiana, where ratifications were submitted by Reconstruction governments. These, along with subsequent ratifications from Arkansas and Tennessee, raised the issues of how many seceded states had legally valid legislatures; and if there were fewer legislatures than states, whether Article V required ratification by three-fourths of the states or three-fourths of the legally valid state legislatures. President Lincoln in his last speech, on April 11, 1865, called the question about whether the Southern states were in or out of the Union a "pernicious abstraction." Obviously, he declared, they were not "in their proper practical relation with the Union"; whence everyone's object should be to restore that relation. Lincoln was assassinated three days later. With Congress out of session, the new President, Andrew Johnson, began a period known as "Presidential Reconstruction", in which he personally oversaw the creation of new state governments throughout the South.
He oversaw the convening of state political conventions populated by delegates whom he deemed to be loyal. Three leading issues came before the convention: secession itself, the abolition of slavery, and the Confederate war debt. Alabama, Florida, Georgia, Mississippi, North Carolina, and South Carolina held conventions in 1865, while Texas' convention did not organize until March 1866. Johnson hoped to prevent deliberation over whether to re-admit the Southern states by accomplishing full ratification before Congress reconvened in December. He believed he could silence those who wished to deny the Southern states their place in the Union by pointing to how essential their assent had been to the successful ratification of the Thirteenth Amendment. Direct negotiations between state governments and the Johnson administration ensued. As the summer wore on, administration officials began including assurances of the measure's limited scope with their demands for ratification. Johnson himself suggested directly to the governors of Mississippi and North Carolina that they could proactively control the allocation of rights to freedmen. Though Johnson obviously expected the freed people to enjoy at least some civil rights, including, as he specified, the right to testify in court, he wanted state lawmakers to know that the power to confer such rights would remain with the states. When South Carolina provisional governor Benjamin Franklin Perry objected to the scope of the amendment's enforcement clause, Secretary of State Seward responded by telegraph that in fact the second clause "is really restraining in its effect, instead of enlarging the powers of Congress". White politicians throughout the South were concerned that Congress might cite the amendment's enforcement powers as a way to authorize black suffrage. 
When South Carolina ratified the amendment in November 1865, it issued its own interpretive declaration that "any attempt by Congress toward legislating upon the political status of former slaves, or their civil relations, would be contrary to the Constitution of the United States". Alabama and Louisiana also declared that their ratification did not imply federal power to legislate on the status of former slaves. During the first week of December, North Carolina and Georgia gave the amendment the final votes needed for it to become part of the Constitution. The Thirteenth Amendment became part of the Constitution on December 6, 1865, based on the following ratifications: - Illinois — February 1, 1865 - Rhode Island — February 2, 1865 - Michigan — February 3, 1865 - Maryland — February 3, 1865 - New York — February 3, 1865 - Pennsylvania — February 3, 1865 - West Virginia — February 3, 1865 - Missouri — February 6, 1865 - Maine — February 7, 1865 - Kansas — February 7, 1865 - Massachusetts — February 7, 1865 - Virginia — February 9, 1865 - Ohio — February 10, 1865 - Indiana — February 13, 1865 - Nevada — February 16, 1865 - Louisiana — February 17, 1865 - Minnesota — February 23, 1865 - Wisconsin — February 24, 1865 - Vermont — March 8, 1865 - Tennessee — April 7, 1865 - Arkansas — April 14, 1865 - Connecticut — May 4, 1865 - New Hampshire — July 1, 1865 - South Carolina — November 13, 1865 - Alabama — December 2, 1865 - North Carolina — December 4, 1865 - Georgia — December 6, 1865 Having been ratified by the legislatures of three-fourths of the several states—27 of the 36 states (including those that had been in rebellion), Secretary of State Seward, on December 18, 1865, certified that the Thirteenth Amendment had become valid, to all intents and purposes, as a part of the Constitution. Included on the enrolled list of ratifying states were the three ex-confederate states that had given their assent, but with strings attached.
Seward accepted their affirmative votes and brushed aside their interpretive declarations without comment, challenge or acknowledgment. The Thirteenth Amendment was subsequently ratified by:
- Oregon — December 8, 1865
- California — December 19, 1865
- Florida — December 28, 1865 (Reaffirmed – June 9, 1869)
- Iowa — January 15, 1866
- New Jersey — January 23, 1866 (After rejection – March 16, 1865)
- Texas — February 18, 1870
- Delaware — February 12, 1901 (After rejection – February 8, 1865)
- Kentucky — March 18, 1976 (After rejection – February 24, 1865)
- Mississippi — March 16, 1995; Certified – February 7, 2013 (After rejection – December 5, 1865)
The impact of the abolition of slavery was felt quickly. When the Thirteenth Amendment became operational, the scope of Lincoln's 1863 Emancipation Proclamation was widened to include the entire nation. Although the majority of Kentucky's slaves had been emancipated, 65,000–100,000 people remained to be legally freed when the amendment went into effect on December 18. In Delaware, where a large number of slaves had escaped during the war, nine hundred people became legally free. In addition to abolishing slavery and prohibiting involuntary servitude, except as a punishment for crime, the Thirteenth Amendment also nullified the Fugitive Slave Clause and the Three-Fifths Compromise. The Three-Fifths Compromise was the constitutional provision under which a state's population, for purposes of apportioning seats in the House of Representatives and taxes among the states, included all "free persons" and three-fifths of "other persons" (i.e., slaves), while excluding untaxed Native Americans.
This compromise had the effect of increasing the political power of slave-holding states by increasing their share of seats in the House of Representatives, and consequently their share in the Electoral College (where a state's influence over the election of the President is tied to the size of its congressional delegation). Even as the Thirteenth Amendment was working its way through the ratification process, Republicans in Congress grew increasingly concerned about the potential for a large increase in the congressional representation of the Democratic-dominated Southern states. Because the full population of freed slaves would be counted rather than three-fifths, the Southern states would dramatically increase their power in the population-based House of Representatives. Republicans hoped to offset this advantage by attracting and protecting the votes of the newly enfranchised black population.

Political and economic change in the South

Southern culture remained deeply racist, and those blacks who remained faced a dangerous situation. J. J. Gries reported to the Joint Committee on Reconstruction: "There is a kind of innate feeling, a lingering hope among many in the South that slavery will be regalvanized in some shape or other. They tried by their laws to make a worse slavery than there was before, for the freedman has not the protection which the master from interest gave him before." W. E. B. Du Bois wrote in 1935: Slavery was not abolished even after the Thirteenth Amendment. There were four million freedmen and most of them on the same plantation, doing the same work that they did before emancipation, except as their work had been interrupted and changed by the upheaval of war. Moreover, they were getting about the same wages and apparently were going to be subject to slave codes modified only in name. There were among them thousands of fugitives in the camps of the soldiers or on the streets of the cities, homeless, sick, and impoverished.
They had been freed practically with no land nor money, and, save in exceptional cases, without legal status, and without protection. Official emancipation did not substantially alter the economic situation of most blacks who remained in the South. As the amendment still permitted labor as punishment for convicted criminals, Southern states responded with what historian Douglas A. Blackmon called "an array of interlocking laws essentially intended to criminalize black life". These laws, passed or updated after emancipation, were known as Black Codes. Mississippi was the first state to pass such codes, with an 1865 law titled "An Act to confer Civil Rights on Freedmen". The Mississippi law required black workers to contract with white farmers by January 1 of each year or face punishment for vagrancy. Blacks could be sentenced to forced labor for crimes including petty theft, using obscene language, or selling cotton after sunset. States passed new, strict vagrancy laws that were selectively enforced against blacks without white protectors. The labor of these convicts was then sold to farms, factories, lumber camps, quarries, and mines. After its ratification of the Thirteenth Amendment in November 1865, the South Carolina legislature immediately began to legislate Black Codes. The Black Codes created a separate set of laws, punishments, and acceptable behaviors for anyone with more than one black great-grandparent. Under these codes, blacks could work only as farmers or servants and had few constitutional rights. Restrictions on black land ownership threatened to make economic subservience permanent. Some states mandated indefinitely long periods of child "apprenticeship". Some laws did not target blacks specifically, but instead affected farm workers, most of whom were black. At the same time, many states passed laws to actively prevent blacks from acquiring property.
Southern business owners sought to reproduce the profitable arrangement of slavery with a system called peonage, in which (disproportionately black) workers were entrapped by loans and compelled to work indefinitely because of their debt. Peonage continued well through Reconstruction and ensnared a large proportion of black workers in the South. These workers remained destitute and persecuted, forced to work dangerous jobs and further confined legally by the racist Jim Crow laws that governed the South. Peonage differed from chattel slavery because it was not strictly hereditary and did not allow the sale of people in exactly the same fashion. However, a person's debt—and by extension a person—could still be sold, and the system resembled antebellum slavery in many ways.

Congressional and executive enforcement

As its first enforcement legislation, Congress passed the Civil Rights Act of 1866, which guaranteed black Americans citizenship and equal protection of the law, though not the right to vote. The amendment was also used as authorization for several Freedmen's Bureau bills. President Andrew Johnson vetoed these bills, but a congressional supermajority overrode his veto to pass the Civil Rights Act and the Second Freedmen's Bureau Bill. Proponents of the Act, including Senators Lyman Trumbull and Henry Wilson, argued that Section 2 of the Thirteenth Amendment (the enforcement power) authorized the federal government to legislate civil rights for the states. Others disagreed, maintaining that conditions of inequality were distinct from slavery. Seeking more substantial justification, and fearing that future opponents would again seek to overturn the legislation, Congress and the states added additional protections to the Constitution: the Fourteenth Amendment (1868), which defined citizenship and mandated equal protection under the law, and the Fifteenth Amendment (1870), which banned racial voting restrictions.
The Freedmen's Bureau enforced the amendment locally, providing a degree of support for people subject to the Black Codes. (Reciprocally, the Thirteenth Amendment established the Bureau's legal basis to operate in Kentucky.) The Civil Rights Act circumvented racism in local jurisdictions by allowing blacks access to the federal courts. The Enforcement Acts of 1870–1871 and the Civil Rights Act of 1875, in combating the violence and intimidation of white supremacy, were also part of the effort to end slave conditions for Southern blacks. However, the effect of these laws waned as political will diminished and the federal government lost authority in the South, particularly after the Compromise of 1877 ended Reconstruction in exchange for a Republican presidency. With the Peonage Act of 1867, Congress had abolished "the holding of any person to service or labor under the system known as peonage", specifically banning "the voluntary or involuntary service or labor of any persons as peons, in liquidation of any debt or obligation, or otherwise." In 1939, the Department of Justice created the Civil Rights Section, which focused primarily on First Amendment and labor rights. The increasing scrutiny of totalitarianism in the lead-up to World War II brought increased attention to issues of slavery and involuntary servitude, abroad and at home. The U.S. sought to counter foreign propaganda and increase its credibility on the race issue by combating the Southern peonage system. Under the leadership of Attorney General Francis Biddle, the Civil Rights Section invoked the constitutional amendments and legislation of the Reconstruction Era as the basis for its actions. In 1947, the DOJ successfully prosecuted Elizabeth Ingalls for keeping domestic servant Dora L. Jones in conditions of slavery.
The court found that Jones "was a person wholly subject to the will of defendant; that she was one who had no freedom of action and whose person and services were wholly under the control of defendant and who was in a state of enforced compulsory service to the defendant." The Thirteenth Amendment enjoyed a swell of attention during this period, but from Brown v. Board of Education (1954) until Jones v. Alfred H. Mayer Co. (1968) it was again eclipsed by the Fourteenth Amendment. Victims of human trafficking and other conditions of forced labor are commonly coerced by threat of legal actions to their detriment. Victims of forced labor and trafficking are protected by Title 18 of the U.S. Code.
- Title 18, U.S.C., Section 241 – Conspiracy Against Rights: Conspiracy to injure, oppress, threaten, or intimidate any person's rights or privileges secured by the Constitution or the laws of the United States
- Title 18, U.S.C., Section 242 – Deprivation of Rights Under Color of Law: It is a crime for any person acting under color of law (federal, state or local officials who enforce statutes, ordinances, regulations, or customs) to willfully deprive or cause to be deprived the rights, privileges, or immunities of any person secured or protected by the Constitution and laws of the U.S. This includes willfully subjecting or causing to be subjected any person to different punishments, pains, or penalties than those prescribed for punishment of citizens, on account of such person being an alien or by reason of his/her color or race.

Department of Justice definitions
- Peonage – Refers to a person in "debt servitude," or involuntary servitude tied to the payment of a debt. Compulsion to servitude includes the use of force, the threat of force, or the threat of legal coercion to compel a person to work.
- Involuntary servitude – Refers to a person held by actual force, threats of force, or threats of legal coercion in a condition of slavery – compulsory service or labor against his or her will.
This includes the condition in which people are compelled to work by a "climate of fear" evoked by the use of force, the threat of force, or the threat of legal coercion (i.e., suffering legal consequences unless compliant with demands made upon them) which is sufficient to compel service. In Bailey v. Alabama (1911), the U.S. Supreme Court ruled that peonage laws violated the amendment's ban on involuntary servitude.
- Requiring specific performance as a remedy for breach of personal services contracts has been viewed as a form of involuntary servitude by some scholars and courts, though other jurisdictions and scholars have rejected this argument; it is a popular rule in academia and many local jurisdictions, but has never been upheld by higher courts.
- Forced labor – Labor or service obtained by:
  - threats of serious harm or physical restraint;
  - any scheme, plan, or pattern intended to cause a person to believe he would suffer serious harm or physical restraint if he did not perform such labor or services;
  - the abuse or threatened abuse of law or the legal process.
In contrast to the other "Reconstruction Amendments", the Thirteenth Amendment was rarely cited in later case law. As historian Amy Dru Stanley summarizes, "beyond a handful of landmark rulings striking down debt peonage, flagrant involuntary servitude, and some instances of race-based violence and discrimination, the Thirteenth Amendment has never been a potent source of rights claims".

Black slaves and their descendants

U.S. v. Rhodes (1866), one of the first Thirteenth Amendment cases, tested the constitutionality of provisions in the Civil Rights Act of 1866 that granted blacks redress in the federal courts. Kentucky law prohibited blacks from testifying against whites—an arrangement which compromised the ability of Nancy Talbot ("a citizen of the United States of the African race") to reach justice against a white person accused of robbing her.
After Talbot attempted to try the case in federal court, the Kentucky Supreme Court ruled this federal option unconstitutional. Noah Swayne (a Supreme Court justice sitting on the Kentucky Circuit Court) overturned the Kentucky decision, holding that without the material enforcement provided by the Civil Rights Act, slavery would not truly be abolished. In In re Turner (1867), Chief Justice Salmon P. Chase ordered freedom for Elizabeth Turner, a former slave in Maryland who became indentured to her former master. In Blyew v. U.S. (1872), the Supreme Court heard another Civil Rights Act case relating to federal courts in Kentucky. John Blyew and George Kennard were white men visiting the cabin of a black family, the Fosters. Blyew apparently became angry with sixteen-year-old Richard Foster and hit him twice in the head with an ax. Blyew and Kennard killed Richard's parents, Sallie and Jack Foster, and his blind grandmother, Lucy Armstrong. They severely wounded the Fosters' two young daughters. Kentucky courts would not allow the Foster children to testify against Blyew and Kennard. But federal courts, authorized by the Civil Rights Act, found Blyew and Kennard guilty of murder. When the Supreme Court took the case, it ruled (5–2) that the Foster children did not have standing in federal courts because only living people could take advantage of the Act. In doing so, the Court effectively ruled that the Thirteenth Amendment did not permit a federal remedy in murder cases. Swayne and Joseph P. Bradley dissented, maintaining that in order to have meaningful effects, the Thirteenth Amendment would have to address systemic racial oppression. Though based on a technicality, the Blyew case set a precedent in state and federal courts that led to the erosion of Congress's Thirteenth Amendment powers. The Supreme Court continued along this path in the Slaughter-House Cases (1873), which upheld a state-sanctioned monopoly of white butchers. In United States v.
Cruikshank (1876), the Court ignored Thirteenth Amendment dicta from a circuit court decision to exonerate perpetrators of the Colfax massacre and invalidate the Enforcement Act of 1870. The Thirteenth Amendment is not solely a ban on chattel slavery; it also covers a much broader array of labor arrangements and social deprivations. As the U.S. Supreme Court explicated in the Slaughter-House Cases (1873) with respect to the Fourteenth and Fifteenth Amendments, and to the Thirteenth Amendment in particular: Undoubtedly while negro slavery alone was in the mind of the Congress which proposed the thirteenth article, it forbids any other kind of slavery, now or hereafter. If Mexican peonage or the Chinese coolie labor system shall develop slavery of the Mexican or Chinese race within our territory, this amendment may safely be trusted to make it void. And so if other rights are assailed by the States which properly and necessarily fall within the protection of these articles, that protection will apply, though the party interested may not be of African descent. But what we do say, and what we wish to be understood is, that in any fair and just construction of any section or phrase of these amendments, it is necessary to look to the purpose which we have said was the pervading spirit of them all, the evil which they were designed to remedy, and the process of continued addition to the Constitution, until that purpose was supposed to be accomplished, as far as constitutional law can accomplish it. In the Civil Rights Cases (1883), the Supreme Court reviewed five consolidated cases dealing with the Civil Rights Act of 1875, which outlawed racial discrimination at "inns, public conveyances on land or water, theaters, and other places of public amusement". The Court ruled that the Thirteenth Amendment did not ban most forms of racial discrimination by non-government actors.
In the majority decision, Bradley wrote (again in non-binding dicta) that the Thirteenth Amendment empowered Congress to attack "badges and incidents of slavery". However, he distinguished between "fundamental rights" of citizenship, protected by the Thirteenth Amendment, and the "social rights of men and races in the community". The majority opinion held that "it would be running the slavery argument into the ground to make it apply to every act of discrimination which a person may see fit to make as to guests he will entertain, or as to the people he will take into his coach or cab or car; or admit to his concert or theatre, or deal with in other matters of intercourse or business." In his solitary dissent, John Marshall Harlan (a Kentucky lawyer who changed his mind about civil rights law after witnessing organized racist violence) argued that "such discrimination practiced by corporations and individuals in the exercise of their public or quasi-public functions is a badge of servitude, the imposition of which congress may prevent under its power." The Court in the Civil Rights Cases also held that appropriate legislation under the amendment could go beyond nullifying state laws establishing or upholding slavery, because the amendment "has a reflex character also, establishing and decreeing universal civil and political freedom throughout the United States" and thus Congress was empowered "to pass all laws necessary and proper for abolishing all badges and incidents of slavery in the United States." The Court stated about the scope of the amendment: This amendment, as well as the Fourteenth, is undoubtedly self-executing, without any ancillary legislation, so far as its terms are applicable to any existing state of circumstances. By its own unaided force and effect, it abolished slavery and established universal freedom.
Still, legislation may be necessary and proper to meet all the various cases and circumstances to be affected by it, and to prescribe proper modes of redress for its violation in letter or spirit. And such legislation may be primary and direct in its character, for the amendment is not a mere prohibition of State laws establishing or upholding slavery, but an absolute declaration that slavery or involuntary servitude shall not exist in any part of the United States. Attorneys in Plessy v. Ferguson (1896) argued that racial segregation involved "observances of a servile character coincident with the incidents of slavery", in violation of the Thirteenth Amendment. In their brief to the Supreme Court, Plessy's lawyers wrote that "distinction of race and caste" was inherently unconstitutional. The Supreme Court rejected this reasoning and upheld state laws enforcing segregation under the "separate but equal" doctrine. In the 7–1 majority decision, the Court found that "a statute which implies merely a legal distinction between the white and colored races—a distinction which is founded on the color of the two races and which must always exist so long as white men are distinguished from the other race by color—has no tendency to destroy the legal equality of the two races, or reestablish a state of involuntary servitude." Harlan dissented, writing: "The thin disguise of 'equal' accommodations for passengers in railroad coaches will not mislead any one, nor atone for the wrong this day done." In Hodges v. United States (1906), the Court struck down a federal statute providing for the punishment of two or more people who "conspire to injure, oppress, threaten or intimidate any citizen in the free exercise or enjoyment of any right or privilege secured to him by the Constitution or laws of the United States".
A group of white men in Arkansas conspired to violently prevent eight black workers from performing their jobs at a lumber mill; the group was indicted by a federal grand jury and convicted. The Supreme Court ruled that the federal statute, which outlawed conspiracies to deprive citizens of their liberty, was not authorized by the Thirteenth Amendment. It held that "no mere personal assault or trespass or appropriation operates to reduce the individual to a condition of slavery". Harlan dissented, maintaining his opinion that the Thirteenth Amendment should protect freedom beyond "physical restraint". Corrigan v. Buckley (1926) reaffirmed the interpretation from Hodges, finding that the amendment does not apply to restrictive covenants. Enforcement of federal civil rights law in the South created numerous peonage cases, which slowly traveled up through the judiciary. The Supreme Court ruled in Clyatt v. United States (1905) that peonage was involuntary servitude. It held that although employers sometimes described their workers' entry into contract as voluntary, the servitude of peonage was always (by definition) involuntary. In Bailey v. Alabama, the U.S. Supreme Court reaffirmed its holding that the Thirteenth Amendment is not solely a ban on chattel slavery but also covers a much broader array of labor arrangements and social deprivations. The Court also ruled on Congress's enforcement power under the Thirteenth Amendment, saying: The plain intention [of the amendment] was to abolish slavery of whatever name and form and all its badges and incidents; to render impossible any state of bondage; to make labor free, by prohibiting that control by which the personal service of one man is disposed of or coerced for another's benefit, which is the essence of involuntary servitude.
While the Amendment was self-executing, so far as its terms were applicable to any existing condition, Congress was authorized to secure its complete enforcement by appropriate legislation.

Jones and beyond

Legal histories cite Jones v. Alfred H. Mayer Co. (1968) as a turning point in Thirteenth Amendment jurisprudence. The Supreme Court confirmed in Jones that Congress may act "rationally" to prevent private actors from imposing "badges and incidents of servitude". The Joneses were a black couple in St. Louis County, Missouri who sued a real estate company for refusing to sell them a house. The Court held: Congress has the power under the Thirteenth Amendment rationally to determine what are the badges and the incidents of slavery, and the authority to translate that determination into effective legislation. [...] this Court recognized long ago that, whatever else they may have encompassed, the badges and incidents of slavery -- its "burdens and disabilities" -- included restraints upon "those fundamental rights which are the essence of civil freedom, namely, the same right . . . to inherit, purchase, lease, sell and convey property, as is enjoyed by white citizens." Civil Rights Cases, 109 U. S. 3, 109 U. S. 22. Just as the Black Codes, enacted after the Civil War to restrict the free exercise of those rights, were substitutes for the slave system, so the exclusion of Negroes from white communities became a substitute for the Black Codes. And when racial discrimination herds men into ghettos and makes their ability to buy property turn on the color of their skin, then it too is a relic of slavery. Negro citizens, North and South, who saw in the Thirteenth Amendment a promise of freedom—freedom to "go and come at pleasure" and to "buy and sell when they please"—would be left with "a mere paper guarantee" if Congress were powerless to assure that a dollar in the hands of a Negro will purchase the same thing as a dollar in the hands of a white man.
At the very least, the freedom that Congress is empowered to secure under the Thirteenth Amendment includes the freedom to buy whatever a white man can buy, the right to live wherever a white man can live. If Congress cannot say that being a free man means at least this much, then the Thirteenth Amendment made a promise the Nation cannot keep. The Court in Jones reopened the issue of linking racism in contemporary society to the history of slavery in the United States. The Jones precedent has been used to justify congressional action to protect migrant workers and target sex trafficking. The direct enforcement power found in the Thirteenth Amendment contrasts with that of the Fourteenth, which allows only responses to institutional discrimination by state actors.

Other cases of involuntary servitude

The Supreme Court has taken an especially narrow view of involuntary servitude claims made by people not descended from black (African) slaves. In Robertson v. Baldwin (1897), a group of merchant seamen challenged federal statutes which criminalized a seaman's failure to complete his contractual term of service. The Court ruled that seamen's contracts had been considered unique from time immemorial, and that "the amendment was not intended to introduce any novel doctrine with respect to certain descriptions of service which have always been treated as exceptional." In this case, as in numerous "badges and incidents" cases, Justice Harlan authored a dissent favoring broader Thirteenth Amendment protections. In the Selective Draft Law Cases (1918), the Supreme Court ruled that the military draft was not "involuntary servitude". In United States v. Kozminski (1988), the Supreme Court ruled that the Thirteenth Amendment did not prohibit compulsion of servitude through psychological coercion.
Kozminski defined involuntary servitude for purposes of criminal prosecution as "a condition of servitude in which the victim is forced to work for the defendant by the use or threat of physical restraint or physical injury or by the use or threat of coercion through law or the legal process. This definition encompasses cases in which the defendant holds the victim in servitude by placing him or her in fear of such physical restraint or injury or legal coercion." The U.S. Courts of Appeals, in Immediato v. Rye Neck School District, Herndon v. Chapel Hill, and Steirer v. Bethlehem School District, have ruled that the use of community service as a high school graduation requirement did not violate the Thirteenth Amendment.

Prior proposed Thirteenth Amendments

During the six decades following the 1804 ratification of the Twelfth Amendment, two proposals to amend the Constitution were adopted by Congress and sent to the states for ratification. Neither has been ratified by the number of states necessary to become part of the Constitution. Commonly known as the Titles of Nobility Amendment and the Corwin Amendment, both are referred to as Article Thirteen, as was the successful Thirteenth Amendment, in the joint resolution passed by Congress.
- The Titles of Nobility Amendment (pending before the states since May 1, 1810) would, if ratified, strip citizenship from any United States citizen who accepts a title of nobility or honor from a foreign country without the consent of Congress.
- The Corwin Amendment (pending before the states since March 2, 1861) would, if ratified, shield "domestic institutions" of the states (in 1861 this meant slavery) from the constitutional amendment process and from abolition or interference by Congress.
- Crittenden Compromise
- National Freedom Day
- Slavery Abolition Act 1833
- Slave Trade Acts
- List of amendments to the United States Constitution
- "13th Amendment". Legal Information Institute. Cornell University Law School.
November 20, 2012. Retrieved November 30, 2012.
- Kenneth M. Stampp (1980). The Imperiled Union: Essays on the Background of the Civil War. Oxford University Press. p. 85. ISBN 9780199878529.
- Jean Allain (2012). The Legal Understanding of Slavery: From the Historical to the Contemporary. Oxford University Press. p. 117. ISBN 9780199660469.
- Jean Allain (2012). The Legal Understanding of Slavery: From the Historical to the Contemporary. Oxford University Press. pp. 119–120. ISBN 9780199660469.
- Tsesis, The Thirteenth Amendment and American Freedom (2004), p. 14.
- Foner, 2010, pp. 20–22
- Vile, John R., ed. (2003). "Thirteenth Amendment". Encyclopedia of Constitutional Amendments, Proposed Amendments, and Amending Issues: 1789–2002. ABC-CLIO. pp. 449–52.
- Goodwin, 2005, p. 123
- Foner, 2010, p. 59
- "The Emancipation Proclamation". National Archives and Records Administration. Retrieved 2013-06-27.
- McPherson, 1988, p. 558
- Vorenberg, Final Freedom (2001), p. 47.
- Vorenberg, Final Freedom (2001), pp. 48–51.
- Leonard L. Richards, Who Freed the Slaves?: The Fight over the Thirteenth Amendment (2015).
- "James Ashley". Ohio History Central. Ohio Historical Society.
- Tsesis, The Thirteenth Amendment and American Freedom (2004), pp. 38–42.
- Stanley, "Instead of Waiting for the Thirteenth Amendment" (2010), pp. 741–742.
- Michigan State Historical Society (1901). Historical collections. Michigan Historical Commission. p. 582. Retrieved December 5, 2012.
- Vorenberg, Final Freedom (2001), pp. 52–53. "Sumner made his intentions clearer on February 8, when he introduced his constitutional amendment to the Senate and asked that it be referred to his new committee. So desperate was he to make his amendment the final version that he challenged the well-accepted custom of sending proposed amendments to the Judiciary Committee. His Republican colleagues would hear nothing of it."
- "Congressional Proposals and Senate Passage", Harper's Weekly, The Creation of the 13th Amendment. Retrieved Feb 15, 2007.
- Vorenberg, Final Freedom (2001), p. 53. "It was no coincidence that Trumbull's announcement came only two days after Sumner had proposed his amendment making all persons 'equal before the law.' The Massachusetts senator had spurred the committee into final action."
- "Northwest Ordinance; July 13, 1787". Avalon Project. Lillian Goldman Law Library, Yale Law School. Retrieved February 17, 2014.
- McAward, Jennifer Mason (November 2012). "McCulloch and the Thirteenth Amendment". Columbia Law Review (Columbia Law School) 112 (7): 1769–1809. JSTOR 41708164.
- Vorenberg, Final Freedom (2001), p. 54. "Although it made Henderson's amendment the foundation of the final amendment, the committee rejected an article in Henderson's version that allowed the amendment to be adopted by the approval of only a simple majority in Congress and the ratification of only two-thirds of the states."
- Goodwin, 2005, p. 686
- Goodwin, 2005, pp. 624–25
- Foner, 2010, p. 299
- Goodwin, 2005, p. 639
- Benedict, "Constitutional Politics, Constitutional Law, and the Thirteenth Amendment" (2012), p. 179.
- Benedict, "Constitutional Politics, Constitutional Law, and the Thirteenth Amendment" (2012), pp. 179–180. Benedict quotes Sen. Garrett Davis: "there is a boundary between the power of revolution and the power of amendment, which the latter, as established in our Constitution, cannot pass; and that if the proposed change is revolutionary it would be null and void, notwithstanding it might be formally adopted." The full text of Davis's speech, with comments from others, appears in Great Debates in American History (1918), ed. Marion Mills Miller.
- Colbert, "Liberating the Thirteenth Amendment" (1995), p. 10.
- Benedict, "Constitutional Politics, Constitutional Law, and the Thirteenth Amendment" (2012), p. 182.
- tenBroek, Jacobus (June 1951).
"Thirteenth Amendment to the Constitution of the United States: Consummation to Abolition and Key to the Fourteenth Amendment". California Law Review (California Law Review, Inc. via JSTOR) 39 (2): 180. doi:10.2307/3478033. JSTOR 3478033. It would make it possible for white citizens to exercise their constitutional right under the comity clause to reside in Southern states regardless of their opinions. It would carry out the constitutional declaration "that each citizen of the United States shall have equal privileges in every other state." It would protect citizens in their rights under the First Amendment and comity clause to freedom of speech, freedom of press, freedom of religion and freedom of assemblyPreview. - Vorenberg, Final Freedom (2001), p. 61. - Trelease, White Terror (1971), p. xvii. "Negroes wanted the same freedom that white men enjoyed, with equal prerogatives and opportunities. The educated black minority emphasized civil and political rights more than the masses, who called most of all for land and schools. In an agrarian society, the only kind most of them knew, landownership was associated with freedom, respectability, and the good life. It was almost universally desired by Southern blacks, as it was by landless peasants the world over. Give us our land and we can take care of ourselves, said a group of South Carolina Negroes to a Northern journalist in 1865; without land the old masters can hire us or starve us as they please." - Vorenberg, Final Freedom (2001), p. 73. "The first notable convert was Representative James Brooks of New York, who, on the floor of Congress on February 18, 1864, declared that slavery was dying if not already dead, and that his party should stop defending the institution." - Vorenberg, Final Freedom (2001), p. 74. "The antislavery amendment caught Johnson's eye, however, because it offered an indisputable constitutional solution to the problem of slavery." - Vorenberg, Final Freedom (2001), p. 203. 
- Foner, 2010, pp. 312–14 - Donald, 1996, p. 396 - Vorenberg, Final Freedom (2001), p. 48. "The president worried that an abolition amendment might foul the political waters. The amendments he had recommended in December 1862 had gone nowhere, mainly because they reflected an outdated program of gradual emancipation, which included compensation and colonization. Moreover, Lincoln knew that he did not have to propose amendments because others more devoted to abolition would, especially if he pointed out the vulnerability of existing emancipation legislation. He was also concerned about negative reactions from conservatives, particularly potential new recruits from the Democrats. - Willis, John C. "Republican Party Platform, 1864". University of the South. Retrieved 2013-06-28. Resolved, That as slavery was the cause, and now constitutes the strength of this Rebellion, and as it must be, always and everywhere, hostile to the principles of Republican Government, justice and the National safety demand its utter and complete extirpation from the soil of the Republic; and that, while we uphold and maintain the acts and proclamations by which the Government, in its own defense, has aimed a deathblow at this gigantic evil, we are in favor, furthermore, of such an amendment to the Constitution, to be made by the people in conformity with its provisions, as shall terminate and forever prohibit the existence of Slavery within the limits of the jurisdiction of the United States. - "1864: The Civil War Election". Get Out the Vote. Cornell University. 2004. Retrieved 2013-06-28. Despite internal Party conflicts, Republicans rallied around a platform that supported restoration of the Union and the abolition of slavery. - Goodwin, 2005, pp. 686–87 - Vorenberg, Final Freedom (2001), p. 176–177, 180. - Vorenberg, Final Freedom (2001), p. 178. - Foner, 2010, pp. 312–13 - Goodwin, 2005, p. 687 - Goodwin, 2005, pp. 687–689 - Donald, 1996, p. 554 - Vorenberg, Final Freedom (2001), p. 
187. "But the clearest sign of the people's voice against slavery, argued amendment supporters, was the recent election. Following Lincoln's lead, Republican representatives like Godlove S. Orth of Indiana claimed that the vote represented a 'popular verdict . . . in unmistakable language' in favor of the amendment." - Goodwin, 2005, p. 688 - Vorenberg, Final Freedom (2001), p. 191. "The necessity of keeping support for the amendment broad enough to secure its passage created a strange situation. At the moment that Republicans were promoting new, far-reaching legislation for African Americans, they had to keep this legislation detached from the first constitutional amendment dealing exclusively with African American freedom. Republicans thus gave freedom under the antislavery amendment a vague construction: freedom was something more than the absence of chattel slavery but less than absolute equality." - Vorenberg, Final Freedom (2001), pp. 191–192. "One of the most effective methods used by amendment supporters to convey the measure's conservative character was to proclaim the permanence of patriarchal power within the American family in the face of this or any textual change to the Constitution. In response to Democrats who charged that the antislavery was but the first step in a Republican design to dissolve all of society's foundations, including the hierarchical structure of the family, the Iowa Republican John A. Kasson denied any desire to interfere with 'the rights of a husband to a wife' or 'the right of [a] father to his child." - Vorenberg, Final Freedom (2001), pp. 197–198. - Vorenberg, Final Freedom (2001), p. 198. "It was at this point that the president wheeled into action on behalf of the Amendment […] Now he became more forceful. To one representative whose brother had died in the war, Lincoln said, 'your brother died to save the Republic from death by the slaveholders' rebellion. 
I wish you could see it to be your duty to vote for the Constitutional amendment ending slavery.'" - "TO PASS S.J. RES. 16. (P. 531-2).". GovTrack.us. - Foner, 2010, p. 313 - Foner, 2010, p. 314 - McPherson, 1988, p. 840 - Harrison, "Lawfulness of the Reconstruction Amendments" (2001), p. 389. "For reasons that have never been entirely clear, the amendment was presented to the President pursuant to Article I, Section 7, of the Constitution, and signed. - "Joint Resolution Submitting 13th Amendment to the States; signed by Abraham Lincoln and Congress". The Abraham Lincoln Papers at the Library of Congress: Series 3. General Correspondence. 1837-1897. Library of Congress. - Thorpe, Constitutional History (1901), p. 154. "But many held that the President's signature was not essential to an act of this kind, and, on the fourth of February, Senator Trumbull offered a resolution, which was agreed to three days later, that the approval was not required by the Constitution; 'that it was contrary to the early decision of the Senate and of the Supreme Court; and that the negative of the President applying only to the ordinary cases of legislation, he had nothing to do with propositions to amend the Constitution'." - Thorpe, Constitutional History (1901), p. 154. "The President signed the joint resolution on the first of February. Somewhat curiously the signing has only one precedent, and that was in spirit and purpose the complete antithesis of the present act. President Buchanan had signed the proposed amendment of 1861, which would make slavery national and perpetual." - Lincoln's struggle to get the amendment through Congress, while bringing the war to an end, is portrayed in Lincoln. - Harrison, "Lawfulness of the Reconstruction Amendments" (2001), p. 390. - Samuel Eliot Morison (1965). The Oxford History of the American People. Oxford University Press. p. 710. - Harrison, "Lawfulness of the Reconstruction Amendments" (2001), p. 394–397. - Eric L. McKitrick (1960). 
Andrew Johnson and Reconstruction. U. Chicago Press. p. 178. ISBN 9780195057072. - Clara Mildred Thompson (1915). Reconstruction in Georgia: economic, social, political, 1865-1872. Columbia University Press. p. 156. - Vorenberg, Final Freedom (2001), pp. 227–228. - Vorenberg, Final Freedom (2001), p. 229. - Du Bois, Black Reconstruction (1935), p. 208. - Thorpe, Constitutional History (1901), p. 210. - Tsesis, The Thirteenth Amendment and American Freedom (2004), 48. - U.S. GOVERNMENT PRINTING OFFICE, 112th Congress, 2nd Session, SENATE DOCUMENT No. 112–9 (2013). "THE CONSTITUTION of the UNITED STATES OF AMERICA ANALYSIS AND INTERPRETATION Centennial Edition INTERIM EDITION: ANALYSIS OF CASES DECIDED BY THE SUPREME COURT OF THE UNITED STATES TO JUNE 26, 2013s" (PDF). p. 30. Retrieved February 17, 2014. - Seward certificate proclaiming the Thirteenth Amendment to have been adopted as part of the Constitution as of December 6, 1865. - Vorenberg, Final Freedom (2001), p. 232. - Kocher, Greg (February 23, 2013). "Kentucky supported Lincoln's efforts to abolish slavery — 111 years late". Lexington Herald-Leader. Retrieved February 17, 2014. - Ben Waldron (February 18, 2013). "Mississippi Officially Abolishes Slavery, Ratifies 13th Amendment". ABC News. Archived from the original on June 4, 2013. Retrieved April 23, 2013. - "The Constitution of the United States: Amendments 11-27". United States National Archives. United States National Archives. Retrieved 24 February 2014. - Lowell Harrison & James C. Klotter, A New History of Kentucky, University Press of Kentucky, 1997; p. 180; ISBN 9780813126210 - Forehand, "Striking Resemblance" (1996), p. 82. - Hornsby, Alan, ed. (2011). "Delaware". Black America: A State-by-State Historical Encyclopedia. ABC-CLIO. p. 139. ISBN 9781573569767. - Tsesis, The Thirteenth Amendment and American Freedom (2004), pp. 17 & 34. - "The Thirteenth Amendment", Primary Documents in American History, Library of Congress. 
Retrieved Feb 15, 2007 - Goldstone 2011, p. 22. - Stromberg, "A Plain Folk Perspective" (2002), p. 111. - Nelson, William E. (1988). The Fourteenth Amendment: From Political Principle to Judicial Doctrine. Harvard University Press. p. 47. ISBN 9780674041424. Retrieved June 6, 2013. - Stromberg, "A Plain Folk Perspective" (2002), p. 112. - J. J. Gries to the Joint Committee on Reconstruction, quoted in Du Bois, Black Reconstruction (1935), p. 140. - Du Bois, Black Reconstruction (1935), p. 188. - Quoted in Vorenberg, Final Freedom (2001), p. 244. - Trelease, White Terror (1971), p. xviii. "The truth seems to be that, after a brief exulation with the idea of freedom, Negroes realized that their position was hardly changed; they continued to live and work much as they had before." - Blackmon 2008, p. 53. - Novak, Wheel of Servitude (1978), p. 2. - Blackmon 2008, p. 100. - Tsesis, The Thirteenth Amendment and American Freedom (2004), pp. 51–52. - Blackmon 2008, p. 6. - Vorenberg, Final Freedom (2001), pp. 230–231. "The black codes were a violation of freedom of contract, one of the civil rights that Republicans expected to flow from the amendment. Because South Carolina and other states anticipated that congressional Republicans would try to use the Thirteenth Amendment to outlaw the codes, they made the preemptive strike of declaring in their ratification resolutions that Congress could not use the amendment's second clause to legislate on freed people's civil rights." - Benjamin Ginsberg, Moses of South Carolina: A Jewish Scalawag during Radical Reconstruction; Johns Hopkins Press, 2010; pp. 44–46. - Tsesis, The Thirteenth Amendment and American Freedom (2004), p. 50. - Tsesis, The Thirteenth Amendment and American Freedom (2004), p. 51. - Wolff, "The Thirteenth Amendment and Slavery in the Global Economy" (2002), p. 981. 
"Peonage was a system of forced labor that depended upon the indebtedness of a worker, rather than an actual property right in a slave, as the means of compelling work. A prospective employer would offer a laborer a "loan" or "advance" on his wages, typically as a condition of employment, and then use the newly created debt to compel the worker to remain on the job for as long as the employer wished." - Wolff, "The Thirteenth Amendment and Slavery in the Global Economy" (2002), p. 982. "Not surprisingly, employers used peonage arrangements primarily in industries that involved hazardous working conditions and very low pay. While black workers were not the exclusive victims of peonage arrangements in America, they suffered under its yoke in vastly disproportionate numbers. Along with Jim Crow laws that segregated transportation and public facilities, these laws helped to restrict the movement of freed black workers and thereby keep them in a state of poverty and vulnerability." - Wolff, "The Thirteenth Amendment and Slavery in the Global Economy" (2002), p. 982. "Legally sanctioned peonage arrangements blossomed in the South following the Civil War and continued into the twentieth century. According to the Professor Jacqueline Jones, 'perhaps as many as one-third of all [sharecropping farmers] in Alabama, Mississippi, and George were being held against their will in 1900." - Wolff, "The Thirteenth Amendment and Slavery in the Global Economy" (2002), p. 982. "It did not recognize a property right in a human being (a peon could not be sold in the manner of a slave); and the condition of peonage did not work 'corruption of blood' and travel to the children of the worker. Peonage, in short, was not chattel slavery. Yet the practice unquestionably reproduced many of the immediate practical realities of slavery—a vast underclass of laborers, held to their jobs by force of law and threat of imprisonment, with few if any opportunities for escape." 
- Vorenberg, Final Freedom (2001), pp. 233–234. - W. E. B. Du Bois, "The Freedmen's Bureau", The Atlantic, March 1901. - Goldstone 2011, pp. 23–24. - Tsesis, The Thirteenth Amendment and American Freedom (2004), pp. 50–51. "Blacks applied to local provost marshalls and Freedmen's Bureau for help against these child abductions, particularly in those cases where children were taken from living parents. Jack Prince asked for help when a woman bound his maternal niece. Sally Hunter requested assistance to obtain the release of her two nieces. Bureau officials finally put an end to the system of indenture in 1867". - Forehand, "Striking Resemblance" (1996), p. 99–100, 105. - Tsesis, The Thirteenth Amendment and American Freedom (2004), p. 66–67. - Tsesis, The Thirteenth Amendment and American Freedom (2004), pp. 56–57, 60–61. "If the Republicans had hoped to gradually use section 2 of the Thirteenth Amendment to pass Reconstruction legislation, they would soon learn that President Johnson, using his veto power, would make increasingly more difficult the passage of any measure augmenting the power of the national government. Further, with time, even leading antislavery Republicans would become less adamant and more willing to reconcile with the South than protect the rights of the newly freed. This was clear by the time Horace Greely accepted the Democratic nomination for president in 1872 and even more when President Rutherford B. Hayes entered the Compromise of 1877, agreeing to withdraw federal troops from the South." - Goluboff, "Lost Origins of Civil Rights" (2001), p. 1638. - Soifer, "Prohibition of Voluntary Peonage" (2012), p. 1617. - Goluboff, "Lost Origins of Civil Rights" (2001), p. 1616. - Goluboff, "Lost Origins of Civil Rights" (2001), pp. 1619–1621. - Goluboff, "Lost Origins of Civil Rights" (2001), pp. 1626–1628. - Goluboff, "Lost Origins of Civil Rights" (2001), pp. 1629, 1635. - Goluboff, "Lost Origins of Civil Rights" (2001), p. 1668. 
- Goluboff, "Lost Origins of Civil Rights", pp. 1680–1683. - "US Code – Title 18: Crimes and criminal procedure". Codes.lp.findlaw.com. Retrieved November 30, 2012. - "18 U.S.C. § 241: US Code – Section 241: Conspiracy against rights". Codes.lp.findlaw.com. Retrieved November 30, 2012. - "18 U.S.C. § 242: US Code – Section 242: Deprivation of rights under color of law". Codes.lp.findlaw.com. Retrieved November 30, 2012. - Peonage Section 1581 of Title 18 U.S. Department of Justice, Civil Rights Division Involuntary servitude, forced labor and sex trafficking statutes enforced - Involuntary Servitude Section 1584 of Title 18 U.S. Department of Justice, Civil Rights Division Involuntary servitude, forced labor and sex trafficking statutes enforced - "Specific Performance and the Thirteenth Amendment by Nathan Oman". SSRN. Retrieved November 30, 2012. - Forced Labor Section 1589 of Title 18 U.S. Department of Justice, Civil Rights Division Involuntary servitude, forced labor and sex trafficking statutes enforced. NB According to the Dept. of Justice, "Congress enacted § 1589 in response to the Supreme Court's decision in United States v. Kozminski, 487 U.S. 931 (1988), which interpreted § 1584 to require the use or threatened use of physical or legal coercion. Section 1589 broadens the definition of the kinds of coercion that might result in forced labor." - Amy Dru Stanley (June 2010). "Instead of Waiting for the Thirteenth Amendment: The War Power, Slave Marriage, and Inviolate Human Rights". American Historical Review 115 (3): 735. - Kenneth L. Karst (January 1, 2000). "Thirteenth Amendment (Judicial Interpretation)". Encyclopedia of the American Constitution. – via HighBeam Research (subscription required). Retrieved June 16, 2013. - 27 Fed. Cas. 785 (1866) - Tsesis, The Thirteenth Amendment and American Freedom (2004), pp. 62–63. - Seth P. 
Waxman, "orgetown.edu/facpub/287/ Twins at Birth: Civil Rights and the Role of the Solicitor General", Indiana Law Journal 75, 2000; pp. 1302–1303. - Tsesis, The Thirteenth Amendment and American Freedom (2004), pp. 63–64. - 80 U.S. 581 (1871) - Tsesis, The Thirteenth Amendment and American Freedom (2004), pp. 64–66. - Waskey, Andrew J. "John Marshall Harlan". In Wilson, Steven Harmon. The U.S. Justice System: An Encyclopedia: An Encyclopedia. ABC-CLIO. p. 547. ISBN 978-1-59884-305-7. - Maria L. Ontiveros, Professor of Law, University of San Francisco School of Law, and Joshua R. Drexler, J.D. Candidate, May 2008, University of San Francisco School of Law (21 July 2008), The Thirteenth Amendment and Access to Education for Children of Undocumented Workers: A New Look at Plyler v. Doe'; Publisher: University of San Francisco Law Review, Volume 42, Spring 2008, Pages 1045-1076; here page 1058-1059. The article was developed from a working paper prepared for the roundtable, "The Education of All Our Children: The 25th Anniversary of Plyler v. Doe," sponsored by the Chief Justice Earl Warren Institute on Race, Ethnicity & Diversity (University of California, Berkeley, Boalt Hall School of Law), held on May 7, 2007. - The Slaughter-House Cases, 83 U.S. (36 Wall.), at 72 (1873) - Text of Civil Rights Cases, 109 U.S. 3 (1883) is available from: Findlaw Justia LII - Goldstone 2011, p. 122. - Tsesis, The Thirteenth Amendment and American Freedom (2004), p. 70. - Appleton's Annual Cyclopædia and Register of Important Events of the Year ... D. Appleton & Company. 1888. p. 132. Retrieved June 11, 2013. - Tsesis, The Thirteenth Amendment and American Freedom (2004), p. 73. - 163 U.S. 537 (1896) - Tsesis, The Thirteenth Amendment and American Freedom (2004), p. 76. - Goldstone 2011, pp. 162, 164–65. - Tsesis, The Thirteenth Amendment and American Freedom (2004), p. 78. - 203 U.S. 1 (1906) - Tsesis, The Thirteenth Amendment and American Freedom (2004), p. 79–80. 
- Wolff, "The Thirteenth Amendment and Slavery in the Global Economy" (2002), p. 983. - Bailey v. Alabama, 219 U.S. 219, 241 (1910). - Tsesis, The Thirteenth Amendment and American Freedom (2004), p. 3. "After Reconstruction, however, a series of Supreme Court decisions substantially diminished the amendment's significance in achieving genuine liberation. The Court did not revisit the amendment's meaning until 1968, during the heyday of the Civil Rights movement. In Jones v. Alfred H. Mayer, the Court found that the Thirteenth Amendment not only ended unrecompensed, forced labor but that its second section also empowered Congress to develop legislation that is 'rationally' related to ending any remaining 'badges and incidents of servitude'." - Colbert, "Liberating the Thirteenth Amendment" (1995), p. 2. - "Jones v. Alfred H. Mayer Co. 392 U.S. 409 (1968)". Legal Information Institute at Cornell University Law Schhool. Retrieved 22 October 2015. Syllabus: "[T]he badges and incidents of slavery that the Thirteenth Amendment empowered Congress to eliminate included restraints upon those fundamental rights which are the essence of civil freedom, namely, the same right . . . to inherit, purchase, lease, sell and convey property, as is enjoyed by white citizens. Civil Rights Cases, 09 U.S. 3, 22. Insofar as Hodges v. United States, 203 U.S. 1, suggests a contrary holding, it is overruled." Footnote 78: "[W]e note that the entire Court [in the Civil Rights Cases; content added] agreed upon at least one proposition: the Thirteenth Amendment authorizes Congress not only to outlaw all forms of slavery and involuntary servitude, but also to eradicate the last vestiges and incidents of a society half slave and half free by securing to all citizens, of every race and color, the same right to make and enforce contracts, to sue, be parties, give evidence, and to inherit, purchase, lease, sell and convey property, as is enjoyed by white citizens. [...] 
The conclusion of the majority in Hodges rested upon a concept of congressional power under the Thirteenth Amendment irreconcilable with the position taken by every member of this Court in the Civil Rights Cases and incompatible with the history and purpose of the Amendment itself. Insofar as Hodges is inconsistent with our holding today, it is hereby overruled." - 'Jones v. Alfred H. Mayer Co., 392 U.S. 409 (1968) - Alison Shay, "Remembering Jones v. Alfred H. Mayer Co.", Publishing the Long Civil Rights Movement, 17 June 2012. - Colbert, "Liberating the Thirteenth Amendment" (1995), pp. 3–4. - Tsesis, The Thirteenth Amendment and American Freedom (2004), p. 3. "The Court's holding in Jones enables Congress to pass statutes against present-day human rights violations, such as the trafficking of foreign workers as sex slaves and the exploitation of migrant agricultural workers as peons." - Tsesis, The Thirteenth Amendment and American Freedom (2004), pp. 112–113. "... the Thirteenth Amendment remains the principal constitutional source requiring the federal government to protect individual liberties against arbitrary private and public infringements that resemble the incidents of involuntary servitude. Moreover, the Thirteenth Amendment is a positive injunction requiring Congress to pass laws to that end, while the Fourteenth Amendment is 'responsive' to 'unconstitutional behavior.'" - Wolff, "The Thirteenth Amendment and Slavery in the Global Economy" (2002), p. 977. - 245 U.S. 366 (1918) - 487 U.S. 931 (1988) - "Thirteenth Amendment—Slavery and Involuntary Servitude", GPO Access, U.S. Government Printing Office, p. 1557 - Risa Goluboff (2001), "The 13th Amendment and the Lost Origins of Civil Rights," Duke Law Journal, Vol 50, no. 228, p. 1609 - Loupe, Diane (August 2000). "Community Service: Mandatory or Voluntary? – Industry Overview". School Administrator: 8. - Mark W. Podvia (2009). "Titles of Nobility". In David Andrew Schultz. 
Encyclopedia of the United States Constitution. Infobase. pp. 738–39. ISBN 9781438126777. - "Constitutional Amendments Not Ratified". United States House of Representatives. Archived from the original on 2012-07-02. Retrieved 2013-11-21. - Foner, 2010, p. 158 - Belz, Herman (1978). Emancipation and Equal Rights: Politics and Constitutionalism in the Civil War Era. New York: W. W. Norton. Preview. - Benedict, Michael L. (2011). "Constitutional Politics, Constitutional Law, and the Thirteenth Amendment". Maryland Law Review (University of Maryland School of Law) 71 (1): 163–188. Pdf. - Blackmon, Douglas A. (March 25, 2008). Slavery by Another Name: The Re-Enslavement of Black Americans from the Civil War to World War II. Knopf Doubleday Publishing Group. ISBN 9780385506250. Preview. - Colbert, Douglas L. (Winter 1995). "Liberating the Thirteenth Amendment". Harvard Civil Rights-Civil Liberties Law Review (Harvard Law School) 30 (1): 1–55. - Cramer, Clayton E. (1997). Black Demographic Data, 1790-1860: A Sourcebook. Greenwood Publishing Group. ISBN 9780313302435. - Donald, David Herbert (1996). Lincoln. Simon & Schuster. ISBN 9780684825359. Preview. - Du Bois, W.E.B. (1935). Black Reconstruction: An Essay Toward a History of the Part Which Black Folk Played in the Attempt to Reconstruct Democracy in America, 1860–1880'. New York: Russell & Russell. OCLC 317424. - Foner, Eric (2010). The Fiery Trial: Abraham Lincoln and American Slavery. W. W. Norton. ISBN 9780393066180. Preview. - Forehand, Beverly (1996). Striking Resemblance: Kentucky, Tennessee, Black Codes and Readjustment, 1865–1866 (Masters thesis). Western Kentucky University. - Goluboff, Risa L. (April 2001). "The Thirteenth Amendment and the lost origins of civil rights". Duke Law Journal (Duke University School of Law) 50: 1609–1685. doi:10.2307/1373044. JSTOR 1373044. Pdf. - Goldstone, Lawrence (2011). Inherently Unequal: The Betrayal of Equal Rights by the Supreme Court, 1865-1903. Walker & Company. 
ISBN 9780802717924. Preview. - Goodwin, Doris Kearns (2005). Team of rivals: the political genius of Abraham Lincoln. Simon & Schuster. ISBN 9780743270755. Preview. - Harrison, John (Spring 2001). "The lawfulness of the reconstruction amendments". University of Chicago Law Review (University of Chicago Law School) 68 (2): 375–462. doi:10.2307/1600377. JSTOR 1600377. Pdf. - Kachun, Mitch (2003). Festivals of Freedom: Memory and Meaning in African American Emancipation Celebrations, 1808–1915. Online. - McAward, Jennifer Mason (2010). "The scope of Congress's Thirteenth Amendment enforcement power after City of Boerne v. Flores". Washington University Law Review (Washington University School of Law) 88 (1): 77–147. Pdf. - Response to McAward: Tsesis, Alexander (2011). "Congressional authority to interpret the Thirteenth Amendment". Maryland Law Review (University of Maryland School of Law) 71 (1): 40–59. SSRN 1753224. Pdf. - Response to Tsesis: McAward, Jennifer Mason (2011). "Congressional authority to interpret the Thirteenth Amendment: a response to Professor Tsesis". Maryland Law Review (University of Maryland School of Law) 71 (1): 60–82. SSRN 2271791. Pdf. - McConnell, Joyce E. (Spring 1992). "Beyond metaphor: battered women, involuntary servitude and the Thirteenth Amendment". Yale Journal of Law and Feminism (Yale Law School) 4 (2): 207–253. Pdf. - McPherson, James M. (1988). Battle Cry of Freedom: The Civil War Era. Oxford University Press. ISBN 9780195038637. Preview. - Novak, Daniel A. (1978). The Wheel of Servitude: Black Forced Labor after Slavery. Kentucky: University Press of Kentucky. ISBN 0813113717. - Richards, Leonard L. (2015). Who Freed the Slaves?: The Fight over the Thirteenth Amendment. Excerpt. Emphasis on the role of Congressman James Ashley. - Samito, Christian G., Lincoln and the Thirteenth Amendment (Southern Illinois University Press, 2015) xii, 171 pp. - Stanley, Amy Dru (June 2010). 
"Instead of waiting for the Thirteenth Amendment: the war power, slave marriage, and inviolate human rights". The American Historical Review (Oxford Journals for the American Historical Association via JSTOR) 115 (3): 732–765. doi:10.1086/ahr.115.3.732. JSTOR 10.1086/ahr.115.3.732. Pdf. - Stromberg, Joseph R. (Spring 2002). "A plain folk perspective on reconstruction, state-building, ideology, and economic spoils". Journal of Libertarian Studies (Center for Libertarian Studies, Ludwig von Mises Institute) 16 (2): 103–137. Pdf. - tenBroek, Jacobus (June 1951). "Thirteenth Amendment to the Constitution of the United States: Consummation to Abolition and Key to the Fourteenth Amendment". California Law Review (California Law Review, Inc. via JSTOR) 39 (2): 171–203. doi:10.2307/3478033. JSTOR 3478033. Pdf. - Thorpe, Francis Newton (1901). "The Constitutional History of the United States, vol. 3: 1861 – 1895". Chicago. - Trelease, Allen W. (1971). White Terror: The Ku Klux Klan Conspiracy and Southern Reconstruction. New York: Harper & Row. - Tsesis, Alexander (2004). The Thirteenth Amendment and American freedom: a legal history. New York: New York University Press. ISBN 0814782760. - Vicino, Thomas J.; Hanlon, Bernadette (2014). Global Migration The Basics. Routledge. ISBN 9781134696871. - Vorenberg, Michael (2001). Final freedom: the Civil War, the abolition of slavery, and the Thirteenth Amendment. Cambridge: Cambridge University Press. ISBN 9781139428002. Preview. - Barrington Wolff, Tobias (May 2002). "The Thirteenth Amendment and slavery in the global economy". Columbia Law Review (Columbia Law School) 102 (4): 973–1050. doi:10.2307/1123649. JSTOR 1123649. - Wood, Gordon S (2010). Empire of Liberty: A History of the Early Republic, 1789–1815. Oxford University Press. ISBN 9780195039146., Book Maryland Law Review, special issue: Symposium - the Maryland Constitutional Law Schmooze - Garber, Mark A. (2011). 
A healthy, well-balanced diet contains naturally occurring sugars as integral components of whole foods (ie, within whole fruits, vegetables, milk and dairy products, and some grains). Added sugars provide sensory effects to foods and promote enjoyment, and, although they may be required in some clinical situations, they are not a necessary component of the diet in healthy children. By providing calories without other essential nutrients (2), they can displace nutrient-dense foods and contribute to poor health outcomes, which is of special concern in children. Excessive consumption of sugars has been linked with several metabolic abnormalities and adverse health conditions (3). The aim of this paper is to review the terminology, classification, and definitions of sugars and sugar-containing beverages; current recommendations for intake of sugars and beverages; intakes of sugars and sugar-sweetened foods/beverages in children and adolescents; evidence on the development of sweet taste and preference for sweet foods; evidence on the health effects of sugars and sugar-containing beverages in infants, children, and adolescents; and what sugars should be replaced by; and to provide recommendations and practical points on the intake of free sugars in the paediatric population, with a focus on establishing healthy dietary practices and preventing health problems. The paper focuses on the general paediatric population.

A systematic search of the literature was performed to identify publications relevant to the aims of the position paper. We searched PubMed, EMBASE, and the Cochrane Central Register of Controlled Trials (CENTRAL) for randomised controlled trials (RCTs), cohort studies, cross-sectional studies, clinical trials, epidemiological studies, systematic reviews, meta-analyses, and consensus statements/guidelines published in English up to March 2015. Case-control studies and qualitative studies/research were not included.
Reference lists of included studies and other relevant articles were also searched. For the literature search to identify studies relevant to the development of sweet taste or flavour preference and associations with sugars intake, we used the search strategy in Appendix 1 (Supplemental Digital Content 1, http://links.lww.com/MPG/B102). Systematic reviews published after March 2015, up to September 2016, were also eligible, but no formal search was undertaken after March 2015. Due to the heterogeneous nature of the literature, a narrative summary of the selected papers is provided in the text. To identify studies on intake of sugars and health outcomes in infants, children, and adolescents, a systematic search of the literature as described above was conducted up to February 2016 using defined search criteria (Appendix 2, Supplemental Digital Content 2, http://links.lww.com/MPG/B103). Systematic reviews published after February 2016 up to September 2016 were also eligible (eg, a systematic review by the American Heart Association [AHA] (4)), but no formal search was undertaken after February 2016. Studies were included if the participants were infants, toddlers, children, and adolescents; studies involving adults were included only if they also included children and adolescents. Due to the large body of literature, including several systematic reviews and meta-analyses, we presented the evidence relevant to the paediatric population from the largest, most comprehensive reviews conducted by the WHO (5,6) and the UK Scientific Advisory Committee on Nutrition (7), and discussed relevant individual studies published since the cut-off of the literature searches for those reviews. We gave priority to RCTs over observational studies.

TERMINOLOGY, CLASSIFICATION, AND DEFINITIONS OF TYPES OF SUGARS AND SUGARS-CONTAINING BEVERAGES IN THE DIET

Sugar is a ubiquitous term, but it is not easy to define and measure.
The term “total sugars” refers to the combination of naturally occurring sugars and free sugars (of which added sugars are a subgroup). “Sugar-containing” refers to foods and beverages that contain sugars. Previous analytical methods measured only the total sugars in foods, and nutrient databases and nutrition labels accordingly include values for total sugars (2). Recently, a precise step-by-step method that enables systematic calculation of the free sugars content of foods and beverages was developed within the University of Toronto's Food Label Information Program in Canada. A comprehensive assessment of the total sugars and free sugars levels of 15,342 products was obtained; free sugars accounted for 64% of total sugars content (8). Various definitions of sugars are used in different contexts, for example, in chemical classification (Table 1), current dietary recommendations (Table 2), research studies, regulations, and food labelling.

Sugars: Chemical Classification and Relative Sweetness

The term “sugars” describes mono- and di-saccharides. The 3 principal monosaccharides—hexoses (6-carbon sugars)—are glucose, fructose, and galactose, which are the building blocks of naturally occurring di-, oligo-, and polysaccharides. Carbohydrates are a major source of energy in the diet and include a range of compounds containing carbon, hydrogen, and oxygen. Carbohydrates are divided into 3 groups: mono- and di-saccharides (degree of polymerisation [DP] 1–2), ie, sugars (Table 1); oligosaccharides (DP 3–9; eg, maltodextrins); and polysaccharides (DP ≥ 10) (7). Sweetness is a gustatory response evoked by sugars and sweeteners. The initiation of a taste response involves the interaction of a stimulant molecule with a receptor located at the taste-cell plasma membrane. Sweetness is defined relative to sucrose, which is assigned a sweetness value of 1.00 (or 100%). The relative sweetness of sugars differs.
Fructose is the sweetest (relative sweetness: 1.17), followed by sucrose (1.00), glucose (0.74), maltose (0.33), galactose (0.32), and lactose (0.16) (9).

Definitions for Sugars Used in Dietary Recommendations and Research Studies

The updated WHO definition of “free sugars” is “monosaccharides and disaccharides added to foods and beverages by the manufacturer, cook, or consumer (i.e. added sugars), plus sugars naturally present in honey, syrups, fruit juices, and fruit juice concentrates (i.e. non-milk extrinsic sugars)” (5). This term describes sugars that may have physiological consequences different from those of intrinsic sugars incorporated within intact plant cell walls or lactose naturally present in milk. The UK Scientific Advisory Committee on Nutrition (UK SACN) has also adopted the definition of “free sugars” (7). The European Food Safety Authority (EFSA) defines sugars as “total sugars,” including both indigenous sugars naturally present in foods (ie, “naturally occurring sugars”) such as fruit, vegetables, cereals, and lactose in milk products, and added sugars. The term “added sugars” refers to sucrose, fructose, glucose, starch hydrolysates (glucose syrup, high-fructose syrup, isoglucose), and other isolated sugar preparations used as such or added during food preparation and manufacturing (10). The United States (US) dietary reference intakes define “added sugars” as sugars and syrups that are added to foods during processing and preparation; added sugars do not include naturally occurring sugars such as lactose in milk and fructose in fruits (11). The different terminology used in dietary recommendations is challenging. The EFSA and US definitions of “added sugars” (10,11) do not include sugars present in unsweetened fruit and vegetable juice and fruit juice concentrate, all of which are, however, captured in the definition of free sugars (5).
The US definition of “added sugars” further excludes sugars found in jellies, jams, preserves, and fruit spreads, whereas the EFSA definition does not include honey; all of these are included in the definition of free sugars (5). In the US, there is now a mandatory requirement to declare “added sugars,” in grams, under “Total Sugars” and as a % Daily Value on labels (12). In research studies, exact definitions of sugars are often omitted, making it difficult to determine what was under investigation. In epidemiological studies, sugars consumption is often underestimated (13,14). Recently, Nash et al (15) validated a dual-isotope model based on red blood cell carbon (δ13C) and nitrogen (δ15N) isotope ratios that explained a large percentage of the variation in self-reported sugars intake. Red blood cell, plasma, and hair isotope ratios predict sugars intake and provide data that will allow comparison of studies using different sample types. This is a useful technique, but it is currently too expensive for use in epidemiological studies. It is often easier in epidemiological studies to assess intake of sugar-sweetened beverages (SSBs), as these can be counted in food frequency instruments (2).

Definitions for Sugars Used in Regulations and Food Labelling

The terminology used in regulations and on food labels differs from that used in dietary recommendations. In Europe, there is no mandatory labelling of added or free sugars, and only “total sugars” has to be declared (10,16). Of specific relevance to infants, the WHO (5) and SACN (7) definitions of “free sugars” do not mention human milk and infant formulas. Compositional requirements for infant formulas and follow-on formulas specify a total glycaemic carbohydrate content of 9 to 14 g/100 kcal, with a minimum of 4.5 g/100 kcal of lactose. For infant formulas, lactose is the preferred sugar, whereas sucrose, glucose, and fructose are not permitted (17–19).
Glucose and sucrose may, however, be added to infant formulas manufactured from protein hydrolysates to mask the bitter taste. For follow-on formulas, the addition of sucrose and fructose may be considered acceptable, because most infants will be exposed to these sugars in complementary foods. If honey is used (for follow-on formulas only), it has to be treated to destroy spores of Clostridium botulinum (17). Interestingly, it is permitted to add free sugars to processed cereal-based foods and baby foods for infants and young children. It is stated: “if sucrose, fructose, glucose, glucose syrups or honey are added to ‘processed cereal-based foods’, i.e. simple cereals which are or have to be reconstituted with milk or other appropriate nutritious liquids or to rusks and biscuits which are to be used either directly or, after pulverisation, with the addition of water, milk; the amount of added carbohydrates from these sources shall not exceed 7.5 g/100 kcal” (20). The claims “no added sugar” and “naturally occurring sugars” on foods for infants, children, and adolescents are in accordance with “Regulation No 1924 on nutrition and health claims on foods” (21), but not with the WHO definitions of “free sugars” and “naturally occurring sugars” (= “intrinsic sugars”) (5) (Table 2). Labels on foods for infants, children, and adolescents may therefore state “no added sugars” despite the fact that these foods contain “free sugars,” which need to be limited in the diet. With the current terminology in European regulations and food labelling, “free sugars” are “hidden,” and consumers may not be aware that they are present in foods and beverages.

Sugars-containing Beverages: Sugar-sweetened Beverages and Fruit Juices

SSBs, also called sugar-sweetened or nutritively sweetened drinks/beverages, are beverages that contain added caloric sweeteners such as sucrose, high-fructose corn syrup, and fruit juice concentrates.
They include the full spectrum of soft drinks, carbonated soft drinks, fruitades, fruit drinks, sports drinks, energy and vitamin water drinks, sweetened iced tea, cordials, squashes, fruit syrups, and sweetened lemonade (22). The high-fructose corn syrup commonly used in beverages contains 55% fructose and 45% glucose derived from corn, whereas sucrose consists of 50% fructose and 50% glucose (23,24). Fruit juices are not SSBs (23). They usually have a nutritional composition superior to that of SSBs, as they contain potassium and vitamins A and C, and some are fortified with vitamin D and/or calcium; however, they contain amounts of free sugars (5%–17% of sucrose, glucose, fructose, and/or sorbitol) and energy (23–71 kcal/100 mL) similar to those of SSBs (24) and have a similar potential to promote weight gain in children (25,26). Table 3 shows the main groups of SSBs and fruit juices with the ranges of energy and free sugars content (24). Smoothies are not included in the definition of SSBs, even though they contain free sugars. It is also important to note that sweetened milks (eg, chocolate milks, chocolate soy drinks) are also not included in the definition of SSBs, although they contain 3.6 to 11.5 g of free sugars/100 mL and are commonly consumed by children and adolescents (24).

CURRENT RECOMMENDATIONS FOR INTAKE OF SUGARS AND BEVERAGES

The WHO recommends limiting the intake of free sugars to <10% of total energy intake (strong recommendation), based on moderate-quality evidence from observational studies of dental caries, and suggests that a reduction to <5% would have additional benefits in reducing the risk of dental caries (conditional recommendation) in children and adults (5). The UK SACN review recommends that the average population intake of free sugars should be <5% of total dietary energy from 2 years of age upwards.
This figure was based on calculations of the mean reduction in free sugars intake needed to lower mean population energy intakes by 100 kcal/day, with the aim of addressing energy imbalance and producing a moderate degree of weight loss in the majority of individuals, assuming a baseline of 10% sugars intake as per previous UK recommendations. They further recommend that, in people with a healthy body mass index (BMI) who are in energy balance, the contribution of free sugars toward recommended total carbohydrate intake should be replaced by starches, sugars contained within the cellular structure of foods, and lactose naturally present in milk and milk products. In overweight individuals, the reduction of free sugars should be part of decreasing energy intake. Finally, they recommend that the consumption of SSBs should be minimised in children and adults (7). Five percent of daily energy for a 3-year-old girl is equivalent to <13 g of free sugars/day (ie, <3 teaspoons), an amount present in an average 170 mL (81–260 mL) of fruit nectar, for example (Tables 3 and 4) (24). The AHA recommends that children consume ≤25 g (100 kcal or ∼6 teaspoons) of added sugars/day and that added sugars be avoided in children <2 years of age. This recommendation aims to decrease cardiovascular disease risk among children (excess weight gain and obesity, elevated blood pressure and uric acid levels, dyslipidemia, nonalcoholic fatty liver disease, insulin resistance, and type 2 diabetes mellitus [T2D]) and to maintain diet quality (4). Several other scientific associations have called for reductions in consumption of SSBs for the prevention of obesity and chronic diseases (27–32). The recommended fluid for thirst in infants after the introduction of solid foods is water. Infants should not be given sugar-containing drinks in bottles or training cups, and the habit of a child sleeping with a bottle should be discouraged (33).
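The arithmetic behind these gram and teaspoon figures is straightforward. The sketch below is illustrative only: the ∼1040 kcal/day energy requirement for a young child and the 4 g/teaspoon conversion are assumptions made for this example, not values reported in the paper; the 4 kcal/g energy value of sugars is the standard carbohydrate conversion factor.

```python
# Worked arithmetic behind the free sugars limits discussed above.
# Assumptions (illustrative, not from the paper): sugars provide ~4 kcal/g,
# one teaspoon of sugar weighs ~4 g, and a young child requires ~1040 kcal/day.
KCAL_PER_GRAM = 4.0
GRAMS_PER_TEASPOON = 4.0

def free_sugars_limit(daily_energy_kcal: float, fraction: float):
    """Return (grams/day, teaspoons/day) for a given share of daily energy."""
    kcal = daily_energy_kcal * fraction
    grams = kcal / KCAL_PER_GRAM
    return grams, grams / GRAMS_PER_TEASPOON

# WHO/SACN <5% target at an assumed ~1040 kcal/day requirement
grams, teaspoons = free_sugars_limit(1040, 0.05)
print(f"{grams:.0f} g/day ≈ {teaspoons:.0f} teaspoons")  # prints "13 g/day ≈ 3 teaspoons"

# The AHA cap of 25 g/day corresponds to 100 kcal,
# ie, 5% of a 2000 kcal/day diet
print(25 * KCAL_PER_GRAM)  # prints 100.0
```

The same function reproduces both recommendations because they are the same 5%-of-energy rule evaluated at different assumed energy requirements.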
The recommended beverages for children and adolescents are water, mineral water, and/or (fruit or herbal) tea without added sugars (34). It should be noted that existing recommendations focus on free or added sugars rather than on total sugars, as there is consistent evidence that free and added sugars are the major contributors to weight gain, obesity, dental caries, and other adverse health effects (see later). “Naturally occurring sugars” as integral components of whole foods (ie, within whole fruits, vegetables, some grains, and dairy products), which also contribute to total sugars intake, are of less concern, as they are less likely to be overconsumed and are accompanied by a wide range of bioactive health-enhancing nutrients, fibre, antioxidants, and phytochemicals that reduce inflammation and improve endothelial function. Indeed, evidence in adults suggests that weight gain during a 4-year period is inversely associated with intake of naturally occurring sugars (35), whereas in another analysis, low intakes of fruits, vegetables, whole grains, or nuts and seeds, or a high dietary intake of salt, were reported to be individually responsible for 1.5% to >4% of the global disease burden (36). It is also more practical to recommend a minimised intake of added/free sugars than to set a limit for total sugars.

INTAKES OF SUGARS, SUGARS-SWEETENED FOODS, AND BEVERAGES IN CHILDREN AND ADOLESCENTS

Comparison of the intake of sugars and SSBs between countries is difficult, as studies use different definitions for sugar-containing beverages. According to the European Society for Paediatric Gastroenterology, Hepatology and Nutrition (ESPGHAN) Position Paper on Complementary Feeding, no sugars should be added to complementary foods, and fruit juices or SSBs should be avoided (33). In a study in 5 European countries it was, however, found that these liquids are frequently given to breast-fed and particularly to formula-fed infants during the first months of life.
Infants given energy-providing liquids showed lower intakes of infant formula and solids (37). The current food environment is characterised by a cheap and abundant sugars supply (38). Added sugars contribute about 14% of daily energy intake in 2- to 9-year-old children in Europe (39) and in 2- to 18-year-olds in the USA (40). In Slovenian adolescents aged 15 to 16 years, the mean intake of free sugars constituted 16% of daily energy intake (130 g/day) in boys and 17% (110 g/day) in girls (41). Consumption of SSBs has increased dramatically in recent decades among children and adults (42). In the UK, soft drinks provided almost a third of the intake of non-milk extrinsic sugars in children aged 11 to 18 years; biscuits, buns, cakes, and puddings, confectionery, and fruit juice were also significant contributors. There is a socioeconomic gradient, with higher sugars intakes in lower-income groups (7). A study among adolescents aged 12 to 17 years from 9 European countries reported a consumption of 424 mL of sugar-containing beverages/day (228 mL SSBs, 63 mL sweetened tea, and 133 mL fruit juice) (43). A German study reported a soft drink consumption of 480 mL/day in boys and 280 mL/day in girls aged 12 to 17 years (44). A Slovenian study reported a consumption of SSBs (including sweetened tea and syrups) of 683 and 715 mL/day in boys and girls aged 14 to 17 years, respectively; higher than the intake of milk and milk products (513 and 479 g/day in boys and girls) (45). Fruit juice consumption was 114 and 102 mL/day in boys and girls, respectively. SSBs contributed 9% and 10% of total energy intake in boys and girls, representing the primary source of free sugars in the diet of Slovenian adolescents (41,45). In a cross-sectional survey of 200,000 adolescents aged 11 to 15 years from 43 countries and regions across Europe and North America, the prevalence of daily soft drink consumption tended to increase between the ages of 11 and 15 years, especially in boys (46). There is a lack of studies in younger children.
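Intake figures like those above can be translated into approximate energy shares with simple arithmetic. The sketch below is illustrative only: the ∼10 g free sugars/100 mL concentration and the ∼2200 kcal/day adolescent energy requirement are assumptions chosen for this example, not values reported in the cited studies.

```python
# Rough conversion from daily SSB volume to free sugars and share of energy.
# Assumptions (illustrative, not from the cited studies):
#   ~10 g free sugars per 100 mL of SSB, sugars ~4 kcal/g,
#   and a ~2200 kcal/day energy requirement for an adolescent.
SUGAR_G_PER_100ML = 10.0
KCAL_PER_GRAM = 4.0

def ssb_energy_share(volume_ml: float, daily_energy_kcal: float):
    """Return (grams of free sugars/day, % of daily energy from SSB sugars)."""
    grams = volume_ml / 100.0 * SUGAR_G_PER_100ML
    share = grams * KCAL_PER_GRAM / daily_energy_kcal * 100.0
    return grams, share

grams, share = ssb_energy_share(500, 2200)  # ~500 mL SSB/day
print(f"{grams:.0f} g free sugars/day ≈ {share:.0f}% of energy")  # prints "50 g free sugars/day ≈ 9% of energy"
```

Under these assumptions, a consumption of roughly half a litre of SSB per day lands in the same 9% to 10% energy-share range reported for the Slovenian adolescents.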
THE DEVELOPMENT OF SWEET TASTE AND PREFERENCE FOR SWEET FOODS

Innate and Programmed Preferences for Tastes

Taste is simply defined as the sensation arising from the taste system, whereas flavour is considered a more inclusive term for the complex of sensory cues, including the olfaction, taste, and touch systems (47). An infant's experience with flavours begins early, in utero via the amniotic fluid and later during breast-feeding, where flavours from the mother's diet are experienced (48,49). Infants have an innate preference for sweet, salty, and umami tastes, and an innate rejection of sour and bitter tastes (47,50–52). Newborns prefer sugar solutions to water (47,53) and sweeter solutions over less sweet solutions (47,54), possibly because ingestion of sweet sugars leads to endogenous opioid release (47,55). This effect is used in neonatal practice for procedural pain relief in infants (56–58). Individual sensitivity to and preference for sweet foods is determined not only by the presence or absence of sugars on sweet taste receptors, but also by genetic sensitivity to taste, including polymorphisms in the sweet taste receptor gene family TAS1R (59–61). Programming of preference for certain tastes and palatable food is a complex process involving systems that regulate appetite and food preferences at a central level (altered development of systems regulating motivation, reward, and perception of taste). There are also other influences; for example, prenatal exposure to cocaine is associated with a greater preference for sweet taste in newborns (62). In rats, a similar effect has been shown with morphine (63). Epigenetic changes may also contribute to the programming effect; however, the precise mechanisms remain poorly understood (64–68).
Innate Preference for Energy-dense Foods

Along with the preference for sweet taste, we are also predisposed to prefer energy-dense foods; thus, “healthy” foods such as complex carbohydrates and vegetables, which are not sweet, salty, or energy-dense, are initially rejected by children (47). Especially in young children, sweet taste by itself is probably not the main regulator of food intake. Young children show so-called “caloric compensation,” an adjustment of food intake based not primarily on sweetness, but on the energy content of a previous preload meal given up to 1 hour before eating a self-selected meal. This mechanism seems not to be present in older children (9–10 years old) and in adults (69). Preference for energy-dense foods was advantageous in the past, when food resources were scarce; in today's obesogenic environment, it can contribute to the development of overweight and obesity (47).

Postnatal Taste and Flavour Learning

Children's food choices and preferences are influenced not only by genetic predisposition to certain tastes, but also by food availability and by cultural and parental influences, and they track through childhood and into adulthood (49,70–73). Acceptance of basic tastes at weaning may differ between breast-fed and formula-fed infants (49,74,75). Formula-fed infants are exposed to a constant flavour, a predominantly sweet taste. Human milk also has a sweet taste, but additionally exposes the infant to varying flavours and aromas, depending on the nutrition of the mother. Facial responses to various taste solutions at 3 months of age, before weaning, did not show any difference between breast-fed and formula-fed infants and were consistent with an inborn preference for sweet and salty tastes (49). In an observational study, breast-fed infants, however, had a greater acceptance of new foods and flavours at 2 to 8 months of age versus formula-fed infants (70).
Breast-feeding was also associated with greater diversity in foods and lower intake of juice at 9 months of age, and with a healthier meat and vegetable dietary pattern at 2 to 8 years of age (70,76). Longer exclusive breast-feeding was associated with higher vegetable intake at the age of 5 years, and longer breast-feeding duration has consistently been related to higher fruit and vegetable intake in young children (77,78). In a recent study by Perrine et al (79) in 1355 children, the frequency of consumption of water, fruits, and vegetables was positively associated, whereas the intake of SSBs was inversely associated, with the duration of any breast-feeding, be it partial or exclusive. These are, however, observational studies, and it is not possible to determine whether these associations are causal. Despite the innate preference for sweet tastes, children are also typically phobic of new foods, especially sour fruits, vegetables, and protein foods. Food neophobia is highly heritable, as shown in twin studies (80). Sweet taste is preferred, but only in familiar food contexts, and this preference is influenced by the increase in the availability of sweet products associated with urbanisation (59,81). Sweet and fat taste preferences vary across geographical regions, even within Europe, and are related to weight status in European children. Sweet preference, however, is not always related to the consumption of sweet food (82). Acceptance of novel foods in infants can be enhanced by exposure to a variety of flavours (83). A positive correlation was observed between sensitivity to bitter taste and sweet taste perception (59) and between salty and sweet taste preferences (84). Children have the ability to learn preferences for foods made available to them; thus, the innate preference for sweet taste can be partly modified by experience with food, even in early infancy (47,85,86).

Are Interventions to Modify Taste Preferences Effective?
Observational studies show inconclusive results regarding the association between feeding experience during foetal development and early infancy and later taste preferences (52,87,88). Exposure to palatable foods high in fat and sugars before birth, via maternal intake, or in early infancy may lead to an overall increase in food intake and an increased preference for palatable foods after weaning (64). Mothers' choices of drinks for their young children are also influenced by various social, environmental, and behavioural factors, such as child age, preference, and temperament, grandparents’ influence, and sweetened drinks given as a reward (89). Caution is required when trying to introduce strategies to encourage children to consume nonpreferred foods. These feeding practices may lead to children disliking rather than accepting these foods, and restriction of energy-dense, sweet, salty, and fatty foods may promote children's liking for and intake of those foods (47,90). It seems that the best opportunity for promoting patterns of preference consistent with healthier diets may be to focus on the young (47). In 7- to 16-year-old children, sensory preferences did not change within 12 months in a long-term outpatient obesity lifestyle intervention programme based on behaviour and exercise therapy and a nutritional course including a session on taste training (91). Intervention studies trying to show effects of repeated exposure to specific foods on food preference have some methodological pitfalls. Novel whole-food products (consisting of many taste combinations) are often used for testing, which does not allow discrimination between individual taste dimensions. Using novel foods also makes it difficult to distinguish the effect of a reduction in food neophobia from an increase in preference for the specific taste (92). Attempts have been made to develop reliable methods to test taste sensitivity and aversion even in young children (93,94).
Liem and de Graaf (92) have shown that exposure to sweet orangeade for 8 days increases the preference for sweet orangeade in 9-year-old children (age range 6–11 years), but not in adults. It is not clear whether this effect is stable over time and whether it can be extrapolated to other sugar-rich foods. In a recent systematic review by Nehring et al (52) (published after the cut-off date of the literature search), the hypothesis that foetuses and infants exposed to sweet, salty, sour, bitter, umami, or specific tastes show greater acceptance of that same taste later in life was explored. The authors identified 20 studies (15 intervention and 5 observational), of which 10 studies in 13 subgroups examined the effect of exposure to sweet tastes. All were conducted in infants below 1 year of age. Of these, 6 showed a statistically significant increase in intake, whereas 7 showed no difference. The subgroups not finding an effect had smaller sample sizes. Based on the intervention studies alone, the authors concluded that it is not clear whether exposure to sweet taste affects the later intake of sweet-flavoured foods.

Persistence of Learned Preferences

Infants routinely fed sweetened water by their mothers show a greater preference for sweetened water at 6 months (47,85), 2 years (92,95), 6 years (96), and 6 to 10 years of age (97). A prospective study among 166 girls from the US reported that soda (carbonated SSB or artificially sweetened beverage) drinkers at age 5 years continued to have a higher mean consumption of sodas at 7 to 15 years of age (98). These mostly observational studies suggest that SSB intake during infancy and early childhood may influence SSB intake in later childhood and continue through adolescence, but they do not allow causal inferences. Children prefer higher concentrations of sucrose in water than do adults (84).
They are less able to discriminate between different sucrose concentrations than adolescents, and adolescents in turn have higher optimal preferred sucrose concentrations than adults. The age effects are similar for sucrose in water and sucrose in lemonade (99). Children at 8 to 9 years of age have a much higher density of taste pores, and thus a greater sensitivity to sucrose, than adults (100). Eating habits with preferences for fatty and sweet food are likely to persist at least during early childhood. The Bogalusa Heart Study has shown in a prospective manner that the persistence of eating behaviours appears to begin as early as age 2 years, and that consistency of intake levels of several nutrients, including total sugars and sucrose, lasts until at least 4 years of age (72). The preference for sweet taste seems to decline with age (101).

INTAKE OF SUGARS, SUGARS-SWEETENED FOODS/BEVERAGES, AND HEALTH OUTCOMES IN CHILDREN/ADOLESCENTS

The WHO commissioned a systematic review and meta-analysis on the association of sugars intake with body weight (5,6) as well as with dental caries (see below) (5,102) in children and adults. The systematic review on the association between sugars intake and body weight in children and adults included 30 RCTs (5 in children) and 38 prospective cohort studies (21 in children) (5,6). The UK SACN also performed a systematic review and meta-analysis and reviewed the relationships between carbohydrates, including sugars, sugars-sweetened foods, and SSBs, and health, including body weight and dental caries, in children, adolescents, and adults (7). The 2 reviews employed different inclusion criteria for studies; the WHO considered a wider evidence base than SACN (7), including studies of shorter duration, nonrandomised trials, and population and cross-sectional studies (5,6).
A summary of the 2 reviews focussing on outcomes in the paediatric age group, with their conclusions and recommendations, is provided in Table 6 in Appendix 3 (Supplemental Digital Content 3, http://links.lww.com/MPG/B104), and the main conclusions are described in the following sections along with data published since these reviews.

Intake of Sugars/Sugar-sweetened Beverages and Body Weight or Adiposity in Children and Adolescents

Effect of a Higher Intake of Sugar-sweetened Beverages and/or Sugars

The WHO meta-analysis of 5 prospective cohort studies in children revealed that, after 1 year of follow-up, children with the highest consumption of SSBs had a 55% higher risk of becoming overweight/obese than those with the lowest intake. Among free-living people consuming ad libitum diets, intake of free sugars or SSBs is associated with body weight (5,6). SACN reviewed evidence from prospective cohort studies and RCTs on the relationships between all types of carbohydrates in the diet, including sugars, sugar-sweetened foods, and SSBs, and health in children, adolescents, and adults. They highlighted several associations between sugars intake and body weight, BMI, and body fatness, among other health parameters. A recent longitudinal study examined the association between SSB intake during infancy and obesity at age 6 years in 1189 US children. The odds of obesity were 71% higher for any SSB intake, and 92% higher for SSB introduction before age 6 months, compared with children who had no SSB intake during infancy. The odds of obesity at 6 years among children who consumed ≥3 SSBs/week (1 SSB = 230 mL; 106 kcal) between 10 and 12 months were twice those of children who were not fed SSBs (103). A cross-sectional study assessed the effects of SSBs on obesity prevalence in 2295 2- to 4-year-olds. High intakes of SSBs were linked to increases in obesity prevalence; compared with ≥2 SSBs/day, no SSB intake was associated with a 28% reduction in obesity prevalence (104).
A recent longitudinal, multicentre study investigated associations between SSB consumption in childhood and adolescence and subsequent changes in body fatness in early adulthood, at 6- and 12-year follow-up. The investigators enrolled 283 Danish children aged 9 years and collected data at 9, 15, and 21 years. Subjects who consumed >1 serving of SSB/day at age 15 years had larger increases in BMI and waist circumference (WC) over the subsequent 6 years than nonconsumers. Subjects who increased their SSB consumption from age 9 to 15 years also had larger increases in BMI and WC from 15 to 21 years than those with no change in consumption (105).

Effect of Reduced Intake of Sugar-sweetened Beverages and/or Sugars

The WHO meta-analysis of 5 RCTs in children that reduced SSBs and sugar-sweetened foods showed no change in body weight measured by standardised BMI or BMI z score. Evidence was less consistent in children than in adults because of low compliance with dietary advice; nutrition education alone as an intervention to reduce free sugars intake had a limited effect (6). The meta-analysis, however, did not include 2 more recent studies (106,107), which overcame the limitations of previous trials, or a case-control study (108). The double-blind placebo-controlled trial by de Ruyter et al (106) randomised 641 normal-weight Dutch children aged 5 to 11 years to an 18-month intervention (250 mL of a sugar-free, sucralose-sweetened beverage/day; 0 g sucrose = 0 kcal/serving) versus a control group (250 mL of SSB/day; 26 g sucrose = 104 kcal/serving). Compliance was measured by urinary sucralose. After 18 months, children receiving the noncalorically sweetened beverage had a lower BMI z score, smaller skinfold thickness and waist-to-hip ratio, and less fat mass than children receiving SSBs. A reduction of 104 kcal from SSBs/day (∼5% of daily energy for a 2000 kcal/day diet) was associated with 1.01 kg less weight gain over 1.5 years in normal-weight children.
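The energy arithmetic behind the 104 kcal figure in the de Ruyter trial can be checked directly (an illustrative sketch, not from the paper, assuming the standard 4 kcal/g energy value of carbohydrate and a 2000 kcal/day reference intake):

```python
SUCROSE_G_PER_SERVING = 26   # g sucrose in the 250 mL SSB serving (106)
KCAL_PER_G_SUGAR = 4         # standard energy value of carbohydrate
REFERENCE_KCAL_DAY = 2000    # reference daily energy intake assumed in the text

# Energy per serving and its share of daily energy
serving_kcal = SUCROSE_G_PER_SERVING * KCAL_PER_G_SUGAR
share_pct = serving_kcal / REFERENCE_KCAL_DAY * 100

print(serving_kcal)           # 104
print(round(share_pct, 1))    # 5.2 (~5% of daily energy, as stated)
```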
The results were similar for dropouts. This study had good retention rates, was sufficiently powered, and provided evidence that masked replacement of SSBs with noncaloric beverages reduces weight gain and fat accumulation in normal-weight children. Ebbeling et al (107) randomly assigned 224 overweight and obese US adolescents who regularly consumed SSBs or 100% fruit juice (1.7 servings/day at baseline in both groups) to an intervention (home delivery of water or noncaloric beverages for 1 year in place of SSBs) or a control group with usual consumption. After 1 year of active intervention, the intervention group consumed significantly fewer SSBs (mean ± SEM 0.2 ± 0.4 vs 0.9 ± 1.1 servings/day in the control group; 1 serving = 355 mL), gained less weight (mean difference ± SEM −1.9 ± 0.9 kg; P = 0.04), and had a smaller increase in BMI (mean difference ± SEM −0.57 ± 0.28 kg/m2; P = 0.045) than the control group. Both groups were followed up for an additional year without any intervention. At 2 years, the consumption of SSBs was still lower in the intervention group (mean ± SEM 0.4 ± 0.5 vs 0.8 ± 0.8 servings/day in the control group), but there was no longer a significant difference in weight or BMI between the groups. These RCTs provide some evidence that decreasing consumption of SSBs, as a part of an active intervention, may reduce childhood obesity (106,107). They suggest inadequate energy compensation (the degree of voluntary reduction in intake of other foods/drinks) for energy delivered as sugars. Both studies were included by SACN after their initial systematic review and contributed to upgrading their recommendation (7). A cluster RCT of a school-based education programme in 644 English children aged 7 to 11 years (overweight: 19% of girls, 21% of boys; obese: 10% of girls, 11% of boys in the study group, with similar figures in the control group) produced a reduction in the consumption of carbonated beverages, including both noncalorically sweetened beverages and SSBs.
This was associated with a reduction in the number of overweight and obese children after the 1-year intervention (included in the WHO (5,6) and SACN reviews (7,109)), but not 2 years after the educational programme was discontinued (110). This result supports a benefit of reducing SSB consumption, as part of an active intervention programme, on childhood obesity, but points to the need for continuing intervention to promote a healthy food environment and healthy behaviours in children to maintain the effect (107,110). An RCT investigated the effect of decreasing SSB consumption on body weight in US adolescents (13–18 years). An environmental intervention for 25 weeks almost completely eliminated SSB consumption. The beneficial effect of reducing SSB consumption on body weight increased with increasing baseline body weight: decreasing SSB consumption had a beneficial effect on body weight only in children in the upper tertile of BMI (included in the WHO review) (5,6,111). In addition to the reviews by WHO (5,6) and SACN (7), a systematic review and meta-analysis of studies in children, adolescents, and adults by Malik et al (112) concluded that SSB consumption promotes weight gain. Sensitivity analyses of RCTs in children showed more pronounced benefits in preventing weight gain in SSB substitution trials than in school-based educational programmes, and among overweight compared with normal-weight children. Kaiser et al (113) performed a meta-analysis of studies in children and adults that added SSBs to diets and reported dose-dependent increases in weight. A meta-analysis of studies attempting to reduce SSB consumption in children and adolescents showed an equivocal effect on BMI in all subjects, whereas there was greater weight loss/less weight gain in subjects who were overweight at baseline. Thus, the effect of SSBs may be more pronounced in obese children.
These RCTs are trials of behavioural modifications, and the findings are affected by intervention intensity and limited by adherence (114).

Intake of Sugars/Sugar-sweetened Beverages and Oral Health or Dental Caries

Sucrose is the most cariogenic sugar (33). It can form glucans that enable bacterial adhesion to teeth and limit diffusion of acid and buffers in the plaque (115,116). Dental diseases are the most prevalent noncommunicable diseases worldwide (5,117,118). Their treatment consumes 5% to 10% of healthcare costs in industrialised countries (5,117,119). SSB intake is associated with an increased risk of dental caries because of both the sugars and the acidity, which results in enamel erosion (120–122). The frequency of consumption of SSBs and sugar-containing foods, as well as oral hygiene, also plays a role; in some studies, results are adjusted for tooth-brushing frequency. The WHO systematic review included intervention studies if they altered sugars intake, provided information on dental caries, and lasted at least 1 year. Observational studies were included if they reported absolute or partial change in sugars intake and information on dental caries. Studies that reported solely on the frequency of sugars intake were excluded. The majority of studies were conducted in children (1 nonrandomised intervention study, 50 observational studies). Eighty-four percent of studies in children and 100% of studies in adults reported at least 1 positive association between sugars and caries. In most studies included in the systematic review, dental caries was diagnosed at the level of cavitation (an advanced stage). It was concluded that there is a moderate level of evidence that the incidence of caries is lower when free sugars intake is <10% of energy.
Regarding the <5% free sugars cut-off, a significant decrease in dental caries was observed in 18,447 Japanese children around the Second World War, when consumption of free sugars fell from 15 kg/person per year (ie, 5%–10% of energy) to <10 kg/person per year (ie, <5% of energy intake); however, the evidence was of low quality, as it came from ecological studies, which precludes linking exposure data to outcomes or any assessment of causality (5,102). The SACN systematic review on dental caries included cohort studies and trials conducted in children and adolescents. The cohort studies that adjusted results for tooth-brushing frequency were given more weight. Consumption of larger amounts of total sugars and sugar-containing foods/beverages, as well as greater frequency of consumption of sugar-containing foods/beverages (but not greater frequency of consumption of total sugars), was associated with a greater risk of dental caries in deciduous and permanent dentitions. The review concluded that there is consistent evidence from prospective cohort studies that the consumption of sugars is associated with an increased risk of dental caries (7). In a recent longitudinal study of 1274 US children, frequent SSB intake between 10 and 12 months was associated with a significantly greater likelihood of having dental caries at age 6 years; children with an average SSB intake frequency of ≥3 times/week had 83% higher odds of having dental caries by age 6 than children who were never fed SSBs, after adjusting for covariates including sweet foods intake. Infancy may be an important time for mothers to establish healthy beverage habits for their children (123), as well as good dental hygiene practice with regular tooth brushing.

Intake of Sugars/Sugar-sweetened Beverages and Type 2 Diabetes Mellitus and Cardiometabolic Risk

There are no relevant studies focusing on the intake of SSBs in children and adolescents and later T2D.
In adults, however, the SACN review concluded that there is consistent evidence from prospective cohort studies that the intake of SSBs is associated with an increased risk of T2D, and that the effect may be biologically relevant (7). Two meta-analyses of prospective studies in adults showed an association between SSB consumption and the incidence of T2D (23,124). Another meta-analysis of 10 large prospective studies in adults showed a significant reduction in the relative risk of T2D with adherence to a healthy dietary pattern that included decreased consumption of SSBs (125). The econometric analysis of Basu et al (126) ascertained that sugars meet the Bradford Hill criteria for causation for diabetes, including dose, duration, directionality, and precedence. There are several RCTs in adults using diets differing in the proportion of sugars in relation to blood pressure (127–131). In a cross-sectional study in adolescents, consumption of fructose and added sugars from SSBs was associated with higher blood pressure (132). SACN concluded that there was not enough evidence on the effect of sugars intake on cardiovascular diseases to draw conclusions (7); however, a number of studies published since this review suggest possible associations between sugars consumption and cardiovascular risk factors. A prospective cohort study suggested a significant relationship between added sugars consumption in adults and an increased risk of cardiovascular disease mortality (133). A systematic review and meta-analysis in adults on the association between sugars intake and blood pressure and lipids concluded that dietary sugars influence diastolic blood pressure and serum lipids. In trials that lasted ≥8 weeks, higher consumption of sugars was associated with higher blood pressure independent of the effect of sugars on body weight (134). A possible effect of sugars on blood pressure is also suggested by some reviews in children, adolescents, and adults (135,136).
Two studies showed a relationship between sugars consumption and markers of cardiovascular disease in adolescents (137,138). In a study of 559 adolescents aged 14 to 18 years living in the southern US, higher total fructose consumption (free fructose + 50% of free sucrose) was positively associated with multiple markers of increased risk for cardiovascular disease and T2D. The relationships were independent of potentially confounding factors, including physical activity, socioeconomic status, energy intake, and fibre consumption, and were modified by visceral obesity (137). Whether fructose has specific metabolic effects is still controversial (139). In a cross-sectional study of 2157 US adolescents aged 12 to 18 years, consumption of added sugars was positively associated with multiple measures known to increase cardiovascular disease risk. Added sugars intake was negatively correlated with mean high-density lipoprotein-cholesterol levels and positively correlated with low-density lipoprotein-cholesterol and triglyceride levels. Among overweight and obese adolescents, added sugars were positively correlated with the insulin resistance index (138). A recent scientific statement from the AHA reviewed cardiovascular disease risk outcomes (risk factors) associated with added sugars in children, including excess weight gain/obesity, elevated blood pressure and uric acid levels, dyslipidemia, and nonalcoholic fatty liver disease. The authors cite several epidemiological and clinical trial studies in which “excessive fructose intake resulted in increased blood pressure in children and young adults” and concluded that added sugars are a source of excess fructose, and that reducing fructose from added sugars is likely to decrease uric acid, possibly improving blood pressure in children (4).
Other Possible Health Effects of Sugars-containing Beverages

Malabsorption of sugars from fruit juice, especially when consumed in excessive amounts, or even in nonexcessive amounts (ie, 240 mL of apple juice) in susceptible infants and children, can result in chronic diarrhoea, flatulence, bloating, abdominal pain, and growth faltering in children (140–143), as well as in adults (144). Withdrawal of apple juice from the diets of susceptible children was curative in all cases (140). SSBs and fruit juices given to infants may displace human milk or infant formula, which may adversely affect nutrient supply and decrease dietary quality (7). Consumption of SSBs in children and adolescents is also associated with inadequate intake of calcium, iron, and vitamin A (145,146).

Metabolic and Satiety Responses to Fluid Versus Solid Forms of Sugars

The form (liquid or solid) of dietary intake is related to energy balance. In a 6-year longitudinal study of 359 Danish children aged 8 to 10 years, liquid sucrose consumption was more strongly associated with changes in WC and BMI z scores than solid sucrose consumption (147). Lee et al (148) used data from a 10-year study of 2021 US girls, aged 9 to 10 years at baseline, to determine whether the association with adiposity varies by the form (liquid vs solid) of sugars consumed. Before adjustment for total energy, each additional teaspoon of liquid or solid added sugar was significantly associated with an increase in WC and BMI z score. After adjustment for total energy intake, the association remained statistically significant only between liquid added sugars and WC among all subjects, and between solid added sugars and WC among overweight/obese subjects. There was no significant association with naturally occurring sugars. These findings suggest a positive association between added sugars intake (liquid and solid) and BMI that is mediated by total energy intake, and an association with WC that is independent of it.
Studies in adults suggest that whole foods are more satiating than liquid foods and that people do not compensate well for calories consumed as liquids by eating less food (130,149,150). A whole food decreases food intake at subsequent meals, whereas fibre added to a drink is not effective (2). Study participants consumed fewer calories at lunch after consuming apples than after consuming equal calories as apple sauce, apple juice, or apple juice with added fibre (151). Whole carrots were associated with lower calorie intake than carrot juice or a carrot juice cocktail that contained all the nutrients in carrots (152). In lean and obese adults, liquid foods elicited a weaker compensatory dietary response than solid foods (watermelon juice vs watermelon). Energy intake was 12.4% higher on the days the liquid forms of the high-carbohydrate foods were ingested, owing to a weaker satiety effect (153). Fruit juices have no nutritional advantages over whole fruits and, as they lack fibre, they are consumed more quickly than whole fruits (25).

WHAT SHOULD SUGARS BE REPLACED WITH IN PRODUCTS, OR IN THE DIET?

Effect of Replacing Sugars-containing Beverages With Water or Milk

A randomised, controlled cluster trial conducted by Muckelbauer et al (154) in 8-year-old German children in 32 elementary schools tested an education programme with environmental interventions (provision of drinking water in 17 schools; 15 control schools) and showed a modest reduction in the amount of SSBs consumed, which was associated with a 31% lower adjusted risk of overweight and obesity. A systematic review of 6 electronic databases, from inception to November 2013, included 6 cohort studies and 4 RCTs in children and adults and showed a potential beneficial effect on long-term body weight management when SSBs are replaced by water, tea, coffee (in adults) or, in some studies, low-calorie artificially sweetened beverages.
The optimal beverage alternative to SSBs may vary according to age group and/or disease outcome (155). A study examined the association between different types of beverage intake and substitution of SSBs by water, milk, or 100% fruit juice in relation to 6-year change in body fatness. A cohort of 358 children aged 9 years who participated in the Danish part of the European Youth Heart Study was followed for development of body fatness over 6 years. SSB intake was associated with long-term changes in body fatness in children. Replacing SSBs with water or milk, but not 100% fruit juice, was inversely associated with body fatness development (156). Secondary analysis of a nationally representative cross-sectional study of 3098 US children and adolescents (aged 2–19 years) found that each additional 235 mL serving of SSB corresponds to 106 kcal/day higher total energy intake. Replacing SSBs with water was associated with a significant decrease in total energy intake; each 1% of replacement was associated with 6.6 kcal lower daily energy intake and this reduction was not negated by compensatory increase in other food or beverages. The authors calculated that replacing all SSBs with water would result in an average net reduction of 235 kcal/day (157). A secondary analysis of data from a 1.5-year RCT designed to prevent overweight among Danish children (aged 2–6 years) showed that every 100 g/day increase in sugary drink intake was associated with 0.10 kg and 0.06 unit increases in body weight and BMI z score. Substitution of 100 g sugar-containing beverages/day with 100 g milk/day was inversely associated with Δ weight and Δ BMI z score. Sugary drink consumption was associated with body weight gain among young children with high predisposition for overweight (158). A 16-week intervention trial in 8 to 10-year-old Chilean children showed that replacing SSBs with milk may have beneficial effects on lean body mass and growth, with no changes in percentage body fat (159). 
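The serving figures used in the US cross-sectional analysis (157) imply a constant energy density that can be applied to other serving sizes (an illustrative sketch, not from the paper; assuming the density is uniform across SSBs):

```python
KCAL_PER_SERVING = 106   # kcal per 235 mL SSB serving, per (157)
SERVING_ML = 235

# Implied energy density, ~0.45 kcal/mL
DENSITY = KCAL_PER_SERVING / SERVING_ML

def ssb_kcal(volume_ml):
    """Approximate energy (kcal) in a given SSB volume, assuming the
    constant density implied by the serving figures in (157)."""
    return volume_ml * DENSITY

# Applied to the 355 mL serving used by Ebbeling et al (107):
print(round(ssb_kcal(355)))  # 160
```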
A systematic review of studies in adults showed that drinking water, versus SSBs or fruit juices, before a meal was associated with a lower energy intake: in short-term feeding trials in adults, drinking SSBs or fruit juices before a meal was associated with 7.8% or 14.4% higher total energy intake compared with drinking water (160). These findings suggest a role for water in reducing energy intake and in obesity prevention.

Effect of Replacing Sugars With Non-nutritive Sweeteners

Non-nutritive sweeteners (NNS, or noncaloric sweeteners) are low in or free of calories and include artificial sweeteners (aspartame, acesulfame-K, saccharin, sucralose, neotame, advantame) and low-calorie sweeteners such as stevia (a natural low-calorie sweetener) and sugar alcohols (161). A recent systematic review on early exposure of pregnant women, infants, or children below 12 years of age to NNS and long-term metabolic health concluded that the effect of NNS exposure on metabolic health in children is uncertain, with conflicting evidence regarding the effects on BMI gain and fat accumulation. No studies have investigated this association among pregnant women and infants. Further research is required to understand the long-term metabolic impact of NNS exposure during gestation, infancy, and childhood and to inform evidence-based recommendations for NNS use in this sensitive population (162). A recent secondary, explorative analysis of the double-blind RCT (106), which showed that replacement of 250 mL SSBs/day by a sugar-free drink for 18 months significantly reduced weight gain in Dutch children, aimed to estimate the extent of spontaneous compensation for changes in the intake of liquid kilocalories (ie, liquid sugars, SSB kcal) (163). Spontaneous compensation was more pronounced in children with a low BMI.
Relative to the SSB, consumption of the sugar-free beverage for 18 months reduced the BMI z score by 0.05 versus 0.21 SD units in the lower versus higher BMI group, and body weight gain by 0.62 versus 1.53 kg. A physiologically based model of growth and energy balance, used to estimate the degree to which children had compensated for the covertly removed sugar kilocalories by increasing their intake of other foods, predicted that children with a lower BMI had compensated for 65% of the removed energy, whereas children with a higher BMI had compensated for only 13%. The authors postulated that in children with higher BMI the sensing of kilocalories might be compromised (163). It has been suggested that artificial sweeteners may be associated with an increased risk of the same chronic diseases linked to sugars consumption, and that they can interfere with basic learning processes that serve to anticipate the normal consequences of consuming sugars, leading to overeating, diminished release of hormones such as glucagon-like peptide-1, and impaired blood glucose regulation. It is also possible that they may alter the gut microbiota, which could contribute to impaired glucose regulation, and that the use of artificial sweeteners may be particularly problematic in children, since exposure to hyper-sweetened foods and beverages at young ages may have effects on sweet preferences that persist into adulthood (164). For children, the long-term effects of consuming artificially sweetened beverages are unknown, so it has been proposed that it is best for children to avoid them (161) and that a focus on reducing sweetener intake (caloric or noncaloric) is a better strategy for combating overweight and obesity than the use of artificial sweeteners (164).
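The compensation estimates from the secondary analysis (163) translate into net daily energy deficits as follows (an illustrative sketch, not the authors' physiological model; the 104 kcal/day figure is the energy removed by substituting the sugar-free drink for the SSB in (106)):

```python
REMOVED_KCAL = 104   # kcal/day covertly removed by the sugar-free drink (106)

def net_deficit(compensation_fraction, removed_kcal=REMOVED_KCAL):
    """Daily energy deficit (kcal) remaining after a given fraction of the
    removed energy is voluntarily compensated by other foods."""
    return removed_kcal * (1 - compensation_fraction)

print(round(net_deficit(0.65)))  # lower-BMI children (65% compensation): 36
print(round(net_deficit(0.13)))  # higher-BMI children (13% compensation): 90
```

The larger residual deficit in the higher-BMI group is consistent with the larger reductions in BMI z score and weight gain reported for that group.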
The AHA and the American Diabetes Association concluded, however, that there are insufficient data to determine conclusively whether the use of NNS to displace caloric sweeteners in beverages and foods reduces added sugars or carbohydrate intakes, or benefits appetite, energy balance, body weight, or cardiometabolic risk factors (165). The American Academy of Pediatrics and the AHA also noted that data on NNS are scarce in terms of the long-term benefits for weight management in children and adolescents and the consequences of long-term consumption; they concluded that a recommendation for or against the routine use of NNS in the diets of children cannot be made at this time (4,166).

Effect of Replacing Sugars With Starch

A study in children (27 Latino, 16 African-American; aged 8–18 years) with obesity and metabolic syndrome investigated whether isocaloric replacement of sugars (disaccharides consisting of glucose and fructose) with starch (a polymeric carbohydrate consisting of glucose units) would improve metabolic parameters. Children consumed a diet restricted in added sugars (reduced from 28% to ≤10% of energy and substituted with starch; fructose was reduced to ≤4% of energy), whereas the percentage of energy from protein, fat, and carbohydrate remained unchanged. After 10 days, reductions in diastolic blood pressure, lactate, triglycerides, and low-density lipoprotein-cholesterol were noted; glucose tolerance and hyperinsulinemia improved significantly, while weight and fat-free mass decreased. In a subgroup of children (n = 10) who did not lose weight over the 10 days, hyperinsulinemia also improved significantly. After adjusting for the effects of energy, weight gain, and adiposity, isocaloric fructose restriction improved metabolic parameters in children with obesity and metabolic syndrome irrespective of weight change. The health detriments of sugars, specifically fructose, were independent of their caloric value or effects on weight (167).
SUMMARY AND CONCLUSIONS

Regarding the terminology, classification and definitions of sugars and sugar-containing beverages

- Existing studies use different definitions of sugars, so comparisons between studies are difficult.
- Smoothies and sweetened milks/milk products, including condensed milk, contain free sugars (above the level naturally present in milk or yoghurt) but are not classified as SSBs. They are, however, commonly consumed by children and represent an important source of free sugars in liquid form.

Regarding current recommendations and intakes of sugars, sugars-sweetened foods/beverages in children/adolescents

- There is no nutritional requirement for free sugars in infants, children and adolescents.
- Current average intake levels of sugars, particularly SSBs, among European children and adolescents far exceed recommended levels. Data are scarce for younger children.

Regarding the development of sweet taste and preference for sweet foods

- The preference for sweet taste is innate, has a strong genetic component and decreases with age. It may be modified or reinforced by pre- and postnatal exposures. Preference for sweet taste is driven by an interplay of many factors that involve feeding behaviour (reward system), food choices (senses and emotions) and taste (genetic and programming effects).
- Breast-feeding may be associated with greater acceptability of new foods and flavours.
- Observational studies show that SSB intake during infancy and early childhood is associated with SSB intake in childhood and adolescence, but cannot demonstrate that this is causal.

Regarding the evidence on health effects of sugars/sugar-containing beverages in infants, children and adolescents

- A higher than recommended intake of free sugars, particularly SSBs, in children and adolescents is associated with an increased incidence of dental caries and adiposity.
- A higher than recommended intake of added sugars among adolescents may be positively associated with multiple measures known to increase cardiovascular disease risk. Data in adolescents reflect interventional studies in adults suggesting that higher fructose consumption (from added sugars) is also associated with multiple factors that increase the risk for cardiovascular disease and T2D.
- Sugars-containing beverages do not promote satiety compared with the equivalent amount of sugars in solid form and therefore induce excessive energy intakes.

Regarding what sugars should be replaced by

- Reducing the intake of SSBs by replacing them with water in children and adolescents is associated with reduced weight and adiposity.
- Replacing free sugars with NNS may reduce energy intake in the short term, but their effectiveness and safety as a long-term weight management strategy remain to be evaluated. There is a lack of research in children on the effects of NNS.

Based on These Conclusions, the ESPGHAN Committee on Nutrition Recommends

- The WHO definition of “free sugars” should be used uniformly in dietary recommendations, studies, regulations and food labelling. “Free sugars” include monosaccharides and disaccharides added to foods and beverages by the manufacturer, cook or consumer, and sugars naturally present in honey, syrups, unsweetened fruit juices and fruit juice concentrates. This term describes sugars that may have physiological consequences different from those of intrinsic sugars incorporated within intact plant cell walls or lactose naturally present in milk.
- Smoothies and sweetened milk drinks/products (ie, milk products containing a higher concentration of sugars than unprocessed human, cow or goat milk, such as chocolate milks, condensed milks, fruit yoghurts) are not specifically mentioned in the WHO definition; however, they are an important source of free sugars and their intake should be limited.
- Sugars naturally present in intact fruits (fresh, frozen or dried) and lactose in amounts naturally present in human milk or infant formula, as well as in cow/goat milk and unsweetened milk products (eg, natural yoghurt), are not free sugars.
- Intakes of free sugars should be reduced and minimised, with a desirable upper limit of <5% of energy intake in children and adolescents aged 2 to 18 years. This represents 15 to 28 g of free sugars (3.5–7 teaspoons) for girls and 16 to 37 g (4–9 teaspoons) for boys, according to age (Table 4). Intakes should be even lower in infants and toddlers <2 years.
- Sugar-containing beverages and foods (SSBs, fruit juices, fruit-based smoothies and sweetened milk drinks/products) should be replaced by water or, in the latter case, by unsweetened milk drinks/products with lactose up to the amount naturally present in milk and unsweetened milk products.
- In Europe, the term “free sugars” should be included on food composition labels, expressed both in grams (under “Total Sugars”) and as % of daily energy intake. A more practical way of informing consumers might be to add the number of teaspoons of “total sugars” and “free sugars” (1 teaspoon = 4 g of sugars) on front-of-pack labels. However, further research testing consumer preferences and understanding is required.
- National authorities should adopt policies aimed at reducing the intake of free sugars in infants, children and adolescents. Depending on local circumstances, this may include education, improved labelling, restriction of advertising, introduction of standards for kindergarten and school meals that include limits on free sugars, and fiscal measures such as taxation of SSBs and sugar-rich foods and/or incentivising the purchase of healthy food.
- Sugars should preferably be consumed:
- in their natural form, such as human milk, milk, unsweetened dairy products and fresh fruits (Table 5, left column), rather than as SSBs, fruit juices, smoothies and/or sweetened milk drinks/products;
- as a part of a main meal, not as snacks.
- It is especially important to avoid or limit free sugars in infants and in obese/overweight children/adolescents.
- Parents should be educated about the importance of regular tooth brushing with fluoride toothpaste from the time the first tooth erupts.

SUGGESTED FUTURE RESEARCH DIRECTIONS

- Better understanding of how infants and toddlers develop their food preferences and self-regulatory mechanisms, especially for sweet foods, to enable the development of evidence-based guidance for caregivers on how to feed infants and toddlers to favourably influence children's intake patterns.
- Systematic calculation of the free sugars content of infant formulas and infant/toddler/child/adolescent foods and beverages should be promoted. This would allow the incorporation of “free sugars” for all foods and beverages into food composition tables and nutritional software programmes, more precise studies evaluating the intake and health effects of free sugars, better information for menu planners in kindergartens and schools, and better information for consumers.
- Long-term RCTs of dietary sugars consumption are difficult because there are no biomarkers of dietary total sugars and/or free sugars with which to measure compliance. The development of cheap and easy-to-use dietary biomarkers of total and free sugars intake suitable for use in large-scale studies is needed. They would enable objective assessment of dietary consumption without the bias of self-reported dietary intake.
- The hypothesis that the effect of SSBs may depend on baseline BMI should be further studied in randomised double-blind studies.
- The interaction between free sugars intake and the microbiome in infants, children and adolescents.

Nataša Fidler Mis acknowledges the support of the Slovenian Research Agency (P3-0395: Nutrition and Public Health).
Key words: caries; free sugars; obesity; overweight; paediatric; recommendations; sugar; sugar-sweetened beverages; sugar-containing beverages; sweet taste

© 2017 by European Society for Pediatric Gastroenterology, Hepatology, and Nutrition and North American Society for Pediatric Gastroenterology, Hepatology, and Nutrition
You are given a block of text which explains the theory of this concept. Once you have read the theory, do the exercises given below to test how well you have understood the ideas.

How to do the exercises: You are given a set of words and some sentences with input boxes, and you are required to use the words to complete the sentences correctly. You can put your chosen word into an input box by first clicking on the word and then on the input box, and the word will appear there. If it is correct, it will go green; if not, it will go red.

APPROPRIATE and SUITABLE

QUESTION: What's the difference between APPROPRIATE and SUITABLE?

APPROPRIATE and SUITABLE are both qualitative adjectives - i.e. they describe the quality of something - and are very similar in meaning and usage. They carry the meaning of 'fitted, suited to a purpose'. Both are placed as modifiers before nouns, and both are used as complements after the verb be, although appropriate is perhaps more commonly used in this way, especially with the pronoun it. Both are used with the preposition for and are often used with negative prefixes. The adjectival form suitable (for) sometimes crops up as the participle suited (to). Study the following examples:

- It is inappropriate to make jokes at funerals.
- It was inappropriate for her to joke with the Queen in such a light-hearted manner.
- The clothes she was wearing were quite unsuitable/inappropriate for the cold weather.
- Does this dress suit me? ~ Oh yes, it does. And it's very suitable/appropriate for formal occasions.
- It is a very violent film and is considered unsuitable/inappropriate for children to watch.
- I'm glad you praised him for that. It was an appropriate thing to do.
- He is just not suited to/suitable for this type of work. Such small flats are not really suitable for couples with young children. It is unsuitable/inappropriate accommodation.
In this quick tutorial you'll learn how to draw a Takin in 7 easy steps - great for kids and novice artists. The image above shows how your finished drawing is going to look and the steps involved. Below are the individual steps - you can click on each one for a high-resolution printable PDF version. At the bottom you can read some interesting facts about the Takin. Make sure you also check out any of the hundreds of drawing tutorials grouped by category.

How to Draw a Takin - Step-by-Step Tutorial

Step 1: Let's begin our Takin. First, draw a thick neck and a long face.

Step 2: Now draw in the detail for the face. Draw an oval for the eye, another for the nose, a line for the mouth, then triangles for the ear and a curved line downwards with squiggles for the long beard.

Step 3: Draw two curved horns pointing upward. The takin uses these to defend itself from predators!

Step 4: Alright, now for the body. Simply draw one long line at the top and one short line at the bottom, leaving space for the legs.

Step 5: You're getting there! Draw big, curved hind legs and mark off separate sections for the hooves.

Step 6: Then draw the two thinner front legs and their hooves.

Step 7: Finally, draw a small, thin tail bending downwards!

Interesting Facts about the Takin

The Takin is a member of the sheep family, and its scientific name is Budorcas taxicolor. Other common names for this animal are the Cattle Chamois, the Gnu Goat and the Goat-Antelope. They are similar to the Muskox and are the national animal of Bhutan. They have fluffy light brown coats with dark brown faces and legs. This animal is vulnerable to extinction.

Did you know?

- The animal was first documented in 1850.
- They are almost 5 feet tall.
- This species is almost 9 feet in length.
- They stand on their hind legs to reach food over 10 feet high.
- The animal weighs up to 1,300 pounds.
- They have horns up to 1 foot long.
- The animal can have hair over 9 inches in length.
- They live at almost 14,000 feet above sea level.

Their short legs end in hooves that are split into two toes. These animals eat leaves in forests, valleys, rocky mountains and grassy mountains. They feed in groups ranging from fewer than a couple of dozen members to more than a few hundred individuals. When threatened, they give a coughing sound to warn others and lie down in bushes.
How were thermometers invented?

Like most inventions, the thermometer was not invented in a day or by just one person. The first known attempt to connect the warming of a body with a fluid that increases its volume in a cylindrical container is attributed to the ancient Greek scientist Heron of Alexandria, who is thought to have lived in the first century AD; at least, that is the information we have today. His thermoscope, improved by Galileo in 1597, was not meant to measure heat but to demonstrate the change in the volume of air during heating. The thermoscope was a small glass ball with a thin glass tube welded to it. If the glass ball is heated and the end of the tube is then placed in a container of water, the air in the hollow ball gradually cools and contracts, and the tube draws in part of the water in which it is immersed; conversely, if the ball is heated again, the air within expands and pushes the water out of the tube. Obviously this thermoscope cannot measure temperature, because it has no graduated scale. It gives only an indication of the degree of heating of a body, and even that only approximately, because the water level in the tube depends not only on the temperature but also on the atmospheric pressure at the time of the experiment.

About a century later the device was improved by Florentine scientists, who drew the air from the flask and sealed it with a water tank at its lower end. Moreover, they marked a value scale on the tube, so the temperature could be measured not only qualitatively but also quantitatively. Later they substituted a little alcohol for the water and removed the tank. The principle of this device, primitive by today's standards, was based on the expansion of a body when heated. The fixed values at the two ends of the scale represented the temperatures of the hottest summer day and the coldest winter day.
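The thermoscope works because, at roughly constant pressure, a fixed amount of gas expands in proportion to its absolute temperature. As a rough sketch (the function name and figures here are our own illustration, not part of any historical account):

```python
# Illustration of the thermoscope principle: at roughly constant pressure,
# the volume of a fixed amount of gas scales with absolute temperature
# (Charles's law), so warmed air in the glass ball expands and moves the
# water level in the tube.

def charles_volume(v1_ml: float, t1_c: float, t2_c: float) -> float:
    """Volume after heating from t1_c to t2_c (Celsius), constant pressure."""
    t1_k = t1_c + 273.15  # ratios must use absolute temperature (kelvin)
    t2_k = t2_c + 273.15
    return v1_ml * t2_k / t1_k

v_cold = 100.0                      # ml of trapped air at 20 degC (example value)
v_warm = charles_volume(v_cold, 20.0, 30.0)
print(f"{v_warm:.2f} ml")           # prints "103.41 ml": a 10 degC rise expands the air by about 3.4%
```

This small fractional change also hints at why the early instruments were so sensitive to atmospheric pressure: the effect being read off is only a few percent to begin with.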
Later Lord Bacon, Robert Fludd, Cornelis Drebbel and others continued working on the improvement of this device. Without telling the whole story of the thermometer's improvements over the years, we will only note that, in order to compare different thermometers, two main reference points appeared in thermometer scales. One is absolute zero, and the other is the freezing point of water. The Celsius scale (Anders Celsius, a Swedish astronomer, geologist and meteorologist, 1701-1744) is offset by 273.15 degrees in relation to the Kelvin scale (William Thomson, or Lord Kelvin, a British physicist and engineer, 1824-1907), with 1 °C = 1 K. Note that in the international SI system kelvins are written simply as "K", without a degree sign, because the kelvin is not referred to as a degree. The point on the scale corresponding to -273.15 °C is zero kelvin (0 K). This is the absolute zero of physics, at which the movement of molecules reaches its minimum. And 0 °C is the temperature at which water freezes under normal conditions. However, if you go to the United States you will encounter a temperature scale measuring degrees in Fahrenheit, named after the German physicist Daniel Gabriel Fahrenheit, 1686-1736. This scale is full of arbitrary conventions and is hardly used outside the US and Belize. A change of 1 °C = 1 K corresponds to 1.8 °F. The Celsius zero sits at 32 °F, i.e. °F = 1.8 × °C + 32, and the boiling point of water is at 212 °F. There are 180 Fahrenheit degrees between the boiling point and the freezing point of water. If you are wondering why, the explanation is very subjective: 0 °F corresponds to the lowest temperature measured during the winter of 1708/1709, and 100 °F was chosen by Fahrenheit to be the body temperature of a horse (!), which is about 37.8 °C. Now, I guess, it becomes clear why the Fahrenheit scale has not achieved wider popularity and application.
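The scale relations above can be checked with a few one-line conversion functions. This is a minimal illustrative sketch; the function names are our own.

```python
def c_to_k(c):
    """Celsius to kelvin: the scales are offset by exactly 273.15."""
    return c + 273.15

def c_to_f(c):
    """Celsius to Fahrenheit: a 1 degree C change is 1.8 degrees F, with 0 C at 32 F."""
    return 1.8 * c + 32.0

def f_to_c(f):
    """Inverse of c_to_f."""
    return (f - 32.0) / 1.8

print(c_to_f(0))        # 32.0, the freezing point of water
print(c_to_f(100))      # 212.0, the boiling point of water
print(c_to_k(-273.15))  # 0.0, absolute zero
```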
Types of thermometers. Liquid thermometers are based on changes in the volume of the liquid that fills the thermometer, usually alcohol or mercury. Mercury is a highly toxic liquid metal whose vapours damage the respiratory airways and cell membranes. The European Commission prohibited the manufacture and sale of mercury thermometers in all EU countries from the beginning of 2009, which spurred the production of galinstan thermometers. Galinstan is a liquid alloy of gallium, indium and tin. The active part of digital thermometers is a conductor whose resistance changes with the ambient temperature. Many of these thermometers use thermocouples of two metals with different temperature-dependent conductivity, but the most accurate and most stable over time are resistance thermometers, which contain platinum wire or platinum on a ceramic base. The most commonly used element is the so-called platinum 100, or PT100, which at 0 °C has a resistance of 100 Ω, or platinum 1000, which at 0 °C has a resistance of 1000 Ω. The dependence of the resistance on the temperature is almost linear in the range of -200 to +850 °C, and this property is used for calibrating the thermometers. Optical thermometers are based on changes in the properties of optical fibers as the temperature changes; generally the fibers change their degree of light transmission as well as their spectral range. An infrared thermometer does not have to be in contact with the body whose temperature it measures. These thermometers are used more and more in industry and medicine, and recently in households. Finally, gas thermometers operate on the gas pressure law, first studied by Guillaume Amontons around 1702: when any gas is heated uniformly, there is a uniform increase in pressure, provided the volume of the gas is kept constant.
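The near-linear resistance-temperature relation of a PT100 element can be illustrated with the common linear approximation R = R0(1 + αT), where α ≈ 0.00385 per °C for standard industrial platinum. This is a sketch under those assumptions; the constants and function name are our own choices, not part of any instrument's API.

```python
R0 = 100.0        # PT100 resistance at 0 deg C, in ohms
ALPHA = 0.00385   # typical temperature coefficient for industrial platinum, 1/deg C

def pt100_temperature(resistance_ohm):
    """Approximate temperature in deg C from a PT100 resistance reading,
    using the linear model R = R0 * (1 + ALPHA * T)."""
    return (resistance_ohm / R0 - 1.0) / ALPHA

print(round(pt100_temperature(100.0), 2))   # 0.0 at the calibration point
print(round(pt100_temperature(138.5), 2))   # ~100.0, the nominal PT100 value at 100 deg C
```

In practice the standardized Callendar-Van Dusen polynomial is used for accuracy over the full -200 to +850 °C range; the linear form above is only a first approximation.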
Moreover, the dependence of the pressure on the temperature is linear, so pressure can serve as a quantitative indicator of temperature. In this way, when a gauge is connected to an air-tight container filled with gas, the gauge can be graduated to measure temperature instead of pressure. The gas thermometers containing hydrogen or helium are the most precise ones. It is also possible to classify thermometers according to their purpose of use, but this is not the subject of this article. By definition, weather-stations are a combination of various devices for measuring weather parameters. The main devices of this type are: a thermometer, for measuring the air temperature; a barometer, for measuring the atmospheric pressure; a hygrometer, for measuring humidity; an anemometer, for measuring the speed and direction of the wind; a rain gauge, for measuring rainfall; and so on. The building where meteorological observations are conducted is also called a weather-station. In this article, however, we will look at the so-called home weather-stations, which are used in households. Ever since the 19th century, public buildings and ships' dining rooms have held not only clocks and thermometers but also massive barometers in solid wooden cases. As electronics slowly entered households, compact digital weather-stations began to appear on the market. Through one or more sensors they indicate the inside and outside temperature, the humidity and the atmospheric pressure, and by analyzing the data with a small processor they make a weather forecast for one day ahead. These weather-stations run on replaceable batteries or are plugged into the mains. Often this small digital device is equipped, in addition to its weather sensors, with a digital alarm clock and calendar, displays the moon phases, etc. These multifunctional devices provide multiple data on one display, thus facilitating the daily lives of millions of people, and are also quite affordable.
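The constant-volume relation can be sketched numerically. Assuming ideal-gas behaviour and a reference pressure taken at the ice point (the reference value below is illustrative, not a calibration from the article), absolute temperature is simply proportional to absolute pressure:

```python
P0 = 101.325  # kPa, illustrative pressure reading at the ice point (273.15 K)

def temperature_from_pressure(p_kpa):
    """Temperature in deg C inferred from the absolute pressure of a
    constant-volume gas thermometer: P/T is constant, so T = T0 * P / P0."""
    t_kelvin = 273.15 * p_kpa / P0
    return t_kelvin - 273.15

print(round(temperature_from_pressure(101.325), 2))      # 0.0 at the ice point
print(round(temperature_from_pressure(2 * 101.325), 2))  # 273.15: doubling P doubles T
```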
Teachable moments and targeted instruction

When we are reviewing student work to plan for instruction, it can be helpful to think about "Writers Workshop" instead of "Writing Workshop". Teachers can feel so pressured to instruct skills and strategies that, when reviewing a piece of writing, we might automatically identify all the skills the student does not yet have. Practice responding to the writer first, then respond to the writing. Identify the one skill to reinforce, set a goal with the writer, and move on. When students practice writing every day, the stakes are lowered: they are not pressured to demonstrate cumulative skills in a single writing task. Ultimately, writing instruction is not about producing a perfect text; it is about fostering resiliency and fortitude in children. Throughout the process we want to preserve a child's confidence so they can carry on with the complex task of writing. Select a text, then use the Jigsaw protocol to read and discuss it. After reviewing work, this teacher has identified the needs of a small group of Kindergarten students. Watch the video (9:20) of a mini-lesson to see how she targets instruction in a small-group setting. After reviewing work, this teacher has identified the needs of an individual first-grade writer. Watch the video (7:40) of author Jennifer Serravallo targeting instruction during a writing conference. Look at samples of writing from your students to identify characteristics of writing development (or review the writing samples provided in grade bands). Chart the current stage of student writing development on a class record. Finally, with a partner or teaching team, plan a 5-10 minute mini-lesson based on the data for each of the stages of writing you have identified.
Answer to Question #67505 in Other English for Janet

Compose only one paragraph at most of summary. Remember that your audience has already read the story! The majority of the essay must move beyond summary to analysis. Explanation of Literary Terms: 1. Setting - The setting is where and when the story takes place. 2. Purpose - What was the author hoping to accomplish or communicate in writing this story? 3. Symbolism - A symbol is a character, place, thing or event that stands for something else, often an abstract idea. 4. Theme - A theme is a general message or insight into life revealed through a literary work. It is basically what the writing suggests about people or life.
The File Transfer Protocol (FTP) is a standard network protocol used for the transfer of computer files from a server to a client on a computer network. FTP is built on a client-server architecture and uses separate control and data connections between the client and the server. FTP users may authenticate themselves with a clear-text sign-in protocol, normally in the form of a username and password, but can connect anonymously if the server is configured to allow it. For secure transmission that protects the username and password and encrypts the content, FTP is often secured with SSL/TLS (FTPS). The SSH File Transfer Protocol (SFTP) is sometimes used instead; it is technologically different. The first FTP client applications were command-line programs developed before operating systems had graphical user interfaces, and they are still shipped with most Windows, Unix, and Linux operating systems. The ftp utility is used to connect to an FTP server, manage directories, and upload or download files. To connect to the FTP server, we type 'ftp' in the terminal window followed by the domain name 'domain.com' or the IP address of the FTP server. Replace the IP and domain in the examples with the IP address or domain of your FTP server. If you connect to a so-called anonymous FTP server, try "anonymous" as the user name and an empty password. The commands to list, move and create folders on an FTP server are almost the same as we would use locally on our computer: ls to list, cd to change directories, mkdir to create directories, and so on. Before downloading a file, we should set the local download directory by using the 'lcd' command. If you don't specify the download directory, the file will be downloaded to the directory you were in when you started the FTP session.
Now we can use the 'get' command to download a file; the usage is: get file. The file will be downloaded to the directory previously set with the 'lcd' command. To download several files we can use wildcards. In this example I will download all files with the .xls file extension: mget *.xls. We can upload files from the local directory where we made the FTP connection. To upload a file, we can use the 'put' command. When the file that you want to upload is not in the local directory, you can use the absolute path starting with "/" as well. To upload several files we can use the mput command, similar to the mget example above. Once we have finished the FTP work, we should close the connection for security reasons. There are three commands that we can use to close the connection (in most clients: bye, quit, and exit). Any of them will disconnect our PC from the FTP server.
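The same session can also be scripted. As an illustrative sketch (not part of the original tutorial), here is the equivalent flow using Python's standard-library ftplib; the host name and function name are placeholders of our own.

```python
import os
from ftplib import FTP

def download_xls_files(host, user="anonymous", passwd="", local_dir="."):
    """Log in, list the remote directory, and download every .xls file,
    mirroring the 'ftp', 'ls', 'lcd' and 'mget *.xls' steps of the tutorial."""
    with FTP(host) as ftp:                 # like: ftp domain.com
        ftp.login(user, passwd)            # like typing the user name and password
        for name in ftp.nlst():            # like: ls
            if name.endswith(".xls"):      # like: mget *.xls
                target = os.path.join(local_dir, name)  # local_dir plays the role of 'lcd'
                with open(target, "wb") as f:
                    ftp.retrbinary(f"RETR {name}", f.write)
    # leaving the 'with' block closes the connection, like 'bye' or 'quit'
```

Calling `download_xls_files("domain.com")` against a real server would fetch all .xls files into the current directory; the function only defines the session and performs no connection until called.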
Latin name: Otiorhynchus rugosostriatus. Reason for Concern: 1) Weevil larvae can feed on and girdle the roots, rootlets and basal crown area and are especially harmful to young plants. 2) Adults feed at night on leaves, notching the leaf edges. 3) Feeding damage results in stunted plants, poor yields, and possibly plant death. Larvae are legless, C-shaped, and white or pink with brown heads. Black Vine larvae are up to 1.3 cm long when fully grown; Strawberry Rough larvae are slightly smaller, and Strawberry Root larvae are the smallest. Adults are flightless, hard-shelled beetles with coarse punctures on their fused wing covers. They are oblong, with a broad snout, long, downward-curved mouthparts and elbowed antennae. They feed at night and hide around the crowns of plants during the day. Strawberry Root Weevils are about 1/5 inch long, shiny black with thinly scattered short yellowish hairs (pubescence), and reddish-brown antennae and legs. Black Vine Weevils are about 1/3 inch long, black with patches of orange or yellow scales. Rough Strawberry Root Weevils are slightly more than 1/4 inch long and resemble the Strawberry Root Weevil and Black Vine Weevil except in size; they are black, shiny, and without scales. Root weevil larvae overwinter 2 to 8 inches deep in the soil, feeding on small roots, which can quickly reduce the vigor of young plants. Adults (nearly all females) appear after bloom, continuing through and after harvest. The beetles do not fly but are strong walkers, able to climb into the canopy at night to feed on foliage. They lay their eggs around the crowns about 1 month after emergence (usually July or August). Eggs are deposited without fertilization in the soil around host plants. Females lay several eggs each day and usually lay about 200 eggs during their adult lifetime (90-100 days). The Rough Strawberry Root Weevil, unlike other weevil species, lays many of its eggs in late summer and early fall. Eggs hatch in 2 to 3 weeks.
The larvae work into the soil and feed on roots and crowns. Larvae grow slowly over the summer, molting 5 to 6 times. By late fall they have matured and are about 5/8 inch long. They enter a prepupal stage in an earthen cell and pupate the following spring or summer. There is 1 generation per year. Because they are flightless, spread can be relatively slow; however, they are active walkers. - Adults can be detected using pitfall traps or temporary shelters (tiles, cardboard, plywood, etc.). - Greater attention should be given to younger plantings, as weevils can girdle and kill young plants. - Start inspecting the soil in April for signs of damage from larvae. Watch for leaf notching, especially on sucker growth near the ground. - Adult weevils feed at night and usually return to the trash at the base of the plant during the day. - Weevils may stay in the foliage on cool, cloudy days, especially if the foliage is dense. - To confirm weevil contribution to low-vigor locations, dig up a plant and look for evidence of a grub population in the roots. Note the developmental stage to help time treatment.
Macular degeneration is the leading cause of severe, irreversible vision loss in people over age 60. It occurs when the small central portion of the retina, known as the macula, deteriorates. The retina is the light-sensing nerve tissue at the back of the eye. Because the disease develops as a person ages, it is often referred to as age-related macular degeneration (AMD). Although macular degeneration is almost never a totally blinding condition, it can be a source of significant visual disability. There are two main types of age-related macular degeneration:
An Act for the Gradual Abolition of Slavery, March 1, 1780

The enslavement of African servants has a long and dishonorable history in Pennsylvania. Even before William Penn received his charter to the province in 1681, the Dutch and Swedish settlers in the Delaware Valley held Africans as slaves. The Society of Friends, or Quakers, who began to arrive in the early 1680s, including Penn himself, owned slaves. Many African slaves came to Pennsylvania from the West Indies, where they had experienced a period of "seasoning", and entered the province through the port of Philadelphia. With few exceptions, they remained in the southeastern area, where they served as house servants, farmhands, laborers on iron plantations, and skilled craftsmen. Like other colonies, Pennsylvania enacted "Black codes": slaves were not allowed to meet in groups of more than four; they were not permitted to travel more than ten miles from their "master's" residence without his permission; they could not marry Europeans; they were not to be tried by juries; and they could not buy liquor. Nevertheless, slavery never was prominent in Pennsylvania. In 1700, when the colony's population was approximately 30,000, there were only about 1,000 slaves present. Even at the institution's numerical peak in 1750, slaves numbered only 6,000 of a total of 120,000 residents. Pennsylvania "had fewer slaves than New Jersey, and only half as many as New York." In Virginia, slaves constituted about half of the total population. In South Carolina, slaves outnumbered European settlers. Protests against slavery emerged shortly after Pennsylvania was established. Indeed, the first written protest in England's American colonies came from Germantown Friends in 1688. Numerous writers and speakers followed, including George Keith, Ralph Sandiford, Benjamin Lay, Anthony Benezet, and John Woolman. Most were Friends who based their objections on religious principles.
The Philadelphia Yearly Meeting of Friends criticized the importation of slaves in 1696, objected to slave trading in 1754, and in 1775 determined to disown members who would not free their slaves. In 1775, Pennsylvanians formed the Pennsylvania Abolition Society, the first of its kind in the nation. Throughout the 1700s, the Pennsylvania Assembly attempted to discourage the slave trade by taxing it repeatedly. In addition to these earlier influences, the ideology of the American Revolution stimulated the movement for the abolition of slavery in Pennsylvania. Inspired by the philosophy of natural rights, numerous pamphleteers charged that taxation by the British parliament made slaves of the American colonists. Several, such as Benjamin Rush, Thomas Paine, and Richard Wells, noted the hypocrisy of Americans "who condemned the tyranny of England's colonial policies…while holding one-fifth of the colonial population in chains." Expressing similar sentiments is the "Act for the Gradual Abolition of Slavery" passed by the Pennsylvania Assembly in 1780. It was the first such legislative enactment in America. Drafted by a committee of Revolutionary Pennsylvania's new political leaders and probably guided through the Assembly by George Bryan, the act begins with an expression of gratitude for deliverance from the "tyranny of Great Britain" and for the opportunity to "extend a portion of that freedom to others." It specified that every Negro and Mulatto child born within the State after the passing of the Act (1780) would be free upon reaching age twenty-eight. When released from slavery, they were to receive the same freedom dues and other privileges, "such as tools of their trade," as servants bound by indenture for four years. Slaves were to be registered, and those not recorded were to be free. The bill passed by a vote of 34 to 21.
The most consistent "opposition to abolition came from German Lutherans and Reformed representatives" from heavily German counties, at least seventy-five percent of whom voted against the bill. They probably feared that the emancipation of slaves would affect their social status in Pennsylvania. Episcopal and Presbyterian representatives split on the issue. Pennsylvania's Act for the Gradual Abolition of Slavery was the most conservative of the laws emancipating slaves that were passed in northern states between 1780 and 1804. The law freed few slaves immediately. Although Pennsylvanians could no longer legally import slaves, they could buy and sell those who had been registered. Indeed, some pro-slavery residents of counties along the Delaware and Maryland borders violated the law and continued to buy slaves from those states until the law was tightened in 1788. In 1781, conservative assemblymen attempted to extend the registration deadline and to re-enslave those whom courts had declared free because their owners had failed to register them in time. Simultaneously, they also attempted to repeal the 1780 Gradual Abolition Act. Only with great effort were slavery's opponents able to defeat these attempted revisions. Despite such resistance to change, slavery declined after the passage of the act. In addition to emphasizing slavery's inconsistency with religious beliefs and philosophical principles, opponents pointed to its increasingly evident economic impracticality. Some owners freed their slaves during their lifetimes, while others provided for freedom in their wills. The Pennsylvania Abolition Society purchased a significant number of slaves and promptly set them free. Furthermore, some slaves did not wait for such humanitarianism or for the Act to set them free, but escaped from bondage. Between 1790 and 1800, the number of slaves dropped from 3,737 to 1,706, and by 1810 to 795. In 1840, there still were 64 slaves in the state, but by 1850 there were none.
The act for the Gradual Abolition of Slavery in Pennsylvania had achieved its sponsors' objectives--very gradually.
Sometimes in science, the answer you end up with is not exactly the question you started with. The path to discovery is not always predictable. Researchers have to constantly evaluate what they are finding, and be ready to adjust their course when the data leads down a different path. This is especially true in tropical ecology, where there is so much basic information yet to be learned. Such is the case with our new paper published this week in the Proceedings of the National Academy of Sciences (PNAS). We started tracking the fate of tropical seeds with small radio-transmitters because we thought that the predation of agoutis (the main mover of palm seeds) by ocelots (the main predator of agoutis) would leave a bunch of "orphan seeds" buried in the forest where no other agoutis would discover them. These orphaned seeds would thus be free to germinate and grow into new palm trees. It was a cool idea, and would show how predators affect prey, ultimately trickling down through the trophic levels to affect seed survival, forest regeneration, etc. We had all the hypotheses, sub-hypotheses, and sub-sub-hypotheses worked out. Now we just had to go into the jungle and prove ourselves right. We set out to map all the palm trees, radio-collar a bunch of agoutis, have them disperse our special radio-tagged seeds, and then wait for the ocelots to pick them off one by one. Earlier research suggested that only about 1/3 of these rodents survive one year, with most falling to the island's ocelots. If we did our part, we knew we could count on the ocelots to do theirs. This was actually a huge amount of work: we needed "our agouti" to move "our seed" and bury it in a little hole for safe-keeping. Camera traps told us whether one of "our agoutis" moved a particular seed, and more often than not it was an un-marked agouti, or a rat or squirrel.
Initially animals just ate most of the seeds, but once they recovered from the recently-ended hungry season, they started storing seeds in scattered underground caches for later, when little fresh fruit would be available. Finally our radio-tagged seeds were moving. Only, and here's where the change in the path to discovery comes in, the seeds didn't stop moving. Once a seed was buried, we figured we'd just sit and wait till it was dug up and eaten, sometime in the next few months or a year. Instead, the seeds were quickly dug up, moved, and buried again, and again, and again. During our first season of field work this high rate of movement caught us off guard, and the additional work of tracking down these crazy seed movements completely wore down everyone on the project. Given the super-high rates of seed movement, we realized we needed to look for (actually, listen for radio-signals from) moving seeds every single day. Even daily checks didn't catch all the movements, because we observed some seeds actually move twice in one day. What the heck was going on? Why were agoutis moving seeds so often? Some seeds were going hundreds of meters. Were agoutis shifting home ranges and taking their seeds with them? Or were there thieves amongst us? For our second field season we decided to switch tactics a bit and investigate this new research path illuminated by the crazy seed movements. We mounted a major trapping effort to try to capture and mark as many agoutis as we could. By being able to recognize lots of animals in one area, we hoped to determine who was taking the seeds. We hid motion-sensitive cameras next to the buried seeds to see which animals dug the seeds up. Our videos (example above) showed that most (84%) of the seeds were being stolen by robber agoutis. These unscrupulous rodents weren't just eating the buried treasure, but often moved it over to the center of their territory, where they could more easily find it during the upcoming hungry season.
This repeated thievery resulted in seeds moving much further than you would expect from a single agouti. Slightly more than 1/3 of the seeds moved more than 100 m, which is typically considered far enough to escape competition from sibling seeds that just drop underneath the mother tree. One seed was cached 36 different times, traveling over 749 m back and forth between territories until it ended up 280 m from its starting point. We made a movie illustrating this amazing amount of movement (shown below with a fun soundtrack). Although our test of the predator-mediated seed dispersal hypothesis didn't go off exactly as planned, our results incidentally disproved it. Even if seeds do become "orphaned" by predated agoutis, we now know that the rates of seed theft are so high that these orphaned seeds still have a good probability of being discovered. While this particular route of influence between predators, prey, and trees is probably not important to forest dynamics, our other work shows how other behavior of these agoutis is heavily influenced by the threat of predation (a recent Biotropica paper, and another one in the works). This discovery of robber rodents helping trees by moving their seeds long distances was made even more interesting by the fact that the dispersal of this particular type of tree has been a tropical enigma since Janzen and Martin published "Neotropical anachronisms: the fruits the gomphotheres ate" in 1982. This paper, and dozens since it, suggested that the very largest fruits and seeds found in the Neotropics must have co-evolved to be dispersed by the now-extinct Pleistocene megafauna. How these trees have survived the more than 10,000 years since the megafaunal extinction has puzzled tropical ecologists for decades. These results are also important when applied to current mammalian extinctions.
If tree species are able to survive due to "disperser substitution", maybe this holds a glimmer of hope for trees that are dispersed by mammals currently being hunted to extinction or local extirpation. Alternately, our results also show how important a role these little agoutis can play in their ecosystems. When poaching gets so bad that it also depletes these smaller-sized mammals, the trees' seeds may have no chance to survive. Our accidental discovery of robbing rodents offers a new potential answer to this mystery, and highlights the potential rewards of following thieves down the dark and mysterious scientific path to discovery. By Roland Kays
High Temperature Superconducting Technology for Next-Generation Power Generation a Success

GE's superconducting technology research offers advantages in efficiency and reductions in size, mass, and weight when compared with conventional machines. The impact on energy production from alternative energy sources could be substantial. GE's Power Conversion business has taken an important step in testing a viable way of producing large amounts of electricity from renewable resources using superconductors running at relatively high temperatures. The company has successfully completed trials of Hydrogenie, a power generator incorporating groundbreaking technologies that enable highly efficient production of electricity in a small space. Hydrogenie makes use of superconductors instead of copper for the rotor windings of the machine, operating at 43 kelvin, or about -230 °C. Late last year it was tested up to, and well beyond, its full rated load of 1.7 MW while spinning at 214 rpm, and it met expectations and design predictions. The tests were carried out at a GE Power Conversion facility in Rugby, England. Until recently, superconductivity could only be achieved at around 4 K (-269 °C). But new "high temperature superconductors" (HTS) exhibit the phenomenon at much higher temperatures. Such machines will need less complex insulation systems and less powerful cooling than are used on devices such as medical MRI magnets. "This technology is a true breakthrough," says Martin Ingles, Hydrogenie project manager at GE Power Conversion. "It could radically improve the efficiency of equipment producing electricity from water and from wind and may also be suitable for further applications down the road." The latest superconductors are made by depositing a superconducting layer of ceramic onto a relatively cheap base metal.
They have virtually no resistance to electrical current when cooled to very low temperatures, so windings can be made with wires having a cross section around 2 percent of a conventional copper wire winding. More windings can be fitted into electromagnet coils, resulting in a higher power magnet that is substantially smaller or lighter than before. Superconductivity offers significant advantages in efficiency and significant weight reductions compared with conventional machines. The greatest benefits in terms of size and mass reduction are for applications where high torque machines are typically used, most likely as a direct-drive application in installations such as wind turbines, ship propulsion or run-of-river hydro plants. GE has overcome significant technical challenges relating to the cryogenic cooling and thermal insulation required to keep the superconductors at the required temperature. Extremely cold helium gas is piped through a rotating coupling into the machine rotor and then circulated around the individual coils. “It’s rather like trying to keep ice cubes frozen on a rotisserie in a very hot oven,” says Ingles. “Except that our rotisserie is rather high tech.” The rotor is located inside a vacuum, but still has some direct contact, via its shaft, with the outside world. This creates issues relating to the massive temperature differences along the shaft. The machine incorporates a patented method for transferring torque from cold HTS coils to the machine rotor. Low resistance thermal joints and assemblies ensure that low cooling power is required to cool the coils. In fact, the machine demonstrates all of the technologies required to make HTS machines a commercial reality. GE’s Power Conversion business did much of the development of the Hydrogenie 1.7 MW 214 rpm HTS generator as part of the E.U. Framework Programme 6 funded project that ran between 2006 and 2010. 
The successful completion of the Hydrogenie project sets the framework for continued research and development on superconducting machines. One specific area that may benefit in the future is the upgrading of older run-of-river power plants: coupled with running the machine and turbine at variable speed, the technology could allow efficiency improvements of up to 12 percent at part load. The technology building blocks developed as part of the project will also be used in other businesses where high-torque, slow-speed machines are in use. The most immediate areas of demand are wind power generation and marine propulsion. A superconducting wind turbine generator may permit significant reductions in the mass mounted on the tower, helping to reduce the cost of the tower itself and its foundations. Recent studies conducted for GE Power Conversion show that the lifetime energy saving for a superconducting wind turbine compared to a conventional machine could be as much as 20 percent for offshore or desert machines above 10 MW.
NAG Library Chapter Introduction
g04 – Analysis of Variance

1 Scope of the Chapter

This chapter is concerned with methods for analysing the results of designed experiments. The range of experiments covered includes:
- single factor designs with equal sized blocks such as randomized complete block and balanced incomplete block designs,
- row and column designs such as Latin squares, and
- complete factorial designs.
Further designs may be analysed by combining the analyses provided by multiple calls to functions or by using the general linear model functions provided in Chapter g02.

2 Background to the Problems

2.1 Experimental Designs

An experimental design consists of a plan for allocating a set of controlled conditions, the treatments, to subsets of the experimental material, the plots or units. Two examples are:
(a) In an experiment to examine the effects of different diets on the growth of chickens, the chickens were kept in pens and a different diet was fed to the birds in each pen. In this example the pens are the units and the different diets are the treatments.
(b) In an experiment to compare four materials for wear-loss, a sample from each of the materials is tested in a machine that simulates wear. The machine can take four samples at a time and a number of runs are made. In this experiment the treatments are the materials and the units are the samples from the materials.
In designing an experiment the following principles are important.
- Randomisation: given the overall plan of the experiment, the final allocation of treatments to units is performed using a suitable random allocation. This avoids the possibility of a systematic bias in the allocation and gives a basis for the statistical analysis of the experiment.
- Replication: each treatment should be 'observed' more than once. So in example (b) more than one sample from each material should be tested. Replication allows an estimate of the variability of the treatment effect to be measured.
- Blocking: in many situations the experimental material will not be homogeneous and there may be some form of systematic variation in it. In order to reduce the effect of this systematic variation the material can be grouped into blocks so that units within a block are similar but there is variation between blocks. For example, in an animal experiment litters may be considered as blocks; in an industrial experiment a block may be material from one production batch.
- Factorial designs: if more than one type of treatment is under consideration, for example the effect of changes in temperature and changes in pressure, a factorial design consists of looking at all combinations of temperature and pressure. The different types of treatment are known as factors and the different values of the factors considered in the experiment are known as levels. So if three temperatures and four different pressures were being considered, then the first factor (temperature) would have three levels and the second factor (pressure) would have four levels, and the design would be a 3 by 4 factorial giving a total of twelve treatment combinations. This design has the advantage of being able to detect the interaction between factors, that is, the effect of the combination of factors.
The following are examples of standard experimental designs; in the descriptions, it is assumed that there are t treatments.
- Completely Randomised Design: there are no blocks and the treatments are allocated to units at random.
- Randomised Complete Block Design: the experimental units are grouped into blocks of t units and each treatment occurs once in each block. The treatments are allocated to units within blocks at random.
- Latin Square Designs: the units can be represented as the cells of a t by t square classified by rows and columns. The rows and columns represent sources of variation in the experimental material. The design allocates the treatments to the units so that each treatment occurs once in each row and each column.
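The temperature-and-pressure factorial enumeration described above can be sketched in Python; the numeric level values are made up for illustration:

```python
from itertools import product

# Hypothetical factor levels: three temperatures and four pressures
temperatures = [100, 150, 200]       # factor 1: three levels
pressures = [1.0, 1.5, 2.0, 2.5]     # factor 2: four levels

# A complete factorial design uses every combination of levels
combinations = list(product(temperatures, pressures))
print(len(combinations))             # 3 x 4 = 12 treatment combinations
```

Each element of `combinations` is one treatment combination to be allocated (at random) to an experimental unit.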
- Balanced Incomplete Block Designs: the experimental units are grouped into blocks of k units, where k is less than the number of treatments. The treatments are allocated so that each treatment is replicated the same number of times and each treatment occurs in the same block with any other treatment the same number of times. The treatments are allocated to units within blocks at random.
- Complete Factorial Experiments: if there are v treatment combinations derived from the levels of all factors then either there are no blocks or the blocks are of size v units.
Other designs include: partially balanced incomplete block designs, split-plot designs, factorial designs with confounding, and fractional factorial designs. For further information on these designs, see Cochran and Cox (1957), Davis (1978) or John and Quenouille (1977).

2.2 Analysis of Variance

The analysis of a designed experiment usually consists of two stages. The first is the computation of the estimate of variance of the underlying random variation in the experiment, along with tests for the overall effect of treatments. This results in an analysis of variance (ANOVA) table. The second stage is a more detailed examination of the effect of different treatments, either by comparing the difference in treatment means with an appropriate standard error or by the use of orthogonal contrasts.

The analysis assumes a linear model such as

  y_ij = mu + beta_i + tau_j + e_ij,

where y_ij is the observed value for unit j of block i, mu is the overall mean, beta_i is the effect of the ith block, tau_j is the effect of the jth treatment, which has been applied to the unit, and e_ij is the random error term associated with this unit. The expected value of e_ij is zero and its variance is sigma^2.

In the analysis of variance, the total variation, measured by the sum of squares of observations about the overall mean, is partitioned into the sum of squares due to blocks, the sum of squares due to treatments, and a residual or error sum of squares. This partition is returned through the corresponding output arguments of the analysis functions.
In parallel to the partition of the sum of squares there is a partition of the degrees of freedom associated with the sums of squares. The total degrees of freedom is n - 1, where n is the number of observations. This is partitioned into degrees of freedom for blocks, degrees of freedom for treatments, and degrees of freedom for the residual sum of squares. From these the mean squares can be computed as the sums of squares divided by their degrees of freedom. The residual mean square is an estimate of sigma^2. An F-test for an overall effect of the treatments can be calculated as the ratio of the treatment mean square to the residual mean square.

For row and column designs the model is

  y_ijk = mu + rho_i + gamma_j + tau_k + e_ijk,

where rho_i is the effect of the ith row, gamma_j is the effect of the jth column and tau_k is the effect of the kth treatment. Usually the rows and columns are orthogonal. In the analysis of variance the total variation is partitioned into rows, columns, treatments and residual.

In the case of factorial experiments, the treatment sum of squares and degrees of freedom may be partitioned into main effects for the factors and interactions between factors. The main effect of a factor is the effect of the factor averaged over all other factors. The interaction between two factors is the additional effect of the combination of the two factors, over and above the additive effects of the two factors, averaged over all other factors. For a factorial experiment in blocks with two factors, A and B, in which the kth unit of the ith block received level l of factor A and level m of factor B, the model is

  y_iklm = mu + beta_i + alpha_l + gamma_m + (alpha.gamma)_lm + e_iklm,

where alpha_l is the main effect of level l of factor A, gamma_m is the main effect of level m of factor B, and (alpha.gamma)_lm is the interaction between level l of A and level m of B. Higher-order interactions can be defined in a similar way. Once the significant treatment effects have been uncovered they can be further investigated by comparing the differences between the means with the appropriate standard error. Some of the assumptions of the analysis can be checked by examining the residuals.
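As a plain illustration of the sum-of-squares and degrees-of-freedom partition (a Python sketch, not the NAG implementation), the ANOVA table for a randomised complete block design can be computed as:

```python
def block_anova(y):
    """ANOVA for a randomised complete block design.
    y[i][j] = observation for treatment j in block i."""
    b, t = len(y), len(y[0])             # b blocks, t treatments
    n = b * t
    grand = sum(sum(row) for row in y) / n
    block_means = [sum(row) / t for row in y]
    treat_means = [sum(y[i][j] for i in range(b)) / b for j in range(t)]

    # Partition of the total sum of squares about the grand mean
    ss_total = sum((y[i][j] - grand) ** 2 for i in range(b) for j in range(t))
    ss_block = t * sum((m - grand) ** 2 for m in block_means)
    ss_treat = b * sum((m - grand) ** 2 for m in treat_means)
    ss_resid = ss_total - ss_block - ss_treat

    # Matching partition of the n - 1 total degrees of freedom
    df_block, df_treat = b - 1, t - 1
    df_resid = (b - 1) * (t - 1)
    ms_treat = ss_treat / df_treat
    ms_resid = ss_resid / df_resid       # estimate of sigma^2
    f_treat = ms_treat / ms_resid        # F-test for the treatment effect
    return {"ss": (ss_block, ss_treat, ss_resid),
            "df": (df_block, df_treat, df_resid),
            "F": f_treat}

table = block_anova([[1, 2, 4], [2, 3, 3]])   # 2 blocks x 3 treatments
print(table["F"])                             # treatment F statistic
```

The data values here are invented; the point is that the three sums of squares and their degrees of freedom add up to the totals, exactly as described above.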
3 Recommendations on Choice and Use of Available Functions

This chapter contains functions that can handle a wide range of experimental designs, plus functions for further analysis and a function to compute dummy variables for use in a general linear model.

nag_anova_random (g04bbc) computes the analysis of variance and treatment means with standard errors for any block design with equal-sized blocks. The function will handle both complete block designs and balanced and partially balanced incomplete block designs.

nag_anova_row_col (g04bcc) computes the analysis of variance and treatment means with standard errors for row and column designs such as a Latin square.

nag_anova_factorial (g04cac) computes the analysis of variance and treatment means with standard errors for a complete factorial experiment.

Other designs can be analysed by combinations of calls to nag_anova_random (g04bbc), nag_anova_row_col (g04bcc) and nag_anova_factorial (g04cac). The functions compute the residuals from the model specified by the design, so these can then be input as the response variable in a second call to one of the functions. For example, a factorial experiment in a Latin square design can be analysed by first calling nag_anova_row_col (g04bcc) to remove the row and column effects, and then calling nag_anova_factorial (g04cac) with the residuals from nag_anova_row_col (g04bcc) as the response variable to compute the ANOVA for the treatments. Another example would be to use both nag_regsn_mult_linear (g02dac) and nag_anova_random (g04bbc) to compute an analysis of covariance. It is also possible to analyse factorial experiments in which some effects have been confounded with blocks, or some fractional factorial experiments; for examples see Morgan (1993).

For experiments with missing values, these values can be estimated by using the Healy and Westmacott procedure; see John and Quenouille (1977). This procedure involves starting with initial estimates for the missing values and then making adjustments based on the residuals from the analysis.
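A minimal Python sketch of this iterative missing-value procedure, assuming a simple additive block-plus-treatment model on a complete block layout (a simplification for illustration, not the NAG code):

```python
def fill_missing(y, missing, tol=1e-8, max_iter=200):
    """Healy-Westmacott-style estimation of missing values in a
    randomised complete block layout.
    y[i][j] is the observation grid; `missing` lists (i, j) cells.
    Missing cells start at the grand mean of the observed values and
    are repeatedly replaced by their fitted values from the additive
    block + treatment model until the estimates stabilise."""
    b, t = len(y), len(y[0])
    y = [row[:] for row in y]                       # work on a copy
    observed = [y[i][j] for i in range(b) for j in range(t)
                if (i, j) not in missing]
    start = sum(observed) / len(observed)           # initial estimates
    for i, j in missing:
        y[i][j] = start
    for _ in range(max_iter):
        grand = sum(sum(row) for row in y) / (b * t)
        block = [sum(row) / t for row in y]
        treat = [sum(y[i][j] for i in range(b)) / b for j in range(t)]
        shift = 0.0
        for i, j in missing:
            fit = block[i] + treat[j] - grand       # additive fitted value
            shift = max(shift, abs(fit - y[i][j]))
            y[i][j] = fit
        if shift < tol:
            break
    return y
```

Each pass refits the additive model and replaces the missing cells with their fitted values; for a single missing cell this converges to the usual least-squares estimate.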
The improved estimates are then used in further iterations of the process.

For designs that cannot be analysed by the above approach, the function nag_dummy_vars (g04eac) can be used to compute dummy variables from the classification variables or factors that define the design. These dummy variables can then be used with the general linear model function nag_regsn_mult_linear (g02dac).

In addition to the functions for computing the means and the basic analysis of variance, one function is available for further analysis: it computes simultaneous confidence intervals for the differences between means, with a choice of methods such as the Tukey–Kramer, Bonferroni and Dunn–Šidák.

4 Functionality Index

Analysis of variance for:
- general block design or completely randomized design

5 Functions Withdrawn or Scheduled for Withdrawal

References

Cochran W G and Cox G M (1957) Experimental Designs Wiley
Davis O L (1978) The Design and Analysis of Industrial Experiments Longman
John J A (1987) Cyclic Designs Chapman and Hall
John J A and Quenouille M H (1977) Experiments: Design and Analysis Griffin
Morgan G W (1993) Analysis of variance using the NAG Fortran Library: Examples from Cochran and Cox NAG Technical Report TR 3/93 NAG Ltd, Oxford
Searle S R (1971) Linear Models Wiley
Disorders of the Aorta

The aorta carries oxygenated blood from the heart to all parts of the body. Although aortic disease is not the most common cause of heart disease, disorders of the aorta can be life threatening. Common aortic abnormalities include:

Aortic Dissection

Aortic dissection, also known as a dissecting aortic aneurysm, is a potentially life-threatening condition in which there is bleeding into and along the wall of the aorta, the major artery carrying blood out of the heart. Aortic dissection most often occurs because of a tear or damage to the inner wall of the aorta. This usually occurs in the thoracic (chest) portion of the artery, but may also occur in the abdominal portion. An aortic dissection is classified as type A or B depending on where it begins and ends.
- Type A begins in the first (ascending) part of the aorta.
- Type B begins in the descending part of the aorta.
When a tear occurs, it creates two channels: one in which blood continues to travel and another where blood remains still. As the aortic dissection grows bigger, the channel with non-traveling blood can get bigger and push on other branches of the aorta. The exact cause is unknown, but risks include atherosclerosis (hardening of the arteries) and high blood pressure. Traumatic injury is a major cause of aortic dissection, especially blunt trauma to the chest.

Thoracic Aortic Aneurysms

A thoracic aortic aneurysm is a widening (bulging) of part of the wall of the aorta, the body's largest artery. Thoracic aneurysms most often occur in the descending thoracic aorta; others may appear in the ascending aorta or the aortic arch. The most common cause of a thoracic aortic aneurysm is hardening of the arteries (atherosclerosis). Most patients have no symptoms until the aneurysm begins to leak or expand. Chest or back pain may mean sudden widening or leakage of the aneurysm.
Marfan syndrome and Loeys-Dietz syndrome are disorders of connective tissue caused by gene defects that can lead to excessive growth of the long bones of the body. These syndromes can affect the cardiovascular system because the aorta, the main blood vessel that takes blood from the heart to the body, may stretch or become weak (called aortic dilation or aortic aneurysm).
The Sound Toll Registers (STR) are the accounts of the toll which the king of Denmark levied on shipping through the Sound, the strait between Sweden and Denmark. They have been preserved (with gaps in the first decades) for the period from 1497 to 1857, when the toll was abolished. From 1574 on, the series is almost complete. The STR are kept by the Danish National Archives (Rigsarkivet) in Copenhagen. The STR contain data on 1.8 million passages. The officials of the tollhouse at Elsinore recorded, in principle, the following data for each passage:
- the date
- the name of the shipmaster
- the domicile of the shipmaster
- the port of departure
- the port of destination (from the mid-1660s)
- the composition of the cargo
- the toll
The STR are an important source for research into trade, transport, production and consumption in Europe, and also into the lives of the shipmasters in the merchant marine. They are among the most important serial sources for the economic and maritime history of the Netherlands and Europe. A large portion of the commodity transport within Europe went via the Sound. The STR contain data on that exchange for a period of more than three and a half centuries. There is no other source for European shipping and trade which covers a period of that length.
Everyone can spot a watermelon fruit, but fewer people know what the vine and its leaves look like when they visit a spacious vegetable garden. Native to Africa, watermelon (Citrullus lanatus) quickly sprouts, grows, flowers and sets fruit within one growing season, making it an annual vine. Plant it in sunny, moist but well-draining soil when no frosts are expected. Seeded and seedless types of watermelons all look the same until you slice open the juicy fruits.

Look at the stems of the plant. Watermelon's hairy stems sprawl across the soil or ramble atop nearby shrubbery, growing as long as 20 to 30 feet; look for curly tendrils emanating from the stem tips.

Note the foliage. Though similar to those of a pumpkin, squash or cucumber, a watermelon plant's leaves grow no larger than an adult's hand. Look for leaves that are medium to light green with ornate lobes. Each leaf has a hairy surface and three to five lobes, with possible rounded teeth on each lobe. Even newly germinating watermelon plants quickly develop leaves with these lobes, helping identify plants even before the long stems develop.

Examine the flowers, if necessary, noting their light green-yellow color and the five fused petals. Each blossom contains both sex organs: the female pistil and the male stamens that bear pollen. According to the University of Delaware College of Agriculture, the flowers open an hour or two after sunrise each day.

Look for fruits. Even small, immature fruits have thick, smooth and waxy skin. When small, they typically remain solid green but then develop stripes or mottled blotches of various green tones. Some varieties mature with dark green skin.
C. Mass Imprisonment and Voter Disenfranchisement When the nation was founded in the late 1700s, the vast majority of people in the United States were ineligible to participate in democratic life. Excluded were women, blacks, Native Americans, and other minorities, as well as illiterates, poor people, and felons. Only white males were “citizens” with the right to vote. Over the course of 200 years, restrictions for all these categories have been lifted— save for those with felony convictions. Today, some 5 million Americans are ineligible to vote as a result of a felony conviction in the 48 states and D.C. that employ disenfranchisement policies for varying degrees of felons and ex-felons. If there was any doubt about the effect of these laws, consider the 2000 presidential election in Florida. That election was decided by less than 1,000 votes in favor of George W. Bush, while an estimated 600,000 former offenders—people who had already completed their sentences—were ineligible to vote due to that state’s restrictive policies. One wonders who most former inmates would have supported. While an estimated 2% to 3% of the national population is disenfranchised, the rate for black men is 13%, and in some states is well over 20%. When such high numbers of men in urban communities can’t vote, the voting power/efficacy of that whole community is reduced in relation to communities with low rates of incarceration. New evidence indicates that disenfranchisement effects go well beyond the legally disenfranchised population. Studies of voter turnout show that in the most restrictive states, voter turnout is lower, particularly among African Americans, and even among persons who themselves are not disenfranchised as a result of a felony conviction. Voting is a civic duty, and a process engaged in with families and communities. Family members talk about elections at home, drive to polls together, and see their neighbors there. 
When a substantial number of people in a community are legally unable to vote, it is likely to dampen enthusiasm and attention among others as well. Forty years after the Voting Rights Act was passed, mass imprisonment and disenfranchisement result in a greater proportion of African American and other minority communities losing the right to vote each year. D. Mass Imprisonment and State Budgets Regarding the impact of mass imprisonment on state economies, specifically higher education, a recent report by Grassroots Leadership shows how massive spending on Mississippi prisons has siphoned funds from classrooms and students, leaving higher education appropriations stagnant and African Americans shouldering the burden. The report documents a startling shift in Mississippi budget priorities. In 1992, the state spent most of the discretionary portion of its budget on higher education. By 2002, the majority of discretionary funds went to build and operate prisons. Between 1989 and 1999, Mississippi saw per capita state corrections appropriations rise by 115%, while per capita state higher education appropriations increased by less than 1%. Mississippi built 17 new prisons between 1997 and 2005, but not one new state college or university. And several more Mississippi prisons are under construction or consideration. There are now almost twice as many African American men in Mississippi prisons as in colleges and universities, and the state spends nearly twice as much to incarcerate an inmate as it takes to send someone to college. Moreover, due to new drug laws and a “truth-in-sentencing” bill passed in the mid-1990s, nearly 70% of those imprisoned in the state are nonviolent offenders. Mississippi is not unique in this situation—most states have followed this path and are facing serious budget shortages due to multiyear commitments to expand their correctional systems.
These and other dynamics of mass imprisonment make up what are called invisible punishments or collateral consequences. Changing the trends noted here is difficult for several reasons. First, it is very difficult to alter prevailing sentencing policies and practices, which can be legislated in a matter of hours but take years to undo. In a broader sense, the national commitment to mass imprisonment is deeply embedded in a punitive and individualistic approach to social policy. This has not always been the case in the United States, and is certainly not the style adopted in many other countries. Changing this political and social environment remains the real obstacle to a more effective and humane crime policy.
Using the iPad app Timed Tests, students practiced their math facts. They needed to do two tests and try to beat their recorded scores. The tests are set at differentiated levels so that students can master their level and develop confidence and fluency at the same time. If you don't have iPads for apps, paper-and-pencil timed tests are a good way to check math fact fluency. Another resource we have used in the past is Rocket Math. This suggests 80 problems in 4 minutes. I have used this as my guideline when setting goals. It is a manageable time and, on average, students can be fairly successful at that rate.

Today I introduced Prime, Composite and Square Numbers on the whiteboard and we discussed the meaning as students wrote down the definitions in their notebooks. After a brief discussion, as I pointed out examples of each on the cards we used in yesterday's lesson, I brought up this Learnzillion lesson* to introduce the idea that the Commutative Pair (the pair of factor pairs that illustrate the Commutative Property, i.e. 4x8 and 8x4) needs to be considered when figuring out how many different ways we can make groups for a given whole number. I used this as another resource that will set them up for solving some word problems in a later lesson. I want them to understand that listing factor pairs is another way of considering groups of things. I want them to consider what the expression means: i.e. 4 groups of 8 or 8 groups of 4.

I continued the lesson by asking my students these questions: We have been listing just the factor pairs up to the Commutative Pair. How does that limit our ability to show how many ways we can group things? When should we be considering all the ways we can make factor pairs? I used the questions to front load what I wanted them to think about as the Learnzillion lesson played.
I told them that the Learnzillion lesson would show them a different way of thinking about grouping: that sometimes we need to consider the commutative pair because of what it means. I started the video. I stopped at 12 to show them how to list factors again. I taught them that sometimes we are asked to just consider the factors of a product and not think about the pair part. They took notes in their journals. We watched one more story problem involving the product 16 and we discussed how the doubles function. I reviewed again:
1. When we are asked to simply list factor pairs or factors, we do not include the commutative pair.
2. If we are talking about how many different ways we can group something, we use factor pairs to help us and include the commutative pairs.
There is a distinct difference in what we are considering. I told them that we would study this more deeply later when we solve multiplication word problems and that they needed to keep their notes handy for recall.

* If the Learnzillion resource asks you to register, just click on the "I am a student" tab to view the video.

I told my students that we needed to finish up the factor pair cards from our lesson yesterday. They set to work quite quickly by partnering up with their buddies from yesterday until all the cards were finished and then hung on the wall. The lesson from yesterday will need to be finished up the next day depending on how students work together and the speed with which they find their factor pairs. Some students finished their factor pair cards sooner than others, so I decided to show each pair of students who were ready this handy factor pair calculator. This Factor Pair Calculator is great, but does not list the factor pairs in the manner we list them. It simply lists them as factors, which I had to explain in my instruction. But it is a really cool tool to use because it serves as a support tool after students use the strategies they have learned to check their accuracy.
It is showing the use of Math Practice 5 because they use it after the thinking and listing process as a support tool. The standard says that they must be able to list factor pairs to 100, so the task gets a bit more complicated as the numbers progress. I told each pair of students that they needed to list factor pairs in their notebooks as they had in their lesson from the day before, but that they would be starting with 51 and moving forward. I intended for them to just practice in today's lesson and to use this calculator to check, but they had to use the divisibility rules they had learned, arrays, and/or the listing strategy I had taught them (1x51, 2x?, 3x17, etc.). As each group finished, soon all were playing with the calculator and working on listing factor pairs in their notebooks. In the future, I use this strategy and lesson in warm ups of other lessons as it needs to be continually practiced toward mastery of this standard.

I stopped my students after about 10 minutes. I had planned for them to practice longer, but the factor pair cards take longer to do than you think if some students are weak in their facts or have trouble figuring out the missing factor. That's what happened today. I asked them: What did you notice? Was there anything different about these factor pairs that are greater than fifty? One student noticed that there were prime numbers, but she thought 51 would be prime, so the calculator surprised her when it showed the factors! Another student noticed that there aren't more factor pairs just because the number increases. Another student said that she thought it was more fun than she expected. These remarks show the type of thinking that Common Core demands. Math Practice Standard 8 expects us to teach students to look for that repeated reasoning.
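For anyone curious, the listing strategy the students used (stop once the next pair would just repeat an earlier one in the other order, i.e. the commutative pair) can be sketched in a few lines of Python:

```python
def factor_pairs(n):
    """List factor pairs of n, stopping before the commutative
    repeats: 1 x n, then 2 x ?, 3 x ?, ... while the first
    factor is no bigger than the second."""
    pairs = []
    d = 1
    while d * d <= n:        # past sqrt(n), pairs repeat in reverse
        if n % d == 0:       # divisibility check
            pairs.append((d, n // d))
        d += 1
    return pairs

print(factor_pairs(51))      # [(1, 51), (3, 17)]
print(factor_pairs(16))      # [(1, 16), (2, 8), (4, 4)]
```

This matches what the student noticed: bigger numbers do not automatically have more factor pairs (a prime like 53 has only one).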
This exercise supports most of the Math Practice Standards in general, but I think that looking for the repeated reasoning of using divisibility and logic to find the missing factor is really stressed here. It gets them to think about comparison on a higher level now, and not just memorizing math facts by rote. The factor pair calculator serves as a check for them as they strive for accuracy. I noticed that they are thrilled when they discover they have found all of the factor pairs using just their strategies. They enjoy that the calculator affirms they are absolutely right, rather than telling them they are wrong or have forgotten some of the factor pairs. I assigned IXL lesson D.4 Identify Factors for 30 minutes to sharpen their skills at looking at just a factor. It was intended as drill and practice only, for at home.
|Leader||Prime Minister of Japan|
|Government type||Unitary Parliamentary Constitutional Monarchy|
|Period active||1947 - 2077|
Japan was an island nation in east Asia before the Great War of 2077. Originally occupied by the United States after World War II, sovereignty was restored to the country with the Treaty of Okinawa, signed in 1966. With Japan located just under 800 miles from China, the two Asian superpowers often found themselves butting heads throughout the 21st century (much of the aggression was the result of Japan's actions during the Second World War). After a humiliating defeat in the Third Sino-Japanese War, the Japanese military and economy were subsequently dismantled by China, while the southern regions of the country were occupied by the PLA. In response to large bouts of civil unrest in the late 2070s, the Chinese forces stationed there evacuated after the American military defeated the PLA in Alaska. Subsequently, Japan fell victim to Chinese and American warheads during the Great War.

In August 1945, Japan became the first country in history to be the victim of nuclear warfare, when American atomic bombs were dropped on the cities of Hiroshima and Nagasaki. This resulted in the unconditional surrender of the Japanese Empire, and the end of the Second World War. U.S. forces continued to stay in Japan until the Treaty of Okinawa was signed in 1966, giving Japan full sovereignty, removing American forces from the nation, and establishing friendly trade relations and political ties between the two countries. With sovereignty restored, Japan began building up its military again to protect itself from any enemies it might encounter. Though it was still technically a 'defense force', the Japanese military quickly grew in numbers as volunteers began to join to serve their country.
Small-scale nuclear weapons were also installed in Japan, given to the nation by the United States to ensure that a possible Communist Chinese takeover couldn't occur. Japan also experienced a so-called "economic miracle", with its economy soon becoming one of the most powerful in the world, thanks in large part to a strong manufacturing base. By the 1970s, Japan was the second leading power in technological innovation (behind the United States), much to the ire of neighboring China. Throughout the next two decades, Japanese technological innovation would achieve significant breakthroughs in computer science and robotics.

Japan still had much resentment towards China, and vice versa, through the decades. Old wounds from the Second World War did not fully heal, which would result in arguments and threats of attack on both sides. There would be hundreds of thousands of incidents in the East China Sea over trade and fishing rights, some of them turning into violent situations with small arms fire, others turning into yelling fests in conference rooms. By 1993, the country had already made an alliance with Korea to ensure that both countries could hold strong against Chinese advances.

East-Asian Democratic Bloc

Japan spearheaded an effort in the late nineties to establish greater ties with the other democratic nations of East Asia, based upon the economic model of the European Union (and later Commonwealth). The founding members were Japan, Korea, and Indochina, who established an official military and economic alliance between the three states, offering one another protection and economic aid where needed. This bloc, which would later be termed the East-Asian Democratic Bloc in the western world, grew and expanded to encompass several other nations militarily -- namely, the remainder of Southeast Asia that wasn't under Communist control, as well as the less democratic Polynesian and Malaysian states.
Economic cooperation and privileges were extended to all member states, as well as to their allies, especially Australia, the Philippines, New Zealand, India, and the United States.

Disputes with China

All of the nations of East Asia, being close to China, were constantly under threat by Chinese forces, especially at sea. Skirmishes between the Japanese and Chinese navies were frequent, although no war was ever officially declared between the two states or any of their allies. This semi-belligerent attitude between the two never amounted to much beyond propaganda fodder, but continued on for decades, until the start of the Resource Wars.

The Resource Wars

Japan, in conjunction with the East-Asian Democratic Bloc, became involved in the Resource Wars when the Chinese launched a massive invasion of southeast Asia, eventually crossing into Malaysia. Japan was unable to mobilize quickly enough in the face of the threat, and as a result couldn't come to the aid of Southeast Asia. Only Indochina was able to put up a fight, with the aid of American forces stationed in the area (who had been brought into the war in parallel to the East-Asian bloc). Meanwhile, Japan was focusing on defending its own home waters as well as Korea, mounting a joint defense in northern Korea throughout 2067 and 2068, during which the Chinese seemingly left Korea alone. The skirmishes between Chinese and Japanese fleets which had been common in the prewar days escalated into full-on naval battles, a series of which would eventually leave both sides battered and bruised and unwilling to launch any further major naval excursions. Thus, Japan played a key role in keeping China from an outright invasion of the mainland United States: without the proper support or infrastructure, China couldn't possibly shift enough troops to put up a significant fight in the more densely populated areas of America.
The government of Japan was a parliamentary constitutional monarchy in which the prime minister was the head of government and actual leader of the country. The emperor was the official head of state but was only a symbolic leader and the symbol of Japan. The prime minister, however, was appointed by the emperor and had a cabinet of ministers to assist him. The system was established in the aftermath of World War II, after the imperial government of the Japanese Empire was abolished when the empire collapsed. The Japan Self-Defense Forces were unique when compared to the other militaries of the world. They were divided into three branches: the Japan Ground Self-Defense Force, the Maritime Self-Defense Force, and the Air Self-Defense Force. The force was rather small, at around 180,000 active personnel, but it was one of the most sophisticated, professional, and modern armies in the world. Since Japan was at the forefront of science and technological development, the Japanese had some of the most advanced aircraft and military hardware in the world, as well as support from the United States, making it a formidable opponent despite its small size.
The world's oldest sea turtle fossil shows the ancient animal swam the oceans at least 120 million years ago, when dinosaurs still roamed the Earth, according to a recent analysis. The now-extinct Desmatochelys padillai turtle skeleton was found in Villa de Leyva, Colombia, and is 25 million years older than the Santanachelys gaffneyi turtle from Brazil that previously held the record for the world's oldest sea turtle fossil. The D. padillai specimen was dug up by hobby paleontologist Mary Luz Parra and her two brothers in 2007. However, it wasn't until Edwin Cadena, a researcher at the Senckenberg Research Institute and Natural History Museum in Germany, and James Parham, an assistant professor of geological sciences at California State University, Fullerton, inspected it that the fossil was determined to be the oldest sea turtle specimen in the world, dating back to the Cretaceous period, between 145.5 million and 65.5 million years ago. "The cool thing about this turtle is that it's really old, but it's not very primitive," Parham told Live Science. Though the specimen is at least 120 million years old, the turtle doesn't look like an ancient species that was early on in its evolution, and instead is "very specialized," he added. This suggests there could be older sea turtles still to be found (if they are preserved), the scientists said. The finding also suggests that turtles could have evolved to become sea dwellers more than once throughout history, the researchers said. In fact, because D. padillai is so old but doesn't look primitive, it might not be related to modern sea turtles. Rather, it might have evolved to live in the sea, and then other turtles later evolved in the same way from a separate ancestor, they said. Parham said there has been some resistance to this idea from other scientists.
However, it shouldn't be an altogether surprising theory, he added, because mammals, reptiles and other animals evolved separately several times to produce a variety of sea-dwelling animals. For instance, mammals evolved many times into sea creatures such as dolphins and seals, and they came from different ancestors. The researchers think it's likely that turtles did the same, and evolved several times with different descendants to live in the sea. Some sea turtles became ones like D. padillai, while others evolved independently to become the modern turtles that live in the sea today. To determine the fossilized turtle's age, the researchers examined the invertebrates, called ammonites, preserved in the rocks and sediment around the turtle. Ammonites were widespread throughout the Cretaceous period, which means they can be used to figure out how old the surrounding rock is, Parham said. The finding that the turtle lived during the Cretaceous period could help shed light on sea turtle history, the researchers said. The exact point at which turtles split into sea dwellers and land dwellers has been difficult for researchers to identify. There are few turtle fossils from this period, so every specimen is important for understanding the story of how sea turtles evolved. The researchers haven't yet conducted tests to determine whether the D. padillai fossil evolved independently from modern-day turtles, but paleontology labs around the world are studying the idea. "We're trying to figure out how turtles who lived over 100 million years ago are related," Parham said. "It's not easy!" Some partial remains of D. padillai were originally discovered and dug up in the 1940s in Colombia, but were not studied for many years. For Parham, the new research comes full circle, because he was first introduced to these fossils when he was in graduate school at the University of California, Berkeley. Now, 18 years later, he realizes the fossil's significance.
The new finds dug up in 2007 had better location data, which allowed the researchers to date the turtle more accurately. "It was really exciting that this turtle that I kind of knew about, was somewhat familiar with, and then all of a sudden, it was like, 'Hey, we've got new skeletons, and by the way, they're super old,'" Parham said. "If I had known how old the specimens at Berkeley were in 1996, I would have included them in my dissertation, for sure." The new study was published online Sept. 7 in the journal PaleoBios.
Gabriel's horn is a geometric figure which has infinite surface area but encloses a finite volume. The name refers to the tradition identifying the Archangel Gabriel as the angel who blows the horn to announce Judgment Day, associating the divine, or infinite, with the finite. The properties of this figure were first studied by the Italian physicist and mathematician Evangelista Torricelli. Gabriel's horn is formed by taking the graph of y = 1/x, with the domain x ≥ 1 (thus avoiding the asymptote at x = 0), and rotating it in three dimensions about the x-axis. The discovery was made using Cavalieri's principle before the invention of calculus, but today calculus can be used to calculate the volume and surface area of the horn between x = 1 and x = a, where a > 1. Using integration (see Solid of revolution and Surface of revolution for details), it is possible to find the volume V and the surface area A: V = π(1 − 1/a), and A = 2π ∫₁ᵃ (1/x)√(1 + 1/x⁴) dx > 2π ln a. The value a can be as large as required, but it can be seen from the equation that the volume of the part of the horn between x = 1 and x = a will never exceed π; however, it will get closer and closer to π as a becomes larger. Mathematically, the volume approaches π as a approaches infinity. Using the limit notation of calculus, the volume may be expressed as V = lim (a → ∞) π(1 − 1/a) = π. This is so because as a approaches infinity, 1/a approaches zero. This means the volume is π(1 − 0), which equals π. As for the area, the above shows that the area is greater than 2π times the natural logarithm of a. There is no upper bound for the natural logarithm of a as a approaches infinity. That means, in this case, that the horn has an infinite surface area. That is to say, lim (a → ∞) A ≥ lim (a → ∞) 2π ln a = ∞. Apparent paradox: When the properties of Gabriel's horn were discovered, the fact that the rotation of an infinite curve about the x-axis generates an object of finite volume was considered paradoxical.
However, the explanation is that the bounding curve, y = 1/x, is simply a special case, just like the simple harmonic series (Σ 1/x), for which the successive area 'segments' do not decrease rapidly enough to allow for convergence to a limit. For volume segments (Σ 1/x²), however, and in fact for any generally constructed higher-degree curve (e.g. y = 1/x^1.001), the same is not true, and the rate of decrease in the associated series is sufficiently rapid for convergence to a (finite) limiting sum. Christiaan Huygens and François Walther de Sluze found a surface of revolution with related properties: an infinitely high solid with finite volume (so it can be made of finite material) which encloses an infinitely large cavity. This was obtained by rotating the non-negative part, defined on 0 ≤ x < 1, of the cissoid of Diocles around the y-axis. De Sluze described it as a "drinking vessel that has small weight but that even the hardiest drinker could not empty". Together these two paradoxes formed part of a great dispute over the nature of infinity involving many of the key thinkers of the time, including Thomas Hobbes, John Wallis and Galileo.
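The limits quoted above follow from a short, standard calculation. Written out in full (this is the textbook derivation, term-by-term consistent with the values in the text):

```latex
\begin{aligned}
V &= \pi \int_{1}^{a} \frac{1}{x^{2}}\,dx
   = \pi\left[-\frac{1}{x}\right]_{1}^{a}
   = \pi\left(1-\frac{1}{a}\right)
   \;\longrightarrow\; \pi
   \quad (a \to \infty),\\[4pt]
A &= 2\pi \int_{1}^{a} \frac{1}{x}\sqrt{1+\frac{1}{x^{4}}}\,dx
   \;>\; 2\pi \int_{1}^{a} \frac{dx}{x}
   = 2\pi \ln a
   \;\longrightarrow\; \infty
   \quad (a \to \infty).
\end{aligned}
```

The inequality in the second line comes from dropping the square-root factor, which is always at least 1; this is why the surface area diverges even though the volume converges.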
What is extinction? Extinction is the process in which a previously reinforced behavior is no longer followed by the reinforcing consequences. It is not a punishment procedure. Example of behavior maintained by positive reinforcement: hitting self = attention from parent. Intervention: the R+ is no longer provided; the behavior decreases. Example of behavior maintained by negative reinforcement: hitting self = escape from chores. Intervention: removal of the aversive stimulus no longer occurs for inappropriate behavior; the behavior decreases. Example of behavior maintained by automatic R+: hitting self = automatic R+. Intervention: dampen or remove the sensory consequence, e.g., wearing a helmet. Extinction burst is: when a behavior is no longer reinforced, the behavior temporarily increases in frequency, duration, or intensity before decreasing. Novel responses may also occur. Using extinction properly means: withhold all reinforcers maintaining the problem behavior, withhold reinforcement consistently, combine extinction with other procedures (such as DRO and DRA), include significant others in the intervention, and use instructions. Punishment is the process in which a behavior is weakened by the immediate consequence that follows its occurrence. Punisher: the consequence which weakens an operant behavior. Positive punishment occurs when an aversive stimulus is presented immediately following the occurrence of a behavior and results in a decrease in future occurrences of the behavior. Negative punishment involves the termination or removal of an already present stimulus immediately following the occurrence of a behavior that results in a decrease in the future frequency of that behavior, i.e., response cost. Overcorrection: the student is required to engage in an effortful behavior for an extended period contingent on each instance of the problematic behavior. 1. Positive practice: practice correct forms of behavior. 2. Restitution: must restore the environment.
Our short-term memory is closely related to what is referred to as ‘working memory’ and acts like the brain’s receptionist. Short-term memory is one of two main memory types and is responsible for temporarily storing information and then determining if the information should be transferred to long-term memory or simply dismissed. This process, although it might sound somewhat complicated, is completed by your short-term memory in under a minute. Right now, for example, your short-term memory is helping you by storing information you gathered from the beginning of this sentence so that you can make complete sense of it by the time you reach the end. Scientists have, more recently, been diving deeper into understanding the brain’s short-term memory functions and have identified “working” memory which is a similar (but separate) type of memory. Short-Term Memory vs. Working Memory Although short-term memory and working memory are often used interchangeably, working memory is actually a newer concept that emphasizes how the brain manipulates information it receives (storing it, using it, etc.) and is often thought of as the brain’s “scratch pad” that keeps information – a name, a phone number, or whatever else – available just long enough to be used. Short-term memory, on the other hand, is a more passive concept. Short-Term Memory & Age The length of time our short-term memory is able to store information decreases as we grow older. We are more likely to experience difficulty keeping up with certain tasks as a result of aging and other clinical conditions, like remembering which button in a department store phone menu to press. We are also more likely to forget details of recent events because the decreasing length of time means our brains have less time to successfully move new information to our long-term memory. 
Although cognitive decline and memory lapses are a normal part of the aging process, you can work to slow the process by keeping your memory active and maintaining a brain-healthy lifestyle.
Guanine is the complement of cytosine. Adenine is the complement of thymine in DNA and of uracil in RNA. Thus, the fraction of one base must be equal to the fraction of its complement in the complementary strand. This is known as Chargaff's rule. A biochemist is given a single strand of DNA with a base composition of 35% guanine. His task is to create the complementary strand of DNA. According to Chargaff's rule, the complementary strand must have a base composition of 35% cytosine! The base pairing gives very precise geometry. The distance between the C1' of a base and the C1' of its complement is exactly 10.85 angstroms. The angles formed by the bond between C1' and N9 and a line connecting the C1' atoms of the complementary bases are all 51.5 degrees. This is expected since the double helix structure, when viewed from above, is circular with a diameter of 10.85 angstroms. Because of the presence of an extra site for hydrogen bonding, the guanine-cytosine base pairing is found to be more stable than the adenine-thymine pairing. Thus, if a DNA strand is found to be rich in guanine and cytosine, then more energy will be needed to denature the DNA (i.e., to destroy the double helix structure). If a biochemist were given two strands of DNA of equal length but of different base composition, with strand 'a' having a higher guanine and cytosine fraction than strand 'b', strand 'b' should denature at a lower temperature: since 'b' has a lower G-C fraction, there are fewer hydrogen bonds to break, and thus less energy is required! It has been found that the DNA double helix can exist in three different forms, known as A-DNA, B-DNA, and Z-DNA. The difference between the three DNA forms is their geometry. In A-DNA the angle between two successive bases on a strand is 32.7 degrees, whereas for B-DNA the angle is 36 degrees. In B-DNA the plane containing a base and its complement is parallel to the other planes containing a base pair, whereas in A-DNA this does not hold.
Detailed information on Z-DNA is not available, but recent research suggests that DNA exists in this form when it is actively being transcribed into mRNAs. This result is not too surprising, since Z-DNA is a metastable configuration for DNA.
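The Chargaff-rule bookkeeping described above is easy to check mechanically. Below is a small illustrative Python sketch (the function names and the example strand are my own, not from the text) that builds a complementary strand and confirms that the fraction of a base in one strand equals the fraction of its complement in the other:

```python
# Chargaff's rule: in double-stranded DNA, each base pairs with its
# complement (A-T, G-C), so the fraction of a base in one strand equals
# the fraction of its complement in the other strand.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complementary_strand(strand):
    """Return the base-paired complement of a DNA strand."""
    return "".join(COMPLEMENT[base] for base in strand)

def base_fraction(strand, base):
    """Fraction of the strand made up of the given base."""
    return strand.count(base) / len(strand)

# Hypothetical 20-base strand that is 35% guanine (7 of 20 bases),
# matching the worked example in the text.
strand = "GGGGGGGATCATCATCATCA"
comp = complementary_strand(strand)

print(comp)
print(base_fraction(strand, "G"))  # guanine fraction of the original strand
print(base_fraction(comp, "C"))    # equal cytosine fraction of the complement
```

Running the sketch shows the guanine fraction of the original strand and the cytosine fraction of its complement coming out identical, which is exactly the 35%-guanine/35%-cytosine bookkeeping in the example above.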
To build off yesterday's lesson of generating a list of character traits, I will have the students read an article titled "6th Grader Combats Bullying with Bench" and then answer the question "How would you describe this girl?" I am using this nonfiction piece even though we are reading fiction because of the high interest level of the story. The bravery and determination the girl in the article displays is something every middle school student can connect with in some way. The issue of bullying becomes more prevalent during the middle school years, and 6th graders are trying to figure out just how they fit into this new and very strange place. This article makes a connection instantly. I also felt it would be a great paired text, which the Common Core leans toward using. The story we are reading today is about a boy who is brave and does the right thing, so the story is a great fit for the theme. Once I discuss the article with the students, I may take a moment to ask how the character traits of the girl in the article can be a lesson to all of us and our own character. I want to make the learning meaningful and provide a purpose. When they see themselves change due to their own motivation, they will have an easier time identifying and understanding how and why characters change throughout a story. To introduce the skill of inferring character traits, I want the students to understand how we make these inferences. I will ask the students what they thought about me after our first class. I will create a list of their "impressions" of me on the board. As they give me their responses, I will ask them to defend why they felt that way. I am asking them to defend why they felt that way to demonstrate the point that at times we infer people's character traits based on their actions. We don't usually introduce ourselves by saying "Hi, I'm Tiffany and I am a little bossy." It is our actions that give people impressions of who we are as people.
Now, to transition to the skill of inferring based on text, I will project the story "Bunnicula" onto the board. I am using this story because it is a short piece that is a little more complex. There are not many direct characterizations given, so the students will have to infer and determine some indirect characterizations. I will model how I use the author's words to make these inferences. I will connect it back to making a first impression. My first impression of Harold, the dog, is that he is lazy. In the text, Harold states, "I'd rather be stretched out on my favorite rug, in front of a nice, whistling radiator." He is choosing to be home sleeping rather than going out with the family. I would use this to demonstrate how I used what the author stated to infer my character trait. I would have the students copy down the definitions of direct and indirect characterization from the Characterization and Plot resource. By writing down the definitions, the students are able to interact with the meaning. The simple act of writing down a definition helps to create a pathway in the brain for retention of the term. For independent practice today, we will apply the new concepts of character traits and character motivation. However, we will also continue to reinforce our skill of analyzing the text for plot. Plot is a skill that will need ongoing practice and reinforcement. We will read aloud the story "The Born Worker" by Gary Soto. This is a longer piece, but it works well with plot and characterization. I am reading it aloud to guide comprehension and understanding. I will stop often to check for understanding. To break up the reading, I will stop twice to "pause and reflect". I do this to give the students a chance to think about what they have read and process the events. For the first pause, I will stop after the cousin Arnie states, "Let me tell you how it works."
At this point, the author has developed the exposition, introduced the characters, and given me a basic conflict. I will ask the students to compare and contrast the two boys. How are they alike? Different? This will already engage the students in analyzing the text for character traits and motivation. I will continue reading until after Mr. Clemens falls into the pool. This is a good place to stop because the two boys react in such different ways that it will lead to great discussion on character traits and motivation. Have the students pause and reflect on how the two boys reacted. Explain why each boy responded the way he did. This will help them connect character trait to character motivation. Finish reading the story. Now that I have guided them through reading the text and modeled how to infer from the author's words, I will pass out the handout titled "The Born Worker Character Traits". Have the students work to identify character traits of the two main characters. Providing them with a handout will help aid them and provide them with direction for the task. After all the modeling, you may find yourself exhausted! However, this is where that little push is so important. The students will want to give up and throw in the towel because now they are asked to grapple with the text! I will monitor, prompt and encourage them to keep going. Most students are very capable of completing this; they just need the encouragement and the push! I will be chanting "struggle through it!" Once they have pulled out some examples, give them the Plot Structure Diagram and have the students work to complete the chart on their own. Remind them to identify the conflict of the story first, then use that to identify the climax. Walk around and assess the students on their ability. You could pull any students who are struggling to work with in a small group. A few students will still be struggling with the concept of plot, and the diagram can be overwhelming.
Working with them in small groups allows you the ability to chunk the activity and provide support. Have them finish the story for homework or turn in the plot chart. To assess the students' understanding and to help them process their learning, I will ask the students to complete a Closure Slip. I want them to really think about their learning and this allows them a safe place to process and communicate any concerns. Closure slips are a very quick, easy way for me to feel as if I have "checked-in" with my students.
Top, left to right: star-of-Bethlehem, belladonna, poison ivy, poison oak; bottom: yew, oleander, wisteria, and poison hemlock. Poisonous Plants, plants containing substances that, taken into the body of humans or animals in small or moderate amounts, provoke a harmful reaction resulting in illness or death. Possibly as many as one out of every 100 species of plants is poisonous, but not all have been recognized as such. Dangerous plants are widely distributed in woods (baneberry) and fields (star-of-Bethlehem), swamps (false hellebore) and dry ranges (scrub oak), roadsides (climbing bittersweet) and parks (kalmia), and may be wild (celandine) or cultivated (wisteria). Many ornamental plants, such as oleander, lily of the valley, and mistletoe, are poisonous. Botanists have no set rules to determine accurately whether any given plant is poisonous. Toxic species are scattered geographically, in habitat, and in botanical relationship. They contain more than 20 kinds of poisonous principles, primarily alkaloids, glycosides, saponins, resinoids, oxalates, photosensitizing compounds, and mineral compounds such as selenium or nitrates accumulated from the soil. The poisonous compound may be distributed throughout all parts of the plant (poison hemlock), or it may accumulate in one part more than any other, such as the root (water hemlock), berry (daphne), or foliage (wild cherries). A plant may vary in toxicity as it grows, generally becoming more toxic with maturity; certain plants, however, can be highly toxic when young and harmless later (cocklebur). Some active principles cause skin irritation directly (nettle); others bring about an allergenic reaction (poison ivy). Most poisons, however, must enter the body before they act, and in almost all cases this happens when they are eaten. Usually more than 57 g (2 oz) of the poisonous portion of the plant must be eaten by an average adult before poisoning will result (the amount is proportionately less for children).
Some plants, however, are toxic in small amounts; for instance, one or two castor beans, the seeds of the castor-oil plant, may kill a child. After ingestion, the poison may act immediately on the digestive tract (dumbcane, euphorbia, nightshade), producing severe abdominal pain, vomiting, and possibly internal bleeding, or it may be absorbed into the bloodstream. If so, it passes first to the liver, which may be injured. Oxalates crystallize in the kidneys (rhubarb), rupturing the tubules. Some plants affect the heart (oleander). Small amounts of principles in some of these (digitalis) may be used in medicine. Plants containing alkaloids often produce unpleasant or dangerous reactions in the nervous system. Examples are paralysis (poison hemlock), hallucinations (jimsonweed), or heart block (yew). A few poisons act directly within the cells of the body. The best example is cyanide, released from a glycoside in the plant (wild cherries), which prevents cells of the body from using oxygen. In contrast, unusually high levels of nitrates in plants combine with the hemoglobin of the blood so that it can no longer carry oxygen to the body cells. Some reactions are highly specific. Bracken destroys bone marrow, in which blood cells are formed. Saint John’s wort contains a poison that, when ingested by animals, reacts with sunlight to produce severe sunburn and lesions on exposed skin. Poisonous plants are too numerous to eradicate, and many are highly prized as houseplants or garden ornamentals. If poisoning is suspected, a physician or the local poison control center should be consulted immediately.
Jews in Europe: Ashkenazi Jews are a group of Jews who emigrated from Europe to the occupied territories. Ashkenazi means the German Land in Hebrew. Most Ashkenazis lived in the country before the establishment of the Zionist regime. Over time, the term was also used to refer to other Jews in Europe, including the French, and so on. Ashkenazi Jews played a key role in creating the Zionist occupation regime. Their wealth and influence in European countries provide the basis for the establishment of the regime. Then, they dedicated themselves to promote the migration of Jews from all over the world to the occupied territories. Therefore, historians consider them the main responsible for planting seeds of occupation and rape in Palestine. Regime’s upper class: Ashkenazi Jews are the upper class of Zionists society. They enjoy the highest amenities and have senior executive jobs in Government. They have earned most of the wealth. Ashkenazi’s wages are generally higher than other Jewish races. Naturally, they are also at a higher political and social level. Most of the regime’s rulers and founders of its protocols have been Ashkenazi. Jews in the Middle East: Sephardi Jews or Eastern Jews or Middle Eastern Jews are those who migrated from Arab states or third world countries to Palestine. Shepherd refers to the first generation of immigrant Jews from Iraq, Iran, the Arabian Peninsula, Afghanistan, North Africa and the Caucasus region. It also uses to describe Jews referring to Iberia, Spain and Portugal. Most of them were expelled from their countries and were vassals of the Ottoman Empire. Other than the language of the country of residence, they used a special language called “Ladino”, which is a combination of Hebrew and Latin. The term Sephardi means the east. But in one of the translations of five Sephar of Moses, it was mistakenly attributed to a city in Asia Minor, in Spain which the term also refers to Spain in Modern Hebrew. 
The poor and miserable class: Sephardi Jews are the poor and oppressed class of society. They have suffered a lot and only are appointed for tough and intense careers. They have lower social position than Palestinian Arabs. While, they have enjoyed welfare and high social status in their country. It is obvious that there is a wide gap between Eastern and Western Jews in the occupied territories in today’s society. A society that considers itself as the leader in equality and social justice and summoned a lot of Jews from the eastern countries to the occupied territories with the same slogan. In the early years of the formation of the Zionist regime, the Eastern Jews immigrated to Israel by bribes and threats. Ashkenazi Jews with the help of Israeli security agencies encouraged the Eastern Jewish to emigrate by scaring Sephardi Jews and performing destruction operations at religious and holy sites of the Jews. The structure of society classes in occupied territories led Ashkenazi Jews to use their wealth and infinite power to treat others in a worst possible way and make class differences clearer day-by-day.
Victorian Britain was both the greatest power in the world and the least militarised, with a standing army far smaller and less influential in public life than those of France, Prussia, Austria or Russia. Its military shortcomings were starkly revealed by the disastrous Crimean War (1854–6) and Boer Wars (1880–81 and 1899–1902). In the 1840s and 1850s the army of the East India Company – the trading company which had controlled large parts of India since the mid-18th century – extended the frontiers of British rule in the Indian subcontinent and beyond into south-east Asia. The shocking 1857 rebellion (‘Mutiny’) by the Company’s native soldiers led to the British government taking full control of the Indian Empire. Soldiers from the subcontinent were deployed in conflicts fought in China, Abyssinia (now Ethiopia) and, less successfully, Afghanistan. CRIMEA AND REFORM By contrast, only one war was fought in Europe during Victoria’s reign: the Crimean War of 1854–6. It dramatically exposed the weakness of an army mainly led by amateur officers. So many soldiers died of disease and neglect that the army was rendered largely ineffective. The British population, made aware of the disaster by the pioneering investigative journalism of The Times, were profoundly shocked. Conditions for soldiers at home were scarcely better. In 1859 the Army Sanitary Commission condemned much of the existing military accommodation in England, like the barracks at Berwick-upon-Tweed, Northumberland, which lacked ventilation and washing facilities of any kind. Reform proceeded slowly, but there were steady improvements in military technology, and the army reforms of 1879 introduced professional training. The officer class continued to be dominated by county families. Ties to the counties also remained strong through the regiments and their bases, as at Carlisle Castle, Cumbria. THE ROYAL NAVY The Royal Navy was larger and more celebrated than the army. 
It had a much higher global profile, with bases such as Portsmouth and Chatham dockyards at home, and Gibraltar, Malta and Bombay (Mumbai) overseas. From the Battle of Trafalgar in 1805 until the final years of the 19th century the navy enjoyed unchallengeable superiority, playing a vital role in safeguarding trade networks, exerting British power, and combating the slave trade. France was viewed as the main potential enemy. Germany, with its strong ties to the royal family, was seen as a friendly power and culturally much closer to Britain. From the 1850s England and France were caught up in a race for military advantage. There were spectacular advances in weaponry with vastly increased firepower: examples can be seen at Hurst Castle, Hampshire, and Pendennis and St Mawes castles in Cornwall. The vast scale of the ‘Palmerston forts’ (such as Fort Brockhurst) built in the 1860s around Portsmouth and Plymouth, the expansion of the Western Heights at Dover, and improvements to earlier defences such as Landguard Fort, Suffolk, all testify to febrile anxieties about invasion. A GRAND ILLUSION? The British Empire and its armed forces became a source of intense public awareness and pride for the Victorians. Only occasional setbacks, such as the ‘martyrdom’ of General Gordon in Khartoum, Sudan, in 1885, reminded the public of the knife-edge on which British power was sometimes balanced. The successful resistance of the Afrikaner settlers in southern Africa in the Second Boer War (1899–1902), and revelations about the poor quality of recruits to the army from the industrial cities, were reminiscent of the shocks of the Crimea. By 1901 British power was in some respects a grand illusion. British dominance was no longer unchallengeable; and Germany, after 1871 incontestably the leading power on the Continent, was now looking ominously strong both on land and at sea.
Physical education programs in schools are valuable because they contribute to each student's cardiovascular health and help promote the development of strong muscles and bones, according to the Virginia Education Association. Exercising regularly also combats obesity in children, which can reduce the risk of developing heart disease, diabetes and other common illnesses. Physical education not only helps keep kids strong, but it can also increase their concentration and focus, improve their classroom behavior and boost their overall academic performance, according to the Virginia Education Association. Through physical education, students come to understand the value of being physically active, which increases their likelihood of developing healthful habits to carry well into adulthood. Physical education also fosters healthy social interactions among students. It allows young children and teenagers to engage in activities that encourage them to support one another and to work effectively in teams. As children develop these positive character traits and boost their confidence, they are more likely to want to try out for a school team sport or to engage in community activities that require a certain degree of fitness, such as soccer or martial arts. Regular physical activity can also be an outlet for students who need to release tension and anxiety to remain emotionally stable, according to the Virginia Education Association.
Discover all kinds of interesting information in this chinchilla facts for kids article. Chinchillas are rodents that are native to the Andes Mountains of northern Chile. Chinchillas were almost driven to extinction due to the demand for their luxurious soft fur. Today, they’re often kept as pets. In the wild, chinchilla fur is mottled yellow-gray. Selective breeding, however, has led to other colors becoming common, including silver, yellow-gray, bluish-gray, white, beige and black. Each hair ends in a black tip, no matter what color the chinchilla is. Chinchillas have short forelimbs and long, muscular hind legs with four toes on each foot. Each toe has a thin claw. They resemble rabbits, but their ears are much shorter and rounder. Chinchillas have large, black eyes and bushy tails. They are typically 9 to 15 inches (23 to 38 centimeters) long, but the tail can add another 3 to 6 inches (8 to 15 cm) to their length. They generally weigh 1.1 to 1.8 lbs. (0.5 to 0.8 kilograms). In the Andes, chinchillas can live at elevations of about 9,800 to 16,400 feet (3,000 to 5,000 meters). At those heights, it can be very cold; 23 degrees Fahrenheit (minus 5 degrees C) is the average minimum temperature in some places. This is why chinchillas have developed such thick fur: it helps them tolerate freezing temperatures. They dig burrows underground or nestle in rock crevices to make their homes. They are highly social and live in colonies that consist of hundreds of chinchillas. Female chinchillas are often aggressive toward other females. When ready to mate, they can also be aggressive toward males. Females are primarily monogamous, which means they have only one mate for life. Males, on the other hand, can have many female mates. Chinchillas are largely nocturnal, so they are most active at night. They are sometimes described as crepuscular, meaning their activity peaks at dawn and dusk.
Chinchillas are active and playful and thrive when they have a consistent routine. Chinchillas are omnivores, meaning they eat both plants and meat. They eat almost any vegetation, including grasses and herbs. They also eat seeds. Additionally, they eat insects and bird eggs when available. To eat, they sit up on their hind legs and hold their food in their front paws to nibble on it. In dry climates, chinchillas find water in morning dew and in the flesh and fruit of cacti. The breeding season for chinchillas runs from November to May in the Northern Hemisphere and from May to November in the Southern Hemisphere. A pregnant chinchilla will carry her young for about 111 days before giving birth. Females have babies twice a year. Each time they give birth, they will have one to six babies. These groups of babies are called litters, and the individual babies are called kits. Unlike most rodents, which are born without fur and are helpless, kits are born with hair and with their eyes open and functioning. Newborn kits even have teeth. They weigh only 1.2 ounces (35 grams). Although they are able to digest solid foods, the kits nurse for six to eight weeks. Kits seek warmth and will crawl under their mother for protection. Their mother cleans them regularly. At about 8 months old, the babies are ready to have offspring of their own. Species: There are two species of chinchillas: Chinchilla chinchilla (short-tailed chinchilla) and Chinchilla lanigera (long-tailed chinchilla). Chinchillas are related to guinea pigs and porcupines. Chinchillas first appeared around 41 million years ago. The chinchilla's ancestors were some of the first rodents in South America. Chinchilla fur became popular in the 1700s, and the animals were hunted to near extinction by 1900. Around that time, Argentina, Bolivia, Chile and Peru banned the hunting of wild chinchillas. An American mining engineer named Mathias F.
Chapman got special permission from the Chilean government to bring chinchillas to the United States in 1923. Almost every pet chinchilla in the United States today is a direct descendant of the 11 chinchillas that Chapman brought to the country. Chinchillas have many predators. Owls and hawks hunt chinchillas from above, while snakes can sneak up on them from behind or from inside their burrows. Foxes, mountain lions and cougars also prey on chinchillas. A chinchilla’s instinct is to run and hide when it senses danger. Chinchillas are small enough to hide under logs, within bushes or burrow underground to get away from predators. They are also capable of clinging to tree trunks and rocks as well as jumping and leaping. In general, chinchillas live eight to 10 years. However, some have lived as long as 20 years.

25 Unusual Facts about Chinchillas
- With gentle handling from a young age, most chinchillas can become quite tame and bond with their owners.
- Chinchillas cannot survive in temperatures higher than 80 F (27 C). High temperatures and humidity can cause them to suffer from heat stroke.
- Many chinchillas are bred commercially for their fur. The Convention on International Trade in Endangered Species has restricted the sale and trade of wild chinchillas since 1975.
- Both species of chinchilla are on the International Union for Conservation of Nature and Natural Resources' endangered-species list. Both the short-tailed chinchilla and the long-tailed chinchilla are listed as critically endangered.
- Domesticated chinchillas should be kept in a wire mesh cage with a solid floor. It should be well ventilated and kept dry and cool in temperatures from 60 to 70 F (16 to 21 C). Chinchillas should be kept in individual cages as they do not get along well when caged together.
- To stay clean, these rodents give themselves dust baths.
- Chinchillas are thought to be smarter than rabbits and can be taught to play with humans. However, they are hyperactive and high-strung, so they should not be given as a pet to small children.
- Chinchillas have the densest fur of all land mammals, with around 50 to 75 individual hairs growing out of a single hair follicle. Chinchilla fur is extremely soft to the touch.
- Their incredibly dense fur protects them from skin parasites and fleas. However, fleas and ticks can still make their way in at spots where the fur is thinner, such as the belly, face, ears, and feet.
- Chinchillas can shed large patches of fur to break free from the grasp of their predators. This is called “fur slipping.” They may also do this when they’re stressed, held too tightly, or stuck.
- Their dense fur traps their body heat, which can lead to overheating when they are placed in warmer environments.
- Chinchillas can be especially caring, even adopting abandoned or orphaned baby chinchillas. Unlike other rodents, male chinchillas also participate in caring for the young.
- Chinchillas are quite social. They like playing with other chinchillas and enjoy foraging together. Often, a few chinchillas stand guard and watch for predators as the others forage.
- Chinchillas communicate primarily through sound. They make a wide range of sounds, such as chirping and soft cooing.
- Female chinchillas are larger, heavier, and more dominant than males.
- Their large ears give chinchillas sensitive hearing that lets them detect predators easily.
- Unlike in rats, where males are easily distinguishable from females, both male and female chinchillas have cone-like external genitals.
- Chinchillas have sensitive whiskers, similar to cats, that they use to navigate in the dark.
- Chinchillas can jump as high as six feet (1.8 m) thanks to their powerful hind legs.
- Though rare, chinchillas can experience convulsions, or sudden and intense involuntary movements.
These convulsions may indicate brain damage, nutrient deficiency, or hemorrhaging. This can also happen after mating, and indicates circulatory problems. They can also get convulsions due to stress in extreme cases.
- In 2016, their populations experienced a small recovery, going from “critically endangered” to “endangered”.
- Chinchillas are very inquisitive and love to explore their surroundings.
- Similar to rabbits, chinchillas perform a behavior called cecotrophy, which means they ingest specific droppings in order to ensure they get the required level of nutrition from their food.
- While they don’t typically run, a chinchilla can run up to 15 mph (24 km/h) to escape a predator.
- Chinchilla teeth continually grow throughout their lifespan. For domesticated chinchillas, this means they require ongoing dental care.
You don't want to miss learning about these cool mammals.
What is Scoliosis and Why is Scoliosis Surgery Required?

The spine is the backbone of the body. It naturally curves a little. This allows us to walk, move and balance ourselves properly. But some people have a spine that curves too much to one side. This condition is called scoliosis. In most cases, especially in children and adolescents, the cause of scoliosis is unknown, and the condition is referred to as idiopathic scoliosis. Scoliosis usually has no symptoms. In severe cases, the body looks asymmetrical, with uneven hips or shoulders. Severe scoliosis may also cause backache and could contribute to other health problems. To diagnose scoliosis, your doctor will do a physical examination. The doctor will also order diagnostic tests such as X-rays, a Computed Tomography (CT) scan and/or Magnetic Resonance Imaging (MRI) to determine the exact curvature of the spine. If your curve is mild, your doctor may prescribe a back brace to prevent further curving. However, if the curve is more than 45 degrees, your doctor may recommend corrective spinal surgery.

What is the Anterior-Posterior Approach for Scoliosis Surgery?

The goal of scoliosis surgery is to reduce the abnormal curve in the spine and to prevent it from progressing further. To achieve this, a spinal fusion is performed to fuse the vertebrae in the curve to be corrected. This involves placing bone graft or bone graft substitute in the intervertebral space between the two vertebrae. Instrumentation such as rods and screws is also used to realign and stabilize the vertebrae until the graft heals and fuses the two vertebrae together. There are several approaches for scoliosis surgery. The choice of approach depends on a number of factors, such as the type of scoliosis, the location of the curvature of the spine, the ease of approach to the area of the curve and the preference of the surgeon. The anterior-posterior approach is also called front and back spinal surgery.
This approach is usually recommended for very severe and stiff curves. Sometimes, it is also used to correct previous failed attempts. In this approach, the spine is first accessed from the front or anterior side of the body through an incision on the side followed by an incision on the back (posterior side of the body). How is an Anterior-Posterior Scoliosis Surgery Performed? An anterior-posterior scoliosis surgery is performed under general anesthesia. - First, an incision is made on the side and the spine is accessed through this incision from the front side of the body. - To reach the spine, a rib is often removed which may be used as a source of bone graft for spinal fusion. - Disc material is removed from between the vertebrae involved in the most severe part of your curve. Removal of the disc material improves the flexibility of the curve and also provides a large surface area for spinal fusion. - The bony surface between the vertebral bodies is roughened and bone graft or bone graft substitute is packed into the space between the vertebral bodies to promote fusion and then the anterior incision is closed. - You are then positioned on your stomach for the posterior part of the procedure. - An incision is made down the middle of the back and the muscles are stripped off the spine to reach the bony elements of the spine. - Instrumentation is used to reduce the curvature of the spine. This involves placing screws, hooks, wires or other devices at each vertebral level involved in the curve. A specially contoured rod is then attached to these connection points at each level and correction is performed. - Once all the implants are placed securely, a final tightening is done and the incision is closed. Sometimes, a drain may be placed into the wound to protect the incision. The whole procedure usually takes several hours. How long does the recovery take? The spine looks much straighter soon after the surgery, but some curve will still be there. 
Spinal bones take a minimum of 3 months to fuse together. However, complete fusion usually takes one to two years. What are the potential risks and complications of the procedure? Scoliosis surgery is a major surgery. All attempts are made to reduce the chances of any risks or complications of this surgery. Still, complications may occur in a few patients. Complications of scoliosis surgery may include paraplegia, excessive blood loss, infection and failure of the spine to fuse. Rarely, cerebrospinal fluid leakage or instrumentation problems such as breaking of rods or dislodging of hooks and screws may also occur.
Price elasticity of demand (PED) refers to the responsiveness of quantity demanded to a change in price, calculated as PED = % change in quantity demanded / % change in price. Demand is inelastic when the absolute value of PED is less than 1: quantity demanded responds only weakly to a change in price. Governments are making an effort to reduce smoking by substantially increasing taxes on cigarettes. A tax is a form of government intervention intended to decrease the production and consumption of a good. A tax would be shown on a market diagram through a leftward shift of the supply curve. This indicates that the quantity supplied would decrease and would create a new equilibrium with the same demand curve at a new point where the price would be higher and both quantity supplied and demanded would be lower. The increase in price is explained by the fact that taxes are placed on the factors of production, which increases the cost of production. This in turn decreases the quantity that can be supplied at any given price. Therefore, an increase in price would be due to an increase in the cost of production to supply the goods, in this example, tobacco. However, it is important to note that tobacco is assumed to have an inelastic demand. This is because tobacco is addictive, and consumers, especially those with a high income, would choose to purchase similar amounts of tobacco even if the price increased. However, there would still be some responsiveness, as low-income consumers' ability to purchase tobacco decreases when tobacco makes up a higher percentage of their income. Therefore, the quantity demanded would change, but the change stays small compared to the change in price.

Tobacco as a Demerit Good

Additionally, tobacco is a demerit good.
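PED is conventionally computed as the percentage change in quantity demanded divided by the percentage change in price, and demand is called inelastic when the result is less than 1 in absolute value. A minimal sketch in Python (the tobacco figures are hypothetical, chosen only to illustrate the calculation):

```python
def ped(pct_change_quantity, pct_change_price):
    """Price elasticity of demand: % change in quantity demanded
    divided by % change in price."""
    return pct_change_quantity / pct_change_price

# Hypothetical tobacco figures: a 10% price rise cuts quantity demanded by 4%.
elasticity = ped(-4.0, 10.0)
print(elasticity)            # -0.4
print(abs(elasticity) < 1)   # True: demand is inelastic
```

The negative sign simply reflects the law of demand (price up, quantity down); it is the magnitude that determines whether demand is elastic or inelastic.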
A demerit good is a good or service whose consumption is considered unhealthy, degrading, or otherwise socially undesirable due to its perceived negative effects on the consumers themselves. Most demerit goods are addictive, and therefore tend to have an inelastic demand. This means that if the price increases, the quantity demanded decreases at a lower rate: only a very small percentage of consumers choose to stop buying the good, most likely those who can no longer afford it after the price increase (low-income consumers). Tobacco's inelastic demand shows why the government's effort to reduce smoking through taxation would not be effective. A tax on a good with inelastic demand mainly increases the revenue that suppliers receive, while consumers pay more and the quantity demanded falls by only a small percentage. Therefore, placing a tax on a good with an inelastic demand would not meaningfully reduce smoking, and the government's goal would not be achieved. Alternative measures that would be more effective might include education campaigns. Many people who buy cigarettes do not understand the harmful impacts of the demerit good. If people understood its effects on the human body, there might be a greater change in the quantity demanded of tobacco than a price change would produce. Additionally, governments could promote the use of alternatives by subsidizing them, such as nicotine patches or electronic vapes. These alternatives could help consumers reduce their dependence on the demerit good. Overall, this would be a more effective way for governments to reduce smoking.
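The claim that consumers end up paying more overall while quantity barely falls can be checked with simple arithmetic. A short sketch under hypothetical numbers (assuming a constant PED of -0.4, so a 25% tax-driven price rise cuts quantity demanded by 10%):

```python
price_before, qty_before = 10.0, 100.0   # hypothetical pack price and packs sold
elasticity = -0.4                        # inelastic demand (|PED| < 1)
price_rise = 0.25                        # tax pushes the market price up 25%

# Percentage change in quantity = PED * percentage change in price.
qty_change = elasticity * price_rise             # -0.10: quantity falls only 10%
price_after = price_before * (1 + price_rise)    # 12.5
qty_after = qty_before * (1 + qty_change)        # ~90.0

spending_before = price_before * qty_before      # 1000.0
spending_after = price_after * qty_after         # ~1125.0
print(spending_after > spending_before)          # True: total consumer spending rises
```

Because the percentage fall in quantity is smaller than the percentage rise in price, total spending on the good goes up, which is exactly why a tax on an inelastic good raises revenue more than it reduces consumption.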