A collection of chubby lab mice has led to a major breakthrough, but scientists remain uncertain about the obesity epidemic. Obesity is one of the biggest health issues facing the U.S., and scientists still struggle to come up with a solid way of addressing it. According to a recent study from scientists at the Johns Hopkins School of Medicine, however, a discovery inside the brains of chubby lab mice could change our understanding of the obesity epidemic entirely. Scientists were originally working with lab mice to study the way the brain learns and remembers information when they noticed that the mice were gaining a lot of weight. They stumbled upon a specific type of nerve cell that had a strong influence on the mice’s eating behaviors, which could play a major role in the fight against obesity. According to Richard Huganir, the director of the Hopkins Department of Neuroscience, “When the type of brain cell we discovered fires and sends off signals, our laboratory mice stop eating soon after. The signals seem to tell the mice they’ve had enough.” The discovery sheds light on one of the most troubling aspects of obesity – the inability to stop eating. Hormones in the stomach normally tell the brain when enough food has been consumed, but people suffering from obesity fail to receive the message and continue to eat. The new discovery focused on an enzyme called OGT, which is involved in a wide range of bodily functions. The enzyme adds a derivative of glucose to proteins, altering their behavior, including in the brain. To assess the role of the enzyme in feeding behavior, the team deleted the gene that codes for it in the brains of adult mice. The mice without OGT genes were observed eating more calories with each meal and packing on a significant amount of fat. Researchers say that the mice were unable to determine when they had had enough food, so they continued to eat even after they fulfilled their daily calorie requirements. The study could steer obesity research toward medications that regulate OGT and its role in eating behaviors, allowing people to feel full before they reach dangerous levels of overeating.
What is blank verse in Shakespearean drama (especially in regards to A Midsummer Night's Dream)? 2 Answers Since you specifically asked about the use of blank verse in A Midsummer Night's Dream, I have moved this question. Essentially there are three worlds in the play: the world of the court, the world of the workers, and the world of fairyland. In both the court and fairyland, blank verse is spoken, while Bottom and his mates speak prose. Since both the court and fairyland are formal worlds, the language is formal. Bottom and his mates are workers. They speak prose to show they are of a lower class. It also shows that these men are friends and speak an informal language among themselves. Even when Titania speaks verse to him, Bottom speaks in prose. Shakespeare often used prose to set lower-class characters against the higher-class ones. This is not always the case; for example, the play Much Ado About Nothing is about 60% prose. As stated by cldbentley, blank verse is a rhythm. From the time it was first used by Christopher Marlowe, it became the standard for Elizabethan theatre. (It is easier to memorize and most closely resembles everyday English speech rhythms.) Shakespeare used blank verse a great deal throughout his writings. Blank verse is a form of poetry containing lines that do not rhyme. Although they are unrhymed, these lines do consist of iambic pentameter, which is five stressed beats in every line, with every other syllable being stressed. Often, it is easier to notice the stressing of syllables when the lines are read aloud than when they are read silently. In Shakespeare's plays, characters who spoke iambic pentameter were usually important. Minor or insignificant characters often did not speak in verse at all.
Gluten is a composite protein found in wheat and related grains. Glutens from different grains have different properties, and they form the majority of the protein in most grains. Gluten proteins bind other ingredients together and are often used as binders and thickeners in food products. Gluten gives wheat most of its protein content, while providing characteristics such as chewiness to the dough. When people use the term 'gluten' or 'gluten-free', they are usually referring to the glutens found in products made from grains like wheat, barley and rye, which, unlike the glutens of other grains, can trigger an autoimmune reaction in people with celiac disease or gluten intolerance. Such people cannot ingest gluten without suffering an autoimmune response that damages their intestines and can lead to other complications, including internal bleeding and cancer of the intestines. The gluten in these grains can also provoke allergies, and nowadays gluten allergies are relatively common. According to studies, one in 167 seemingly healthy children and one in 111 adults have some type of allergy to gluten. Celiac disease is the most severe form, affecting about one in 100 people. It can affect anyone, and diagnosing it can be very difficult. Celiac disease affects millions of people worldwide, but many sufferers are not aware that they have the condition, and they may have been misdiagnosed with other illnesses. A pioneering new test developed with EU funding should soon be available in hospitals, offering an accurate, quick, cost-effective diagnosis and monitoring solution. A gluten allergy involves an involuntary adverse reaction within the body to a component in wheat, rye and barley. Wheat is the most common food ingredient that people have allergic reactions to. According to The Food Allergy and Anaphylaxis Network, a gluten allergy is not the same thing as celiac disease, which is a digestive disorder. However, if you are allergic to gluten, eating something that contains it will produce an immune system reaction. Symptoms include hives, swelling, wheezing, abdominal pain and difficulty breathing. It is important to determine whether gluten is producing this type of reaction when you eat foods, as this will help you avoid gluten-containing foods in future. Some people suffer for many years from gluten allergies before a diagnosis is made. Most doctors want to see severe gastrointestinal problems before considering an intestinal biopsy to diagnose celiac disease. An intestinal biopsy, considered the "gold standard" for diagnosing celiac disease, is a procedure that involves passing a camera into the digestive tract and removing a sample of the small intestine for examination. Celiac disease is diagnosed based on the biopsy result and is confirmed by the presence of damage to the villi (finger-like projections that absorb nutrients in the small intestine). However, not all people with gluten allergies, or even celiac disease, will present gastrointestinal symptoms. Other tests to determine an allergy to gluten involve a series of blood tests to measure the levels of antibodies to gluten. These tests include the anti-tissue transglutaminase antibody (tTG) test, the anti-gliadin antibodies (AGA) test, the anti-endomysial antibodies (EMA) test, and the anti-reticulin antibodies (ARA) test. A gluten allergy can also be assessed simply by eliminating all gluten from the diet for 30 days, reintroducing it, and documenting the body's reaction.
These tests vary in their reliability and sensitivity in detecting gluten allergies and will often be accompanied by a full blood count and checks for related problems, such as anemia, electrolyte imbalances, and abnormal renal function or liver enzymes. The EZ Gluten Test is an easy-to-use kit that will quickly detect the presence of gluten in foods and beverages. It is sensitive enough to detect levels of gluten as low as 10 ppm. This simple test is small and portable enough for use at restaurants or when traveling, and is sensitive and robust enough for use in industry and food manufacturing. It can be used to test individual ingredients in foods and beverages. The EZ Gluten Assay has been validated and certified as a Performance Tested Method (#051101) by the AOAC Research Institute as an effective method for the detection of gluten in a wide variety of foods and on environmental surfaces.
During the Triassic period, which followed the Permian-Triassic mass extinction, ecosystems changed a great deal as life bounced back. One important group during this time was the archosauromorphs. They underwent extensive diversification, meaning they evolved into many different species. This diversification included the ancestors of dinosaurs, pterosaurs (flying reptiles), and crocodylomorphs (ancestors of crocodiles). A new study explored the evolution of locomotion in Archosauromorpha to test whether dinosaurs show any distinctive locomotory features that might explain their success. In the study, scientists reveal why dinosaurs were able to rule the Earth for a whopping 160 million years. They found that the first dinosaurs were faster and more dynamic than their competitors. Researchers compared the limb proportions of many reptiles from the Triassic period, when dinosaurs first appeared. They looked at whether these creatures walked on four legs or two legs and measured how good they were at running. They found that dinosaurs and their close relatives were good at running on two legs and had a wider range of running abilities compared to their competitors, known as the Pseudosuchia. Pseudosuchians were the ancestors of modern crocodiles. While some were small and walked on two legs, most were medium-to-large-sized and walked on four legs. Dinosaurs and their relatives, however, were more versatile in how they moved around. Lead researcher Amy Shipley, a Palaeobiology student, explained, “When things got tough around 233 million years ago, dinosaurs came out on top. It seems they were good at saving water, like many reptiles today, but their ability to walk and run helped them thrive.” Professor Mike Benton from Bristol‘s School of Earth Sciences added, “After a mass extinction event, most pseudosuchians disappeared, but dinosaurs adapted and took over different habitats.” Dr. Armin Elsler noted, “Surprisingly, dinosaurs didn’t evolve very quickly, but their way of moving was a big advantage when times got tough. They could adapt to new environments more easily.” Dr. Tom Stubbs said, “We often think of dinosaurs as slow and huge, but they started as small and fast insect-eaters. Their ability to run on two legs helped them catch prey and escape predators.” Dr. Suresh Singh concluded, “And of course, their diversity of posture and focus on fast running meant that dinosaurs could diversify when they had the chance.” “After the end-Triassic mass extinction, we get truly huge dinosaurs, over ten metres long, some with armour, many quadrupedal, but many still bipedal like their ancestors. The diversity of their posture and gait meant they were immensely adaptable, which ensured strong success on Earth for so long.” - AE Shipley, A Elsler, SA Singh, TL Stubbs and MJ Benton. Locomotion and the early Mesozoic success of Archosauromorpha. Royal Society Open Science. DOI: 10.1098/rsos.231495
Greenland’s archaeological sites threatened by thawing permafrost Among the world heritage sites threatened by climate change are sites in Greenland that are important for their archaeological evidence of early human inhabitants of Greenland. Arctic archaeological sites are extremely important globally because so much organic material, such as wood, bone, animal skins and hair, is preserved in frozen ground, where the process of decay has been halted. Warming conditions in the Arctic are now rapidly leading to the loss of many archaeological resources that are vital for understanding the everyday and spiritual lives of the first peoples to live in these often inhospitable lands. Thawing permafrost, loss of sea ice leading to coastal erosion, and increasing tundra fires are putting archaeological sites and historic monuments at risk throughout the Arctic. Already today, wooden artifacts preserved for more than 4,000 years in the permafrost, but exposed to summer thaw over the last 30 years, are markedly degraded. In addition, the metabolism of bacteria actively decomposing the organic deposits in the thawing permafrost layer generates heat, which in turn accelerates the thawing of the frozen ground. Model results suggest a critical shift from a first phase of relatively slow permafrost thaw, driven by climate change and low heat production, to a second phase of accelerated permafrost thaw in which water drains away and increasing oxygen availability triggers markedly higher internal heat production. If this tipping point is reached, the heat production can accelerate decomposition, and the archaeological clues that can help us understand our ancestors’ lives in Greenland could be lost forever within 80–100 years. Source: Markham et al., 2016. World Heritage and Tourism in a Changing Climate. Photo: Nick Russil (www.flickr.com)
The Effectiveness of Radio Direction Finding for EVA Navigation in Situations of Low Visibility MDRS Crew 186 Journalist Determining one’s position is a fundamental problem encountered in engineering. On Earth it is possible to use the constellation of GPS satellites to accurately pinpoint your position relative to a location where you would like to go. This capability does not currently exist on Mars, nor is it likely to exist when humans first set foot on the planet. The difficulty of localizing an astronaut’s position relative to a location of interest is amplified in conditions of low visibility such as night or an unexpected dust storm. The resulting disorientation could greatly imperil any astronaut caught unprepared in such circumstances. The purpose of this research was to explore how a disoriented astronaut might use radio signals to guide them to a target while on EVA. The core concept is to have the astronaut carry a radio antenna whose sensitivity is directional. Meanwhile, a navigation beacon at the target broadcasts a radio signal in all directions. If the astronaut is unable to locate their target by traditional means, they can use the directional radio to determine the direction of the transmitting beacon, and therefore the direction they must walk to reach it. Figure 1: Searching for the direction of maximum signal. Prior to the mission I assembled a 3-element handheld Yagi antenna from schematics researched on the Internet. The particular design uses foldable elements made from steel tape measure and originated with Joe Leggio for use in amateur radio foxhunts [Leggio, 1993]. This design is lightweight and easy to stow due to the foldable elements. A coaxial cable with an SMA adapter allows the antenna to be plugged into virtually any portable ham radio. The transmitter beacon is a commercial handheld ham radio with no special modifications. I created an audio file of a Morse signal spelling out the phrase, “This is the MDRS amateur navigation beacon crew 186”, and broadcast this signal from the beacon by connecting an iPod playing the audio file to the radio with an aux cable. In each test of the navigation experiment the radio beacon was located at the habitat and the Morse signal was broadcast at regular intervals by having a crewmember simply hold down the transmit button. A crewmember on EVA would then attempt to use the Yagi antenna to locate the direction of maximum signal and thereby the bearing to the habitat. The beacon was transmitted on the low power setting of the beacon radio (approximately 2.5 Watts) at a frequency of 146.565 MHz. The Yagi antenna was used to aid EVA navigation on a total of four EVAs, two of which were dedicated exclusively to testing its effectiveness. On the first two tests I followed a road on the outward trek and then attempted to follow the navigation signal along a straight line back to the habitat. This took me through unfamiliar terrain but did not adequately represent conditions of low visibility. On the later two tests I gave the antenna to a crewmember unfamiliar with amateur radio and covered the upper two thirds of their helmet with a cardboard visor. This restricted their vision to approximately 5 meters and prevented them from using landmarks to help them locate the habitat. Supporting members of the EVA then led the “lost astronaut” volunteer at least 2 kilometers from the habitat and monitored their safety as they attempted to return to the habitat using the radio alone. Figure 2: Preparing the lost astronaut before EVA.
The first two tests of the navigation antenna were useful for understanding its performance. The accuracy of the antenna in locating the direction to the beacon generally improves with distance. This is because, close to the beacon, the signal is strong enough to saturate the receiver even along the antenna’s insensitive axis. The beacon signal therefore appears to originate from all directions. At greater distances the beacon signal is weak and careful pointing of the antenna may be required to receive it at all. At a distance of 4 kilometers the accuracy of the antenna in determining the bearing to the habitat appeared to be better than 10 degrees. This degraded to worse than 90 degrees when within a kilometer of the habitat, and worse still when even nearer. During the tests I found that the poor accuracy of the antenna near the transmitter could be mitigated in the following way. The antenna is least sensitive to incoming signal along a direction parallel to the receiving elements. By searching instead for the direction of minimum signal I could deduce that the beacon was located at a right angle to my current pointing direction. This provided acceptable accuracy at sub-kilometer distances. On the tests with the cardboard visor limiting the astronaut’s vision, the difficulty of navigating by natural senses alone was evident from the arcing paths participants took prior to and in between broadcasts of the navigation beacon. In fact, on the final test the mock “lost astronaut” walked a complete circle with a radius of less than 100 m between two broadcasts of the beacon. To limit the drift of the astronaut’s path it was necessary to decrease the intervals between the beacon transmissions to a nearly continuous broadcast. In both tests with the cardboard visor the astronaut was able to successfully navigate to within 500 meters of the habitat despite limited knowledge of their initial position and orientation. I also note that the surrounding terrain did not appear to have a significant detrimental effect on the performance of the antenna, but this has been difficult to quantify. Figure 3: Scanning to find the bearing to the hab. The navigation experiments of MDRS Crew 186 suggest that a handheld directional antenna is a simple and effective means of EVA navigation in low visibility conditions. However, the current setup has several limitations, which are noted below. The current means of searching for the direction of maximum signal provides only the direct bearing to the transmitter beacon. As was found in several of the tests, following a direct path to the beacon is not always possible due to intervening terrain. The user is then on their own to determine an appropriate detour, and this may act to further their disorientation. Additional information may be required beyond that provided by a directional antenna in order to navigate successfully. At distances close to the transmitter, the technique of searching for a direction at right angles to the direction of minimum signal proved satisfactory in our experiments. However, because there are always two such directions, the astronaut is at risk of following a path directly away from the beacon instead of towards it. This is possible when receiving along the sensitive axis of the antenna as well, but is less likely because the signal strength received by the back lobe of the antenna is generally much weaker compared to the front.
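To make the max-signal versus min-signal reasoning above concrete, the following toy simulation (not part of the crew's actual procedure; the antenna pattern and all numbers are illustrative assumptions) models a horizontal-plane response with a strong front lobe, a weaker back lobe, and deep nulls at right angles to boresight. Sweeping for the strongest signal yields the bearing directly, while sweeping for the null leaves the two candidate bearings, 180 degrees apart, discussed above.

```python
import numpy as np

def toy_yagi_power(offset_deg, front_back_ratio_db=12.0):
    """Toy response: strongest along boresight, a weaker back lobe, and deep
    nulls at +/-90 degrees (along the elements). Real Yagi patterns differ;
    this is only a stand-in to illustrate the search procedure."""
    theta = np.radians((np.asarray(offset_deg) + 180.0) % 360.0 - 180.0)  # wrap to [-180, 180)
    lobe = np.cos(theta) ** 2                                             # nulls broadside
    back = 10 ** (-front_back_ratio_db / 10)                              # back-lobe attenuation
    return np.where(np.abs(theta) <= np.pi / 2, lobe, back * lobe)

true_bearing = 37.0                   # bearing to the habitat beacon (unknown to the walker)
sweep = np.arange(0.0, 360.0, 1.0)    # slowly rotate the antenna through a full circle
power = toy_yagi_power(sweep - true_bearing)

max_dir = sweep[np.argmax(power)]     # max-signal search: direct bearing estimate
null_dir = sweep[np.argmin(power)]    # min-signal search: beacon lies 90 deg to either side
print(f"max-signal estimate: {max_dir:.0f} deg (true bearing {true_bearing:.0f} deg)")
print(f"null found at {null_dir:.0f} deg -> beacon at {(null_dir - 90) % 360:.0f} or {(null_dir + 90) % 360:.0f} deg")
```

In this toy model the null search returns 37 or 217 degrees as candidate bearings, which is exactly the two-way ambiguity the report warns about; the max-signal search returns the bearing directly but, as noted, loses accuracy close to the beacon.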
Instead of searching for the direction of minimum signal, a better solution would be to attach an attenuator between the antenna and receiver so that the astronaut can reduce the received signal when close to the beacon. Finally, the rapid drift of participants from their initial heading in between broadcasts of the beacon suggests that either the navigation beacon should be broadcast continuously or astronauts should have some way of preserving their orientation while walking. The latter option is desirable because terrain, weather, or the need to handle equipment may temporarily prevent the signal from being received. On Earth an obvious solution is to mark the desired bearing on a compass and follow it accordingly, but this will not work on Mars due to the lack of a global magnetic field. Figure 4: Finding the habitat using its radio beacon. Leggio, J. (1993), Tape Measure Beam Optimized for Radio Direction Finding, http://theleggios.net/wb2hol/projects/rdf/tape_bm.htm. I would like to thank Geoffrey Andrews, Jennifer Pouplin, and Cesare Guariniello at Purdue University for their assistance fabricating and testing the Yagi antenna prior to the mission.
Here comes the sun: How optimization of photosynthetic light reactions can boost crop yields. Photosynthesis started to evolve some 3.5 billion years ago, when our atmosphere contained a much lower CO2 concentration. CO2 is the driving force of all photosynthetic processes, and in the past 200-250 years atmospheric levels have doubled due to human industrial activities. This time span, however, is not sufficient for adaptation mechanisms of photosynthesis to become evolutionarily manifested. Steep increases in human population, shortages of arable land and food, and climate change call for action now. Thanks to substantial research efforts and advances in the last century, basic knowledge of photosynthetic and primary metabolic processes can now be translated into strategies to optimize photosynthesis to its full potential in order to improve crop yields and food supply for the future. Many different approaches have been proposed in recent years, some of which have already proven successful in different crop species. Here, we summarize recent advances on modifications of the complex network of photosynthetic light reactions. These are the starting point of all biomass production and supply the energy equivalents necessary for downstream processes as well as the oxygen we breathe. Bill & Melinda Gates Foundation (via University Of Illinois) (088649-17776)
Lesbian, Gay, Bisexual, Transgendered and Questioning (LGBTQ) Individuals Sexual Orientation and Gender Identity Understanding the appropriate terminology is essential to understanding LGBTQ individuals. Sexual orientation, sexual behavior, gender identity, and gender role are different concepts. Sexual orientation is the affectional or loving attraction to another person. Heterosexuality is the attraction to persons of the opposite sex; homosexuality, to persons of the same sex; and bisexuality, to both sexes. Sexual orientation can be considered as ranging along a continuum from same-sex attraction only at one end of the continuum to opposite-sex attraction only at the other end. Sexual behavior, or sexual activity, differs from sexual orientation and alone does not define someone as an LGBTQ individual. Sexual identity is the personal and unique way that a person perceives his or her own sexual desires and sexual expressions. Biological sex is the biological distinction between men and women. Gender is the concept of maleness and masculinity or femaleness and femininity. Gender identity is the sense of self as male or female and does not refer to one’s sexual orientation or gender role. Gender role describes the behaviors that are viewed as masculine or feminine by a particular culture. Transgender individuals are those who conform to the gender role expectations of the opposite sex or those who may clearly identify their gender as the opposite of their biological sex. In common usage, transgender usually refers to people in the transsexual group that may include people who are contemplating or preparing for sexual reassignment. A transgender person may be sexually attracted to males, females, or both. Research & Statistics - Connecticut LGBTQ+ Needs Assessment Survey Needs Assessment Report The Consultation Center at Yale, a program of the Yale Department of Psychiatry, partnered with Connecticut’s LGBTQ+ Health and Human Services Network and the Connecticut Department of Public Health Office of Health Equity to create and launch the first statewide LGBTQ+ survey and needs assessment in early 2021. The report aims to increase understanding of the number of people that identify as part of the LGBTQ+ community in Connecticut and understand their needs, including safety, housing, health, mental health, legal services, social support, and community engagement. - Lesbian, Gay, and Bisexual Behavioral Health: Results from the 2021 and 2022 National Surveys on Drug Use and Health This report focuses on substance use and mental health indicators among adults aged 18 or older in the United States based on pooled NSDUH data from 2021 and 2022. Estimates are presented by adults’ sexual identity (i.e., gay/lesbian, bisexual, straight) and gender. All estimates (e.g., percentages and numbers) presented in the report are derived from survey data that are subject to sampling errors and have met the criteria for statistical precision. - The Trevor Project 2022 National Survey on LGBTQ Youth Mental Health These data provide critical insights into some of the unique suicide risk factors faced by LGBTQ youth, top barriers to mental health care, and the negative impacts of COVID-19 and relentless anti-transgender legislation. This research also highlights several ways in which we can all support the LGBTQ young people in our lives—and help prevent suicide.
Modoc National Wildlife Refuge is a National Wildlife Refuge of the United States located in northeastern California. It is next to the South Fork of the Pit River in Modoc County, southeast of Alturas. The area was first claimed by the Dorris Family in 1870 under the first of the federal Homestead Acts. The family developed a livestock ranch and built a reservoir. The first 5,360 acres were purchased from the family in 1960 to establish the refuge. Over the years more territory was purchased from several landowners, and today the refuge covers over 7,000 acres. The refuge is located about 60 miles outside the Klamath Basin. It is on the western edge of the Great Basin and includes many types of habitat, such as seasonal and semi-permanent wetlands, wet meadows, riparian zones, sagebrush steppe, reservoir, and cropland. It is a staging area and wetland breeding habitat for migratory birds of the Pacific Flyway, such as waterfowl and the sandhill crane. More than 250 species of birds have been recorded on the refuge. The functions of the refuge include preservation and conservation of habitat and flora and fauna, and recreation and public services such as hunting, fishing, boating, hiking, wildlife observation and photography, and education. Management activities in the local habitat include grazing and crop cultivation, prescribed burning, and water manipulation. The wetlands are stewarded to provide healthy vegetation for the use of resident and visiting birds.
What is BCD and How is it Used in Automation?
The BCD system offers relative ease of conversion between machine-readable and human-readable numerals. In this article, we will learn about BCD, or Binary Coded Decimal. As computers evolved from very early transistor-based models to desktop personal computers using microchips, memory and instruction registers were 8 bits in length, and computing had to adapt to the standard decimal-based number system. Specific instructions used by early programmers were designed around this 8-bit length to facilitate all of computing. These instruction formats have been maintained throughout the years of computer development and will most likely continue to be used in the future.
Boolean Logic and Expressions
Within computers, each bit has only two values, representing either a logic 1 (or True) or a logic 0 (or False). This is what is referred to as Boolean in computer science. Boolean logic and expressions make the binary number system a natural fit for digital and electronic circuits and systems.
Advantage of the BCD System
An advantage of the Binary Coded Decimal system is that each decimal digit is denoted by a group of 4 binary digits, which allows easy conversion between decimal (a base-10 system) and binary (a base-2 system).
Disadvantage of the BCD System
A disadvantage is that BCD code does not use the states between binary 1010 (decimal 10) and binary 1111 (decimal 15).
Binary Numbering System Used in Computers
Binary coded decimal has especially important applications in digital displays. Now let’s talk about the binary numbering system used in computers. This system is a base-2 numbering system which follows the same set of rules used with the decimal, or base-10, number system. Base-10 uses powers of ten, for example 1, 10, 100, 1000 and so on, whereas binary numbers use powers of two, effectively doubling the value of each sequential bit, for example 1, 2, 4, 8, 16, 32 and so on. Binary-coded decimal builds on this binary representation to allow easy conversion between decimal and binary numbers.
Binary-Coded Decimal or BCD
Binary-coded decimal or BCD is a code using a series of binary digits or bits that, when decoded, represents a decimal digit. The decimal system contains 10 digits, zero to nine. So, each decimal digit 0 through 9 is represented by a series of four binary bits whose value, when decoded, is equivalent to the decimal digit. In BCD we use the binary numbers 0000-1001, which are equivalent to decimal 0-9. For example, 5 in BCD is represented by 0101, 2 in BCD is represented by 0010, and 15 in BCD is represented by 0001 0101.
Weighted BCD of Decimal Numbers and Weighted Decimal
Let’s look a bit closer at how this conversion works. As you can see in the illustration below, the decimal weight of each decimal digit increases by a factor of 10 moving to the left. Within each BCD digit, the binary weight of each bit increases by a factor of 2: the first bit has a weight of 1 (2⁰), the second bit has a weight of 2 (2¹), the third bit has a weight of 4 (2²), and the fourth bit has a weight of 8 (2³).
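As a small, self-contained sketch of the nibble-per-digit idea just described (the function names are our own, not from any particular library), the following Python packs a decimal number into BCD and unpacks it again, rejecting the unused states 1010 through 1111:

```python
def to_bcd(value: int) -> str:
    """Encode a non-negative integer as BCD: one 4-bit group (nibble) per decimal digit."""
    if value < 0:
        raise ValueError("BCD as described here covers non-negative integers only")
    return " ".join(format(int(digit), "04b") for digit in str(value))

def from_bcd(bcd: str) -> int:
    """Decode a space-separated string of 4-bit nibbles back into a decimal integer."""
    digits = []
    for nibble in bcd.split():
        d = int(nibble, 2)
        if d > 9:                      # 1010-1111 are the unused (invalid) BCD states
            raise ValueError(f"invalid BCD nibble: {nibble}")
        digits.append(str(d))
    return int("".join(digits))

print(to_bcd(5))               # 0101
print(to_bcd(15))              # 0001 0101
print(from_bcd("0001 0101"))   # 15
```

The round trip reproduces the 5, 2 and 15 examples given above, one nibble per decimal digit.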
BCD Truth Table
Now, with this basic understanding of the binary-weighted system, the relationship between decimal numbers and weighted binary coded decimal digits for decimal values 0 through 15 is provided as a truth table for BCD.
Binary-Coded Decimal vs. Binary to Decimal Conversion
Keep in mind, binary-coded decimal is not the same as straight binary-to-decimal conversion. For example, representing the decimal number 72 in both forms gives: BCD: 0111 0010, Binary: 0100 1000. When we use a table to expand out the weighted values, using 16 bits, we can convert the decimal numbers 9620, 120 and 4568 into their binary equivalents. Adding together the values of all the bit positions that contain a 1 gives us the decimal equivalent.
9620 (= 8192+1024+256+128+16+4) equals this binary value: 0010 0101 1001 0100
120 (= 64+32+16+8) equals this binary value: 0000 0000 0111 1000
4568 (= 4096+256+128+64+16+8) equals this binary value: 0001 0001 1101 1000
However, for the same decimal numbers, the BCD representation is:
9620 (9, 6, 2, 0) equals this BCD value: 1001 0110 0010 0000
120 (1, 2, 0) equals this BCD value: 0001 0010 0000
4568 (4, 5, 6, 8) equals this BCD value: 0100 0101 0110 1000
Electronic circuits and systems can be divided into two types, analog and digital. Analog circuits amplify varying voltage levels that can alternate between a positive and a negative value over a period of time, while digital circuits produce distinct positive or negative voltage levels representing either a logic level 1 or a logic level 0 state.
HIGH and LOW States in Digital Signals
Voltages used in digital circuits could in principle be any value; however, in digital and computer systems they are typically below 10 volts. In digital circuits these voltages are called logic levels, and typically one voltage level represents a HIGH state while the lower voltage level represents a LOW state. A binary number system uses both of these two states. Digital signals consist of discrete voltage levels that change between these two HIGH and LOW states.
BCD Usage in Alpha-Numeric Displays and RTCs
BCD was commonly used for driving alphanumeric displays in the past, but in modern systems BCD is still used in real-time clock (RTC) chips to keep track of wall-clock time, and it is becoming more common for embedded microprocessors to include an RTC. It is very common for RTCs to store the time in BCD format. A binary clock might use LEDs to express binary values; with such a clock, each column of LEDs displays a binary-coded decimal numeral.
7-Segment Displays and Thumbwheel Switches
Back in the day, before touchscreens, seven-segment displays and thumbwheel switches were used as a numerical interface between PLCs and humans. Even before the PLC, these BCD-type devices were the only graphical way to interface with system circuits numerically.
BCD in Siemens S7 Standard Timer and Counter Data Types
Some PLCs, for example the Siemens S7, use Binary Coded Decimal in their standard timer and counter data types because these structures go back to when engineers had to deal with things like thumbwheels and 7-segment displays. In fact, S7 timer setpoints are still entered as S5T#2S for a two-second setpoint because this is inherited from the S5 PLC platform. These timers use three BCD digits, or 12 bits, plus two extra bits for the time base. The same applies to the counters, which only count from 0 to +999.
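As a hedged sketch of how such a timer word might be unpacked, the snippet below follows the layout described above: three BCD digits in the low 12 bits plus a 2-bit time base. The specific time-base values (10 ms, 100 ms, 1 s, 10 s) are an assumption based on the conventional S5TIME format and should be checked against your controller's documentation.

```python
# Assumed S5TIME-style time bases; verify against the controller documentation.
TIME_BASE_SECONDS = {0b00: 0.01, 0b01: 0.1, 0b10: 1.0, 0b11: 10.0}

def s5time_to_seconds(word: int) -> float:
    """Decode a 16-bit timer word: bits 0-11 hold three BCD digits (0-999),
    bits 12-13 select the time base."""
    base = (word >> 12) & 0b11            # two time-base bits
    hundreds = (word >> 8) & 0xF          # each BCD digit occupies one nibble
    tens = (word >> 4) & 0xF
    ones = word & 0xF
    for digit in (hundreds, tens, ones):
        if digit > 9:
            raise ValueError("invalid BCD digit in timer word")
    return (hundreds * 100 + tens * 10 + ones) * TIME_BASE_SECONDS[base]

# A two-second setpoint (S5T#2S) could be stored as time base 1 s with BCD value 002:
word = (0b10 << 12) | 0x002
print(s5time_to_seconds(word))            # 2.0
```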
This concludes the article, What is Binary-Coded Decimal (BCD) and How is it Used in Automation? If you would like additional training on a similar subject, please let us know in the comment section.
Brain Conditions – Aneurysm
A brain aneurysm, also called a cerebral aneurysm, occurs when a blood vessel wall in the brain becomes weakened and a small balloon or bulge forms. If this bulge remains intact, the patient may never experience symptoms. However, if the bulge leaks blood or breaks (ruptures), the condition is a serious and potentially life-threatening medical emergency. Brain aneurysms occur when the wall of a blood vessel in the brain becomes thin or damaged. This damage may be caused by high blood pressure, atherosclerosis, or, rarely, head trauma. In some patients, brain aneurysms occur because of a congenital defect. Although a brain aneurysm can occur in any blood vessel of the brain, they frequently occur in the space between the brain and the thin arachnoid tissue covering the brain, mainly at the base of the skull and in the Sylvian fissure (the large sulcus containing the middle cerebral artery). If a brain aneurysm leaks or ruptures, causing bleeding into this space, that is known as a subarachnoid hemorrhage, and it may be called a hemorrhagic stroke. An aneurysm that occurs in the area between the brain and the arachnoid tissue layer is called a subarachnoid aneurysm. Other words used to describe a cerebral aneurysm help physicians understand its size (small, large, giant, and super-giant) and shape.
- Saccular aneurysms form a sack-like pouch protruding from the vessel wall.
- Berry aneurysms are saccular aneurysms with a neck that resembles a plant stem.
- Fusiform aneurysms are bulges in the artery wall and have no necks.
If a brain aneurysm forms but does not leak or rupture, the patient may have no symptoms. In some cases, an aneurysm may cause symptoms by putting pressure on nearby brain tissue or nerves. In such cases, a patient may report:
- Pain above or behind one eye
- Dilated pupil
- Visual disturbance or double vision
- Numbness, weakness, or paralysis of one side of the face
- Drooping eyelid
- Epileptic fit
In some cases, an aneurysm will start to leak blood into the brain before it breaks. This is sometimes called a sentinel bleed because it alerts clinicians to the likelihood of a dangerous rupture. Symptoms of a leaking aneurysm include a very severe headache that comes on suddenly. A leaking aneurysm often progresses to a ruptured aneurysm. However, some aneurysms rupture without first producing a sentinel bleed. The most common signs of a ruptured cerebral aneurysm are:
- Sudden-onset, extremely severe headache
- Nausea and vomiting
- Stiff neck
- Visual impairment or double vision
- Extreme sensitivity to light
- Drooping eyelid
- Loss of consciousness
- In severe cases, death
A brain aneurysm is a serious medical condition, and a leaking or ruptured aneurysm is a potentially life-threatening medical emergency. Contact emergency services at once if these symptoms occur. A common characteristic of these symptoms is their abrupt onset. Treatment of aneurysms depends on the location and severity of the aneurysm and may involve medical hypotensive therapy, surgical clipping, or endovascular coiling. Many of these treatments are very new and rely on innovative technology. A complication of subarachnoid hemorrhage caused by a cerebral aneurysm is vasospasm, a narrowing of the arteries in the brain that carry blood from the heart into the brain. This can result in a potentially lethal stroke. For that reason, patients treated for cerebral aneurysm are usually aggressively monitored for vasospasm.
To prevent vasospasm, patients may be treated with medication and fluids along with close monitoring. Vasospasm can be seen on angiography or transcranial Doppler, and even with MR angiography or CT angiography. Because aneurysms are most effectively treated with extremely advanced technology (some of it just a few years old), it is highly recommended that patients with cerebral aneurysms be treated by highly trained experts in a specialised setting dealing with this kind of disease. Cerebral aneurysms are very serious medical conditions that are fatal in some patients. Those who recover may have neurological deficits. However, some patients with a cerebral aneurysm make a full recovery with little or no neurological problem afterward. The patient’s chances of full recovery depend on many factors, including how soon the patient received expert treatment, the patient’s age and overall health, the severity of symptoms before treatment, and the location of the aneurysm. Some known risk factors for cerebral aneurysm cannot be helped: individuals who are older, female, or have a family history of aneurysm are thought to be at higher risk for brain aneurysms. Other risk factors include:
- Hypertension (high blood pressure)
- Atherosclerosis (hardening of the arteries)
- Abuse of illicit drugs, especially cocaine
- Alcohol abuse
- Head trauma
- Certain blood infections
Cerebral aneurysms are potentially life-threatening medical emergencies which usually cause abrupt symptoms, including a sudden severe headache. New surgical and technological approaches are improving outcomes, but such innovations depend on expert care and treatment in a highly specialised setting.
There she goes again, singing “Let it Go” from the other room for the 15th time today! You think to yourself, my child has a decent voice for her age! Is it time to get her a singing teacher? Research has shown that kids are physically and emotionally ready for formal music lessons between 5 and 7 years of age. Read on to learn five of the most amazing benefits voice lessons can have for your kids.
1. Improved Cognitive Development
When children are young, their brains are still being molded by the skills they learn. This can cause the brain to develop faster and get healthier! If they are taking voice lessons, their brain actually changes and allows the child to develop skills like better memory and better motor skills. Studies also show that children test several points higher on both IQ tests and standardized tests after they begin music classes.
2. Improved Language Skills
The development of the left side of the brain from music classes also translates to improved language skills, because both functions sit in that hemisphere of the brain. On top of that, kids can learn music in other languages. They can also learn about those cultures, which helps expand their horizons. For example, by singing an Italian song, your child will learn phrases in that language and some information about Italian culture as well.
3. Better Social Skills
Children who take music classes learn to develop their social skills as well, especially in a group class setting. They learn to work with peers in order to harmonize and make one beautiful sound. On top of that, music classes give a great self-esteem boost when kids see how well they are progressing and get good feedback from their peers and teachers!
4. Spatial Intelligence
When children learn to read and understand music, they also become better able to solve multistep problems in areas such as engineering, math, and computers. Research has shown that kids who can read music can actually visualize complex math problems better and have increased spatial-temporal abilities. They can identify patterns and recognize relationships between things better when they are also skilled in understanding written music.
5. Better Mental Capacity
Neuroscience research has shown that when children are taking music classes, their brain actually functions better! Learning music makes the child use more of their brain, which makes the structure and pathways of neurons more efficient. Think of the brain as a muscle. When you work on a muscle group, it builds a bigger muscle that works better. The same thing happens to your brain during music lessons!
Where Can I Find Voice Lessons?
There are often music facilities near you that offer all kinds of music lessons, including voice lessons. No music facility near you? Is your schedule too packed to squeeze in a regular music lesson? There are other options that will work for you too! There are also teachers who give lessons online via downloadable lessons or who use video meeting software to do one-on-one lessons in real time. If you still have questions about getting your child into voice lessons, get in touch with us so we can help!
The countries that were once part of the Soviet Union include Armenia, Azerbaijan, Belarus, Estonia and Georgia. Kazakhstan, Kyrgyzstan, Latvia, Lithuania and Moldova also belonged. Russia, Tajikistan, Turkmenistan, Ukraine and Uzbekistan round out the total. The Soviet Union dissolved on Dec. 26, 1991. The Soviet Union was formed in 1922 from the wreckage of the Russian Empire. Although many of the former Russian holdings had experienced a period of freedom in 1917, after the empire fell, the communist leaders of Russia began to reassert their authority over the breakaway regions, culminating on Dec. 30, 1922, with the signing of the Treaty of Creation of the USSR and the Declaration of the Creation of the USSR. The independent countries that agreed to these treaties included the Russian SFSR, the Transcaucasian SFSR, the Ukrainian SSR and the Byelorussian SSR. In the 1980s, political and social reforms, paired with economic turbulence, began undermining the power of the USSR. Soviet satellite states in Eastern Europe, such as Poland and Czechoslovakia, were the first to break away from USSR dominion. The USSR itself began to dissolve soon after, beginning on March 11, 1990, when Lithuania declared its independence. Estonia and Latvia followed suit and withdrew from USSR control. Armenia and Georgia followed within the year, and the remaining countries became independent by the end of 1991. As of 2015, several countries without widespread diplomatic recognition exist in the former Soviet territory: Abkhazia, Transnistria, Nagorno-Karabakh and South Ossetia.
The Victorian artist Walter Crane thought that children could learn from pictures long before they could read or write. His colourful, well-designed nursery books opened parents’ eyes to the educational value of picture book reading. Lesley Delaney, curator of a display of Walter Crane’s picture books at the V&A, explains his revolutionary approach to learning to read. Walter Crane (1845-1915) was the most prolific and influential children’s book creator of his generation. His pioneering designs for nursery books helped to popularise the idea of visual literacy. Crane radically improved the standard of ‘toy books’ – cheap, mass-market colour picture books, featuring alphabets, nursery rhymes, fairy tales, and modern stories. He also created a novel series of musical rhymes and fables for babies, as well as a set of experimental books that show how reading, writing and arithmetic can be learned through imaginative play. Crane believed that good art and design could stimulate interest in books and help children to learn to read from a very early age. He recognised that every feature of the book – including covers, end papers, titles, illustration, type, and page layout – can be used to encourage children’s enjoyment of reading. This visual approach to reading is seen in one of his early toy books, Grammar in Rhyme (1868). Crane uses the text box like a blackboard, placing it within the illustrations of children at play to suggest the idea of learning as an enjoyable everyday activity. To help the child’s understanding, he creates a memorable rhyme that relates the parts of speech to the games and toys shown in the pictures. ‘Bright, frank colours’ and comic touches Crane uses colour and pattern to attract children’s interest in reading. The exciting effect is shown in the vibrant illustrations for toy books such as Beauty and the Beast (1874), which display the influence of his work as a painter, designer and decorative artist. Crane drew inspiration from a wide range of influences, including Japanese art. This can be seen in the illustrations for This Little Pig Went to Market (1870), in which he uses the bold outlines and flat colours that were typical of Japanese prints. Crane’s designs also reflect his observations of young children. He noticed that they appear to see most things in profile and prefer ‘well-designed forms and bright frank colours’. Children are not concerned with three dimensions, he suggested; they could accept symbolic representations. To encourage close observation of the pictures, Crane adds comic touches. For example, in This Little Pig he gives the hilarious cartoon character glasses and cloven boots; he places bows on both its curly tail and pigtail wig. Children can also spot the pig displayed on the mantelpiece in the picture on the facing page. The picture panels for Puss in Boots (1874) show Crane as an early exponent of the comic strip form. The design leads the child from one frame to the next in a sequence of detailed pictures that follows the cat’s actions, enabling even pre-readers to understand the story. Crane introduces visual jokes to help the child’s understanding of reading conventions, such as turning the page. This can be seen in the playful illustration for ‘Hey diddle diddle!’ on the cover of The Baby’s Opera (1877). The three mice featured in the bottom panel appear to be running into the book. They reappear in the following pages engaging in various amusing antics, such as outwitting the cat. 
Crane wanted to excite children’s curiosity about what they would find on the next page. The square format of the baby books was inspired by designs for nursery tiles and provides a model for baby books even today. The innovative fantasy series, called ‘The Romance of the Three Rs’ (1885-6), shows how early learning can be turned into imaginative games. The three titles – Little Queen Anne, about reading, Pothooks and Perseverance, about learning to write, and Slateandpencilvania, about counting – represent the first picture stories about the difficulties children face in early learning. The fluid illustrative style and use of heavy punning show similarities with the homemade books that Crane created for his own children. Crane’s visual approach to learning attracted the interest of leading reading specialists. He collaborated with Professor Meiklejohn to produce The Golden Primer (1884-5) and also with Nellie Dale to create the ‘Walter Crane readers’ (1899). These popular reading schemes were the forerunners of the Ladybird ‘Key Words’ series and the ‘Oxford Reading Tree’. © Lesley Delaney, UCL and the V&A ‘Walter Crane: Revolutionary picture books for the nursery’ runs from 8 November 2010 until 3 April 2011: Room 85, National Art Library Landing, V&A South Kensington, Cromwell Road, London SW7 2RL (020 7942 2000, www.vam.ac.uk). Lesley Delaney, University College London and the National Art Library at the V&A, is supported by a Collaborative Doctoral Award from the Arts and Humanities Research Council (AHRC). Detail of portrait of Walter Crane by George Frederic Watts.
In sedimentary rocks below the base of the Cambrian there is not only a dearth of body fossils, but signs of creatures burrowing and stirring up the sediment are most uncommon. A burrower needs several criteria to be fulfilled: a supply of oxygen; sufficient food; a body able to penetrate the sediment; and an ability to move back and forth, though forth alone would probably do fine, provided the animal could turn corners. The amount of oxygen in bottom waters would have influenced its availability beneath the seabed. Whatever the conditions, dead organic matter falls and is buried by sediment before it is oxidised away, even nowadays. There is little sign of any marked change in the oxygenation of the planet just before and after the start of the Cambrian Period, so the main control over burrowing is that of animal morphology. Many modern burrowing animals are pretty flaccid, but moving sediment aside and upwards demands some muscle power. Most important, the creature needs a means of navigation, albeit of a rudimentary kind, and since what goes in beneath the surface – food – must go out – excreta – there must be a front end and a back end. That ‘fore-and-aft’ symmetry is the essential feature of bilaterian animals. Only a limited range of animal taxa don’t have it built in. Sponges are the most obvious example, having no discernible symmetry of any kind. Radially symmetrical animals such as jellyfish and coral polyps only have a top and a bottom. An absence of inbuilt horizontal directionality stops non-bilaterians from burrowing in any shape or form. But, so what? The vast majority of animals have some kind of bilateral symmetry; even echinoderms have it underlying their 5-fold symmetry, which is the simplest kind of radiality. By the start of the Cambrian, not only had bilaterians split off from the less symmetrical animals, but almost all the phyla living today, and several that became extinct in the last 542 Ma, have representatives in the Cambrian fossil record. The only logical conclusion is that the emergence of bilaterians and their fundamental diversification took place in the Precambrian: they are absent from earlier strata only because they had no hard parts. Comparing the DNA of living representatives of the main bilaterian phyla with that of non-bilaterians can help date the times of genetic and morphological separation, but only crudely. This ‘molecular clock’ approach points to some time between 900 and 650 Ma ago for the last common ancestor of bilaterians. Getting a handle on the minimum time for the split depends either on finding fossils or on unequivocal signs of bilaterian activity. The oldest unequivocally bilaterian fossils occur in rocks about 550 Ma old, which doesn’t take us much further back than the base of the Cambrian. But there are trace fossils that are significantly more ancient (Pecoits, E. et al. 2012. Bilaterian burrows and grazing behaviour at >585 million years ago. Science, v. 336, p. 1693-1696). They are tiny burrows in fine-grained sediments from Uruguay, so tiny that there is a chance that they may be traces of grazing bacterial films on the seabed rather than beneath it. The decider is the mechanics of trace fossil formation. Surface tracks only a millimetre or so across would only penetrate the biofilm, so on lithification they would simply disappear. Burrows, on the other hand, penetrate the sediment itself to get at food items. Even if this was a biofilm, the track would be in sediment above the film, so compaction would preserve it.
The Uruguayan examples are exquisite horizontal burrows, and they push back the minimum age for the origin of the bilaterians to at least 40 Ma older than the start of the Cambrian. In fact 585 Ma is a minimum age for the sediments as it is the U-Pb age of zircons in a granite that intrudes and metamorphoses them. An equally significant observation is that the burrows only appear towards the end of a glacial episode – probably the last of the Neoproterozoic ‘Snowball Earth’ events – as marked by tillites below the burrowed shales and occasional ‘dropstones’ in them. Could it be that the climatic and other stresses of a global glaciation triggered the fundamental division among the Animalia?
In physics, power is the rate at which energy is transferred, used, or transformed. The unit of power is the joule per second (J/s), known as the watt (in honor of James Watt, the eighteenth-century developer of the steam engine). For example, the rate at which a light bulb transforms electrical energy into heat and light is measured in watts—the more wattage, the more power, or equivalently the more electrical energy is used per unit time. Energy transfer can be used to do work, so power is also the rate at which this work is performed. The same amount of work is done when carrying a load up a flight of stairs whether the person carrying it walks or runs, but more power is expended during the running because the work is done in a shorter amount of time. The output power of an electric motor is the product of the torque the motor generates and the angular velocity of its output shaft. The power expended to move a vehicle is the product of the traction force of the wheels and the velocity of the vehicle.
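The stairs example can be made numerical with a short sketch; the mass, height, and times below are illustrative values, not figures from the original text:

```python
G = 9.81            # m/s^2, gravitational acceleration
mass = 20.0         # kg, illustrative load
height = 3.0        # m, illustrative height of the flight of stairs

work = mass * G * height                  # joules; the same whether walking or running
for label, seconds in (("walking", 15.0), ("running", 5.0)):
    power = work / seconds                # watts = joules per second
    print(f"{label}: {work:.0f} J over {seconds:.0f} s -> {power:.0f} W")

# The other forms mentioned above follow the same pattern:
#   electric motor: P = torque * angular_velocity
#   vehicle:        P = traction_force * velocity
```

With these assumed numbers the work is about 589 J either way, but the power is roughly 39 W when walking and 118 W when running, which is the point of the example.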
STORAGE BATTERY OPERATION
Most batteries that are used to make up a storage battery bank are a form of lead-acid battery. A lead-acid battery comprises a number of cells, each of which has two sets of sheets of lead, called plates, which are immersed in diluted sulphuric acid, called electrolyte. The electricity is taken from connections to the two sets of plates, called terminals.
Discharging the cell
When the cell is fully charged, the set of plates connected to its positive terminal is coated with lead dioxide (a compound of lead and oxygen). As the cell is discharged, oxygen from the lead dioxide on the plate is exchanged for sulphur from the sulphuric acid in the electrolyte. The sulphur forms a coating of lead sulphate on the positive plates that progressively replaces the coating of lead dioxide. The oxygen released into the electrolyte by this process combines with hydrogen that is left over from the breakdown of the sulphuric acid, and forms water. This water further dilutes the electrolyte. Sulphur from the sulphuric acid also attaches to the negative plates, forming a coat of lead sulphate. When all of the lead dioxide on the positive plate has been replaced with lead sulphate, the cell is completely discharged.
Charging the cell
When the cell is recharged the process is reversed. The lead sulphate coating on the positive plate is progressively replaced with lead dioxide. The lead sulphate on the negative plates reverts to lead, and the strength of the sulphuric acid in the electrolyte increases to its original level.
Heavily discharging the cell
The lead sulphate on the plates of a discharged battery grows into crystals. The larger the crystals become, the more stable they are, and the more difficult it becomes to engage them in the chemical reactions of the charging process. For this reason it is best to keep lead-acid batteries as charged as possible, and, if they are heavily discharged, to recharge them as soon as possible so that crystals don’t start to form. If the cell is left discharged for too long the crystals will become too big and stable, and it will not be possible to break them down again. This reduces the capacity of the cell and permanently damages it. This effect is called sulphation.
Overcharging the cell
If the cell is charged beyond fully charged (overcharged), the plates will no longer be able to take up the sulphur from the electrolyte, and the charging energy will go into breaking down the water of the electrolyte into hydrogen and oxygen gases. If the cell is not a sealed type the gases can escape into the atmosphere, creating an explosion hazard. The water lost from the electrolyte in this way must be replaced before the level drops sufficiently to expose the plates. If the overcharged cell is a sealed type the gases will pressurise the casing. When the overcharging stops, catalysts built into the cell slowly recombine the oxygen and hydrogen gases into water, replenishing the electrolyte and releasing heat in the process. If the overcharging doesn’t stop, the pressure may distort and damage the cell casing, or the gases may be released through a pressure relief valve into the atmosphere, and be lost to the electrolyte.
The charging process
The entire charging process is managed by the controller in the charging equipment, which may be a generator set, solar panels or a battery charger.
Good quality charging equipment will ensure that the storage battery bank is not damaged during the charging process, and that the entire charging process can occur properly without any attention from you.
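The chemistry described above is not written out as a reaction in the original text, but it corresponds to the standard overall reaction of a lead-acid cell, read left to right on discharge and right to left on charge:

PbO2 + Pb + 2 H2SO4 ⇌ 2 PbSO4 + 2 H2O

Lead dioxide on the positive plates and lead on the negative plates are both converted to lead sulphate while sulphuric acid is consumed and water is produced, which is why the electrolyte becomes weaker as the cell discharges and stronger again as it charges.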
In this section: Definition of Terms; Maintaining a safe environment for all patients; Educating families and carers; Considerations for discharge.
Falls are the most common cause of paediatric injury leading to emergency department visits. It is widely acknowledged that children are at risk of falls in the community and, with many education programs supporting prevention, it is important that this education is reflected in the hospital environment. Children fall as they grow, develop coordination and new skills, and are often unaware of their limitations. Therefore one could conclude that all children are at some risk of falling. The intention of this guideline is to raise awareness and educate nursing staff and the multidisciplinary team of the importance of maintaining a safe environment for all patients; assist with identifying patients who are at high risk of a fall; provide the tools to educate families and carers of the potential risk of falls; and outline strategies to develop individualised management plans of care to reduce risk for high-risk patients. All paediatric patients are considered at risk of falling and simple prevention strategies should be put in place to ensure the risk of injury is minimized. A safe environment should be maintained for all patients within the Royal Children's Hospital (RCH). Standard safety measures should be put in place for all patients regardless of identified risk; these include: Half of falls incidents within the RCH occur when a parent or carer is present. Whilst most parents are aware of maintaining a safe environment for their children in the home environment, many are unaware of the environmental risks when in hospital due to being in an unfamiliar environment accompanied with increased levels of anxiety related to hospital admission. The hospitalisation of children provides an opportunity to reinforce parent/carer information and education concerning normal psychological and motor development of small children, which is related to falls risks and other hazards both inside and outside hospital. The falls risk assessment score is documented in the Primary Assessment flowsheet in the EMR. The falls risk assessment tool does not replace clinical judgment; if a patient does not present with a high risk score but is thought to be high risk by medical or nursing staff, allied health, parents or carers, extra precautions to protect such patients should be documented and actioned. Factors influencing risk include: See Clinical Guideline (Nursing): Nursing Assessment for more detailed assessment information. Standard safety measures should be put in place for all patients regardless of the risk identified. A falls score equal to or greater than 3 necessitates the implementation of a Falls High Risk Management Plan, which is located in the Primary Assessment flowsheet within the EMR. For all patients identified as high risk, i.e., those with a falls risk score of 3 or greater, a Falls High Risk Management Plan must be commenced. The plan will be developed in collaboration with the child's parent or carer and will be specific to the patient's individual needs. The plan will remain in use until the patient's falls risk score changes. If the falls risk score alters, a new plan will be implemented as the patient's needs may have changed.
Patient risk should continue to be assessed daily. Once the patient's risk score is less than 3 and the patient's risk of falling is reduced, a management plan is no longer required; however, it is important that a safe environment is always maintained. A physiotherapist can advise as to how to safely support the patient during positioning, transfers, standing, walking and use of mobility aids. An occupational therapist can ensure safe setup of the ward bedroom, bathroom and toilet to minimise falls risks and recommend management techniques/assistive equipment for self-care tasks.
In the event of the occurrence of a fall:
Documentation of a fall event
Some patients may have a high risk score at the time of discharge. For this patient group the following should be considered: High risk patients may be eligible for Post Acute Care (PAC). To make a referral contact the RCH Complex Care Hub.
Little Schmidy Falls Risk Assessment Tool
The development of this nursing guideline was coordinated by Nadine Stacey, Clinical Lead, Quality and Safety, and approved by the Nursing Clinical Effectiveness Committee. Updated August 2017.
How hot are atoms in the shock wave of an exploding star? A new method to measure the temperature of atoms during the explosive death of a star will help scientists understand the shock wave that occurs as a result of this supernova explosion. An international team of researchers, including a Penn State scientist, combined observations of a nearby supernova remnant—the structure remaining after a star's explosion—with simulations in order to measure the temperature of slow-moving gas atoms surrounding the star as they are heated by the material propelled outward by the blast. The research team analyzed long-term observations of the nearby supernova remnant SN1987A using NASA's Chandra X-ray Observatory and created a model describing the supernova. The team confirmed that the temperature of even the heaviest atoms—which had not yet been investigated—is related to their atomic weight, answering a long-standing question about shock waves and providing important information about their physical processes. A paper describing the results appears January 21, 2019, in the journal Nature Astronomy. "Supernova explosions and their remnants provide cosmic laboratories that enable us to explore physics in extreme conditions that cannot be duplicated on Earth," said David Burrows, professor of astronomy and astrophysics at Penn State and an author of the paper. "Modern astronomical telescopes and instrumentation, both ground-based and space-based, have allowed us to perform detailed studies of supernova remnants in our galaxy and nearby galaxies. We have performed regular observations of supernova remnant SN1987A using NASA's Chandra X-ray Observatory, the best X-ray telescope in the world, since shortly after Chandra was launched in 1999, and used simulations to answer longstanding questions about shock waves." The explosive death of a massive star like SN1987A propels material outwards at speeds of up to one tenth the speed of light, pushing shock waves into the surrounding interstellar gas. Researchers are particularly interested in the shock front, the abrupt transition between the supersonic explosion and the relatively slow-moving gas surrounding the star. The shock front heats this cool slow-moving gas to millions of degrees—temperatures high enough for the gas to emit X-rays detectable from Earth. "The transition is similar to one observed in a kitchen sink when a high-speed stream of water hits the sink basin, flowing smoothly outward until it abruptly jumps in height and becomes turbulent," said Burrows. "Shock fronts have been studied extensively in the Earth's atmosphere, where they occur over an extremely narrow region. But in space, shock transitions are gradual and may not affect atoms of all elements the same way." The research team, led by Marco Miceli and Salvatore Orlando of the University of Palermo, Italy, measured the temperatures of different elements behind the shock front, which will improve understanding of the physics of the shock process. These temperatures are expected to be proportional to the elements' atomic weight, but the temperatures are difficult to measure accurately. Previous studies have led to conflicting results regarding this relationship, and have failed to include heavy elements with high atomic weights. The research team turned to supernova SN1987A to help address this dilemma. Supernova SN1987A, which is located in the Large Magellanic Cloud, a nearby satellite galaxy of the Milky Way, was the first supernova visible to the naked eye since Kepler's Supernova in 1604.
It is also the first to be studied in detail with modern astronomical instruments. The light from its explosion first reached Earth on February 23, 1987, and since then it has been observed at all wavelengths of light, from radio waves to X-rays and gamma rays. The research team used these observations to build a model describing the supernova. Models of SN1987A have typically focused on single observations, but in this study, the researchers used three-dimensional numerical simulations to incorporate the evolution of the supernova, from its onset to the current age. A comparison of the X-ray observations and the model allowed the researchers to accurately measure atomic temperatures of different elements with a wide range of atomic weights, and to confirm the relationship that predicts the temperature reached by each type of atom in the interstellar gas. "We can now accurately measure the temperatures of elements as heavy as silicon and iron, and have shown that they indeed do follow the relationship that the temperature of each element is proportional to the atomic weight of that element," said Burrows. "This result settles an important issue in the understanding of astrophysical shock waves and improves our understanding of the shock process."
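The article states the result qualitatively; the quantitative relationship behind it is the standard strong-shock (Rankine-Hugoniot) estimate for a collisionless shock, in which each ion species is heated almost independently of the others. This formula is a textbook result and is not quoted in the article itself:

$$kT_i \simeq \frac{3}{16}\, m_i\, v_s^{2}$$

Here $T_i$ is the post-shock temperature of ion species $i$, $m_i$ is its mass, $v_s$ is the shock velocity, and $k$ is Boltzmann's constant. Because $m_i$ scales with atomic weight, iron atoms behind the same shock end up roughly 56 times hotter than hydrogen atoms.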
Parkinson’s Disease (PD) belongs to a group of conditions called motor system disorders, which are the result of the loss of dopamine-producing brain cells. The four primary symptoms of PD are tremor, or trembling in hands, arms, legs, jaw, and face; rigidity, or stiffness of the limbs and trunk; bradykinesia, or slowness of movement; and postural instability, or impaired balance and coordination. As these symptoms become more pronounced, patients may have difficulty walking, talking, or completing other simple tasks. PD usually affects people over the age of 50. Early symptoms of PD are subtle and occur gradually. In some people the disease progresses more quickly than in others. As the disease progresses, the shaking, or tremor, which affects the majority of PD patients, may begin to interfere with daily activities. Other symptoms may include depression and other emotional changes; difficulty in swallowing, chewing, and speaking; urinary problems or constipation; skin problems; and sleep disruptions. There are currently no blood or laboratory tests that have been proven to help in diagnosing sporadic PD. Parkinson’s disease is caused by the progressive impairment or deterioration of neurons (nerve cells) in an area of the brain known as the substantia nigra. When functioning normally, these neurons produce a vital brain chemical known as dopamine. Dopamine serves as a chemical messenger allowing communication between the substantia nigra and another area of the brain called the corpus striatum. This communication coordinates smooth and balanced muscle movement. A lack of dopamine results in abnormal nerve functioning, causing a loss in the ability to control body movements.
Signs and symptoms
Parkinson’s disease symptoms and signs may vary from person to person. Early signs may be mild and may go unnoticed. Symptoms often begin on one side of your body and usually remain worse on that side, even after symptoms begin to affect both sides. Parkinson’s signs and symptoms may include:
- Tremor. A tremor, or shaking, usually begins in a limb, often your hand or fingers. You may notice a back-and-forth rubbing of your thumb and forefinger, known as a pill-rolling tremor. One characteristic of Parkinson’s disease is a tremor of your hand when it is relaxed (at rest).
- Slowed movement (bradykinesia). Over time, Parkinson’s disease may reduce your ability to move and slow your movement, making simple tasks difficult and time-consuming. Your steps may become shorter when you walk, or you may find it difficult to get out of a chair. Also, you may drag your feet as you try to walk, making it difficult to move.
- Rigid muscles. Muscle stiffness may occur in any part of your body. The stiff muscles can limit your range of motion and cause you pain.
- Impaired posture and balance. Your posture may become stooped, or you may have balance problems as a result of Parkinson’s disease.
- Loss of automatic movements. In Parkinson’s disease, you may have a decreased ability to perform unconscious movements, including blinking, smiling or swinging your arms when you walk.
- Speech changes. You may have speech problems as a result of Parkinson’s disease. You may speak softly, quickly, slur or hesitate before talking. Your speech may be more of a monotone rather than with the usual inflections.
- Writing changes. It may become hard to write, and your writing may appear small.
Can Parkinson’s Disease Be Prevented?
To date, there is no known prevention or cure for Parkinson’s disease.
But there are several treatment options, including drug therapy and/or surgery, that can reduce the symptoms and make living with the disease easier.
The term nitrogen oxides (NOx) describes a mixture of nitric oxide (NO) and nitrogen dioxide (NO2), which are gases produced from natural sources, motor vehicles and other fuel burning processes. Nitric oxide is colourless and is oxidised in the atmosphere to form nitrogen dioxide. Nitrogen dioxide has an odour, and is an acidic and highly corrosive gas that can affect our health and environment. Nitrogen oxides are critical components of photochemical smog. They produce the yellowish-brown colour of the smog. In poorly ventilated situations, indoor domestic appliances such as gas stoves and gas or wood heaters can be significant sources of nitrogen oxides.
Environmental and health effects of nitrogen oxides
Elevated levels of nitrogen dioxide can cause damage to the human respiratory tract and increase a person's vulnerability to, and the severity of, respiratory infections and asthma. Long-term exposure to high levels of nitrogen dioxide can cause chronic lung disease. It may also affect the senses, for example, by reducing a person's ability to smell an odour. High levels of nitrogen dioxide are also harmful to vegetation—damaging foliage, decreasing growth or reducing crop yields. Nitrogen dioxide can fade and discolour furnishings and fabrics, reduce visibility, and react with surfaces.
Air quality standard
The recommended air quality standards for nitrogen dioxide are:
- 0.12 parts per million (ppm) for a 1-hour exposure period
- 0.03 ppm for an annual exposure period.
These standards are designed to protect sensitive individuals, such as children and asthmatics. Typical outdoor nitrogen dioxide levels are well below the 1-hour standard and exposure at these levels does not generally increase respiratory symptoms.
Measuring nitrogen oxides
Nitrogen oxides are measured with a technique known as ‘chemiluminescence’, which is a chemical reaction that emits energy in the form of light. This particular reaction is the oxidation of nitric oxide (NO) to nitrogen dioxide (NO2) by ozone (O3) as shown below:
NO + O3 → NO2* + O2
It is an exothermic (heat generating) reaction, which produces an activated molecule of NO2*. When these NO2* molecules return from the activated state to the normal state, some energy is emitted in the form of a small amount of light. A photomultiplier tube measures the intensity of the emitted light. Since 1 NO molecule is required to form 1 NO2 molecule, the intensity of the chemiluminescent reaction is directly proportional to the NO concentration in the sample. The analyser measures the amount of light emitted and converts this to a concentration.
Nitrogen oxides analyser
The animated illustration shows how the analyser works. A vacuum pump draws both the air supply for the ozone generator and the ambient air samples into the analyser. The green dot shows the ambient air sample path. A high-voltage corona discharge generates the ozone in dry air, shown in the diagram by the path of the red dot. The chemiluminescent reaction only occurs between O3 and NO. To measure the NO2 component the instrument diverts the ambient air stream alternately through a converter containing a catalyst (molybdenum) maintained at a temperature of 315°C, which converts any NO2 present to NO before entering the reaction cell. The blue dot shows the path taken. The difference between NO levels in the undiverted and diverted gas streams is the amount of NO2. Differential optical absorption spectroscopy (DOAS) instruments also measure nitrogen oxides.
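As a rough sketch of how the two alternating gas streams described above combine into NO, NO2 and NOx readings, consider the following; the calibration factor and photomultiplier counts are invented for illustration and do not come from any particular instrument.

```python
# Illustrative sketch of the two-stream chemiluminescence measurement.
# The span factor and counts below are hypothetical.

span_ppb_per_count = 0.05   # calibration factor, assumed from a span-gas check

def to_ppb(photomultiplier_counts):
    # Emitted light intensity is proportional to the NO entering the reaction cell.
    return photomultiplier_counts * span_ppb_per_count

counts_no = 400    # ambient air sent straight to the reaction cell: NO only
counts_nox = 640   # air routed through the heated molybdenum converter: NO plus converted NO2

no_ppb = to_ppb(counts_no)      # 20 ppb of NO
nox_ppb = to_ppb(counts_nox)    # 32 ppb of NOx
no2_ppb = nox_ppb - no_ppb      # 12 ppb of NO2, obtained by difference
print(no_ppb, nox_ppb, no2_ppb)
```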
Radiographic inspection is based on exposing the test object to penetrating radiation. Either an X-ray machine or a radioactive source (Ir-192, Co-60, or in rare cases Cs-137) can be used as the source of photons. Since the amount of radiation emerging from the opposite side of the material can be detected and measured, variations in this amount (or intensity) of radiation are used to determine thickness or composition of material. Penetrating radiations are those restricted to that part of the electromagnetic spectrum of wavelength less than about 10 nanometers. The principle of Industrial Radiography Testing is Differential Absorption. This means that different materials absorb different amounts of radiation depending on thickness differences, density and the presence of defects. Generally the detection medium is film, which consists of an emulsion-gelatin containing radiation-sensitive silver halide crystals, such as silver bromide or silver chloride, and a flexible, transparent, blue-tinted base. When X-rays, gamma rays, or light strike the grains of the sensitive silver halide in the emulsion, some of the Br- ions are liberated and captured by the Ag+ ions. This change is of such a small nature that it cannot be detected by ordinary physical methods and is called a “latent (hidden) image.” However, the exposed grains are now more sensitive to the reduction process when exposed to a chemical solution (developer), and the reaction results in the formation of black, metallic silver. It is this silver, suspended in the gelatin on both sides of the base, that creates an image. Radiographic testing provides a permanent record in the form of a radiograph and provides a highly sensitive image of the internal structure of the material.
RT APPLICATIONS INCLUDE:
Radioactive sources have the advantage that they do not need a supply of electrical power to function, but they have the disadvantage that they cannot be turned off. It is also difficult, using radioactivity, to create a small and compact source that offers the photon flux possible with a normal sealed X-ray tube. Gamma rays are produced by subatomic particle interactions such as electron-positron annihilation, radioactive decay, fusion, fission or inverse Compton scattering in astrophysical processes.
INTERNAL X-RAY RADIOGRAPHY: The X-ray crawler technique is similar to conventional radiography; however, an X-ray source tube on a crawler device is run inside the pipe to each weld. Film is wrapped around the welds and the source tube is excited. The film is then developed in a mobile darkroom on location. The technique is quick and can inspect on average 150 welds per day. The advantages of X-ray crawlers are their speed and the short exposure time. The film is also crisper and much less grainy when compared to conventional radiography using iridium-type sources.
EXTERNAL X-RAY RADIOGRAPHY:
GAMMA RAY RADIOGRAPHY: Gamma radiography works the same way as X-ray radiography. However, to produce effective gamma rays only a small pellet of radioactive material sealed in a titanium capsule is needed, instead of a bulky X-ray producing machine. As isotopes are easily transported, gamma radiography is very useful in remote areas, where it can be used to check for defects in welds in pipelines carrying gas or oil. An advantage of using gamma radiography is that no power is required, which eliminates the need for X-ray sets that require power to operate and may not be readily available at a remote site.
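The differential-absorption principle can be sketched with the Beer-Lambert attenuation law, which is the standard description of narrow-beam attenuation; the attenuation coefficient and thicknesses below are invented purely for illustration.

```python
# Toy illustration of differential absorption in radiography (Beer-Lambert law).
# The attenuation coefficient and thicknesses are hypothetical.
import math

incident_intensity = 1.0   # relative units
mu_per_cm = 0.5            # assumed linear attenuation coefficient of the metal

def transmitted(thickness_cm):
    return incident_intensity * math.exp(-mu_per_cm * thickness_cm)

sound_section = transmitted(2.0)        # full 2 cm of sound weld metal in the beam path
section_with_void = transmitted(1.7)    # a 3 mm internal void leaves less metal to absorb

print(round(sound_section, 3), round(section_with_void, 3))
# More radiation reaches the film beneath the void, so that spot develops darker,
# which is how an internal defect shows up on the radiograph.
```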
Province: British Columbia
Location: Flows NW from Rocky Mountains
North West Company explorer Simon Fraser (1776–1862) opened the fur trade west of the Rocky Mountains, and was the first white man to descend the Fraser River to its mouth. Fraser was born in Bennington, Vermont, and came to Québec with his mother after his father, a Loyalist officer, died as a prisoner of war during the American Revolution. Fraser joined the North West Company in 1792 and was sent to the Athabasca department. He became a partner in the company in 1801. He founded the New Caledonia posts of McLeod Lake (1805), Stuart Lake (later Fort St. James, 1806), Fraser Lake (1806) and Fort George (1807). During May and June of 1808, with a party of nineteen French Canadian voyageurs, two clerks, and two Indians, Fraser made his great journey down the Fraser River from Fort George to present-day Vancouver. It was a bitter disappointment for him to discover that the river was not the Columbia, and that it was not a practical canoe route to the coast. In 1815, Fraser went to Montréal seeking to retire from the fur trade, but he was induced to return to Athabasca. On his return trip he was at Fort William when it was seized by Lord Selkirk in reprisal for the North West Company’s attacks on the Red River settlement. Fraser and other North West Company partners were sent to Montréal, charged with complicity in the attacks on Red River. When the trial took place at York in 1817, Selkirk was unable to prove his charges. Meanwhile, Fraser had retired and settled at St. Andrews, Upper Canada. The Fraser River was discovered by Alexander Mackenzie during his journey to the Pacific in 1793. On a map printed with his Voyages in 1801, Mackenzie called the river “Tacoutche Tesse, or Columbia River.” The Spaniards never found their way up the mouth of the Fraser, but in 1791, finding evidence that they were near the mouth of a major river, they named it “Rio Floridablanca” in honor of the prime minister of Spain. The Fraser River was also known in the early days as the New Caledonia River. It was named after Fraser in 1813 by David Thompson (the Thompson River was given its name by Fraser).
National Institutes of Health - The primary NIH organization for research on Lead Poisoning is the National Institute of Environmental Health Sciences
Lead is a metal that occurs naturally in the earth's crust. Lead can be found in all parts of our environment. Much of it comes from human activities such as mining and manufacturing. Lead used to be in paint; older houses may still have lead paint. You could be exposed to lead by breathing air, drinking water, eating food, or swallowing or touching dirt that contains lead. Lead exposure can cause many health problems. Lead can affect almost every organ and system in your body. In adults, lead can increase blood pressure and cause infertility, nerve disorders, and muscle and joint pain. It can also make you irritable and affect your ability to concentrate and remember. Lead is especially dangerous for children. A child who swallows large amounts of lead may develop anemia, severe stomachache, muscle weakness, and brain damage. Even at low levels, lead can affect a child's mental and physical growth.
Agency for Toxic Substances and Disease Registry
References and abstracts from MEDLINE/PubMed (National Library of Medicine)
We've used the Sun for drying clothes and food for thousands of years, but only recently have we been able to use it for generating power. The Sun is 150 million kilometres away, and amazingly powerful. Just the tiny fraction of the Sun's energy that hits the Earth (around a hundredth of a millionth of a percent) is enough to meet all our power needs many times over. In fact, every minute, enough energy arrives at the Earth to meet our demands for a whole year - if only we could harness it properly. Solar panels collect solar radiation and actively convert that energy to electricity. Solar panels are made up of several individual solar cells. These solar cells function similarly to large semiconductors and utilize a large-area p-n junction diode. When the solar cells are exposed to sunlight, the p-n junction diodes convert the energy from sunlight into usable electrical energy. The energy from photons striking the surface of the solar panel knocks electrons out of their orbits and releases them, and electric fields in the solar cells pull these free electrons into a directional current, which metal contacts in the solar cell collect as electricity. The more solar cells in a solar panel and the higher the quality of the solar cells, the more total electrical output the solar panel can produce.
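As a back-of-envelope illustration of how sunlight and cell quality translate into electrical output, here is a simple estimate; the panel area, efficiency and sun-hour figures are assumptions, not data from the text.

```python
# Rough estimate of the electrical output of a single solar panel.
# All input values are assumptions chosen only to illustrate the arithmetic.

irradiance_w_per_m2 = 1000.0   # bright midday sunlight, approximately
panel_area_m2 = 1.6            # assumed size of one residential panel
conversion_efficiency = 0.18   # assumed 18% cell efficiency

peak_output_watts = irradiance_w_per_m2 * panel_area_m2 * conversion_efficiency
print(peak_output_watts)                    # about 288 W under these assumptions

equivalent_full_sun_hours = 4.5             # assumed daily average for a temperate site
daily_energy_kwh = peak_output_watts * equivalent_full_sun_hours / 1000
print(round(daily_energy_kwh, 2))           # roughly 1.3 kWh per panel per day
```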
The book ‘Adventures of Huckleberry Finn’ was authored by Mark Twain. When it was published for the first time, people commented that the book had ‘coarse’ language. There were also objections to the portrayal of the character Huckleberry Finn: he was presented as a character with a true conscience, yet he was shown to steal, rebel against authorities, speak poorly, and run away from home. The book was banned because it was not considered suitable for young people or children to read. Some objected that the word ‘perspiration’ was rendered as ‘sweat’ in the book, which was considered bad language usage. Another word used in the book, ‘nigger’, has become more offensive in recent times; the word refers to black people or the dark-skinned race. After the issue of bad language was discussed and criticized, racism became another issue of concern about Twain's book. Because the word ‘nigger’ is used so many times, the book was not acceptable to many people. The character Jim in the novel was shown as if he could not think on his own and did everything according to Huck's wishes. Jim was not treated properly and was put down by various other characters. This was condemned by black readers, who expressed their agony at a time when most black people were still viewed as slaves. The book showed how inhumane people were to black people and how they were treated as slaves. The book was also controversial because the story deals mostly with Huck making the slave his friend. The strong relationship between them was an odd thing to show in those days; people could not accept a great friendship between a white and a black person. Apart from this, at the end of the story Huck helps set the black man free, which was not at all expected by ordinary people in those days. All of these aspects were responsible for the book being banned.
Fonts & Typesetting
The advent of desktop publishing has provided graphic artists with a seemingly endless collection of digital fonts to work with. Designers can also manipulate these fonts to create additional typographic forms. They can be layered, extended, overlapped or condensed, just to name a few. Although this is a dream for designers, these advanced typographic fonts can make it difficult to comprehend or even read the content of a printed document. The word font originated from the word foundry, relating to the location where type was cast, and has since evolved to mean that which represents the characters in a typeface. Fonts or typefaces are collections of characters. Characters are the smallest forms of the written language, in other words, separate letterforms. Whereas a character represents the printed image, a glyph represents the shape of each character. Fonts are measured in points. Designers can manipulate point spacing using either kerning or leading, or both. Kerning adjusts the spacing between the letterforms and leading adjusts the spacing between the lines. This allows a designer or printer to manipulate spacing or create different effects without changing the font. In addition to spacing, there are three main typesetting styles used when printing: justified left, justified right and centered. Wrapping the text around a visual is another typesetting option. Typestyles include two main categories: serif and sans serif. Serif fonts have small strokes, or "hats," on the letter edges and sans serif fonts do not. For example, serif fonts include Garamond, Times New Roman, Palatino, Lucida Bright, and Courier. Sans serif fonts include Arial, Chicago, Geneva, and Monaco. In addition, both serif and sans serif fonts can use either bold or italic styles. In conclusion, a font is a complete set of letters, numbers and punctuation marks. Size, spacing, and alignment provide additional arrangements for the font in a printed document.
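A small sketch of the measurements mentioned above may help; the 1 point = 1/72 inch convention is standard in digital typesetting, while the leading ratio and kerning value below are simply assumed for the example.

```python
# Illustrative point, leading and kerning arithmetic.
# The 72-points-per-inch convention is standard; the other values are assumptions.

POINTS_PER_INCH = 72.0

def points_to_inches(points):
    return points / POINTS_PER_INCH

type_size_pt = 11.0
leading_pt = type_size_pt * 1.2          # a common rule of thumb: leading ~120% of type size
print(points_to_inches(type_size_pt))    # type size expressed in inches
print(leading_pt)                        # baseline-to-baseline spacing in points

# Kerning tightens or loosens the gap between a particular pair of letters.
default_advance_pt = 6.2                 # hypothetical advance width for the letter 'A'
kerning_adjustment_pt = -0.5             # pull the following 'V' slightly closer
print(default_advance_pt + kerning_adjustment_pt)
```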
The Gender Wheel (2017), written and illustrated by Maya Gonzalez, introduces readers to gender diversity through the concept of a “gender wheel.” Gonzalez’s images are warm and inviting. She illustrates her characters in a range of skin-tones with a variety of gender expressions. The commendable purpose of the book is to teach children to understand gender outside of a binary model.Gonzalez connects the binary sex-gender system to European colonization of the Americas. As a result, she firmly places gender meanings as we inherit them within a history of socio-political struggle and conquest. This is a fair analysis and a useful alternative to considering binary sex-gender distinctions as a product of nature. For Gonzalez, gender diversity is natural and gender “boxes,” which emanate from a culture of conquest, violently disrupt the soft circles and cycles nature produces. In Gonzalez’s theory of gender, nature and culture stand in an antagonistic relationship and readers should search to reconnect with nature and by extension their unique gender “truth”. Gonzalez writes: “Seeing where girl/boy beliefs come from and how they try to control nature helps us see through the false boxes they create. This brings the truth of nature back into focus.” Gonzalez’s theorization of gender is reminiscent of essentialist models of gender and not the most sophisticated gender theory. Although I don’t agree with Gonzalez’s theory of gender, I certainly appreciate the overall message of inclusivity. I think this could be a teachable text and a useful intervention into binary thinking about gender for young children. Because it is so very text heavy, I would suggest it for children nine to eleven. It could certainly be taught to younger children, but would need a lot of “scaffolding” from a competent teacher. This review is part of my “Snapshots of LGBTQ Kid Lit” project. I’m working on a book, The New Queer Children’s Literature: Exploring the Principles and Politics of LGBTQ* Children’s Picture Books, which is under contract with the University Press of Mississippi. Part of my research is identifying and interpreting English-language children’s picture books with LGBTQ* content published in the US and Canada between 1979 and 2019. Follow my blog to follow my journey!
A tool used to raise awareness and illustrate a person's impact on the world. The website activity is intended for use by individuals, but discussion can take place in groups afterwards. This activity also features as a useful eco-design tool in the Students' Section of this website.
When to use the activity
Use this activity when highlighting the link between consumption patterns and lifestyles and sustainability. It is particularly useful in making students think about their own behaviour as consumers and the impact they have as individuals.
Who is the activity for?
This introductory activity can be used at either AS or A2 level. It can be either a group or individual activity or a combination of the two.
Activity and hints on how to organise it
You are asked a number of questions about your lifestyle. Enter your responses using the drop-downs provided. As you give your answers you can see how many planets are needed to support the way you live (if everyone adopted your lifestyle).
- If working in groups, review the results and the students' reaction to them.
For further information about Footprinting
Very Low Frequency Electromagnetic (VLF-EM)
VLF-EM surveying is carried out with a combination magnetometer and VLF-EM receiver. This instrument is designed to measure the electromagnetic component of the very low frequency field (VLF-EM) which is transmitted from various stations within North America. In all electromagnetic prospecting, a transmitter induces an alternating magnetic field (called the primary field) by having a strong alternating current move through a coil of wire. This primary field travels through any medium and, if a conductive mass such as a sulphide body is present, the primary field induces a secondary alternating current in the conductor, and this current in turn induces a secondary magnetic field. The receiver picks up the primary field and, if a conductor is present, the secondary field distorts the primary field. The fields are expressed as a vector, which has two components, the “in-phase” (or real) component and the “out-of-phase” (or quadrature) component. The VLF-EM receiver measures the tilt angle, in degrees, of the distorted electromagnetic field relative to the orientation the field would have if no conductor were present. Since the fields lose strength proportionally with the distance they travel, a distant conductor has less of an effect than a close conductor. Also, the lower the frequency of the primary field, the further the field can travel and therefore the greater the depth penetration. The VLF-EM uses a frequency range from 13 to 30 kHz, whereas most EM instruments use frequencies ranging from a few hundred to a few thousand Hz. Because of its relatively high frequency, the VLF-EM can pick up bodies of a much lower conductivity and therefore is more susceptible to clay beds, electrolyte-filled fault or shear zones and porous horizons, graphite, carbonaceous sediments, lithological contacts, as well as sulphide bodies of too low a conductivity for other EM methods to pick up. Consequently, the VLF-EM has additional uses in mapping structure and in picking up sulphide bodies of too low a conductivity for conventional EM methods and too small for induced polarization (in places it can be used instead of IP). However, its susceptibility to less conductive bodies results in a number of anomalies, many of them difficult to explain; thus, VLF-EM data should preferably not be interpreted without a good geological knowledge of the property and/or other geophysical and geochemical surveys.
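The tilt-angle idea can be pictured with a toy calculation: if a conductor adds a small vertical secondary field to the otherwise horizontal primary field, the resultant field is tilted by the angle whose tangent is their ratio. The field strengths below are invented and phase (quadrature) effects are ignored, so this is only a geometric illustration, not a model of a real receiver.

```python
# Toy geometric illustration of the tilt angle measured in VLF-EM surveying.
# Field strengths are hypothetical and quadrature effects are ignored.
import math

primary_horizontal = 1.00     # normalised primary field from the VLF transmitter
secondary_vertical = 0.12     # in-phase vertical secondary field from a nearby conductor

tilt_deg = math.degrees(math.atan2(secondary_vertical, primary_horizontal))
print(round(tilt_deg, 1))     # about 6.8 degrees of tilt from horizontal

# A weaker secondary field, e.g. from a more distant conductor, tilts the field less.
weaker_secondary = 0.03
print(round(math.degrees(math.atan2(weaker_secondary, primary_horizontal)), 1))  # about 1.7 degrees
```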
Europe was essentially defined by Islam. And Islam is redefining it now. For centuries in early and middle antiquity, Europe meant the world surrounding the Mediterranean, or Mare Nostrum (“Our Sea”), as the Romans famously called it. It included North Africa. Indeed, early in the fifth century a.d., when Saint Augustine lived in what is today Algeria, North Africa was as much a center of Christianity as Italy or Greece. But the swift advance of Islam across North Africa in the seventh and eighth centuries virtually extinguished Christianity there, thus severing the Mediterranean region into two civilizational halves, with the “Middle Sea” a hard border between them rather than a unifying force. Since then, as the Spanish philosopher José Ortega y Gasset observed, “all European history has been a great emigration toward the North.” To read the full article, visit The Atlantic website.
Hydrothermal vents: underwater geysers
Hot fluid jets and wafts from cracks in the seafloor thousands of meters below the ocean surface. The fluid is essentially water that has filtered down into the rock of the Earth's crust through tiny channels and fissures. Surrounding rocks heat the water as it moves downwards, and various minerals dissolve in it. Vents occur where some of the hot fluid finds its way back to the surface. Most areas containing vents are found where the Earth's tectonic plates are moving apart, or in other areas of tectonic activity, such as ocean basins and hotspots.
Plumes of hot fluid
As the hot fluid shoots out of cracks in the rock, it meets the surrounding ocean water, which is cold—just a few degrees above freezing. So the hot fluid begins to cool. The further it moves from the point it came out of the seafloor (the vent), the cooler it gets. But the fluid is so hot when it leaves the seafloor that it can take several hundred meters to cool down to the temperature of the surrounding ocean. Consequently, each vent is marked by a "plume" of warm water that billows into the ocean above. Vents can be tracked down by finding their associated plumes.
Chimneys and smokers
Hot vent fluid has many minerals dissolved in it. When it mixes with seawater and cools, minerals precipitate out of solution, forming a dense cloud of what looks like smoke—so some vents are referred to as "smokers." Depending on the fluid's temperature and what minerals are dissolved in it, the smoke can look black or white. Black smokers emit hot fluids containing iron and sulfide. White smoker fluid is cooler and contains whitish compounds of barium, calcium and silicon. The fluid jetting out of these chimneys can be very hot indeed: hundreds of degrees Celsius. Some minerals drop out of the water right next to the vent; over time, these can build up to form "chimneys." Some chimneys form very fast—growing several inches per day. Large chimneys can grow to over 150 feet (50 m) high and 90 feet (30 m) in diameter. As the chimneys grow, they are colonized by animals and microbes which are able to survive in the extreme conditions.
In some areas, seawater seeps into cracks in the seafloor and mixes with the rising vent fluid. So by the time the vent fluid exits the seafloor it is cooler, and has a different chemical make-up, than the fluid that gushes out of chimneys (above): tens, rather than hundreds of degrees Celsius. This is still considerably hotter than the background temperature of the ocean, which is just a few degrees above freezing.
Just add sulfur
Often, the hot vent fluids contain sulfur-rich chemicals. Sulfur is a yellow mineral that was much prized by mediaeval alchemists, and is one of the ingredients of gunpowder. Sulfur compounds are often very poisonous. However, some microbes have evolved the ability to break down the sulfur-containing chemicals in vent fluids to derive energy. This enables the microbes and many other animals to live in great abundance around vents on the ocean floor, despite the fact that there are no plants in the deep ocean.
29 April 2020
It is difficult to process the deluge of news about the coronavirus (SARS-CoV-2) responsible for the COVID-19 pandemic. Scientists, clinicians and public health experts are still struggling to get a handle on this pathogen, which was completely unknown just six months ago. Myths and misconceptions can spread quickly in this environment, but there is information available to help the public to better protect themselves and their loved ones. Many aspects of SARS-CoV-2 are not yet understood. Research [1] has concluded that the virus can stay in the air for up to three hours under specific hospital settings, because inserting breathing tubes in COVID-19 patients makes the virus spread, and stay in the air, more aggressively. Scientists, however, still cannot say that the virus is airborne. Policymakers are still trying to determine whether mask use should be common advice. Some facts about the respiratory virus that has affected almost 3 million people around the world, however, are well established.
Myth: Only old people need to worry about COVID-19
Elderly people are at greatest risk of severe disease or death from coronavirus infection. One recent study [2] in Italy found that the median age of patients admitted to the intensive care unit (ICU) was 63 years old, and mortality rates were more than twice as high for those patients over 63. However, patients as young as 14 were hospitalized, and a report [3] from the US Centers for Disease Control and Prevention found that 20% of American ICU patients were between the ages of 20 and 44. And as for the many young people who remain asymptomatic after infection, they are still contagious and can spread the disease, putting older family members, neighbours, and coworkers at considerable risk.
Myth: The COVID-19 coronavirus originated in a laboratory
The larger family of coronaviruses are well-known ‘zoonotic’ agents—meaning, pathogens that can make the leap from animal hosts to humans. In the past 20 years, the world has already seen multiple outbreaks from deadly coronaviruses that originated in species such as bats or camels, including the severe acute respiratory syndrome (SARS) outbreak of 2002–3 and Middle East respiratory syndrome (MERS), which first emerged in 2012. All available evidence indicates that the SARS-CoV-2 virus responsible for COVID-19 is directly descended from a natural bat-borne virus. In a recent study [4] from Nature Medicine, a team of scientists from the US, Australia and UK analyzed the SARS-CoV-2 genome, and report that “our analyses clearly show that SARS-CoV-2 is not a laboratory construct or a purposefully manipulated virus.”
Myth: COVID-19 is less dangerous than the flu
Seasonal and pandemic influenza are both major threats to health and life—Johns Hopkins estimates that flu virus infection is responsible for between 291,000 and 646,000 deaths worldwide each year. In contrast, COVID-19 has already caused upward of 200,000 deaths as of late April — just five months since the virus first appeared. It should also be noted that the COVID-19 death toll is in spite of social distancing, surveillance, and infection control measures taken by many countries. Furthermore, both vaccines and therapeutics are available to mitigate influenza, whereas no such protection is currently available for COVID-19, creating the potential for a much deadlier situation if strong public health measures are not maintained.
COVID-19 is also more contagious than the flu: each COVID-19 patient infects between 2.2 and 2.5 others, whereas the figure for the flu is only about 1.3. The flu's mortality rate is also less than 1%, whereas early estimates from the World Health Organization put the rate for COVID-19 at 3.4%.
Myth: A mask is sufficient to protect against catching COVID-19
In many countries, members of the public are being encouraged or even required to wear a surgical or cloth mask when in public during the pandemic. Masks can help reduce the spread of infection and are a good precaution, but should not be seen as complete protection. A report [5] published in Nature Medicine found that individuals who wear homemade cotton masks or surgical masks are less likely to spread droplets containing the virus when they breathe or talk, and can thus help protect others in the community from infection. However, wearing a mask will not necessarily prevent the wearer from contracting COVID-19 from unmasked individuals. It is essential that the mouth and nose are both fully covered, that the wearer refrains from touching either their face or the mask while in use, and that the mask is either immediately discarded afterward (for surgical masks) or washed before reuse (for fabric masks). And even with a mask, appropriate social distancing remains essential for maximum protection. It is also important to remember that the virus can also be caught through the eyes.
Myth: Pets can spread COVID-19
To date, there have been a few reports of animals testing positive for infection with the SARS-CoV-2 virus, including one cat in Belgium and two dogs in Hong Kong. However, there is currently no evidence to suggest that this virus can cause active disease in these animals, and likewise, there are no indications that animals carrying the virus can transmit it to humans. One laboratory study that is in preprint has demonstrated that cats can spread the virus to other cats in an experimental setting, but none of these animals developed disease, and it remains unclear whether similar spread between animals occurs in the real world. The American Veterinary Medical Association has announced that they “have no information that suggests that pets might be a source of infection for people with the coronavirus that causes COVID-19,” although they advise that patients with active infection should limit their contact with pets until they recover.
Myth: Warm spring and summer weather will slow the spread of COVID-19
In temperate climates, cold and flu season typically spans from late fall to spring. The reasons for this are not fully understood, but some experts [6] believe that the airborne droplets that spread these viruses are more stable in the low-humidity winter air than in the damper summertime air. Our mucus membranes and immune systems may also be more vulnerable to infection in the winter. Although there is some evidence that COVID-19 is also somewhat hindered by warmer weather, many leading virologists see little reason to believe that summer will meaningfully hamper the pandemic's spread, including Harvard researchers in a recent study [7] published in Science. This is largely due to the novelty of the SARS-CoV-2 virus; even if weakened by heat or humidity, human immune systems are ill-equipped to fend it off, relative to familiar foes like cold or flu viruses.
References
1. van Doremalen, N. et al. Aerosol and Surface Stability of SARS-CoV-2 as Compared with SARS-CoV-1. New England Journal of Medicine 382 (2020).
2. Grasselli, G. et al. Baseline Characteristics and Outcomes of 1591 Patients Infected With SARS-CoV-2 Admitted to ICUs of the Lombardy Region, Italy. JAMA (2020).
3. Severe Outcomes Among Patients with Coronavirus Disease 2019 (COVID-19) — United States, February 12–March 16, 2020. MMWR Morb Mortal Wkly Rep 69, 343-346 (2020).
4. Andersen, K. G. et al. The proximal origin of SARS-CoV-2. Nature Medicine 26, 450–452 (2020).
5. Leung, N. H. L. et al. Respiratory virus shedding in exhaled breath and efficacy of face masks. Nature Medicine (2020).
6. Shaman, J. et al. Absolute humidity and the seasonal onset of influenza in the Continental United States. PLoS Biology 8(2) (2010).
7. Kissler, S. et al. Projecting the transmission dynamics of SARS-CoV-2 through the postpandemic period. Science.
Abraham Lincoln is elected the 16th president of the United States over a deeply divided Democratic Party, becoming the first Republican to win the presidency. Lincoln received only 40 percent of the popular vote but handily defeated the three other candidates: Southern Democrat John C. Breckinridge, Constitutional Union candidate John Bell, and Northern Democrat Stephen Douglas, a U.S. senator for Illinois. Lincoln, a Kentucky-born lawyer and former Whig representative to Congress, first gained national stature during his campaign against Stephen Douglas of Illinois for a U.S. Senate seat in 1858. The senatorial campaign featured a remarkable series of public encounters on the slavery issue, known as the Lincoln-Douglas debates, in which Lincoln argued against the spread of slavery, while Douglas maintained that each territory should have the right to decide whether it would become free or slave. Lincoln lost the Senate race, but his campaign brought national attention to the young Republican Party. In 1860, Lincoln won the party’s presidential nomination. In the November 1860 election, Lincoln again faced Douglas, who represented the Northern faction of a heavily divided Democratic Party, as well as Breckinridge and Bell. The announcement of Lincoln’s victory signaled the secession of the Southern states, which since the beginning of the year had been publicly threatening secession if the Republicans gained the White House. By the time of Lincoln’s inauguration on March 4, 1861, seven states had seceded, and the Confederate States of America had been formally established, with Jefferson Davis as its elected president. One month later, the American Civil War began when Confederate forces under General P.G.T. Beauregard opened fire on Union-held Fort Sumter in South Carolina. In 1863, as the tide turned against the Confederacy, Lincoln emancipated the slaves and in 1864 won reelection. In April 1865, he was assassinated by Confederate sympathizer John Wilkes Booth at Ford’s Theatre in Washington, D.C. The attack came only five days after the American Civil War effectively ended with the surrender of Confederate General Robert E. Lee at Appomattox. For preserving the Union and bringing an end to slavery, and for his unique character and powerful oratory, Lincoln is hailed as one of the greatest American presidents.
Why get vaccinated?
Influenza (“flu”) is a contagious disease that spreads around the United States every year, usually between October and May. Flu is caused by influenza viruses, and is spread mainly by coughing, sneezing, and close contact. Anyone can get flu. Flu strikes suddenly and can last several days. Symptoms vary by age, but can include:
- sore throat
- muscle aches
- runny or stuffy nose
Flu can also lead to pneumonia and blood infections, and cause diarrhea and seizures in children. If you have a medical condition, such as heart or lung disease, flu can make it worse. Flu is more dangerous for some people. Infants and young children, people 65 years of age and older, pregnant women, and people with certain health conditions or a weakened immune system are at greatest risk. Each year thousands of people in the United States die from flu, and many more are hospitalized. Flu vaccine can:
- keep you from getting flu,
- make flu less severe if you do get it, and
- keep you from spreading flu to your family and other people.
Metal Additive Manufacturing
Metal additive manufacturing (metal AM), or metal 3D printing, has the potential to profoundly change the production, time-to-market, and simplicity of components and assemblies. Unlike conventional or subtractive manufacturing processes, such as drilling, which creates a part by removing material, additive manufacturing builds a part using a layer-by-layer process directly from a digital model, without the use of molds or dies that add time, waste material, and expense to the manufacturing process. Additive manufacturing has been used as a design and prototyping tool for decades, but the focus of additive manufacturing is now shifting to the direct production of components, such as medical implants, aircraft engine parts, and jewelry. Additive manufacturing is not a single type of technology or process. All additive manufacturing systems employ a common layer-by-layer approach, but they use a wide variety of technologies, materials, and processes.
Additive manufacturing technologies utilizing metal powders:
- Laser Sintering (LM/SLS/SLFS)*
- Electron Beam Melting (EBM)*
- Selective Inkjet Binding (SIB)*
- Laser Powder Forming (LPF)
- Fused Deposition Modeling (FDM)/Extrusion
* Powder bed process
Universe may hold three times as many stars as thought
New observations have indicated that there are three times as many stars in the universe as previously believed. Instruments at the Keck Observatory in Hawaii have detected the faint signature of red dwarfs in eight massive galaxies between about 50 million and 300 million light years away - and found that they are much more widespread than previously thought. "No one knew how many of these stars there were," said Pieter van Dokkum, a Yale University astronomer who led the research. "Different theoretical models predicted a wide range of possibilities, so this answers a longstanding question about just how abundant these stars are." The team discovered that there are about 20 times as many red dwarfs in these elliptical galaxies as in the Milky Way. "We usually assume other galaxies look like our own. But this suggests other conditions are possible in other galaxies," says Charlie Conroy of the Harvard-Smithsonian Center for Astrophysics. "So this discovery could have a major impact on our understanding of galaxy formation and evolution." For instance, Conroy said, galaxies may contain less dark matter than previous measurements of their masses might have indicated. Instead, the abundant red dwarfs could contribute more mass than realized. As well as boosting the total number of stars in the universe, the discovery also increases the number of planets orbiting those stars - in turn elevating the number of planets that might harbor life. "There are possibly trillions of Earths orbiting these stars," van Dokkum said, adding that the red dwarfs they discovered, which are typically more than 10 billion years old, have been around long enough for complex life to evolve. "It’s one reason why people are interested in this type of star."
Introduction to Particle Theory – Notes and Essential Questions
Today we are going to get an initial run at particle theory and molecules. We have already discussed some elements of this, but today we are going to go onto Discovery Techbook to take some initial notes on molecules and how they make up our universe. To get to Discovery Techbook you need to visit the AEMS homepage and follow the provided link to the website. Then students will use their regular school log-in information to access their “assignment”. Students are to complete the “Essential Questions” below using the notes taken online. They are to submit their notes along with their complete answers to the questions.
- What are molecules made of and what are their properties?
- How are compounds different from the elements from which they are formed?
- How can you predict how atoms will combine to form a molecule?
- How can ionic compounds break into pieces?
The subphylum Chelicerata is one of the five subdivisions of the phylum Arthropoda, with members characterized by the absence of antennae and mandibles (jaws) and the presence of chelicerae (a pincer-like mouthpart as the anterior appendage, composed of a base segment and a fang portion). Extant chelicerates include spiders, scorpions, ticks, and mites (class Arachnida), horseshoe crabs (class Xiphosura or Merostomata), and sea spiders (class Pycnogonida). Chelicerata is one of five subphyla into which arthropods are typically divided. The other subphyla are Trilobitomorpha (trilobites), Myriapoda (millipedes, centipedes), Hexapoda (insects), and Crustacea (lobsters, crabs, barnacles, shrimp, copepods, etc.). Chelicerates, which are mainly predatory arthropods, ultimately outlasted the now extinct trilobites, the common marine arthropods of the Cambrian period. Most of the marine chelicerates, including all of the eurypterids, are now extinct. The chelicerates and their closest fossil relatives (mostly originally included in the Xiphosura) are grouped together with the trilobites to form the taxon Arachnomorpha. Chelicerata reflects both the diversity and unity in nature, having a unique body form distinct from other arthropods, and yet this large and varied group of invertebrates, found worldwide, all share similar attributes from a common lineage. As with all arthropods, chelicerates are characterized by the possession of a segmented body, a pair of jointed appendages on each segment, and an exoskeleton. In the Chelicerata, the body is divided into two parts. The anterior part is called a prosoma (or cephalothorax) and is composed of eight segments plus a presegmental acron. The posterior part is called an opisthosoma (or abdomen) and is composed of twelve segments plus a postsegmental telson. The prosoma usually has eyes. The first two segments of the prosoma bear no appendages; the third bears the chelicerae. The fourth segment bears legs or pedipalps, and all subsequent segments bear legs. The legs on the prosoma are either uniramous or have a very reduced gill branch, and are adapted for walking or swimming. The appendages on the opisthosoma, in contrast, are either absent or are reduced to their gill branch. As in other arthropods, the mouth lies between the second and third segments, but whereas in other groups there is usually a pair of antennae on the last preoral segment, here there are none. The chelicerae, which give the group its name, are pointed appendages that grasp the food in place of the chewing mandibles most other arthropods have. Most chelicerates are unable to ingest anything solid, so they drink blood or spit or inject digestive enzymes into their prey. The Chelicerata are divided into four classes: The Pycnogonida actually show some strong differences from the body plan described above, and it has been suggested that they represent an independent line of arthropods. They may have diverged from the other chelicerates early on, or represent highly modified forms. Sometimes they are excluded from the Chelicerata but grouped with them as the Cheliceriformes. Eurypterida is an extinct class that predates the earliest fishes. The eurypterid (sea scorpion) was the largest known arthropod that ever lived (with the possible exception of Arthropleuridae). The largest, such as Pterygotus, reached two meters or more in length, but most species were less than 20 centimeters.
They were formidable predators that thrived in warm shallow water from the Cambrian to the Permian, roughly 510 to 248 million years ago. Although called "sea scorpions," only the earliest ones were marine (most lived in brackish or freshwater), and they were not true scorpions. Xiphosura is a class of marine chelicerates that includes a large number of extinct lineages and only four recent species in the family Limulidae, the horseshoe crabs. The group has hardly changed in millions of years; the modern horseshoe crabs look nearly identical to prehistoric genera such as the Jurassic Mesolimulus, and are considered to be living fossils. The name Merostomata as the class of horseshoe crabs is traditional, but is unpopular in cladistic taxonomies because in all recent cladistic hypotheses it refers to a paraphyletic group composed of the Xiphosura + Eurypterida. The Burgess Shale animal Sanctacaris, and perhaps the aglaspids, may also belong here. These are extinct forms that arose in the Cambrian, and the aglaspids are believed to have died out during the Silurian. After them, the oldest group of chelicerates are the Eurypterida, found from the Ordovician onwards. When young, these show a resemblance to the trilobites, suggesting a possible relationship between the two groups. New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by the terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution; credit is due under the terms of this license to both the New World Encyclopedia contributors and the volunteer contributors of the Wikimedia Foundation.
The Origin of Meteorites A meteorite is a rock that was once part of another planet, a moon or a large asteroid. It was dislodged from its home by a powerful impact event. That impact launched the rock with enough force to escape the gravity of its home body and propel it through space. While it travelled through space it was known as a "meteoroid". Eventually, perhaps billions of years later, the meteoroid was captured by Earth's gravitational field and fell through Earth's atmosphere to the ground. Meteorites from Mars, Moon and Asteroids Although meteorites are extremely rare, thousands of them have been found on Earth's surface. Over 99% of all meteorites found on Earth are thought to be pieces of asteroids. A few of the meteorites found on Earth have been attributed to specific solar system bodies. A very small number (less than 1/4% of all meteorites found on Earth) have been studied carefully enough to be attributed to the Moon or to Mars. A few have been studied thoroughly enough to be attributed to the asteroid Vesta. Some researchers believe that an amazing 5% to 6% of all meteorites found on Earth originated from Vesta. Determining the Source of a Meteorite Researchers have learned a lot about the chemistry, mineralogy and isotopic composition of rocks from the Moon by studying specimens brought back to Earth by NASA's lunar missions. The characteristics of rocks on Mars have been determined through analyses done by rovers and other equipment sent to that planet. By comparing the composition of meteorites to this data, researchers have been able to identify meteorites that are probably pieces of the Moon and Mars. While orbiting Vesta, NASA's Dawn spacecraft scanned the surface of the asteroid, collecting data about its chemical and mineralogical composition. This information has confirmed that HED meteorites, a subgroup of stony achondrite meteorites, are pieces of Vesta that have fallen to Earth. The colorful images at the top of this page are photomicrographs of slices of HED meteorites from Vesta taken in plane polarized light under crossed polarizers. HED meteorites are achondrites (stony meteorites that do not contain chondrules) that are similar to terrestrial igneous rocks. They are thought to have originated from Vesta. There are four subgroups: howardites, basaltic eucrites, cumulate eucrites, and diogenites. These differ in mineral composition and texture, which were determined by their history while still part of the crust of Vesta. (Photomicrographs of three Vesta meteorites are shown in greater detail at the top of this page. Images by Harry McSween, University of Tennessee.) Howardites are regolith breccias made up of eucrite, diogenite and some carbonaceous chondrite material. They are believed to have formed on the surface of Vesta from impact ejecta which was buried by later impact debris and lithified. There are no known terrestrial equivalents to this type of rock. Basaltic eucrites are rocks from the crust of Vesta that are composed mainly of Ca-poor pyroxene, pigeonite and Ca-rich plagioclase. Cumulate eucrites have a similar composition to basaltic eucrites; however, they have oriented crystals and are thought to be intrusive rocks, crystallized in shallow plutons within Vesta's crust. Diogenites are believed to have crystallized in deep plutons within Vesta's crust. They have a much coarser texture than eucrites and are composed mainly of Mg-rich orthopyroxene, plagioclase and olivine. 
Rheasilvia Crater as a Meteorite Source The most prominent feature on the surface of Vesta is an enormous crater near the south pole. The Rheasilvia Crater is about 500 kilometers (300 miles) in diameter. The floor of the crater is about 13 kilometers (8 miles) below the undisturbed surface of Vesta, and its rim, a combination of upturned strata and ejecta, rises between 4 and 12 kilometers (2.5 and 7.5 miles) above the undisturbed surface. This crater is thought to have formed by an enormous impact with another asteroid about one billion years ago. The impact is thought to have launched about 1% of the volume of Vesta as ejecta, exposing multiple layers of the crust in the walls of the crater and possibly exposing some olivine mantle. This impact is thought to have been the source of the HED meteorites found on Earth, which, as noted above, may account for about 5% of all meteorites found on Earth. Meteorites on Moon and Mars Meteorites beyond Earth have been found by NASA space missions. At least three lunar-resident meteorites have been found by NASA moon landings. In addition, trace element evidence of extralunar materials has been found in lunar regolith samples. NASA's Mars Rovers have encountered and photographed several impressive meteorites on the surface of Mars. Contributor: Hobart King (Image caption: Vesta is one of the largest asteroids in the solar system. It is about 500 kilometers (300 miles) across and comprises about 9% of the mass of the asteroid belt. NASA's Dawn spacecraft orbited Vesta for about one year between July 2011 and June 2012, collecting data about the mineralogy, chemistry and isotopic composition of the asteroid. This image views the south polar area of Vesta, showing the Rheasilvia Crater, which is about 500 kilometers (300 miles) across. Image by NASA.) (Image caption: Color topographic map of Vesta viewing the south polar area. Deep blue areas are topographic lows; topographic highs are red through pink to white. This view shows the giant Rheasilvia Crater in the southern hemisphere with a high central peak. Image by NASA.) Meteorwritings: a series of articles about meteorites authored by Geoffrey Notkin of Aerolite Meteorites and published by Geology.com in 2008 through 2010. Lunar Meteorites: Department of Earth and Planetary Sciences, Washington University in St. Louis, accessed May 2012. Martian Meteorites: International Meteorite Collectors Association, accessed May 2012. Dawn's Targets -- Vesta and Ceres: article in the Dawn Missions section of the NASA website, accessed May 2012. Extralunar Materials in Lunar Regolith: a white paper submitted for the NRC Decadal Survey by Marc Fries, John Armstrong, James Ashley, Luther Beegle, Timothy Jull and Glenn Sellar, Lunar and Planetary Institute, accessed May 2012.
The Melanesia and Australia Region The islands of Melanesia have been inhabited for at least 3,000 years, but the first western contact was with sailors from Spain and Portugal who landed on New Guinea around 1526 CE. The Solomon Islands were discovered by westerners in 1568, when the region's first historical eruption was recorded on Savo. The New Hebrides (now Vanuatu) were discovered by Spaniards in 1606, a year after the Dutch sighted northernmost Australia. But it was not until Cook's historic voyage of 1770 that Australia's east coast was discovered, and its substantial settling did not begin until 1788. The region's combined land area equals that of California, but many areas are sparsely settled and its population is only slightly greater than that of the cities of Los Angeles and San Diego. In 1884, Germany took possession of the northern part of New Guinea, and 3 days later Britain declared the southern section a protectorate, followed by outright annexation in 1888. In 1906 this British territory was transferred to newly independent Australia, which also took control of the northern portion of New Guinea during World War I (WW-I). With the exception of Japanese occupation in 1942-45, this situation prevailed until self-government was declared in 1973; full independence came to Papua New Guinea (PNG) two years later. The Solomon Islands had only sporadic contact with the west until Britain established a protectorate in the 1890s; the islands gained independence in 1978. Captain Cook extensively explored the southern islands in 1774, naming them the New Hebrides. Both France and Britain formed trading posts and missions in the last century, formalizing an Anglo-French Condominium in 1906, but the islands remained isolated despite considerable attention during WW-II. The Republic of Vanuatu was declared in 1980. South of New Britain lies an oceanic trench that parallels its arcuate coast. Nearing the Solomons, the trench swings SE'ly, then down along the Vanuatu chain before turning east again and ending below Hunter Island. This trench system marks the subduction of oceanic crust–the Solomon and Coral Seas–moving N, NE, and E under the volcanic islands formed by this process. Tectonic complications in the form of two short oceanic spreading centers affect nearby volcanoes: one extends from SE New Guinea eastward to Kavachi, and the other runs broadly east-west below the Admiralty Islands at the north end of the region. Of all historically documented eruptions now known from Melanesia, three-fourths have been recorded in the past century. Melanesia almost matches the Atlantic Ocean as the region with the highest proportion of its eruptions being submarine, and it has nearly a quarter of the world's documented island-building eruptions, many of these from Kavachi volcano. It also matches Indonesia as the region with the highest number of tsunami-producing eruptions; a tsunami accompanying the collapse of Ritter Island volcano in 1888 swept the coasts of New Guinea and New Britain, causing about 700 fatalities. The explosive character of volcanism in this caldera-rich region places Melanesia at the top of the list of documented Holocene caldera-forming eruptions. One of these calderas formed the magnificent natural harbor of Rabaul during a major eruption and collapse in the 6th century. Rabaul served as the territorial capital from 1910 through 1941, and was the site of a 1937 eruption that killed 441 people. 
This event led to the founding, in the same year, of one of the world's pre-eminent volcano centers, the Rabaul Volcanological Observatory (RVO), operated by the Geological Survey of Papua New Guinea. RVO covers all the volcanoes of PNG, and its work has been particularly valuable in major eruptions such as Lamington in 1951, only a year after RVO resumed operations following WW-II. A major eruption in 1994 destroyed much of Rabaul town and prompted moving the capital city of the province of East New Britain to Kokopo, 20 km away. Australia overwhelms the island nations of the Melanesia region in size, but we list only one Holocene volcano. This 'volcano' is actually one of Earth's largest volcanic fields, called the Newer Volcanics Province, which covers a broad 15,000 km2 area of SE Australia with nearly 400 small shield volcanoes and explosive vents of Tertiary-to-Holocene age. List of Holocene volcanoes in the Melanesia and Australia region.
Of course, many people in Scotland and Wales wanted to keep their ancient language and culture. Many Welsh people, and many more Scottish people, suffered as the English language culture became dominant. However, this was all nothing compared to the suffering of the Irish. English was first introduced to Ireland by the Normans, who settled ordinary English speakers (rather than French speakers) in the area around Dublin known as the Pale. By the time of the Tudors, however, these English-speaking areas had virtually disappeared. Henry VIII proclaimed himself king of Ireland in 1541 and tried to introduce the Reformation to the Irish. Ireland remained Catholic, but the conquest of Ireland was finally completed under Elizabeth I and James I. Both the Tudors and the Stuarts saw the use of the Irish language as a threat to their power, and they encouraged the use of English. In the sixteenth and seventeenth centuries, the English followed a policy of simply taking land from the Irish and settling English speakers, many of whom were from Scotland. These areas were called the Plantations. English-speaking Protestants were rewarded; Catholic speakers of Irish Gaelic were persecuted and punished. As the united kingdom of Scotland and England became more strongly Protestant, Catholics had fewer and fewer rights. In the seventeenth century, the British virtually completed the process of taking land from Irish Catholics in wars; about one third of the Irish population either died or left the country. From the fifteenth to the eighteenth century, thousands of Irish political prisoners were transported by the British and sold into forced labor, an astonishing act of dehumanization of one's neighbors. Most of these ended up in the Caribbean. Those who stayed at home were very often no better off, however; they were governed by terror. The Protestant English-speaking people who had power in Ireland were often irresponsible and incompetent and showed no concern for the Irish Catholics; the Irish were denied political power and basic human rights. In the potato famine of 1740-1741, a substantial share of the Irish population died (some estimates put the death toll at a fifth of the population or more). In the second potato famine of 1845-1852, about one million people died and about the same number emigrated. Ireland was producing plenty of food at the time, but under British rule much of it was exported rather than used to feed the starving. Ireland became an English-speaking country, but only after many had been killed or forced to leave. Little wonder that the Irish fought so hard for independence, from the Easter Rising of 1916 to the establishment of an independent Irish state in 1922, after a long struggle against tyranny.
Happy Independence Day and July 4th, America! On July 4, 1776, the Second Continental Congress adopted Thomas Jefferson's draft of the Declaration of Independence. On that day, the 13 colonies unanimously declared their independence from Great Britain. The Declaration of Independence was signed on August 2, 1776, at Independence Hall in Philadelphia. These historic events led to the formation of the United States and to July 4th becoming a federal holiday, also known as Independence Day. Pictured above is a parchment souvenir copy of the Declaration of Independence. The original Declaration of Independence, the Constitution and the Bill of Rights are on display in the Rotunda of the National Archives building in Washington, DC. Sources: National Archives, History.com
As we mentioned previously, the Egyptians had a lot of trouble with their teeth, in large part because their bread had grit and sand in it, which wore out their enamel. While they didn't have dentistry, they did make some effort to keep their teeth clean. Archaeologists have found toothpicks buried alongside mummies, apparently placed there so that they could clean food debris from between their teeth in the afterlife. Along with the Babylonians, they're also credited with inventing the first toothbrushes, which were frayed ends of wooden twigs. But the Egyptians also contributed an innovation to dental hygiene, in the form of toothpaste. Early ingredients included the powder of ox hooves, ashes, burnt eggshells and pumice, which probably made for a less-than-refreshing morning tooth-care ritual [source: Colgate.com]. Archaeologists recently found what appears to be a more advanced toothpaste recipe and how-to-brush guide written on papyrus that dates back to the Roman occupation in the fourth century A.D. The unknown author explains how to mix precise amounts of rock salt, mint, dried iris flower and grains of pepper to form a "powder for white and perfect teeth" [source: Zoech].
Machining is any of various processes in which a piece of raw material is cut into a desired final shape and size by a controlled material-removal process. The many processes that have this common theme, controlled material removal, are today collectively known as subtractive manufacturing, in distinction from processes of controlled material addition, which are known as additive manufacturing. Exactly what the "controlled" part of the definition implies can vary, but it almost always implies the use of machine tools (in addition to just power tools and hand tools). The precise meaning of the term machining has evolved over the past one and a half centuries as technology has advanced. In the 18th century, the word machinist simply meant a person who built or repaired machines. This person's work was done mostly by hand, using processes such as the carving of wood and the hand-forging and hand-filing of metal. At the time, millwrights and builders of new kinds of engines (meaning, more or less, machines of any kind), such as James Watt or John Wilkinson, would fit the definition. The noun machine tool and the verb to machine (machined, machining) did not yet exist. Around the middle of the 19th century, the latter words were coined as the concepts that they described evolved into widespread existence. Therefore, during the Machine Age, machining referred to (what we today might call) the "traditional" machining processes, such as turning, boring, drilling, milling, broaching, sawing, shaping, planing, reaming, and tapping. In these "traditional" or "conventional" machining processes, machine tools, such as lathes, milling machines, drill presses, or others, are used with a sharp cutting tool to remove material to achieve a desired geometry. Since the advent of new technologies such as electrical discharge machining, electrochemical machining, electron beam machining, photochemical machining, and ultrasonic machining, the retronym "conventional machining" can be used to differentiate those classic technologies from the newer ones. In current usage, the term "machining" without qualification usually implies the traditional machining processes. Machining is a part of the manufacture of many metal products, but it can also be used on materials such as wood, plastic, ceramic, and composites. A person who specializes in machining is called a machinist. A room, building, or company where machining is done is called a machine shop. Machining can be a business, a hobby, or both. Much of modern day machining is carried out by computer numerical control (CNC), in which computers are used to control the movement and operation of the mills, lathes, and other cutting machines. The three principal machining processes are classified as turning, drilling and milling. Other operations falling into miscellaneous categories include shaping, planing, boring, broaching and sawing. - Turning operations are operations that rotate the workpiece as the primary method of moving metal against the cutting tool. Lathes are the principal machine tool used in turning. - Milling operations are operations in which the cutting tool rotates to bring cutting edges to bear against the workpiece. Milling machines are the principal machine tool used in milling. - Drilling operations are operations in which holes are produced or refined by bringing a rotating cutter with cutting edges at the lower extremity into contact with the workpiece. Drilling operations are done primarily in drill presses but sometimes on lathes or mills. 
- Miscellaneous operations are operations that strictly speaking may not be machining operations, in that they may not be swarf-producing operations, but these operations are performed at a typical machine tool. Burnishing is an example of a miscellaneous operation. Burnishing produces no swarf but can be performed at a lathe, mill, or drill press. An unfinished workpiece requiring machining will need to have some material cut away to create a finished product. A finished product would be a workpiece that meets the specifications set out for that workpiece by engineering drawings or blueprints. For example, a workpiece may be required to have a specific outside diameter. A lathe is a machine tool that can be used to create that diameter by rotating a metal workpiece, so that a cutting tool can cut metal away, creating a smooth, round surface matching the required diameter and surface finish. A drill can be used to remove metal in the shape of a cylindrical hole. Other tools that may be used for various types of metal removal are milling machines, saws, and grinding machines. Many of these same techniques are used in woodworking. As a commercial venture, machining is generally performed in a machine shop, which consists of one or more workrooms containing major machine tools. Although a machine shop can be a stand-alone operation, many businesses maintain internal machine shops which support specialized needs of the business. Machining requires attention to many details for a workpiece to meet the specifications set out in the engineering drawings or blueprints. Besides the obvious problems related to correct dimensions, there is the problem of achieving the correct finish or surface smoothness on the workpiece. An inferior finish on the machined surface of a workpiece may be caused by incorrect clamping, a dull tool, or inappropriate presentation of the tool. Frequently, this poor surface finish, known as chatter, is evident as an undulating or irregular finish, with the appearance of waves on the machined surfaces of the workpiece. Overview of machining technology Machining is any process in which a cutting tool is used to remove small chips of material from the workpiece (the workpiece is often called the "work"). To perform the operation, relative motion is required between the tool and the work. This relative motion is achieved in most machining operations by means of a primary motion, called "cutting speed", and a secondary motion, called "feed". The shape of the tool and its penetration into the work surface, combined with these motions, produce the desired shape of the resulting work surface. Types of machining operation There are many kinds of machining operations, each of which is capable of generating a certain part geometry and surface texture. In turning, a cutting tool with a single cutting edge is used to remove material from a rotating workpiece to generate a cylindrical shape. The primary motion is provided by rotating the workpiece, and the feed motion is achieved by moving the cutting tool slowly in a direction parallel to the axis of rotation of the workpiece. Drilling is used to create a round hole. It is accomplished by a rotating tool that typically has two or four helical cutting edges. The tool is fed in a direction parallel to its axis of rotation into the workpiece to form the round hole. In boring, a tool with a single bent pointed tip is advanced into a roughly made hole in a spinning workpiece to slightly enlarge the hole and improve its accuracy. 
It is a fine finishing operation used in the final stages of product manufacture. In milling, a rotating tool with multiple cutting edges is moved slowly relative to the material to generate a plane or straight surface. The direction of the feed motion is perpendicular to the tool's axis of rotation. The speed motion is provided by the rotating milling cutter. The two basic forms of milling are: - Peripheral milling - Face milling. Other conventional machining operations include shaping, planing, broaching and sawing. Also, grinding and similar abrasive operations are often included within the category of machining. The cutting tool A cutting tool has one or more sharp cutting edges and is made of a material that is harder than the work material. The cutting edge serves to separate the chip from the parent work material. Connected to the cutting edge are the two surfaces of the tool: - The rake face; and - The flank. The rake face, which directs the flow of the newly formed chip, is oriented at a certain angle, called the rake angle "α", measured relative to a plane perpendicular to the work surface. The rake angle can be positive or negative. The flank of the tool provides a clearance between the tool and the newly formed work surface, thus protecting the surface from abrasion, which would degrade the finish. This angle between the work surface and the flank surface is called the relief angle. There are two basic types of cutting tools: - Single-point tools; and - Multiple-cutting-edge tools. A single-point tool has one cutting edge and is used for turning, boring and planing. During machining, the point of the tool penetrates below the original work surface of the workpart. The point is sometimes rounded to a certain radius, called the nose radius. Multiple-cutting-edge tools have more than one cutting edge and usually achieve their motion relative to the workpart by rotating. Drilling and milling use rotating multiple-cutting-edge tools. Although the shapes of these tools are different from a single-point tool, many elements of tool geometry are similar. Relative motion is required between the tool and work to perform a machining operation. The primary motion is accomplished at a certain cutting speed. In addition, the tool must be moved laterally across the work. This is a much slower motion, called the feed. The remaining dimension of the cut is the penetration of the cutting tool below the original work surface, called the depth of cut. Collectively, speed, feed, and depth of cut are called the cutting conditions. They form the three dimensions of the machining process, and for certain operations their product can be used to obtain the material removal rate for the process: MRR = v × f × d, where MRR is the material removal rate in mm³/s (in³/s), v is the cutting speed in m/s (in/min), f is the feed in mm (in), and d is the depth of cut in mm (in). Note: all units must be converted to the corresponding decimal (or USCU) units. A short worked example follows below. Stages in metal cutting Machining operations usually divide into two categories, distinguished by purpose and cutting conditions: - Roughing cuts, and - Finishing cuts. Roughing cuts are used to remove large amounts of material from the starting workpart as rapidly as possible, i.e. with a large material removal rate (MRR), in order to produce a shape close to the desired form, but leaving some material on the piece for a subsequent finishing operation. Finishing cuts are used to complete the part and achieve the final dimensions, tolerances, and surface finish. 
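As a small illustration of how the cutting conditions combine, here is a minimal sketch in Python (not part of the original article; the function name and example values are illustrative assumptions) that applies the MRR = v × f × d relationship above, first converting the cutting speed from metres per second to millimetres per second so that all three factors are expressed in millimetres:
    def material_removal_rate(cutting_speed_m_per_s, feed_mm, depth_of_cut_mm):
        # Hypothetical helper, not from the article: convert the cutting speed
        # from m/s to mm/s so that all lengths are expressed in millimetres.
        cutting_speed_mm_per_s = cutting_speed_m_per_s * 1000.0
        # MRR = v * f * d, giving cubic millimetres of material removed per second.
        return cutting_speed_mm_per_s * feed_mm * depth_of_cut_mm

    # Example: a roughing cut at 2 m/s cutting speed, 0.5 mm feed, 3 mm depth of cut.
    print(material_removal_rate(2.0, 0.5, 3.0))  # 3000.0 mm^3 per second
Any consistent unit system works equally well; the only requirement, as the note above says, is that speed, feed, and depth of cut be converted to the same length units before multiplying.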
In production machining jobs, one or more roughing cuts are usually performed on the work, followed by one or two finishing cuts. Roughing operations are done at high feeds and depths – feeds of 0.4–1.25 mm/rev (0.015–0.050 in/rev) and depths of 2.5–20 mm (0.100–0.750 in) are typical, but actual values depend on the workpiece materials. Finishing operations are carried out at low feeds and depths – feeds of 0.0125–0.04 mm/rev (0.0005–0.0015 in/rev) and depths of 0.75–2.0 mm (0.030–0.075 in) are typical. Cutting speeds are lower in roughing than in finishing. A cutting fluid is often applied to the machining operation to cool and lubricate the cutting tool. Determining whether a cutting fluid should be used, and, if so, choosing the proper cutting fluid, is usually included within the scope of cutting conditions. Other forms of metal cutting are becoming increasingly popular; an example is water jet cutting, which uses water pressurized in excess of 620 MPa (90,000 psi) to cut metal to a finished edge. Because it is a cold-cutting process, it avoids the heat damage associated with laser and plasma cutting. Relationship of subtractive and additive techniques With the recent proliferation of additive manufacturing technologies, conventional machining has been retronymously classified, in thought and language, as a subtractive manufacturing method. In narrow contexts, additive and subtractive methods may compete with each other. In the broad context of entire industries, their relationship is complementary. Each method has its own advantages over the other. While additive manufacturing methods can produce very intricate prototype designs impossible to replicate by machining, strength and material selection may be limited. References: Albert, Mark (2011), "Subtractive plus additive equals more than: subtractive and additive processes can be combined to develop innovative manufacturing methods superior to conventional methods", Modern Machine Shop 83 (9): 14. Groover, Mikell P. (2007), "Theory of Metal Machining", Fundamentals of Modern Manufacturing, 3rd ed., John Wiley & Sons, pp. 491–504, ISBN 0-471-74485-9. Oberg, Erik; Jones, Franklin D.; McCauley, Christopher J.; Heald, Ricardo M. (2004), Machinery's Handbook, 27th ed., Industrial Press, ISBN 978-0-8311-2700-8. Kibbe, R.R.; Neely, J.E.; Meyer, R.O.; White, W.T. (1999), Machine Tool Practices, 6th ed., Prentice Hall, ISBN 0-13-270232-0.
Relative clauses are clauses starting with the relative pronouns who*, that, which, whose, where, when. They are most often used to define or identify the noun that precedes them. Here are some examples: * There is a relative pronoun whom, which can be used as the object of the relative clause. For example: My science teacher is a person whom I like very much. To many people the word whom now sounds old-fashioned, and it is rarely used in spoken English. Relative pronouns are associated as follows with their preceding noun: When the preceding noun is a person, the relative pronouns are who(m)/that and whose. Examples: Do you know the girl who ..; He was a man that ..; An orphan is a child whose parents .. When the preceding noun is a thing, the relative pronouns are which†/that and whose. Examples: Do you have a computer which ..; The oak is a tree that ..; This is a book whose author .. Note 1: The relative pronoun whose is used in place of the possessive pronoun. It must be followed by a noun. Example: There's a boy in grade 8 whose father is a professional tennis player. (There's a boy in grade 8. His father is a professional tennis player.) Note 2: The relative pronouns where and when are used with place and time nouns. Examples: FIS is a school where children from more than 50 countries are educated. 2001 was the year when terrorists attacked the Twin Towers in New York. Some relative clauses are not used to define or identify the preceding noun but to give extra information about it. Here are some examples: Note 1: Relative clauses which give extra information, as in the example sentences above, must be separated off by commas. Note 2: The relative pronoun that cannot be used to introduce an extra-information (non-defining) clause about a person. Wrong: Neil Armstrong, that was born in 1930, was the first man to stand on the moon. Correct: Neil Armstrong, who was born in 1930, was the first man to stand on the moon. There are two common occasions, particularly in spoken English, when the relative pronoun is omitted: 1. When the pronoun is the object of the relative clause. In the following sentences the pronoun that can be left out is enclosed in (brackets): Note: You cannot omit the relative pronoun a.) if it starts a non-defining relative clause, or, b.) if it is the subject of a defining relative clause. For example, who is necessary in the following sentence: What's the name of the girl who won the tennis tournament? 2. When the relative clause contains a present or past participle and the auxiliary verb to be. In such cases both relative pronoun and auxiliary can be left out: † Some native speakers, particularly those from the USA, consider it a mistake to use which in a defining/restrictive relative clause. Advanced learners of English can read more about this on Language Log.
What Do Animal Viruses Have to Do with Human Health? (Mailman) Source: Mailman School of Public Health Simon Anthony has discovered viruses in dolphins, seals, and flying foxes. He has even had the chance to name new viruses; his most recent discovery, found in seals, he dubbed phovirus. But what do animal infections have to do with human health? The seal discovery offers a clue: phovirus closely resembles hepatitis A, a virus that infects an estimated 1.4 million people worldwide each year. Anthony, assistant professor of Epidemiology at the Mailman School, explains that this genetic similarity could well be the result of a phenomenon called zoonosis, in which a virus jumps from one species to another. Outbreaks from HIV to Ebola are believed to have emerged from wildlife. While it is not yet clear whether phovirus jumped from seals to humans or the other way around, Anthony believes that studying zoonoses can help predict the next outbreak, or at least reduce the risk that new diseases emerge. Studying animal infections has never been more important. According to Anthony, who is based at the School's Center for Infection and Immunity and travels to remote areas from Brazil to Bangladesh, spillover events are on the rise, in part because rapid development in forested areas multiplies encounters between humans and wildlife. What's more, today's spillovers are much more likely to become pandemics: with air travel, an emerging infection can spread to the other side of the world within a single day. There is a lot to learn about viruses. "We don't know where they are; we don't know what hosts they live in; we don't even know how many there are," says Anthony. To start to fill in this information, he is taking a viral census of sorts. So far, his best estimate is that there are between 300,000 and more than a million such viruses. This information could reveal patterns of viral diversity and distribution, both geographically and within the host animals that carry them. According to Anthony, this "will get a step closer to predicting risk."
Seabirds and Marine Taking action for the world's most threatened group of birds A recent major review in BirdLife's journal Bird Conservation International confirmed that the world's seabirds are more threatened than any other group of birds. Of 346 species, 101 (29%) are globally threatened and a further 10% Near Threatened, while nearly half are known or suspected to be experiencing population declines. The albatross family is especially imperilled, with 17 of 22 species threatened with extinction. Human activities lie behind these declines. At sea, commercial fisheries have degraded fish stocks and caused the deaths of innumerable seabirds through accidental bycatch, while on land the introduction of invasive species such as rats and cats has killed off many breeding colonies. Every year longline fishing fleets set about three billion hooks, killing an estimated 300,000 seabirds, of which 100,000 are albatrosses. The slaughter of seabirds takes place while the hooks are still visible near the sea's surface. Foraging birds grab the bait and are hooked, dragged under, and drowned. Research by BirdLife Partners has shown that significant numbers of seabirds are also killed in trawling and gill-net fisheries, particularly around New Zealand, southern Africa and South America. The situation would be even worse without the actions and lobbying work of the BirdLife Partnership. These actions have included changing fishing methods to make them less hazardous to seabirds, and protecting nesting sites, especially by eradicating invasive species. A network of BirdLife Partners is influencing global and regional policies affecting seabirds. BirdLife Partners have also been engaged in mapping marine Important Bird Areas around coasts, in territorial waters and on the high seas, and the BirdLife Partnership is working with national governments and international bodies to create a network of marine Protected Areas. In 2012, BirdLife published the e-Atlas of Marine Important Bird Areas, describing 3,000 sites worldwide. Over 150 marine IBAs have already been recognised in the CBD process to identify Ecologically or Biologically Significant marine Areas (EBSAs), a step on the way to protecting them. Bridging the gaps between knowledge, policy and action The world's oceans are open and dynamic systems that pose few physical barriers to the dispersal and migration of many seabird species. Seabird conservation issues therefore need to be addressed globally, which led BirdLife International to establish its Global Seabird Conservation Programme in 1997. The objectives of the programme are: - To promote new and existing initiatives to reduce the incidental mortality of seabirds by fisheries, particularly that of longlining. - To address seabird conservation issues at a global level, as appropriate, and engage relevant stakeholders regionally and internationally. - To establish and support a network of BirdLife partners and others to influence global and regional policies affecting seabirds. - To eradicate invasive alien species and restore important seabird colonies. Policy and science We manage the world's seabird data In 2004, BirdLife International published Tracking Ocean Wanderers: the global distribution of albatrosses and petrels. This report was the result of a unique collaboration between scientists worldwide, analysing the results of satellite-tracking data to reveal the distribution of albatrosses and petrels across the world's oceans. 
Now accessible online as the Global Procellariiform Tracking Database (GPTD), managed by BirdLife, this rapidly growing resource provides a comprehensive dataset that can be used by conservationists, scientists, governments and Regional Fisheries Management Organisations. Since 2007 BirdLife International has been compiling a database of seabird foraging ranges and ecological preferences in the marine environment. The aim is to provide an authoritative global dataset that can be used to delimit marine IBAs adjacent to major breeding colonies, highlight gaps in our knowledge of foraging behaviour, and help identify key areas for future research. As part of the recently established World Seabird Union, BirdLife is helping build a World Seabird Colony Database which will provide a better understanding of how seabird populations fluctuate over time and space; allow for analysis related to existing and emerging threats such as climate change; assist prioritisation exercises on regional and global scales (such as sites most in need of alien eradication); and help identify future management priorities. The e-Atlas of Marine Important Bird Areas is the result of six years of effort that, to date, has involved around 40 BirdLife Partners, in collaboration with the world's leading seabird scientists, government departments of environment and fisheries, and the secretariats of several international conventions. It provides essential information on more than 3,000 sites worldwide for use by conservation practitioners and policy makers, fisheries, the energy sector, marine pollution management planners, and the insurance industry. Fishing fleets have taken our message on board In 2006, we formed the Albatross Task Force - the world's first international team of skilled, at-sea instructors. Albatross Task Force teams are based in the bycatch 'hotspots' of southern Africa and South America, where albatrosses come into contact with large and diverse longline and trawl fishing fleets. Since its formation, we have seen dramatic reductions in the numbers of albatrosses and other seabirds killed. This is a sure sign that Albatross Task Force members really are getting something practical done to help save albatrosses from extinction.
A sorting algorithm falls into the adaptive sort family if it takes advantage of existing order in its input. It benefits from the presortedness in the input sequence – or a limited amount of disorder for various definitions of measures of disorder – and sorts faster. Adaptive sorting is usually performed by modifying existing sorting algorithms. Comparison-based sorting algorithms have traditionally dealt with achieving an optimal bound of O(n log n) when dealing with time complexity. Adaptive sort takes advantage of the existing order of the input to try to achieve better times, so that the time taken by the algorithm to sort is a smoothly growing function of the size of the sequence and the disorder in the sequence. In other words, the more presorted the input is, the faster it should be sorted. This is an attractive property because nearly sorted sequences are common in practice. Thus, the performance of existing sort algorithms can be improved by taking into account the existing order in the input. Note that most sorting algorithms that do optimally well in the worst case, notably heap sort and merge sort, do not take existing order within their input into account, although this deficiency is easily rectified in the case of merge sort by checking if left.last_item ≤ right.first_item, in which case a merge operation may be replaced by simple concatenation – a modification that is well within the scope of making an algorithm adaptive (a short sketch of this check is given at the end of this entry). A classic example of an adaptive sorting algorithm is straight insertion sort. In this sorting algorithm, we scan the input from left to right, repeatedly finding the position of the current item and inserting it into an array of previously sorted items. In pseudo-code form, the straight insertion sort algorithm could look something like this:
procedure StraightInsertionSort(X, n):
    X[0] := −∞        (sentinel value that stops the inner while loop)
    for j := 2 to n do
        i := j − 1
        t := X[j]
        while t < X[i] do
            X[i + 1] := X[i]
            i := i − 1
        end
        X[i + 1] := t
    end
The performance of this algorithm can be described in terms of the number of inversions in the input: T(n) will be roughly equal to I(A) + (n − 1), where I(A) is the number of inversions. Using this measure of presortedness – relative to the number of inversions – straight insertion sort takes less time the closer its input is to being sorted. Other examples of adaptive sorting algorithms are adaptive heap sort, adaptive merge sort, and splaysort. Dijkstra's smoothsort algorithm is a variation on heap sort that is also considered an adaptive sorting algorithm.
- Hagerup, Torben; Katajainen, Jyrki (2004). Algorithm Theory – SWAT 2004. Berlin Heidelberg: Springer-Verlag. pp. 221–222. ISBN 3-540-22339-8.
- Mehta, Dinesh P.; Sahni, Sartaj (2005). Data Structures and Applications. USA: Chapman & Hall/CRC. pp. 11-8–11-9. ISBN 1-58488-435-5.
- Estivill-Castro, Vladimir; Wood, Derick (December 1992). "A survey of adaptive sorting algorithms". ACM Computing Surveys (New York, NY, USA: ACM) 24 (4): 441–476. doi:10.1145/146370.146381. ISSN 0360-0300. CiteSeerX: 10.1.1.45.8017.
- Petersson, Ola; Moffat, Alistair (1992). "A framework for adaptive sorting". Lecture Notes in Computer Science (Berlin: Springer Berlin/Heidelberg) 621: 422–433. doi:10.1007/3-540-55706-7_38. ISSN 1611-3349. Retrieved February 23, 2009.
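To make the adaptive merge-sort modification mentioned above concrete, here is a minimal sketch in Python (illustrative only; the function name and structure are assumptions, not taken from the references above). It replaces a merge with simple concatenation whenever the last item of the left half is already less than or equal to the first item of the right half:
    def adaptive_merge_sort(items):
        # Sort a list while skipping merge work wherever the input is already in order.
        if len(items) <= 1:
            return list(items)
        mid = len(items) // 2
        left = adaptive_merge_sort(items[:mid])
        right = adaptive_merge_sort(items[mid:])
        # Adaptive check: if the two sorted halves are already in order relative
        # to each other, concatenation can replace the merge entirely.
        if left[-1] <= right[0]:
            return left + right
        # Otherwise fall back to the usual linear-time merge.
        merged = []
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged

    print(adaptive_merge_sort([1, 2, 3, 7, 5, 6]))   # prints [1, 2, 3, 5, 6, 7]
On an input that is already sorted, every recursive call takes the concatenation shortcut, so only about n − 1 element comparisons are performed (the list copying still costs time), which illustrates the benefit from presortedness described above.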
A new study has suggested that rapid eye movement (REM) sleep actively converts waking experiences into lasting memories and abilities in young brains. The Washington State University finding broadens the understanding of children's sleep needs and calls into question the increasing use of REM-disrupting medications such as stimulants and antidepressants. Researcher Marcos Frank said scientists have known that infant animals spend much of their early life in REM sleep, but little was understood about the actual nuts and bolts of REM's ability to change or recombine memories. Providing new insights, Frank and his colleagues documented the effects of sleep on vision development in young animals. The researchers found that brain circuits change in the visual cortex as animals explore the world around them, but that REM sleep is required to make those changes stick. The scientists showed that the changes are locked in by ERK, an enzyme that is activated only during REM sleep. REM sleep acts like the chemical developer in old-fashioned photography, making traces of experience more permanent and focused in the brain, said Frank, adding that experience is fragile: these traces tend to vanish without REM sleep, and the brain basically forgets what it saw. Frank said young brains, including those of human children, go through critical periods of plasticity, or remodeling, when vision, speech, language, motor skills, social skills and other higher cognitive functions are developed. The study suggests that during these periods, REM sleep helps growing brains adjust the strength or number of their neuronal connections to match the input they receive from their environment, he said.
Response cost or negative punishment is another way to make behavior less frequent. It is therefore a form of punishment. It occurs when a stimulus is taken away as a consequence of behavior and the effect is to reduce the frequency of the behavior. The word "negative" in "negative punishment" comes from the fact that a stimulus is removed. What is response cost or negative punishment? How are penalties negative punishment? In general, any time you use the word penalty you are talking about response cost. A speeding ticket is an example of negative punishment. Your money is taken away to reduce the frequency of speeding behavior. This is a form of punishment because it is a consequence that makes the behavior it follows less frequent in the future. Technically it is negative punishment because a stimulus is removed or subtracted as a form of punishment. The alternative label, response cost, is perhaps more intuitive. It labels the fact that a response (such as driving too fast) "costs" you. Extinction and response cost both make a behavior less frequent by taking away something good. The distinction between them is fairly subtle, but here it is. In extinction the reinforcer that maintains a behavior is withheld. This means the behavior has been analyzed and the reinforcer causing the behavior has been identified and taken away. That produces extinction. An example would be extinguishing the bar press operant by turning off the food dispenser in a Skinner Box. How is extinction distinguished from response cost? Why is a speeding ticket response cost rather than extinction? How would you extinguish speeding? Response cost, by contrast, involves any valued stimulus being removed, whether or not it caused the behavior. If you get a speeding ticket, your money (a valued stimulus) is taken away from you. However, the money was probably not the reason you were speeding. Therefore a speeding ticket is categorized as response cost (negative punishment) rather than extinction. How would you extinguish speeding? If a person drove fast for thrills, then to extinguish speeding one would have to eliminate the thrills. This might occur naturally if, for example, a person matured and eventually grew bored with driving over the speed limit. The result would be extinction of speeding behavior in that individual. The usefulness of all these concepts is directly linked to their abstract quality. People have intuitions about what is reinforcing and punishing, and often these intuitions are wrong. By stepping back and analyzing the situation ("Is a stimulus being added or subtracted? Is the behavior getting more frequent or less frequent?") one can categorize the situation and identify a reinforcer or a punisher. How can analysis of operant behavior produce insights? For example, you might know somebody who teases you. You respond in a way that you assume will make the teasing stop, for example, by showing some irritation. But if the teasing gets more intense instead of stopping, then your response functioned as reinforcement. Whatever you did, it must be reinforcement because the frequency of teasing increased after the stimulus was applied. Therefore it is time to try something different. One web site advertised a dramatic new technique for reducing bullying. The author found that protests by the bullied child were ineffective and actually encouraged more bullying, but he said he had discovered a technique that worked much better. 
It consisted of teaching a bullied child to react to a bully with friendliness. The author of the web site was not very familiar with behavioral techniques and reacted with astonishment when I mentioned that, technically, his procedure was punishing the bully. To him, punishment had to hurt! In the anti-bullying technique he described, however, the punishing stimulus was friendly behavior, because it reduced the incidence of bullying. Copyright © 2007 Russ Dewey
Ecosystem, species and genetic dimensions of biodiversity have eroded since widespread settlement of the Great Plains. Conversion of native vegetation in the region followed the precipitation gradient, with the greatest conversion in the eastern tallgrass prairie and eastern mixed-grass types. Areas now dominated by intensive land uses are "hot spots" for exotic birds. However, species of all taxa listed as threatened or endangered are well-distributed across the Great Plains. These species are often associated with special landscape features, such as wetlands, rivers, caves, sandhills and prairie dog towns. In the long run, sustaining biodiversity in the Great Plains, and the goods and services we derive from the plains, will depend on how successfully we can manage to maintain and restore habitat variation and revitalize ecosystem functioning. Public policy and legislation played a significant role in the degradation of native habitats in the region. Both policy and legislation will be needed to reverse the degradation and restore critical ecosystem processes.
Sanskrit (Samskritam; sam, complete; krita, done, i.e., that which is done completely, the perfected, the refined) is the ancient liturgical or ritual language of India. In the Sanskrit language itself the language is called Samskritam. Sanskrit is the oldest extant Indo-European language. It is linguistically related to such European languages as English, French, and German and such Asian languages as Persian. The earliest evidence for Sanskrit is in the ancient Indian texts, the VEDAS, the earliest of which, the RIG VEDA, dates from approximately 1500 B.C.E. The Vedas were received as divine revelation by seers called RISHIS, who recorded them. The Sanskrit of the Vedas is noticeably different from its classical form, as defined authoritatively by the grammarian Panini around 450 B.C.E. After Panini, virtually no changes were accepted into the language. Today Sanskrit is still spoken by pandits (scholars) and those learned in Indian philosophy. (Image caption: An ochre-robed wandering sannyasi in Benares (Varanasi). Photo: Constance A. Jones.) There are several Sanskrit universities today in India, where all classes are conducted in that language. There are a few million Indians who can truly speak Sanskrit today in a population of over a billion or more; none of them speaks Sanskrit only. There are many theories regarding the Sanskrit language; the different philosophical schools and sects in India have developed their own viewpoints. Most of them believe that the Vedas themselves are eternal and always existed; therefore, Sanskrit itself is similarly eternal, rather than an arbitrary language created by humans; it is the "language of the gods" (devavani). When JAINISM and Buddhism began to develop scriptures and liturgies that departed from the Vedic ritual tradition, they made use of the Prakrits, the regional vernacular languages that had begun to develop out of Sanskrit. In that era (c. 800 to 0 B.C.E.), Sanskrit was still the spoken language of the educated classes and the language of Vedic high culture. By the turn of the millennium, however, even Buddhists and Jains began to write their works in Sanskrit, an indication that the cultural force of developing Hinduism had overwhelmed these heterodox traditions at least in that respect. Sanskrit, thus, is the cultural link language of India. It has been used as the language of high culture for nearly 3,000 years. The body of extant writing in the language is vast. The Vedas, which are basically collections of MANTRAS, are accompanied by the BRAHMANAS, the ARANYAKAS, and the classical UPANISHADS. Hundreds of later texts called "Upanishads" exist independently of the Vedas. The Sanskrit epics, the RAMAYANA and the MAHABHARATA, were written somewhat later. The Ramayana is itself about 40,000 verses in length and the Mahabharata over 100,000 verses. Included alongside the epics are the 18 Puranas that tell the tales of the divinities. There are also 18 minor Puranas and hundreds of Sthala-puranas, or local works that tell the tales of local divinities. Other prolific genres emerged over the long history of Sanskrit. There are hundreds of plays, longer poems, and other classical literary forms. There are works on aesthetics, erotics, medicine, philosophy and theology, and logic; there are devotional hymns, dictionaries, works on astronomy and astrology, works on mathematics, ritual, law, architecture, TANTRISM, history, music, sculpture, and painting. Additionally, there is much panegyric literature and many inscriptions. 
Every one of these Sanskrit genres has examples in the Jain tradition as well. All told, there are hundreds of thousands of texts and manuscripts, most of which have not been studied for centuries and are not edited, let alone translated. Sanskrit is written in the DEVANAGARI script, which is made up of 48 to 51 letters, depending on the precise system. The script appears to have been devised during the Gupta era (fourth to sixth centuries C.E.). Most Indian languages rely on Sanskrit-derived vocabulary. Even in a Dravidian language such as Telugu, more than 50 percent of the vocabulary is derived from Sanskrit. At about the time of the arrival of the Muslims in India in the 13th century, Sanskrit learning began to decline. The vital and central role that Sanskrit had played in Indian culture for 3,000 years began to fade, and the vernacular languages began to develop as literary alternatives. (In South India, Tamil has long had a developed literature, still extant, dating to before the Common Era.) Even then Sanskrit did not die out. Many texts continued to be written in the language through the 18th century; in fact, many works are still composed in Sanskrit. On Indian television and radio one can hear Sanskrit newscasts and bulletins. There also are a few Sanskrit newspapers. Further reading: K. C. Aryan, The Little Goddesses (Matrikas) (New Delhi: Rekha, 1980); T. Burrow, The Sanskrit Language (London: Faber, 1973); Jan Gonda, ed., A History of Sanskrit Literature, 10 vols. (Wiesbaden: Otto Harrassowitz, 1975–82); John Grimes, A Concise Dictionary of Indian Philosophy: Sanskrit Terms Defined in English (Albany: State University of New York Press, 1989); Arthur Berriedale Keith, Classical Sanskrit Literature (Calcutta: Y. M. C. A. Publishing House, 1947); ———, A History of Sanskrit Literature (London: Oxford University Press, 1920); Diana Morrison, A Glossary of Sanskrit from the Spiritual Tradition of India (Petaluma, Calif.: Nilgiri Press, 1977); Sheldon Pollock, ed., Literary Cultures in History: Reconstructions from South Asia (Berkeley: University of California Press, 2003); M. N. Srinivas, The Cohesive Role of Sanskritization and Other Essays (Delhi: Oxford University Press, 1989); Judith M. Tyberg, The Language of the Gods: Sanskrit Keys to India's Wisdom (Los Angeles: East-West Cultural Centre, 1970). Encyclopedia of Hinduism. Constance A. Jones and James D. Ryan. 2007.
Forest soils certainly benefit from the addition of plant nutrients. Elements like nitrogen, phosphorus, potassium, calcium, and magnesium are the building blocks of leaves, twigs, trunks, and roots, and they regulate or activate countless physiological processes in the microscopic life of plants – functions like water movement, enzyme activation, and stress signaling and response. No mineral nutrients in the soil below, no living plants above. Some forest stands are naturally flush with nutrients. Plant-available minerals in the soil come from the weathering of rocks, the deposition of airborne particles relocated from somewhere else, and the recycling of decomposed organic matter from dead plants and animals on the site. Their continual cycling between soils and trees is vital to maintaining soil minerals. But not all soils contain sufficient nutrients for healthy tree growth. Some soils are just naturally depauperate, some have been exhausted by erosion or poor management practices, and some have been depleted by repeated harvesting and removal in the form of grass, wool, milk, or logs over many decades. Minerals can also be leached from soil in drainage water. Recently, we’ve learned that some minerals, like calcium, can be leached at accelerated rates by inputs of acid precipitation. Such losses of essential nutrients lead to deficiencies that reduce growth and jeopardize forest health. So can you fertilize a forest? Yes. Fertilization of forest trees – particularly with nitrogen – has been a common practice in intensive plantation silviculture in the Southeast and Northwest since the 1960s. Most fertilizer is applied by aircraft, unless there is adequate spacing between rows of trees, in which case it can be applied with tractor- or skidder-mounted equipment. The vast majority of such applications use dry, pelletized forms of synthetic fertilizers. There have been experimental applications of fertilizer to northeastern forests. For example, in 1999, a 30-acre hardwood stand at the Hubbard Brook Experimental Forest in New Hampshire was amended with over 50 tons of calcium dropped from a helicopter in an attempt to restore what had been leached away by acid precipitation. By following the forest ecosystem’s response over the past 16 years, researchers documented that the added calcium in such conditions stimulated a significant increase in the growth of forest vegetation. While these findings are significant, they do not necessarily indicate that amending forest soil with a helicopter is the best solution to a forest health problem. For starters, it is highly impractical and, unless you’ve got your own aircraft, prohibitively expensive. Moreover, there are many possible reasons beyond fertility why a forest stand might exhibit slow growth, discolored or misshapen foliage, or dieback. Fertilization simply will not fix the limitations of a site that is too wet or too dry, and it cannot overcome destructive logging practices that erode soils or damage tree stems and roots. Similarly, fertilization cannot prevent defoliation by insects (in fact, it might just nourish them). And an overcrowded stand where trees have no room for expansion will likely benefit far more from a good thinning. Fertilization won’t improve the growth of trees already growing on a nutrient-rich site, and if overdone, it can actually have deleterious effects on trees and the greater environment.
Indeed, high soil concentrations of even the most essential nutrients can be toxic to plants and excessive nutrients can run off and pollute nearby waters. Effects on wildlife have not been adequately studied and remain largely unknown. Fertilization may be a workable idea if your forest is a young plantation of southern pines and your sole objective is growing timber as fast as possible, or if your forest is an abandoned surface mine and you are heaven-bent on restoring its vegetation. Otherwise, it’s probably not worth the associated expense, practical difficulties, or environmental risks. If you really want to enhance your forest soil’s productivity, advocate for clean air, retain leaves, twigs, and branches from harvested trees, practice good silviculture and careful logging, and return your raked yard leaves to the woods from whence they came. Michael Snyder, a forester, is commissioner of the Vermont Department of Forests, Parks, and Recreation.
This image layout shows two views of the same baby star from NASA's Spitzer Space Telescope. Spitzer's view shows that this star has a second, identical jet shooting off in the opposite direction of the first. This new false-colored image from NASA's Hubble, Chandra and Spitzer space telescopes shows a giant jet of particles that has been shot out from the vicinity of a type of supermassive black hole called a quasar. NASA's Hubble and Spitzer telescopes combined to capture these shape-shifting galaxies taking on the form of a giant mask. The icy blue eyes are actually the cores of two merging galaxies, called NGC 2207 and IC 2163, and the mask is their spiral arms. NASA's Spitzer, Hubble and Chandra space observatories teamed up to create this multi-wavelength, false-colored view of the M82 galaxy. The lively portrait celebrates Hubble's 'sweet sixteen' birthday. This image demonstrates how data from two of NASA's Great Observatories, the Spitzer and Hubble Space Telescopes, are used to identify one of the most distant galaxies ever seen. This galaxy is named HUDF-JD2. This false-color image from three of NASA's Great Observatories provides one example of a star that died in a fiery supernova blast. Called Cassiopeia A, this supernova remnant is located 10,000 light-years away in the constellation Cassiopeia.
What does it mean to be a peacemaker? Learners identify the key characteristics of a peacemaker. They write in their journals about a time when they were treated unfairly: What happened? How did it make you feel? What did you do about it? Students then read an excerpt of a biography, with modeling of how to use the graphic organizer to come up with key events and ideas for artifacts.
“What a lovely plant!” Growing in wetlands, purple loosestrife (Lythrum salicaria) attracts the attention of passers-by as soon as the first blossoms appear in July. This stout, erect, perennial herb with a well-developed taproot typically grows to a height of two to six feet with a column of small magenta flowers at the end of each stalk. It is, however, an invasive, first brought to America in the early nineteenth century both as a contaminant of European ship ballast and deliberately for medicinal use in the treatment of diarrhea, dysentery, bleeding, wounds, ulcers, and sores. A single plant can produce more than two million seeds annually. Such high seed density overwhelms all local native plants, building up a seed bank of massive proportions. The result is a loss of plant and wildlife diversity. Purple loosestrife grows best in highly organic soil but tolerates a wide variety of soils including clay, sand, muck, and silt. A closer inspection reveals the striking color of its flowers. Within Marblehead, the most prominent display of loosestrife is at Ware Pond. As a pond fills in, loosestrife moves in along with other invasives such as phragmites. Hand removal is recommended for small populations of the plant, ideally before they set seed. The entire rootstock must be removed because regeneration from root fragments is possible. Ongoing experiments have demonstrated that certain loosestrife-eating insects can reduce loosestrife populations to more manageable and less harmful densities.
Changing Our Thinking about Sticking Together The pitch is thrown, the batter connects—crack! For a fraction of a moment, these two objects are stuck together, pulled together by adhesion energy and pushed apart by their own elasticity. This moment of adhesion is studied by the field of engineering known as contact mechanics. By measuring things like force, elasticity, and surface area, baseball statisticians (read: engineers) can describe how the objects deform and connect using the Johnson-Kendall-Roberts (JKR) model of elastic contact, a foundational theory of the field. But Yale researchers led by Eric Dufresne, Associate Professor of Mechanical Engineering & Materials Science, Physics, and Cell Biology, have shown that this model doesn't always hold up. If the ball—in this case a hard, silica microsphere—is small enough and, more importantly, is thrown at a soft enough surface—such as a sticky, flat, silicone substrate—then the adhesion obeys a new set of rules. In these cases, solid surface tension, which opposes adhesion by flattening the surface of soft solids, dominates elasticity, to the point that the silicone substrate adheres to the microsphere in a manner similar to the way cream adheres to a spoon. "Given the current boom of interest in stretchable electronics and biomaterials, we hope this new understanding could lead to making materials with exciting new properties," said first-author Robert W. Style of Yale. The team, which also includes Callen Hyland, Rostislav Boltyanskiy, and John S. Wettlaufer of Yale, publishes their results today in Nature Communications under the title "Surface Tension and Contact with Soft Elastic Solids."
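For readers curious about the classical theory the Yale group tested, the JKR relations can be written down in a few lines. The sketch below is a minimal, illustrative implementation of the textbook JKR formulas for contact radius and pull-off force; it is not code from the study, and the sphere radius, work of adhesion, and modulus values are placeholder assumptions chosen only to show the arithmetic.

```python
import math

def jkr_contact_radius(load_N, radius_m, work_of_adhesion_J_per_m2, reduced_modulus_Pa):
    """Contact radius for a sphere on an elastic half-space (classical JKR model).

    a^3 = (R / K) * [F + 3*pi*w*R + sqrt(6*pi*w*R*F + (3*pi*w*R)^2)],
    where K = (4/3) * E* and E* is the reduced (combined) elastic modulus.
    """
    K = (4.0 / 3.0) * reduced_modulus_Pa
    wR = 3.0 * math.pi * work_of_adhesion_J_per_m2 * radius_m
    a_cubed = (radius_m / K) * (load_N + wR + math.sqrt(2.0 * wR * load_N + wR ** 2))
    return a_cubed ** (1.0 / 3.0)

def jkr_pull_off_force(radius_m, work_of_adhesion_J_per_m2):
    """Force magnitude needed to separate the surfaces: (3/2) * pi * w * R."""
    return 1.5 * math.pi * work_of_adhesion_J_per_m2 * radius_m

# Placeholder inputs (assumed, not from the paper): a 10-micrometre sphere,
# w ~ 0.04 J/m^2, and a very soft substrate with E* ~ 10 kPa.
R, w, E_star = 10e-6, 0.04, 1e4
print(jkr_contact_radius(0.0, R, w, E_star))  # contact radius at zero load, in metres
print(jkr_pull_off_force(R, w))               # pull-off force, in newtons
```

The Yale result is precisely that, for small spheres on very soft substrates, solid surface tension takes over from elasticity and these classical relations stop describing the contact.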
Student Learning Objectives: - Explain how size, structure, and scale relate to surface features - Describe the function of compliant surfaces with regard to adhesion (What happens when a surface of an object is applied to the surface of another object?) At a Glance for Teachers: - Mini-Me Activity (optional) - Visualization and diagramming of surface of gecko foot at various scales Note: Some questions in the Student Journal are underlined as formative assessment checkpoints for you to check students’ understanding of lesson objectives. Vocabulary: Adhesive, Compliant, Lamella, Seta, Spatula, Surface Terrain, Topography Refer to the end of this Teacher Guide for definitions. - PowerPoint for Lesson 4 - Student Journals for Lesson 4 - Computer with LCD or overhead projector - Transparent tape - Pipe cleaners (for Mini-Me optional activity) - Measuring tape (for Mini-Me optional activity) - Modeling clay (for Mini-Me optional activity) Student Journal Page # Teacher Background Information and Pedagogy Slides 1 & 2 Student Journal Page: 4–1 By definition, an adhesive is any material that will be useful in holding two objects together solely by surface contact. For an adhesive to achieve extremely close contact with an object, it must have the properties of a liquid, which makes it capable of coming into intimate contact with the surface. In use, an adhesive must be able to resist any applied force that attempts to break the bond formed between it and the object or surface to which it was applied. In other words, it will need the properties of a solid. Adapted from Johnston, J. (2003). Pressure sensitive adhesive tapes. Pressure Sensitive Tape Council. Northbrook, Illinois, Page 23. 1. Provide each student with a piece of transparent tape. “Observe the transparent tape on a smooth surface, like a desk. Place the tape on the surface and “rub” out as many of the bubbles as possible. Use a pen to circle the bubbles on the tape. Place the actual tape onto the journal page 4–1 with the bubbles marked.” Student Journal Page: 4–1 2. Based on student observations and this text, ask students questions similar to these below. Solicit from the students their ideas about the properties of the tape. Draw out the properties of solids and liquids and explanations for what makes the tape work. “Would you consider the tape stuck to a surface to be an adhesive? Why or why not?” “Work in your small groups to describe how transparent tape has the properties of both a liquid and a solid.” 3. Ask the students the following: “Recall the activity we did with the shoes. What forces exist between each pair of surfaces?” “Tape and smooth surface” “Shoe and floor” Students might say “adhesive force” for the tape and “gravity” for the shoe. “Which surface to surface interaction do you think is most similar to a gecko climbing a wall? Explain your answer.” Allow students to explain their response. 4. Note to teacher: For image 4.3, point out that two compliant surfaces would come into even closer contact than one compliant surface with a hard surface. 5. The Gecko Up-Close Activity: A gecko can “stick” or adhere to just about any surface from a single toe. Ask the students to make predictions. “We are going to be making some close-up observations of the surface of the gecko’s foot. Before we take a closer look, predict what you think the surface of the gecko’s foot looks like at the nanoscale level.” Solicit student predictions without comment. “Let’s take a closer look at the foot of a gecko. 
Each of the following images takes a progressively closer look at the foot of a gecko.” Student Journal Pages: 4–2 4–3 4–4 4–5 Teacher Background Information: The following description of the anatomy of a gecko’s toe is adapted from “The Life of Reptiles Volume I” by Angus Bellairs: Each toe pad is covered on its under surface by rows of wide scales called lamellae, which overlap each other at their edges. Each lamella is covered by fine projecting hairs or setae about in length. The setae branch and sub-branch several times, the final twigs ending in a pair of minute flattened tips called spatulas. It has been estimated that the total number of setae on all the lamellae of all the toes of the four feet of a gecko is about one million. Each seta carries between and spatulas on its branches. Optional Hands-On Activity: Use the Mini-Me activity located at the end of this teacher guide (courtesy of the University of Wisconsin, Madison) to provide students with the experience of picturing themselves much smaller than what they are and then sculpting an image that symbolizes their understanding of the invisible world. Once students have built their mini-me, have them proceed to the student journal page 4-2. 6. Working in pairs, students should respond to the journal prompts and visualize what it would be like to shrink down to these scales and interact with the various structures. Tell them to be creative with their drawings, but also to be as accurate as they can when they “enter” this new world. Provide an example of the first sketch on the board. Students may describe the “view” as appearing like hills and valleys. Below is a sample student drawing depicting the centimeter scale from the field test. Optional: You may want to add some discussion about what makes a good scientific drawing. 7. Use the slides at the end of the PowerPoint of this lesson to illustrate a field test student’s drawing for each magnification. As you introduce the four images on this slide, you may wish to use the following descriptions and explanations: Image (a): “The toes are lined with deep ridges.” Image (b): “At the next higher magnification, one can see that the surface looks like a rug. These yarn-like projections are called “setae.” Each seta is times thinner than a single hair on your head. The very end of each seta is frayed into even tinier projections.” Image (c): “At the next higher magnification, one can see that the tiny projections are actually flattened on the end.” Image (d): “These tiny objects are called “spatulas” because of their shape. Each spatula is about thick. This is the upper limit of the range of what is considered nanoscale science. Nanoscale science is the study of objects in the range of .” “It has been estimated that a gecko has a total of about one million setae on all its feet.” 8. Ask the students: “What is the significance that each seta contains between and spatulas?” Students should conclude that with one million setae, each with spatulas, there is a lot of potential for surface contact between the surface and the gecko. “As the scale decreased, what did you find out about the structure of the gecko’s toe?” As the scale decreased, more and smaller structures became evident in the images. 9. As a culminating discussion, ask students to respond to the questions in “Making Connections.” “Let’s review briefly your understanding.” 1. “Describe one or two ideas that you learned during this lesson.” 2. “Which range do gecko setae fit into?” 3. 
“Which force would make best use of these many points of contact?” 4. “How might the gecko’s foot structure help the gecko climb a wall or ceiling?” Make sure to talk with the students about the structure of the gecko’s foot being compliant with a surface. 10. The end of each lesson contains a flow chart that provides an opportunity to show students the “big picture” and where they are in the lesson sequence. The following color code is used: Yellow: Past Lessons; Blue: Current Lesson; Green: Next Lesson; White: Future Lessons.

Appendix: NanoLeap Physical Science Vocabulary
- Adhesive: Something that tends to remain in association or attachment
- Compliant: Soft and able to conform to the surface of another object; yielding to physical pressure
- Lamella (plural lamellae): Each gecko toe pad is covered on its under surface by rows of wide scales called lamellae
- Seta (plural setae): Each gecko lamella is covered by fine projecting hairs or setae about in length
- Spatula (plural spatulas): The setae branch and sub-branch several times; the final twigs end in a pair of minute flattened tips called spatulas
- Surface Terrain: The physical features of a surface, usually referring to the topography
- Topography: The physical or natural features of an object and their structural relationships; the depths and rises on a surface

Mini-Me Activity
Objectives: To understand the concept of scale; to relate to changes in scale; to learn how to accurately measure length and height.
If the properties of matter are to be truly understood, scientists need to be able to change their perspective to understand scale and size. How do we begin to understand a scale we cannot see, a scale that is characterized by less than one hundred millionth of a meter? First we must be able to see ourselves on a very small scale. We must be able to picture ourselves much smaller than we are and sculpt an image that symbolizes our understanding of the invisible world. Imagine yourself at of your size; for example, if your height is , your new size would be . You will now sculpt a miniature of yourself from clay, making sure to keep your height and the lengths of your arms and legs in proportion to their normal size. You have moved from the meter scale to the centimeter scale. How much more would you need to shrink your sculpture to become the smallest size visible to your eye? You would need to shrink another smaller than your sculpture. What if you were to shrink to nanosize? You would need to shrink the sculpture times. You would then be the smallest size visible with a special microscope called an electron microscope. Think of the advantage you would have in understanding how reactions occur if you could witness them. Now picture the sculpture of yourself as an atom; what would be the normal size of a person? A person would be as tall as the distance across the earth.
Construction of a Mini-Me
Materials: pipe cleaners, Crayola modeling clay, measuring tape.
Each student is given pipe cleaners, a small lump of Crayola modeling clay, and a measuring tape. The students should work in pairs and assist each other in measuring their height and the length of their arm span. Each student should then take two pipe cleaners and twist them to make the body and legs, and use one more pipe cleaner to make the arms. The length of the body should be the student’s height, and the length of the arms of the sculpture should be that of the student’s arm span. The clay should then be shaped around the pipe cleaner body, and a head should also be fashioned from the clay and slid onto the top of the body formed by the pipe cleaners. The sculpture should be allowed to dry and checked for accuracy. 
After completing their mini-me, have students repeat the process by making a scale model of their mini-me. This mini-mini-me will now be of their original size. Ask students how many more times they would need to do this process to make a nano-me ( more times). Investigating Static Forces in Nature: The Mystery of the Gecko Lesson 4: What Do We Learn When We Look More Closely? © 2009 McREL
August 19th heralds Earth Overshoot Day
According to the Global Footprint Network, on August 19th humanity’s footprint will exceed the Earth’s ability to regenerate resources for this year: Earth Overshoot Day. In 1961, most countries’ biocapacity was bigger than their footprints, but ever since the 1970s we have been overstepping our limits. Each year Earth Overshoot Day comes earlier as we deplete our world’s resources. In 2000, Earth Overshoot Day fell in October; 14 years later it comes two months earlier. This year it would take 1.5 Earths to generate the amount of food, timber, and other resources that we need to continue business as usual. Using a conservative estimate, by 2050 it will take 3 Earths to cover our footprint. This overuse is taking a physical toll on our environment, manifested in soil erosion, water scarcity, and desertification. Countries are starting to take action. For instance, the Philippines plans to reduce its footprint through its National Land Use Act, which will protect and manage its natural resources in a sustainable way. Our foot is becoming too big for its only shoe. It is time to take action and make change. Learn more below about how Earth Overshoot Day is calculated. Author: Johanna Bozuwa, EDN Intern
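The Global Footprint Network computes the date by comparing Earth’s annual biocapacity with humanity’s annual footprint: the overshoot day falls at the fraction of the year that the planet can support. A minimal sketch of that arithmetic, using rough placeholder figures (not official data) in which the footprint is roughly 1.5–1.6 times biocapacity:

```python
from datetime import date, timedelta

def overshoot_day(biocapacity_gha: float, footprint_gha: float, year: int) -> date:
    """Earth Overshoot Day = (biocapacity / footprint) * 365, counted from January 1."""
    days_into_year = int((biocapacity_gha / footprint_gha) * 365)
    return date(year, 1, 1) + timedelta(days=days_into_year)

# Placeholder global figures in global hectares (gha), assumed for illustration only.
print(overshoot_day(biocapacity_gha=12.0e9, footprint_gha=19.0e9, year=2014))  # lands in mid-August
```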
(PhysOrg.com) -- Just as electronics revolutionized computing and communications technology, spintronics is touted to follow suit. This relatively new field involves manipulating the flow of a magnetism-related property called 'spin'. In magnons, a spintronic counterpart of electrons, Naoto Nagaosa from the RIKEN Advanced Science Institute in Wako, Japan, and his colleagues have observed an effect first seen with electrons over 130 years ago: the Hall effect. The Hall effect is used in sensitive detectors, so the researchers believe their finding could lead to new applications for magnetic insulators. The Hall effect arises because a charge-carrying particle such as an electron experiences a force perpendicular to its direction of motion as it moves through a magnetic field in a conducting material. The result is a build-up of charges of opposite signs on either side of the material, which creates a measurable electric field. Magnons, however, have no charge, so an analogous effect had never been observed previously. "The Hall effect is one of the most fundamental phenomena in condensed matter physics," explains Nagaosa. "It is important to study to what extent we can apply ideas from conventional electronics to spintronics." Nagaosa, along with Yoshinori Tokura, also from ASI, Yoshinori Onose and co-workers from The University of Tokyo, and Hosho Katsura from the University of California, Santa Barbara, USA, studied the magnetic and thermal properties of the insulating ferromagnet Lu2V2O7 at low temperatures. Rather than the electric field associated with the conventional effect, the Hall effect manifested in this material as a thermal conductivity gradient across the sample (Fig. 1). This difference occurs because the magnons carry heat, rather than charge. The researchers showed that the size of the effect is not proportional to the applied magnetic field, but has a maximum at relatively low fields. This supports the hypothesis that magnons, influenced by the relativistic interaction, are responsible, because the number of magnons is known to be reduced at these low-level magnetic fields. They also observed that the conductivity gradient started to decrease at higher fields. This observation allowed Nagaosa and colleagues to rule out lattice vibrations, or phonons, as another possible underlying cause of the experimental results: a phonon-induced thermal conductivity gradient would be expected to continue to increase with magnetic field. "According to our theoretical prediction, only certain types of crystal structure show this magnon Hall effect," says Nagaosa. "To confirm this theory, we next aim to check that the phenomenon is absent in more conventional structures such as a cubic lattice." More information: Onose, Y., Ideue, T., Katsura, H., Shiomi, Y., Nagaosa, N. & Tokura, Y. Observation of the magnon Hall effect. Science 329, 297–299 (2010). www.sciencemag.org/cgi/content/short/329/5989/297
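For comparison, the conventional charge Hall effect described above reduces to a one-line formula: the transverse (Hall) voltage across a current-carrying conductor is V_H = IB/(nqt). The sketch below is a generic textbook calculation, not a model of the magnon experiment, and the copper-like numbers are illustrative assumptions.

```python
def hall_voltage(current_A, field_T, carrier_density_per_m3, thickness_m, charge_C=1.602e-19):
    """Conventional Hall effect: V_H = I * B / (n * q * t).

    Moving charges are pushed sideways by the magnetic field and pile up on one
    edge of the sample until the resulting transverse electric field balances the push.
    """
    return current_A * field_T / (carrier_density_per_m3 * charge_C * thickness_m)

# Illustrative values: 1 A through a 0.1 mm thick strip in a 1 T field,
# with roughly 8.5e28 conduction electrons per cubic metre (typical of copper).
print(hall_voltage(current_A=1.0, field_T=1.0,
                   carrier_density_per_m3=8.5e28, thickness_m=1e-4))  # about 7e-7 volts
```

In the magnon version reported here, the analogous transverse signal is a heat-flow (thermal conductivity) gradient rather than a voltage, because magnons carry heat, not charge.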
To factor a polynomial means to rewrite the polynomial as a product of simpler polynomials or of polynomials and monomials. Because polynomials may take many different forms, many different techniques are available for factoring them. The first method of factoring is called factoring out the GCF (greatest common factor). Factor 5x + 5y. Since each term in this polynomial involves a factor of 5, 5 is a common factor of the polynomial: 5x + 5y = 5(x + y). Factor 24x³ – 16x² + 8x. Here x is a common factor for all three terms. Also, the numbers 24, –16, and 8 all have the common factors 2, 4, and 8. The greatest common factor is therefore 8x, so 24x³ – 16x² + 8x = 8x(3x² – 2x + 1).
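The same greatest-common-factor step can be checked with a computer algebra system. A quick sketch, assuming the open-source SymPy library is available:

```python
from sympy import symbols, factor

x, y = symbols("x y")

# Factoring out the greatest common factor (GCF)
print(factor(5*x + 5*y))                # 5*(x + y)
print(factor(24*x**3 - 16*x**2 + 8*x))  # 8*x*(3*x**2 - 2*x + 1)
```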
“How Does Learning Happen? Ontario’s Pedagogy for the Early Years (2014)” is the foundation on which we build our practice. There are 4 core values that help us develop quality experiences for children and families: Families matter. Raising children to be responsible, responsive citizens is hard work. Having genuine, open conversations between families and staff makes the job just a little bit easier. We believe that families want to be involved in their child’s day. Work, school, and life commitments often make that challenging. Communication between staff and families is vital to creating a community of mutual respect and trust. OCCC is a teaching facility. We support a variety of post-secondary students in achieving their goals. We also connect with community partners to link families to services. Community partners offer valuable services that complement the programs offered at OCCC. Quality practices in health, safety, and nutrition are the foundation that well-being is built on. Being able to identify their individual needs and then understand how those needs make them feel and function is an important part of children’s growth and healthy development. Building relationships and contributing to a community is a vital part of well-being. Children are natural-born risk takers. Risk in play is an old concept being revisited with new ideas and theories. Being able to assess situations and then take risks safely is an integral part of learning by doing. Children who are comfortable with taking risks often become successful and resilient learners. It is through play that children discover how their world works, how materials work, what happens when the environment changes, and how their own actions and involvement play an important part in the functioning of the larger community. Involved, focused learning is messy. It’s hands-on, and it involves choices, critical thinking, and experimentation. Children need to be immersed in their discoveries. The intensity of their play will make a mess of the environment and of themselves. That’s OK. That’s expected. Learning to communicate, negotiate, understand, problem solve, and work together is a process. Behaviour and emotions are a part of that learning and important to healthy development. Social skills are developed on an individual continuum that reflects the child’s age and stage of development. Early literacy, in all forms, is critical to healthy development and later school success. Positive communication skills and self-regulation provide children with a strong foundation on which to build relationships, create a sense of belonging and general well-being, and contribute to and engage with the world around them. To read more about how we incorporate “How Does Learning Happen? Ontario’s Pedagogy for the Early Years (2014)” into our programming, check out our Parent Handbook.
Author: Natalia Cardozo Durán/ Natalia Méndez Lugo – Description: This English self-study guide helps you learn vocabulary related to bullying and types of bullying. Additionally, this guide helps you give suggestions to people experiencing bullying. The activities in this guide are connected to the reading in Way to Go Student Book 7 Module 2 Unit 3 (page 76). It starts with some vocabulary activities. Then, you must read a text and do some activities about it; then, you will discover how to give suggestions. Finally, you will make an informative flyer giving some suggestions to those experiencing bullying. That flyer will be shared with your teacher and classmates.
Although many Northerners, including Abraham Lincoln, initially hoped to prosecute the war without interfering with slavery as it existed, pressure from slaves who fled to Union lines, abolitionist sentiment in the North, and a deteriorating military situation pushed Lincoln to consider abolishing slavery. In September 1862 Lincoln issued a preliminary Emancipation Proclamation. He signed the final edict on January 1, 1863. In this caricature by Baltimore pro-South Democrat Adalbert Johann Volck, an inebriated Lincoln, surrounded by symbols of Satanism and paintings honoring John Brown and slave rebellions, trod on the Constitution as he drafted the proclamation. Source: V. Blada (A. J. Volck), Sketches from the Civil War in North America, 1861, ‘62, ’63 (1863)—American Social History Project.
The body needs fats for growth and energy. It also uses them to synthesize hormones and other substances needed for the body’s activities. The body may deposit excess fat in blood vessels and within organs, where it can block blood flow and damage organs, often causing serious disorders. Important fats (lipids) found in the blood are cholesterol and triglycerides. Cholesterol is an essential component of cell membranes, of brain and nerve cells, and of bile, which helps the body absorb fats and fat-soluble vitamins. The body uses cholesterol to make vitamin D and various hormones, such as estrogen, testosterone, and cortisol. The body can produce all the cholesterol that it needs, but it also obtains cholesterol from food. Triglycerides, which are contained in fat cells, can be broken down and then used to provide energy for the body’s metabolic processes, including growth. Triglycerides are produced in the intestine and liver from smaller fats called fatty acids. Some types of fatty acids are made by the body, but others must be obtained from food. Fats, such as cholesterol and triglycerides, cannot circulate freely in the blood, because blood is mostly water. To be able to circulate in blood, cholesterol and triglycerides are packaged with proteins and other substances to form particles called lipoproteins. There are different types of lipoproteins. Each type has a different purpose and is broken down and excreted in a slightly different way. Lipoproteins include high-density lipoproteins (HDL), low-density lipoproteins (LDL), and very low density lipoproteins (VLDL). Cholesterol transported by LDL is called LDL cholesterol, and cholesterol transported by HDL is called HDL cholesterol. The body can regulate lipoprotein levels (and therefore lipid levels) by increasing or decreasing the production rate of lipoproteins. The body can also regulate how quickly lipoproteins enter and are removed from the bloodstream. Levels of cholesterol and triglycerides vary considerably from day to day. From one measurement to the next, cholesterol levels can vary by about 10%, and triglyceride levels can vary by up to 25%. Lipid levels may become abnormal because of changes that occur with aging, various disorders (including inherited ones), use of certain drugs, or lifestyle (such as consuming a diet high in saturated fat, being physically inactive, or being overweight).
Complications of abnormal lipid levels
Abnormally high levels of certain lipids (especially cholesterol) can lead to long-term problems, such as atherosclerosis. Generally, a high total cholesterol level (which includes LDL, HDL, and VLDL cholesterol), particularly a high level of LDL (the "bad") cholesterol, increases the risk of atherosclerosis and thus the risk of heart attack or stroke. 
However, not all types of cholesterol increase this risk. A high level of HDL (the "good") cholesterol may decrease risk, and conversely, a low level of HDL cholesterol may increase risk. The effect of triglyceride levels on the risk of heart attack is less clear-cut. But very high levels of triglycerides (higher than 500 milligrams per deciliter of blood [mg/dL], or 5.65 mmol/L) can increase the risk of pancreatitis.
Measuring lipid levels
The fasting lipid profile (sometimes called a lipid panel) is the measurement of the levels of total cholesterol, triglycerides, LDL cholesterol, and HDL cholesterol after a person fasts for 12 hours. Doctors usually do this test every 5 years starting at age 20 as part of assessing whether the person is at risk of coronary artery disease. In children and adolescents, screening with a fasting lipid profile is recommended between the ages of 2 and 8 years if the child has risk factors, such as a family member with severe dyslipidemia or one who developed coronary artery disease at a young age. In children with no risk factors, screening with a non-fasting lipid profile is usually done once before the child reaches puberty (usually between ages 9 and 11) and once more between the ages of 17 and 21.
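The numbers in a lipid panel are related by simple arithmetic. The sketch below shows the widely used Friedewald estimate of LDL cholesterol and the mg/dL-to-mmol/L conversion behind the 500 mg/dL triglyceride figure quoted above; it is an illustration of the arithmetic only, not medical guidance, and the sample values are made up.

```python
def ldl_friedewald(total_chol_mgdl, hdl_mgdl, triglycerides_mgdl):
    """Friedewald estimate (all values in mg/dL): LDL = total cholesterol - HDL - triglycerides / 5.

    The estimate is generally considered unreliable when triglycerides exceed about 400 mg/dL.
    """
    if triglycerides_mgdl > 400:
        raise ValueError("Friedewald estimate is unreliable above ~400 mg/dL triglycerides")
    return total_chol_mgdl - hdl_mgdl - triglycerides_mgdl / 5.0

def triglycerides_mgdl_to_mmol_l(tg_mgdl):
    """Convert a triglyceride value from mg/dL to mmol/L (conversion factor ~0.0113)."""
    return tg_mgdl * 0.0113

print(ldl_friedewald(total_chol_mgdl=200, hdl_mgdl=50, triglycerides_mgdl=150))  # 120.0 mg/dL
print(triglycerides_mgdl_to_mmol_l(500))  # ~5.65 mmol/L, matching the threshold above
```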
By the end of this section, you will be able to: - Describe the study of epistemology. - Explain how the counterexample method works in conceptual analysis. - Explain the difference between a priori and a posteriori knowledge. - Categorize knowledge as either propositional, procedural, or by acquaintance. The word epistemology is derived from the Greek words episteme, meaning “knowledge,” and logos, meaning “explanation” and translated in suffix form (-logia) as “the study of.” Hence, epistemology is the study of knowledge. Epistemology focuses on what knowledge is as well as what types of knowledge there are. Because knowledge is a complex concept, epistemology also includes the study of the possibility of justification, the sources and nature of justification, the sources of beliefs, and the nature of truth. How to Do Epistemology Like other areas within philosophy, epistemology begins with the philosophical method of doubting and asking questions. What if everything we think we know is false? Can we be sure of the truth of our beliefs? What does it even mean for a belief to be true? Philosophers ask questions about the nature and possibility of knowledge and related concepts and then craft possible answers. But because of the nature of philosophical investigation, simply offering answers is never enough. Philosophers also try to identify problems with those answers, formulate possible solutions to those problems, and look for counterarguments. For example, in questioning the possibility of knowledge, philosophers imagine ways the world could be such that our beliefs are false and then try to determine whether we can rule out the possibility that the world really is this way. What if there’s a powerful evil demon who feeds you all your conscious experiences, making you believe you are currently reading a philosophy text when in fact you are not? How could you rule this out? And if you can’t rule it out, what does this say about the concept of knowledge? In answering epistemological questions, theorists utilize arguments. Philosophers also offer counterexamples to assess theories and positions. And many philosophers utilize research to apply epistemological concerns to current issues and other areas of study. These are the tools used in epistemological investigation: arguments, conceptual analysis, counterexamples, and research. Conceptual Analysis and Counterexamples One of the main questions within epistemology pertains to the nature of the concepts of knowledge, justification, and truth. Analyzing what concepts mean is the practice of conceptual analysis. The idea is that we can answer questions like “What is knowledge?” and “What is truth?” by using our grasp of the relevant concepts. When investigating a concept, theorists attempt to identify the essential features of the concept, or its necessary conditions. So, when investigating knowledge, theorists work to identify features that all instances of knowledge share. But researchers are not only interested in isolating the necessary conditions for concepts such as knowledge; they also want to determine what set of conditions, when taken together, always amounts to knowledge—that is, its sufficient conditions. Conceptual analysis is an important element of doing philosophy, particularly epistemology. When doing conceptual analysis, theorists actively endeavor to come up with counterexamples to proposed definitions. A counterexample is a case that illustrates that a statement, definition, or argument is flawed. 
The introductory chapter provides an in-depth exploration of conceptual analysis. Counterexamples are discussed in the chapter on logic and reasoning. Counterexamples to definitions in epistemology usually take the form of hypothetical cases—thought experiments intended to show that a definition includes features that are either not necessary or not sufficient for the concept. If a counterexample works to defeat an analysis, then theorists will amend the analysis, offer a new definition, and start the process over again. The counterexample method is part of the philosophical practice of getting closer to an accurate account of a concept. Understanding the process of conceptual analysis is key to following the debate in epistemological theorizing about knowledge and justification. For example, a theorist could contend that certainty is a necessary component of knowledge: if a person were not completely certain of a belief, then they could not be said to know the belief, even if the belief were true. To argue against this “certainty” theory, another philosopher could offer examples of true beliefs that aren’t quite certain but are nevertheless considered to be knowledge. For example, take my current belief that there’s a bird on a branch outside my office window. I believe this because I can see the bird and I trust my vision. Is it possible that I am wrong? Yes. I could be hallucinating, or the so-called bird may be a decoy (a fake stuffed bird). But let’s grant that there is indeed a real bird on the branch and that “there is a bird on that branch” is true right now. Can I say that I know there is a bird on the branch, given that I believe it, it’s true, and I have good reason to believe it? If yes, then the “certainty” thesis is flawed. Certainty is not necessary to have knowledge. This chapter includes several examples such as this, where a theorist offers an example to undermine a particular account of knowledge or justification. As with all areas of philosophy, epistemology relies on the use of argumentation. As explained in the chapter on logic and reasoning, argumentation involves offering reasons in support of a conclusion. The aforementioned counterexample method is a type of argumentation, the aim of which is to prove that an analysis or definition is flawed. Here is an example of a structured argument: - Testimonial injustice occurs when the opinions of individuals/groups are unfairly ignored or treated as untrustworthy. - If the testimony of women in criminal court cases is less likely to be believed than that of men, then this is unfair. - So, if the testimony of women in criminal court cases is less likely to be believed than that of men, this is a case of testimonial injustice. The above argument links the general concept of testimonial injustice to a specific possible real-world scenario: women being treated as less believable by a jury. If women are considered less believable, then it is problematic. Notice that the above argument does not say that women are in fact considered less believable. To establish this thesis, philosophers can offer further arguments. Often, arguments utilize empirical research. If a theorist can find studies that indicate that women are treated less seriously than men in general, then they can argue that this attitude would extend to the courtroom. Philosophers often search for and utilize research from other areas of study. The research used can be wide-ranging. 
Epistemologists may use research from psychology, sociology, economics, medicine, or criminal justice. In the social and hard sciences, the goal is to accurately describe trends and phenomena. And this is where philosophy differs from the sciences—for epistemology, the goal is not only to describe but also to prescribe. Philosophers can argue that unjustifiably discounting the opinions of groups is bad and to be avoided. Hence, epistemology is a normative discipline. The Normative Nature of Epistemology This chapter began with the observation that knowledge is the goal of many disciplines. If knowledge is a goal, then it is desirable. Humans do not like being proven wrong in their beliefs. Possessing justification in the form of reasons and support for beliefs makes a person less likely to be wrong. Hence, both justification and knowledge are valuable. If knowledge is valuable and there are proper methods of justification that we should follow, then epistemology turns out to be a normative discipline. Normativity is the assumption that certain actions, beliefs, or other mental states are good and ought to be pursued or realized. One way to think of epistemology is that in describing what knowledge, truth, and justification are, it further prescribes the proper way to form beliefs. And we do treat knowledge as valuable and further judge others according to the justification for their beliefs. A Preliminary Look at Knowledge Because the concept of knowledge is so central to epistemological theorizing, it is necessary to briefly discuss knowledge before proceeding. Knowledge enjoys a special status among beliefs and mental states. To say that a person knows something directly implies that the person is not wrong, so knowledge implies truth. But knowledge is more than just truth. Knowledge also implies effort—that the person who has knowledge did more than just form a belief; they somehow earned it. Often, in epistemology, this is understood as justification. These features of knowledge are important to keep in mind as we continue. First, we will look at the different ways of knowing. Ways of Knowing The distinction between a priori knowledge and a posteriori knowledge reveals something important about the possible ways a person can gain knowledge. Most knowledge requires experience in the world, although some knowledge without experience is also possible. A priori knowledge is knowledge that can be gained using reason alone. The acquisition of a priori knowledge does not depend on experience. One way to think of a priori knowledge is that it is logically prior to experience, which does not necessarily mean that it is always prior in time to experience. Knowledge that exists before experience (prior in time) is innate knowledge, or knowledge that one is somehow born with. Theorists disagree over whether innate knowledge exists. But many theorists agree that people can come to know things by merely thinking. For example, one can know that 4 × 2 = 8 without needing to search for outside evidence. A posteriori knowledge is knowledge that can only be gained through experience. Because a posteriori knowledge depends on experience, it is empirical. Something is empirical if it is based on and verifiable through observation and experience, so empirical knowledge is knowledge gained from sense perception. If my belief that there’s a bird on the branch outside my window is knowledge, it would be a posteriori knowledge. 
The difference between a posteriori and a priori knowledge is that the former requires experience and the latter does not. While a priori knowledge does not require experience, this does not mean that it must always be reached using reason alone. A priori knowledge can be learned through experience. Think of mathematical truths. While it is possible to figure out multiplication using thinking alone, many first understand it empirically by memorizing multiplication tables and only later come to understand why the operations work the way they do. Things You Can Know: Types of Knowledge Philosophers classify knowledge not only by source but also by type. Propositional knowledge is knowledge of propositions or statements. A proposition or statement is a declarative sentence with a truth value—that is, a sentence that is either true or false. If one knows a statement, that means that the statement is true. And true statements about the world are usually called facts. Hence, propositional knowledge is best thought of as knowledge of facts. Facts about the world are infinite. It is a fact that the square root of 9 is 3. It is a fact that Earth is round. It is a fact that the author of this chapter is five feet, one inch tall, and it is a fact that Nairobi is the capital of Kenya. Often, philosophers describe propositional knowledge as “knowledge that,” and if you look at the structure of the previous sentences, you can see why. Someone can know that Nairobi is the capital of Kenya, and “Nairobi is the capital of Kenya” is a true proposition. Propositional knowledge can be a priori or a posteriori. Knowledge of our own height is clearly a posteriori because we cannot know this without measuring ourselves. But knowing that 3 is the square root of 9 is a priori, given that it’s possible for a person to reason their way to this belief. Propositional knowledge is the primary focus of traditional epistemology. In the following sections of this chapter, keep in mind that knowledge refers to propositional knowledge. While traditional epistemology focuses on propositional knowledge, other types of knowledge exist. Procedural knowledge is best understood as know-how. Procedural knowledge involves the ability to perform some task successfully. While a person may know that a bicycle stays erect using centrifugal force and forward momentum caused by peddling, and that the forces of friction and air resistance will affect their speed, this does not mean that they know how to ride a bicycle. Having propositional knowledge concerning a task does not guarantee that one has procedural knowledge of that task. Indeed, one could be a physicist who studies the forces involved in keeping a bike upright, and therefore know many facts about bicycles, but still not know how to ride a bike. Knowledge by acquaintance is knowledge gained from direct experience. A person knows something by acquaintance when they are directly aware of that thing. This awareness comes from direct perception using one’s senses. For example, I have knowledge by acquaintance of pain when I am in pain. I am directly aware of the pain, so I cannot be mistaken about the existence of the pain. British philosopher Bertrand Russell (1872–1970) is credited with first articulating a distinction between knowledge by acquaintance and propositional knowledge, which he called knowledge by description (Russell 1910–1911). According to Russell, knowledge by acquaintance is a direct form of knowledge. 
A person has knowledge by acquaintance when they have direct cognitive awareness of it, which is awareness absent of inference. That knowledge by acquaintance is not the product of inference is very important. Inference is a stepwise process of reasoning that moves from one idea to another. When I feel pain, I am acquainted with that pain without thinking to myself, “I am in pain.” No inference is required on my part for me to know of my pain. I am simply aware of it. It is the directness of this knowledge that differentiates it from all other a posteriori knowledge. All knowledge by acquaintance is a posteriori, but not all a posteriori knowledge is knowledge by acquaintance. My awareness of pain is knowledge by acquaintance, yet when I infer that “something is causing me pain,” this belief is propositional. Russell’s distinction between knowledge by acquaintance and propositional knowledge, if accurate, has important implications in epistemology. It shows that inference is used even in cases of beliefs that people think are obvious: ordinary beliefs based on perception. Russell thought that one can only have knowledge by acquaintance of one’s sensations and cannot have direct awareness of the objects that could be the cause of those sensations. This is a significant point. When I see the bird on a branch outside my office window, I am not immediately aware of the bird itself. Rather, I am directly aware of my perceptual experience of the bird—what philosophers call sense data. Sense data are sensations gained from perceptual experience; they are the raw data obtained through the senses (seeing, smelling, feeling, etc.). One’s perceptual experience is of sense data, not of the objects that could be causing that sense data. People infer the existence of external objects that they believe cause their perceptual experiences. Russell’s view implies that people always use reasoning to access the external world. I have knowledge by acquaintance of my perceptual experience of seeing a bird; I then infer ever so quickly (and often unconsciously) that there is a bird on the branch, which is propositional knowledge. Not all philosophers think that experience of the external world is mediated through sense data. Some philosophers contend that people can directly perceive objects in the external world. But Russell’s theory introduces an important possibility in epistemological thinking: that there is a gap between one’s experience of the world and the world itself. This potential gap opens up the possibility for error. The gap between experience and the world is used by some thinkers to argue that knowledge of the external world is impossible. Table 7.1 summarizes the types of knowledge discussed in this section. 
| Type of knowledge | Description | Examples |
| --- | --- | --- |
| Propositional knowledge | Knowledge of propositions or statements; knowledge of facts | Examples are infinite: “I know that…” the Earth is round, two is an even number, lions are carnivores, grass is green, etc. |
| Procedural knowledge | “Know-how”; understanding how to perform some task or procedure | Knowing how to ride a bicycle, do a cartwheel, knit, fix a flat tire, dribble a basketball, plant a tree, etc. |
| Knowledge by acquaintance | Knowledge gained from direct experience | Perception of physical sensations, such as pain, heat, cold, hunger; important to differentiate between the knowledge by acquaintance that is the sensation (e.g., a physical sensation of feeling cold) and related inferences, such as “the air temperature must be dropping,” which is propositional knowledge. |

Philosophers who argue that knowledge of the external world is impossible do so based on the idea that one can never be certain of the truth of one’s external world beliefs. But what does it mean to claim that a belief is true? People are sometimes tempted to believe that truth is relative. A person may say things like “Well, that’s just their truth,” as if something can be true for one person and not for others. Yet for statements and propositions, there is only one truth value. One person can believe that Earth is flat while another can believe it is round, but only one of them is right. People do not each personally get to decide whether a statement is true. Furthermore, just because one has no way of determining whether a statement is true or false does not mean that there is no truth to the matter. For example, you probably don’t quite know how to go about determining the exact number of blades of grass on the White House lawn, but this does not mean that there is no true answer to the question. It is true that there is a specific number of blades of grass at this moment, even if you cannot know what that number is. But what does it mean for a statement to be true? At first, this question may seem silly. The meaning of truth is obvious. True things are correct, factual, and accurate. But to say that something is correct, factual, or accurate is just another way of saying it is true. Factual just means “true.” Creating a noncircular and illuminating account of truth is a difficult task. Nevertheless, philosophers attempt to explain truth. Philosophers often are curious about and question concepts that most people accept as obvious, and truth is no exception. Theories of truth and the debate over them are a rather complicated matter not suitable for an introductory text. Instead, let’s briefly consider two ways of understanding truth in order to gain a general understanding of what truth is. Aristotle claimed that a true statement is one that says of something that it is what it is or that it is not what it is not (Aristotle 1989). A possible interpretation of Aristotle’s idea is that “A is B” is true if and only if A is B. Notice that this simply removes the quotations around the proposition. The idea is simple: the statement “Dogs are mammals” is true if dogs are mammals. Another way of understanding truth is as a correspondence between statements and the world. The correspondence theory of truth proposes that a statement is true if and only if that statement corresponds to some fact (David 2015). 
A fact is a state of affairs in the world—an arrangement of objects and properties in reality—so the statement “The dog is under the bed” is true if and only if there exists in the world a dog and a bed and the dog is related to the bed by being underneath it. The correspondence theory of truth makes truth a relation between statements and the world. If statements are appropriately related to the world—if they correspond to the world—then those statements can be said to be true.
How Closed Captions Help ESL Learners Improve Their English Skills
One in five Americans speaks English as a second language. This might mean they speak both English and another language fluently. However, of those that do speak more than one language, 41% reported they speak English “less than very well.” That’s more than 25 million ESL learners who may struggle to understand every word spoken in a video they watch online. If you’re creating video content to reach a broad audience, you should consider adding captions to your videos to make them more accessible to ESL learners. By providing written words to match the spoken language in your video content, you can help ESL learners not only to understand your content better, but to become stronger, more proficient English speakers. So, how exactly do closed captions help ESL learners? Vocabulary is one of the most challenging aspects of learning any language, but particularly English. English is made up of words borrowed from many other languages, and that means that many words break commonly taught rules for pronunciation and usage. “I before e except after c” doesn’t help with learning words like “science” or “weird.” Words like this are best learned through exposure and experience, and captions enable viewers to form visual patterns for a language as they encounter those words in normal usage. Giving non-native speakers the opportunity to see and hear words in action increases their ability to remember them. Making matters more challenging, if your videos deal with a specialized subject matter, they may involve many words that are new to ESL audiences. Including closed captions will give those viewers the ability to look up and translate words they don’t understand, without having to worry about misspelling them and missing out on the meaning. Not only does the saturation of visually represented language through closed captioning improve spelling skills for ESL students, but these learners have also been reported to respond more positively to closed captions than to more traditional methods of teaching spelling. Syntax refers to the order in which words are arranged in a sentence. Because syntax varies so much from language to language, it’s one of the most difficult aspects for ESL students to learn. Using closed captions to help students understand syntax can be incredibly useful. This again comes back to the value of visual representations of English language patterns coupled with auditory representations. Students can match the flow of the written language with the spoken language and develop a more precise understanding of why English syntax works the way it does.
Heteronyms and Homophones
A heteronym is a word that is spelled the same as another word (or words), but has an entirely different meaning depending on pronunciation and context. The word “tear” can be pronounced like “tare” and refer to a rip, or it can be pronounced as “teer” and refer to moisture coming from someone’s eye. Homophones are the opposite of heteronyms; they’re words that sound the same when spoken, but have different spellings and different meanings. The most common example of this phenomenon is “to,” “two,” and “too.” These dual meanings and distinct spellings may be hard for a non-native English speaker to detect. Seeing the words on the screen along with the video provides contextual clues and connects a word’s pronunciation to its spelling, helping ESL viewers become more familiar with proper pronunciation and spelling. 
Idioms are groups of words that, as a whole, have a meaning that is not deducible from the individual words. Idioms are a product of culture, and are commonly used in English as they are in many other languages. People say things like “it’s raining cats and dogs” to say it’s raining heavily, or “at the drop of a hat” to indicate something was done immediately. Translating those phrases word-for-word into another language wouldn’t capture their meaning, so idioms must be treated as distinct vocabulary items as people learn English for the first time. To that end, visual representations of idiomatic statements through closed captions have been shown to help ESL learners understand and retain common idioms. When you use closed captions in your videos, not only are you making your content accessible to over 25 million additional viewers, you’re also supporting accessibility more broadly. For ESL learners and others who find themselves struggling with listening to English compared to reading it, your closed captions will give them a chance to engage with your content at their own pace, using the methods they are most comfortable with.
Medication Sensitivities – Drug Allergy and Reaction

Everyone reacts to medications differently. One person may develop a rash while taking a certain medication, while another person on the same drug may have no adverse reaction. Does that mean the person with the rash has an allergy to that drug? All medications have the potential to cause side effects, but only about 5% to 10% of adverse reactions to drugs are allergic. Whether allergic or not, reactions to medications can range from mild to life-threatening. It is important to take all medications exactly as your physician prescribes. If you have side effects that concern you, or you suspect a drug allergy has occurred, call your physician. If your symptoms are severe, seek medical help immediately. Allergy symptoms are the result of a chain reaction that starts in the immune system. Your immune system controls how your body defends itself. For instance, if you have an allergy to a particular medication, your immune system identifies that drug as an invader or allergen. Your immune system reacts by producing antibodies called Immunoglobulin E (IgE) to the drug. These antibodies travel to cells that release chemicals, triggering an allergic reaction. This reaction causes symptoms in the nose, lungs, throat, sinuses, ears, lining of the stomach or on the skin. Most allergic reactions occur within hours to two weeks after taking the medication, and most people react to medications to which they have been exposed in the past. This process is called “sensitization.” However, rashes may develop up to six weeks after starting certain types of medications. One of the most severe allergic reactions is anaphylaxis (pronounced an-a-fi-LAK-sis). Symptoms of anaphylaxis include hives, facial or throat swelling, wheezing, light-headedness, vomiting and shock. Most anaphylactic reactions occur within one hour of taking a medication or receiving an injection of the medication, but sometimes the reaction may start several hours later. Anaphylaxis can result in death, so it is important to seek immediate medical attention if you experience these symptoms. Antibiotics are the most common culprit of anaphylaxis, but more recently, chemotherapy drugs and monoclonal antibodies have also been shown to induce anaphylaxis. A number of factors influence your chances of having an adverse reaction to a medication. These include: body size, genetics, body chemistry or the presence of an underlying disease. Also, having an allergy to one drug predisposes one to have an allergy to another unrelated drug. Contrary to popular myth, a family history of a reaction to a specific drug does not increase your chance of reacting to the same drug. Symptoms of non-allergic drug reactions vary depending on the type of medication. People being treated with chemotherapy often suffer from vomiting and hair loss. Other people experience flushing, itching or a drop in blood pressure from intravenous dyes used in x-rays or CT scans. Certain antibiotics irritate the intestines, which can cause stomach cramps and diarrhea. If you take ACE (angiotensin-converting enzyme) inhibitors for high blood pressure, you may develop a cough or facial and tongue swelling. Some people are sensitive to aspirin, ibuprofen, or other non-steroidal anti-inflammatory drugs (NSAIDs). If you have aspirin or NSAID sensitivity, certain medications may cause a stuffy nose, itchy or swollen eyes, cough, wheezing or hives. In rare instances, severe reactions can result in shock.
This is more common in adults with asthma and in people with nasal polyps (benign growths). It is important to tell your physician about any adverse reaction you experience while taking a medication. Be sure to keep a list of any drugs you are currently taking and make special note if you have had past reactions to specific medications. Share this list with your physician and discuss whether you should be avoiding any particular drugs or if you should be wearing a special bracelet that alerts people to your allergy.

When to See an Allergist / Immunologist

If you have a history of reactions to different medications, or if you have a serious reaction to a drug, an allergist/immunologist, often referred to as an allergist, has specialized training to diagnose the problem and help you develop a plan to protect you in the future. Allergic drug reactions account for 5% to 10% of all adverse drug reactions. Any drug has the potential to cause an allergic reaction. Symptoms of adverse drug reactions include cough, nausea, vomiting, diarrhea, high blood pressure and facial swelling. Skin reactions (e.g. rashes, itching) are the most common form of allergic drug reaction. Non-steroidal anti-inflammatory drugs, antibiotics, chemotherapy drugs, monoclonal antibodies, anti-seizure drugs and ACE inhibitors cause most allergic drug reactions. If you have a serious adverse reaction, it is important to contact your physician immediately.

Feel Better. Live Better.

An allergist/immunologist, often referred to as an allergist, is a pediatrician or internist with additional years of specialized training in the diagnosis and treatment of problems such as allergies, asthma, autoimmune diseases and the evaluation and treatment of patients with recurrent infections. The right care can make the difference between suffering with an allergic disease and feeling better. By visiting the office of an allergist, you can expect an accurate diagnosis, a treatment plan that works and educational information to help you manage your disease. (We thank the American Academy of Allergy, Asthma and Immunology for providing much of the information contained in this article.)
Microscope (Parts and Function)
A microscope is an instrument used to view or magnify organisms smaller than 0.001 mm, which cannot be seen with the naked eye. A microscope is made up of the following parts:
(i) Eyepiece lens or ocular (x10): used for viewing the magnified object.
(ii) Body tube: provides attachment for the eyepiece and the revolving nosepiece.
(iii) Revolving nosepiece: used for selecting the objective lens to be used and bringing it in line with the eyepiece.
(a) Low power objective lens (x4): used for the lowest magnification of an object.
(b) Medium power objective lens (x10): magnifies the object more than the low power objective lens.
(c) High power objective lens (x40): used for the highest magnification of an object, to show minute detail.
(iv) Coarse focus knob: used for focusing an object at low power.
(v) Fine adjustment knob: used for focusing an object at medium and high power magnification so that the object is sharper in focus.
(vi) Arm: used for lifting or carrying the microscope.
(vii) Stage: for holding the slide and specimens under focus.
(a) Clips: hold the glass slide in place on the stage.
(b) Hole: an opening in the stage that lets light through to the object being viewed.
(viii) Condenser: used to regulate the amount of light entering the microscope and reaching the object.
(ix) Mirror: used for collecting light rays and directing them to the condenser and the object.
(x) Base: supports the microscope and balances it on the table.
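The magnification figures above combine multiplicatively: total magnification is the eyepiece power multiplied by the power of the selected objective. A minimal Python sketch of that calculation, using the x10 ocular and the three objectives described above:

# Total magnification of a compound microscope = eyepiece power x objective power.
EYEPIECE_POWER = 10  # the x10 ocular described above

OBJECTIVES = {"low power": 4, "medium power": 10, "high power": 40}

for name, power in OBJECTIVES.items():
    total = EYEPIECE_POWER * power
    print(f"{name} objective (x{power}): total magnification x{total}")

So the x40 objective used with the x10 eyepiece gives a total magnification of x400, which is why it is the lens used to show minute detail.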
What is Education and Examples? Education is a process of acquiring knowledge, skills, values, and attitudes through various formal and informal means. It is an essential tool that empowers individuals to lead a meaningful and productive life by providing them with the necessary tools and resources to succeed in various spheres of life. Education can take many forms, including formal education such as attending schools, colleges, and universities, as well as informal education such as learning through life experiences, reading, and online courses. It plays a crucial role in personal growth, career development, and social mobility, and can lead to higher levels of achievement, better employment opportunities, and a higher standard of living. There are various types of education, such as: Formal Education: Formal education is a structured form of education that takes place in schools, colleges, and universities. It provides students with a structured curriculum that covers various subjects, including math, science,
Using ‘mental gym’ to support children with anxiety The principle of mental gym is quite simple. It is a way of practising different thought patterns in order to develop more flexible thinking and break out of negative responses. Mental gym is especially useful for tackling anxiety as it helps us to unpick the connection between our thoughts and the overwhelming sensations that flood in during an anxious reaction. If your child is prone to worries, practising mental gym will help them understand their anxiety better and open up different ways of responding that could reduce their anxious episodes. The best way to teach your child how to do mental gym is to try it out for yourself first. What makes you anxious? Identify a situation that makes you feel anxious and work through the steps below – or use this readymade example: Your child has come home with a ‘FAIL’ for a piece of Maths homework. You become anxious that they won’t keep up at school and this will have a negative impact on their future prospects. The Mental Gym Steps - Step 1: Establish the facts. In this example, the fact is they have not reached the required standard in this piece of homework. - Step 2: Identify your thoughts (and the beliefs underlying them). For example, you might be thinking that they failed because they don’t try hard enough and don’t understand the importance of schoolwork? Perhaps you believe that if they don’t do well at school that will impact on them for the rest of their life? Or, you might be thinking that it’s your fault they failed because you didn’t help them? - Step 3: Focus on your physical sensations, actions and emotions. Is your heart racing? Your head hurting? Your stomach churning? What would you call this feeling? Fear? Panic? Worry? What are you doing? Having a go at your child? - Step 4: Go back to the facts (step 1). Is there an alternative way of thinking about these facts (step 2)? For example, perhaps your child finds Maths hard or misunderstood the task? Perhaps the unique talents that will help them succeed in life lie elsewhere? Perhaps they ran out of time because they were prioritising something else? What actions and emotions might those alternative thoughts and beliefs lead to (step 3)? The purpose of this exercise is to help us understand that the way we interpret a situation (our thoughts and beliefs about it) will directly influence how we feel (physically and emotionally) and how we act. One thought might lead to an anxious or angry response while an alternative thought might lead to a calm response. It’s not the situation that provokes our response, it is our thoughts about that situation. How mental gym helps anxious children When we feel anxious, the emotional and physical sensations can feel overwhelming. It all seems to arrive in a rush and we lose sight of the thoughts that triggered the anxious response. Children often believe it is the situation that is making them anxious – the approaching dog, the exam, the playground – rather than their thoughts about that situation. If we can help children see that thoughts are not facts and that there are different ways of thinking about the same situation then we open up the possibility of influencing how they react to that triggering situation. Practice makes perfect Just like the physical gym, you won’t get much success just visiting the mental gym once. Catching our triggering thoughts takes practice. 
When a child is in the middle of an acute anxiety response, that is seldom the best time to try to get them to think more flexibly for the first time. Practising mental gym when they are calm (or just mildly worried) will help them develop their thinking skills. You can use story books or examples from friends or family to help them work through the steps indirectly in a safe context (see How to use Story Time to understand your child better or Books to help children with anxiety for ideas). If it helps, you can rename ‘mental gym’ the ABC model (A = Activating event, B = Belief, C = Consequence) – whatever makes it accessible and memorable for your child. You might also want to check out some mindfulness apps or teach them some self-soothing strategies for calming their physical responses. Does your child suffer from ANXIETY? We offer specialised support for parents to help you learn how to support an anxious child/teenager and build their confidence. Details here.
What is Artificial Intelligence (AI)?

Technologies that adopt various characteristics of human intelligence and behavior are typically referred to as Artificial Intelligence (AI). It is the study of agents that analyze their surroundings to create plans and reach the best decision after accounting for both past and current percepts. AI is built using two techniques: machine learning and deep learning. Although the two are closely related (in simple terms, deep learning is a subset of machine learning), they have distinct characteristics that affect AI differently.

Machine learning uses algorithms that interpret data, learn from it, and then apply that knowledge to reach informed decisions. The term falls under the umbrella of AI because it is used to forecast and draw conclusions from the data analyzed. Rather than following the conventional route of human learning, machine learning is instead engineered to be proactive when confronting complex and dynamic environments. It uses the provided data to perform this function and progressively improves.

Machine learning and deep learning are often used interchangeably because of their similar functions; however, the distinguishing factor lies in their capabilities. Deep learning generates algorithms that analyze data continuously, using a logical structure similar to how a human would draw conclusions, entirely on its own. The overall deep learning decision-making process is driven by a multilayered framework of algorithms called an Artificial Neural Network (ANN). An ANN is loosely modeled on the human brain and, in capability and intelligence, goes well beyond standard machine learning. Another difference between the two concepts is that machine learning is more dependent on engineer intervention to resolve inaccurate algorithm predictions, whereas a deep learning model can identify on its own whether its predictions are accurate. Essentially, correctly executed deep learning is the core fundamental powering true artificial intelligence.

Categories of Artificial Intelligence (AI) – Weak and Strong

AI can further be classified as weak or strong. Weak AI is characterized by its limited design and the set of rules confining it to a particular task. It cannot act upon any information beyond the scope of its assigned rules. For instance, Apple’s Siri may give the impression that it is very intelligent because it can respond in conversation and make snarky remarks, but as soon as it is asked a question outside what it is programmed to answer, Siri struggles to deliver accurate results. Although the Internet encapsulates much of the world’s knowledge, leveraging that information and data to power AI is not enough for it to respond in dynamic situations. Its ability to compute and draw conclusions is capped, since weak AI systems are only capable of what they are wired to do, which is why Siri’s “Sorry, I didn’t quite catch that” has become quite familiar. The key takeaway is that weak AI is very competent at executing the tasks it is programmed to do, and may even become exceptional at them; however, its certainty is lost when it diverges from the job it is assigned. On the opposite end of the spectrum, the limitations of strong AI are almost non-existent, as it is capable of mimicking cognitive functions indistinguishably from a human mind.
Other than adopting the same skill set as humans to facilitate reasoning, problem-solving, planning, learning, and communicating, true intelligence also requires consciousness, with sentiments and self-awareness. These key attributes are crucial in the learning process, which starts with a childlike mind and matures with learning and development. After reaching this stage, a strong AI should be able to interact with its surroundings, make inferences independently and even produce its own creations. However, due to the complex construct governing strong AI, we have yet to witness a technology that embodies true intelligence in its entirety. Although AI is still in its early stages, RBS has already built upon its core principles by integrating vital AI attributes to achieve berth optimization. To ensure your terminal meets efficiency in all areas ranging from planning to forecasting, click here to find out how.
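As a loose illustration of the machine learning versus deep learning distinction described above, the sketch below trains a classic statistical model and a small artificial neural network on the same data. This is only a minimal sketch: it assumes the scikit-learn library, uses synthetic data, and a two-layer network is a far cry from the large ANNs used in production deep learning systems.

# A "classic" machine learning model vs. a small artificial neural network
# trained on the same synthetic, labelled data set.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Synthetic data standing in for any labelled training set.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Classic machine learning: a single linear decision boundary.
classic = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# A small neural network: two hidden layers of units, loosely echoing the ANN idea.
neural = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0).fit(X_train, y_train)

print("logistic regression accuracy:", classic.score(X_test, y_test))
print("neural network accuracy:", neural.score(X_test, y_test))

Both models learn from the provided data; the difference the article points to is that the layered network can, given enough data and layers, build its own intermediate representations rather than relying entirely on hand-engineered features.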
Pollination is the process of transferring pollen grains from a flower's male anther to its female stigma. Every living organism, including plants, strives to produce offspring for the next generation. Plants can produce offspring in a variety of ways, one of which is by producing seeds. Seeds contain the genetic information required to create a new plant. Flowers are the tools that plants use to produce seeds. The diagram below depicts the basic parts of a flower. Pollen can only be transferred between flowers of the same species to produce seeds. A species is defined as a population of individuals capable of freely interbreeding with one another but who do not interbreed with members of other species due to geographic, reproductive, or other barriers.

Flowers must rely on pollen vectors to spread pollen. Wind, water, birds, insects, butterflies, bats, and other animals that visit flowers are examples of vectors. Pollinators can be either animals or insects that transfer pollen from one plant to another. Pollination is usually an unintended result of an animal's activity on a flower. When pollen grains attach themselves to the animal's body, it is often eating or collecting pollen for its protein and other nutritional properties, or it is sipping nectar from the flower. When the animal visits another flower for the same reason, there is a chance that the pollen will fall off onto that flower's stigma, resulting in the flower's successful reproduction. According to the animated image, pollen from Flower 1's anthers is deposited on Flower 2's stigma. Pollen may "germinate" on the stigma, which means that a "pollen tube" forms on the sticky surface of the stigma and grows down into the plant's ovule.

This growth can result in: successful fertilisation of the flower and the growth of seeds and fruit; a plant that is only partially fertilised, so that the fruit and/or seeds do not fully develop; or a plant that is not pollinated at all and does not reproduce.

Plants can be self-pollinating, which means that the plant can fertilise itself, or cross-pollinating, which means that the plant needs a vector or a catalyst (a pollinator or the wind) to get the pollen to another flower of the same species.

Self-pollination ensures the abolition of recessive traits. When compared to cross-pollination, pollen grain waste is very low. The purity of the race is maintained during the self-pollinating process because there is no diversity in the genes. External factors such as wind, water, and other pollinating agents are not involved in self-pollination. Self-pollination ensures that even small amounts of pollen grains produced by plants have a high success rate in pollination. The main disadvantage of self-pollination is that there is no gene mixing. As a result, the race's vigour and vitality are diminished, and the immune system of the offspring is weakened.

Let's discuss the advantages and disadvantages of cross-pollination. The seeds that are produced have a high level of vigour and vitality. Through the process of cross-pollination, all unisexual plants can reproduce. As a result of genetic recombination, recessive traits in the lineage are eliminated.
This process strengthens the offspring's resistance to diseases and other environmental factors. Cross-pollination introduces new genes into a species sequence primarily through fertilisation between genetically different gametes. There is a significant amount of pollen grain waste in this process. Because of genetic recombination during meiosis, there is a chance that good qualities will be lost and unwanted characteristics will be added to offspring. "Ornithophily" refers to the act of pollination by birds. Hummingbirds, spiderhunters, sunbirds, honeycreepers, as well as honeyeaters are the most common pollinators. Hummingbirds are the world's smallest birds, weighing as little as 2.5 grammes, or the weight of a penny. Pollination is essential for flowering plants to survive. Because most flowering plants cannot pollinate themselves, they must rely on other animals. Many small birds, such as sunbirds and hummingbirds, play an important pollination role. Plants that are pollinated by birds are built to accommodate them, such as having a sturdy structure to support perching and flowers with a re-curved, tube-like shape that does not tangle the birds. The flowers are also shaped in such a way that a bird's beak can reach them. These plants also have brightly coloured nectar-containing flowers. The pollination process in birds is as follows: Birds visit flowers in search of energy-rich nectar. Most flowers pollinated by birds contain nectar deep within the flower. When a bird tries to reach the nectar, pollen adheres to its head, neck, and back. When birds visit other plants, they spread pollen. 1. What are Sugarbirds? Answer. The sugarbirds are a small genus, Promerops, and family of passerine birds found only in southern Africa. They resemble large, long-tailed sunbirds in appearance and habits, but they may be more closely related to Australian honeyeaters. 2. What are Love Birds? Answer. The genus Agapornis is the common name for a small group of parrots in the Old World parrot family Psittaculidae. The genus contains nine species, eight of which are native to Africa, with the grey-headed lovebird being native to Madagascar. Lovebirds can be kept alone to bond with their pet parent, or in pairs to bond with one another. Birds of various species should not be housed together. Pet parents should socialise their birds on a daily basis. 3. Which Flower is Ornithophilous? What are the Characteristics of an Ornithophilous Flower? Answer. Ornithophily refers to cross pollination that occurs with the help of birds, and ornithophilous flowers include Bignonia, Bottle brush, Butea, Bombax, Callistemon, Grevillea, Agave, and others. Flowers with ornithophilous characteristics are large and colourful. They produce a large amount of mucilaginous nectar. Flowers, in general, have no scent. Some flowers may contain edible components. Pollen grains are clingy. Floral parts that are thick and fleshy are produced by the flower.
DNA Replication Worksheets (Teacher Worksheets)
Showing the top 8 worksheets in the category "DNA Replication". Some of the worksheets displayed are: DNA and Replication Work; DNA Replication and Transcription Work; DNA Replication; DNA Replication Protein Synthesis Answers; DNA Interactive Work; The Components and Structure of DNA; DNA, Chromosomes, Chromatin and Genes; Name/Date/Period DNA Replication Practice Work.

DNA Replication Worksheet (Tamalpais Union High School)
Use Chapter 17.2 to help you. 1. Why does DNA need to replicate? 2. In relation to the pictures below, explain the three main steps in the process of DNA replication. Name the enzymes that go with each step (a, b, c). 3. In which direction are new nucleotides added during replication? 4. What is the difference between a leading and a lagging strand?

DNA Replication Worksheet (Biology)
DNA and Replication, Part A: What is the function of DNA in cells? Describe the components and structure of a DNA nucleotide. Draw a nucleotide. List the names and abbreviations of the 4 bases. Why is DNA called a double helix? What forms the sides of the DNA ladder? What forms the rungs of the DNA ladder? Where is DNA found? Part B: What is meant by the term base pairing? How is base pairing...

DNA Replication Worksheet (WordPress)
Directions: answer the following questions about DNA replication in complete sentences. 1. Why does DNA replicate? 2. Is DNA replication described as conservative or semi-conservative? Why? 3. What 2 enzymes are used during DNA replication? Describe what each does during replication. 4. When does DNA replication occur in a cell? 5. Where does DNA replication occur in a cell?

DNA and Replication Worksheet (Troup County School District)
During DNA replication, what sequence of complementary base pairs will be matched to the following sequence: ATACGCGTTA?

DNA Replication and Transcription Worksheet Answers
The right set of DNA replication and transcription worksheet answers will always reflect the materials that have been taught in class and that you can verify are correct. You can always ask the tutor to look over the answers for you so that you do not need to find out for yourself. Working memory is one of the ways that we work out information. It is easy to lose information when you try to remember everything at once, as well as making it difficult to answer queries. This could mean that...

DNA and Replication Answer Key Worksheets (Kiddy Math)
DNA and Replication Answer Key: the top 8 worksheets found for this concept are DNA Structure and Function Work Answers; DNA Structure Work Answers; Section 12.2 Chromosomes and DNA Replication Work; DNA Structure Practice Answer Key; KM 754e; DNA Replication Protein Synthesis Answers; DNA Double Helix Key; DNA Structure and...

DNA Replication Worksheet Answer Key (Briefencounters)
A DNA replication worksheet is a powerful tool to help you get organized. DNA replication worksheets make it easy to keep track of who has what you need at your fingertips. You can be proactive in your DNA project by using a worksheet. All you need to do is fill out the fields for the person's name, address and birth certificate.

Structure of DNA and Replication Worksheet Answers
Structure of DNA and Replication Worksheet Answers, along with suitable matters, because you want to deliver everything you need as one reputable and efficient reference. We present helpful information about various topics, from recommendations on speech composing to e-book outlines to pinpointing what sort of paragraphs to use for the DNA worksheet key.

Related worksheets: DNA Replication Worksheet with Answer Key; DNA Replication Worksheet (IB); DNA Structure Replication Review Key (2.6, 2.7, 7.1); DNA Worksheet; DNA Workshop Worksheet Key (Mr Lesiuk); DNA Coloring Answer Key; Worksheet on DNA, RNA and Protein Synthesis; DNA Worksheet CR; DNA Worksheet Objectives.
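One of the prompts above (the Troup County item) asks which sequence of complementary bases pairs with ATACGCGTTA. The pairing rule itself is standard (A pairs with T, C pairs with G); the short Python sketch below is just one illustrative way to apply it, ignoring strand orientation:

# Complementary base pairing used during DNA replication: A-T and C-G.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(sequence):
    # Return the base-by-base complementary strand for a template strand.
    return "".join(PAIR[base] for base in sequence.upper())

print(complement("ATACGCGTTA"))  # prints TATGCGCAAT

Written base by base, ATACGCGTTA pairs with TATGCGCAAT.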
DNA Replication Worksheet. The worksheet is an assortment of 4 intriguing pursuits that will enhance your kid's knowledge and abilities. The worksheets are offered in developmentally appropriate versions for kids of different ages. Adding and subtracting integers worksheets come in many ranges, including a number of choices for parentheses use. You can begin with the uppercase cursives and after that move forward with the lowercase cursives. Handwriting for kids will also be rather simple to develop in such a fashion. If you're an adult and wish to improve your handwriting, it can be accomplished. As a result, if you really wish to enhance the handwriting of your kid, hurry to explore the advantages of an intelligent learning tool now! Consider how you wish to compose your private faith statement. Sometimes letters have to be adjusted to fit in a particular space. When a letter does not have any verticals, like a capital A or V, the very first diagonal stroke is regarded as the stem. The connected and slanted letters will be quite simple to form once the many shapes are learnt well. Even something as easy as guessing the beginning letter of long words can help your child improve his phonics abilities.

DNA Replication Worksheet. There isn't anything like a superb story, and nothing like being the person who started a renowned urban legend. Deciding upon the ideal approach route: cursive writing is basically joined-up handwriting. Practice reading by yourself as often as possible. Research urban legends to get an idea of what's out there prior to making a new one. You are still not sure the radicals have the proper idea. Naturally, you won't use the majority of your ideas. If you've got an idea for a tool, please inform us. That means you can begin right where you are, no matter how little you might feel you've got to give. You are also quite suspicious of any revolutionary change. In earlier times you've stated that the move to independence may be too early.

Each lesson in handwriting should start on a fresh new page, so the little one gets enough room to practice. Every handwriting lesson should begin with the alphabet. Handwriting learning is just one of the most important learning needs of a kid. Learning how to read isn't just challenging, but fun too.

The use of grids: the use of grids is vital in helping your child learn to improve their handwriting. Also, bear in mind that maybe your very first try at brainstorming may not bring anything relevant, but don't stop trying. Once you are able to work, you might be surprised how much you get done. Take into consideration how you feel about yourself. Being able to modify the tracking helps fit more letters in a little space or spread out letters if they're too tight. Perhaps you must enlist the aid of another person to encourage or help you stay focused.

DNA Replication Worksheet. Try to remember, you always have to care for your child with great care, compassion and affection to be able to help him learn. You may also ask your kid's teacher for extra worksheets.
Your son or daughter is not going to just learn a different sort of font but will in addition learn how to write elegantly, because cursive writing is quite beautiful to look at. As a result, if a kid is already suffering from ADHD, his handwriting will definitely be affected. Accordingly, to be able to accomplish this, if children are taught to form different shapes in a suitable fashion, it is going to enable them to compose the letters in a really smooth and easy manner. Although it can be cute every time a youngster says he "runned" on the playground, students need to understand how to use the past tense so as to speak and write correctly. Let's say you would like to boost your son's or daughter's handwriting; it is but obvious that you need to give your son or daughter plenty of practice, as they say, practice makes perfect. Without phonics skills, it's almost impossible, especially for kids, to learn how to read new words. Techniques to handle attention issues: it is extremely important that, should you discover your kid is inattentive to his learning, especially when it has to do with reading and writing issues, you begin working on various ways to improve it. Use a student's name in every sentence so there's a single sentence for each kid. Because he or she learns at his own rate, there is some variability in the age when a child is ready to learn to read. Teaching your kid to form the alphabet is quite a complicated practice.
You've Got What? Malaria

Malaria is caused by a parasite called Plasmodium. There are 5 species of Plasmodium which infect humans: Plasmodium falciparum, Plasmodium vivax, Plasmodium ovale, Plasmodium malariae and Plasmodium knowlesi. Of these, Plasmodium falciparum infection is the most severe and can cause death in up to 10% of cases. It can be rapidly fatal. Pregnant women and children are especially at risk. Other types of malaria are less severe, but still may cause death. Malaria is a notifiable condition.1

The parasite is transmitted to humans by the bite of infected female Anopheles species mosquitoes. The parasites multiply in the liver and the bloodstream of the infected person. The parasite may be taken up by another mosquito when it bites an infected person. The mosquito is then infected for the duration of its life and can infect other humans when it bites them. Occasionally malaria is transmitted by blood transfusion. For this reason, people who have travelled to countries where malaria occurs may be deferred from giving blood for a short period. Malaria can also be transmitted from a mother to her fetus.

Malaria occurs in most tropical and sub-tropical areas of the world, including: Over 600,000 people living in these countries die from malaria each year. Many thousands of tourists also get malaria during their travels to countries where malaria is present. Tourists often get severe illness because they have had no previous exposure to malaria and have no resistance to the disease.

Symptoms of malaria may include: Plasmodium falciparum may cause cerebral malaria, a serious complication resulting from inflammation of the brain that may cause coma. Diagnosis is made by a blood test – sometimes it is necessary to repeat the test a number of times, as the parasites can be difficult to detect.

Incubation period (time between becoming infected and developing symptoms): varies with the type. These periods are approximate and may be longer if the person has been taking drugs to prevent infection.

Infectious period (time during which an infected person can infect others): direct person-to-person spread does not occur. A person remains infectious to mosquitoes as long as the parasites are present in the blood. This may be several years if adequate treatment is not given. Parasites disappear from the blood within a few days of commencing appropriate treatment. Mosquitoes remain infected for life.

Specific antimalarial treatment is available and must always be started as soon as malaria is diagnosed. There is increasing resistance to currently available drugs, and treatment should be carried out by an infectious diseases specialist or other expert in the field. Extensive international programs are undertaken in malarious countries to try to control this disease. For travellers, the following advice is given:

1 – In South Australia the law requires doctors and laboratories to report some infections or diseases to SA Health. These infections or diseases are commonly referred to as 'notifiable conditions'.
Quantum computers promise to simulate complex chemistry that can’t be modeled on conventional computers. But today’s quantum computers use error-prone hardware and don’t yet live up to this potential. So materials scientists and quantum engineers are working to improve the basic hardware element of quantum computers, called the qubit. They are developing better manufacturing recipes and control equipment for the leading technologies: superconducting qubits and trapped-ion qubits. They’re developing technologies based on that old standby, silicon. And they’re envisioning less error-prone qubits based on new quantum materials that exhibit weird effects. Discovering quantum materials requires physicists, chemists, and materials scientists to work together. Materials science is a huge component of increasing the performance of circuits. Source: Chemical & Engineering News Name of Author: Katherine Bourzac
Create beautiful interactive timelines quickly and easily. Begin with a Google spreadsheet from the template provided. Add from a variety of media sources such as Twitter, Google Maps, YouTube, and much more. When finished, publish to the web, and share using links or embed code. Be sure to check out the example link for suggestions and ideas for use. The tutorial video is hosted on YouTube. If your district blocks YouTube, it may not be viewable. tag(s): timelines (47) In the Classroom Use your interactive whiteboard or projector to share timelines about historical events, research literature, learn about different decades and events throughout the world, and more. Transform student learning by having them create timelines for research projects. Use a whole class Google account or individual Google apps accounts if you have them. Use this tool to make a timeline of your school year. Create author biographies, animal life cycles, or timelines of events and causes of wars. Challenge students to create a timeline of the plot of a novel, interspersed with the ways themes appear throughout the novel. If you teach chemistry, have students create illustrated sequences explaining oxidation or reduction (or both). Have elementary students interview grandparents and create a class timeline about their grandparents for Grandparents' Day. Why not create a timeline highlighting students' family events for a special gift for Mother's Day, Father's Day, or other holidays? You may need to assign students to do some investigative work first (years of births, marriages, vacations, etc.). In world language classes, have students create a timeline of their family in the language to master with vocabulary about relatives, jobs, and more (and verb tenses!). Students learn about photo selection, detail writing, chronological order, and photo digitization while creating the timelines of their choice. Making a timeline is also a good way to review the history and cultural developments.
Education is the process of facilitating learning or acquiring knowledge, skills or habits through methods such as teaching, training or discussion; it is a vital aspect of every individual's life, and this is why it has been made a fundamental right globally. There is a broad range of reasons that entice people to get an education; some are curious and interested in learning, whereas others simply do it to get a job and fit into society. In light of the importance of education, it is essential that all stakeholders – teachers, parents, government, students – are involved when making decisions on the best educational practices for students so as to ensure a conducive environment for learning. Although the current education system has been in use for decades and is reasonably functional, it is necessary to improve it so that the stakeholders realize better outcomes from the system, which can be achieved by having proper grading systems, encouraging practical learning methods, increasing school funding and improving the attitude teachers have towards their students.

How to Improve the Education System

It is important that education methods are hands-on and practical; students prefer methods that allow them to apply their knowledge practically. Going on field trips and inviting professional motivational speakers are examples of the real exposure that students need so that they can easily understand their field of study. A study showed that practical classwork is more engaging than theory-based classwork, where students just memorize information. Students should apply information covered in previous courses in their current social and academic lives. Practical aspects of a subject entice students, and they consequently develop an intrinsic interest in learning.

Grading standards have, over the years, become a sensitive issue in the academic community. There is a significant increase in the number of cases of teachers giving good grades to students who did not deserve to pass. This is a worrying trend, as it interferes with the quality of education a student acquires and additionally demotivates the students who are not favored by the teachers. Grading standards are, hence, essential; these standards are put in place for a reason, and that is to grade students according to how they fared in each subject. Teachers are obligated to allocate an appropriate grade for the student's performance without favor or prejudice. This, however, is not being implemented by all teachers. Some feel guilty when students fail and others want to alter the final statistics of the performance of the students in their class; they thus compromise the grading standards, which only encourages students to be lazy as they are assured of passing. Not only should these standards be adhered to, but they should also be reviewed. This will challenge both the lecturers and students and will consequently motivate both parties to do better. This move will, additionally, enable the students to develop the proper attitude, knowledge, and skills to tackle challenges in the future. Moreover, a proper grading system will enable teachers to identify the students' academic strengths and weaknesses. This will make students aware of their possible talents and additionally give them a general perspective on the careers they might succeed at in the future.
Consistently low grades will also prompt the teacher to involve the student's parents, and this will help ensure the student has a holistic, well-integrated home and school life. This will reduce degradation of the education system and ensure it produces graduates with high-quality knowledge.

The structure, facilities and resources in a school greatly influence the quality of education a student acquires. Making improvements to a school, or even building a new school, is often expensive, and therefore government funding needs to be increased so that schools can operate optimally. The amount of money allocated for a particular school should be tailored to the school's needs and should be enough to cover its essential requirements. Each classroom should be able to accommodate a good number of students without congestion. Additionally, the classrooms have to be properly ventilated and well lit. The school should also have a playing field and office spaces for teachers. All these have to be considered for a conducive learning environment for both the students and teachers. The body in charge of registering and licensing schools should also be keen in its assessment of schools and should, moreover, make regular visits. This will ensure continued review of the academic programs, departments and the school itself. This will consequently facilitate the closing down of schools that are not up to par, ensuring all students get a good education.

The current school curriculum is mainly theoretical, and this is one of the factors that has contributed to students' decreased interest in school. A curriculum offering a broad range of subjects acts as an additional motivator for the student. However, due to lack of proper funding, most schools are cutting down on the number of courses they offer and only retaining the core subjects. This decision is often influenced by the lack of teachers, classrooms and equipment. The government should, additionally, look into funding additional subjects and co-curricular activities. This will greatly motivate students to learn and improve the overall quality of education acquired.

It is also crucial that each student receives individual attention from the teacher, as some students tend to hide behind other students both literally and academically. They do this to avoid attracting attention to themselves, often due to shyness or a general disinterest in learning. These students tend to get left behind and often end up failing. Individual attention to each student will ensure equal participation from all students, and this facilitates faster learning. Teachers can do this by forming small group discussions or arranging a schedule in which each student is allowed to have individual meetings with the teacher.

Teachers are a key tool in education and should thus be well trained, disciplined and approachable. Training of teachers has to be of a high standard, and the process of hiring teachers should be fair and thorough. In recent years, a significant number of cases have appeared in the media regarding undisciplined teachers who have caused physical or mental harm to their students. This is one of the consequences of hiring a teacher without doing a thorough background check. The presence of unqualified teachers in schools is, additionally, another reason behind the degradation of the education system.
A good teacher should patiently understand their students, their interests and their learning ability; they should be firm but also approachable by students on both academic and personal issues. Improving the recruitment process for teachers will greatly contribute to improved student success. Preparing a person for a teaching career should follow the model of apprenticeship, in which student teachers learn from experienced teachers. Teaching skills should be continually sharpened, since teachers play a crucial role in coaching and guiding students throughout the learning process. The training process for teachers should be reviewed and improved, and this will ensure a quality teaching staff focused on improving student success.

In most modern families, both parents work. This leaves parents with little time to help their children study or do their homework. However, when parents are involved in their children's schoolwork, it helps the students learn more. Parents are their child's first teacher and can teach their children values that encourage them to learn. Learning institutions should build strong bonds with parents and consistently invite their active participation in the classroom. Teachers should, moreover, remind parents of the importance of education for their child and show them ways of assisting the students with their home and class work.

All the educational practices mentioned are aimed at ensuring the education system is optimal for students and best promotes student success. Every aspect of life as a student has been looked at, from home life, where the parent is a child's first teacher, to school, where teachers also nurture the students into responsible, well-educated adults. Teachers should respect the grading standards, and this will curb the degradation of education. Moreover, school boards and school heads should address the issue of grading standards with the seriousness it deserves. This will ensure that only students who qualify are rightfully promoted to the next class. Factors that surround a student, such as the school itself, classrooms and overall facilities, may either help them to learn better or act as a barrier and hinder them from learning. It is thus vital that the government provides sufficient funding to ensure that good-quality education is upheld. Teachers have to be well trained and mentored for them to become good teachers and nurturers, and the best way to ensure that is through apprenticeship. These practices will only work if teachers, parents, government, students and the community work together to promote student success.
Today we are sharing a collection of practice worksheets that are perfect for kids who are at the stage of learning to write the alphabet. The worksheets consist of tracing practice. Print out these free alphabet tracing worksheets to help your kids recognize and write the alphabet. These worksheets contain the letters in both lower and upper case. Check out the alphabet tracing worksheets in the images below. Tracing and writing are activities that develop a child's control of their hand grip and support their learning to write. They allow children to improve their hand-eye coordination. You can give this practice worksheet tracing collection to young children who struggle to remember their letters. This set of tracing sheets is also great for preschoolers who know the alphabet but still need help with letter formation. More tracing worksheets are provided in the images below. Practice tracing the letters of the alphabet with these fun tracing sheets. These worksheets will instill reading and writing fundamentals. Tracing letters is an easy way for children to learn to write each letter, which leads to freehand writing. The following worksheet also includes a preposition worksheet for older kids. These tracing sheets are perfect for alphabet learning activities. This collection is perfect for preschool and kindergarten kids learning and reviewing the alphabet. Use them however you want to help your students practice their letters! Free worksheets are updated daily and you can visit for more at any time!
Server virtualization is a process that creates and abstracts multiple virtual instances on a single server. A server administrator uses virtualization software to partition one physical server into multiple isolated virtual environments; each virtual environment is capable of running independently. The virtual environments are sometimes called virtual private servers, but they are also known as guests, instances, containers or emulations. Single, dedicated servers can only run one operating system (OS) instance at a time. Each single dedicated server requires its own OS, memory, central processing unit (CPU), disk and other hardware to properly operate. The single server would also have complete access to all of its hardware resources. Server virtualization, on the other hand, can allow a server to run multiple independent OSes, all with different configurations. Server virtualization also masks server resources, including the number and identity of individual physical servers, processors and operating systems. Server virtualization will instead share the server's resources with the other abstracted virtual instances on that server. Server virtualization allows organizations to cut down on the necessary number of servers, saving money and reducing the hardware footprints involved in keeping a host of physical servers. Server virtualization also allows for much more efficient use of IT resources. If an organization is afraid of under- or overutilizing their servers, then virtualization may be a good practice to try. An organization can also use server virtualization to move workloads between virtual machines (VMs), cut down on the overall number of servers or to virtualize small- and medium-scaled applications. Server virtualization can be viewed as part of an overall virtualization trend in enterprise IT that includes storage virtualization, network virtualization and workload management. This trend is one component in the development of autonomic computing, in which the server environment will be able to manage itself based on perceived activity. Server virtualization can be used to eliminate server sprawl, to make more efficient use of server resources, to improve server availability, to assist in disaster recovery, testing and development and to centralize server administration. How does server virtualization work? Server virtualization works by partitioning software from hardware using a hypervisor. Hypervisors come in different types and are used in different scenarios. The most common hypervisor -- Type 1 -- is designed to sit directly on a server, which is why it is also called a bare-metal hypervisor. Type 1 hypervisors provide the ability to virtualize a hardware platform for use by VMs. Type 2 hypervisors are run as a software layer atop a host operating system and are used more often for testing/labs. One of the first steps in server virtualization is figuring out what servers an organization may want to virtualize -- for example, deciding to virtualize a server that doesn't make use of all of its resources may be a good idea, so those unused resources can be reutilized for other tasks. Once the server to be virtualized is chosen, users should monitor the system to determine the performance and resource usage of the physical deployment before sizing a VM. For example, users should monitor resources such as memory, disk usage or microprocessor loads. This information provides an organization with an idea of how many resources can be dedicated to each virtual instance. 
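As a rough illustration of the sizing step described above, the snippet below takes the kind of resource snapshot (CPU load, memory and disk usage) an administrator might record before deciding how much to allocate to a virtual instance. It is only a sketch: it assumes the third-party psutil package, which the article does not mention, and a real capacity study would sample these figures over a representative period rather than once.

# Snapshot of host resource usage prior to sizing a virtual machine.
# Requires the third-party psutil package (pip install psutil).
import psutil

cpu_percent = psutil.cpu_percent(interval=1)  # CPU load sampled over one second
memory = psutil.virtual_memory()              # physical memory usage
disk = psutil.disk_usage("/")                 # disk usage for the root volume

print(f"CPU load:    {cpu_percent:.1f}%")
print(f"Memory used: {memory.used / 2**30:.1f} GiB of {memory.total / 2**30:.1f} GiB")
print(f"Disk used:   {disk.used / 2**30:.1f} GiB of {disk.total / 2**30:.1f} GiB")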
Users start the virtualization process with virtualization software. An organization can deploy hypervisors -- such as Microsoft Hyper-V and VMware vSphere -- or use virtualization tools, such as PlateSpin Migrate. Depending on the server, specific parts should be virtualized first, such as supporting applications or hard disks in a database server. After migration, it may be necessary to adjust the resources allocated to a virtual instance to ensure adequate performance.

Advantages of server virtualization
Some advantages of server virtualization include:
- Live migration
- Server consolidation
- Reduced need for physical infrastructure
- Each virtual instance can run its own OS
- Each individual instance can act independently
- Reduced cost on servers
- Reduced energy consumption
- Easier to back up and recover from disasters
- Easier to install or set up software patches and updates
- Ideal in web hosting

Disadvantages of server virtualization
Disadvantages of server virtualization include:
- Possible availability issues
- Possible resource consumption
- Possible memory overcommits
- Upfront costs -- considering the virtualization software and an organization's network
- Software licensing
- IT staff with experience in virtualization may be needed because a learning curve may be involved
- Security may also be a concern, especially if a virtual server is on the same physical server as a separate organization

Server virtualization uses and applications
Server virtualization should be used to consolidate resources, save money and provide independent environments for software on a single server. As a few practical examples, an IT organization can use server virtualization to reduce the time spent managing individual servers, gain experience before migrating servers to the cloud or use more OSes and applications without needing to add more physical machines. Server virtualization can also be used in web servers as a low-cost way to host web services. It can also provide redundancies in case of any data loss if an organization hosts copies of its data on a virtualized server.

3 types of server virtualization
There are three popular approaches to server virtualization: the virtual machine model, the paravirtual machine model and virtualization at the OS layer.

Virtual machines are based on the host/guest paradigm. Each guest runs on a virtual imitation of the hardware layer. This approach allows the guest operating system to run without modifications. It also allows the administrator to create guests that use different operating systems. The guest has no knowledge of the host's operating system because it is not aware that it's not running on real hardware. It does, however, require real computing resources from the host -- so it uses a hypervisor to coordinate instructions to the CPU. The hypervisor is called a virtual machine monitor (VMM). It validates all the guest-issued CPU instructions and manages any executed code that requires additional privileges. VMware and Microsoft Virtual Server both use the virtual machine model.

The paravirtual machine (PVM) model is also based on the host/guest paradigm -- and it uses a virtual machine monitor, too. In the paravirtual machine model, however, the VMM actually modifies the guest operating system's code. This modification is called porting. Porting supports the VMM so it can utilize privileged system calls sparingly. Like virtual machines, paravirtual machines are capable of running multiple operating systems.
Xen and User-Mode Linux (UML) both use the paravirtual machine model. Virtualization at the OS level works a little differently. It isn't based on the host/guest paradigm. In the OS-level model, the host runs a single OS kernel as its core and exports operating system functionality to each of the guests. Guests must use the same operating system as the host, although different distributions of the same system are allowed. This architecture eliminates system calls between layers, which reduces CPU usage overhead. It also requires that each partition remain strictly isolated from its neighbors so that a failure or security breach in one partition isn't able to affect any of the other partitions. In this model, common binaries and libraries on the same physical machine can be shared, allowing an OS-level virtual server to host thousands of guests at the same time. Virtuozzo and Solaris Zones both use OS-level virtualization. A hypervisor is what abstracts an operating system from the underlying computer hardware. This abstraction allows the host machine's hardware to independently operate multiple VMs as guests -- meaning the guest VMs effectively share the system's physical compute resources. Traditionally, hypervisors are implemented as a software layer and are separated into Type 1 and Type 2 hypervisors. Type 1 hypervisors are most commonly used in enterprise data centers, while Type 2 hypervisors are commonly found on endpoints such as PCs. In the 1960s, IBM developed virtualization of system memory, a first step toward server virtualization. In the 1970s, IBM released VM/370, an early commercial virtual machine operating system that was developed further over the years and eventually became z/VM. Since then, VMs and server virtualization have steadily gained popularity. VMware released VMware Workstation in 1999, bringing virtualization to commodity x86 hardware.
With the expected rise in space tourism, we think there may come a time when we're building a Moon Hotel, but we're a bit stuck on what we think it might look like. We'd love to see your pupils' ideas on what a Moon Hotel could look like, what weird and wacky features they might include and how they will overcome the environmental issues on the Moon. Using the list of facts about the Moon below and children's own knowledge and research, we'd like them to design a poster or pamphlet. We'd like pupils to think about food, water, air, gravity, and how people will get there. How would it be different from hotels on Earth? What would people need to survive? How would they get around? What cool things would there be inside for people to do? Children must draw a poster or write a pamphlet explaining their hotel, the features they have added to it and why they have chosen to do certain things, e.g., build a greenhouse to grow food. The poster or pamphlet should be aimed at attracting people to visit the Moon Hotel by advertising the features of the establishment.
Facts About The Moon
1. The Moon orbits the Earth and is our planet's only natural satellite
2. The same side always faces the Earth, meaning we have never seen the far side from our planet (although there are pictures taken from satellites that have gone all the way around). This is called tidal locking and happens because the Moon's rotation is the same speed as its orbit around the Earth.
3. The temperature ranges from -233°C to +123°C
4. The Moon's orbit controls the Earth's tides
5. The Moon is moving away from the Earth at a speed of 3.8 cm a year
6. You would weigh less on the surface of the Moon because it has less gravity, which also means lighter objects might float away into space
7. There is no atmosphere on the Moon and therefore no protection from outer space or things like the Sun's rays, comets and the solar wind
8. There are many moonquakes, causing fractures and ruptures every week
9. It takes 3 days in a spaceship to get to the Moon
10. There are sources of oxygen on the Moon, but the lack of atmosphere means that there is no way of keeping it near the surface (like it is on Earth), meaning you can't breathe the air
Lower KS2: Draw a poster to attract people to visit a new hotel on the Moon
Upper KS2: Design a pamphlet to explain a new hotel on the Moon and attract people to visit
Start date: 26th March 2019
Close date: 14th June 2019
Winners announced: 28th June 2019
What we're looking for in the winning entry:
A clear link between the environment on the Moon and the hotel features, e.g. a zero-gravity room for entertainment, a greenhouse to grow food, cylinders to make water, oxygen pumps etc.
Clear descriptions of their ideas, explaining why they have chosen certain things and what the purpose of everything is.
Extra points to children who consider the journey to the Moon.
There will be only one winner from both upper and lower KS2.
The prize for pupils: A selection of goodies from the European Space Agency and a selection of activities and investigations they can do at home
The prize for the school: 1 year's free subscription to Empiribox
HOW TO ENTER
Download templates for pupils to use. You can enter by submitting your pupils' posters and pamphlets via post:
FAO Sam Saul
Rutherford Appleton Laboratory
R71, Office 17a
By email: upload your posters to Dropbox, Google Docs or another shared folder and send a link to [email protected]. Please ensure that digital entries are scanned in full colour with all pages clearly visible.
To allow us to contact the lucky winners quickly, every entry must have the school name, the school postcode, the pupil’s name and year group clearly marked. *Entries must be received before 4pm on the 14th June 2019 by either of the above methods with children’s first name, year group, school name and postcode clearly visible on each entry. Entries received after this date or without the correct information displayed will be discounted. The templates supplied do not have to be used. The winning school will be required to allow images, quotes and other forms of media to be published by Empiribox on social media, our website and in press releases in local papers and education magazines. By submitting your entry you are agreeing to these terms and can provide us with media consent from pupils parents and guardians. For more information, please contact [email protected]. For a free lesson box, please sign up using the form below.
Diabetes is one of the most common chronic diseases among children and youth. It requires appropriate care in the school setting to ensure student safety, well-being and optimal academic performance. Type 1 diabetes is the most common form in childhood. Lack of exercise, poor nutrition, obesity and family predisposition have led to the rise of Type 2 diabetes among children. Provincial standards create a safe and supportive environment for children by outlining the roles and responsibilities of parents/guardians, school administrators, and health authorities.
- Provincial Standards: Supporting Students with Type 1 Diabetes in the School Setting (PDF)
- Normes provinciales relatives au soutien offert en milieu scolaire aux élèves atteints de diabète de type 1 (PDF)
Standards, services and supports are based on best practice developed by Child Health BC:
- Diabetes Care in the School Settings: Evidence-Informed Key Components, Care Elements and Competencies
Learn about keeping student records and medical alerts:
Planning with Parents
Work closely with parents to provide the support their child needs – especially in an emergency situation. Start building a support plan for each child by having parents complete the following form. If parents wish to have glucagon given in the event of an emergency, have them complete the following form and get it signed by their child's physician.
About Fibonacci (The Man)
His real name was Leonardo Pisano Bogollo. He lived in Italy between 1170 and 1250. "Fibonacci" was his nickname, which roughly means "Son of Bonacci". Fibonacci was not the first to know about the sequence; it was known in India hundreds of years before! As well as being famous for the Fibonacci Sequence, he helped spread Hindu-Arabic numerals (like our present numbers 0, 1, 2, 3, 4, 5, 6, 7, 8, 9) through Europe in place of Roman numerals (I, II, III, IV, V, etc.).
Definition of Fibonacci Sequence
The Fibonacci Sequence is the series of numbers: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, … The next number is found by adding up the 2 numbers before it.
|Number||Sum of the 2 Numbers Before|
|2||1 + 1|
|3||1 + 2|
|5||2 + 3|
|8||3 + 5|
|13||5 + 8|
|21||8 + 13|
|34||13 + 21|
The Rule of the Fibonacci Sequence
Let's write the Fibonacci Sequence as a rule! First, the terms are numbered from 0 onwards like this:
|n||0||1||2||3||4||5||6||7||8||9|
|xn||0||1||1||2||3||5||8||13||21||34|
So term number 7 is called x7, which equals 13. The 7th term is the 6th term plus the 5th term: x7 = x6 + x5
So we can write the rule: xn = xn-1 + xn-2
- xn is term number "n"
- xn-1 is the previous term (n-1)
- xn-2 is the term before that (n-2)
(A short code sketch of this rule appears after the tables below.)
Fibonacci Sequence Terms Below Zero
The sequence works below zero too, like this: …, −8, 5, −3, 2, −1, 1, 0, 1, 1, 2, 3, 5, 8, …
In fact the sequence below zero has the same numbers as the sequence above zero, except they follow an alternating +−+−… sign pattern. It can be written like this:
x−n = (−1)^(n+1) xn
This says that term "−n" is equal to (−1)^(n+1) times term "n", and the value (−1)^(n+1) neatly makes the correct 1, −1, 1, −1, … pattern.
Interesting Patterns in the Fibonacci Sequence
|Every 3rd number is a multiple of x3 = 2||2, 8, 34, 144, 610, …|
|Every 4th number is a multiple of x4 = 3||3, 21, 144, …|
|Every 5th number is a multiple of x5 = 5||5, 55, 610, …|
|Every nth number is a multiple of xn|
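The rule above translates directly into a few lines of code. The sketch below is illustrative only (the function name and the printed ranges are my own choices); it implements the rule xn = xn-1 + xn-2 and the below-zero identity x−n = (−1)^(n+1) xn.

# A small sketch of the rule x_n = x_(n-1) + x_(n-2), plus the
# negative-index identity x_(-n) = (-1)**(n+1) * x_n described above.
def fib(n):
    """Return the nth Fibonacci number, with x_0 = 0 and x_1 = 1.

    Negative n is handled with the identity x_(-n) = (-1)**(n+1) * x_n.
    """
    if n < 0:
        return (-1) ** (-n + 1) * fib(-n)
    a, b = 0, 1          # x_0, x_1
    for _ in range(n):
        a, b = b, a + b  # each new term is the sum of the two before it
    return a

if __name__ == "__main__":
    print([fib(n) for n in range(10)])     # 0, 1, 1, 2, 3, 5, 8, 13, 21, 34
    print([fib(-n) for n in range(1, 6)])  # 1, -1, 2, -3, 5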
Geography provides a spatial perspective for learning about the world. It transcends time, chronology, and sequence by teaching students to think in terms of physical and human systems, patterns, distributions, the movement of people, goods, and ideas, the world's regions in all their forms, and the interaction between people and their environment. It examines the spatial dimension of human experience (i.e., space and place) in ways that cannot be adequately developed if left in the mix of social studies.
Geography: An Essential School Subject—Five Reasons Why (2003), Journal of Geography, 102:1, 42-43, DOI: 10.1080/00221340308978519
Since the Every Student Succeeds Act (ESSA) became law in December 2015, the role of assessments in accountability systems have been hot topics. ESSA provides more flexibility to states in designing accountability systems and creating more balanced systems of assessment. In light of this dialogue, it seems like the right time for a quick refresher on the different types and purposes of assessment. Educators generally agree on three broad categories of educational assessment: formative, interim, and summative. Formative assessment guides learning. It includes giving clear, actionable feedback to students, sharing learning goals, and modeling what success looks like. By design, formative assessment: - has an explicit connection to an instructional unit - consists of many kinds of strategies, and is typically informal - helps educators guide the learning process, rather than grade or evaluate student performance We have several blogs posts on formative assessment; check out How Formative Assessment Plays a Critical Role in Classroom Learning or Empowering Students with Formative Assessment for more information. Summative assessment certifies learning. Generally, educators administer a summative assessment near the end of an instructional unit to help them answer the question, “What did students learn?” All sorts of different assessment instruments are used for summative assessment, including: - end-of-unit tests and end-of-course tests - performance tasks/simulations - oral examinations - research reports - state accountability tests Despite this array of summative instruments, state accountability tests most often come to mind. High stakes can be associated with summative assessment, such as selection, promotion, and graduation. And, policymakers use state assessment data to communicate the state of education to the public. Because summative assessment happens so late in the instructional process, the most effective use of its test data is evaluative versus instructional. For teachers, data can help guide decisions such as assigning grades for a course, promotion to the next grade, credit for courses, and more. Summative assessment data also play a role at the administrative level, where they’re useful assets for planning curricula, determining professional development needs, and identifying the resources the district needs to flourish. As you may have heard before, an easy way to remember the difference is that formative assessment is assessment for learning, while summative is assessment of learning. Interim assessment guides and tracks learning. A wide middle ground exists between teachers’ day-to-day formative assessment of student learning and the formal protocols of state summative assessment. This middle ground offers opportunities—captured under the umbrella term interim assessment—to gather information about many things that are relevant to the teaching and learning process, including: - individual and collective student growth - effectiveness of teaching practice and programs - projection of whether a student, class, or school is on track to achieve established benchmarks - instructional needs of individual students Educators can use interim assessments in a formative way to directly guide instruction. When this happens, data aggregation is considered the key difference between formative and interim assessment. 
This ability to aggregate data at critical points in the learning cycle allows interim assessment to have a broader set of purposes than both formative and summative assessment. As a result, interim assessment is the only type of assessment that provides educators with data for instructional, predictive, and evaluative purposes. Understanding the differences between these three types of assessments is important in determining how best to use the data and insight each one provides – a key part of our purpose here at AssessmentLiteracy.org. For more on different types and purposes of assessment, check out our interactive infographic, blog archive or library of resources.
The importance of early warnings Late last summer, a heat wave in Europe killed more than 15,000 people in France alone. At about the same time, South Asia flooded and forest fires raged in Spain and Portugal. In the United States, East coast electrical power went out and West Nile virus spread widely. It’s said that to be forewarned is to be forearmed. Mickey Glantz (ESIG) and Zhang Renhe of the Chinese Academy of Meteorological Sciences organized a workshop in Shanghai last October to study early warning systems for crises like floods, droughts, epidemics, invasive species, food insecurity, terrorism, and more. About 30 people from academia, government organizations, and private industry attended. “As all of our usable science workshops have been, it was designed to be a high-impact workshop,” Mickey says. “We captured the attention of various governments, United Nations agencies, and organizations.” Early warning systems can range from formal bureaucratic chains of command to informal coping mechanisms that have been passed down through the generations. An early warning system might be quantitative; for example, if high temperatures reach a certain level for a specific number of days, a heat wave warning is issued. Or it might be qualitative and rely on anecdotal evidence, like truck drivers returning from the countryside who report seeing famine-stricken villagers selling their possessions for food. “There must be scores of definitions of an early warning system,” Mickey says. “People don’t know how to define it, but they know one when they see it. An early warning is any kind of notice that there is some impending change in current conditions.” The goal of the workshop participants was to make early warnings more effective by gathering lessons from people who have developed and worked with early warning systems in different contexts. The participants would like to see government officials and others use these insights to prepare warnings and educate the media and public, making early warning systems more useful, credible, and reliable. Some of the specific topics participants discussed include different types of early warning systems, how long before crises they should be issued, circumstances that lead people to ignore warnings, and how to measure the effectiveness of early warning systems. Participants emphasized the link between early warning systems and sustainable development. Governments can use early warning systems to encourage settlements to develop in more secure areas, as well as to make the post-disaster rebuilding process less likely to hold back economic development for years after a crisis. “Sustainable development prospects, and even the stability of a government, are much more dependent on successful early warnings than most observers and governments realize,” Mickey says. “Governments pay lip service to early warning systems, but they really have to take them much more seriously.” Participants pointed out that while early warning systems might always look effective on paper or in PowerPoint demonstrations, not surprisingly they run into bottlenecks in reality. For example, national meteorological services accurately forecast the heat wave in Europe, but the cascade of early warnings was clearly insufficient and didn’t reach people most at risk. Another problem is that disaster priorities in a given location vary over time as new hazards appear or existing hazards occur in new areas. 
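As a concrete illustration of the quantitative kind of early warning described above, the toy sketch below issues a heat-wave warning when daily highs stay at or above a threshold for a run of consecutive days. The 35°C threshold and the 3-day run length are made-up parameters for illustration, not values from any operational warning system.

# A toy illustration of a quantitative early-warning rule: warn once daily
# highs have stayed at or above a threshold for a run of consecutive days.
def heat_wave_warning(daily_highs_c, threshold_c=35.0, run_days=3):
    """Return the index of the first day a warning would be issued, or None."""
    run = 0
    for day, temp in enumerate(daily_highs_c):
        run = run + 1 if temp >= threshold_c else 0
        if run >= run_days:
            return day  # warning issued on this day
    return None

if __name__ == "__main__":
    august = [31.0, 33.5, 36.2, 37.0, 38.4, 39.1, 34.0]  # hypothetical °C readings
    day = heat_wave_warning(august)
    print("No warning" if day is None else f"Heat-wave warning issued on day {day}")

Real systems layer much more on top of a rule like this, of course; as the workshop participants noted, the harder problems are getting the warning to the people at risk and persuading them to act on it.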
Global warming in particular has the potential to change the location, seasonality, and severity of hydrometeorological hazards. “Many of the impacts of global warming are still quite speculative, whereas the kinds of hazards we’re dealing with today are real and known,” Mickey says. “But we have a win-win situation by focusing on early warning systems, because if we can’t deal with extremes and variability today we won’t be able to deal with them under conditions of global warming in the future.” NCAR, NSF, NOAA, and the Chinese Academy of Meteorological Sciences sponsored the conference. In addition to Mickey, other ESIG staffers who attended the conference included Qian Ye, DJan Stewart, and Anne Oman.
Anatomy and General Function
The colon, also known as the large intestine, is a hollow organ approximately 5 feet in length. Although much shorter in length than the small intestine, the term "large intestine" refers to its much larger diameter. The colon is divided into anatomical segments based on blood supply and adjacent organs. The first portion of the colon is the cecum. The appendix projects off the cecum, and within inches is the junction between the small intestine and colon. This is referred to as the ileocecal junction. The colon generally forms an inverted "U" as it ascends the right side of the abdomen (ascending colon), crosses the abdomen from right to left (transverse colon) and descends down the left abdomen (descending colon) to the "S"-shaped sigmoid colon. The sigmoid colon empties into the rectum. Although the rectum is a continuation of the sigmoid colon, there are a host of anatomic, mechanical and functional differences. Diseases of the colon and rectum are often managed in a different fashion. The function of the colon, in general, is the recycling of fluid, electrolytes and unabsorbed nutrients. The small intestine is generally responsible for the absorption of ingested nutrients (protein, fat, carbohydrates, vitamins and minerals). Digestion and absorption require several liters of enzyme-rich fluid to enter the GI tract. A large volume of fluid, along with some electrolytes and nutrients, passes from the small intestine into the colon. The colon absorbs approximately 90% of the fluid and electrolytes that pass through, preventing continuous diarrhea and major fluid losses. The colon has more than 400 different bacterial species that play a key role in maintaining colon integrity, immune function and overall health. It is estimated that every gram of stool contains approximately 100 billion to 1 trillion bacteria! The natural colonic bacteria (flora) break down indigestible fiber, providing a nutrient by-product that is the preferred energy source for the cells lining the inside of the colon. In addition, bacteria are involved in the absorption of important vitamins such as vitamin K.
When is colon surgery needed?
There are a number of diseases that may require surgical intervention. One of the more common reasons for colon surgery is colon cancer. Colon cancer screening generally begins at the age of 50 with the use of colonoscopy. Colonoscopy is a procedure in which a long, slender tube with a camera is placed into the rectum, allowing examination of the inner lining of the entire colon. Individuals with a family history of colon cancer usually will have their first colonoscopy prior to age 50. Colonoscopy can identify growths within the colon. These growths are called polyps. The majority of polyps are benign; however, there is strong evidence that some cancers will arise from certain polyps. Colonoscopy also allows for biopsy and sometimes complete removal of the polyp. Indications for colon surgery include biopsy-proven colon cancer, inability to remove an entire polyp, or a polyp located in an area where excision may lead to other complications. The most common indication for colon surgery for benign disease is diverticulosis. Diverticula are small outpouchings of the wall of the colon. Colonic diverticula are common: it is estimated that 30% of people older than 50 have diverticula, and about 60% of individuals over 85 have them.
The majority of diverticula are seen in the sigmoid and descending colon; however, some individuals may develop diverticula in the cecum, the ascending colon or throughout the entire colon. Colonic diverticula have a propensity to bleed or become inflamed and can be a source of colon perforation. Recurrent episodes of bleeding requiring blood transfusions are a strong indication for surgery. Perforation from an inflamed diverticulum (diverticulitis) can be a surgical emergency. Sometimes small, contained perforations can be managed with bowel rest and IV antibiotics. Recurrent episodes of inflammation (diverticulitis) are another indication for surgery.
Laparoscopic Colon Surgery
At one time, laparoscopic colon surgery was limited to benign disease such as diverticulitis. The concern with laparoscopic surgery for colon cancer stemmed from the idea that the ability to harvest lymph nodes in the resected specimen was not as good as when the operation was performed in an open fashion. Several studies have since shown lymph node harvest to be equivalent when the procedure is performed by a surgeon skilled in laparoscopic techniques. Laparoscopic surgery can now safely be performed for malignant diseases of the colon with equivalent oncologic results. The benefits of laparoscopic surgery in colon cancer are:
- Reduced wound complications (infection and hernia)
- Shorter hospital stay
- Decreased pain
- Earlier return to work/activity
In some cases laparoscopic surgery may not be possible, or the surgeon may convert to an open approach during the procedure. Previous abdominal surgeries, scar tissue, tumor size and degree of inflammation are major determining factors.
The B sound is also considered to be a bilabial sound-this means that your two lips work together in order to make this sound-they are making closure (lips together). The only difference between the B sound and the P sound is that when you make the B sound your vocal folds vibrate. Sometimes I tell the children that their “motor is on” when they make this sound. Visual Cue/Physical Cue and Tactile Cue: Make a fist with your hand and gently bang on a table or on your thigh while producing the B sound. You should ask your little one to do the same. This cue is showing your child that the B sound is a sound that stops. The S sound is a sound that continues during production, however, the B sound stops. So be sure to demonstrate to your little one……by gently banging you fist on a table or your thigh. While you bang your fist on the table place your other hand on your throat so that you can feel the vibration of your vocal folds. Now when practicing with your child-you can do the movement with your hand making the fist and have her do the same with her fist…..and while you are both doing this….have her take her own hand and feel the vibration on your throat. Be sure to explain to her that “your motor is on”. By saying “motor on” you are telling her that her vocal folds are moving/vibrating. The B sound is a stop sound…..and a motor on sound. By using the physical cue (fist banging on table) and tactile cue (feeling vibration of the vocal folds) you are telling her two things: showing her that the B sound is a sound that stops and does not continue like the S sound and that when making the B sound your vocal folds move. Be sure to always model the production of the target sound. Remember this means that you are demonstrating how the B sound should sound like when produced correctly. When she is ready….have her practice the B sound followed by vowels, then in simple words, next short phrases and then in longer sentences. Remember to use melody as a cue when producing target words. It is easiest when the word has more than one syllable. Words like: baby, baker, butter, and birdie. Be sure to change your pitch and inflection…..that will help her to say the word. CLICK ON THIS LINK FOR INITIAL B WORDS WORKSHEET: INITIALBWORDS SAY THEM IN ISOLATION (JUST THE WORD-NOT IN A SENTENCE) SAY THEM IN A SHORT PHRASE SAY THEM IN A SENTENCE SAY THEM IN A LONGER SENTENCE TRY IN REGULAR CONVERSATION Also posted under the articulation page are strategies to help teach many other sounds. Also find out under the articulation page how P and B are similar when talking.
Pity is an emotion, usually resulting from an encounter with an unfortunate, injured, or pathetic person or creature. A person experiencing pity will often take mercy on the person/creature, giving them aid or money. Many people pity the homeless, orphans, the terminally ill, and victims of rape and torture. Because pity will result in people aiding the pitiful, most people consider it a positive thing. Philosopher Friedrich Nietzsche, however, believed that pity causes an otherwise normal person to feel the suffering of others; "Pity makes suffering contagious," he says in The Antichrist. Often, the word is used alone in speaking to refer to something unfortunate. The full sentence is "It's a pity," but saying the word alone (as in "Pity.") conveys the same idea as the sentence.
Samples Tell Tales
Suggested grade levels: 7-8
View Idaho achievement standards for this lesson
Objectives:
1. Students will understand that rocks can form in layers.
2. Students will be able to tell which layers are older when compared with other layers.
Materials: various colors of clay, clear plastic tubes or straws, salt and pepper, Exacto knife
This activity is related to the fossil and sedimentary rock sections of the Digital Atlas. To get there: Click on Atlas Home, mouse over Geology, then click on Fossils. Encourage students to read through the introduction on fossils. Have them click on the pictures of the different fossil types at the left side of the screen. This will help the students to see the dazzling variety of life forms to be discovered in the study of fossils.
Procedure:
1. Encourage your students to put various layers of colored clay in a container. Layers can vary in their thickness. The salt and pepper can be mixed within some of the layers to represent different types of fossils.
2. Divide the class into groups of 3-4 students; each group will stick a plastic tube into the clay as far as it will go. Remove the tube and let students observe the different layers of "rock" that they have in their core sample.
3. If the straw is not easy to see through, teachers should cut away the tube with an Exacto knife and allow the students to draw their layers on paper. Students should indicate where various "fossils" are found. Have students label their drawing with the relative ages of each layer or stratum.
4. Have groups of students compare core samples with other groups. See if each group can correlate the layers within each core sample. Not all core samples should be alike because samples were taken at different places within the clay.
5. Try twisting or folding the clay in the container to represent the geologic forces that convert sedimentary rock into metamorphic rock, or cut the clay and move the layers to represent an earthquake. Take another core sample and let students observe the changes.
These are links to access the handouts and printable materials. Geology: Geology Topics
Central Equatorial Pacific Experiment (CEPEX)
- Principal Investigator: Paul Willis (CIMAS)
- Other Scientists: Chris Samsury (The Weather Channel), Andy Heymsfield (NCAR)
The purpose of the Central Equatorial Pacific Experiment (CEPEX) is to investigate this mechanism. The overall scientific goal of CEPEX is to establish the respective roles of cirrus radiative effects and surface evaporation in limiting maximum surface temperature in the equatorial Pacific. Of the so-called atmospheric greenhouse gases, water vapor is the most effective in producing surface-atmospheric warming. Water-vapor concentrations increase rapidly over the tropical oceans when sea surface temperatures (SSTs) begin to rise. This effect is greatest over the huge "warm pools" of the Pacific Ocean. The water-vapor content of the atmosphere above the ocean increases by about 15-20% for each 1% increase in SST. These increasing concentrations of water vapor trap more and more heat, which causes the ocean's surface temperature to rise even further, thus creating a "super-greenhouse effect." Unchecked, this feedback mechanism would result in runaway warming. This is not what is observed, however; even in the "warm pool," SSTs never exceed 304 K (31°C). This suggests that some kind of "thermostat" might exist. Furthermore, deep, intense convection over the tropical oceans (cloud tops reaching altitudes of 18-20 km) occurs only when SSTs exceed about 300 K. These observations raise two questions:
- Why do maximum SSTs in the tropical oceans remain within a few degrees of the 300 K threshold SST for deep convection?
- What are the restoring forces that limit SST and deep convection to observed tropical Pacific values?
It has been argued that cooling by evaporation from the ocean surface provides such a mechanism. However, observations from space and from the atmospheric boundary layer indicate that this process is not sufficient. Rather, it may be the very high and cold cirrus clouds, streaming from tropical thunderstorms, stretching over large areas of the Pacific, and reflecting the incoming solar radiation that, in fact, act as a thermostat. Direct in-situ measurements of radiation fluxes, cirrus microphysics, evaporation rates, and water-vapor distributions must be obtained over a range of SSTs, from regions where SST is just below the convection threshold temperature to regions where SST exceeds it. Accordingly, the CEPEX experiment domain encompassed the transition (with respect to SST) region from the central equatorial Pacific to the tropical south Pacific or the tropical western Pacific "warm pool." The primary experimental objectives of CEPEX are to:
- Measure, by direct atmospheric observations, the vertical structure of the water-vapor greenhouse effect.
- Measure the effect of cirrus on radiation fluxes over the equatorial Pacific.
- Measure the east-west gradients of SST and the evaporative and sensible heat flux from the sea surface along the equatorial Pacific.
- Measure the east-west gradients of the vertical distribution of water vapor along the equatorial Pacific.
- Explore the microphysical factors contributing to the high albedo of widespread tropical cirrus layers.
CEPEX was conducted in March 1993 with an operations base in Fiji, immediately following the TOGA-COARE study of the western tropical Pacific Ocean, taking advantage of many of COARE's observing systems, including several critical ones that remained in place during the CEPEX field phase.
Data from the TOGA-COARE field phase provides information essential to CEPEX (i.e., understanding the most important forcing mechanisms for maintenance of the warm pool). CEPEX contributes to the interpretation of TOGA-COARE results by providing coverage for an extended period and over a larger area, and by focusing on the thermodynamic cloud forcing mechanism for the regulation of the ocean warm pool. Observations from high-altitude aircraft above and below the cirrus are used to estimate the albedo of cirrus and the radiation energy converging into the cirrus, as well as the water-vapor distribution above and below the cirrus, the horizontal gradient of cirrus radiative heating, and the microphysical causes for the brightness of the cirrus. Observations from the NOAA WP-3D and NCAR Electra aircraft, as well as a ship, are used to estimate evaporation from the sea surface, its relationship to SST gradients, and how the cirrus regulates solar energy flux to the sea surface. In addition, upsondes launched from the ship and islands, dropsondes launched from aircraft, surface buoys, satellite cloud data, and island surface meteorological and radiation sensors complete the CEPEX composite observing system.
Radar data have been reduced and composites produced for the CEPEX project. HRD produced Lower Fuselage Radar time composites for the 13 WP-3D flights.
Example (figure): Lower fuselage radar composite for the flight on 3 March 1993. The map represents the horizontal precipitation distribution over ~1 h in time and 500 x 500 km in area.
HRD also produced Tail Radar time composites for selected flights to map the three-dimensional precipitation structure for use in analysis of the radiation data.
Example (figure): Tail radar time composite for the flight on 3 March 1993. The map represents vertical and horizontal cross sections; the domain is delineated by the rectangle in the lower left of the lower fuselage composite. The vertical cross section is along the line on the horizontal cross section.
Example (figure): Tail radar time composite for the flight on 3 March 1993. The map represents 8 horizontal cross sections at altitudes from 1-8 km. The domain is delineated by the rectangle in the lower left of the lower fuselage composite.
Future work will focus on analysis of the data from a multi-aircraft portion of the CEPEX experiment on 2 March 1993, a segment with a NOAA P-3 aircraft in the boundary layer taking in situ and radar data, a Lear jet making cirrus microphysical measurements, and the NASA ER-2 making remote sensing measurements over the top of a mesoscale convective system.
Updated May 5, 1998
Explore the Deep Sea Researchers work together to explore and study areas where tectonic plates are moving apart. Dangerous and far away Ocean-floor volcanoes and vents are mostly found hundreds of miles from land and thousands of feet below the surface of the ocean. Traveling to these areas requires time and a ship. Diving thousands of meters to underwater volcanoes requires sophisticated equipment that can withstand the immense pressure of the deep ocean, not to mention a wide range of temperatures and acidity. So researchers often collaborate to mount expeditions to particular ocean-floor sites. Many fields of study—working together The NSF-funded Ridge 2000 program encourages scientists from many different fields of study to work together to share information and ideas. This collaboration helps researchers make faster progress in answering important questions in geology, chemistry, biology and other scientific fields. Each expedition is led by a chief scientist. He or she decides what research takes place and when. The timetable is carefully planned beforehand, but sometimes has to be revised on short notice—when equipment breaks or a new discovery needs to be explored further. Cruises by name, not by nature Life on-board ship is often hectic. A great deal needs to be packed into the schedule. Research continues day and night, with many aboard getting little or no sleep for days in a row. Since the number of berths on each ship is limited, researchers are often short-handed and have to help each other out. So it's somewhat ironic that research expeditions are often referred to as "cruises:" it's hard to imagine a voyage less like the relaxing journey of a holiday cruise. Most expeditions are planned ahead of time… Scientists studying mid-ocean ridges typically plan expeditions months or even years ahead. Careful planning helps scientists to: - Book an appropriate ship. Only a few ships worldwide can host expeditions effectively: they need trained crews and much specialized equipment. - Make the most of a limited time at sea. Most expeditions last just a few weeks or months, and even if science teams work 24 hours a day, it can be tough to fit everything in. - Ensure the right equipment and supplies are aboard. Many expeditions study sites far from land; once they leave port, it is often impossible to get hold of any missing gear. - Link up with other scientists. Scientists are often interested in the same locations, even if they are investigating different things. So researchers often band together to plan an expedition to a particular place. This coordination not only makes efficient use of time, it also aids the science — because discoveries in one field (like chemistry) can help researchers working in another field (like biology). ... Some expeditions need to be put together quickly Nature isn't always predictable — interesting events often happen without much warning. If scientists want to study the event as it happens, they need to be quick off the mark. This is certainly true for scientists who study the dynamic environments of mid-ocean ridges. The volcanic eruptions and hydrothermal venting along these ridges don't necessarily happen often or last very long. If scientists want to catch these events as they unfold they need to respond quickly, or risk losing the opportunity to collect unique and important information.
Surgery (from the χειρουργική cheirourgikē, via chirurgiae, meaning "hand work") is a medical specialty that uses operative manual and instrumental techniques on a patient to investigate and/or treat a pathological condition such as disease or injury, to help improve bodily function or appearance, or sometimes for some other reason. An act of performing surgery may be called a surgical procedure, operation, or simply surgery. In this context, the verb operating means performing surgery. The adjective surgical means pertaining to surgery; e.g. surgical instruments or surgical nurse. The patient or subject on which the surgery is performed can be a person or an animal. A surgeon is a person who performs operations on patients. Persons described as surgeons are commonly medical practitioners, but the term is also applied to podiatrists, dentists and veterinarians. Surgery can last from minutes to hours, but is typically not an ongoing or periodic type of treatment. The term surgery can also refer to the place where surgery is performed, or simply the office of a physician, dentist, or veterinarian. Elective surgery is done to correct a non-life-threatening condition, and is carried out at the patient's request, subject to the surgeon's and the surgical facility's availability. Emergency surgery is surgery which must be done quickly to save life, limb, or functional capacity. Exploratory surgery is performed to aid or confirm a diagnosis. Therapeutic surgery treats a previously diagnosed condition. Amputation involves cutting off a body part, usually a limb or digit. Replantation involves reattaching a severed body part. Reconstructive surgery involves reconstruction of an injured, mutilated, or deformed part of the body. Cosmetic surgery is done to improve the appearance of an otherwise normal structure. Excision is the cutting out of an organ, tissue, or other body part from the patient. Transplant surgery is the replacement of an organ or body part by insertion of another from different human (or animal) into the patient. Removing an organ or body part from a live human or animal for use in transplant is also a type of surgery. When surgery is performed on one organ system or structure, it may be classed by the organ, organ system or tissue involved. Examples include cardiac surgery (performed on the heart), gastrointestinal surgery (performed within the digestive tract and its accessory organs), and orthopedic surgery (performed on bones and/or muscles). Minimally invasive surgery involves smaller outer incision(s) to insert miniaturized instruments within a body cavity or structure, as in laparoscopic surgery or angioplasty. By contrast, an open surgical procedure requires a large incision to access the area of interest. Laser surgery involves use of a laser for cutting tissue instead of a scalpel or similar surgical instruments. Microsurgery involves the use of an operating microscope for the surgeon to see small structures. Robotic surgery makes use of a surgical robot, such as the Da Vinci or the Zeus surgical systems, to control the instrumentation under the direction of the surgeon. Prior to surgery, the patient is given a medical examination, certain pre-operative tests, and an ASA score. If these results are satisfactory, the patient signs a consent form and is given a surgical clearance. If the procedure is expected to result in significant blood loss, an autologous blood donation may be made some weeks prior to surgery. 
If the surgery involves the digestive system, the patient may be instructed to perform a bowel prep by drinking a solution of polyethylene glycol the night before the procedure. Patients are also instructed to abstain from food or drink (an NPO order after midnight on the night before the procedure, to minimize the effect of stomach contents on pre-operative medications and reduce the risk of aspiration if the patient vomits during or after the procedure. In the pre-operative holding area, the patient changes out of his or her street clothes and is asked to confirm the details of his or her surgery. A set of vital signs are recorded, a peripheral IV line is placed, and pre-operative medications (antibiotics, sedatives, etc) are given. When the patient enters the operating room, the skin surface to be operated on is cleaned and prepared by applying an antiseptic such as chlorhexidine gluconate or povidone-iodine to reduce the possibility of infection. If hair is present at the surgical site, it is clipped off prior to prep application. Sterile drapes are used to cover all of the patient's body except for the surgical site and the patient's head; the drapes are clipped to a pair of poles near the head of the bed to form an "ether screen", which separates the anesthetist/anesthesiologist's working area (unsterile) from the surgical site (sterile). Anesthesia is administered to prevent pain from incision, tissue manipulation and suturing. Based on the procedure, anesthesia may be provided locally or as general anesthesia. Spinal anesthesia may be used when the surgical site is too large or deep for a local block, but general anesthesia may not be desirable. With local and spinal anesthesia, the surgical site is anesthetized, but the patient can remain conscious or minimally sedated. In contrast, general anesthesia renders the patient unconscious and paralyzed during surgery. The patient is intubated and is placed on a mechanical ventilator, and anesthesia is produced by a combination of injected and inhaled agents. An incision is made to access the surgical site. Blood vessels may be clamped to prevent bleeding, and retractors may be used to expose the site or keep the incision open. The approach to the surgical site may involve several layers of incision and dissection, as in abdominal surgery, where the incision must traverse skin, subcutaneous tissue, three layers of muscle and then peritoneum. In certain cases, bone may be cut to further access the interior of the body; for example, cutting the skull for brain surgery or cutting the sternum for thoracic (chest) surgery to open up the rib cage. Work to correct the problem in body then proceeds. This work may involve: Blood or blood expanders may be administered to compensate for blood lost during surgery. Once the procedure is complete, sutures or staples are used to close the incision. Once the incision is closed, the anesthetic agents are stopped and/or reversed, and the patient is taken off ventilation and extubated (if general anesthesia was administered). After completion of surgery, the patient is transferred to the post anesthesia care unit and closely monitored. When the patient is judged to have recovered from the anesthesia, he/she is either transferred to a surgical ward elsewhere in the hospital or discharged home. During the post-operative period, the patient's general function is assessed, the outcome of the procedure is assessed, and the surgical site is checked for signs of infection. 
If removable skin closures are used, they are removed after 7 to 10 days post-operatively, or after healing of the incision is well under way. Post-operative therapy may include adjuvant treatment such as chemotherapy, radiation therapy, or administration of medication such as anti-rejection medication for transplants. Other follow-up studies or rehabilitation may be prescribed during and after the recovery period. The oldest known surgical texts date back to ancient Egypt, about 3,500 years ago. Surgeries were performed by priests who specialized in medical treatments, much as physicians specialize today. The procedures were documented on papyrus and were the first to describe patient case files; the Edwin Smith Papyrus (held in the New York Academy of Medicine) documents surgical procedures based on anatomy and physiology, while the Ebers Papyrus describes healing based on magic. Their medical expertise was later documented by Herodotus: "The practice of medicine is very specialized among them. Each physician treats just one disease. The country is full of physicians, some treat the eye, some the teeth, some of what belongs to the abdomen, and others internal diseases." Sushruta (also spelled Susruta or Sushrutha) (c. 6th century BC) was a renowned surgeon of ancient India and the author of the book Sushruta Samhita. In his book, he described over 120 surgical instruments and 300 surgical procedures, and classified human surgery into 8 categories. Sushruta is also known as the father of plastic surgery and cosmetic surgery. He was a surgeon from the Dhanvantari school of Ayurveda. The Hippocratic Oath was an innovation of the Greek physician Hippocrates. However, ancient Greek culture traditionally considered the practice of opening the body to be repulsive and thus left known surgical practices such as lithotomy to "such persons as practice [it]." In China, Hua Tuo was a famous physician during the Eastern Han and Three Kingdoms era. He was the first person to perform surgery with the aid of anesthesia, albeit a rudimentary and unsophisticated form. In the Middle Ages, surgery was developed to a high degree in the Islamic world. Abulcasis (Abu al-Qasim Khalaf ibn al-Abbas Al-Zahrawi), an Andalusian-Arab physician and scientist who practised in the Zahra suburb of Córdoba, wrote medical texts that shaped European surgical procedures up until the Renaissance. He is also often regarded as a father of surgery. In Europe, the demand grew for surgeons to formally study for many years before practicing; universities such as Montpellier, Padua and Bologna were particularly renowned. By the fifteenth century at the latest, surgery had split away from physic (medicine) as its own subject, of a lesser status than pure medicine, and initially took the form of a craft tradition until Rogerius Salernitanus composed his Chirurgia, laying the foundation for modern Western surgical manuals up to modern times. Late in the nineteenth century, Bachelor of Surgery degrees (usually Ch.B.) began to be awarded together with the Bachelor of Medicine (M.B.), and the mastership became a higher degree, usually abbreviated Ch.M. or M.S.; in London, the first degree was M.B., B.S. Barber-surgeons generally had a bad reputation that was not to improve until the development of academic surgery as a specialty of medicine, rather than an accessory field. Basic surgical principles for asepsis and related practices are known as Halsted's principles. Modern surgery developed rapidly with the scientific era.
Ambroise Paré (sometimes spelled "Ambrose) pioneered the treatment of gunshot wounds, and the first modern surgeons were battlefield doctors in the Napoleonic Wars. Naval surgeons were often barber surgeons, who combined surgery with their main jobs as barbers. Three main developments permitted the transition to modern surgical approaches - control of bleeding, control of infection and control of pain (anaesthesia). Bleeding: Before modern surgical developments, there was a very real threat that a patient would bleed to death before treatment, or during the operation. Cauterization (fusing a wound closed with extreme heat) was successful but limited - it was destructive, painful and in the long term had very poor outcomes. Ligatures, or material used to tie off severed blood vessels, are believed to have originated with Abulcasis in the 10th century and improved by Ambroise Paré in the 16th century. Though this method was a significant improvement over the method of cauterization, it was still dangerous until infection risk was brought under control - at the time of its discovery, the concept of infection was not fully understood. Finally, early 20th century research into blood groups allowed the first effective blood transfusions. Infection: The concept of infection was unknown until relatively modern times. The first progress in combating infection was made in 1847 by the Hungarian doctor Ignaz Semmelweis who noticed that medical students fresh from the dissecting room were causing excess maternal death compared to midwives. Semmelweis, despite ridicule and opposition, introduced compulsory handwashing for everyone entering the maternal wards and was rewarded with a plunge in maternal and fetal deaths, however the Royal Society in the UK still dismissed his advice. Significant progress came following the work of Pasteur, when the British surgeon Joseph Lister began experimenting with using phenol during surgery to prevent infections. Lister was able to quickly reduce infection rates, a reduction that was further helped by his subsequent introduction of techniques to sterilize equipment, have rigorous hand washing and a later implementation of rubber gloves. Lister published his work as a series of articles in The Lancet (March 1867) under the title Antiseptic Principle of the Practice of Surgery. The work was groundbreaking and laid the foundations for a rapid advance in infection control that saw modern aseptic operating theatres widely used within 50 years (Lister himself went on to make further strides in antisepsis and asepsis throughout his lifetime). Pain: Modern pain control through anesthesia was discovered by two American dental surgeons, Horace Wells (1815-1848) and William Morton. Before the advent of anesthesia, surgery was a traumatically painful procedure and surgeons were encouraged to be as swift as possible to minimize patient suffering. This also meant that operations were largely restricted to amputations and external growth removals. Beginning in the 1840s, surgery began to change dramatically in character with the discovery of effective and practical anaesthetic chemicals such as ether and chloroform, later pioneered in Britain by John Snow. In addition to relieving patient suffering, anaesthesia allowed more intricate operations in the internal regions of the human body. In addition, the discovery of muscle relaxants such as curare allowed for safer applications. Some other specialties involve some forms of surgical intervention, especially gynaecology. 
Also, some people consider invasive methods of treatment or diagnosis, such as cardiac catheterization, endoscopy, and placement of chest tubes or central lines, to be "surgery". In most parts of the medical field, this view is not shared.
Investigating the Charleston Bump
August 2-16, 2003
Off the coast of South Carolina and Georgia exists a unique feature called the Charleston Bump, which rises off the surrounding Blake Plateau from 2,000 ft deep to a depth of about 1,200 ft. The rocky, erosion-resistant Charleston Bump impedes the flow of the Gulf Stream, deflecting it offshore and creating a zone of gyrating eddies and swift, narrow currents. The combination of rocky, high-relief bottom and strong currents creates a complex habitat consisting of scour depressions, mounds of deep-water corals and rubble, and steep scarps that are deeply undercut with extensive caves. During this exploration, researchers investigated how fishes and invertebrates adapt to the variety of bottom habitats, and how they tolerate the strong and shifting currents. The investigators also sought new and unique species. Much of the work was conducted using the Johnson-Sea-Link II manned submersible, which was outfitted with digital video, still cameras and other instruments to record information about the surrounding waters. The submersible also collected samples of rocks and sediments, as well as specimens of fishes and invertebrates such as sponges and deep-water corals. Little was known about the creatures that inhabit "the Bump" and the surrounding Blake Plateau, making this a true mission of exploration. Background information for this exploration can be found on the left side of the page. Daily updates and logs that summarize expedition activities are posted below and to the right.
All emotions matter! What we do with those emotions matters as well. We need to help students understand that feelings are neither right nor wrong; it is what we do with those feelings that truly matters. If we want to teach students how to regulate their emotions, then we too need to be able to regulate our own emotions. Modelling how we feel is important for students. If we are not afraid to admit when we are angry, frustrated or sad, and we handle those emotions in an appropriate way, the students will learn how to do that as well. Being open and honest about how we feel in a respectful manner is great modelling for students and other educators. Marc Brackett, director of the Yale Center for Emotional Intelligence and an expert in social-emotional learning, has developed the acronym RULER for emotional skills that is helpful for educators:
R - recognizing emotions in yourself and others
U - understanding the causes of your emotions
L - labelling your emotions
E - expressing emotions
R - regulating emotions
Educators and School Counsellors can and do make a difference in promoting the wellbeing and emotional intelligence of students. When we put ourselves in a child's shoes we may be more compassionate about how they are feeling. What is it like to be them? Could they be experiencing a roller coaster of emotions, and how does this impact them, their feelings and their learning? Sesame Street has some great videos that explain feelings and teach students about emotional regulation. Here is a good example: Emotional Regulation
Resources for educators:
- The Mood Meter App (cost: $1.39)
- Helping students with mixed emotions
- Teaching students to have meta moments
One of the best strategies we used when my daughter was a teenager was for her and me to agree that when we were angry with each other, or when our emotions were running high, we would back off, give each other space and discuss things the next day. Each of us would signal the other that it was OK to discuss when we were both more level-headed. I would call these meta moments. This strategy saved our relationship in those emotional years. Yes, it does begin with me. Being a lifelong learner, I hope to be able to fully understand emotional regulation by reading the newest research so that I can best help myself, my students and my family. What are some of the best strategies you use as educators, parents and School Counsellors?
While fragments of wax-resist or batik cloth have survived in parts of the world, dating to the early 5th and 6th centuries AD in Egypt and the 8th century AD in Japan, it’s not known with certainty where this process began. Some researchers feel the technique was developed in India and then spread out from there. One thing is sure: trade between India and Southeast Asia was mentioned as early as the 1st century AD. By 1200 the Hindu religion and culture were a major influence in many parts of what is now Indonesia. Imported Indian textiles continued to have a deep impact in the region well into the early 19th century. In 1518 the first known use of the word tulis was associated with a shipment of trade goods from Java. (Elliot p. 22) Today distinctive traditional batik styles can be found in Africa, China, Malaysia, Sri Lanka and Northern Thailand. But of all the places known for traditional batik, none are as famous for their rich heritage of patterns and colors as Indonesia, especially Javanese batik. Batik in the Royal Court Cities of Central Java Unlike the more open, free-spirited north coast of Java, influenced by traders from Europe, India, Arabia and China, the royal court cities of Central Java looked inward, building on a different set of rules and values. In 1755, the nearly 200-year-old Mataram Sultanate of Central Java split into the two court cities of Yogyakarta and Surakarta, or Solo as it’s also known. These ancient aristocratic, feudal societies placed much emphasis on tradition, a sense of order within a strict code of conduct, an awareness of spiritual values and the use of symbolism. Power was concentrated at the top under the sultan as supreme ruler. These deeply held values are clearly reflected in the batik of this region. From the Hindu-Buddhist era in Java come stylized forms from nature, using rounded, flowing lines rather than realistic depictions of flowers or leaves. Believers of Islam, which came to Java in the 1500s, are not allowed to portray any living creature. This in turn pushed batik toward more geometric designs.
We Shall Overcome: The People who Made America Post WWII Subjects: History, Post-WWII Economics, Agricultural Technology, Citizenship, Legacy Class Session: 1 Lesson Summary: In this lesson, students will focus on less-obvious people like Hans and Erik Gregersen who helped the United States during post-WWII reconstruction. Their status as hardworking immigrants and pioneers in agricultural technology, politics and management gives students a unique perspective on what kinds of jobs were available then and how the Gregersen brothers’ outlook can inform the way we look at similar issues today. Their story is meant to be an alternative example to the people normally covered in this time period. The lesson steps below are a guideline. Consider the context of your previously covered material and your students’ needs. Address the standards below in an assessment paper. Consider: what does it take to create an economic boom and social transformation in a country? Do some background research on the Gregersen brothers’ historical context. Analyze the Gregersen portrait for themes. Draw connections between the Gregersens’ lives and their own. Portrait of Hans and Erik Gregersen by Holli Harmon Ways to research or define the following: Solvang, Cuyama oil fields, First Female Ph.D. at Yale, FMC Corporation, United Nations Food and Agriculture Organization, InterAmerican Development Bank, Consultative Group on International Agricultural Research (CGIAR), UN 2030 Agenda on Sustainable Development (Research paper/presentation opportunities) Lesson Plan/Discussion Questions: Give students two-column guided notes to record observations and questions as they watch the video interview of Hans and Erik Gregersen. Give students some historical context in terms of framing economic development post-WWII (whatever is currently in your unit). Distribute copies of the Gregersen essay to table groups and have students discuss their own questions after a close reading of the article. Class discussion: 1) Are the Gregersens immigrants? 2) What makes a global perspective? 3) Compare and contrast their story to another immigrant story; what is different? What is the same? Put up the portrait of Hans and Erik Gregersen. See. Think. Wonder Ask: after reading and seeing the interview, how does the portrait connect? In what ways do the students pick up on similar themes in their own lives? Research Paper: Research one of the mentioned terms/organizations above and consider its impact then and now. Or students create their own research question based on something that came up in the discussion. Tracking the History of: Students create an 8–10 minute presentation (group or solo) tracking the change over time of a particular thing from the Gregersens’ early days. This could be a piece of agricultural equipment, a developing country, a UN committee or even a piece of Danish or American culture. The goal is for students to give a thoughtful presentation that takes the context of a thing and critically tracks how it changed in comparison or contrast to its historical context. Discussion in Small Groups and in Class Have students turn in their notes as reference/assessment 11.8 Students analyze the economic boom and social transformation of post–World War II America. 1. Trace the growth of service sector, white collar, and professional sector jobs in business and government. 3. Examine Truman’s labor policy and congressional reaction to it. 6. 
Discuss the diverse environmental regions of North America, their relationship to local economies, and the origins and prospects of environmental problems in those regions. 7. Describe the effects on society and the economy of technological developments since 1945, including the computer revolution, changes in communication, advances in medicine, and improvements in agricultural technology. CCCSS Listening and Comprehension, Presentation of Ideas, 11.4: a. Plan and deliver a reflective narrative that: explores the significance of a personal experience, event, or concern; uses sensory language to convey a vivid picture; includes appropriate narrative techniques (e.g., dialogue, pacing, description); and draws comparisons between the specific incident and broader themes. (11th or 12th grade) CA b. Plan and present an argument that: supports a position. Created by: Katherine Kwong, Intern, Fall 2016
|Systematic name|Sodium chloride|
|Other names|Common salt, table salt, halite|
|Molar mass|58.442 g/mol|
|Appearance|White, crystalline solid|
|Density and phase|2.16 g/cm³, solid|
|Solubility in water|35.9 g/100 ml (25 °C)|
|Melting point|801 °C (1074 K)|
|Boiling point|1465 °C (1738 K)|
|Crystal structure|Face-centered cubic|
|Main hazards|Irritant; may sting|
|R/S statement|R: none|
|Structure and properties|n, εr, etc.|
|Thermodynamic data|Phase behaviour: solid, liquid, gas|
|Spectral data|UV, IR, NMR, MS|
|Other anions|NaF, NaBr, NaI|
|Other cations|LiCl, KCl, RbCl, CsCl, MgCl2, CaCl2|
|Related salts|Sodium acetate|
Except where noted otherwise, data are given for materials in their standard state (at 25 °C, 100 kPa). Sodium chloride, also known as common salt or table salt, is a chemical compound with the formula NaCl. Its mineral form is called halite. It is highly soluble in water and is the salt most responsible for the salinity of the ocean and of the extracellular fluid of many multicellular organisms. As the main ingredient in edible salt, it has long been used as a food seasoning and preservative. In its latter capacity, it reduced human dependence on the seasonal availability of food and allowed travel over long distances. In this manner, it served as a foundation for the spread of civilization. It is currently available quite inexpensively and in large quantities. Historically, however, it was difficult to obtain and was a highly valued trade item. Until the 1900s, it was one of the prime movers of national economies and wars. It was controlled by governments and taxed as far back as the twentieth century B.C.E. in China. - See main article: History of salt Salt's preservative ability was a foundation of civilization. It eliminated dependency on the seasonal availability of food and allowed travel over long distances. By the Middle Ages, caravans consisting of as many as forty thousand camels traversed four hundred miles of the Sahara bearing salt, sometimes trading it for slaves. There are 35 references (verses) to salt in the Bible (King James Version), the most familiar probably being the story of Lot's wife, who was turned into a pillar of salt when she disobeyed the angels and looked back at the wicked city of Sodom (Genesis 19:26). In the Sermon on the Mount, Jesus also referred to his followers as the "salt of the earth." The apostle Paul also encouraged Christians to "let your conversation be always full of grace, seasoned with salt" (Colossians 4:6) so that when others enquire about their beliefs, the Christian's answer generates a 'thirst' to know more about Christ. In the native Japanese religion Shinto, salt is used for ritual purification of locations and people, such as in sumo wrestling. Historically, there have been two main sources for common salt: sea water and rock salt. Rock salt occurs in vast beds of sedimentary evaporite minerals that result from the drying up of enclosed lakes, playas, and seas. Salt beds may be up to 350 meters thick and underlie broad areas. In the United States and Canada, extensive underground beds extend from the Appalachian basin of western New York through parts of Ontario and under much of the Michigan basin. Other deposits are in Ohio, Kansas, New Mexico, Nova Scotia, and Saskatchewan. In the United Kingdom, underground beds are found in Cheshire and around Droitwich. Salt is currently produced in one of two principal ways: - The evaporation of seawater or brine (salt water) from other sources, such as brine wells and salt lakes; - The mining of rock salt, called halite. 
This includes solution mining, in which water is used to dissolve the salt and the brine that reaches the surface is evaporated to recover the salt. Solar evaporation of seawater In the correct climate (one for which the ratio of evaporation to rainfall is suitably high), it is possible to use solar evaporation of sea water to produce salt. Brine is evaporated in a linked set of ponds until the solution is sufficiently concentrated in the final pond that the salt crystallizes on the pond’s floor. Open pan production from brine One of the traditional methods of salt production in more temperate climates is the use of open pans. In an open pan salt works, brine is heated in large, shallow open pans. The earliest examples date back to prehistoric times, and the pans were made either of a ceramic known as briquetage or of lead. Later examples were made from iron; this change coincided with a change from wood to coal as the fuel for heating the brine. Brine would be pumped into the pans and concentrated by the heat of the fire burning underneath. As crystals of salt formed, they would be raked out and more brine added. Closed pan production under vacuum The open pan salt works has effectively been replaced by a closed pan system in which the brine solution is evaporated under a partial vacuum. In the second half of the nineteenth century, it became possible to mine salt, which is less expensive than evaporating seawater or extracting salt from brine. Consequently, the price of salt became more reasonable. However, extraction of salt from brine is still heavily used: for example, vacuum salt produced by British Salt in Middlewich has 57 percent of the UK market for salt used in cooking. Sodium chloride forms crystals with cubic symmetry. In these, the larger chloride ions are arranged in a cubic close-packing, while the smaller sodium ions fill the octahedral gaps between them. The ions are held together by ionic bonds. Each type of ion is surrounded by six ions of the other kind. This same basic structure is found in many other minerals and is known as the halite structure. This arrangement is known as the cubic close-packed (ccp) crystal system. [Table omitted in this copy: solubility of NaCl in various solvents (grams NaCl per 100 grams of solvent at 25 °C); source: J. Burgess, Metal Ions in Solution (Ellis Horwood, New York, 1978).] While most people are familiar with the many uses of salt in cooking, they might be unaware that salt is used in a plethora of applications, from manufacturing pulp and paper to setting dyes in textiles and fabric, to producing soaps and detergents. In most of Canada and the northern United States, large quantities of rock salt are used to help clear highways of ice during winter, although "road salt" loses its melting ability at temperatures below -15 °C to -20 °C (5 °F to -4 °F). Industrially, elemental chlorine is usually produced by the electrolysis of sodium chloride dissolved in water. Along with chlorine, this chloralkali process yields hydrogen gas and sodium hydroxide, according to the chemical equation: - 2NaCl + 2H2O → Cl2 + H2 + 2NaOH Sodium metal is produced commercially through the electrolysis of liquid sodium chloride. This is done in an apparatus called a Downs cell, in which sodium chloride is mixed with calcium chloride to lower the melting point below 700 °C. As calcium is more electropositive than sodium, no calcium will be formed at the cathode. This method is less expensive than the earlier method of electrolyzing sodium hydroxide. 
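The stoichiometry of the chloralkali equation above lends itself to a quick back-of-the-envelope estimate. The sketch below is illustrative only: it assumes complete conversion of the salt (real cells run below 100 percent current efficiency), and the molar masses other than that of NaCl (which appears in the infobox above) are standard textbook values supplied here, not figures from this article.

```python
# Rough product estimate for the chloralkali reaction 2 NaCl + 2 H2O -> Cl2 + H2 + 2 NaOH.
# Assumes full conversion; function and variable names are illustrative only.

M_NACL = 58.44   # g/mol (from the infobox above)
M_CL2  = 70.90   # g/mol (standard value, assumed)
M_H2   = 2.016   # g/mol (standard value, assumed)
M_NAOH = 40.00   # g/mol (standard value, assumed)

def chloralkali_yield(mass_nacl_g: float) -> dict:
    """Return product masses in grams for a given mass of NaCl, assuming full conversion."""
    mol_nacl = mass_nacl_g / M_NACL
    return {
        "Cl2_g":  (mol_nacl / 2) * M_CL2,   # 2 mol NaCl give 1 mol Cl2
        "H2_g":   (mol_nacl / 2) * M_H2,    # 2 mol NaCl give 1 mol H2
        "NaOH_g": mol_nacl * M_NAOH,        # 2 mol NaCl give 2 mol NaOH
    }

if __name__ == "__main__":
    tonne_in_grams = 1_000_000
    for product, grams in chloralkali_yield(tonne_in_grams).items():
        print(f"{product}: {grams / 1000:.0f} kg")
```

Run on one tonne of salt, this ideal estimate gives roughly 607 kg of chlorine, 17 kg of hydrogen and 684 kg of sodium hydroxide, which is why the process is usually described as yielding chlorine and caustic soda in nearly equal tonnages.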
Salt is commonly used as a flavor enhancer for food and has been identified as one of the basic tastes. Unfortunately, it is often ingested well in excess of the required intake. This leads to elevated blood pressure (hypertension) in some people, which in turn is associated with increased risks of heart attack and stroke. Consuming salt in excess can also dehydrate the human body. Many microorganisms cannot live in an overly salty environment: water is drawn out of their cells by osmosis. For this reason salt is used to preserve some foods, such as smoked bacon or fish, and can also be used to detach leeches that have attached themselves to feed. It has also been used to disinfect wounds (although it causes a great deal of pain). In medieval times, salt would be rubbed into household surfaces as a cleansing agent. While salt was once a scarce commodity, industrialized production has now made it plentiful. About 51 percent of the worldwide output of salt is now used to de-ice roads in freezing weather conditions. The salt may be put in grit bins and spread by winter service vehicles. This approach works because salt and water form a eutectic mixture. Under controlled lab conditions, a solution of sodium chloride in water can reduce the freezing temperature of water to -21 °C (-6 °F); a rough numerical check of this figure is sketched after the reference list below. In practice, however, sodium chloride can melt ice down to only about -9 °C (15 °F). The salt sold for consumption today is usually not pure sodium chloride. In 1911, magnesium carbonate was first added to salt to make it flow more freely. In 1924, trace amounts of iodine (in the form of sodium iodide, potassium iodide or potassium iodate) were first added, creating iodized salt to reduce the incidence of simple goiter. Salt for de-icing in the UK typically contains sodium hexacyanoferrate(II) at less than 100 parts per million as an anti-caking agent. In recent years this additive has also been used in table salt. The chemicals most commonly used in de-icing salts are sodium chloride (NaCl) and calcium chloride (CaCl2); both are effective in de-icing roads. They are mined or manufactured, crushed to fine granules, and then treated with an anti-caking agent. Adding salt lowers the freezing point of the water, which allows the liquid to remain stable at lower temperatures and allows the ice to melt. Alternative de-icing chemicals have also been used. Chemicals such as calcium magnesium acetate are being produced; these have few of the negative environmental effects commonly associated with NaCl and CaCl2. Notes - Solar Salt production. Salt Institute. Retrieved June 22, 2007. - Towards an understanding of open pan salt making. Lion Salt Works History & Heritage. Retrieved June 22, 2007. - Early Salt Making. Lion Salt Works History & Heritage. Retrieved June 22, 2007. - Vacuum Pan Salt Refining. Salt Institute. Retrieved June 22, 2007. - The Competition Commission. Factors affecting rivalry in the relevant market prior to the merger. British Salt Limited and New Cheshire Salt Works Limited: A report on the acquisition by British Salt Limited of New Cheshire Salt Works Limited. Retrieved June 22, 2007. References - Kurlansky, Mark. 2003. Salt: A World History. New York: Penguin. ISBN 0142001619 - Multhauf, Robert P. 1996. Neptune's Gift: A History of Common Salt. Johns Hopkins Studies in the History of Technology. Baltimore, MD: Johns Hopkins University Press. ISBN 0801854695 - Mineral Gallery. 2006. The Mineral Halite. Amethyst Galleries. 
Retrieved May 9, 2007. - De-icing - Environment. Salt Manufacturers' Association. Retrieved May 9, 2007. - Salt Institute. Retrieved May 9, 2007. - MrBloch Salt Archive. Retrieved May 9, 2007. - Salt: Statistics and Information. United States Geological Survey. Retrieved May 9, 2007. - Using Salt and Sand for Winter Road Maintenance. Road Management Journal, U.S. Roads. Retrieved May 9, 2007.
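As promised in the de-icing discussion above, here is a rough numerical check on the -21 °C figure for saturated brine, using the ideal freezing-point depression relation ΔTf = i·Kf·m. The cryoscopic constant of water (1.86 K·kg/mol) and the van 't Hoff factor of 2 are textbook values assumed here, not taken from this article, and the ideal-solution model is only approximate at such high concentrations, so the closeness of the result to -21 °C should be read as order-of-magnitude agreement rather than a derivation.

```python
# Ideal freezing-point depression for saturated NaCl brine: dTf = i * Kf * m.
# Kf and i = 2 are assumed textbook values; the solubility figure comes from the infobox.

KF_WATER = 1.86      # K*kg/mol, cryoscopic constant of water (assumed)
M_NACL = 58.44       # g/mol, molar mass of NaCl (from the infobox above)
SOLUBILITY = 359.0   # g NaCl per kg of water; 35.9 g/100 ml treated as 35.9 g per 100 g of water

def freezing_point_depression(grams_salt_per_kg_water: float, i: float = 2.0) -> float:
    """Return the ideal depression of water's freezing point, in kelvin."""
    molality = grams_salt_per_kg_water / M_NACL   # mol NaCl per kg of water
    return i * KF_WATER * molality

if __name__ == "__main__":
    depression = freezing_point_depression(SOLUBILITY)
    print(f"Ideal estimate: freezing point ~ {-depression:.1f} degrees C "
          f"(the article quotes about -21 degrees C)")
```

The sketch prints an estimate of roughly -23 °C, in the same range as the quoted eutectic temperature, which is consistent with the article's point that salt keeps brine liquid well below the normal freezing point of water.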
What Is Persimmon? The persimmon is a tree-growing fruit that belongs to the Diospyros genus within the Ebenaceae family. Although persimmon is further divided into several species, the most common edible species is the Japanese variety. The smooth exterior of this fruit ranges anywhere from a light yellowish orange to a dark reddish orange, and the fruit grows to between 0.59 and 3.54 inches in diameter. Other edible varieties include: date-plum, American, black, velvet-apple, Indian, and Texas. Its flavor can be classified as either astringent (due to a high level of tannins) or non-astringent. The astringent variety can only be eaten when fully ripened, whereas the non-astringent can be eaten while the fruit is still firm. Persimmon was known by the ancient Greeks as the “fruit of the gods”. On average, the persimmon tree grows to over 32 feet in height and takes between 5 and 6 years to produce fruit on a commercial level. Persimmon trees, particularly the Japanese variety, thrive in subtropical and even temperate climates. During winter, the tree can tolerate temperatures as low as 10 °F. Persimmon crops require a well-drained soil and do not tolerate high levels of salt. In order to produce plump fruit, these orchards need between 3 and 4 feet of controlled irrigation water in addition to natural rainfall. At harvest time, the fruit is picked and placed in picking buckets to prevent bruising. Uses For Persimmon Persimmon is used primarily for culinary purposes and is mainly eaten raw. Other presentations include dried and cooked. When eaten raw, most people bite into it like an apple, although others prefer to peel the fruit. In several Asian countries, the fruit is harvested and dried outside. In this form it is eaten as a snack or dessert. In Korea, however, this dried persimmon is the basis for a spicy drink and, when fermented, a vinegar. Persimmon leaves are used to make herbal teas as well. The fresh fruit is often used in cakes, cookies, pudding, and pies throughout North America. World’s Top Producers Of Persimmon In 2013, world production of persimmon reached 4.6 million tonnes. The top three persimmon-producing countries are discussed below: Production in China accounted for 43% of this total, with 2 million tonnes produced (the short calculation after the table below re-derives this share). The most commonly harvested persimmon here is the previously mentioned Japanese variety. Large tracts of persimmon production can be found along the Yellow River and cover approximately 254 square miles. Of this large production, between 70% and 80% is sold as fresh fruit; the remainder is processed. The second leading persimmon producer in the world is South Korea, which produces just 0.3 million tonnes. The persimmon holds a special cultural value in this country and is seen as a symbolic fruit due to its bitter-to-sweet transformation. The agricultural industry in this country is actively trying to improve persimmon production. As part of this effort, in 2004, Korea significantly increased its persimmon exports to Hong Kong. Japan takes the number 3 spot with a 0.26 million-tonne production level, just shy of South Korea. This figure does not represent an increase; it is essentially unchanged from the 2002 production level. The primary locations for persimmon production include the following prefectures: Wakayama, Fukuoka, Nara, and Gifu. The majority of persimmon produced in Japan is sold locally, although small amounts are exported to Southeast Asian countries. 
The Leading Persimmon Producing Countries In The World
|Rank|Country|Production (millions of tonnes)|
|1|China|2.0|
|2|Republic of Korea|0.3|
|3|Japan|0.26|
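To make the production shares above concrete, here is a minimal sketch that recomputes each country's percentage of the 2013 world total of 4.6 million tonnes quoted earlier. The production figures come straight from the article text and table; the variable and function names are purely illustrative.

```python
# Recompute each producer's share of the 2013 world persimmon output (4.6 million tonnes).

WORLD_TOTAL = 4.6  # million tonnes in 2013 (from the article)

production = {                  # million tonnes, from the article text and table
    "China": 2.0,
    "Republic of Korea": 0.3,
    "Japan": 0.26,
}

def share_of_world(output_millions: float, world_total: float = WORLD_TOTAL) -> float:
    """Return a producer's share of world output as a percentage."""
    return 100.0 * output_millions / world_total

if __name__ == "__main__":
    for country, output in production.items():
        print(f"{country}: {share_of_world(output):.0f}% of world production")
```

The output confirms the 43% share quoted for China and shows South Korea and Japan each contributing roughly 6 to 7 percent of the world total.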