Amounts of money may be written in several different ways. Cents may be written
with the ¢ sign and dollars can be written with the dollar sign ($). Adding
money that is expressed in these forms just involves adding the amounts and placing
the proper sign on the answer.
Often money is written as a decimal with dollars to the left of the decimal point
and cents to the right of the decimal point. Twenty-three dollars and eighty-seven
cents is written $23.87.
Decimal money amounts are added the same way that
decimals are added. Remember to put the $ sign before the answer.
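As a quick illustration (the amounts here are made up; any dollars-and-cents values work the same way), a short Python sketch shows the same idea of lining up the decimal points and keeping the $ sign on the answer:

```python
from decimal import Decimal

# Lining the amounts up on the decimal point:
#    $23.87
#  +  $4.95
#  --------
#    $28.82
total = Decimal("23.87") + Decimal("4.95")
print(f"${total}")  # prints $28.82; remember to put the $ sign on the answer
```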
Adding Decimals is just like adding other numbers.
Always line up the decimal points when adding decimals.
Remember to put the decimal point in the proper place in your answer. |
The 18th Century (1701-1800) was the Age of the Enlightenment. During the 18th century, the world largely moved into the modern era, partly due to the Scientific Revolution, and partly due to the political revolutions in America and France. Philosophy stressed natural science and reason while political reality stressed the absolutism of Louis XIV, all of which culminated in the Reign of Terror.
The eighteenth century opened with the establishment of the Kingdom of Prussia in 1701 and the union of England and Scotland into the United Kingdom in 1707. The War of the Spanish Succession occupied much of the European continent from 1701 to 1714, followed shortly after by the death of Louis XIV of France, the fabled Sun King. The 18th century was also the fading age of piracy, evidenced by the death of Blackbeard (Edward Teach) off the North Carolina coast in 1718.
The 18th century was the age of the Enlightenment. Hume, Bentham, Rousseau and Diderot made major contributions to philosophy. Johann Sebastian Bach was the leading musician of the age, while Voltaire, Jonathan Swift, Daniel Defoe and Jane Austen wrote landmark works of literature. Peter the Great and Catherine the Great ruled Russia, while Frederick the Great ruled in Prussia. In America, the Founding Fathers, men such as Thomas Jefferson, Benjamin Franklin, and George Washington, took the leading notions of the Enlightenment and created the United States. |
Grade Range: 4-12
Resource Type(s): Reference Materials
Date Posted: 11/18/2008
In this online exhibition, students will explore the story of the Star-Spangled Banner by learning about the War of 1812 and the Battle of Baltimore; Mary Pickersgill and the making of the flag; Francis Scott Key and the song that became the national anthem; the legacy of the flag and its use as a patriotic symbol; and the efforts undertaken to preserve the flag as a national treasure. This resource includes interactive activities and educational resources that can be used to further enhance this exploration of the flag that inspired the national anthem.
Historical Thinking Standards (Grades K-4)
Historical Thinking Standards (Grades 5-12)
Standards in History (Grades K-4)
United States History Standards (Grades 5-12) |
If you reside in Gwinnett or DeKalb, then you’re on land once held by the Creek Indians. In the early history of Georgia, the British claimed land, mainly for large plantations, under authority of royal charters. But after independence when Americans moved west, officials in Georgia distributed three-quarters of the state’s land through a lottery system to benefit white yeoman farmers.
There were eight land lotteries in Georgia from 1805 to 1833, but it was the ones in 1820-1821 that led to white settlement in Gwinnett and then in DeKalb. Common men received lottery lands based on eligibility and chance, paying minuscule amounts averaging 7 cents per acre for lots typically 202.5 acres.
Actual lotteries typically took place in the antebellum capital of Milledgeville, where commissioners randomly drew names out of rolling drums. U.S. veterans of the Revolutionary War, the War of 1812, or the Indian Wars got precedence, but other classifications included families meeting three-year residence requirements. The participation fee was a hefty $19, but winning meant bargain land prices. The last lottery, in 1832, was $10 per acre for gold districts in Dahlonega as the Cherokee nation faded.
The lottery system helped shift power from aristocrats to everyday farmers. With enslavement and cotton, some landowners moved up the social ladder, but that was unusual. Descendants of white lottery winners sold their land over the years, while for blacks the effects of emancipation and the end of sharecropping were for generations slow in coming. Indians participated in landholding again only after they became thoroughly assimilated into the U.S.
–Dr. Paul Hudson |
I just released a new social studies differentiated instruction lesson plan on my store http://www.teacherspayteachers.com/Product/American-Revolution-Military-and-Political-Aspects-Differentiated-Lesson-459201. If you are a NYS 7th or 8th grade social studies teacher and you have not checked out this site yet, what are you waiting for?! I follow the NYS 7th and 8th grade social studies curriculum and standards.
The new social studies differentiated instruction lesson plan is called “Military and Political Aspects of the American Revolution”. It includes:
- Lesson Plan
- 3 Ability Levels of Vocabulary Sheets
- 2 Ability Levels of Note Sheets for the Enriched PowerPoint
- Basic Note Sheet for the Basic PowerPoint (the Enriched note sheets work for the Basic PowerPoint version too). The PowerPoint is leveled to incorporate parallel teaching in the classroom. It allows the enriched students to go into more detail and critical thinking. The Basic level version is simplified, which allows students time for repetition.
- Activity Choice Sheet can be given the day before so that the teacher can run enough copies and group appropriately.
- Talk Show Activity with Rubric
- Superhero Story Activity with Rubric
- Board Game Developer Activity with Rubric
The Military and Political Aspects of the Revolutionary War differentiated instruction lesson plan includes the following content in the Leveled PowerPoint that goes with it:
- Washington’s Leadership
- Strategies of the Principal Military Engagements: The Battles
- Role of the Loyalists
- How physical land features affected the outcome of the war
- Advantages and Disadvantages of the Colonies and the British
- Foreign Aid
- Role of Women, African Americans and Native Americans
- Haphazard Occurrences of Events: The Human Factor
- Clash between Colonial Authority and the Continental Congress
The essential questions for this social studies differentiated instruction lesson plan are:
- What was the military course of the Revolutionary War?
- What role did leadership, commitment, and luck play in the American victory over the British?
As you can see, differentiated instruction is a major player within my social studies lesson plans. Allow me to plan for you so you can focus on teaching your wonderful students. http://www.teacherspayteachers.com/Store/Kasha-Mastrodomenico Check it out! |
Energy Lab Guide for Educators
For discussion questions and lesson plans, go to the Energy Lab collection on PBS LearningMedia.
The individual components of the Energy Lab give you a range of options for integrating some or all of the Lab into your instruction. From homework enrichment, to science fair project, to a week-long lesson module, the flexibility of the Energy Lab components will help you address the topics of energy, Earth’s systems, technology, engineering, and scientific modeling with your middle school or high school students. Below are details of the Lab components and some strategies to get you started.
The Research Challenge allows students to design their own renewable energy systems to generate power. Students can create virtual wind, solar, geothermal, and biomass systems to provide reliable power to real cities, from Tennessee to California. Students will use maps, graphs, and weather data to assess the energy potential of each geographic location and design their system to meet production targets based on resident demand. Students will then test their model against actual historical and real-time weather and solar data and work to update and optimize their systems based on this feedback. To support your students in the research challenge, you might use some of these strategies:
- Ask students before they begin about how they might plan and design their own systems to take advantage of local renewable energy sources. Talk as a group about the important variables to consider (region, weather, energy demand, cost, etc.).
- Ask students to discuss what success would look like for a system in a particular location. Is it generating the most power? Is it saving the most money? Assign students or small teams to compete with each other in the same city/region to design the "best" system based on their criteria for success.
- Compare and contrast the challenges and benefits of using renewable energy in large and small cities, and in different regions of the country.
- As the real-time data continues to update, check back in with the Energy Lab after all the challenges are complete to compare the power outputs of student-designed systems at various times of day and seasons of the year.
- Have students research what types of renewable energy resources are available and used in their own city or region. If your local power company uses any of these renewable energy resources, consider taking a field trip to one of these power generation stations.
The Energy Lab includes a collection of eight short videos that cover basic energy topics. Videos include contextual information explaining why these topics matter to students and society. These videos will help students with concepts that they will face in the Research Challenge, but can also be used for individual exploration of key topics in energy and power. The collection is organized into three main “lessons”—each with its own set of brief and engaging videos and assessment questions that cover key areas of energy research: consumption, production, and distribution. Students can track their progress through these lessons and record their answers and notes in a customized, printable Lab Report that you can collect and use for assessment purposes.
Strategies for using the video library include:
- Work through one or more of the video lessons with your class. Watch the videos and have students answer all of the questions. When finished, have your students print out their answers and notes, and facilitate a discussion about what they found most interesting and surprising.
- Have students make energy concept maps based on the topics and ideas addressed in the videos.
- Use a single video from the collection to enhance a preexisting lesson plan.
You can go directly to any of the Energy Lab videos using these links:
- Growing Appetites, Limited Resources – explores the impacts of energy use, the issue of dwindling resources, and the need for alternatives.
- Energy Defined – covers the basics of this abstract property, what energy is, how it's conserved, and what makes some forms more useful than others.
- Putting Energy to Use – explains that making use of energy often involves converting it into other forms.
- A Never-Ending Supply – explains what makes a renewable a renewable and explores some of the more promising alternative energy sources available.
- Solar Power – covers the basics of capturing the enduring energy of the Sun and converting it into other forms, especially electrical energy.
- Wind Power – explains how wind can be captured and transformed into electrical energy and explores some of the challenges of using wind to power cities.
- Solving the Storage Problem – explores the need for storage created by the intermittent nature of many renewable resources and explains why this is not an easy problem to solve.
- Toward a Smarter Grid – looks at the state of the current electric power grid and explains how making the grid "smarter" will make it both more reliable and more efficient.
The Energy Lab will periodically have scientists and engineers available to engage with you in the classroom and to answer students’ questions about topics related to energy. By interacting with professionals who are active in this field, students will have the chance to connect with careers in science and engineering surrounding the future of energy technology. Check the calendar of events on the NOVA Labs Facebook Page to see who is available—and then ask away!
- Work with your class to compile a list of questions to ask the featured scientist. Submit your questions online as a class.
- Ask students to discuss the types of skills they might need to become a scientist in the energy field.
The Energy Lab Standards Alignment
To see how different parts of the Energy Lab can be used to meet your course objectives, download a standards alignment here:
As new activities and lessons become available, we will include them in the NOVA Education Blog: Science of Learning.
Image courtesy of NREL
Below are more resources from NOVA and other organizations to enhance your lessons about energy.
- NOVA’s Power Surge – In this NOVA program, experts tackle the question: Can emerging technology defeat global warming? Learn how the United States has invested tens of billions of dollars in the search for sustainable energy sources.
- Smart Grid – In this segment from NOVA scienceNOW, learn how electricity travels from its source to your light switch, and hear from scientists and engineers who think a "smarter" grid is the key to more reliable and efficient energy distribution.
- PBS LearningMedia Energy Resources – Visit this free online digital media service from PBS for a variety of multimedia resources about energy. From interactives to teacher videos, there’s sure to be something here you can use to improve a lesson about energy. Here are just a few examples: Inside a Solar Cell , New Ways to Catch Rays, Capturing Renewable Energy, and Energy Sources.
- Energy Kids – This teacher guide provides energy lessons that use the “Energy Kids” website as a resource. The guide provides language arts, math, performing arts, science, and social studies extension activities for all ages.
- Energy Education K-12 Lesson Plans – Teach your students the importance of green energy while enhancing your curriculum. Here you'll find many creative lesson plans, labs, projects, and other activities for grades K–12 on energy-related topics.
- Climate Literacy and Energy Awareness Network – The CLEAN project, a part of the National Science Digital Library, provides a collection of carefully reviewed resources, coupled with the tools to enable an online community built around the teaching of climate and energy science.
- Clean Energy Institute – Find detailed lesson plans for hands-on sustainable energy engineering activities for middle and high school students. Activities are aligned to the Next Generation Science Standards. |
Add It Up Alligators
Students strengthen shape and pattern recognition and fine-motor and perceptual skills while building imagination, play, and a sense of self within a group.
See similar resources:
Will the Real Abraham Lincoln Please Stand Up?
Engage young historians in learning about America's sixteenth president with this fun Reader's Theater script. The second play in this series puts a modern spin on learning about history, involving a game show where three contestants try...
1st - 5th Social Studies & History CCSS: Adaptable
Drama Warm Ups and Circle Games
Circle games, energetic games, calming games, exit games. Whether used as ice breakers, warm ups, or exit strategies, or used in drama classes or content areas, the 28 games detailed in this packet deserve a spot in your curriculum library.
2nd - 12th Visual & Performing Arts
Que Llueva! It's Raining, It's Pouring!
Students learn to sing a song in another language. In this music lesson, students learn a song about rain in English and Spanish. Students listen to the song Que Llueva in Spanish and It's Raining, It's Pouring in English, sign it...
K - 3rd Visual & Performing Arts |
Medical examinations are usually a patient's first step in the health care process. Doctors talk with patients about their history and current problems they may be experiencing. Tests and studies may be performed. Then, doctors will discuss a treatment plan with the individual. When the patient is finished with their visit, the doctor traditionally has written notes recording the visit. Today, doctors often use electronic devices with voice recognition software.
A medical transcription professional receives this information through a recording program, often online. They listen to the doctor's voice and then create a transcript. This information is placed in a patient's records or hospital charts. A transcription professional sometimes works on-site in an office or at a hospital, but more likely today, those transcribing work online and telecommute. Most health care providers today outsource their transcription needs to major companies that specialize in dictation.
Great care must be taken during the transcribing process to ensure the service is performed according to the standards of the medical community. The doctor or nurse making the recording must be careful to speak slowly so that the voice program will accurately record the text. The person taking the dictation must be extremely accurate when typing the information. The doctor is then responsible for reviewing the documents for accuracy to guard against putting patients at risk because of misinformation.
Given the large databases of medical records, technology has drastically changed the way the process of data collection functions. Those involved are no longer simply using typing equipment to interpret hand written notes. Today's medical community records and accesses information in a largely electronic format. Medical transcription continues to be an important part of that process. |
What Rocks Mean To Us
Rock Types And Formation
Just a point to note: the earth's outer layer is mainly made up of rock. Rock contains minerals, and its chemical composition and make-up determine its main classification as metamorphic, igneous, or sedimentary rock.
Metamorphic rocks are formed when the composition of the original rock is changed, resulting in a different rock. This happens mainly through intense pressure and heat. They are harder than their parent rocks and are not easily eroded. They include gneiss, marble, graphite and quartzite.
Igneous rocks form either on or below the earth's surface. They are the result of highly heated molten material that has solidified. They are divided into two groups, plutonic and volcanic: plutonic rocks occur when molten material (magma) cools and solidifies within the earth's crust, while volcanic rocks form when lava solidifies on the surface of the earth. A good example of the latter is the pumice stone.
Sedimentary rocks are made of rock particles, together with organic and chemical sediments, produced through erosion and weathering. They are deposited by agents of denudation like wind and water and accumulate to form sediments.
These are found on the earth's surface and also in water bodies, where you are much more likely to find carbonate rocks, mudstone and sandstone.
They are classified as those that are:
Mechanically formed -Those formed through the deposition of particles which are derived from existing rocks, eventually laid down in layers (strata).
Organically formed -These are rocks formed from organic remains of either plants or animals.
Chemically formed -These are formed from mineral particles dissolved in water and solidified through precipitation of the same.
They are further sub - classified as:
Carbonates: Which are made up of carbonate compounds.
Sulphates: Containing sulphate compounds like gypsum which is a soft, white or colourless stone.
Chlorides: These are rock salts like table salt (sodium chloride).
Silicates: Formed from the deposition of silica.
Ironstone: Formed from various iron oxide deposits.
Economic Value of Rocks
Various rocks contain valuable minerals like silver, zinc, copper, aluminium and lead. These are extracted and put into economic use, thereby earning revenue for countries.
Precious Stones are valuable to us. Diamond is a hard rock that is used in making jewelry and other accessories.
Sandstone, limestone and marble are used in the construction industry to make cement, gravel and sand aggregates.
Rock is the parent material for all soils. It is a determining factor in the type of soil formed from it.
If you have ever wondered how water is easily stored on the earth's surface where we can easily access it, the answer is right there: rock. Non-porous, impervious rocks are precious for this ability.
As an attraction, rocks are unique and eye-catching, like the breathtaking, naturally formed volcanic rocks that grace our planet Earth. |
Cephalopods possess a great variety of light-producing organs (photophores). Some are very small and complex, like the one in the drawing below of a section through a photophore of Abralia trigonura, which is less than 0.2 mm in diameter and as complex as the eyes of some animals. Some other photophores are very large, such as the arm-tip organs of Taningia, which can be nearly 5 cm in length and 2-3 cm wide. Cephalopod photophores have a wide range in structure from a simple group of photogenic cells to organs with photogenic cells surrounded by reflectors, lenses, light guides, color filters and muscles. The latter complex photophores are often able to actively adjust the color, intensity and angular distribution of the light they produce. Photophores of most oceanic cephalopods have intrinsic luminescence with the light coming from their own specialized cells, the photocytes. Photophores of most neritic cephalopods, in contrast, have extrinsic luminescence with the light produced by bacteria that are cultured in specialized light organs of the host cephalopod.
Figure. Longitudinal section through an integumental chromatophore of Abralia trigonura. Drawing with artificial colors modified from Young and Arnold (1982).
Chromatophores - Pigment cells that absorb light leaving the photophore in undesirable directions or that shield the reflectors of the photophore, when the photophore is not active, from reflecting external light that could reveal the presence of the cephalopod.
Color filters - Structures within a photophore that restrict the color of the light emitted by the photocytes. Filters can either rely on selective absorption of light (pigment filters) or selective transmission/reflection of light (iridophores).
Lenses - A variety of structures that apparently affect the directionality of light are called "lenses." Some of these appear to act like typical optical lenses, but the mode of action of others (like those in the illustration to the right) is uncertain.
Light guides - Structures that control the direction of emitted light through the use of "light pipes" that rely on total internal reflection. These function in the same manner as fiber-optic light guides.
Photocytes - Cells that produce light (i.e., the bioluminescence).
Photogenic crystalloids - Some photocytes have crystalline-like inclusions that are thought to be the actual site within the cell where light is produced.
Reflectors - The primary reflectors are structures at the back of a photophore that reflect light toward the exterior. These may be broad-band reflectors that reflect all light or narrow-band reflectors that selectively reflect specific colors. Light not reflected by the latter structure passes through it and is absorbed by chromatophores that usually surround the reflector. Secondary reflectors can be found in various regions near the distal parts of the photophore. These generally have a role in controlling the directionality of the emitted light.
Figure. Close-up photograph of the ventral surface of the head of a young (ca. 18 mm ML) Abralia trigonura showing a photophore (arrow or any of the green photophores) of the type illustrated above. Note that there is a variety of other types of photophores visible in this photograph. |
Cerebrospinal fluid (CSF), clear, colourless liquid that fills and surrounds the brain and the spinal cord and provides a mechanical barrier against shock. Formed primarily in the ventricles of the brain, the cerebrospinal fluid supports the brain and provides lubrication between surrounding bones and the brain and spinal cord. When an individual suffers a head injury, the fluid acts as a cushion, dulling the force by distributing its impact. The fluid helps to maintain pressure within the cranium at a constant level. An increase in the volume of blood or brain tissue results in a corresponding decrease in the fluid. Conversely, if there is a decrease in the volume of matter within the cranium, as occurs in atrophy of the brain, the CSF compensates with an increase in volume. The fluid also transports metabolic waste products, antibodies, chemicals, and pathological products of disease away from the brain and spinal-cord tissue into the bloodstream. CSF is slightly alkaline and is about 99 percent water. There are about 100 to 150 ml of CSF in the normal adult human body.
The exact method of the formation of the CSF is uncertain. After originating in the ventricles of the brain, it is probably filtered through the nervous-system membranes (ependyma). The CSF is continually produced, and all of it is replaced every six to eight hours. The fluid is eventually absorbed into the veins; it leaves the cerebrospinal spaces in a variety of locations, including spaces around the spinal roots and the cranial nerves. Movement of the CSF is affected by the downward pull of gravity, the continual process of secretion and absorption, blood pulsations in contingent tissue, respiration, pressure from the veins, and head and body movements.
Examination of the CSF may diagnose a number of diseases. A fluid sample is obtained by inserting a needle into the lumbar region of the lower back below the termination of the spinal cord; this procedure is called a lumbar puncture or spinal tap. If the CSF is cloudy, meningitis (inflammation of the central nervous system lining) may be present. Blood in the fluid may indicate a hemorrhage in or around the brain. |
Adhesion is an extremely important factor in living nature: insects can climb up walls, plants can twine up them, and cells are able to adhere to surfaces. During evolution, many of them developed mushroom-shaped adhesive structures and organs. Lars Heepe and his colleagues at Kiel University have discovered why the specific shape is advantageous for adhesion. The answer is in homogeneous stress distribution between a surface and the adhesive element.
The results have recently been published in the scientific journal Physical Review Letters.
Not only the roughness of contacting surfaces but also their contact shapes, also called contact geometry, determine adhesion strength between them. In nature, mushroom-shaped contact geometry prevails. It evolved in diverse terrestrial and aquatic organisms independently -- at the nano, micro and macro scale. Examples include, among others, the bacterium Caulobacter crescentus, which clings to surfaces (nano scale), the mushroom-shaped hairs of specific leaf beetles (micro scale), and the Virginia creeper plant (Parthenocissus) (macro scale). "This particular contact geometry developed independently in various living organisms. This fact might indicate an evolutionary adaptation of organisms to optimal adhesion," says Stanislav Gorb, biologist at the Institute of Zoology at Kiel University.
However, it remained unclear what are the mechanical advantages of the mushroom shape. In order to answer this question, an interdisciplinary research team including engineering physicist Lars Heepe, biophysicist Alexander Kovalev, theoretical physicist Alexander Filippov and biologist Stanislav Gorb took a closer look at the so-called Gecko®-Tape -- an adhesive developed at Kiel University in collaboration with the company Gottlieb Binder GmbH & Co. KG. Its microscopic adhesive elements were inspired by gecko feet and leaf beetles. It adheres even to wet and slippery surfaces and can be re-used endlessly and removed without leaving any residues.
"We have examined the detachment behaviour of single mushroom-shaped adhesive microstructures under the microscope, at the highest resolution both in time and space," says Heepe. The scientists took pictures of the detachment process at 180,000 frames per second. "We discovered that the actual moment of detachment -- or more precisely: the moment when a defect in the contact area starts to develop up to its complete separation -- lasts only a few micro-seconds." The contact rips apart with up to 60 percent of the speed of sound of the material, or 12 metres per second. "This can be achieved only if a homogeneous stress distribution exists between the mushroom-shaped adhesive element and the surface," Heepe explains. Reaching such high speeds in this short time requires very much elastic energy, which can only be stored when stress in contact is homogeneously distributed. Other adhesive geometries, such as flat punch, create stress concentrations and start to separate at the edges. With the mushroom head, its thin plate prevents the formation of stress peaks and detaches itself from the inside to the outside. A lot of strength is necessary to do this -- therefore, adhesion is strong.
"With our experiments we have been able to unravel an important effect of a very successful adhesion mechanism in nature," Heepe concludes the work of the interdisciplinary scientific team at Kiel University. Their high-speed analysis also confirms a theoretical model recently presented by an Italian group of scientists.
The findings of the study made in Kiel can be used for further development of glue-free adhesive surfaces with enhanced performance. It also takes the scientists at the Collaborative Research Centre 677 "Function by Switching" a step closer to one of their declared goals: They want to create photoswitchable adhesive systems that can be turned between an adhesive and non-adhesive state by light of certain wave lengths.
- Lars Heepe, Alexander E. Kovalev, Alexander E. Filippov, Stanislav N. Gorb. Adhesion Failure at 180 000 Frames per Second: Direct Observation of the Detachment Process of a Mushroom-Shaped Adhesive. Physical Review Letters, 2013; 111 (10) DOI: 10.1103/PhysRevLett.111.104301
|
US History connecting with PE?
You bet! And here’s how it happened:
One day, the 8th grade PE teacher (Erin Visch-Krahn) popped by my classroom to talk about a collaboration piece she was working on with another teacher. And when I say “talk about”, what I mean is “brag about”. I was jealous as I am ALWAYS looking for ways to collaborate and here was EVK putting joint projects together in the first month of school.
I demanded that she and I create something. The problem was, what could possibly link our curriculum? I asked her what upcoming unit would she be focusing on. Erin said her students would spend a few classes on dodgeball. My students, on the other hand, were in the middle of a unit on the US Revolutionary War. Hmmmm…..
Then, like a dodgeball to the face, it hit us: dodgeball? War? Eureka!
Within minutes, we had created our school’s first US Revolutionary battle reenactment activity. Instead of muskets and cannons, students would employ dodgeballs and fiery rhetoric.
Here’s how it worked:
- Prepping the US History Students
I provided students with a worksheet. On one side (entitled Lawful Loyalist) students had to list 3 reasons why the Loyalist side of the war was right and the rebels were wrong. On the flipside (entitled Raging Rebels) students had to list 3 reasons why the Loyalists were crazy and why the king had to go. For both sides, students had to include old timey insults – names people would have called each other during colonial times.
Once complete, students reviewed both points-of-view to familiarize themselves with the arguments of both sides and the curse words.
- Teams Divided
Days later, in their PE class, students were randomly split into 2 teams. One team wore red pinnies, the other wore blue. The red side represented the Loyalists (red = British Redcoats!) and the blue side were the Rebels.
- Inspiration Speeches/Name Calling
Before the dodgeball game began, there was time set aside for speeches. Students were selected by the PE teacher to step forward and explain – on behalf of their team – why the battle was happening. Students had to use the ideas and arguments outlined in their handout. Also, upon concluding the exchange of ideas, insults were to be exchanged. During all this, the PE teacher observed and used a rubric to check who was participating and to what degree.
Once the rhetoric and posturing was complete, the actual dodgeball game began.
Once the game ended, the winning team was declared victorious and gloated, in a most unsportsmanlike way, at the losers.
What I like about this activity is that it connects two classes that normally do not connect. Next, by connecting competition with the revolution, the activity helps students understand the passions on both sides of this struggle. For PE, this activity takes a ho-hum traditional game and, by putting context around it, lifts it into something more powerful. I also like how it links learning with physical activity, making it (for those more kinesthetic learners) a more active and engaging event.
This was super easy to organize and it really encouraged me to find other ways to link my curriculum with that of my colleagues. |
About the Historical News Releases
This is an archived Argonne News Release Item.
For similar items about Nuclear Energy: Nuclear Energy Historical News Releases
For more information about this item, please contact at Argonne.
Mysterious little particle has long Argonne history
ARGONNE, Ill. (Nov. 13, 1996) — How small is "small"?
A particle that barely exists, as humans measure existence, is so remarkably small that trillions pass through our bodies every second with no effect.
That particle is the neutrino, and it could pass through a chunk of lead thicker than the Earth as easily as a person walks through fog.
The history of the neutrino and the history of Argonne National Laboratory long have been intertwined. The legendary physicist Enrico Fermi, who was the first director of the organization that eventually became Argonne National Laboratory, "invented" the neutrino in the 1930s to account for an atomic energy imbalance. He never actually saw physical evidence of a neutrino, and he expected that no one ever would. That expectation marked one of the few times Fermi was wrong.
Fermi and other scientists studying a form of radioactivity in which a neutron decays into a proton and an electron calculated that the combined energy of the proton and electron was less than that of the original neutron. To balance the energy equation a third particle was needed, and so the neutrino was "born."
To explain why this mysterious particle had never been detected, the scientists theorized that it had no charge, no mass, and thus could pass through any object -- detectors included -- without interacting with anything.
Neutrinos, they decided, were inherently undetectable.
But the neutrino's existence was proven in the 1950s and the little particle quickly became an element of what physicists call "the standard model," science's current dominant theory of matter and energy.
And in 1970, Argonne scientists saw evidence that Fermi had been wrong when they observed a neutrino's tracks in a hydrogen bubble chamber.
In fairness to Fermi, the device that permitted the neutrino observation -- Argonne's Zero Gradient Synchrotron (ZGS), a 12.5-billion-electron-volt particle accelerator featuring a 12-foot hydrogen bubble chamber surrounded by a 107-ton superconducting magnet -- was almost certainly beyond even his vision in the 1930s. Superconductors, materials that lose all resistance to electricity when cooled to near absolute zero, allow construction of efficient electromagnets that use far less energy and create more powerful magnetic fields than larger, heavier magnets that use conventional materials.
Today, neutrinos continue to occupy the attention of Argonne scientists.
An Argonne team is readying an experiment that could prove that neutrinos do have mass. If they do, and because there are so many neutrinos in the universe, it might turn out that the little particle no one thought could be detected actually accounts for much of the mass of the universe -- more than all the stars and planets combined.
That experiment currently is scheduled to get under way in 2001.
|
What is Electromagnetic Compatibility?
Electromagnetic Compatibility (EMC) is the ability of an electrical device to function properly in its environment without being disturbed by electromagnetic interference from other devices and without itself producing interference that disturbs them. EMC testing therefore covers two areas: Electromagnetic Interference (EMI, emissions) and Electromagnetic Susceptibility (EMS, immunity). Read on to learn more about EMC and how you can keep electromagnetic interference from negatively affecting your electronics.
EMC Certification Standards
Electromagnetic compatibility standards are important for manufacturers who have to deal with EMC. There are many different emc standards and many different industries that need EMC testing.
- IEC: International Electrotechnical Commission; its EMC work includes three branches:
  CISPR: International Special Committee on Radio Interference
  TC77: Technical Committee on Electromagnetic Compatibility in Electrical Equipment (including Power Grids)
  TC65: Industrial Process Measurement and Control
- ISO: International Organization for Standardization;
- ETSI: European Telecommunications Standards Institute;
- CCIR: International Radio Consultative Committee;
- FCC: Federal Communications Commission (United States);
- VDE: Association of German Electrical Engineers;
- VCCI: Voluntary Control Council for Interference (Japan);
- BS: British Standards;
- ANSI: American National Standards Institute;
- GOST R: Russian national standards;
- GB, GB/T: Chinese national standards.
How to test electromagnetic compatibility?
There are many ways to test electromagnetic compatibility (EMC). One common method is to use an EMC chamber. This is a room that is specially designed to block out external electromagnetic fields, so that the only fields present are those generated by the device under test. By measuring the device’s response to various types of electromagnetic fields, it is possible to determine whether it is compatible with those fields. Other methods of testing EMC include using anechoic chambers and Faraday cages.
1. Choose an EMC testing laboratory
Shielded rooms, open-area test sites, anechoic chambers, reverberation chambers, TEM cells, and GTEM cells are among the most common locations for EMC testing. Among them, anechoic chambers are the most common. An anechoic chamber shields the test setup from electromagnetic waves other than those of the equipment under test, so that they cannot interfere with the measurement. Its walls are lined with ferrite absorbing materials that absorb electromagnetic waves and so eliminate electromagnetic interference from the environment.
The currently known types of anechoic chambers can be divided, according to their use, into antenna pattern test rooms, radar cross section test rooms, electromagnetic compatibility (EMC) test rooms, and electronic warfare (countermeasures) test rooms. The most common are the full anechoic chamber and the semi-anechoic chamber. The chamber size and the selection of RF absorbing materials are mainly determined by the dimensions and test requirements of the equipment under test (EUT).
The anechoic chamber is lined with cone-shaped absorbers: pyramid-shaped composite sponge bodies impregnated with absorbing powder. Their size is related to the frequencies they must absorb, and their function is to soak up unwanted electromagnetic waves and eliminate reflected signals. Typical absorbers achieve 10-20 dB of absorption across the 30 MHz-40 GHz band. The absorbers used in a shielded anechoic chamber are matched to the size of the chamber, and their thickness is reduced where necessary to make effective use of the space.
2. Select EMC test equipment
In the EMC test process, the supporting test equipment differs according to the industry of the equipment being tested. Details are as follows:
EMI test equipment: EMI receivers, EMI accessories, conducted EMI test accessories, radiated EMI test antennas, harmonic flicker analyzers, near-field probes, etc.
EMS test equipment: EMS signal generator, EMS ancillary equipment, etc.
3. Test Procedure
Many different metrics can be used to measure EMC on an individual device or piece of equipment. However, there are a few metrics that are most common for EMC measurements for electronics.
EMI Testing metrics:
- Harmonic current (2nd to 40th harmonic);
- Voltage fluctuation and flicker;
- Conducted disturbance (CE);
- Radiation disturbance (RE);
EMS Testing metrics:
- Electrostatic discharge immunity (ESD);
- Radiated electromagnetic field (80MHz~1000MHz) immunity (RS);
- Electrical fast transient/burst immunity;
- Surge (lightning strike) immunity;
- Injection current (150kHz~230MHz) immunity (CS);
- Voltage dip and short interruption immunity.
3.1 Harmonic Test
Harmonic testing mainly examines the influence of harmonics in low-voltage power supply networks on frequency-sensitive equipment.
Test Standard: EN 61000-3-2
- a) Specify limits for harmonic currents emitted to the public grid.
- b) Specify limits for the harmonic content of the input current generated by the equipment under test in the specified environment.
- c) Applicable to electrical and electronic equipment connected to the public low-voltage network with an input current less than or equal to 16A.
Principle of the harmonic test: because of the way electronic equipment operates, with nonlinear components and various sources of interference noise, the input current is not a pure sine wave and often contains significant higher-order harmonic components that pollute the power grid. This phenomenon is called harmonic distortion.
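As a rough illustration of what a harmonic analyzer reports (a minimal Python sketch, not a substitute for the EN 61000-3-2 measurement procedure; the waveform, sample rate, and harmonic amplitudes below are invented), the harmonic content and total harmonic distortion of a sampled input current can be estimated with an FFT:

```python
import numpy as np

def harmonic_currents(i_samples, fs, f0=50.0, n_harmonics=40):
    """Estimate the RMS current of each harmonic of f0 (orders 1..n_harmonics)."""
    n = len(i_samples)
    spectrum = np.fft.rfft(i_samples) / n          # scaled one-sided spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    rms = {}
    for k in range(1, n_harmonics + 1):
        idx = int(np.argmin(np.abs(freqs - k * f0)))  # bin nearest the k-th harmonic
        rms[k] = np.sqrt(2) * np.abs(spectrum[idx])   # |X|/n is half the peak; x sqrt(2) gives RMS
    return rms

def thd(harmonics):
    """Total harmonic distortion: RMS of orders 2..N relative to the fundamental."""
    distortion = np.sqrt(sum(v ** 2 for k, v in harmonics.items() if k >= 2))
    return distortion / harmonics[1]

# Illustrative waveform: a 2 A (peak) 50 Hz current with some 3rd and 5th harmonic content.
fs = 10_000.0
t = np.arange(0, 0.2, 1 / fs)                      # exactly ten 50 Hz cycles
i = (2.0 * np.sin(2 * np.pi * 50 * t)
     + 0.3 * np.sin(2 * np.pi * 150 * t)
     + 0.1 * np.sin(2 * np.pi * 250 * t))
print(f"THD = {thd(harmonic_currents(i, fs)):.1%}")   # about 15.8% for this waveform
```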
3.2 Voltage Fluctuation and Flicker
The purpose of this standard is to ensure that the product does not cause undue flickering effects (flickering lights) to the lighting equipment it is connected to.
Test Standard: EN 61000-3-3
- a) Limits on the effects of constant voltage fluctuations and flicker on the public grid.
- b) Guidance for specifying limits and methods of evaluation of voltage variations produced by the prototype under test under specified conditions.
- c) It is suitable for 220V to 250V, 50Hz electrical and electronic equipment connected to the public low-voltage network with an input current of less than or equal to 16A per phase.
For each relative voltage change value, the standard specifies an allowable rate of change or change time: the larger the voltage change, the lower the allowed rate of change, or the longer the required change time. The key limits are listed below; a simple limit-check sketch follows the list.
- the Pst value shall not be greater than 1.0;
- the Plt value shall not be greater than 0.65;
- the value of d(t) during a voltage change shall not exceed 3.3% for more than 500ms;
- the relative steady-state voltage change, dc, shall not exceed 3.3%;
- maximum relative voltage change dmax shall not exceed 4%.
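As a small illustration of how these limits might be applied to already-measured values (a hedged sketch only; the measured numbers are hypothetical and the d(t) timing requirement is omitted), the check could look like this in Python:

```python
# Hypothetical measured values from a flicker analyzer (illustrative only).
measured = {
    "Pst": 0.82,    # short-term flicker severity
    "Plt": 0.41,    # long-term flicker severity
    "dc": 2.1,      # relative steady-state voltage change, %
    "dmax": 3.5,    # maximum relative voltage change, %
}

# Limits as listed above (EN 61000-3-3); the 500 ms d(t) condition is not modeled here.
limits = {"Pst": 1.0, "Plt": 0.65, "dc": 3.3, "dmax": 4.0}

for name, value in measured.items():
    verdict = "PASS" if value <= limits[name] else "FAIL"
    print(f"{name}: measured {value} (limit {limits[name]}) -> {verdict}")
```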
3.3 Conducted Emissions CE (0.15-30MHz)
Test Standard: EN61000-6-4
A) Electronic and electrical measurement and test equipment;
B) Electronic and electrical control equipment;
C) electrical and electronic laboratory equipment;
Classification of equipment
Class A (non-household): equipment suitable for use in all establishments other than domestic establishments and those directly connected to a low-voltage power supply network which supplies buildings used for domestic purposes.
Class B (household): equipment suitable for use in domestic establishments and in establishments directly connected to a low-voltage power supply network which supplies buildings used for domestic purposes.
When the frequency of the interference noise from electronic equipment is below 30 MHz, it mainly interferes with the audio frequency band. At these frequencies the cables of the equipment are much shorter than one wavelength (the wavelength at 30 MHz is 10 m), so the efficiency with which the noise radiates into the air is very low. The degree of electromagnetic noise in this band can therefore be assessed by measuring the noise voltage induced on the cables; noise of this type is called conducted noise.
A line impedance stabilization network (LISN) is a device used to measure the electromagnetic interference (EMI) emitted by electronic devices. It is typically used in conjunction with an oscilloscope or spectrum analyzer.
The LISN serves three functions (a unit-conversion sketch follows this list):
1. Provide high-frequency isolation between the EUT and the power supply, preventing noise from the power supply from entering the EUT and affecting the measurement results.
2. Present a specified impedance between the power terminals of the EUT, simulating the actual power supply impedance so that measurement results are reproducible.
3. Keep the impedance in the test band stable at 50 ohms to match the input impedance of the measurement receiver or spectrum analyzer.
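Conducted-emission levels are normally reported in dBµV (dB relative to 1 µV). As a small sketch (the measured voltage and the limit value are examples, not figures taken from any standard), a receiver reading in volts can be converted and compared against a limit like this:

```python
import math

def volts_to_dbuv(v):
    """Convert a voltage in volts to dB relative to 1 microvolt (dBuV)."""
    return 20 * math.log10(v / 1e-6)

# Hypothetical reading from the receiver at one frequency point.
reading_v = 350e-6                 # 350 uV measured at 1 MHz (illustrative)
limit_dbuv = 56.0                  # example quasi-peak limit for this band

level = volts_to_dbuv(reading_v)
print(f"{level:.1f} dBuV vs limit {limit_dbuv} dBuV ->",
      "PASS" if level <= limit_dbuv else "FAIL")
# 350 uV is about 50.9 dBuV, so this illustrative point would pass.
```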
3.4 Radiated Emission RE (30-1000MHz)
Classification of equipment
Class A (non-domestic): equipment suitable for use in all establishments other than domestic establishments and those directly connected to a low voltage power supply network which supplies buildings used for domestic purposes.
Class B (domestic): equipment suitable for use in domestic establishments and in establishments directly connected to a low voltage power supply network which supplies buildings used for domestic purposes.
a) Electrical and electronic measurement and test equipment
b) Electronic and electrical control equipment
c) Electrical and electronic laboratory equipment
The principle of the radiated emission test:
When the total length of a conductor acting as an antenna is greater than 1/20 of the signal wavelength λ, it radiates effectively into space; when its length is an integer multiple of λ/2, the radiated energy is greatest. At noise frequencies above 30 MHz, the cables, openings, and gaps of electronic equipment easily meet these conditions, resulting in radiated emissions.
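The λ/20 and λ/2 rules of thumb above are easy to put into numbers. The sketch below (an illustration of the rule of thumb only, assuming a 1 m cable) estimates the frequency above which a cable starts to radiate effectively and its first few half-wave resonances:

```python
C = 299_792_458.0   # speed of light in free space, m/s

def radiation_onset_frequency(length_m):
    """Frequency (Hz) at which the conductor is lambda/20 long."""
    return C / (20.0 * length_m)

def half_wave_resonances(length_m, count=3):
    """Frequencies (Hz) at which the conductor is n * lambda/2 long."""
    return [n * C / (2.0 * length_m) for n in range(1, count + 1)]

cable = 1.0  # assumed 1 m cable, for illustration
print(f"Starts to radiate effectively above ~{radiation_onset_frequency(cable)/1e6:.0f} MHz")
print("Strongest radiation near:",
      [f"{f/1e6:.0f} MHz" for f in half_wave_resonances(cable)])
# For a 1 m cable: onset around 15 MHz; resonances near 150, 300, and 450 MHz.
```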
3.5 Electrostatic discharge (ESD)
The purpose of electrostatic discharge is to test the ability of a single device or system to resist electrostatic discharge interference.
Standard: IEC 61000-4-2 Criteria B
Experiment principle: the ESD test simulates the electrostatic discharge that occurs when the human body or an object touches the equipment, or when the human body or an object discharges to adjacent objects. The discharge can damage the device through the direct exchange of energy, or the near-field (electric and magnetic field) changes caused by the discharge can make the device malfunction.
3.6 Radiated Susceptibility (RS)
The purpose of the radiated susceptibility test is to check the ability of a single device or system to resist external radiated electric field disturbances. The standard test parameters are listed below, followed by a short frequency-stepping sketch.
Standard: IEC 61000-4-3 Criteria A
- Frequency range: 80MHz-2.5GHz
- Modulation: 80% AM, 1 kHz sine wave
- Frequency step size: 1%
- Dwell time: 3s
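To make the 1% step size concrete, the sketch below (an illustration only; real sweep time also depends on amplifier leveling and software overhead) generates the logarithmic list of test frequencies and estimates the total dwell time:

```python
def rs_frequency_list(f_start=80e6, f_stop=2.5e9, step=0.01):
    """Frequencies swept with a fixed 1% (step=0.01) increment, as listed above."""
    freqs = []
    f = f_start
    while f <= f_stop:
        freqs.append(f)
        f *= (1.0 + step)
    return freqs

freqs = rs_frequency_list()
dwell_s = 3.0
print(f"{len(freqs)} test frequencies")
print(f"Total dwell time: {len(freqs) * dwell_s / 60:.0f} minutes (excluding leveling time)")
# About 346 frequency points and roughly 17 minutes of dwell time for this range.
```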
3.7 Fast Burst EFT
The purpose of the experiment is to investigate the ability of a single device or system to resist fast transient disturbances. These disturbances are caused by transient events such as the interruption of inductive loads, and they appear as bursts of pulses with a high repetition frequency, a short rise time, and low single-pulse energy, which can cause the device to malfunction.
Standard: IEC 61000-4-4 Criteria B
3.8 Surge
The purpose of the experiment is to examine the ability of the EUT to resist surge interference. These transient disturbances are caused by short-circuit faults in other equipment, switching in the main power system, and indirect lightning strikes.
Standard: IEC 61000-4-5 Criteria B
3.9 Conducted Radio Frequency Interference (CS)
The purpose of the experiment is to examine the ability of a single device or system to resist conducted disturbances.
Standard: IEC 61000-4-6 Criteria A
Experimental principle: it mainly examines immunity to continuous interference voltages in the 0.15 MHz to 80 MHz range that are coupled into the equipment from outside via wires or cables.
- Frequency range: 0.15MHz-80MHz
- Modulation: 80% AM, 1 kHz sine wave
- Frequency step size: 1%
- Dwell time: 3s
3.10 Voltage dips
The purpose of the experiment is to investigate the ability of the EUT to resist voltage dips and sags.
Standard: IEC 61000-4-11 Criteria B & C
How to improve electromagnetic compatibility?
1. EMC shielding design
The effectiveness of your EMC shielding design relies on the type of material you choose as well as how it is implemented. You can further improve its performance by combining different types of materials together or by choosing a certain orientation for each specific layer of your shielding.
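Shielding effectiveness is often discussed in terms of reflection and absorption loss. As a rough sketch (textbook far-field approximations with an assumed 0.5 mm aluminium sheet; in practice apertures and seams usually dominate), the absorption loss of a solid metal sheet can be estimated from its skin depth:

```python
import math

MU0 = 4e-7 * math.pi          # permeability of free space, H/m

def skin_depth(f_hz, sigma, mu_r=1.0):
    """Skin depth (m) of a conductor at frequency f_hz."""
    return 1.0 / math.sqrt(math.pi * f_hz * MU0 * mu_r * sigma)

def absorption_loss_db(thickness_m, f_hz, sigma, mu_r=1.0):
    """Absorption loss of a solid sheet, ~8.686 dB per skin depth of thickness."""
    return 8.686 * thickness_m / skin_depth(f_hz, sigma, mu_r)

sigma_al = 3.5e7              # conductivity of aluminium, S/m (approximate)
t = 0.5e-3                    # assumed 0.5 mm sheet
for f in (1e5, 1e6, 1e7, 1e8):
    print(f"{f/1e6:>6.1f} MHz: absorption loss ~ {absorption_loss_db(t, f, sigma_al):.0f} dB")
# Absorption loss grows with the square root of frequency; at low frequencies reflection loss,
# and in real enclosures the apertures and seams, determine the overall shielding effectiveness.
```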
1.1 Ventilation hole and opening design
1.2 Structural lap joint shielding design
1.3 The cable passes through the shielding body
If conductors pass out of the shield, the shielding effectiveness of the shield is significantly degraded. Such penetration typically occurs where a cable exits the shield.
1.4 Design principles for cables going out of the shielding body
1.4.1 When shielded cables are used, when the shielded cables exit the shielding body, the clip wire structure is adopted to ensure reliable grounding between the shielding layer of the cable and the shielding body and provide a sufficiently low contact impedance.
1.4.2 When using shielded cables, use shielded connectors to transfer the signals out of the shielding body, and ensure the reliable grounding of the shielding layers of the cables through the connectors.
1.4.3 When using an unshielded cable, use a filter connector to transfer. Due to the high frequency characteristic of the filter, it is ensured that there is a sufficiently low high frequency impedance between the cable and the shield.
1.4.4 When using unshielded cables, the cables should be short enough inside (or outside) of the shield to prevent interference signals from being effectively coupled out, thereby reducing the impact of cable penetration.
1.4.5 The power line goes out of the shield through the power filter. Due to the high-frequency characteristic of the filter, it is ensured that there is a sufficiently low high-frequency impedance between the power line and the shield.
1.4.6 Use an optical fiber outlet. Since the optical fiber itself contains no metal, there is no cable penetration problem.
1.5 Poor grounding
1.6 Shielding materials and applications
The materials used for shielding include conductive cloth, spring-finger (reed) contacts, conductive rubber, and more.
1.7 Cut-off waveguide ventilation plate
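A honeycomb ventilation plate works as an array of waveguides operated well below cutoff. As a rough sketch (using the commonly quoted rule of thumb of about 32 dB of attenuation per diameter of depth for a round cell, with made-up cell dimensions), the behaviour of one cell can be estimated like this:

```python
import math

def round_cell_attenuation_db(depth_m, diameter_m):
    """Rule-of-thumb attenuation of one round waveguide cell operated well below cutoff."""
    return 32.0 * depth_m / diameter_m

def round_cell_cutoff_hz(diameter_m, c=3e8):
    """Approximate TE11 cutoff frequency of a round waveguide of the given diameter."""
    return 1.841 * c / (math.pi * diameter_m)

# Assumed honeycomb cell: 5 mm across, 25 mm deep (illustrative dimensions only).
d, t = 5e-3, 25e-3
print(f"Cutoff ~ {round_cell_cutoff_hz(d)/1e9:.0f} GHz; "
      f"attenuation below cutoff ~ {round_cell_attenuation_db(t, d):.0f} dB")
# Roughly 35 GHz cutoff and ~160 dB per cell; real panel performance is limited
# by the seams and by how the panel is bonded to the enclosure.
```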
2. EMC grounding design
2.1 The concept and purpose of grounding
2.1.1 One is for safety, called protective grounding. The metal casing of electronic equipment must be connected to the ground, so as to avoid the occurrence of excessive ground voltage on the metal casing due to accidents, which may endanger the safety of operators and equipment.
2.1.2 The second is to provide a low-impedance path for the current to return to its source, that is, the working ground.
2.1.3 Lightning protection grounding to provide current discharge for lightning strikes.
2.2 Grounding provides signal return
2.3 Single point grounding
Suitable for systems with operating frequency below 1MHz.
2.4 Multi-point grounding and mixed grounding
3. EMC Wave filter design
3.1 Wave Filter Definition
A wave filter is a device that alters the frequency content of a signal by selectively attenuating certain frequencies while allowing others to pass.
3.2 Type of wave filters
The common filter types include the low-pass filter, high-pass filter, band-pass filter, and band-stop filter, described below:
If a filter passes low frequencies and blocks high frequencies, it is called a low pass filter. If it blocks low frequencies and passes high frequencies, it’s a high pass filter. There are also bandpass filters, which pass only a relatively narrow frequency range. And a band-stop filter, which blocks only a relatively narrow range of frequencies.
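As a small numerical illustration (a first-order RC low-pass filter with assumed component values, not a recommendation for any particular interface), the cutoff frequency and the attenuation of a high-frequency disturbance can be estimated like this:

```python
import math

def rc_lowpass_cutoff_hz(r_ohm, c_farad):
    """-3 dB cutoff frequency of a first-order RC low-pass filter."""
    return 1.0 / (2.0 * math.pi * r_ohm * c_farad)

def rc_lowpass_attenuation_db(f_hz, r_ohm, c_farad):
    """Attenuation of a first-order RC low-pass filter at frequency f_hz."""
    fc = rc_lowpass_cutoff_hz(r_ohm, c_farad)
    return 20.0 * math.log10(math.sqrt(1.0 + (f_hz / fc) ** 2))

# Assumed values: 100 ohm series resistor and 10 nF capacitor on a signal line.
r, c = 100.0, 10e-9
print(f"Cutoff ~ {rc_lowpass_cutoff_hz(r, c)/1e3:.0f} kHz")
print(f"Attenuation at 30 MHz ~ {rc_lowpass_attenuation_db(30e6, r, c):.0f} dB")
# About 159 kHz cutoff and roughly 45 dB of attenuation at 30 MHz for this first-order filter.
```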
3.3 Wave Filter components
3.3.1 Capacitor (general capacitor, three-terminal capacitor);
3.3.2 Inductance (general inductors, common-mode chokes, ferrite beads);
3.4 Differential mode filter and common mode filter design
4. EMC PCB Design
4.1 PCB design
4.1.1 Layout: group similar circuits together, follow the principle of the shortest path, keep high-speed circuits away from the panel, and place the power module near the board's power input.
4.1.2 Layering: each high-speed wiring layer must be adjacent to a ground layer, the power plane should be adjacent to a ground plane, a ground layer should be placed under the component surface, the two outer layers should each be close to a ground layer, and the power plane should be set back by 20H relative to the adjacent ground plane (the 20H rule).
4.1.3 Wiring: follow the 3W spacing principle, keep differential pairs equal in length and routed close together, and do not let high-speed or sensitive lines cross plane splits (a rough rule-of-thumb check is sketched after this list).
4.1.4 Grounding: similar circuits are distributed separately and connected at a single point on the board.
4.1.5 Filtering: design board-level filter circuits for the power supply module and the functional circuits.
4.1.6 Interface circuit design: include filter circuits in the interface circuits to achieve effective isolation between the inside and the outside.
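The 3W and 20H figures above are geometric rules of thumb, so they are easy to check automatically. The sketch below (hypothetical helper functions with made-up dimensions; real design-rule checks belong in the PCB tool) illustrates both checks:

```python
def check_3w(trace_width_mm, center_spacing_mm):
    """3W rule: centre-to-centre spacing of adjacent traces >= 3x the trace width."""
    return center_spacing_mm >= 3.0 * trace_width_mm

def check_20h(power_plane_pullback_mm, dielectric_height_mm):
    """20H rule: power plane pulled back from the edge of the ground plane
    by >= 20x the dielectric height between the power and ground planes."""
    return power_plane_pullback_mm >= 20.0 * dielectric_height_mm

# Made-up example dimensions.
print("3W  :", "OK" if check_3w(trace_width_mm=0.15, center_spacing_mm=0.50) else "violation")
print("20H :", "OK" if check_20h(power_plane_pullback_mm=2.0, dielectric_height_mm=0.2) else "violation")
# 0.50 mm spacing vs 3 x 0.15 mm = 0.45 mm -> OK; 2.0 mm pullback vs 20 x 0.2 mm = 4.0 mm -> violation.
```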
4.2 The basic principles of layout
4.2.1 Referring to the functional block diagram of the schematic, divide the layout into functional modules based on the signal flow.
4.2.2 Separate layout of digital circuits and analog circuits, high-speed circuits and low-speed circuits, interference sources and sensitive circuits.
4.2.3 Avoid placing sensitive devices or strong radiation devices on the welding surface of the single board.
4.2.4 The loop area of sensitive signals and strong radiation signals is the smallest.
4.2.5 Strong radiation devices or sensitive devices such as crystals, crystal oscillators, relays, switching power supplies, etc. should be placed away from single-board handle bars, external interface connectors, and sensitive devices. The recommended distance is ≥1000mil.
4.2.6 Sensitive devices: keep away from strong radiation devices, the recommended distance is ≥1000mil.
4.2.7 Isolation devices, A/D devices: the input and output are separated from each other, and there is no coupling path (such as adjacent reference planes), preferably across the corresponding partition.
4.3 Special device layout
4.3.1 Power part (placed at the power inlet).
4.3.2 Clock part (away from the opening, close to the load, wiring inner layer).
4.3.3 Inductive coil (away from EMI source).
4.3.4 Bus driver part (inner layer of wiring, away from the opening, close to the sink).
4.3.5 Filter components (separate input and output, close to the source, short leads).
4.4 Layout of filter capacitors
4.4.1 All branch power supply interface circuits.
4.4.2 Near components with high power consumption.
4.4.3 Areas with large current changes, such as input and output terminals of power modules, fans, relays, etc.
4.4.4 PCB power interface circuit.
4.5 Layout of decoupling capacitors
4.5.1 Place decoupling capacitors close to the power pins.
4.5.2 Use an appropriate location and quantity (a sizing sketch follows below).
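One common way to reason about "appropriate quantity" is the target-impedance method. The sketch below (a simplified illustration with an assumed supply voltage, ripple budget, and load step; it ignores capacitor parasitics and plane inductance) estimates a target impedance and a bulk capacitance value:

```python
def target_impedance_ohm(vdd, ripple_fraction, transient_current_a):
    """Target power-distribution impedance: allowed ripple voltage / transient current."""
    return (vdd * ripple_fraction) / transient_current_a

def bulk_capacitance_f(transient_current_a, response_time_s, allowed_droop_v):
    """Charge-based estimate: C = I * dt / dV."""
    return transient_current_a * response_time_s / allowed_droop_v

# Assumed numbers for illustration: 1.8 V rail, 5% ripple budget, 2 A load step.
z = target_impedance_ohm(1.8, 0.05, 2.0)
c = bulk_capacitance_f(2.0, 10e-6, 1.8 * 0.05)
print(f"Target impedance ~ {z*1000:.0f} milliohm")
print(f"Bulk capacitance ~ {c*1e6:.0f} uF to ride through a 10 us transient")
# Roughly 45 milliohm and 222 uF for these assumptions; high-frequency decoupling
# still comes from many small capacitors placed close to the power pins.
```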
4.6 The basic principles of the layout of the interface circuit
Devices such as filtering, protection, and isolation of interface signals are placed close to the interface connector, and they are protected first and then filtered.
Isolation devices such as interface transformers and optocouplers are completely isolated from the primary and secondary.
No crossover of signal network between transformer and connector.
The BOTTOM layer area corresponding to the transformer should be placed as far as possible without other devices.
The interface chip (network port, E1/T1 port, serial port, etc.) should be placed as close as possible to the transformer or connector.
Short traces, wide spacing between different types of traces (except for signals and their return lines, differential lines, and shielded ground lines), fewer vias, no loops, small loop area, and no stubs.
For traces with delay requirements, their lengths meet the requirements.
Avoid right-angle bends; rounded (arc) corners are preferred for key signal lines.
Signal traces on adjacent layers should run perpendicular to each other, or the parallel run of key signals on adjacent layers should be kept to 1000 mil or less.
Those Two Eggs
If a butterfly lays 100 eggs, how many do you think survive to become healthy adult butterflies? The chances are that only two will! The rest get destroyed at various stages, right from egg to adult.
The environment is hostile and always full of surprises. Like any other insect species, the butterfly has to survive in the midst of drought, rain, wind and predators. In the struggle, only a few manage to live and grow.
The average life span of a butterfly is three weeks. During this brief time, the male mates with the female. The female lays the eggs. Their job is completed. The eggs hatch and the cycle goes on despite odds.
For a female butterfly, the only objective is to lay eggs. After mating, it goes in search of a suitable plant. This search is important because the larvae that will emerge from the eggs will feed only on a particular plant. The mother therefore has to choose the right plant that will serve as food for its offspring.
The female butterfly can recognize the food plant by the shape and color of its leaves. It then alights on the leaf and strokes the leaf with its feet. The leaf is scratched and its odors are released. The butterfly smells the odors to make sure that the plant is ideal for laying the eggs.
At this stage, the female has its abdomen full of eggs. The eggs develop in the ovaries of the reproductive system. Such a female is called a gravid female.
The female settles on a convenient spot and begins to lay the eggs. The spot is usually the undersurface of a leaf.
At the tip of the female abdomen is the ovipositor, a tubular extension of the genital opening. It facilitates egg deposition. In simple terms it is an egg laying tube. Eggs pass through this tube one by one, get fertilized by the sperm (that are received and stored in a sperm pouch during mating) and are deposited. A sticky substance flows out of the tube which enables the eggs to stick to the leaf surface.
Some species of butterflies lay their eggs in groups or clusters on a plant. Others lay a single egg per plant and distribute their eggs widely. Either way, the idea behind it is survival, of course.
But in the end only about two percent of them hatch and develop into adults. Though this seems enough to fill our gardens with flying colors, butterflies need to be conserved and protected. We need more of them. The task lies in our hands.
While hydrogen is the simplest element, we still struggle to predict its high pressure behaviour – but researchers in the UK and Switzerland may have just solved one hydrogen puzzle. Bingqing Cheng from the University of Cambridge, UK, and colleagues used machine learning to study how hydrogen changes between liquid states at high temperatures and pressures. That meant that they could use computer power more efficiently than other theoretical chemistry methods. As such, they could simulate systems containing over a thousand atoms, rather than just a few hundred.
Previous simulations suggested a sudden change between an insulating liquid containing hydrogen molecules and a conductive metallic liquid of hydrogen atoms. Cheng and her colleagues found that sometimes a more continuous transition is possible. ‘The high pressure hydrogen turns atomic in a smooth and gradual way,’ she explains. The team found that in a narrow range of conditions, hydrogen forms a supercritical state intermediate between the molecular and atomic liquids. This could explain conflicting experimental results, Cheng says, where gradual transitions only happen under some conditions. But the consequences could be truly out of this world.
‘Our conclusion on supercriticality can potentially change our understanding of the inner structures of giant planets,’ Cheng explains. In giant planets there can be abundant hydrogen at the right temperature and pressure to be liquid. Cheng adds that the insulating and the metallic liquid layers in them may have a gradually changing density profile, instead of an abrupt change as previously thought. Whether hydrogen oceans are conductive or insulating could influence their magnetic fields, she notes.
To produce these findings, the chemists trained a neural network using data from more conventional density functional theory (DFT) and Quantum Monte Carlo (QMC) simulations. They put hydrogen atom positions into the network, together with the energies of the atoms and the forces they experience in each case. The network ‘remembers’ links between structure and properties, Cheng explains. It can then predict the properties of new structures with more atoms by comparing atomic arrangements to its memory.
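To make that workflow concrete, the sketch below fits a small neural network to structure–energy data in the same spirit, using a toy pair potential in place of DFT/QMC and scikit-learn's MLPRegressor as the network. The descriptor, the energy function, and all numbers are assumptions for illustration, not the authors' actual setup.

```python
# Toy "machine-learned potential": fit a neural network to structure-energy data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def descriptor(positions):
    """Sorted pair distances: a crude, permutation-invariant input for the network."""
    n = len(positions)
    dists = [np.linalg.norm(positions[i] - positions[j])
             for i in range(n) for j in range(i + 1, n)]
    return np.sort(dists)

def reference_energy(positions):
    """Cheap pairwise (Lennard-Jones-like) energy standing in for a DFT/QMC calculation."""
    d = descriptor(positions)
    return float(np.sum(4.0 * (d ** -12 - d ** -6)))

def random_config(n_atoms=4, box=3.0, min_dist=0.9):
    """Random small cluster, rejecting configurations with unphysically close atoms."""
    while True:
        pos = rng.random((n_atoms, 3)) * box
        if descriptor(pos)[0] > min_dist:
            return pos

configs = [random_config() for _ in range(400)]
X = np.array([descriptor(c) for c in configs])
y = np.array([reference_energy(c) for c in configs])

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(X, y)  # the network "remembers" the link between structure and energy

test = random_config()
print("network:", net.predict([descriptor(test)])[0], " reference:", reference_energy(test))
```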
DFT and QMC both showed abrupt changes between hydrogen’s two liquid phases, but today’s computers can only simulate a few hundred atoms with these methods. The openly available neural network simulated 1728 atoms and found a smoother transition with a supercritical phase. ‘We believe the results coming from the larger, more realistic system is more trustworthy,’ Cheng says.
Lilia Boeri from Sapienza University in Italy, says that the study is ‘an impressive application of machine learning methods’. ‘This is one of the first cases where I have seen this method applied to a question that is probably intractable by standard DFT calculations,’ Boeri says. ‘This allowed the authors to determine almost unambiguously the nature of the liquid–liquid transition in high-pressure hydrogen, a question which has been debated for decades, and has important consequences for planetary models. ’
B Cheng et al, Nature, 2020, DOI: 10.1038/s41586-020-2677-y |
This article will focus on electroplating. What is it?
Galvanization (electroplating) is an entire branch of electrochemistry in which a thin layer of another metal is applied to metal products; in our case it is a layer of rhodium or gold. The “galvanizing” method is used to strengthen the surface of jewelry and protect it from the effects of the external environment, to change the color of the metal, and to give silver products hypoallergenic properties.
Before you begin to coat the product, it must be very well polished and inspected for defects, cleaned of dust and dirt. Products are cleaned and galvanized in several stages.
The products are hung on special frames with hooks and the degreasing procedure begins, because contaminants from previous operations remain on the surface. The products are immersed in a container with a special solution to remove grease deposits, then into distilled water to rinse them. Products are cleaned in distillate after each process to avoid the penetration of one solution into another and for a better finish.
This is followed by so-called electrolytic degreasing, using salts to completely clean the surface. Then, after washing in distillate, the products are immersed in a bath of special acid to activate the surface and prepare it for coating.
Then they are immersed in a container into which a solution of rhodium or gold is poured and in which the “anodes” are located. These are special plates made from different metal alloys that do not react with the solution, but only conduct current to ensure the movement of electrons. For example, for a rhodium solution, the anode is made of platinized titanium. Next, an electric current is applied, and due to the potential difference, the gold or rhodium in the solution is deposited onto the products.
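How much metal a given current lays down follows Faraday's law of electrolysis. The sketch below is a back-of-the-envelope estimate with invented process values (current, time, surface area) and assumes 100% current efficiency and a monovalent gold bath; real plating lines are tuned far more carefully.

```python
# Back-of-the-envelope Faraday's-law estimate of the deposited layer (illustrative values only).
FARADAY = 96485.0     # C/mol
M_GOLD = 196.97       # g/mol, molar mass of gold
RHO_GOLD = 19.32      # g/cm^3, density of gold
N_ELECTRONS = 1       # assumed Au(I) chemistry; an Au(III) bath would use 3

current_a = 0.5       # assumed plating current, A
time_s = 30.0         # assumed plating time, s
area_cm2 = 10.0       # assumed total surface area of the hung items, cm^2

charge_c = current_a * time_s
mass_g = charge_c * M_GOLD / (N_ELECTRONS * FARADAY)   # metal deposited, assuming 100% efficiency
thickness_um = mass_g / (RHO_GOLD * area_cm2) * 1e4    # average layer thickness in micrometres

print(f"~{mass_g * 1000:.1f} mg of gold, an average layer of ~{thickness_um:.1f} µm")
```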
After the process is completed, the finished products must be washed and dried. To do this, they are first dipped into the distillate and then placed under a steam generator, with a steam temperature of 300 degrees Celsius, in order to finally wash away all the remaining solutions so that they do not leave stains on the coating. Then the products are placed in a special drying apparatus with a so-called fluidized bed of corn seeds, which interact with the surface of the product and absorb remaining moisture at a temperature of about 70 degrees. After this, the products are removed, inspected for defective coatings, and transferred to a warehouse for further assembly.
Students enter words in the gaps, based on the context within a given article, individually or collaboratively.
This activity helps improve your vocabulary and sentence structure and your communication skills.
Type: Individual or Group collaboration
Instructions: Click on the gap and type in a word. Click on the light bulb icon (if any) for help.
The words of sentences are scrambled and students must sort them into their original order.
This activity helps you study sentence structure by providing you with genuine text and allowing you to select suitable materials to practice on.
Instructions: Put the bold words in the correct order by dragging and dropping them into the correct position.
This activity is for image collections only.
A randomly chosen image is shown to one player (called the "describer"), while the other player (the "guesser") must identify it by asking questions.
This activity helps improve your communication skills and vocabulary.
Type: Collaboration in pairs
Instructions: The "describer" sees a single image and describes it to their partner through the chat box. Based on what their partner says, the "guesser" selects one of the images by double-clicking. Both score a point if it is the correct image. If a timer is shown, the "guesser" must make their choice before time runs out.
Students collaborate to predict words they think will occur in a given text. This activity provides a learning environment in which you help each other by sharing information and exchanging ideas.
Type: Group collaboration
Instructions: In the text box, type your guesses of what words you think might be in the article. Use the title and/or image to help you think of words. |
Since we first published our State of Charge report in 2012, the environmental benefits of electric vehicles (EVs) have continued to grow. Driving the average EV is responsible for fewer global warming emissions than the average new gasoline car everywhere in the US—a fact attributable to more efficient EVs and an increasingly clean electricity grid.
Read our latest report, "Driving Cleaner", and our latest analysis, "Are Electric Vehicles Really Better for the Climate? Yes. Here’s Why," for more information.
What are the global warming emissions of electric cars on a life cycle basis—from the manufacturing of the vehicle’s body and battery to its ultimate disposal and reuse? To answer this, the Union of Concerned Scientists undertook a comprehensive, two-year review of the climate emissions from vehicle production, operation, and disposal. We found that battery electric cars generate half the emissions of the average comparable gasoline car, even when pollution from battery manufacturing is accounted for.
A life cycle analysis of EVs
All vehicles experience three distinct life stages: manufacturing, operation, and end-of-life. Each stage is linked with carbon dioxide and other greenhouse gas emissions—but those emissions differ between gas-powered cars and electric cars.
Both types of vehicle begin in much the same way. Raw materials are extracted, refined, transported, and manufactured into various components that are assembled into the car itself. Because electric cars store power in large lithium-ion batteries, which are particularly material- and energy-intensive to produce, their global warming emissions at this early stage usually exceed those of conventional vehicles. Manufacturing a mid-sized EV with an 84-mile range results in about 15 percent more emissions than manufacturing an equivalent gasoline vehicle. For larger, longer-range EVs that travel more than 250 miles per charge, the manufacturing emissions can be as much as 68 percent higher.
These differences change as soon as the cars are driven. EVs are powered by electricity, which is generally a cleaner energy source than gasoline. Battery electric cars make up for their higher manufacturing emissions within eighteen months of driving—shorter range models can offset the extra emissions within 6 months—and continue to outperform gasoline cars until the end of their lives.
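To see how this break-even works, here is a minimal payback calculation. The extra manufacturing emissions and per-mile rates below are placeholder assumptions chosen only to show the arithmetic; they are not figures from the report or the interactive tool.

```python
# Illustrative break-even calculation for an EV's extra manufacturing emissions.
# All numbers are assumed placeholders, not values from the UCS report.

extra_manufacturing_kg = 1500.0   # assumed extra CO2e from building the EV (battery, etc.)
gas_car_kg_per_mile = 0.40        # assumed per-mile emissions of a comparable gasoline car
ev_kg_per_mile = 0.20             # assumed per-mile emissions of the EV on the local grid
miles_per_year = 12000.0          # assumed annual mileage

savings_per_mile = gas_car_kg_per_mile - ev_kg_per_mile
breakeven_miles = extra_manufacturing_kg / savings_per_mile

print(f"break-even after ~{breakeven_miles:,.0f} miles "
      f"(~{12 * breakeven_miles / miles_per_year:.0f} months of driving)")
```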
The specific emissions of any given EV model will depend on the vehicle’s efficiency and the electricity that powers it (check out our interactive tool to explore EV emissions in your area). For everyone in the country, charging the average new EV produces far fewer global warming pollutants than driving the average new gasoline car. In some of the country’s cleanest regions (including parts of California, New York, and the Pacific Northwest), driving an electric car is equivalent to getting 85 miles per gallon.
By the end of their lives, gas-powered cars spew out almost twice as much global warming pollution as the equivalent electric car. Disposing of both types of vehicles (excluding reusing or recycling their batteries) produces less than a ton each.
Electric vehicles already result in far less climate pollution than their gas-powered counterparts, and they’re getting cleaner. Optimizing EV production and the disposal or reuse of batteries could further increase their environmental benefits. And as electricity becomes cleaner (which it is), the difference between electric cars and gasoline cars will only grow—cementing the role of electric vehicles in halving U.S. oil use and cutting global warming emissions.
Read more by downloading the full report, use our interactive tool to explore EV emissions in your area, or read most recent updates in our blog. |
The most obvious feature of any tortoise is the shell. This is the tortoise's primary defence mechanism against would-be predators. The shell has remained almost unaltered by two hundred million years of evolution. The shell is basically an extension of the rib cage, which unlike most vertebrates is housed on the “outside” rather than inside the body.
The shell is made up of two halves, the underneath known as the plastron and the top known as the carapace. Both parts are fused together at the sides by a “bridge”.
The whole shell of the tortoise is made up of numerous small bones which are covered by separate plates of keratin called scutes. As a tortoise grows, extra layers of keratin are added underneath the existing layer, causing “growth rings”. Contrary to popular belief, a tortoise cannot be accurately aged by counting these rings. However they can tell us approximately how many spurts of growth the tortoise has had, thus we could also gauge what type of seasonal changes the tortoise has in its natural environment. Abundant vegetation means more food, which relates to more growth. Sparse vegetation due to extreme climatic conditions would mean little food, leading to little or no keratin growth.
Very old tortoises often have extremely worn scutes, giving their shells an almost completely smooth appearance.
The scutes of the carapace are split into five categories:
- The Nuchal – the scute directly above the head
- The Supracaudal – the scute directly above the tail
- The Vertebrals – a single line of scutes which run centrally from the head to the tail
- The Costals – run parallel to, and at either side of, the Vertebrals
- The Marginals – flank the Costals and attach to the “bridge”
The Marginal scutes have a large influence on the overall shape of a tortoise's shell. In some species, most noticeably Testudo marginata, the Marginal scutes are extremely flared.
The scutes of the plastron are also separately categorised, with two scutes in each category. Starting from the head and moving down to the tail, we have:
- The Gular
- The Humeral
- The Pectoral
- The Abdominal
- The Femoral
- The Anal
Some tortoises have a flexible “hinge” on their plastron which they can use for extra protection from predators by clamping the carapace and plastron firmly shut. Some females of other species have a much less flexible plastron, but nevertheless flexible enough to move slightly to aid her egg laying duties.
The skeleton of a tortoise is made up of two parts; the exoskeleton (carapace and plastron) and the endoskeleton (internal bones). The endoskeleton consists of two main groups; the appendicular skeleton (limb bones and girdles) and the axial skeleton (ribs, vertebrae and skull).
A very brief description of the bones;
- Skull and lower Jaw Mandible - consisting of many small bones fused together
- Cervical Vertebrae - neck bones
- Dorsal Vertebrae - a rib branches off each dorsal vertebra, and these are fused to the carapace
- Humerus - upper foreleg bones
- Radius and Ulna - lower foreleg bones
- Carpals – wrist bones of front legs
- Phalanges – digit bones
- Scapula and Coracoid – bones of the pectoral girdle
- Femur – upper rear leg bones
- Fibula and Tibia – lower rear leg bones
- Tarsals – ankle bones of rear legs
- Metatarsals – bones of the feet
The muscular system in tortoises is quite different to that of most other vertebrates. Muscles that are used to flex and twist the backbone in nearly all other animals are almost completely absent in tortoises, because their spine is rigid. However, they have enormously well-developed muscles in their flexible necks, allowing them to retract into their shells.
They also have well developed leg and tail muscles, and possess considerably powerful muscles in their lower mandibles – if you have ever tried to pry open a reluctant tortoise’s mouth then you will have “felt” the full force of these muscles in action.
Although the tortoise has the same digestive organs as most other vertebrates, it has adapted to cope extremely well in severe conditions where food and water conservation is at a premium.
The tortoise can extract and assimilate moisture and nutrients from food items which to the human eye look completely “dried up” and would be of no nutritional benefit to most other living creatures. Tortoises can achieve this by means of a “hindgut system” which is effectively like having two digestive tracts, the latter of which reabsorbs any moisture from the waste products already produced by the former. Arid habitat tortoises can also effectively split up their urinary waste in the kidneys, storing valuable water in the bladder and only expelling the waste product in the form of insoluble uric acid crystals. The crystals have a similar look to toothpaste when passed.
The main difference between a tortoise's respiration and ours is the volume of CO2 they can contain in their blood. Normally when we hold our breath, the CO2 in the blood makes us want to start breathing again, but tortoises are much more tolerant of this, allowing them to inhale less frequently. If you startle a tortoise, its first reaction is to retract into the shell, and the only way a tortoise can do this is by emptying its lungs. A frightened tortoise will consequently remain for some time with almost empty lungs whilst in this state.
Tortoises, like other reptiles, are cold blooded. This means they need to seek an external active heat source to keep their body at an optimum temperature range, enabling their vital organs to function properly. Tortoises do this by positioning their carapaces toward the sun (or an artificial radiant heat source in captive situations), a practice which has continued from long before evolution had even considered creating a mammal.
The colouration or “melanism” of a tortoise’s carapace varies in accordance with its geographical surroundings i.e. tortoises from extremely hot places like parts of Egypt and Morocco tend to be lighter in colour, thus reflecting some of the searing heat. Turkish Testudo Ibera, for example, are extremely melanistic, enabling them to absorb more heat.
A tortoise's carapace incorporates tiny pores which help to trap in the radiant heat. It's worth noting that an owner should never use any oils on their pet's shell, as this will significantly hinder its thermoregulation capabilities.
Just like ours, a tortoise's heart pumps blood to all the vital organs and muscle groups, but a large amount of blood is also effectively sent underneath the carapace to “warm up” before continuing to circulate around the body.
An external basking temperature range of between 25–35°C is needed to allow the animal to internally thermoregulate its body temperature to the 30°C required for optimum metabolic efficiency.
Tortoises are extremely sensitive creatures. Despite popular belief, they can feel the slightest touch to their skin and shells. It was once thought that a tortoise’s carapace was void of any nerve endings, and as such horrific acts were often carried out and even recommended by media and literature of that time. This included drilling holes through the shells and tethering the animals.
At the time of writing, there has been little study on the effectiveness of a tortoise's eyesight. We know that tortoises have good all-around vision due to having their eyes at the side of their head, as opposed to having binocular vision like humans, but we do not know how sensitive or acute their vision actually is.
It is thought that tortoises certainly use their eyes to catch movement but perhaps have difficulty picking out detail. Some tortoise owners insist that their pet is fond of certain colours, often red, although whether it is an actual colour preference or whether the animal is merely associating it with a favourite food item is open to debate.
Numerous publications have tried to give the impression that tortoises are virtually deaf. It is fair to say that their hearing is significantly different to ours and perhaps less sensitive to high frequency sounds, but they are by no means deaf.
The ears themselves have no external auricle and can be best described as simple ear “flaps” or “scales” which are located behind the tortoise’s eyes towards the rear of the head.
This is the primary sense that a tortoise uses, and it is considerably more acute than most owners realise. A tortoise relies heavily on scent for its daily activities, including finding food, finding a mate, finding appropriate nesting areas, smelling for predators, etc. A tortoise uses smell for everything it does.
Despite their strange appearance and clumsy looking way of rambling around, tortoises are in fact very agile. They are incredible diggers and even better climbers; this is due, in part, to their excellent sense of balance.
The sense of balance becomes even finer as the tortoise matures. Hatchlings observed in captive situations always notoriously seem to end up on their backs, while adults seem to be sturdier on their feet, although this does vary from one individual to another. |
Giant Andean condors are some of nature’s best garbage disposal units. These gigantic birds—as heavy as 15 kilograms with a three-meter wingspan—soar through the Patagonian sky, scanning the ground below them for carrion. They pick the landscape clean, feasting on whatever carcasses they can find. Historically, the scavengers have been known to eat everything from dead llamas and rheas to sea lions and beached whales.
But the global decline in marine mammals over the past century—a result of industrial whaling and other forces—means the birds are now much less likely to find meals near the sea. Most of the condors spend their nights on the western side of the Andes Mountains, along the southern Chilean coast. The depletion of marine mammals here means that the birds are now forced to make 150-kilometer-long flights inland to find food on the eastern side of the mountain range.
“They breed on both sides of the Andes, but interestingly they are now just feeding on the Argentine side, where large amounts of herbivores exist,” says Sergio Lambertucci, a conservation biologist at the National University of Comahue in Argentina, who led a recent study analyzing the condors’ shifting diet.
To see how the decline in marine mammals has affected the condors, Lambertucci and his colleagues tracked down 24 Andean condor specimens in museums around the world. They compared the proportions of key isotopes in the birds’ feathers with those taken from 53 living condors. All the museum specimens were collected in Patagonia between 1841 and 1939, while the modern birds were captured between 2010 and 2013.
“Stable isotopes act as an identifier of the type of food consumed by these scavengers,” says ecologist Joan Navarro, from the Institute of Marine Sciences in Barcelona, Spain, who was also involved in the project. The isotopes give the researchers a way to estimate the contributions of different food sources to the birds’ diet.
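The arithmetic behind such an estimate can be as simple as a two-source mixing model. The δ15N end-member values and feather values below are invented purely to show how an isotope signature translates into a marine-versus-terrestrial diet fraction; the study's actual tracers, models, and data differ.

```python
# Two-source stable isotope mixing model (illustrative end-member values only).
# Estimates the fraction of marine-derived food in a condor's diet from feather d15N.

def marine_fraction(sample_d15n: float, marine_d15n: float, terrestrial_d15n: float) -> float:
    """Linear mixing: sample = f*marine + (1-f)*terrestrial, solved for f."""
    return (sample_d15n - terrestrial_d15n) / (marine_d15n - terrestrial_d15n)

# Assumed end-members (not from the paper): marine prey ~ +18 permil, terrestrial prey ~ +6 permil.
historic_feather = 10.0   # assumed d15N of a museum specimen
modern_feather = 7.0      # assumed d15N of a living bird

for label, value in [("historic", historic_feather), ("modern", modern_feather)]:
    f = marine_fraction(value, marine_d15n=18.0, terrestrial_d15n=6.0)
    print(f"{label}: ~{f:.0%} marine-derived diet")
```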
The scientists found that in the past, marine mammals made up one-third of the Andean condor’s diet. Today, that proportion has dropped to just eight percent.
Lambertucci says the change in diet can be explained by the massive reductions in marine mammal populations over the past century and by an increasing human use of the coastline, which makes it harder for condors to reach carcasses near the sea. He adds that the birds can’t find food on the western slopes of the Andes either because they are covered in thick rainforest, which the birds can’t see through. In contrast, the eastern slopes lead to open grasslands and desert of the Patagonian Steppe.
The more restricted diet is bad for the condors, which are critically endangered in the northern Andes, because they have to work harder to find food. “A century ago they had a food source a few kilometers from their nests, but now they need to fly dozens of kilometers daily,” Lambertucci says.
Douglas McCauley, an ecologist at the University of California, Santa Barbara, who was not involved in the research, says the study shows how population changes in one species can echo through the entire food web.
“By far the most scary thing about extinction or animal loss are the impacts that these changes can have on the network of interactions that link life together in nature,” McCauley says.
We know there are fewer whales in the Southern Ocean—and that in and of itself is not good, McCauley says. “But we are only beginning to understand and detect the repercussions that come from these losses.” |
Medical sign
A medical sign is an objective indication of a disease or medical condition that can be detected during the examination of a patient. Signs may have no meaning for, and can even go unnoticed by, the patient, but may be full of meaning for the healthcare provider, and are often significant in assisting a healthcare provider in diagnosis of medical condition(s) responsible for the patient's symptoms.
The term sign is not to be confused with the term indication, which denotes a valid reason for using some treatment.
- 1 Signs and semiotics
- 2 Eponymous signs
- 3 Signs versus symptoms
- 4 Types of signs
- 5 Technological development creating signs detectable only by physicians
- 6 Signs as tests
- 7 Examples of signs
- 8 See also
- 9 References
- 10 External links
Signs and semiotics
The art of interpreting clinical signs was originally called semiotics (a term now used for the study of sign communication in general) in English. This term, then written semeiotics (derived from the Greek adjective σημειοτικός: semeiotikos, "to do with signs"), was first used in English in 1670 by Henry Stubbes (1631–1676), to denote the branch of medical science relating to the interpretation of signs:
- …nor is there any thing to be relied upon in Physick, but an exact knowledge of medicinal phisiology (founded on observation, not principles), semeiotics, method of curing, and tried (not excogitated, not commanding) medicines…
Signs versus symptoms
Signs are different from symptoms, the subjective experiences, such as fatigue, that patients might report to their examining physician.
For convenience, signs are commonly distinguished from symptoms as follows: Both are something abnormal, relevant to a potential medical condition, but a symptom is experienced and reported by the patient, while a sign is discovered by the physician during examination of the patient.:75
A slightly different definition views signs as any indication of a medical condition that can be objectively observed (i.e., by someone other than the patient), whereas a symptom is merely any manifestation of a condition that is apparent to the patient (i.e., something consciously affecting the patient). From this definition, it can be said that an asymptomatic patient is uninhibited by disease. However, a doctor may discover the sign hypertension in an asymptomatic patient, who does not experience "dis-ease", and the sign indicates a disease state that poses a hazard to the patient. With this set of definitions, there is some overlap – certain things may qualify as both a sign and a symptom (e.g., a bloody nose).
Lester S. King, author of Medical Thinking, argues that an "essential feature" of a sign is that there is both a sign [or "signifier"] and a "thing signified". And, because "the essence of a sign is to convey information", it can only be a sign, properly speaking, if it has meaning. Therefore, "a sign ceases to be a sign when you cannot read it".:73–74 A person, who has and exercises the knowledge required to understand the significance or indication or meaning of the sign, is necessary for something to be a complete sign. A physical phenomenon that is not actually interpreted as a sign pointing to something else is, in medicine, merely a symptom. Thus, King rejects "these present-day views [distinguishing signs from symptoms based on patient-subjective versus clinician-objective], however widely accepted, as quite faulty, at variance not only with ordinary usage but with the entire history of medicine.":77
Types of signs
Medical signs may be classified by the type of inference that may be made from their presence,:80–81 for example:
- Prognostic signs (from progignṓskein, προγιγνώσκειν, "to know beforehand"): signs that indicate the outcome of the current bodily state of the patient (i.e., rather than indicating the name of the disease). Prognostic signs always point to the future. Perhaps the most famous prognostic sign is the facies Hippocratica:
"[If the patient's facial] appearance may be described thus: the nose sharp, the eyes sunken, the temples fallen in, the ears cold and drawn in and their lobes distorted, the skin of the face hard, stretched and dry, and the colour of the face pale or dusky.… and if there is no improvement within [a prescribed period of time], it must be realized that this sign portends death."
- Anamnestic signs (from anamnēstikós, ἀναμνηστικός, "able to recall to mind"): signs that (taking into account the current state of a patient's body), indicate the past existence of a certain disease or condition. Anamnestic signs always point to the past. (Whenever we see a man walking with a particular gait, with one arm paralysed in a particular way, we say "This man has had a stroke"; and, if we see a woman in her late 50s with one arm distorted in a particular way, we say "She had polio as a child".)
- Diagnostic signs (from diagnōstikós, διαγνωστικός, "able to distinguish"): signs that lead to the recognition and identification of a disease (i.e., they indicate the name of the disease).
- Pathognomonic signs (from pathognomonikós, παθογνωμονικός, "skilled in diagnosis", derived from páthos, πάθος, "suffering, disease", and gnṓmon, γνώμον, "judge, indicator"): the particular signs whose presence means, beyond any doubt, that a particular disease is present. They represent a marked intensification of a diagnostic sign. (An example would be the palmar xanthomata seen on the hands of people suffering from hyperlipoproteinaemia.) Singular pathognomonic signs are relatively uncommon.
"[Thus] a symptom is a phenomenon, caused by an illness and observable directly in experience. We may speak of it as a manifestation of illness. When the observer reflects on that phenomenon and uses it as a base for further inferences, then that symptom is transformed into a sign. As a sign it points beyond itself — perhaps to the present illness, or to the past or to the future. That to which a sign points is part of its meaning, which may be rich and complex, or scanty, or any gradation in between. In medicine, then, a sign is thus a phenomenon from which we may get a message, a message that tells us something about the patient or the disease. A phenomenon or observation that does not convey a message is not a sign. The distinction between signs and symptom rests on the meaning, and this is not perceived but inferred.":81
Technological development creating signs detectable only by physicians
Prior to the nineteenth century there was little difference in the powers of observation between physician and patient. Most medical practice was conducted as a joint co-operative interaction between the physician and the patient as equals. Whilst each noticed much the same things, the physician had a more informed interpretation of those things: "the physicians knew what the findings meant and the layman did not".:82
Advances in the 19th century
- The 1808 introduction of the percussion technique:
"The process through which "the physician can assess the state of the underlying lung by sensing the character of vibrations by gentle taps on the chest wall" [something which] greatly facilitated the diagnosis of pneumonia and other respiratory diseases."
The techniques, which had been first described by the Viennese physician Leopold Auenbrugger (1722–1809) in 1761, became far more widely known following the publication of Jean-Nicolas Corvisart's translation of Auenbrugger's work in 1808.
- The 1819 introduction by René Laënnec (1781–1826) of the technique of auscultation (using a stethoscope to listen to the circulatory and respiratory functions of the body). Laënnec's publication was translated into English, 1821–1834, by John Forbes.
- The 1846 introduction by surgeon John Hutchinson (1811–1861) of the spirometer, an apparatus for assessing the mechanical properties of the lungs via measurements of forced exhalation and forced inhalation. (The recorded lung volumes and air flow rates are used to distinguish between restrictive disease (in which the lung volumes are decreased: e.g., cystic fibrosis) and obstructive diseases (in which the lung volume is normal but the air flow rate is impeded; e.g., emphysema).)
- The 1851 invention by Hermann von Helmholtz (1821–1894) of the ophthalmoscope, which allowed physicians to examine the inside of the human eye.
- The 1895 clinical use of X-rays which began almost immediately after they had been discovered that year by Wilhelm Conrad Röntgen (1845–1923).
- The 1896 introduction of the sphygmomanometer, designed by Scipione Riva-Rocci (1863–1937), to measure blood pressure.
Alteration of the relationship between physician and patient
The introduction of the techniques of percussion and auscultation into medical practice altered the relationship between physician and patient in a very significant way, specifically because these techniques relied almost entirely upon the physician listening.
Not only did this greatly reduce the patient's capacity to observe and contribute to the process of diagnosis, it also meant that the patient was often instructed to stop talking, and remain silent.
As these sorts of evolutionary changes continued to take place in medical practice, it was increasingly necessary to uniquely identify data that was accessible only to the physician, and to be able to differentiate those observations from others that were also available to the patient, and it just seemed natural to use "signs" for the class of physician-specific data, and "symptoms" for the class of observations available to the patient.
King proposes a more advanced notion; namely, that a sign is something that has meaning, regardless of whether it is observed by the physician or reported by the patient:The belief that a symptom is a subjective report of the patient, while a sign is something that the physician elicits, is a 20th-century product that contravenes the usage of two thousand years of medicine. In practice, now as always, the physician makes his judgments from the information that he gathers. The modern usage of signs and symptoms emphasizes merely the source of the information, which is not really too important. Far more important is the use that the information serves. If the data, however derived, lead to some inferences and go beyond themselves, those data are signs. If, however, the data remain as mere observations without interpretation, they are symptoms, regardless of their source. Symptoms become signs when they lead to an interpretation. The distinction between information and inference underlies all medical thinking and should be preserved.:89
Signs as tests
In some senses, the process of diagnosis is always a matter of assessing the likelihood that a given condition is present in the patient. In a patient who presents with haemoptysis (coughing up blood), the haemoptysis is very much more likely to be caused by respiratory disease than by the patient having broken their toe. Each question in the history taking allows the medical practitioner to narrow down their view of the cause of the symptom, testing and building up their hypotheses as they go along.
Examination, which is essentially looking for clinical signs, allows the medical practitioner to see if there is evidence in the patient's body to support their hypotheses about the disease that might be present.
A patient who has given a good story to support a diagnosis of tuberculosis might be found, on examination, to show signs that lead the practitioner away from that diagnosis and more towards sarcoidosis, for example. Examination for signs tests the practitioner's hypotheses, and each time a sign is found that supports a given diagnosis, that diagnosis becomes more likely.
Special tests (blood tests, radiology, scans, a biopsy, etc.) also allow a hypothesis to be tested. These special tests are also said to show signs in a clinical sense. Again, a test can be considered pathognomonic for a given disease, but in that case the test is generally said to be "diagnostic" of that disease rather than pathognomonic. An example would be a history of a fall from a height, followed by a lot of pain in the leg. The signs (a swollen, tender, distorted lower leg) are only very strongly suggestive of a fracture; it might not actually be broken, and even if it is, the particular kind of fracture and its degree of dislocation need to be known, so the practitioner orders an x-ray. The x-ray film shows a fractured tibia, so the film is said to be diagnostic of the fracture.
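One way to make "becomes more likely" concrete is Bayesian updating with a likelihood ratio. The sensitivity, specificity, and pre-test probability below are invented for illustration and are not clinical values.

```python
# Toy Bayesian update: how a positive sign or test shifts the probability of a diagnosis.
# All numbers below are assumed for illustration only; they are not clinical data.

def post_test_probability(pre_test_prob: float, sensitivity: float, specificity: float) -> float:
    """Update a pre-test probability after a positive finding, via the likelihood ratio."""
    lr_positive = sensitivity / (1.0 - specificity)    # LR+ = sensitivity / (1 - specificity)
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)   # probability -> odds
    post_odds = pre_odds * lr_positive                 # Bayes' theorem in odds form
    return post_odds / (1.0 + post_odds)               # odds -> probability

# A hypothetical sign with assumed 80% sensitivity and 90% specificity,
# starting from an assumed 10% pre-test probability:
print(f"post-test probability: {post_test_probability(0.10, 0.80, 0.90):.2f}")  # ~0.47
```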
Examples of signs
- ^ eMedicine/Stedman Medical Dictionary Lookup!
- ^ Definition at University of Western Ontario
- ^ Stubbe, H. (Henry Stubbes), The Plus Ultra reduced to a Non Plus: Or, A Specimen of some Animadversions upon the Plus Ultra of Mr. Glanvill, wherein sundry Errors of some Virtuosi are discovered, the Credit of the Aristotelians in part Re-advanced; and Enquiries made..., (London), 1670, p. 75
- ^ See list of eponymous medical signs, and "Who Named It?" for more information on eponymous signs.
- ^ a b c d e f g King, Lester S. (1982). Medical Thinking: A Historical Preface. Princeton, NJ: Princeton University Press. ISBN 0691082979.
- ^ Chadwick, J. & Mann, W.N.(trans.) (1978). Hippocratic writings. Harmondsworth, UK: Penguin. pp. 170–171. ISBN 0-14-044451-3.
- ^ a b Jewson, N. D., "Medical Knowledge and the Patronage System in 18th Century England", Sociology, Vol.8, No.3, (1974), pp. 369–385.
- ^ a b Jewson, N. D., "The Disappearance of the Sick Man from Medical Cosmology, 1770–1870", Sociology, Vol.10, No.2, (1976), pp. 225–244.
- ^ Tsouyopoulos N (1988). "The mind-body problem in medicine (the crisis of medical anthropology and its historical preconditions)". Hist Philos Life Sci 10 Suppl: 55–74. PMID 3413276.
- ^ Weatherall, D. (1996). Science and the Quiet Art: The Role of Medical Research in Health Care. New York: W. W. Norton & Company. pp. 46. ISBN 0-393-31564-9.
- Who Named It?: eponymous signs.
Despite seeming like a relatively stable place, the Earth's surface has changed dramatically over the past 4.5 billion years. Mountains have been built and eroded, continents and oceans have moved great distances, and the Earth has fluctuated from being extremely cold and almost completely covered with ice to being very warm and ice-free. These changes typically occur so slowly that they are barely detectable over the span of a human life, yet even at this instant, the Earth's surface is moving and changing. As these changes have occurred, organisms have evolved, and remnants of some have been preserved as fossils. A fossil can be studied to determine what kind of organism it represents, how the organism lived, and how it was preserved.
Potassium-Argon Dating Methods
GSA Bulletin, 69 (2). Lipson's companion paper on the potassium-argon dating of sedimentary rocks is discussed. Some limitations in the present geological time scale are considered. The sedimentary minerals to which K-A dating may be applied and methods used in the preparation of glauconite for analysis are described. Possible errors due to contamination, argon inheritance, and argon loss by diffusion are discussed.
Evidence by Gentner and co-workers for argon diffusion in sylvite is reviewed critically.
Limitations of potassium-argon dating: not all rock types are suitable for this method of dating, and it can only date rocks around …
Argon-argon dating works because potassium decays to argon with a known decay constant. However, potassium also decays to 40Ca much more often than it decays to 40Ar. This necessitates the inclusion of a branching ratio in the calculation. This led to the formerly popular potassium-argon dating method.
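Written out, the standard K-Ar age equation folds this branching into the ratio between the partial decay constant for argon and the total decay constant; the constants shown are the commonly used Steiger and Jäger (1977) values.

```latex
% Standard K-Ar age equation, with the branching to 40Ar made explicit.
t = \frac{1}{\lambda}\,\ln\!\left(1 + \frac{\lambda}{\lambda_{e}}\cdot
    \frac{^{40}\mathrm{Ar}^{*}}{^{40}\mathrm{K}}\right),
\qquad \lambda = \lambda_{e} + \lambda_{\beta} \approx 5.543\times10^{-10}\,\mathrm{yr}^{-1},
\qquad \lambda_{e} \approx 0.581\times10^{-10}\,\mathrm{yr}^{-1}
```

Here 40Ar* is the radiogenic argon, λe is the partial decay constant for decays to 40Ar (by electron capture), and λβ is the partial constant for decays to 40Ca; their ratio λe/λ of roughly 0.11 is the branching ratio referred to above.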
They are fossils captured in volcanic rock that can be given an absolute date. By comparing the ratio of potassium to argon, scientists can gauge how long this natural clock has been ticking, even over spans of twenty thousand years, a mere moment in Earth's 4-billion-year history.
Conventional K-Ar ages for granitic, volcanic, and metamorphic rocks collected in this area. New age determinations with descriptions of sample locations and analytical details. Compilation of isotopic and fission track age determinations, some previously published. Data for the tephrochronology of Pleistocene volcanic ash, carbon, Pb-alpha, common-lead, and U-Pb determinations on uranium ore minerals are not included.
Presents data for mineral deposits and unaltered and hydrothermally altered volcanic rocks. Data presented were acquired in three USGS labs by three different geochronologists. Analytical methods and data derived from each lab are presented separately. Digital compilation and reinterpretation of published and unpublished geologic mapping of Alaska. This map, compiled from geologic mapping conducted by the U. A revision of DDS correcting locations and providing the data in more convenient formats.
Digital geologic map information with a consistent set of attributes, part of a national compilation of similar maps. Available in formats compatible with GIS. Map, report, and geospatial data on the geology of the northeastern part of the Dillingham quadrangle, Alaska.
Since the early twentieth century scientists have found ways to accurately measure geological time. The discovery of radioactivity in uranium by the French physicist Henri Becquerel in 1896 paved the way for measuring absolute time. Shortly after Becquerel's find, Marie Curie, a French chemist, isolated another highly radioactive element, radium.
The realisation that radioactive materials emit rays indicated a constant change of those materials from one element to another.
One of the most widely used dating methods is the potassium-argon method. It's simple; the geologist will change his assumed history for that rock.
It assumes that all the argon-40 formed in the potassium-bearing mineral accumulates within it and that all the argon present is formed by the decay of potassium-40. The method is effective for micas, feldspar, and some other minerals.
This dating method has a practical minimum age limit. The potassium isotope 40K has a half-life of about 1.25 billion years.
Brief History of the Potassium-Argon Dating Laboratory in the ANU. The laboratory was initiated by Professor J C Jaeger, head of the Department.
I have just completed the data reduction on a low potassium basalt from the Medicine Lake, California, the basalt of Tionesta. The recent development of small volume low-background noble gas extraction systems and low-background high-sensitivity mass spectrometers have improved our ability to more accurately and precisely date geologic events.
However, the dating of Quaternary, low potassium rocks continues to test the limits of the method because of small quantities of radiogenic argon and large atmospheric argon contamination. In these early studies the vertical succession of sedimentary rocks and structures were used to date geologic units and events relatively. In addition, faunal succession and the use of “key” diagnostic fossils were used to correlate lithologic units over wide geographic areas.
Although lithologic units could be placed within a known sequence of geologic periods of roughly similar age, absolute ages, expressed in units of years, could not be assigned. Until the twentieth century geologists were limited to these relative dating methods. For a complete discussion on the development of the Geologic time scale see Berry, Following the discovery of radioactivity by Becquerel a,b,c near the end of the nineteenth century, the possibility of using this phenomenon as a means for determining the age of uranium-bearing minerals was demonstrated by Rutherford In his study Rutherford measured the U and He He is an intermediate decay product of U contents of uranium-bearing minerals to calculate an age.
One year later Boltwood developed the chemical U-Pb method. These first “geochronology studies” yielded the first absolute ages from geologic material and indicated that parts of the Earth’s crust were hundreds of millions of years old. During this same period of time Thomson and Campbell and Wood demonstrated that potassium was radioactive and emitted beta-particles.
History of the K/Ar-Method of Geochronology
The potassium-argon (K-Ar) isotopic dating method is especially useful for determining the age of lavas. Developed in the 1950s, it was important in developing the theory of plate tectonics and in calibrating the geologic time scale. Potassium occurs in two stable isotopes (41K and 39K) and one radioactive isotope (40K). Potassium-40 decays with a half-life of about 1,250 million years, meaning that half of the 40K atoms are gone after that span of time.
Its decay yields argon and calcium in a ratio of about 11 to 89. The K-Ar method works by counting these radiogenic 40Ar atoms trapped inside minerals. What simplifies things is that potassium is a reactive metal and argon is an inert gas: potassium is always tightly locked up in minerals whereas argon is not part of any minerals. Argon makes up 1 percent of the atmosphere. So assuming that no air gets into a mineral grain when it first forms, it has zero argon content.
That is, a fresh mineral grain has its K-Ar “clock” set at zero. The method relies on satisfying some important assumptions:. Given careful work in the field and in the lab, these assumptions can be met. The rock sample to be dated must be chosen very carefully. Any alteration or fracturing means that the potassium or the argon or both have been disturbed.
Potassium argon dating history
Potassium, an alkali metal and the Earth's eighth most abundant element, is common in many rocks and rock-forming minerals. The quantity of potassium in a rock or mineral is variable and proportional to the amount of silica present. Therefore, mafic rocks and minerals often contain less potassium than an equal amount of silicic rock or mineral. Potassium can be mobilized into or out of a rock or mineral through alteration processes.
Due to the relatively heavy atomic weight of potassium, insignificant fractionation of the different potassium isotopes occurs.
Potassium-Argon Dating. Potassium-argon dating is the only viable technique for dating very old archaeological materials. Geologists have used this method to date rocks as much as 4 billion years old. It is based on the fact that some of the radioactive isotope of potassium, potassium-40 (40K), decays to the gas argon as argon-40 (40Ar). By comparing the proportion of 40K to 40Ar in a sample of volcanic rock, and knowing the decay rate of 40K, the date that the rock formed can be determined.
How Does the Reaction Work? Potassium (K) is one of the most abundant elements in the Earth's crust. One out of every 10,000 potassium atoms is radioactive potassium-40 (40K). These each have 19 protons and 21 neutrons in their nucleus. If the nucleus captures one of the atom's electrons, a proton is converted into a neutron. With 18 protons and 22 neutrons, the atom has become argon-40 (40Ar), an inert gas. For every 100 40K atoms that decay, 11 become 40Ar. How is the Atomic Clock Set?
Most people envision radiometric dating by analogy to sand grains in an hourglass: the grains fall at a known rate, so that the ratio of grains between top and bottom is always proportional to the time elapsed. In principle, the potassium-argon K-Ar decay system is no different. Of the naturally occurring isotopes of potassium, 40K is radioactive and decays into 40Ar at a precisely known rate, so that the ratio of 40K to 40Ar in minerals is always proportional to the time elapsed since the mineral formed [ Note: 40K is a potassium atom with an atomic mass of 40 units; 40Ar is an argon atom with an atomic mass of 40 units].
In theory, therefore, we can estimate the age of the mineral simply by measuring the relative abundances of each isotope. Over the past 60 years, potassium-argon dating has been extremely successful, particularly in dating the ocean floor and volcanic eruptions. K-Ar ages increase away from spreading ridges, just as we might expect, and recent volcanic eruptions yield very young dates, while older volcanic rocks yield very old dates.
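As a numerical illustration of that proportionality, the standard age equation can be evaluated for a hypothetical measured ratio using the commonly quoted decay constants for 40K; the ratio below is invented for illustration.

```python
# K-Ar model age from a measured 40Ar*/40K ratio (illustrative ratio; standard constants).
import math

LAMBDA_TOTAL = 5.543e-10   # total decay constant of 40K, per year
LAMBDA_EC = 0.581e-10      # partial constant for decays to 40Ar (electron capture), per year

def k_ar_age(ar40_over_k40: float) -> float:
    """Return the model age in years for a measured radiogenic 40Ar*/40K molar ratio."""
    return (1.0 / LAMBDA_TOTAL) * math.log(1.0 + (LAMBDA_TOTAL / LAMBDA_EC) * ar40_over_k40)

# An assumed measurement: 0.0105 mol of radiogenic 40Ar per mol of 40K remaining.
ratio = 0.0105
print(f"model age: about {k_ar_age(ratio) / 1e6:.0f} million years")
```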
Though we know that K-Ar dating works and is generally quite accurate, the method does have several limitations. First of all, the dating technique assumes that upon cooling, potassium-bearing minerals contain a very tiny amount of argon (an amount equal to that in the atmosphere). While this assumption holds true in the vast majority of cases, excess argon can occasionally be trapped in the mineral when it crystallizes, causing the K-Ar model age to be a few hundred thousand to a few million years older than the actual cooling age.
Secondly, K-Ar dating assumes that very little or no argon or potassium was lost from the mineral since it formed. Because argon is a noble gas that does not bond to the mineral lattice, it can escape if the rock is later heated or altered, so this assumption must be checked carefully. Finally—and perhaps most importantly—the K-Ar dating method assumes that we can accurately measure the ratio between 40K and 40Ar. I emphasize this assumption, because it is so commonly overlooked by those unfamiliar with radiometric dating!
We often take it for granted that measuring chemical concentrations should be an easy task, when it is not. |
What Causes Pressure Variations and Winds?
THE MOVEMENT OF AIR IN THE ATMOSPHERE produces wind, or movement of air relative to Earth's surface. Circulation in the atmosphere is caused by pressure differences generated primarily by uneven insolation. Air flows from areas of higher pressure, where air sinks, to areas of lower pressure, where air rises.
How Do We Measure the Strength and Direction of Wind?
Wind speed and direction are among the most important measurements in the study of weather and climate. On short time scales, wind can indicate which way a weather system is moving and the strength of a storm. When considered over longer time scales, winds indicate general atmospheric circulation patterns, a key aspect of climate.
1. Wind direction can be assessed as easily as throwing something light into the air and tracking which way it goes, but it is best done with a specially designed instrument that measures both wind speed and direction. Wind speed is expressed in units of distance per time (km/hr) or as knots, a unit expressing nautical miles per hour. One knot is equal to 1.15 miles/hr or 1.85 km/hr (a short conversion sketch follows this list).
2. Wind direction is conveyed as the direction from which the wind is blowing. Wind direction is commonly expressed with words, such as a northerly wind (blowing from the north). It can instead be described as an azimuth in degrees clockwise from north. In this scheme, north is 0°, east is 090°, south is 180°, and west is 270°.
3. The atmosphere also has vertical motion, such as convection due to heating of the surface by insolation. A local, upward flow is an updraft and a downward one is a downdraft.
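The unit conversions and the azimuth convention described above can be captured in a few lines of code; this is purely an illustrative sketch, not part of the original text.

```python
def knots_to_kmh(knots):
    """Convert knots (nautical miles per hour) to km/hr; one knot = 1.852 km/hr."""
    return knots * 1.852

def azimuth_to_compass(azimuth_deg):
    """Map an azimuth (degrees clockwise from north) to an 8-point compass direction."""
    points = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]
    return points[int(((azimuth_deg % 360) + 22.5) // 45) % 8]

print(knots_to_kmh(10))         # 18.52 km/hr
print(azimuth_to_compass(270))  # W -- a westerly wind blows from the west
```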
What Causes Air to Move?
Air moves because there are variations in air pressures, in density of the air, or in both (recall that pressure and density are related via the Ideal Gas Law). Such pressure and density variations are mostly caused by differential heating of the air (due to differences in insolation) or by air currents that converge or diverge. The atmosphere is not a closed container, so changes in volume (i.e., air being compressed or expanded) come into play. These volume changes can make air pile up or spread out, resulting in variations in air pressure.
1. Movement of air occurs to equalize a difference in air pressure between two adjacent areas, that is, a pressure gradient. Air molecules in high-pressure zones are packed more closely together than in low-pressure zones, so gas molecules in high-pressure zones tend to spread out toward low-pressure zones. As a result, air moves from higher to lower pressure, in the simplest case (as shown here) perpendicular to isobars.
2. High-pressure zones and low-pressure zones can be formed by atmospheric currents that converge or diverge. Converging air currents compress more air into a smaller space, increasing the air pressure. Diverging air currents move air away from an area, causing low pressure. Forces associated with converging and diverging air are called dynamic forcing.
3. Most variations in air pressure and most winds, however, are caused by thermal effects, specifically differences in insolation from place to place. This cross section shows a high-pressure zone caused by the sinking of cold, high-altitude air toward the surface. In the adjacent low-pressure zone, warmer near-surface temperatures have caused air to expand, become less dense, and rise, causing low pressure. Near the surface, air would flow away from the high pressure and toward the low pressure. Different air currents would form higher, in the upper troposphere, to accommodate the sinking and rising of the air. In an atmosphere that doesn't have strong updrafts or downdrafts, at any altitude the upward pressure gradient will approximately equal the downward force of gravity.
What Forces Result from Differences in Air Pressure?
Differences in air pressure, whether caused by thermal effects or dynamic forcing, result in a pressure gradient between adjacent areas of high and low pressure. Associated with this pressure gradient are forces that cause air to flow. Pressure gradients can exist vertically in the atmosphere or laterally from one region to another.
1. Elevation differences cause the largest differences in air pressure. At high elevations, there is less atmospheric mass overhead to exert a downward force on the atmosphere. As a result, density decreases with elevation, and air pressure does too.
2. These vertical variations in air pressure cause a pressure gradient in the atmosphere, with higher pressures at low elevations and lower pressures in the upper atmosphere. This pressure gradient can be thought of as a force directed from high pressures to lower ones. This pressure-gradient force is opposed by the downward-directed force of gravity, which is strongest closer to Earth's surface.
3. Lateral variations in air pressure also set up horizontal pressure gradients, and a pressure-gradient force directed from zones of higher pressure to zones of lower pressure. On the map below, the pressure-gradient force acts to cause air to flow from high pressure toward lower pressures, as illustrated by the blue arrows on the map.
4. Places where isobars are close have a steep pressure gradient, and a strong pressure-gradient force, so movement of the atmosphere (i.e., winds) will generally be strong in these areas.
5. Places where isobars are farther apart have a gentler pressure gradient, a weak pressure-gradient force, and generally lighter winds (a worked example of the pressure-gradient calculation follows this list). Although winds tend to blow from high to low pressure, other factors, such as Earth's rotation, complicate this otherwise simple picture, causing wind patterns to be more complex and interesting.
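As a rough illustration of how isobar spacing translates into a pressure-gradient force, here is a minimal sketch (not part of the original text); the pressure difference, isobar spacing, and air density are assumed values chosen only for the example.

```python
def pressure_gradient_acceleration(dp_pa, distance_m, density=1.2):
    """Horizontal acceleration of air (m/s^2) produced by a pressure difference
    dp_pa (Pa) acting over distance_m (m), for air of the given density (kg/m^3)."""
    return dp_pa / (density * distance_m)

# Assumed example: isobars 4 hPa (400 Pa) apart, spaced 200 km apart on a surface map.
a = pressure_gradient_acceleration(dp_pa=400.0, distance_m=200_000.0)
print(f"{a:.4f} m/s^2")  # ~0.0017 m/s^2 -- small, but acting continuously it drives strong winds
```

Halving the isobar spacing doubles this acceleration, which is why closely spaced isobars signal strong winds.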
How Does Friction Disrupt Air Flow?
As is typical for nature, some forces act to cause movement and other forces act to resist movement. For air movements, the pressure-gradient force acts to cause air movement and friction acts to resist movement.
1. Friction occurs when flowing air interacts with Earth's surface.
2. As represented in this figure by the shorter blue arrows low in the atmosphere, wind is slowed near the surface, because of friction along the air-Earth interface. As the air slows, it loses momentum (which is mass times velocity). Some momentum from the moving air can be transferred to the land, such as when strong winds pick up and move dust or cause trees to sway in the wind. It is also transferred to surface waters, causing some currents in oceans and lakes and forming surface waves.
3. Friction with Earth's surface, whether land or water, also causes the wind patterns near the surface to become more complicated. On land, air is forced to move over hills and mountains, through valleys, around trees and other plants, and over and around buildings and other constructed features. As a result, the flow patterns become more curved and complex, or turbulent, near the surface, with local flow paths that may double back against the regional flow, like an eddy in a flowing river. Friction from the surface is mostly restricted to the lower 1 km of the atmosphere, called the friction layer.
4. Stronger winds occur aloft, in part because these areas are further removed from the frictional effects of Earth's surface. Some friction occurs internally to the air, even at these heights, because adjacent masses of air can move at different rates or in different directions. Friction can also accompany vertical movements in updrafts and downdrafts. |
New Horizons blew me away with the first images of Pluto. But there was more to the Pluto system than just a stunning world filled with nitrogen glaciers, icy peaks and smooth plains.
Pluto’s largest moon Charon held a few surprises of its own. Deep canyons carve their way across the mostly gray surface. I say mostly gray because Charon’s most stunning feature is an enormous red splotch at the moon’s north pole.
An enhanced color view of Charon makes the red pop from the surrounding terrain.
Analysis of the data and images shows the red splotch isn’t coming from Charon. It’s coming from Pluto. Methane gas leaks away from Pluto’s atmosphere and is grabbed by Charon’s gravity. It then freezes on the moon’s frigid, icy north pole. But that still doesn’t explain the red color. That’s where the sun comes in. Ultraviolet light from the sun causes the methane to transform into heavier hydrocarbons and eventually into tholins which give the surface its red color.
These same tholins are responsible for turning parts of Pluto’s ice red. The image below is also color enhanced.
Here’s how Will Grundy, a New Horizons co-investigator and lead author of the new paper, describes it:
“Who would have thought that Pluto is a graffiti artist, spray-painting its companion with a reddish stain that covers an area the size of New Mexico? Every time we explore, we find surprises. Nature is amazingly inventive in using the basic laws of physics and chemistry to create spectacular landscapes.”
Let’s take a deeper look into how the methane turns into tholins and gives Charon’s north pole the red tint we see today. New Horizons’ team used data gathered by the spacecraft and models to get a better glimpse at the weather at Charon’s north pole.
Charon’s north pole alternates between 100 years of sunlight and 100 years of darkness. During these long, dark winters – temperatures plunge to -430 degrees Fahrenheit. This is important because it’s cold enough for methane gas to freeze into a solid.
According to Grundy, methane molecules bounce around the surface of Charon until they either drift back into space or make their way to the north pole. Once at the north pole, they freeze, “forming a thin coating of methane ice that lasts until sunlight comes back in the spring,” says Grundy.
As the sun rises on the north pole, the methane ice sublimates away (passes directly from solid to gas, skipping the liquid phase). While the methane ice quickly turns into a gas, the heavier hydrocarbons formed in this process stay on the surface. Exposure to sunlight turns these hydrocarbons into a reddish material called tholins. This process has repeated for millions of years and gives us the deep, reddish color we see today.
As for the south pole? It’s sitting in a long, dark winter right now – but New Horizons was able to confirm (thanks to light reflecting from Pluto, or Pluto-shine) the same process was happening on the south pole.
Alan Stern, New Horizons principal investigator, says the discovery has implications for other small planets throughout the Kuiper Belt. “It opens up the possibility that other small planets in the Kuiper Belt with moons may create similar, or even more extensive ‘atmospheric transfer’ features on their moons.”
That possibility may be answered as New Horizons heads for its next target – 2014 MU69. New Horizons will be on the lookout for moons around the small Kuiper Belt object when it flies by on January 1, 2019. And if it finds any, you can bet the New Horizons team will be looking for the same process between Pluto and Charon there. |
Research links intermittent fasting to benefits including:
• improved markers of health
• a reduced risk of chronic health conditions
• improved brain health
While the modern world appears awash with fad diets, people seem to be giving a fair bit of attention to intermittent fasting.
As its name implies, intermittent fasting involves eating nothing for extended periods of time.
Recently, a group of scientists at the University of California, Irvine investigated the impact of fasting on our circadian clock.
Daily sleep–wake cycles, or circadian rhythms, drive the ebb and flow of human life; they control much more than just our sleepiness levels. Our 24-hour cycles involve metabolic, physiological, and behavioral changes that impact every tissue of the body.
Perhaps the most well-known way to influence the clock is via exposure to bright lights, but this isn’t the only way; food intake also impacts the clock.
Fasting is a natural phenomenon for most animals, because food is not always readily available. In times of hardship, certain metabolic changes occur to allow the body to adapt. For instance, when glucose is scarce, the liver begins to create ketones from fatty acids, which the body can use as an emergency energy source.
Fasting can essentially reprogram a variety of cellular responses. Optimally timed fasting could therefore be a strategy for positively affecting cellular functions, ultimately benefiting health and protecting against aging-associated diseases.
Weight Loss –
• Intermittent fasting may drive weight loss by lowering insulin levels.
• The body breaks down carbohydrates into glucose, which cells use for energy or convert into fat and store for later use. Insulin is a hormone that allows cells to take in glucose.
• Insulin levels drop when a person is not consuming food. During a period of fasting, it is possible that decreasing insulin levels causes cells to release their glucose stores as energy.
• Repeating this process regularly, as with intermittent fasting, may lead to weight loss.
Lower risk of type 2 diabetes
• Intermittent fasting may also have benefits for diabetes prevention, as it can help weight loss and potentially influence other factors linked to an increased risk of diabetes.
• Being overweight or obese is one of the main risk factors for developing type 2 diabetes.
Reduced risk of cancer
Obesity is a risk factor for many different cancers, so the weight loss aspect of intermittent fasting could be responsible for the reduced cancer risk that some studies hint at.
Intermittent fasting can also decrease several biological factors with links to cancer, such as insulin levels and inflammation.
There are signs that intermittent fasting could reduce the risk of cancer. However, although positive results have been seen in animal studies, further research in humans is necessary to support this claim.
reference – medical news. |
Healthy soils and landscapes are a very important natural resource. Soils support food and fibre production and perform important services such as filtering pollutants, absorbing water to reduce flooding and degrading organic waste.
Soils can easily become degraded, leading to reduced vegetation and water quality. Local Land Services supports a range of activities that are aimed at addressing key issues such as:
- soil fertility - salinity, acidity, nutrients
- soil biology - the number, condition and type of soil biota
- physical characteristics - structure, sodicity and erosion.
Supporting healthy soils and landscapes is critical to supporting resilient, profitable and sustainable farm businesses.
Industry collaborators and resources
NSW Department of Primary Industries
LLS work closely with DPI to provide up to date technical advice and support.
NSW Office of Environment and Heritage
The Office of Environment and Heritage works to protect and conserve the environment, including the natural environment, Aboriginal country, culture and heritage and our built heritage, and manages NSW national parks and reserves.
Natural Resources Commission
The Natural Resources Commission provides the NSW Government with independent advice on managing natural resources.
Department of the Environment (Commonwealth)
The Department of the Environment is responsible for all federal matters pertaining to the environment. |
The teaching aims of this lesson are to give students an understanding of the different factors that influence choice and what we mean by the term ethical consumerism, and an understanding of the meaning of the term ‘fair trade’.
Students will be able to recognise that many 'ethical' factors can affect the way consumers make their choices and decisions, understand the importance of recycling and the different methods other countries use to produce goods, and make more informed decisions about which goods they purchase.
This resource includes:
Lesson plan PDF
Make Money Make Sense has been developed by Eastbourne Citizens Advice Bureau and East Sussex Trading Standards, providing teachers with all they need to teach the financial literacy aspects of the citizenship curriculum. Download more of its resources here or on its website at moneymakesense.co.uk.
Gastrointestinal cancer is cancer that affects the organs in the digestive system, including the esophagus, stomach, pancreas, gallbladder, liver, small and large intestine, anus and rectum. It is characterized by the uncontrolled growth of the cells that make up the digestive tract.
The exact cause of gastrointestinal cancer is not clear. However, certain risk factors such as excessive alcohol intake, smoking, advanced age, diet rich in animal fat and salt, poorly preserved food and obesity may increase your risk of developing gastrointestinal cancer.
Gastrointestinal cancer significantly impacts your quality of life. It not only affects you physically, but also emotionally. Pain, fatigue, stress and the side effects of treatment become a part of your life.
The gastrointestinal system is a long tube running right through the body, with specialized sections that are capable of digesting and extracting useful components entering the mouth and expelling waste products from the anus. Once food has been chewed and mixed with saliva in the mouth, it is swallowed and passes down the esophagus (food pipe), a long, narrow tube. The food pipe is lined by muscles that expand and contract, pushing food into the stomach.
The stomach secretes acid and other digestive enzymes for digestion and stores food before it enters the intestine. The liver is the main organ of metabolism and energy production. It produces bile, which is stored in the gallbladder, and also stores iron, vitamins and trace elements. The pancreas, located behind the stomach, produces enzymes and hormones that aid in digestion and metabolism. Once food has been mulched and partially digested by the stomach, it is pushed into the duodenum (first part of the small intestine). Secretions of the gallbladder and pancreas empty into the small intestine, the site where most of the chemical and mechanical digestion and virtually all of the absorption of useful materials takes place. The large intestine is the last part of the digestive tube and the location of the terminal phases of digestion, where waste is processed and stored in the rectum, and excreted through the anus.
Symptoms of gastrointestinal cancer may include abdominal pain, discomfort or tenderness, change in shape, frequency or consistency of bowels, blood in stool, bloating, vomiting, nausea, fatigue, loss of appetite and unintentional weight loss.
Your doctor diagnoses gastrointestinal cancer by performing a thorough physical examination and reviewing your medical history. Certain tests may be ordered to assist and confirm the diagnosis, which include:
- Blood tests: The tests include full blood count and tumor marker tests.
- Upper endoscopy: Upper endoscopy is a procedure in which a long, thin flexible tube with a tiny camera is passed through your mouth and down your throat to examine the lining of the esophagus, stomach and duodenum.
- Fecal test: Fecal samples are examined under the microscope for abnormalities.
- Barium swallow: You are given a liquid that contains barium to swallow. X-ray imaging can detect this barium, which coats the walls of the esophagus and stomach, making abnormalities visible more clearly.
- Biopsy: A small sample of tissue is removed and examined under the microscope for abnormal cells.
- Colonoscopy: A colonoscope, a long narrow tube with a camera, is inserted through the rectum to examine your colon.
These tests help identify the location and stage (stage 0 to stage 4, in order of severity) of cancer, which is important for designing the treatment plan.
Treatment depends on the stage of the cancer, location, your age and general health. Several treatment options are available for treating gastrointestinal cancers. The standard approaches include surgery, radiation therapy, chemotherapy and target therapy.
- Surgical procedures vary depending upon the size and site of the cancer. Some of the common surgeries include:
- Fulguration: use of electric current to kill tumor cells
- Cryosurgery: involves freezing the cancer cells to destroy them
- Resection: removal of the cancerous growth
- Radio frequency ablation: use of high energy radio waves to kill cancer cells
- Radiation therapy is a procedure where high-energy rays are targeted at the cancer cells to destroy them.
- Chemotherapy involves the use of anti-cancer drugs given intravenously (through the veins) or orally (by mouth). This type of treatment is extremely useful in cases where the cancer has spread to different parts of the body. These drugs work against the cells that divide quickly; thereby, slowing down the growth of cancer.
- Target therapy stops new blood vessels from developing in the cancer cells. With no blood supply, the growth of cancer cells slows down.
The outcome of treatment varies from person to person. Treatment in some cases can make you free of cancer, while in others, it is given to slow down the progression of the cancer and add to your days of living. The factors that can affect your prognosis include the location, stage and type of cancer, your age, health before cancer, and your response to treatment.
If left untreated, cancer usually spreads to other areas of the body, eventually leading to death.
Surgery for gastrointestinal cancers is indicated for stages 0, I, II and III cancers and surgical removal is considered the primary treatment for cancer. It involves the complete removal of the primary tumor along with a margin of healthy tissue to ensure that there are no residual cancer cells. The surgical procedure depends to a large degree on the spread of cancer through the tract wall, to other organs or to the lymph nodes. If infected, lymph nodes and adjoining organs are removed along with the gastrointestinal cancer. In some cases, surgery is combined with radiotherapy or chemotherapy.
Gastrointestinal cancer surgeries are performed under general anesthesia. Some of the common surgeries are mentioned below.
Total esophagectomy or esophagectomy
Esophageal cancer surgery aims at treating cancer by surgically removing the whole (total esophagectomy) or part of the esophagus (esophagectomy) and the surrounding tissue that is affected. The remaining esophagus is then reattached to the stomach. Surgery for esophageal cancer can be performed by either an open approach or minimally invasive approach using laparoscopy.
Gastrectomy is the removal of the stomach to treat gastric cancer. It can be carried out through subtotal gastrectomy, where only a part of the stomach is removed, or total or radical gastrectomy, where the whole stomach is freed from the surrounding tissue, cut and carefully removed. The remaining part of the stomach is reattached or in case of total gastrectomy, the esophagus is connected to the small intestine.
Pancreatectomy is the removal of the entire or part of the pancreas. There are many types of pancreatectomy. Also known as pancreaticoduodenectomy, the Whipple procedure involves the removal of the head (wide part) of the pancreas along with parts of the gallbladder, small intestine, bile duct, and sometimes a part of the stomach. The remaining structures are reconnected so that enzymes and bile can flow normally into the intestine. Distal pancreatectomy is usually performed when cancer is found in the middle or tapering end of the pancreas. Total pancreatectomy or complete resection is opted for when the tumor extends across the pancreas.
Cholecystectomy is surgery to treat cancers of the gallbladder. The procedure may also involve the removal of parts of other neighboring organs such as the liver, common bile duct, pancreas, small intestine and/or lymph nodes.
Hepatectomy is surgery to remove the liver along with some of the healthy tissue around it. It may involve the excision of only a part or the whole liver, in which case a healthy liver is transplanted to replace the diseased one.
Endoscopic mucosal resection
The endoscopic mucosal resection (EMR) procedure is indicated to treat gastrointestinal cancer that has spread to the lining of the tract. Your surgeon inserts an endoscope (a thin, long tube with a light source and camera) through the mouth to reach a cancerous growth in the esophagus, stomach or upper small intestine. Cancers in the colon are reached by an endoscope inserted through the anus. Surgical tools are passed through the endoscope to remove the cancerous tissue. The surgery is non-invasive as it does not involve any cuts on the body.
Abdominoperineal resection is a surgical procedure that involves the removal of the lower end of the large bowel, i.e., the colon, rectum or anus. The surgery is indicated for the treatment of anal cancer and rectal cancer. Abdominoperineal resection can be performed as open surgery (laparotomy) through a large incision on the abdominal wall, or laparoscopically through 3 to 4 smaller incisions.
Palliative surgeries are performed to provide relief from symptoms, prevent or help control cancer. Some examples of palliative surgeries include the placement of a stent to open up a blocked duct or bypassing a tumor, so food or other substances can flow freely.
After the surgery, you will be shifted to the recovery room until the sedative effect has worn off. Avoid driving for at least a few days after surgery. The post-operative guidelines differ for different cancer surgeries. After gastrectomy, you may be recommended vitamin B12 injections, as absorption of vitamin B12 occurs through the upper part of the stomach. Inform your doctor immediately if you experience fever, chills, vomiting, black or bright red stools, fainting or shortness of breath after surgery.
Benefits of this approach
The biggest benefit of gastrointestinal cancer surgery is the ability to completely remove the cancer. For extensive cancers, surgery is indicated to remove cancer cells to a maximum extent making it easier to be treated with other therapies such as chemo or radiation therapy. Surgery can also be used to treat symptoms of cancer and in many cases prevent/control its growth.
You may be instructed not to eat or drink or smoke anything before the procedure. If the procedure is performed in the colon, your surgeon will prescribe a solution for you the day before surgery to cleanse your bowel. Your surgeon will review your daily medications and may instruct you on the medications that you need to avoid.
Surgery is the only reliable option for a curative treatment. However, as with any procedure, gastrointestinal cancer surgery may involve certain risks and complications which include bleeding, infection, leakage from the newly connected region after excision, formation of blood clots, damage to nearby organs, frequent heartburn and vitamin deficiencies.
Post-op stages of recovery and care plan
After the procedure, you will be given specific instructions with regard to your diet. You are advised not to lift heavy objects for a few days after the surgery. The care plan varies depending upon the type of surgery and location of cancer. For gastrectomy, your doctor may refer you to a nutritionist to plan your diet and you need to eat small meals more often as the size of the new stomach is smaller. You can gradually resume your daily activities. |
Bumblebees and other native bees were long ignored by farmers because they produce little or no honey and don’t form large, portable colonies like honeybees do.
But the true importance of bees is their ability to pollinate plants, that is, to perform the essential task of transferring pollen from plants' male to female reproductive organs, starting the process of fruit and seed formation. Bumblebees are now being heralded as important crop pollinators, especially in these times of declining honeybee populations. And bumblebees are especially effective pollinators because they, as well as some other native bees, can employ a method not practiced by honeybees, called “sonication” or “buzz pollination.”
Buzz pollination can be useful for releasing or collecting pollen from many types of flowers, but it is essential for some, including tomatoes, blueberries, and our native manzanitas. The anthers (male reproductive organs) of these flowers have only small pores through which pollen is released, like the holes in a pepper shaker. Sometimes wind or visits from insects can inadvertently shake out some pollen, but the amounts are small. Also, many of these flowers do not produce nectar, so honeybees ignore them anyway.
Bumblebees, by contrast, actively collect and eat not just nectar but also protein-rich pollen. And a bumblebee can cause a flower to discharge a visible cloud of pollen through buzz pollination. The bumblebee grasps the flower with its legs or mouthparts and vibrates its flight muscles very rapidly without moving its wings. This vibration shakes electrostatically charged pollen out of the anthers, and the pollen is attracted to the bumblebee’s oppositely charged body hairs. The bumblebee later grooms the pollen from its body into pollen-carrying structures on its back legs for transport to its nest.
Sometimes bumblebees employ buzz pollination on flowers that don’t require it, for example, California poppies. This may release the already accessible pollen more quickly and efficiently. They also use the energy of buzz pollination for other purposes, for example, compacting soil in their underground burrows (bumblebees don’t build hives like honeybees) or moving a pebble or other obstacle.
Honeybees cannot perform buzz pollination (so far, only a few kinds of bees are known to do it), and therefore they cannot pollinate some important crops and wild plants. In fact, commercially grown greenhouse tomatoes were traditionally pollinated by handheld electric vibrators with names like “Electric Bee” or “Pollinator II.”
Although discovered relatively recently, buzz pollination is no secret. Buzz-pollinating bumblebees make a distinctive, middle-C buzz, which is noticeably higher pitched than the buzz of flight. No special equipment is needed to hear the sound of buzz pollination, just listen for a distinctive middle-C “raspberry” next time you find a plant buzzing with bumblebees. |
UNIT 1 Biology: Unity and Diversity
Made for students by a student, this website, The Micoscopic Plant: Cell Analogy, aims to provide students entering or currently completing VCE Biology Unit 1 with an extra reference beyond their biology textbooks. From this resource, students will be able to better grasp the concepts of organelle function, thanks to the descriptive yet accurate information provided for each structure, as well as the creative analogy that accompanies each description, which will help clarify student comprehension.
Area Study 1: Cells in Action
In this unit students examine the cell as the structural and functional unit of the whole organism. Students investigate the needs of individual cells, how specialised structures carry out cellular activities and how the survival of cells depends on their ability to maintain a dynamic balance between their internal and external environments. Whether life forms are unicellular or multicellular, whether they live in the depths of the ocean or in the tissues of another living thing, all are faced with the challenge of obtaining nutrients and water, a source of energy, a means of disposing of their waste products, and a means of reproducing themselves. Though there are many observable differences between living things, they have many fundamental features and biological processes in common. Students explore the diversity of organisms and look for patterns of similarities and differences. They investigate how the structure and functioning of interdependent systems in living things assist in maintaining their internal environment. They relate differences in individual structures and systems to differences in overall function. As students consider the development of ideas and technological advances that have contributed to our knowledge and understanding of life forms and cell biology, they come to understand the dynamic nature of science. Students investigate technological applications and implications of bioscientific knowledge. (VCAA 2012)
Capturing and Handling of White Whales (Delphinapterus leucas) in the Canadian Arctic for Instrumentation and Release
For many decades, humans have captured white whales (Delphinapterus leucas) for food, research, and public display, using a variety of techniques. The recent use of satellite-linked telemetry and pectoral flipper band tags to determine the movements and diving behaviour of these animals has required the live capture of a considerable number of belugas. Three principal techniques have been developed; their use depends on the clarity and depth of the water, tidal action, and bottom topography in the capture area. When the water is clear enough so that the whales can be seen swimming under the water and herded into shallow sandy areas, a hoop net is placed over the whale's head from an inflatable boat. When the water is murky and the belugas cannot easily be seen under the water, but can be herded into relatively shallow sandy areas, a seine net is deployed from a fast-moving boat to encircle them. If the whales are in deep water and cannot be herded into shallow water, a stationary net is set from shore to entangle them. Once captured, the whales have to be restrained in a way that allows them to breathe easily, have the tags attached, and be released as quickly as possible. The methods have proved to be safe, judging from the whales' rapid return to apparently normal behavioural patterns. |
In 1919, the first experimental test of Einstein’s general theory of relativity took place; measuring the mass of the Sun by the way it bends light around it. Almost 100 years later, the theory is finally being used to measure the mass of other stars.
General relativity, completed in 1915, predicted that a large body distorts space and time around it, meaning that light from another object behind that mass is deflected as it passes by.
Just like a lens in front of a light source, the gravity from a large enough object would bend the light around it. But the only object big enough to measure this distortion was the Sun.
During a solar eclipse in 1919, there was the first opportunity to measure what we now call gravitational lensing. The Sun was directly in front of a star cluster called Hyades, and it was observed in two expeditions, one in Brazil and one in Príncipe.
“The two expeditions to observe the 1919 eclipse of the Sun used photographs to image the star background around the dark eclipsed Sun,” says Terry Oswalt, professor of engineering physics at Embry–Riddle Aeronautical University in Florida. “These were compared to plates taken of the exact same field at another time when the Sun was not present.”
The two groups showed the images of the background stars were deflected away from the Sun’s centre, by the exact amount predicted by Einstein’s theory of general relativity.
“As time went by, this phenomenon – known as gravitational lensing – has become a powerful research tool in astrophysics,” says Jorge Pinochet, from the Universidad Alberto Hurtado in Santiago, Chile. For years, gravitational lensing was used to measure the mass of distant, massive objects like clusters of galaxies. Now it is being used in the original way once again – to weigh a star.
In June of last year, gravitational lensing was used to measure the mass of the first star other than the Sun, the white dwarf Stein 2051 B. While the techniques have changed hugely over the past 100 years, the principle remains as crucial as ever.
“In the case of Stein 2051 B, the images were electronic, but the principle is identical to that used in 1919,” says Oswalt. “Measure the exact positions of background stars when there is a massive something in the foreground.”
This time, however, the distance from Earth to the star is so much larger, which means the amount the light bends is 1,000 times smaller than the solar eclipse of 1919. “Only the Hubble Space Telescope currently has sufficient imaging quality to detect such tiny shifts,” says Oswalt.
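For a sense of scale, the general-relativistic deflection of light grazing a mass M at impact parameter b is α = 4GM/(c²b). The sketch below (my own illustration, not from the article) reproduces the famous ~1.75 arcsecond solar value and shows how a smaller deflector seen at a wider separation pushes the shift down to the milliarcsecond level; the white-dwarf mass and impact parameter are assumed round numbers, not the published measurements.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
R_SUN = 6.957e8    # solar radius, m
AU = 1.496e11      # astronomical unit, m

def deflection_arcsec(mass_kg, impact_parameter_m):
    """Deflection angle in arcseconds for light passing a mass at the given
    impact parameter: alpha = 4 * G * M / (c^2 * b)."""
    alpha_rad = 4 * G * mass_kg / (C**2 * impact_parameter_m)
    return math.degrees(alpha_rad) * 3600

# Light grazing the Sun's limb: ~1.75 arcsec, the value confirmed in 1919.
print(deflection_arcsec(M_SUN, R_SUN))

# Assumed, illustrative white-dwarf case: ~0.7 solar masses with light passing
# about 5 AU away gives a shift of order a milliarcsecond -- roughly 1000 times
# smaller than the 1919 measurement.
print(deflection_arcsec(0.7 * M_SUN, 5 * AU) * 1000, "milliarcseconds")
```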
The fact that we now have the technology to identify such tiny shifts in a little stream of photons opens the door for the masses of many more stars to be measured in future, with surveys like Pan-STARRS and the Large Synoptic Survey Telescope.
Above all, however, the result shows just how much of a boss Einstein was. “It underscores just how well Einstein's theory of general relativity works, to 1,000 times better precision than was possible in 1919,” says Oswalt.
Viewpoint: Counting the Quanta of Sound
At the origin of every musical note is a mechanical oscillator that resonates at a specific frequency. But what the ear cannot distinguish is that the energy of these vibrations is discretized into an integer number of quanta of motion, or phonons. Most vibrating objects contain an uncountable number of phonons, but researchers have, for some time now, been able to prepare massive mechanical oscillators in their quantum ground state, where the average phonon number is smaller than one. This hard-won accomplishment not only involved getting rid of all thermal excitations in the oscillator through intense cooling, but it also required inventing a system of motion detection with a sensitivity at the quantum level. An emerging technique consists of coupling the oscillator motion to another quantum object: a superconducting qubit, which can serve a role in the detection as well as the manipulation of states of motion [2–4]. Using such a “qubit sound system,” two separate teams have managed to measure the number of phonons directly in a macroscopic mechanical oscillator. In one case, the oscillator is a membrane whose center of mass vibrates like a drumhead, while in the other case, the oscillator is a type of sound-wave cavity called an acoustic wave resonator. By demonstrating unprecedented control over states of motion, these results may open the door to the use of oscillators as gravity sensors and quantum memory devices.
The motivation behind the two groups’ efforts is to measure and control so-called Fock states that are characterized by a definite phonon number. The set of Fock states includes the zero-phonon state, the one-phonon state, the two-phonon state, and so on. Some previous experimenters operated mechanical oscillators near their ground states, in which case one can infer that the systems were predominantly in the zero-phonon Fock state. But in most other experiments, researchers have not measured Fock states directly. Instead, they have estimated an average phonon number (or average energy) by observing the oscillator’s position, momentum, or both.
Even though their aims are similar, the two groups have very different strategies and methods. Jeremie Viennot and colleagues at the University of Colorado, Boulder, studied the motion of an aluminum membrane that was a few micrometers across (Fig. 1). This vibrating “drumhead” is categorized as a center-of-mass oscillator, meaning that a macroscopic part of the object is elastically displaced. This displacement happens at relatively low frequencies in the megahertz range, immensely different from the gigahertz transition frequencies of superconducting qubits. The Boulder team therefore faced a major challenge in engineering a strong off-resonant coupling between their oscillator and a superconducting qubit. The type of superconducting qubit that they used was a charge qubit, whose states are identified by the presence of excess Cooper pairs in a small island. Charge qubits were considered prime candidates for coupling to mechanical oscillators [2, 3], but they are infamously sensitive to movements of charges in the surroundings. The Boulder team managed to operate their qubit in a way that was insensitive to environmental charges, while still having a nonresonant interaction with charges that move in response to the mechanical oscillator's vibrations.
As a result of this off-resonant interaction, the frequency of the qubit should be shifted in proportion to the number of phonons in the oscillator. The researchers managed to make the one-phonon-induced shift large enough, and the qubit resonance narrow enough, that only seven phonons suffice to shift the qubit resonance by its linewidth. When the team measured the qubit spectrum with standard techniques, they could discern the probability that the system was in a particular Fock state. Notably, this measurement technique—which was invented to measure Fock state distributions for a microwave resonator and is implemented here for the first time in a mechanical setting—does not destroy the fragile state of the mechanical oscillator.
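To illustrate what such a phonon-number-resolved spectrum looks like (a sketch of my own, not taken from either paper), the qubit line splits into a comb of peaks, one per Fock state, each displaced by a fixed per-phonon shift and weighted by that state's occupation probability. The shift, linewidth, and populations below are made-up illustrative values.

```python
import numpy as np

def dispersive_spectrum(freqs, f_qubit, shift_per_phonon, linewidth, populations):
    """Qubit spectrum modeled as a sum of Lorentzians, one per phonon Fock state n,
    centered at f_qubit + n * shift_per_phonon and weighted by the occupation P(n)."""
    spectrum = np.zeros_like(freqs)
    for n, p_n in enumerate(populations):
        center = f_qubit + n * shift_per_phonon
        spectrum += p_n * (linewidth / 2) ** 2 / ((freqs - center) ** 2 + (linewidth / 2) ** 2)
    return spectrum

# Illustrative numbers only: a thermal-like phonon distribution read out dispersively.
populations = [0.5, 0.25, 0.125, 0.0625, 0.03125]   # P(0), P(1), ...
detuning = np.linspace(-2e6, 12e6, 2000)             # Hz from the bare qubit frequency
spectrum = dispersive_spectrum(detuning, 0.0, 2e6, 0.3e6, populations)
```

Fitting the relative heights of the peaks in such a spectrum is, in essence, how the Fock-state probabilities are extracted without destroying the mechanical state.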
The Boulder team went beyond just identifying Fock states; they also used the qubit to pump phonons into and out of the oscillator. For example, if they drove the qubit at a frequency larger than its resonance frequency by exactly the mechanical frequency, then this excess of energy was converted into excitations of the mechanical oscillator. These excitations could be targeted for specific number states, allowing the authors to nearly empty the Fock state distribution at particular phonon numbers by shoving the initial populations of these states toward higher phonon numbers. Conversely, driving the qubit at a frequency lower than its resonance frequency pumped phonons out of the mechanical oscillator. Putting this into practice, the researchers could shift the weight of the Fock state distribution toward lower phonon numbers, creating a high zero-phonon ground-state population in a manner distinct from any earlier work.
Interestingly, Yiwen Chu and colleagues at Yale University were able to do similar manipulations but in a very different type of system . The oscillator studied by the Yale group was a half-millimeter-thick sapphire chip, which supports a propagating sound wave but does not have macroscopic moving parts. Such acoustic resonators vibrate at much higher frequencies in the microwave range, allowing them to be addressed resonantly with qubits. These higher frequencies also mean that cryogenic cooling suffices to eliminate all thermal excitations and reach the motional quantum ground state.
In their resonantly coupled system, the Yale team managed to swap excitations coherently between the qubit and the mechanical oscillator. In these kinds of experiments, if the oscillator is initially in the quantum ground state, the swapping of quanta takes place at the so-called vacuum Rabi frequency. However, in the Yale team's experiments, the Rabi frequency attained an individual discrete value for each Fock state higher than zero. By measuring these Rabi oscillations of the qubit state, the researchers could therefore extract the phonon Fock state occupancies from the frequency components. When the oscillator was in its ground state, the team could prepare the qubit in its excited state and perform the exchange, causing the phonon number to increase by one. By iterating this “phonon stepping” process multiple times, they could prepare the mechanical oscillator in a high-phonon-number Fock state.
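In the resonant (Jaynes–Cummings) picture, the exchange rate depends on the phonon number: an excited qubit coupled to Fock state n oscillates at √(n+1) times the vacuum rate, so the measured qubit signal is a sum of discrete frequency components weighted by the Fock populations. The sketch below (illustrative only, with made-up coupling strength and populations) synthesizes such a signal; an analysis of exactly these components is what yields the occupancies.

```python
import numpy as np

def qubit_excited_population(t, g, populations):
    """Excited-state probability of a qubit resonantly coupled (rate g, rad/s) to an
    oscillator with Fock-state populations P(n): P_e(t) = sum_n P(n) cos^2(g sqrt(n+1) t)."""
    return sum(p_n * np.cos(g * np.sqrt(n + 1) * t) ** 2
               for n, p_n in enumerate(populations))

t = np.linspace(0.0, 2e-6, 4000)      # seconds
g = 2 * np.pi * 1e6                   # assumed coupling rate, rad/s
signal = qubit_excited_population(t, g, [0.2, 0.5, 0.3])
# A Fourier transform of `signal` shows peaks at 2*g*sqrt(n+1), one per occupied Fock state,
# whose weights give the phonon-number probabilities.
```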
The experiments both show an unprecedentedly high degree of control over states of motion. And each strategy has its advantages. The Boulder team’s method measures phonons nondestructively, whereas the Yale group’s resonant readout irreversibly destroys the mechanical state. However, the Yale group’s preparation scheme can yield high-purity Fock states, which can be good for quantum information. Taken together, the two categories of oscillators span a broad range of mechanical frequencies, which makes them complementary for applications. Manipulating massive quantized center-of-mass motion would enable researchers to probe the unexplored grey area between the microscopic quantum world and our familiar observable world. Thanks to their extreme delicacy, these states of motion might also prove sensitive to gravity at small scales. And finally, because of their high coherence and easy integration into other physical systems, quantum states of mechanical oscillators are considered excellent candidates for memory storage in quantum information technologies and for converting between electronic signals and optical signals.
- A. D. O'Connell et al., “Quantum ground state and single-phonon control of a mechanical resonator,” Nature 464, 697 (2010).
- M. D. LaHaye, J. Suh, P. M. Echternach, K. C. Schwab, and M. L. Roukes, “Nanomechanical measurements of a superconducting qubit,” Nature 459, 960 (2009).
- J.-M. Pirkkalainen, S.U. Cho, F. Massel, J. Tuorila, T.T. Heikkilä, P.J. Hakonen, and M.A. Sillanpää, “Cavity optomechanics mediated by a quantum two-level system,” Nat. Commun. 6, 6981 (2015).
- A. P. Reed et al., “Faithful conversion of propagating quantum information to mechanical motion,” Nat. Phys. 13, 1163 (2017).
- J. J. Viennot, X. Ma, and K. W. Lehnert, “Phonon-number-sensitive electromechanics,” Phys. Rev. Lett. 121, 183601 (2018).
- Y. Chu et al., “Climbing the phonon Fock state ladder,” arXiv:1804.07426.
- J. D. Teufel, T. Donner, Dale Li, J. W. Harlow, M. S. Allman, K. Cicak, A. J. Sirois, J. D. Whittaker, K. W. Lehnert, and R. W. Simmonds, “Sideband cooling of micromechanical motion to the quantum ground state,” Nature 475, 359 (2011).
- Y. Nakamura, Yu. A. Pashkin, and J. S. Tsai, “Coherent control of macroscopic quantum states in a single-Cooper-pair box,” Nature 398, 786 (1999).
- D. I. Schuster et al., “Resolving photon number states in a superconducting circuit,” Nature 445, 515 (2007).
- M. Brune, F. Schmidt-Kaler, A. Maali, J. Dreyer, E. Hagley, J. M. Raimond, and S. Haroche, “Quantum Rabi oscillation: A direct test of field quantization in a cavity,” Phys. Rev. Lett. 76, 1800 (1996). |
Take a Closer Look
Look very closely at the tree and its surroundings. As you do this, write down what you see, hear, feel, and smell in your notebook. Use the ideas below or any others you wish.
- Spend a few minutes watching your tree. Write down all the living things you see. Are there squirrels? spiders? birds? Draw pictures of some of the animals you see.
- Sit by your tree and listen. Don't talk to anyone, just listen. What do you hear in or near the tree? Can you hear the branches moving? Are birds singing? Is a squirrel chattering to you from one of the branches? Write about what you hear.
- Feel the bark of the tree and describe how it feels. Make a rubbing of the bark. Hold a piece of paper against the bark. Scribble over most of the paper with the side of a crayon. Leave some space to write a few words about the rubbing. Write about how the bark looks and feels.
- Smell a piece of bark, a leaf, or any flowers on the tree. What do these things smell like? Do they smell like anything you have smelled before? Write about what you smell.
- Make a tracing of a leaf from the tree. Look for other trees with the same kind of leaves. Draw a map to show where these trees are.
- Wrap a tape measure around the tree and record its width. Measure the width of another tree that looks the same as yours. Which one do you think is the oldest? the youngest? How do you know?
- Use a magnifying glass to look more closely at your tree. How do the tree and its leaves look different?
- Draw a line down the middle of a piece of paper. On the left side, write at least four things you learned about your tree by looking closely at it. Back at school, look in a tree book to find out what kind of tree you have adopted. On the right side, write four things you learned about the tree after reading about it.
According to recent estimates from the World Health Organization (WHO), there were about 219 million cases of malaria in 2010 and an estimated 660,000 deaths worldwide.
Malaria is a parasitic disease transmitted from one human to another through the bite of infected Anopheles mosquitoes. The most common symptoms of malaria are fever and flu-like illness. The symptoms of malaria develop slowly after infection: the parasites, called sporozoites, enter the bloodstream and multiply inside the red blood cells, which break open within 48 to 72 hours, infecting more red blood cells. It may take 6 to 12 months for an infected person to realize that he or she is ill.
Most symptoms are caused by the release of merozoites into the bloodstream. Destruction of the red blood cells may lead to anaemia. The first symptoms of malaria usually occur within 10 days to 4 weeks after infection, though they can appear as early as 8 days or as long as a year later, and they often recur in cycles of 48 to 72 hours.
The expected duration of malaria depends on which type of malaria you are infected with. Other factors that may affect the duration include the patient's immune status and, in particular, whether he or she has been infected with malaria before.
There are four common kinds of malaria. Falciparum malaria is the most common and is much more serious than the other types; it is a medical emergency that requires hospital admission. Plasmodium malariae is the slowest-replicating form and can cause mild infections that last for weeks, if not months. In a few cases, malaria symptoms last much longer because the infected person's immune system is able to keep the parasite load low enough that the symptoms are not noticeable. The remaining forms are less severe, and their symptoms usually last for 1–2 weeks.
Treatment and Prognosis
The treatments used for malaria usually eliminate the parasite, but it can take several weeks for the infected person's body to recover from the disease. He or she may feel weak and tired for several weeks even after treatment, as the body replaces blood cells damaged by the parasite.
Possible treatments for chloroquine-resistant infections include quinidine or quinine plus doxycycline, tetracycline or clindamycin; mefloquine or artesunate; and the combination of pyrimethamine and sulfadoxine (Fansidar). The choice of medication depends in part on where and when you were infected.
Malaria can be diagnosed by looking at blood samples; the parasites are visible under the microscope. After malaria is diagnosed, treatment should begin immediately. With proper treatment, the symptoms subside and the infection is usually cured within two weeks. Without proper treatment, malaria episodes (fever, chills, sweating) can return periodically over a period of years.
Effects of Nuclear Weapons
The energy of a nuclear explosion is released in the form of a blast wave, thermal radiation (heat) and nuclear radiation. The distribution of energy in these three forms depends on the yield of the weapon. For nuclear weapons in the kiloton range, the energy is divided roughly as 50% blast, 35% thermal and 15% nuclear radiation. Each one of these forms causes devastation on a scale that is unimaginable. Below, these effects are discussed separately for a 15 kiloton bomb, which was the explosive power of the bomb detonated by the U.S. in Hiroshima during World War II. This is also the size of the weapons now possessed by India, Pakistan and North Korea, and roughly the size of a weapon that terrorists would likely be able to create.
Because of the tremendous amount of energy released in a nuclear detonation, temperatures of tens of millions of degrees C develop in the immediate area of a nuclear detonation (contrast this with the few thousand degrees of a conventional explosion). This compares with the temperature inside the core of the Sun. At these temperatures, everything near ground zero vaporizes (out to a few hundred meters for 15 kiloton weapons and more than a kilometer for multimegaton weapons). The remaining gases of the weapon, surrounding air and other material form a fireball.
The fireball begins to grow rapidly and rise like a balloon. As the fireball rises and subsequently expands as it cools, it gives the appearance of the familiar mushroom cloud. The vaporized debris, contaminated by radioactivity, falls over a vast area after the explosion subsides – creating a radioactive deadly fallout with long-term effects.
Figure 1: Illustration of blast effects for a 15 kiloton explosion. Zones 1 and 2 correspond to the "killing field" where the fatalities are universal.
Because of the very high temperatures and pressures at ground zero, the gaseous residues of the explosion move outward. The effect of these high pressures is to create a blast wave traveling several times faster than sound. A 15 kiloton weapon creates pressures in excess of 10 psi (pounds per square inch), with wind speeds in excess of 800 km per hour, out to a radius of about 1.2 km. Most buildings are demolished and there will be almost no survivors (much larger strategic nuclear weapons will greatly extend this radius of destruction).
Beyond this distance, and up to about 2.5 km, the pressure gradually drops to 3 psi and the wind speed falls to about 150 km per hour, as in a severe cyclonic storm. There will be injuries on a large scale and some fatalities. Beyond this zone, the pressure drops to less than 1 psi, enough to shatter windows and cause serious injuries. It is the high winds combined with high pressures that cause the most mechanical damage in a nuclear explosion. Human beings are quite resistant to pressure, but cannot withstand being thrown against hard objects or having buildings fall upon them.
Blast effects are most carefully considered by military warplanners bent upon destroying specific targets. However, it is the thermal effects which hold the greatest potential for environmental damage and human destruction. This is because nuclear firestorms in urban areas can create millions of tons of smoke which will rise into the stratosphere and create massive global cooling by blocking sunlight. In any nuclear conflict, it is likely that this environmental catastrophe will cause more fatalities than would the initial immediate local effects of the nuclear detonation.
Figure 2: Illustration of thermal effects for a 15 kiloton bomb. Regions 1, 2 and 3 refer to the degree of burns sustained during the explosion. People who sustain third degree burns are unlikely to survive without immediate medical attention.
The surface of the fireball also emits large amounts of infrared, visible and ultraviolet rays in the first few seconds. This thermal radiation travels outward at the speed of light. As a result this is by far the most widespread of all the effects in a nuclear explosion and occurs even at distances where blast effects are minimal.
The range of thermal effects increases markedly with weapon yield (thermal radiation decays only as the inverse square of the distance from the detonation). Large nuclear weapons (in the megaton class and above) can start fires and do other thermal damage at distances far beyond the distance at which they can cause blast damage.
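That inverse-square falloff is easy to see numerically. The sketch below is illustrative only: the thermal power figure is a made-up placeholder rather than data for any real weapon, and atmospheric absorption, which matters in practice, is ignored.

#include <cstdio>

// Intensity at distance r from an idealized point source of power P,
// ignoring atmospheric absorption: I = P / (4 * pi * r^2).
double intensity(double powerWatts, double distanceMeters) {
    const double pi = 3.14159265358979;
    return powerWatts / (4.0 * pi * distanceMeters * distanceMeters);
}

int main() {
    const double power = 1.0e15;  // hypothetical instantaneous thermal output, in watts
    const double radiiMeters[] = {500.0, 1000.0, 2000.0, 4000.0};
    for (double r : radiiMeters) {
        // Divide by 10,000 to convert W/m^2 to W/cm^2.
        std::printf("r = %4.0f m  ->  I = %8.1f W/cm^2\n", r, intensity(power, r) / 1.0e4);
    }
    return 0;
}

Each doubling of the distance cuts the intensity to one quarter, which is why the burn radius grows so strongly with weapon yield.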
Even with a 15 kiloton detonation, the intensity of the thermal radiation can exceed 1000 watts per square cm. This is similar to being burnt by an acetylene torch used for welding metals. For a 15 kiloton bomb, almost everyone within 2 km will suffer third degree burns (which damage the skin and the tissues below it); for a 550 kiloton bomb, third degree burns occur out to a radius of 9 km. There will be almost no survivors, since no immediate medical attention will be available (the entire U.S. has specialized facilities to treat only 1500 burn victims).
When studying the effects of a single weapon, it is important to remember that thousands of U.S. and Russian nuclear weapons with yields 8 to 50 times larger than 15 kilotons remain on high-alert, quick-launch status. In a U.S.-Russian nuclear war, these scenarios would occur thousands of times over in virtually every major city in the U.S., Russia, and NATO member states (and probably in China).
It is the cumulative effects of these firestorms – the creation of a stratospheric smoke layer resulting in deadly global climate change – that ultimately become the primary environmental consequence of nuclear war, one that threatens continued human existence.
There basically are two kinds of ionizing radiation created by nuclear explosions, electromagnetic and particulate. Radiation emitted at the time of detonation is known as prompt or initial radiation, and it occurs within the first minute of detonation. Anyone close enough to the detonation to be killed by prompt radiation is likely to be killed by blast and thermal effects, so most concerns about the health effects of radiation focus upon the residual or delayed radiation, which is caused by the decay of radioactive isotopes and is commonly known as radioactive fallout.
If the fireball of the nuclear detonation touches the surface of the Earth, large amounts of soil, water and other material will be vaporized and drawn up into the radioactive cloud. This material then also becomes highly radioactive; the smaller particles will rise into the stratosphere and be distributed globally, while the larger particles will settle to Earth within about 24 hours as local fallout. Lethal levels of fallout can extend many hundreds of kilometers from the blast area. Contaminated areas can remain uninhabitable for tens or hundreds of years.
Radiation injury has a long-term effect on survivors. Reactive chemicals released by ionization cause damage to DNA and disrupt cells by producing immediate effects on metabolic and replication processes. While cells can repair a great deal of the genetic damage, that takes time, and repeated injuries make it that much more difficult. Immediate treatment requires continual replacement of blood so that the damaged blood cells are replaced, and treatment of bone marrow and lymphatic tissues which are amongst the most sensitive to radiation. One must remember in this context that there are very few hospitals equipped to carry out such remedial procedures.
Radiation injury is measured in a unit called the rem. Some authorities consider 5 rem/year tolerable for workers who are occupationally exposed to radiation; a typical value for exposure to medical X-rays is 0.08 rem, and 1.5 rem/year is considered tolerable for pregnant women. It should be remembered that natural background radiation is always present over most places on the earth, but at lower levels. However, there is no universally agreed-upon threshold at which a dose of radiation can be declared safe.
Things which get irradiated by “prompt” radiation themselves become radioactive. People in the area of a nuclear explosion, and those subject to radioactive fallout stand more risk of contracting cancer. A 1000 rem exposure for the whole body over a lifetime (which is entirely possible for those surviving a nuclear war) brings about an 80% chance of contracting cancer.
Cancers from radiation exposure will continue to appear over the entire lifetime of the exposed populations. For example, only one-half of the predicted number of cancers have so far occurred in the people exposed to the radiation produced by the atmospheric weapons tests and by the U.S. atomic bombs dropped on Hiroshima and Nagasaki 50 to 60 years ago.
We have no idea what the long-term genetic consequences will be from the massive release of radioactive fallout on a world-wide basis.
Ionizing radiation from the fireball produces intense currents and electromagnetic fields, usually referred to as the electromagnetic pulse (EMP). This pulse is felt over very large distances. A single high-yield nuclear detonation will create destructive EMP over hundreds of thousands of square kilometers beneath where the explosion occurs.
EMP from high-yield nuclear detonations will subject electrical grids to voltage surges far exceeding those caused by lightning. Modern VLSI chips and microprocessors, present in most communication equipment, TVs, radios, computers and other electronic devices, are extremely sensitive to these surges and are immediately burnt out. Thus all possible communication links to the outside world are cut off. Restoring these facilities would be an arduous (and expensive) task, assuming that the infrastructure required to complete it still existed following a nuclear war.
Warplanners consider the EMP from the detonation of a high-yield warhead as capable of disrupting the entire communication system of their nation, and in this way a single missile launch could begin a nuclear war.
Massive absorption of warming sunlight by a global smoke layer would cause Ice Age temperatures on Earth. NASA computer models predict 40% of the smoke would stay in the stratosphere for 10 years. There the smoke would also destroy much of the protective ozone layer and allow dangerous amounts of UV light to reach the Earth's surface.
Half of 1% of the explosive power of the deployed nuclear arsenal can create nuclear darkness. 100 Hiroshima-size weapons exploded in the large cities of India and Pakistan would put 5 million tons of smoke in the stratosphere and drop average global temperatures to Little Ice Age levels. Shortened growing seasons could cause up to 1 billion people to starve to death.
A large nuclear war could put 150 million tons of smoke in the stratosphere and make global temperatures colder than they were 18,000 years ago during the coldest part of the last Ice Age. Killing frosts would occur every day for 1-3 years in the large agricultural regions of the Northern Hemisphere. Average global precipitation would be reduced by 45%. Earth’s ozone layer would be decimated. Growing seasons would be eliminated.
A large nuclear war would utterly devastate the environment and cause most people to starve to death. Already stressed ecosystems would collapse. Deadly climate change, radioactive fallout and toxic pollution would cause a mass extinction event, eliminating humans and most complex forms of life on Earth.
The U.S. and Russia keep hundreds of missiles armed with thousands of nuclear warheads on high-alert, 24 hours a day.
They can be launched with only a few minutes warning and reach their targets in less than 30 minutes. We must end this madness. |
The sign in front of this hill (back right) reads:
The hill in front of you, known as a kame, was formed thousands of years ago when water from melting glacial ice flowed through a large crack in the ice. Glacial melt water carried sand, gravel, and rocks, depositing them at the base of the crack to form the kame. To help picture this, imagine how sand flows through an hourglass and creates a rounded pile of sand in the bottom of the hourglass.
More geology on Michigan in Pictures. |
Common Diseases of the Kidneys
Chronic Kidney Disease
CKD is a progressive decline in renal function, demonstrated by an estimated glomerular filtration rate under 60 mL/min/1.73 m² for three or more months, resulting in a buildup of waste products in the blood, electrolyte imbalances, and anemia. The albumin-to-creatinine ratio may also be used to establish a diagnosis of CKD. Other blood, urine, and kidney imaging tests may also indicate CKD (Cohen 2010; Ferri 2014c).
Features of CKD include progressive retention of nitrogenous waste products in the blood (uremia), electrolyte imbalance, metabolic acidosis, and anemia (Duranton 2012). While prolonged exposure to acute insults such as drugs or infection is capable of causing CKD, chronic conditions such as diabetes mellitus and hypertension are more commonly the cause (Mehdi 2009; Cohen 2010).
Acute Kidney Injury
Acute kidney injury is a rapid impairment of renal function that occurs in a matter of hours or days. Acute kidney injury can result from insults within the kidney itself (renal causes), reduction of blood flow into the kidney (prerenal causes), or damage to the lower urinary tract that causes the backup of uremic toxins into the kidney (postrenal causes) (Ferri 2014a; NKF 2013b).
Blood flow to the kidneys can be reduced by hemorrhage, dehydration, heart failure, pulmonary embolism, sepsis (a systemic inflammatory response caused by infection of the blood), excessive blood calcium, and some drugs, such as nonsteroidal anti-inflammatory drugs (NSAIDs) (Ferri 2014a).
The kidneys may be directly damaged by autoimmune disease, lymphoma, infection, certain medications, and conditions that lead to rapid tissue breakdown (Ferri 2014a).
The urinary tract can be damaged by obstruction of the ureters, bladder, or urethra by stones, tumors, prostatic hyperplasia, trauma, or infection (Ferri 2014a; Elsevier BV 2012).
Kidney stone(s), a condition also known as nephrolithiasis, is one of the most common kidney diseases. There are several types of kidney stones, each composed of an accumulation of a different type of compound naturally present in the body, and having different risk factors for their formation. Calcium oxalate stones are the most common kidney stones in humans, accounting for 76% of stones; their most common cause is high urine calcium levels (Finkielstein 2006). Other relatively common stones include calcium phosphate, usually due to high urine pH; uric acid, common in cases of acidic urine and patients with gout or metabolic syndrome; struvite, often the result of urinary tract infections; and cystine, resulting from genetic disorders of amino acid transport (Elsevier BV 2012). |
Usually when you see a picture of our solar system's planets, they look something like this:
Now obviously, the relative sizes of the planets are all wrong, and they should be much, much farther apart. And, if you're using an Apple computer, each planet should be themed with a shiny, candy-like coating. But one other important aspect of the picture is not scientifically accurate. The planets shouldn't all have the same brightness.
As you get farther away from the Sun, its light becomes progressively dimmer. I corrected the brightness of the planets in the picture, taking the distance to the Sun into account, and here's what I got:
Even though Venus is farther away from the Sun than Mercury, it actually appears brighter, because the clouds of its thick atmosphere reflect six times as much light as Mercury's dark Moon-like surface.
But aside from that, the Sun's brightness falls off really fast as you move away from it. This is because the intensity of the Sun's light is proportional to the inverse of the square of your distance from it. So, if you're four times farther away from the Sun than Earth is, the Sun will appear only one-sixteenth as bright as it would on Earth. Also, the planets are spaced farther and farther apart as you move out in the solar system.
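Here's a small sketch of that inverse-square falloff. The mean orbital distances in astronomical units (1 AU is the Earth-Sun distance) are rounded textbook values, so treat the output as ballpark figures.

#include <cstdio>

int main() {
    // Approximate mean distances from the Sun, in astronomical units (AU).
    struct Planet { const char* name; double au; };
    const Planet planets[] = {
        {"Mercury", 0.39}, {"Venus", 0.72}, {"Earth", 1.00}, {"Mars", 1.52},
        {"Jupiter", 5.20}, {"Saturn", 9.58}, {"Uranus", 19.2}, {"Neptune", 30.1},
    };

    // Sunlight intensity relative to Earth falls off as 1 / d^2.
    for (const Planet& p : planets) {
        double relativeBrightness = 1.0 / (p.au * p.au);
        std::printf("%-8s %5.2f AU  ->  %7.2f%% of Earth's sunlight\n",
                    p.name, p.au, 100.0 * relativeBrightness);
    }
    return 0;
}

Saturn comes out at roughly one percent, which matches the figure mentioned a couple of paragraphs below.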
The upshot of all this is, if you were living on a moon of Saturn, you'd have a pretty big heating bill, because that far from the Sun, the temperature would be roughly two hundred degrees below zero in the daytime. Because there are no plants, the atmosphere wouldn't have any oxygen in it. Plus, the commute would suck and the schools wouldn't be very good. On the other hand, you wouldn't have to worry about mowing the lawn, you wouldn't need sunscreen, and you wouldn't have to listen to your neighbor's barking dog all night, because if they let it run around outside, it wouldn't have anything to breathe as it was freezing to death.
Now, it's true that if you really were out there orbiting Saturn, you'd still be able to see easily, because your eyes would adjust to the fact that the Sun would only appear one percent as bright as it would near Earth. This is more than enough light to see by. It'd probably be a lot brighter than the light bulbs inside your house at night. But it wouldn't have quite the same punch as a nice sunny day on Earth.
It's important to note that your eyes are capable of perceiving a huge range of brightnesses. The full Moon is half a million times dimmer than the Sun, and it still provides just enough illumination to see by if you're stumbling around in the wilderness at night, being chased by government agents because you've just escaped from the secret laboratory where you were grown in a vat. (I am not speaking from personal experience.) So, you wouldn't have that much trouble getting around near Saturn, even if the sunlight was rather dim.
How is it that NASA's pictures don't come out all black? Space probes orbiting the more distant planets simply leave the camera's shutter open longer to collect more light, brightening up the picture. This is just what a handheld camera does automatically when you use it inside your house.
Below, on the left, is a picture of the surface of Saturn's moon Titan, taken by the Huygens probe in 2005. Titan is the only moon in the solar system with a substantial atmosphere. On the right, I've corrected the image to account for the intensity of the Sun at that distance, relative to the sunlight on Earth. I'm not kidding here. This is really how much darker it would be. Eventually your eyes would adapt, and you'd see something, but it would be a lot dimmer than Earth during the daytime.
The farther you get from the Sun, the smaller it appears in the sky. The picture on the left, below, is a sunset on Earth, and the picture on the right is a sunset on Mars, taken by the Spirit rover. I've scaled the images so that the Sun from Mars is two thirds the size of the Sun from Earth, which is how it really would appear. The sunset on Mars is usually blue because of dust in the atmosphere.
I like looking at the Martian sunset picture and imagining how cold it is there. The fact that the Sun would be a little smaller and dimmer and more blue would make Mars seem extra chilly and desolate and not worth visiting.
We're pretty lucky that we live on one of the better planets that's huddling closely around the warmth of our star. Consequently, most of the surface of our planet is often nice and toasty and warm and dry, except of course for the England part of our planet. If you live there, it might seem rather unpleasant, but fortunately, science has demonstrated that it could be much, much worse. You could be living on Titan, freezing in the dark, in a puddle of liquid methane, next to somebody's dead poodle. |
By Pure Matters
Whether throwing a ball, paddling a canoe, lifting boxes or pushing a lawn mower, we rely heavily upon our shoulders to perform a number of activities.
Normally, the shoulder has a wide range of motion, making it the most mobile joint in the body. Because of this flexibility, however, it is not very stable and is easily injured.
The shoulder is made up of two main bones: the end of the upper arm bone (humerus) and the shoulder blade (scapula). The end of the humerus is round, and it fits into a socket in the scapula. The scapula extends around the shoulder joint to form the roof of the shoulder, and this joins with the collar bone (clavicle). Surrounding the shoulder is a bag of muscles and ligaments. Ligaments connect the bones of the shoulder, and tendons connect the bones to the surrounding muscles. Four muscles begin at the scapula and go around the shoulder, where their tendons fuse to form the rotator cuff.
When the shoulder moves, the end of the humerus moves in the socket. Very little of the surface of the bones touch each other. Ligaments and muscles keep the humerus from slipping out of the socket and keep the clavicle attached to the scapula.
To keep shoulders healthy and pain-free, it's important to know how to spot and avoid common injuries.
Shoulder instability occurs when the shoulder feels like it might slip out of place. It occurs most often in young people and athletes. The shoulder becomes unstable when muscles and ligaments that hold it together are stretched beyond their normal limits. For younger people, this condition may be a normal part of growth and development. Shoulders generally stiffen or tighten with age.
In athletes, shoulder instability is caused by activities such as tackling or pitching that put extreme force on the shoulder. Symptoms of shoulder instability are pain that comes on either suddenly or gradually, a feeling that the shoulder is loose, or a weakness in the arm. Treatment may be rest, physical therapy or surgery.
A shoulder separation, also called a sprain, occurs when the ligaments that hold the clavicle to the roof of the shoulder tear. If this happens, the clavicle is pushed out of place and forms a bump at the top of the shoulder. Sprains are common in falls, when the hand or arm is outstretched to stop the fall, or when the fall is on a hard surface. Symptoms are severe pain when the sprain occurs, a misshapen shoulder and decreased movement of the shoulder. Treatment depends on the severity of the sprain. Ice applied immediately after the injury helps decrease pain and swelling. Keeping the arm in a sling to limit the movement of the shoulder allows ligaments to heal; this is followed by physical therapy exercises. Sometimes, surgery is needed.
If the ligaments that hold the shoulder muscles to bones tear and can't hold the joint together, the shoulder is dislocated. A fall onto an outstretched hand, arm or the shoulder itself, or a violent twisting, can cause a shoulder dislocation. The main symptom is pain in the shoulder that becomes worse when the shoulder is moved. Treatment for a dislocation is ice applied immediately after the injury to decrease pain, swelling and bleeding around the joint. Within 15 to 30 minutes of the injury, the joint will be painful and swollen. A dislocated shoulder needs immediate medical care. Doctors treat dislocations by using gentle traction to pull the shoulder back into place. When the shoulder pops out of the socket repeatedly, the condition is called recurrent instability. Recurrent instability can be treated with surgery to repair the torn ligaments.
Rotator cuff tear
The rotator cuff is a group of four muscles of the upper arm that raise and rotate the arm. The muscles are attached to the bones by tendons. The job of muscles is to move bones. The tendons of the rotator cuff allow the muscles to move the arm. If the tendons tear, the humerus can't move as easily in the socket, making it difficult to move the arm up or away from the body.
As people age and their physical activity decreases, tendons begin to lose strength. This weakening can lead to a rotator cuff tear. Rotator cuff injuries occasionally occur in younger people, but most of them happen to middle-aged or older adults who already have shoulder problems. This area of the body has a poor supply of blood, making it more difficult for the tendons to repair and maintain themselves. As a person ages, these tendons degenerate. Using your arm overhead puts pressure on the rotator cuff tendons. Repetitive movement or stress to these tendons can lead to impingement, in which the tissue or bone in that area becomes misaligned and rubs or chafes.
The rotator cuff tendons can be injured or torn by trying to lift a very heavy object while the arm is extended, or by trying to catch a heavy falling object.
Symptoms of a torn rotator cuff include tenderness and soreness in the shoulder during an activity that uses the shoulder. A tendon that has ruptured may make it impossible to raise the arm. It may be difficult to sleep lying on that side, and you may feel pain when pressure is put on the shoulder.
Treatment depends on the severity of the injury. If the tear is not complete, your health care provider may recommend RICE, for rest, ice, compression and elevation. Resting the shoulder is probably the most important part of treatment, although after the pain has eased, you should begin physical therapy to regain shoulder movement. Your doctor may prescribe a non-steroidal anti-inflammatory drug (NSAID) for pain.
This extreme stiffness in the shoulder, known as frozen shoulder, can occur at any age. It affects approximately 2 percent of Americans, most often between 40 and 60 years of age. Although the causes are not completely understood, it can affect people with diabetes, thyroid disease, heart disease, or Parkinson's disease. It can also occur if the shoulder has been kept immobile for a period of time. It occurs when a minor shoulder injury heals with scar tissue that affects how the joint moves. This scar tissue reduces flexibility in the shoulder and makes it more prone to injury. The major symptom is the inability to move the shoulder in any direction without pain. Treatment can be NSAIDs, cortisone injections or physical therapy. You can reduce further injury and stiffness by stretching before starting activities.
Sudden increases in activity can place extensive stress on the shoulders and lead to a decrease in flexibility. This is a common problem in middle age, especially among "weekend warriors," or people who don't exercise regularly but go out every now and then for an intense sport.
Although painful and inconvenient, these overuse problems can usually be treated with rest, NSAIDs and stretching exercises.
Beginning as early as age 50, some people develop osteoarthritis, which causes painful movement. This occurs as the smooth surfaces of the cartilage that line the bones of the shoulder joint are worn away, and joints begin to wear out and become larger. The most common cause of osteoarthritis is overuse. Treatments for arthritis in the shoulder depend on the severity of pain. The usual treatments are rest, NSAIDs and cortisone injections. In some instances, a replacement of the shoulder joint is necessary. |
One animal credited with opening the West
As a species, humans are far-and-away the most capable organisms with reference to the ability to modify the environment. After us, the impact of other organisms falls off pretty fast.
Critters, such as locusts, come to mind. Numbering in the billions, these voracious insects can bring widespread devastation to seasonal plant growth in Africa. Bison, which once numbered in the tens of millions on this continent, could eat their way through untold tons of grass on the Great Plains. Both of these examples, however, are of a short-term nature.
When we speak of years-long transformations we have to look at other things like coral. It is said the Great Barrier Reef, stretching well over 1,000 miles along the Australian coast, is the single largest structure created by living organisms. That's impressive. But it takes unfathomable numbers of coral polyps to make this happen.
Taken as individuals, though, none of the above-mentioned species measure up. I can only think of one lowly shy animal even in the ballpark with humans in terms of long-term environmental shuffling. As it happens, this critter was also critical to the exploration and settlement of a large part of the U. S. and Canada. I'm talking about the beaver (Castor canadensis).
Once numbering as many as 90 million individuals, this animal was a primary motivator spurring early forays into the North American west. Or at least its hide was. Thousands of pioneering fur-trappers and voyageurs ventured into parts unknown seeking the lush waterproof fur of this large rodent to meet the demands of clothing makers, both here and in Europe. Fortunes were made in the fur trade (think John Jacob Astor or the Hudson's Bay Company). Once silk became the hot fabric du jour, though, demand for beaver fell and the animal began a slow recovery from near extinction.
Beavers, which can reach up to 60 pounds, are the second largest rodents in the world and continue to grow until they die. Armed with four impressive incisors, these animals are well known for their ability to fell rather large trees. The beaver is an herbivore, feeding on shoots, leaves and young wood.
In smaller watersheds their reputation as dam builders is storied. Entire drainages can be changed for decades by generations of beaver families erecting successive dams. This often creates multiple layers of terraced still pools which alter the plant life for miles. In this way, the animal is credited with preventing some erosion and being nature's "kidneys," by purifying water. In forested areas this goes mostly unnoticed. But when the animals ply their trade in urban zones, problems arise. Damage to landscapes can be severe as beavers saw down trees along rivers, mostly at night. Often trappers are brought in to eliminate nuisance animals.
Beavers are said to mate for life and build impressive domed homes known as lodges. When rivers are too large to dam, the animals simply bore into the banks and create makeshift dens there. Inside, families consist of adults, yearlings, and kits (young). When beavers are two years old, they leave the lodge to find new territory.
This is an animal well-suited to its watery environment with webbed hind feet for propulsion through the water while its large, fleshy, scaled tail is used as a rudder. Extremely thick gray fur acts as insulation from the cold underneath a blanket of rich brown-to-black hairs.
Even in the absence of dams, it's not hard to find evidence of beavers. Woodchip piles around gnawed trees, matted runs leading to water, and curious piles of fresh twigs (collected in fall for winter food caches) stacked in the water are just a few obvious signs.
Seeing them is slightly tougher. Because beavers are nocturnal, your best bet is dusk or dawn, when they are more active. They are fairly wary critters and usually shun close approach. When alarmed, a beaver performs its signature warning by loudly slapping the water with its flat fleshy tail and submerging. It sounds like someone doing a cannonball dive.
While not nearly as numerous as it once was, the beaver remains historically significant. It's Canada's national animal and is featured on the reverse side of their nickel. It was also the mascot of the 1976 Summer Olympics in Montreal. This meek retiring critter is still sought by fur trappers even now. The glory days of the trade, however, ended more than a century and a half ago. |
Nanotechnology can change the properties of many materials. This ranges from increasing the strength of materials to increasing the reactivity of materials.
Researchers at MIT have developed a method to add carbon nanotubes aligned perpendicular to the carbon fibers, called nanostitching. They believe that having the nanotubes perpendicular to the carbon fibers helps hold the fibers together, rather than depending upon epoxy, and significantly improves the properties of the composite.
Researchers at Rensselaer Polytechnic Institute have found that adding graphene to epoxy composites may result in stronger/stiffer components than epoxy composites using a similar weight of carbon nanotubes. Graphene appears to bond better to the polymers in the epoxy, allowing a more effective coupling of the graphene into the structure of the composite. This property could result in the manufacture of components with higher strength-to-weight ratios for such uses as windmill blades or aircraft components.
Researchers at North Carolina University have shown how to make a magnesium alloy stronger. They introduced nano-spaced stacking faults into the crystalline structure of the alloy. The stacking faults prevent defects in the structure of the alloy from spreading, making the alloy stronger. The researchers believe that the techniques they used to strengthen the alloy can be implemented in existing plants, allowing fast adoption.
More about Nanotechnology and Strong Materials
A catalyst using platinum-cobalt nanoparticles, which produces twelve times more catalytic activity than pure platinum, is being developed for fuel cells. In order to achieve this performance, researchers anneal the nanoparticles to form them into a crystalline lattice, reducing the spacing between platinum atoms on the surface and increasing their reactivity.
Using pellets containing nanostructured palladium and gold as a catalyst to break down chlorinated compounds contaminating groundwater. Since palladium is very expensive, the researchers formed the pellets from nanoparticles that allow almost every atom of palladium to react with the chlorinated compounds, reducing the cost of the treatment.
Researchers at Los Alamos National Laboratory have demonstrated a catalyst made from nitrogen-doped carbon-nanotubes, instead of platinum. The researchers believe this type of catalyst could be used in Lithium-air batteries, which can store up to 10 times as much energy as lithium-ion batteries.
Using a nanocatalyst containing cobalt and platinum to remove nitrogen oxide from smokestacks.
Researchers at USC are developing a lithium ion battery that can recharge within 10 minutes using silicon nanoparticles in the anode of the battery. The use of silicon nanoparticles, rather than solid silicon, prevents the cracking of the electrode which occurs in solid silicon electrodes.
Researchers have used nanoparticles called nanotetrapods studded with nanoparticles of carbon to develop low cost electrodes for fuel cells. This electrode may be able to replace the expensive platinum needed for fuel cell catalysts.
Researchers at North Carolina State University have demonstrated the use of silicon-coated carbon nanotubes in anodes for Li-ion batteries. They predict that the use of silicon can increase the capacity of Li-ion batteries by up to 10 times. However, silicon expands during a battery's discharge cycle, which can damage silicon-based anodes. By depositing silicon on nanotubes aligned parallel to each other, the researchers hope to prevent damage to the anode when the silicon expands.
Researchers at Rice University have demonstrated that atomically thin sheets of boron nitride can be used as a coating to prevent oxidation. They believe this coating could be used for parts that need to be lightweight but work in harsh environments, such as jet engines.
By building an object atom by atom or molecule by molecule, molecular manufacturing, also called molecular nanotechnology, can produce new materials with improved performance over existing materials. For example, an airplane strut must be very strong, but also lightweight. A molecular fabricator could build the strut atom by atom out of carbon, making a lightweight material that is stronger than a diamond. Remember that a diamond is merely a lattice of carbon atoms held together by bonds between the atoms. By placing carbon atoms, one after the other, in the shape of the strut, such a fabricator could create a diamond-like material that is lightweight and stronger than any metal.
More about Molecular Manufacturing
Compiled by Earl Boysen of Hawk's Perch Technical Writing, LLC and UnderstandingNano.com. You can find him on Google+. |
Leading End-of-the-Year Student Discussions About Transitions
Six ways to lead healthy, productive discussions with your students.
- Grades: PreK–K, 1–2, 3–5, 6–8
Bringing students together to discuss can be an effective way to offer support. Here are some pointers for making discussions as effective as possible.
Remember that discussions belong to the students. Your role is to help the discussion along, not to provide the answers.
Set a nonjudgmental tone. Remember that these conversations are not academic; they are about children's feelings. Rely on phrases like "I see," "Mmmmm," and "Anyone else?" to facilitate without judgment. Realize that if you make a positive comment to one child, others will want that reaction, too, and may gear their participation to get your approval.
Be prepared for unsettling contributions. A child may say something that requires special attention: "I hate summer vacation 'cause I have to be with my baby-sitter and she hits me." When that happens, say, "Ray, thank you for your comment. I'd like to talk about it when you and I have some time alone. But for right now does anyone else have something else to add?" Follow up privately at a more appropriate time.
Make sure everyone has a chance to speak. This means no one dominates the discussion and no one interrupts. Key phrases like these can help: "What can anyone add to that?" "What does anyone else think?"
Keep discussions non-threatening. For instance, you might begin by saying, "At the end of every year I like to have a discussion about school ending. Usually half the kids are looking forward to summer and half aren't. I'm wondering what all of you are thinking." With a beginning like that, you let children know that it is comfortable for them to feel either way.
Ask questions that set up hypothetical situations. For instance, ask: "What are some things that make the end of the year hard? What could make it fun? What are some things that kids look forward to? What are some good things about moving on in school? What are some of the things that make it tough?" Questions like these keep situations removed enough to make it easier for children to say what's on their minds. Give children the option of writing their responses and then choosing to share or not.
#include <vector>
size_type capacity() const;
The capacity method returns the number of elements that the vector can hold before it will need to allocate more space.
For example, the following code uses two different methods to set the capacity of two vectors. One method passes an argument to the constructor that initializes the vector with 10 elements of value 0, and the other method calls the reserve method. However, the actual size of the vector remains zero.
vector<int> v1(10);
cout << "The capacity of v1 is " << v1.capacity() << endl;
cout << "The size of v1 is " << v1.size() << endl;

vector<int> v2;
v2.reserve(20);
cout << "The capacity of v2 is " << v2.capacity() << endl;
cout << "The size of v2 is " << v2.size() << endl;
When run, the above code produces the following output:
The capacity of v1 is 10
The size of v1 is 10
The capacity of v2 is 20
The size of v2 is 0
C++ containers are designed to grow in size dynamically. This frees the programmer from having to worry about storing an arbitrary number of elements in a container. However, sometimes the programmer can improve the performance of their program by giving hints to the compiler about the size of the containers that the program will use. These hints come in the form of the reserve method and the constructor used in the above example, which tell the compiler how large the container is expected to get.
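As an illustrative sketch (not part of the reference text above), the following program reserves space up front and then counts how many times the capacity changes while elements are appended. Because reserve(100) guarantees room for at least 100 elements, no reallocation should occur during the loop; commenting out the reserve call typically shows several reallocations as the vector grows.

#include <iostream>
#include <vector>
using namespace std;

int main() {
    vector<int> v;
    v.reserve(100);                      // hint: we expect about 100 elements
    int reallocations = 0;
    auto lastCapacity = v.capacity();

    for (int i = 0; i < 100; ++i) {
        v.push_back(i);
        if (v.capacity() != lastCapacity) {   // capacity changed => a reallocation happened
            ++reallocations;
            lastCapacity = v.capacity();
        }
    }

    cout << "The size of v is " << v.size() << endl;
    cout << "The capacity of v is " << v.capacity() << endl;
    cout << "Reallocations after reserve: " << reallocations << endl;
    return 0;
}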
The capacity method runs in constant time.
Marfan syndrome involves the body's connective tissue and is characterized by abnormalities in the skeleton, heart, and eyes. It is caused by an abnormal gene * that usually is inherited. People with Marfan syndrome are generally taller than average, have little body fat, and have long, thin fingers.
What Is Marfan Syndrome?
Marfan syndrome was first described in 1896 by the French physician Antoine Marfan. Some famous people of the past, such as Abraham Lincoln, who was very tall and lanky, and the brilliant violinist Niccolo Paganini, who had very long fingers, are believed by some to have had Marfan syndrome. Today, the disorder has received attention in the media largely as a result of health problems and deaths among very tall athletes, such as some basketball and volleyball players. Still, the disorder is rare.
Marfan syndrome affects only about 1 to 2 persons of every 10,000. In the United States, it has been estimated that 40,000 or more people have the disorder. It affects men and women in equal numbers, as well as people of all racial and ethnic groups. Marfan syndrome can affect the heart and aorta * , the eyes, and the skeleton.
What Causes Marfan Syndrome?
For many years, it had been known that Marfan syndrome was inherited. It had been observed that if someone had the disorder, each of his or her children would have about a 50 percent chance of developing it as well. However, it was not known what gene or genes were responsible for the disorder.
Then, in the early 1990s, researchers found that the condition is caused by a single abnormal gene. This gene is involved in the production of a type of protein, called fibrillin, which gives connective tissue its strength. Connective tissue is the material that holds in place all the structures of the body. When the gene is defective, it causes critical changes in fibrillin that may weaken and loosen the connective tissue. This effect, in turn, causes the wide range of features, such as tall stature and loose joints, that are found in Marfan syndrome. It is not as yet known just how alterations in the genes produce these features.
* genes are chemicals in the body that help determine a person's characteristics, such as hair or eye color. They are inherited from a person's parents and are contained in the chromosomes found in the cells of the body.
* aorta is the main artery that carries blood from the heart to the body.
Although anyone born to a parent with Marfan syndrome has a 50-50 chance of inheriting the disorder, an estimated 25 percent of people with Marfan syndrome do not have a parent who has it. This is because a person can have the defective gene owing to a spontaneous mutation, or change, in the normal gene.
What Are the Signs and Symptoms of Marfan Syndrome?
The characteristic signs and symptoms of Marfan syndrome usually do not begin to become apparent until about age 10. When they do emerge, they may involve any or all of three parts of the body: the skeleton, the circulatory system (heart and blood vessels), and the eyes.
A person who has Marfan syndrome usually (but not always) grows to be very tall and thin. The fingers also tend to be long and thin, or "spidery." The head is sometimes elongated too, and the chest may have a caved-in look. The joints tend to be supple and loose, and are prone to becoming dislocated. Sometimes there may be scoliosis (sko-lee-O-sis), a side-to-side curvature of the spine.
The Circulatory System
The most serious features of Marfan syndrome involve the heart and aorta, the main artery that carries blood directly from the heart to the body. A characteristic defect in one of the valves of the heart (mitral valve) can cause irregular heart rhythm. Weakness in the aorta can allow it to widen, eventually leading to the development of an aneurysm (AN-yoo-riz-um), a weakness or bulge. If undiscovered or untreated, the weak spot in the aorta can rupture, causing severe internal hemorrhage and death, without warning.
A common symptom of Marfan syndrome is myopia (my-O-pee-uh), or nearsightedness. In addition, in about half of individuals with the disorder, there is dislocation of the lens of the eye, which can make cataracts (clouding of the lens of the eye) more likely to develop.
How Is Marfan Syndrome Diagnosed?
Marfan syndrome can be difficult to diagnose. As yet no single laboratory test can identify it. Some people with the condition do not have all of its characteristic signs. Conversely, most people who are tall, lanky, and nearsighted do not have Marfan syndrome. (Again, the disorder is rare.)
Accurate diagnosis is made from a combination of one's family history and a complete physical examination that focuses on the skeleton, heart and aorta, and the eyes. An echocardiogram (ek-o-KAR-de-o-gram), a picture of the heart produced by using sound waves, can detect abnormalities in the heart and aorta. Eye doctors can look for possible lens dislocations.
The recent identification of the gene that causes Marfan syndrome, and of fibrillin as the component of connective tissue affected by the gene, will likely aid in future diagnosis.
How Is Marfan Syndrome Treated and Prevented?
Treatment and prevention of complications depend upon the individual symptoms of the person affected by the syndrome. Main aspects include annual echocardiograms to watch for enlargement of the aorta and to monitor heart function, and continuing eye examinations to detect lens dislocation. Medications called beta-blockers may be prescribed to lower blood pressure to help prevent aneurysms from developing in the aorta. Braces can be used to correct spinal curvature.
In terms of lifestyle, strenuous sports may have to be avoided to reduce the risk of damage to the aorta. Genetic counseling is advisable for anyone thinking about having children, because of the risk that children will inherit the condition. Although there is no cure for Marfan syndrome, working closely with one's doctor in an ongoing monitoring and treatment program can greatly improve the outlook for long life.
Abraham Lincoln had elongated fingers and was very tall (6 feet, 4 inches), which are attributes that are among the most visible and easily recognized signs of Marfan syndrome. For this reason, some experts believe that he may have had the disorder. However, because the syndrome was not medically known in his day, and because many others with these characteristics do not have it, no one knows for sure. Today, people growing up with Marfan syndrome might find encouragement in knowing that Abraham Lincoln may have had some of the difficulties that they have experienced, and that he overcame them. |
Substances move by diffusion, osmosis and active transport
Life processes need gases or other dissolved substances before they can happen. For example, for photosynthesis to happen, carbon dioxide and water have to get into plant cells. And for respiration to take place, glucose and oxygen both have to get inside cells. Waste substances also need to move out of the cells so that the organism can get rid of them. These substances move to where they need to be by diffusion, osmosis and active transport.

Diffusion is where particles move from an area of high concentration to an area of low concentration. For example, different gases can simply diffuse through one another, like when a weird smell spreads through a room. Alternatively, dissolved particles can diffuse in and out of cells through cell membranes.

Osmosis is similar, but only refers to water. The water moves across a partially permeable membrane (e.g. a cell membrane) from an area of high water concentration to an area of low water concentration. Diffusion and osmosis both involve stuff moving from an area where there's a high concentration of it to an area where there's a lower concentration of it. Sometimes substances need to move in the other direction, which is where active transport comes in.

In life processes, the gases and dissolved substances have to move through some sort of exchange surface. The exchange surface structures have to allow enough of the necessary substances to pass through.
Exchange surfaces are adapted to maximise effectiveness!
The structure of leaves lets gases diffuse in and out
Carbon dioxide diffuses into the air spaces within the leaf, then it diffuses into the cells where photosynthesis happens. The leaf's structure is adapted so that this can happen easily. The underneath of the leaf is an exchange surface. It's covered in tiny little holes called stomata, which the carbon dioxide diffuses in through. Water vapour and oxygen also diffuse out through the stomata. (Water vapour is actually lost from all over the leaf surface, but most of it is lost through the stomata.) The size of the stomata is controlled by the guard cells. These close the stomata if the plant is losing water faster than it is being replaced by the roots. Without these guard cells the plant would soon wilt. The flattened shape of the leaf increases the area of this exchange surface so that it's more effective. The walls of the cells inside the leaf form another exchange surface. The air spaces inside the leaf increase the area of this surface, so there's more chance for carbon dioxide to get into the cells.
The water vapour escapes by diffusion because there's a lot of it inside the leaf and less of it in the air outside. This diffusion is called transpiration, and it goes quicker when the air around the leaf is kept dry, i.e. transpiration is quickest in hot, dry, windy conditions.
The breathing system, part 1
You need to get oxygen from the air into your bloodstream so that it can get to your cells for respiration. You also need to get rid of carbon dioxide in your blood. This all happens inside the lungs. Breathing is how the air gets in and out of your lungs.
The lungs are in the Thorax
The thorax is the top part of your body. It's separated from the lower part of the body by the diaphragm. The lungs are like big pink sponges and are protected by the ribcage. The air that you breathe in goes through the trachea. This splits into two tubes called bronchi (each one is a bronchus), one going to each lung. The bronchi split into progressively smaller tubes called bronchioles. The bronchioles finally end at small bags called alveoli, where the gas exchange takes place.
The breathing system, part 2
Breathing in:
Intercostal muscles and diaphragm contract.
Thorax volume increases.
This decreases the pressure, drawing air in.
Breathing out:
Intercostal muscles and diaphragm relax.
Thorax volume decreases.
Air is forced out.
Varves: Dating Sedimentary Strata
This lesson discusses the clear evidence of geological events over many millions of years. Students count the number of varves (annual layers of sediment) in shale billets, taken from the Green River Formation in Wyoming. The count is then extended to reflect the entire 260 meters of sediments where the billets originated, a period of approximately 2 million years. This provides a tangible experience for a sense of time, from both a human perspective (vast period) and a geological perspective (very short period).
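As a rough illustration of the arithmetic behind the lesson (the billet numbers below are hypothetical placeholders, not values from the actual activity), the extrapolation from a small counted sample to the whole formation looks like this:

#include <cstdio>

int main() {
    // Hypothetical example: a student counts 150 varves in a billet
    // that spans 2.0 cm of sediment.
    const double varvesCounted = 150.0;
    const double billetThicknessCm = 2.0;
    const double formationThicknessCm = 260.0 * 100.0;   // the full 260 m section

    const double varvesPerCm = varvesCounted / billetThicknessCm;      // annual layers per cm
    const double estimatedYears = varvesPerCm * formationThicknessCm;  // extrapolated duration

    std::printf("Deposition rate: %.0f varves (years) per cm\n", varvesPerCm);
    std::printf("Estimated time to deposit 260 m: about %.1f million years\n",
                estimatedYears / 1.0e6);
    return 0;
}

With these made-up counts the estimate lands near the 2 million years quoted above; students' real counts will give their own figure.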
Intended for grade levels:
Type of resource:
No specific technical requirements, just a browser required
Cost / Copyright:
Copyright 1999 ENSI (Evolution and the Nature of Science Institutes). This material may be copied only for noncommercial classroom teaching purposes, and only if this source is clearly cited.
DLESE Catalog ID: DLESE-000-000-004-785
Resource contact / Creator / Publisher:
Author: John Banister-Marx
Contributor: Larry Flammer
Evolution and the Nature of Science Institutes |
Limited information is available on the Maya property system. Communal lands were owned by the nobles and ruling class, and were worked by commoners. Commoner families were also permitted to own small parcels of land that they used for subsistence agriculture. This land could be passed down to the owner's sons. Commoners were required to pay tribute to the ruler, their local elite lords, and to the gods in the form of labor, goods, offerings, and a portion of their harvests from their communal and private lands. They were also required to work on annual labor projects, such as building temples, palaces, and causeways.
In addition to the agricultural industry, the Maya produced cacao, cotton, salt, honey, dye, and other exotic goods for trade. The Maya had traveling merchants, but very little is known about them. There is evidence that they traded across the Maya region and Central Mexico, and conducted trade by sea. The Maya had markets to sell their surplus crops, but it is not known how the markets functioned or were governed. The Maya did have a currency system, and used cacao beans, gold, copper bells, jade, and oyster shell beads as forms of money. Counterfeiting was a problem, and occurred when unscrupulous individuals removed the flesh of cacao beans and replaced it with avocado rinds or dirt. The Maya additionally conducted business using the barter system.
The Maya used contracts, which were formalized when the parties drank balché (a mild alcoholic drink) in front of witnesses. Interest was not charged on loans and there were no criminal penalties for going into debt. Individuals who could not pay their debts would become slaves of the people to whom they owed money. If a debtor passed away, his family would assume responsibility for paying his debts.
Sources: Foster (2002), Salcedo Flores (2009), and Sharer (1996).
Image Information: Payment of tribute to Maya ruler (Reents-Budet, ceramic vase). |
Presentation on theme: "In this presentation you will:"— Presentation transcript:
Slide 1: In this presentation you will explore the concept of magnetism and magnetic fields. ClassAct SRS enabled.
Slide 2: Magnets were first put to use to help navigate, since they always point in the same direction. But why do they behave that way? In this presentation you will learn about magnetism and some properties of magnets.
Slide 3 (Magnets): Magnets have the effect of influencing certain materials near to them. An invisible force field surrounds a magnet that is capable of pushing or pulling certain objects. The force field surrounding a magnet is known as a magnetic field.
Slide 4 (Magnets): All materials can be magnetized under the effect of a magnetic field, but the effect is only easily seen in certain materials. Materials that strongly exhibit this magnetic effect are called ferromagnetic. Common ferromagnetic materials are iron, cobalt, nickel and steel. These materials are used to make permanent magnets.
Slide 5 (Magnets): Ferromagnetic materials have magnetic domains that are like tiny magnets. These domains are randomly arranged and so cancel each other out. When a ferromagnetic material is exposed to an external magnetic field, the domains in the material start to become more aligned, increasing the magnetic effect of the material. If all of the magnetic domains are aligned in the same direction, the magnetic effect is at a maximum.
Slide 6 (Magnets): If a permanent magnet is exposed to heat or vibration, it can lose its magnetism. This happens because the magnetic domains in the material start to become misaligned, decreasing the magnetic effect of the material.
Slide 7 (Induced Magnets): Ferromagnetic materials fall into two categories: hard and soft (for example, steel paperclips are hard, iron nails are soft). Hard magnetic materials cannot be easily magnetized, but will retain their magnetism when the magnetizing field is removed. When the external magnetic field is removed, the magnetic domains will stay partly aligned, so the material remains slightly magnetic.
Slide 8 (Induced Magnets): Soft magnetic materials can be easily magnetized, but quickly lose their magnetism when the magnetizing field is removed. These materials are used to make temporary magnets and they are commonly used in cores for transformers and electromagnets.
Slide 9 (Question 1): Which of the following is NOT a magnetic material that permanent magnets can be manufactured from? A) Iron B) Aluminum C) Cobalt D) Nickel
Slide 10 (Magnetic Poles): If the end of a magnet is free to move, it will always point toward the Earth's geographical North Pole. This end is known as a north-seeking pole or, more simply, as a North Pole. The other end of a magnet is called a South Pole. This is the basis of a magnetic compass.
Slide 11 (Attraction and Repulsion): The Law of Magnetic Poles states that unlike poles of two magnets brought close together will attract each other, and like poles will repel each other.
Slide 12 (Magnetic Field): The magnetic field is the region surrounding a magnet where the magnetic force can be detected. A compass can be used to map out the magnetic field around a magnet by recording the direction that a compass points at different positions around the magnet.
Slide 13 (Magnetic Field): If a magnetic field is strong enough, it can be mapped using iron filings. The field is made up of lines of flux or force. These lines are closer together where the field is strongest, at the poles. To retain a magnetic field and make a permanent magnet, a hard magnetic material such as steel is required.
Slide 14 (Magnetic Field): The diagram below shows the magnetic field surrounding a magnet. It can be seen that the field goes from the north pole to the south pole. The lines are closest together at the poles, indicating that the magnetic field is strongest at the poles.
Slide 15 (Question 2): Magnetic field lines are formed around a magnet. Technician A says the lines are known as lines of flux. Technician B says the lines show the force. Who is correct? A) Technician A only. B) Technician B only. C) Both technician A and technician B. D) Neither technician A nor technician B.
Slide 16 (Question 3): If two magnets are pushed together at their opposite (unlike) poles, will they attract each other? Answer Yes or No.
Slide 17 (Summary): After completing this presentation you should be able to show knowledge and understanding of magnetism, induced magnets, and magnetic fields.
Reindeer populations are in decline across their circumpolar range, which encircles the high latitude Northern Hemisphere. This medium-sized member of the deer family (Family Cervidae) is important to subsistence lifestyles of aboriginal people in northern Canada, Alaska and Greenland, to Sami people of Scandinavia, and to many Indigenous peoples of Siberia. Reindeer are notable in modern times as a symbol of transporting Santa Claus on Christmas Eve. Reindeer (Rangifer tarandus), also known as Caribou in North America, are central to nutrient cycling on the tundra and are a main prey species for northern carnivores, including wolves (Canis lupus), bears (Ursus arctos and U. americanus), wolverine (Gulo gulo) and lynx (Lynx canadensis and L. lynx).
Seven subspecies of caribou and reindeer are currently recognized. Barren-ground caribou (R.t. groenlandicus) reside in herds often numbering >10,000, and undertake long seasonal migrations between tundra summer ranges and taiga winter ranges in northern Canada and Greenland. Grant's caribou (R.t. granti) share similar life history traits with barren-ground caribou, but are found west of the Brooks Mountain Range in Alaska. Reindeer (R.t. tarandus) are found across the tundra and taiga of northern Norway, Sweden, Finland and Siberia. Like North American caribou, they migrate in large herds between distinct summer and winter ranges. Unlike caribou, numerous semi-domestic reindeer populations exist, and these are herded by numerous Indigenous peoples (e.g. Sami, Eveny, Komi) for meat, milk and hides. Svalbard reindeer (R.t. platyrhynchus) are found only on the Svalbard Islands, north of Norway. This subspecies is shorter and fatter than other reindeer subspecies, since it does not coexist with predators. Peary caribou (R.t. pearyi) are genetically and morphologically similar to Svalbard reindeer, but are found on the high arctic islands of Canada. Woodland caribou (R.t. caribou) are found in parts of Canada's boreal forest. Unlike other Rangifer subspecies, woodland caribou live in small groups (typically less than 100) and do not undertake long seasonal migrations. Forest reindeer (R.t. fennicus) are the Scandinavian analogue of woodland caribou, also living in small groups and residing within the boreal forest year-round.
Caribou and reindeer around the world are currently in decline. Recent research suggests that caribou and reindeer numbers have fallen by approximately 60% over the past 30 years. There are multiple factors behind this population decline, factors linked to global climate change and industrial landscape change within caribou and reindeer habitat. The influence of climate oscillation versus industrial landscape change varies depending on whether a caribou or reindeer population is migratory or non-migratory.
Changes in insect and plant phenology
Progressively earlier seasonal insect emergence in the Arctic and earlier plant emergence are consistent with warming trends, and these factors may have negative effects on caribou/reindeer body condition and thus population dynamics. A number of insect species, including mosquitoes (Aedes spp.), warble flies (Hypoderma tarandi) and nose bot flies (Cephenemyia trompe), harass and parasitize caribou. The abundance and activity level of these insects are positively correlated with ambient temperature, and caribou or reindeer that are harassed by these insects spend less time feeding. They also expend considerable energy on "escape" behaviours such as running around and shaking themselves to avoid being bitten. When caribou or reindeer feed less in the summer, they gain less body mass. If they do not gain enough fat or muscle prior to winter, females are less likely to conceive. Over-winter survival of poorly-nourished individuals is also poor.
The timing of calving in spring is closely tied to the timing of plant emergence. However, caribou appear not to have adjusted their calving period to coincide with earlier plant emergence. Early-emergent plant matter is more nutritious than older, senescent plant matter, and if caribou calves and their dams miss out on this flush of plant growth, calf survival and female body condition may suffer.
Migratory caribou and reindeer appear to be most affected by these changes. In short, factors that compromise the nutritional status of caribou and reindeer generally lead to decreased survival and productivity.
Extreme weather events
Increased precipitation may accompany warmer winters in the Arctic, especially in the form of freezing rain. When freezing rain falls over snow and forms an impenetrable ice layer, this is termed an "ice on snow event." These ice on snow events may prevent caribou and reindeer from reaching their winter forage. Caribou and reindeer feed chiefly on lichens in the winter and cannot readily dig through ice to reach this forage. Ice on snow events are implicated in population declines of caribou and reindeer living on islands, where they do not have the option of moving to different wintering grounds. Indeed, ice on snow events are linked with mass starvation and decline of Peary caribou, as well as past population declines of Svalbard reindeer. Ice on snow events also significantly impact over-winter survival of semi-domestic reindeer, which typically have smaller winter ranges than wild populations.
Industrial change: changes in predator-prey dynamics
The persistence of non-migratory woodland caribou is threatened by industrial landscape change, because landscape change alters how woodland caribou interact with their chief predator, the wolf. Forest overharvesting and certain petroleum infrastructure (e.g. seismic lines) have removed large areas of old-growth coniferous forest, the preferred habitat of woodland caribou, leading to early seral stage forest regrowth. This new growth is ideal moose and deer habitat, and these species are able to support large wolf populations because they have higher reproductive rates than caribou. The predator population becomes larger than the caribou population can support, leading to shrinking woodland caribou numbers.
The loss of caribou and reindeer populations will have significant adverse consequences for the northern indigenous peoples who rely on this species for subsistence. Not only are caribou and reindeer a source of economic value, e.g. meat, but they sustain countless cultural values including education in traditional ways of life, spirituality and kinship/bonding through hunting and herding caribou and reindeer. Declining caribou and reindeer populations may have negative consequences for nutrient cycling on the tundra, since defecation by caribou/reindeer returns nitrogen to the soil which, in turn, may increase diversity of plant and invertebrate assemblages. Whether caribou and reindeer populations will recover from the current decline is unknown. Although the species population numbers have fluctuated in the past, recovery from past declines does not guarantee recovery from future declines. Indeed, the fate of this species will likely be determined by the pace of climate change and industrial development.
Patterns in Mathematics: How Many Valentines?
2/12/2014 5:00:00 PM
As part of the Teacher’s Lab focusing on Patterns in Mathematics from Annenberg Learner, this activity challenges students to solve a math problem, first by making the numbers involved easier. The solution to the simpler problem is given, and students are then asked to explain which methods they might have used to arrive at the answer. Eight possible ways of figuring out the simpler problem are shown. After reviewing those possible methods, it’s back to the original problem. The primary object of this exercise is to have students look for a pattern in a specific example and extrapolate it to a broader solution. The explanations are clear and very understandable.
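The activity's exact problem isn't reproduced on this page, but a rough sketch of the "try smaller numbers, find the pattern, generalize" approach, assuming the classic valentine-exchange formulation in which each of n students gives a valentine to every other student, might look like this in Python (all names are illustrative):

```python
def count_valentines(num_students):
    """Brute force: each student gives one valentine to every other student."""
    total = 0
    for giver in range(num_students):
        for receiver in range(num_students):
            if giver != receiver:
                total += 1
    return total


# Step 1: solve easier versions of the problem and look for a pattern.
for n in range(2, 7):
    print(n, count_valentines(n))  # prints 2, 6, 12, 20, 30 -> suggests n * (n - 1)

# Step 2: extrapolate the pattern to a general rule and check it.
assert all(count_valentines(n) == n * (n - 1) for n in range(2, 30))
```

The sequence of small answers points to the general rule n * (n - 1), mirroring the activity's goal of moving from a specific pattern to a broader solution.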
Sometimes and All the Time Foods
Students recognize a wide variety of fruit, gain a greater understanding of the importance of fruit in their diet, and determine that some fruits' skins are edible and others are not.
Making Healthy Choices, Making Healthy Food: PreK-3 Curriculum Support
From examining how much sugar is in foods and looking at fruit and vegetable varieties to making mini rainbow tarts, this unit provides youngsters with a fabulous overview of proper nutrition and eating habits.
Pre-K - 3rd Health CCSS: Adaptable |
The West Nile virus is spread to people through the bite of an infected mosquito. When a mosquito carrying the West Nile virus bites you, you often don’t get sick or have any symptoms. When symptoms appear, they are frequently mild and can include rashes on the skin, fever, headaches, body aches, weariness, and joint pain.
Rarely, issues that affect the central nervous system, including encephalitis or meningitis, can cause a person to become seriously ill. High fever, excruciating headache, stiff neck, confusion or disorientation, convulsions, paralysis, and coma are some of their symptoms.
How Is The West Nile Virus Transmitted To Children?
Mosquitoes carrying the West Nile virus propagate the disease. Not all mosquitoes carry the West Nile virus, and not all people bitten by mosquitoes acquire the disease. However, you should take precautions for your family’s safety if you reside in a region where the virus is prevalent.
All About The West Nile Virus In Children
The contagious illness known as the West Nile virus is caused by a virus, a microscopic germ that can make you ill. Mosquitoes carrying the West Nile virus can transmit the disease by biting humans or animals such as horses.
Most patients with the West Nile virus only have minimal or no symptoms. The West Nile virus can occasionally lead to a serious neurological infection or an infection of the brain and nerves. An infection of the nervous system could seriously threaten your health.
How Widespread Is The West Nile Virus?
Numerous people experience mosquito bites. The West Nile virus seldom causes illness. Only one in five West Nile virus victims exhibits any symptoms. The majority of children who do develop symptoms have mild aches and pains that are similar to influenza (the flu).
Fewer than 1% of people contracting the West Nile virus develop a serious infection that can be fatal.
What Symptoms Does The West Nile Virus Cause?
Most West Nile virus victims don’t become ill. Some people have minor symptoms that usually go away by themselves. Sometimes, doctors refer to the condition as West Nile fever. It frequently resembles the flu.
A few of the minor signs of the West Nile virus include fever, headache, body aches, fatigue, joint pain, swollen lymph nodes, and skin rashes.
First Signs Of West Nile Virus In Children
Typically, a bite from a mosquito carrying the West Nile virus won’t make you sick or cause any symptoms. However, children’s early West Nile virus symptoms can vary. When symptoms appear, they are frequently mild and can include fever, body aches, weariness, joint pain, and skin rashes.
After 3 to 14 days of infection, symptoms frequently start to manifest. Rarely, issues that affect the central nervous system, including encephalitis or meningitis, can cause a person to become seriously ill.
High fever, excruciating headache, stiff neck, confusion or disorientation, convulsions, paralysis, and coma are only a few of their symptoms. If a child exhibits any of these symptoms, it is important to seek medical assistance immediately.
How Can The West Nile Virus In Children Be Avoided?
Taking precautions to avoid mosquito bites can help prevent the West Nile virus in kids. Here are some suggestions for avoiding mosquito bites:
- Utilize an insect repellent. You should spray your child’s skin and clothing with bug repellent. Always follow the guidelines provided.
- Dress your child in protective garments. To protect their skin, use long sleeves and trousers. Light-colored garments can also help keep insects away.
- Protect your child from mosquito bites by covering baby carriers and strollers with mosquito netting.
- Use screens on your windows and doors to prevent mosquitoes from entering your home.
- Get rid of standing water around your home, including in buckets, flower pots, and other containers, as this serves as a mosquito breeding ground.
- It is important to remember that no vaccine is available to protect against West Nile virus infection.
The West Nile Virus can infect humans when a mosquito bites them.
The typical mild symptoms that follow a mosquito bite include fever, headache, body pains, fatigue, joint discomfort, swollen lymph nodes, and skin rashes. Rarely, issues that affect the central nervous system, including encephalitis or meningitis, can cause a person to become seriously ill.
While the timing of the first West Nile virus symptoms in children can vary, they typically show up 3 to 14 days after infection. Getting medical help immediately is critical if a youngster exhibits any of the symptoms above.
You can prevent West Nile virus in children by avoiding mosquito bites by using insect repellent, wearing protective clothes, utilizing mosquito netting, using screens, and removing standing water. |
Module 4 - Functions
Over the two weeks, our class will focus on accomplishing two main objectives. The first and foremost goal is to delve into the intricacies of using functions in Python programming. Functions are crucial building blocks that allow you to create organized, modular, and more manageable code. They enable you to compartmentalize different tasks within your program, thereby simplifying complex operations and facilitating easier debugging and maintenance.
Functions in Python serve a dual purpose: they can accept variables as input and return variables as output. This input-output mechanism allows for greater flexibility and reusability in your code. For example, you can create a function that calculates the square of a number and use it throughout your program without having to rewrite the same logic multiple times. This not only makes your code more efficient but also significantly easier to read and understand.
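As a minimal sketch of the input-output mechanism described above (the function names below are illustrative examples, not part of the course materials):

```python
def square(number):
    """Accept a number as input and return its square as output."""
    return number * number


def describe_squares(values):
    """Reuse square() for each value instead of rewriting the same logic."""
    return [f"The square of {value} is {square(value)}" for value in values]


if __name__ == "__main__":
    # The squaring logic lives in one place and is reused for every value.
    for message in describe_squares([2, 5, 12]):
        print(message)
```

Because the squaring logic is defined once, it can be called throughout a program, which is exactly the reusability benefit the module highlights.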
By mastering the concept of functions, you will gain a powerful tool that is fundamental to Python and programming in general. You’ll learn how to declare functions, pass parameters, and return values, among other things. Understanding functions will also pave the way for more advanced programming topics, including object-oriented programming and error handling, which we will explore later in this course. Overall, becoming proficient in using functions will elevate the quality and efficiency of your coding projects. |
New spheres trick, trap and terminate water contaminant

Rice University scientists have developed something akin to the Venus’ flytrap of particles for water remediation.
Micron-sized spheres created in the lab of Rice environmental engineer Pedro Alvarez are built to catch and destroy bisphenol A (BPA), a synthetic chemical used to make plastics.
The research is detailed in the American Chemical Society journal Environmental Science & Technology.
BPA is commonly used to coat the insides of food cans, bottle tops and water supply lines, and was once a component of baby bottles. While BPA that seeps into food and drink is considered safe in low doses, prolonged exposure is suspected of affecting the health of children and contributing to high blood pressure.
The good news is that reactive oxygen species (ROS) – in this case, hydroxyl radicals – are bad news for BPA. Inexpensive titanium dioxide releases ROS when triggered by ultraviolet light. But because oxidating molecules fade quickly, BPA has to be close enough to attack.
That’s where the trap comes in.
Close up, the spheres reveal themselves as flower-like collections of titanium dioxide petals. The supple petals provide plenty of surface area for the Rice researchers to anchor cyclodextrin molecules.
Cyclodextrin is a benign sugar-based molecule often used in food and drugs. It has a two-faced structure, with a hydrophobic (water-avoiding) cavity and a hydrophilic (water-attracting) outer surface. BPA is also hydrophobic and naturally attracted to the cavity. Once trapped, ROS produced by the spheres degrades BPA into harmless chemicals.
In the lab, the researchers determined that 200 milligrams of the spheres per liter of contaminated water degraded 90 percent of BPA in an hour, a process that would take more than twice as long with unenhanced titanium dioxide.
The work fits into technologies developed by the Rice-based and National Science Foundation-supported Center for Nanotechnology-Enabled Water Treatment because the spheres self-assemble from titanium dioxide nanosheets.
“Most of the processes reported in the literature involve nanoparticles,” said Rice graduate student and lead author Danning Zhang. “The size of the particles is less than 100 nanometers. Because of their very small size, they’re very difficult to recover from suspension in water.”
The Rice particles are much larger. Where a 100-nanometer particle is 1,000 times smaller than a human hair, the enhanced titanium dioxide is between 3 and 5 microns, only about 20 times smaller than the same hair. “That means we can use low-pressure microfiltration with a membrane to get these particles back for reuse,” Zhang said. “It saves a lot of energy.”
Because ROS also wears down cyclodextrin, the spheres begin to lose their trapping ability after about 400 hours of continued ultraviolet exposure, Zhang said. But once recovered, they can be easily recharged.
“This new material helps overcome two significant technological barriers for photocatalytic water treatment,” Alvarez said. “First, it enhances treatment efficiency by minimizing scavenging of ROS by non-target constituents in water. Here, the ROS are mainly used to destroy BPA.
“Second, it enables low-cost separation and reuse of the catalyst, contributing to lower treatment cost,” he said. “This is an example of how advanced materials can help convert academic hypes into feasible processes that enhance water security.”
Source: Rice University.
Published on 10th October 2018 |
Lesson One: Introduction to Digital and Physical Archives
- Before Teaching
- Lesson Plan
- Activities, Materials & Presentations
- Curriculum Standards
- Download Lesson Plan [PDF format]
- Introduction to Digital and Physical Archives: Distance Learning Video
Archives are facilities that house physical collections, where records and materials are organized and protected. Archival materials are used to write history. Through the internet, digital archives make those records more accessible to students, researchers, and the general public.
Students learn to navigate a digital archive by browsing and performing effective keyword searches. Through this process, students learn how to use the Helen Keller Archive. They also learn the value of preserving information.
- Understand the function and significance of an archive.
- Describe the different capabilities of a physical and a digital archive.
- Know more about how archives can increase accessibility for people with visual and/or hearing impairments.
- Navigate the digital Helen Keller Archive using the Search and Browse tools.
- What is an archive?
- How do I use a digital archive?
- Why are archives important?
- Computer, laptop, or tablet
- Internet connection
- Projector or Smartboard (if available)
- Worksheets (provided, print for students)
- Helen Keller Archive: https://www.afb.org/HelenKellerArchive
- American Foundation for the Blind: http://www.afb.org
The Library of Congress images below can be used to illustrate and explain the Define an archive section of this lesson.
Library of Congress: The Library of Congress Manuscript Reading Room
Courtesy of the LOC Manuscript Division.
The digital Helen Keller Archive homepage.
Other Digital Archive Examples
- Sports: Baseball Hall of Fame; primarily physical archive with partial photographic digital collection (https://baseballhall.org/about-the-hall/477) (https://collection.baseballhall.org)
- Politics: United Nations; primarily physical archive with online exhibits (https://archives.un.org/content/about-archives) (https://archives.un.org/content/exhibits)
- Comics: Stan Lee Archives (https://rmoa.unm.edu/docviewer.php?docId=wyu-ah08302.xml)
- History: Buffalo Bill Collection (https://digitalcollections.uwyo.edu/luna/servlet/uwydbuwy~60~60)
- Dogs: American Kennel Club; primarily physical archive with partial digital collection (https://www.akc.org/about/archive/) (https://www.akc.org/about/archive/digital-collections/)
- Art: Metropolitan Museum of Art Archives; physical archive with separate digital collections and library (https://www.metmuseum.org/art/libraries-and-research-centers/museum-archives)
- Travel: National Geographic Society Museum and Archives (https://nglibrary.ngs.org/public_home)
- National Geographic digital exhibits (https://openexplorer.nationalgeographic.com/ng-library-archives)
- Space travel: NASA Archive; partially digitized (https://www.archives.gov/space)
- Music: Blues Archive; partially digitized (http://guides.lib.olemiss.edu/blues)
- Books: J.R.R. Tolkien; physical archive (https://www.marquette.edu/library/archives/tolkien.php)
Ask and Discuss
- Do you have a collection? Baseball cards, rocks, seashells, gel pens, shoes, vacation souvenirs?
- Do you and/or your parents save your schoolwork or art projects?
- Where and how do you store old photos? Text messages?
- Personal collections are a kind of archive.
- Things that you store and organize (to look at later) make up a basic archive.
- If you wrote a guide for your friend to use when searching through your [vacation photos/baseball cards/drafts of your papers], you would be running an archive like the pros!
- Optional: Select a sample archive to show students; options provided in resource section.
Define an Archive
- Optional: Use the definitions provided in the lesson definitions.
- To be an archive, a collection must be:
- Composed of unique documents, objects, and other artifacts; and
- Organized to make sense of a collection so that people can find what they are looking for.
- An archive is sometimes also:
- Organized by an institution, managed by archivists, and made available to researchers.
- Tells us about a person, organization, or physical things.
- Typically held and protected in a physical repository, but may be made accessible electronically in a digital platform.
What are the advantages of a physical archive, where you can have the materials right in front of you, versus seeing them on a screen?
- Hands-on encounter with the past. For example, how would it feel to see/read from the original Declaration of Independence at the National Archives?
- Analyze material properties of objects and manuscripts.
- Wider range of access to all the items held in the archive (not all items are digitized).
- Can flip through a physical folder rather than load a new page for every document.
- What do you think is “easier”?
- Have any students experienced something like this?
What are the advantages of a digital archive, where you can have the materials available to you in digital format, on a website?
- Accessible worldwide on the internet—you don’t have to travel to see what’s in the archive.
- Keyword searchable.
- Useful information, such as transcriptions and metadata, is often included.
- Accessible to people with disabilities, including those with impaired vision/hearing.
- For example, the digital Helen Keller Archive allows users to change the text size and color of text and provides description for multimedia including photographs, film, and audio.
Who is Archiving Information About You Right Now?
- How is the public able to access that information now? In the future?
- Is there information you would not want them to access now? In the future? Why?
Using the Helen Keller Archive
Open the digital Helen Keller Archive: https://www.afb.org/HelenKellerArchive
Note: The digital Helen Keller Archive team strongly recommends that this or similar demonstration be included in the lesson, unless the teacher has formally taught these students browse and search techniques. We find that students are used to “Google” style searches, which are not as effective on specialized sites like digital archives.
We are going to use the digital Helen Keller Archive.
Who has heard of Helen Keller? Why is she famous? What did she do?
- Keller lost her sight and hearing at a young age but learned to sign, read, write, speak, and graduated college.
- She used her fame to advocate on behalf of blind and deaf communities, fought for education/employment for blind people and the inclusion of people with disabilities in society.
- She was politically active: Anti-war, advocated for socialism and workers’ rights, as well as the suffrage movement and women’s rights.
- Distribute student version of How to Search [download PDF] and How to Browse [download PDF] and explain that you will be going through a few sample searches as a class. Invite the class to follow along if feasible.
- Pull up the Helen Keller Archive home page and ask the class to explain the difference between search and browse. For example:
- The Browse tool follows the structure and order of the physical collection. Browse is the best way to see how an archive is organized and what it contains.
- The Search tool uses a keyword search term or terms. Search is the best way to find a specific item.
Show the Browse Function
- Click the Browse tab.
- Click Browse by Series; point out the series titles and ask students to explain what each “series” contains.
- In this archive, series are organized based on the type of materials (letters, photographs, and more).
- Explain that this is how a physical archive is organized (in series, subseries, boxes, and folders).
- Browse for a type of item. Guide students through the choices they have at each level.
- For example: “Browse the photographs in this archive. This series is divided into photographs and photo albums. Let’s explore the photographs. How are these organized? It looks like they are organized alphabetically by subject matter. Wow, there are two folders here just for Helen Keller’s dogs! Let’s take a peek.”
- Optional: Ask students to browse for “boomerang given to Helen in Australia”.
Show the Search Function
- Click the Simple Search tab.
- Ask the class to pick a word to search based on either their knowledge of Helen Keller or class curriculum on late 19th/early 20th century.
- For example: Let’s search for documents related to the women’s suffrage movement. The best way to start a keyword search is with a simple keyword. Let’s use “suffrage.”
- Point out the filters in the left-hand column and explain how they are used to narrow search results. Ask students to choose one filter to refine the search and narrow their results for a specific reason.
- For example: “Let’s select 1910-1920 so we can find material written before the 19th Amendment was passed.”
- Works like a library or e-commerce website.
- Optional: Ask students to search for a speech given by Helen Keller while she was traveling abroad. She gave many – they can choose any one. Brainstorm effective search terms and ways they might refine their results, and warn students it will take more than one step to find a speech that qualifies.
- Show the Browse by subject functions and ask how they are similar to, or different from, searching by Keyword(s).
- Use same topic as keyword search (or as close as possible). For example: Can you find “suffrage” in this subject list?
- Explain that not all topics will be present. For example, there is no subject header for “computers”.
- Break students into working groups.
- Assign each group a “scavenger hunt” item (see the in-class worksheet).
- Optional: Collect scavenger hunt items in a private list to be shared with the whole class.
Sample Scavenger Hunt List
- Flyer for a 1981 dance production “Two In One”
- Film of Helen Keller testing a new communication device in 1953
- Medal from the Lebanese government
- Photograph of Helen Keller at a United Nations meeting in 1949
- Or choose your own …
Activities, Materials & Presentations
Activities & Presentations for Teachers
Activities for Students
- Exploring the Digital Helen Keller Archive [PDF format]
- Exploring the Digital Helen Keller Archive – The Needle in the Haystack [PDF format]
Materials (Students & Teachers)
- Definitions: [PDF format]
- Frequently Asked Questions [PDF format]
- How to Search [PDF format]
- How to Browse [PDF format]
This Lesson Meets the Following Curriculum Standards:
Evaluate the advantages and disadvantages of using different mediums (e.g., print or digital text, video, multimedia) to present a particular topic or idea.
Conduct short research projects to answer a question, drawing on several sources and generating additional related, focused questions for further research and investigation.
Gather relevant information from multiple print and digital sources, using search terms effectively; assess the credibility and accuracy of each source; and quote or paraphrase the data and conclusions of others while avoiding plagiarism and following a standard format for citation.
Integrate and evaluate content presented in diverse media and formats, including visually and quantitatively, as well as in words.
Empire State Information Fluency Continuum
- Uses organizational systems and electronic search strategies (keywords, subject headings) to locate appropriate resources.
- Participates in supervised use of search engines and pre-selected web resources to access appropriate information for research.
- Uses the structure and navigation tools of a website to find the most relevant information. |
The following are ten ways you can nurture the five different areas of speech and language development in typically developing infants and toddlers.
1) Eye contact. When communicating with your child, look at his or her face and eyes as often as possible. This helps your child learn that it is appropriate to look at people during communication. Children learn a lot about you through facial expressions and acquire articulation skills by watching the movement of your mouth.
2) Taking turns. Talk to your child and then pause to give them a moment to verbalize. This teaches them the art of turn taking. This skill can also be accomplished during play, using objects and toys.
3) Give your child space. When your child is trying to communicate with you and you know what they want, give them a few seconds before you instantly meet their needs. This will give them the opportunity to vocalize (coo and babble), point, or attempt a word.
4) Give your child choices and then let them express their choice by pointing, vocalizing, or attempting words. The feelings of confidence a child gains by expressing their own choice are building blocks for further exploration of expressive language.
5) Get your child to follow instructions. Start with simple requests that only involve one element, such as “smile” or “kiss.” Then increase to two elements when one element becomes easy for your child (e.g. “Hand up,” or “Touch your nose,” and so on).
6) Read simple books to your child with one or two pictures on each page. Ask them questions that can be answered verbally or by pointing to the correct picture. Try not to put too much pressure on them. If your child does not respond after about 10 or 15 seconds, model the answer for them with a positive tone of voice.
7) Reinforce and demonstrate. If your child produces a verbal attempt that resembles a word, praise them with a pleasant tone of voice and then model the word that you think they attempted. For example, if the child says “ba” for ball, say “You said ball. Yes, it is a ball!”
8) Explore. There are wonderful opportunities to model vocabulary out in the community. A simple trip to the market can be a great chance to name items for your child.
9) Observe how often other people understand your child’s speech. This will give you an idea of how clear his or her articulation really is (parents usually understand their children more than an outside listener). Don’t worry if your toddler is not producing all the sounds in the English language. Many sounds may not develop until four years of age or later. However, you should consider consulting a speech pathologist if it is extremely hard to understand your child’s speech at 3 years of age.
10) Articulate your words clearly when you communicate with your child. Speak slowly and remember to look directly at your child’s face. While speech and language development varies with each child, there is no question that positive daily involvement from a parent and/or a loving caregiver makes the process much smoother. You, the parent, are the “super model” for your child’s speech and language development. Taking time to put these tips into action can give you a thoughtful approach as you interact with your amazing little communicator.
Original text by Karin Howard, M.A., CCC-SLP
As coronavirus (COVID-19) spreads around the world, health professionals are demanding that people limit their personal risk of contracting the virus by thoroughly washing their hands, practicing social distancing, and not touching their nose, mouth, or eyes. In fact, it may surprise you to learn that the eyes play an important role in spreading COVID-19.
Coronavirus is transmitted from person to person through droplets that an infected person sneezes or coughs out. These droplets can easily enter your body through the mucous membranes on the face, such as your nose, mouth, and yes — your eyes.
But First, What Is Coronavirus?
Coronavirus, also known as COVID-19, causes mild to severe respiratory illness associated with fever, coughing, and shortness of breath. Symptoms typically appear within 2 weeks of exposure. Those with acute cases of the virus can develop pneumonia and other life-threatening complications.
Here's what you should know:
Guard Your Eyes Against COVID-19
- Avoid rubbing your eyes. Although we all engage in this very normal habit, try to fight the urge to touch your eyes. If you absolutely must, first wash your hands with soap and water for at least 20 seconds.
- Tears carry the virus. Touching tears or a surface where tears have fallen can spread coronavirus. Make sure to wash your hands after touching your eyes and throughout the day as well.
- Disinfect surfaces. You can catch COVID-19 by touching an object or surface that has the virus on it, such as a door knob, and then touching your eyes.
Coronavirus and Pink Eye
Pink eye, or conjunctivitis, refers to an inflammation of the conjunctiva, the thin membrane covering the white of the eye and the inner eyelids. Conjunctivitis is characterized by red, watery, and itchy eyes. Viral conjunctivitis is highly contagious and can be spread by coughing and sneezing, too.
According to a recent study in China, viral conjunctivitis may be a symptom of COVID-19. The study found conjunctival congestion in 9 of the 1,099 patients (0.8%) who were confirmed to have coronavirus.
If you suspect you have pink eye, call your eye doctor in Raleigh right away. Given the current coronavirus crisis, we ask patients to call prior to presenting themselves at the office of Dr. Naheed Kassam, as it will allow the staff to assess your condition and adequately prepare for your visit.
Contact Lenses or Eyeglasses?
Many people who wear contact lenses are thinking about switching to eyeglasses for the time being to lower the threat of being infected with coronavirus.
Wearing glasses may provide an extra layer of protection if someone coughs on you; hopefully that infected droplet will hit the lens and not your eye. However, one must still be cautious, as the virus can reach the eyes from the exposed sides, tops and bottoms around your frames. Unlike specialized safety goggles, glasses are not considered a safe way to prevent coronavirus.
Contact Lenses and COVID-19
If you wear contacts, make sure to properly wash your hands prior to removing or inserting them. Consider ordering a 3 to 6 month supply of contact lenses and solution; some opticals provide home delivery of contact lenses and solutions. At this stage there is no recommendation to wear daily lenses over monthlies.
Don't switch your contact lens brand or solution, unless approved by your optometrist or optician.
Regularly Disinfect Glasses
Some viruses such as coronavirus, can remain on hard surfaces from several hours to days. This can then be transmitted to the wearer's fingers and face. People who wear reading glasses for presbyopia should be even more careful, because they usually need to handle their glasses more often throughout the day, and older individuals tend to be more vulnerable to COVID-19 complications. Gently wash the lenses and frames with warm water and soap, and dry your eyeglasses using a microfiber cloth.
Stock up on Eye Medicine
It's a good idea to stock up on important medications, including eye meds, in order to get by in case you need to be quarantined or if supplies run short. This may not be possible for everyone due to insurance limitations. If you cannot stock up, make sure to request a refill as soon as you're due and never wait until the last minute to contact your pharmacy.
It is important that you continue to follow your doctor’s instructions for all medications.
Digital Devices and Eyestrain
At times like this, people tend to use digital devices more than usual. Take note of tiredness, sore eyes, blurry vision, double vision or headaches. If these symptoms are exacerbated by extensive use of digital devices, they may be signs of computer vision syndrome and might indicate a need for a new prescription in the near future. This usually isn't urgent, but if you're unsure, you can call our eye doctor's office.
Children and Digital Devices
During this time your children may end up watching TV and using computers, tablets and smartphones more frequently and for more extended periods too. Computer vision syndrome, mentioned above, can affect children as well. We recommend limiting screen time to a maximum of 2 hours per day for children, though it's understandably difficult to control under the circumstances.
Try to get your child to take a 10 to 15 minute break every hour, and stop all screen time for at least 60 minutes before sleep.
Children and Outdoor Play
Please follow local guidelines and instructions regarding outdoor activities for your children. If possible, it's actually good for visual development to spend 1-2 hours a day outside.
From all of us at The Eye Center in Raleigh, we wish you good health and please stay safe. |
If Sheffield Peabody's farm was a home office, then his neighbors often were employees. Since families were seen as reliable sources of labor, it was very common for farmers to send members of their families, as well as themselves, to the different farms scattered across the area. Because of the convenience of their location, these traveling farmers were an easy source of labor for the different families in the county, and the arrangement allowed family members to earn extra supplies and money for their households. These relationships also built up a network of trust between families, as they not only allowed neighbors to perform their chores and duties with relative ease, but also provided a way for the community to bond with one another.
While there were many different workers who came to and left the Peabody farm, a few stayed consistent in their visits. However, even though they were constantly coming and going and giving each other a significant amount of help, Peabody refers to them offhandedly, as if their constantly rotating presence gives them a sort of amorphous identity. In different entries he will either simply refer to them by their initials or call them by their full name. For example, in one entry he writes, “Cleaned some oats, A.M. George Lamont fitted potatoe ground. Henry Muck got 33 potatoes for seed. I took 3 calves to Depot. Sold to Alley Becker. Mary went to the valley with me. Moose cut a colt for me.” (May 21, 1888) Here he shows a daily interaction between himself and some of his neighbors working with one another, and yet he never refers to them as friends or with a shortened title, aside from his wife. It demonstrates how the different farmers were addressed, and how he treated his friends in comparison to how he addressed his family.
These farmers maintained a strong network of aid with one another, which can seem an unusual notion in today's modern workforce. This network of farmers aiding farmers allowed these men and women to create their own strong community, even though they could be miles apart. They relied heavily on one another for work and aid, and their help was accounted for within Peabody's diaries, as evident in the above photo. This record helped ensure that "the Boys" were given their fair share for their part in this communal network of farmers.
The following is by Dennis Shea (NCAR)
The detection, estimation and prediction of trends and associated statistical and physical significance are important aspects of climate research. Given a time series of (say) temperatures, the trend is the rate at which temperature changes over a time period. The trend may be linear or non-linear. However, generally, it is synonymous with the linear slope of the line fit to the time series. Simple linear regression is most commonly used to estimate the linear trend (slope) and statistical significance (via a Student-t test). The null hypothesis is no trend (i.e., an unchanging climate). The non-parametric (i.e., distribution-free) Mann-Kendall (M-K) test can also be used to assess monotonic trend (linear or non-linear) significance. It is much less sensitive to outliers and skewed distributions. (Note: if the distribution of the deviations from the trend line is approximately normally distributed, the M-K will return essentially the same result as simple linear regression.) The M-K test is often combined with the Theil-Sen robust estimate of linear trend. Whatever test is used, the user should understand the underlying assumptions of both the technique used to generate the estimates of trend and the statistical methods used for testing. For example, the Student t-test assumes the residuals have zero mean and constant variance. Further, a time series of N values may have fewer than N independent values due to serial correlation or seasonal effects. The estimate of the number of independent values is sometimes called the equivalent sample size. There are methodologies to estimate the number of independent values. It is this value that should be used in assessing the statistical significance in the (say) Student t-test. Alternatively, the series may be pre-whitened or deseasonalized prior to applying the regression or M-K statistical tests.
There are numerous caveats that should be kept in mind when analyzing trend. Some of these include:
- Long-term, observationally based estimates are subject to differing sampling networks. Coarser sampling is likely to result in larger uncertainties. Variables which have large spatial autocorrelation (e.g., temperature, sea level pressure) may have smaller sampling errors than (say) precipitation, which generally has lower spatial correlation.
- The climate system within which the observations are made is not stationary.
- Station, ship and satellite observations are subject to assorted errors. These could be random, systematic or external, such as changing instruments, observation times or observational environments. Much work has been done on creating time series that take these factors into account.
- While reanalysis projects provide unchanging data assimilation and model frameworks, the observational mix changes over time. That may introduce discontinuities in the time series that cause a trend to be estimated as significant when in fact it is an artifact of the discontinuities.
- Even a long series of random numbers may have segments with short-term trends. For example, the well-known surface temperature record from the Climate Research Unit, which spans 1850-present, shows an undeniable long-term warming trend. However, there are short-term negative trends of 10-15 years embedded within this series. Also, the rate of warming changes depending on the starting date used in that time series.
- As noted above, a series of N observations does not necessarily mean these observations are independent. Often there is some temporal correlation. This should be taken into account, for example, when computing the degrees of freedom of the t-test.
Cite this page
National Center for Atmospheric Research Staff (Eds). Last modified 05 Sep 2014. "The Climate Data Guide: Trend Analysis." Retrieved from https://climatedataguide.ucar.edu/climate-data-tools-and-analysis/trend-analysis.
Acknowledgement of any material taken from this page is appreciated. On behalf of experts who have contributed data, advice, and/or figures, please cite their work as well. |
Precision is Key.
Precision machining uses power-driven tools to remove material and shape a workpiece into a specified design. With precision machining, there are very tight specifications and thus little room for error. Precision machining is often used for objects made up of many small parts that need to fit together, or for things that need to be very durable. For tools that need repair or restoration, precision machining can help return them to their original shape or state.
Precision machining follows designs created with computer-aided design (CAD) and computer-aided manufacturing (CAM) programs, which helps with accuracy. Some materials that can be precision machined are plastic, metal, glass, and ceramic. Tools used in precision machining include milling machines, lathes, electrical discharge machines (EDMs), saws, grinders, and high-speed robotics.
What Does Pilot Bore Mean?
A pilot bore is the initial horizontal hole drilled along the intended route for the final pipe product. The main purpose of the pilot bore is to map the predetermined path of the pipe installation.
During pilot boring, the drill is controlled by an operator from a remote location. Real-time locating technologies are used to assist the operator in steering the pilot bore to avoid underground obstacles, such as water, wastewater, and electrical utilities.
Pilot boring is typically associated with horizontal directional drilling (HDD). Before drilling begins, relatively small workspaces are excavated at the start and endpoints of the pipe installation. These excavations, also known as exit and entry pits, are used to facilitate drilling equipment and installation personnel.
The entry pit usually contains the drilling rig, power unit, drill pipe skid, mud recycling unit, and the control cabin. Erosion and noise control features may also be located at the entry pit as required.
The pilot bore process starts by inserting the drill string and cutting head assembly into the entry pit. The pilot bore penetrates the wall at the entry pit and advances until the cutting head reaches the exit pit.
Once the pilot bore is complete, the hole is enlarged with a larger cutting tool to the diameter required to install the product pipe. This process is known as reaming or back-reaming.
A pilot bore is also known as a pilot hole.
Trenchlesspedia Explains Pilot Bore
Figure 1: The horizontal drilling process, showing the drilling of the pilot bore in phase 1.
During the drilling of the pilot bore, drilling mud (or drilling fluid) is pumped through the drill string and into the pilot hole via nozzles in the cutting head. The drilling fluid mixes with the surrounding soil, creating a slurry that helps suspend and remove the cuttings. In addition to removing entrained cuttings, the drilling fluid also lubricates the drilling equipment and stabilizes the surrounding soil in the pilot bore.
One of the main features of the pilot boring process is its ability to be steered and controlled. This helps outline the path for the final pipe while avoiding obstacles that may interfere with the installation process. To assist the operator in locating the drill in real-time, several locating technologies may be used. The most common of these are:
Walk-Over Locating Systems
In walk-over locating systems, personnel located on the ground level use hand-held equipment to track the pilot bore's progress as it moves through the ground. A transmitter located on the drill head transmits detailed data about the location of the pilot drill bit to a receiver at ground level. This data is sent to the controller, who adjusts the path of the pilot bore as needed.
Wire-Line Locating Systems
These locating systems are similar to their walk-over counterparts. However, instead of being wireless, wire-line locating systems use insulated cables to power the transmitter. Since these systems do not use onboard batteries, the battery life of wire-line locators is superior to walk-over systems.
Like walk-over systems, depth and location readings are transmitted to a receiver at the ground level and sent to an operator.
Gyro-Guided Drilling Systems
Gyro-guided systems operate differently from both walk-over and wire-line systems. Instead of transmitting electromagnetic signals, a system of gyroscopes is used to determine the pilot drill head's location-based parameters. Since the gyroscopes do not need to determine magnetic north to function, they are unaffected by surrounding magnetic disturbances. |
Every individual has something relevant to say, something you never thought of. A human mind is restricted to its experiences, knowledge and perspectives. No one can do all or know all. This is the central argument for the need for diversity, and today, on the International Day of Women and Girls in Science, we want to emphasize the need for women’s perspectives and abilities in scientific research.
Scientific research itself has shown results indicating the strengths that women can carry into their research careers. For example, a review by Meyers-Levy & Loken (2014) shows that women tend to be more inclusive or comprehensive than men in detecting and selecting data. Not only is it important to have as large a mix of abilities within a team as possible, but it is simply unacceptable to leave women and girls with potential and with the desire to participate in science behind.
Along with the efforts we make at BIRA-IASB in terms of education and outreach, we want to mark this day by putting forward a work of art from Ward Neefs, symbolizing our intentions to keep making efforts towards inclusiveness:
This watercolor painting proposes opening up scientific research to all the women of the world. From exploring the tiniest structures inside atoms to the greatness of galaxies. From elementary life forms to complex living creatures on our Earth.
Discovering the secrets of water, earth, and air. Following the trace of light. Investigating evolution. Paving roads to explore the universe. Women’s brilliant minds and collaborative spirits will allow us to make the next big scientific leaps.
– Ward Neefs |
Butterfly adults are characterized by their four scale-covered wings, which give the Lepidoptera their name. These scales give butterfly wings their color: they are pigmented with melanins that give them blacks and browns, as well as uric acid derivatives and flavones that give them yellows, but many of the blues, greens, reds and iridescent colors are created by structural coloration produced by the micro-structures of the scales and hairs.
As in all insects, the body is divided into three sections: the head, thorax, and abdomen. The thorax is composed of three segments, each with a pair of legs. The long proboscis can be coiled when not in use for sipping nectar from flowers.
Nearly all butterflies are diurnal, have relatively bright colors, and hold their wings vertically above their bodies when at rest, unlike the majority of moths which fly by night, are often cryptically colored (well camouflaged), and either hold their wings flat or fold them closely over their bodies.
Butterfly larvae, caterpillars, have a hard head with strong mandibles used for cutting their food, most often leaves. They have cylindrical bodies, with ten segments to the abdomen, generally with short prolegs on segments 3–6 and 10; the three pairs of true legs on the thorax have five segments each. Many are well camouflaged; others are aposematic with bright colors and bristly projections containing toxic chemicals obtained from their food plants. The pupa or chrysalis, unlike that of moths, is not wrapped in a cocoon.
Many butterflies are sexually dimorphic.
Butterflies range in size from a tiny 1/8 inch to a huge almost 12 inches.
Butterflies are distributed worldwide except Antarctica, totaling some 18,500 species.
Many butterflies, such as the painted lady, the monarch, and several danaines, migrate for long distances. These migrations take place over a number of generations and no single individual completes the whole trip. Many migratory butterflies live in semi-arid areas where breeding seasons are short. The life histories of their host plants also influence butterfly behavior.
Butterflies feed primarily on nectar from flowers. Some also derive nourishment from pollen, tree sap, rotting fruit, dung, decaying flesh, and dissolved minerals in wet sand or dirt. Butterflies are important as pollinators for some species of plants. In general, they do not carry as much pollen load as bees, but they are capable of moving pollen over greater distances.
Adult butterflies consume only liquids, ingested through the proboscis. They sip water from damp patches for hydration and feed on nectar from flowers, from which they obtain sugars for energy, and sodium and other minerals vital for reproduction. Several species of butterflies need more sodium than that provided by nectar and are attracted by sodium in salt; they sometimes land on people, attracted by the salt in human sweat. Some butterflies also visit dung and scavenge rotting fruit or carcasses to obtain minerals and nutrients. In many species, this mud-puddling behavior is restricted to the males, and studies have suggested that the nutrients collected may be provided as a nuptial gift, along with the spermatophore, during mating.
Butterflies use their antennae to sense the air for wind and scents. The antennae come in various shapes and colors. The antennae are richly covered with sensory organs known as sensillae. A butterfly's sense of taste is coordinated by chemoreceptors on the tarsi, or feet, which work only on contact, and are used to determine whether an egg-laying insect's offspring will be able to feed on a leaf before eggs are laid on it. Many butterflies use chemical signals, pheromones; some have specialized scent scales or other structures. Vision is well developed in butterflies and most species are sensitive to the ultraviolet spectrum. Many species show sexual dimorphism in the patterns of UV reflective patches. Color vision may be widespread but has been demonstrated in only a few species. Some butterflies have organs of hearing and some species make stridulatory and clicking sounds.
Many species of butterfly maintain territories and actively chase other species or individuals that may stray into them. Some species will bask or perch on chosen perches. The flight styles of butterflies are often characteristic and some species have courtship flight displays. Butterflies can only fly when their temperature is above 27 °C (81 °F); when it is cool, they can position themselves to expose the underside of the wings to the sunlight to heat themselves up. If their body temperature reaches 40 °C (104 °F), they can orientate themselves with the folded wings edgewise to the sun. Basking is an activity which is more common in the cooler hours of the morning. Some species have evolved dark wingbases to help in gathering more heat.
Butterflies in their adult stage can live from a week to nearly a year depending on the species. Many species have long larval life stages while others can remain dormant in their pupal or egg stages and thereby survive winters.
Butterflies may have one or more broods per year.
Courtship is often aerial and often involves pheromones. Butterflies then land on the ground or on a perch to mate. Copulation takes place tail-to-tail and may last from minutes to hours. The male passes a spermatophore to the female; to reduce sperm competition, he may cover her with his scent.
The vast majority of butterflies have a four-stage life cycle; egg, larva (caterpillar), pupa (chrysalis) and imago (adult). In the genera Colias, Erebia, Euchloe, and Parnassius, a small number of species are known that reproduce semi-parthenogenetically; when the female dies, a partially developed larva emerges from her abdomen.
Butterfly eggs are protected by a hard-ridged outer layer of shell, called the chorion. This is lined with a thin coating of wax which prevents the egg from drying out before the larva has had time to fully develop. Butterfly eggs vary greatly in size and shape between species, but are usually upright and finely sculptured. Some species lay eggs singly, others in batches. Many females produce between one hundred and two hundred eggs.

Butterfly eggs are fixed to a leaf with special glue which hardens rapidly. Eggs are almost invariably laid on plants. Each species of butterfly has its own host plant range and while some species of butterfly are restricted to just one species of plant, others use a range of plant species, often including members of a common family. The egg stage lasts a few weeks in most butterflies, but eggs laid close to winter, especially in temperate regions, go through a diapause (resting) stage, and the hatching may take place only in spring.

Butterfly larvae, or caterpillars, consume plant leaves and spend practically all of their time searching for and eating food. Although most caterpillars are herbivorous, a few species are predators. Some larvae, especially those of the Lycaenidae, form mutual associations with ants. They communicate with the ants using vibrations that are transmitted through the substrate as well as using chemical signals. The ants provide some degree of protection to these larvae and they in turn gather honeydew secretions.
Caterpillars mature through a series of developmental stages known as instars. Near the end of each stage, the larva undergoes a process called apolysis, mediated by the release of a series of neurohormones. During this phase, the cuticle, a tough outer layer made of a mixture of chitin and specialized proteins, is released from the softer epidermis beneath, and the epidermis begins to form a new cuticle. At the end of each instar, the larva moults, the old cuticle splits and the new cuticle expands, rapidly hardening and developing pigment. Development of butterfly wing patterns begins by the last larval instar.
When the larva is fully grown, hormones such as prothoracicotropic hormone (PTTH) are produced. At this point the larva stops feeding, and begins "wandering" in the quest for a suitable pupation site, often the underside of a leaf or other concealed location. There it spins a button of silk which it uses to fasten its body to the surface and moults for a final time. While some caterpillars spin a cocoon to protect the pupa, most species do not. Most of the tissues and cells of the larva are broken down inside the pupa, as the constituent material is rebuilt into the imago. To transform from the miniature wings visible on the outside of the pupa into large structures usable for flight, the pupal wings undergo rapid mitosis and absorb a great deal of nutrients.
The reproductive stage of the insect is the winged adult or imago. After it emerges from its pupal stage, a butterfly cannot fly until the wings are unfolded. A newly emerged butterfly needs to spend some time inflating its wings with hemolymph and letting them dry, during which time it is extremely vulnerable to predators.
Invasive species colonize and spread widely in places where they are not normally found. Invasives often affect native species by eating them, out-competing them and introducing unfamiliar parasites and pathogens. For example, the invasive Kudzu plant, native to southeast Asia, overgrows seemingly anything in its path in the southeast US.
Natural selection wrought by invasive species can often be strong and natives will either go extinct or adapt. During adaptation, selection will favor those individuals with characteristics that best allow them to survive and reproduce in the face of the invader. The offspring of the survivors will inherit their parents’ beneficial traits, and the population will evolve.
In the 1950s, the brown anole lizard, Anolis sagrei, arrived in south Florida from Cuba. It quickly boomed to become arguably Florida’s most abundant vertebrate by biomass. The effects of this invasion might not be very noticeable to humans (though their cats have paid rapt attention). But the brown anole certainly makes an impression on Florida’s only native anole species, the green anole, Anolis carolinensis.
This is because the green and the brown anoles enjoy similar lifestyles. They are similarly sized, both active during the day and both defend territories. They eat similar food – mostly insects and spiders – and use similar habitat – the ground and lower parts of trees and bushes. Because of these similarities, we expect the invasive brown anole to impose strong natural selection on the native green.
Thus, my colleagues and I asked: how is the green anole responding to the brown anole invasion?
Researchers before us had observed that green anoles living with brown anoles tend to live higher up in the trees, presumably to escape competition for food and space. Definitive evidence required an experiment. So we got to work on small, man-made islands near Cape Canaveral.
In 1995, we introduced the brown anole to three islands that – until then – had only green anoles. Within a few months, the green anole moved up into the trees and stayed there: clear experimental evidence that the brown anole affects the green by changing green anole behavior.
Fifteen years later, my colleagues and I wondered whether the green anoles had adapted anatomically to their new life up in the trees. We were specifically interested in toepads on their feet; other anole species that live high in trees tend to have large toepads, the better to grasp smoother, narrower branches higher up.
We would have liked to study toepad evolution in the same populations we’d looked at earlier. But the original control islands, with only green anoles, had been invaded by the brown anole by the time we revisited them in 2010. So instead, we chose five large islands that had just green anoles, the only such islands left in the lagoon. We compared their green anoles to the green anoles on six large islands that had been naturally invaded by the brown anole. We did know that the brown anoles had hit the scene sometime between 1995 and 2010 because we had surveyed the islands in 1995 and found them free of brown anoles at that time.
Using little lassos on the end of fishing poles, we captured green anoles. At our field station, we anesthetized them and took digital scans of their toepads. Then we let the lizards wake up and recover overnight, and we released them the next day at the spot we caught them. We often wondered whether their friends believed their abduction stories.
We found that on the invaded islands, green anoles evolved larger toepads. It took only 20 generations – less than 15 years – for the toepads to increase by about 5%. That may not sound like much, but that's a rapid evolutionary pace. For comparison, if the American population were evolving to be taller at that speed, American men would average about seven inches taller after 20 generations. Our findings further support the notion that when natural selection is strong, evolution can proceed quite quickly.
Why did selection favor larger toepads? Like geckos, anoles’ toes have specialized scales with fine hairs on them that cling to surfaces. Anoles with larger toepads are better at clinging. We think that the green anoles were under selection to get better at maneuvering on narrow, flexible and slippery twigs and leaves high in trees. Thus, green anole hatchlings that were born with larger toepads were better able to grow, survive, and reproduce. They passed their genetic traits on to the next generation.
In this case, it appears that the green anole has been able to adapt to coexist with the brown anole. It will not be going locally extinct any time soon. We’ll just have to look up to find it. |
Melodic Minor Scales
I like to think of the melodic minor scale as the chameleon scale, because it changes its colours. The ascending scale creates more tension by sharpening the sixth and seventh steps, and the descending scale relaxes that tension by flattening the seventh and sixth steps again. The sequence of intervals for the ascending scale of A melodic minor is as follows:
Step 1 – 2 (a – b): whole tone
Step 2 – 3 (b – c): semitone
Step 3 – 4 (c – d): whole tone
Step 4 – 5 (d – e): whole tone
Step 5 – 6 (e – f#): whole tone
Step 6 – 7 (f# – g#): whole tone
Step 7 – 8 (g# – a): semitone
The descending half of the melodic minor scale is identical to that of the natural minor scale:
Step 8 – 7 (a – g): whole tone
Step 7 – 6 (g – f): whole tone
Step 6 – 5 (f – e): semitone
Step 5 – 4 (e – d): whole tone
Step 4 – 3 (d – c): whole tone
Step 3 – 2 (c – b): semitone
Step 2 – 1 (b – a): whole tone
So the ascending scale shares its first five steps with the natural and harmonic minor scales, and its sixth to eighth steps with its major counterpart (note: the major key with the same keynote and NOT the relative major). As already mentioned, the descending melodic minor scale is identical to the descending natural minor scale. We now know that harmonic minor scales form the harmonic basis of minor keys, so it stands to reason (and the name suggests) that melodic minor scales form the melodic basis. The raised sixth step prevents the dissonant augmented second interval found in harmonic minor scales and the raised seventh provides a strong resolution from a leading tone to the tonic. Since descending passages don’t require the tension and definition provided by a leading tone, the descending melodic minor offers a sound truer to the overall minor structure.
The diagram below shows the structure of A melodic minor ascending on the keyboard:
Here’s a video diagram showing the lowest octave of A melodic minor ascending and descending on the cello.
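For readers who like to tinker, here is a minimal Python sketch (my own illustration, not part of the original lesson) that builds A melodic minor from the interval patterns listed above. Note names come from a simplified, sharps-only chromatic scale, so enharmonic spelling is not handled.

```python
# Build A melodic minor (ascending and descending) by walking whole-tone and
# semitone steps along a sharps-only chromatic scale starting on A.
CHROMATIC = ["a", "a#", "b", "c", "c#", "d", "d#", "e", "f", "f#", "g", "g#"]

ASCENDING_STEPS  = [2, 1, 2, 2, 2, 2, 1]   # whole/semitone pattern going up
DESCENDING_STEPS = [2, 2, 1, 2, 2, 1, 2]   # natural-minor pattern coming down

def build_scale(start_index, steps, direction=1):
    """Return the note names produced by walking the chromatic scale."""
    notes = [CHROMATIC[start_index]]
    index = start_index
    for step in steps:
        index = (index + direction * step) % 12
        notes.append(CHROMATIC[index])
    return notes

ascending = build_scale(0, ASCENDING_STEPS, direction=1)
descending = build_scale(0, DESCENDING_STEPS, direction=-1)

print("ascending: ", ascending)   # ['a', 'b', 'c', 'd', 'e', 'f#', 'g#', 'a']
print("descending:", descending)  # ['a', 'g', 'f', 'e', 'd', 'c', 'b', 'a']
```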
A Bit of History
The development of melodic and harmonic minor scales as we know and use them in Western music happened over a long period. Their predecessors are modes, which date back to ancient civilisations - notably the Ancient Greeks. Mediaeval modes and scales share certain similarities, but follow different rules and form the basis of two different musical languages with distinctly different sounds. It was during the Renaissance period, when polyphonic¹ music really came into its own, that the modal system, which had served the simpler homophonic² and monophonic³ musical styles of the Mediaeval period perfectly well, began to prove inadequate, as did the notation system. The rise of polyphony meant that music was becoming considerably more harmonically complex. The need for stronger definition in harmonic resolution drove the development of major and minor keys, and in particular the need for different types of minor scales to cater for a strong leading tone (the raised seventh) and the avoidance of awkward dissonance in melodic vocal lines (the augmented second interval in the harmonic minor scale). Dissonance was a major consideration and was avoided wherever possible in the harmonic structure of Renaissance music. For this reason we see elements of all three minor scales in minor keys.
By the early baroque era (from 1600 onwards), a harmonic language based on tonality (harmony based on a key center) rather than modality had emerged. Melodic and harmonic minor scales and major scales were in common use. The range of key signatures increased considerably, and the use of key signatures with sharps was introduced. Equal temperament tuning, a system whereby the octave is divided into twelve equal semitones, gained wider acceptance by keyboard makers by the 1630s. Although it did not become the principal tuning system for another two centuries, it enabled the 24 keys found in the circle of fifths - the cornerstone of Western art music from 1600 - 1900.
¹Polyphonic: Musical texture in two or more (usually at least three) relatively independent parts [The Oxford Companion to Music Edited by Alison Latham, 2002]
²Homophonic: Music in which one voice or part is clearly melodic, the others accompanimental and chiefly chordal. The term 'homophony' has also been used to describe part-writing where all parts move in the same rhythm; a more precise term for this is homorhythm. [The Oxford Companion to Music Edited by Alison Latham, 2002]
³Monophonic: A term used to denote music consisting of only one melodic line, with no accompaniment or other voice parts (e.g. plainchant, unaccompanied solo song). [The Oxford Companion to Music Edited by Alison Latham, 2002]
The following table shows major keys, their relative minor keys and the associated key signatures. |
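The table itself is not reproduced here, but the relationship it captures is simple: every relative minor lies three semitones below its major keynote, and each clockwise step around the circle of fifths adds one sharp to the key signature. The short sketch below (my own addition, covering only the sharp keys and using simplified sharps-only spelling) generates those pairings.

```python
# Derive each major key's relative minor by stepping down three semitones,
# then walk the sharp side of the circle of fifths (up a fifth each time).
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def relative_minor(major_keynote):
    index = NOTES.index(major_keynote)
    return NOTES[(index - 3) % 12]

key = "C"
for sharps in range(8):
    print(f"{key} major / {relative_minor(key)} minor: {sharps} sharp(s)")
    key = NOTES[(NOTES.index(key) + 7) % 12]   # up a perfect fifth
```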
In fact, mosquitoes are one of the deadliest animals on the planet when you factor in all mosquito-borne diseases.
Some people think mosquitoes can also transmit HIV. However, this isn’t true.
Read on to learn more about why it’s impossible for a mosquito to transmit HIV to humans.
Even if a mosquito bites a person with HIV, then bites someone else, they can’t transmit HIV to the second person.
This is because of the mosquito’s biology, and the biology of HIV itself. Mosquitoes can’t transmit HIV for the following reasons:
HIV doesn’t affect mosquitoes, so they can’t transmit it to humans
Mosquitoes (and other insects) lack the receptor HIV uses to recognize immune cells. This means that mosquitoes can’t get an HIV infection. Instead, the virus just gets broken down and digested in the mosquito’s stomach.
Because they can’t get an HIV infection, mosquitoes can’t transmit HIV to humans.
A mosquito’s feeding mechanism
A mosquito’s proboscis — the elongated part of its mouth it uses to bite humans — has two tubes.
One tube is used for sucking blood from humans. The other injects saliva into the bite. This means only saliva, not blood (from either a mosquito or another person) goes into your body when you get a mosquito bite.
HIV can’t be transmitted through saliva, so it can’t be transmitted through a mosquito’s bite.
It would take too many bites
HIV actually isn't very easily transmitted. A large amount of the virus has to enter the body for someone to contract it.
Even if some HIV were still in a mosquito’s body when it bit you — if it had yet to be fully digested — there wouldn’t be enough of it to transmit to you.
HIV is transmitted through direct contact with certain bodily fluids that contain HIV, including blood, semen, vaginal and rectal fluids, and breast milk. These fluids must enter the person's body for them to contract HIV.
HIV is mainly transmitted through sex without a condom or other barrier method, and through the sharing of needles.
In some cases, HIV can be transmitted during pregnancy, childbirth, or breastfeeding. However, antiretroviral therapy can greatly lower the risk of this occurring, and it’s safe to take during pregnancy.
HIV is highly unlikely to be transmitted through saliva.
HIV can only be transmitted when a person with the virus has a detectable viral load (the amount of HIV in their blood). Taking daily medication (antiretroviral therapy) for HIV can lead to an undetectable viral load, which means HIV can’t be transmitted to others.
Although mosquitoes can’t transmit HIV, there are many diseases they do transmit.
Mosquitoes in different parts of the world transmit different diseases. This is due to the fact that different pathogens thrive in different environments. In addition, different mosquito species often transmit different diseases.
Diseases that mosquitoes transmit include malaria, dengue, Zika, West Nile virus, chikungunya, and yellow fever.
Mosquito-borne diseases are the most common and dangerous threat from mosquitoes. But in rare cases, mosquito bites can also cause severe allergic reactions.
If you have trouble breathing or swelling in your face or throat after being bitten by a mosquito, call 911 or go to the nearest emergency room immediately. These are symptoms of a serious allergic reaction called anaphylaxis, which can be life threatening. |
Thalassemia is a group of inherited blood disorders that can be passed from parents to their children and affect the amount and type of hemoglobin the body produces.
Hemoglobin (Hb or Hgb) is a substance present in all red blood cells (RBCs). It is important for proper red blood cell function because it carries the oxygen that RBCs deliver around the body. One portion of hemoglobin called heme is the molecule with iron at the center. Another portion is made of up four protein chains called globins. Each of the four globin chains holds a heme group containing one iron atom. Depending on their structure, the globin chains are designated as alpha, beta, gamma, or delta.
Not all hemoglobin is the same. Different types of hemoglobin are classified according to the type of globin chains they contain. The type of globin chains present is important in hemoglobin's ability to transport oxygen.
Normal hemoglobin types include:
- Hemoglobin A – this is the predominant type of Hb in adults (about 95-98%); Hb A contains two alpha (α) protein chains and two beta (ß) protein chains.
- Hb A2 – makes up about 2-3.5% of Hb found in adults; it has two alpha (α) and two delta (δ) protein chains.
- Hb F – makes up to 2% of Hb found in adults; it has two alpha (α) and two gamma (γ) protein chains. Hb F is the primary hemoglobin produced by a developing baby (fetus) during pregnancy. Its production usually falls to a low level within a year after birth.
People with thalassemia have one or more genetic mutations that they have inherited and that result in a decreased production of normal hemoglobin. When the body doesn't make enough normal hemoglobin, red blood cells do not function properly and oxygen delivery suffers. This can lead to anemia with signs and symptoms that can range from mild to severe, depending on the type of thalassemia that a person has. Examples of signs and symptoms include weakness, fatigue, and pale skin (pallor). See the Classifications section for more about the signs, symptoms, and complications of the different types of thalassemia.
For hemoglobin, there are four genes in our DNA that code for the alpha globin chains and two genes (each) for the beta, delta, and gamma globin chains. Since everyone inherits a set of chromosomes from each parent, each person inherits two alpha globin genes and one beta globin gene from each parent. (For general information on genetics, see The Universe of Genetic Testing.) A person may inherit mutations in either the alpha or beta globin genes.
With thalassemias, mutations in one or more of the globin genes cause a reduction in the amount of the particular globin chain produced. This can upset the balance of alpha to beta chains, resulting in unusual forms of hemoglobin or an increase in the amount of normally minor hemoglobin, such as Hb A2 or Hb F. The thalassemias are usually classified by the type of globin chain whose synthesis is decreased. For example, the most common alpha chain-related condition is called alpha thalassemia. The severity of this condition depends on the number of genes affected.
Other types of mutations in the genes coding for the globin chains can result in a globin that is structurally altered, such as hemoglobin S, which causes sickle cell disease. The inherited disorders that result in the production of an abnormal hemoglobin molecule are described in the article on Hemoglobin Abnormalities. Together, thalassemia and hemoglobin abnormalities are called hemoglobinopathies.
An interstellar object that whizzed through our solar system last year is confounding astronomers trying to understand how planets, comets and asteroids form.
The object, called 'Oumuamua, has a composition that suggests it should have formed close to its parent star. But in a twist, astronomers said it's hard to imagine how the object left its parent solar system, because it's hard to eject an object orbiting so close to a star.
'Oumuamua (pronounced oh-MOO-ah-MOO-ah) was discovered on Oct. 19, 2017, using the NASA-funded Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) at the University of Hawaii.
After looking at 'Oumuamua's high speed and highly inclined path through the solar system, scientists at the International Astronomical Union's Minor Planet Center concluded the object was interstellar. The discovery of 'Oumuamua marks the first time an interstellar object was confirmed in our solar system.
"This object was likely ejected from a distant star system," said Elisa Quintana, an astrophysicist at NASA's Goddard Space Flight Center, in a NASA statement.
"What's interesting is that just this one object flying by so quickly can help us constrain some of our planet-formation models," added Quintana, who is a co-author of a new paper in the journal the Monthly Notices of the Royal Astronomical Society. The paper, released today (March 27), describes what 'Oumuamua observations are revealing about the formation of planetesimals, which are small, rocky objects that could come together under gravity's pull to form planets.
Observations of 'Oumuamua suggest that the object was probably pretty dry. Before its discovery, 'Oumuamua zoomed past the sun at about 196,000 mph (315,400 km/h). While the object was traveling fast enough to escape our solar system, its speed was somewhat similar to that of a comet passing by the sun, NASA said.
Comets are loose collections of ice and rock. As they draw near the sun, their surface warms, and this loosens gas and dust to escape into space. 'Oumuamua didn't leave behind such a trail.
Some scientists have suggested that in its own solar system, 'Oumuamua likely formed in a different region than comets formed in our own neighborhood. But the new paper has a counterargument.
Solar systems such as our sun and its planets form out of vast clouds of gas, dust and ice. Objects, such as comets, that form far away from their parent sun can remain icy. If the objects are close to the sun, it's too hot for ice to remain, so they coalesce into objects such as asteroids.
But if 'Oumuamua formed as close to its star as an asteroid, it's difficult to imagine how it was ejected away from that zone, the new paper suggests.
"The total real estate that's hot enough for that is almost zero," said lead author Sean Raymond, an astrophysicist at the French National Center for Scientific Research and the University of Bordeaux, in the same statement. "It's these tiny, little, circular regions around stars. It's harder for that stuff to get ejected, because it's more gravitationally bound to the star. It's hard to imagine how 'Oumuamua could have gotten kicked out of its system if it started off as an asteroid."
"If we understand planet formation correctly, ejected material like 'Oumuamua should be predominantly icy," added Thomas Barclay, an astrophysicist at Goddard and the University of Maryland, Baltimore County. "If we see populations of these objects that are predominantly rocky, it tells us we've got something wrong in our models."
How 'Oumuamua's journey began
While researchers are further investigating where 'Oumuamua formed, they have come up with a plausible scenario for how it was ejected. Based on simulations from other work, they suggest a gas giant planet — something similar to a Jupiter — flung 'Oumuamua into interstellar exile.
As a gas giant plows by small objects such as asteroids, the planet exerts intense gravitational forces on the objects. In some cases, gravity breaks the objects apart. In the case of 'Oumuamua, the planet's gravity exerted pressure on the object, forcing it into the cigar-like shape observed today.
"The researchers calculated the number of interstellar objects we should see, based on estimates that a star system likely ejects a couple of Earth-masses of material during planet formation," NASA said. "They estimated that a few large planetesimals will hold most of that mass but will be outnumbered by smaller fragments like 'Oumuamua." |
Rolihlahla (Nelson) Mandela was born on July 18th, 1918, in the small village of Mvezo, South Africa. On his first day of school, his teacher announced that he would go by an English first name and called him Nelson. The name stuck with him. At the time, the country's official name was the Union of South Africa, a self-governing part of the British Empire.
Racism shaped Mandela's youth. In the early 1940s, he studied law at the University of the Witwatersrand, where he was the only black student in his class. In 1944, he joined the African National Congress (ANC), a group of black South Africans fighting for equal rights.
Things turned even worse after the 1948 elections, which ushered in the policy of racial segregation known as apartheid. Mandela responded with a peaceful opposition campaign inspired by Gandhi, while the state kept him under surveillance because of his attraction to communism.
In 1960 the ANC was banned. A year later, the now-illegal ANC formed an armed wing called Umkhonto we Sizwe (Spear of the Nation), and Mandela abandoned the peaceful campaign. Tension in South Africa grew.
Nelson Mandela and his journey to the presidency
In October 1962, the apartheid regime sentenced Mandela to five years in prison for inciting a workers' strike and leaving the country without permission. Two years later, his sentence was extended to life imprisonment. In prison, Mandela endured further abuse, and people all around the world demanded his freedom.
The new president, Frederik de Klerk, released Nelson Mandela from prison in 1990, after he had spent over 27 years behind bars. Once free, Mandela resumed his campaign for equal rights and once again became the leader of the ANC, transforming it into a social democratic party. In 1993, he and de Klerk jointly received the Nobel Peace Prize.
Finally, in 1994, South Africa held its first fully democratic elections. Nelson Mandela won and became the country's first black president. The signing of a democratic constitution banning racial discrimination was among his most notable achievements. He continued his work for human rights from 1999, when his presidential term ended, until his death on December 5th, 2013.
“It always seems impossible, until it is done.” – Nelson Mandela
The core discipline of the trivium is language. Language is the foundation of human communication, and thus of relationships. Wherever we are, we will communicate with others, and we want our students to do so intentionally and excellently.
- Our grammar classes establish a firm foundation and grasp of English grammar and give models of imitation in writing.
- The study of Latin reinforces a precise grasp of English syntax, increases vocabulary through Latin derivatives, and provides students with a greater consciousness of the basis of language, rather than relying on intuition and habit.
- In 7th-8th grade, students continue to build on the foundation by imitating great forms of writing.
- By 9th grade, students should have mastered the basics of English grammar and syntax and are able to refine their writing through diligent practice in their humanities courses. Teachers provide individual feedback and coach students in the three canons of writing: invention, organization, and elocution.
- Our logic and rhetoric classes feed into students’ writing ability by providing them with the tools to craft sound arguments and develop a winsome and beautiful style of their own. |
Research the FCC (Federal Communication Commission) guidelines on low-power radio transmitters requiring no licenses. Determine what the maximum power output is, frequency range(s), antenna lengths, transmission time, and any other restrictions relevant to building a small transmitter circuit.
I cannot answer the question here, as FCC guidelines are subject to change.
This will be an interesting topic for you and your students to explore as they begin to design their transmitter circuits. In fact, it should be the very first step in the design process!
In the very early days of radio communication, a popular style of transmitter was the spark gap circuit. Explain how this circuit functioned, and why it is no longer used as a practical transmitter design.
“Spark gap” transmitter circuits were built very much like you would expect, from their names: an air gap through which a high-voltage electric spark jumped. Because the pulse durations of the sparks were so short, the equivalent output frequencies spanned a very wide range, ultimately rendering this technology impractical due to interference between multiple transmitters.
Anyone who has ever heard “popping” noises on an AM radio produced by a (pulsed) electric fence of the type used around farms to keep animals from wandering off will understand how spark-gap transmitters broadcast across a large range of frequencies.
This question could very well lead into a fascinating discussion on Fourier transforms, if your students are so inclined. According to Fourier theory, the shorter the duration of a pulse, the broader its frequency range. The product of uncertainties for the pulse’s location in time and its frequency is equal to or greater than a certain constant. Theoretically, a pulse of infinitesimal width would encompass an infinitely wide (infinitely uncertain) range of frequencies.
Incidentally, the math behind this is precisely the same as for Heisenberg's Uncertainty Principle: the quantum physics principle which states that the certainty of a particle's position is inversely proportional to the certainty of its momentum, and vice versa. Contrary to popular belief, this phenomenon is not an artifact induced by the act of measuring either position or momentum. It is not as though one could obtain perfectly precise measurements of position and momentum if only one had access to the perfect measuring device(s). Rather, this Principle is a fundamental limit on the certainty possessed by a particle with regard to its position and momentum. Likewise, an infinitesimal pulse has no definite frequency.
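As a hands-on way to see this time-bandwidth trade-off (my own illustration, not part of the original answer key), the following sketch measures roughly how wide the spectrum of a single rectangular pulse is as the pulse is made shorter. The sample rate and pulse durations are arbitrary values chosen for the demonstration.

```python
# Shorter pulses occupy more of the spectrum: estimate a crude half-peak
# bandwidth for rectangular pulses of decreasing duration. Requires NumPy.
import numpy as np

fs = 1_000_000                    # sample rate in Hz (assumed for the demo)
t = np.arange(0, 0.01, 1 / fs)    # 10 ms of samples

def spectral_width(pulse_duration):
    """Rough bandwidth (Hz) over which a single rectangular pulse stays above half its peak."""
    pulse = (t < pulse_duration).astype(float)
    spectrum = np.abs(np.fft.rfft(pulse))
    freqs = np.fft.rfftfreq(len(pulse), 1 / fs)
    strong = freqs[spectrum >= spectrum.max() / 2]   # frequencies still above half peak
    return strong.max()

for duration in (1e-3, 1e-4, 1e-5):   # 1 ms, 100 us, 10 us pulses
    print(f"{duration * 1e6:7.0f} us pulse -> roughly {spectral_width(duration) / 1e3:8.1f} kHz wide")
```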
How is the frequency of your transmitter’s oscillator circuit established? What would you have to do to change that frequency?
The answer to this question, of course, will vary according to the design of oscillator used in your transmitter circuit. Note that there is likely more than one way to change the frequency of your circuit, so be prepared to give multiple answers during discussion!
This question helps students research and understand their particular oscillator circuit(s). Be it Hartley, Colpitts, or some crystal-controlled topology, students need to know how and why the oscillation frequency is fixed.
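For the common LC-tank topologies (Hartley, Colpitts), the frequency is set by the resonance of an inductor and capacitor, f = 1 / (2π√(LC)), so changing either component (or switching crystals, in a crystal-controlled design) changes the frequency. The sketch below is a generic illustration with hypothetical component values, not a description of any particular transmitter circuit.

```python
# Resonant frequency of an ideal LC tank: f = 1 / (2*pi*sqrt(L*C)).
import math

def resonant_frequency(inductance_h, capacitance_f):
    """Resonant frequency in hertz of an ideal LC tank."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Hypothetical example: a 100 uH coil with a 270 pF capacitor lands near the
# AM broadcast band. For a Colpitts oscillator, capacitance_f would be the
# series combination of its two capacitors.
f = resonant_frequency(100e-6, 270e-12)
print(f"tank resonance: {f / 1e3:.0f} kHz")
```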
Explain the difference between AM (Amplitude Modulation) and FM (Frequency Modulation).
This is an easy question to find the answer to. I’ll leave the job to you!
Ask your students to explain which type of modulation their transmitter circuit will use, and what advantages one modulation type may have over the other.
Published under the terms and conditions of the Creative Commons Attribution License
The solar panels we use typically have one layer of semiconducting solar cells.
Tandem solar cells comprise more than one layer, and are effectively a stack of different solar cells on top of each other.
Tandem solar cells can either be individual cells or connected in series, and they typically have much higher efficiencies than the conventional single-junction solar cells used today in most solar modules.
The most common arrangement for tandem cells is to grow them monolithically, so that all the cells are grown as layers on the substrate and tunnel junctions connect the individual cells. Each cell in the stack is thus optimised for a different section of the solar spectrum, allowing the device to capture more energy from the sun.
Because it has multiple p-n junctions, a tandem solar cell is a type of multi-junction cell.
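To make the "capture more of the spectrum" point concrete, here is a toy calculation (an illustration added here, not a manufacturer figure and not how real modules are rated). It treats the sun as a 5778 K blackbody, assumes every photon above a junction's bandgap is absorbed, and credits each absorbed photon with exactly the bandgap energy, the excess being lost as heat. A hypothetical 1.8 eV / 1.1 eV stack always comes out ahead of a single 1.1 eV junction, because the high-energy photons are converted at the larger gap instead of being thermalised down to the smaller one.

```python
# Toy "ultimate efficiency" comparison of a single junction vs. a two-gap stack.
import numpy as np

K_BOLTZMANN_EV = 8.617e-5            # Boltzmann constant, eV per kelvin
T_SUN = 5778.0                       # effective solar temperature, kelvin
E = np.linspace(0.01, 10.0, 20000)   # photon energies in eV
dE = E[1] - E[0]

# Blackbody photon flux per unit photon energy (overall scale cancels out).
photon_flux = E**2 / np.expm1(E / (K_BOLTZMANN_EV * T_SUN))
incident_power = np.sum(E * photon_flux) * dE

def harvested_fraction(gaps_ev):
    """Fraction of incident power delivered by a stack of ideal junctions."""
    total, upper = 0.0, np.inf
    for gap in sorted(gaps_ev, reverse=True):
        absorbed = (E >= gap) & (E < upper)            # photons this junction takes
        total += gap * np.sum(photon_flux[absorbed]) * dE
        upper = gap
    return total / incident_power

print(f"single 1.1 eV junction : {harvested_fraction([1.1]):.0%}")
print(f"1.8 eV / 1.1 eV tandem : {harvested_fraction([1.8, 1.1]):.0%}")
```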
You Might Want to Check Out these Questions Too on Types of Solar Panels
- What is a Crystalline Silicon/c-Si Solar Cell/Module?
- How do Monocrystalline Solar Panels Compare with Polycrystalline Solar Panels?
- What is an Amorphous Silicon (a-Si) Solar Module? Is it Used Now?
- What are CdTe (Cadmium Telluride ) Thin Film Solar Cells? Are These Commercially Available?
- What are CIGS (Copper Indium Gallium Selenide) Thin Film Solar Cells? Are These Available Commercially Today?
- What is a CIS Solar Module? Are These Commercially Available Today?
- What are Bifacial Solar Cells? Are they Commercially Available?
- What are Concentrating Photovoltaics (CPV)? What are their Benefits?
- What are Single-junction Solar Cells? How are they Different from Conventional Solar Cells?
- What is a Transparent Solar Cell? And What are Its Advantages Over Conventional Cells?
- What are Heterojunction Solar Cells? Are these Commercially Available?
- What is the Smallest Sized Solar Panel Available Currently?
- What is the Largest Capacity/Size Solar Panel Currently?
- What are n-type Solar Cells? And how are they Different from p-type Cells?
- What are Multi-junction Solar Cells? How are They Different from Conventional Solar Cells? |
Updated: Mar 21, 2020
Less than four years before the American Civil War commenced at Fort Sumter, South Carolina, the United States Army received a shocking lesson in how unprepared it was for large-scale operations in hostile territory. In 1857, President James Buchanan declared the Utah Territory to be in a state of rebellion, and ordered the Army to send a few thousand troops to secure the region, defend presidential political appointments in the territory, and establish a federal presence in the area.
Members of the Church of Jesus Christ of Latter-Day Saints (LDS), commonly called “Mormons,” had commenced a movement into modern-day Utah in 1847. The leader of the LDS church, Brigham Young, hoped that by creating physical isolation from the United States, his congregation might be safe from attacks and persecution. Although the area the LDS chose to settle was technically still a part of Mexico in 1847, its possession transferred to the United States through the Treaty of Guadalupe Hidalgo the following year. Young expected to develop the region into an independent state, Deseret, that could both peacefully coexist with its more powerful neighbors and offer a haven to any LDS members or would-be converts who wished to join their society. When the California gold rush triggered the movement of thousands of migrants along the western trails, many of those emigrants moved through the area claimed by the LDS, causing a certain degree of friction and apprehension. Further, the Compromise of 1850 formally established the Utah Territory, a federal designation that remained until Utah achieved statehood in 1896.
Due to the designation of Utah as a territory, President Millard Fillmore needed to appoint a political governor for the territory. Recognizing the likely outcome if he chose anyone from outside the LDS faith, Fillmore opted instead to name Brigham Young as the territorial governor, and appointed other church leaders to key positions within the territorial administration. Fillmore's successor, Franklin Pierce, seemed to largely ignore any aspects of governing the Utah Territory, and left Young in place as its governor. However, in 1856, the LDS practice of polygamy became a national political issue. Both the Democratic and Republican Party candidates for federal offices denounced polygamy as a barbaric practice that must be ended in the United States and its territories. The Republicans tied polygamy to the question of chattel slavery, which they also abhorred and wished to abolish, while the Democrats demonized polygamy as a means of distracting from the slavery question. Thus, regardless of who won the election to the presidency, the LDS was likely to face substantial problems in the near future.
James Buchanan emerged from the bruising presidential contest, and decided almost immediately to remove Young as governor of the territory—but did not deign to notify Young of that fact. In his place, Buchanan named Alfred Cumming as the new territorial governor, and sent him to Utah carrying the proclamation of his new office. Young had already heard of the impending change, even if he had not been formally notified, and feared that the U.S. government was on the verge of a campaign to annihilate the LDS. His perception was bolstered by news that General Winfield Scott, the commanding general of the Army, had ordered a force of 2,500 troops to Utah to build one or more fortifications to establish firmer control over the region.
Young ordered his followers to commence preparations to resist an invasion. He designated locations to stockpile supplies, reactivated the Nauvoo Legion (a militia force of all able-bodied LDS men between the ages of 15 and 60), and instituted the manufacture of firearms and other weapons. His militia leaders prepared their families for evacuations, and adopted a plan to conduct a guerrilla campaign rather than directly facing the U.S. Army in the field. To that end, they intended to pursue a scorched-earth policy, burning crops and other potential resources in the face of any advance by the enemy.
In September of 1857, members of the Nauvoo Legion incited Paiutes to launch an attack upon the Baker-Fancher wagon train, a party of approximately 140 emigrants heading toward California that was encamped at Mountain Meadow. The emigrants resisted the attack, but were tricked into laying down their arms and then led into an ambush by the LDS militia. Only 17 children under the age of 7 were spared in the ensuing slaughter—those children were adopted by local families, while the possessions of the wagon train were publicly auctioned. News of the massacre reached the Army officers in the region, who interpreted it as proof that the LDS were in open revolt.
The Army campaign left Fort Leavenworth, Kansas, in July—already far too late in the campaign season to safely reach Utah and establish control. Its commander, Colonel Albert Sidney Johnston, decided that he should delay his advance in the face of potential resistance, and ordered his troops into terribly spartan winter quarters to await the arrival of spring. He sent repeated requests for resupply and reinforcements, but could do little to prosecute the campaign until the spring of 1858.
Thankfully, Johnston’s delay allowed time for diplomacy and cooler heads to prevail. Friends of the LDS had not been idle in Washington, D.C., and many worked to convince Buchanan that the Utah Territory was not in revolt. To test this concept, Buchanan penned an offer of pardons on April 6, 1858. Any members of the LDS who laid down their arms, accepted Cumming as the territorial governor, and submitted to federal control would not be held accountable for their previous actions. Brigham Young immediately accepted the pardon, although he denied the territory had ever been in a state of revolt, and he encouraged his followers to do the same, effectively defusing the situation.
The Army marched into Salt Lake City in June of 1858, without a shot being fired. There were certainly those in its leadership who had hoped to punish the LDS for their resistance, but others in the Army delegation, such as Lieutenant Colonel Philip St. George Cooke (the former commander of the Mormon Battalion in 1847) held the LDS citizens in high regard. Johnston elected to construct Camp Floyd approximately 50 miles southwest of Salt Lake City, in a sparsely populated area. He correctly assumed this would reduce any friction with local inhabitants, while still being close enough to guard the overland trails and serve as a reminder of federal authority. Camp Floyd was only in operation for three years before its troops were recalled east to serve in the Civil War, although a small garrison was reestablished in 1862. During the Civil War, the LDS church and the inhabitants of the Utah Territory effectively remained neutral. Although the federal government passed a series of anti-polygamy laws in the early 1860s, President Abraham Lincoln chose not to attempt any enforcement of them in Utah, in exchange for Young’s efforts to keep the territory from joining the Confederacy.
The Utah Expedition was ultimately a blundering campaign that emerged from a series of misunderstandings and prejudice on both sides. Neither side wished to compromise, and both tended to assume the worst about their counterparts without spending much effort to determine whether their assumptions were correct. The cooling of tempers was primarily aided by the weather and timing of the expedition—had it been launched earlier in the spring, it might have actually triggered a substantial amount of bloodshed. While Brigham Young might have lost his temporal post as the governor of the territory, neither he nor Cumming ever suffered from any illusions about who possessed true power and authority in the region, or from where it was derived. If anything, the Army’s invasion of the Utah Territory demonstrated its utter unpreparedness for even minor campaigns involving more than a few hundred troops, barely a decade after the successful invasion of Mexico. It should have served as a warning for Winfield Scott, at least, that the Army was completely incapable of large-scale operations. Should an invasion of a different territory in the United States prove necessary, the Army needed to relearn many of its lessons on how to build, equip, and supply a large force operating in the face of enemy resistance. |
Childhood trauma occurs more often than you think. More than two-thirds of children reported at least 1 traumatic event by age 16. After a disaster or traumatic event, youth and adolescents may complain about physical aches or pains because they cannot identify what is really bothering them emotionally. When we better understand childhood trauma, its impact, and its symptomatology, we can better understand the language it speaks when expressed by children, in the classroom and in the home.
It is important that we recognize the many faces and voices influenced by traumatic childhood experiences and because trauma is defined by the person who experiences it, no single list can include all causes.
With that said, potentially traumatic events include:
•Psychological, physical, or sexual abuse
•Community or school violence
•Witnessing or experiencing domestic violence
•National disasters or terrorism
•Commercial sexual exploitation
•Sudden or violent loss of a loved one
•Refugee or war experiences
•Military family-related stressors (e.g., deployment, parental loss or injury)
•Physical or sexual assault
•Serious accidents or life-threatening illness
Nationally, there were 679,000 child abuse and neglect victims in 2013, or 9.1 victims per 1,000 children.
Each year, the number of youth requiring hospital treatment for physical assault-related injuries would fill every seat in 9 stadiums.
1 in 4 high school students was in at least 1 physical fight.
1 in 5 high school students was bullied at school; 1 in 6 experienced cyberbullying.
19% of injured and 12% of physically ill youth have post-traumatic stress disorder.
More than half of U.S. families have been affected by some type of disaster (54%).
It's important to recognize the signs of traumatic stress and its short- and long-term impact. The signs of traumatic stress may be different in each child. Young children may react differently than older children.
Preschool Children
•Fear being separated from their parent/caregiver
•Cry or scream a lot
•Eat poorly or lose weight
Elementary School Children
•Become anxious or fearful
•Feel guilt or shame
•Have a hard time concentrating
•Have difficulty sleeping
Middle and High School Children
•Feel depressed or alone
•Develop eating disorders or self-harming behaviors
•Begin abusing alcohol or drugs
•Become involved in risky sexual behavior
The Body’s Alarm System
Everyone has an alarm system in their body that is designed to keep them safe from harm. When activated, this tool prepares the body to fight or run away. The alarm can be activated at any perceived sign of trouble and leave kids feeling scared, angry, irritable, or even withdrawn.
Healthy Steps Kids Can Take to Respond to the Alarm
•Recognize what activates the alarm and how their body reacts
•Decide whether there is real trouble and seek help from a trusted adult
•Practice deep breathing and other relaxation methods
Impact of Trauma
The impact of child traumatic stress can last well beyond childhood. In fact, research has shown that child trauma survivors may experience:
•Learning problems, including lower grades and more suspensions and expulsions
•Increased use of health and mental health services
•Increased involvement with the child welfare and juvenile justice systems
Tips for Talking With and Helping Children and Youth Cope After a Disaster or Traumatic Event: A Guide for Parents, Caregivers, and Teachers helps parents and teachers recognize common reactions young people of different age groups have after a traumatic event, and offers tips for how to respond in a helpful way and when to seek support.
Encourage students in grades 4 and up to improve their research skills and test scores using Note Taking: Lessons to Improve Research Skills and Test Scores.
This 48-page activity book helps students develop strategies for effective note-taking from these sources:
- online resources
- classroom lectures
The book illustrates these types of note-taking techniques:
- Venn diagrams
- note cards
- cause and effect
The book also includes teacher ideas for note-taking activities, references, and answer keys. |
Bioengineers dream of growing spare parts for our worn-out or diseased bodies. They have already succeeded with some tissues, but one has always eluded them: the brain. Now a team in Sweden has taken the first step towards this ultimate goal.
Growing artificial body parts in the lab starts with a scaffold. This acts as a template on which to grow cells from the patient's body. This has been successfully used to grow lymph nodes, heart cells and voice boxes from a person's stem cells. Bioengineers have even grown and transplanted an artificial kidney in a rat.
Growing nerve tissue in the lab is much more difficult, though. In the brain, new neural cells grow in a complex and specialised matrix of proteins. This matrix is so important that damaged nerve cells don't regenerate without it. But its complexity is difficult to reproduce. To try to get round this problem, Paolo Macchiarini and Silvia Baiguera at the Karolinska Institute in Stockholm, Sweden, and colleagues combined a scaffold made from gelatin with a tiny amount of rat brain tissue that had already had its cells removed. This "decellularised" tissue, they hoped, would provide enough of the crucial biochemical cues to enable seeded cells to develop as they would in the brain. |
Gene Cloning in Plasmid Vector | By - Dr. Suresh Kaushik
Gene cloning is a technique commonly used in biotechnology. In this technique, a section of DNA is put into a vector that acts as a vehicle to transfer the DNA to a host cell, e.g. a bacterium. The plasmid vector multiplies in the bacterial cell, so many copies of the original section of DNA are produced. Ultimately, a colony of bacteria forms, each cell containing one or several copies of the DNA carried by the vector. The colony is termed a clone, and the molecule of DNA contained in the vectors has been cloned. If the original section of DNA represents a gene, the process is known as gene cloning.
A DNA fragment to be cloned is obtained through the application of restriction endonucleases. Most restriction enzymes cleave duplex DNA at specific palindromic sequences. An inherent advantage of this method is the ability to precisely excise the desired fragment at its restriction sites. The complementary ends of the two DNAs are then covalently joined through the action of the enzyme DNA ligase to form a recombinant plasmid.
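As an illustrative aside (not from the original article), the palindromic nature of a recognition sequence such as EcoRI's GAATTC, and the task of locating such sites along a DNA molecule, can be sketched in a few lines of Python. The example sequence is made up.

```python
# Show that EcoRI's recognition site is palindromic (equal to its own reverse
# complement) and find its positions in a short, made-up DNA sequence.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def find_sites(dna, recognition_site):
    """Return 0-based positions where the recognition site occurs."""
    return [i for i in range(len(dna) - len(recognition_site) + 1)
            if dna[i:i + len(recognition_site)] == recognition_site]

ECORI = "GAATTC"
print(reverse_complement(ECORI) == ECORI)   # True: the site reads the same on both strands

dna = "ATG" "GAATTC" "CGTTAGCTA" "GAATTC" "AAAT"   # hypothetical sequence with two sites
print(find_sites(dna, ECORI))                      # [3, 18]
```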
How can one select only those host organisms that contain a properly constructed vector? In the case of plasmid transformation, this is usually done through the use of antibiotics and/or chromogenic substrates. Addition of an antibiotic such as ampicillin will eliminate any colonies that did not take up the plasmid, because the intact plasmid carries a gene that confers antibiotic resistance. In blue/white colony selection, E. coli transformed with a plasmid containing a foreign DNA insert in its polylinker region lacks β-galactosidase activity, because the insert interrupts the protein-encoding sequence of the lacZ' gene. Thus, when grown in the presence of the chromogenic substrate X-gal, bacteria that have an insert in the polylinker region form colorless colonies, whereas bacteria containing only plasmids that lack an insert form blue colonies.
A cloned structural gene must be inserted into an expression vector, a plasmid that contains the properly positioned transcriptional and translational control sequences for the protein's expression. With the use of a relaxed control plasmid and an efficient promoter, the production of the protein of interest may reach thirty percent of the host's total cellular protein. The ability to synthesize a given protein in large quantities is already having enormous medical, agricultural, and industrial impact.
Gene cloning has contributed hugely to agricultural and medical research. In agriculture it has mainly been applied to develop genetically modified (transgenic) crops with resistance to pests and diseases and improved nutritional quality. It has allowed genes and their functions to be studied in much greater detail than was previously possible. In the future, there will be further possibilities for understanding gene function, protein formation and metabolism using gene cloning techniques.
About Author / Additional Info:
Dr. Suresh Kaushik
A Biotechnogical Professional and author from India |
Sunlight contains ultraviolet (UV) radiation, which causes premature aging of the skin, wrinkles, cataracts, and skin cancer. The amount of damage from UV exposure depends on the strength of the light, the length of exposure, and whether the skin is protected. There are no safe UV rays or safe suntans.
Sun exposure at any age can cause skin cancer. Be especially careful in the sun if you burn easily, spend a lot of time outdoors, or have any of the following physical features:
- Numerous, irregular, or large moles.
- Fair skin.
- Blond, red, or light brown hair.
It's important to examine your body monthly because skin cancers detected early can almost always be cured. The most important warning sign is a spot on the skin that is changing in size, shape, or color during a period of 1 month to 1 or 2 years.
Skin cancers often take the following forms:
- Pale, wax-like, pearly nodules.
- Red, scaly, sharply outlined patches.
- Sores that don't heal.
- Small, mole-like growths - melanoma, the most serious type of skin cancer.
Block Out UV Rays
- Cover up. Wear tightly-woven clothing that blocks out light. Try this test: Place your hand between a single layer of the clothing and a light source. If you can see your hand through the fabric, the garment offers little protection.
- Use sunscreen. A sun protection factor (SPF) of at least 15 blocks 93 percent of UV rays. You want to block both UVA and UVB rays to guard against skin cancer. Be sure to follow application directions on the bottle.
- Wear a hat. A wide brim hat (not a baseball cap) is ideal because it protects the neck, ears, eyes, forehead, nose, and scalp.
- Wear UV-absorbent shades. Sunglasses don't have to be expensive, but they should block 99 to 100 percent of UVA and UVB radiation.
- Limit exposure. UV rays are most intense between 10 a.m. and 4 p.m. If you're unsure about the sun's intensity, take the shadow test: If your shadow is shorter than you, the sun's rays are the day's strongest.
Preventing Skin Cancer
For more information about preventing, detecting, and treating skin cancer, check out these sources:
American Cancer Society
Centers for Disease Control and Prevention
The Skin Cancer Foundation
OSHA 3166-06R 2003 |
A gene that allows mice to accept human bone marrow cells more efficiently is presented online in Nature Immunology. The gene, called Sirpa, limits graft failure of transplanted blood stem cells and was found by studying different strains of immunodeficient mice that varied in their ability to accept human blood stem cell transplants.
Such mice serve as valuable tools to study human blood cell deficiencies or diseases. That closely related inbred mouse strains showed differences in engraftment frequencies quickly narrowed the search for the gene. The mouse Sirpa genes are polymorphic, meaning that stable inherited differences are found in these genes that can alter their function.
The protein encoded by Sirpa, SIRP-alpha, interacts with another protein called CD47 expressed on the surface of the human blood cells. This interaction is thought to prevent a class of immune scavenging cells called phagocytes from attacking and eating cells that lack CD47 or have CD47 molecules that cannot be recognized by SIRP-alpha. This latter possibility is why some mouse strains rejected their human blood transplant while other strains did not.
These differences, however, are not restricted to mice. Humans likewise display polymorphisms in the human SIRPA gene. The authors speculate such differences might explain why some bone marrow transplants are rejected despite being 'tissue matched' by other criteria for donor-recipient compatibility. If correct, SIRPA can be added to this list of markers used to screen for suitable donors to make bone marrow transplant safer and more successful.
Jayne S Danska (Hospital for Sick Children, Toronto, ON, Canada)
Abstract available online.
(C) Nature Immunology press release.
Message posted by: Trevor M. D'Souza |
Aggression in Youth
What is aggression?
Everyone gets angry sometimes, even small children. But some children and teens have so much trouble controlling their anger that they shove, hit, or make fun of other people. This causes them trouble at home and at school. They often have a hard time making friends. And their aggression makes parenting them a challenge.
Aggression is any behavior that hurts other people. It can be physical—hitting or pushing—or verbal, such as name-calling. Aggression also can be social. Children may make fun of other kids or ignore them to make them feel left out. Older children and teens may gossip about peers or spread rumors about them on social media. Bullying is a common type of aggression.
Both boys and girls can be physically or socially aggressive. But boys often express anger in a physical way. Girls tend to be socially aggressive.
The reasons some children are more aggressive than others are complex. Some children may be born with an aggressive personality. They may be more impulsive than other children: They act without thinking about what might happen. They may learn to be aggressive by being around angry adults and peers. Nonaggressive children often don't want to be around them, so aggressive kids can spend time with other aggressive kids, which encourages more aggression.
Parenting an aggressive child can be hard and tiring. You may feel overwhelmed, embarrassed, and even angry yourself. But help is available for you and your child. With patience, support, and help, most children can learn to handle conflict without harming others.
When is aggression a serious problem?
All children have to learn how to deal with anger and frustration. Many toddlers go through a phase of temper tantrums, where they yell and scream and swing their arms and legs when they're upset. School-age children may throw things or get into a fight on the playground. As they grow, most children learn from adults—and from other children—how to express anger or handle conflict in a way that doesn't hurt others.
Aggression is a problem if it happens often and gets your child in trouble at home and at school. Aggression may be a sign of a problem called oppositional defiant disorder (ODD). Children with ODD may have tantrums and talk back to their parents or other adults. If this hostile behavior gets worse, it can lead to a more serious problem called conduct disorder. Older children and teens with conduct disorder may break rules, skip school, and steal or destroy property. Conduct disorder is linked to depression, substance abuse, dropping out of school, and crime (which can lead to going to jail or prison).
Extreme aggression is sometimes also called maladaptive aggression.
If a child of any age shows repeated aggression for 6 months or more, it may be a sign of ODD. Older children usually have to show a pattern of severe aggression for a year before they are diagnosed with conduct disorder.
When to call a doctor
Call your child's doctor right now if:
- Your child hurts or threatens to hurt others or himself or herself.
- Your child attacks you, or you fear for your safety.
Call your child's doctor if:
- Your child has been caught stealing several times.
- Your child is repeatedly sent home from school for behavior problems.
What can increase the risk of aggression in youth?
A child or teen's home life and other surroundings can raise the risk of aggression. Children may become aggressive if they:
- See violence in their neighborhood.
- Feel pressure to join a gang.
- Live in a home with weapons.
- Use alcohol or drugs.
- Are being bullied.
- Live in a home with parents who are aggressive, have marital problems, or have a problem with drugs or alcohol.
- Spend a lot of time without adult supervision.
- Have parents who discipline with harsh language and spanking.
- Watch violent movies or TV programs or play violent video games.
How is an aggression problem diagnosed?
To see if your child has a problem with aggression, a pediatrician or a mental health professional will ask about your child's behavior at home and at school. Does your child act out of control and have trouble calming down? Does your child throw things? Does he or she get in fights with other children? How often do the outbursts happen? Are they occurring more often?
The doctor or counselor may watch your child at home or at school. Your child's teachers also may be interviewed.
Your child may have a physical exam and tests to see if he or she has a health problem that could cause aggression or make it worse.
What is the treatment for aggression problems?
Counseling is the main treatment for aggression in youth. Your child may have counseling alone and with you. Through role-playing and other methods, your child can learn how to cope with things that make him or her angry. In some cases, a child may need medicine to treat a mood disorder or another condition that may lead to aggressive behavior.
Counseling can help parents learn how to guide their child to make better choices. Parents sometimes make aggression worse without meaning to. They may get so frustrated by their child's anger or tantrums that they punish him or her by yelling or spanking. They also may forget to praise good behavior. Counseling can help you provide consistent discipline. You show your child the rules and what will happen (the consequences) if he or she breaks them.
How can you prevent aggression in your child?
Set rules and consequences
- Make house rules for your family. Let your child know the consequences (such as loss of certain privileges) for not following the rules.
- If you say you will take away a privilege, do it. It can be hard to follow through when your child says he or she is sorry. But your child needs to know you mean what you say.
- Create a chart with rules and chores for younger children. Your child can earn stars or other stickers for completed chores or good behavior. These stars can be turned in for privileges, such as more play time or a game night with the family.
- Ask your child how he or she would feel if someone pushed him or her on the playground.
- Read stories to young children about a child coping with a problem in a positive way.
- When reading with your child or watching a TV show, ask what was good about a character's behavior, and what was not good. What could the character have done differently to make a better choice?
Model good behavior
- Teach toddlers not to hit or bite others. Gently pull your child away and say "no" firmly.
- Use your own behavior to show your child how to act. Try not to yell when correcting your child's behavior.
- Catch your child being good. Praise your child when he or she handles conflict in a positive way or shows empathy for others.
- Involve your child in a sport. Or help your child find a hobby or social activity to share with other kids.
- Encourage your child's friendships with nonaggressive peers. Even one friend who is a positive role model can help a child feel accepted and make good choices.
Other Works Consulted
- Leff S, et al. (2009). Aggression, violence, and delinquency. In WB Carey et al., eds., Developmental-Behavioral Pediatrics, 4th ed., pp. 389–396. Philadelphia: Saunders Elsevier.
- Walter HJ, DeMaso DR (2011). Disruptive behavior disorders. In RM Kliegman et al., eds., Nelson Textbook of Pediatrics, 19th ed., pp. 96–100. Philadelphia: Saunders.
By Healthwise Staff. Primary Medical Reviewers: John Pope, MD, MPH - Pediatrics; Kathleen Romito, MD - Family Medicine; Louis Pellegrino, MD - Developmental Pediatrics
Current as of December 7, 2017
Papaver is a genus of poppies, belonging to the Poppy family (Papaveraceae).
Its 120-odd species include the opium poppy and corn poppy. These are annual, biennial and perennial hardy, frost-tolerant plants growing natively in the temperate climates of Eurasia, Africa and North America (Canada, Alaska, Rocky Mountains). One section of the genus (Section Meconella) has an alpine and circumpolar arctic distribution and includes some of the most northerly-growing vascular land plants.
Papaver grows in disturbed soil. Its seeds may lie dormant for years until the soil is disturbed; the plants then bloom in great numbers under cool growing conditions.
The large, showy terminal flowers grow on long, hairy stalks, to a height of even 1 m or more, as in the Oriental Poppy (Papaver orientale). Their colors vary from the deepest crimson, lilac, white, or violet to bright yellow or soft pink. The tissue-paper-like flowers may be single, double or semi-double. The size of these flowers can be amazing, as the Iceland Poppy (Papaver nudicaule) grows to 15-20 cm across.
The flower buds are nodding or bent downwards, turning upwards as they open. The flower has two layers: the outer layer of two sepals drops off as the bud opens, while the inner layer consists of 4 (but sometimes 5 or 6) petals. There are many stamens in several whorls around a single pistil.
The ovary later develops into a poricidal capsular fruit, capped by the dried stigma. The numerous, tiny seeds escape with the slightest breeze through the pores of the capsule.
Poppies have a long history. They were grown as ornamental plants in Mesopotamia as early as 5,000 BC. They were found in Egyptian tombs. In Greek mythology, the poppy was associated with Demeter, goddess of fertility and agriculture. People believed they would get a bountiful crop if poppies grew in their field, hence the name 'corn poppy'. In this case, the name 'corn' was derived from 'korn', the Greek word for 'grain'.
They are also sold as cut flowers in flower arrangements, especially the Iceland Poppy. They deserve a prominent place in any garden, border, or in meadow plantings. They are probably one of the most popular wildflowers.
Throughout history, poppies have been attributed important medicinal properties. The alkaloid rhoeadine is derived from the flowers of the Corn Poppy (Papaver rhoeas) and is used as a mild sedative. The stems contain a latex or milky sap.
In Flanders Fields by John McCrae, May 1915
In Flanders fields the poppies blow
Between the crosses, row on row,
That mark our place; and in the sky
The larks, still bravely singing, fly
Scarce heard amid the guns below.

We are the Dead. Short days ago
We lived, felt dawn, saw sunset glow,
Loved and were loved, and now we lie
In Flanders fields.

Take up our quarrel with the foe:
To you from failing hands we throw
The torch; be yours to hold it high.
If ye break faith with us who die
We shall not sleep, though poppies grow
In Flanders fields.
The poppy's significance to Remembrance Day is a result of Canadian military physician John McCrae's poem In Flanders Fields. The poppy emblem was chosen because of the poppies that bloomed across some of the worst battlefields of Flanders in World War I, their red colour an appropriate symbol for the bloodshed of trench warfare. An American YMCA Overseas War Secretaries employee, Moina Michael, was inspired to make 25 silk poppies based on McCrae's poem, which she distributed to attendees of the YMCA Overseas War Secretaries' Conference. She then made an effort to have the poppy adopted as a national symbol of remembrance, and succeeded in having the National American Legion Conference adopt it two years later. At this conference, a Frenchwoman, Anna E. Guérin, was inspired to introduce the widely used artificial poppies given out today. In 1921 she sent her poppy sellers to London, England, where they were adopted by Field Marshal Douglas Haig, a founder of the Royal British Legion, as well as by veterans' groups in Canada, Australia and New Zealand. Some people choose to wear white poppies, which emphasises a desire for peaceful alternatives to military action.
The Royal Canadian Legion suggests that poppies be worn on the left lapel, or as close to the heart as possible.
Remembrance Day – also known as Poppy Day, Armistice Day (the event it commemorates) or Veterans Day – is a day to commemorate the sacrifices of members of the armed forces and of civilians in times of war, specifically since the First World War. It is observed on 11 November to recall the end of World War I on that date in 1918. (Major hostilities of World War I were formally ended at the 11th hour of the 11th day of the 11th month of 1918 with the German signing of the Armistice.) The day was specifically dedicated by King George V, on 7 November, 1919, to the observance of members of the armed forces who were killed during war; this was possibly done upon the suggestion of Edward George Honey to Wellesley Tudor Pole, who established two ceremonial periods of remembrance based on events in 1917.
_____________California State Flower_____________
Vast fields of Golden Poppies have ever been one of the strong and peculiar features of California scenery. The gladsome beauty of this peerless flower has brought renown to the land of its birth. Present everywhere, at all times in some form, it is not surprising that it has taken firm hold of the affections of the people, and that the homage of the nature-loving world is so freely offered it.
--Emory E. Smith, The Golden Poppy, 1902
Eschscholzia californica was the first named member of the genus Eschscholzia, which was named by the German botanist Adelbert von Chamisso after another botanist, Johann Friedrich von Eschscholtz, his friend and colleague on Otto von Kotzebue’s scientific expedition to California and the greater Pacific in the early 19th century.
The California poppy is the California state flower. It was selected as the state flower by the California State Floral Society in December 1890, winning out over the Mariposa lily (genus Calochortus) and the Matilija poppy (Romneya coulteri) by a landslide, but the state legislature did not make the selection official until 1903. Its golden blooms were deemed a fitting symbol for the Golden State. April 6 of each year is designated "California Poppy Day."
Horticulturalists have produced numerous cultivars with various other colors and blossom and stem forms. These typically do not breed true on reseeding.
A common misconception associated with the plant, because of its status as a state flower, is that the cutting or damaging of the California poppy is illegal. There is no such law in California, outside of state law that makes it a misdemeanor to cut or remove any plant growing on state or county highways or public lands except by authorized government employees and contractors; it is also against the law to remove plants on private property without the permission of the owner (Cal. Penal Code Section 384a).
California poppy leaves were used medicinally by Native Americans, and the pollen was used cosmetically. The seeds are used in cooking.
California Indians cherished the poppy both as a source of food and for the oil extracted from the plant. Known botanically as Eschscholzia californica, and sometimes called the flame flower, la amapola, and copa de oro (cup of gold), the poppy grows wild throughout California. It became the state flower in 1903. Every year April 6 is California Poppy Day, and Governor Wilson proclaimed May 13-18, 1996, Poppy Week.
__________Drying Seed Pods & Seeds____________
Look carefully at a flower. When the flower fades leave it where it is, do not remove it. Eventually at the base of the flower there will be some swelling. This is where the seeds are forming. Let the fading petals stay where they are and let them fall naturally, do not remove them. Allow the developing seedpod to grow without disturbance. The base of the flower will swell even more and soon you'll notice that the flower stem and the swollen base are turning a papery-brown color. This is an indication that the seeds are near to maturity and are almost fully ripe. In effect the plant is achieving its goal of reproduction for that flower, it has produced viable seeds and no longer needs to expend energy to keep it nourished, so it no longer sends nutrient-rich moisture up the stem to the seedpod. That is why the stem is browning—it is no longer being supplied with nutrients and water and so it is drying and dying back.
This drying action will continue for several days to weeks more, and as the swollen pod continues to dry you will see it begin to open. Some pods make star-shaped openings; plants like poppies create a ridge of small circular openings near the top of the pod, and this ridge of openings functions like a saltshaker—tipping the pod over and shaking it will disperse the seeds. Some flowers, like snapdragon or columbine, make cup-like seedpods—their ripe seeds can easily pour from the pod. All flower pods will open in their own way when the seeds are ripe. These openings are essential so the plant can disperse its seeds. When you see that the pods and stems are both brownish in color, AND you notice that the pods are starting to open, you can then collect the seedpods knowing that the seeds are fully mature and ripe.
Some plants, like grasses, don't make seed pods but instead develop and mature their ripe seeds directly along the flowering stem. Some grass seeds are so lightweight that they can waft away on breezes, some grasses have heavier seeds that drop from the plant when ripe, and some seeds grow hook-like extensions that catch onto the coat of a passing animal--they are snagged off the stem and carried far away before falling to the ground.
After maturity, gather the dried seedpods. Place the pods on open plates or in open bowls and stash them in a safe place where they won't be disturbed so the seeds in the pods can continue to dry naturally for another week or two. Afterwards, remove the seeds from the pods. Ripe seed from grasses can be stripped from the plant with your hand. To assure that the seeds are thoroughly dry spread them on an open plate for a few more days. Occasionally stir the seeds to make sure the bottom layers will get a chance to finish drying too.
Storing dry seed is easy. Some people use paper packets or coin envelopes; small mailing envelopes will do fine too. Some people use small plastic reclosable bags, but they make sure the seeds are bone-dry before placing them into the plastic bag and closing it. Seed which is not completely dry can grow mold and spoil inside plastic packets. Always label the packets so you know what's inside—you can write the name and information on the packets or make labels for the packets with a graphics program and your home printer. Store the seed packets where they'll be away from heat or direct sunlight. You can use cardboard file boxes to keep seeds in; some people reuse popcorn gift-tins that have a tight-fitting lid. These are especially good if the seeds are stored where there might be mice—hungry mice looking for seeds can easily chew into cardboard and some plastics, but it's almost impossible for them to gnaw through the metal wall of a popcorn tin. Some people place their containers of seeds on shelves or in closets, some use a drawer in their bedroom or dining room, and some like to store their seeds in plastic containers inside the refrigerator. It's important to remember that seeds stored in the refrigerator are not going to benefit from cold-stratification unless they have first been sown into a moist sowing medium. Storing seeds in the refrigerator provides the benefit of cold storage but not the benefit of cold-stratification; it will not enhance the germination of seeds which require cold-stratification for germination.
___________Poppy Pods Saving Seeds__________
Poppies are among the very easiest of flowers to gather seeds from. There are many varieties of annual and perennial poppy, and the seed gathering method is similar for all.
Allow the poppy to flower and do not deadhead. The petals will drop and a seed pod will develop at the end of the stem. When the seeds have matured, the capsule will brown and develop a series of small openings just beneath the crown cap of the pod. These openings function like a salt shaker, in that the seeds can be poured from the dried pod.
Gather the pods after they have browned and made seed dispersal openings. Allow them to dry for a week or so on a plate in a warm room. Afterwards the pods may be turned upside down and the seeds will fall from them. A few taps on the pod will help to remove all the seeds from within.
Poppy seeds are very small black balls. They are often used in baked foods such as poppyseed strudel or poppyseed rolls and bagels, or they can be tossed with buttered noodles.
Cell behavior, once shrouded in mystery, is revealed in new light
MU researchers gain better understanding of cell behavior using a specialized microscope
Gavin King and a team of University of Missouri researchers are one step closer to understanding cell behavior with the help of a specialized microscope.
Story posted: Oct. 30, 2018
By: Eric Stann
COLUMBIA, Mo. – A cell’s behavior is as mysterious as a teenager’s mood swings. However, University of Missouri researchers are one step closer to understanding cell behavior, with the help of a specialized microscope.
Previously, in order to study cell membranes, researchers would often have to freeze samples. The proteins within these samples would not behave like they would in a normal biological environment. Now, using an atomic force microscope, researchers can observe individual proteins in an unfrozen sample — acting in a normal biological environment. This new observation tool could help scientists better predict how cells will behave when new components are introduced.
“What’s missing right now in cell biology is the ability to predict cell behavior,” said Gavin King, associate professor of physics and astronomy in the MU College of Arts and Science, and joint assistant professor of biochemistry. “We don’t know all of the details yet on a number of biological processes. For example, when a drug is introduced to a cell, it must pass through the membrane, which may create a reaction. The more knowledge we have about that reaction, the better we will be able to create drugs that can target a specific area and, possibly, result in fewer side effects.”
The atomic force microscope is capable of tracing the three-dimensional shape of an individual protein in biological conditions (in fluid at room temperature). It consists of a robotic arm with a tiny needle attached to one end. Researchers position the arm precisely over the sample they wish to analyze. Then, by very gently tapping the needle multiple times into the specimen at various points, a real-time, three-dimensional image of a protein is developed.
For this study, researchers focused on imaging the consequences of a chemical reaction occurring within one particular protein from E.coli that is responsible for transporting other proteins across the cell membrane. They picked E.coli for this study because of the simplicity of its cells. While researchers could not control the precise moment the reaction occurred, the force microscope’s tapping motion allowed researchers to watch in real time how that protein changed its shape in response to the release of chemical energy. These conformational changes are directly related to the protein’s biological function.
“We can keep our eyes on just one protein, add various components, and then watch what happens,” King said. “It is like making a movie of a single molecule doing its biological work. We are really in the early days of understanding the mechanical details of how cells work, but as these tools become increasingly more precise they could provide us with essential information in the future.”
The study, “Single molecule observation of nucleotide induced conformational changes in basal SecA-ATP hydrolysis,” was published in Science Advances. Funding was provided by the National Science Foundation (CAREER Award #: 1054832) and a Burroughs Welcome Fund Career Award. The content is solely the responsibility of the authors and does not necessarily represent the official views of the funding agencies.
In addition to King, the publication was co-written by Nagaraju Chada, Kanokporn Chattrakun and Brendan P. Marsh of the MU Department of Physics and Astronomy, along with Chunfeng Mao and Priya Bariya of the MU Department of Biochemistry. |
The largest glacier in Greenland is even more vulnerable to sustained ice losses than previously thought, scientists have reported.
Jakobshavn glacier, responsible for feeding flotillas of icebergs into the Ilulissat icefjord — and possibly for unleashing the iceberg that sank the Titanic — is an enormous outlet for the larger Greenland ice sheet, which itself contains enough ice to raise seas by more than 20 feet.
Of that ice, more than 6 percent flows toward the ocean through Jakobshavn, which has raced inland since the 1990s, pouring ever more of its mass into the seas — a change that scientists believe has been caused by warming ocean temperatures. If all of the ice that flows through this region were to melt, it would raise global sea levels by nearly two feet. Just from 2000 to 2011, the ice loss through Jakobshavn alone caused the global sea level to rise by a millimeter.
The U.S. east coast in particular is already a “hot spot” for sea level rise, with rates in many areas exceeding the current global average of a little over 3 millimeters per year.
But until now, researchers have not been sure how far Jakobshavn’s ice extends below sea level — or how much deeper it gets further inland. That’s crucial because Jakobshavn is undergoing a dangerous “marine ice sheet instability,” in which oceanfront glaciers that grow deeper further inland are prone to unstoppable retreat down what scientists call a “retrograde” slope.
That’s where the new science comes in: Researchers who flew over Jakobshavn in a helicopter toting a gravimeter, used to detect the gravitational pull of the ice and deduce its mass, say they’ve found the glacier extends even deeper below sea level than previously realized, a configuration that sets the stage for further retreat.
“The way the bed looks, sort of makes it more prone to continuous retreat for decades to come,” said one of the study’s authors, Eric Rignot, a researcher with NASA’s Jet Propulsion Laboratory and the University of California-Irvine.
Rignot conducted the research with colleagues from his own institution, New York University, the University of Kansas, and Sander Geophysics, which makes the gravimeter used in the study. The study was led by Lu An of the University of California-Irvine, and published in Geophysical Research Letters.
Here's a visualization of Jakobshavn's historic retreat going all the way back to 1851, courtesy of NASA. The image only shows through 2010, but the glacier has continued to retreat since then.
The study found that Jakobshavn is close to 0.7 miles thick where it currently touches the ocean (with only a little over a football field of ice rising above the water and the rest submerged). However, only about five to 10 miles inland toward the center of Greenland, the glacier grows considerably thicker and plunges deeper below sea level, eventually becoming over a mile thick, with nearly a mile of that mass extending below the sea surface. Moreover, the glacier is deep over a vast area.
Along most of that distance, the ice will get steadily deeper and thicker, favoring faster retreat. The glacier front is currently moving backward at about a third of a mile per year, and so will plunge into these deeper regions over the course of coming decades.
The greater depth of Jakobshavn matters for at least two reasons. As it retreats, Jakobshavn will present a thicker ice front to the warm ocean, leading to even greater ice flow and loss. Meanwhile, the depth itself favors increased ice loss because at the extreme pressures involved, the freezing point of ice itself actually changes, making it more susceptible to melting by relatively warm seawater in the deepest part of the fjord.
Granted, there are still some key limits on the speed of retreat — in part because the glacier remains hemmed in by the walls of a fjord that will exert continual friction upon its ice.
“The greater depth of the trough indicated by the new data will favor faster retreat, but it is such a narrow trough that some stabilization from the sides is likely to continue, so that there is still no worry of the whole ice sheet suddenly falling in the ocean,” said Richard Alley, a glaciologist at Penn State University who was not involved in the study. Alley said this still makes Jakobshavn less of a worry than Thwaites glacier in West Antarctica, which is far wider and less constrained.
The deep passageway below Jakobshavn also underscores that, like another enormous Greenland glacier (the far northern Petermann glacier), this outlet connects with a vast and submerged system of canyons that deeply undercut Greenland and travel to its center beneath the ice sheet.
“It’s a direct conduit to the heart of Greenland,” Rignot said. (For a NASA video showing the two vast canyons beneath Greenland’s ice, one connected to Jakobshavn and one to Petermann, see here.)
A few simple calculations help underscore why a further retreat — and accompanying thickening — of Jakobshavn glacier could be such a problem.
Greenland is the world’s largest contributor to sea level rise — according to NASA, it is contributing 281 billion tons of ice to the ocean every year, nearly enough to cause 1 millimeter of sea level rise annually. (It takes 360 billion tons to do that.) That’s from all the various Greenland glaciers and from general melting of the ice sheet that then runs off into the ocean — but Jakobshavn is the single biggest ice loser.
Its estimated contribution of a millimeter of sea level rise over 11 years equates to a loss of about 32 billion tons of ice per year — but the new study says the glacier’s loss could increase by 50 percent as it reaches its deepest point. So, bump that up to around 50 billion tons per year.
That would put Jakobshavn on par with the biggest ice losers in Antarctica — the enormous Pine Island and Thwaites glaciers — and would help push Greenland to the point where it’s contributing over a millimeter per year to rising seas all on its own, further increasing the rate of global sea level rise (which is also driven by Antarctica, smaller glaciers worldwide, and oceans that expand as they warm).
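A back-of-the-envelope sketch, using only the figures quoted above and assuming the same roughly 360-billion-ton-per-millimeter conversion, shows how these ice-loss rates translate into sea level rise; the variable names are illustrative, not from the study itself:

```python
# Rough sea-level arithmetic, assuming ~360 billion tons (Gt) of melted ice
# raises global sea level by about 1 millimeter, as noted above.
GT_PER_MM = 360.0

def sea_level_mm_per_year(ice_loss_gt_per_year):
    """Annual sea-level contribution (mm/yr) for a given rate of ice loss (Gt/yr)."""
    return ice_loss_gt_per_year / GT_PER_MM

greenland_total = 281         # Gt/yr, the NASA figure for the whole ice sheet
jakobshavn_today = 32         # Gt/yr, roughly 1 mm of sea level over 11 years
jakobshavn_deeper = 32 * 1.5  # Gt/yr, if losses grow 50 percent as the glacier retreats

for label, rate in [("Greenland ice sheet", greenland_total),
                    ("Jakobshavn today", jakobshavn_today),
                    ("Jakobshavn over deeper bed", jakobshavn_deeper)]:
    print(f"{label}: {rate:.0f} Gt/yr = about {sea_level_mm_per_year(rate):.2f} mm/yr of sea level rise")
```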
And that, in turn, will make it much harder to adapt to rising seas. |
A new sound detector that mimics how a parasitic fly uses its extraordinary hearing to pinpoint its victims, could one day help soldiers track down snipers and lead to better phones and hearing aids that stifle background noise.
The new invention was inspired by the ear of a small yellow nocturnal fly, Ormia ochracea. The fly can identify the origin of sounds with uncanny accuracy—and uses that ability with deadly force. When a female of the species hears the mating song of a male cricket, she flies onto the back of the singer and deposits her offspring, which invade, consume and kill their host.
The fact that this fly can pinpoint sound so well is a bit of a surprise. We humans know where sounds are coming from because of the distance separating our ears—if a sound comes from one side, the ear on that side will hear it slightly before the ear on the other side. But because the yellow nocturnal fly is so tiny, sound waves hit both sides of its head at essentially the same time. Its ears are separated by about the width of a nickel, which means it takes sound about four millionths of a second to go from one ear to the other.
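As a rough illustration of that arrival-time principle (a generic sketch with assumed values, not the fly's mechanism or the new device), two spaced microphones can estimate a sound's bearing from the delay between them:

```python
import math

# Estimate the bearing of a sound from the arrival-time difference at two sensors.
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature (assumed)

def bearing_degrees(delay_seconds, spacing_meters):
    """Angle off the array's broadside implied by an inter-sensor delay."""
    # delay = spacing * sin(angle) / speed_of_sound
    ratio = SPEED_OF_SOUND * delay_seconds / spacing_meters
    ratio = max(-1.0, min(1.0, ratio))  # clamp against measurement noise
    return math.degrees(math.asin(ratio))

# A sound arriving 29 microseconds earlier at one sensor of a hypothetical 2 cm pair:
print(f"{bearing_degrees(29e-6, 0.02):.0f} degrees off broadside")  # about 30 degrees
```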
Nearly 20 years ago, engineer Ronald Miles at Binghamton University discovered the secret: The fly's ear possesses a structure resembling a teeter-totter that's about 1.5 millimeters long. When a sound reaches the insect from the side, one end of the teeter-totter starts tipping before the other, which tells the fly which side sound is coming from, says Neal Hall, an engineer at the University of Texas at Austin and coauthor on the new study.
This is a photograph of the biologically-inspired microphone taken under a microscope, providing a top-side view. The tiny structure rotates and flaps about the pivots, producing an electric potential across the electrodes. (Photo by N. Hall/UT Austin)
The new sound detector Hall and his colleagues developed is essentially a silicon teeter-totter only 2 millimeters wide. Under both ends of the see-saw are springs made of piezoelectric materials, which turn mechanical force into electrical signals. Thanks to these springs, the researchers can measure how the seesaw beam flexes and rotates. "Doing so enabled us to fully emulate the mimic and replicate the fly's special ability," Hall says.
The research, funded by DARPA, could have military applications, Hall says. "One can imagine battlefield scenarios where being able to determine the location of an event based on sound alone is important—situations in which visual cues are denied. Finding a hidden sniper using sound emitted from the gunshot is an example."
Or, imagine a smartphone that can filter out ambient noise. "We envision a smartphone app that uses directional microphones to focus only on a specific speaker of interest while rejecting all other ambient noise," Hall says, "for example, plates dropping in the background, or screaming toddlers."
This tech could lead to a new generation of smarter hearing aids that focus only on conversations or sounds of interest to wearers, instead of amplifying everything as current hearing aids do. The discomfort from this amplified background noise is a major reason why only 2 percent of Americans wear hearing aids even though perhaps 10 percent of the population could benefit from wearing one, Hall says.
Removal of oxygen can be a very effective method of extinguishing fires where this method is possible. Only a slight decrease in the oxygen concentration in air decreases the fire intensity, and below 16% oxygen in the air there is no risk of fire. Gases used for this purpose include carbon dioxide, halon and nitrogen (halon is now disappearing from use, as it is considered a non-environmentally friendly gas).
The disadvantage of all of these gases has been that human beings could be suffocated if the gas is injected before everyone has been evacuated. CO2 is heavier than air and is often used in buildings and other areas where the gas can be contained and the displaced air can rise above the fire.
Nitrogen is lighter than air and is used for injection where the fire is at an upper surface and the nitrogen can be contained, as it can be in a transformer tank. Some manufacturers of transformer fire extinguishing systems have used nitrogen injection into the base of oil-filled transformers to extinguish a fire burning from the oil surface. In this application the nitrogen stirs and cools the oil in the transformer tank, displaces the air above the oil and suppresses the fire.
Reference: Guide for Transformer Fire Safety Practices (CIGRE Working Group A2.33) |
You may sometimes see small specks or clouds moving in your field of vision. These are called floaters. You can often see them when looking at a plain background, like a blank wall or blue sky. Floaters are actually tiny clumps of cells or material inside the vitreous, the clear, gel-like fluid that fills the inside of your eye.
While these objects look like they are in front of your eye, they are actually floating inside it. What you see are the shadows they cast on the retina, the layer of cells lining the back of the eye that senses light and allows you to see. Floaters can appear as different shapes, such as little dots, circles, lines, clouds or cobwebs.
When the vitreous gel pulls on the retina, you may see what look like flashing lights or lightning streaks. These are called flashes. You may have experienced this same sensation if you have ever been hit in the eye and seen "stars." The flashes of light can appear off and on for several weeks or months.
As we grow older, it is more common to experience floaters and flashes as the vitreous gel changes with age, gradually pulling away from the inside surface of the eye.
- Seeing dots, circles, lines or "cobwebs"
- Seeing flashes of light or seeing "stars"
Introduction to solar desalination
Solar desalination is the use of solar radiation to desalinate water. Direct solar desalination uses the energy from the sun directly in the desalination – for example to heat the water. Decoupled solar desalination first generates electricity from the sun, then uses that electricity to power a desalination system.
Solar stills (distillers) are one of the earliest forms of desalination, using solar radiation to evaporate water which then condenses out on a cold surface.
Seawater greenhouses use mats wetted with saline water to humidify the greenhouse using either natural or forced ventilation.
Concentrated Solar Stills
Concentrated solar stills (CSS) use concentrated solar thermal collectors as the heat source for a thermal desalination process. In commercial CSS systems multiple effect distillation (MED) is used for the desalination. In the broadest terms, CSS systems desalinate a much greater volume of water per unit solar collection area than conventional solar stills.
Plasmonics is the use of solar radiation to excite metals. In the context of desalination, plasmonics results in very localised boiling on the lighted metal’s surface.
Decoupled solar desalination
Decoupled solar desalination (DSD) uses a solar power plant – typically photovoltaics – to generate electricity. This electricity is then used to power any form of desalination plant. Local DSD colocates the power generation and desalination steps. Remote DSD is more an accounting term in which all power used in the desalination process is balanced by solar power plants installed elsewhere for the purpose.
A research paper is an essay or academic paper in which the content is supported by data or other sources. In other words, instead of just sitting down and writing something from the top of your head, you research about what other people have said about the subject and you then formulate your own ideas and theories on the basis of existing data and knowledge.
In some ways, knowing how to write a research paper is similar to learning how to write any sort of paper. You will need to choose a topic, do your research, organize your ideas in an outline, and then write and cite your paper.
Your teacher is sure to be impressed with your research and writing skills if you follow the steps and do your research correctly.
Coming up with a topic is the first step, because until you know what you want to write about, it can be difficult to do research.
The topic you choose should be narrow enough that you can research or learn about it adequately, but not so narrow that you can't find anything to say.
You can pick a topic by brainstorming ideas, and then doing some preliminary research to make sure that information exists on the topic and that the scope of the topic is appropriate for your paper.
After you have come up with a topic or gotten a general idea of what you are going to write about, it is time to begin doing your research.
The next step to knowing how to write a research paper is to understand how to do research. Research occurs when you look up information about your topic. For example, if you are writing a paper on the Revolutionary War, you may want to read American history books that deal with the subject. You can then use information from those books to narrow your topic, say to a particular battle in the revolutionary war, and then find books (or sections of books) on that particular battle.
You can do research in a number of different ways. Using books is the traditional form, but even book research has become easier now that library card catalogs are all online. You can simply visit your local library and use the computer to see if they have any books on your topic. Find those books on the shelf and begin reading what the authors have to say about your subject.
Research has become even easier as a result of the Internet. You can type your topic into a search engine and likely get hundreds or even thousands or millions of results. Just be careful and remember that not everything on the Internet is reliable or true. If you are doing a research paper, you may want to stick to sites with a .org, .edu or .gov ending, since those tend to be more reliable. Regardless of which website you use, make sure you check the Internet source to ensure that they are reliable and that the facts are true.
Your research paper will have to include references, resources, and research. Therefore, you will be using citations in your paper. These can include books, websites, and other reliable sources.
As you do research, write relevant notes and keep track of where you got the information. Write down all possible information you may need about your source to cite it correctly, whether you are using APA or MLA format. You'll want to cite this source info in your paper.
The actual format for the research citation will change depending on which format (e.g. APA or MLA) you use. In general, most formats will require you to list the author, the title, the publisher, and the date of publication for each source.
After you have done your research, continue writing your paper as you would any other.
Outline what you plan to say. In addition to helping you stay organized, creating an outline will also help to filter out any unnecessary information. Include notes to yourself in the outline about which research points you are going to use in each paragraph. Make sure each paragraph in the outline has a clear purpose and is supported by your research.
Now you are ready to write from your outline. Create your citations.
Follow the appropriate format as instructed by your professor for the citations and works cited page as well as all pages in the research paper to ensure that your research paper is formatted correctly and well received. |
Your social group can have a huge impact on how you view the world. But new research shows that the people you hang out or work with might also affect how well you can identify fact from fiction.
Many people have difficulty authenticating online information, and today’s personalized systems on social media are making it even harder to distinguish fact from fake news. A 2016 study, for example, found that 60 percent of college students were unable to correctly evaluate if a tweet was an accurate source of information or not.
Now, researchers have found that social dynamics, or group behaviors and interactions, have a significant impact on evaluating online sources, even when groups have equal access to information. The findings appear in the journal Heliyon.
The researchers wanted to explore how students use online sources and how they work together to identify misleading information from factual information.
In the study, researchers gave graduate student groups scavenger hunt tasks that required using online sources and personal knowledge to answer questions correctly. Even though each group had equal access to the internet, individual interactions with one another had the most impact on group performance.
“We imagined that working in groups would actually help the students find the correct information, but that was not the case,” says study coauthor Isa Jahnke, an associate professor in the College of Education at the University of Missouri. “In fact, group dynamics outweighed information access, and discussion and decision-making was more important than the facts.”
For example, one group that performed poorly missed a question because two team members ignored the logic and personal knowledge of the third team member, who had the correct answer. The group that performed best chose to research the questions individually, before coming together to share their answers. The researchers say this strategy might have worked best because there was no discussion that might have influenced other members’ thoughts on the correct answer.
Coauthor Michele Kroll, a doctoral candidate in information science and learning technologies, says teachers who want to support cooperation among students need to consider other factors beyond giving them equal access to factual information and sources, including educating them about the impact group dynamics can have on identifying correct information online.
“Students might need further instruction and guidelines on how to evaluate online information, especially on social media,” Kroll says.
“Teachers might also consider creating guidelines for how groups will work together in these situations so that every student has the opportunity to be heard,” she says.
Source: University of Missouri |
Puberty changes the way we recognise faces
Apart from the many mental and physical changes that teenagers go through as they enter puberty, new research has found that adolescents also begin to view faces differently.
The face, known as the index of the mind, is as unique as a fingerprint and can reveal a great deal of information about our health, personality, age, and feelings. The transition into adulthood literally changes the way people see faces -- shifting from a bias toward adult female faces in childhood to a preference for peer faces that match their own developmental stage in puberty.
This process is part of the social metamorphosis that prepares them to take on adult social roles, the study said. “For the first time, the study has shown how puberty, not age, shapes humans’ ability to recognise faces as they grow into adults,” said Suzy Scherf, Assistant Professor at the Pennsylvania State University.
The findings showed that puberty shapes the subtle emergence of social behaviours that are important for adolescents’ transition to adulthood. “This likely happens due to hormones influencing the brain and the nervous system reorganisation that occurs during this time,” Scherf added.
For the study, the researchers recruited 116 adolescents and young adults -- all in the same age group -- and separated them into four pubertal groups depending on their stage of puberty.
Any differences in the way they responded to faces were related to their pubertal status, not their age. The participants were presented with 120 gray-scale photographs of male and female faces.
There were images of pre-pubescent children, young adolescents in early puberty, young adolescents in later puberty, and sexually mature young adults. Using a computerised game, the researchers then measured their face-recognition ability.
After studying 10 target faces with neutral expressions, participants were shown another set of 20 faces with happy expressions and had to identify whether they had seen each face previously or if they were new.
The results showed that the pre-pubescent children had a bias to remember adult faces, which they call the caregiver bias. In contrast, adolescents had a bias to remember other adolescent faces, exhibiting a peer bias.
Further, among adolescents who were the same age, those who were less mature in pubertal development had better recognition memory for other similarly less mature adolescents, while those who were more mature in pubertal development had better recognition memory for peers who were similar in their level of development.
“This shows that adolescents are very clued into each other’s pubertal status. They can literally see it in each other’s faces, perhaps implicitly, and this influences how they keep track of each other,” Scherf stated.
The study, published in the journal Psychological Science, will help scientists uncover how puberty impacts the developing human brain and guide them in framing new mental health treatment.
The concept of a growth mindset was developed by psychologist Carol Dweck and popularized in her book, Mindset: The New Psychology of Success. In recent years, many schools and educators have started using Dweck’s theories to inform how they teach students.
“In a growth mindset, people believe that their most basic abilities can be developed through dedication and hard work—brains and talent are just the starting point. This view creates a love of learning and a resilience that is essential for great accomplishment,” writes Dweck.
Students who embrace growth mindsets—the belief that they can learn more or become smarter if they work hard and persevere—may learn more, learn it more quickly, and view challenges and failures as opportunities to improve their learning and skills.
STEM Challenges are all about this idea of working hard and persevering. In the course of doing a STEM Challenge, students see their failures as opportunities to improve their learning and skill at that particular challenge.
As I approach the topic of Growth Mindset with my students, I use a pretty simple lab. Students must build a flashlight, but there's a catch: they must build it one piece at a time, and no one student can do two parts back to back.
Each group needs a disassembled flashlight (I get the mini flashlights from the Dollar Tree) with batteries. You will need to take it apart into as many pieces as possible. Students need to work together to build the flashlight in the fastest time. While it seems easy, this lab takes students a while to figure out, and even longer to get a pretty fast time. The challenge of working together to build it, with each student doing different parts, is a tough one.
You can get the flashlight lab here. Ready to open the world of growth mindset to your students? |
Frisch-Peierls Memorandum, March 1940
On the Construction of a "Super-bomb" based on a Nuclear Chain Reaction in Uranium
The possible construction of "super-bombs" based on a nuclear chain reaction in uranium has been discussed a great deal and arguments have been brought forward which seemed to exclude this possibility. We wish here to point out and discuss a possibility which seems to have been overlooked in these earlier discussions.
Uranium consists essentially of two isotopes, 238U (99.3%) and 235U (0.7%). If a uranium nucleus is hit by a neutron, three processes are possible: (1) scattering, whereby the neutron changes direction and, if its energy is above 0.1 MeV, loses energy; (2) capture, when the neutron is taken up by the nucleus; and (3) fission, i.e. the nucleus breaks up into two nuclei of comparable size, with the liberation of an energy of about 200 MeV.
The possibility of chain reaction is given by the fact that neutrons are emitted in the fission and that the number of these neutrons per fission is greater than 1. The most probable value for this figure seems to be 2.3, from two independent determinations.
However, it has been shown that even in a large block of ordinary uranium no chain reaction would take place since too many neutrons would be slowed down by inelastic scattering into the energy region where they are strongly absorbed by 238U.
Several people have tried to make chain reactions possible by mixing the uranium with water, which reduces the energy of the neutrons still further and thereby increases their efficiency again. It seems fairly certain however that even then it is impossible to sustain a chain reaction.
In any case, no arrangement containing hydrogen and based on the action of slow neutrons could act as an effective super-bomb, because the reaction would be too slow. The time required to slow down a neutron is about 10^-5 sec and the average time lost before a neutron hits a uranium nucleus is even longer, about 10^-4 sec. In the reaction, the number of neutrons would increase exponentially, like e^(t/τ), where τ would be at least 10^-4 sec. When the temperature reaches several thousand degrees the container of the bomb will break and within 10^-4 sec the uranium would have expanded sufficiently to let the neutrons escape and so to stop the reaction. The energy liberated would, therefore, be only a few times the energy required to break the container, i.e. of the same order of magnitude as with ordinary high explosives.
Bohr has put forward strong arguments for the suggestion that the fission observed with slow neutrons is to be ascribed to the rare isotope 235U, and that this isotope has, on the whole, a much greater fission probability than the common isotope 238U. Effective methods for the separation of isotopes have been developed recently, of which the method of thermal diffusion is simple enough to permit separation on a fairly large scale.
This permits, in principle, the use of nearly pure 235U in such a bomb, a possibility which apparently has not so far been seriously considered. We have discussed this possibility and come to the conclusion that a moderate amount of 235U would indeed constitute an extremely efficient explosive.
The behavior of 235U under bombardment with fast neutrons is not known experimentally, but from rather simple theoretical arguments it can be concluded that almost every collision produces fission and that neutrons of any energy are effective. Therefore it is not necessary to add hydrogen, and the reaction, depending on the action of fast neutrons, develops with very great rapidity so that a considerable part of the total energy is liberated before the reaction gets stopped on account of the expansion of the material.
The critical radius r0 - i.e. the radius of a sphere in which the surplus of neutrons created by the fission is just equal to the loss of neutrons by escape through the surface - is, for a material with a given composition, in a fixed ratio to the mean free path of neutrons, and this in turn is inversely proportional to the density. It therefore pays to bring the material into the densest possible form, i.e. the metallic state, probably sintered or hammered. If we assume, for 235U, no appreciable scattering, and 2.3 neutrons emitted per fission, then the critical radius is found to be 0.8 times the mean free path. In the metallic state (density 15), and assuming a fission cross-section of 10^-23 cm^2, the mean free path would be 2.6 cm and r0 would be 2.1 cm, corresponding to a mass of 600 grams. A sphere of metallic 235U of a radius greater than r0 would be explosive, and one might think of about 1 kg as a suitable size for a bomb.
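A minimal sketch, using only the values assumed in the paragraph above (density 15 g/cm^3, fission cross-section 10^-23 cm^2, critical radius 0.8 times the mean free path), reproduces these figures approximately:

```python
import math

# Back-of-the-envelope check using the values assumed above for metallic 235U.
AVOGADRO = 6.022e23       # atoms per mole
MOLAR_MASS = 235.0        # g/mol for 235U
DENSITY = 15.0            # g/cm^3, metallic state as assumed
FISSION_XSECTION = 1e-23  # cm^2, as assumed

nuclei_per_cm3 = DENSITY * AVOGADRO / MOLAR_MASS
mean_free_path = 1.0 / (nuclei_per_cm3 * FISSION_XSECTION)  # cm between fissions
critical_radius = 0.8 * mean_free_path                      # factor quoted for 2.3 neutrons/fission
critical_mass = DENSITY * (4.0 / 3.0) * math.pi * critical_radius**3

print(f"mean free path  = {mean_free_path:.1f} cm")   # about 2.6 cm
print(f"critical radius = {critical_radius:.1f} cm")  # about 2.1 cm
print(f"critical mass   = {critical_mass:.0f} g")     # of the order of the 600 g quoted
```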
The speed of the reaction is easy to estimate. The neutrons emitted in the fission have velocities of about 10^9 cm/sec and they have to travel 2.6 cm before hitting a uranium nucleus. For a sphere well above the critical size the loss through neutron escape would be small, so we may assume that each neutron, after a life of 2.6 x 10^-9 sec, produces a fission, giving birth to two neutrons. In the expression e^(t/τ) for the increase of neutron density with time, τ would be about 4 x 10^-9 sec, very much shorter than in the case of a chain reaction depending on slow neutrons.
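The e-folding time τ follows the same way, assuming each neutron generation roughly doubles the neutron number:

```python
import math

# Time constant tau in e^(t/tau), from a 2.6e-9 s generation time and a
# doubling of the neutron number each generation (a simplifying assumption).
generation_time = 2.6 / 1e9            # seconds to travel 2.6 cm at ~1e9 cm/s
tau = generation_time / math.log(2.0)  # N(t) = N0 * 2^(t/generation_time) = N0 * e^(t/tau)
print(f"tau = {tau:.1e} s")            # about 4e-9 s, as quoted
```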
If the reaction proceeds until most of the uranium is used up, temperatures of the order of 10^10 degrees and pressures of about 10^13 atmospheres are produced. It is difficult to predict accurately the behavior of matter under these extreme conditions, and the mathematical difficulties of the problem are considerable. By a rough calculation we get the following expression for the energy liberated before the mass expands so much that the reaction is interrupted:
E = 0.2 M (r^2/τ^2) √((r/r0) - 1)    (1)
(M, total mass of uranium; r, radius of sphere; r0, critical radius; τ, time required for neutron density to multiply by a factor e). For a sphere of radius 4.2 cm (r0 = 2.1 cm), M = 4700 grams, τ = 4 x 10^-9 sec, we find E = 4 x 10^20 ergs, which is about one-tenth of the total fission energy. For a radius of about 8 cm (M = 32 kg) the whole fission energy is liberated, according to formula (1). For small radii the efficiency falls off even faster than indicated by formula (1) because τ goes up as r approaches r0.
It is necessary that such a sphere should be made in two (or more) parts which are brought together first when the explosion is wanted. Once assembled, the bomb would explode within a second or less, since one neutron is sufficient to start the reaction and there are several neutrons passing through the bomb every second, from the cosmic radiation. ( Neutrons originating from the action of uranium alpha rays on light-element impurities would be negligible provided the uranium is reasonably pure.) A sphere with a radius of less than about 3 cm could be made up in two hemispheres, which are pulled together by springs and kept separated by a suitable structure which is removed at the desired moment. A larger sphere would have to be composed of more than two parts, if the parts, taken separately, are to be stable.
It is important that the assembling of the parts should be done as rapidly as possible, in order to minimize the chance of a reaction getting started at a moment when the critical conditions have only just been reached. If this happened, the reaction rate would be much slower and the energy liberation would be considerably reduced; it would, however, always be sufficient to destroy the bomb.
For the separation of the 235U, the method of thermal diffusion, developed by Clusius and others, seems to be the only one which can cope with the large amounts required. A gaseous uranium compound, for example uranium hexafluoride, is placed between two vertical surfaces which are kept at a different temperature. The light isotope tends to get more concentrated near the hot surface, where it is carried upwards by the convection current. Exchange with the current moving downwards along the cold surface produces a fractionating effect, and after some time a state of equilibrium is reached when the gas near the upper end contains markedly more of the light isotope than near the lower end.
For example, a system of two concentric tubes, of 2 mm separation and 3 cm diameter, 150 cm long, would produce a difference of about 40% in the concentration of the rare isotope between its ends without unduly upsetting the equilibrium.
In order to produce large amounts of highly concentrated 235U, a great number of these separating units will have to be used, being arranged in parallel as well as in series. For a daily production of 100 grams of 235U of 90% purity, we estimate that about 100,000 of these tubes would be required. This seems a large number, but it would undoubtedly be possible to design some kind of a system which would have the same effective area in a more compact and less expensive form.
In addition to the destructive effect of the explosion itself, the whole material of the bomb would be transformed into a highly radioactive state. The energy radiated by these active substances will amount to about 20% of the energy liberated in the explosion, and the radiations would be fatal to living beings even a long time after the explosion.
The fission of uranium results in the formation of a great number of active bodies with periods between, roughly speaking, a second and a year. The resulting radiation is found to decay in such a way that the intensity is about inversely proportional to the time. Even one day after the explosion the radiation will correspond to a power expenditure of the order 1,000 kW, or to the radiation of a hundred tons of radium.
Any estimates of the effects of this radiation on human beings must be rather uncertain because it is difficult to tell what will happen to the radioactive material after the explosion. Most of it will probably be blown into the air and carried away by the wind. This cloud of radioactive material will kill everybody within a strip estimated to be several miles long. If it rained the danger would be even worse because the active material would be carried down to the ground and stick to it, and persons entering the contaminated area would be subjected to dangerous radiations even after days. If 1% of the active material sticks to the debris in the vicinity of the explosion and if the debris is spread over an area of, say, a square mile, any person entering this area would be in serious danger, even several days after the explosion.
In these estimates, the lethal dose of penetrating radiation was assumed to be 1,000 Roentgen; consultation of a medical specialist on X-ray treatment and perhaps further biological research may enable one to fix the danger limit more accurately. The main source of uncertainty is our lack of knowledge as to the behavior of materials in such a super-explosion; an expert on high explosives may be able to clarify some of these problems.
Effective protection is hardly possible. Houses would offer protection only at the margins of the danger zone. Deep cellars or tunnels may be comparatively safe from the effects of radiation, provided air can be supplied from an uncontaminated area (some of the active substances would be noble gases which are not stopped by ordinary filters).
The irradiation is not felt until hours later when it may become too late. Therefore it would be very important to have an organization which determines the exact extent of the danger area, by means of ionization measurements, so that people can be warned from entering it.
O. R. Frisch |
CENTRO ESCOLAR SOLALTO
9th Pre-IB Biology
Teacher Javier Aguirre, B.A.
NAME_____________________________________ Date: _________
Science Fair Guide
Phase 4 – Writing a Report
• Handout: Science Fair Guide – Resources for Students
• Paste and complete today’s handout in your notebook
• Read pages 55 and 56 of your Science Fair Guide and answer the following questions.
1. What does the written report represent? It represents your ideas and conclusions
about your project, so you will want to make sure that it is well thought out and clearly written.
2. Before you begin to write your report, what questions should you ask yourself?
• How did you first decide on your idea?
• What was your favorite aspect of the experiment?
• What was something new that you learned?
• What was something unexpected that happened?
• What were the ups and downs of the whole process?
• What did your data show?
• What would you do differently next time?
3. What is an outline? It is a framework of what is going to go inside the report.
4. What should you include at the beginning of your report? Why? Any background
information that a reader would need to understand your project, so that they can follow the rest of the report.
5. What information should you include with your charts and graphs? Always title and label
your figures, and, if possible, write a sentence telling what they illustrate. |
Enamel Hypoplasia/Hypocalcification in Cats
When tooth enamel, the outer coating of the tooth, is allowed to develop normally, it has a smooth and white appearance. Abnormal environmental or physical conditions can interfere with the development of tooth enamel, causing it to take on a discolored, pitted or otherwise unusual appearance.
Bodily influences, like a fever over an extended period of time, may cause pitting and discolored enamel surfaces. Local influences, like injury (even from baby tooth extraction) over a short period of time, can cause specific patterns or bands to appear on the developing teeth. These types of trauma can result in less-than-normal deposits of enamel, medically termed hypocalcification. The lack of sufficient enamel may cause the teeth to be more sensitive, with exposed dentin (which is normally hidden underneath the enamel), and occasionally fractures of severely compromised teeth. The teeth usually remain fully functional.
Symptoms and Types
- Irregular, pitted enamel tooth surface with discoloration of diseased enamel and potential exposure of underlying dentin (light brown appearance)
- Early or rapid accumulation of plaque and calculus on roughened tooth surface
- Possible gingivitis and/or accelerated periodontal/gum disease
Causes
- Injury during enamel formation on the teeth
- Fever, trauma (e.g., accidents, excessive force used during deciduous/baby tooth extraction)
Diagnosis
Discolored teeth may be found by your veterinarian during a routine physical exam, which normally includes a complete oral exam. Intraoral radiographs (X-rays) can then be taken by your veterinarian to determine if the roots of the teeth are still alive.
Glossary
- Hypocalcemia: a low level of calcium in the blood
- Gingivitis: a medical condition in which the gums become inflamed
- Enamel: the white substance over the crown of teeth
- Periodontium: the tissue that holds the tooth in place in the mouth
Last week, your body released an egg (ova) when you ovulated, approximately two weeks after the start of your most recent period. In a Fallopian tube, the egg was fertilised by a sperm.
This week, that fertilised egg is starting its journey through your body and is growing and changing all the time. The egg now slowly travels down the Fallopian tube towards the uterus, taking up to a week to do so. As it goes, this one cell splits into two, and then subdivides again and again. By the time the egg reaches the womb, it has become a cluster of over 100 cells known as an embryo. If more than one egg happens to have been released and both (or more!) get fertilised, this leads to a multiple pregnancy. Multiple embryos also occur if one fertilised egg splits, thus creating identical siblings.
Once the embryo reaches the womb, it will plant itself into the uterine lining within a few days' time. The lining, known as the endometrium, is normally shed as blood during menstruation if an embryo does not implant. When implantation of an embryo into the uterine lining does occur, some women experience a very light spotting of blood, usually the only external sign that something amazing is happening in the womb. Don't worry if this implantation spotting happens to you; it is totally normal and not a sign of anything untoward. Conversely, you may not notice any spotting at all in early pregnancy, which is perfectly normal as well.
Some women also experience a little pain or cramping during the implantation process, but again, not everyone does and it’s certainly nothing to worry about if you don’t. However, this pain should never be severe—a great amount of pain and discomfort can be a sign of dangerous complications such as ectopic pregnancy (where the egg implants outside of the womb). If you experience severe pain, chills, fever, or heavy bleeding in early pregnancy, seek medical advice immediately.
After implantation is complete, your body will send out hormone signals to prevent the lining of the womb falling away to give you a menstrual period, as would normally happen each month. The site of implantation will now become the place at which your baby’s placenta attaches to the wall of your uterus.
By the time it implants, the embryo is known as a blastocyst and is around 0.01 centimetres in diameter, the size of a tiny speck of dust. The blastocyst is invisible to the naked eye but growing rapidly: still very small, yet already around 50% bigger than it was only a week ago!
The majority of women are still likely to be unaware that they are pregnant at this point, although if you are charting (measuring the very fine fluctuations in your temperature that occur around ovulation and when you get your period), you may have an idea that something different is going on this month. Over the next week or so, you may start to see some more noticeable symptoms of pregnancy. |
Rabies Control Program
Disease & Symptoms
Rabies is a communicable disease that is caused by a neurotropic virus that is usually transmitted by the bite of an infected warm-blooded animal. The disease is often fatal and is characterized by:
- Delirium with death due to paralysis
- Fear of water
Virus Introduction & Process
The rabies virus is introduced by the bite or scratch of an infected animal. Onset of rabies in humans is usually 3-8 weeks, sometimes as short as 9 days, depending on the location of the wound and its distance from the brain. In dogs and cats, the onset of symptoms will usually occur within 3 to 7 days. This is why dogs and cats are quarantined for a 10-day period. A staff member of this department will check on the animal after the 10 days and will release the animal if it is still alive. The person bitten is then notified that the animal has survived the quarantine period and is therefore free from rabies.
Local Rabies Issues
Cape May County experienced a rabies epizootic outbreak in 1995. There were no cases of rabies in 1998, 1999 and 2000. However, in 2001 there were 3 cases of rabies, all in Dennis Township. Thereafter, there have been no cases of rabies.
This department assists the municipalities in supplying vaccine, syringes, registration forms and brochures for the rabies immunization clinics for dogs and cats. This department also annually inspects kennels, shelters and pet shops. Please check with your local municipality for the date of the free rabies clinic. It is important that all dogs and cats be vaccinated against rabies.
Physicians are required to notify the Department of Health of animal bites as soon as possible. Download and complete the Physician Reporting Form (PDF) and submit it to the Department of Health. Completed forms can be faxed to 609-465-6564.
For more information on rabies, visit the National Rabies Management Program. |