“Learn the 10 Amendments to the Constitution by Friday. There will be a quiz worth 20 points,” the teacher says after she teaches her students a mnemonic device to memorize the amendments.
While it is certainly imperative that students know the Bill of Rights, what good is it, really, if they can’t think critically about why they were added to the Constitution in the first place? With state testing placing extra pressure on teachers to spend much of their time “teaching to the test,” teaching critical thinking skills often falls by the wayside in classrooms. But how can we call ourselves good teachers if we haven’t taught our students how to think so that they can apply these strategies to the myriad situations and circumstances they will encounter in their futures?
Teaching critical thinking skills sounds nice, but also seems amorphous. How exactly does one go about it? Teachers who are determined not only to address the state standards, but also to teach their students the essential skill of thinking critically may find psychologist Edward De Bono’s 6 Thinking Hats strategy helpful.
The Thinking Hats strategy is a simple way for students to approach problems and ideas from different points of view and encourages them to think below the surface. Each hat requires the thinker or group of thinkers to contemplate an issue from a different perspective:
White Hat: Information/Data
With this hat on, students will consider the information they have about the topic and will answer the questions:
What information do we have?
What do we know?
If students were considering the Bill of Rights in a group, for instance, they would discuss what they know about the Bill of Rights, which might include the content of the amendments, a bit about the reasons they were added, and who insisted upon them.
Red Hat: Feelings/Gut Reaction
When students put on the red hat, they will consider their feelings with regard to the topic/problem and will answer the questions:
What do we feel?
What is my instinct/gut reaction telling me about this?
When discussing the Bill of Rights, students might consider how they feel about the amendments. What would their lives be like without them? What emotions would they feel if they were taken away?
Black Hat: Negatives
With the black hat on, students consider the negatives or drawbacks of a particular idea or concept.
What are the drawbacks?
What are potential bad results of this decision?
For instance, when learning and discussing the first 10 Amendments to the Constitution, students might discuss the drawbacks of any of the Amendments. Students might discuss the tension between liberty and safety. Sometimes more freedom means more risk, but to our founding fathers, liberty was of more value than protection that required the sacrifice of freedom. Do your students agree?
Yellow Hat: Positives
The yellow hat requires considering all of the positive aspects of the idea in consideration.
What are the benefits of this idea?
What good would come from this decision?
With regard to the Bill of Rights, students might discuss how citizens of the past and present benefit from these enumerated rights.
Green Hat: Creativity
While wearing the green hat, students think creatively about new ideas related to the idea in question. They ask,
What ideas have we got?
What are some possible solutions to this problem?
With respect to the Bill of Rights, students might consider whether there ought to be any changes to the Bill of Rights. Is a new Amendment to the Constitution necessary to secure present liberties? This might lead them to a discussion about how new Amendments to the Constitution are passed.
Blue Hat: Managing our Thinking
With the blue hat on, students ensure that they are thinking about and organizing their thinking. There should be an end goal in mind during the Thinking Hats discussion. Students should be trying to reach a consensus or ensuring that they are covering all important parts of discussion. When wearing the blue hat students ask,
What are our aims?
If students were to put on the blue hat in their discussion of the Bill of Rights, they would evaluate their own thinking and ensure that they have thought deeply and critically with each of the hats on. Did everyone have a chance to speak? Are all thoughts recorded? Perhaps the goal of the discussion was to help students think about the topic before writing a paper on it. Was the discussion adequate to provide a jumping-off point to begin writing?
Other Ideas for Using the Six Thinking Hats
There are endless teaching scenarios where it would be appropriate to use the 6 Thinking Hats Strategy to encourage students to think critically. Here are just a few ideas:
• During Literature Circles: Have students discuss an aspect of the novel/picture book they are reading using the Six Thinking Hats. For instance, if a character in the novel is trying to make a decision, the students could use the strategy to decide what they think the character should do.
• During discussions of history: Use the Hats when students are discussing historical events. Example: should the United States have used the atomic bombs on Hiroshima and Nagasaki at the end of WWII?
• To Aid in Class Decision-Making: Include the class in making a decision that impacts them. Manage the discussion using the Thinking Hats strategy. For instance, should tests be given on Fridays, or will students likely perform better on another day of the week?
• To Help Students Prepare for Writing a Paper: Use the 6 Hats Strategy as a method for brainstorming for a paper. Example: use to help prepare for the writing of a persuasive paper so that all points of view are considered and counter-arguments can be rebutted.
Use this free printable for students to record their thoughts as they use the 6 Thinking Hats Strategy to engage in a discussion. |
The Sun, our closest star, is also a complex and active part of the solar system. On Sunday July 8, 2018 we witnessed sizable activity in the form of a large solar prominence extending from the surface captured in the picture.
In our region of space, the Sun fills the surroundings with highly energetic particles that interact with everything from planets’ magnetic fields, to your skin, to the beautiful aurorae at the poles. A solar prominence is one mechanism the Sun has for releasing these high-energy particles. Others include solar flares, CMEs (Coronal Mass Ejections), and more.
A solar prominence can range in size, from quite small to greater than the diameter of Jupiter. In addition, a prominence can disconnect from its origin point and land on the Sun’s surface hundreds of miles away.
Heliophysics is the study of the Sun; NASA covers the field on its website. This research is complemented by space weather professionals, who study the Sun in order to protect our technology, which is sensitive to the Sun’s influence. In 1989, the Quebec blackout was caused by a CME: the high-energy particles from the Sun induced currents in the power lines, which overloaded transformers and destroyed them. Knowing when these events occur gives us a few hours to prepare. Governments and companies can shut down their satellites and prepare power grids for the surge in energy, saving billions.
The Sun’s Beauty
In light of this power, the Sun is so dynamic that it can provide hours, days, and weeks of enjoyable viewing. Equally important, always view the Sun safely by using a telescope designed specifically for solar observation. Enjoy and clear skies! |
About this Worksheet:
There is more to reindeer than Rudolph and his nose! Your student will read about this iconic Christmas animal in this worksheet. He’ll also determine the meaning of words in the passage through a short exercise. It’s good practice for 5th grade Common Core Standards for Craft and Structure. Other grades may also use it as needed. |
Seeing Noah’s Flood in geological maps
Geologists have long recognized that a knowledge of the past history of the earth is fundamental to their discipline. Geological pioneer Nicolaus Steno accepted the Bible’s history as reliable and developed his ideas accordingly.1 Modern secular geology arose when workers such as Hutton and Lyell dismissed biblical history and assumed a different history using the philosophical principle of uniformitarianism—the past has always been similar to the present.2 Traditionally, the science has been split into two parts—physical geology and historical geology.
In recent decades, Christian geologists have again taken the Bible as an eyewitness record of past events and built models by considering how these events would have affected geology.3–5 They appreciate that the Bible describes two global events that greatly impacted the geology of the earth, each of which invalidates the secular assumption of uniformitarianism and an earth billions of years old. Geological models developed from biblical history provide a practical tool to help understand the geology of the earth, and to explore and classify it.
Once a theoretical model has been developed it is necessary to examine the geological evidence and classify that evidence within the model. This will help test and evaluate the usefulness of the model and relate the geology of specific regions on earth to events recorded in the Bible. Fortunately, vast areas of the earth have already been explored in detail and their geological features documented in the form of geological maps. These are readily available for most countries. Here I will briefly describe how geological maps can be used to interpret the geology of an area from a biblical perspective.
Australia 1:250,000 geological map series
In Australia a comprehensive series of geological maps was prepared in the 1960s and 70s as part of a government program, and this has been published as the 1:250,000 scale series (figure 1). These maps can now be downloaded free from the website of Geoscience Australia.6
Figure 2 shows the sheet for Goondiwindi (300 km west of Brisbane), Queensland, Australia.7 The information on this sheet is typical of what is provided on each map, with the map itself covering an area about 150 km by 100 km. As usual, the sheet has an interpreted geological cross–section as well as a wealth of other geological material, including the locations of quarries, mines and fossil finds, as well as gravity anomalies.
These maps provide an excellent overview of any area of interest. It is easy to visually scan the whole area of the map and study the cross–section to understand the big picture of what is present geologically. Furthermore, it is a simple matter to refer to adjoining maps and see how the geology extends across the continent. This is exactly what is needed to understand the connection with Noah’s Flood because the Flood was a global event and we can only understand its impact by seeing the big picture. We need to keep in mind that there can be some degree of subjectivity in the way the geological units shown on the maps are defined but the map provides a good starting point.
Connecting with the Flood
The geological cross–section on Goondiwindi extends from west to east. The vertical scale on the section is exaggerated, as it often is, in order that the relatively thin geological layers can be easily seen. Some 75% of the width of the section from the map has been reproduced in figure 3, and the vertical scale has been increased even more than on the sheet, resulting in a vertical exaggeration of 8 times.
The need for this vertical exaggeration illustrates the first feature of the sedimentary layers shown on the map—they are relatively thin compared with their lateral extent. This characteristic is something that Ager noted and described as “the persistence of facies”.8 The layers exposed in the Goondiwindi area extend for nearly 2,000 km to the west into the Northern Territory and South Australia (figure 4). Such a vast lateral extent of strata is not a prediction from geological uniformitarianism but it is a prediction for sediments laid down during the global catastrophe of Noah’s Flood: “It is expected that the structures formed during the Inundatory stage would be of continental scale.”9
In figure 3 it can be seen that the sedimentary layers dip down to the west. (The dip looks steep on the section due to the vertical exaggeration, but in the field the dip is quite gentle.) Note the strata sit on a ‘basement’ (consisting of sedimentary and volcanic deposits) that is described as “intensely deformed”.10 In other words, there is a clear geological demarcation between the sedimentary strata and the geological unit underneath. The total thickness of all the sedimentary layers is more than 2 km at the western end of the section.
A detailed analysis of the geological characteristics of these strata using the classification criteria within the biblical geological model concluded that they were deposited during the first part of Noah’s Flood—the Zenithic phase.11
That is, these sediments were deposited as the waters of the Flood were rising and just before they reached their peak. This conclusion was based on the expectation that the movement of water during the global Flood would have spread the sediment over vast geographical areas (the scale criterion). Another factor was the presence of footprints and trackways. Certain strata in these layers contain footprints of dinosaurs, temporarily stranded as they tried to escape, which means the layers were deposited before the waters had reached their peak and all air-breathing animal life had perished (Genesis 7:20–24).
Concerning the deformed sediments and volcanics beneath the strata, one possibility is that they could have been deposited during Creation Week. However, these strata contain fossils, which is why they have been classified on the map as Carboniferous (labelled with a C). Fossils mean that these sediments were also deposited in the Flood, during an earlier phase. It also indicates that significant tectonic activity occurred during the first part of the Flood, deforming the sediments after they were deposited.
Another feature that helps synchronize the geological section to the biblical Flood is the location of the existing land surface. As the floodwaters drained into the ocean they initially flowed in vast sheets which, as the water level reduced, eventually developed into huge channels. This period was primarily an erosional event on the continents, and it is expected that the present landscape was mostly formed at this time: “During the Recessive stage the waters moved off the continents into the present ocean basins. This was a highly erosive process.”12 Holt called this period the “Erodozoic”.13
When we examine the horizontal land surface that runs across the section we can assume that it was mainly carved during the Recessive stage of the Flood. Of significance is the way the geological strata intersect this present land surface. On the cross–section it can be seen that, as the strata rise upwards to the east, they have been truncated at the land surface. This means that the thick strata extended much further to the east and that they have been eroded away. The enormous area of land surface affected and the quantity of material removed is a feature consistent with the global Flood.
A preliminary examination of the geological cross–section for Goondiwindi (figure 3) illustrates how geological maps can reveal the sequence of events occurring during Noah’s Flood. The readily available maps provide an excellent overview. Of course these preliminary ideas need to be checked and tested for consistency with other geological details, such as the information available in field guides, map commentaries, research papers and field reconnaissance. But this analysis shows that geological maps can be used to develop an authentic geological history of the area that fits within the biblical perspective. As these connections between geology and the Bible are made more widely available to the general community it will affect the way people view the world.
- Walker, T., Geological pioneer Nicolaus Steno was a biblical creationist, Journal of Creation 22(1):93–98, 2008. Return to text.
- Rather than uniformitarianism, most geologists today prefer the term actualism. However, the two ideas are similar in practice in that actualists accept the uniformitarian conclusion that the earth is billions of years old. They allow that huge catastrophes occurred periodically throughout this time (which is consistent with the evidence) but hold onto an old earth. Return to text.
- Whitcomb, Jr, J.C. and Morris, H.M., The Genesis Flood, The Presbyterian and Reformed Publishing Company, 1961. Return to text.
- Walker, T.B., A biblical geologic model; in: Walsh, R.E (Ed)., The 3rd International Conference on Creationism, Creation Science Fellowship, Pittsburgh, PA, pp. 581–592, 1994. Return to text.
- Froede, C.R. Jr., A Proposal for A Creationist Geological Timescale, Creation Research Society Quarterly 32(2):90–94, 1995. Return to text.
- Scanned 1:250,000 Geology Maps, Geoscience Australia, www.geoscience.gov.au/cgi-bin/mapserv?map=/nas/web/ops/prod/apps_www-c/mapserver/geoportal-geologicalmaps/index.map&mode=browse&layer=map250&queryon=true; Accessed 15 December 2010. Return to text.
- Goondiwindi, Australia, 1:250,000 Geological Series, Sheet SH-56-01, Bureau of Mineral Resources Geology and Geophysics, Department of National Development, Australia, 1st ed., 1972. Return to text.
- Ager, D.V., The Nature of the Stratigraphical Record, The Macmillan Press, London, 1973. Return to text.
- Walker, ref. 1, p. 591. Return to text.
- Called the Kutting Formation, consisting of sedimentary deposits and volcanics. Return to text.
- Walker, T.B., The Great Artesian Basin, Australia, Journal of Creation 10(3):379–390, 1996. Return to text.
- Walker, ref. 7, p. 386. Return to text.
- Oard, M.J., Is the K/T the post-Flood boundary? part 2: paleoclimates and fossils, Journal of Creation 24(3):91, 2010. Return to text. |
- Describe riboswitches
Riboswitches are specific components of an mRNA molecule that regulate gene expression. A riboswitch is a part of an mRNA molecule that can bind a small target molecule; an mRNA molecule may contain a riboswitch that directly regulates its own expression. The riboswitch regulates the RNA by responding to concentrations of its target molecule. Riboswitches are naturally occurring RNA molecules that allow for RNA-based regulation. Hence, their existence provides evidence for the RNA world hypothesis, which proposes that RNA molecules were the original biological molecules and that proteins developed later in evolution.
Example of a Riboswitch: A 3D image of the riboswitch responsible for binding to thiamine pyrophosphate (TPP)
Riboswitches are found in bacteria, plants, and certain types of fungi. Structurally, a riboswitch can be divided into two major parts: an aptamer and an expression platform. The aptamer is characterized by the ability of the riboswitch to directly bind its target molecule. The binding of the target molecule to the aptamer results in a conformational change of the expression platform, thus affecting gene expression. The expression platform, which controls gene expression, can either be turned off or activated depending on the specific function of the small molecule. The various mechanisms by which riboswitches function include, but are not limited to, the following:
- The ability to function as a ribozyme and cleave itself if a sufficient concentration of its metabolite is present
- The ability to fold the mRNA in such a way that the ribosomal binding site is inaccessible, preventing translation from occurring
- The ability to affect the splicing of the pre-mRNA molecule
The riboswitch, dependent on its specific function, can either inhibit or activate gene expression.
- The mechanism by which a riboswitch regulates RNA expression can be divided into two major components: the aptamer and the expression platform.
- The aptamer is characterized by the direct binding of the small molecule to its target.
- The expression platform is characterized by a conformational change, which occurs upon binding of the target molecule to the aptamer, resulting in either inhibition or activation of gene expression.
- aptamer: Any nucleic acid or protein that is used to bind to a specific target molecule. |
Detection bias occurs during avian surveys when not all of the birds present are detected by surveyors. This common bias results in raw survey counts that underestimate the number of birds present in survey areas.
The ability to detect a bird is affected by factors that can be categorized as the 'availability' of cues to detect a species, and 'perceptibility' of the cues by the observer.
Availability bias.-The birds present at a survey location may vary in the rate at which they give detectable cues, which affects their availability to be detected during a survey. For example, a female bird quietly sitting on a hidden nest is not available for detection because she cannot be heard or easily seen, while her brightly-colored mate singing from a conspicuous perch is available for detection. BAM deals with availability bias by estimating avian singing rates using a removal model (Availability Bias).
Perception bias.-Birds may be singing and therefore available for detection. However, not all of the available individuals will be detected by the surveyors. For example, a bird singing from a perch 10 m from a survey point is more easily perceived by a surveyor than a bird singing from a perch 100 m away. Similarly, a bird with a loud song will be perceived more often than a softly-singing bird at the same distance. BAM deals with perception bias by estimating the effective detection radius (EDR) using distance sampling (Perception Bias).
Overall detection rates.-The overall detection rate can be viewed as the product of availability and perception rates. None of the available estimators of avian abundance deals adequately with both of these components of bias inherent in the detection process. Therefore, BAM is using a combination of removal models and distance sampling to minimize bias in our estimates of bird densities. Density estimates are summarized by species under RESULTS.
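As a rough illustration of how the availability and perception corrections combine, here is a minimal sketch. The function name, the example numbers, and the per-hectare units are hypothetical; BAM's actual removal-model and distance-sampling estimators are more involved than this.

```python
import math

def density_estimate(count, p_availability, edr_m):
    """Correct a raw point count for availability (singing rate) and
    perceptibility (folded into the effective detection radius, EDR)."""
    effective_area_ha = math.pi * edr_m ** 2 / 10_000.0  # m^2 -> hectares
    return count / (p_availability * effective_area_ha)

# Hypothetical survey: 3 birds counted, 80% availability, 65 m EDR.
print(round(density_estimate(3, 0.80, 65.0), 2), "birds per hectare")
```
|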
Understanding wikis and the ethical, responsible posting of information online.
Students use online wiki to enter information about businesses or local attractions. Topics: Wikis, Business, Ethics, Copyright, Internet Responsibility.
The student will be able to enter information about businesses or local attractions and to understand the importance of ethics and personal responsibility when posting information on the Internet.
Computer, Internet connection.
Wikis are websites that allow anyone to collectively contribute content. Websites like wikimmunity.org allow users to enter information related to various topics by creating an article or adding to a previously created article. Pages can be enhanced by uploading pictures of the attraction or business. Accuracy of information, as well as not using copyrighted material without permission, are important points to remember.
Have students log onto web site, www.wikimmunity.org
Show students home page on wikimmunity.org and explain the purpose of the web site, whose goal is to build an informational database on every community in the world that includes information about local attractions and local businesses.
Also show students how to create a page on a topic by simply replacing the "Main_Page" text in the home page's web address with "Whatever_the_topic_is"
Write a few sentences describing the topic you have chosen to create a page about.
Check for understanding:
Ask students to view your newly created page on their computers and ask if there are any questions as to how you created the page.
Have students pick 5 attractions or businesses anywhere in the country or around the world that they are familiar with either from memory or by looking though the yellow pages directory (online or print). Have them create a page and write a short article about each business or attraction.
Ask for volunteers to review several of the pages created and have the class go to the same pages to review page setup. Explain the importance of being ethical and responsible when posting information on the Internet. |
Mendel, Johann Gregor (1822–1884)
Gregor Mendel was an Austrian botanist and Augustinian monk who laid the foundations for the science of genetics. Mendel's controlled experiments with breeding peas in the monastery garden led him to conclude that the units of heredity, what he called "factors" and we now know as genes, were not blends of parental traits but separate physical entities passed individually in specific proportions from one generation to the next. His study was published in Experiments with Plant Hybrids (1866). See also Mendelian inheritance.
Mendel found that self-pollinated dwarf pea plants breed true, but that under the same circumstances only about a third of tall pea plants did so, the remainder producing tall or dwarf pea plants in a ratio of about 3:1. Next he cross-bred tall and dwarf plants and found that this without exception resulted in a tall plant, but one that did not breed true. Thus, in this plant, both tall and dwarf characteristics were present. He had found a mechanism justifying Charles Darwin's theory of evolution by natural selection; but contemporary lack of interest and his later, unsuccessful experiments with hawkweeds discouraged him from carrying this further. It was not until 1900, when William Bateson, Carl Correns, Erich von Tschermak, and Hugo de Vries found his published results, that the importance of his work was realized.
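To make the 3:1 ratio concrete, here is a small simulation sketch (a modern illustration, not Mendel's own method): each parent plant carries one tall factor (T) and one dwarf factor (t) and passes one of them at random to each offspring.

```python
import random

def cross(parent1, parent2):
    """Each parent contributes one randomly chosen factor (allele)."""
    return random.choice(parent1) + random.choice(parent2)

random.seed(1)
offspring = [cross("Tt", "Tt") for _ in range(10_000)]
tall = sum("T" in genotype for genotype in offspring)  # T (tall) masks t (dwarf)
dwarf = len(offspring) - tall
print(f"tall : dwarf = {tall / dwarf:.2f} : 1")        # close to 3 : 1
```
|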
Hey, folks. Happy Leap Year!
…Yeah, okay, it’s not an actual holiday. But it does represent one of the most important and fascinating aspects about the Earth and our understanding of physics. It’s common knowledge that a year is 365 days; it’s what modern civilization uses to keep track of business performance, industry production, crop harvesting, population growth, radioactive decay, public transit, pizza deliveries, birthdays, Oscar acceptance speeches, and pretty much anything remotely affected by the passage of time. Needless to say, timekeeping is kind of important.
However, it’s inaccurate.
The 365-day-per-year model is based on the Gregorian Calendar, which was first instituted by Pope Gregory XIII in 1582. It was an update to the far older Julian Calendar, in an attempt to bring the actual day of Easter closer to the day the church thought it was supposed to be celebrated. Leap Day was shoehorned in at the end of February because, honestly, the Romans had a long history of treating the month like an afterthought. While altering the basis of time measurement must have been a huge headache for everyone involved – there are still several different calendars spanning various cultures, and Greece didn’t adopt the new calendar until 1923! – it also illustrated the big problem with timekeeping on Earth: it doesn’t divide into perfect increments. Earth’s orbit is 365.256 days. How do you add .256 of a day to a calendar? That’s why Leap Day happens every four years; the calendar skips over that .256 each year, then makes up for it after a whole number of years: .256 x 4 = 1.024, which is just enough to make an extra day, with leftovers small enough that no one will really care…
For now, anyway.
Here’s the thing: How we measure Leap Years – and thus the passage of time – is going to have to change in the far future. The algorithm that the Gregorian Calendar uses is fine for our current civilization; it’s as accurate and easily applicable as it needs to be. But on long-term timescales – we’re talking tens of thousands of years – it won’t be able to keep up with the astronomy and physics it’s based upon. Thanks to the effects of the Moon’s gravity, Earth’s rotation is actually slowing down, creating longer days. We’ve already introduced Leap Seconds to make up for the discrepancies and inconsistencies in the planet’s rotation. That’s all assuming that nothing crazy happens with Earth’s orbit, and that it remains stable enough until humanity dies off and the sun goes red giant and destroys the planet in a few billion years.
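For the curious, the leap-year rule itself fits in a couple of lines. This is just a sketch of the standard Gregorian algorithm; the century exceptions are what keep the average calendar year close to the true figure.

```python
def is_leap_year(year):
    """Gregorian rule: every 4th year, except century years not divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print([y for y in (1900, 2000, 2016, 2018, 2100) if is_leap_year(y)])
# [2000, 2016] -- 1900 and 2100 are century years that get skipped
```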
…Happy Leap Year! |
NASA Observes La Niña: This 'Little Girl' Makes a Big Impression
Cool, wet conditions in the Northwest, frigid weather on the Plains, and record dry conditions in the Southeast are all signs that La Niña is in full swing.
With winter gearing up, a moderate La Niña is hitting its peak. And we are just beginning to see the full effects of this oceanographic phenomenon, as La Niña episodes are typically strongest in January.
A La Niña event occurs when cooler than normal sea surface temperatures form along the equator in the Pacific Ocean, specifically in the eastern to central Pacific. The La Niña we are experiencing now has a significant presence in the eastern part of the ocean.
The cooler water temperatures associated with La Niña are caused by an increase in easterly sea surface winds. Under normal conditions these winds force cooler water from below up to the surface of the ocean. When the winds increase in speed, more cold water from below is forced up, cooling the ocean surface.
“With this La Niña, the sea-surface temperatures are about two degrees colder than normal in the eastern Pacific and that’s a pretty significant difference,” says David Adamec of NASA’s Goddard Space Flight Center, Greenbelt, Md. “I know it doesn’t sound like much, but remember this is water that probably covers an area the size of the United States. It’s like you put this big air conditioner out there -- and the atmosphere is going to feel it.”
Image right: The blue area throughout the center of this image shows the cool sea surface temperature along the equator in the Pacific Ocean during this La Niña episode. Credit: NASA/Goddard's Scientific Visualization Studio
While this “air conditioner” may be located in the equatorial Pacific Ocean, it has a great influence on the weather here in the United States and across the globe.
The cool water temperatures of a La Niña slow down cloud growth overhead, causing changes to the rainfall patterns from South America to Indonesia. These changes in rainfall affect the strength and location of the jet stream -- the strong winds that guide weather patterns over the United States. Since the jet stream regulates weather patterns, any changes to it will have a great impact on the United States.
Those changes can be felt throughout the country. The Northwest generally experiences cooler, wetter weather during a La Niña. On the Great Plains, residents normally see a colder than normal winter and southeastern states traditionally experience below average rainfall.
Image right: La Niña's effects can be felt throughout the United States through changes in the weather. This map shows the typical weather patterns associated with a La Niña during the winter months. Credit: NASA
The cooler waters of a La Niña event also increase the growth of living organisms in this part of the ocean. La Niñas amplify the normal conditions in the Pacific. These typically cool and abundant waters experience an increase in phytoplankton growth when the water temperature drops even further.
The increased circulation that brings up cold water from below also brings up with it nutrients from the deeper waters. These nutrients feed the organisms at the bottom of the food chain, starting a reaction that increases life in the ocean. NASA’s SeaWiFS satellite documented this increase in phytoplankton during the last La Niña period in 1998.
La Niña and El Niño episodes tend to occur every three to five years. La Niñas are often preceded by an El Niño; however, this cycle is not guaranteed.
The lengths of La Niña events vary as well. “We need to watch to see if this La Niña diminishes, because they can last for multiple years. And if it does last for multiple years, the southern tier of the United States, especially the Southeast, can expect dryer weather. That is not a good situation. If this La Niña behaves like a normal event, we should see signs that it is beginning to weaken by February,” says Adamec.
Image right: The cooler waters of a La Niña inhibit cloud growth overhead as seen in this image of the Pacific Ocean on Nov. 8, 2007. Credit: NASA/Goddard's Scientific Visualization Studio
So far this La Niña is behaving like a textbook case: following the predicted weather patterns, strengthening throughout the winter, and peaking toward January. According to NOAA’s Climate Prediction Center, this La Niña episode is expected to continue until the spring of 2008, with a gradual weakening starting in February.
NASA will continue to monitor this phenomenon with several of its key Earth observing satellites.
Instruments on NASA’s Terra and Aqua satellites measure sea surface temperature and observe changes to life in the ocean, changes of great importance to the fishing industry. The MODIS instruments on these satellites detected the temperature drop that signaled this La Niña period, and SeaWiFS continues to monitor ocean life.
Scientists also look at sea surface height to understand La Niña. The cooler ocean water associated with a La Niña contracts, lowering sea-surface heights. Over the past year, NASA’s Jason satellite has observed a lower than normal sea level along the equatorial Pacific where this current La Niña episode is taking place.
NASA also looks at changes in wind and rain patterns to study La Niña. The QuikSCAT satellite measures changes in oceanic surface winds, while the Tropical Rainfall Measuring Mission satellite observes changes in rainfall. These observations add to a fuller understanding of this phenomenon.
The current La Niña episode has many far-reaching effects. What some may see as just a small change in sea surface temperature has a much greater impact on our climate here in the U.S. and across the globe, as well as implications for the fishing industry and the global economy. With the help of NASA’s Earth observing fleet, scientists are becoming better equipped to observe and understand this phenomenon.
NASA's Goddard Space Flight Center |
An activity or purpose natural to or intended for a person or thing
In Mathematics - A relationship or expression involving one or more variables
In Programming (computer science) - A named operation that may involve one or more variables (parameters) and that may return a result.
In mathematics the purpose of a function is to perform an operation and to return a result. In programming the same is true with the caveat that the function may not always return a result.
This is actually ambiguous, as the function will generally affect something external, for example sending output to the screen or a database. Some programming languages require some sort of return from a function explicitly, even if that return is specified as empty, null or void. Some languages have another term, the sub procedure, for a function that does not return a value; however, many languages have allowed functions to return a value implicitly and have deprecated the sub procedure.
Most lower-level languages require that the input and the output have defined data types. In fact, all languages actually do; however, many higher-level languages deal in variant or object data types, and the interpreter makes a determination about the data based on its content.
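A short sketch in Python (a dynamically typed, higher-level language; the names here are purely illustrative) shows a function that returns a result, one that returns nothing explicit, and how a variant-style language decides behaviour from the content of the data:

```python
def double(x):
    """Performs an operation and returns a result, as in mathematics."""
    return x + x

def log_message(text):
    """No explicit return: Python implicitly returns None (the 'sub procedure' case)."""
    print("LOG:", text)

print(double(4))          # 8
print(double("ab"))       # abab -- behaviour determined by the content of the data
print(log_message("hi"))  # prints the log line, then None: a side effect but no value
```
|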
In these slides, we will review information presented in the text and go through some examples on detection systems. In the overall process of the DEPO methodology, detection is the first of three components of the physical protection system. As we covered in the text, detection has three critical steps. The first requires an alarm to be activated. Typically this will be from someone or something entering into a sensor’s detection volume. This alarm then must be communicated so a person can judge whether the alarm was caused by an intrusion or not. The communication can be audible, where security personnel go to the site of the alarm and assess the situation, or it can involve an alarm assessment system. Typically, facilities will have a central alarm station where a tripped alarm will activate a camera that covers the same area and allows security personnel to remotely determine the cause of the alarm. In any case, the security personnel must visually confirm the cause of an alarm and assess whether it is a threat or merely a false alarm.

Here we have a few examples of sensors. On the top left, there are buried coaxial sensors that can be hidden underground and detect anyone walking over them. To the right, there is an active infrared sensor, which is common in heist movies. The sensor sends an infrared beam to another sensor and, if anything breaks the beam, it triggers an alarm. On the bottom, from left to right, there is: a motion sensor, which detects motion in its field of vision; a camera, which can be used for assessment, but can also be used as a motion detector; and, on the top right of the door, there is a balanced magnetic switch, which alarms when the door is opened.

We will now go through an example that considers a simple situation, where we have an active infrared sensor and a buried coaxial cable on the perimeter of our facility. We will say that the infrared sensor has a 90% probability of detecting someone walking through it. However, due to the positions of the beams, it is susceptible to someone bypassing it by crawling under it. The second sensor we will use is the buried coaxial cable. It will alarm with a probability of 70% when someone passes over it on the ground. However, if the intruder is aware of the cable, they may be able to jump over it or use something to bypass it.

Now, let’s look at an example. The security system design requires a detection probability of 85% against the design basis threat. To simplify the problem, we will assume that the probability of assessment is 100% and that our design basis threat says that the adversary can only attempt to bypass the sensors by walking, jumping, or crawling. The measured sensing probabilities for the two sensors are listed in the table here. Because the infrared detector has a probability of 90%, it already meets our requirement of 85% for walking and jumping, but it is susceptible to a crawling adversary. We, therefore, must couple a coaxial cable with the infrared sensor. The coaxial cable has a 70% probability of detecting a crawling adversary. However, combining the coax cable with the infrared sensor only gives a detection probability of 71.5% against a crawling adversary, well below our requirement of 85%. As a result, we need to add another sensor. Because our weakness is against a crawling adversary, we should add an additional coaxial cable. With a second coaxial cable, we now have a detection probability of over 91%. The calculation for this is shown on the slide.
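A minimal sketch of that slide's calculation follows. The 5% infrared probability against a crawling adversary is inferred from the stated 71.5% combined result, since the slide's table is not reproduced here.

```python
def combined_detection(probabilities):
    """P(detect) for complementary sensors = 1 - product of the miss probabilities."""
    miss = 1.0
    for p in probabilities:
        miss *= (1.0 - p)
    return 1.0 - miss

# Crawling adversary: infrared ~5% (inferred), each buried coax cable 70%.
print(round(combined_detection([0.05, 0.70]), 4))        # 0.715  -> below the 85% requirement
print(round(combined_detection([0.05, 0.70, 0.70]), 4))  # 0.9145 -> requirement met
```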
The probability of detection with multiple complementary sensors is determined by subtracting the product of their non-detection probabilities from 1. It is important to keep in mind that there is rarely ever one single detector that can detect all of the potential entry techniques that an adversary might use, and that combinations of detectors would typically provide a much more reliable detection probability by eliminating potential vulnerabilities.

The last thing we will introduce is the concept of a perimeter intrusion detection assessment system, referred to as a PIDAS. A PIDAS is a combination of barriers coupled with sensing and assessment equipment designed to provide a high level of detection. On one fence, you’ll notice cables draped along the fence, which can detect an adversary attempting to climb or cut through the fence. In addition, a PIDAS can simplify the assessment process because some effort is required to pass the first fence, and if an unauthorized person is in between the two fences, it conveys their intent to break into the facility. The space between the two fences is called the “clear zone” and represents an ideal place to put sensors, lighting, and cameras, because there are no structures to hide behind, and anyone in the area is easily visible. You may also notice that one fence is visually intimidating, with numerous layers of razor wire. This provides some level of deterrence to a potential adversary. We explained earlier that deterrence is not easily measured, and, therefore, not used in the DEPO calculation. However, it is still practiced. Care must be taken with the design of a PIDAS to ensure it does not open up any vulnerabilities. For example, if the two fences are too close together, an adversary may be able to bypass all the detection means in the clear zone using a ladder. |
The venae cavae are two major veins found in all vertebrates that breathe air. Like all veins, the function of the vena cava is to transfer blood that has been deoxygenated from the body back into the heart. These veins are essential components of the circulatory system, and each one is responsible for returning the blood from half of the body. Blood from the upper half travels through the superior vena cava, while blood from the lower half runs through the inferior vena cava.
Other major veins feed into each vena cava, and reveal which portions of the body they are responsible for. The function of the vena cava can be seen from their tributary veins. The superior vena cava, located just above the heart, is formed from the junction of the left and right brachiocephalic veins. These veins return blood from the head, neck, and arms, as well as the upper spine and chest. Another vein, the azygos, collects blood from the chest wall and lungs, and empties into the superior vena cava, just above the heart.
The function of the vena cava that collects blood from the lower body determines its different structure. The inferior vena cava begins near the small of the back, where the iliac veins join. The iliac veins return blood which has been deoxygenated back from the legs. Many smaller tributaries feed into it as it runs near the backbone, crosses the diaphragm, and connects to the heart. These tributaries feed blood from the genitals, abdomen, kidneys, and liver.
Ultimately, the vena cava's function is to ensure the proper operation of the circulatory system. By returning blood that has been depleted of its oxygen to the heart's right atrium, the heart can then pump this blood to the lungs. In the lungs, the blood receives oxygen, which is vital for survival, and returns it to the heart. The heart can then pump the oxygenated blood throughout the body. These important veins help to return this blood for re-use after the body has utilized it.
To assist in the function of the vena cava, contractions from the heart time the delivery of blood and supply pressure. There are no valves that separate the venae cavae from the right atrium. Instead, contractions of the heart are relayed through other veins and muscles. These contractions provide pressure necessary to push deoxygenated blood to the heart. This process is crucial to ensuring continuous blood flow back to the heart.
@Kat919 - It's actually not usually the end of the world to lie on your back for a little while, even in late pregnancy. Yes, if you do it for a long time, or in some circumstances, it *can* interfere with inferior vena cava function, but you will feel that happening in plenty of time to get off your back.
I only bring this up because there are some exercises and stretches that expectant moms can do that help align their pelvis or get baby into a better position that require lying on your back for a few minutes. (I like the "Spinning Babies" website, but there are other sources out there.) I wouldn't want moms to be afraid to try them!
This article was interesting to read because I guess I thought the "function" of the (inferior) vena cava was to make pregnant women pass out if they laid on their backs!
You always hear so much about how you should not lay on your back after the twelfth week of pregnancy or so (you hear different numbers) because the weight of your uterus can compress your vena cava, which runs up your back, and cause you to feel faint or lightheaded.
Apparently it's best to lie on your left side for optimum blood flow, so I suppose the vena cava must be to the right of the spine? I tried very hard to stay off my back, but I just couldn't stay off my right side. It's my favorite sleeping side and I just couldn't get used to anything else!
|
Historically, both small and significant changes have started with anger or outrage over an injustice, or a need unmet. People use channels available to them, such as protests, to make their sentiments heard and felt. If that does not work and people know of no other recourse, violence can erupt.
Since the U.S. became an independent nation through protest and violence, this is undoubtedly an effective way to make changes; bloody, but effective.
A persistent protest also leads to change. These changes tend to come by way of the tortoise, taking a generation or two, such as women getting the right to vote. It seems passion, like a river, eventually modifies what it touches, and there may be treacherous whirlpools and rapids along the way.
Gandhi’s nonviolent means of creating change took a few years, but it worked. Though a peaceful movement, it was a slippery slope. Gandhi may have experienced anger, or rage, but he didn’t act it out or incite others to. His love, or compassion for life gave his passion for India’s freedom its direction.
Passionate peace proved to be effective because feelings took a secondary position. Gandhi’s emotions were subservient to his priority of respect for life, the antithesis of acting out of hate.
|
Cystic Fibrosis is an inherited genetic disease that affects the lungs and digestive system. Approximately 1 in every 2000-3000 babies is born with Cystic Fibrosis each year. A defective gene found on the 7th chromosome changes a protein that regulates the movement of salt in and out of cells, causing the body to produce unusually thick, sticky mucus that can clog the lungs and lead to life-threatening lung infections, obstruct the pancreas, and stop the natural enzymes from helping the body break down and absorb food. Cystic Fibrosis does not affect mental faculty.
This autosomal recessive genetic disorder requires that both parents possess a mutative recessive gene in order to pass the disease to their child. Each child of a pair of parents both possessing the recessive gene for Cystic Fibrosis has a 25% chance of being born with no copy of a recessive gene, a 50% chance of being born with only one copy of the recessive gene, and a 25% chance of being born affected with Cystic Fibrosis.
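The 25/50/25 split comes from enumerating the four equally likely combinations of parental genes, as in this quick sketch (F and f are simply illustrative labels for the normal and CF copies of the gene):

```python
from collections import Counter

parents = ("Ff", "Ff")  # both parents carry one normal (F) and one CF (f) copy
outcomes = Counter(a + b for a in parents[0] for b in parents[1])

for genotype, count in sorted(outcomes.items()):
    print(genotype, f"{count}/4")
# FF 1/4 (unaffected non-carrier), Ff + fF 2/4 (carriers), ff 1/4 (affected)
```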
Cystic Fibrosis is fatal. Life expectancy depends upon several factors, including which genetic mutation is present and how the patient responds to treatment, antibiotics, and respiratory therapy. Though there is no guarantee of living to any age with the disease, recent improvements in screening and treatment have increased the average life expectancy of a patient with Cystic Fibrosis, and patients may now reach their 20s or 30s.
For more information on Cystic Fibrosis, visit:
- Genetic Disease Foundation – Cystic Fibrosis :
- The Mayo Clinic – Cystic Fibrosis
- Mount Sinai Hospital – Cystic Fibrosis
- NCBI – Cystic Fibrosis
- March of Dimes: Cystic Fibrosis
- Wikipedia – Cystic Fibrosis |
Communication is a two-way process, and the key elements needed are as follows:
Sender- the person starting the conversation
Message- what the sender wishes to communicate
Medium- the method of communication
Receiver- the person who receives the message and interprets it; the message has to be correctly interpreted by the receiver
Feedback- the receiver has to show that he or she received and understood the message
This can be represented as a Communication Cycle
Thinking about the communication cycle, you can observe that any interruptions to the cycle can cause difficulties with communication. Messages can get lost or be incomplete. Each part of the cycle has equal importance.
Sensory impairment refers to people who have problems with hearing or vision. If you cannot see well or hear well, you are most likely to miss verbal or non-verbal signals. The term ‘sensory deprivation’ refers to a person who has no hearing or no vision, or both.
If you do not speak the same language, this can be a significant barrier to communication. In this situation, without an interpreter or even a phrase book, you are likely to depend heavily on body language to understand what a person is trying to communicate.
Cultural differences can also get in the way. In France, when you greet someone, you may be expected to kiss them on both cheeks whilst clasping their hands. In India, it may be bad manners to touch the person at all.
Try Exercise 8 at the bottom of the page
When speaking to someone we do not know, it is probably advisable to speak more formally. This is language that follows the proper grammar and cultural rules. Informal language can be used with people we are more familiar with and often involves using familiar terms such as: acronyms, nicknames, jargon, dialect and slang. However, even with someone you are familiar with, using informal language can still sometimes lead to miscommunication and in some situations should be avoided.
Acronyms are words formed from the initials of other words (e.g. NHS, GPS).
Jargon can be defined as specialist words or expressions used by a profession or group that are difficult for others to understand. Jargon is often related to technical words and acronyms e.g. BP (medical shorthand for blood pressure), IT firewall (something that protects your computer from cyber-attack).
Slang is an informal language typically restricted to a particular context or group of people. For example in parts of London people use rhyming slang to represent words and phrases: ‘mince pies’ = eyes, ‘Have a butchers’ = ‘Have a butchers hook’ = have a look.
Dialect words are specific to a local geographical area. In some parts of the North of England for example, a small back alley way is often referred to as a ginnel.
Now try Exercise 9- Jargon, slang, dialect and acronyms at the bottom of the page
Emotions can act as barriers to effective communication. When people are upset, angry or distraught they often have difficulties in decoding or interpreting the message that is being conveyed. A person’s emotional state has to be considered and dealt with before communicating important information.
Anxiety can cause similar problems when communicating. It can prevent a realistic assessment of what is being said. When communicating information, a very anxious person will most likely not take in most of what has been said to them.
Depression causes feelings of hopelessness and isolation, which can prevent communication. An isolated or depressed person most likely takes a consistent, negative view of the world and may not value anything that is said to them.
Aggression of any sort can be a barrier to communication because it often leads to people being frightened. Aggression can be categorised as both verbal aggression, as in shouting or raising one’s voice, and physical aggression (or intimidation), such as towering over the other person, coming physically close or behaving in a threatening manner.
Mental health issues or mental illness can unduly affect communication. For example, if a person is on heavy medication or undergoing a paranoid episode, which may affect their ability to understand what is said to them, then it is possible that only very basic communication would be effective in this situation.
People with learning difficulties can have problems expressing themselves (e.g. not being able to process information, remember things well, coordination problems). Conditions such as autism and Asperger’s syndrome, where a person may struggle with body language and social cues, may require a simple, formal and unambiguous approach to communication for better understanding.
Dementia involves gradual deterioration of intellectual capacity. As a result, people with dementia also tend to be unaware of the real world, people or places and forget what they have been told.
Environmental barriers, such as noise, location, poor lighting etc., may impede effective communication. A person with hearing difficulties in a noisy room which is poorly lit will struggle to hear a person speaking, and their ability to lip read will be hindered.
Misjudgements and misunderstanding
Conversation topics related to religion, politics, cultural differences etc., presented as jokes, can sometimes be a source of misunderstanding and tension.
Behaviour that is appropriate at home is not necessarily appropriate at work. Physical contact (such as a hug) when greeting a family member could be misinterpreted at work if you did the same thing with a work colleague.
Writing & Reading (including emails and text messages)
“If a man wishes to write in a clear style, let him first be clear in his thoughts.”
Johann Wolfgang von Goethe
Sometimes a document, an email or text message can be misinterpreted if the message writer is not particularly skilled at using language in the right order or leaves important words or phrases out of the message.
For instance, it is confusing to say “I rode a black horse in red pajamas,” because it may lead us to think the horse was wearing red pajamas. The sentence becomes clear when it is changed to “Wearing red pajamas, I rode a black horse.”
Interestingly, how the message is structured in terms of grammar is more important than the words being spelled correctly when interpreting its correct meaning.
Now try Exercise 10: Scrambled Word Test below
You can also try Exercise 11 - Reading exercise below |
Computers & Writing Systems
abjad — a form of writing in which the vowels are omitted or optional, such as Hebrew and Arabic scripts.
abstract character — a unit of information used for the organization, control or representation of textual data. Abstract characters may be non-graphic characters used in textual information systems to control the organization of textual data (e.g. U+FFF9 INTERLINEAR ANNOTATION ANCHOR), or to control the presentation of textual data (e.g. U+200D ZERO WIDTH JOINER).
abstract character repertoire — a collection of abstract characters compiled for the purposes of encoding. See also charset.
abugida — a form of writing in which the consonants and vowels in a syllable are treated as a cluster or unit; typical of scripts from South Asia.
advance height — the amount by which the current display position is adjusted vertically after rendering a given glyph. This number is generally only meaningful for vertical writing systems, and is usually zero within fonts used for horizontal writing systems.
advance width — the amount by which the current display position is adjusted horizontally after rendering a given glyph.
affrication — the phonological process by which a simple stop, such as [t], is converted to an affricate, such as [tʃ]. For example, in some dialects of British English the word "tuna" is pronounced [tʃu:na], the first consonant having been affricated.
allophone — a variant of a phoneme. It is not distinctive, that is, substituting one allophone for another of the same phoneme will not change the meaning of the word, although it will sound unnatural. Broadly speaking, the test to determine whether two sounds are allophones of the same phoneme, or separate phonemes, is to see whether they are in complementary distribution, that is, when two phonological elements are found only in two complementary environments. For example, in English /ph/ only occurs syllable-initially when followed by a stressed vowel, but /p/ occurs in all other environments. This is illustrated by the words pin /phin/ and spin /spin/. Therefore, /ph/ and /p/ are seen to be in complementary distribution, and therefore allophones of the phoneme [p]. This test is not foolproof; some sounds are in complementary distribution but are not considered to be allophones. For example, in English /h/ only occurs syllable-initially and /ŋ/ only occurs syllable-finally. However they are phonetically so different that they are still considered to be separate phonemes. One allophone can be assigned to more than one phoneme, as illustrated in some North American English dialects, where the phonemes /t/ and /d/ can both be changed into the allophone [ɾ].
alphabet — a segmental writing system having symbols for individual sounds, rather than for syllables or morphemes. In a true alphabet, consonants and vowels are written as independent letters, in contrast to an abugida or an abjad. In a perfectly phonemic alphabet, phonemes and letters would be predictable in both directions; that is, the sound of a word could be predicted from its spelling and vice-versa. A phonetic alphabet is also predictable in this way, however it uses separate letters for separate allophones, whereas a phonemic alphabet may describe allophones of the same phoneme using a single letter.
anchor point — see attachment point.
ASCII — a standard that defines the 7-bit numbers (codepoints) needed for most of the U.S. English writing system. The initials stand for American Standard Code for Information Interchange. Also specified as ISO 646-IRV.
attachment point — a point defined relative to a glyph outline such that if two attachment points on two glyphs are positioned on top of each other, the glyphs are positioned correctly relative to each other. For example, a base character may have an attachment point used to position a diacritic, which would also have an attachment point. Also called anchor point.
baseline — the vertical point of origin for all the glyphs rendered on a single line. Roman scripts have a baseline on which the glyphs appear to “sit,” with occasional descenders below. Many Indic scripts have a “hanging” baseline, in which the bulk of the letters are placed below the baseline, with occasional ascenders above the line. Some scripts, such as Chinese, use a centered baseline, where the glyphs are all positioned with their centers on the baseline.
Basic Multilingual Plane (BMP) — the portion of Unicode’s codespace in which all of the most commonly used characters are encoded, corresponding to codepoints U+0000 to U+FFFF, abbreviated as BMP. Also known as Plane 0. See also Supplementary Planes.
bicameral — describes a script with two sets of symbols that correspond to each phoneme, most often upper- and lower-case. See also unicameral. Examples of bicameral scripts include Roman (or Latin), Greek, and Cyrillic.
bidirectionality — the characteristic of some writing systems to contain ranges of text that are written left-to-right as well as ranges that are written right-to-left. Specifically, in Arabic and Hebrew scripts, most text is written right-to-left, but numbers are written left-to-right. This can also be used to refer to text containing runs in multiple writing systems, some RTL and some LTR.
BMP — see Basic Multilingual Plane.
BOM — see byte order mark.
boustrophedon — a way of writing in which successive lines of text alternate between left-to-right and right-to-left directionality.
byte order mark (BOM) — the Unicode character U+FEFF ZERO WIDTH NO-BREAK SPACE when used as the first character in a UTF-16 or UTF-32 plain text file to indicate the byte serialization order, i.e. whether the least significant byte comes first (little-endian) or the most significant byte comes first (big-endian). Byte order is not an issue for UTF-8, though the byte order mark is sometimes added to the beginning of UTF-8 encoded files as an encoding signature that applications can look for to detect that the file is encoded in UTF-8. See http://www.unicode.org/unicode/faq/utf_bom.html.
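As an illustration of the entry above, here is a minimal Python sketch (the file name is a placeholder) of how an application might look for a BOM at the start of a file:

    import codecs

    def detect_bom(path):
        # Read the first four bytes and compare against the known BOM signatures.
        with open(path, "rb") as f:
            head = f.read(4)
        if head.startswith(codecs.BOM_UTF8):
            return "UTF-8 (BOM used as an encoding signature)"
        # Check UTF-32 before UTF-16: the UTF-32 LE BOM begins with the UTF-16 LE BOM bytes.
        if head.startswith(codecs.BOM_UTF32_LE):
            return "UTF-32, little-endian"
        if head.startswith(codecs.BOM_UTF32_BE):
            return "UTF-32, big-endian"
        if head.startswith(codecs.BOM_UTF16_LE):
            return "UTF-16, little-endian"
        if head.startswith(codecs.BOM_UTF16_BE):
            return "UTF-16, big-endian"
        return "no BOM found"

    print(detect_bom("example.txt"))   # "example.txt" is a hypothetical file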
cascading style sheets (CSS) — one of two stylesheet languages used in Web-based protocols (the other is XSL). CSS is mainly used for rendering HTML, but can also be used for rendering XML. It is much less complex than XSL; as a result, it can only be used when the structure of the source document is already very close to what is desired in the final form.
character — (1) a symbol used in writing, distinguished from others by its meaning, not its specific shape; similar to grapheme. It relates to the domain of orthographies and writing. See orthographic character.
character encoding form — a system for representing the codepoints associated with a particular coded character set in terms of code values of a particular datatype or size. For many situations, this is a trivial mapping: codepoints are represented by bytes with the same integer value as the codepoint. Some encoding forms may represent codepoints in terms of 16- or 32-bit values, though, and some 8-bit encoding forms may be able to represent a codespace that has more than 256 codepoints by using multiple-byte sequences. Most encoding forms are designed specifically for use in connection with a particular coded character set; e.g. UTF-8 is used specifically for encoded representation of the Universal Character Set defined by Unicode and ISO/IEC 10646. Some encoding forms may be designed for use with multiple repertoires, however. For example, the ISO 2022 encoding form supports an open collection of coded character sets and specifies changes between character sets in a data stream using escape sequences.
character encoding scheme — a character encoding form with a specific byte order serialization (relevant mainly for 16- or 32-bit encoding forms).
character set encoding — a system for encoded representation of textual data that specifies the following: (1) a coded character set, (2) one or more character encoding forms and (3) one or more character encoding schemes.
charset — an identifier used to specify a set of characters. Used particularly in Microsoft Windows and TrueType fonts, and in HTML and other Internet or Web protocols to refer to identifiers for particular subsets of the Universal Character Set.
cmap — character-glyph map: the table within a font containing a mapping of codepoints (characters) to glyph ID numbers. In a Unicode-based font the codepoints are Unicode values; in other fonts they correspond to other encodings.
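A sketch of how the cmap can be inspected programmatically (this assumes the third-party Python library fontTools and a hypothetical font file name):

    from fontTools.ttLib import TTFont

    font = TTFont("SomeFont.ttf")             # hypothetical font file
    cmap = font["cmap"].getBestCmap()         # dict: Unicode codepoint -> glyph name
    glyph_name = cmap[0x0041]                 # glyph for U+0041, assuming the font covers it
    glyph_id = font.getGlyphID(glyph_name)    # the numeric glyph ID used inside the font
    print(hex(0x0041), glyph_name, glyph_id)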
codepage — (1) synonym for coded character set.
(2) synonym for character set encoding; i.e., in some contexts, codepage is used to refer to a specification of a character repertoire and an encoding form for representing that repertoire.
(3) In some systems, a mapping between encoded characters in Unicode and a non-Unicode encoding form; e.g. Microsoft Windows codepage 1252.
codepoint — a numeric value used as an encoded representation of some abstract character within a computer or information system. Codepoints are integer values used to represent particular characters within a particular encoding.
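For example (a minimal Python sketch), the codepoint of a character and the character at a codepoint can be obtained directly:

    # ord() gives the codepoint of a character; chr() goes the other way.
    print(hex(ord("a")))   # 0x61 -> U+0061 LATIN SMALL LETTER A
    print(chr(0x0F40))     # the character encoded at U+0F40 (TIBETAN LETTER KA)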
colometry — in writing, the distribution of text into sense lines, so that a new clause starts on a new line.
complex script — a script characterized by one or more of the following: a very large set of characters, right-to-left or vertical rendering, bidirectionality, contextual glyph selection (shaping), use of ligatures, complex glyph positioning, glyph reordering, and splitting characters into multiple glyphs.
conjunct — a ligature, in particular, a ligature representing a consonant cluster in an Indic script.
CSS — see cascading style sheets.
dead key — a key in a particular keyboard layout that does not generate a character, but rather changes the character generated by a following keystroke. Dead keys are commonly used to enter accented forms of letters in writing systems based on Roman script.
deep encoding — see semantic encoding.
defective — describes a writing system that does not represent all the distinctive sounds of the language it is used to write.
determinative — in semantics, a class of words that indicates, specifies or limits a noun, such as the definite or indefinite article, the genitive (possessive) marker, or cardinal numbers.
diacritic — a written symbol which is structurally dependent upon another symbol; that is, a symbol that does not occur independently, but always occurs with and is visually positioned in relation to another character, usually above or below. Diacritics are also sometimes referred to as accents. For example, acute, grave, circumflex, etc.
digraph — a multigraph composed of two components.
diphthong — in phonetics, a complex speech sound occupying one syllable, which begins with one vowel and ends with another. For example [eɪ̯] in British (RP) pronunciation of the word lane. See also monophthong.
display encoding — See presentation-form encoding.
distinctive — also contrastive. An element which makes a distinction between units. In phonology, a process or a pair of sounds, the alternation of which changes the meaning of a word. See also phoneme, minimal pair. For example, voicing is distinctive in most non-tonal languages, as illustrated by the difference between English fan and van, or German Kern and gern.
document — a collection of information. This includes the common sense of the word, i.e. an organisation of primarily textual information that can be produced by a word processing or data processing application. It goes beyond this, however, to include structured information held within an XML file. Each XML file is considered to contain one document, whatever the structure and type of that information.
Document Type Definition (DTD) — a markup declaration used by SGML and XML that contains the formal specifications, or grammar, of an SGML or XML document. One use of the DTD is to run a validation process over an XML file, which indicates if it matches the DTD, or if not, provides a listing of each line at which the file fails some part of the required structure.
DTD — see Document Type Definition.
em square — the square grid which is the basis for the design of all glyphs within a given font; so called because it historically corresponded to the size of the letter M. When rendering, the requested point size specifies the size of the font’s em square to which all glyphs are scaled.
encoded character — an abstract character in some repertoire together with a codepoint to which it is assigned within a coded character set. Encoded characters do not necessarily correspond to graphemes.
encoding — (1) synonym for a character encoding form.
(2) synonym for a character set encoding. This usage is common, especially in cases in which distinctions between a coded character set and a character encoding form is not important (i.e. 8-bit, single-byte implementations). Someone might think of an encoding as simply a mapping between byte sequences and the abstract characters they represent, though this model is not adequate to describe some implementations, particularly CJKV standards, or Unicode and ISO/IEC 10646.
Extensible Markup Language (XML) — a standard for marking up data so as to clearly indicate its structure, generally in a way that indicates the meaning of different parts of it rather than how they will be displayed. See http://www.w3.org/XML/ for details.
Extensible Stylesheet Language (XSL) — a language for expressing stylesheets. It consists of two parts: XSL transformations (XSLT) and an XML vocabulary for specifying formatting semantics. See http://www.w3.org/Style/XSL for full details.
featural writing system — a writing system in which phonetic features, rather than phones (sounds), are represented. For example, there might be a symbol to represent the feature “bilabial” (a sound produced with both lips), a symbol to represent the feature “voiced”, and a symbol to represent the feature “stop”. These could be combined to represent the sound [b]. The closest functioning writing system to this is the Korean Hangul, in which many of the strokes making up the symbols represent place or manner of articulation. Some writing systems used for representing signed languages also contain symbols which stand for particular features of signs. In this case, the symbol often visually resembles the feature it represents, such as direction of movement.
GDL — See Graphite.
gemination — in phonetics, consonant lengthening, usually by about a time-and-a-half of the length of a “short” consonant. Geminated fricatives, trills, nasals and approximants are simply prolonged. In geminated stops, the “hold” is prolonged. In some languages, such as Japanese, Hungarian, Arabic, Italian and Finnish, gemination is distinctive, but in most it is not. In languages where it is distinctive, it is usually restricted to certain consonants. English contains very few words in which gemination affects the meaning; among these are unnamed vs. unaimed or, in some dialects, sixths /sɪksː/ vs. six /sɪks/ (source: John Lawler, University of Michigan). In some languages, consonant length and vowel length depend on each other. For example in Swedish and Italian a short vowel must be followed by a long consonant (geminate), whereas a long vowel must be followed by a short consonant.
glyph — a shape that is the visual representation of a character. It is a graphic object stored within a font. Glyphs are objects that are recognizably related to particular characters and which are dependent on a particular design (e.g. the letter “g” as drawn in three different typefaces constitutes three distinct glyphs). Glyphs may or may not correspond to characters in a one-to-one manner. For example, a single character may correspond to multiple glyphs that have complementary distributions based upon context (e.g. final and non-final sigma in Greek), or several characters may correspond to a single glyph known as a ligature (e.g. conjuncts in Devanagari script). (For more information on glyphs and their relationship to characters, see ISO/IEC TR 15285.)
grapheme — anything that functions as a distinct unit within an orthography. A grapheme may be a single character, a multigraph, or a diacritic, but in all cases graphemes are defined in relation to the particular orthography.
Graphite — a package developed by SIL to provide “smart rendering” for complex writing systems in an extensible way. It is programmable using a language called Graphite Description Language (GDL). Because it is extensible, it can be used to provide rendering for minority languages not supported by Uniscribe.
heteronym — homographs which, although spelled the same way, are pronounced differently and have different meanings. For example, in English “wind” (noun, as in weather) and “wind” (verb, to coil something).
homograph — one of multiple words having the same spelling but different meanings. They may be pronounced differently (for example in English “tear: rip” and “tear: secreted when crying”), in which case they are also heteronyms, or they may be pronounced the same (for example in American English “tire: cause to be fatigued” and “tire: wheel of a car”), in which case they are also homophones.
homophone — one of multiple words having the same pronunciation but different meanings. They may be spelled differently (for example in English “write” and “right”), in which case they are called heterographs, or the same (for example in English “bark: on a tree” and “bark: of a dog”), in which case they are also homographs.
ideograph — see logograph.
IME — see input method editor.
input method — any mechanism used to enter textual data, such as keyboards, speech recognition or handwriting recognition. The most common form of input method is the keyboard. The term "input method" is intended to include all forms of keyboard handling, including but not limited to input methods that are available for Chinese and other very-large-character-set languages and that are commonly known as input method editors (IMEs). An IME is taken to be a specific type of the more general class of input methods.
input method editor (IME) — a special form of keyboard input method that makes use of additional windows for character editing or selection in order to facilitate keyboard entry of writing systems with very large character sets.
internationalization — a process for producing software that can easily be adapted for use in (almost) any cultural environment; i.e. a methodology for producing software that can be script-enabled and is localisable. Sometimes abbreviated as “I18N”.
kern — to adjust the display position whilst rendering in order to visually improve the spacing between two glyphs. For instance, kerning might be used on the word WAVE to reduce the illusion of white space between the diagonal strokes of the W, A, and V.
Keyman — an input method program which changes and rearranges incoming characters to allow easy ways of typing data in writing systems that would otherwise be difficult or inconvenient to type. See www.tavultesoft.com/keyman.
LANGID — in the Microsoft Win32 API, a 16-bit integer used to identify a language or locale. A LANGID is composed of a 10-bit primary language identifier together with a 6-bit sub-language identifier (the latter being used to indicate regional distinctions for locales that use the same language).
language ID — a constant value within some system used for metadata identification of the language in which information is expressed. May be numeric or character based, depending on the system.
Latin script — see Roman script.
left side-bearing — the white space at the left edge of a glyph’s visual representation, or more specifically, the distance between the current horizontal display position and the left edge of the glyph’s bounding box. A positive left side-bearing indicates white space between the glyph and the previous one; a negative left side-bearing indicates overlap or overhang between them.
locale — a collection of parameters that affect how information is expressed or presented within a particular group of users, generally distinguished from one another on the basis of language or location (usually country). Locale settings affect things such as number formats, calendrical systems and date and time formats, as well as language and writing system.
localisability — the extent to which the design and implementation of a software product allows potential for localisation of the software.
localisation — the process of adapting software for use by users of different languages or in different geographic regions. For purposes of this document, localisation has to do with the language and script of users, and is distinct from script enabling, which has to do with the script in which language data is written. The localisation process may include such modifications as translating user-interface text, translating help files and documentation, changing icons, modifying the visual design of dialog boxes, etc. Sometimes abbreviated “L10N”.
logograph — also called a logogram or ideograph. A written symbol representing a whole word. Technically, this is distinct from an ideogram, which represents a concept independently of words, although the two are often used interchangeably.
logographic writing system — also known as an ideographic writing system. A writing system in which each symbol represents a complete word or morpheme. The symbols do not indicate the word's pronunciation, only its meaning. Historically, Sumerian cuneiform and Egyptian hieroglyphics were logographic, but today Chinese is the only known writing system in the world that remains logographic. See also logosyllabary.
logosyllabary — a writing system in which each sign is used primarily to represent words or morphemes, with some subsidiary usage to represent syllables. Most natural logosyllabaries employ the rebus principle to extend the character set so that syllables as well as morphemes can be represented. Logosyllabaries may also include determinatives to mark semantic categories which would otherwise be ambiguous. The extent to which syllabic sounds are represented varies from one writing system to another. In instances where a relatively large number of symbols represent syllabic sounds, a logosyllabary may evolve into an abugida or an abjad as the syllabic use overtakes the logographic use.
metathesis — a phonological change in which the order of segments, particularly successive sounds, in a word is reversed. For example, the English word 'ask' was pronounced [æks] between the 5th and 12th centuries, and some dialects have reverted back to this pronunciation in modern times.
mnemonic keyboard — a keyboard layout based on the characters appearing on the keytops of the keyboard. See also positional keyboard.
monophthong — a vowel sound which does not change in quality as it is articulated. (Contrast with diphthong.) It can be short, as in English bed [bɛd], or long, as in English bead [bi:d]. A single short monophthong is the shortest syllable in any language. The process by which monophthongs change to diphthongs or vice versa is an important factor in language change. Diphthongization in the 15th or 16th century changed the long German monophthong [iː] to [aɪ], as in Eis 'ice', and long [uː] to [aʊ] as in Haus 'house'. A characteristic of Southern American English is the monophthongization of certain diphthongs such as [aɪ] to long [a:] in words such as kite. (source: Wikipedia)
mora — a unit of rhythmic measurement based on syllable weight, which is distinctive in some languages. Japanese is one of the best-documented of these languages. Short (or light) syllables are monomoraic, consisting of one mora. Long (or heavy) syllables are bimoraic, consisting of two morae. Some languages contain superheavy syllables, for example Hindi, in which a long vowel can be followed by a geminate consonant. These syllables are said to be trimoraic. The first consonant of a syllable does not represent any morae, as it does not constitute a syllable in itself. Syllable-final consonants can either form the final part of a bi- or trimoraic syllable, as is the case in Goidelic Irish, or they can represent a mora in themselves, as is the case in Japanese. Although there is a relation between syllables and morae, they are not necessarily interchangeable. For example, the Japanese word for “photograph”, [sjasin], consists of 2 syllables: sja + sin, but 3 morae: sja + si + n. (source: Jouji Miwa at Mora and Syllable)
multigraph — a combination of two or more written symbols or orthographic characters (e.g. letters) that are used together within an orthography to represent a single sound. (Combinations consisting of two characters are also known as digraphs.)
multi-language enabling — see script enabling.
multi-script encoding — an encoding implementation for some particular language that is designed to enable input to and rendering from that encoding using more than one writing system. When such an implementation is used, the different writing systems are normally based on different scripts.
multi-script enabling — see script enabling.
non-Roman script — a script using a set of characters other than those used by the ancient Romans. Non-Roman scripts include relatively simple ones such as Cyrillic, Georgian, and Vai, and complex scripts such as Arabic, Tamil, and Khmer.
normalization — transformation of data to a normal form. For historical reasons, the Unicode standard allows some characters to have more than one encoded representation. For example, á may be represented as a single codepoint, U+00E1 LATIN SMALL LETTER A WITH ACUTE, or two codepoints, U+0061 LATIN SMALL LETTER A and U+0301 COMBINING ACUTE ACCENT. A normalization scheme is used to standardize the codepoints so that every character is always represented by the same sequence of codepoints. Normalization is described in the Unicode Standard Section 5.7, Normalization.
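A minimal Python sketch of the á example above, using the standard unicodedata module:

    import unicodedata

    composed = "\u00E1"            # á as a single codepoint
    decomposed = "\u0061\u0301"    # a + combining acute accent
    print(composed == decomposed)                                # False: different codepoint sequences
    print(unicodedata.normalize("NFC", decomposed) == composed)  # True: both normalize to the composed form
    print(unicodedata.normalize("NFD", composed) == decomposed)  # True: both normalize to the decomposed form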
orthographic character — a written symbol that is conventionally perceived as a distinct unit of writing in some writing system or orthography.
PDF — see Portable Document Format.
PERL — see Practical Extraction and Reporting Language.
phone — a speech sound which is identified as the audible realization of a phoneme.
phoneme — the smallest distinctive segment of sound in any language. It is actually comprised of a group of similar sounds, called allophones, which native speakers of a language may perceive as being all the same. If a pair of words exist which differ only in one phonological element (known as a minimal pair), the element in which they differ is distinctive, and represents two phonemes in the language. For example, in English, bit and pit are a minimal pair; [b] and [p] are distinct phonemes. Phonemes are not consistent across languages; two sounds may be separate phonemes in one language and allophones in another.
phonemic inventory — an inventory of all the distinctive sounds (phonemes) in a given language, also called a phoneme inventory. A language's phonemic inventory is not fixed over time; as the language changes, sounds which were previously allophones may become phonemes. The smallest documented phoneme inventory belongs to the Rotokas language, which uses only 11 phonemes. The largest belongs to !Xóõ, with an estimated 112 phonemes. The number of phonemes used in speech does not necessarily correspond to the number of symbols used in writing for a given language. For example, the English alphabet contains 26 letters, but the phonemic inventory numbers between 35 and 47 depending on the dialect used (source: Wikipedia). In a true phonemic script the symbols should map on a one-to-one basis to the sounds in the phonemic inventory.
phonemic script — a writing system in which each symbol tends to correspond to one phoneme. For example, the N'ko alphabet assigns one symbol to each phoneme. Also sometimes called a phonetic script although technically this is not accurate, as a true phonetic script should represent every allophone in a language.
phonetization — see the rebus principle.
plain text — textual data that contains no document-structure or format markup, or any tagging devices that are controlled by a higher-level protocol. The meaning of plain text data is determined solely by the character encoding convention used for the data.
plane — in Unicode, a range of 64K codepoints. Plane zero is the original 64K codepoints that can be represented in a single 16-bit character. See also Basic Multilingual Plane, supplementary planes, and surrogate pair.
Portable Document Format (PDF) — a particular file format for the storage of electronic documents in a paged form. Created by Adobe around their Adobe Acrobat product. Usually created from a Postscript page description.
positional keyboard — a keyboard layout defined in terms of the relative positions of keys rather than what they have printed on them. See also mnemonic keyboard.
Postscript — a page description language defined by Adobe. Originally implemented in laser printers so pages were described in terms of line drawing commands rather than as a bitmap.
Postscript font — a font in a format suitable for use within a Postscript document. There are many types. Type 1 is the most common and is what is meant most commonly when people refer to Postscript fonts. There are also ways of embedding other font formats into a Postscript document. For example a Type 42 font is a TrueType font formatted for use within a Postscript document. Type 1 fonts differ in the way their outlines are described from TrueType fonts.
Practical Extraction and Reporting Language (PERL) — an interpreted programming language particularly strong for text processing.
presentation-form encoding — a character encoding system in which the abstract characters that are encoded match one-for-one with the glyphs required for text display. Such encodings allow correct rendering of writing systems on “dumb” rendering systems by having distinct codepoints for contextual forms, positional variants, etc. and are designed on the basis of rendering needs rather than on the basis of character semantics (the linguistically relevant information). Also known as glyph encoding, display encoding or surface encoding; distinguished from semantic encoding.
Private Use Area (PUA) — a range of Unicode codepoints (E000 - F8FF and planes 15 and 16) that are reserved for private definition and use within an organisation or corporation for creating proprietary, non-standard character definitions. For more information see The Unicode Consortium, 1996, pp. 619 ff.
PUA — see Private Use Area.
rasterising — converting a graphical image described in terms of lines and fills into a bitmap for display on an imaging device.
rebus principle — also known as phonetization. The use of a pre-existing logograph to represent a syllabic sound having the same sound as, but a different meaning from, that of the word originally represented. The rebus principle is especially useful for representing function words, proper names, and other words which would otherwise be difficult to depict. A well-known example is the Egyptian use of the symbol representing “swallow” (pronounced wr) to also represent the word “big” (which was also pronounced wr). A symbol used in this way is called a rebus. The rebus strengthens the phonetic aspect of a logographic writing system by exploiting the phonetic similarities between words. If a logographic writing system is fully (or almost fully) phonetized, it may become an abugida or an abjad. Other times, it is only partially phonetized and develops into a logosyllabary.
regression test — a test (usually a whole set of tests, often automated) designed to check that a program has not “regressed”, that is, that previous capabilities have not been compromised by introducing new ones.
render — to display or draw text on an output device (usually the computer screen or paper). This usually consists of two processes: transforming a sequence of characters to a set of positioned glyphs and rasterising those glyphs into a bitmap for display on the output device.
right side-bearing — the white space at the right edge of a glyph’s visual representation, or more specifically, the distance between the display position after a glyph is rendered and the right edge of the glyph’s bounding box. A positive right side-bearing indicates white space between the glyph and the following one; a negative right side-bearing indicates overlap or overhang between them.
Roman script — the script based on the alphabet developed by the ancient Romans ("A B C D E F G ..."), and used by most of the languages of Europe, including English, French, German, Czech, Polish, Swedish, Estonian, etc. Also called Latin script.
schema — in markup, a set of rules for document structure and content.
script — a maximal collection of characters used for writing languages or for transcribing linguistic data that share common characteristics of appearance, share a common set of typical behaviours, have a common history of development, and that would be identified as being related by some community of users. Examples: Roman (or Latin) script, Arabic script, Cyrillic script, Thai script, Devanagari script, Chinese script, etc.
Script Description File (SDF) — a file describing certain kinds of complex script behaviour, used to control a rendering engine to which it has given its name. Created by Tim Erickson and used in Shoebox, LinguaLinks, and ScriptPad.
script enabling — providing the capability in software to allow documents to include text in multiple languages or scripts, and to handle input, display, editing and other text-related operations of text data in multiple languages and scripts. Script enabling has to do with the script in which language data is written, as opposed to localisation, which has to do with the language and script of the user interface.
SDF — see Script Description File.
semantic encoding — an encoding that has the property of one codepoint for every semantically distinct character (the linguistically relevant units). In general, such encodings require the use of “smart” rendering systems for correct appearance to be achieved, but are more appropriate for all other operations performed on the text, especially for any form of analysis. Also known as deep encoding; distinguished from presentation-form encoding.
SFM — see Standard Format Marker.
SGML — See Standard Generalized Markup Language.
sort key — a sequence of numbers that when appropriately processed using a particular standard algorithm will position the corresponding string in the correct sort position in relation to other strings. The sort key need not correspond one number to one codepoint in the input string.
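As a toy illustration (a Python sketch of the idea only, not any particular standard algorithm; the alphabet and the words are assumptions), strings can be turned into sequences of numbers that sort in a desired order, here with ñ treated as a letter that follows n:

    ALPHABET = "abcdefghijklmnñopqrstuvwxyz"

    def sort_key(word):
        # Map each character to its position in the desired alphabet order.
        # Real collation algorithms (e.g. the Unicode Collation Algorithm)
        # build multi-level keys, but the principle is the same.
        return tuple(ALPHABET.index(c) for c in word.lower())

    words = ["oso", "ñame", "nube", "noche"]
    print(sorted(words, key=sort_key))   # ['noche', 'nube', 'ñame', 'oso']
    print(sorted(words))                 # plain codepoint order puts 'ñame' last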
Standard Format Marker (SFM) — SIL has a proprietary format called "standard format markers" (SFM). It is possible (and even probable) that SFMs in a single document have different character encodings. When converting to one encoding (Unicode) these must be converted with different mapping files. A standard format marker begins with a backslash (\). For example, \p would represent a paragraph tag.
Standard Generalized Markup Language (SGML) — a notation for generalized markup developed by the International Organization for Standardization (ISO). It separates textual information from the processing function used for formatting. It was found difficult to parse, due to the many variants possible, and so XML was developed as a subset to resolve the ambiguities and to make parsing easier.
smart font — a font capable of performing transformations on complex patterns of glyphs, above and beyond the simple character-to-glyph mapping that is a basic function of font rendering (see cmap). The information specifying the smart behavior is typically in the form of extra tables embedded in the font, and will generally allow layered transformations involving one-to-many, many-to-one, and many-to-many mappings of glyphs.
supplementary planes — Unicode Planes 1 through 16, consisting of the supplementary code points, corresponding to codepoints U+10000 to U+10FFFF. In The Unicode Standard 3.1, characters were assigned in the supplementary planes for the first time, in Planes 1, 2 and 14. See also Basic Multilingual Plane.
surface encoding — see presentation form encoding.
surrogate pair — a mechanism in the UTF-16 encoding form of Unicode in which two 16-bit code units from the range 0xD800 to 0xDFFF are used to encode Unicode supplementary plane characters, i.e. those with Unicode scalar values in the range U+10000 to U+10FFFF.
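A small Python sketch of the arithmetic behind surrogate pairs (the constants are those defined by UTF-16; the example character is just an illustration):

    def to_surrogate_pair(scalar):
        # Valid only for supplementary-plane characters (U+10000..U+10FFFF).
        assert 0x10000 <= scalar <= 0x10FFFF
        offset = scalar - 0x10000
        high = 0xD800 + (offset >> 10)      # top 10 bits -> high (leading) surrogate
        low = 0xDC00 + (offset & 0x3FF)     # bottom 10 bits -> low (trailing) surrogate
        return high, low

    high, low = to_surrogate_pair(0x1F600)         # U+1F600 GRINNING FACE
    print(hex(high), hex(low))                     # 0xd83d 0xde00
    print("\U0001F600".encode("utf-16-be").hex())  # d83dde00, the same pair serialized big-endian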
syllabary — a form of writing in which the symbols represent syllables--most commonly a vowel-and-consonant combination. A syllabary differs from an abugida in that there are no distinct elements of the symbols to correspond to the syllable's phonemes.
symbol-encoded font — Windows supports two types of Unicode fonts: standard and symbol. Symbol-encoded fonts are used for either non-orthographic collections of shapes (such as Wingdings) or for legacy orthographies (e.g., SIL Ezra, SIL Galatia, SIL IPA) created prior to the availability of Unicode-based solutions. Symbol-encoded fonts encode characters in the Private Use Area, typically U+F020..U+F0FF.
tokenisation — the process of analysing a string into a contiguous sequence of smaller units: for example, word breaking or syllable breaking or the creation of a sort key.
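A minimal word-breaking sketch in Python (simple letter-run splitting only; real tokenisers, especially for scripts written without spaces between words, need much more):

    import re

    def tokenise(text):
        # Break a string into a contiguous sequence of word tokens.
        # \w+ matches runs of letters, digits and underscore; everything else is a boundary.
        return re.findall(r"\w+", text)

    print(tokenise("Graphemes, glyphs and characters are related terms."))
    # ['Graphemes', 'glyphs', 'and', 'characters', 'are', 'related', 'terms']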
TrueType font — font format used primarily in Windows and on the Mac, allows for glyph scaling and hinting.
unicameral — describes a script with only one set of symbols per phoneme. See also bicameral.
Unicode Scalar Value (USV) — a number written as a hexadecimal (base 16) value that serves as the codepoint for Unicode characters. Characters in the BMP are written with four hex digits, e.g. U+0061, U+AA32. Characters in supplementary planes use five or six digits.
Uniscribe (Unicode Script Processor) — due to technical limitations in OpenType, it is necessary to pre-process strings before applying OpenType smart behaviour. Microsoft uses a particular DLL (Dynamic Link Library) called Uniscribe to do this pre-processing. Uniscribe does all of the script specific, font generic processing of a string (such as reordering) leaving the font specific processing (such as contextual forms) to the OpenType lookups of a font.
USV — see Unicode Scalar Value.
UTF-8 — an encoding form for storing Unicode codepoints in terms of 8-bit bytes. Characters are encoded as sequences of 1-4 bytes. Characters in the ASCII character set are all represented using a single byte. See http://www.unicode.org/unicode/faq/utf_bom.html.
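A short Python sketch showing the 1-4 byte sequences (the sample characters are just illustrations):

    samples = ["A", "é", "€", "\U0001F600"]   # U+0041, U+00E9, U+20AC, U+1F600
    for ch in samples:
        data = ch.encode("utf-8")
        print(f"U+{ord(ch):04X} -> {len(data)} byte(s): {data.hex()}")
    # U+0041 -> 1 byte(s): 41
    # U+00E9 -> 2 byte(s): c3a9
    # U+20AC -> 3 byte(s): e282ac
    # U+1F600 -> 4 byte(s): f09f9880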
UTF-32 — an encoding form for storing Unicode codepoints in 32-bit words. Since 32 bits encompasses the entire range of Unicode, every codepoint is encoded as a single 32-bit word. See Unicode Technical Report #19.
virama — the generic name for a written symbol, particularly common in Brahmic abugidas, having the function of silencing the inherent vowel in every consonant character. The virama can be used either to represent a word-final consonant or the first consonant(s) in a consonant cluster. The shape of the symbol varies from script to script, but it is often a diacritic, written above, below or alongside the consonant which it modifies.
VOLT — See Visual OpenType Layout Tool.
writing system — an implementation of one or more scripts to form a complete system for writing a particular language. Most writing systems are based primarily upon a single script; writing systems for Japanese and Korean are notable exceptions. Many languages have multiple writing systems, however, each based on different scripts; e.g. the Mongolian language can be written using Mongolian or Cyrillic scripts. A writing system uses some subset of the characters of the script or scripts on which it is based with most or all of the behaviours typical to that script and possibly certain behaviours that are peculiar to that particular writing system.
x-height — the distance from the baseline of a line of text to the top of the main body of lower-case letters, that is, without ascenders or descenders. It is the height of a lower-case x, as well as a lower-case u, v, w, and z. Curved letters such as a, e, n, and s tend to be slightly taller than the x-height for aesthetic purposes.
XML — see Extensible Markup Language.
XSL — see Extensible Stylesheet Language.
XSLT — see Extensible Stylesheet Language Transformations.
I would add samples of the different things you talk about .... so Hebrew text, Arabic text, ....
I might also suggest that it might be helpful to have a taxonomy to which the glossary words belong. That is, they are not all equally confusing to learners and readers. A reader-learner might more easily confuse a grapheme, a glyph and a character. But "kern" is less likely to be confused with these previous terms. (Mostly because it is in a different semantic set.) So the terms in the glossary are related, and a reader of the glossary is likely not only to want to understand the specific term but also what differentiates it from other terms (concepts) in its semantic group.
Above, the entry for phonetization says: phonetization — see the rebus principle.
However, there does not seem to be an entry for "rebus principle".
Thanks, Hugh. I've restored the missing entry on the rebus principle.
(Phys.org) —Researchers at the University of Arkansas have identified that water, when chilled to a very low temperature, transforms into a new form of liquid.
Through a simulation of "supercooled" water, a research team led by chemist Feng "Seymour" Wang confirmed a "liquid-liquid" phase transition at 207 kelvins, or 87 degrees below zero on the Fahrenheit scale.
The properties of supercooled water are important for understanding basic processes during cryoprotection, which is the preservation of tissue or cells by liquid nitrogen so they can be thawed without damage, said Wang, an associate professor in the department of chemistry and biochemistry in the J. William Fulbright College of Arts and Sciences.
"On a miscrosecond time scale, the water did not actually form ice but it transformed into a new form of liquid," Wang said. "The study provides strong supporting evidence of the liquid-liquid phase transition and predicted a temperature of minimum density if water can be cooled well below its normal freezing temperature. Our study shows water will expand at a very low temperature even without forming ice."
The findings were published online July 8 in the journal Proceedings of the National Academy of Sciences. Wang wrote the article, "Liquid–liquid transition in supercooled water suggested by microsecond simulations." Research associates Yaping Li and Jicun Li assisted with the study.
The liquid–liquid phase transition in supercooled water has been used to explain many anomalous behaviors of water. Direct experimental verification of such a phase transition had not been accomplished, and theoretical studies from different simulations contradicted each other, Wang said.
The University of Arkansas research team investigated the liquid–liquid phase transition using a simulation model called Water potential from Adaptive Force Matching for Ice and Liquid (WAIL). While normal water is a high-density liquid, the low-density liquid emerged at lower temperatures, according to the simulation.
More information: Liquid–liquid transition in supercooled water suggested by microsecond simulations, www.pnas.org/cgi/doi/10.1073/pnas.1309042110 |
The molecule that gave rise to life on Earth had to be able to replicate itself, and researchers at Massachusetts General Hospital have found fresh evidence that this molecule was RNA, a close relative of DNA.
DNA is an unlikely candidate for the original basis for life because it is not capable of replicating itself. It requires the help of the cell's chemical machinery to do the replication. This cellular machinery is made of RNA.
An alternative theory is that life began with RNA, as this closely-related molecule is capable of both coding information and of catalysing replication reactions. Replication involves nucleotide building blocks joining up alongside the strand of RNA to make a new strand of RNA.
A paper published in the journal ACS Central Science suggests that RNA molecules have more flexibility in how they interact with nucleotides, the building blocks that make up the longer strands of RNA.
Researchers at the Howard Hughes Medical Institute at Massachusetts General Hospital used X-ray crystallography to see how RNA was matching up with its nucleotides. They found that in addition to the pairing up that would allow faithful replication of the strands, there were some rogue matches that might have halted replication.
"Base-pairing in RNA is essentially the same as in DNA: G pairs with C, and U (instead of T) pairs with A. We were looking at G monomers pairing with C residues on an RNA strand," study author Jack Szostak tells IBTimes UK.
Szostak and his colleagues were surprised to find that the nucleotides were sometimes lining up in an unusual way. "We expected to see only G pairing to C with the standard Watson-Crick geometry and pattern of hydrogen bonds. We were surprised to see, in addition, some different kinds of G-C pairs – a different geometry and a different pattern of hydrogen bonds."
The findings mean that when investigating whether RNA really could self-replicate in a way that could kick off the chain reaction leading to life, the researchers will have to take into account the unconventional ways that the nucleotides are lining up along the strand. This could be the reason that RNA replication is so full of errors, Szostak says. "On the other hand, it points to a simple way that a ribozyme polymerase could improve accuracy: by enforcing the correct Watson-Crick geometry of base-pairing," he says.
Catching RNA in the act of full-on self-replication has not yet been achieved in the laboratory, and so hard evidence for the RNA world is some way off. "We and others have made a lot of progress, so we are getting closer, but there is still a lot to learn," says Szostak.
The reproductive organs of a male mosquito may be the size of a pinhead, but the sperm they contain (testes shown in green) present a prime target in a new genetic war against malaria. Wiping out malaria-laden mosquitoes is a surefire way to reduce the spread of a disease that kills up to a million people each year. Chemical warfare isn’t working – mosquitoes outmanoeuvre insecticides. But now, armed with the mosquito’s genetic code, scientists are developing birth-control measures – altering genes and rendering the pests sterile. The strategy is feasible because unique properties of seminal fluids (represented in yellow) that are transferred along with sperm mean the female only mates once in a lifetime. Consequently, a host of females copulating with a single sterile male would put paid to an army of mosquitoes. Researchers hope introducing sterile mosquitoes into the wild will, in future, add force to the battle against malaria.
Written by Caroline Cross
BPoD stands for Biomedical Picture of the Day. Managed by the MRC London Institute of Medical Sciences the website aims to engage everyone, young and old, in the wonders of biomedicine. Images are kindly provided for inclusion on this website through the generosity of scientists across the globe. |
A log book is a systematic daily or hourly record of activities, events and occurrences. Log books are often used in the workplace, especially by truck drivers and pilots, to log hours and distances covered.
Originally, log books were created so that ships could record the distances they covered. Since then, most ships have started using computerized systems and no longer use a formal log book for that purpose. Instead, log books are used to keep a record of events and to help navigate in the event that the radio, radar or GPS fails.
Log books are also used, often in an electronic format, by nuclear power plants and other energy producers. Log books also play a critical role in the manufacturing business. Their main use is for tracking and evaluating the manufacturing process.
A pilot's log book is used as a record of time and training to be used toward future certificates and ratings. It is also used as currency to comply with various regulations. Similarly, truck drivers use log books to ensure that they are keeping up with various transportation laws. Truck drivers are required to keep a detailed 24-hour log of their location and time spent off and on duty because they are only allowed to drive for a certain number of consecutive hours.
Particle board, also known as particleboard and chipboard, is an engineered wood product manufactured from wood chips, sawmill shavings, or even sawdust, and a synthetic resin or other suitable binder, which is pressed and extruded. Particleboard is a composite material.
Particleboard is cheaper, denser and more uniform than conventional wood and plywood and is substituted for them when appearance and strength are less important than cost. However, particleboard can be made more attractive by painting or the use of wood veneers onto surfaces that will be visible. Though it is denser than conventional wood, it is the lightest and weakest type of fiberboard, except for insulation board. Medium-density fibreboard and hardboard, also called high-density fiberboard, are stronger and denser than particleboard. Different grades of particleboard have different densities, with higher density connoting greater strength and greater resistance to failure of screw fasteners.
A major disadvantage of particleboard is that it is very prone to expansion and discoloration due to moisture, particularly when it is not covered with paint or another sealer. Therefore, it is rarely used outdoors or in places where there are high levels of moisture, with the exception of some bathrooms, kitchens and laundries, where it is commonly used as an underlayment - in its moisture resistant variant - beneath a continuous sheet of vinyl flooring. It does, however, have some advantages when it comes to constructing the cabinet box and shelves. For example, it is well suited for attaching cabinet door hinges to the sides of frameless cabinets. Plywood has the potential to feather off in sheaves when extreme weight is placed on the hinges. In contrast, particle board holds the screws in place under similar weight.
History and development
Modern plywood, as an alternative to natural wood, was re-invented in the 19th century (given that it was well known by the ancient Egyptians several thousand years before), but by the end of the 1940s a shortage of lumber made it difficult to manufacture plywood affordably. Particleboard was intended to be a replacement. Its inventor was Max Himmelheber of Germany. The first commercial piece was produced during World War II at a factory in Bremen, Germany. For its production, waste material was used - such as planer shavings, offcuts or sawdust - hammer-milled into chips and bound together with a phenolic resin. Hammer-milling involves smashing material into smaller and smaller pieces until they can pass through a screen. Most other early particleboard manufacturers used similar processes, though often with slightly different resins.
It was found that better strength, appearance and resin economy could be achieved by using more uniform, manufactured chips. Producers began processing solid birch, beech, alder, pine and spruce into consistent chips and flakes; these finer layers were then placed on the outside of the board, with its core composed of coarser, cheaper chips. This type of board is known as three-layer particleboard.
More recently, graded-density particleboard has also evolved. It contains particles that gradually become smaller as they get closer to the surface.
Particleboard or chipboard is manufactured by mixing wood particles or flakes together with a resin and forming the mixture into a sheet. The raw material to be used for the particles is fed into a disc chipper with between four and sixteen radially arranged blades (the chips from disk chippers are more uniform in shape and size than from other types of wood chipper). The particles are then dried, after which any oversized or undersized particles are screened out.
Resin is then sprayed through nozzles onto the particles. There are several types of resins that are commonly used. Amino-formaldehyde based resins are the best performing when considering cost and ease of use. Urea-melamine resins are used to offer water resistance, with increased melamine offering enhanced resistance. Phenol formaldehyde resin is typically used where the panel will be used in external applications, due to the increased water resistance offered by phenolic resins and also the colour of the resin, which results in a darker panel. Melamine-urea-phenol formaldehyde resins exist as a compromise. To enhance the panel properties even further, resorcinol resins, typically mixed with phenolic resins, can be used, but this is usually done with plywood for marine applications and only rarely in panel production.
Panel production involves various other chemicals—including wax, dyes, wetting agents, release agents—to make the final product water resistant, fireproof, insect proof, or to give it some other quality.
Once the resin has been mixed with the particles, the liquid mixture is made into a sheet. A weighing device notes the weight of flakes, and they are distributed into position by rotating rakes. In graded-density particleboard, the flakes are spread by an air jet that throws finer particles further than coarse ones. Two such jets, reversed, allow the particles to build up from fine to coarse and back to fine.
The sheets formed are then cold-compressed to reduce their thickness and make them easier to transport. Later, they are compressed again, under pressures between 2 and 3 megapascals (290 and 440 psi) and temperatures between 140 and 220 °C (284 and 428 °F). This process sets and hardens the glue. All aspects of this entire process must be carefully controlled to ensure the correct size, density and consistency of the board.
The boards are then cooled, trimmed and sanded. They can then be sold as raw board or surface improved through the addition of a wood veneer or laminate surface.
Particle board has had an enormous influence on furniture design. In the early 1950s, particle board kitchens started to come into use in furniture construction but, in many cases, it remained more expensive than solid wood. A particle board kitchen was only available to the very wealthy. Once the technology was more developed, particle board became cheaper.
Large companies such as IKEA and Fantastic Furniture base their strategies around providing furniture at a low price; for example, IKEA’s stated mission is to “create well-designed home furniture at prices so low that as many people as possible will be able to afford it”. They do this by using the least expensive materials possible, as do most other major furniture providers. In almost all cases, this means particle board or MDF or similar. However, manufacturers, in order to maintain a reputation for quality at low cost, may use higher grades of particle board, e.g., higher density particle board, thicker particle board, or particle board using higher-quality resins. One may note the amount of sag in a shelf of a given width in order to gauge the difference in quality.
In general the much lower cost of sheet goods (particle board, medium density fiberboard, and other engineered wood products) has helped to displace solid wood from many cabinetry applications.
Safety concerns are twofold. One is the fine dust released when particleboard is machined (e.g., sawing or routing); occupational exposure limits exist in many countries in recognition of the hazard posed by wood dust. The other concern is the release of formaldehyde. In 1984, concerns about the initial indoor level of formaldehyde led the United States Department of Housing and Urban Development to set standards for construction of manufactured homes. This, however, was not solely because of the large amounts of pressed wood products that manufactured homes contain but also because of other building materials such as urea-formaldehyde foam insulation. Formaldehyde is classified by the WHO as a known human carcinogen.
- Engineered wood
- Glued laminated timber
- Melamine resin, the substance used to glue together particleboard
- Medium-density fiberboard
- Oriented strand board
- Pressed wood
- "Wood dust hazards" (pdf). UK HSE.
- "Formaldehyde Factsheet" (webpage). Illinois Department of Public Health.
- IARC Monographs on the Evaluation of Carcinogenic Risks to Humans, Volume 88 (2006): Formaldehyde, 2-Butoxyethanol and 1-tert-Butoxypropan-2-ol (pdf, html), WHO Press, 2006 (English)
ScienceShot: CT Scans for the Oceans
For decades, fishermen have used underwater listening devices to find schools of fish, and sonar operators have bounced sound waves off of enemy submarines to reveal their presence. Both techniques work because sound travels easily through water. Now researchers may have found a way to take undersea listening to the next level, using the ambient noise in the ocean. The technique, called acoustic illumination, is similar to computed tomography, in which medical images are derived from the way bone and tissue distort x-rays passing through the body. Today in Geophysical Research Letters, researchers describe how they installed lines of underwater microphones at two different locations in the Pacific Ocean. Then the scientists identified and isolated ambient underwater sounds and tracked them back to their sources by computing the differences in time it took for the sounds to reach the two arrays. Using the method, scientists could track underwater temperature changes for wide swaths of ocean—because sound's speed depends on water temperature—or they could record the changing migration patterns of fish, whales, and other marine life.
Intervals are the raw material from which melodies are created, and provide the building blocks of harmony. They are the musician's tool to control feeling and tell a story.
This section of ten web pages explores intervals from the very basics of vibration to the terminology used in music theory. A general reference for these topics is W. A. Mathieu's Harmonic Experience ([Mathieu 1997]).
What is Sound?
Sound is a fluctuation in the air pressure — an area where air particles are more densely packed or more loosely packed than the surrounding air.
Unlike the pressure changes brought about by weather (e.g. “a low-pressure front bringing in moisture in the early evening …”), sound involves local, relatively rapid change in air pressure.
Because air is a gas, air particles are free to move about and bump into neighboring air particles.
Local changes in air pressure tend to spread out through the air with the character of a wave motion that is triggered as the air particles interact with their neighbors ([Everest 2001]).
This domino effect tends to move through the air at the speed of sound — about 760 miles per hour or one mile every five seconds at typical conditions ([Dean 1979]).
We hear sounds because our ears respond to changes in air pressure.
If the change in air pressure oscillates — high pressure / low pressure / high pressure / low pressure — our ears, nervous system, and brain convert those changes in air pressure into a perception of sound.
If you strike a drum head, it vibrates and pushes the air into waves of high pressure (more tightly packed air molecules) and low pressure (less tightly packed air molecules). Those waves of alternating pressure are called air pressure waves. They disturb the neighboring air and radiate out from the source, gradually losing strength and definition. When they reach our ears, they are perceived as sound.
So air molecules do not travel from the instrument to the ear — we would feel that as wind. Each molecule vibrates back and forth within a limited region. This understanding came relatively recently: Jean Baptiste Joseph Fourier (1768–1830) came to understand this principle and then mapped out mathematically the frequencies and sine waves involved in sound ([Fourier 1888]).
In the example above, the vibrations of a drum head die out quickly and we perceive a brief sound. If the sound source continues to produce air pressure waves, we perceive a long sound. And if the oscillation happens at a steady rate, we perceive a steady tone or pitch or note.
The picture above is often represented as a graph of the changing air pressure over time:
The blue line represents how the air pressure changes over time. The distance in time between two neighboring troughs in the graph (the low-pressure points) or two neighboring peaks in the graph (the high-pressure points) is called one cycle. If the difference in pressure between the low- and high-pressure points is greater, the blue line would appear taller and we would perceive a louder sound.
When people say that sound is vibrational energy, one aspect of what they are talking about is the energy that is transferred from the drum head to your ear, in waves of air pressure.
It's easy to see how the head of a bass drum shown above causes air pressure waves. As the drum head vibrates back and forth, the air molecules are pushed and pulled into areas of higher pressure and lower pressure.
The string on a stringed instrument has a similar effect on the air. A guitar string vibrates when plucked by a finger or a pick, a piano string vibrates when struck by a hammer inside the piano, and a violin string continuously vibrates when vibrated by the coarse hairs of a bow drawn across it.
But what about a flute?
How flutes cause air pressure waves is best shown by these moving images from Luchtwervels in een blokfluit «Air Vortices in a Recorder» ([Hirschberg 1999]). They are both images of how the air coming out of the flue crosses the sound hole and hits the splitting edge of a recorder:
Airstream at the sound hole of a recorder
The left image shows the behavior of the airstream coming out of the flue at the onset of a tone - what would be the attack at the start of a note. The air initially flows up and away without any vibration or oscillating pattern.
The right image shows what happens a bit later, after an oscillation has been established. This oscillation happens because of the specific shape of all the parts of the flute, in particular the shape of the flue, the sound hole, the splitting edge, and the flute's sound chamber.
The caption in [Hirschberg 1999] for the right image (translated from Dutch, thanks to Google Translate) says:
Oscillations of the air jet in the mouth of a recorder with a fundamental frequency of 513 Hz.
The core gap is 1 mm, the distance between the core and the output gap of the labium is 4 mm.
The visualization is obtained using the so-called Schlieren technique: the whistle blows CO2 and creates a contrast in the refractive index.
Presumably, the “core” referred to is the height of the flue.
So the air pressure wave is created in flutes by an oscillation of air above and below the splitting edge.
The number of full cycles of oscillation (corresponding to the number of peaks in the graph of the vibration) that happen every second is typically called the frequency. Frequency is usually measured in "cycles per second" or "Hertz" (abbreviated "Hz"), named after the German physicist Heinrich Hertz. So if there are 100 peaks in air pressure each second, the sound is said to have a frequency of 100 Hz.
Our ears are designed to hear sounds with frequencies of between roughly 25 and 15,000 Hertz.
Instruments that produce air pressure waves that have a steady frequency are said to be "pitched instruments", because the sound we hear has a recognizable tone or pitch. All flutes are pitched instruments.
It's common to hear physicists talk about frequencies in terms of Hertz, but musicians tend to use a system of musical note names instead. The musical notes better represent how we hear music, but they also make it easy to forget the vibrational basis of our music. When something in music says "A=440", it's shorthand for saying that the musical note named "A" corresponds to 440 Hertz.
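As a rough illustration of that shorthand (a sketch assuming the common twelve-tone equal temperament, which is not spelled out in the text above), the frequency of any note can be computed from its distance in semitones from the reference A at 440 Hz:

```python
import math

A4_FREQ = 440.0  # the "A=440" reference mentioned above

def note_frequency(semitones):
    """Frequency of the note this many semitones above (or below) A4,
    assuming twelve-tone equal temperament."""
    return A4_FREQ * 2 ** (semitones / 12)

def semitones_from_a4(freq_hz):
    """Rounded number of semitones a frequency lies above A4."""
    return round(12 * math.log2(freq_hz / A4_FREQ))

print(note_frequency(3))       # about 523.25 Hz, the C above A4
print(semitones_from_a4(513))  # 3, so the 513 Hz recorder tone above is close to that C
```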
Q Why is it hard to predict which flu vaccine will be most effective each year?
— Megan Costello, communications specialist at Morgridge Institute for Research
A Thomas Friedrich, associate professor of pathobiological sciences at University of Wisconsin-Madison:
A vaccine is meant to train your immune system to recognize a virus so it can fight off that virus if you ever come in contact with it.
If you get some diseases, like smallpox or polio, and survive, you are immune to those viruses for the rest of your life. The aim of a vaccine is to give you that immunity without giving you the disease.
The process is pretty straightforward for smallpox and polio because the viruses that cause those diseases don’t change very much over time.
Other viruses, like influenza or HIV, mutate rapidly. The immune response from a vaccine needs to cope with this level of variability in the virus.
At the moment, flu vaccines give you an immune response that's effective but highly specific. Predicting the right strains to arm against is the key, because the immune response is not good at dealing with the changes in virus structure that influenza can acquire through mutation.
There’s a process to design flu vaccines that starts 18 months ahead of the flu season. Scientists get together in Geneva and try to decide: Of all flu strains infecting people today in the world, which ones are likely to be most common 18 months from now when the flu season happens?
Sometimes we predict right, and sometimes we predict wrong.
Viral evolution is difficult to understand.
We’re usually pretty good at explaining why a virus evolved in the past, but that doesn’t mean we can take a situation today, play it forward 18 months and predict with 100 percent certainty how virus evolution is going to happen. |
Singular & Plural Pronouns
Singular pronouns are simply pronouns that refer to singular nouns. But it can get a little tricky when you think about the fact that singular pronouns can be personal pronouns, which, as you have learned, refer to a person or thing. They will also be definite or indefinite, which means they can refer to someone or something specific (definite) or not (indefinite).
So words like he and she are singular, personal, definite pronouns, and words like anybody and anyone are singular, indefinite pronouns.
Plural pronouns are simply pronouns that refer to plural nouns. But, like singular pronouns, plural pronouns can also be personal and definite or indefinite, and they refer to plural nouns or groups of nouns.
For example, words like they and we are plural, personal, definite pronouns, and words like many and both are plural, indefinite pronouns.
TIP! Don’t worry if you feel a little confused about the fact that singular and plural pronouns can also be personal pronouns. They are definite or indefinite as well. However, most likely, you won’t find yourself in situations where you have to label pronoun types. What you do need to know is that, when you choose a singular or plural pronoun, you have to make sure it agrees with the noun you’re replacing. So, if you’re replacing a singular noun, be sure to use a singular pronoun. |
Knowing how to get rid of negative exponents is key to fully simplifying an expression. Get some practice working with negative exponents by watching this tutorial!
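A quick worked case of that rule (not from the tutorial itself): $x^{-3} = \frac{1}{x^{3}}$, so $2^{-3} = \frac{1}{2^{3}} = \frac{1}{8}$.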
Got a fraction raised to a power? Learn how to split that exponent and put it in the numerator and denominator of your fraction using the power of a quotient rule. This tutorial shows you how!
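For instance (a worked example, not from the video): $\left(\frac{a}{b}\right)^{n} = \frac{a^{n}}{b^{n}}$, so $\left(\frac{2}{3}\right)^{4} = \frac{2^{4}}{3^{4}} = \frac{16}{81}$.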
Did you know that another word for 'exponent' is 'power'? To learn the meaning of these words and to see some special cases involving exponents, check out this tutorial!
Taking the square root of a perfect square always gives you an integer. This tutorial shows you how to take the square root of 36. When you finish watching this tutorial, try taking the square root of other perfect squares like 4, 9, 25, and 144.
Trying to take the square root of a fraction? This tutorial shows you how to take the square root of a fraction involving perfect squares. Check it out!
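A quick worked case (not from the video): $\sqrt{\frac{9}{25}} = \frac{\sqrt{9}}{\sqrt{25}} = \frac{3}{5}$.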
Remember that addition and subtraction are opposite operations and multiplication and division are opposite operations? Turns out, squaring and taking the square root are opposite operations too! See why in this tutorial!
Dealing with a word problem involving really big (or really small) numbers? This one has both! In this tutorial, you'll see how to use scientific notation to solve a word problem.
Multiplying together two really large numbers? What about two really small numbers? How about one of each? Scientific notation to the rescue! Watch this tutorial and learn how to multiply using scientific notation.
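For example (a worked case, not taken from the video): $(3 \times 10^{5}) \times (2 \times 10^{-3}) = (3 \times 2) \times 10^{5 + (-3)} = 6 \times 10^{2}$.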
Trying to convert a really large or really small number to scientific notation? Watch this tutorial and you'll be a pro in no time!
Trying to convert a number in scientific notation to decimal notation? Watch this tutorial and you'll be a pro in no time!
Working with exponents can be lots of fun, as long as you understand how they work. In this tutorial you'll see how exponents add when you multiply the same number raised to different exponents!
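A quick worked case of the rule (not from the video): $x^{a} \cdot x^{b} = x^{a+b}$, so $2^{3} \cdot 2^{4} = 2^{7} = 128$.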
Sometimes you'll see a number with an exponent raised to another exponent, and the first time you see it, you probably think it's a typo! But it's not a typo, it's a real thing, and there's a really nice trick for making it simpler that you'll see in the video.
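The trick, in symbols (a worked case, not from the video): $(x^{a})^{b} = x^{ab}$, so $(2^{3})^{2} = 2^{6} = 64$.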
There's a great trick for raising a product of two numbers to an exponent, and this tutorial shows you exactly how that trick works.
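In symbols (a worked case, not from the video): $(xy)^{n} = x^{n} y^{n}$, so $(2 \cdot 5)^{3} = 2^{3} \cdot 5^{3} = 8 \cdot 125 = 1000$.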
Working with exponents can be lots of fun, as long as you understand how they work. In this tutorial you'll see how exponents subtract when you divide the same number raised to different exponents!
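A quick worked case (not from the video): $\frac{x^{a}}{x^{b}} = x^{a-b}$, so $\frac{2^{5}}{2^{2}} = 2^{3} = 8$.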
A lot of people get a little uneasy when they see 0, especially when that 0 is the exponent in some expression. After all, there seem to be so many rules about 0, and so many special cases where you're not allowed to do something. Well it turns out that a zero in the exponent is one of the best things that you can have, because it makes the expression really easy to figure out. Watch this tutorial, and next time you see 0 in the exponent, you'll know exactly what to do!
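In symbols (a worked case, not from the video): $x^{0} = 1$ for any nonzero $x$, so $7^{0} = 1$ and $(-3)^{0} = 1$.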
Do you ever panic when you see a negative number in the exponent of some mathematical expression? Well if you do, then panic no more! This tutorial will help you overcome your fear, and will help you understand what negative exponents actually mean :)
Sometimes a number is so big (or so small), that it takes a while to write it all down. Luckily, this number can be written quicker using scientific notation! Watch this tutorial and learn about scientific notation.
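For example (a worked case, not from the video): $4{,}500{,}000 = 4.5 \times 10^{6}$ and $0.00032 = 3.2 \times 10^{-4}$.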
Anytime you square an integer, the result is a perfect square! The numbers 4, 9, 16, and 25 are just a few perfect squares, but there are infinitely more! Check out this tutorial, and then see if you can find some more perfect squares!
Trying to order numbers in scientific notation? This tutorial provides a great example of that! Check it out:
What is the product of powers? Follow along with this tutorial to learn about what the product of powers is and how to use it!
The quotient of powers rule can be very useful when you're simplifying with numbers. Follow along to learn more about this handy rule!
When you have a number raised to a power and then THAT is raised to a power, simplifying things may be easier than you think. Follow along with this tutorial and see!
The power of a product rule can be a very handy tool when you're simplifying an expression. This tutorial introduces you to this rule and shows you how to use it.
The power of a quotient rule is just one of many tools that can help you simplify an expression. Learn more about it with this tutorial. |
Last summer, President Obama announced that the U.S. Department of Agriculture, Energy and Navy would invest up to $510 million in order to spur the biofuels industry and enhance U.S. energy security. As a result of government support through tax breaks and subsidies, both ethanol and biodiesel have been successfully integrated into the U.S. energy market. However, while biofuels are generally perceived as more “sustainable” than regular gasoline, controversy remains over the environmental costs of their production, as well as their impact on food prices.
Ethanol stored in Brazil: Raízen, the featured facility, will produce 2 billion litres of ethanol a year (Photo via Flickr, by Shell).
At the most basic level, biofuels are simply material from living or recently living organisms that is converted into fuel. Ethanol is derived from the starches and sugars in plants, and biodiesel is derived from sources such as animal fats, vegetable oil, and cooking grease. To reduce emissions of carbon monoxide and other pollutants during fuel burning, ethanol is typically blended with gasoline, and biodiesel is blended with diesel or used in its pure form.
In theory, the carbon emissions released from the burning of biofuels are offset during feedstock cultivation, when the plants photosynthesize carbon dioxide and store it in their biomass. However, the many other phases of the biofuel life cycle—including the farming and refining processes, and the transport of the fuels from producer to consumer—may result in a net increase in greenhouse gas emissions.
Growing ethanol feedstocks such as corn and sugar cane requires huge amounts of land, increasing the global demand on already limited farmland. To boost agricultural productivity, growers apply vast quantities of fertilizer during farming, which releases nitrous oxide, a powerful greenhouse gas. Corn, in particular, generally requires more fertilizer than most other biofuel feedstocks.
In light of these and other challenges, including rising food prices, rampant deforestation, and widespread water shortages, biofuel does not appear to be the solution to U.S. energy needs. As ethanol production increases, and as more corn is required for fuel production, it is clear that biofuels in their current form are not a sustainable alternative to fossil fuels. They will only make the country dependent on corn, as it is now dependent on oil.
To address the shortcomings of current biofuel production, scientists are developing new techniques and feedstocks to enhance sustainable production. Switchgrass, a North American perennial tallgrass, sequesters far more carbon dioxide than corn and other row crops, and is drought tolerant, making it a promising alternative feedstock. It requires little fertilization and can grow well on marginal land. Moreover, switchgrass cultivation would not compete with food cultivation, although some farmers may eventually switch to growing switchgrass instead of food crops if it were profitable to do so.
A specialist in algae science researches biofuel production in Los Alamos National Laboratory (Photo via Flickr, by LANL).
The use of switchgrass for ethanol production is becoming increasingly viable. Until recently, scientists had struggled to release the polysaccharides from the plant’s tough lignin. To reduce these complex carbohydrates into fermentable sugars, researchers introduced a corn gene into switchgrass’s DNA, which increases its starch content, making it easier to extract the sugars.
While this discovery makes switchgrass an appealing alternative to corn, more research is needed before this grassy feedstock will be widely adopted. Switchgrass has been planted in a monoculture for only a few decades, so the long-term effects on land use and carbon sequestration are uncertain. In addition, an energy-using pre-treatment is necessary to efficiently release the polysaccharides. Despite these early uncertainties, switchgrass offers a potentially cheap and efficient way to produce clean fuel for the future.
Another promising biofuel contender is algae, which offers benefits similar to those of switchgrass in terms of both carbon sequestration and ease of production. Although it is too early to know whether biofuel is the sustainable solution to U.S. gasoline demand, the government must support continued scientific and economic research into these and other approaches to sustainable biofuel production.
(Written by Alison Singer, Edited by Antonia Sohns) |
With age, often, comes hearing loss. There are two main reasons for this. First, as a person ages, everything becomes less flexible. That includes the eardrum, a membrane that works by vibrating in response to sound—as the eardrum becomes stiffer and less able to vibrate, it becomes less able to detect sound and transmit it to the brain.
The other reason is a life of noises. Loud noise damages the ears. The damage may be minor, but it is also lasting, and it accumulates. After a lifetime of listening to loud noises, a person's hearing will start to deteriorate. In fact, more and more people are being diagnosed with hearing loss at younger ages than before. This is partly due to more sensitive and reliable tests, which people are undergoing earlier in life, but it's also because of the loudness of modern living, particularly headphones.
Often, however, hearing loss in teenagers is missed. Because it is thought of as a condition, if not the fate, of old people, hearing loss is not always even suspected in young people, despite the fact that more than half of all people with some form of hearing difficulty developed it in childhood. Compounding this, many adolescents have difficulty noticing or recognizing hearing loss and adequately conveying it to medical professionals. In principle, young patients are asked questions designed to identify those who may need more objective audiological testing. However, the questions asked are not always useful for patients in that age range, leading to underdiagnosis.
Moreover, noise-induced hearing loss can start as young as age 20. This occurs when the "hair cells" within the ear are damaged by exposure to loud sounds. The hair cells, once destroyed, do not recover, leading to a permanent reduction in auditory acuity. This doesn't merely damage the ability to hear, it actually changes the way the brain processes speech. Not only do sounds not get through as efficiently, when they do get through, they are not processed correctly. However, new medical techniques may hold out hope for recovering hearing ability lost to noise. Researchers have developed a clear picture of the structure of the cells supporting these hair cells, and had some success in repairing them in experimental animals. |
Every material absorbs and reflects some solar energy. However, some materials absorb far more than they reflect, and vice versa. The amount of solar energy a material will absorb or reflect depends on a number of physical properties. Dense materials tend to absorb more solar energy than less dense materials. Color and coating also affect the amount of solar energy that an object can absorb or reflect.
As the density of a material increases, its ability to absorb solar energy typically also increases. For example, dense materials, such as adobe, concrete and brick, absorb a large amount of solar energy. Less dense materials, such as Styrofoam and some wood, do not absorb as much solar energy. These properties may vary according to the coating of the material. For example, if a dense material such as concrete were coated with a highly reflective coating, it would not absorb as much energy.
How Does Color Affect Absorption and Reflection?
Solar energy reaches us at different wavelengths. The different wavelengths associated with visible light make up the different colors of the rainbow. When we see a material's color, we are seeing the reflection of that wavelength of light. For example, a blue material reflects blue light. White materials reflect a large amount of visible light. Black materials absorb a large amount of visible light. Therefore, darker materials will absorb more solar energy than lighter materials.
Where Does the Energy Go?
When a material absorbs solar energy, the energy is transferred to the atoms in that material. Eventually, this energy is released as heat. Depending on the properties of the material, this process can occur at different speeds and intensities. For example, concrete will release heat slowly, whereas a piece of metal might radiate heat quickly after absorbing it. The difference in heat emission is related to the difference in thermal conductivity of the materials: metal conducts heat more readily than concrete, so heat spreads through metal more quickly than it does through concrete.
How Can We Use This Knowledge?
We can use knowledge of material properties to construct efficient devices, buildings, and other technology. For example, material properties related to heat emission are extremely useful in building passive solar structures. In a passive solar building, it is important to use materials that will store the day's solar energy and slowly release it overnight. In building design, this property is called a material's "thermal mass."
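A rough worked example of that idea (illustrative numbers only, using typical textbook values rather than figures from the text): the heat a wall stores for a given temperature rise follows from its mass and specific heat capacity, $Q = m c \Delta T$. For roughly 100 kg of concrete with $c \approx 0.9\ \mathrm{kJ/(kg \cdot K)}$ warmed by 10 K over a sunny day, $Q \approx 100 \times 0.9 \times 10 \approx 900\ \mathrm{kJ}$ of stored energy, which the wall's low thermal conductivity then releases slowly overnight.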
People often think of cystic fibrosis as affecting only the respiratory organs of the body, because this is where cystic fibrosis patients experience the most severe symptoms. The common signs of cystic fibrosis in the pulmonary and respiratory system are well known: coughing, wheezing, frequent bouts of bronchitis and pneumonia, sinusitis, and fleshy growths in the nose. Thick, sticky mucus that is discolored when it is brought up is a telltale sign of cystic fibrosis.
Cystic fibrosis often shows up in other parts of the body as well, where it is not as easily recognized. The disease frequently affects the digestive organs. Common signs of digestive involvement are stomach cramping, pain, and excess gas, along with diarrhea and stools that look greasy and smell foul. In worse cases, a patient may experience a bowel blockage or severe vomiting.
The reproductive organs may also be affected. If a child has delayed onset of puberty along with other signs of the disease, this could point to cystic fibrosis. Most males with cystic fibrosis are sterile. Women with the disease may have difficulty becoming pregnant and should be aware of the risk of passing on the mutated gene that causes cystic fibrosis.
Another part of the body that can be affected by cystic fibrosis is the blood. Anemia, a decrease in the number or composition of blood cells, may occur. Bleeding disorders, such as an inability to clot, are also associated with cystic fibrosis.
Bone and joint problems are a little-known sign of cystic fibrosis. We normally think of respiratory and digestive problems when a person is diagnosed with this disease, so it might be surprising to learn that CF can contribute to arthritis, stunted growth, severe pain in the joints and bones, and cases of osteoporosis.
Another unusual sign of cystic fibrosis is an enlargement of the fingertips and toes; this swelling of the digits is called clubbing. The symptom is not rare, but it is often overlooked when searching for causes of respiratory, digestive, and reproductive problems. Other miscellaneous signs of cystic fibrosis are an inability to gain weight despite a normal appetite and diet, salty skin and sweat, and liver problems, all of which may be overlooked in a diagnosis. The disease can be mistaken for other problems with similar symptoms, so genetic testing is needed to confirm a diagnosis of cystic fibrosis. Knowing your family history may help your doctor decide what is causing your distress.
The work was reported June 8 in the advance online issue of The EMBO Journal, a publication of the European Molecular Biology Organization (EMBO).
Tumor cells that grow aggressively often have an irregular number of chromosomes, the structures in cells that carry genetic information. The normal number of chromosomes in a human cell is 46, or 23 pairs. Aggressive tumor cells often have fewer or more than 23 pairs of chromosomes, a condition called aneuploidy.
To date it has not been clear how tumor cells become aneuploid.
"Checkpoint proteins" within cells work to prevent cells from dividing with an abnormal number of chromosomes, but scientists have been puzzled by evidence that aneuploidy can result even when these proteins appear to be normal.
What MIT researchers have discovered is a reason these checkpoint proteins may be unable to sense the defective cells, which tend to have very subtle errors in them. (These subtle errors are believed to be the cause of aneuploidy and the rapid growth of tumors.)
Before cells divide, individual chromosomes in each pair of chromosomes must attach to a set of tiny structures called microtubules. If they attach correctly, the checkpoint proteins give them the go-ahead to divide. If they don't, the checkpoint proteins are supposed to stop them from dividing.
"The checkpoint proteins are like referees in a tug-of-war contest," said Viji Draviam, a research scientist in MIT's Department of Biology and lead author of the paper. "They make sure that all chromosomes are lined up in the right places before the cell is allowed to divide."
Scientists have known about the function of checkpoint proteins for at least 20 years, and they have suspected that mutations in checkpoint proteins cause the irregular number of chromosomes in the aneuploid cells. But they have been perplexed by the infrequent occurrence of mutations in aneuploid tumors.
"It's puzzling that the suspected culprits - the aneuploidy-inducing checkpoint mutations - are rarely found at the scene of the crime, in the aneuploid tumors," Draviam said.
That lingering question prompted Draviam and her colleagues to study how two other key molecules - a known tumor suppressor protein called APC and its partner protein EB1 - work together to assure that cells divide normally.
They discovered that if they removed either protein from a cell or if they interrupted the way the proteins work together, the cell would become aneuploid. In other words, the checkpoint proteins need to sense that the APC and EB1 proteins both are present for normal cell division to take place.
"This is important because it is the first demonstration that interrupting the normal function of these proteins will cause the cell to become aneuploid," Draviam said. "Our research sheds light on what could go wrong to cause an irregular number of chromosomes in cells even when the checkpoint proteins appear to be functioning properly."
Draviam's co-authors are graduate students Irina Shapiro and Bree Aldridge and MIT Professor of Biology and Biological Engineering Peter Sorger.
The research was funded by the National Institutes of Health.
Written by: Beth Davies-Stofka
Christian leaders explored the core beliefs of Christianity in accordance with their intellectual training, launching theological controversies over the use of philosophical terms such as person and substance to define and understand the nature of God. The Christological controversies over the nature of the incarnation of God in Jesus and the Trinitarian controversy over the relationship of God, Jesus, and the Holy Spirit were particularly sharp and of continuing significance. Both the conclusions formulated and some of the controversies associated with them persist to the present day.
The politics of the Roman Empire had two significant and lasting effects on Christianity. In the first case, the fledgling Christian communities suffered official persecution by the Romans for almost three centuries, leaving a lasting legacy of martyrdom. In the second case, early Christian communities responded to questions of communal organization by adopting the hierarchical model of Roman political organization. This is most apparent in the roles and relationships of clergy, where bishops have authority over priests, and archbishops or popes have authority over everyone.
1. How was Christianity linked to a Jewish identity?
2. Why was Jesus revered amongst early Christian followers?
3. How did the Roman Empire influence Christian communities? |
Human beings have struggled for centuries to gain equal rights. Western civilization has been characterized by the hegemonic domination by white males. This power structure has frequently and historically excluded women and minorities. In the United States, despite the foundational creed that “all men are created equal,” it has taken centuries of struggle to gain equal rights.
Equal rights, or equality before the law, means that all individuals are subject to the same laws of justice. People must be treated equally without regard to race, gender, national origin, skin color, religion, or disability. African Americans were among the first groups to be granted equal rights in the United States, beginning with the 13th Amendment, which outlawed slavery, and the 14th Amendment, which guaranteed equal protection under the law. Achieving equal rights in reality took another century of struggle. Women as well, although granted the right to vote in 1920, continue to work towards equal rights, most recently through pay equity.
One of the more visible equal rights struggles in the United States is the issue of marriage equality. Homosexuals are one of the last groups to face structural discrimination, and the movement towards being allowed to legally marry a person of the same sex is a profound and current equal rights struggle in the United States, one that is being waged both in the courts and in the popular opinion of the American public. |
Health is the level of functional or metabolic efficiency of a living organism. In humans it is the ability of individuals or communities to adapt and self-manage when facing physical, mental or social challenges. The World Health Organization (WHO) defined health in its broader sense in its 1948 constitution as a state of complete physical, mental, and social well-being and not just the absence of disease or illness. This definition has been subject to controversy, in particular as lacking operational value and because of the problem created by use of the word "complete". Other definitions have been proposed, among them a recent one that correlates health with personal satisfaction. Classification systems such as the WHO Family of International Classifications, including the International Classification of Functioning, Disability and Health (ICF) and the International Classification of Diseases (ICD), are commonly used to define and measure the components of health.
Generally, the context in which an individual lives is of great importance for both health status and quality of life. It is increasingly recognized that health is maintained and improved not only through the advancement and application of health science, but also through the efforts and intelligent lifestyle choices of the individual and society. According to the World Health Organization, the main determinants of health include the social and economic environment, the physical environment, and the person's individual characteristics and behaviors.
- Maintaining health
Achieving and maintaining health is an ongoing process, shaped by both the evolution of healthcare information and practices as well as private strategies and organized interventions for staying healthy.
A main way to maintain personal health is to eat a healthy diet. A healthy diet includes a variety of plant-based and animal-based foods that provide nutrients to your body. Such nutrients give you energy and keep your body running. Nutrients help build and strengthen bones, muscles, and tendons and also regulate body processes (e.g., blood pressure).
Physical exercise enhances or maintains physical fitness and overall health and wellness. It strengthens muscles and improves the cardiovascular system.
Sleep is an essential component of maintaining health. In children, sleep is also vital for growth and development. Ongoing sleep deprivation has been linked to an increased risk for several chronic health problems. In addition, sleep deprivation has been shown to correlate with both increased susceptibility to illness and slower healing times from illness. In one study, people with chronic insufficient sleep, defined as six hours of sleep a night or fewer, were found to be four times more likely to catch a cold than those who reported sleeping seven hours or more a night. Because of the role of sleep in regulating metabolism, insufficient sleep can also play a role in weight gain or, conversely, in impeding weight loss. Additionally, in 2007 the International Agency for Research on Cancer, the cancer research agency of the World Health Organization, declared that shift work involving circadian disruption is probably carcinogenic to humans, speaking to the dangers of long-term nighttime work and its interference with sleep.
King and Queen
At least one king and queen are at the center of every colony. The queen's sole purpose is to reproduce, and some queens live for as long as 30 years. A queen will lay 5,000 to 10,000 eggs annually.
Queens can lay thousands of eggs every year. Eggs hatch into nymphs.
While in the nymph stage, termites diverge into different castes: workers, soldiers, reproductives, and supplementary reproductives.
Workers are blind, wingless termites that maintain the colony, build and repair the nest and tubes, forage for food, and care for the other termites. They are the most numerous caste and the most likely to be found in infested wood. A mature colony can contain from 20,000 to 5 million workers, averaging 300,000.
Soldiers are sterile, wingless, and blind. Their sole function is to defend the colony.
These termites will eventually leave the colony as adult swarmers. After swarming, they shed their wings and pair up; each male-female pair attempts to start a new colony.
These termites help increase the population of established colonies and can serve as replacements for the king or queen if they should die. |
Anaphylaxis is a rare, life-threatening hypersensitivity reaction characterized by contraction of smooth muscle and dilation of capillaries due to the release of pharmacologically active substances. It occurs within a few minutes after exposure to a trigger, and the clinical response depends on the tissue that is affected. Examples of local anaphylaxis include asthma, hay fever, and edema of the tissues of the throat. Anaphylactic shock is often a severe, and sometimes fatal, systemic reaction in a susceptible individual, characterized especially by respiratory symptoms, fainting, itching, and hives. For reasons that are not fully understood, common allergies such as hay fever, allergic asthma, and food allergy have all become more common, and researchers have definite ideas about why this might be so.
True anaphylaxis is caused by immunoglobulin E (IgE)-mediated release of mediators from mast cells and basophils. The classic form, described in 1902, involves prior sensitization to an allergen with later re-exposure, producing symptoms via an immunologic mechanism. An anaphylactoid reaction produces a very similar clinical syndrome but is not immune-mediated. The annual incidence of anaphylactic reactions is about 30 per 100,000 persons, and individuals with asthma, eczema, or hay fever are at greater relative risk of experiencing anaphylaxis.
Anaphylactoid (anaphylaxis-like) or pseudoallergic reactions are similar to anaphylaxis. However, they are not mediated by antigen-antibody interaction; they result from substances acting directly on mast cells and basophils, causing mediator release, or acting on tissues, such as the anaphylatoxins of the complement cascade. Idiopathic (nonallergic) anaphylaxis occurs spontaneously, without an identifiable allergen. Munchausen's anaphylaxis is the purposeful self-induction of true anaphylaxis. All forms of anaphylaxis present the same way and require the same rigorous diagnostic and therapeutic intervention.
Anaphylaxis is a life-threatening allergic reaction that affects millions of Americans every year. It can be caused by a variety of allergens, with the most common being foods, medications, insect venom, and latex. If untreated, it can result in shock, respiratory and cardiac failure, and death. The immediate treatment is adrenaline (epinephrine) to counteract the effects, usually given as an injection. Anaphylaxis may occur after ingestion, inhalation, skin contact, or injection of a trigger substance. Every patient prone to anaphylaxis should have an "allergy action plan" on file at school, home, or the office to aid family members, teachers, and co-workers in case of an anaphylactic emergency. The Asthma and Allergy Foundation of America provides a free "plan" form anyone can print from their site. Action plans are considered essential to quality emergency care.
Causes of Anaphylaxis
The most common triggers, as noted above, are foods, medications, insect venom, and latex.
Symptoms of Anaphylaxis
Symptoms of anaphylaxis can vary from mild to severe and are potentially deadly. They may occur alone or in any combination and can include hives, itching, swelling of the throat, difficulty breathing, a drop in blood pressure, and fainting.
Treatment and Prevention of Anaphylaxis
The treatment of anaphylaxis should follow established principles for emergency resuscitation. Anaphylaxis has a highly variable presentation, and treatment must be individualized for a patient's particular symptoms and their severity. Treatment recommendations are based on clinical experience, understanding pathologic mechanisms, and the known action of various drugs. Rapid therapy is of utmost importance.
At the first sign of anaphylaxis the patient should be treated with epinephrine. Next, the clinician should determine whether the patient is dyspneic or hypotensive. Airway patency must be assessed, and if the patient has suffered cardiopulmonary arrest, basic cardiopulmonary resuscitation must be instituted immediately. If shock is present or impending, the legs should be elevated and intravenous fluids administered. Epinephrine is the most important single agent in the treatment of anaphylaxis, and its delay in or failure to be administered is more problematic than its administration.
Treatment of anaphylaxis consists of both short- and long-term management . The immediate goal is to maintain an adequate airway and support the blood pressure. Patients having severe reactions should be given oxygen. If they seem to be having trouble breathing, lay them on the ground and tilt their head back. This helps get the tongue out of the way of air flow.
Diagnosis of Anaphylaxis
Because of the profound and dramatic presentation, the diagnosis of anaphylaxis is usually readily apparent. When sudden collapse occurs in the absence of urticaria or angioedema, other diagnoses must be considered, although shock may be the only symptom of Hymenoptera anaphylaxis. These include cardiac arrhythmia, myocardial infarction, other types of shock (hemorrhagic, cardiogenic, endotoxic), severe cold urticaria, aspiration of food or foreign body, insulin reaction, pulmonary embolism, seizure disorder, vasovagal reaction, hyperventilation, globus hystericus, and factitious allergic emergencies. The most common is vasovagal collapse after an injection or a painful stimulation.
In vasovagal collapse, pallor and diaphoresis are common features associated with presyncopal nausea. There is no pruritus or cyanosis. Respiratory difficulty does not occur, the pulse is slow, and the blood pressure can be supported without sympathomimetic agents. Symptoms are almost immediately reversed by recumbency and leg elevation. Hereditary angioedema must be considered when laryngeal edema is accompanied by abdominal pain. This disorder usually has a slower onset, and lacks urticaria and hypotension, and there is often a family history of similar reactions. There is also a relative resistance to epinephrine, but epinephrine may have life-saving value in hereditary angioedema.
Hyperstudio Solar System Review: Technology, Study Skills
Young scholars use Hyperstudio to review important points before taking a test at the end of a study unit on the solar system. This concept could easily be switched to many different topics.
Solar Kit Lesson #9 - Properties of Solar Radiation: Reflection, Transmission, and Absorption
Middle school science stars observe and record data on the solar radiation reflected off or transmitted through various materials. They predict properties for various materials, and test their predictions by touch. This lesson becomes...
Earthquake Education Curriculum
In case of earthquake, go to a safe place before posting on social media. The unit includes 12 lessons, most with hands-on activities or experiments. The first six lessons cover the physical process of earthquakes and volcanoes. Lessons...
"In macular dystrophy, a pigment builds up in cells of the macula. Over a period of time, the substance may damage cells that are crucial for clear vision."
Macular dystrophy is a form of rare, genetic eye disorder that causes loss of vision. Macular dystrophy affects the retina in the back of a person's eye. More specifically, it leads to damage of cells in an area in a person's retina called the, 'macula.' The macula is responsible for central vision. When the macula is damaged, people experience difficulty with seeing straight ahead, making it hard to read, drive, or perform other activities of daily living that require fine, central vision.
In macular dystrophy, a pigment builds up in cells of the macula. Over a period of time, the substance may damage cells that are crucial for clear vision. An affected person's vision often times becomes distorted or blurry. Usually, people with macular dystrophy maintain peripheral vision and are not totally blind.
Macular Dystrophy Types
A number of forms of macular dystrophy have been identified by the medical community. These forms of macular dystrophy include the following:
Stargardt's: Stargardt's is the most common type of macular dystrophy and usually occurs in a person's childhood. A different form of Stargardt's called, 'fundus flavimaculatus,' is usually found in adults. Stargardt's is characterized by formation of pigmented waste cells in the person's retina.
North Carolina Macular Dystrophy: North Carolina macular dystrophy is an extremely rare form of the eye disease identified by a very specific genetic marker. Although it is named for North Carolina family members who have this inherited form of macular dystrophy, the disease has been found in other places around the world.
Vitelliform Macular Dystrophy (VTM): VTM is usually first discovered due to the presence of a large, yellow oval lesion in an egg yolk shape that shows up in the center of the person's macula. A number of genetic mutations of this form of macular dystrophy have been identified, to include Best's disease, which affects children and young people. A different version of the disease also may appear in adults, with macular lesions that vary in both shape and size.
Additional types of macular dystrophy may cause specific degeneration of light-sensitive cells known as, 'cones.' The cones are responsible for color vision and are most concentrated in the macular area of a person's retina. While not technically macular dystrophy, 'Retinitis Pigmentosa,' is an inherited photoreceptor dystrophy that destroys light-sensitive cells in a person's eye.
Two forms of macular dystrophy are most often distinguished by age of onset. A form called 'Best disease' usually appears in childhood and causes varying degrees of vision loss. The second form affects adults, usually in mid-adulthood, and tends to cause vision loss that slowly worsens over time. People with Best disease often have one parent with the condition, who passes the gene on to their child. For adult-onset macular dystrophy it is less clear how the condition is passed from parent to child, and many people with adult-onset macular dystrophy do not have other family members with the condition.
Causes of Macular Dystrophy
Macular dystrophy is caused by a genetic mutation. In some people, doctors have identified two specific genes that are affected. Mutations in the BEST1 gene cause Best disease and at times - adult-onset macular dystrophy. Mutations in the PRPH2 gene cause adult-onset macular dystrophy. In most people with macular dystrophy; however, it remains unclear which gene is affected and the exact cause is not known. It is also not known why mutations in these genes leads to the buildup of pigment in a person's macula. Doctors also do not know why only central vision is affected.
Diagnosing Macular Dystrophy
Symptoms of macular dystrophy can include decreased visual acuity with no clear cause, such as refractive errors or cataracts. If your eye doctor suspects you have macular dystrophy, they might order eye tests that are not part of a regular eye examination in an attempt to reach a definitive diagnosis. For example, a test called fluorescein angiography can detect retinal damage from macular dystrophy.
A test using optical coherence tomography (OCT) can also be performed to analyze eye tissue for the presence of lipofuscin, a yellow-brown pigment found in the retinal pigment epithelium (RPE). Lipofuscin is waste material sloughed off from deteriorating eye tissue. Yet another option is an electroretinographic (ERG) test, which involves placing an electrode on the clear outer surface of your eye, known as the cornea, to measure how well the photoreceptors in your retina respond to light.
Treating Macular Dystrophy
The plain fact is - there is no effective treatment for macular dystrophy at this time. Vision loss usually develops slowly over a period of time. There is ongoing research on macular dystrophy that involves stem cell placement and the potential benefits.
If you have macular dystrophy, you will need to visit a retinal specialist who will assist you with determining the exact nature of the disease. For example; some types are progressive while other types are not. Genetic analysis and counseling might be needed to help you determine the type of macular dystrophy you have and whether the eye condition is likely to be passed on to your children. You may make better decisions concerning family planning if you have an idea of the degree of vision loss associated with the type of macular dystrophy you experience.
A hierarchy (Greek: Ἱεραρχία, derived from ἱερός-hieros, sacred, and ἄρχω-arkho, rule) is a system of ranking and organizing things or people, where each element of the system (except for the top element) is subordinate to a single other element.
The first use of the word "hierarchy" cited by the Oxford English Dictionary was in 1380, when it was used in reference to the three orders of three angels as depicted by Pseudo-Dionysius the Areopagite. Ps.-Dionysius used the word both in reference to the celestial hierarchy and the ecclesiastical hierarchy. His term is derived from the Greek for 'Bishop' (hierarch), and Dionysius is credited with first use of it as an abstract noun. Since hierarchical churches, such as the Roman Catholic and Eastern Orthodox churches, had tables of organization that were "hierarchical" in the modern sense of the word (traditionally with God as the pinnacle of the hierarchy), the term came to refer to similar organizational methods in more general settings.
A hierarchy can link entities either directly or indirectly, and either vertically or horizontally. The only direct links in a hierarchy, insofar as they are hierarchical, are to one's immediate superior or to one of one's subordinates, although a system that is largely hierarchical can also incorporate other organizational patterns. Indirect hierarchical links can extend "vertically" upwards or downwards via multiple links in the same direction. All parts of the hierarchy which are not vertically linked to one another can nevertheless be "horizontally" linked by traveling up the hierarchy to find a common direct or indirect superior, and then down again. This is akin to two co-workers, neither of whom is the other's boss, but both of whose chains of command will eventually meet.
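As a small illustration of that last point, here is a minimal sketch (in Python, using an invented, hypothetical org chart rather than any real organization): if a hierarchy is stored as a map from each person to their immediate superior, the common superior where two co-workers' chains of command meet can be found by walking both chains upward.

```python
# Minimal sketch of a hierarchy stored as "person -> immediate superior".
# All names are hypothetical placeholders.
boss = {
    "alice": "carol",
    "bob": "dave",
    "carol": "erin",
    "dave": "erin",
    "erin": None,  # top element: subordinate to no one
}

def chain_of_command(person):
    """Return the chain from `person` up to the top of the hierarchy."""
    chain = []
    while person is not None:
        chain.append(person)
        person = boss[person]
    return chain

def common_superior(a, b):
    """First element shared by both chains of command (the 'horizontal' link)."""
    ancestors_of_a = set(chain_of_command(a))
    for candidate in chain_of_command(b):
        if candidate in ancestors_of_a:
            return candidate
    return None

print(common_superior("alice", "bob"))  # -> erin
```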
These relationships can be formalized mathematically; see hierarchy (mathematics).
In biology, the study of taxonomy is one of the most conventionally hierarchical kinds of knowledge, placing all living beings in a nested structure of divisions related to their probable evolutionary descent. Most evolutionary biologists assert a hierarchy extending from the level of the specimen (an individual living organism -- say, a single newt), to the species of which it is a member (perhaps the Eastern Newt), outward to further successive levels of genus, family, order, class, phylum, and kingdom. (A newt is a kind of salamander (family), and all salamanders are types of amphibians (class), which are all types of vertebrates (phylum).) Essential to this kind of reasoning is the proof that members of a division on one level are more closely related to one another than to members of a different division on the same level; they must also share ancestry in the level above. Thus, the system is hierarchical because it forbids the possibility of overlapping categories. For example, it will not permit a 'family' of beings containing some examples that are amphibians and others that are reptiles--divisions on any level do not straddle the categories of structure that are hierarchically above it. (Such straddling would be an example of heterarchy.)
Organisms are also commonly described as assemblies of parts (organs) which are themselves assemblies of yet smaller parts. When we observe that the relationship of cell to organ is like that of the relationship of organ to body, we are invoking the hierarchical aspects of physiology. (The term "organic" is often used to describe a sense of the small imitating the large, which suggests hierarchy, but isn't necessarily hierarchical.) The analogy of organ to body also extends to the relationship of a living being as a system that might resemble an ecosystem consisting of several living beings; physiology is thus hierarchically nested in ecology.
Language and semiotics
In linguistics, especially in the work of Noam Chomsky, and of later generative linguistics theories, such as Ray Jackendoff's, words or sentences are often broken down into hierarchies of parts and wholes. Hierarchical reasoning about the underlying structure of language expressions leads some linguists to the hypothesis that the world's languages are bound together in a broad array of variants subordinate to a single Universal Grammar.
In music, the structure of a composition is often understood hierarchically (for example by Heinrich Schenker (1868-1935, see Schenkerian analysis), and in the (1985) Generative Theory of Tonal Music, by composer Fred Lerdahl and linguist Ray Jackendoff). The sum of all notes in a piece is understood to be an all-inclusive surface, which can be reduced to successively more sparse and more fundamental types of motion. The levels of structure that operate in Schenker's theory are the foreground, which is seen in all the details of the musical score; the middle ground, which is roughly a summary of an essential contrapuntal progression and voice-leading; and the background or Ursatz, which is one of only a few basic "long-range counterpoint" structures that are shared in the gamut of tonal music literature.
The pitches and form of tonal music are organized hierarchically, all pitches deriving their importance from their relationship to a tonic key, and secondary themes in other keys are brought back to the tonic in a recapitulation of the primary theme. Susan McClary connects this specifically in the sonata-allegro form to the feminist hierarchy of gender (see above) in her book Feminine Endings, even pointing out that primary themes were often previously called "masculine" and secondary themes "feminine."
Ethics, behavioral psychology, philosophies of identity
In all of these examples, there is an asymmetry of 'compositional' significance between levels of structure, so that small parts of the whole hierarchical array depend, for their meaning, on their membership in larger parts.
In the work of diverse theorists such as William James (1842-1910), Michel Foucault (1926-1984) and Hayden White, important critiques of hierarchical epistemology are advanced. James famously asserts in his work "Radical Empiricism" that clear distinctions of type and category are a constant but unwritten goal of scientific reasoning, so that when they are discovered, success is declared. But if aspects of the world are organized differently, involving inherent and intractable ambiguities, then scientific questions are often considered unresolved. A hesitation to declare success upon the discovery of ambiguities leaves heterarchy at an artificial and subjective disadvantage in the scope of human knowledge. This bias is an artifact of an aesthetic or pedagogical preference for hierarchy, and not necessarily an expression of objective observation.
- Main article: Social hierarchy
Many human organizations, such as businesses, churches, armies and political movements are hierarchical organizations, at least officially; commonly seniors, called "bosses", have more power than their subordinates. Thus the relationship defining this hierarchy is "commands" or "has power over". (Some analysts question whether power "really" works as the traditional organizational chart indicates, however.) See also chain of command.
Some social insect species (bees, ants, termites) depend on matrilineal hierarchies centred on a queen with undeveloped female insects as attendants and workers.
Many social criticisms include a questioning of social hierarchies seen as being unjust. Feminism, for instance, often discusses a hierarchy of gender, in which a culture sees males or masculine traits as superior to females or feminine traits.
In the terms above, some feminism criticizes a hierarchy of only two nodes, "masculine" and "feminine", connected by the asymmetrical relationship "is more valuable to society", for example:
- The hierarchical nature of the dualism - the systematic devaluation of females and whatever is metaphorically understood as "feminine" - is what I identify as sexism. (Nelson 1992, p. 106)
Note that in this context and in other social criticisms, the word hierarchy usually is used as meaning power hierarchy or power structure. Feminists may not take issue with inanimate objects being organized in a hierarchical fashion, but rather with the specific asymmetrical organization of unequal value and power between men and women and, usually, other social hierarchies such as in racism and anti-gay bias.
Anarchism and other anti-authoritarian social movements seek to destroy all hierarchical relationships.
- Main article: containment hierarchy
A containment hierarchy is a collection of strictly nested sets. Each entry in the hierarchy designates a set such that the previous entry is a strict superset, and the next entry is a strict subset. For example, all rectangles are quadrilaterals, but not all quadrilaterals are rectangles, and all squares are rectangles, but not all rectangles are squares. (See also: Taxonomy.)
- In geometry: shape, polygon, quadrilateral, rectangle, square
- In biology: animal, bird, raptor, eagle, golden eagle
- The Chomsky hierarchy in formal languages: recursively enumerable, context-sensitive, context-free, and regular
- In physics: elementary particle, fermion, lepton, electron
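The strict nesting that defines a containment hierarchy can also be checked mechanically. Below is a rough sketch of the geometry example as data (the placeholder instance names are assumptions chosen only so the sets differ); each level must be a strict subset of the level before it.

```python
# Containment hierarchy sketch: shape > polygon > quadrilateral > rectangle > square.
# The member names are hypothetical placeholders used only to make the sets differ.
shapes         = {"circle", "triangle", "trapezoid", "oblong", "square"}
polygons       = {"triangle", "trapezoid", "oblong", "square"}
quadrilaterals = {"trapezoid", "oblong", "square"}
rectangles     = {"oblong", "square"}
squares        = {"square"}

levels = [shapes, polygons, quadrilaterals, rectangles, squares]

# On Python sets, `<` tests for a strict (proper) subset.
strictly_nested = all(inner < outer for outer, inner in zip(levels, levels[1:]))
print(strictly_nested)  # True
```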
Hierarchies and hierarchical thinking have been criticized by some, as shown above in the discussions of social hierarchies and of hierarchical nomenclatures in the arts and sciences. Related topics and possible alternatives include:
- Julie Nelson (1992). "Gender, Metaphor and the Definition of Economics". Economics and Philosophy, 8:103-125.
- Linnaean taxonomy
- Tree structure
- Chomsky hierarchy
- Network theory
- Maslow's hierarchy of needs
- Hierarchy of genres
- Unity of command
- Degrees of consanguinity
- Principles and annotated bibliography of hierarchy theory
- Summary of the Principles of Hierarchy Theory - S.N. Salthe
This page uses Creative Commons Licensed content from Wikipedia.
What is a black hole? Do they really exist? How do they form? How are they related to stars? What would happen if you fell into one? How do you see a black hole if they emit no light? What’s the difference between a black hole and a really dark star? Could a particle accelerator create a black hole? Can a black hole also be a worm hole or a time machine? In Astro 101: Black Holes, you will explore the concepts behind black holes. Using the theme of black holes, you will learn the basic ideas of astronomy, relativity, and quantum physics. After completing this course, you will be able to:
- Describe the essential properties of black holes.
- Explain recent black hole research using plain language and appropriate analogies.
- Compare black holes in popular culture to modern physics to distinguish science fact from science fiction.
- Describe the application of fundamental physical concepts including gravity, special and general relativity, and quantum mechanics to reported scientific observations.
- Recognize different types of stars and distinguish which stars can potentially become black holes.
- Differentiate types of black holes and classify each type as observed or theoretical.
- Characterize formation theories associated with each type of black hole.
- Identify different ways of detecting black holes, and appropriate technologies associated with each detection method.
- Summarize the puzzles facing black hole researchers in modern science.
swallow, common name for small perching birds of almost worldwide distribution. There are about 100 species of swallows, including the martins, which belong to the same family. Swallows have long, narrow wings, forked tails, and weak feet. They are extremely graceful in flight, making abrupt changes in speed and direction as they feed on the wing, catching insects in their wide mouths. Their plumage is blue or black with a metallic sheen, generally darker above than below. They nest in flocks in barns, sheds, chimneys, or other secluded places. The common American barn swallow, Hirundo rustica, is steel-blue above and pinkish beneath, with a rusty forehead and deeply forked tail. The purple martin, Progne subis, is deep violet with black wings and tail. Other American swallows, all with shallowly forked tails, are the cliff, or eave, swallow (Petrochelidon pyrrhonota), which builds jug-shaped nests of mud and clay lined with grass and feathers; the bank swallow or sand martin, which burrows into shore banks to nest; and the tree (Iridoprocne bicolor) and rough-winged (Stelgidopteryx ruficollis) swallows. The so-called chimney swallow is a swift. Swallows are classified in the phylum Chordata, subphylum Vertebrata, class Aves, order Passeriformes, family Hirundinidae.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
Unless you’ve lived a particularly sheltered existence, you have at least a basic understanding that exposure to blood and other bodily fluids can present a health risk. You probably realize that this is because another person’s blood and bodily fluids can contain what are routinely called germs: invisible particles that cause disease. In fact, blood can contain what are known as pathogens, which do present a risk of causing disease in humans – even serious and potentially fatal illnesses. This leaves us with the important question: what are blood pathogens?
Basic Definition of Blood Pathogens
Blood pathogens are biological substances that have the capacity for causing disease in humans. Blood pathogens most commonly are in the form of viruses and bacteria. The most common types of blood pathogens that people can be exposed to include:
- Hepatitis B
- Hepatitis C
How is Disease Transferred Via Blood Pathogens?
Disease is transferred via blood pathogens by direct contact. For example, if blood infected with HIV is present on a surface of some sort, the virus can remain alive and viable for a period of time. The MRSA bacteria can survive for quite a significant period of time in contaminated blood that ends up on a surface.
If a person comes into contact with blood contaminated with a pathogen, that virus or bacteria can be transferred into that individual’s system through a cut, abrasion, or sore. In addition, it is also possible in some circumstances for a blood pathogen to be transferred to another individual if it comes into contact with a mucus membrane.
How to Protect Yourself and Your Loved Ones From Blood Pathogens
The only sure way to avoid exposure to dangerous blood pathogens is to avoid contact with someone else’s blood in the first place. Unfortunately, situations can arise in which large amounts of blood end up being present in a home or business. The presence of blood in this manner constitutes what is technically known as a biohazardous situation.
The recommendation is that when a large amount of blood is contaminating an area in a residence or business, professional assistance should be retained. A blood cleanup company has the capacity to safely and thoroughly eliminate any hazards associated with a situation that has resulted in the presence of a significant amount of blood, bodily fluids, or even other biohazardous materials.
The most common types of situations that give rise to the potential contamination of a residence or business with blood pathogens include:
- Other types of violent crime
- Attempted suicide
- Unattended death
When it comes to keeping safe from blood pathogens, specific protocols must be followed. These protocols apply during the blood cleanup process. At the heart of keeping safe from blood pathogens during the necessary cleanup process, specific biohazard-rated personal protective equipment must be worn. This equipment needs to include:
- HEPA mask or respirator
- Protective eyewear
Over the course of the longer term, protecting people from blood pathogens in the presence of a blood spill or some situation that resulted in contamination by blood and bodily fluids requires a comprehensive biohazard remediation effort. Blood cleaning is a multifaceted endeavor that includes:
- Cleanup and removal of blood
- Sanitization of the contaminated area
- Deodorization, if necessary
Actual blood cleanup and removal involves eliminating the physical presence of blood, bodily fluids, and other biohazards. All of this matter needs to be placed in a suitable biohazard disposal container. When that is accomplished, the biohazardous waste containers need to be transported to an approved biohazard disposal company. In California, a biohazard disposal company must be duly certified by the state.
Once the cleanup and removal process is completed, the contaminated area is sanitized. The sanitization process is designed to eradicate blood pathogens. Sanitization renders a once contaminated area safe and no longer a threat to the health and wellbeing of humans.
Finally, in some instances, deodorization of the premises may be required as part of the comprehensive biohazard remediation process. For example, if blood contaminates an area as the result of an unattended death, the nature of the human decomposition process can result in an overwhelming stench.
Training for Exposure to Blood Pathogens
A member of the general public should have a basic understanding of the dangers associated with exposure to blood pathogens. This includes an understanding of the complexities and hazards associated with blood cleanup.
Depending on the nature of a business, specific training regarding exposure to blood, bodily fluids, and blood pathogens is important. In such a situation, a business needs to have an appropriately comprehensive training program centered on exposure to blood, bodily fluids, and blood pathogens. Such training should occur on a recurring basis. Indeed, management and employees should be required to participate in refresher programs so that people remain attuned to the dangers of blood pathogens and how to undertake blood cleanup. |
This is something I remember fondly from school. Loci is the plural of the Latin locus, meaning place or location. The method of loci is a way of memorizing information by placing each item that needs to be remembered at a memorable point along an imaginary journey. The information is then recalled in a specific order by retracing the same route through the imaginary journey. I’m not explaining it well, am I? No, you aren’t going crazy – it is a revolutionary way to remember information by associating it with places that you already know.
The method of loci was invented more than 2000 years ago and was widely used by the Greeks, and later the Romans, to memorize and deliver speeches that could last for hours. Unlike today, when paper is amazingly cheap and readily available and PowerPoint is everywhere, during the times of the Greeks and Romans it wasn’t all that easy to just jot down a 30-page document. Also, reading speeches to an audience was frowned upon: if you wanted to be a successful orator, you had to speak from memory. So whether you have a big meeting or presentation, you have an interview, or you are trying to pass your exams without an a-level tutor to help, this is a technique that, once you have your head around it, can make a huge difference to your memory.
Create the Memory Palace
Firstly you must establish a mental journey along a well-known route, for example, through your house or place of work. The first 5 loci, or locations, of the journey might be:
- Your front door
- On your dining room table
- Up your stairs
- On your toilet in your bathroom
- On your bed in your bedroom
These locations are your first “memory palace”. You will always travel through your memory palace in the same order – you must have a memorable fixed starting point.
Memorize the Items
Then, take a list of five items that you want to memorize, and imagine each item in one locus, or location, of your memory palace. For example, you could try memorizing the following shopping list:
- loaf of bread
- chocolate bar
Using the sample memory palace above, you would then imagine a giant bottle of milk being held by a crying baby – the wilder, ruder and crazier it is the better!
Recall the Items
That’s it, you are done! You have now assigned each location an item to remember, but you must keep going over the journey so you don’t forget anything! If you want to make another list, pick a completely separate house/room so you don’t get confused! I challenge you to do this in whatever room you are in! Remember to mentally walk through your journey a couple of times to make sure it is firmly committed to memory.
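For anyone who likes to see the structure spelled out, here is a minimal sketch of a memory palace as an ordered route of locus-item pairs. Since the shopping list above is only partial, the pairings and filler items are assumptions made purely for illustration.

```python
# Memory palace sketch: a fixed route of loci, each holding one item to recall.
# Pairings and filler items are hypothetical; only the ordered structure matters.
memory_palace = [
    ("front door",        "bottle of milk"),  # e.g., a giant bottle of milk at the door
    ("dining room table", "loaf of bread"),
    ("stairs",            "chocolate bar"),
    ("toilet",            "eggs"),            # filler item
    ("bed",               "coffee"),          # filler item
]

# Recall: mentally walk the same route in the same order and read off each item.
for locus, item in memory_palace:
    print(f"{locus} -> {item}")
```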
Want to know how some of the 20th century’s most celebrated artists made abstract paintings? This course offers an in-depth, hands-on look at the materials, techniques, and thinking of seven New York School artists, including Willem de Kooning, Yayoi Kusama, Agnes Martin, Barnett Newman, Jackson Pollock, Ad Reinhardt, and Mark Rothko. Through studio demonstrations and gallery walkthroughs, you’ll form a deeper understanding of what a studio practice means and how ideas develop from close looking, and you’ll gain a sensitivity to the physical qualities of paint. Readings and other resources will round out your understanding, providing broader cultural, intellectual, and historical context about the decades after World War II, when these artists were active. The works of art you will explore in this course may also serve as points of departure to make your own abstract paintings. You may choose to participate in the studio exercises, for which you are invited to post images of your own paintings to the discussion boards, or you may choose to complete the course through its quizzes and written assessments only. Learners who wish to participate in the optional studio exercises may need to purchase art supplies. A list of suggested materials is included in the first module.
Learning objectives:
- Learn about the materials, techniques, and approaches of seven New York School artists who made abstract paintings.
- Trace the development of each artist’s work and studio practice in relation to broader cultural, intellectual, and historical contexts in the decades after World War II.
- Hone your visual analysis skills.
- Use each artist’s works as a point of departure for making your own abstract paintings.
Logic gates are the basic building blocks of computers, and researchers at the University of Rochester have developed their fastest version. By zapping graphene and gold with laser pulses, the new logic gates show the viability of “lightwave electronics.”
Logic gates take two inputs, compare them, and then output a signal based on the result. Billions of individual logic gates are concentrated into chips to create processors, memory, and other electronic components.
These gates do not work instantaneously; there is a delay of nanoseconds. The Rochester team's new logic gates set the record by processing information in just a matter of femtoseconds, which are a million times shorter than nanoseconds.
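For readers unfamiliar with logic gates, the sketch below (Python, not the researchers' code) shows the kind of Boolean comparison a two-input gate performs, along with the nanosecond-to-femtosecond arithmetic for scale; it models only the abstract gate, not the laser-driven graphene device described in the article.

```python
# Truth tables for a few common two-input logic gates (abstract model only).
def AND(a, b): return int(a and b)
def OR(a, b):  return int(a or b)
def XOR(a, b): return int(a != b)

for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b} -> AND={AND(a, b)} OR={OR(a, b)} XOR={XOR(a, b)}")

# Scale check: a nanosecond is 1e-9 s and a femtosecond is 1e-15 s,
# so a femtosecond is about a million times shorter.
print(1e-9 / 1e-15)  # roughly one million
```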
To make this possible, the team made junctions comprising a graphene wire connecting two gold electrodes. When the graphene was zapped with synchronized pairs of laser pulses, electrons in the material were excited, sending them zipping off towards one of the electrodes, generating an electrical current.
By adjusting the phase of the laser pulses, the team was able to generate a burst of one of two types of charge carriers, which would either add up or cancel each other out. They would then be considered a 1 or 0 output respectively. The result is an ultrafast logic gate, marking the first proof of concept of an as-yet theoretical field called lightwave electronics.
“It will probably be a very long time before this technique can be used in a computer chip, but at least we now know that lightwave electronics is practically possible,” said Tobias Boolakee, lead researcher on the study.
At the moment, we measure processing speeds in Gigahertz (GHz), but these new logic gates function on the scale of Petahertz (PHz).
The research was published in the journal Nature. |
Examples of Poe's Romanticism
An artistic movement of the late 1700s, Romanticism led poets such as Edgar Allan Poe to revere originality, free thinking, idealism, the supernatural and mystic, beauty, love, passion and the natural world. Poe’s lyrical poems and dark tales are a reflection of the darker side of Romanticism as they address topics such as death, supernatural forces, loss of love, a flawed society and the evils of human nature.
The Beautiful Annabel Lee
“Annabel Lee” is one of the last poems that Poe wrote before he died in 1849. The work is about the love between Annabel Lee and the poem’s narrator, which began when the two were young in a “kingdom by the sea.” The narrator explains that the couple’s love for each other was so strong that supernatural forces and nature couldn't separate their souls. The poem reflects on the ideas of love and feelings of loss as the narrator adores and worships the beautiful woman.
Ligeia's Recorporeal Incarnation
“Ligeia” is a short story that Poe published in 1838 about a beautiful, raven-haired woman named Ligeia who falls ill and dies. Poe demonstrates Romanticism as the narrator rejects society’s idea of beauty when he points out the flaws in Lady Rowena’s classic features. The death of Lady Rowena was Poe’s way of rejecting society's idea of beauty. The narrator asserts that Ligeia’s features were more beautiful because they were more natural, and gave the author the opportunity to explore the metaphysical -- a theme explored by Romantic writers.
Nightmare in Dream-Land
Published in 1844, “Dream-Land” is a poem about a man who travels through a nightmarish world ruled by a spirit named “Night.” Poe’s detailed description of the land is a common characteristic among Romantic writers. He personifies nature, for example, when he makes night into a king, says the oceans “aspire” to reach the skies, and states that the route is “lonely.” In addition to referring to the supernatural, the use of mystery is a Romantic characteristic: The reader isn’t sure where the dream-land is, where it begins or where it ends.
The Search for Eldorado
Eldorado is a place that Poe mentions in “Dream-Land” and “Valley of Shadows.” The poem of that name was published in 1849, at the same time as the California Gold Rush. The poem is an example of the Romantic pursuit of happiness and success, even if it takes a lifetime to achieve. It was not uncommon for Romantic writers to use allusion as a poetic device. The “Valley of Shadow” in the fourth stanza alludes to the biblical Valley of the Shadow of Death, suggesting that Eldorado is not a place that the reader can find on Earth or in the living world. Instead, Eldorado is a spiritual treasure that’s part of an ongoing quest.
- Poets.org: A Brief Guide to Romanticism
- Poe Museum: Poe’s Life
- E.A. Poe Society of Baltimore: Annabel Lee
- E.A. Poe Society of Baltimore: Marginalia, A Note on ‘Annabel Lee’
- E.A. Poe Society of Baltimore: Ligeia
- E.A. Poe Society of Baltimore: Dream-Land
- E.A. Poe Society of Baltimore: Poe’s ‘Dream-Land’: Nightmare or Sublime Vision?
- E.A. Poe Society of Baltimore: Eldorado
Introduction to the cross product. Created by Sal Khan.
- at 7:14 when Sal said orthogonal, what does that mean?(66 votes)
- What is the difference between fleming's right hand rule and fleming's left hand rule? Which one is used to find the direction of force? Sal uses his right hand but in my physics book it says to use my left hand.(21 votes)
- Fleming's left hand rule is for induced current (current produced due to opposite motion of the conductor and magnet). Fleming's right hand rule mainly is used to determine the direction of force in a complete circuit in which current is produced from a chemical source such as a battery.(4 votes)
- Why do we know that the vector has to be perpendicular to the other two?(15 votes)
- Hi Chris,
I'm not too keen on physics but, things like cross product are abstract representations of physical models of things. So at some point, "that's how you define it" I think is really as basic as you can get. We don't fundamentally know why the basic laws of arithmetic work but we defined this system to describe it. I think its similar when there was a need to define something called a cross product.(12 votes)
- Actually, I can't understand all of the explanation. However, I have a question: why do we use sin in the cross product? Can you give an explanation? And also, why do we use cos in the dot product? Thank you.(12 votes)
- what is basic definition of a unit vector(5 votes)
- If we switch vector a and vector b, does that mean their product will be coming out of the page? Does that mean a times b is equal to the opposite of b times a?(5 votes)
- This is correct. The direction can be visualized by the Right Hand Rule. Taking vector A X B will give you the opposite direction of B X A.
In addition, further proof that the magnitude will be the same is found in the formula: |A||B|sin θ = |A X B|.
- Since scalar multiplication is not dependent on the order in which it is performed, the magnitude will be the same in both cases.(5 votes)
- around 8:00, you use the right hand rule. is it easier to use the right hand screw rule? (same as the one used to find the direction of magnetic field in a current carrying conductor)(4 votes)
- There are various conventions that can be used for a right-hand rule. Here is a good video that explains 4 different ways. This guy will make you laugh because he is a bit strange. As for method, I prefer thumb pointing in direction of vector A, index finger pointing in direction of vector B. Your middle finger then points in the direction of the n unit vector. http://www.youtube.com/watch?feature=fvwp&v=LK7hv4LX3ys&NR=1(5 votes)
- Okay, so I understand the cross product, how it works and its formula but what is it actually, I mean what is a cross product?(4 votes)
- Not sure but I think It is the vector which is perpendicular to both the vectors. Also it's direction is the same as your thumb when you move the 4 fingers of your fist from A to B when you want to find AxB. This is also known as the thumb rule.(2 votes)
- Which video should I watch for addition and subtraction of vectors?(4 votes)
- okay i did not know the most appropriate place to ask this so im doing it here...can we divide vectors?(2 votes)
- No! We can't divide two vectors. You can checkout the reason on http://www.quora.com/Can-we-divide-a-vector-by-a-vector-and-why(1 vote)
I've been requested to do a video on the cross product, and its special circumstances, because I was at the point on the physics playlist where I had to teach magnetism anyway, so this is as good a time as any to introduce the notion of the cross product. So what's the cross product? Well, we know about vector addition, vector subtraction, but what happens when you multiply vectors? And there's actually two ways to do it: with the dot product or the cross product. And just keep in mind these are-- well, really, every operation we've learned is defined by human beings for some other purpose, and there's nothing different about the cross product. I take the time to say that here because the cross product, at least when I first learned it, seemed a little bit unnatural. Anyway, enough talk. Let me show you what it is. So the cross product of two vectors: Let's say I have vector a cross vector b, and the notation is literally like the times sign that you knew before you started taking algebra and using dots and parentheses, so it's literally just an x. So the cross product of vectors a and b is equal to-- and this is going to seem very bizarre at first, but hopefully, we can get a little bit of a visual feel of what this means. It equals the magnitude of vector a times the magnitude of vector b times the sine of the angle between them, the smallest angle between them. And now, this is the kicker, and this quantity is not going to be just a scalar quantity. It's not just going to have magnitude. It actually has direction, and that direction we specify by the vector n, the unit vector n. We could put a little cap on it to show that it's a unit vector. There are a couple of things that are special about this direction that's specified by n. One, n is perpendicular to both of these vectors. It is orthogonal to both of these vectors, so we'll think about it in a second what that implies about it just visually. And then the other thing is the direction of this vector is defined by the right hand rule, and we'll see that in a second. So let's try to think about this visually. And I have to give you an important caveat: You can only take a cross product when we are dealing in three dimensions. A cross product really has-- maybe you could define a use for it in other dimensions or a way to take a cross product in other dimensions, but it really only has a use in three dimensions, and that's useful, because we live in a three-dimensional world. So let's see. Let's take some cross products. I think when you see it visually, it will make a little bit more sense, especially once you get used to the right hand rule. So let's say that that's vector b. I don't have to draw a straight line, but it doesn't hurt to. I don't have to draw it neatly. OK, here we go. Let's say that that is vector a, and we want to take the cross product of them. This is vector a. This is b. I'll probably just switch to one color because it's hard to keep switching between them. And then the angle between them is theta. Now, let's say the length of a is-- I don't know, let's say magnitude of a is equal to 5, and let's say that the magnitude of b is equal to 10. It looks about double that. I'm just making up the numbers on the fly. So what's the cross product? Well, the magnitude part is easy. Let's say this angle is equal to 30 degrees. 
30 degrees, or if we wanted to write it in radians, I always-- just because we grow up in a world of degrees, I always find it easier to visualize degrees, but we could think about it in terms of radians as well. 30 degrees is-- let's see, there's 3, 6-- it's pi over 6, so we could also write pi over 6 radians. But anyway, this is a 30-degree angle, so what will be a cross b? a cross b is going to equal the magnitude of a for the length of this vector, so it's going to be equal to 5 times the length of this b vector, so times 10, times the sine of the angle between them. And, of course, you could've taken the larger, the obtuse angle. You could have said this was the angle between them, but I said earlier that it was the smaller, the acute, angle between them up to 90 degrees. This is going to be sine of 30 degrees times this vector n. And it's a unit vector, so I'll go over what direction it's actually pointing in a second. Let's just figure out its magnitude. So this is equal to 50, and what's sine of 30 degrees? Sine of 30 degrees is 1/2. You could type it in your calculator if you're not sure. So it's 5 times 10 times 1/2 times the unit vector, so that equals 25 times the unit vector. Now, this is where it gets, depending on your point of view, either interesting or confusing. So what direction is this unit vector pointing in? So what I said earlier is, it's perpendicular to both of these. So how can something be perpendicular to both of these? It seems like I can't draw one. Well, that's because right here, where I drew a and b, I'm operating in two dimensions. But if I have a third dimension, if I could go in or out of my writing pad or, from your point of view, your screen, then I have a vector that is perpendicular to both. So imagine of vector that's-- I wish I could draw it-- that is literally going straight in at this point or straight out at this point. Hopefully, you're seeing it. Let me show you the notation for that. So if I draw a vector like this, if I draw a circle with an x in it like that, that is a vector that's going into the page or into the screen. And if I draw this, that is a vector that's popping out of the screen. And where does that convention come from? It's from an arrowhead, because what does an arrow look like? An arrow, which is our convention for drawing vectors, looks something like this: The tip of an arrow is circular and it comes to a point, so that's the tip, if you look at it head-on, if it was popping out of the video. And what does the tail of an arrow look like? It has fins, right? There would be one fin here and there'd be another fin right there. And so if you took this arrow and you were to go into the page and just see the back of the arrow or the behind of the arrow, it would look like that. So this is a vector that's going into the page and this is a vector that's going out of the page. So we know that n is perpendicular to both a and b, and so the only way you can get a vector that's perpendicular to both of these, it essentially has to be perpendicular, or normal, or orthogonal to the plane that's your computer screen. But how do we know if it's going into the screen or how do we know if it's coming out of the screen, this vector n? And this is where the right hand rule-- I know this is a little bit overwhelming. We'll do a bunch of example problems. 
But the right hand rule, what you do is you take your right hand-- that's why it's called the right hand rule-- and you take your index finger and you point it in the direction of the first vector in your cross product, and order matters. So let's do that. So you have to take your finger and put it in the direction of the first arrow, which is a, and then you have to take your middle finger and point it in that direction of the second arrow, b. So in this case, your hand would look something like this. I'm going to try to draw it. This is pushing the abilities of my art skills. So that's my right hand. My thumb is going to be coming down, right? That is my right hand that I drew. This is my index finger, and I'm pointing it in the direction of a. Maybe it goes a little bit more in this direction, right? Then I put my middle finger, and I kind of make an L with it, or you could kind of say it almost looks like you're shooting a gun. And I point that in the direction of b, and then whichever direction that your thumb faces in, so in this case, your thumb is going into the page, right? Your thumb would be going down if you took your right hand into this configuration. So that tells us that the vector n points into the page. So the vector n has magnitude 25, and it points into the page, so we could draw it like that with an x. If I were to attempt to draw it in three dimensions, it would look something like this. Vector a. Let me see if I can give some perspective. If this was straight down, if that's vector n, then a could look something like that. Let me draw it in the same color as a. a could look something like that, and then b would look something like that. I'm trying to draw a three-dimensional figure on two dimensions, so it might look a little different, but I think you get the point. Here I drew a and b on the plane. Here I have perspective where I was able to draw n going down. But this is the definition of a cross product. Now, I'm going to leave it there, just because for some reason, YouTube hasn't been letting me go over the limit as much, and I will do another video where I do several problems, and actually, in the process, I'm going to explain a little bit about magnetism. And we'll take the cross product of several things, and hopefully, you'll get a little bit better intuition. See you soon. |
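As a rough numeric check of the worked example in the transcript (a minimal sketch, not part of the lesson): with |a| = 5, |b| = 10, and 30 degrees between them, the magnitude of a cross b comes out to 25. The component orientation below is an assumption, chosen so that, as in Sal's drawing, the result points into the page.

```python
import math

# a along +x with length 5; b with length 10, rotated 30 degrees clockwise from a,
# so a x b points into the page (negative z) as in the video.
a = (5.0, 0.0, 0.0)
theta = math.radians(-30)
b = (10 * math.cos(theta), 10 * math.sin(theta), 0.0)

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

axb = cross(a, b)
magnitude = math.sqrt(sum(component ** 2 for component in axb))
print(axb)        # approximately (0.0, 0.0, -25.0): into the page
print(magnitude)  # approximately 25.0, i.e. |a||b|sin(30 degrees)
```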
What Is Genomic Medicine?
You can customize and personalize pretty much any product nowadays. What if you were able to personalize your medications?
Genomic medicine is a way to customize medical care to your body’s unique genetic makeup. Although greater than 99% of a DNA sequence is identical from person to person, the last 1% explains how everyone responds to stress, the environment, disease, and treatment differently.
These small variations in genes make some people more susceptible to a specific disease than other people.
Genomic medicine involves using genomic information about an individual as part of their care. The Human Genome Project was launched to advance our understanding of biology and disease, with the goal of improving health.
The National Academy of Sciences adopted the phrase ‘precision medicine’ that utilizes genomics and environmental exposure to guide individual diagnosis more accurately.
The goal of precision medicine is to change the one-size-fits-all approach to medicine and consider additional factors that could affect an individual’s disease. Utilizing targeted prevention or treatment will help specific individuals stay healthy or get better instead of relying on approaches that are the same for everyone.
CRISPR is a technology that uses a protein called Cas9 to cut DNA and can rewrite the genetic code. The idea is that CRISPR can cut out and replace the “bad” DNA with “good” DNA. As a result, this technology could potentially provide a cure to several cancers, leukemia, HIV, sickle-cell anemia, and many other diseases once perfected.
Gene therapy is used as a treatment to cure a rare condition or disease. Gene therapy takes healthy foreign genetic material and inserts it into a person’s cells to correct the genetic cause of the disease. This approach aims to provide a one-shot cure instead of only treating symptoms of a genetic disease.
Genetic tests are increasing in popularity. Genetic testing can help confirm or rule out a suspected genetic condition. It can also help determine if a person will be passing on a genetic disorder to their children.
There are two classes of genetic testing available:
- Clinical
- Direct to consumer
Clinical testing is done by trained medical professionals who help patients interpret their results. This is helpful because results can easily be misinterpreted.
Direct to consumer testing allows testing to be completed at home. Many people can order a test kit directly to their doorstep and get the results reviewed privately, but the Federal Trade Commission consumer alert warns that some of these tests lack scientific validity and may not provide accurate results.
Pharmacogenomics combines genomics and medicine. The goal of pharmacogenomics is to develop effective, safe medication and doses that are tailored to a person’s genetic makeup.
Genetics also play a role in how a person responds to medication. Different people can have different versions of the same gene, which produces variations in proteins. Specific proteins affect how drugs work, and variations in protein affect how people will respond to the drug.
Knowing and understanding the variation of the genetic makeup will help providers select drugs and doses that best suit the individual patient.
Pharmacogenomic information is now included on the labels of approximately 200 medications. This can play an essential role in identifying responders and non-responders to medications, avoiding side effects, and optimizing drug doses.
Ethics of Precision Medicine
Ethics is a significant issue with genomic medicine.
Gene editing allows scientists to change an organism’s DNA by adding, removing, or altering genetic material in the genome.
The ethical issues of gene editing arise when changes are introduced to egg and sperm cells. Changes made in these cells can be passed to future generations. There are concerns about allowing technology the ability to enhance standard human traits like height or intelligence.
Additional concerns include:
- Creation of “designer babies”
- Permanent or hereditary changes that cannot be reversed
- Unknown consequences of removing or editing a DNA sequence
The United States has currently banned the genetic alteration of DNA in human embryos used for implantation.
Ethical concerns are present for genetic testing as well. The major issue is security and privacy. Genetic discrimination may potentially happen with employers or insurance companies.
In an effort to protect people against genetic discrimination, the Genetic Information Nondiscrimination Act (GINA) was put into place. GINA prohibits health insurance providers from using genetic information to make decisions about insurance eligibility or coverage. GINA also prohibits employers from using genetic information when making decisions about hiring, promotions, or terms of employment.
Genomic medicine has the potential to alter the standard of treatment and provide individualized treatment. Although there are challenges and cautionary warnings regarding the ethics of genomic medicine, it can provide a cure for rare diseases and ultimately improve health care outcomes.
Auffray, C., Caulfield, T., Khoury, M.J. et al. Genome Medicine: past, present and future. Genome Med 3, 6 (2011). https://doi.org/10.1186/gm220
FDA. Table of Pharmacogenomic Biomarkers in Drug Labeling. https://www.fda.gov/drugs/science-and-research-drugs/table-pharmacogenomic-biomarkers-druglabeling?elq=80859ee4d24747fbb7dbbfb373be63ba&elqCampaignId=6008&elqTrackId=c5945f731f92452dac6f4b713852545c&elqaid=7346&elqat=1. Published 2020. Accessed 30 August 2020.
National Human Genome Research Institute. Genomics and Medicine. https://www.genome.gov/health/Genomics-and-Medicine#:~:text=Genomic%20medicine%20is%20an%20emerging,implications%20of%20that%20clinical%20use. Published 2020. Accessed 30 August 2020.
Roth SC. What is genomic medicine?. J Med Libr Assoc. 2019;107(3):442-448. doi:10.5195/jmla.2019.604
U.S. National Library of Medicine. What is genetic discrimination? https://ghr.nlm.nih.gov/primer/testing/discrimination. Published 2020. Accessed 30 August 2020.
U.S. National Library of Medicine. What is pharmacogenomics? https://ghr.nlm.nih.gov/primer/genomicresearch/pharmacogenomics#:~:text=Pharmacogenomics%20is%20the%20study%20of,to%20a%20person's%20genetic%20makeup. Published 2020. Accessed 30 August 2020. |
The Genome Institute of Singapore (GIS) reports that what was previously believed to be “junk” DNA is in fact a vital component that distinguishes humans from other species. Their research, published in Genome Research, notes that previously, more than 50 percent of human DNA was referred to as “junk” because it consisted of copies of nearly identical sequences. A major source of these repeats is internal viruses that have inserted themselves throughout the genome at various times during mammalian evolution.
Over evolutionary time, these repeats were dispersed within different species, creating new regulatory sites throughout these genomes. Thus, the set of genes controlled by these transcription factors is likely to significantly differ from species to species and may be a major driver for evolution.
This research also shows that these repeats are anything but “junk DNA,” since they provide a great source of evolutionary variability and might hold the key to some of the important physical differences that distinguish humans from all other species.
The GIS study also highlighted the functional importance of portions of the genome that are rich in repetitive sequences. “Because a lot of the biomedical research use model organisms such as mice and primates, it is important to have a detailed understanding of the differences between these model organisms and humans in order to explain our findings,” said Guillaume Bourque, lead author of the study. “Our research findings imply that these surveys must also include repeats, as they are likely to be the source of important differences between model organisms and humans. The better our understanding of the particularities of the human genome, the better our understanding will be of diseases and their treatments.”
“The findings by Dr. Bourque and his colleagues at the GIS are very exciting and represent what may be one of the major discoveries in the biology of evolution and gene regulation of the decade,” said Raymond White, chair of the GIS Scientific Advisory Board. “We have suspected for some time that one of the major ways species differ from one another – for instance, why rats differ from monkeys – is in the regulation of the expression of their genes: where are the genes expressed in the body, when during development, and how much do they respond to environmental stimuli.”
“What the researchers have demonstrated is that DNA segments carrying binding sites for regulatory proteins can, at times, be explosively distributed to new sites around the genome, possibly altering the activities of genes near where they locate,” White explained. “The means of distribution seem to be a class of genetic components called ‘transposable elements’ that are able to jump from one site to another at certain times in the history of the organism. The families of these transposable elements vary from species to species, as do the distributed DNA segments which bind the regulatory proteins.”
“This hypothesis for formation of new species through episodic distributions of families of gene regulatory DNA sequences is a powerful one that will now guide a wealth of experiments to determine the functional relationships of these regulatory DNA sequences to the genes that are near their landing sites,” predicted White. “I anticipate that as our knowledge of these events grows, we will begin to understand much more how and why the rat differs so dramatically from the monkey, even though they share essentially the same complement of genes and proteins.”
Introduction: Students will use what they have read about in the Risk Benefit Assessment articles above and Chapters 6 and 12 to design a preschool playground and do a Risk Assessment.
Students will design a preschool playground and write a Risk Assessment on the playground they design. Students will provide a visual design/diagram of the layout of the playground using Microsoft Word (smart shapes or clip art) or draw it freehand, scan it, and attach it. Items in the playground are to be labeled. Students will also provide a list of all playground equipment and items needed. Use of natural elements in the design of the space is encouraged. Students should utilize what they have learned about playground design and function in Chapter 12 of their textbook to complete this assignment. You must also write a Risk Benefit Assessment summary of your playground and how you have avoided all of the potential risks associated with a playground and the equipment you have chosen to place on it.
The assignment is worth 100 points.
Maximum points are given when:
– The assignment is completed in Word and attached to the assignment dropbox.
– The assignment is submitted using complete and well-detailed sentences.
– All directions for content are followed.
– The assignment contains no spelling or major grammatical errors.
Blackboard EDU 157 Course
Outside sources found by students as needed (please read FTCC plagiarism policy)
If you need access to the book, let me know!
Michelangelo is known as one of the most prolific painters and sculptors in history. As a key figure of the High Renaissance, he is specifically celebrated for his ambitious approach to scale and his expertise on anatomy. While all of his masterworks convey his undeniable talent, his world-famous fresco on the ceiling of the Sistine Chapel stands above the rest.
Painted for the pope, the busy yet beautifully balanced composition depicts a range of religious iconography rendered in Michelangelo's distinctive style, making it one of the most cherished masterpieces in the world.
What is the Sistine Chapel?
The Sistine Chapel is a large chapel located in the Vatican's Apostolic Palace. It is named after Pope Sixtus IV, who oversaw its restoration in the late 15th century. Historically, the chapel has had various important functions. Today, it retains its religious role, as it serves as the site where cardinals meet to elect the next pope.
What the Sistine Chapel is most well-known for, however, is its ceiling. Painted by Florentine fine artist Michelangelo di Lodovico Buonarroti Simoni between 1508 and 1512, the complex and colorful fresco is celebrated for its realistic figures, vast size, and innovative process.
By the early 16th century, Michelangelo was an esteemed artist known throughout Italy. He was particularly praised for his ability to render—both in painting and sculpture—figures with lifelike anatomical features, as evident in his famous David statue from 1504. Given the artist's reputation, it is no surprise that Pope Julius commissioned him to decorate the ceiling of the Sistine Chapel, whose walls were already adorned with frescoes by Botticelli, Ghirlandaio, Perugino, and other famed artists.
While the pope's plans for the ceiling revolved around a depiction of the 12 apostles, Michelangelo had bigger plans: he would paint several scenes from scripture featuring over 300 figures.
In order to reach the chapel's ceiling, Michelangelo created special scaffolding. Rather than build the structure from the floor up, he installed a wooden platform held up by brackets inserted into holes in the wall. As he completed the painting in stages, the scaffolding was designed to move across the chapel.
Once the scaffold was installed, Michelangelo was able to begin the painting process. Like many other Italian Renaissance painters, he used a fresco technique, meaning he applied washes of paint to wet plaster. In order to create an illusion of depth, Michelangelo would scrape off some of the wet medium prior to painting. This method culminated in visible “outlines” around his figures—a detail considered characteristic of the artist.
As plaster dries quickly, Michelangelo worked in sections, applying planes of fresh plaster each day. These sections are known as giornata, and remain perceptible today. |
At the Department of Energy’s SLAC National Accelerator Laboratory, scientists have made a new, potential breakthrough for the laboratory’s high-speed “electron camera” that could enable them to “film” minuscule, ultrafast movements of electrons and protons in chemical reactions—reactions that have never been visualized to date.
It is believed that these “movies” may ultimately help researchers for drug development to fight disease, generate state-of-the-art materials with novel properties, formulate more efficient chemical processes, and much more.
Instead of using standard radio-frequency radiation, the latest method leverages a form of light known as terahertz radiation to manipulate the beams of electrons used by the instrument.
This not only allows scientists to control the speed at which the images are captured by the camera but also enables them to decrease a disturbing effect known as timing jitter. This effect prevents scientists from precisely capturing the timeline of how molecules or atoms change.
The technique may also result in tinier particle accelerators: Since the wavelengths of terahertz radiation are roughly a hundred times smaller compared to those of radio waves, instruments utilizing terahertz radiation are likely to be more compact. The scientists published the study results in the Physical Review Letters journal on February 4th, 2020.
A Speedy Camera
The ultrafast electron diffraction (MeV-UED) instrument, or “electron camera,” developed by SLAC utilizes high-energy beams of electrons that travel almost at the speed of light to capture an array of snapshots—fundamentally a movie—of action within and between molecules.
For instance, this has been utilized to capture a movie of how a ring-shaped molecule disintegrates upon exposure to light and to analyze atom-level processes in melting tungsten that could potentially inform the designs of nuclear reactors.
This method works by shooting bunches of electrons at a target object and then recording the way the electrons scatter when they interact with the target’s atoms. These bunches of electrons define the electron camera’s shutter speed: the shorter the bunches, the faster the motions they can capture in a sharp image.
“It’s as if the target is frozen in time for a moment,” stated SLAC’s Emma Snively, who led the latest research.
For that reason, researchers want all the electrons in a bunch to strike the target as close to simultaneously as possible. To achieve this, they give a little boost of energy to the electrons at the back so that they catch up to the ones in the lead.
To date, scientists have utilized radio waves to transmit this energy; however, the latest method devised by the SLAC researchers at the MeV-UED facility employs light at terahertz frequencies instead.
A major benefit of using terahertz radiation lies in how it shortens the electron bunches. Researchers at the MeV-UED facility shoot a laser at a copper electrode to knock off electrons and produce beams of electron bunches. Until now, the team has generally used radio waves to make these bunches of electrons shorter.
But the radio waves also boost each electron bunch to a slightly different energy, and hence individual bunches differ in how quickly they reach the target object. This timing variation is known as jitter, and it limits the team’s ability to analyze rapid processes and precisely timestamp the way a target changes with time.
To get around this, the terahertz technique splits the laser beam in two. While one beam strikes the copper electrode and produces bunches of electrons as before, the other produces the terahertz pulses that compress these electron bunches. Since they are created by the same laser beam, the terahertz pulses and electron bunches are now synchronized with one another, reducing the timing jitter between the bunches.
Down to the Femtosecond
According to the researchers, a major breakthrough in this study was the development of a particle accelerator cavity known as the compressor. This meticulously machined hunk of metal is compact enough to sit in the palm of a hand. Terahertz pulses within the device compress the electron bunches and give them an effective, targeted push.
As a result, the researchers could compress the electron bunches so that they last only a few tens of femtoseconds (quadrillionths of a second). That is not as much compression as currently achieved with traditional radio-frequency techniques, but according to the scientists, the ability to simultaneously reduce the jitter makes the terahertz technique promising.
Moreover, the more compact compressors enabled by the terahertz technique would mean lower costs compared with radio-frequency technology.
Typical radio-frequency compression schemes produce shorter bunches but very high jitter. If you produce a compressed bunch and also reduce the jitter, then you'll be able to catch very fast processes that we’ve never been able to observe before.
Mohamed Othman, Researcher, SLAC National Accelerator Laboratory
Ultimately, the aim is to compress the electron bunches down to about a femtosecond, stated the scientists. This could subsequently allow researchers to view the remarkably fast timescales of atomic behavior in important chemical reactions, such as individual protons transferring between atoms or hydrogen bonds breaking, that are yet to be fully understood.
At the same time that we are investigating the physics of how these electron beams interact with these intense terahertz waves, we're also really building a tool that other scientists can use immediately to explore materials and molecules in a way that wasn't possible before. I think that's one of the most rewarding aspects of this research.
Emilio Nanni, Researcher, SLAC National Accelerator Laboratory
Nanni headed the project with another SLAC researcher, Renkai Li.
The study was financed by the DOE’s Office of Science. The MeV-UED instrument is part of SLAC’s Linac Coherent Light Source—a DOE Office of Science user facility. |
White blood cells are made in the bone marrow and protect the body against infection. If an infection develops, white blood cells attack and destroy the bacteria, virus, or other organism causing it.
White blood cells are bigger than red blood cells and normally are fewer in number. When a person has a bacterial infection, the number of white cells can increase dramatically.
The white blood cell count shows the number of white blood cells in a sample of blood. A normal white blood cell count is between 4,500 and 11,000 cells per cubic millimeter (4.5 and 11.0 × 10⁹ cells per liter). The number of white blood cells is sometimes used to identify an infection or to monitor the body’s response to treatment.
There are five types of white blood cells: lymphocytes, monocytes, neutrophils, basophils, and eosinophils. |
- 1 How do rivers transform and change the land?
- 2 How do rivers create new landforms?
- 3 What are 4 examples from the lesson of landforms?
- 4 What is the role of rivers in changing the landforms of the earth?
- 5 How do rivers shape the land diagram?
- 6 What is it called when a river changes course?
- 7 What two landforms are created by rivers?
- 8 What type of landforms are in a river?
- 9 What are landforms examples?
- 10 What is landforms and its types?
- 11 How do you explain landforms to students?
- 12 What are the importance of landforms?
- 13 How do rivers affect habitats?
- 14 How do rivers affect land?
- 15 How do rivers move?
How do rivers transform and change the land?
River landscapes change as you go downstream from the source to the mouth. In the upper course of a river the altitude is high and the gradient is steep. In the middle course, the river meanders through gentle gradients. In the lower course, the river flows over flat land.
How do rivers create new landforms?
One common way a river is formed is water feeding it from lakes. Rivers are not only created by other landforms but they also create landforms. Rivers can create canyons such as the Grand Canyon, valleys and bluffs. They do this through erosion and deposition.
What are 4 examples from the lesson of landforms?
Mountains, deserts, oceans, coastlines, lakes, creeks, rivers, waterfalls, islands, rainforests, plains, grasslands, canyons, bays, and peninsulas are all landforms, whether they are mostly made up of land or water, provided they were made naturally, and can be found on the solid surface of the earth.
What is the role of rivers in changing the landforms of the earth?
They also change a nondescript geologic setting into distinct topographic forms. This happens primarily because movement of sediment-laden water is capable of pronounced erosion, and when transporting energy decreases, landforms are created by the deposition of fluvial sediment.
How do rivers shape the land diagram?
Vertical erosion in the upper course of the river. The river is steep and gravity pulls it downhill so it erodes deeply in to the soil. The rocks on the valley sides slide down to make a V shape.
What is it called when a river changes course?
All rivers naturally change their path over time, but this one forms meanders (the technical name for these curves) at an especially fast rate, due to the speed of the water, the amount of sediment in it, and the surrounding landscape.
What two landforms are created by rivers?
Erosion and deposition within a river channel cause landforms to be created:
- Flood plains.
What type of landforms are in a river?
River Systems and Fluvial Landforms
- Upper Basin. Headwaters.
- Mid-basin. Low gradient valleys and flood plains.
- Lower Basin. Depositional Zone.
What are landforms examples?
A landform is a feature on the Earth’s surface that is part of the terrain. Mountains, hills, plateaus, and plains are the four major types of landforms. Minor landforms include buttes, canyons, valleys, and basins.
What is landforms and its types?
Mountains, hills, plateaux, and plains are the four major types of landforms. Minor landforms include buttes, canyons, valleys, and basins. Tectonic plate movement under the Earth can create landforms by pushing up mountains and hills.
How do you explain landforms to students?
In simple terms, we say that any shape on the earth’s surface is known as a landform. The various landforms that we have, came into existence due to natural processes such as erosion, wind, rain, weather conditions such as ice, frost and chemical actions.
What are the importance of landforms?
Landforms, particularly volcanoes, are key sources of geothermal energy and so landforms, and the areas surrounding them, are often harnessed for electricity and hot water production. Another renewable energy source, wind power, can be harnessed using farms built in elevated areas.
How do rivers affect habitats?
Tree limbs that fall into streams and rivers increase habitat heterogeneity. Stormwater runoff from surrounding landscapes carries particles into streams. The particles include soil as well as plant and animal detritus. Organic particles in the runoff contribute to the food base in stream and river ecosystems.
How do rivers affect land?
Streams and rivers erode and transport sediment. They erode bedrock and/or sediment in some locations and deposit sediment in other areas. Moving water, in river and streams, is one of the principal agents in eroding bedrock and sediment and in shaping landforms.
How do rivers move?
A river forms from water moving from a higher elevation to a lower elevation, all due to gravity. When rain falls on the land, it either seeps into the ground or becomes runoff, which flows downhill into rivers and lakes, on its journey towards the seas. |
MySQL Inner Join
The MySQL Inner Join is used to return only those rows from the tables that match the specified condition and to hide the other rows. MySQL treats it as the default join, so it is optional to use the Inner Join keyword in the query.
We can understand it with the following visual representation, where the Inner Join returns only the matching results from table1 and table2:
MySQL Inner Join Syntax:
The Inner Join keyword is used with the SELECT statement and must be written after the FROM clause. The following syntax explains it more clearly:
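A minimal sketch of the general form (the table and column names here are placeholders rather than names taken from the original tutorial):

```sql
SELECT t1.column_name, t2.column_name
FROM table1 AS t1
INNER JOIN table2 AS t2
    ON t1.common_column = t2.common_column;
```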
In this syntax, we first select the column list, then specify the main table after the FROM clause and the table to be joined to it after the Inner Join keyword (table1, table2), and finally provide the join condition after the ON keyword. The join condition returns the matching rows between the tables specified in the Inner Join clause.
MySQL Inner Join Example
Let us first create two tables, "students" and "technologies", that contain the following data:
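The original table definitions and sample data are not reproduced here; a plausible sketch of the definitions, with column names inferred from later parts of this tutorial (and therefore assumptions), is:

```sql
CREATE TABLE students (
    student_id INT PRIMARY KEY,
    stud_name  VARCHAR(40),
    city       VARCHAR(40),
    institute  VARCHAR(40),
    income     INT            -- assumed here; used later in the operator example
);

CREATE TABLE technologies (
    tech_id    INT PRIMARY KEY,
    student_id INT,            -- shared column used for joining
    tech_name  VARCHAR(40)
);
```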
To select records from both tables, execute the following query:
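Assuming the hypothetical columns sketched above, the join query might look like this:

```sql
SELECT s.student_id, s.stud_name, t.tech_name
FROM students AS s
INNER JOIN technologies AS t
    ON s.student_id = t.student_id;
```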
After successful execution of the query, it will give the following output:
MySQL Inner Join with Group By Clause
The Inner Join can also be used with the GROUP BY clause. The following statement returns student id, technology name, city, and institute name using the Inner Join clause with the GROUP BY clause.
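A sketch of such a query, again assuming the hypothetical columns above (grouping by every selected column keeps the statement valid under MySQL's ONLY_FULL_GROUP_BY mode):

```sql
SELECT s.student_id, t.tech_name, s.city, s.institute
FROM students AS s
INNER JOIN technologies AS t
    ON s.student_id = t.student_id
GROUP BY s.student_id, t.tech_name, s.city, s.institute;
```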
The above statement will give the following output:
MySQL Inner Join with USING clause
Sometimes, a column has the same name in both tables. In that case, we can use the USING keyword to access the records. The following query explains it more clearly:
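With the assumed shared column student_id, the USING form might look like:

```sql
SELECT s.student_id, s.stud_name, t.tech_name
FROM students AS s
INNER JOIN technologies AS t
    USING (student_id);
```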
It will give the following output:
Inner Join with WHERE Clause
The WHERE clause enables you to filter the returned results. The following example illustrates this clause with Inner Join:
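A sketch of a filtered join over the assumed schema (the filter value 'Chicago' is purely illustrative):

```sql
SELECT s.student_id, s.stud_name, s.city, t.tech_name
FROM students AS s
INNER JOIN technologies AS t
    ON s.student_id = t.student_id
WHERE s.city = 'Chicago';
```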
This statement gives the below result:
MySQL Inner Join Multiple Tables
We have already created two tables named students and technologies. Let us create one more table and name it contact.
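A possible definition for the contact table (the column names are assumptions, not taken from the original tutorial):

```sql
CREATE TABLE contact (
    contact_id INT PRIMARY KEY,
    student_id INT,            -- shared column used for joining
    cellphone  VARCHAR(20),
    homephone  VARCHAR(20)
);
```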
Execute the following statement to join the three tables students, technologies, and contact:
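A three-table join over the assumed schema might be written as:

```sql
SELECT s.student_id, s.stud_name, t.tech_name, c.cellphone
FROM students AS s
INNER JOIN technologies AS t ON s.student_id = t.student_id
INNER JOIN contact      AS c ON s.student_id = c.student_id;
```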
After successful execution of the above query, it will give the following output:
MySQL Inner Join using Operators
MySQL allows many operators that can be used with Inner Join, such as greater than (>), less than (<), equal (=), not equal (<>), etc. The following query returns the rows whose income is in the range of 20000 to 80000:
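One way to express that range with comparison operators in the join condition, assuming income lives in the students table as sketched earlier:

```sql
SELECT s.student_id, s.stud_name, s.income, t.tech_name
FROM students AS s
INNER JOIN technologies AS t
    ON s.student_id = t.student_id
    AND s.income > 20000 AND s.income < 80000;
```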
This will give the following output: |
The flag of Portugal was adopted on June 30, 1911. The flag is rectangular and divided into two vertical fields: a smaller green field on the left side and a larger red field on the right side. The Portuguese coat of arms, surrounded by the armillary sphere, is centered on the dividing line between the two color fields.
Although the red and green colors on the flag may not seem significant today, the color choice and design of the flag represented a radical shift towards a Portuguese republic. Until the late nineteenth century, Portugal had been governed by religious monarchs and used a white flag with a blue cross. During a revolt on January 31, 1891, however, the Portuguese Republican Party established red and green as their official colors. Within the next two decades, Portuguese Republicans began to associate the green with the hope of the Portuguese nation and the red with the blood of those who died defending the country. After the flag’s development, the Republican party quickly propagandized the red and green colors and included them on nearly every republican item.
The armillary sphere that appears around the Portuguese shield commemorates the Portuguese sailors of the Age of Exploration, the two-hundred-year period between the fifteenth and seventeenth centuries, during which Europeans ventured into unknown seas and arrived in Africa, North and South America, and Asia. The armillary sphere was essential for navigation and was also used in many architectural works, including the Jerónimos Monastery and Belém Tower.
The Portuguese shield appears in the middle of the armillary sphere. The shield has been the unifying element of Portuguese flags throughout the centuries, despite the Republican revolution, and it is the oldest Portuguese symbol. Inside the white area of the shield are five smaller blue shields, or quinas. The symbolism behind these shields comes from the “Miracle of Ourique,” a tale in which Afonso I, a Portuguese ruler, is visited by a divine messenger who assures him that God is watching over him. Shortly afterwards, Afonso and his troops defeated five Moorish kings and their troops. In gratitude, Afonso incorporated the five quinas, which are arranged in a cross pattern, into the shield’s design. The seven castles on the shield represent Afonso III’s victory over seven Moorish fortresses in 1249.
- Define fruits;
- Classify fruits into different categories;
- State the nutritive value of fruits;
- State the factors to be considered when choosing fruits;
- Describe different methods of preparing fruits;
- Mention the forms or ways fruits can be served;
- State the effects of heat on fruits.
Lesson Summary /Discussion
Fruits
Fruits are the edible products of plants, especially those that are fleshy, sweet or sour, and can usually be eaten raw.
Classification of Fruit
Do note that when classifying fruits in Food and Nutrition, we use the culinary method and not the botanical one, as the two view fruits and other plants from different perspectives. Many common language terms used for fruit and seeds differ from botanical classifications. For example, in botany, a fruit is a ripened ovary or carpel that contains seeds; e.g., an apple, pomegranate, tomato or a pumpkin. A nut is a type of fruit (and not a seed), and a seed is a ripened ovule. In culinary language, a fruit is the sweet- or not sweet- (even sour-) tasting produce of a specific plant (e.g., a peach, pear or lemon); nuts are hard, oily, non-sweet plant produce in shells (hazelnut, coconut, acorn). Vegetables, so called, typically are savoury or non-sweet produce (zucchini, lettuce, broccoli, and tomato); but some may be sweet-tasting (sweet potato). Examples of botanically classified fruit that typically are called vegetables include: cucumber, pumpkin, and squash (all are cucurbits); beans, peanuts, and peas (all legumes); corn, eggplant, bell pepper (or sweet pepper), and tomato. The spices chili pepper and allspice are fruits, botanically speaking. In contrast, rhubarb is often called a fruit when used in making pies, but the edible produce of rhubarb is actually the leaf stalk or petiole of the plant. Edible gymnosperm seeds are often given fruit names, e.g., ginkgo nuts and pine nuts. Botanically, a cereal grain, such as corn, rice, or wheat, is a kind of fruit (termed a caryopsis). However, the fruit wall is thin and fused to the seed coat, so almost all the edible grain-fruit is actually a seed.
For culinary purposes, fruits can be classified into two broad groups:
A. Fresh Fruits: these include:
- Soft fruits such as berries, banana, guava, etc.
- Hard fruits such as apples, pears, plums, melons, mangoes.
- Citrus such as oranges, lemons, grapefruit.
Chart Showing Examples Of Fruits
Nutritive value of fruits
Fruits generally have limited nutritive value; the major nutrient in fruit is ascorbic acid (vitamin C). Almost all fruits contain a physiologically significant amount of this vitamin. Since most fruits are often consumed raw, a large amount of the vitamin C present is retained and consumed. Fruits also contain pectin, which assists in the formation of jellies. Most fruits contain small quantities of carotene and the B group of vitamins.
Fruits, however, contain little or no protein or fat. Ripe fruits contain little or no starch, as it has been converted to sugar.
Factors to consider when choosing fruits
Fruits should be fresh.
They must be free from insect infestation.
They must not be over ripe.
They must be firm to touch.
Choose fruits that are in season: bananas and citrus fruits are common during the dry season, while guava and mangoes are common during the rainy season.
Preparation of fruits
Raw Fruit: most fresh fruits, when thoroughly ripe, are suitable for serving raw. Most of the nutrients, especially vitamin C, are retained and consumed in this manner. However, when consuming raw fruit, it must be washed properly. Washing is necessary to remove dust, residual soil and other microorganisms which may be present on the fruit. Washing is then followed by peeling, in respect of some fruits such as banana, mango, pawpaw, pineapple and citrus.
Cooked Fruit: sometimes fruits are cooked for variety, to make them more palatable, increase their keeping quality, soften the cellulose or cook the starch. For example, green apples are cooked because of their starch content.
Stewing: fruit can also be stewed in water or cooked in sugar syrup. Fruits cooked in syrup usually maintain their shape better than those cooked in water. If the sugar concentration is about the same as the concentration of soluble materials in the fruit, the fruit tends to hold its shape during cooking. If, however, the sugar concentration in the syrup is higher than that of the fruit, water is withdrawn from the fruit by osmosis. This makes the fruit shrink and become tough.
Baking: this is another method of fruit preparation, used for fruits such as apples. Apples are prepared for baking by coring them and slitting the skin at right angles to the core around the middle of the apple to avoid splitting during baking. For variety, fruit can be baked together with different ingredients.
Effect of Cooking Or Heat On Fruits
- When fruits are cooked or heated, the vitamin C content is partially destroyed and may even be completely destroyed if the cooking is very intense.
- The cellulose is softened and the fruit therefore becomes softer and more digestible.
- Minerals are leached out into the water but are not lost if syrup made from the cooking water is served along with the fruit.
- Cooking help to destroy bacteria which may be present in the fruit.
- Pectin, which is necessary for the setting of jams and jellies, is released when fruits are heated or cooked.
Methods of serving fruit
Fruits can be served whole, fresh and raw while they are ripe; unripe fruits and fruits with hard seeds may be cooked. The juice can be squeezed out, as with citrus fruits, and served in cups. The juice can also be squeezed from the fruit after it has been cooked and then used for making jellies, or the fruit may be cooked to a pulp and used for making fruit fool. Fruits can also be served in the form of salads.
Lesson Evaluation /Test
- What are fruits?
- State the classification of fruits with examples.
- List the factors that should be considered when choosing fruits.
- State the methods of serving fruits.
- What are the effects of heat on fruits?
Describe the various forms in which you consume fruits in your home. Share your knowledge in the comment section. |
Many of these species have lost much of their range in North America and are left with island remnants of their habitat. But in the Boreal Forest, animals have the sweeping landscapes and fresh waters they need to find food, mate, raise young and flourish.
Because so much of the boreal remains whole, it offers incomparable protection and freedom of movement for wildlife. When caribou are about to give birth, for instance, they disperse throughout the boreal, often swimming to remote wooded islands or traveling to dense thickets in open peatlands. Scientists estimate caribou mothers need about 16 square kilometres of intact forest to raise their young and keep them safe from predators.
With millions of hectares of wild landscapes, the boreal is a launchpad for great migrations of wildlife. The George River and Leaf River Caribou Herds travel between 2,000 and 6,000 kilometres every year as they move from their boreal wintering areas to their tundra calving grounds. Both Atlantic and Pacific salmon maintain healthy populations in the boreal’s undammed rivers—even as they become extinct or endangered further south. And each year, 3 to 5 billion birds fly out of the boreal to their wintering ground in backyards and wild places across the United States, Mexico and beyond.
At a time when many animal migrations around the globe face dwindling habitat and other constraints, the boreal continues to provide wildlife room to roam.
Did you know? |
Assessment - a collecting and bringing together of information about a child's needs, which may include social, psychological, and educational evaluations used to determine services; a process using observation, testing, and test analysis to determine an individual's strengths and weaknesses in order to plan his or her educational services.
Assessment team - a team of people from different backgrounds who observe and test a child to determine his or her strengths and weaknesses.
Autism: a developmental disability significantly affecting verbal and nonverbal communication and social interaction, generally evident before age 3. Other characteristics, which may be associated with autism are engagement in repetitive activities and stereotyped movements, resistance to environmental change or change in daily routines, and unusual responses to sensory experiences.
Birth through two Transition Meeting: a meeting that introduces the family of handicapped toddlers to the school district or agency that could be receiving the child for intervention services after the child turns 3. This meeting takes place up to 6 months before a child's third birthday.
Cognitive - a term that describes the process people use for remembering, reasoning, understanding, and using judgment; in special education terms, a cognitive disability refers to difficulty in learning.
Developmental Delay: a child, birth through age eight, who has been identified by a multidisciplinary team as having either a significant delay in the function of one or more of the following areas: cognitive development; physical development; communicative development; social or emotional development; or adaptive behavior or skills development or a diagnosed physical or medical condition that has a high probability of resulting in a substantial delay in function in one or more of the such areas.
Early Intervention: programs or services designed to identify and treat a developmental problem as early as possible.
Eligibility: meeting the criteria necessary to qualify for special education.
Individual Education Plan (IEP): a written education plan for a child aged 3-21 with disabilities developed by a team of professionals and the child's parents. IEP's are based on a multidisciplinary evaluation of the child and describe how the child is presently doing, what the child's learning needs are, and what services the child will need. They are reviewed and updated yearly. (A written statement for a child with a disability developed and implemented according to federal and state regulations.)
Individual Family Service Plan (IFSP): a document that guides the early intervention process for children (ages 0 through age 2) with disabilities and their families. The IFSP contains information about the services necessary to facilitate a child's development and enhance the family's capacity to facilitate the child's development. Through the IFSP process, family members and service providers work as a team to plan, implement and evaluate services tailored to the family's unique concerns, priorities, and resources.
Least Restrictive Environment (LRE): an educational setting or program that provides a student needing special education the chance to work and learn; it also provides the student with as much contact as possible with non-exceptional children, while meeting the child's learning needs and physical requirements in a regular educational environment as much as is appropriate.
Occupational Therapy: a therapy or treatment that helps an individual develop mental or physical skills that will aid in daily living, it focuses on the use of hands and fingers, on coordination of movement, and on self-help skills, such as dressing, eating with a fork and spoon, etc.
Physical Therapy: therapy designed to improve, maintain, or slow the rate of regression of the motor functions of a student to enable him/her to function in his educational environment.
Special Education: specially designed instruction, at no cost to the parents, to meet the unique educational needs of the student with a disability.
Speech / Language Impairment: a speech or language impairment that adversely affects educational performance, such as a language, articulation, fluency or voice impairment.
Speech and Language Therapy: therapy designed to improve the skills of an individual with a diagnosed delay or disorder that impacts the ability to communicate. |
Let’s be honest, none of us are actual experts when it comes to measurements and their conversions. We need some sort of guide to clarify our assumptions. Lo and behold, the creation of measurement chart examples. A measurement chart is different from the average chart type, such as a bar chart, as it isn’t a graphical representation of gathered data. Instead, it presents defined data according to the International System of Units.
In math class, we’re taught how to compute different metric conversions with the help of measurement charts. Sometimes, we’re even forced to familiarize each measurement conversion for exams. With this, you’re probably familiar with the measurement chart’s content.
A sample chart is used to define the points of measure of a given piece. These mathematical conversions guide us in making accurate measurements. Sooner or later, we become familiar with each value that we wouldn’t even need the chart anymore.
A common chart example that display measurements is a size chart.
A height measurement chart is often used by healthcare providers and parents to monitor their child’s growth.
This may be for medical purposes or for personal reasons. There are different ways to make a height measurement chart. Most height measurement charts are designed to be life-sized, so individuals may stand alongside the chart to measure themselves.
To make a height measurement chart, you must first choose a material for your chart.
The chart is typically made out of cardboard or wood. Height is generally measured in feet and inches, so you can use a tape measure to properly mark the measurements on your chart. Imagine the chart to be an oversized ruler. With this in mind, you can draw horizontal lines at the different points of measure.
You must also label each point using their respective numerical values. Each measurement must be made as accurate as possible.
Measurement charts may not be something you use on the daily but it’s definitely something you’ll find useful at different points of your life.
You may be familiar with your personal measurements in inches, for example, but not in centimeters. In situations like this, a measurement chart may come in handy for accurate conversions.
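As a quick illustration of the sort of conversion such a chart encodes (the height used here is just an example): one inch equals 2.54 centimeters, so a height of 5 feet 6 inches, which is 66 inches, works out to 66 × 2.54 = 167.64 centimeters.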
Measurement charts may also be useful for food preparation and science experiments. There’s nothing worse than a failed science experiment caused by inaccurate measurements. After all, inaccurate measurements may garner negative outcomes one way or another.
A measurement chart may also provide approximate visualizations. This is especially applicable for land measurements. It will make it easier for you to draw comparisons between the given land areas.
Overall, measurement charts serve as a proper guide for different reasons. There are other various sample chart examples that are used for similar purposes as well. |
UT.II. United States Studies: Students will understand the chronology and significance of key events leading to self-government.
II.B. The English colonies in North America began to organize and discuss creating an independent form of government separate from England's rule. After making their case in their Declaration of Independence, the colonies engaged in a Revolutionary war that culminated in their independence and the creation of a new nation, the United States of America.
II.2: Evaluate the Revolutionary War's impact on self-rule.
II.2.b. Profile citizens who rose to greatness as leaders.
II.2.d. Explain how the winning of the war set in motion a need for a new government that would serve the needs of the new states.
UT.III. United States Studies: Students will understand the rights and responsibilities guaranteed in the United States Constitution and Bill of Rights.
III.B. The new United States needed a set of rules. A group of leading thinkers of the Revolutionary era met to create a new document to lay out the form of the new government. Drawing upon ideas both old and new, and finding ways to compromise to meet the needs and demands of multiple interests, they created this new government charter called the Constitution. The Constitution created a strong national government with separate branches within the government to insure there were checks on power and balances of responsibilities. The Constitution has been changed, or amended, numerous times since then, first with the addition of the Bill of Rights.
III.1: Assess the underlying principles of the US Constitution.
III.1.a. Recognize ideas from documents used to develop the Constitution (e.g. Magna Carta, Iroquois Confederacy, Articles of Confederation, Virginia Plan).
III.1.b. Analyze goals outlined in the Preamble.
III.1.c. Distinguish between the role of the Legislative, Executive, and Judicial branches of the government.
III.1.e. Describe the concept of checks and balances.
III.2: Assess how the US Constitution has been amended and interpreted over time, and the impact these amendments have had on the rights and responsibilities of citizens of the United States.
III.2.a. Explain the significance of the Bill of Rights.
III.2.b. Identify how the rights of selected groups have changed and how the Constitution reflects those changes (e.g. women, enslaved people). |
protein O-mannosyltransferase 1
The POMT1 gene provides instructions for making one piece of the protein O-mannosyltransferase (POMT) enzyme complex. The other piece is produced from the POMT2 gene. This enzyme complex is present in many different tissues in the body but is particularly abundant in the muscles used for movement (skeletal muscles), fetal brain, and testes.
The POMT complex helps modify a protein called alpha (α)-dystroglycan. Specifically, this complex adds a sugar molecule called mannose to α-dystroglycan through a process called glycosylation. Glycosylation is critical for the normal function of α-dystroglycan.
The α-dystroglycan protein helps anchor the structural framework inside each cell (cytoskeleton) to the lattice of proteins and other molecules outside the cell (extracellular matrix). In skeletal muscles, glycosylated α-dystroglycan helps stabilize and protect muscle fibers. In the brain, it helps direct the movement (migration) of nerve cells (neurons) during early development.
At least 24 mutations in the POMT1 gene have been found to cause Walker-Warburg syndrome, the most severe form of a group of disorders known as congenital muscular dystrophies. Individuals with Walker-Warburg syndrome have skeletal muscle weakness and abnormalities of the brain and eyes. Because of the severity of the problems caused by this condition, affected individuals usually do not survive past early childhood.
POMT1 gene mutations that cause Walker-Warburg syndrome lead to the formation of nonfunctional POMT enzyme complexes that cannot transfer mannose to α-dystroglycan, preventing its normal glycosylation. As a result, α-dystroglycan can no longer effectively anchor cells to the proteins and other molecules that surround them. Without functional α-dystroglycan to stabilize the muscle fibers, they become damaged as they repeatedly contract and relax with use. The damaged fibers weaken and die over time, which affects the development, structure, and function of skeletal muscles in people with Walker-Warburg syndrome.
Defective α-dystroglycan also affects the migration of neurons during the early development of the brain. Instead of stopping when they reach their intended destinations, some neurons migrate past the surface of the brain into the fluid-filled space that surrounds it. Researchers believe that this problem with neuronal migration causes a brain abnormality called cobblestone lissencephaly, in which the surface of the brain lacks the normal folds and grooves and instead appears bumpy and irregular. Less is known about the effects of POMT1 gene mutations in other parts of the body.
Mutations in the POMT1 gene are also involved in less severe forms of muscular dystrophy, including muscle-eye-brain disease and POMT1-related congenital muscular dystrophy (also known as MDDGB1). Muscle-eye-brain disease is similar to Walker-Warburg syndrome (described above), although affected individuals usually survive into childhood or adolescence. POMT1-related congenital muscular dystrophy causes muscle weakness, brain abnormalities, and intellectual disability, but usually does not affect the eyes.
POMT1 gene mutations that cause these conditions result in POMT enzyme complexes with reduced function. As a result, glycosylation of α-dystroglycan is impaired. The severity of the resulting condition appears to be related to the level of α-dystroglycan glycosylation; the less glycosylation, the more severe the condition.
- dolichyl-phosphate-mannose-protein mannosyltransferase
- dolichyl-phosphate-mannose--protein mannosyltransferase 1
- protein O-mannosyl-transferase 1
- protein-O-mannosyltransferase 1 |
Mrs. Gowans' class had a fun month with lots of Easter activities. They matched upper and lower case letters on plastic eggs for the Literacy domain. In the Mathematics domain, the children enjoyed counting, sorting and graphing jellybeans. In Creative Arts they were able to dye and paint real eggs with shaving cream. In the Science domain they experimented with shaving cream for clouds and blue water and eye droppers for rain. The letters /O/P/, numbers 4 and 5, the color purple and the shape "oval" were featured for the month.
Mrs. Peterson's class focused on the Language Development domain and learned rhyming words by playing ‘Rhyming Bingo’ and ‘Movin & Groovin’. Each day, the "Question of the Day" started with "Do you know a word that rhymes with ________?" The children are working hard in the Literacy domain on lower case recognition and some children are writing their names in lower case letters. The arrival of our Monarch Butterflies came April 26th as part of our Knowledge of Science Concepts. The children will journal the Life Cycle of a Butterfly as well as make a life cycle model. Mrs. Peterson is very proud of the letter sounds the children have learned and their ability to put the sounds together to make words!! The letters and corresponding sounds they have learned so far are /C/O/G/A/S/D/L/I/T/F/E/H/U/B/R/. |
In this section we need to address a couple of topics about the constant of integration. Throughout most calculus classes we play pretty fast and loose with it, and because of that many students don’t really understand it or how it can be important.
First, let’s address how we play fast and loose with it. Recall that technically when we integrate a sum or difference we are actually doing multiple integrals. Upon evaluating each of these integrals we should get a constant of integration for each, since we really are doing two separate integrals.
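As an illustration (the integrand used in the original example is not reproduced here, so this particular one is only an assumption):
\[
\int x + \cos x \,dx = \int x\,dx + \int \cos x\,dx = \left(\tfrac{1}{2}x^{2} + c\right) + \left(\sin x + k\right).
\]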
Since there is no reason to think that the constants of integration will be the same from each integral, we use different constants for each, say c and k. Now, both c and k are unknown constants, and the sum of two unknown constants is just an unknown constant, so we acknowledge that by simply writing the sum as a single c. The integral is then written with just one constant of integration.
We also tend to play fast and loose with constants of integration in some substitution rule problems. Consider a problem in which the substitution pulls a constant factor of 1/2 out in front of the integral. Technically, when we integrate we should get the antiderivative in terms of the new variable plus its constant of integration, all multiplied by that 1/2. Since the whole integral is multiplied by 1/2, the whole answer, including the constant of integration, should be multiplied by 1/2. However, since the constant of integration is an unknown constant, dividing it by 2 isn’t going to change that fact, so we tend to just write the fraction as a c.
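A concrete example of this kind (the specific integrand is an assumption, since the original problem is not reproduced here): with \(u = x^{2}\), \(du = 2x\,dx\),
\[
\int x\cos\!\left(x^{2}\right)dx = \tfrac{1}{2}\int \cos u \,du = \tfrac{1}{2}\bigl(\sin u + c\bigr) = \tfrac{1}{2}\sin\!\left(x^{2}\right) + \tfrac{c}{2},
\]
and since \(c/2\) is still just an unknown constant we simply write the final answer as \(\tfrac{1}{2}\sin\!\left(x^{2}\right) + c\).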
In general, we don’t really need to worry about how we’ve played fast and loose with the constant of integration in either of these two cases. The real problem, however, is that because we play fast and loose with these constants of integration, most students don’t really have a good grasp of them and don’t understand that there are times when the constants of integration are important and we need to be careful with them.
To see how a lack of understanding about the constant of integration can cause problems, consider the following integral. This is a really simple integral. However, there are two ways (both simple) to integrate it, and that is where the problem arises. The first integration method is to just break up the fraction and do the integral. The second way is to use a substitution. Both are worked out in the sketch below.
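The integral in the original example is not reproduced here, so as a concrete stand-in (an assumption on our part) take
\[
\int \frac{1}{2x}\,dx .
\]
Breaking up the fraction as \(\tfrac{1}{2}\cdot\tfrac{1}{x}\) and integrating gives
\[
\int \frac{1}{2x}\,dx = \frac{1}{2}\int \frac{1}{x}\,dx = \frac{1}{2}\ln\lvert x\rvert + c,
\]
while the substitution \(u = 2x\), \(du = 2\,dx\) gives
\[
\int \frac{1}{2x}\,dx = \frac{1}{2}\int \frac{1}{u}\,du = \frac{1}{2}\ln\lvert 2x\rvert + k .
\]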
Can you see the problem? We integrated the same function and got very different answers. This doesn’t make any sense. Integrating the same function should give us the same answer. We only used different methods to do the integral, and both are perfectly legitimate integration methods. So, how can using different methods produce different answers?
The first thing that we should notice is that because we used a different method for each there is no reason to think that the constant of integration will in fact be the same number, and so we really should use different letters for each. More appropriate answers would label the constant from the first method c and the constant from the second method k.
Now, let’s take another look at the second answer. Using a property of logarithms we can rewrite the answer to the second integral. Upon doing this we can see that the answers really aren’t that different after all. In fact they only differ by a constant, and we can even find a relationship between c and k, as shown below.
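Continuing with the stand-in integrand above and using \(\ln(ab) = \ln a + \ln b\):
\[
\frac{1}{2}\ln\lvert 2x\rvert + k = \frac{1}{2}\bigl(\ln 2 + \ln\lvert x\rvert\bigr) + k = \frac{1}{2}\ln\lvert x\rvert + \Bigl(k + \frac{1}{2}\ln 2\Bigr),
\]
so the two answers really do differ only by a constant, with
\[
c = k + \frac{1}{2}\ln 2 .
\]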
So, without a proper understanding of the constant of integration, and in particular of the fact that using different integration techniques on the same integral will likely produce different constants of integration, we might never figure out why we got “different” answers for the integral.
Note as well that getting answers that differ by a constant doesn’t violate any principles of calculus. In fact, we’ve actually seen a fact that suggested this might happen. We saw a fact in the Mean Value Theorem section that said that if \(f'(x) = g'(x)\) then \(f(x) = g(x) + c\). In other words, if two functions have the same derivative then they can differ by no more than a constant.
This is exactly what we’ve got here. The two answers are functions that have exactly the same derivative, and as we’ve shown they really only differ by a constant.
There is another integral that also exhibits this behavior. There are actually three different methods for doing this integral.
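The integral itself is not reproduced in this text; a standard integral that fits the three methods described below (an assumption on our part) is
\[
\int \sin x \cos x \, dx .
\]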
Method 1:
This method uses a trig formula; using this formula (and a quick substitution) the integral can be evaluated directly.
Method 2:
This method uses a substitution.
Method 3:
Here is another substitution that could be done here as well. All three methods and their answers are sketched below.
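For the assumed integrand, the three methods give the following (each line is a sketch, not the original worked solution):
\[
\text{Method 1 (using } \sin 2x = 2\sin x\cos x\text{):}\quad \int \sin x\cos x\,dx = \int \tfrac{1}{2}\sin 2x \,dx = -\tfrac{1}{4}\cos 2x + c_{1},
\]
\[
\text{Method 2 (}u=\sin x\text{):}\quad \int \sin x\cos x\,dx = \int u\,du = \tfrac{1}{2}\sin^{2}x + c_{2},
\]
\[
\text{Method 3 (}u=\cos x\text{):}\quad \int \sin x\cos x\,dx = -\int u\,du = -\tfrac{1}{2}\cos^{2}x + c_{3}.
\]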
So, we’ve got three different answers, each with a different constant of integration. However, according to the fact above these three answers should only differ by a constant since they all have the same derivative. In fact they do only differ by a constant, and we’ll need a couple of trig identities to see it.
Start with the answer from the first method and use the double angle identity to rewrite the cosine of the double angle in terms of the sine; plugging this in gives the answer we got from the second method with a slightly different constant. In other words, the two constants of integration differ by a fixed amount. We can do a similar manipulation to get the answer from the third method: again starting with the answer from the first method, use the double angle identity and substitute in for the cosine instead of the sine. Doing this gives the answer from the third method with a different constant, and again the two constants can be related. Both manipulations are worked out in the sketch below.
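Under the same assumed integrand, the double angle identities \(\cos 2x = 1 - 2\sin^{2}x\) and \(\cos 2x = 2\cos^{2}x - 1\) relate the constants. Substituting the sine form into the first answer gives
\[
-\tfrac{1}{4}\cos 2x + c_{1} = -\tfrac{1}{4}\bigl(1 - 2\sin^{2}x\bigr) + c_{1} = \tfrac{1}{2}\sin^{2}x + \Bigl(c_{1} - \tfrac{1}{4}\Bigr),
\]
so \(c_{2} = c_{1} - \tfrac{1}{4}\). Substituting the cosine form instead gives
\[
-\tfrac{1}{4}\cos 2x + c_{1} = -\tfrac{1}{4}\bigl(2\cos^{2}x - 1\bigr) + c_{1} = -\tfrac{1}{2}\cos^{2}x + \Bigl(c_{1} + \tfrac{1}{4}\Bigr),
\]
so \(c_{3} = c_{1} + \tfrac{1}{4}\).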
So, what have we learned here? Hopefully we’ve seen that constants of integration are important and we can’t forget about them. We often don’t work carefully with them in a Calculus I course, yet without a good understanding of them we would be hard pressed to understand how different integration methods can apparently produce different answers.
What’s that track?
The size of the turkey track is distinctive because the turkey is our largest game bird. Note how it “toes-in” as it walks. The middle toe is slightly curved inward. MICHELLE HAYES/SPECIAL TO THE TIMES NEWS
Frequently we are asked to identify various tracks that people have taken pictures of. Of course, they want to know what animal left those tracks behind. Here are some things we need to know that will help us correctly identify the tracks:
Location, location, location. This is probably the most important part of the puzzle when figuring out what animal left the tracks. For herbivores traveling outside their “home” it’s dangerous, so good cover is important. Hedgerows and brambles offer protection because where there are herbivores there will be predators. A water source is not necessary because there is water in the plants they are eating.
Animals tend to take the easiest routes across landscapes, and that means using cover to move without being detected by predators. Small rodents and rabbits rarely use open fields because they offer little to no protection.
Transition zones are where two habitats meet such as a forest and a field. In transition zones a wide variety of vegetation and animals can be found. This intersection of two very different habitats will be for travel and cover. These zones are also interesting places to look for tracks.
Urban or rural? Habitat loss will cause unlikely animals to show up in a backyard. As animals lose habitat they are forced to adapt and become more urban in their behaviors. So having an animal show up “in town” is not always cause for alarm.
Scale. When photographing a track for identification, it’s important to use something like a coin, dollar bill or a ruler to give the track some scale. Tracks with no size comparison are a lot harder to identify. By using a coin, dollar bill or ruler we can rule out specific animals due to the size alone.
Where did it go? About 90 to 95 percent of the time an animal will use a “normal walking pattern” when moving through a habitat. Obviously, if an animal is being chased, the pattern and tracks change. The way an animal walks, hops, runs or bounds may help with an identification because we have a series of tracks.
Strange or unusual tracks in the snow and the mud occur because many animals walk by stepping into the print twice. For instance, a deer will step into the print made by a front left leg with the back left leg sometimes creating weird prints. Knowing how an animal normally walks helps us explain the unusual tracks that are found. It’s fun to speculate that it is some weird creature or a newly discovered creature, but every time it turns out to be something common.
Some surfaces such as mud or snow leave clearer and “truer” prints than dry soils and sand. It is important to know that the print might not be an exact copy of the animal’s foot. Melting snow or extremely wet soils will distort the tracks, making them look a lot bigger. Depending on the surface and how the animal walks, the track might not always be a giveaway to the animal that left it. Some things to think about when trying to identify a track are:
• How many toes are there?
• What does the bottom surface of the foot look like?
• When the creature walks, what parts of its feet touch the ground?
• Does the foot have nails? Hooves? Claws? If there are claws, do they touch the ground when the animal walks (as they do with dogs)? Or are the claws retracted (as they are in cats)?
We are always interested in trying to solve the riddles of those tracks left behind, so if you have track photographs, email them or stop in and let’s figure it out together!
Jeannie Carl is a naturalist at the Carbon County Environmental Education Center. The center is located at 151 E. White Bear Drive in Summit Hill. Call 570-645-8597 for information. |
Healthy rivers and wetlands support healthy communities.
They sustain people by supplying water for towns, farms and businesses and contribute to local economies through industries such as agriculture, fishing, real estate and tourism. Healthy rivers and wetlands make cities and towns more liveable and contribute to the physical and mental wellbeing of people.
They provide places for people to play, relax and connect with nature, and sustain Indigenous communities who have been continually connected to Country.
Rivers and wetlands cannot sustainably provide all of these benefits unless their ecological health is protected and maintained. Environmental watering is crucial in achieving this. |
Tracing Panama’s geological footprints
6 September 2016
It sits at the junction between two continents, separates two vast oceans and has a significant effect on global ocean currents and the climate across Northern Hemisphere.
Yet little is known about the history of this tiny strip of land between North and South America, known as the Panama Isthmus, which has shaped the Earth as we know it today.
In a talk at this year’s British Science Festival, Dr David Buchs, from the School of Earth and Ocean Sciences, will reveal how his team are carrying out detailed explorations of the geology of remote areas of Panama and Colombia to determine how, when and why the Panama Isthmus became fully emerged several million years ago.
In particular, Dr Buchs will explain how the team are piecing together the history of the Panama Isthmus by studying how plate tectonics and volcanism have affected the region, which has remained relatively unexplored due to the dense vegetation cover.
New geological data has suggested that the formation of the Panama Isthmus is more complicated than previously thought, and preliminary observations by Dr Buchs and his team have already revealed the occurrence of uncharted fault zones and ancient volcanoes in several parts of the region.
Speaking ahead of the event, Dr Buchs said: “In addition, terrestrial fauna in North and South Americas would still be isolated, without the possibility of easily migrating from one continent to another. This is a situation in stark contrast to the ecosystems as we know them today in the Americas.
“Understanding how and when the Panama Isthmus has formed is therefore of large, multidisciplinary significance, and we look forward to sharing our findings and experiences to date with the general public at the British Science Festival.”
The British Science Festival, which takes place in Swansea from the 6-9 September, is Europe's longest-standing national event which connects people with scientists, engineers, technologists and social scientists.
Dr David Buchs’s talk, ‘Tracing Panama’s geological footprints’, will take place on Wednesday 7 September from 12:00 – 13:00 in Lecture Theatre L, Faraday Building, Swansea University. |
The woodpecker spears him, and thereby saves many a dinner for himself.
[Illustration: Indian spear.]
Here is a primitive Indian fish-spear, such as the Penobscots used. To the end of a long pole two wooden jaws are tied loosely enough to spring apart a little under pressure, and midway between them, firmly driven into the end of the pole, is a point of iron. When a fish was struck, the jaws sprung apart under the force of the blow, guiding the iron through the body of the fish, which was held securely in the hollow above, that just fitted around his sides, and by the point itself.
[Illustration: Solomon Islander's spear.]
The tool with which the woodpecker fishes for a grub is very much the same. His mandibles correspond to the two movable jaws. They are knife-edged, and the lower fits exactly inside the upper, so that they give a very firm grip. In addition, the upper one is movable. All birds can move the upper mandible, because it is hinged to the skull. (Watch a parrot some day, |
“The concept of nationality - in music as in other fields - is not an
invariable constant but something which alters in response to historical
factors... one of the factors in the nineteenth century which influenced
the expression of nationality in music was the idea [that] nationalism...
not merely created a concept out of existing elements... but that it also
intervened in the existing situation and changed it.”
The epoch from the French Revolution to World War I comprises what is known as the Romantic Era, and in music this includes the years from approximately 1825 to 1900. During that span of time, political instability and turmoil turned Europe into the world’s goriest amphitheater of utter chaos and disruption, finally culminating in the ‘war to end all wars”. The Congress of Vienna, held during 1814-15 at the end of the Napoleonic wars, was an event in which the cruel redistribution of European boundaries was decided with disregard for the people it affected. Patriots who saw their political and cultural borders violated staged a series of rebellions and insurrections, some of which were triumphant. This kind of political feeling came to be known as Nationalism. Nationalism dominated feeling and thought to such a great extent in the nineteenth century that it became a decisive power in the Romantic movement. The tensions between subjugated nations grappling for democracy and their proud conquerors gave way to sentiment that could be expressed in the arts and music.
“Nationalism” constitutes a “belief which, in the course of the nineteenth century... became the governing idea without always being held by those in government... the belief that it was to his nation - and not to a creed, a dynasty, or a class - that a citizen owed the first duty in a clash of loyalties.” This political claim, fused with the idea that it was “the spirit of the people” (der Volksgeist) which provided inspiration in the arts and life, was the dominant attitude of the bourgeois nationalism of the nineteenth century.
Nationalism itself underwent a transformation in the nineteenth century, in that during the first half of the century, a “nationalist” was also a “citizen of the world”, but by the latter half of the century nationalism had turned much more aggressive, with the oppressing nations initiating this change. Unfortunately, the attitude of the oppressed was equally affected by this. As nationalism matured, in music the change is apparent in works written primarily after 1860. It is for this reason that serious consideration must be given to the fact that various types of political evolution were achieved in each country that in turn affected musical nationalism.
At this time in Europe, new nations such as Czechoslovakia, Poland, Norway, Finland and Hungary had been formed by the unification of old empires. In England and France, power had gone from the monarchy to democracy, while in Russia, the revolution had failed, leaving the Tsarist regime as strong as ever. It is interesting to observe that in politically strong nations such as Germany, France and Austria, composers took little interest in political themes and subject matter.
Nationalism in music usually refers to the various national schools that consciously tried to separate themselves from the standards set in the Classical period by the French, Italian and especially the German traditionalists. This formation of a national school, conceived to differentiate itself from the pan-European tradition, was itself a pan-European tradition, as was the entire bourgeois nationalist movement of the nineteenth century. Although distinct national styles are discernible in music from the Renaissance onwards, it was not until the nineteenth century that Nationalism came to dominate Europe as a mode of thought.
During the nineteenth century the popular view was taken that folk music “is always and above all the music of a nation.” However, what makes this ambiguous and ill-founded is the fact that when one discusses folk music with references to “the people”, the expression “the people” usually refers to the lower strata of the population known universally as peasants, and also to the concept of “the nation” as a whole. However, nineteenth century nationalism was a phenomena of the bourgeois, not an expression of the peasant’s self-awareness. This use of folk music by the bourgeois was more to reassure themselves of the authenticity of their own patriotism as well as an appeal across the social barriers of the time. (For the nobility, it was not the national loyalties that counted, but dynastic ones.) For the bourgeois, national character was the “primary and essential quality of folk music... and that folk music expresses the spirit of a people.”
In the nineteenth century, composers not only expressed themselves but chose the style in which to do it. In general, the Romantic composer was slow in discovering and using folk songs in their music. At first they were used in brief works, such as a peasant dance like a mazurka, but they gradually came to be used in symphonic works, although the lush, vivid orchestrations tended to mask their simplistic character. Because folk music is basically monodic, it resisted assimilation into the well-established formulas of major-minor tonality, and for that very reason it challenged composers to experiment with unusual harmonies. This in turn affected harmonies in music unconnected with folk-oriented music, and thus manifested itself in the mainstream of developments. This experimentation was itself a consequence of a specific, well-defined problem that was encountered by Romantic composers of the era, and was not the result of random factors.
Occasionally composers, attracted by another country’s national idioms, would for specific reasons use those idiom in their music for an effect. This practice is known as Exoticism and was a strong trend in the nineteenth century. Exoticism is “a search for new effects from the folk music of other lands and peoples, generally those considered to be less spoiled by civilization; this even led to the phenomenon of Russian nationalists who proclaimed their musical independence from western European models by exploiting the exotica of the peoples of central Asia who had recently been conquered by the Tsarist imperium.”
Another way nationalism made an impact in the music of the nineteenth century is from an aesthetic standpoint. The dominant principle was one of novelty and originality. The tradition of imitation was now condemned, and unfamiliar music was now considered to be authentic. However, with art music being confronted with the dilemma of having to abandon aristocratic and esoteric goals to become democratic and popular, the ideal of popularity came into direct conflict with the ideal of novelty and originality, which required a great deal of intellectual understanding and was not appreciated by the general public. Musical nationalism provided the “appearance of familiarity” so the composer could be original and artistic while the listener could identify with the music on a patriotic level.
Paradoxically, folk music is not always local or regional in its coloring or character; some stylistic traits felt to be specific to a particular country are actually common to “national” music in general, since the study of folk music at the time was carried out in each country only at a national, not a comparative, level. As much as composers exploited peasant music for their nationalistic works, “the ‘spirit of the people’ that was thought to speak in the music of the ‘national schools’ was heard only by the educated, not the ‘people’ themselves.”
Longyear, Rey M. Nineteenth-Century Romanticism in Music.
Poultney, David. Studying Music History.
Ulrich, Homer. Music - A Design for Listening, 3rd ed.
Machlis, Joseph. The Enjoyment of Music.
Ammer, Christine. Harper’s Dictionary of Music.
Westrup and Harrison. The New College Encyclopedia of Music.
Culshaw, John. A Century of Music.
Dahlhaus, Carl. Between Romanticism and Modernism. |
Lodz Ghetto during WWII and migrants in Europe, 2015
Lesson 1: Migrants to England in the Middle Ages
Click on this document to read an article about immigrants moving to England in the Middle Ages (History Extra)
Lesson 2: The Irish migration to the USA in the 19th and 20th centuries
Prep activity (1)
Prep activity (2)
Using the video above, complete and learn the lyrics of the song "No Irish Need Apply".
Lesson 3: Ellis Island, the "Island of Tears"
Watch this video and take notes for the test
Using these posters from the French Musée de l’histoire de l’immigration, create a poster about migration to the USA in the 19th and 20th centuries
- Find a picture online
- Find a slogan / motto
- Create a poster combining text and archives using Canva
- Don’t forget to mention the references of your document
- Publish it on this padlet
Lesson 2: What is it like to be a refugee?
I. The European Refugee Crisis Explained
II. Against All Odds
- Open the game "Against All Odds" by clicking on the image above.
- Click on "Open in full screen".
- Click on "Play Against All Odds".
- Choose your character by using the arrows.
- Choose a team name and write it in your exercise book.
- Click on "Register my game" and follow the instructions.
- Let's play and learn, but don't forget to take notes during this exercise. Next week, you will have a test about it.
Prepare an oral session
- Tool: Prezi or another presentation tool
- Resources: web facts and the serious game "Against All Odds"
- Instructions: introduce the story of a refugee
- Why do they flee?
- A difficult journey
- The travel conditions
- How do they cross the borders?
- Finding shelter
- Compare the journeys of Vjolica and Michael
- How do they begin a new life in the host countries?
- Refugees and the extreme right
| Criterion | Points |
| --- | --- |
| Timing (5 minutes) | 2 |
| Quality of the contents (organisation and information) | 3 |
| Oral and language (you don't read your notes and your English is good) | 3 |
| Quality of the presentation (with Prezi or another tool) | 2 |
New ice core research suggests that, while the changes are dramatic, they cannot be attributed with confidence to human-caused global warming, said Eric Steig, a University of Washington professor of Earth and space sciences.
Previous work by Steig has shown that rapid thinning of Antarctic glaciers was accompanied by rapid warming and changes in atmospheric circulation near the coast. His research with Qinghua Ding, a UW research associate, showed that the majority of Antarctic warming came during the 1990s in response to El Niño conditions in the tropical Pacific Ocean.
Their new research suggests the '90s were not greatly different from some other decades – such as the 1830s and 1940s – that also showed marked temperature spikes.
"If we could look back at this region of Antarctica in the 1940s and 1830s, we would find that the regional climate would look a lot like it does today, and I think we also would find the glaciers retreating much as they are today," said Steig, lead author of a paper on the findings published online April 14 in Nature Geoscience.
The researchers' results are based on their analysis of a new ice core from the West Antarctic Ice Sheet Divide that goes back 2,000 years, along with a number of other ice core records going back about 200 years. They found that during that time there were several decades that exhibited climate patterns similar to those of the 1990s.
The most prominent of these in the last 200 years – the 1940s and the 1830s – were also periods of unusual El Niño activity like the 1990s. The implication, Steig said, is that rapid ice loss from Antarctica observed in the last few decades, particularly the '90s, "may not be all that unusual."
The same is not true for the Antarctic Peninsula, the part of the continent closer to South America, where rapid ice loss has been even more dramatic and where the changes are almost certainly a result of human-caused warming, Steig said.
But in the area where the new research was focused, the West Antarctic Ice Sheet, it is more difficult to detect the evidence of human-caused climate change. While changes in recent decades have been unusual and at the "upper bound of normal," Steig said, they cannot be considered exceptional.
"The magnitude of unforced natural variability is very big in this area," Steig said, "and that actually prevents us from answering the questions, 'Is what we have been observing exceptional? Is this going to continue?'"
He said what happens to the West Antarctic Ice Sheet in the next few decades will depend greatly on what happens in the tropics.
The West Antarctic Ice Sheet is made up of layers of ice, greatly compressed, that correspond with a given year's precipitation. Similar to tree rings, evidence preserved in each layer of ice can provide climate information for a specific time in the past at the site where the ice core was taken.
In this case, the researchers detected elevated levels of the isotope oxygen 18 in comparison with the more commonly found oxygen 16. Higher levels of oxygen 18 generally indicate higher air temperatures.
Levels of oxygen 18 in ice core samples from the 1990s were more elevated than for any other time in the last 200 years, but were very similar to levels reached during some earlier decades.
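As a rough illustration of how such isotope measurements are expressed, the sketch below (Python, illustrative only) converts raw 18O/16O ratios into the standard delta-18-O notation, reported in parts per thousand relative to the VSMOW reference ratio. The two sample ratios are hypothetical; only the formula and the reference value are standard.

```python
# Minimal sketch: expressing an oxygen-isotope ratio in the delta-18-O
# notation used for ice-core temperature proxies. The layer ratios below
# are hypothetical; VSMOW is the standard reference ratio.
R_VSMOW = 2005.2e-6          # 18O/16O of Vienna Standard Mean Ocean Water

def delta_18O(r_sample: float) -> float:
    """Return delta-18-O in per mil relative to VSMOW."""
    return (r_sample / R_VSMOW - 1.0) * 1000.0

# A layer from a warmer decade is slightly enriched in oxygen 18 (less
# negative delta) compared with a layer from a colder decade.
warm_layer = 1938.0e-6
cold_layer = 1934.0e-6
print(f"warm layer: {delta_18O(warm_layer):.1f} per mil")
print(f"cold layer: {delta_18O(cold_layer):.1f} per mil")
```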
The work was funded by the National Science Foundation Office of Polar Programs.
For more information, contact Steig at 206-685-3715, 206-543-6327 or [email protected].
An image of a section of the West Antarctic Ice Sheet Divide core is available from [email protected].
Co-authors are Qinghua Ding, Marcell Küttel, Peter Neff, Ailie Gallant, Spruce Schoenemann, Bradley Markle, Tyler Fudge, Andrew Schauer and Rebecca Teel of the University of Washington; James White and Bruce Vaughn of the University of Colorado; Summer Rupper, Landon Burgener and Jessica Williams of Brigham Young University; Thomas Neumann of NASA's Goddard Space Flight Center; Paul Mayewski, Daniel Dixon and Elena Korotkikh of the University of Maine; Kendrick Taylor of Desert Research Institute, Reno, Nev.; Georg Hoffmann of the Centre d'Etudes de Saclay in France and Utrecht University in The Netherlands; and David Schneider of the National Center for Atmospheric Research, Boulder, Colo.
Vince Stricherz | Newswise
Thanks to movies and nature videos, many people know that bizarre creatures live in the ocean’s deepest, darkest regions. They include viperfish with huge mouths and big teeth, and anglerfish, which have bioluminescent lures that make their own light in a dark world.
However, the world’s deepest-dwelling fish — known as a hadal snailfish — is small, pink and completely scaleless. Its skin is so transparent that you can see right through to its liver. Nonetheless, hadal snailfish are some of the most successful animals found in the ocean’s deepest places.
Our research team, which includes scientists from the United States, United Kingdom and New Zealand, found a new species of hadal snailfish in 2014 in the Mariana Trench. It has been seen living at depths of almost 27,000 feet (8,200 meters). We recently published its scientific description and officially christened it Pseudoliparis swirei. Studying its adaptations for living at such great depths has provided new insights about what kinds of life can survive in the deep ocean.
Exploring the hadal zone
We discovered this fish during a survey of the Mariana Trench in the western Pacific Ocean. Deep-sea trenches form at subduction zones, where one of the tectonic plates that form the Earth's crust slides beneath another plate. They reach depths of 20,000 to 36,000 feet below the ocean's surface. The Mariana Trench is deeper than Mount Everest is tall.
Ocean waters in these trenches are known as the hadal zone. Our team set out to explore the Mariana Trench from top to bottom in an effort to understand what lives in the hadal zone — how organisms there interact; how they survive under enormous pressure created by six to seven miles of water above them; and what role hadal trenches play in the global ocean ecosystem.
Getting to the bottom
Sending instruments to the ocean floor is pretty straightforward. Bringing them back up is not. Researchers studying the deep sea often use nets, cameras or robots connected to ships by cables. But a 7-mile-long cable, even if it is very strong, can break under its own weight.
We used free-falling landers — mechanical platforms that carry instruments and steel weights and are not connected to the ship. When we deploy landers, it takes about four hours for them to sink to the bottom. To call them back, we use an acoustic signal that causes them to release their ballast and float to the surface. Then we search for them in the water (each carries an orange flag), retrieve them and collect their data.
Life in the trenches
Hadal trenches are named after Hades, the Greek god of the underworld. To humans, they are harsh, extreme environments. Pressure is as high as 15,000 pounds per square inch — equivalent to a large elephant standing on your thumb, and 1,100 times greater than atmospheric pressure at sea level. Water temperatures are as low as 33 degrees Fahrenheit (1 degree Celsius). Yet, a host of animals thrive under these conditions.
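The pressure figures quoted above follow directly from the weight of the overlying water column. The sketch below (Python, illustrative) estimates hydrostatic pressure at the snailfish's depth limit and near the trench bottom using P = rho * g * h with a typical seawater density; it ignores the slight compressibility of seawater, so real values run a few percent off.

```python
# Minimal sketch: hydrostatic pressure at hadal depths from P = rho * g * h.
rho_seawater = 1025.0      # kg/m^3, typical seawater density (assumed)
g = 9.81                   # m/s^2
atm = 101_325.0            # Pa per standard atmosphere
psi = 6_894.76             # Pa per pound per square inch

for depth_m in (8_200, 11_000):          # snailfish limit, Challenger Deep
    p = rho_seawater * g * depth_m       # pressure from overlying water, Pa
    print(f"{depth_m:>6} m: {p / atm:5.0f} atm  ~{p / psi:6.0f} psi")
# Roughly 800 atm at 8,200 m and about 1,100 atm (~16,000 psi) at 11,000 m,
# consistent with the figures quoted above.
```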
Our team put down cameras baited with mackerel to attract mobile animals in the trench. At shallower depths, from approximately 16,000 to 21,000 feet (5,000-6,500 meters) on the abyssal plain, we saw large fish such as rattails, cusk eels and eel pouts. At the upper edges of the trench, below 21,000 feet, we found decapod shrimp, supergiant amphipods (swimming crustaceans), and small pink snailfish. This newly discovered species of snailfish, which lives at depths approaching 27,000 feet (8,200 meters), is now the world's deepest-known living fish.
At the trench’s greatest depths, near 36,000 feet (11,000 meters), we saw only large swarms of small scavenging amphipods, which are somewhat similar to garden pill bugs. Amphipods live all over the ocean but are highly abundant in trenches. The Mariana snailfish that we filmed were eating these amphipods, which make up most of their diet.
The Mariana Trench houses the ocean’s deepest point, at Challenger Deep, named for the HMS Challenger expedition, which discovered the trench in 1875. Their deepest sounding, at nearly 27,000 feet (8,184 meters), was the greatest known ocean depth at that time. The site was named Swire Deep, after Herbert Swire, an officer on the voyage. We named the Mariana snailfish Pseudoliparis swirei in his honor, to acknowledge and thank crew members who have supported oceanographic research throughout history.
Life under pressure
Hadal snailfish have several adaptations to help them live under high pressure. Their bodies do not contain any air spaces, such as the swim bladders that bony fish use to ascend and descend in the water. Instead, hadal snailfish have a layer of gelatinous goo under their skins that aids buoyancy and also makes them more streamlined.
Hadal animals have also adapted to pressure on a molecular level. We’ve even found that some enzymes in the muscles of hadal fish are adapted to function better under high pressure.
Whitman College biologist Paul Yancey, a member of our team, has found that deep-sea fish use a molecule called trimethyl-amine oxide (TMAO) to help stabilize their proteins under pressure.
However, to survive at the highest water pressures in the ocean, fish would need so much TMAO in their systems that their cells would reach a higher solute concentration than seawater. At that concentration, water would tend to flow into the cells by osmosis, the process in which water moves toward the side with the higher solute concentration until the two sides equalize. To keep these highly concentrated cells from rupturing, fish would have to continually pump water out of their cells to survive.
The evidence suggests that fish don’t actually live all the way to the deepest ocean depths because they are not able to keep enough TMAO in their cells to combat the high pressure at that depth. This means that around 27,000 feet (8,200 meters) may be a physiological depth limit for fish.
There may be fish that live at levels as deep, or even slightly deeper, than the Mariana snailfish. Different species of hadal snailfish are found in trenches worldwide, including the Kermadec Trench off New Zealand, the Japan and Kurile-Kamchatka trenches in the northwestern Pacific, and the Peru-Chile Trench. As a group, hadal snailfish seem to have found an unlikely haven in a place named for the proverbial hell. |
What is AMS?
Accelerator mass spectrometry (AMS) is a technique for measuring long-lived radionuclides that occur naturally in our environment. AMS uses a particle accelerator in conjunction with ion sources, large magnets, and detectors to separate out interferences and count single atoms in the presence of 1×10^15 (a thousand million million) stable atoms. At PRIME Lab we measure six different cosmogenic radionuclides. They are used for a wide variety of dating and tracing applications in the geological and planetary sciences, archaeology, and biomedicine.
The following is a brief description of each element of the AMS system.
The ion source produces a beam of ions (atoms that carry an electrical charge) from a few milligrams of solid material. The element is first chemically extracted from the sample (for example, a rock, rain water, a meteorite), then it is loaded into a copper holder and inserted into the ion source through a vacuum lock. Atoms are sputtered from the sample by cesium ions, which are produced on a hot spherical ionizer and focused to a small spot on the sample. Negative ions produced on the surface of the sample are extracted from the ion source and sent down the evacuated beam line towards the first magnet. At this point the beam is about 10 microamps, which corresponds to about 10^13 ions per second (mostly the stable isotopes).
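As a quick check on the numbers just quoted, the beam current can be converted into a particle rate by dividing by the charge carried per ion; for singly charged negative ions that is one elementary charge. The sketch below (Python) does the conversion and lands within an order of magnitude of the figure given above.

```python
# Minimal sketch: converting the quoted ion-source beam current into a
# particle rate, assuming singly charged (1-) negative ions.
e = 1.602e-19          # C, elementary charge
beam_current = 10e-6   # A, i.e. the ~10 microamps quoted above

ions_per_second = beam_current / e
print(f"{ions_per_second:.1e} ions per second")   # ~6e13, of order 10^13
```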
The injector magnet bends the negative ion beam by 90° to select the mass of interest, a radioisotope of the element inserted in the sample holder, and reject the much-more-intense neighboring stable isotopes. Several vacuum pumps remove all the air from the beamline so the beam particles have a free path. There are still lots of molecules and isobars (isotopes of neighboring elements having the same mass) that must be removed by more magnets after the accelerator.
The tandem accelerator consists of two accelerating gaps with a large positive voltage in the middle. Think of it as a bridge that spans the inside of a large pressure vessel containing CO2 and N2 insulating gas at a pressure of over 10 atmospheres. The bridge holds two long vacuum tubes with many glass (electrically insulating) sections. The center of the accelerator, called the terminal, is charged to a voltage of up to 10 million volts by two rotating chains. The negative ions traveling down the beam tube are attracted (accelerated) towards the positive terminal. At the terminal they pass through an electron stripper, either a gas or a very thin carbon foil, and emerge as positive ions. These are repelled from the positive terminal, accelerating again to ground potential at the far end. The name tandem accelerator comes from this dual acceleration concept. The final velocity is a few percent of the speed of light or about 50 million miles per hour.
The analyzing and switching magnets select the mass of the radionuclide of interest, further reducing the intensity of neighboring stable isotopes. In addition, they eliminate molecules completely by selecting only the highly charged ions that are produced in the terminal stripper. (Highly charged molecules are unstable since they are missing the electrons that bind the atoms together). Isotope ratios are measured by alternately selecting the stable and radioisotopes with the injector and analyzing magnets.
The electrostatic analyzer is a pair of metal plates at high voltage that deflects the beam to the left by 20 degrees. This selects particles based on their energy and thus removes the ions that happen to receive the wrong energy from the accelerator.
The gas ionization detector counts ions one at a time as they come down the beamline. The ions are slowed down and come to rest in propane gas. As they stop, electrons are knocked off the gas atoms. These electrons are collected on metal plates, amplified, and read into the computer. For each atom, the computer determines the rate of energy loss and from that deduces the nuclear charge (element atomic number) to distinguish interfering isobars. |
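In simplified terms, the isotope ratio that an AMS run reports can be thought of as the detector's count rate for the rare isotope divided by the atom rate implied by the stable-isotope beam current. The sketch below (Python) shows that arithmetic with hypothetical counts, current and charge state, and it ignores the transmission and detection efficiencies that a real analysis would correct for.

```python
# Minimal sketch: forming an isotope ratio from single-ion counts of the
# radioisotope and a stable-isotope beam current. All numbers, and the 3+
# charge state, are hypothetical; efficiencies are ignored.
e = 1.602e-19            # C, elementary charge

rare_counts = 1200       # radioisotope ions counted in the detector
count_time_s = 600       # counting interval, seconds
stable_current_A = 5e-6  # stable-isotope current after the magnets, amps
charge_state = 3         # charge state selected after the terminal stripper

rare_rate = rare_counts / count_time_s                 # atoms per second
stable_rate = stable_current_A / (charge_state * e)    # atoms per second
ratio = rare_rate / stable_rate
print(f"radioisotope/stable ratio ~ {ratio:.1e}")      # ~1.9e-13 here
```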
Basic Rules of Probability Assignment Help
Frequently, we wish to calculate the probability of an event from the known probabilities of other events. This lesson covers some important rules that simplify those calculations. Probability is defined as a number between 0 and 1 that represents the likelihood of an event occurring. A probability of 0 means there is no chance of that event occurring, while a probability of 1 means the event is certain to occur. One especially important use of these probability rules is the set of conclusions that can be drawn if we assume that a number of outcomes are equally likely. If there are only n such outcomes possible in a given situation, and they are pairwise mutually exclusive (no two can occur at once) and equally likely, then each must have probability 1/n.
We can ask for the probability of an event. For two events A and B, the joint event of both occurring is described by the joint probability P(A and B). The conditional probability P(A | B) expresses the probability of event A given that event B has occurred. Probability theory is the branch of mathematics concerned with probability, the analysis of random phenomena. Its central objects are random variables, stochastic processes, and events: mathematical abstractions of non-deterministic events or measured quantities that may either evolve over time or occur singly, in an apparently random fashion.
Two events are mutually exclusive, or disjoint, if they cannot occur at the same time. The probability that event A occurs, given that event B has occurred, is called a conditional probability and is denoted by the symbol P(A | B). We can apply the appropriate addition rule. Addition Rule 1: when two events A and B are mutually exclusive, the probability that A or B will occur is the sum of the probability of each event, P(A or B) = P(A) + P(B). When two events A and B are not mutually exclusive, there is some overlap between them, and the general rule P(A or B) = P(A) + P(B) - P(A and B) applies. A probability is a numerical value assigned to a given event A; it is written P(A) and describes the long-run relative frequency of that event. The first two basic rules of probability are that any probability lies between 0 and 1 (0 ≤ P(A) ≤ 1), and that the probability of the entire sample space is 1.
If the outcome of the first event has no effect on the probability of the second event, then the two events are called independent. The fourth basic rule of probability is known as the multiplication rule, and applies only to independent events: P(A and B) = P(A) × P(B). Goal: the purpose of the following example is to illustrate the basic concepts and laws of probability. Events are defined in terms of the basic outcomes of an experiment; for example, we might define event O as an odd score on a die, consisting of the three basic outcomes 1, 3 and 5, while event L might be defined as the three lowest scores, 1, 2 and 3. The event containing all outcomes except those belonging to an event A is denoted ¬A.
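The die events just defined make a compact worked example. The sketch below (Python, illustrative only) computes the joint, conditional and complement probabilities for events O and L, shows why the general addition rule needs its overlap correction, and notes why the multiplication rule does not apply to these two events.

```python
from fractions import Fraction

# Worked version of the die example above: O = odd score {1, 3, 5},
# L = three lowest scores {1, 2, 3}, on a fair die with six equally
# likely outcomes (each with probability 1/6).
outcomes = {1, 2, 3, 4, 5, 6}
O = {1, 3, 5}
L = {1, 2, 3}

def prob(event):
    return Fraction(len(event & outcomes), len(outcomes))

p_O, p_L = prob(O), prob(L)
p_both = prob(O & L)              # joint probability P(O and L) = 1/3
p_either = p_O + p_L - p_both     # general addition rule -> 2/3
p_O_given_L = p_both / p_L        # conditional probability P(O | L) = 2/3
p_not_O = 1 - p_O                 # complement, written ¬O above

print(p_O, p_L, p_both, p_either, p_O_given_L, p_not_O)
# 1/2 1/2 1/3 2/3 2/3 1/2

# O and L are NOT independent, since P(O and L) != P(O) * P(L). The
# multiplication rule applies to independent events, e.g. two separate
# rolls both showing an odd score: 1/2 * 1/2 = 1/4.
```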
According to social contract theory, the state exists to enforce the rules necessary for social living, while morality consists in the whole set of rules that facilitate social living (Rachels, p. 144). Thus government is needed to enforce the basic rules of social living (e.g. do not rob people, do not break contracts), while morality may include some rules that are important for social living but lie outside the scope of the state (for instance, "Don't insult people for no reason"). The theory of probability itself has developed very rapidly over the last thirty years, and a basic feature of the theories put forward by its various schools is the development of criteria for making the best possible use of observations in statistical estimation and testing.
Above we have discussed how to determine the probabilities of basic outcomes, such as a tail when a coin is tossed or a six when a die is rolled. We now develop rules for finding the probabilities of events when the probabilities of the basic outcomes are known, and for finding the probabilities of more complex events from the probabilities of simpler ones. A probability is a number that reflects the chance or likelihood that a particular event will occur: a probability of 0 indicates that there is no chance the event will occur, whereas a probability of 1 indicates that the event is certain to occur.
To summarize: for two events A and B, the joint probability P(A and B) describes both events occurring together; the conditional probability P(A | B) gives the probability of A once B is known to have occurred; and the two events are independent when the outcome of one has no effect on the probability of the other. Events are defined in terms of the basic outcomes of an experiment, and every probability lies between 0 (the event cannot occur) and 1 (the event is certain to occur).
By Yolanda Smith, BPharm
Thrombosis is the process of a blood clot, also known as a thrombus, forming in a blood vessel. This clot can block or obstruct blood flow in the affected area, as well as cause serious complications if the clot moves to a crucial part of the circulatory system, such as the brain or the lungs.
It is normal for the body to produce clotting factors such as platelets and fibrin when a blood vessel is injured, to prevent an excessive loss of blood from the body. If this response is excessive, it can obstruct the flow of blood and form an embolus that moves around the bloodstream.
Thrombosis can be broadly classified as either venous thrombosis or arterial thrombosis, according to where the thrombus presents in the body.
Venous thrombosis occurs in the veins and is categorized further according to where it occurs including:
- Deep vein thrombosis
- Portal vein thrombosis
- Renal vein thrombosis
- Jugular vein thrombosis
- Budd-Chiari Syndrome
- Paget-Schroetter disease
- Cerebral venous sinus thrombosis
Arterial thrombosis, also known as atherothrombosis due to its association with atheroma rupture, occurs in the arteries. The blood stasis caused by atrial fibrillation may also cause this type of thrombosis.
There are multiple causes for stroke, including ischemia, hemorrhage and embolus in the brain. Stroke due to a blood clot in the brain usually builds gradually around an atherosclerotic plaque.
Myocardial infarction may also be caused by a thrombus in the coronary artery and is associated with ischemia. The reduced oxygen supply to the heart cells, as a result of the blockage, results in cell death and myocardial infarction.
There are three main causes of thrombosis: hypercoagulability, injury to the endothelial cells of the blood vessel wall and abnormal flow of the blood.
Hypercoagulability, also known as thrombophilia, refers to higher levels of coagulation factors in the blood that increase susceptibility to thrombosis. This is usually a result of genetics or disorders of the immune system.
Injury to the endothelial cells on the wall of blood vessels after trauma, surgery or infection can also precipitate coagulation and possible thrombosis.
Abnormal blood flow, such as venous stasis following heart failure or long periods of sedentary behavior, can also cause thrombosis to occur. Additionally, some other health conditions can affect blood flow and lead to the production of a thrombus, including atrial fibrillation and cancer.
A common complication of thrombosis is hypoxia, due to obstruction of the affected artery or vein. When the majority of the blood vessel is blocked, the oxygen supply to the tissue it serves is reduced, resulting in increased production of lactic acid.
Additionally, in some cases the blood clot may break free and travel around the body, a process known as embolization. This can obstruct the blood flow to essential organs, such as the brain or the lungs, reducing or inhibiting oxygen and blood flow with severe repercussions.
Prevention and Treatment
As stasis of the blood is associated with increased risk of thrombosis, it is important that movements are made regularly, particularly if susceptible individuals are likely to be sedentary for long periods of time, such as in bed or on an airplane.
For people at high risk of venous thromboembolism, heparin can be administered to reduce the risk of pulmonary embolism, although it is associated with a higher susceptibility to bleeding because of the reduced efficacy of the clotting factors. For this reason, heparin is of greater use in the treatment, rather than the prevention, of thrombosis.
A simpler way to prevent the formation of deep vein thrombosis is the use of compression stockings, which mechanically support the vein to inhibit the formation of blood clots. This is particularly beneficial as there are few side effects.
Anticoagulants may slightly increase the risk of major bleeding, but they have been found to offer a benefit in both the prevention and treatment of thrombosis.
Last Updated: May 21, 2015 |
In 2015, a video of a rough takedown and arrest, in which a police officer (referred to in schools as a "School Resource Officer") at a South Carolina school flips over a high school student and her desk, brought the "School-to-Prison Pipeline" into the headlines. The School-to-Prison Pipeline refers to the school policies and procedures that drive many of our nation's schoolchildren into a pathway that begins in school and ends in the criminal justice system.
Behavior that once led to a trip to the principal’s office and detention, such as school uniform violations, profanity and “talking back,” now often leads to suspension, expulsion, and/or arrest. Data from the Department of Education’s Office for Civil Rights shows that black students are suspended and expelled at a rate three times greater than their white peers. Similarly, students with disabilities are more than twice as likely to receive out-of-school suspensions as students with no disabilities and LGBTQ youth are much more likely than their peers to be suspended or expelled.
This lesson provides an opportunity for students to understand more about the School-to-Prison Pipeline, learn about its history and evolution and begin to plan some activities to teach others about it and take action. |
The units of this book explain Cells, Heredity, Diversity and Change, Living Things, Human Body Systems, and Ecology.
Math at Hand is a resource book. That means you're not expected to read it from cover to cover. Instead, you'll want to keep it handy for those times when you're not clear about a math topic and need a place to look up definitions, procedures, explanations, and rules.
This book addresses key science topics including: scientific investigation; working in the lab; life science; earth science; physical science; natural resources and the environment; science, technology, and society. An ideal resource in science class, during lab time, and at home, this book also includes a handy almanac with tables, charts and graphs, test-taking and researching skills, science timelines and glossaries, and more.
ScienceSaurus is a resource book and a student handbook that offers step-by-step guidelines and clear examples of key science topics that include Life Science, Earth Science and Physical Science.
The SkillsBook provides you with opportunities to practice editing and proofreading skills presented in the Student Edition of Texas Write Source. That book contains guidelines, examples, and models to help you complete your work in the SkillsBook. Each SkillsBook activity includes brief instruction on the topic and examples showing how to complete that activity. You will be directed to the page numbers in the Student Edition of Texas Write Source for additional information and examples.
Select your format based upon: 1) how you want to read your book, and 2) compatibility with your reading tool. To learn more about using Bookshare with your device, visit the "Using Bookshare" page in the Help Center.
Here is an overview of the specialized formats that Bookshare offers its members with links that go to the Help Center for more information.
- Bookshare Web Reader - a customized reading tool for Bookshare members offering all the features of DAISY with a single click of the "Read Now" link.
- DAISY (Digital Accessible Information System) - a digital book file format. DAISY books from Bookshare are DAISY 3.0 text files that work with just about every type of access technology that reads text. Books that contain images will have the download option of ‘DAISY Text with Images’.
- BRF (Braille Refreshable Format) - digital Braille for use with refreshable Braille devices and Braille embossers.
- MP3 (Mpeg audio layer 3) - Provides audio only with no text. These books are created with a text-to-speech engine and spoken by Kendra, a high quality synthetic voice from Ivona. Any device that supports MP3 playback is compatible.
- DAISY Audio - Similar to the Daisy 3.0 option above; however, this option uses MP3 files created with our text-to-speech engine that utilizes Ivona's Kendra voice. This format will work with Daisy Audio compatible players such as Victor Reader Stream and Read2Go. |
Parents often complain that child-rearing is exhausting, but consider the poor sea otter mom. By the time a sea otter pup is weaned, its mother may be so depleted physiologically that she is unable to survive the stress of a minor wound or infection. Sea otter researchers have a term for it—"end-lactation syndrome" —and believe it accounts for high mortality rates among female sea otters in some areas.
To understand why this happens, UC Santa Cruz biologist Nicole Thometz set out to quantify the energy demands of a growing sea otter pup. Her results, published June 11 in the Journal of Experimental Biology, reveal just how much it costs a sea otter mom to raise her pup.
High energy requirements
Even without a pup, adult sea otters have remarkably high energy requirements, eating about a quarter of their body weight in seafood every day. As the smallest marine mammals, living in cold coastal waters with thick fur but no insulating layer of blubber, sea otters need to maintain a high metabolic rate to stay warm. They may spend 20 to 50 percent of the day foraging for food. For an adult female, a pup adds new demands to an energy budget that is already challenging.
Thometz and her colleagues found that the daily energy demands of a female sea otter jump by 17 percent in the first few weeks after the birth of a pup and grow steadily as the pup gets larger. Eventually, the mother's daily energy demands are 96 percent higher, nearly twice what they are when she doesn't have a pup. In other words, she has to find almost twice as much prey every day to keep herself and her pup fed.
"These fundamentally high energy demands are likely the underlying reason why we see so much mortality among prime age females in the middle of their range, where the density of the sea otter population is highest and resources are limited," said Thometz, who conducted the study as part of her Ph.D. thesis research at UCSC. She is currently a postdoctoral researcher in the mammalian physiology lab of coauthor Terrie Williams, professor of ecology and evolutionary biology.
The UCSC researchers teamed up with the Monterey Bay Aquarium to study the energetics of growing sea otter pups. The aquarium's sea otter program takes in pups found stranded in the wild and uses otters at the aquarium as surrogate mothers to raise the pups until they can be returned to the wild. Thometz worked with seven sea otter pups, measuring their oxygen consumption to determine metabolic rates at different activity levels throughout their development.
Observations in the wild
The researchers also observed sea otter pups in the wild to see how much time they spent each day resting, grooming, foraging, and so on. Combining these "activity budgets" in the wild with the metabolic rates measured in pups at the aquarium enabled Thometz to calculate average daily energy demands for pups at five developmental stages. The daily energy demands of adult female sea otters without pups were calculated using previous research conducted by Williams and coauthor Michelle Staedler of the Monterey Bay Aquarium.
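A minimal sketch of this kind of calculation is shown below (Python). The activity fractions and metabolic rates are hypothetical placeholders, not the study's measured values; the point is only to show how an activity budget and activity-specific rates combine into a daily energy demand, and how a large pup's own demand pushes the combined total toward roughly double the female's solo figure.

```python
# Minimal sketch: daily energy demand from an activity budget and
# activity-specific metabolic rates. All numbers are hypothetical.
activity_budget = {           # fraction of the day in each activity
    "rest":   0.40,
    "groom":  0.15,
    "forage": 0.35,
    "swim":   0.10,
}
metabolic_rate_kj_per_hr = {  # hypothetical rates for an adult female
    "rest":    700,
    "groom":   900,
    "forage": 1100,
    "swim":   1000,
}

daily_demand_kj = sum(
    frac * 24 * metabolic_rate_kj_per_hr[act]
    for act, frac in activity_budget.items()
)
print(f"adult female alone: ~{daily_demand_kj / 1000:.1f} MJ/day")

# Near weaning, the pup's own demand (hypothetical here) brings the combined
# total close to twice the female's solo demand (the ~96% increase above).
pup_demand_kj = 0.9 * daily_demand_kj
print(f"with a large pup:   ~{(daily_demand_kj + pup_demand_kj) / 1000:.1f} MJ/day")
```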
It takes about six months for a female sea otter to raise a pup until it's weaned and starting to feed itself. During this time, the pup is highly dependent on its mother for food. "The majority of the pup's calories come from its mother, so we tracked the combined energy demands of mom and pup over time. Given that sea otters have such high baseline energy demands, what these females are doing to raise a pup is really extraordinary," Thometz said.
These findings explain why female sea otters are so often found in poor condition--weak and skinny--at the end of the lactation period. It also helps explain why sea otter mothers may abandon a pup before it is fully grown. Thometz explained that female sea otters give birth to a pup every year, regardless of their condition. They decide whether to keep or abandon their pup depending on their physiological condition and environmental factors such as the availability of prey. Abandoning a pup may give them a better chance to successfully rear a pup the following year.
These phenomena--females in poor condition and abandoned pups--are much more common in the center of the sea otter's range along California's central coast than they are at the northern and southern ends of the range. Thometz said there are fewer otters and less competition at the ends of the range, making it easier for a female to find enough food to meet the energy demands of raising a pup. In the center of the range around Monterey and Big Sur, the sea otter population may have reached the "carrying capacity" of the environment, meaning there's just not enough food to support further growth of the population in that area.
"These females have to accomplish a huge energetic task every year. In the center of the range, they appear to be up against their physiological limits," Thometz said.
In addition to Thometz and Williams, the coauthors of the paper include Tim Tinker, a U.S. Geological Survey biologist and adjunct professor of ecology and evolutionary biology at UCSC; and Michelle Staedler and Karl Mayer of the Monterey Bay Aquarium. This research was funded by the USGS Western Ecological Research Center, the Office of Naval Research, the Otter Cove Foundation, and the Meyers Oceanographic and Marine Trust.
Scientists for decades have clashed over whether evolution takes place gradually or is driven by short spurts of intense change.
In the latest chapter in this debate, researchers report in Science this week that it appears that when new languages spin-off from older ones, there is an initial introductory burst of alterations to vocabulary. Then, the language tends to settle and accumulate gradual changes over a long period of time. The team believes this discrete evolutionary pattern occurs when a social group tries to forge a separate identity.
Study co-author Mark Pagel, an evolutionary biologist at the University of Reading in England, says that the latest study grew out of an earlier finding in which he and colleagues determined that about 20 percent of genetic changes among species occur when they first split off, whereas the rest happen gradually.
"It was very natural for us to wonder if a similar process [of evolution] happens in cultural groups," Pagel says. "We treat the words that different languages use almost identically to the way we use genes: … The more divergent two species are, the less their genes have in common, just as the more divergent two languages are, the less their words have in common."
The team focused on three of the world's major language families in its study: Bantu (Swahili, Zulu, Ngumba, for example), Indo-European (English, Latin, Greek, Sanskrit) and Austronesian (South Pacific and Indian Ocean languages such as Tagalog or Seediq). They constructed genealogical trees—similar to those they had created previously in their 2006 species-related study—albeit this time the trees traced existing languages back to their common roots; the length of a "branch" indicated the extent of word replacement that took place as each old language morphed (possibly with new languages splitting off) into its current form.
"If vocabulary change is accumulating gradually and independently of how many times new languages arise, then all of the paths—the routes from ancestor to tips—in the tree should be the same length; that is, there should be the same amount of evolution in each path," Pagel says. "But, if the number of language-splitting events along a path predicts the path length, this tells us that change has not been gradual but that language-splitting has added to the change."
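The test described here amounts to regressing total path length (the amount of vocabulary change from root to tip) on the number of splitting events along that path. The sketch below (Python, using synthetic data and an invented burst size, with numpy and scipy assumed available) shows the shape of that analysis: a slope reliably above zero is the signature of punctuational bursts at splits, while a slope near zero would support purely gradual change.

```python
import numpy as np
from scipy.stats import linregress

# Synthetic illustration of the path-length test: does the number of
# language-splitting events along a root-to-tip path predict how much
# vocabulary change accumulated on that path?
rng = np.random.default_rng(0)
n_paths = 40
splits = rng.integers(3, 15, size=n_paths)       # splitting events per path

gradual = rng.normal(1.0, 0.1, size=n_paths)     # gradual, time-driven change
burst_per_split = 0.04                           # invented extra change per split
path_length = gradual + burst_per_split * splits + rng.normal(0, 0.05, n_paths)

fit = linregress(splits, path_length)
print(f"slope = {fit.slope:.3f} extra change per splitting event")
print(f"r^2   = {fit.rvalue ** 2:.2f}")
```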
The researchers determined that 10 percent to 33 percent of divergence between languages stemmed from key vocabulary changes at the time of language splitting.
He offers a few examples of these sorts of events, such as the sudden emergence of American English when Noah Webster published his American Dictionary of the English Language in 1828. More recently, he says, black American English could fit the bill as an emerging idiom.
"It's plausible to think of black American English as having diverged from standard English as a way of establishing a distinct identity," he says. "I think everybody's impression is those differences [between the two languages] are greater than one would expect for people who live in the same area." |
A-level Applied Science/Choosing and Using Materials
About this Unit
From the AQA Specification:
In this unit you will learn about:
- the different properties of materials such as metals, polymers and ceramics;
- how scientists define the properties of a material and compare values for different materials;
- how to measure some of the properties of a material and investigate how they allow the material to be put to a particular use;
- why different materials behave in different ways;
- how the internal structure of a material influences the way it behaves;
- ways in which properties of a material can be modified by altering the structure of the material.
How you will be assessed
In this unit you will be required to complete an external examination of 1½ hours duration. The examination will consist of a series of compulsory short answer questions and will be marked out of 80. The examination may also contain a comprehension exercise.
You will be assessed on your knowledge, understanding and skills relating to choosing and using materials. |
The lungs have historically been thought to be sterile, but researchers have found that even healthy lungs have fungi. An interesting new research study finds that some species of fungi are far more common in the lungs of asthma sufferers. Researchers hope that the discovery of fungal particles will help them understand how to treat asthma better.
“Our analysis found that there are large numbers of fungi present in healthy human lungs. The study also demonstrates that asthma patients have a large number of fungi in their lungs and that the species of fungi are quite different to those present in the lungs of healthy individuals,” said Hugo van Woerden of Cardiff University’s Institute of Primary Care and Public Health and the lead researcher on the study.
The team found 136 different fungal species in the mucus or sputum of patients with and without asthma. In those without asthma, 46 fungal species were commonly found in the lungs; in those with asthma, 90 fungal species were found in common from patient to patient.
"Establishing the presence of fungi in the lungs of patients with asthma could potentially open up a new field of research which brings together molecular techniques for detecting fungi and developing treatments for asthma," Woerden said.
“In the future it is conceivable that individual patients may have their sputum tested for fungi and their treatment adjusted accordingly,” he adds.
Ramie A. Tritt, MD, President of Atlanta ENT |
why do we have an electoral vote
Big question: Why do we still have the Electoral College? Established in 1787, the Electoral College is as old as the U.S. Constitution. Marquette Magazine asked Dr. Paul Nolette, assistant professor in the department of political science, why, after 225 years, we still use the Electoral College system to elect our president instead of the popular vote. Turn on your favorite TV news program and you're likely to hear about how each presidential candidate is faring among "Wal-Mart Moms," "NASCAR Dads," or another critical voting group. As Americans were reminded in 2000, however, this presidential election will ultimately be decided by the 538 members of the Electoral College. Why is the Electoral College part of the Constitution? And why does it still exist today? During the debates over the Constitution, Alexander Hamilton's defense of the Electoral College suggested that electors would bring greater wisdom to presidential selection. "A small number of persons, selected by their fellow-citizens from the general mass, will be most likely to possess the information and discernment requisite to such complicated investigations," he wrote in "Federalist #68." Several of the Constitution's framers viewed the Electoral College as a protection of state power. Individual states would send electors who would presumably prevent the election of a candidate threatening to centralize power in the federal government. Many of the original justifications for the Electoral College have less force today. Other constitutional features meant to protect the states have since changed. The 17th Amendment, for example, shifted the selection of senators from state legislatures to popular election. The notion that electors have better deliberative capacity than the general populace is now passé, especially since electors today are partisan activists who commit themselves to a candidate well before Election Day.
So why do we keep the Electoral College? One argument is that the Electoral College ensures more attention to less populous states otherwise at risk of being ignored by presidential candidates. If people directly elected the president, candidates would focus their attention on population-rich states like California, New York and Texas rather than smaller states such as New Mexico, Nevada and Wisconsin. The problem is that under the current system, the vast majority of states are already ignored by candidates, including not only most of the smallest but several of the largest as well. The lion's share of the attention goes to an increasingly small number of swing states that could realistically favor either candidate. This may be to our benefit here in the Badger State, but not so for those in Nebraska, Rhode Island or any of the 40 other non-competitive states. Perhaps a better contemporary argument for the Electoral College is that it has a tendency to produce clear winners. This contrasts with the popular vote, which remains relatively close in nearly all presidential contests. In 2008, for example, Obama won only 53 percent of the popular vote but more than two-thirds of the electoral vote. The Electoral College, as it typically does, helped to magnify the scope of the incoming president's victory. For someone taking on the highest-profile job in the world, this additional legitimacy boost may be no small thing.
The Electoral College was created for two reasons. The first was to create a buffer between the population and the selection of a President. The second was to be part of the structure of the government that gave extra power to the smaller states. The first reason that the founders created the Electoral College is hard to understand today. The founding fathers were afraid of direct election to the Presidency.
They feared a tyrant could manipulate public opinion and come to power. Hamilton wrote in the Federalist Papers: It was equally desirable, that the immediate election should be made by men most capable of analyzing the qualities adapted to the station, and acting under circumstances favorable to deliberation, and to a judicious combination of all the reasons and inducements which were proper to govern their choice. A small number of persons, selected by their fellow-citizens from the general mass, will be most likely to possess the information and discernment requisite to such complicated investigations. It was also peculiarly desirable to afford as little opportunity as possible to tumult and disorder. This evil was not least to be dreaded in the election of a magistrate, who was to have so important an agency in the administration of the government as the President of the United States. But the precautions which have been so happily concerted in the system under consideration, promise an effectual security against this mischief. Hamilton and the other founders believed that the electors would be able to ensure that only a qualified person becomes President. They believed that with the Electoral College no one would be able to manipulate the citizenry. It would act as a check on an electorate that might be duped. Hamilton and the other founders did not trust the population to make the right choice. The founders also believed that the Electoral College had the advantage of being a group that met only once and thus could not be manipulated over time by foreign governments or others. The Electoral College is also part of the compromises made at the convention to satisfy the small states. Under the Electoral College system each state has the same number of electoral votes as it has representatives in Congress, so no state can have fewer than 3.
The result of this system is that in this election the state of Wyoming cast about 210,000 votes, so each elector represented 70,000 votes, while in California approximately 9,700,000 votes were cast for 54 electoral votes, or roughly 179,000 votes per elector. Obviously this creates an unfair advantage for voters in the small states, whose votes count for more than those of people living in medium and large states. One aspect of the electoral system that is not mandated in the Constitution is the fact that the winner takes all the votes in the state. Therefore it makes no difference whether you win a state by 50.1% or by 80% of the vote: you receive the same number of electoral votes. This can be a recipe for one candidate winning some states by large pluralities and losing others by a small number of votes, and thus it is an easy scenario for one candidate to win the popular vote while another wins the electoral vote. This winner-take-all method of choosing electors has been decided by the states themselves, and the trend took hold over the course of the 19th century. While there are clear problems with the Electoral College, and there are some advantages to it, changing it is very unlikely. It would take a constitutional amendment ratified by 3/4 of the states to change the system, and it is hard to imagine the smaller states agreeing. One way of modifying the system is to eliminate the winner-take-all part of it. The method by which the states allocate their electoral votes is not mandated by the Constitution but is decided by the states themselves. Two states do not use the winner-take-all system: Maine and Nebraska. It would be difficult but not impossible to get other states to change their systems; unfortunately, the party that has the advantage in a state is unlikely to agree to a unilateral change.
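The votes-per-elector comparison above is simple arithmetic; the short sketch below (Python) just reproduces it from the figures quoted in the text.

```python
# Reproducing the votes-per-elector arithmetic quoted above: roughly 210,000
# votes and 3 electors in Wyoming versus roughly 9,700,000 votes and 54
# electoral votes in California (the figures given in the text).
states = {
    "Wyoming":    (210_000, 3),
    "California": (9_700_000, 54),
}
for name, (votes, electors) in states.items():
    print(f"{name:10s}: {votes / electors:>9,.0f} votes per elector")
# Wyoming   :    70,000 votes per elector
# California:   179,630 votes per elector (about the 179,000 quoted above)
```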
The dinoflagellates continue to photosynthesize, as long as they have light and nutrients. However, they do not keep the products of photosynthesis for themselves. They release almost all of it into the tissues of the coral. The coral uses this energy as food, and typically has enough energy to build a considerable amount of calcium carbonate in the form of a cup-shaped skeleton that supports the coral animal, and helps to build a strong coral "colony". Therefore, the coral colony can build a very strong structure that can withstand the force of waves in shallow waters, and many corals together can form a "reef". Coral reefs are very abundant in clear warm tropical waters of the world.
The corals metabolize the photosynthetic products, and of course produce wastes. They do not release those into sea-water until the dinoflagellates have extracted the nutrients from them, especially phosphate and nitrate. The nutrients are thus re-cycled between dinoflagellates and coral, so that the photosynthesis can keep going.
The photosynthesis results in the release of oxygen, which the corals use in respiration. The carbon dioxide that the corals produce in respiration is absorbed by the dinoflagellates in their photosynthesis.
Altogether it is a very efficient system. However, it has serious implications for the coral. They MUST arrange their anatomy so that the dinoflagellates have light. So they build branching or sheet-like colonies in the shallow water, always pointing upward. The coral MUST protect its dinoflagellates (which it does with its stinging cells).
The dinoflagellates are not necessarily happy. Certainly they have nutrients, light, and protection, but they pay an enormous price for it in terms of giving up almost all their photosynthetic production. The dinoflagellates would not join in the symbiosis unless they had no alternative.
The fact is that the coral reef symbiosis only occurs where there is practically no food in the seawater. If there were nutrients out there, the dinoflagellates would abandon the coral and make a living for themselves in the water. (In fact, the best way to destroy a reef is to build a tourist hotel next to it and pour the sewage directly into the sea -- and many tourist hotels do exactly that.)
So the conditions for a successful reef include: shallow, warm, CLEAR, nutrient-poor tropical water.
The coral has to keep its surface clear of sand and other sediment, to keep light shining on its dinoflagellates. The cost here is the cost of continuous mucus production, which catches the sediment and sweeps it off the coral surface. The mucus is polysaccharide, which is a good food source for small creatures: in fact, given the low nutrients in the area, it is the ONLY major food supply. So many small reef creatures eat mucus, and then they provide food for other creatures, and so on, so that in the end the coral reef COMMUNITY is a very rich, diverse, set of organisms, all dependent on the original coral symbiosis.
However, it is very vulnerable because the entire community depends on the symbiosis. It is amazing how long-lasting coral reef communities have been since they evolved in the Jurassic.
Of course, other organisms can and do evolve similar symbiotic relationships with dinoflagellates, as long as they can provide the same services to them. So today, giant clams and some floating foraminiferal protists have symbiotic dinoflagellates too.
In the fossil record, one can try to identify hosts of symbioses, even though all trace of the dinoflagellates themselves has gone. The hosts would likely be organisms that grew large shells, in shallow, warm, tropical waters, and had adaptations to place their soft tissues in the light.
So in the Permian, one can identify some brachiopods as likely hosts for dinoflagellates, even though Paleozoic corals clearly were not (they did not form reefs). And in the Cretaceous, corals were for a while outcompeted by peculiar clams in reef environments. These clams, called rudists, were shaped like ice-cream cones, with one shell formed into the cone and the other into the lid. The lids often had holes in them, designed so that light could pass down through them into the interior.
The rudists were caught up in the K/T extinction, and the corals recovered the reef environment after that extinction. The brachiopods that had the symbiosis became extinct in the Permian extinction.
The Three Mile Island incident of 1979 severely hindered nuclear power in the United States, though - rather notably - no one died as a result of the accident. The partial core meltdown released some radioactivity into the surrounding area, but, as the Nuclear Regulatory Commission's report explains,
Detailed studies of the radiological consequences of the accident have been conducted by the NRC, the Environmental Protection Agency, the Department of Health, Education and Welfare (now Health and Human Services), the Department of Energy, and the State of Pennsylvania. Several independent studies have also been conducted. Estimates are that the average dose to about 2 million people in the area was only about 1 millirem. To put this into context, exposure from a chest x-ray is about 6 millirem. Compared to the natural radioactive background dose of about 100-125 millirem per year for the area [this is more like 300 mrem per year for people living in places like Colorado], the collective dose to the community from the accident was very small. The maximum dose to a person at the site boundary would have been less than 100 millirem.
In addition, the effect of such a nuclear "accident" on the surrounding ecosystem was minimal; as the report states,
In the months following the accident, although questions were raised about possible adverse effects from radiation on human, animal, and plant life in the TMI area, none could be directly correlated to the accident. Thousands of environmental samples of air, water, milk, vegetation, soil, and foodstuffs were collected by various groups monitoring the area. Very low levels of radionuclides could be attributed to releases from the accident. However, comprehensive investigations and assessments by several well‑respected organizations have concluded that in spite of serious damage to the reactor, most of the radiation was contained and that the actual release had negligible effects on the physical health of individuals or the environment.
Even the once highly-contaminated Rocky Flats (a Superfund site) is now, 15 years after the start of clean-up, a wildlife refuge.
Now, let's consider the BP oil spill in the Gulf.
According to various estimates, the spill (now likely far worse than the Exxon Valdez accident) is leaking about 2.5 million gallons of oil per day into the Gulf of Mexico, threatening 400 species of wildlife (including 30 species of birds), some of which are endangered, and thousands of miles of coastline (including 8 National Parks). Eleven people died in the explosion on the Deepwater Horizon rig, and 17 more were injured (in fact, chemical explosions have historically caused more deaths than accidents at nuclear plants: the Chernobyl disaster, by far the worst at a nuclear plant in history, resulted in "fewer than 50" direct deaths according to a joint IAEA/WHO report, whereas the chemical leak at the Union Carbide plant in Bhopal resulted in somewhere between 2,000 and 15,000 immediate deaths). Cleanup of oil-soaked ecosystems can take years, even decades (in 2007, nearly 20 years after the Exxon Valdez ran aground, an estimated 26,000 gallons of oil still remained mixed into the sandy beaches of Prince William Sound). The economic damage to the area near the Deepwater Horizon spill could end up in the billions of dollars. And while rural families still live successfully within a crow's flight of the defunct Chernobyl plant, a friend of mine in Baton Rouge can smell oil from her house. A Scientific American article on the BP spill concluded that "when an oil spill occurs, there are no good outcomes."
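To make the relative scales in these passages concrete, here is a minimal back-of-the-envelope sketch in Python. The spill rate and the TMI dose figures come from the text above; the total volume attributed to the Exxon Valdez (roughly 11 million gallons) is an assumed, commonly cited estimate and not a number given in this document.

```python
# Back-of-the-envelope comparison of the scales discussed above.
# Figures from the text: ~2.5 million gallons/day (BP spill rate),
# ~1 millirem average TMI dose vs. ~100-125 millirem/year natural background.
# Assumed (not from the text): ~11 million gallons total for the Exxon Valdez.

BP_RATE_GALLONS_PER_DAY = 2_500_000
EXXON_VALDEZ_TOTAL_GALLONS = 11_000_000  # assumed, commonly cited estimate

days_to_match_valdez = EXXON_VALDEZ_TOTAL_GALLONS / BP_RATE_GALLONS_PER_DAY
print(f"Days for the BP spill to match the Exxon Valdez total: {days_to_match_valdez:.1f}")

TMI_AVERAGE_DOSE_MREM = 1        # average dose near TMI, per the NRC report quoted above
BACKGROUND_MREM_PER_YEAR = 100   # low end of the background range quoted above

print(f"TMI average dose as a fraction of one year of background: "
      f"{TMI_AVERAGE_DOSE_MREM / BACKGROUND_MREM_PER_YEAR:.0%}")
```

At the stated rate, the Gulf spill would exceed the assumed Exxon Valdez total in under five days, while the average TMI dose amounts to about one percent of a single year of natural background radiation.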
My point is this: we need to fear that which is more dangerous. Our dependence on energy is a given, but we have the power to alter where and how that energy is generated. We can choose nuclear power, wind power, solar power, geothermal power, hydroelectric power. We can choose to cut our ties with oil. And the devastation due to the BP oil spill - especially in comparison to the minimal damage from nuclear accidents, wind turbines, etc - should be the catalyst that we need to make it happen. |
Strengthening Early Childhood Education (Universal Kindergarten)
1. Every Filipino child now has access to early childhood education through Universal Kindergarten. At 5 years old, children start schooling and are given the means to slowly adjust to formal education.
2. Research shows that children who underwent Kindergarten have better completion rates than those who did not. Children who complete a standards-based Kindergarten program are better prepared for primary education. Education for children in the early years lays the foundation for lifelong learning and for the total development of a child. The early years of a human being, from 0 to 6 years, are the most critical period when the brain grows to at least 60-70 percent of adult size.
3. In Kindergarten, students learn the alphabet, numbers, shapes, and colors through games, songs, and dances, in their Mother Tongue.
Making the Curriculum Relevant to Learners (Contextualization and Enhancements)
1. Examples, activities, songs, poems, stories, and illustrations are based on local culture, history, and reality. This makes the lessons relevant to the learners and easy to understand.
2. Students acquire in-depth knowledge, skills, values, and attitudes through continuity and consistency across all levels and subjects.
3. Discussions on issues such as Disaster Risk Reduction (DRR), Climate Change Adaptation, and Information & Communication Technology (ICT) are included in the enhanced curriculum.
Building Proficiency (Mother Tongue-Based Multilingual Education)
1. In Kindergarten to Grade 3, the child’s dominant language is used as the language of learning.
2. Filipino and English language proficiency is developed from Kindergarten to Grade 3 but very gradually.
3. Mother Tongue is used in instruction and learning materials of other learning areas.
4. The learners retain their ethnic identity, culture, heritage and values.
5. Children learn better and are more active in class and learn a second language even faster when they are first taught in a language they understand.
Ensuring Integrated and Seamless Learning (Spiral Progression)
1. Basic concepts/general concepts are first learned.
2. More complex and sophisticated versions of the basic/general concepts are then rediscovered in the succeeding grades.
3. This strengthens retention and enhances mastery of topics and skills as they are revisited and consolidated time and again.
4. This also allows learners to learn topics and skills appropriate to their developmental and cognitive skills.
Gearing Up for the Future
Ensuring College Readiness
Working with CHED to:
1. Ensure alignment of Core and Applied Subjects to the College Readiness Standards (CRS) and new General Education (GE) Curriculum.
2. Develop appropriate Specialization Subjects for the Academic, Sports, Arts and Design, and Technical Vocational Livelihood Tracks.
Strengthening TVET Integration in SHS
Working with CHED to:
1. Integrate TVET skills, competencies and qualifications in TLE in JHS and Technical Vocational Livelihood (TVL) track in SHS
2. Ensure that any Grade 10 finisher and all Grade 12 TVL graduates are eligible for TESDA competency/qualifications assessments (i.e. COC, NC I or NC II)
3. Prepare learning resources that are consistent with promulgated Training Regulations.
4. Develop appropriate INSET and certification programs for TLE teachers.
Nurturing the Holistically Developed Filipino (College and Livelihood Readiness, 21st Century Skills)
After going through Kindergarten, the enhanced Elementary and Junior High curriculum, and a specialized Senior High program, every K to 12 graduate will be ready to go into different paths – may it be further education, employment, or entrepreneurship.
Every graduate will be equipped with:
1. Information, media and technology skills,
2. Learning and innovation skills,
3. Effective communication skills, and
4. Life and career skills. |
Rough-legged hawks inhabit open country and agricultural lands. They are more common in open, early successional areas in which they can soar and seek prey in grasslands and shrublands. Once migration is complete they settle in a suitable nesting spot with enough food nearby to sustain them. Their nests are usually located in trees or on a rocky cliff in which they can overlook a field to catch prey for themselves and their young. (Pearson, et al., 1936; Bechard and Swem, 2002; Terres, 1980)
Adult rough-legged hawks average 1026 g and have a wingspan of 134 cm. Total length averages 53 cm. Females are typically the larger sex. Rough-legged hawks have eight different morphs that vary with sex, age, and location. Both sexes exhibit light and dark morphs, and coloration varies between juveniles and adults.
All adult morphs have a black band that goes along the edges of the underside of their lesser coverts. Adults also all have dark colored eyes. Juveniles have light colored eyes and a dark band along the underside of their wings.
Light morphs of adult females have brown backs and a pattern of increased markings from breast to belly. They have one dark tail band and heavily marked leg feathers. Light-morph adult males have grayish backs. Their breasts are more heavily marked than the belly and multiple bands exist on the tail. A light-morph adult male has heavily-marked leg feathers.
Dark-morph adult males are almost completely black but can be brownish with several white bands on their dark tail. Dark-morph adult females are dark brown with a single black band underneath their tail. Dark-morph juveniles are similar to adult females but exhibit rusty bands underneath their wings and tails. Some individuals have a pale-brown head. (Wheeler and Clark, 1996)
Rough-legged hawks usually migrate solo (it is very uncommon for them to fly in groups) and find a mate once they have reached their destination. Males will soar and circle until a female joins them. Rough-legged hawks perform courtship displays in the late winter, once it has begun to get warmer and flying conditions improve. After a male is joined by a female, both sexes soar together with their tails and wings fully spread. Males then perform a "Sky-Dance" display, in which they soar high, suddenly dive, climb again, free fall, and finally climb back up to a normal soaring height. Male rough-legged hawks defend their mates from other males by taking flight and chasing rival males.
Male and female rough-legged hawks build a nest together after they have found a suitable site on a rocky cliff. Males carry most of the building supplies while females construct the nest of twigs, grass, molted feathers, and fur from prey. Even objects such as caribou bones are sometimes incorporated into nests. Nests take three to four weeks to build and are usually 60 to 90 cm in diameter and 25 to 60 cm deep. (Bechard and Swem, 2002)
Rough-legged hawks breed once a year, usually between April and June, but breeding has also been reported in July. There are 2 to 7 eggs per clutch and they take a minimum of 31 days to hatch. Fledging usually takes more than 40 days, although some fly weakly at 31 days old. The young are not fully independent of the parents until 2 to 4 weeks after they leave the nest, at 55 to 70 days old. The period of independence sometimes extends into migration. Sexual maturity of males and females is reached at 2 to 3 years. (Bechard and Swem, 2002; Terres, 1980)
Rough-legged hawks can live up to 18 years in the wild. However, the average life span is about 2 years, largely because most young birds do not survive. Once they survive their fledging stage and first year, rough-legged hawk annual survival improves. Deaths often result from illegal shooting or trapping activities, collisions with human structures, such as powerlines or radio towers, and collisions with vehicles. In captivity, the longest-lived reported rough-legged hawk was 17 years old. However, a rough-legged hawk at the Pocatello Zoo, in Idaho, came in as an injured adult in 1987 and remains alive as of July 2009, making her over 24 years old. (Bechard and Swem, 2002; Terres, 1980)
Rough-legged hawks have average territories of 7.3 square kilometers (range 3.6 - 11.8 square km).
These are territorial hawks and generally do not tolerate other nests within 1 km. Rough-legged hawks have been known to share cliff nesting spots with gyrfalcons (Falco rusticolus) and peregrine falcons (Falco peregrinus) as well as other rough-legged hawks, but only if the cliff is large and the nests are at least 30 m apart. They will avoid nesting within 60 m of any potential predators of their young, such as golden eagles. They will defend their nests from any bird that threatens them or their young. (Bechard and Swem, 2002)
Rough-legged hawks use sight and vocalizations to communicate with others. They use many calls for communication with other hawks such as a warning call (a high pitch shriek), a courtship call (a low whistle that turns into a hiss), and a "normal" call (a high-pitched whistle into a shriek). Rough-legged hawks are usually silent when away from the breeding site except when in competition with another male or threatened. Males may broadcast 100 calls per minute; much more often than females. (Bechard and Swem, 2002; Terres, 1980)
Rough-legged hawks are swift hunters that spot and capture prey with great precision. Rough-legged hawks will perch high in trees or soar in the sky where they can scan a field or grassy area for small prey. After the prey have been spotted, hawks take flight as quietly as possible (unless already in flight) and circle above a few times to ensure there is no competition with other birds of prey. They dive and spear prey with their large talons. They return to a perch to consume the meal. Typical prey include mice, shrews, black-tailed prairie dogs (Cynomys ludovicianus), small birds, and other squirrel species (Spermophilus and Tamias). (Pearson, et al., 1936; Reid, et al., 1997; Seery and Matiatos, 2000)
There are many known predators of rough-legged hawks, including arctic foxes (Vulpes lagopus), grizzly bears (Ursus arctos), and many other species of birds of prey; most are predators of nestlings. Most adult hawks are killed by these predators while trying to scare them away from their nests, but arctic foxes and other hawks are known to get into the nest and eat the eggs or nestlings. Humans cause deaths in many rough-legged hawks by shooting, trapping, hitting them with cars, and building structures that the hawks fly into. (Bechard and Swem, 2002; Pearson, et al., 1936)
Rough-legged hawks help to control the populations of small mammals. Their nests are usually built where there is high prey density.
These hawks are hosts to many parasites, including several nematodes in the genus Physaloptera. A hematozoan documented in this species is a Leucocytozoon species. (Morgan, 1943; Reid, et al., 1997; Stabler and Holt, 1965)
There are no known adverse effects of rough-legged hawks on humans.
Rough-legged hawks are rated as "Least Concern" on the IUCN Red List. Protected under the U.S. Migratory Bird Treaty Act, these birds cannot be hunted or killed except for scientific purposes.
Tanya Dewey (editor), Animal Diversity Web.
Garrett Good (author), Radford University, Karen Powers (editor, instructor), Radford University.
living in the Nearctic biogeographic province, the northern part of the New World. This includes Greenland, the Canadian Arctic islands, and all of North America as far south as the highlands of central Mexico.
living in the northern part of the Old World. In other words, Europe and Asia and northern Africa.
uses sound to communicate
living in landscapes dominated by human agriculture.
young are born in a relatively underdeveloped state; they are unable to feed or care for themselves or locomote independently for a period of time after birth/hatching. In birds, naked and helpless after hatching.
having body symmetry such that the animal can be divided in one plane into two mirror-image halves. Animals with bilateral symmetry have dorsal and ventral sides, as well as anterior and posterior ends. Synapomorphy of the Bilateria.
an animal that mainly eats meat
uses smells or other chemicals to communicate
animals that use metabolically generated heat to regulate body temperature independently of ambient temperature. Endothermy is a synapomorphy of the Mammalia, although it may have arisen in a (now extinct) synapsid ancestor; the fossil record does not distinguish these possibilities. Convergent in birds.
union of egg and spermatozoan
forest biomes are dominated by trees, otherwise forest biomes can vary widely in amount of precipitation and seasonality.
a distribution that more or less circles the Arctic, so occurring in both the Nearctic and Palearctic biogeographic regions.
Found in northern North America and northern Europe or Asia.
offspring are produced in more than one group (litters, clutches, etc.) and across multiple seasons (or other periods hospitable to reproduction). Iteroparous animals must, by definition, survive over multiple seasons (or periodic condition changes).
makes seasonal movements between breeding and wintering grounds
Having one mate at a time.
having the capacity to move from one place to another.
This terrestrial biome includes summits of high mountains, either without vegetation or covered by low, tundra-like vegetation.
the area in which the animal is naturally found, the region in which it is endemic.
reproduction in which eggs are released by the female; development of offspring occurs outside the mother's body.
"many forms." A species is polymorphic if its individuals can be divided into two or more easily recognized groups, based on structure, color, or other similar characteristics. The term only applies when the distinct groups can be found in the same area; graded or clinal variation throughout the range of a species (e.g. a north-to-south decrease in size) is not polymorphism. Polymorphic characteristics may be inherited because the differences have a genetic basis, or they may be the result of environmental influences. We do not consider sexual differences (i.e. sexual dimorphism), seasonal changes (e.g. change in fur color), or age-related changes to be polymorphic. Polymorphism in a local population can be an adaptation to prevent density-dependent predation, where predators preferentially prey on the most common morph.
Referring to something living or located adjacent to a waterbody (usually, but not always, a river or stream).
scrub forests develop in areas that experience dry seasons.
breeding is confined to a particular season
reproduction that includes combining the genetic contribution of two individuals, a male and a female
living in residential areas on the outskirts of large cities or towns.
uses touch to communicate
that region of the Earth between 23.5 degrees North and 60 degrees North (between the Tropic of Cancer and the Arctic Circle) and between 23.5 degrees South and 60 degrees South (between the Tropic of Capricorn and the Antarctic Circle).
Living on the ground.
defends an area within the home range, occupied by a single animal or group of animals of the same species and held through overt defense, display, or advertisement
A terrestrial biome. Savannas are grasslands with scattered individual trees that do not form a closed canopy. Extensive savannas are found in parts of subtropical and tropical Africa and South America, and in Australia.
A grassland with scattered trees or scattered clumps of trees, a type of community intermediate between grassland and forest. See also Tropical savanna and grassland biome.
A terrestrial biome found in temperate latitudes (>23.5° N or S latitude). Vegetation is made up mostly of grasses, the height and species diversity of which depend largely on the amount of moisture available. Fire and grazing are important in the long-term maintenance of grasslands.
uses sight to communicate
Bechard, M., T. Swem. 2002. Rough-legged Hawk; Buteo lagopus. The Birds of North America, 641: 1-31.
Dunne, P., D. Sibley, C. Sutton. 1988. Hawks In Flight. Boston: Houghton Mifflin Company.
Morgan, B. 1943. The Physalopterinae (Nematoda) of Aves. Transactions of the American Microscopical Society, 62/1: 72-80.
Morneau, 1994. Breeding density and brood size of rough-legged hawks in northwestern Quebec. The Journal of Raptor Research, 28/4: 259-262.
Mueller, H., N. Mueller, D. Berger, G. Allez, W. Robichaud. 2000. Age and sex differences in the timing of fall migration of Hawks and Falcons. Wilson Bulletin, 112: 214-224.
Pearson, T., J. Burroughs, E. Forbush, W. Finley, G. Gladden, H. Job, L. Nichols, J. Burdick. 1936. Rough-legged Hawk. Pp. 79-80 in Birds of America, Vol. 1-3, 12 Edition. Garden City, New York: Garden City Publishing Company, Inc..
Reid, D., C. Krebs, A. Kenney. 1997. Patterns of Predation on Noncyclic Lemmings. Ecological Monographs, 67: 89-108.
Seery, D., D. Matiatos. 2000. Response of wintering buteos to plague epizootics in prairie dogs. Western North American Naturalist, 60: 420-425.
Smith, C. 1987. Parental roles and nestling foods in the rough-legged hawk, Buteo lagopus. ONT. FIELD-NAT, 101: 101-103.
Stabler, R., P. Holt. 1965. Hematozoa from Colorado Birds. II. Falconiformes and Strigiformes. The Journal of Parasitology, 51/6: 927-928.
Terres, J. 1980. Rough-legged hawk. Pp. 485 in The Audubon Society Encyclopedia of North American Birds, Vol. 1, 1 Edition. New York: Alfred A. Knopf.
Wheeler, B., W. Clark. 1996. A Photographic Guide to North American Raptors. San Diego, CA: Academic Press Inc.. |
The Montessori Method of teaching aims for the fullest possible development of the whole child, ultimately preparing him for life’s many rich experiences. The physical, mental, emotional, social, and spiritual development of the child is intertwined. Development in one area influences and supports the other areas. It is not beneficial to focus on the development of one aspect in isolation. The focus is on developing the whole child into a well rounded human being who will become a contributing member of society.
The basic aim of the Montessori approach is to help children acquire the confidence and motivation they need to fulfill their own best potential. This is done by preparing an environment which makes activities available that support children’s development, build on their interests and nurture their enthusiasm. Everything in a Montessori classroom has a specific use or purpose. There is nothing in the prepared environment that the child cannot see or touch. All of the furniture and equipment is scaled down to the child’s size and is within easy reach.
A quality Montessori classroom has a busy, productive atmosphere where joy and respect abound. Within such an enriched environment, freedom, responsibility, and social and intellectual development spontaneously flourish.
A Montessori classroom:
- Does not rely on texts and workbooks
- Does not have individual desks or a set seating plan
- Provides hands on concrete learning materials
- Treats children with respect and as unique individuals
- Promotes a warm and supportive community environment
- Provides classrooms that are bright and exciting
- Teaches students to manage their own community
- Allows children to learn at their own pace fostering different learning styles
- Structures the curriculum to demonstrate the connection between the different subject areas
The Powers Delegated to the Federal Government Are Few and Defined
The Doctrine of Enumerated Powers
by Roger Pilon
Federalist Papers referenced in essay: #14, 23, 25, 32, 41, 42, 44, 45
The doctrine of enumerated powers stands for the idea that Congress has only those powers that are enumerated in the Constitution, which the people delegated to Congress when they ratified the Constitution or later amended it. Thus, the doctrine is of fundamental importance. It explains the origin of Congress’s powers, their legitimacy, and their limits. By virtue of the doctrine, the Constitution of the United States establishes a government of delegated, enumerated, and thus limited powers.
The Federalist Papers contain many discussions of the doctrine of enumerated powers, but they are often difficult to understand because they make assumptions many people today don’t fully understand. And they address a variety of particular issues rather than the general theory of the doctrine. Before examining those discussions, therefore, it will be useful to first outline the Constitution’s basic theory of legitimacy, especially since the doctrine of enumerated powers is so central to it, and then show how the doctrine is manifest in the Constitution itself.
The Constitution’s theory of legitimacy draws from the theory that was first set forth in the Declaration of Independence. In that document America’s Founders made it clear they wanted to rid themselves of British rule—which they thought was illegitimate in many respects—and to establish in its place legitimate government with legitimate powers. To make their case, they drew on the natural law tradition, stretching back to antiquity, which holds there is a moral law of right and wrong that should guide us in making actual laws. It is that moral law, especially concerning natural rights, that is referenced in the famous passage that begins, “We hold these Truths to be self-evident.” Thus, the Founders first set forth the moral order as defined by our natural rights and obligations—the moral rights and obligations we would have toward each other if there were no government—and only then did they set forth the conditions for legitimate government and governmental powers. And they did it that way because they understood that governments don’t just happen; rather, they are created, by human action, and so we need to know how that happens legitimately—by right.
To do that, notice that the Declaration’s self-evident truths begin by assuming we are all equal, at least in having equal rights to “life, liberty, and the pursuit of happiness.” But in holding that each of us has a right to pursue happiness, nothing more is said about what will make us happy, and for good reason—that will vary from person to person. Thus, the freedom to pursue happiness is left up to each individual, provided only that each of us respects the equal rights of others to pursue whatever makes them happy. Live and let live.
But we may not all agree about what our rights and obligations are. And even if we did agree, not everyone will always respect the rights of others. Either intentionally or accidentally, people will violate others’ rights. The Founders understood this, so after they outlined the moral order, they turned to the political and legal order and took up the question of legitimate government: “That to secure these Rights, Governments are instituted among Men, deriving their just Powers from the Consent of the Governed.” Notice the limits implicit in that language. The main purpose of government is to secure our liberty by securing our rights. But if the powers needed to do that are to be “just” or legitimate, they must be derived “from the Consent of the Governed.”
When they drafted the Constitution eleven years later, the Framers drew on that theory of legitimacy: individual liberty, secured by limited government, with its powers derived from the consent of the governed. We see the theory right from the start, in the document’s Preamble: “We the People,” for the purposes listed, “do ordain and establish this Constitution.” In other words, all power comes from the people. We created the government. We gave it its powers by ratifying the Constitution that sets forth its structures, powers, and protections. For those powers to be truly legitimate, however, we must first have had them ourselves before delegating them to the government to be exercised on our behalf. The Framers mostly abided by that principle, the major and tragic exception being the Constitution’s oblique recognition of slavery, which took the Civil War and the Civil War Amendments to correct. For the most part, however, they established a legitimate government with legitimate powers.
With the Constitution’s theory of legitimacy now before us, we can examine how the Framers implemented it through the doctrine of enumerated (listed) powers. To state the doctrine most simply, if you want to limit power, as the Framers plainly did, don’t give it in the first place. That strategy is evident in the very first sentence of Article I: “All legislative Powers herein granted shall be vested in a Congress.” Notice first that the subject is all legislative powers that are herein granted—which are the only such powers in a Constitution of delegated powers—and they rest with the Congress. Second, the powers are “granted”—or delegated by the people, from whom they have to come if they are to be legitimate. Finally, as implied by this use of the words “all” and “herein granted,” only those powers “herein granted” were, in fact, granted. In sum, Congress has no legislative powers except those that were “herein granted”—powers that are limited to those that are enumerated in the document.
Congress’s powers are enumerated throughout the Constitution, but the main legislative powers are found in Article I, Section 8. There are only eighteen such powers. Plainly, the Framers wanted to limit the federal government to certain enumerated ends, leaving most matters in the hands of the states or the people themselves. In fact, that point was made perfectly clear when the Bill of Rights was added two years after the Constitution was ratified. As the Tenth Amendment states, “The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people.” Where the federal government has no power, the states or the people themselves have a right.
The doctrine of enumerated powers was crucial to the ratification debate. The Federalist Papers were written to convince skeptical electors and the delegates they sent to state ratifying conventions that the new constitution was necessary and, in particular, would not give the new federal government any more power than was absolutely necessary to carry out its responsibilities. The doctrine of enumerated powers—the main restraint on the new government—was most famously stated by James Madison (No. 45):
“The powers delegated by the proposed Constitution to the federal government are few and defined. Those which are to remain in the State governments are numerous and indefinite.” Notice those words: “few and defined.” The federal government was to have only limited responsibilities. Most power was to be left with the state governments. They were closer to the people, who could then better control them.
Madison continues, “It is to be remembered that the general government is not to be charged with the whole power of making and administering laws. Its jurisdiction is limited to certain enumerated objects, which concern all the members of the republic, but which are not to be attained by the separate provisions of any (No. 14).” Madison argues there are certain things—like national defense and foreign and national commerce—that are properly national concerns since they are largely beyond the competence of individual states. In fact, one of the main reasons the Framers sought to write a new constitution was because the Articles of Confederation afforded the federal government too little power to deal with such matters.
Alexander Hamilton picks up these themes (No. 23) while focusing on the document’s aims:
Opponents, he continues,
Hamilton goes on to argue (No. 25) that there are two sides to the doctrine of enumerated powers. The main emphasis in the Federalist Papers is to show how the doctrine will limit the size and scope of the new government. The other side, however, is to ensure the federal government has enough power to do the things that a national government will need to do. Hamilton addresses that issue cleverly, in the name of ensuring the new government will remain limited:
Note Hamilton’s word of caution. The new government’s powers are to be limited by enumeration, but those who would limit them even further run the risk of sowing the seeds—necessary breaches that then serve as precedents for future breaches—of future expansion. Be careful what you ask for!
Hamilton returns to the main theme of the limited delegation of authority to the federal government (No. 32): “But as the plan of the convention aims only at a partial union or consolidation, the State governments would clearly retain all the rights of sovereignty which they before had, and which were not, by that act, Exclusively delegated to the United States.” Hamilton’s main point is the convention had taken “the most pointed care” to ensure the powers “not explicitly divested in favor of the Union” remain “in full vigor” with the states.
The doctrine of enumerated powers is discussed throughout the Federalist Papers, but Madison’s discussion of three of those powers is especially important in light of developments in the twentieth century that have vastly expanded their scope. After reviewing the main areas over which the federal government would have power, he answers objections that were raised about the first of Congress’s enumerated powers: the power to tax “to provide for the common Defense and general Welfare of the United States.” That wording, skeptics charged, would allow the government virtually unlimited power toward those ends. Madison answers:
In other words, the terms “common Defense” and “general Welfare” are simply general headings. It is in the enumerated powers that follow where Congress finds the objects over which it has authority—and for which it may tax.
In a similar way, Madison addresses the function of Congress’s power to regulate “Commerce among the States,” saying without that power,
Thus, the commerce power was granted to ensure robust commerce—free especially from interference by the states. It was not, as it has become today, a power to regulate anything and everything for any reason whatsoever.
Finally, Madison answers those who had objected to the last of Congress’s eighteen enumerated powers—the power “to make all laws which shall be necessary and proper for carrying into execution the foregoing powers, and all other powers vested by this Constitution in the government of the United States, or in any department or officer thereof.” It would have been impossible, he writes, to have attempted “a complete digest of laws on every subject to which the Constitution relates.” And what if Congress should misconstrue this or any other of its powers?
Madison is confident the citizens of the nation will see to it that Congress does not exceed its delegated and enumerated powers.
And so in the Federalist Papers, as in the Declaration and the Constitution, we see the extraordinary thought and care that went into America’s Founding. The doctrine of enumerated powers was central to the Framers’ design. It granted the federal government enough power to discharge its responsibilities, but not so much as to threaten our liberty. But it is up to us, to each generation, to see to it that our officials are faithful to the principles the Framers secured through that extraordinary thought and care. |
An Electric Multiple Unit (EMU) is a type of trainset that powers itself. The earliest, and therefore most basic, EMUs were passenger carriages (rolling stock) that had electric traction motors added to them. The type gets its name from the multiple powered cars that make it up. Most EMUs have at least four cars, and many have traction motors on every car. One of the best features of EMUs is their high power-to-weight ratio, which makes them well suited to commuter lines that need rapid acceleration between stops to keep these demanding services on schedule. Underground metro EMUs usually use third-rail electrical systems, while above-ground EMUs mostly use pantographs.
The EMU originated around the turn of the twentieth century. In Britain, the City & South London Railway had been using EMUs since it opened with 450 volt DC third-rail sets in 1890. The London Underground makes extensive use of EMUs, as do most other underground metro lines.
EMUs were first adopted because of their low maintenance costs and electric coupling, which means that one driver is able to control a whole train. This seems all the more impressive when one considers that every car is powered, and that this was in the era of steam locomotives. For about fifty years, from around 1900, there was little change in the design, but in the 1950s new technology pushed past the old boundaries and reinvented the whole design of EMUs. Early EMUs used DC (direct current), but since the late 1950s AC (alternating current) traction motors have been used. Most third-rail systems are still low-voltage DC systems.
EMUs are very important in Europe. The Netherlands coined the phrase "Sprinter" (in a railway sense) to mean a two-car unit. British Rail was quick to adopt this themselves.
There are two main types of Electric Multiple Unit: those with dedicated power cars, and those with traction motors distributed throughout the set. The distributed arrangement lowers the axle load and/or gives better traction. In the power-car version, traction motors sit on the axles of two or more cars in the set; the non-powered cars are placed between the power cars, which are usually at each end.
- Book: The Complete Book of Locomotives, written by Colin Garratt & published by Hermes House. ISBN: 978-1-84477-022-9. |
A diamond has four sides and looks a bit like a square, but it has different angles. It has two pairs of parallel sides. Draw a diamond for your child to see as you explain this fun shape.
Another good way to show your child how to make a diamond is to start with two triangles. Put these together to show your child how the triangles fit together to make a diamond.
Here are some other ideas to teach your child about diamonds:
- Cut out diamonds of different shapes and sizes. Put these up around your house and go on a diamond hunt. Have your child practice tracing the diamond shapes with her finger.
- Take a field trip to a baseball field. Can you tell what shape the mound is? What about the bases? What shape do the ball players run around? Practice running around the bases.
- Sing “Twinkle, Twinkle Little Star” and hold up a diamond when you say “Like a Diamond in the Sky.”
- Draw three different-sized diamonds on a blackboard or three pieces of paper. Have your child determine which is the smallest and which is the largest.
- Color diamonds on a piece of paper.
- Make a diamond name game. Cut out diamond shapes- one for each letter of your child’s name and extra letters for the alphabet. If you are spelling Justin, you will need 32 diamonds – 6 for the letters in J-u-s-t-i-n and 26 more for each letter of the alphabet. Next, write Justin’s name on a big piece of paper. Have him go on a diamond hunt to match the letters in his name. For younger kids, only write the letters in his name on diamonds to hide around the house. Help him arrange the letters to spell his name.
- Put a bunch of foam shapes in a bowl or dish. Have your child sort out the diamond shapes.
- Use white labels and cut out diamond shapes. Practice putting these on a sheet of colored paper.
- Explain that diamonds can also be worn as jewelry. Show your child a photo of diamonds in a book or on the Internet. Ask her if the diamond jewelry looks like the diamond shape.
- Cut up straws into different-sized pieces. Draw diamond shapes on paper. Use the different-sized straws to match the edges of the diamond shapes on your paper. Then eat a snack of square crackers and try to turn the squares into diamonds.
- Make diamond-shaped kites out of construction paper.
- Fly a kite outside. What shape do you see? Can you get the kite to fly with or without wind?
- Make a diamond person. Cut out two large diamonds for the head and body. Cut out smaller diamonds for the eyes, ears, and nose. Decorate the rest of the diamond person. Don’t forget to name your diamond friend. Try to think of “D” names since diamond starts with “D.”
Have fun with the diamonds in your life. The more kids are exposed to shapes, the more they will be ready for preschool and kindergarten.
© Let’s Talk Kids, 2013 |
More Than Happy, Sad, Mad
If you ask students to identify how a character is feeling in a story, you might hear one of the following three words pop up: happy, sad, mad. Character emotions and traits are not something children automatically understand. To help students with this, we need to be intentional.
Inferring a character's feelings or personality traits is complicated. It involves looking beyond the words. It takes time to build this skill in young readers. Making it tangible can help. One way to make it more concrete is to put a face to it. Here are three activities, all of which will help with reading into a character's personality and emotions.
Try reading How Are You Peeling? Make sure to give students a close-up look at these funny fruits and vegetables. Amazing pictures! Take time to discuss the different emotions pictured. This book definitely lengthens the list to include amused, confused, frustrated, surprised, etc.
Another way to help young learners stretch their character emotion list requires some cutting up! Use a copy of the Let Your Voice be Heard poster; laminate it and cut it up to provide individual emotion cards. The labels on each picture will increase understanding of emotions. Or, cut the cards without the labels and ask students to use a dry-erase marker to write the character emotions they associate with that face on the back of each one.
Two 6th grade teachers took this a step further by photographing their students one at a time as each student showed a specified emotion with facial expression and body language. Tiana Perin and Veronica Tittle of Eastbrook Van Buren Elementary School (Van Buren, IN) then created a bulletin board highlighting all the different emotion options. The kids loved it because they could see classmates making all those funny faces. The teachers loved it because it broadened their students' idea of voice. I love it because it's kid-friendly and really gets the point across. Voice matters!
So, try one of these ideas to help your students go beyond the simple happy, sad, mad. As you see their character inference ability grow, I'm sure you'll be
Parenting Skills Essay
1. What is positive parenting?
2. What is discipline? How does it differ from punishment?
3. What is active listening? Why is it used by parents?
4. What is guidance? Provide an example of a parent providing guidance to a child?
5. Where can families and parents find support and resources?
Critical Thinking Questions
1. Why are consequences an important part of positive parenting?
2. Why is it important that parents establish a positive relationship and positive communication with babies and young children?
3. Imagine that you are a parent and your toddler begins speaking in “baby talk” frequently. Using what you’ve learned in the module, what are some ways that you might approach this situation?
4. Imagine that you are a parent and your school-aged daughter was caught shoplifting a bracelet from a store. Using what you’ve learned in the module, what are some of the ways that you might approach this situation?
Review Answers
1. Positive parenting is the raising of a child in a positive way: making clear guidelines, communicating well, and teaching good behavior.
2. Discipline is teaching your child by showing them what they did wrong and how to fix it so they don’t do it next time. It is not simply punishing them for what they did without explaining what they did wrong.
3. Active listening is when parents constantly give feedback about what the child does; this helps the child learn from what they do and how to improve on it.
4. Guidance is when you assist and help your child with a task they are handed, for example, explaining the proper way to do something.
5. Parents and families can get support from Head Start, Medicaid, the United States government, and the U.S. Children’s Bureau.
Critical Thinking Answers
1. If children do not have consequences, there’s a possibility that they will not learn right from wrong.
2. The child needs to learn to listen to the parents and respect them.
3. When the baby begins to speak in “baby talk” you can choose to ignore it, and that would be the best choice.
Welcome to the Learning Zone!
The following information will tell you about the brain, how it works and what it does. It will also tell you about different types of brain injury.
The brain controls everything we do with our body. It also controls what we think and say, and controls our emotions and who we are. Our brain is a bit like a computer as it controls everything we do with our body!
The brain is very delicate and is well protected by the skull. It is also protected by a water cushion called cerebrospinal fluid (CSF). It’s quite complicated so you might want to read sections of this again, maybe with your Mum and Dad.
Overview of the brain
The brain controls everything we do with our body. It also controls what we think and say, and controls our emotions and who we are.
If you were to look under a microscope you would see that the brain is made of 100 billion nerve cells called neurons! These neurons connect the brain to the rest of the body by the spinal cord.
A quick reference to all parts of the brain
The brain is made up of lots of parts a bit like a car engine. The engine controls the rest of the car and is hidden under the bonnet of the car, just like the brain which is hidden in the skull.
The skull is very hard bone and protects the brain from most knocks and falls that you have. Babies are born with a soft skull in order to give the brain chance to grow. When it has finished growing the skull becomes hard to keep the brain safe.
The Brain Stem
The brain stem sits at the very top of the spinal cord. The brain stem is the most primitive part of the brain. This means it hasn’t developed very much over the years. The brain stem’s only job is to keep you alive! The brain stem is separated into three other areas: mid-brain, pons and the medulla. The mid-brain sits at the top and allows both sides (or hemispheres) of the brain to communicate with each other.
The pons acts like a bridge. It has lots of nerve bundles that run through it. It also contains the fourth ventricle where the CSF passes through to go down the spine.
The medulla controls our heart and lungs which is also known as cardiac and respiratory function. It sounds very complicated but it basically means the brain stem keeps you alive and kicking!
The cerebellum sits behind the brain stem and sits in the very back of the skull. The cerebellum controls our sense of balance and helps us to co-ordinate movement.
The Parietal Lobes
The parietal lobes have two main jobs. Firstly they interpret what our senses detect and how this fits with what we see – for example, when your skin feels heat and your eyes see fire. The parietal lobes figure out that fire must be hot.
The parietal lobes also tell us what is part of our body and what is part of the environment. For example, if you stand in a field you are aware of wide open spaces and maybe a few cows. This is known as ‘spatial awareness’.
The Occipital Lobes
The occipital lobes are responsible for interpreting what the eyes see by recognising shapes and colours. The occipital lobes work together with the parietal lobes in that they ‘figure out’ what we see. They also figure out what we are looking at in order to help the parietal lobes figure out how big something is – like the field with the cows in it, for example.
The Frontal Lobes
The frontal lobes control so much of what we do with our bodies that doctors are learning new things about the frontal lobes all the time. The frontal lobe is our emotional control centre and our personality also grows and develops here. It is the area in your brain that makes you the person you are! For example, if you are a ‘happy-go-lucky person’, ‘a great thinker’, or ‘really funny’ or ‘serious’. It is all controlled here! The frontal lobe also controls body movement, allows us to solve problems and do mathematics, and is also the part of the brain that learns how to speak!
Most importantly, the frontal lobe allows us to think independently and be spontaneous. We also regulate our ‘impulses’ here. How many times do you want to do something and then think twice before doing it? Well, that is the frontal lobe in action! Our long-term memory is also stored here. Doctors aren’t too sure how memory is stored yet but they do know it is kept here. The frontal lobe also controls our sexual feelings or our ‘sexuality’ and also regulates how we interact with people, which is also known as ‘social interaction’.
The Temporal Lobes
The temporal lobes are involved in sorting out what we feel, taste, smell and hear. If you think of all the sensations you have every second of the day you will get a sense of how busy the temporal lobes have to work in order to tell you what is happening around you. Try this exercise: what can you hear? What can you smell? What is your skin feeling or sensing? I think you’ll agree there is a lot going on. The temporal lobes are busy sorting all of that information out.
A major function of the temporal lobes is to distinguish what you can hear. In particular, the temporal lobes organise sound, which is how you recognise words when someone is speaking. This is also known as ‘speech recognition’. It is also thought that our short-term memory is kept here. It allows you to remember what your friend said to you five minutes ago or what you had for breakfast.
Everyone bangs their head from time to time. Although it might hurt and you may get a bruise, there is rarely any damage caused to the brain. This is because the skull and the cerebrospinal fluid (CSF) protect the brain and keep it safe. However, sometimes you can hurt your head more seriously.
Head injury means a larger knock or bump to the head. This can be non-serious, like a small cut to your face or it can be more serious, maybe you broke your nose or were knocked unconscious. Doctors and nurses sometimes refer to an injury to the face or head as a ‘Head Injury’.
Acquired Brain Injury or ABI
An ABI means that your brain has been hurt after you were born. This can be caused by an accident, illness or operation. There are two types of ABI- traumatic brain injury and non-traumatic brain injury.
Traumatic Brain Injury
An ABI means the brain has been damaged as a result of an accident, illness or operation. Trauma means a knock to the body that causes someone to bruise, bleed or fracture a bone. Traumatic brain injury means a knock or blow that causes the brain to get bruised, cut, bleed or spin around inside the skull. Sometimes the skull might break too which is called a skull fracture and there are many different types of skull fracture.
Here is how you may get a traumatic brain injury and skull fracture:
- Being knocked down by a car and banging your head
- Falling off your bike or horse and banging your head
- Being hit on your head by a hockey stick
- Falling down the stairs or tripping over banging your head
- Gunshot wound
Non-traumatic Brain Injury
Non-traumatic brain injury means that the brain has been damaged by an illness. There are no cuts or broken bones but a non-traumatic brain injury is still very serious.
Examples of a non-traumatic brain injury include:
- Meningitis or Encephalitis which is caused by different bugs known as a virus or bacteria getting to the brain
- Brain tumour where some cells in the brain grow wrong or mutate and form a lump inside the brain
- Hypoxic injury where some of the brain cells die because they didn’t get enough oxygen
- Brain injury through some other part of the body going wrong such as the kidneys or liver
- Vascular problems, where there is a problem with the blood supply to the brain (the brain's 'plumbing')
Breast diseases can be classified either with disorders of the integument or with disorders of the reproductive system. A majority of breast diseases are noncancerous.
A breast neoplasm is an abnormal mass of tissue in the breast as a result of neoplasia. A breast neoplasm may be benign, as in fibroadenoma, or it may be malignant, in which case it is termed breast cancer. Either case commonly presents as a breast lump. Approximately 7% of breast lumps are fibroadenomas and 10% are breast cancer, the rest being other benign conditions or no disease. Phyllodes tumor is a fibroepithelial tumor which can be either benign, borderline or malignant.
Malignant neoplasms (breast cancer)
Among women worldwide, breast cancer is the most common cause of cancer death. Malignant breast neoplasm is cancer originating from breast tissue, most commonly from the inner lining of milk ducts or the lobules that supply the ducts with milk. Cancers originating from ducts are known as ductal carcinomas; those originating from lobules are known as lobular carcinomas. The size, stage, rate of growth, and other characteristics of the tumor determine the kinds of treatment. Treatment may include surgery, drugs, radiation and/or immunotherapy. Surgical removal of the tumor provides the single largest benefit, with surgery alone being capable of producing a cure in many cases. To somewhat increase the likelihood of long-term disease-free survival, several chemotherapy regimens are commonly given in addition to surgery. Most forms of chemotherapy kill cells that are dividing rapidly anywhere in the body, and as a result cause temporary hair loss and digestive disturbances. Radiation may be added to kill any cancer cells in the breast that were missed by the surgery, which usually extends survival somewhat, although radiation exposure to the heart may cause heart failure in the future. Some breast cancers are sensitive to hormones such as estrogen and/or progesterone, which makes it possible to treat them by blocking the effects of these hormones.
Prognosis and survival rate vary greatly depending on cancer type and staging. With best treatment and dependent on staging, 5-year relative survival varies from 98% to 23%, with an overall survival rate of 85%.
Worldwide, breast cancer comprises 22.9% of all cancers (excluding non-melanoma skin cancers) in women. In 2008, breast cancer caused 458,503 deaths worldwide. Breast cancer is more than 100 times more common in women than in men, although males tend to have poorer outcomes due to delays in diagnosis.
The seventeenth-century English physicist and mathematician Isaac Newton [1642–1727] developed a wealth of new mathematics (for example, calculus and several numerical methods, such as Newton's method) to solve problems in physics. Other important mathematical physicists of the seventeenth century included the Dutchman Christiaan Huygens [1629–1695] (famous for suggesting the wave theory of light) and the German Johannes Kepler [1571–1630] (Tycho Brahe's assistant, and discoverer of the equations for planetary motion/orbit).
In the eighteenth century, two of the innovators of mathematical physics were Swiss: Daniel Bernoulli [1700–1782] (for contributions to fluid dynamics, and vibrating strings), and, more especially, Leonhard Euler [1707–1783], (for his work in variational calculus, dynamics, fluid dynamics, and many other things). Another notable contributor was the Italian-born Frenchman, Joseph-Louis Lagrange [1736–1813] (for his work in mechanics and variational methods).
In the late eighteenth and early nineteenth centuries, important French figures were Pierre-Simon Laplace [1749–1827] (in mathematical astronomy, potential theory, and mechanics) and Siméon Denis Poisson [1781–1840] (who also worked in mechanics and potential theory). In Germany, both Carl Friedrich Gauss [1777–1855] (in magnetism) and Carl Gustav Jacobi [1804–1851] (in the areas of dynamics and canonical transformations) made key contributions to the theoretical foundations of electricity, magnetism, mechanics, and fluid dynamics.
Gauss's contributions to non-Euclidean geometry laid the groundwork for the subsequent development of Riemannian geometry by Bernhard Riemann [1826–1866]. As we shall see later, this work is at the heart of general relativity.
The nineteenth century also saw the Scot, James Clerk Maxwell [1831–1879], win renown for his four equations of electromagnetism, and his countryman, Lord Kelvin [1824–1907] make substantial discoveries in thermodynamics. Among the English physics community, Lord Rayleigh [1842–1919] worked on sound; and George Gabriel Stokes [1819–1903] was a leader in optics and fluid dynamics; while the Irishman William Rowan Hamilton [1805–1865] was noted for his work in dynamics.
The German Hermann von Helmholtz [1821–1894] is best remembered for his work in the areas of electromagnetism, waves, fluids, and sound. In the U.S.A., the pioneering work of Josiah Willard Gibbs [1839–1903] became the basis for statistical mechanics. Together, these men laid the foundations of electromagnetic theory, fluid dynamics and statistical mechanics.
The late nineteenth and the early twentieth centuries saw the birth of special relativity. This had been anticipated in the works of the Dutchman, Hendrik Lorentz [1853–1928], with important insights from Jules-Henri Poincaré [1854–1912], but which were brought to full clarity by Albert Einstein [1879–1955]. Einstein then developed the invariant approach further to arrive at the remarkable geometrical approach to gravitational physics embodied in general relativity. This was based on the non-Euclidean geometry created by Gauss and Riemann in the previous century.
Einstein's special relativity replaced the Galilean transformations of space and time with Lorentz transformations in four-dimensional Minkowski space-time. His general theory of relativity replaced the flat geometry of Minkowski space-time with that of a curved (pseudo-)Riemannian manifold, whose curvature is determined by the distribution of matter and energy. This replaced Newton's gravitational force vector with the Riemann curvature tensor.
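For concreteness, the standard Lorentz boost along the x-axis (a textbook form, not tied to any one of the papers mentioned) is:

$$x' = \gamma\,(x - vt), \qquad t' = \gamma\left(t - \frac{vx}{c^{2}}\right), \qquad y' = y, \qquad z' = z, \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}$$

In the limit v ≪ c, γ → 1 and these reduce to the Galilean transformations they replaced.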
Another revolutionary development of the twentieth century has been quantum theory, which emerged from the seminal contributions of Max Planck [1856–1947] (on black body radiation) and Einstein's work on the photoelectric effect.
This was, at first, followed by a heuristic framework devised by Arnold Sommerfeld [1868–1951] and Niels Bohr [1885–1962], but this was soon replaced by the quantum mechanics developed by Max Born [1882–1970], Werner Heisenberg [1901–1976], Paul Dirac [1902–1984], Erwin Schrödinger [1887–1961], and Wolfgang Pauli [1900–1958]. This revolutionary theoretical framework is based on a probabilistic interpretation of states, and evolution and measurements in terms of self-adjoint operators on an infinite dimensional vector space (Hilbert space, introduced by David Hilbert [1862–1943]).
Paul Dirac, for example, used algebraic constructions to produce a relativistic model for the electron, predicting its magnetic moment and the existence of its antiparticle, the positron.
Later important contributors to twentieth century mathematical physics include Satyendra Nath Bose [1894–1974], Julian Schwinger [1918–1994], Sin-Itiro Tomonaga [1906–1979], Richard Feynman [1918–1988], Freeman Dyson [1923–2020], Hideki Yukawa [1907–1981], Roger Penrose [1931– ], Stephen Hawking [1942–2018], Edward Witten [1951– ] and Rudolf Haag [1922–2016]. |
Astronomers used to think the universe contained 100–200 billion galaxies. Now, the population estimate has jumped up to 2 trillion.
The universe just got a lot bigger. Thanks to some new data from Hubble, scientists have calculated that there must be at least 2 trillion galaxies in the universe–a massive population spike from the previous estimates of a measly 100-200 billion galaxies. We haven’t yet set eyes on the vast majority of these, because they are small, faint, and very far away.
“It boggles the mind that over 90 percent of the galaxies in the universe have yet to be studied,” Christopher Conselice, a co-author on the study, said in a statement.
In an interview with Popular Science, Conselice explained how the team made this astonishing calculation. Last year, the group came up with a formula describing how galaxies are distributed by size: monstrously huge galaxies are very rare, very small galaxies are extremely numerous, and medium-sized galaxies fall somewhere in between.
Analyzing the number of faint galaxies that can be seen with the Hubble space telescope, Conselice’s team determined that there must be an astronomical number of galaxies that we can’t currently see. The team estimates that there are at least 10 times more galaxies than previously thought.
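To see why the shape of that size distribution drives the total count up so sharply, here is a minimal Python sketch using a Schechter-type mass function; the slope, characteristic mass, and detection limits below are hypothetical values chosen only to illustrate the effect, not the study's fitted numbers.

```python
import numpy as np

# Schechter-type mass function: a power law at the small-galaxy end with an
# exponential cut-off for giants, so small galaxies vastly outnumber big ones.
ALPHA = -1.4    # faint-end slope (assumed, for illustration)
M_STAR = 1e11   # characteristic mass in solar masses (assumed)

def count_between(m_min, m_max=1e13, n=4000):
    """Relative number of galaxies with masses between m_min and m_max (arbitrary units)."""
    m = np.logspace(np.log10(m_min), np.log10(m_max), n)
    x = m / M_STAR
    phi = x**ALPHA * np.exp(-x) / M_STAR                     # number density per unit mass
    return np.sum(0.5 * (phi[1:] + phi[:-1]) * np.diff(m))   # trapezoidal sum

# Lowering the detection limit from ~1e9 to ~1e7 solar masses multiplies the count:
print(round(count_between(1e7) / count_between(1e9), 1))     # roughly an order of magnitude
```

Under these assumed numbers, pushing the detection limit down by two orders of magnitude in mass multiplies the expected galaxy count by roughly a factor of ten, which is the flavour of the argument behind the new estimate.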
Back when the universe was a strapping young 1-billion-year-old, these galaxies were probably crammed together at a density 10 times higher than we see today. Over time they spread out, and many may have gotten eaten by larger galaxies.
When the James Webb Space Telescope launches in 2018, it may be able to spot a large portion of these previously undiscovered galaxies. Studying them could help explain how galaxies form and evolve.
“They’re the most common galaxies, and so when we start looking at them in the early universe, we’ll get some idea of how typical galaxies form, as opposed to the big, bright galaxies we can see now with Hubble, which are sort of the monsters, rarities, that may have unusual formation paths,” Conselice said. |
UNIX utilities are commands that, generally, perform a single task. It may be as simple as printing the date and time, or as complex as finding files that match many criteria throughout a directory hierarchy.
Many UNIX utilities are cryptically named, often using just the first two consonants of the long name: for example, cp (copy), mv (move), and ls (list).
Other utilities are the first letters of the first two or more words of the long name: for example, cd (change directory), pwd (print working directory), and du (disk usage).
Some are acronyms, using the initial letters and/or syllables of their names: for example, awk (from the names of its authors, Aho, Weinberger, and Kernighan), grep (global regular expression print), and sed (stream editor).
Other commands are words that describe the action they perform: for example, sort, find, kill, and date.
The only single-letter command found on all *nix varieties (though it is not part of the Single Unix Specification) is w, which reports who is logged in and what they are doing.
|
A photon is emitted with energy E = hf. One photon interacts with one electron on the surface of the metal. If the photon provides enough energy to overcome the work function of the metal, an electron is emitted; any extra energy becomes the kinetic energy of the electron when it leaves.
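In symbols, this is the standard Einstein photoelectric equation:

$$hf = \phi + E_{k,\max}$$

where h is the Planck constant, f the frequency of the radiation, φ the work function of the metal, and E_k,max the maximum kinetic energy of the emitted electron.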
The foil should be thin, otherwise α particles may be absorbed; a thin foil lets the α particles penetrate it and ensures each α particle is scattered only once.
The beam should be narrow so that there is a definite location where the scattering takes place, and so the scattering angle can be determined accurately.
Evidence that most of the mass is in the nucleus of the atom: backscattering (scattering angles greater than 90°) occurs, which requires the α particles to collide with a target (much) more massive than themselves.
How do mercury vapour and the coating of the inner tube of a fluorescent lamp produce visible light?
Mercury vapour at low pressure conducts. Atoms of mercury are excited by electron impact, producing mainly UV radiation. The UV is absorbed by the coating and excites its atoms; as their electrons cascade back down the energy levels, the coating emits visible light.
Duality of electrons.
This is when an electron behaves as both a wave and a particle. An example of it behaving as a wave is diffraction, and an example of it behaving as a particle is in the photoelectric effect.
Principal features of a gas deduced from its line spectrum
Photons have definite wavelengths/frequencies, so photons have discrete energies. Photons are emitted when electrons move down from one energy level to another, and the energy gaps between the levels are fixed.
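The energy of each emitted photon therefore equals the gap between the two levels:

$$hf = \frac{hc}{\lambda} = E_{1} - E_{2}$$

so each spectral line corresponds to one particular transition.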
What happens if the wavelength stays the same but the intensity is doubled?
The maximum kinetic energy depends only on the frequency (and hence the energy) of the incident radiation, so it remains unchanged. Doubling the intensity doubles the number of photoelectrons emitted per second, because one photon interacts with one electron.
The critical angle is the angle of incidence at the boundary from the more dense to the less dense medium that produces an angle of refraction of 90 degrees.
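In symbols, for light travelling from a medium of refractive index n₁ into a less dense medium of index n₂:

$$\sin\theta_{c} = \frac{n_{2}}{n_{1}} \qquad (n_{1} > n_{2})$$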
What would happen if the fibre was surrounded by glass?
The ray would leave the core, bending away from the normal; the critical angle would increase, and the light would travel faster in the surrounding glass than in the core.
Effect of increasing the intensity of UV light
The number of photoelectrons emitted per second will be increased
as the number of incident photons per second is increased, but the maximum kinetic energy of the photoelectrons will remain constant; a photon gives up all its energy in one collision.
Describe what happens to an atom in the ground state when it is excited.
an electron is excited/promoted to a higher level/orbit
reason for excitation: e.g. electron impact/light/energy externally applied
electron relaxes/de-excites/falls back, emitting a photon/em radiation
wavelength depends on the energy change
Pair production: a γ photon or other high-energy photon (or kinetic energy)
is converted to a particle and its antiparticle,
e.g. p + p̄ or e− + e+
Why does the kinetic energy of photoelectrons have a maximum value?
incident photon energy is fixed
[or photoelectron receives a fixed amount of energy]
photon loses all its energy in a single interaction
electron can lose various amounts of energy to escape from the metal
electrons have a maximum energy = photon energy − work function
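As a worked sketch with hypothetical values (a metal with work function 2.0 eV illuminated by light of frequency 7.0 × 10¹⁴ Hz):

$$E_{k,\max} = hf - \phi = \frac{(6.63\times10^{-34}\,\text{J s})(7.0\times10^{14}\,\text{Hz})}{1.6\times10^{-19}\,\text{J/eV}} - 2.0\,\text{eV} \approx 2.9\,\text{eV} - 2.0\,\text{eV} = 0.9\,\text{eV}$$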
What role does an exchange particle play in an interaction?
It transfers energy and momentum between the interacting particles, carries the force between them, and sometimes transfers charge.
Why is there a minimum energy in pair production?
The gamma ray must provide at least enough energy to account for the rest mass of the two products; any extra energy gives the products kinetic energy.
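In symbols, for electron–positron pair production the minimum (threshold) photon energy is twice the electron rest energy:

$$hf_{\min} = 2m_{e}c^{2} \approx 2 \times 0.511\,\text{MeV} \approx 1.02\,\text{MeV}$$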
What happens if the intensity of the light is reduced in the photoelectric effect?
Then fewer photoelectrons will be emitted per second. The energy, and hence the speed, with which they leave the metal remains the same, because the energy of each photon is fixed and one photon interacts with one electron.
Why are no photoelectrons released when the blue filter is replaced with a red filter?
There needs to be enough energy to overcome the work function of the metal. A photon of red light has less energy than a photon of blue light, because photon energy is proportional to frequency, E = hf. |