fork – create a new process
Standard C Library (libc, -lc)
The fork() system call causes creation of a new process. The new process (child process) is an exact copy of the calling process (parent process) except for the following:
- The child process has a unique process ID.
- The child process has a different parent process ID (i.e., the process ID
of the parent process).
- The child process has its own copy of the parent's descriptors. These
descriptors reference the same underlying objects, so that, for instance,
file pointers in file objects are shared between the child and the parent,
so that an lseek(2) on a descriptor in the
child process can affect a subsequent read(2)
or write(2) by the parent. This descriptor
copying is also used by the shell to establish standard input and output
for newly created processes as well as to set up pipes.
- The child process' resource utilizations are set to 0; see setrlimit(2).
- All interval timers are cleared; see setitimer(2).
- The child process has only one thread, corresponding to the calling thread
in the parent process. If the process has more than one thread, locks and
other resources held by the other threads are not released and therefore
only async-signal-safe functions (see
sigaction(2)) are guaranteed to work in the
child process until a call to execve(2) or an exit.
Upon successful completion, fork() returns a
value of 0 to the child process and returns the process ID of the child
process to the parent process. Otherwise, a value of -1 is returned to the
parent process, no child process is created, and the global variable errno
is set to indicate the error.
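The return-value convention above can be demonstrated through Python's os.fork(), a thin wrapper over the system call (a minimal sketch; Unix-only, and the exit status 7 is an arbitrary illustrative value):

```python
import os

pid = os.fork()  # thin wrapper over fork(2)
if pid == 0:
    # Child process: fork() returned 0.
    # os._exit() terminates the child immediately with the given status.
    os._exit(7)

# Parent process: fork() returned the child's (positive) process ID.
_, status = os.waitpid(pid, 0)           # reap the child
child_status = os.WEXITSTATUS(status)    # recover the child's exit status
```

Note that the child exits with os._exit() rather than sys.exit(), the usual practice in a forked child so that cleanup handlers registered by the parent's interpreter do not run twice.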
The fork() system call will fail and no child
process will be created if:
- The system-imposed limit on the total number of processes under execution
would be exceeded. The limit is given by the
sysctl(3) MIB variable
KERN_MAXPROC. (The limit is actually
ten less than this except for the super user).
- The user is not the super user, and the system-imposed limit on the total
number of processes under execution by a single user would be exceeded.
The limit is given by the sysctl(3) MIB variable KERN_MAXPROCPERUID.
- The user is not the super user, and the soft resource limit corresponding
to the resource argument
RLIMIT_NPROC would be exceeded (see setrlimit(2)).
- There is insufficient swap space for the new process.
The fork() function appeared in Version 6 AT&T UNIX.
(Depiction of a clustering model, Medium)
Getting started with machine learning starts with understanding the how and why behind employing particular methods. We’ve chosen five of the most commonly used machine learning models on which to base the discussion.
Before diving too deep, we thought we’d define some important terms that are often confused when discussing machine learning.
- Algorithm – A set of predefined rules used to solve a problem. For example, simple linear regression is a prediction algorithm used to find a target value (y) based on an independent variable (x).
- Model – The actual equation or computation that is developed by applying sample data to the parameters of the algorithm. To continue the simple linear regression example, the model is the equation of the line of best fit of the x and y values in the sample set plotted against each other.
- Neural network – A multilayered algorithm that consists of an input layer, output layer, and a hidden layer in the middle. The hidden layer is a series of stacked algorithms that iterate until the computer chooses a final output. Neural networks are sometimes referred to as “black box” algorithms because humans don’t have a clear and structured idea how the computer is making its decisions.
- Deep learning – Machine learning methods based on neural network architecture. “Deep” refers to the large number of algorithms employed in the hidden layer (often more than 100).
- Data science – A discipline that combines math, computer science, and business/domain knowledge.
Machine learning methods
Machine learning methods are often broken down into two broad categories: supervised learning and unsupervised learning.
Supervised learning – Supervised learning methods are used to find a specific target, which must also exist in the data. The main categories of supervised learning include classification and regression.
- Classification – Classification models often have a binary target sometimes phrased as a “yes” or “no.” A variation on this model is probability estimation in which the target is how likely a new observation is to fall into a particular category.
- Regression – Regression models always have a numeric target. They model the relationship between a dependent variable and one or more independent variables.
Unsupervised learning – Unsupervised learning methods are used when there is no specific target to find. Their purpose is to form groupings within the dataset or make observations about similarities. Further interpretation would be needed to make any decisions on these results.
- Clustering – Clustering models look for subgroups within a dataset that share similarities. These natural groupings are similar to each other, but different than other groups. They may or may not have any actual significance.
- Dimension reduction – These models reduce the number of variables in a dataset by grouping similar or correlated attributes.
It’s important to note that individual models are not necessarily used in isolation. It often takes a combination of supervised and unsupervised methods to solve a data science problem. For example, one might use a dimension-reduction method on a large dataset and then use the new variables in a regression model.
To that end, model pipelining splits machine learning workflows into modular, reusable parts that can be coupled with other model applications to build more powerful software over time.
What are the most popular machine learning algorithms?
Below we’ve detailed some of the most common machine learning algorithms. They’re often mentioned in introductory data science courses and books and are a good place to begin. We’ve also provided some examples of how these algorithms are used in a business context.
Linear regression
Linear regression is a method in which you predict an output variable using one or more input variables. This is represented in the form of a line: y=bx+c. The Boston Housing Dataset is one of the most commonly used resources for learning to model using linear regression. With it, you can predict the median value of a home in the Boston area based on 14 attributes, including crime rate per town, student/teacher ratio per town, and the number of rooms in the house.
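The line-fitting step can be sketched in pure Python using the closed-form least-squares solution (illustrative toy data, not the Boston Housing set):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = b*x + c with one input variable."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x); intercept from the means
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    c = mean_y - b * mean_x
    return b, c

b, c = fit_line([1, 2, 3, 4], [3, 5, 7, 9])  # points lying on y = 2x + 1
```

With more input variables, the same idea generalizes to multiple regression, which in practice is fitted with a linear-algebra or machine-learning library.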
K-means clustering
K-means clustering is a method that forms groups of observations around geometric centers called centroids. The “k” refers to the number of clusters, which is determined by the individual conducting the analysis. Clustering is often used as a market segmentation approach to uncover similarity among customers or uncover an entirely new segment altogether.
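The centroid-forming loop (Lloyd's algorithm) can be sketched in a few lines of pure Python; the two well-separated point "blobs" below stand in for customer segments:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Lloyd's algorithm: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialize from the data
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: sum(
                (a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[nearest].append(p)
        # re-center; keep the old centroid if a cluster emptied out
        centroids = [tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl
                     else centroids[i] for i, cl in enumerate(clusters)]
    return centroids

centroids = kmeans([(0, 0), (0, 1), (1, 0),
                    (10, 10), (10, 11), (11, 10)], k=2)
```

Production code would add a convergence check and several random restarts, since the result depends on the initial centroid placement.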
Principal component analysis (PCA)
PCA is a dimension-reduction technique used to reduce the number of variables in a dataset by grouping together variables that are measured on the same scale and are highly correlated. Its purpose is to distill the dataset down to a new set of variables that can still explain most of its variability.
A common application of PCA is aiding in the interpretation of surveys that have a large number of questions or attributes. For example, global surveys about culture, behavior or well-being are often broken down into principal components that are easy to explain in a final report. In the Oxford Internet Survey, researchers found that their 14 survey questions could be distilled down to four independent factors.
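For intuition, the first principal component of two-dimensional data can be computed directly from the 2x2 covariance matrix (a toy sketch; real PCA on a 14-question survey would use a linear-algebra library across all dimensions):

```python
from math import atan2, cos, sin

def principal_axis(points):
    """Unit vector along the direction of maximum variance
    (the first principal component) for 2-D data."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    # entries of the 2x2 covariance matrix
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    # the top eigenvector of a symmetric 2x2 matrix lies at this angle
    theta = 0.5 * atan2(2 * sxy, sxx - syy)
    return cos(theta), sin(theta)

vx, vy = principal_axis([(0, 0), (1, 1), (2, 2), (3, 3)])  # data on y = x
```

Projecting each observation onto this axis collapses two correlated variables into one, which is exactly the dimension reduction described above, just in the smallest possible case.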
K-nearest neighbors (k-NN)
Nearest-neighbor reasoning can be used for classification or prediction depending on the variables involved. It compares the distance (often Euclidean) between a new observation and those already in a dataset. The “k” is the number of neighbors to compare and is usually chosen to minimize the chance of overfitting or underfitting the data.
In a classification scenario, the new observation is assigned to the class held by the majority of its k nearest neighbors; for this reason, k is often an odd number to prevent ties. For a prediction model, an average of the targeted attribute of the neighbors predicts the value for the new observation.
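The majority-vote classification step can be sketched in a few lines of pure Python (the tiny labeled dataset and the "blue"/"red" labels are made up for illustration):

```python
from collections import Counter
from math import dist  # Euclidean distance (Python 3.8+)

def knn_classify(train, query, k=3):
    """Majority vote among the k nearest labeled points.
    train is a list of ((x, y), label) pairs."""
    neighbors = sorted(train, key=lambda item: dist(item[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

train = [((0, 0), "blue"), ((0, 1), "blue"), ((1, 0), "blue"),
         ((9, 9), "red"), ((9, 10), "red"), ((10, 9), "red")]
label = knn_classify(train, (1, 1), k=3)
```

For a regression variant, the final step would average the neighbors' numeric target instead of voting.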
Classification and regression trees (CART)
Decision trees are a transparent way to separate observations and place them into subgroups. CART is a well-known version of a decision tree that can be used for classification or regression. You choose a response variable and make partitions through the predictor variables. The computer typically chooses the number of partitions to prevent underfitting or overfitting the model. CART is useful in situations where “black box” algorithms may be frowned upon due to inexplicability, because interested parties need to see the entire process behind a decision.
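The core partitioning step can be sketched as a single-split "stump": try each candidate threshold on one predictor and keep the one that minimizes the weighted Gini impurity of the two partitions (a hand-rolled illustration, not a full CART implementation, which would recurse on each partition across all predictors):

```python
def gini(labels):
    """Gini impurity: chance of mislabeling a random draw from labels."""
    n = len(labels)
    return 1 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(xs, ys):
    """Return the threshold on x that best separates the labels in ys."""
    best_t, best_score = None, float("inf")
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        # weighted average impurity of the two resulting partitions
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score

t, score = best_split([1, 2, 3, 10, 11, 12],
                      ["no", "no", "no", "yes", "yes", "yes"])
```

Because every split is just a threshold comparison, the resulting tree can be read as a chain of plain if/else rules, which is what makes CART transparent to interested parties.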
How do I choose the best model for machine learning?
The model you choose for machine learning depends greatly on the question you are trying to answer or the problem you are trying to solve. Additional factors to consider include the type of data you are analyzing (categorical, numerical, or maybe a mixture of both) and how you plan on presenting your results to a larger audience.
The five model types discussed herein do not represent the full collection of model types out there, but are commonly used for most business use cases we see today. Using the above methods, companies can conduct complex analysis (predict, forecast, find patterns, classify, etc.) to automate workflows.
To learn more about machine learning models, visit algorithmia.com.
Mercury is a naturally occurring chemical, but it can become harmful when it contaminates fresh and seawater areas. Fish and other aquatic animals ingest the mercury, and it is then passed along the food chain until it reaches humans. Mercury in humans may cause a wide range of conditions including neurological and chromosomal problems and birth defects.
What is mercury?
Mercury is a naturally occurring element in the Earth's crust that is released into the environment by natural events such as volcanic activity. Mercury commonly occurs in three forms: elemental, inorganic and organic.
"Human activities like coal burning, gold mining and chloralkali manufacturing plants currently contribute the vast majority of the mercury released into our environment," said Dr. Anne M. Davis, an assistant professor of nutrition and dietetics and director of the Didactic Program in Dietetics at the University of New Haven.
When mercury is released into the atmosphere, it dissolves in fresh water and seawater. A type of mercury called methylmercury is most easily accumulated in the body and is particularly dangerous. About 80 to 90 percent of organic mercury in a human body comes from eating fish and shellfish, and 75 to 90 percent of organic mercury existing in fish and shellfish is methylmercury, according to a paper published in the Journal of Preventive Medicine and Public Health.
Once in the water, mercury makes its way into the food chain. Inorganic mercury and methylmercury are first consumed by phytoplankton, single-celled algae at the base of most aquatic food chains. Next, the phytoplankton are consumed by small animals such as zooplankton. The methylmercury is assimilated and retained by the animals, while the inorganic mercury is shed from the animals as waste products, explained Davis. Small fish that eat the zooplankton are exposed to food-borne mercury that is predominantly in the methylated form. These fish are consumed by larger fish, and so on until it gets to humans.
"Because the methylmercury is highly assimilated and lost extremely slowly from fish, there is a steady build-up of this form of mercury in aquatic food chains, such that long-lived fish at the top of the food chain are highly enriched in methylmercury," said Davis. "Methylmercury therefore displays clear evidence of biomagnification, where its concentrations are higher in predator tissue than in prey tissue."
Mercury poisoning is a slow process that can take months or years, according to the National Institutes of Health (NIH). Since the process is so slow, most people don't realize they are being poisoned right away. Mercury from food sources is absorbed into the bloodstream through the intestinal wall, and then carried throughout the body. The kidneys, which filter the blood, can accumulate mercury over time. Other organs can also be affected.
Negative health effects from methylmercury may include neurological and chromosomal problems. According to the NIH, long-term exposure to organic mercury can cause:
- uncontrollable shaking or tremor
- numbness or pain in certain parts of the skin
- blindness and double vision
- inability to walk well
- memory problems
- death with large exposures
"Most notable are the effects of mercury on the brain," said Aimee Phillippi, a professor of biology at Unity College in Unity, Maine. "Mercury poisoning can result in hearing and vision changes, personality changes, memory problems, seizures or paralysis. When children are exposed to mercury, they may have developmental or muscle coordination problems. Mercury interferes with the calcium channels that cells, especially nerve and muscle cells, use to carry out their functions."
The toxicity of methylmercury may also have reproductive consequences. Pregnant women who eat fish and seafood contaminated with methylmercury may have an increased risk of miscarriage, or of having a baby with deformities or severe nervous system diseases. These birth defects can happen even if the mother doesn't seem to be poisoned.
A study by the Universidad de los Andes, Bogota, Colombia, found that eating food contaminated with methylmercury can even alter the chromosomes in humans.
What to look for
While the answer to mercury contamination could be to avoid all fish and seafood, that wouldn't be a healthy choice. "The problem lies in that fish high in methylmercury are also high in omega-3 DHA (docosahexaenoic) fatty acids, which are important for pregnant and lactating women and infants and children. DHA is needed for vision (retina development), immune status, fetal and infant brain development and cognition and heart health," said Davis.
The FDA recommends eating two to three servings (8 to 12 ounces or 227 to 340 grams) of fish each week. The key to getting the health benefits with the least amount of mercury is eating fish that are low in mercury content or by obtaining DHA through dietary supplements such as fish oil and algal oil.
Some fish and seafood that are low in mercury, according to the Natural Resources Defense Council, include:
- Domestic crab
- Croaker (Atlantic)
- Haddock (Atlantic)
- Mackerel (N. Atlantic, Chub)
- Perch (ocean)
- Canned salmon
- Fresh salmon
- Shad (American)
- Sole (Pacific)
- Squid (calamari)
- Freshwater trout
Be sure to avoid fish that are commonly high in mercury, which include tilefish from the Gulf of Mexico, shark, swordfish and king mackerel, according to the FDA. Also avoid marlin, bluefish, grouper, Spanish and Gulf mackerel and Chilean sea bass.
Limit consumption of white (albacore) tuna and any freshwater fish that isn't one of the safe fish listed above to 6 oz (170 g) a week or 1 to 3 oz (28 to 85 g) per week for children. After eating the 6 oz (170 g) of fish, do not consume any more fish of any kind for the week.
There is a disagreement about how much fish pregnant women should consume. The Mayo Clinic recommends that women who are pregnant avoid eating any fish, while the FDA and the Environmental Protection Agency (EPA) recommend 12 ounces (340 g) per week. The 2010 Dietary Guidelines for Americans recommend 8 to 12 ounces (227 to 340 g) per week.
"Unless you get all your seafood tested, you cannot avoid mercury 100 percent," said Phillippi. "However, you can significantly reduce your exposure by choosing species that are lower on the food chain. Fish like haddock, flounder, pollock, herring, as well as most shellfish, eat lower on the food chain and are therefore less likely to have much mercury in them."
Phillippi went on to explain that the FDA does test for mercury in seafood and helps to keep fish with unacceptable levels out of the consumer market. This means an individual piece of fish that is bought commercially should not be cause for concern. When eating large quantities of fish, though, make eating choices based on the species with the lowest amount of mercury.
'Teachers who intend to make a marked difference in their students' learning and lives will profit from reading this book. Not only will they find the material useful, they will be gratified and strengthened in their commitment' - Leah Welte, Teacher
Alpine School District, American Fork, UT
In this revised edition of Begin With the Brain, Martha Kaufeldt helps teachers identify and apply brain-compatible and learner-centred techniques to improve classroom instruction and management.
In easy-to-understand terms, this invaluable resource for beginning teachers lays out what cognitive research and best practice can teach us about:
- Classroom set up
- Routines and procedures
- Community building
- Fostering intellectual curiosity
- Self-assessment, and more
With succinct updates of the relevant neuroscience research and end-of-chapter notes on additional resources, the book is a ready-to-go source of field-tested, research-based teaching tips teachers can put to immediate use.
Descriptions of Dizziness
Dizziness can be described in many ways, such as feeling lightheaded, unsteady, giddy, or feeling a floating sensation. “Vertigo” is a specific symptom experienced as an illusion of movement of one’s self or the environment. Some experience dizziness in the form of motion sickness, a nauseating feeling brought on by the motion of riding in an airplane, a roller coaster, or a boat. Dizziness, vertigo, and motion sickness all relate to the sense of balance and equilibrium. Your sense of balance is maintained by a complex interaction of the following parts of the nervous system:
- The inner ear (also called the labyrinth), which monitors the directions of motion, such as turning, rolling, forward-backward, side-to-side, and up-and-down motions.
- The eyes, which monitor where the body is in space (i.e., upside down, right side up, etc.) and also directions of motion.
- The pressure receptors in the joints of the lower extremities and the spine, which tell what part of the body is down and touching the ground. The muscle and joint sensory receptors (also called proprioception) tell what parts of the body are moving.
- The central nervous system (the brain and spinal cord), which processes all the information from the other systems to maintain balance and equilibrium.
The symptoms of motion sickness and dizziness appear when the central nervous system receives conflicting messages from the other systems.
Causes of Dizziness
Circulation: If your brain does not get enough blood flow, you feel lightheaded. Almost everyone has experienced this on occasion when standing up quickly from a lying-down position. But some people have light-headedness from poor circulation on a frequent or chronic basis. This could be caused by arteriosclerosis or hardening of the arteries, and it is commonly seen in patients who have high blood pressure, diabetes, or high levels of blood fats (cholesterol). It is sometimes seen in patients with inadequate cardiac (heart) function, hypoglycemia (low blood sugar), or anemia (low iron).
- Certain drugs also decrease the blood flow to the brain, especially stimulants such as nicotine and caffeine. Excess salt in the diet also leads to poor circulation. Sometimes circulation is impaired by spasms in the arteries caused by emotional stress, anxiety, and tension.
- If the inner ear fails to receive enough blood flow, the more specific type of dizziness—vertigo—occurs. The inner ear is very sensitive to minor alterations of blood flow and all of the causes mentioned for poor circulation to the brain also apply specifically to the inner ear.
Neurological diseases: A number of diseases of the nerves can affect balance, such as multiple sclerosis, syphilis, tumors, etc. These are uncommon causes, but your doctor may perform certain tests to evaluate these.
Anxiety: Anxiety can be a cause of dizziness and lightheadedness. Unconscious overbreathing (hyperventilation) can be experienced as overt panic, or just mild dizziness with tingling in the hands, feet, or face. Instruction on correct breathing techniques may be required.
Vertigo: An unpleasant sensation of the world rotating, usually associated with nausea and vomiting. Vertigo usually is due to an issue with the inner ear. The common causes of vertigo are (in order):
- Benign Positional Vertigo (BPPV)
- Meniere’s disease
- Migraine disease: Some individuals with a prior classical migraine headache history can experience vertigo attacks similar to Meniere’s disease. Usually there is an accompanying headache, but it can also occur without the headache.
- Infection: Viruses can attack the inner ear or balance (vestibular) nerves, causing acute vertigo (lasting days) with or without hearing loss (termed labyrinthitis or vestibular neuronitis, depending on the symptoms). Less commonly, a bacterial infection such as mastoiditis can extend into the inner ear and cause dizziness.
- Injury: A skull fracture that damages the inner ear produces a profound and incapacitating vertigo with nausea and hearing loss. The dizziness will last for several weeks and slowly improve as the other (normal) side takes over. BPPV commonly occurs after head injury.
Allergy: Some people experience dizziness and/or vertigo attacks when they are exposed to foods or airborne particles (such as dust, molds, pollens, dander, etc.) to which they are allergic.
Seek Medical Attention
Call 911 or go to an emergency room if you experience:
- Dizziness after a head injury
- Fever over 101°F, headache, or very stiff neck
- Convulsions or ongoing vomiting
- Chest pain, heart palpitations, shortness of breath, weakness, a severe headache, inability to move an arm or leg, change in vision or speech
- Fainting and/or loss of consciousness
Consult your doctor if you:
- Have never experienced dizziness before
- Experience a difference in symptoms you have had in the past
- Suspect that medication is causing your symptoms
- Experience hearing loss
The doctor will ask you to describe your dizziness and answer questions about your general health. Along with these questions, your doctor will examine your ears, nose, and throat. Some routine tests will be performed to check your blood pressure, nerve and balance function, and hearing.
Possible additional tests may include a CT or MRI scan of your head, special tests of eye motion after warm or cold water or air is used to stimulate the inner ear (VNG-videonystagmography), and in some cases, blood tests or a cardiology (heart) evaluation. Balance testing may also include other special tests of the inner ear and balance system such as a VEMP. Your doctor will determine the best treatment based on your symptoms and the cause of them. Treatments may include medications and balance exercises.
Motion Sickness Tips
If you are subject to motion sickness:
- Do not read while traveling
- Avoid sitting in the rear seat
- Do not sit in a seat facing backward
- Do not watch or talk to another traveler who is having motion sickness
- Avoid strong odors and spicy or greasy foods immediately before and during your travel
- Talk to your doctor about medications
Last year, NASA selected the DAVINCI mission as part of its Discovery program. It will investigate the origin, evolution, and present state of Venus in unparalleled detail from near the top of the clouds to the planet’s surface. Venus, the hottest planet in the solar system, has a thick, toxic atmosphere filled with carbon dioxide, and the pressure at its surface is a crushing 1,350 psi (93 bar).
Named after visionary Renaissance artist and scientist Leonardo da Vinci, the DAVINCI (Deep Atmosphere Venus Investigation of Noble gases, Chemistry, and Imaging) mission will be the first probe to enter the Venus atmosphere since NASA’s Pioneer Venus in 1978 and the USSR’s Vega in 1985. It is scheduled to launch in the late 2020s.
Now, in a recently published paper, NASA scientists and engineers give new details about the agency’s Deep Atmosphere Venus Investigation of Noble gases, Chemistry, and Imaging (DAVINCI) mission, which will descend through the layered Venus atmosphere to the surface of the planet in mid-2031. DAVINCI is the first mission to study Venus using both spacecraft flybys and a descent probe.
DAVINCI, a flying analytical chemistry laboratory, will measure critical aspects of Venus’ massive atmosphere-climate system for the first time, many of which have been measurement goals for Venus since the early 1980s. It will also provide the first descent imaging of the mountainous highlands of Venus while mapping their rock composition and surface relief at scales not possible from orbit. The mission supports measurements of undiscovered gases present in small amounts in the deepest atmosphere, including the key ratio of hydrogen isotopes – components of water that help reveal the history of water, either as liquid water oceans or steam within the early atmosphere.
NASA has selected the DAVINCI+ (Deep Atmosphere Venus Investigation of Noble-gases, Chemistry and Imaging +) mission as part of its Discovery program, and it will be the first probe to enter the Venus atmosphere since NASA’s Pioneer Venus in 1978 and USSR’s Vega in 1985. Named for visionary Renaissance artist and scientist, Leonardo da Vinci, the DAVINCI+ mission will bring 21st-century technologies to the world next door. DAVINCI+ may reveal whether Earth’s sister planet looked more like Earth’s twin planet in a distant, possibly hospitable past with oceans and continents. Credit: NASA’s Goddard Space Flight Center
The mission’s carrier, relay, and imaging spacecraft (CRIS) has two onboard instruments that will study the planet’s clouds and map its highland areas during flybys of Venus and will also drop a small descent probe with five instruments that will provide a medley of new measurements at very high precision during its descent to the hellish Venus surface.
“This ensemble of chemistry, environmental, and descent imaging data will paint a picture of the layered Venus atmosphere and how it interacts with the surface in the mountains of Alpha Regio, which is twice the size of Texas,” said Jim Garvin, lead author of the paper in the Planetary Science Journal and DAVINCI principal investigator from NASA’s Goddard Space Flight Center in Greenbelt, Maryland. “These measurements will allow us to evaluate historical aspects of the atmosphere as well as detect special rock types at the surface such as granites while also looking for tell-tale landscape features that could tell us about erosion or other formational processes.”
DAVINCI will make use of three Venus gravity assists, which save fuel by using the planet’s gravity to change the speed and/or direction of the CRIS flight system. The first two gravity assists will set CRIS up for a Venus flyby to perform remote sensing in the ultraviolet and the near infrared light, acquiring over 60 gigabits of new data about the atmosphere and surface. The third Venus gravity assist will set up the spacecraft to release the probe for entry, descent, science, and touchdown, plus follow-on transmission to Earth.
The first flyby of Venus will be six and a half months after launch, and it will take two years to get the probe into position for entry into the atmosphere over Alpha Regio under ideal lighting at “high noon,” with the goal of measuring the landscapes of Venus at scales ranging from 328 feet (100 meters) down to finer than one meter. Such scales enable lander-style geologic studies in the mountains of Venus without requiring landing.
Once the CRIS system is about two days away from Venus, the probe flight system will be released along with the three-foot (one-meter) diameter titanium probe safely encased inside. The probe will begin to interact with the Venus upper atmosphere at about 75 miles (120 kilometers) above the surface. The science probe will commence science observations after jettisoning its heat shield around 42 miles (67 kilometers) above the surface. With the heat shield jettisoned, the probe’s inlets will ingest atmospheric gas samples for detailed chemistry measurements of the sort that have been made on Mars with the Curiosity rover. During its hour-long descent to the surface, the probe will also acquire hundreds of images as soon as it emerges under the clouds at around 100,000 feet (30,500 meters) above the local surface.
“The probe will touch down in the Alpha Regio mountains but is not required to operate once it lands, as all of the required science data will be taken before reaching the surface,” said Stephanie Getty, deputy principal investigator from Goddard. “If we survive the touchdown at about 25 miles per hour (12 meters/second), we could have up to 17-18 minutes of operations on the surface under ideal conditions.”
DAVINCI is tentatively scheduled to launch June 2029 and enter the Venusian atmosphere in June 2031.
“No previous mission within the Venus atmosphere has measured the chemistry or environments at the detail that DAVINCI’s probe can do,” said Garvin. “Furthermore, no previous Venus mission has descended over the tesserae highlands of Venus, and none have conducted descent imaging of the Venus surface. DAVINCI will build on what Huygens probe did at Titan and improve on what previous in situ Venus missions have done, but with 21st century capabilities and sensors.”
Reference: “Revealing the Mysteries of Venus: The DAVINCI Mission” by James B. Garvin, Stephanie A. Getty, Giada N. Arney, Natasha M. Johnson, Erika Kohler, Kenneth O. Schwer, Michael Sekerak, Arlin Bartels, Richard S. Saylor, Vincent E. Elliott, 24 May 2022, The Planetary Science Journal.
NASA Goddard is the principal investigator institution for DAVINCI and will perform project management for the mission, provide science instruments as well as project systems engineering to develop the probe flight system. Goddard also leads the project science support team with an external science team from across the US. Discovery Program class missions like DAVINCI complement NASA’s larger “flagship” planetary science explorations, with the goal of achieving outstanding results by launching more smaller missions using fewer resources and shorter development times. They are managed for NASA’s Planetary Science Division by the Planetary Missions Program Office at Marshall Space Flight Center in Huntsville, Alabama.
Major partners for DAVINCI are Lockheed Martin, Denver, Colorado, The Johns Hopkins University Applied Physics Laboratory in Laurel, Maryland, NASA’s Jet Propulsion Laboratory, Pasadena, California, Malin Space Science Systems, San Diego, California, NASA’s Langley Research Center, Hampton, Virginia, NASA’s Ames Research Center at Moffett Federal Airfield in California’s Silicon Valley, and KinetX, Inc., Tempe, Arizona, as well as the University of Michigan in Ann Arbor.
In this worksheet, there are four very cute chicks, and today we will teach the number four to our children. To keep things fun, we can start by letting the children paint the chicks. Then we can show them some objects, four in total. First, we count the objects from one to four and have the child repeat after us; this builds aural and verbal skills. The child should also learn how to write the number four, so we teach that as well. We can hold the child's hand and write together, or, as you see in the worksheet, the child can draw the number four by tracing the numbered arrow marks. The child should repeat these movements a few times to improve handwriting and strengthen hand muscles. This number 4 tracing worksheet for kindergarten, preschoolers, and kids is in PDF format and very easy to print. First color the four chicks, then start to trace. Have fun!
The influence of the Roman Empire throughout the world is undeniable. Art, poetry, music, and architecture have especially benefited from the ingenuity of a civilization that at one time spanned from Northern Africa to the waters of the British Isles. However, culture does not spread without communication, that necessary link in the human exchange of knowledge called language. Like all languages, Latin’s life stretches beyond pre-history, its origins forever lost. What we do know about Latin survives to us in a sporadic collection of writings that only hint at the language’s rich history.
Throughout the early part of the first millennium B.C., the Italian peninsula was subject to a string of wars and conflicts in which multiple cultures battled for supremacy. The ebb and flow of various factions’ strength made lasting impressions on the peninsula and influenced the beginnings of Roman history to the extent that Latin almost surely would have perished had certain powers not won out over their rivals.
The Italic family of the centum branch of Indo-European languages is where Latin finds a home, among a multitude of languages and dialects. Some of the modern Romance languages that owe their origin to Latin are French, Italian, Spanish, Portuguese and Romanian. English, however, is often mistaken for a Romance language by beginning Latin students because of the huge number of words in English with direct and indirect Latin origins.
Although Latin scholars disagree on the beginning and ending dates for the different periods in the language’s history, Latin can be broken down into seven periods with approximate dates given below:
Old Latin (origin – 75 B.C.)
Classical Latin (75 B.C. – 200 A.D.)
Vulgar Latin (200 – 900)
Medieval Latin (900 – 1300)
Renaissance Latin (1300 to 1500)
New Latin (1500 – Present)
Contemporary Latin (1900 – Present)
The following articles present a brief history of the Latin language periods, providing a synopsis of cultural, grammatical, and style differences that mark each major division. In addition and where appropriate, it is indicated where Latin has had an influence on modern languages both subordinate and cognate to Latin.
It should be noted that most modern Latin courses are based on the classical period. This period is noted for its important works by Caesar, Cicero, Augustus and other prominent authors of the time. It may be interesting for the beginning student to witness the evolution of Latin through the classical period and beyond to gain a clearer perspective of the language. Students should especially take note of the cultural issues shaping the language.
COSC 1550 – Introduction to Programming
Circle the letter of the statement that best answers/completes each question. Each question is worth three points.
1. Even when there is no power to the computer, data can be retained in:
a. Secondary storage
b. The Input Device
c. The Output Device
2. Words that have a special meaning and may only be used for their intended purpose are known as:
b. Programmer Defined Words
c. Key Words
3. The name for a memory location that may hold data is:
a. Key Word
4. A _____________ is a complete instruction that causes the computer to perform some action.
d. Key Word
5. A variable declaration announces the name of a variable that will be used in a program, as well as:
a. What type of data it will be used to hold
b. The operators that will be used on it
c. The number of times it will be used in the program
d. The area of the code in which it will be used
6. Three primary activities of a program are:
a. Variables, Operators, and Key Words
b. Lines, Statements, and Punctuation
c. Integer, Floating-point and Character
d. Input, Processing, and Output
7. Mistakes that cause a program to produce erroneous results are called:
a. Syntax errors
b. Logic errors
c. Compiler errors
d. Linker errors
8. Computer programs are also known as ___________.
9. _______ is an example of a volatile type of memory, used for temporary storage.
a. A floppy disk
c. A hard disk
10. _____________ involves rules that must be followed when constructing a program.
11. Which of the following is a preprocessor directive?
a. pay = hours * rate;
b. #include <iostream.h>
c. // This program calculates the user's pay.
d. void main(void)
12. The programmer usually enters source code into a computer using:
b. A text editor
c. A hierarchy chart
d. A compiler
13. In a broad sense, the two primary categories of programming languages are:
a. Mainframe and PC
b. Single-tasking and Multi-tasking
c. Low-level and High-level
d. COBOL and BASIC
14. In the process of translating a source file into an executable file, which of the following is the correct sequence?
a. Source code, preprocessor, modified source code, linker,
object code, compiler, executable code.
b. Preprocessor, source code, compiler, executable code,
linker, modified source code, object code.
c. Source code, compiler, modified source code, preprocessor,
object code, linker, executable code.
d. Source code, preprocessor, modified source code, compiler,
object code, linker, executable code.
15. What is the output of the C++ cout instruction?
cout << 15 / 4 * 4 << endl;
16. In programming terms, a group of characters inside a set of double quotation marks is called a:
a. String constant
17. The ________ is used to mark the end of a complete C++ programming statement.
a. Pound Sign
c. Data type
18. Which character signifies the beginning of an escape sequence?
19. If you use a C++ Key Word as an identifier, your program will:
a. Execute with unpredictable results
b. Receive a compiler error
c. Refuse to compile
d. Compile, link, but not execute
20. In the C++ instruction, cookies = number % children;
given the following declaration statement:
int number = 38, children = 4, cookies;
what is the value of cookies after the execution of the statement?
21. This function in C++ allows you to identify how many bytes of storage on your computer system an integer data value requires.
22. Character constants in C++ are always enclosed in ______.
b. "double quotation marks"
d. 'single quotation marks'
23. _______________ are used to declare variables that can hold numbers that contain a decimal point.
a. Integer data types
b. Real data types
c. Floating point data types
d. Long data types
24. The float data type is considered _____ precision, and the double data type is considered _______ precision.
a. single, double
b. float, double
c. integer, double
d. short, long
25. A variable whose value can be either true or false has a data type of:
26. How many bytes of memory are needed to store the literal string "This is a test"?
27. A variable's _____ is the part of the program that has access to the variable.
a. data Type
28. This control sequence is used to skip over to the next horizontal tab stop.
29. Indicate whether the following variable names are valid or invalid. If a variable name is invalid, state the reason why.
30. Show the results that will display on the console, when the following code segments are executed. Show a blank line with a series of dots, such as . . . . . . . .
int year = 2004;
cout << "This year is " << year << endl;
cout << "\nNext year is " << year +1 << endl;
cout << "\n\nThe last year is " << "year + 1" << endl;
double length = 10.0;
double width = 2.0;
double area = 0.0;
cout << "The area of a rectangle is " << area << "\n" << endl;
cout << "The area of this rectangle with a length of " << length
     << " and a width of " << width
     << " is " << length * width << endl;
bool isEmployed = true;
cout << "It is " << isEmployed << " that I have a job" << endl;
Mary Walton was a 19th century American inventor who devised methods for minimising the effects of pollution caused by the industrial revolution. An independent inventor, she was very disturbed by the thick smoke emitted by the factories that had sprung up in escalating numbers during the industrial revolution. In addition, she lived near the railway tracks and found the noise pollution caused by the trains too much to bear. Refusing to be a mute spectator to the onslaught of pollution of different kinds, she worked in her basement to come up with methods of reducing the negative effects of rampant pollution. She patented a device that minimized the smoke emitted into the air—a smoke-burner that brought about an improvement in locomotive and other chimneys. She also devised a more environment-friendly system for the railways and later sold the rights to her noise-reducing method to the Metropolitan Railroad of New York City. This system was soon adopted by other railway companies as well. Her inventions earned her much national acclaim and she became one of the few women of her era who actually received the recognition and financial benefits for their scientific endeavours in an overwhelmingly male dominated society.
- Not much is known about Mary Walton’s childhood or early life. From her own accounts it is known that she had no brothers; so her father—a progressive-minded man—encouraged his daughters to get a good education and pursue their intellectual interests.
Later Years
- Mary Walton lived in an era when the industrial revolution spawned a number of factories and manufacturing units in the country. The industries in American society were thriving, but they also gave birth to a new problem: rampant pollution of an unprecedented scale. An intelligent and creative woman, Mary Walton was an independent inventor. She was thoroughly disturbed by the thick smoke billowing out of the locomotive and factory chimneys and worked hard to find a solution for this. She developed a method of reducing the environmental hazards caused by the emission of smoke from locomotive, industrial and residential chimneys, and patented this method in 1879. It is believed her method considerably helped to bring down the dangers posed by the growing levels of pollution in the nation. She lived near the railway tracks and found the deafening noise produced by trains as they hurtled past her home to be particularly sickening. The elevated railway systems were rapidly expanding in New York City, and so were the rising levels of noise pollution. She worked in her basement using a model railroad track and performed experiments on it to devise a method of reducing the noise caused by trains running over the tracks. Finally she was able to create a system that deadened the sound produced by the trains and had it patented in 1881. Her sound-dampening system was so effective that she sold the rights to the Metropolitan Railroad for $10,000. Very soon, the system was adopted by other elevated railway companies as well.
Major Works
- In 1879, she patented a device that deflected the smoke and emissions from chimneys into water tanks, later flushing them out into the cities’ sewage system. This method helped to minimize the smoke that was emitted into the atmosphere. Another of her major inventions helped to reduce the vibrations and noise pollution caused by trains running on the elevated railway systems that were becoming increasingly popular in New York City. This invention of hers earned her much recognition and financial gains.
Since 1997, researchers have been able to quantum teleport photons with a major record being set by researchers at the University of Science and Technology of China in Shanghai. In 2010, that team successfully teleported a photon over 16km. Now that same team has released new findings, in which they claim to have teleported photons nearly 100km, or over 60 miles.
Now, quantum teleportation isn’t quite the same thing as the teleportation in Star Trek. When researchers teleport a photon, they aren’t teleporting the actual photon, but rather the information contained in it through quantum entanglement. In essence, the second photon at the end of the teleport becomes the first one – or at least, it becomes an identical qubit of information. So the information is exchanged without actually travelling through the intervening distance.
(If that sounds bizarre and frightening, you’re in good company. Albert Einstein understatedly called the process of quantum entanglement “spooky action at a distance.”)
The challenge for quantum teleportation is that it has to be done in free space. Fiberoptics don’t work, because once you get to distances over about 1 kilometer, the fiber absorbs so much light that the information is lost. But while a fiberoptic cable can keep photons focused, moving over free space means using lasers – which inevitably causes the beam of light to spread out over time. However, using a powerful laser along with some other optical equipment, the researchers here developed a technique to keep the beam focused over the course of 97km, and successfully achieved quantum teleportation.
The ability to teleport information means that it could be possible to have worldwide communications that are impossible to listen in on. Because in quantum teleportation, the information doesn’t travel over any intervening distances, there’s no way to tap into the communication. As Technology Review notes, “these guys clearly have their eye on the possibility of satellite-based quantum cryptography which would provide ultra secure communications around the world.”
This activity aligns with, but is not limited to, Realidades 1 Tema 8A, and it focuses on the preterite along with 8A vocabulary. Some previous chapter vocabulary is thrown in! Students are given scenarios and situations for which they need to formulate sentences in the preterite. In the first section, only the YO form is practiced. The second and third sections require more student input.
I have used this as a warm-up, a speaking activity, and even as a homework assignment. You can have students write out answers as homework and/or have them e-mail you an audio clip.
Thanks for looking, and check out my other Realidades 1 material.
-The Spanish Señora
Scientists claim to have found a biomarker of Sudden Infant Death Syndrome (SIDS), a “world-first breakthrough” that has the potential to someday reduce the number of babies dying unexpectedly.
Researchers from the Children's Hospital at Westmead in Australia found that levels of an enzyme called butyrylcholinesterase (BChE) were significantly lower in babies who later died of SIDS.
Since BChE is known to play a role in the brain’s arousal pathway, the researchers argue a deficiency of the enzyme likely reduces an infant’s ability to wake or respond to the external environment, raising the risk of SIDS.
The research was recently published in The Lancet’s journal eBioMedicine. It was led by Dr Carmel Harrington, Honorary Research Fellow at Children's Hospital at Westmead, who lost her own child to SIDS 29 years ago and has since dedicated her career to uncovering the cause of this tragic syndrome.
“An apparently healthy baby going to sleep and not waking up is every parent’s nightmare and until now there was absolutely no way of knowing which infant would succumb. But that’s not the case anymore,” Dr Harrington said in a statement.
“Babies have a very powerful mechanism to let us know when they are not happy. Usually, if a baby is confronted with a life-threatening situation, such as difficulty breathing during sleep because they are on their tummies, they will arouse and cry out. What this research shows is that some babies don’t have this same robust arousal response,” she explained.
SIDS, sometimes known as "cot death," is the sudden and unexpected death of a seemingly healthy baby less than one-year-old. The CDC estimates there are around 3,400 cases of SIDS and other unexpected infant deaths in the US every year. Thankfully, rates of the syndrome have been declining since the 1990s, although there are still major racial and ethnic differences, particularly among Native Americans, Alaska Natives, and Black people.
To reach their findings, the team looked at BChE levels in 722 Dried Blood Spots (DBS) taken at birth as part of the Newborn Screening Program. Levels of BChE were measured in both SIDS-related deaths and infants dying from other causes, then compared to 10 surviving infants with the same date of birth and gender.
Armed with this knowledge, the team says babies could potentially be screened for BChE to give parents and doctors an understanding of whether they are at a higher risk of SIDS. The researchers also hope to follow up the research by looking at ways to address the enzyme deficiency and actively reduce the risk of SIDS in high-risk babies.
“This discovery has opened up the possibility for intervention and finally gives answers to parents who have lost their children so tragically. These families can now live with the knowledge that this was not their fault,” Dr Harrington commented.
While the findings are promising, some groups have called for some caution when reading bold headlines about the research.
"The findings of this study are interesting and more work needs to be done," the Lullaby Trust, a British charity aiming to prevent unexpected deaths in infancy, said in a statement sent to IFLScience.
"While research is underway, we urge all parents and carers with infants to continue following the evidence-based safer sleep advice to reduce the risk of SIDS occurring. This includes: always sleeping baby on their back in a clear sleep space on a flat, firm and waterproof mattress with no bulky bedding, pillows or cot bumpers," they added.
"Claims that a cause of SIDS has been found could give false hope to families whose baby has died suddenly and unexpectedly and may downplay the continued importance of the safer sleep advice."
Updated 13/05/2022: This article has been updated to include a statement from the Lullaby Trust.
Correction: An earlier version of this article incorrectly claimed a cause for SIDS may have been found. It has been corrected to reflect that researchers instead identified a biomarker associated with an increased likelihood of the syndrome.
By next spring, vessels off the coast of British Columbia will no longer be permitted to approach within 200 metres of southern resident killer whales, announced DFO Minister Dominic LeBlanc on October 26; this is 100 metres more than the current distance. Approximately thirty offshore tour companies have decided to adopt this new minimum approach distance immediately. Will this protective measure be sufficient to prevent the extinction of a population of which there are just 77 individuals left?
The population of southern resident killer whales has been declining for the past two decades and no growth is projected under current conditions. The main threats to the survival of this population include the depletion of its preferred prey (chinook salmon), underwater noise and disturbance caused by human activities, and high levels of contaminants (such as PCBs) that accumulate in its tissues.
Noise from commercial and recreational vessels of all types obscures the sound frequencies used by killer whales to detect prey and communicate. Additionally, boats in proximity alter the killer whale’s behaviour, thereby reducing its feeding efficiency. The killer whale not only requires abundant prey, but also a habitat that is quiet enough to locate and hunt these prey. Reducing noise levels and disturbance by establishing a minimum approach distance is therefore an important first step to increasing this population’s chances of survival.
How far is far enough?
For certain species, a minimum approach distance has been established in a number of places where there is a large whale-watching industry. Ideally, the exact distance should reflect the radius within which the presence of boats masks the sound frequencies used by the animals or triggers behavioural changes (e.g. interruption of vital activities such as feeding, breathing, resting or caring for young). However, the impact that a boat has on an animal depends on many factors, including the type of craft, its speed and angle of approach, the presence of other noise sources and the species of whale in question. Determining an appropriate approach distance is therefore a complex task.
In British Columbia, there are currently no regulations that require captains to maintain a minimum distance from southern resident killer whales. There is, however, a recommendation to stay at least 100 metres away. The new rules will require vessels to maintain a distance of 100 metres from any marine mammal and 200 metres from southern resident killer whales.
On the other side of the border, Washington State has had a law since 2011 requiring boats to stay at least 180 metres away from killer whales.
Even tougher regulations are in effect on this side of the continent, in the Saguenay-St. Lawrence Marine Park, where watercraft must maintain a distance of at least 200 metres from any cetacean and at least 400 metres from any marine mammal that is endangered, including belugas and blue whales. Studies on St. Lawrence belugas and blue whales support this minimum approach distance of 400 metres. Published in 2017, a study conducted by researchers at Fisheries and Oceans Canada and the Rimouski Institute of Ocean Sciences shows that boats within 400 metres of a blue whale disturb the latter’s feeding activities.
Will establishing a minimum approach distance of 200 metres in British Columbia allow the population of southern resident killer whales to recover? Taking into consideration development projects in the region, which will increase noise pollution, as well as the predicted effects of climate change on the abundance of chinook salmon, a study recently published in Scientific Reports estimates that the population of southern resident killer whales has an approximately 25% risk of becoming extinct within the next century if nothing is done to protect it. Researchers estimate that the recovery of this population would require either a 30% increase in chinook salmon abundance, or an increase of at least 15% in chinook salmon abundance combined with a 50% decrease in boat-related noise and disturbance. “The most important message of our study,” says Paul Paquet, a Raincoast Conservation Foundation researcher and one of the authors of this study, “is that with appropriate and resolute actions, the survival chances of these iconic whales over the next 100 years can be significantly improved.”
Further studies will be needed to demonstrate whether or not 200 metres is sufficient or if this minimum distance should be increased to 400 metres, as in the St. Lawrence. Its impact will also depend on other measures implemented in killer whale habitat in Canada and the US, such as imposing speed limits to reduce noise and increasing the availability of prey.
The history of Ukraine is divided into the following periods:
Prehistory
During this period, the territory of Ukraine was settled by its first peoples. The earliest settlers were Pithecanthropus. The oldest traces of their presence in Ukraine are found near the villages of Korolevo and Rokosovo. Later, Neanderthals inhabited the territory of Ukraine; they learned to build homes from mammoth bones and to make fire, and began to wear clothes made from animal skins. Subsequently, Cro-Magnons occupied the territory of Ukraine. Cro-Magnons were more similar to modern humans.
Kievan Rus - In 882, Kyiv was conquered from the Khazars by the Varangian noble Oleg, who started the long period of rule of the Rurikid princes. In the 11th century, Kievan Rus' was, geographically, the largest state in Europe, becoming known in the rest of Europe as Ruthenia. Kievan Rus reached its greatest prosperity during the reigns of Yaroslav the Wise and Vladimir the Great, who embraced Christianity.
Yaroslav the Wise
Vladimir the Great
During his rule, Yaroslav the Wise built an extraordinary architectural masterpiece - St. Sophia Cathedral, which is protected by UNESCO.
Baptism of Kyivan Rus
Sights of Kyiv Rus
Savior-Transfiguration Cathedral in Chernihiv
Assumption Church in Kiev
Early modern period
After the Union of Lublin in 1569 and the formation of the Polish–Lithuanian Commonwealth Ukraine fell under Polish administration, becoming part of the Crown of the Kingdom of Poland. The period immediately following the creation of the Commonwealth saw a huge revitalisation in colonisation efforts. Many new cities and villages were founded.
The first mention of the Cossacks appeared in 1489. In 1556, the first Sich was founded on the island of Khortytsya. From then on, the Cossack movement in Ukraine developed actively.
The Cossack with musket, emblem of the Zaporizhian Host, and later the Hetmanate and the Ukrainian State.
Zaporizhian Sich on the island Khortytsya
Dmitry Vyshnevetskyy - founder of the first Zaporizhzhya Sich.
Bohdan Khmelnitsky - led the National Liberation War of the Ukrainian people against the Commonwealth
Philip Orlik - author of the first constitution in the history of Ukraine
By Alexandra Dolce • May 29, 2020•Writers in Residence
On May 17, 2020, NASA unveiled the Artemis Accords. The Artemis Accords are a set of principles and processes by which the United States and other countries agree on how the Moon is to be explored. www.spaceref.com/artemis
According to NASA, the raison d’etre of this Accord is to “establish a common set of principles to govern the civil exploration and use of outer space”. Bilateral Artemis Accords agreements, based on the 1967 Outer Space Treaty are aimed at “creating a safe and transparent environment which facilitates exploration, science, and commercial activities for the benefit of humanity”. www.nasa.gov
The Artemis Principles include:
- Peaceful Purposes: all activities on the Moon and in outer space will be conducted for peaceful purposes based on the Outer Space Treaty.
- Transparency: Artemis partner nations will be required to be transparent when describing their space policies and plans.
- Interoperability: partner nations should strive to support interoperability (the ability of computer systems or software to exchange and make use of information) as much as possible.
- Emergency Assistance: NASA and partner nations commit to providing assistance to those in need on the Moon and in outer space.
- Registration of Space Objects: the Accords stress the importance of registering objects that go into outer space and to the Moon.
- Release of Scientific Data: partners will agree to release their scientific data to ensure global benefit.
- Protecting Heritage: parties commit to the protection of sites and artifacts of historic value found in outer space and on the Moon.
- Space Resources: resource extraction will be conducted under the umbrella of the Outer Space Treaty (specifically Articles II, VI, and XI).
- Deconfliction of Activities: partner nations will provide public information regarding the location(s) of their operations, “which will inform the scale and scope of Safety Zones”.
- Orbital Debris and Spacecraft Disposal: partner nations will comply with the principles of the Space Debris Guidelines of the United Nations.
These Artemis Accords are significant because they reflect the initial purpose of outer space exploration. The current U.S. Administration seems to want to privatize space and has a “winner takes all” mentality. The Accords shift that focus back to the original intent of space exploration, which is “the common interest of all mankind in the progress of the exploration and use of outer space for peaceful purposes”.
Please refer to the Rescue Agreement and my initial post
Please refer to my initial post in January
Please refer to my post in April
Please refer to the Outer Space Treaty of 1967
The changeover to decimal currency. The word could also mean any conversion to decimal, such as metrication of weights and measures, or changing hex or binary numbers to decimal, but in practice it refers to the eventual conversion of the British currency and the same system in other Commonwealth countries. Britain went over to decimal on 15 February 1971. Australia had done so in 1966, and most of the Commonwealth changed over around those years too.
The old currency was 12 pence (plural of penny) = 1 shilling, 20 shillings = 1 pound. A large amount was written as e.g. £5 10s. 6d. An amount of just shillings and pence could also be written e.g. 2/6 for 2s. 6d. (pronounced "two and six"), and 2s. on its own could be written 2/- (never 2/0). Tables of conversion filled the backs of books like logarithm tables.
I don't know what the first decimal currency was, but in 1704 Tsar Peter the Great of Russia created the system that remains to this day, 100 kopecks = 1 rouble. The decimal system was popularized by the rational reforms of the United States and the French Revolution, and spread to the rest of the world (along with the metric system in most places): by the early 1900s only the British Empire was unconverted. I'd be curious to know if there were any other countries that kept non-decimal systems, as I can't think of any.
Actually Britain got into the act in 1849 by issuing the florin or two-shilling coin (one tenth of a pound). This was intended to pave the way for decimalisation, but it never eventuated. However, Canada did adopt dollars and cents in the mid nineteenth century, doubtless because they were next door to the USA.
Most countries changed the name of their currency on decimalisation (to dollars and cents in many), but Britain retained both pound and penny and dropped only the shilling. Instead of 240 pence = 1 pound it became 100 pence = 1 pound. We chose to keep the pound the same, and its subdivision was originally called the new penny, symbol p instead of the old symbol d (Latin denarius). About ten years afterwards, the word "new" was dropped from coinage, but the penny is now almost always called "pee", not penny.
The shilling and two-shilling (florin) pieces were renamed several years before decimalisation, since 1/- = 5p and 2/- = 10p exactly, but other old coins had to be scrapped: the halfpenny, penny, threepence, and sixpence. (The farthing or quarter of a penny had been abolished in the early 1960s.) Instead we got a half new penny (which has since disappeared, being worthless), and 1p, 2p, 20p, and 50p coins.
Given a currency of 240 pence = 1 pound, there are two ways to decimalise it. We chose to retain the pound as is, and revalue the new penny to a hundredth; and this is what was begun with the 1849 florin. The alternative method was proposed in 1887: proofs of a ducat of 100 existing pence (8/4) were issued, but the currency was never put into circulation.
By the time children are finished with first grade, they have usually become fairly proficient at handwriting. It might be a bit shaky sometimes and perhaps really messy when they are in a hurry, but they have a fairly good grasp of how to hold a pencil and how to form the letters and the numerals.
But, what if your child is still having significant difficulties with handwriting? What do you do then?
Below, I will touch briefly on a few common handwriting issues and discuss when to seek additional help.
- Left-handedness - This is certainly not a problem, but it can pose challenges for young children when they are just learning to write. Since their hand moves in a different direction across the paper, their writing can be concealed as they write or their writing can be smudged. Finding a comfortable position that is effective might take a bit of extra attention. One suggestion is to encourage your child to tip their paper to the right so that their hand moves more below the line than on top of it.
- Backward letters - Writing letters backwards is very common with young children, and most correct themselves as they begin to learn the alphabet better and discriminate between letters that look alike. When you notice certain letters being written backwards (“b” and “d” are common culprits), gently point out the difference to your child. Using fun analogies makes it easier to remember: “b” has a belly, etc.
- Writing is too light - Sometimes children have a hard time using enough pressure to make their marks legible. This is often the result of weakness in fingers or upper body, so back up and encourage some of the strengthening activities that we mentioned in this post and this post. It could also be the result of an incorrect pencil grip, so check that and help them learn to check themselves when they write.
- Letters are huge - When children first learn to write, their letters are usually large because their muscle development is still immature. As their muscles mature, their letters get smaller and they become able to write on a line or within a given space. Huge letters generally indicate that muscle development is lagging a bit, so activities that focus on muscle strength and coordination (like crayon flipping, etc.) can be helpful.
When to seek additional help:
Because handwriting develops naturally for many children, parents often avoid seeking help when they notice difficulties, assuming the problems will correct themselves. However, when a child struggles with handwriting, they can become frustrated, which can affect their self-image and performance on tests, and can even lead to anxiety or behavior issues. Since handwriting is needed in virtually every subject, seeking help as soon as possible can help your child succeed in school and boost their self-confidence as well.
If your child continues to have difficulties with handwriting in general, first talk to their teacher. Most schools (public and private) have systems in place to screen and/or evaluate students for additional help. You can also enlist the aid of an OT (Occupational Therapist) privately, and in most cases, these services are covered by insurance.
MIA: Writers Archive: Mary Beard
History of the United States. Charles Beard, Mary Beard, 1921
Friends of the Constitution in Power. – In the first Congress that assembled after the adoption of the Constitution, there were eleven Senators, led by Robert Morris, the financier, who had been delegates to the national convention. Several members of the House of Representatives, headed by James Madison, had also been at Philadelphia in 1787. In making his appointments, Washington strengthened the new system of government still further by a judicious selection of officials. He chose as Secretary of the Treasury, Alexander Hamilton, who had been the most zealous for its success; General Knox, head of the War Department, and Edmund Randolph, the Attorney-General, were likewise conspicuous friends of the experiment. Every member of the federal judiciary whom Washington appointed, from the Chief Justice, John Jay, down to the justices of the district courts, had favored the ratification of the Constitution; and a majority of them had served as members of the national convention that framed the document or of the state ratifying conventions. Only one man of influence in the new government, Thomas Jefferson, the Secretary of State, was reckoned as a doubter in the house of the faithful. He had expressed opinions both for and against the Constitution; but he had been out of the country acting as the minister at Paris when the Constitution was drafted and ratified.
An Opposition to Conciliate. – The inauguration of Washington amid the plaudits of his countrymen did not set at rest all the political turmoil which had been aroused by the angry contest over ratification. “The interesting nature of the question,” wrote John Marshall, “the equality of the parties, the animation produced inevitably by ardent debate had a necessary tendency to embitter the dispositions of the vanquished and to fix more deeply in many bosoms their prejudices against a plan of government in opposition to which all their passions were enlisted.” The leaders gathered around Washington were well aware of the excited state of the country. They saw Rhode Island and North Carolina still outside of the union. They knew by what small margins the Constitution had been approved in the great states of Massachusetts, Virginia, and New York. They were equally aware that a majority of the state conventions, in yielding reluctant approval to the Constitution, had drawn a number of amendments for immediate submission to the states.
The First Amendments – a Bill of Rights. – To meet the opposition, Madison proposed, and the first Congress adopted, a series of amendments to the Constitution. Ten of them were soon ratified and became in 1791 a part of the law of the land. These amendments provided, among other things, that Congress could make no law respecting the establishment of religion, abridging the freedom of speech or of the press or the right of the people peaceably to assemble and petition the government for a redress of grievances. They also guaranteed indictment by grand jury and trial by jury for all persons charged by federal officers with serious crimes. To reassure those who still feared that local rights might be invaded by the federal government, the tenth amendment expressly provided that the powers not delegated to the United States by the Constitution, nor prohibited by it to the states, are reserved to the states respectively or to the people. Seven years later, the eleventh amendment was written in the same spirit as the first ten, after a heated debate over the action of the Supreme Court in permitting a citizen to bring a suit against “the sovereign state” of Georgia. The new amendment was designed to protect states against the federal judiciary by forbidding it to hear any case in which a state was sued by a citizen.
Funding the National Debt. – Paper declarations of rights, however, paid no bills. To this task Hamilton turned all his splendid genius. At the very outset he addressed himself to the problem of the huge public debt, daily mounting as the unpaid interest accumulated. In a Report on Public Credit under date of January 9, 1790, one of the first and greatest of American state papers, he laid before Congress the outlines of his plan. He proposed that the federal government should call in all the old bonds, certificates of indebtedness, and other promises to pay which had been issued by the Congress since the beginning of the Revolution. These national obligations, he urged, should be put into one consolidated debt resting on the credit of the United States; to the holders of the old paper should be issued new bonds drawing interest at fixed rates. This process was called “funding the debt.” Such a provision for the support of public credit, Hamilton insisted, would satisfy creditors, restore landed property to its former value, and furnish new resources to agriculture and commerce in the form of credit and capital.
Assumption and Funding of State Debts. – Hamilton then turned to the obligations incurred by the several states in support of the Revolution. These debts he proposed to add to the national debt. They were to be “assumed” by the United States government and placed on the same secure foundation as the continental debt. This measure he defended not merely on grounds of national honor. It would, as he foresaw, give strength to the new national government by making all public creditors, men of substance in their several communities, look to the federal, rather than the state government, for the satisfaction of their claims.
Funding at Face Value. – On the question of the terms of consolidation, assumption, and funding, Hamilton had a firm conviction. That millions of dollars’ worth of the continental and state bonds had passed out of the hands of those who had originally subscribed their funds to the support of the government or had sold supplies for the Revolutionary army was well known. It was also a matter of common knowledge that a very large part of these bonds had been bought by speculators at ruinous figures – ten, twenty, and thirty cents on the dollar. Accordingly, it had been suggested, even in very respectable quarters, that a discrimination should be made between original holders and speculative purchasers. Some who held this opinion urged that the speculators who had paid nominal sums for their bonds should be reimbursed for their outlays and the original holders paid the difference; others said that the government should “scale the debt” by redeeming, not at full value but at a figure reasonably above the market price. Against the proposition Hamilton set his face like flint. He maintained that the government was honestly bound to redeem every bond at its face value, although the difficulty of securing revenue made necessary a lower rate of interest on a part of the bonds and the deferring of interest on another part.
Funding and Assumption Carried. – There was little difficulty in securing the approval of both houses of Congress for the funding of the national debt at full value. The bill for the assumption of state debts, however, brought the sharpest division of opinions. To the Southern members of Congress assumption was a gross violation of states’ rights, without any warrant in the Constitution and devised in the interest of Northern speculators who, anticipating assumption and funding, had bought up at low prices the Southern bonds and other promises to pay. New England, on the other hand, was strongly in favor of assumption; several representatives from that section were rash enough to threaten a dissolution of the union if the bill was defeated. To this dispute was added an equally bitter quarrel over the location of the national capital, then temporarily at New York City.
A deadlock, accompanied by the most surly feelings on both sides, threatened the very existence of the young government. Washington and Hamilton were thoroughly alarmed. Hearing of the extremity to which the contest had been carried and acting on the appeal from the Secretary of the Treasury, Jefferson intervened at this point. By skillful management at a good dinner he brought the opposing leaders together; and thus once more, as on many other occasions, peace was purchased and the union saved by compromise. The bargain this time consisted of an exchange of votes for assumption in return for votes for the capital. Enough Southern members voted for assumption to pass the bill, and a majority was mustered in favor of building the capital on the banks of the Potomac, after locating it for a ten-year period at Philadelphia to satisfy Pennsylvania members.
The United States Bank. – Encouraged by the success of his funding and assumption measures, Hamilton laid before Congress a project for a great United States Bank. He proposed that a private corporation be chartered by Congress, authorized to raise a capital stock of $10,000,000 (three-fourths in new six per cent federal bonds and one-fourth in specie) and empowered to issue paper currency under proper safeguards. Many advantages, Hamilton contended, would accrue to the government from this institution. The price of the government bonds would be increased, thus enhancing public credit. A national currency would be created of uniform value from one end of the land to the other. The branches of the bank in various cities would make easy the exchange of funds so vital to commercial transactions on a national scale. Finally, through the issue of bank notes, the money capital available for agriculture and industry would be increased, thus stimulating business enterprise. Jefferson hotly attacked the bank on the ground that Congress had no power whatever under the Constitution to charter such a private corporation. Hamilton defended it with great cogency. Washington, after weighing all opinions, decided in favor of the proposal. In 1791 the bill establishing the first United States Bank for a period of twenty years became a law.
The Protective Tariff. – A third part of Hamilton’s program was the protection of American industries. The first revenue act of 1789, though designed primarily to bring money into the empty treasury, declared in favor of the principle. The following year Washington referred to the subject in his address to Congress. Thereupon Hamilton was instructed to prepare recommendations for legislative action. The result, after a delay of more than a year, was his Report on Manufactures, another state paper worthy, in closeness of reasoning and keenness of understanding, of a place beside his report on public credit. Hamilton based his argument on the broadest national grounds: the protective tariff would, by encouraging the building of factories, create a home market for the produce of farms and plantations; by making the United States independent of other countries in times of peace, it would double its security in time of war; by making use of the labor of women and children, it would turn to the production of goods persons otherwise idle or only partly employed; by increasing the trade between the North and South it would strengthen the links of union and add to political ties those of commerce and intercourse. The revenue measure of 1792 bore the impress of these arguments.
Dissensions over Hamilton’s Measures. – Hamilton’s plans, touching deeply as they did the resources of individuals and the interests of the states, awakened alarm and opposition. Funding at face value, said his critics, was a government favor to speculators; the assumption of state debts was a deep design to undermine the state governments; Congress had no constitutional power to create a bank; the law creating the bank merely allowed a private corporation to make paper money and lend it at a high rate of interest; and the tariff was a tax on land and labor for the benefit of manufacturers.
Hamilton’s reply to this bill of indictment was simple and straightforward. Some rascally speculators had profited from the funding of the debt at face value, but that was only an incident in the restoration of public credit. In view of the jealousies of the states it was a good thing to reduce their powers and pretensions. The Constitution was not to be interpreted narrowly but in the full light of national needs. The bank would enlarge the amount of capital so sorely needed to start up American industries, giving markets to farmers and planters. The tariff by creating a home market and increasing opportunities for employment would benefit both land and labor. Out of such wise policies firmly pursued by the government, he concluded, were bound to come strength and prosperity for the new government at home, credit and power abroad. This view Washington fully indorsed, adding the weight of his great name to the inherent merits of the measures adopted under his administration.
The Sharpness of the Partisan Conflict. – As a result of the clash of opinion, the people of the country gradually divided into two parties: Federalists and Anti-Federalists, the former led by Hamilton, the latter by Jefferson. The strength of the Federalists lay in the cities – Boston, Providence, Hartford, New York, Philadelphia, Charleston – among the manufacturing, financial, and commercial groups of the population who were eager to extend their business operations. The strength of the Anti-Federalists lay mainly among the debt-burdened farmers who feared the growth of what they called “a money power” and planters in all sections who feared the dominance of commercial and manufacturing interests. The farming and planting South, outside of the few towns, finally presented an almost solid front against assumption, the bank, and the tariff. The conflict between the parties grew steadily in bitterness, despite the conciliatory and engaging manner in which Hamilton presented his cause in his state papers and despite the constant efforts of Washington to soften the asperity of the contestants.
The Leadership and Doctrines of Jefferson. – The party dispute had not gone far before the opponents of the administration began to look to Jefferson as their leader. Some of Hamilton’s measures he had approved, declaring afterward that he did not at the time understand their significance. Others, particularly the bank, he fiercely assailed. More than once, he and Hamilton, shaking violently with anger, attacked each other at cabinet meetings, and nothing short of the grave and dignified pleas of Washington prevented an early and open break between them. In 1794 it finally came. Jefferson resigned as Secretary of State and retired to his home in Virginia to assume, through correspondence and negotiation, the leadership of the steadily growing party of opposition.
Shy and modest in manner, halting in speech, disliking the turmoil of public debate, and deeply interested in science and philosophy, Jefferson was not very well fitted for the strenuous life of political contest. Nevertheless, he was an ambitious and shrewd negotiator. He was also by honest opinion and matured conviction the exact opposite of Hamilton. The latter believed in a strong, active, “high-toned” government, vigorously compelling in all its branches. Jefferson looked upon such government as dangerous to the liberties of citizens and openly avowed his faith in the desirability of occasional popular uprisings. Hamilton distrusted the people. “Your people is a great beast,” he is reported to have said. Jefferson professed his faith in the people with an abandon that was considered reckless in his time.
On economic matters, the opinions of the two leaders were also hopelessly at variance. Hamilton, while cherishing agriculture, desired to see America a great commercial and industrial nation. Jefferson was equally set against this course for his country. He feared the accumulation of riches and the growth of a large urban working class. The mobs of great cities, he said, are sores on the body politic; artisans are usually the dangerous element that make revolutions; workshops should be kept in Europe and with them the artisans with their insidious morals and manners. The only substantial foundation for a republic, Jefferson believed to be agriculture. The spirit of independence could be kept alive only by free farmers, owning the land they tilled and looking to the sun in heaven and the labor of their hands for their sustenance. Trusting as he did in the innate goodness of human nature when nourished on a free soil, Jefferson advocated those measures calculated to favor agriculture and to enlarge the rights of persons rather than the powers of government. Thus he became the champion of the individual against the interference of the government, and an ardent advocate of freedom of the press, freedom of speech, and freedom of scientific inquiry. It was, accordingly, no mere factious spirit that drove him into opposition to Hamilton.
The Whisky Rebellion. – The political agitation of the Anti-Federalists was accompanied by an armed revolt against the government in 1794. The occasion for this uprising was another of Hamilton’s measures, a law laying an excise tax on distilled spirits, for the purpose of increasing the revenue needed to pay the interest on the funded debt. It so happened that a very considerable part of the whisky manufactured in the country was made by the farmers, especially on the frontier, in their own stills. The new revenue law meant that federal officers would now come into the homes of the people, measure their liquor, and take the tax out of their pockets. All the bitterness which farmers felt against the fiscal measures of the government was redoubled. In the western districts of Pennsylvania, Virginia, and North Carolina, they refused to pay the tax. In Pennsylvania, some of them sacked and burned the houses of the tax collectors, as the Revolutionists thirty years before had mobbed the agents of King George sent over to sell stamps. They were in a fair way to nullify the law in whole districts when Washington called out the troops to suppress “the Whisky Rebellion.” Then the movement collapsed; but it left behind a deep-seated resentment which flared up in the election of several obdurate Anti-Federalist Congressmen from the disaffected regions.
The French Revolution. – In this exciting period, when all America was distracted by partisan disputes, a storm broke in Europe – the epoch-making French Revolution – which not only shook the thrones of the Old World but stirred to its depths the young republic of the New World. The first scene in this dramatic affair occurred in the spring of 1789, a few days after Washington was inaugurated. The king of France, Louis XVI, driven into bankruptcy by extravagance and costly wars, was forced to resort to his people for financial help. Accordingly he called, for the first time in more than one hundred fifty years, a meeting of the national parliament, the “Estates General,” composed of representatives of the “three estates” – the clergy, nobility, and commoners. Acting under powerful leaders, the commoners, or “third estate,” swept aside the clergy and nobility and resolved themselves into a national assembly. This stirred the country to its depths.
Great events followed in swift succession. On July 14, 1789, the Bastille, an old royal prison, symbol of the king’s absolutism, was stormed by a Paris crowd and destroyed. On the night of August 4, the feudal privileges of the nobility were abolished by the national assembly amid great excitement. A few days later came the famous Declaration of the Rights of Man, proclaiming the sovereignty of the people and the privileges of citizens. In the autumn of 1791, Louis XVI was forced to accept a new constitution for France vesting the legislative power in a popular assembly. Little disorder accompanied these startling changes. To all appearances a peaceful revolution had stripped the French king of his royal prerogatives and based the government of his country on the consent of the governed.
American Influence in France. – In undertaking their great political revolt the French had been encouraged by the outcome of the American Revolution. Officers and soldiers, who had served in the American war, reported to their French countrymen marvelous tales. At the frugal table of General Washington, in council with the unpretentious Franklin, or at conferences over the strategy of war, French noblemen of ancient lineage learned to respect both the talents and the simple character of the leaders in the great republican commonwealth beyond the seas. Travelers, who had gone to see the experiment in republicanism with their own eyes, carried home to the king and ruling class stories of an astounding system of popular government.
On the other hand the dalliance with American democracy was regarded by French conservatives as playing with fire. “When we think of the false ideas of government and philanthropy,” wrote one of Lafayette’s aides, “which these youths acquired in America and propagated in France with so much enthusiasm and such deplorable success – for this mania of imitation powerfully aided the Revolution, though it was not the sole cause of it – we are bound to confess that it would have been better, both for themselves and for us, if these young philosophers in red-heeled shoes had stayed at home in attendance on the court.”
Early American Opinion of the French Revolution. – So close were the ties between the two nations that it is not surprising to find every step in the first stages of the French Revolution greeted with applause in the United States. “Liberty will have another feather in her cap,” exultantly wrote a Boston editor. “In no part of the globe,” soberly wrote John Marshall, “was this revolution hailed with more joy than in America.... But one sentiment existed.” The main key to the Bastille, sent to Washington as a memento, was accepted as “a token of the victory gained by liberty.” Thomas Paine saw in the great event “the first ripe fruits of American principles transplanted into Europe.” Federalists and Anti-Federalists regarded the new constitution of France as another vindication of American ideals.
The Reign of Terror. – While profuse congratulations were being exchanged, rumors began to come that all was not well in France. Many noblemen, enraged at the loss of their special privileges, fled into Germany and plotted an invasion of France to overthrow the new system of government. Louis XVI entered into negotiations with his brother monarchs on the continent to secure their help in the same enterprise, and he finally betrayed to the French people his true sentiments by attempting to escape from his kingdom, only to be captured and taken back to Paris in disgrace.
A new phase of the revolution now opened. The working people, excluded from all share in the government by the first French constitution, became restless, especially in Paris. Assembling on the Champs de Mars, a great open field, they signed a petition calling for another constitution giving them the suffrage. When told to disperse, they refused and were fired upon by the national guard. This “massacre,” as it was called, enraged the populace. A radical party, known as “Jacobins,” then sprang up, taking its name from a Jacobin monastery in which it held its sessions. In a little while it became the master of the popular convention convoked in September, 1792. The monarchy was immediately abolished and a republic established. On January 21, 1793, Louis was sent to the scaffold. To the war on Austria, already raging, was added a war on England. Then came the Reign of Terror, during which radicals in possession of the convention executed in large numbers counter-revolutionists and those suspected of sympathy with the monarchy. They shot down peasants who rose in insurrection against their rule and established a relentless dictatorship. Civil war followed. Terrible atrocities were committed on both sides in the name of liberty, and in the name of monarchy. To Americans of conservative temper it now seemed that the Revolution, so auspiciously begun, had degenerated into anarchy and mere bloodthirsty strife.
Burke Summons the World to War on France. – In England, Edmund Burke led the fight against the new French principles which he feared might spread to all Europe. In his Reflections on the French Revolution, written in 1790, he attacked with terrible wrath the whole program of popular government; he called for war, relentless war, upon the French as monsters and outlaws; he demanded that they be reduced to order by the restoration of the king to full power under the protection of the arms of European nations.
Paine’s Defense of the French Revolution. – To counteract the campaign of hate against the French, Thomas Paine replied to Burke in another of his famous tracts, The Rights of Man, which was given to the American public in an edition containing a letter of approval from Jefferson. Burke, said Paine, had been mourning about the glories of the French monarchy and aristocracy but had forgotten the starving peasants and the oppressed people; had wept over the plumage and neglected the dying bird. Burke had denied the right of the French people to choose their own governors, blandly forgetting that the English government in which he saw final perfection itself rested on two revolutions. He had boasted that the king of England held his crown in contempt of the democratic societies. Paine answered: “If I ask a man in America if he wants a king, he retorts and asks me if I take him for an idiot.” To the charge that the doctrines of the rights of man were “new fangled,” Paine replied that the question was not whether they were new or old but whether they were right or wrong. As to the French disorders and difficulties, he bade the world wait to see what would be brought forth in due time.
The Effect of the French Revolution on American Politics. – The course of the French Revolution and the controversies accompanying it, exercised a profound influence on the formation of the first political parties in America. The followers of Hamilton, now proud of the name “Federalists,” drew back in fright as they heard of the cruel deeds committed during the Reign of Terror. They turned savagely upon the revolutionists and their friends in America, denouncing as “Jacobin” everybody who did not condemn loudly enough the proceedings of the French Republic. A Massachusetts preacher roundly assailed “the atheistical, anarchical, and in other respects immoral principles of the French Republicans”; he then proceeded with equal passion to attack Jefferson and the Anti-Federalists, whom he charged with spreading false French propaganda and betraying America. “The editors, patrons, and abettors of these vehicles of slander,” he exclaimed, “ought to be considered and treated as enemies to their country.... Of all traitors they are the most aggravatedly criminal; of all villains, they are the most infamous and detestable.”
The Anti-Federalists, as a matter of fact, were generally favorable to the Revolution although they deplored many of the events associated with it. Paine’s pamphlet, indorsed by Jefferson, was widely read. Democratic societies, after the fashion of French political clubs, arose in the cities; the coalition of European monarchs against France was denounced as a coalition against the very principles of republicanism; and the execution of Louis XVI was openly celebrated at a banquet in Philadelphia. Harmless titles, such as “Sir,” “the Honorable,” and “His Excellency,” were decried as aristocratic and some of the more excited insisted on adopting the French title, “Citizen,” speaking, for example, of “Citizen Judge” and “Citizen Toastmaster.” Pamphlets in defense of the French streamed from the press, while subsidized newspapers kept the propaganda in full swing.
The European War Disturbs American Commerce. – This battle of wits, or rather contest in calumny, might have gone on indefinitely in America without producing any serious results, had it not been for the war between England and France, then raging. The English, having command of the seas, claimed the right to seize American produce bound for French ports and to confiscate American ships engaged in carrying French goods. Adding fuel to a fire already hot enough, they began to search American ships and to carry off British-born sailors found on board American vessels.
The French Appeal for Help. – At the same time the French Republic turned to the United States for aid in its war on England and sent over as its diplomatic representative “Citizen” Genêt, an ardent supporter of the new order. On his arrival at Charleston, he was greeted with fervor by the Anti-Federalists. As he made his way North, he was wined and dined and given popular ovations that turned his head. He thought the whole country was ready to join the French Republic in its contest with England. Genêt therefore attempted to use the American ports as the base of operations for French privateers preying on British merchant ships; and he insisted that the United States was in honor bound to help France under the treaty of 1778.
The Proclamation of Neutrality and the Jay Treaty. – Unmoved by the rising tide of popular sympathy for France, Washington took a firm course. He received Genêt coldly. The demand that the United States aid France under the old treaty of alliance he answered by proclaiming the neutrality of America and warning American citizens against hostile acts toward either France or England. When Genêt continued to hold meetings, issue manifestoes, and stir up the people against England, Washington asked the French government to recall him. This act he followed up by sending the Chief Justice, John Jay, on a pacific mission to England.
The result was the celebrated Jay treaty of 1794. By its terms Great Britain agreed to withdraw her troops from the western forts where they had been since the war for independence and to grant certain slight trade concessions. The chief sources of bitterness – the failure of the British to return slaves carried off during the Revolution, the seizure of American ships, and the impressment of sailors – were not touched, much to the distress of everybody in America, including loyal Federalists. Nevertheless, Washington, dreading an armed conflict with England, urged the Senate to ratify the treaty. The weight of his influence carried the day.
At this, the hostility of the Anti-Federalists knew no bounds. Jefferson declared the Jay treaty “an infamous act which is really nothing more than an alliance between England and the Anglo-men of this country, against the legislature and the people of the United States.” Hamilton, defending it with his usual courage, was stoned by a mob in New York and driven from the platform with blood streaming from his face. Jay was burned in effigy. Even Washington was not spared. The House of Representatives was openly hostile. To display its feelings, it called upon the President for the papers relative to the treaty negotiations, only to be more highly incensed by his flat refusal to present them, on the ground that the House did not share in the treaty-making power.
Washington Retires from Politics. – Such angry contests confirmed the President in his slowly maturing determination to retire at the end of his second term in office. He did not believe that a third term was unconstitutional or improper; but, worn out by his long and arduous labors in war and in peace and wounded by harsh attacks from former friends, he longed for the quiet of his beautiful estate at Mount Vernon.
In September, 1796, on the eve of the presidential election, Washington issued his Farewell Address, another state paper to be treasured and read by generations of Americans to come. In this address he directed the attention of the people to three subjects of lasting interest. He warned them against sectional jealousies. He remonstrated against the spirit of partisanship, saying that in government “of the popular character, in government purely elective, it is a spirit not to be encouraged.” He likewise cautioned the people against “the insidious wiles of foreign influence,” saying: “Europe has a set of primary interests which to us have none or a very remote relation. Hence she must be engaged in frequent controversies, the causes of which are essentially foreign to our concerns. Hence, therefore, it would be unwise in us to implicate ourselves, by artificial ties, in the ordinary vicissitudes of her politics or the ordinary combinations and collisions of her friendships or enmities.... Why forego the advantages of so peculiar a situation?... It is our true policy to steer clear of permanent alliances with any portion of the foreign world.... Taking care always to keep ourselves, by suitable establishments, on a respectable defensive posture, we may safely trust to temporary alliances for extraordinary emergencies."
The Campaign of 1796 – Adams Elected. – On hearing of the retirement of Washington, the Anti-Federalists cast off all restraints. In honor of France and in opposition to what they were pleased to call the monarchical tendencies of the Federalists, they boldly assumed the name “Republican"; the term “Democrat,” then applied only to obscure and despised radicals, had not come into general use. They selected Jefferson as their candidate for President against John Adams, the Federalist nominee, and carried on such a spirited campaign that they came within four votes of electing him.
The successful candidate, Adams, was not fitted by training or opinion for conciliating a determined opposition. He was a reserved and studious man. He was neither a good speaker nor a skillful negotiator. In one of his books he had declared himself in favor of “government by an aristocracy of talents and wealth” – an offense which the Republicans never forgave. While John Marshall found him “a sensible, plain, candid, good-tempered man,” Jefferson could see in him nothing but a “monocrat” and “Anglo-man.” Had it not been for the conduct of the French government, Adams would hardly have enjoyed a moment’s genuine popularity during his administration.
The Quarrel with France. – The French Directory, the executive department established under the constitution of 1795, managed, however, to stir the anger of Republicans and Federalists alike. It regarded the Jay treaty as a rebuke to France and a flagrant violation of obligations solemnly registered in the treaty of 1778. Accordingly it refused to receive the American minister, treated him in a humiliating way, and finally told him to leave the country. Overlooking this affront in his anxiety to maintain peace, Adams dispatched to France a commission of eminent men with instructions to reach an understanding with the French Republic. On their arrival, they were chagrined to find, instead of a decent reception, an indirect demand for an apology respecting the past conduct of the American government, a payment in cash, and an annual tribute as the price of continued friendship. When the news of this affair reached President Adams, he promptly laid it before Congress, referring to the Frenchmen who had made the demands as “Mr. X, Mr. Y, and Mr. Z."
This insult, coupled with the fact that French privateers, like the British, were preying upon American commerce, enraged even the Republicans who had been loudest in the profession of their French sympathies. They forgot their wrath over the Jay treaty and joined with the Federalists in shouting: “Millions for defense, not a cent for tribute!” Preparations for war were made on every hand. Washington was once more called from Mount Vernon to take his old position at the head of the army. Indeed, fighting actually began upon the high seas and went on without a formal declaration of war until the year 1800. By that time the Directory had been overthrown. A treaty was readily made with Napoleon, the First Consul, who was beginning his remarkable career as chief of the French Republic, soon to be turned into an empire.
Alien and Sedition Laws. – Flushed with success, the Federalists determined, if possible, to put an end to radical French influence in America and to silence Republican opposition. They therefore passed two drastic laws in the summer of 1798: the Alien and Sedition Acts.
The first of these measures empowered the President to expel from the country or to imprison any alien whom he regarded as “dangerous” or “had reasonable grounds to suspect” of “any treasonable or secret machinations against the government."
The second of the measures, the Sedition Act, penalized not only those who attempted to stir up unlawful combinations against the government but also every one who wrote, uttered, or published “any false, scandalous, and malicious writing ... against the government of the United States or either House of Congress, or the President of the United States, with intent to defame said government ... or to bring them or either of them into contempt or disrepute.” This measure was hurried through Congress in spite of the opposition and the clear provision in the Constitution that Congress shall make no law abridging the freedom of speech or of the press. Even many Federalists feared the consequences of the action. Hamilton was alarmed when he read the bill, exclaiming: “Let us not establish a tyranny. Energy is a very different thing from violence.” John Marshall told his friends in Virginia that, had he been in Congress, he would have opposed the two bills because he thought them “useless” and “calculated to create unnecessary discontents and jealousies."
The Alien law was not enforced; but it gave great offense to the Irish and French whose activities against the American government’s policy respecting Great Britain put them in danger of prison. The Sedition law, on the other hand, was vigorously applied. Several editors of Republican newspapers soon found themselves in jail or broken by ruinous fines for their caustic criticisms of the Federalist President and his policies. Bystanders at political meetings, who uttered sentiments which, though ungenerous and severe, seem harmless enough now, were hurried before Federalist judges and promptly fined and imprisoned. Although the prosecutions were not numerous, they aroused a keen resentment. The Republicans were convinced that their political opponents, having saddled upon the country Hamilton’s fiscal system and the British treaty, were bent on silencing all censure. The measures therefore had exactly the opposite effect from that which their authors intended. Instead of helping the Federalist party, they made criticism of it more bitter than ever.
The Kentucky and Virginia Resolutions. – Jefferson was quick to take advantage of the discontent. He drafted a set of resolutions declaring the Sedition law null and void, as violating the federal Constitution. His resolutions were passed by the Kentucky legislature late in 1798, signed by the governor, and transmitted to the other states for their consideration. Though receiving unfavorable replies from a number of Northern states, Kentucky the following year reaffirmed its position and declared that the nullification of all unconstitutional acts of Congress was the rightful remedy to be used by the states in the redress of grievances. It thus defied the federal government and announced a doctrine hostile to nationality and fraught with terrible meaning for the future. In the neighboring state of Virginia, Madison led a movement against the Alien and Sedition laws. He induced the legislature to pass resolutions condemning the acts as unconstitutional and calling upon the other states to take proper means to preserve their rights and the rights of the people.
The Republican Triumph in 1800. – Thus the way was prepared for the election of 1800. The Republicans left no stone unturned in their efforts to place on the Federalist candidate, President Adams, all the odium of the Alien and Sedition laws, in addition to responsibility for approving Hamilton’s measures and policies. The Federalists, divided in councils and cold in their affection for Adams, made a poor campaign. They tried to discredit their opponents with epithets of “Jacobins” and “Anarchists” – terms which had been weakened by excessive use. When the vote was counted, it was found that Adams had been defeated; while the Republicans had carried the entire South and New York also and secured eight of the fifteen electoral votes cast by Pennsylvania. “Our beloved Adams will now close his bright career,” lamented a Federalist newspaper. “Sons of faction, demagogues and high priests of anarchy, now you have cause to triumph!"
Jefferson’s election, however, was still uncertain. By a curious provision in the Constitution, presidential electors were required to vote for two persons without indicating which office each was to fill, the one receiving the highest number of votes to be President and the candidate standing next to be Vice President. It so happened that Aaron Burr, the Republican candidate for Vice President, had received the same number of votes as Jefferson; as neither had a majority the election was thrown into the House of Representatives, where the Federalists held the balance of power. Although it was well known that Burr was not even a candidate for President, his friends and many Federalists began intriguing for his election to that high office. Had it not been for the vigorous action of Hamilton the prize might have been snatched out of Jefferson’s hands. Not until the thirty-sixth ballot on February 17, 1801, was the great issue decided in his favor.
How an object goes into orbit: Imagine I'm on the moon, and I have a bunch of tennis balls. If I throw a ball parallel to the ground, but not very quickly, it'll go for a while, but eventually gravity will pull it to the ground and it'll stop. Now, as I throw the balls with greater and greater velocity, they'll get farther and farther, but they'll all fall eventually. But then I throw one really, really fast, and it never hits the ground! How can this be?

Let's examine the curved path of the ball as it travels some fixed horizontal distance; for simplicity's sake, 1 meter. Within that distance, it will fall away from a perfectly horizontal path (the path it would take if there were no gravity) by a certain amount; let's call it d_vert. The faster the ball is thrown, the less time it takes to cross that meter, so d_vert shrinks as horizontal velocity increases. Now, let's look at the planet we're on (in this case, the moon); more specifically, 1 meter of it. Since the planet isn't flat (last time I checked), its surface will curve away from a perfectly flat line by a certain amount within this 1-meter length.

Now here's the crazy part: if d_vert (the amount the ball falls away from horizontal as it travels 1 meter) ever equals the amount that the planet curves away from the horizontal, the ball will never hit the ground! Because the surface is curving away at the same rate that the ball is falling toward it, the ball goes into orbit.
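The matching condition pins down the orbital speed. Over 1 meter of horizontal travel the ball drops g/(2v²), while the surface curves away by about 1/(2R); setting the two equal gives v = √(gR). A minimal sketch, assuming commonly quoted values for the moon's surface gravity and radius:

```python
import math

G_MOON = 1.62      # lunar surface gravity in m/s^2 (assumed textbook value)
R_MOON = 1.737e6   # lunar radius in meters (assumed textbook value)

def drop_over_1m(v):
    """Vertical fall while the ball covers 1 m horizontally at speed v."""
    t = 1.0 / v                  # time to cross the 1 m
    return 0.5 * G_MOON * t**2   # d_vert = g / (2 v^2)

def surface_drop_over_1m():
    """How far the surface curves below a flat line over 1 m: about 1/(2R)."""
    return 1.0 / (2 * R_MOON)

# Orbit happens when g/(2 v^2) = 1/(2R), i.e. v = sqrt(g R).
v_orbit = math.sqrt(G_MOON * R_MOON)   # roughly 1.7 km/s at the lunar surface
```

At v_orbit the ball's fall per meter exactly matches the surface's curvature, so the ball keeps "falling around" the moon without ever reaching the ground.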
Let's see what the people in this class think with a show of hands.
This is a visual aid designed to be projected onto a whiteboard for whole class exposition. It shows the difference between some basic graphs and charts which students may choose to use when presenting data collection projects. Students can come up with their own ideas for questions they would like to ask the rest of the class. They could design a questionnaire to collect data but should be careful that theirs is not like this bad survey.
Load Factors in Steep Turns
At a constant altitude, during a coordinated turn in any aircraft, the load factor is the result of two forces: centrifugal force and weight. [Figure 5-52] For any given bank angle, the ROT varies with the airspeed—the higher the speed, the slower the ROT. This compensates for added centrifugal force, allowing the load factor to remain the same.
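The speed/ROT trade-off can be checked numerically with a widely used approximation, ROT ≈ 1,091 × tan(bank angle) / airspeed in knots, which yields degrees per second. This formula does not appear in the paragraph above and is an assumption added for illustration:

```python
import math

def rate_of_turn(bank_deg, airspeed_knots):
    """Approximate rate of turn (degrees/second) in a coordinated, level turn."""
    return 1091 * math.tan(math.radians(bank_deg)) / airspeed_knots

# Same bank angle, double the airspeed: the rate of turn halves.
slow = rate_of_turn(30, 120)   # about 5.2 deg/s
fast = rate_of_turn(30, 240)   # about 2.6 deg/s
```

This matches the text's point: at a fixed bank angle, the higher the speed, the slower the rate of turn.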
Figure 5-53 reveals an important fact about turns—the load factor increases at a terrific rate after a bank has reached 45° or 50°. The load factor for any aircraft in a coordinated level turn at 60° bank is 2 Gs. The load factor in an 80° bank is 5.76 Gs. The wing must produce lift equal to these load factors if altitude is to be maintained.
It should be noted how rapidly the line denoting load factor rises as it approaches the 90° bank line, which it never quite reaches because a 90° banked, constant altitude turn is not mathematically possible. An aircraft may be banked to 90° in a coordinated turn if not trying to hold altitude. An aircraft that can be held in a 90° banked slipping turn is capable of straight knife-edged flight. At slightly more than 80°, the load factor exceeds the limit of 6 Gs, the limit load factor of an acrobatic aircraft.
For a coordinated, constant altitude turn, the approximate maximum bank for the average general aviation aircraft is 60°. This bank and its resultant necessary power setting reach the limit of this type of aircraft. An additional 10° bank increases the load factor by approximately 1 G, bringing it close to the yield point established for these aircraft. [Figure 5-54]
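The figures quoted above (2 Gs at 60° of bank, 5.76 Gs at 80°) all follow from the level-turn relation n = 1/cos(bank angle). A minimal sketch:

```python
import math

def load_factor(bank_deg):
    """Load factor n = 1 / cos(bank) in a coordinated, constant-altitude turn."""
    return 1.0 / math.cos(math.radians(bank_deg))

# The curve steepens dramatically past 60 degrees and blows up toward 90.
for bank in (30, 45, 60, 70, 80):
    print(bank, "deg ->", round(load_factor(bank), 2), "G")
```

Note that load_factor(70) − load_factor(60) is just under 1 G, which is the "additional 10° bank" point made above.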
Load Factors and Stalling Speeds
Any aircraft, within the limits of its structure, may be stalled at any airspeed. When a sufficiently high AOA is imposed, the smooth flow of air over an airfoil breaks up and separates, producing an abrupt change of flight characteristics and a sudden loss of lift, which results in a stall.
A study of this effect has revealed that an aircraft’s stalling speed increases in proportion to the square root of the load factor. This means that an aircraft with a normal unaccelerated stalling speed of 50 knots can be stalled at 100 knots by inducing a load factor of 4 Gs. If it were possible for this aircraft to withstand a load factor of nine, it could be stalled at a speed of 150 knots. A pilot should be aware of the following:
- The danger of inadvertently stalling the aircraft by increasing the load factor, as in a steep turn or spiral;
- When intentionally stalling an aircraft above its design maneuvering speed, a tremendous load factor is imposed.
Figures 5-53 and 5-54 show that banking an aircraft greater than 72° in a steep turn produces a load factor of 3, and the stalling speed is increased significantly. If this turn is made in an aircraft with a normal unaccelerated stalling speed of 45 knots, the airspeed must be kept greater than approximately 78 knots (45 × √3) to prevent inducing a stall. A similar effect is experienced in a quick pull up or any maneuver producing load factors above 1 G. This sudden, unexpected loss of control, particularly in a steep turn or abrupt application of the back elevator control near the ground, has caused many accidents.
Since the load factor increases with the square of the speed ratio (doubling the speed at which an aircraft is stalled quadruples the load), tremendous loads may be imposed on structures by stalling an aircraft at relatively high airspeeds.
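The square-root relationship stated above can be written directly as Vs' = Vs × √n, and the 50-knot examples drop out immediately. A minimal sketch:

```python
import math

def accelerated_stall_speed(vs_knots, load_factor):
    """Stall speed under load factor n: Vs' = Vs * sqrt(n)."""
    return vs_knots * math.sqrt(load_factor)
```

So a 4 G load raises a 50-knot stall speed to 100 knots, and 9 Gs would raise it to 150 knots, matching the figures in the text.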
The following information primarily applies to fixed-wing airplanes. The maximum speed at which an airplane may be stalled safely is now determined for all new designs. This speed is called the “design maneuvering speed” (VA), which is the speed below which you can move a single flight control, one time, to its full deflection, for one axis of airplane rotation only (pitch, roll or yaw), in smooth air, without risk of damage to the airplane. VA must be entered in the FAA-approved Airplane Flight Manual/ Pilot’s Operating Handbook (AFM/POH) of all recently designed airplanes. For older general aviation airplanes, this speed is approximately 1.7 times the normal stalling speed. Thus, an older airplane that normally stalls at 60 knots must never be stalled at above 102 knots (60 knots × 1.7 = 102 knots). An airplane with a normal stalling speed of 60 knots stalled at 102 knots undergoes a load factor equal to the square of the increase in speed, or 2.89 Gs (1.7 × 1.7 = 2.89 Gs). (The above figures are approximations to be considered as a guide, and are not the exact answers to any set of problems. The design maneuvering speed should be determined from the particular airplane’s operating limitations provided by the manufacturer.) Operating at or below design maneuvering speed does not provide structural protection against multiple full control inputs in one axis or full control inputs in more than one axis at the same time.
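The 1.7 rule of thumb for older airplanes can be sketched as a small helper. The general form VA = Vs × √(limit load factor) is an assumption added for illustration (note 1.7 ≈ √2.89), consistent with the 60-knot example in the paragraph above:

```python
import math

def maneuvering_speed(vs_knots, limit_load_factor=2.89):
    """VA = Vs * sqrt(n_limit); with n_limit = 2.89 this reduces to the 1.7 x Vs rule."""
    return vs_knots * math.sqrt(limit_load_factor)

va = maneuvering_speed(60)    # about 102 knots for a 60-knot stalling speed
g_at_va = (va / 60) ** 2      # load factor when stalled at VA: about 2.89 Gs
```

As the text warns, the real VA must always come from the manufacturer's operating limitations, not from this approximation.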
Since the leverage in the control system varies with different aircraft (some types employ “balanced” control surfaces while others do not), the pressure exerted by the pilot on the controls cannot be accepted as an index of the load factors produced in different aircraft. In most cases, load factors can be judged by the experienced pilot from the feel of seat pressure. Load factors can also be measured by an instrument called an “accelerometer,” but this instrument is not common in general aviation training aircraft. Developing the ability to estimate load factors from the feel of their effect on the body is important, and a knowledge of these principles is essential to that skill.
A thorough knowledge of load factors induced by varying degrees of bank and the VA aids in the prevention of two of the most serious types of accidents:
- Stalls from steep turns or excessive maneuvering near the ground
- Structural failures during acrobatics or other violent maneuvers resulting from loss of control |
English 112A, Spring 2002 Study Questions
Charlotte's Web / The Lion, the Witch, and the Wardrobe / Holes / Walk Two Moons
Charlotte's Web
1. What is a runt? Explore the extended meanings of this term. How does this portray the differences between parents and children? Are their values different? Explain.
2. How are Fern's fears very like Wilbur's? Are they also similar to Hansel's and Gretel's? Do Fern's fears extend beyond the immediate problem? Explore the possibilities.
3. What is the meaning of the chapter "Escape"? What are the dynamics involved in this chapter? What, for example, does Wilbur decide to do? Why does he give up his freedom? This is an example of a dynamic tension, conflict, or polarity within the work.
4. Describe the character of Templeton. How does what Templeton does differ from Charlotte's work? From the other animals'? Explore the nature of "ratness."
5. Children's literature can best be understood by examining important episodes. One such episode is in Chapter 5 (39-42) in which the characters discuss the importance of food. Discuss how food is a metaphor in the book.
6. Discuss the importance of Wilbur's inability to spin a web. Connect this to your previous discussions of Templeton and being a runt. In this context, what does "versatility" mean (116-17)? Its value?
7. How is Charlotte's tale about her cousin connected to the story's theme? Explain the value of her identity as shown in the following paragraph. "I am not entirely happy about my diet of flies and bugs, but it's the way I'm made. . . . Way back for thousands and thousands of years we spiders have been laying for flies and bugs."
8. Look closely at the death of Charlotte. What is the dynamic connection between Charlotte's magnum opus, her death, and everyone's role? "Is it a plaything?" "Plaything? I should say not. It is my egg sac, my magnum opus."
9. Explore the significance of the last paragraph of the book. "Wilbur never forgot Charlotte. . . . She was in a class by herself. It is not often that someone comes along who is a true friend and a good writer. Charlotte was both."
10. What is the meaning of the title of the book? Think of the implication when we say we are reading Charlotte's Web. Are we getting trapped the way Charlotte's victims are trapped? What about the power of words?
The Lion, the Witch, and the Wardrobe
1. What is the relevance of the setting to the story? "This story is about something that happened to [four children] when they were sent away from London during the war because of the air-raids" (1). What does this add to the story?
2. Picture the architecture of this very old house, in which "no one will mind what they do." What is implied in this statement? Connect the architecture of a house with all its sections, attics, basements etc. to conceptions of ideas of the self.
3. What are the laws of time and space that govern Narnia? Are they different from the ordinary world? Discuss how the transition is made between the natural world and the imaginative world (hint: in the closet). Compare and contrast the differences in chronotopes (we will explain this part of the questions in class).
4. This story involves the act of reading--interpreting signs--on several levels. What are some of the conventional signs which must be read or interpreted correctly? Why does Lucy seem better at this than Edmund?
5. Madness--lack of control--as a theme is touched upon directly or indirectly several times in this book, most notably in the figures of Lucy, Edmund, and the White Witch. How is this insanity akin to excessive imagination? What might your definition of madness be?
6. Look carefully throughout the book for images of darkness, "vanishing," light, warmth, cold, and touching. Is there a pattern? What conclusions can you draw?
7. C. S. Lewis incorporates mythologies from very different sources, mixing them together. Discuss a few of them, how Lewis's use differs from the original and how they make the story more understandable.
8. How is magic defined in this story? What are the characteristics of magic? synonyms?
9. Note the change in Edmund during the course of the story. How is his educational background different from his siblings'? Examine Edmund's "addiction" to Turkish Delight.
10. Analyze the ways in which LWW can be seen as a text for teens or adults. How does this story compare to Joseph Campbell's hero cycle?
Holes
1. The book starts by explaining what is not at Camp Green Lake. There used to be trees, a lake, a town, shade, and people; now there are only lizards, rattlesnakes, and scorpions. How is this puzzle attractive reading? How does this beginning illustrate the basic pattern of the narrative? A braided narrative? Juxtaposition as narrative form?
2. "My name is easy to remember," said Mr. Pendanski as he shook hands with Stanley just outside the tent. "Three easy words: pen, dance, key." How is this sentence a metacritical comment? In other words, how does this sentence show us how to read this book? How does Stanley's own name illustrate the narrative plot?
3. This is a story about juvenile delinquents, about learning, about reading, about crooks, thieves, and pig-stealings. Or is it? What satirical comments are being made about our views of these folks? Give two examples.
4. "Do you hear the empty spaces?" she [the Warden] asked (67). What are some other "empty spaces" in the book? Discuss the connection between empty spaces and the statement, "Zero was nobody" (81). How are these connected to black holes and time warps?
5. What are all the connections between Clyde Livingston, smelly feet, Zero, and Stanley? Digging holes makes character. Whose? Why?
6. "Doc Hawthorn was almost completely bald, and in the morning his head often smelled like onions" (109). Now look closely at this sentence: "A lot of people don't believe in yellow-spotted lizards either, but if one bites you, it doesn't make a difference whether you believe in it or not" (41). Examine the ways in which the ideas behind these two sentences are connected. This should also lead you to consider how myth and folk tales are woven into the fabric of this story.
7. There are some ethnic elements in this story. What are they? How are these situations connected to a transcendence of these same elements?
8. Friendship. What is it? How do Stanley and Zero develop their friendship? What do they sacrifice?
9. Irony. What is irony? How does irony work in this book? Discuss some examples to illustrate how Holes is an ironic demonstration of the onion-eating, layers-of-onion metaphor/episode in the book.
10. There seems to be nothing that is predictable in this book. Is this book in the absurdist tradition? a black comedy? What is serious becomes funny, and what is funny quickly turns serious. What seems to be a game turns deadly very quickly.
Walk Two Moons --Revised March 3, 2002
1. Look carefully at the following quotation: "Just over a year ago, my father plucked me up like a weed and took me and all our belongings. . . and stopped in front of a house in Euclid, Ohio" (1).
2. "And that voice-it reminds me of dead leaves all blowing around on the ground" (23).
3. Look at the names of characters in the story. What are the definitions of these particular names? How are the individuals different from and/or similar to these definitions?
4. Trace the importance of the trees mentioned in this passage. Note some of the tales: the ones her mother tells her, Phoebe's tale, etc. See how they overlap one another.
5. "On their moccasins, and I thought again about Phoebe's message: Don't judge a man until you've walked two moons in his moccasins" (58).
6. "Ever since my mother left us that April day, I suspected that everyone was going to leave, one by one" (59).
7. "She kept climbing and climbing. It was a thumpingly tall ladder. She couldn't see me, and she never came down. She just kept going" (169).
8. "Lately, I've been wondering if there might be something hidden behind the fireplace, because just as the fireplace was behind the plaster wall and my mother's story was behind Phoebe's, I think there was a third story behind Phoebe's and my mother's, and that was about Gram and Gramps" (274).
9. "One afternoon, after we had been talking about Prometheus stealing the fire from the sun to give to man, and about Pandora opening up the forbidden box with all the evils of the world in it, Gramps said that those myths evolved because people needed a way to explain where fire came from and why there was evil in the world" (276). Explain how this idea works, and show how it works in the book.
10. "I still fish in the air sometimes" (277). Do a close reading of this sentence.
The View from Saturday
1. In class, we have talked about picture books and the importance of visual images. "Read" the cover of The View from Saturday. What does the picture on the front represent? (You may want to think about the relationship between domestic space and consciousness). What do you make of the back cover, particularly the light blue section titled "Meet the Souls"? What sort of expectations does it excite?
2. Consider the name of the group: "The Souls." Why is this name important? What does it tell us about the four individuals who constitute the group? Is there any relation to the name of the school they attend, Epiphany?
3. " ‘In the interest of diversity,’ she said, ‘I chose a brunette, a redhead, a blond, and a kid with hair as black as print on paper’ " (22).
4. What is a hybrid? Why does a hybrid possess power? Choose a hybrid other than Nadia and explain how that character’s or object’s hybridity makes it more than the parts it originates from.
5. " ‘Nothing.’ Nothing is never an answer, but sometimes nothing works. Sometimes nothing else does." (51) What is the idea of nothing? Look at Alice in Wonderland's treatment of this concept.
6. "Mostly, they could read—really read. Sixth grade still meant that kids could begin to get inside the print and to the meaning." (58). Several of the books we have read offer us a metacommentary on reading. Discuss this statement thoroughly.
7. "Sometimes silence is a habit that hurts." (70). How is this possible? Give some examples and examine a pattern as it develops in the novel.
8. Consider Julian’s backpack and the way in which Julian transforms the words Hamilton Knapp marks on it (72). What is the significance of Julian’s changed message? Of Ethan’s observations?
9. " ‘Chops,’ Julian said, ‘is to magic what doing scales is to a chanteuse. Without it you cannot be a magician, with it alone you cannot be an artist." What are "chops"? What does the analogy convey about chops?
10. "She thought that maybe—just maybe—Western Civilization was in decline because people did not take time to take tea at four o’clock." There is much going on in this seemingly simple statement. Explore as many implications as possible. |
Definition - What does Parallel Port mean?
A parallel port is an interface allowing a personal computer (PC) to transmit or receive data down multiple bundled cables to a peripheral device such as a printer. The most common parallel port is a printer port known as the Centronics port. A parallel port has multiple connectors and in theory allows data to be sent simultaneously down several cables at once. Later versions allow bi-directional communications. This technology is still used today for low-data-rate communications such as dot-matrix printing.
The standard for the bi-directional version of a parallel port is the Institute of Electrical and Electronics Engineers (IEEE) 1284. This standard defined bi-directional parallel communication between computers and other peripheral devices allowing data bits to be transmitted and received simultaneously.
This term is also known as a Centronics port or printer port and has now been largely superseded by the USB interface.
Techopedia explains Parallel Port
A parallel port is a type of interface on a personal computer (PC) that transmits or receives data to or from a peripheral device such as a printer. The data is transmitted over a parallel cable extending no more than the standard 6 feet. If the cable is too long, the integrity of the data can be lost. The recommendation from Hewlett-Packard is a maximum of 10 feet.
Originally the parallel port was unidirectional and transmitted eight bits of data at a time down multiple strands of copper cable. It was introduced by Centronics Data Computer Corporation in 1970. The parallel port was designed to be used with printers and could transfer only about 300 kbit/s. The standard for the unidirectional printer port was the standard printer port (SPP), or normal port, developed in 1981. In 1987, IBM's PS/2 line of computers introduced a bidirectional parallel port (BPP), which could simultaneously transmit and receive eight bits of data.
In 1994 two new types of parallel ports were introduced: the enhanced parallel port (EPP) and the extended capabilities port (ECP). The EPP was quite a bit faster than older parallel ports, with transfer speeds of 500 KBps to 2 MBps, and is used for newer models of printers and scanners. The ECP also provides an 8-bit bidirectional port; it is like EPP but uses direct memory access (DMA), and it is utilized for non-printer peripherals such as network adapters or disc drives.
Also in 1994, the Standard Signaling Method for a Bi-directional Parallel Peripheral Interface for Personal Computers (IEEE 1284) standard was introduced to avoid incompatibility issues with the newer, more diverse parallel port hardware. Five modes of operation were specified: ECP mode, EPP mode, byte mode, nibble mode and compatibility mode. Each mode must support data transfer in the forward direction, the backward direction or bidirectionally. To ensure that data integrity is maintained, IEEE 1284 set standards for the connector, interface and cable. A parallel port transfers one bit on each of several data wires at once, which increases the data transfer rate (DTR). Generally there are additional wires carrying control signals that indicate when data can be transmitted or received.
Originally parallel ports were intended for printers. The first parallel interface port for printers was made for the Centronics Model 101 (introduced in 1970), which transmitted data eight bits at a time. This parallel port could only transmit data, not receive it. Later versions of the parallel port were bidirectional and were used for input devices as well as printers. The bidirectional parallel port (BPP) could communicate with several peripheral devices such as scanners, zip drives, hard discs, modems and CD-ROM drives. The BPP is generally used for fast data transmission over small distances. Additional parallel ports are typically labeled LPT1, LPT2, etc.
When the IEEE 1284 standard was introduced in 1994, the length of cables, logic voltages and interfaces was standardized. With the IEEE 1284 standards, five modes of operation were specified to support data transfer in the forward direction, backward direction or bidirectionally. The five modes of operation are extended capability port (ECP mode), enhanced parallel port (EPP) mode, byte mode, nibble mode and the compatibility (Standard Parallel Port or SPP) mode.
The compatibility mode is unidirectional and is used mostly for printers. The nibble mode is bidirectional and allows data to be returned to the computer four bits at a time; it is used for enhanced printer status. The byte mode is bidirectional and transfers data eight bits at a time. The EPP mode has an 8-bit bidirectional interface, which transfers data at 500 KBps to 2 MBps. The ECP mode has an 8-bit bidirectional interface, which uses DMA and can provide up to 2.5 MBps of bandwidth. Today, the universal serial bus (USB) has replaced the parallel port; in fact, several manufacturers have excluded the parallel interface entirely. For older personal computers (PCs) and laptops, however, a USB-to-parallel adapter is available for parallel printers or other peripheral devices that have a parallel interface.
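The nibble-mode behavior described above — moving a byte back to the computer four bits at a time and reassembling it — can be sketched with a few bit operations. This is an illustrative sketch only; the class and method names are invented, and real nibble-mode transfers also involve handshaking over the port's status lines.

```java
// Hypothetical sketch: how nibble mode splits a byte into two 4-bit
// transfers, which the host then reassembles into the original byte.
public class NibbleMode {
    // Split one byte into its high and low nibbles (4 bits each).
    static int[] toNibbles(int b) {
        return new int[] { (b >> 4) & 0x0F, b & 0x0F };
    }

    // Reassemble the byte on the receiving side.
    static int fromNibbles(int high, int low) {
        return ((high & 0x0F) << 4) | (low & 0x0F);
    }

    public static void main(String[] args) {
        int data = 0xA7;
        int[] n = toNibbles(data);
        System.out.println("high nibble: " + n[0] + ", low nibble: " + n[1]);
        System.out.println("reassembled: " + fromNibbles(n[0], n[1]));
    }
}
```

Byte mode and EPP avoid this split entirely by driving all eight data lines in both directions, which is why they are faster.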
A new camera technology developed by scientists from Nanyang Technological University, Singapore (NTU Singapore) can take sharp, colour images without using a lens and colour filters. Using only a piece of ground glass and a monochrome sensor, the scientists created multi-coloured images by “reverse engineering” the light that is scattered by the translucent matt surface of the ground glass, thus obtaining the original image that was projected on to it.
Since different wavelengths of light are scattered differently by the ground glass, the NTU scientists created an algorithm to reconstruct the image. To do this they created a library of “speckle patterns” linked to each wavelength of light, including those in the infrared and ultraviolet spectral regions. In a conventional camera, optics made from glass or plastic lenses capture light and guide it onto the colour filters and camera sensor to obtain sharp colour images. These lenses are usually bulky in size and expensive due to the precision manufacturing required. By removing the need for a lens and colour filters and replacing them with ground glass, this innovation could potentially be used in compact cameras and smart phones to make them slimmer.
Assistant Professor Steve Cuong Dang from the NTU School of Electrical and Electronic Engineering, who led the research, said their new imaging technique could help to improve imaging applications in biomedical and scientific applications as well as opening new doors for other industries.
“Our technology can also reconstruct images in other multiple wavelengths invisible to the naked eye, like infrared and ultraviolet, which are used in imaging purposes for medicine, surveillance and astrophysics. It can also reconstruct images taken at the microscopic scale,” explained Professor Dang. “Our multispectral imaging technique uses a monochromic (black and white) camera coupled with a simple piece of ground glass, making it very cost-effective compared to existing multispectral cameras on the market. The unique feature of our camera is that it can capture any range of light spectrum, unlike existing cameras on the market which are pre-fixed. It is also less affected by optical alignment issues like conventional cameras, because there are no moving parts and no focusing optics.”
With only a snapshot picture and a computational algorithm, this multispectral imaging technique combines the strengths of vision technology and spectroscopy to do multiple analysis at very high speeds. A patent has been filed for this new technology by NTU’s innovation and enterprise arm, NTUitive, and the research team will be engaging industry partners to see how they can adapt their technology for real-world applications.
Details of the research were published in Optica in October 2017.
What is Brain Injury?
Your brain controls your ability to think, talk, move, and breathe. In addition to being responsible for your senses, emotions, memory, and personality, your brain allows every part of your body to function - even when you're sleeping.
Brain injuries are often described as either traumatic or acquired based on the cause of the injury.
Traumatic brain injury (TBI) is an insult to the brain, not of a degenerative or congenital nature, which is caused by an external physical force that may produce a diminished or altered state of consciousness, and which results in an impairment of cognitive abilities or physical functioning. It can also result in the disturbance of behavioral or emotional functioning.
When you injure your brain, you injure an important part of the body.
A TBI can affect your ability to:
- Think and solve problems,
- Move your body and speak, or
- Control your behavior, emotions, and reactions.
A TBI is not:
- Hereditary. You can't inherit a TBI from your parents.
- Degenerative. A TBI will not cause your brain to gradually deteriorate.
- Congenital. You're not born with a TBI.
Acquired brain injury (ABI) is an injury to the brain that is not hereditary, congenital or degenerative.
Acquired brain injuries are caused by some medical conditions, including strokes, encephalitis, aneurysms, anoxia (lack of oxygen during surgery, drug overdose, or near drowning), metabolic disorders, meningitis, or brain tumors.
Although the causes of brain injury differ, the effects of these injuries on a person's life are quite similar. |
Collagen, the most abundant protein in mammals, is the main fibrous component of skin, bone, tendon, cartilage, and teeth. It makes up over one-third of the dry weight of human skin. This extracellular protein is a rod-shaped molecule, about 3000 Å long and only 15 Å in diameter. There are at least twenty-eight different types of collagen, built from at least 46 different polypeptide chains, that have been located in vertebrates, along with other proteins that contain collagenous domains. The defining characteristic of collagen is that it is a structural protein composed of a right-handed bundle of three parallel, left-handed polyproline II-type (PPII) helices. Because of the tight packing of PPII helices within the triple helix, every third residue must be Gly (glycine). This results in a repeating XaaYaaGly sequence pattern. Although this pattern occurs in all types of collagen, it is disrupted in certain regions within the triple-helical domain of nonfibrillar collagens. The amino acid in the Xaa position is most often (2S)-proline (Pro, 28%), and the amino acid in the Yaa position is most often (2S,4R)-4-hydroxyproline (Hyp, 38%). This means that ProHypGly is the most common triplet in collagen. Much research has been done on the structure of the collagen triple helix and on how its chemical properties affect collagen's stability. It has been found that stereoelectronic effects and preorganization are important factors in determining the stability of collagen. The structure of one type, type I collagen, has been revealed in detail. Synthesizing artificial collagen fibrils, which are smaller strands of fiber, is now possible, and these fibrils can exhibit properties that natural collagen fibrils have.
A continually improving understanding of the mechanical and structural properties of native collagen fibrils will help researchers devise and develop artificial collagenous materials that can be applied to many areas of our lives, such as biomedicine and nanotechnology.
Structure of Collagen
The structure of collagen has been studied intensively throughout history. At first, Astbury and Bell put forth the idea that collagen was made up of a single extended polypeptide chain with all its amide bonds in the cis conformation. In 1951, other researchers correctly determined the structures of the alpha helix and the beta sheet. Pauling and Corey proposed a structure in which three polypeptide strands are held together through hydrogen bonds in a helical conformation. In 1964, Ramachandran and Kartha advanced a structure for collagen: a right-handed triple helix of three left-handed polyproline II helices, with all the peptide bonds in the trans conformation and two hydrogen bonds in each triplet. Afterwards, the structure was refined by Rich and Crick to the triple-helix structure accepted today, which contains a single interstrand N-H(Gly)...O=C(Xaa) hydrogen bond per triplet and a tenfold helical symmetry with a 28.6 Å axial repeat.
Function and diversity
Collagen, which is present in all multicellular organisms, is not one protein but a family of structurally related proteins. The different collagen proteins have very diverse functions. The extremely hard structures of bone and teeth contain collagen and a calcium phosphate polymer. In tendons, collagen forms rope-like fibers of high tensile strength, while in the skin collagen forms loosely woven fibers that can expand in all directions. The different types of collagen are characterized by different polypeptide compositions. Each collagen molecule is composed of three polypeptide chains, which may be all identical or may include two different kinds of chain. A single molecule of type I collagen has a molecular mass of 285 kDa, a width of 1.5 nm and a length of 300 nm.
Type | Chain composition | Tissue distribution
I | [alpha 1(I)]2, alpha 2(I) | Skin, bone, tendon, cornea, blood vessels
II | [alpha 1(II)]3 | Cartilage, intervertebral disk
III | [alpha 1(III)]3 | Fetal skin, blood vessels
IV | [alpha 1(IV)]2, alpha 2(IV) | Basement membrane
V | [alpha 1(V)]2, alpha 2(V) | Placenta, skin
Overview of Biosynthesis
Collagen polypeptides are synthesized by ribosomes on the rough endoplasmic reticulum (RER). The polypeptide chain then passes through the RER and Golgi apparatus before being secreted. Along the way it is post-translationally modified: Pro and Lys residues are hydroxylated and carbohydrate is added. Before secretion, three polypeptide chains come together to form a triple-helical structure known as procollagen. The procollagen is then secreted into the extracellular spaces of the connective tissue, where extensions of the polypeptide chains at both the N and C termini (extension peptides) are removed by peptidases to form tropocollagen. The tropocollagen molecules aggregate and are extensively cross-linked to produce the mature collagen fiber.
Stability of Triple Helix Structure
Collagen is important for animals because it possesses many essential properties, such as thermal stability, mechanical strength, and the ability to bind and interact with other molecules. Knowing how these properties arise requires an understanding of the structure and stability of collagen. Substituting amino acids at any of the XaaYaaGly positions can affect the structure and stability of collagen in numerous ways.
Replacing the glycine in the XaaYaaGly sequence often causes disease, as such replacements are associated with mutations in the triple-helical and non-triple-helical domains of a variety of collagens. These mutations are damaging because the substituted Gly participates in the hydrogen bonds within the triple helix. For example, both the identity of the amino acid replacing Gly and the location of the substitution can affect the pathology of osteogenesis imperfecta. Substituting a Gly in proline-rich regions of the collagen sequence causes less disruption than in proline-poor regions. The folding delay caused by glycine substitutions results in overmodification of the procollagen chains, which alters the normal state of the triple helix and thus contributes to the development of osteogenesis imperfecta.
Higher-Order Collagen Structure
Collagen is built from hierarchical components: individual TC monomers self-assemble into macromolecular fibers. In type I collagen, monomers make up microfibrils, which in turn make up fibrils.
TC monomers of type I collagen have a curious feature: they are unstable at body temperature, meaning they prefer to be disordered rather than structured and ordered. How can something unstable be a component of something so stable, like the triple-helical structure of collagen? The answer is that collagen fibrillogenesis stabilizes the triple helix: when the monomers assemble, they have a stabilizing effect on one another. This contributes to the strength of the collagen triple-helix structure.
Collagen fibrillogenesis occurs through the formation of intermediate-sized fibril segments called microfibrils. Two essential questions must be answered to understand the molecular structure of collagen fibrils. First, what is the arrangement of the individual TC monomers that make up the microfibril? Second, how do those microfibrils make up the collagen fibril? These questions are difficult to answer because individual natural microfibrils cannot be isolated, and the large size and insolubility of mature collagen fibrils make it impossible for standard techniques to determine the structure.
- Matthew D. Shoulders and Ronald T. Raines (2009). "Collagen Structure and Stability". PubMed, pp. 3-6.
- David Hames, Nigel Hooper. Biochemistry, 3rd edition. Taylor & Francis Group, New York, 2005.
- Saad Mohamed (1994). "Low resolution structure and packing investigations of collagen crystalline domains in tendon using Synchrotron Radiation X-rays, Structure factors determination, evaluation of Isomorphous Replacement methods and other modeling." PhD Thesis, Université Joseph Fourier Grenoble 1.
A lever is a simple machine made of three parts: two arms and a fulcrum. The two arms are often referred to as the force arm and the load arm, to distinguish which arm initiates movement. Levers come in three classes.
Transmission of Torque
Levers are ancient lifting tools dating back thousands of years. An individual wedges a plank under a load, uses a fulcrum to give the plank a pivot point, and lifts the load by applying force to the opposite end of the plank. The product of the force and its distance to the fulcrum is the applied torque. If the torque applied to the plank exceeds the torque produced by the load at the other end, the plank will lift the load.
A lever reaches equilibrium when the torques produced by the forces on each of its arms, with respect to the fulcrum, are equal. As a rule, the closer a force is to the fulcrum, the less force the lever needs at the other end to achieve equilibrium. Furthermore, a lever's power can be amplified or diminished either by changing the forces or by changing the position of the fulcrum, thereby lengthening one arm and shortening the other.
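The equilibrium rule above can be worked through numerically. The sketch below (class and method names are invented for illustration) computes the force needed at one arm to balance a load at the other, using torque = force × distance to the fulcrum.

```java
// Sketch of the lever equilibrium rule: the lever balances when
// the torque on one arm equals the torque on the other.
public class Lever {
    // Torque about the fulcrum: force times distance to the fulcrum.
    static double torque(double force, double armLength) {
        return force * armLength;
    }

    // Force needed at distance effortArm from the fulcrum to balance
    // a load sitting at distance loadArm on the other side.
    static double balancingForce(double load, double loadArm, double effortArm) {
        return torque(load, loadArm) / effortArm;
    }

    public static void main(String[] args) {
        // A 200 N load 0.5 m from the fulcrum, with effort applied 2 m
        // away on the other side, needs only 50 N to balance.
        System.out.println(balancingForce(200.0, 0.5, 2.0));
    }
}
```

Note how lengthening the effort arm (or moving the fulcrum toward the load) shrinks the required force, which is exactly the amplification described above.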
Position of Fulcrum
Class-1 levers have the fulcrum situated between the load and the force. A playground teeter-totter is an example of a class-1 lever. Class-2 levers have the load situated between the force and the fulcrum. A wheelbarrow is a common example of a class-2 lever, with the fulcrum at the wheel, the force at the handles and the load in the barrow between. Class-3 levers have the force situated between the fulcrum and the load arm. Fishing rods are a good example of a class-3 lever, with the fisherman's elbow as the fulcrum, the fisherman's hand as the force, and the lure the fisherman casts as the load. |
1) When is the best time to administer a reward or punishment?
2) How much of a reward or punishment is needed?
3) How often should we deliver a reward or punishment?
4) What type of reward or punishment works best, and under which conditions?
A substance or activity can only become addictive if it is rewarding; i.e., if it is pleasurable or enjoyable (at least initially). Individuals who dislike particular substances or activities have little risk for developing an addiction to those substances or activities. Such dislikes are not uncommon. Some people do not enjoy certain substances or activities. This protects them from developing an addiction simply because those substances or activities are not enjoyable. They are not rewarding.
Addiction is a learned behavior because the initial pleasure or enjoyment was rewarding. According to the principles of operant conditioning, rewarded behaviors will increase. Of particular concern is that most addictive substances and activities are immediately rewarding. Research has taught us that when we immediately reward a behavior people (and animals) learn it more quickly. This also explains why the addictive substance or activity tends to replace other, more healthy sources of reward. These other types of rewards are frequently delayed (such as the return of good health). An unfortunate cycle also develops. As addiction progresses, the availability of natural, healthy pleasures (rewards) decline due to the addiction. Friendships are strained. Loved ones become bitter. Meaningful jobs or hobbies are lost or abandoned. When this happens, addicted people become more and more dependent on their addiction as their sole source of reward. This creates an unfortunate but powerful addictive cycle.
Punishment also plays an important role in the development of addiction. If there is an early and significant punishment (perhaps a DUI, or a medical problem) then the addiction might not develop. In many cases, punishments for addiction occur much later, when the addiction is already firmly in place. At this point, many chemical and physiological changes have already occurred in the brain. This makes it more difficult to discontinue the addiction. Simultaneously, unhealthy cognitive and emotional patterns have become well-established. This too makes it more difficult to change addictive behavior. Therefore, in these later stages of addiction punishment alone is usually insufficient to create a lasting change. The most successful approach is to increase rewards for healthy behavioral choices while eliminating rewards for addictive behavior.
Operant conditioning has resulted in several effective treatments. The basic idea is to reward addicted people for making healthier, recovery-oriented choices. However, research has made it very clear: The rewards must have some value, and the reward must be substantial. Again, this has a common sense ring to it. It's unlikely an addicted person would give up their addiction for a piece of chocolate. However, they might give it up for a car!
It follows that what might be rewarding to one person, would be meaningless to the next. For a very hungry person food might be very rewarding. However, if someone just finished a Thanksgiving feast, food is unlikely to be rewarding. Addictions research has demonstrated that by rewarding some people with inexpensive but desired items they can increase the number of abstinent days. This is particularly true for people with limited financial means. These same inexpensive items would not likely serve to change the behavior of someone with greater financial means.
CRAFT (Community Reinforcement and Family Training; Meyers & Wolfe, 2004) is a therapy that relies on operant conditioning. The social portion of the Bio-Psycho-Social-Spiritual model stresses the importance of interpersonal relationships. Therefore, addiction treatment often needs to include family members or other people who have a close personal relationship with the addict.
CRAFT teaches concerned significant others (CSO's) to reward the addicted person's positive, healthy behaviors. These behaviors oppose addiction. The CSO's also learn to remove rewards for unhealthy behaviors. These behaviors support addiction. For instance, a wife may plan a pleasant evening for her husband when he comes home from work, without stopping at a bar. However, if he comes home drunk, her kind attention is withdrawn. In this case, she would excuse herself from his company for the rest of the evening. By rewarding healthy behavior, and withdrawing rewards for unhealthy behavior, the wife is applying the principles of operant conditioning. This approach will increase the husband's healthy behaviors but only if quality time with his wife is rewarding. Another husband might find time alone to be more rewarding. Once again, we must target the rewards to each person.
CRAFT teaches family members to allow the negative consequences of addiction to affect the addicted person directly. This is often difficult for family members. Out of care and concern for their loved one, they have prevented these consequences from occurring. Moreover, these negative consequences often affect the entire family, not just the addict. For example, suppose a spouse loses his/her job because of irregular work attendance due to addiction. This loss of income has a huge impact on the entire family! Therefore, it is quite natural for the healthy spouse to try to prevent these sorts of problems.
Regardless of the loving motives of family members, removing the negative consequences serves to prolong addiction. In contrast, allowing these consequences to occur serves as a deterrent (punishment). For instance, if a wife misses work on Monday morning because she has a hangover, her husband does not call in to work for her. Instead, he lets her make the call herself. In some severe situations, CSO's may even apply negative consequences of their own (such as moving out of the house). However, these sorts of drastic negative consequences (punishments) are the last resort, not the first. Research has made it clear: People maintain positive behavior much longer when they expect a reward, rather than a punishment. |
Because carnivorous plants grow in soggy, nutrient-poor soil or waters, they have developed methods to trap and digest insects. These plants might sound as though they are rare and tropical, but there are more than 600 varieties of carnivorous plants, 14 of which grow in Texas.
Pale Pitcher Plant
The pale pitcher plant (Sarracenia alata) is a very slender and tall plant that can reach heights of 26 inches or more. It has yellow-green leaves with red veins and produces pale yellow flowers. It is a pitfall plant, which means its leaves curl to produce deep pools of digestive enzymes. The top of the trap is covered with a waxy layer that prevents insects from escaping. It grows in bogs, swamps and other wet areas.
Small Butterwort
The small or dwarf butterwort (Pinguicula pumila) grows about 4 inches high, with a slightly smaller width. Its yellow-green curled leaves are covered with small hairs and a thick, sticky mucilage that it uses to catch insects. It grows in moist, open areas and in thin woods.
Sundews
Although only two varieties of sundew are found in Texas, they are very different from each other. The dwarf sundew (Drosera brevifolia) has wedge-shaped leaves and is red or reddish-purple in color. The pink sundew (Drosera capillaris) has spoon-shaped leaves and appears red in strong sunlight but lime-green with red tentacles under normal light. Both species are small (about 2 to 3 inches wide) and flat and grow in bogs or wet areas. Leaves are covered with hairy tentacles that secrete a sticky fluid that the plant uses to catch its prey. Once caught, the leaves of the sundew will slowly curl around the insect in order to digest it.
Bladderworts
Bladderworts can be found growing in the water or in very wet soil. They are rootless and free-floating plants that range in size from a few inches to a few feet tall. Above the surface, they produce tiny flowers on tall stalks in a variety of colors, usually yellow or violet. However, it is below the surface that the carnivorous action occurs. Bladderworts have tiny bladders attached to their leaves that, when triggered by sensitive hairs, suck in their prey.
Ten varieties of bladderwort can be found in Texas: leafy (U. foliosa L.), southern (U. juncea), horned (U. cornuta Michx.), humped (U. gibba L.), swollen (U. inflata Walter), common (U. macrorhiza), eastern purple (U. purpurea), little floating (U. radiata Small), striped (U. striata), and zigzag (U. subulata L.). |
Types of Computer Memory
Random Access Memory (RAM) and Read Only Memory (ROM) are essential computer memory. Virtual Memory and Cache are other terms we often hear related to computer memory.
What is RAM?
As the name suggests, random access memory can locate and access data at any position directly. Sequential access is the opposite of random access: to retrieve data stored in the middle, all the data from the beginning has to be read in order until the desired data is found, which takes time. In RAM, by contrast, access can jump directly to the middle without reading the preceding data, so reading is faster. RAM is used in computers and printers; in fact, RAM is the most important memory in both.
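The random-versus-sequential difference described above can be sketched by counting access steps. This is a toy illustration with invented names, not a model of real memory hardware: a sequential medium must pass over every element before the target, while random access jumps straight to the address.

```java
// Toy sketch of sequential vs. random access cost.
public class AccessModes {
    // Sequential access: every element up to the target must be
    // read in order, so the cost grows with the target's position.
    static int sequentialSteps(int[] tape, int target) {
        int steps = 0;
        for (int i = 0; i <= target; i++) {
            steps++; // each element is passed over in order
        }
        return steps;
    }

    // Random access: one direct jump to the address, regardless
    // of where the target sits.
    static int randomSteps(int[] memory, int target) {
        int value = memory[target]; // direct indexed read
        return 1;
    }

    public static void main(String[] args) {
        int[] data = new int[1000];
        System.out.println("sequential steps: " + sequentialSteps(data, 499));
        System.out.println("random steps: " + randomSteps(data, 499));
    }
}
```

For an element halfway through 1000 items, the sequential read costs 500 steps while the random read costs one, which is why RAM-style access is so much faster.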
Types of RAM.
Dynamic RAM (DRAM) and static RAM (SRAM) are the two types of RAM. Of these, dynamic RAM is cheaper and more commonly used. Static RAM is very fast. A dynamic RAM has to be refreshed many thousands of times per second; static RAM does not have to be refreshed.
RAM is a volatile memory. Hence, when there is a power cut, all the data in the RAM memory is lost. RAM is always referred to as the Main Memory. RAM is a READ / WRITE Memory.
All the programs and data that we use on a computer have to be transferred from the hard disk to RAM before they can be used. Therefore, the more RAM you have, the faster your computer runs.
What is ROM - Read Only Memory
ROM also uses the random access method to read data. The difference is that the data written to ROM can only be read — you cannot write to it.
Since RAM is volatile, the data is erased when the computer shuts down or power is cut, but ROM is non volatile. Therefore the data in ROM is always there even when there is no power.
All the computers have a ROM. The most essential programs that are required to start the computer are stored in this ROM memory.
Calculators, Laser Printers use ROM.
PROM - Programmable Read Only Memory - is a part of ROM memory. When manufactured the PROM is empty. Then data is written to it using a special device.
EPROM - Erasable Programmable Read Only Memory - is a type of ROM that can be erased using ultraviolet rays and then reprogrammed with different data or programs.
EEPROM - Electrically Erasable Programmable Read Only Memory - is a type of ROM that can be erased using an electric current.
What is Virtual Memory?
A part of the hard disk can be used as virtual memory. Generally the size of the virtual memory in a computer is 2 to 2.5 times the size of the RAM.
Assume you are starting an application, but previously started programs are already occupying the RAM, and the remaining free RAM is not sufficient for the new application. Then virtual memory is used. The operating system comes into play: it decides which applications are not currently being used and moves them from RAM to the virtual memory on the hard disk. The freed RAM can then hold the new application, which can now be started.
If a program that was moved to virtual memory is used again, the operating system brings that application back from virtual memory into RAM, and some other idle application is moved out to virtual memory in its place.
Therefore the virtual memory is also referred to as Swap memory.
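The swapping policy described above — keep recently used programs in RAM and move the least recently used one out when space runs short — can be sketched with a least-recently-used (LRU) map. This is a toy model, not how a real OS manages pages; the application names and the RAM capacity are invented for illustration.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy sketch of swap: RAM holds a fixed number of "applications";
// when a new one starts and RAM is full, the least recently used
// application is moved out to (virtual) swap space on disk.
public class SwapSketch {
    static final int RAM_SLOTS = 3;

    // An access-ordered LinkedHashMap evicts its least recently
    // used entry whenever capacity is exceeded.
    static LinkedHashMap<String, String> ram =
        new LinkedHashMap<String, String>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                if (size() > RAM_SLOTS) {
                    System.out.println("swapped out to disk: " + eldest.getKey());
                    return true;
                }
                return false;
            }
        };

    public static void main(String[] args) {
        for (String app : new String[] {"editor", "browser", "player", "compiler"}) {
            ram.put(app, "running");
            System.out.println("in RAM: " + ram.keySet());
        }
    }
}
```

Starting the fourth application ("compiler") pushes the least recently used one ("editor") out to swap, exactly the exchange the operating system performs between RAM and the swap area.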
What is Cache ?
A part of the main memory (RAM) can be used as a cache, or the cache can be a separate chip.
Java includes a well-designed collection of stream classes and interfaces that make up most of the java.io package.
What is a stream?
In every computer program, there is an exchange of data between two or more sources. Streams were created to abstract the concept of exchanging information between various devices and to provide a consistent interface for programmers to interact with different sources of I/O in their programs. The basic idea of a stream is that data enters at one end of a data channel in a particular sequence and comes out at the other end in the same sequence. Thus, streams are ordered sequences of data.
File I/O in Java
In Java, there are three streams called the standard I/O streams (System.in, System.out and System.err). Besides standard I/O, Java includes a well-designed collection of stream classes and interfaces that make up most of the java.io package. The java.io package can be further subdivided into classes based on the data they operate on: depending on the type of data, classes operate on either character data or byte data.
The InputStream and OutputStream classes and all their descendants represent byte streams and the Reader and Writer classes and all their descendants represent character streams.
In Java, most input-output methods can throw an IOException (for example, on reaching the end of a file, or when a file is not found). Such exceptions must be handled for the program to behave correctly. They can be caught with a try-catch block, or the enclosing method can be declared to throw the exception.
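Both ways of handling an IOException can be shown in a short sketch. The class name and the file path are placeholders; the API calls (FileReader, BufferedReader, try-with-resources) are standard java.io usage.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// Two ways to deal with IOException from java.io methods:
// catch it locally, or declare it on the enclosing method.
public class ReadFileExample {
    // Option 1: handle the exception locally with try-catch.
    static void readWithTryCatch(String path) {
        try (BufferedReader in = new BufferedReader(new FileReader(path))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        } catch (IOException e) {
            System.err.println("I/O error: " + e.getMessage());
        }
    }

    // Option 2: declare the exception, forcing callers to handle it.
    static String readFirstLine(String path) throws IOException {
        try (BufferedReader in = new BufferedReader(new FileReader(path))) {
            return in.readLine();
        }
    }

    public static void main(String[] args) {
        readWithTryCatch("example.txt"); // reports an error if the file is absent
    }
}
```

The try-with-resources form also closes the underlying character stream automatically, whether the read succeeds or an IOException is thrown.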
What’s the News: An international team of researchers, led by the National Center for Atmospheric Research, has learned that large magnetic waves are partly to blame for the Sun’s immensely hot corona. The study, published in the journal Nature, also suggests that the waves could be the driving force behind the solar wind.
The pair of observers that make up NASA’s Solar Terrestrial Relations Observatory (STEREO) have been traveling since 2006 to reach opposite sides of our star, and they just beamed back the first 360-degree solar images.
The satellites are in the same orbital path as Earth, more or less, and have just taken up their final positions — one is where we’ll be in three months, and the other where we were three months ago. (The first has NASA’s least imaginative name to date: STEREO A, for “ahead.” The second is called STEREO B, for…you can probably guess.) [TIME]
Seeing the far side of the sun isn’t just a scientific curiosity. It could also help researchers figure out the sun’s violent outbursts, like the coronal mass ejections that could endanger astronauts and foul up satellites if one headed for Earth.
Into the great unknown, into the wild blue yonder, past the second star on the right and straight on till morning: That’s where NASA’s Voyager 1 is heading. The remarkable spacecraft was launched 33 years ago, and it’s now reaching the edge of our solar system. Within a few years, NASA says, it will enter interstellar space.
Phil Plait reports on how researchers realized they’d reached a milestone in Voyager 1’s journey:
Over all those years, there has been one constant in the Voyager flight: the solar wind blowing past it. This stream of subatomic particles leaves the Sun at hundreds of kilometers per second, much faster than Voyager. But now, after 33 years, that has changed: at 17 billion kilometers (10.6 billion miles) from the Sun, the spacecraft has reached the point where the solar wind has slowed to a stop. Literally, the wind is no longer at Voyager’s back.
Read the rest of his post at Bad Astronomy.
80beats: The Edge of the Solar System Is a Weird and Erratic Place
80beats: Near the Edge of the Solar System, Voyager 2 Finds Magnetic Fluff
80beats: NASA Spacecraft Will Soon Map the Solar System’s Distant Edge
80beats: Voyager 2 Hits the Edge of the Solar System—and Writes Home
The edge of the solar system is not some static line on a map. The boundary between the heliosphere in which we live and the vastness of interstellar space beyond is in flux, stretching and shifting more rapidly than astronomers ever knew, according to David McComas.
McComas and colleagues work with NASA’s Interstellar Boundary Explorer (IBEX), a satellite orbiting the Earth with its eye turned to the edge of the heliosphere—the bubble inflated by the solar wind that encapsulates the solar system and protects us from many of the high-energy cosmic rays zinging across interstellar space. This week in the Journal of Geophysical Research, the team published the results of IBEX’s second map of the region, and found that its makeup has changed markedly over the span of just six months. Says McComas:
“If we’ve learned anything from IBEX so far, it is that the models that we’re using for interaction of the solar wind with the galaxy were just dead wrong.” [National Geographic]
Auroras on Saturn form like those on Earth, when charged particles in the solar wind stream down the planet’s magnetic field towards its poles, where they excite gas in the upper atmosphere to glow. Some auroras on the ringed planet are also triggered when some of its moons, which are electrically conducting, move through the charged gas surrounding Saturn. [New Scientist]
At long last, here comes the sun (mission).
Never mind NASA’s numerous observatories; never mind the unmanned Pioneer 10 and Voyager probes careening toward the far reaches of the solar system—no craft has ever gone to the center of the solar system, the sun. This decade that will change. NASA is in the process of selecting the instruments for its Solar Probe Plus, a mission to launch by 2018 that will get closer to the sun than ever before, and hopefully find some answers to the open questions that remain about our life-giving star.
“The experiments selected for Solar Probe Plus are specifically designed to solve two key questions of solar physics: why is the Sun’s outer atmosphere so much hotter than the Sun’s visible surface, and what propels the solar wind that affects Earth and our Solar System,” said Dick Fisher, director of Nasa’s Heliophysics Division in Washington DC. [BBC News]
The probe isn’t quite setting the controls for the heart of the sun, Pink Floyd-style, but it will draw dangerously close.
Atmospheric Tag Team
Akatsuki, the Venus climate probe, will arrive at the second planet from the sun in December. There it will team up with the European Space Agency’s Venus Express probe, using five cameras to peer down into the turbulent atmosphere and study Venus‘ maniacal meteorology.
One of the main goals is to understand the “super-rotation” of the Venus atmosphere, where violent winds drive storms and clouds at speeds of more than 220 mph (360 kilometers per hour), 60 times faster than the planet itself rotates [MSNBC].
The Venus Express’ own findings since it reached the planet in 2006 have bolstered the idea that Venus was once alive with plate tectonics, oceans, and continents—that is, it was once much more Earth-like than its current, sweaty incarnation. In fact, Venus may still be active.
It’s alive! It’s alive! (Maybe.)
On May 18, the Japan Aerospace Exploration Agency (JAXA) says, it will launch into space a “solar yacht” called Ikaros—the Interplanetary Kite-craft Accelerated by Radiation of the Sun (named, of course, in honor of Icarus in Greek mythology). JAXA plans to control the path of Ikaros by changing the angle at which sunlight particles bounce off the silver-coloured sail [AFP].
Actually, the solar sail is a dual-purpose system, taking advantage of both the pressure and the energy of sunlight. The sail, which is thinner than a human hair and measures 66 feet across its diagonal, will catch the actual force of sunlight for propulsion as a sailboat’s sail catches the wind. But the solar sail is also covered in thin-film solar cells to generate electricity. And if you can make electricity, you can use it to ionize gas and emit it at high pressure, which is the propulsion system most satellites use.
Potential velocity using a solar sail has been theorized to be extremely high. “Eventually you’ll have these missions lasting many years, reaching speeds approaching 100,000 mph, getting out of the solar system in five years instead of 25 years,” said Louis D. Friedman, the Executive Director of the Planetary Society [Clean Technica]. The society has toyed around with its own solar sail.
For now, though, JAXA has a six-month test mission planned for Ikaros. If it works, they want to send a solar sail-powered mission to Jupiter and then the Trojan asteroids. That voyage would employ both the force of the sun and ion propulsion, and the Japanese are brimming with confidence: “Unlike the mythical Icarus, this Ikaros will not crash,” Yuichi Tsuda, an assistant professor at JAXA, said today [BusinessWeek].
80beats: Japan’s Damaged Asteroid Probe Could Limp Back to Earth in June
80beats: Spacecraft That Sails on Sunshine Aims For Lift-Off in 2010, on the Planetary Society’s own attempts at a solar sail.
DISCOVER: Japan Stakes Its Claim in Space
DISCOVER: One Giant Step for a Small, Crowded Country, on Japan’s moon aspirations
DISCOVER: Japan Sets Sail in Space
While astronomers continue to learn about peculiar phenomena in distant galaxies, our own sun’s behavior still presents a mystery. So NASA’s next mission, the Solar Dynamics Observatory, will watch every move the sun makes in the hope of fully figuring out its cycles of sunspots, solar flares, and other activity.
Set to launch next week aboard an Atlas V rocket, the SDO will snap 60 high-resolution images of the sun every minute. Using three specific science instruments, SDO will measure how much extreme ultraviolet light the Sun emits, map plasma flows in the Sun, map the surface of its magnetic field, and image the solar atmosphere [Astronomy]. Scientists hope this huge catalog of images, taken at a resolution far better than that of HDTV and measuring about 1.5 terabytes of data per day, will help them connect the flares and spots on the solar surface to what’s happening down below, inside the star.
After three-plus decades of exploring the gas giants, passing the orbit of Pluto, and reaching points beyond, Voyager 2 has found something interesting near the edge of the solar system: surprisingly magnetic fluff. Researchers document their findings in this week’s Nature.
Of course, this fluff isn’t made from the dust bunnies you find under your bed; the ‘Local Fluff’ (a nickname for the Local Interstellar Cloud) is a vast, wispy cloud of hot hydrogen and helium stretching 30 light-years across [Discovery News]. Astronomers already knew this fluff was out there near the boundary area between our solar system and interstellar space. What surprised them is that the fluff is much more magnetized than they’d expected.
Creator: Cockpit Hill Factory
Context: This teapot was created in 1766 by a British manufacturer after the repeal of the Stamp Act by the British Parliament.
Audience: Americans celebrating the repeal of the Stamp Act.
Purpose: To increase the company's sales by appealing to the interests of American consumers.
Historical Significance: This teapot reveals the growth of a transatlantic consumer market during the years before the American Revolution. It also shows the shrewdness of British manufacturers who overlooked political interests in order to satisfy the desires of American consumers and generate sales. |
[Physics FAQ] - [Copyright]
Original by Warren G. Anderson 1996.
In 1975 Hawking and Bekenstein made a remarkable connection between thermodynamics, quantum mechanics and black holes, which predicted that black holes will slowly radiate away. (see Relativity FAQ Hawking Radiation). It was soon realized that this prediction created an information loss problem that has since become an important issue in quantum gravity.
In order to understand why the information loss problem is a problem, we need first to understand what it is. Take a quantum system in a pure state and throw it into a black hole. Wait for some amount of time, until the hole has evaporated back down to the mass it had before you threw anything in. What we start with is a pure state and a black hole of mass M. What we end up with is a thermal state and a black hole of mass M. We have (apparently) found a process that converts a pure state into a thermal state. But, and here's the kicker, a thermal state is a MIXED state (described quantum mechanically by a density matrix rather than a wave function). In transforming between a pure state and a mixed state, one must throw away information. For instance, in our example we took a state described by a set of eigenvalues and coefficients, a large set of numbers, and transformed it into a state described by temperature, one number. All the other structure of the state was lost in the transformation.
In technical jargon, the black hole has performed a non-unitary transformation on the state of the system. As you may recall, non-unitary evolution is not allowed to occur naturally in a quantum theory because it fails to preserve probability; that is, after non-unitary evolution, the sum of the probabilities of all possible outcomes of an experiment may be greater or less than 1.
In the face of such evolution, quantum mechanics falls apart, and we are faced with a dilemma. Do black holes really defy the tenets of quantum theory, or have we missed something in our thought experiment? Perhaps the black hole is not the same after it has evaporated to mass M as it was initially at mass M. Or perhaps there is some subtle correlation in the Hawking radiation that we are missing, but that supplies the missing information about the pure state.
This, then, is the black hole information loss problem. The fact that information is lost is reflected in the thermal nature of the emitted radiation. But any thermal system can be assigned an entropy via the Gibbs relation dE = T dS. Thus, we can calculate the black hole entropy by dint of the fact that we can calculate the black hole temperature (by dint of the fact that the quantum radiation is thermal). This is, I think, what people are getting at when they say that black hole entropy is responsible for the information loss. I would say it the other way, that black hole information loss is responsible for black hole entropy. The simple fact of the matter is that they are the same thing in slightly different terms.
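As a worked illustration of that claim (a standard textbook result, not part of the original FAQ): take the Hawking temperature of a Schwarzschild black hole in natural units (G = c = ħ = k_B = 1) and integrate dE = T dS with E = M. The result is exactly the Bekenstein-Hawking area entropy:

```latex
T_H = \frac{1}{8\pi M}, \qquad
dS = \frac{dE}{T_H} = 8\pi M \, dM
\;\;\Longrightarrow\;\;
S = 4\pi M^{2} = \frac{A}{4},
\qquad A = 4\pi (2M)^{2} = 16\pi M^{2}.
```

So knowing the temperature of the thermal radiation is indeed enough to fix the entropy, which is the sense in which the two are "the same thing in slightly different terms."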
Two notes to finish off. First, you might think that the thermal nature of the black hole is inevitable since it is radiating, but you would be wrong. In most of these quantum radiation calculations, the spectrum of the radiation does not have a Planck spectrum. If that had been the case for black holes, too, then we would not be able to assign a temperature or an entropy to black holes. In that case, people probably still would not believe Bekenstein and instead of the information loss paradox we'd still be wondering how to reconcile black holes with the second law. The thermal spectrum of Hawking radiation is one of the most serendipitous results in modern physics, in my opinion, which is another way of saying that something deep and not understood is going on.
The second is an interesting sidelight. While it's true that the Gibbs relation gives the correct Bekenstein-Hawking entropy from the calculated temperature, no one had been able (until a few months ago) to explain the entropy directly from quantum mechanical / statistical mechanical grounds. In fact, it has been proven that semiclassical gravity is insufficient to account for this entropy. This is a profound result, since the thermodynamical entropy is obtained at a semiclassical level (in fact, due to some quirks that I suspect are related to the non-linearity of gravity, it is essentially classical). Thus, we are faced with the disconcerting choice that A) thermodynamical entropy does not always have a statistical mechanical basis or B) gravity is not a fundamental interaction, but rather a composite effect of some more fundamental underlying theory. Option B is not disconcerting to superstring theorists, however; it is exactly their point of view. Interestingly, since about the beginning of the year, the superstring people have jumped into the "origin of black hole entropy" fray. It turns out that by using some old results about monopoles in certain types of field theories they have been able to count the string states that would contribute to a certain (unphysical) class of black hole of a given mass. The entropy is exactly that given by the Bekenstein area formula. The experts assure me that this will be extended to more physical models in the future. An exciting prospect indeed, if it pans out.
Chapter 6 - Waves
Superposition and Coherence:
Superposition occurs when 2 or more waves pass through each other.
At the instant the waves cross, the displacements due to each wave combine. Each wave then continues on in its original direction.
Principle of superposition = when 2+ waves cross, the resultant displacement is the vector sum of the individual displacements.
Interference can be constructive or destructive.
A crest + A crest = a bigger crest & a trough + a trough = a bigger trough. ∴ constructive.
A crest + a trough (of equal size) = 0. The 2 displacements cancel each other out completely ∴ destructive.
If the crest/trough aren't the same size the interference isn't total. For interference to be noticeable the amplitudes should be almost equal.
You can use phasors to show superposition
You can use little rotating arrows to represent the phase of each point on a wave.
The phasor rotates ANTICLOCKWISE through one whole turn as the wave completes a full cycle.
To find the resultant at time t, add the phasors tip to tail.
In PHASE means In STEP, 2 points in PHASE interfere CONSTRUCTIVELY.
2 points on a wave are in phase if they are both at the same point in a wave cycle.
Often use one wave cycle as 360 degrees (2π radians).
Points that have a phase difference of 0 or a multiple of 360 degrees are in phase ∴ phasors are pointing in same direction.
Points with a phase difference of odd-number multiples of 180 degrees are exactly out of phase. This is called ANTI PHASE. Their phasors point in opposite directions.
Points can have a phase difference of any angle.
You can talk about 2 different waves being in phase. In practice this happens because both waves came from the same oscillator.
In other situations, there will nearly always be a phase difference between 2 waves.
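The tip-to-tail phasor addition above can be sketched numerically (the class and method names are my own illustration, not from the notes): represent each phasor by its horizontal and vertical components, add the components, and take the length of the resultant. In phase (phase difference 0) gives double the amplitude; antiphase (180 degrees) gives zero.

```java
public class Phasors {
    // Adds two phasors of equal amplitude `a` separated by `phaseDiff`
    // radians, tip to tail, and returns the amplitude of the resultant.
    static double resultantAmplitude(double a, double phaseDiff) {
        double x = a + a * Math.cos(phaseDiff); // sum of horizontal components
        double y = a * Math.sin(phaseDiff);     // sum of vertical components
        return Math.sqrt(x * x + y * y);        // length of the resultant arrow
    }

    public static void main(String[] args) {
        System.out.println(resultantAmplitude(1.0, 0.0));     // in phase: 2.0
        System.out.println(resultantAmplitude(1.0, Math.PI)); // antiphase: ~0
    }
}
```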
To get interference patterns the 2 sources must be coherent.
Interference happens in a jumble when observing waves of different wavelength and frequency.
To get clear interference patterns the 2+ sources must be coherent.
2 sources are coherent if they have the same wavelength and frequency and a fixed phase difference between them.
Constructive/Destructive Interference depends on path difference.
Constructive/Destructive depends on how much further one wave has travelled than the other wave.
The amount by which one wave's path is longer is called the path difference.
Constructive interference occurs when path difference = n × wavelength (a whole number of wavelengths)
Destructive interference occurs when path difference = (n + ½) × wavelength
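The two rules above can be checked with a short sketch (names are illustrative, not from the notes): divide the path difference by the wavelength and look at the fractional part. A fraction near 0 means a whole number of wavelengths (constructive); near ½ means an odd half-wavelength (destructive); anything else is partial interference.

```java
public class Interference {
    // Classifies the interference produced by a given path difference
    // and wavelength `lambda`, using the path difference in wavelengths.
    static String classify(double pathDiff, double lambda) {
        double cycles = pathDiff / lambda;          // path difference in wavelengths
        double frac = cycles - Math.floor(cycles);  // fractional part in [0, 1)
        double eps = 1e-9;                          // tolerance for rounding error
        if (frac < eps || 1 - frac < eps) return "constructive"; // n * lambda
        if (Math.abs(frac - 0.5) < eps) return "destructive";    // (n + 1/2) * lambda
        return "partial";
    }

    public static void main(String[] args) {
        System.out.println(classify(3.0, 1.0)); // 3 wavelengths: constructive
        System.out.println(classify(2.5, 1.0)); // 2.5 wavelengths: destructive
    }
}
```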
You get standing waves when a progressive wave is reflected at a boundary
A standing wave is the superposition of 2 progressive waves with the same wavelength, moving in opposite directions.
Unlike progressive waves, standing waves transmit no energy.
You can demonstrate standing waves by setting up a driving oscillator at one end of a stretched string with the other end fixed. The wave generated is reflected back and forth.
For most frequencies the resultant pattern = jumble. If… |
What is it? This rubric helps teachers guide students in grades 6-12 in being effective critical thinkers in various phases of a project, and can be used to assess their performance. Alignment with CCSS in ELA is noted.
Why do we like it? We think this rubric is clear, concrete, and student-friendly. It shows how CCSS can be met through PBL.
How can you use it? Use this rubric to guide students and assess their work, or to inform your thinking as you create your own assessment tools. Schools and districts can adopt or adapt this rubric for use across all classrooms. |
Lesson Plan Reference
Grade Level 6-8
Difficulty Level 2 (Reinforcing / Developing)
Type of Assignment: Individual or Group
Common Core Standards
- [SCI-5-PS3-1] Use models to describe that energy in animals’ food (used for body repair, growth, motion, and to maintain body warmth) was once energy from the sun.
- [SCI-MS-LS2-2] Construct an explanation that predicts patterns of interactions among organisms across multiple ecosystems.
- [SCI-MS-LS2-3] Develop a model to describe the cycling of matter and flow of energy among living and nonliving parts of an ecosystem.
In this activity students are going to demonstrate their understanding of the transfer of energy between living things by creating different food chains.
This would be a great opportunity to speak to your students and explain what the arrows in the food chain represent: the flow of energy and also the transfer of matter.
As well as putting the animals in the correct trophic levels, students will have to find images of the animals (using Photos for Class or the Animals Character tab) and label the animal as a herbivore, omnivore or carnivore. Remind students that all food chains start with energy from the Sun. In most food chains, this energy is converted to glucose by photosynthesizing green plants.
The following food chains are used in the given example:
- Sun → Grass → Caterpillar → Sparrow → Hawk
- Sun → Tree → Squirrel → Fox
- Sun → Grass → Cow → Human
- Sun → Red Oat Grass → Termites → Mongoose → Caracal
Other examples of food chains
- Sun → Grass → Vole → Owl
- Sun → phytoplankton → Krill → Leopard Seal → Orca (Killer Whale)
- Sun → Typha (cattail) → Mouse → Opossum → Red Fox → Puma
- Sun → phytoplankton → zooplankton → Jellyfish → Shark
Have your students be more creative by giving them a habitat and getting them to research food chains in these habitats.
After completing this activity with students there is a great opportunity for them to evaluate their models. Lead students through the strengths and limitations of the models giving them an opportunity to make suggestions for improvements.
(These instructions are completely customizable. After clicking "Copy Assignment", change the description of the assignment in your Dashboard.)
Show your understanding of food chains by reordering the following plants and animals into food chains. Remember to use arrows to show the flow of energy.
- Click "Use this Template" from the assignment.
- In the first row, put these animals into a food chain: Sparrow, Caterpillar, Grass and Hawk.
- In the second row, put these animals into a food chain: Squirrel, Tree, and Fox.
- In the third row, put these animals into a food chain: Human, Cow and Grass.
- In the last row, put these animals into a food chain: Mongoose, Caracal, Red Oat Grass, and Termites.
- For each food chain, label each organism as a herbivore, omnivore or carnivore.
- Use Photos for Class to find an example image for each one.
- Save and submit your storyboard. Make sure to use the drop-down menu to save it under the assignment title.
(Modify this basic rubric by clicking the link below. You can also create your own on Quick Rubric.) |
Anatomy: Like all insects, crickets have a three-part body (head, thorax and abdomen), six jointed legs, and two antennae. Their body is covered with a hard exoskeleton. Crickets breathe through a series of holes called spiracles; they are located along the sides of the body. Crickets are brown or black. Crickets are very similar to grasshoppers, but the cricket's antennae are very long, the wings are held flat over the body, and the ovipositor is very long. Not all crickets have wings. Crickets sense sounds using tympani (hearing organs) located in their front legs.
Metamorphosis: Crickets undergo incomplete metamorphosis. They hatch from eggs that the female deposits in soil (or plant material) using her ovipositor. Immature crickets (called nymphs) look like small adults, but the wings and ovipositor (of the female) are not fully developed. They molt many times as they develop into adults.
Diet and Predators: Crickets are omnivores (they eat both plants and animals). They scavenge dead insects and eat decaying material, fungi and young plants. Their predators include birds, rodents, reptiles, other insects (including beetles and wasps), and spiders.
Classification: Kingdom Animalia (animals), Phylum Arthropoda (arthropods), Class Insecta (insects), Order Orthoptera (crickets, grasshoppers, etc.), Suborder Ensifera, Family Gryllidae (crickets), Genera Acheta, Gryllus, Oceanthus, Myrmecophila, many species.
Living La Vita – Paying Your Way
Financial Literacy Fundamentals
•Students will understand economics by applying a creative approach.
•Students will apply economic reasoning for life choices.
- Students will understand common economic terms and concepts.
◦Scarcity and Alternative
◦Choice and Opportunity Cost
- Students study the three (3) concepts either in class or on their own.
- After each module, students:
- Choose from the Financial Proverbs list those that most exemplify that particular financial concept.
- Find examples from their own lives of each financial concept
- Write a new proverb to reflect that financial concept.
- As a group, students come together to:
- Share their examples with each other.
- Share their personal proverbs.
- Pick a proverb, either from the Financial Proverb List or their own creations to create a PSA (public service announcement)
- Show students examples of Financial Literacy PSA’s (provided)
- Students prepare a 30 second to 1 minute video using their selected proverb to illustrate one of the economic concepts taught in the lessons. Students may use their phone, video camera, digital camera or webcam. High-quality resolution is not required. Students may choose ONE of the following topics:
- Scarcity and Economics
- Choice and Opportunity Cost
- Human Capital
LESSONS AVAILABLE UPON REQUEST: [email protected]
Outreach made available through the help of:
Bank of Hemet & Pacific Western Bank |
El Nino Signs Detected, Presaging Global Weather Change
Alex Morales
Signs have been detected that a periodic warming of the tropical Pacific known as El Nino is imminent, presaging changes to global weather patterns in the months ahead, the World Meteorological Organization said.
Water temperatures below the surface of the Pacific’s equatorial waters have warmed to levels similar to the onset of El Nino, and about two-thirds of climate models indicate thresholds for the phenomenon may be reached from June to August, the United Nations’ WMO said today in an e-mailed statement.
El Ninos occur irregularly every two to seven years and are associated with warmer than average years. They tend to lead to abnormally dry conditions over parts of Australia, the Philippines and Brazil, and to more intense storms in the Gulf of Mexico. Their counterpart, La Ninas, are associated with cooler years.
“El Nino and La Nina are major drivers of the natural variability of our climate,” WMO Secretary-General Michel Jarraud said in the statement. “If an El Nino event develops, and it is still too early to be certain, it will influence temperatures and precipitation and contribute to droughts or heavy rainfall in different regions of the world.”
The last El Nino was from 2009 to 2010. The strongest El Nino measured to date ran from 1997 to 1998 and contributed to 1998 being the third-warmest year on record.
“El Nino has an important warming effect on global average temperatures, as we saw during the strong El Nino in 1998,” Jarraud said. “The combination of natural warming from any El Nino event and human-induced warming from greenhouse gases has the potential to cause a dramatic rise in global mean temperature.”
El Nino events tend to lead to dry conditions in northern Australia, the Philippines and Indonesia, as well as drier periods in southeastern Africa and northern Brazil during the southern hemisphere’s summer, according to the WMO, a UN agency.
During the northern summer, Indian monsoon rains tend to diminish, harming crops, while the west coast of tropical South America gets wetter. Western Canada and Alaska usually get warmer winters, and the Gulf of Mexico gets more vigorous storms, it said. |
(II) The Role of Implicit and Explicit Learning in Mathematics
"We Learn By Doing
"Cognitive psychology is the branch of psychology concerned with the functioning of the brain, especially in regards to mental processes operating in response to stimuli, such as how well a student learns in response to different teaching methods" (Collins and Harding, 2004, p. 9). In the literature of cognitive psychology, the "learning by doing" process described in the quotation on the left-hand page is called implicit learning. This is the way most humans learn their first language. This is also the way most humans learn to play a sport or a musical instrument, or learn how to dance. However, contrary to Mr. Holt’s contention in his quote, it is not the only way humans learn. There is another process called explicit learning, and it is this process which is more common and more effective in the learning of mathematics.
Implicit learning is characterized by imitating behavior without necessarily acknowledging the underlying rules that determine this behavior. For example, children learn to speak their first language employing correct rules of grammar without being able to name those rules of grammar. On the other hand, explicit learning is rules oriented. Rules are learned, practiced in drills, and then implemented in problem situations. This is the most common way that adults learn a second language. They memorize the alphabet of the new language, they learn some vocabulary, they learn the rudimentary rules of grammar, and then they attempt to string words together according to the rules of grammar to generate simple sentences. With enough practice, this leads to an ability to converse in that language.
The rules-oriented approach to the teaching of mathematics is immediately recognizable by all. As children we are taught to memorize addition facts and then multiplication facts. Then we are taught algorithms for using these facts, and simple but abstract relationships between addition and subtraction and multiplication and division to enable us to subtract and divide whole numbers. In junior high school we are taught to employ these facts and these rules in more complex procedures for dealing with all the analogous arithmetic operations for ratios of whole numbers, fractions. Facts to be memorized, rules to be learned, practice to merge the two until they both become second-nature, then more complicated rules to be learned which build on the simple ones, and more practice to merge it all until it becomes second-nature, and on and on and on it goes. In fact, it goes on beyond what most naturally-inclined implicit learners can tolerate after the age of twelve. That is when most implicit learners stop acquiring new knowledge of mathematics.
If this is so, why can’t mathematics be taught to those individuals implicitly? To a point, it can. In fact, once rules of mathematics have become second-nature as described in the previous paragraph, then invocation of those rules is more in keeping with the process of implicit learning. That is, correct rules are employed without explicit acknowledgement of that rule at that moment. At this ideal stage of learning mathematics, this process is really a combination of the two. Unfortunately, without a lot of drill and practice of all the basic skills in mathematics, the learner will not achieve this ideal.
Another element of implicit learning that makes it ill-suited to the learning of mathematics is the lack of transference. In general, implicit learning is associative and context bound, while explicit learning is based on hypothesis testing and transfers across contexts (Berry & Dienes, 1993). Many basic mathematics skills can be learned associatively, but skills learned this way will be context bound and of limited value without an explicit understanding. For example, a child who cannot describe the conceptual meaning of multiplication but is merely drilled on multiplication tables, has in effect learned multiplication facts associatively. As a result, this child might be able to quickly state the answer to 7 x 9 but would fail to use multiplication when asked about the number of cups on a tray with seven rows, each with nine cups in it.
Automaticity of a skill is about making an explicitly learned skill also available implicitly. Thus, in the context or contexts in which the skill is highly over-learned (that is, is second nature) it will be appropriately and quickly retrieved with limited investment of mental resources.
Initially, it is essential that mathematical skills be learned explicitly if those skills are to be useful across different contexts without learning and practicing them in all potentially relevant situations. This explicit accessibility facilitates the use of that skill in new contexts.
Driving an automobile is a skill most of us acquire explicitly which eventually becomes automatic. This skill is ordinarily executed with so little demand on resources that most of us fail to remember what we did and what traffic signs we saw on the way to work. On the other hand, we can explicitly retrieve and describe the relevant sub-skills when teaching someone to drive. Furthermore, we do explicitly use and monitor these skills when road conditions are poor.
This is the ideal for mathematical skills to be useful in problem solving. When routine situations are encountered, we want invocation of mathematical procedures to be automatic so that working memory can attend to higher order thinking. On the other hand, when unfamiliar problem situations in mathematics confront us, we want to be able to retrieve explicit rules and use our higher-order thinking to creatively manipulate those rules to fit the new context (May, Rabinowitz and Mantyka, 2002). |
Lesson 4: Energy Transformations
Lesson 3 dealt with different types of energy; this lesson deals with how those types transform within a given system. Students accomplish this goal by completing the “Energy-Go-Round” activity, a lab that helps students explore energy transformations through the use of common household items and toys. The activity also gives students valuable practice describing energy changes using both words and diagrams (energy bar graphs and energy “chains”).
- Students will understand how energy “moves” through a system as that system changes
- Students will recognize energy transformations that occur in the “real” world
- Students will represent the energy transformations in a given system using both words and diagrams
Classroom Time Required
- Students can complete 3-4 stations of the Energy-Go-Round in about 50 minutes.
- The “Energy Station Description” handout contains a detailed list of the required materials
- “Energy-Go-Round” handout
- Discuss the concept of an “energy chain” diagram with students, providing a few examples (details given below).
- Give students the “Energy-Go-Round” handouts
- Split students into groups of either 2 or 4 and send each group to start work on a particular station. As the group finishes, they rotate to another station.
- During the lab, circulate amongst student groups and talk with them about the energy transformation at their station. Make sure they are accurately depicting the changes with both their words and their diagrams.
- The “Energy-Go-Round” handout includes a follow-up activity, which students need to complete outside of class. This will be due by the beginning of lesson 7.
- The first collaboration with the engineering teams occurs on lesson 5, so student teams need to plan accordingly. Specifically, they need to plan out how to most effectively communicate their designs to the engineering teams. While the engineering teams do have access to design sketches (posted on the blog for lesson 4), students need to think about other important information the engineers need to create models of the roller coaster.
- An energy chain is a simple diagram that tracks how energy evolves as a system changes. For a falling object, the chain would describe how gravitational potential energy (GPE) transforms into kinetic energy (KE): GPE → KE
- There are some situations in which the chain branches off into more than one energy type; if the falling object encountered air resistance, the GPE would split into KE and thermal energy (TE) associated with friction
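The falling-object chains described above can be put into numbers with a short sketch (the mass, height, and drag force below are assumed for illustration and do not come from the handout):

```python
# Energy chain for a falling object: GPE -> KE, or GPE -> KE + TE
# when air resistance is present. Values below are assumed for
# illustration, not taken from the lesson handout.

g = 9.8  # m/s^2

def energy_chain(mass_kg, height_m, drag_force_n=0.0):
    """Return (GPE released, KE gained, TE from friction) in joules."""
    gpe = mass_kg * g * height_m   # energy leaving the GPE "bar"
    te = drag_force_n * height_m   # energy diverted into thermal energy
    ke = gpe - te                  # the remainder becomes kinetic energy
    return gpe, ke, te

# Without air resistance the chain is simply GPE -> KE.
gpe, ke, te = energy_chain(2.0, 10.0)
print(f"GPE -> KE:      {gpe:.0f} J -> {ke:.0f} J")

# With air resistance the chain branches: GPE -> KE + TE.
gpe, ke, te = energy_chain(2.0, 10.0, drag_force_n=4.0)
print(f"GPE -> KE + TE: {gpe:.0f} J -> {ke:.0f} J + {te:.0f} J")
```

In the branched case the bar graph still balances: the KE and TE bars together equal the original GPE bar, which is a useful check for students' diagrams.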
- It’s usually a good idea to split students up in such a way that they occupy only 9 or 10 of the 12 stations. This leaves some space for student groups that work more quickly than others. Adjust group size accordingly. |
The center of our Milky Way galaxy is located about twenty-five thousand light years from earth, in the direction of the constellation of Sagittarius. It is invisible to us in optical light because of extensive amounts of intervening dust, but radiation at other wavelengths, including the infrared and radio, can penetrate the veiling material. Even though it is far away, the Galactic Center plays an important part in earth's story. The solar system orbits around the center once every few hundred million years, and astronomers think the origin, evolution, and perhaps future of the entire Milky Way (including the region where the solar system resides) are reflected and perhaps partly determined by the properties of the Galactic Center region. Of course astronomers are also anxious to test their basic understanding of the bizarre nature of black holes themselves by studying this one - the massive black hole that is by far the closest to us.
Surrounding this supermassive black hole, at a distance of about 6 light-years, is a clumpy circumnuclear ring of material called the Circumnuclear Disk (CND). Astronomers are puzzled by this odd structure, because such a ring-like object should quickly fragment and disappear. SAO astronomers Maria Montero-Castano and Paul Ho, together with former Harvard graduate student Robin Herrnstein, used the Smithsonian's Submillimeter Array (SMA) to obtain the first set of high-quality images of the CND in dense gas, as traced by the molecules hydrogen cyanide (HCN) and carbon monosulfide (CS).
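For scale, a minimal Keplerian sketch of the orbital speed at the CND's radius is possible, assuming a central black hole of roughly four million solar masses (a commonly cited figure that is an assumption here, not taken from this article):

```python
import math

# Keplerian orbital speed v = sqrt(G*M/r) for material in the CND.
# Assumed input: black hole mass ~4e6 solar masses (not stated in
# the article). The ~6 light-year ring radius is from the text.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
LY = 9.461e15        # light-year, m

m_bh = 4e6 * M_SUN   # assumed central mass
r = 6 * LY           # CND radius from the text

v = math.sqrt(G * m_bh / r)  # m/s
print(f"orbital speed ~ {v / 1000:.0f} km/s")
```

Under these assumptions the clumps circle the black hole at roughly 100 km/s, the characteristic speed scale for gas motions in this region.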
The astronomers find that the CND is more like a necklace: a string of clumps around the supermassive black hole, with each clump containing several thousand solar masses of gas. Furthermore, they find that the clumps are dense enough to hold together against typical disruptive forces. Their analysis of the gas motions in and around the ring suggests that material is falling into the CND, while gas from the CND is spiraling in towards the black hole at the center. Their results are significant because they imply that the CND is not a transient structure, and that it plays some long-lasting role in the story of our galaxy's center. |
Raptors: Birds of Prey
Raptors are cherished sights to many who are fortunate enough to behold them. Whether it’s the sighting of a Bald Eagle skillfully plucking a fish out of the water, a Peregrine Falcon in a high-speed chase, or an owl quietly surveying the landscape from an inconspicuous perch, the charismatic nature of raptors often commands the attention of all who gaze upon them. Even beginner naturalists seem able to detect a raptor amidst other bird company.
The term “raptor” broadly refers to birds of prey. While not a taxonomic bird grouping, raptors are characterized by hooked bills and sharp talons. Ospreys, eagles, hawks, falcons, owls, vultures, and kites are all common raptors in North America, but the traits of species within these groups vary widely. For example, some raptors feed exclusively on birds, while others rely upon carrion, or even snails.
Sentinels of Ecosystem Health
Why study raptors? Our answer at BRI is two-fold: to help species in need and to monitor our environment. As conservation biologists, we are representatives for the natural world. We study raptors to tell their stories as a means of aiding in their conservation. However, these birds have their own story to tell us about our environment.
Both their distribution and their living tissue (blood, feathers, eggs) can provide much information about the health of individual birds, as well as the health of the forests, rivers, and oceans that sustain all of us.
The absence or abundance of raptors in an ecosystem is meaningful. Because of these relationships, the near extirpation and recovery of several raptor species is tied to important U.S. environmental policies, such as the banning of DDT, and the Endangered Species Act.
BRI's Raptor Research
BRI focuses its research efforts on meeting the conservation needs of raptors, and using raptors as bioindicators to evaluate the health of individuals, populations, and ecosystems. Below, we have grouped our primary areas of research emphasis into three nonexclusive areas: (1) contaminants monitoring; (2) movement studies; and (3) surveys and population monitoring.
Contaminants Monitoring
Many raptors sit at the top of the food web, often feeding on fish and other birds. In systems polluted with contaminants, these compounds are often magnified up the food web to top predators. As a result, raptors are among the most well-established bioindicators, or “biosentinels.” Biosentinels such as the Bald Eagle, Osprey, and Peregrine Falcon are well known. Their fates and histories are tightly intertwined with some of the most important environmental policies in U.S. history. For example, the 1973 Endangered Species Act provided for the conservation of ecosystems upon which threatened and endangered species of fish, wildlife, and plants depend.
For decades, biologists have sampled birds to evaluate the potential for reproductive or behavioral impacts due to contaminant exposure. Bird blood, feathers, and eggs provide direct insights about the short- and long-term exposure to contaminants through diet. Sampling broadly throughout the landscape helps biologists identify “hotspots” of contaminant exposure, and sampling annually helps us determine if contaminant levels are changing over time. Such information has proven pivotal in guiding policy decisions to regulate pollutants.
Movement Studies
Efforts to study the movements of raptors help us learn critical information about their behavior and ecology. How long does a raptor remain in its natal area before it disperses, and where might it eventually go to breed? Do raptor movement patterns place these birds at a high collision risk with wind turbines or other structures? The answers to such questions are more than just interesting; they are needed to make responsible conservation and management decisions that affect both wildlife and humans.
How do we go about gathering information on raptor movements? Two approaches we emphasize are banding and tracking using telemetry.
Marking birds with unique leg bands is one of the most long-standing methods of studying birds. Traditional bands help us identify the banding origin of birds that are recovered or recaptured elsewhere.
Colored bands used on many raptors display unique etched characters which are readable in the field, enabling identification of birds from a distance. As a result, color bands are regularly noted by photographers and nature enthusiasts in addition to researchers. BRI’s raptor program continues to band hundreds of breeding and migrant raptors as part of our ongoing research and monitoring efforts.
With rapid advancements in tracking technology, we are able to gain detailed location information on birds that is helping us fill in major gaps in our understanding of their ecology. For example, we can answer questions such as: How far offshore do migrant Peregrine Falcons fly? How important are anadromous fish runs to Maine’s recently fledged Bald Eagle population?
Detailed data on raptor movements can help inform decision makers about where to place wind power facilities, or provide land managers important information about habitat use.
Scientists use two primary types of transmitters to track individual bird movements: traditional (VHF) radio transmitters and satellite transmitters. Birds fitted with traditional radio transmitters are typically tracked within a relatively close range (several miles) by plane, vehicle, or on foot using a receiver. Birds fitted with satellite transmitters can be tracked at a global scale because they relay location information via satellites. Developing tracking technology relays GPS locations using the cellular phone network.
To date, BRI researchers and our collaborators have tracked a variety of raptors including Bald Eagles, Peregrine Falcons, Ospreys, and Great Horned Owls. These individuals continue to provide remarkable information that is valuable in a wide spectrum of research, management, conservation, and legislative decisions.
Surveys and Population Monitoring
Conservation biologists are continually challenged to evaluate the status of bird populations. Importantly, we try to detect declines as they occur. We prioritize collecting information needed to detect and measure changes in the stability of raptor populations. In order to achieve this, we survey raptor populations to document the number of individuals, breeding pairs, nonbreeding pairs, nests, or young produced in an area. Birds can be detected by direct observation (counting), sound (responses to playback calls), or by capturing and banding them.
Surveys may be conducted on foot, by vehicle, boat, or small aircraft. Surveys may require covering expansive areas (such as for breeding raptors that roam over large territories and nest in low densities), or from one location (such as during migration when southbound migrants move through an area and are counted or captured). BRI’s raptor program aims to prioritize monitoring the status of a variety of species as a means of aiding in their conservation. |
The discovery of so many exoplanets in recent years has raised many new questions, forcing us to reexamine some of our ideas. Scientists had extrapolated models of stellar system evolution from our own Solar System, assuming that others look very similar to our own. But extrapolation can only get us so far. Scientists never expected to find so many “hot Jupiters”—gas giants larger than Jupiter and orbiting very close to their star.
We’re also having a hard time understanding the inner workings of exoplanets and stars with much greater mass than Earth. Scientists have managed to test some materials under extreme pressures and found that our conventional ideas about a material’s behavior may not apply. Certain exotic quantum mechanical models could apply in such extreme cases, but until recently, scientists have not been able to test those models’ predictions.
The difficulty, of course, is that actually visiting the cores of gas giants to test our understandings is wildly impractical. The next best thing, then, is to recreate these massive pressures on Earth and study their effects on materials. As impossible a task as it may seem, scientists at the National Ignition Facility (NIF) used its enormous lasers to do exactly that.
The NIF machine is a sight to behold, so exotically futuristic that it recently doubled as the warp core of the Starship Enterprise in last year’s “Star Trek Into Darkness.” It's a massive aluminum sphere, 10 meters in diameter, that houses the world’s largest laser. When not acting as a “radioactive catastrophe waiting to happen,” as Scotty put it in the film, the NIF is used for nuclear fusion experiments. Luckily, the machine’s resources are also occasionally allocated to fundamental physics research.
Taking advantage of that time, a group of scientists devised an experiment to learn about conditions inside massive exoplanets. To simulate the dense material at the cores of such planets, they used a synthetic diamond. The goal of the experiment was to put massive amounts of pressure on the diamond and to observe how much the diamond compressed.
To do so, the researchers fired 192 laser beams at the crystal over the unimaginably short period of two nanoseconds. This firing created pressures similar to those found at the core of Saturn, approximately 14 times greater than the pressure at the center of the Earth.
This was the first time this method, called dynamic ramped compression, was used successfully. Other approaches are incapable of reaching the incredible pressures seen in this experiment, so the success of ramped compression means we might be able to test even higher pressures. However, ramped compression requires extremely precise control—making NIF perfect for the job, as its laser is capable of incredible precision.
The experiment achieved an almost fourfold compression of the diamond, in the process gaining data that we can now compare to theory. Since diamond is made of carbon, the team’s data can be used to extrapolate to the cores of planets composed mostly or entirely of carbon compounds.
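Putting rough numbers on those figures (the ambient diamond density and Earth's central pressure below are assumed textbook values, not numbers from this article):

```python
# Back-of-the-envelope numbers for the NIF diamond shot.
# Assumed reference values (not from the article):
ambient_density = 3.5        # g/cm^3, diamond at room conditions
earth_core_pressure = 360e9  # Pa, approximate pressure at Earth's center

compression = 3.7            # "almost fourfold" compression
peak_density = ambient_density * compression
peak_pressure = 14 * earth_core_pressure  # "14 times greater"

print(f"peak density  ~ {peak_density:.0f} g/cm^3")
print(f"peak pressure ~ {peak_pressure / 1e12:.1f} TPa")
```

On these assumptions the compressed diamond ends up denser than lead, at a pressure on the order of 5 trillion pascals.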
Because titanic pressures such as those seen in the team’s experiment are normal in some planets, scientists can use the new data to derive a better mathematical relationship between a carbon planet’s mass and its size. In other words, scientists should be able to more accurately calculate the sizes of carbon-rich exoplanets.
We live in an exciting era—exoplanets are being discovered at an incredible rate (with more found in 2013 than in every prior year put together), and new techniques are enabling scientists to study them with greater precision. Studies like this one, which push the boundaries of what can be tested experimentally on Earth, will be crucial to that endeavor. |
How climate is recorded
Many subtle recorders of climate surround us. In the oceans, tiny marine organisms die and are incorporated into sediments that blanket the ocean floor. The chemical composition of their shells reflects the composition and temperature of the seawater in which they lived. The chemical makeup of coral skeletons can be used to reconstruct conditions in the tropics. Tree rings and lake bed sediments record seasonal variations on the continents, while ice cores record high-latitude climate changes.
Topic: Earth Science
Subtopic: Climate/Climate Change
Keywords: Climatic changes--Observations, Climatology, Corals, Ice cores, seawater, Sedimentation and deposition, Shells, Tree rings
For the last two million years, the climate of our planet has been mostly cold, with great ice caps covering much of Europe, Asia, and North America.
Tree rings are sensitive indicators of annual climate.
This marine sediment core from the tropics shows fine laminations of alternating light and dark layers. |
Atherosclerosis is a common cardiovascular condition that affects individuals who are middle-aged and older. Atherosclerosis is a hardening and narrowing of arteries due to a build-up of cholesterols, fats, and other substances on the inside of arterial walls. Atherosclerosis can reduce blood flow to the heart and brain and cause a number of different serious health complications including heart attacks, strokes, and aneurysms. Luckily, there are many simple lifestyle tips individuals can follow to prevent atherosclerosis.
Get Frequent Medical Check-Ups
Atherosclerosis builds up in the arteries over long periods of time. Many individuals who experience complications due to atherosclerosis could have prevented them through early identification of the disease. Simple non-invasive screenings, involving measuring blood pressure and listening for abnormal sounds in the heartbeat, can identify atherosclerosis before serious issues present themselves. Individuals forty years old or older should receive a physical check-up from a doctor at least once a year to identify any symptoms of atherosclerosis. |
What is the Goldman Fristoe Articulation Test?
The Goldman Fristoe Articulation Test is widely used by speech pathologists throughout the United States as a tool to assess children’s speech development. Developed by Dr. Ronald Goldman and Dr. Macalyne Fristoe, the test yields information about a child’s progress, ability, and other measures of speech production. It can be administered to infants and youth of all ages (up to age 21), with the ability to compare the child’s individual status against national standardized norms. Goldman and Fristoe both have significant experience in speech pathology and created this test out of their own need to help their clients. When researching this test, you may find that there is now a second edition available. This edition includes tables of standardized norms, as well as updated pictures that are free of cultural bias.
Sampling a child’s spontaneous and imitative sound production helps to build a clear picture of their strengths and weaknesses. This test can be easily administered in less than 15 minutes using picture cards and consonant sounds. This short test can create a clear trusted picture for speech language pathologists.
Gender-Based Speech Therapy
Different stages and areas of speech can be tested in accordance with the child’s age and gender. Many relationship studies have suggested that males and females communicate differently, and that this starts the moment we begin to learn language. Women learn language abstractly, while men learn based on sensory cues.
The cliché about men’s and women’s styles of giving directions illustrates this best. Women give directions with abstract and additional cues: “go one block north and turn right after you pass the bakery.” Men give directions in a simpler manner, without abstract cues or landmarks: “go north on Willow and turn right on Ash.”
Because of the differences in male and female language processing, different instruction will benefit one sex but not the other. Boys may benefit from lecture and reading, while girls are more apt to learn with more abstract, information-laden approaches. The Goldman Fristoe test’s standardized results take these gender differences into account. It is vital to know the differences in learning so that any therapy given is most successful for the child.
Chicago Speech Therapy Can Help
Perhaps your child is showing difficulty mimicking the sounds you make, or producing ones of their own. If you have any questions about your child’s language progress, have a speech pathologist conduct this short test. It can provide a great deal of information about your child’s language development and help you decide whether or not to enter into therapy.
If you are concerned with your child’s speech or language development, please contact Chicago Speech Therapy by calling 312-399-0370 or by clicking on the “Contact Karen” button on the upper right section of this page.
Karen George is a Chicago speech-language pathologist. The practice she founded, Chicago Speech Therapy, LLC, provides in-home pediatric speech therapy in Chicago and surrounding suburbs. Karen and her team of Chicago speech therapists have a reputation for ultra-effective speech therapy and work with a variety of speech disorders. Karen is the author of several books such as A Parent’s Guide to Speech and Language Milestones, A Parent’s Guide to Articulation, A Parent’s Guide to Speech Delay, A Parent’s Guide to Stuttering Therapy, and A Parent’s Guide to Pediatric Feeding Therapy. She is often asked to speak and has addressed audiences at top Children’s Hospitals and Northwestern University. Karen is highly referred by many Chicago-area Pediatricians and elite schools. |
Cathode rays, discovered by Sir William Crookes, are streams of electrons. They can be produced using a discharge tube containing gas at a low pressure, of the order of 10⁻² mm of Hg. At this pressure the gas molecules ionize, and the emitted electrons travel towards the positive potential of the anode. The positive ions hit the cathode and cause emission of electrons from the cathode; these electrons also move towards the anode. Thus the cathode rays in the discharge tube consist of the electrons produced by ionization of the gas together with those emitted by the cathode under bombardment by positive ions.
Properties of Cathode Rays
- Cathode rays travel in straight lines (cast shadows of objects placed in their path)
- Cathode rays exert mechanical force on the objects they strike.
- Cathode rays produce fluorescence.
- When cathode rays strike a solid object, especially a metal of high atomic weight and high melting point, X-rays are emitted from the object.
- Cathode rays ionize the gases through which they are passed.
- Cathode rays can penetrate through thin foils of metal.
- Cathode rays are found to have velocities ranging from 1/30th to 1/10th of the velocity of light.
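As a quick worked example of what that velocity range implies, the accelerating potential can be estimated from the nonrelativistic energy balance eV = ½mv² (only approximate near c/10; the constants below are standard values, not figures from this page):

```python
# Accelerating voltage implied by cathode-ray speeds of c/30 to c/10,
# using the nonrelativistic energy balance e*V = (1/2)*m*v^2.

c = 3.0e8        # speed of light, m/s
m_e = 9.11e-31   # electron rest mass, kg
e = 1.6e-19      # elementary charge, C

for fraction in (1 / 30, 1 / 10):
    v = fraction * c
    voltage = 0.5 * m_e * v**2 / e   # implied accelerating potential, volts
    print(f"v = {v:.1e} m/s  ->  accelerating potential ~ {voltage:.0f} V")
```

The estimate shows that discharge-tube potentials from a few hundred volts up to a few kilovolts are enough to drive electrons through this velocity range.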
Email Based Homework Assignment Help in Cathode Rays
Transtutors is the best place to get answers to all your doubts regarding cathode rays and properties of cathode rays. You can submit your school, college or university level homework or assignment to us and we will make sure that you get the answers you need which are timely and also cost effective. Our tutors are available round the clock to help you out in any way with quantum physics.
Live Online Tutor Help for Cathode Rays
Transtutors has a vast panel of experienced physics tutors who specialize in cathode rays and can explain the different concepts to you effectively. You can also interact directly with our physics tutors for a one to one session and get answers to all your problems in your school, college or university level quantum physics. Our tutors will make sure that you achieve the highest grades for your physics assignments. |
Community sociology is the study of the people in a particular community. It is usually seen as a very tricky subject because the definition of a community is so broad. The term may refer to a certain town, or a certain age group in a specific town. It may also include people in an entire city or state. Most kinds of community sociology focus on a single community and study its inner workings. Sociologists are usually interested in how people in communities act toward one another, treat strangers, and how they interact during periods of crisis and calm.
Those who study community sociology often narrow their focus to a single, very small community of people. This means that such a social scientist might study a community without overlapping with the research area of another sociologist. For instance, two scientists studying the same town might focus on two very different communities. One might concentrate on the teenagers in the town, while another studies the women there. Each sociologist might also choose a certain age group or niche within these communities on which to focus.
Other types of community sociology might examine much larger groups. For instance, one expert could focus on the community of lesbian women in the United States. This is an example of a community that is connected by a common characteristic rather than a common location. Communities like this are part of what can make community sociology so in-depth and complicated. Plus, in this example, a sociologist could easily split the U.S. lesbian community into smaller groups like African-American lesbians, Christian lesbians, and Muslim lesbians. Often, each of these sub-communities has different values, wants, needs, and causes.
When conducting community sociology research, the sociologist usually begins by examining how the members of the community behave. For example, a group of teenagers in a small, rural town might be disrespectful to those over 18 years old, enjoy staying out past the town curfew, and have poor grades. The sociologist might then look at why this community behaves in this way. In this example, the teens’ behavior might stem from a lack of the proper stimulation. In other words, they might be bored in school and feel unable to express themselves in a healthy way.
The above conclusions about the teens’ behavior illustrate why objectivity is important in community sociology. Every community has a motive for its actions and behaviors, even if those behaviors are damaging to others. The reasons likely make a lot of sense to the community itself, even if those outside it don’t understand. Community sociology can be applied by helping people communicate with each other and live more peacefully.
The following chart shows a timeline comparing Athens and Rome during the period 800-450 B.C.
It’s obvious from the chart that the Greeks were a couple of hundred years ahead of the Romans in developing their culture. In 625 B.C., when the Romans were living in mud huts and working at draining the swamp that would become the Forum, the Athenians were already 125 years removed from establishing their colonies in Italy and well on the way toward defining a unique and advanced culture. Architectural forms were well developed, large sculptures were being produced, and pottery had already passed through its orientalizing period. The polis had become a mature political system as it broke new ground in human rights and political participation. At the same time, the Greek army had evolved advanced battle tactics, including use of the phalanx.
So why the disparity between Athens and Rome?
There are both environmental and cultural reasons for this. Rome and the Italian peninsula had experiences similar to those of Europe in the Middle Ages with respect to the development of their political systems. This similarity is based on two factors: personal leadership and a collective unity and equality of tribesmen. In other words, their political systems grew out of leadership based on personal charisma which encompassed regal, military, and political elements. The society was flat, with a leader and his associates on top and everyone else as equals below.
The early Greek experience was different because it was influenced by unique factors: tribal kings were weak financially, the Aegean was isolated geographically, and Greek life was simpler than Roman life.
Since the early kings were not wealthy, their attempts at power were overcome by military leaders who excelled at forming superior tactics. Without money kings could not buy power. The isolation of the Aegean and its geography was also a factor because it prevented foreign threats and the influence foreign invaders could exert on evolving Greek political systems.
Perhaps the most intriguing aspect was the simplicity of Greek life. The Greeks looked at the world through an intellectual lens: embracing science, mathematics, philosophy, and the arts, rather than pure wealth building. Greek philosophy dictated that possessions were not the route to happiness in life and that logic demanded equality among free people. The Greeks believed that all possess inherent rights to justice, participation in government, and equality under law.
Ultimately Rome would come to dominate Greece because its unity, and the power derived from it, would overcome the Greeks’ independence. In this case, Greek philosophy was the liability because it kept the Greeks from creating a powerful empire which could compete against Rome. |
Though no humans call Antarctica their permanent home, it’s been known for years that our pollution still makes its way down to the planet’s southernmost reaches.
Scientists have known that mercury and other toxic substances can travel through the atmosphere to Antarctic sea ice, but a new study out in Nature Microbiology has found that the mercury can actually turn into an even more toxic form, due to a tiny resident of the frozen world.
The international team of scientists found that at least one type of bacterium, Nitrospina, is converting mercury into methylmercury, a form highly toxic to humans that can cause developmental and physical problems in children.
In the study, the scientists collected samples from Antarctic sea ice for two months, and then analyzed both the types of mercury and the DNA and proteins of microorganisms in the lab.
Methylmercury does not just hang tight on the sea ice. It gets consumed by creatures and concentrated in fatty deposits of fish and birds through a process called bioaccumulation. This means that like in other parts of the world, the fish that we eat could have high levels of mercury.
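A toy calculation illustrates how bioaccumulation concentrates methylmercury up the food web (the baseline concentration, step factor, and food-web levels below are invented for illustration and are not from the study):

```python
# Toy bioaccumulation model: each step up the food web concentrates
# methylmercury by an assumed factor (illustrative numbers only).

base_conc = 1e-6      # mg/kg in sea-ice brine, hypothetical baseline
step_factor = 10.0    # assumed per-level concentration factor
food_web = ["algae", "krill", "fish", "seabird"]

conc = base_conc
for level in food_web:
    conc *= step_factor
    print(f"{level:>7}: {conc:.0e} mg/kg")
```

Even with these invented numbers, four ten-fold steps raise the concentration by four orders of magnitude, which is why top predators (and the fish on our plates) carry far more mercury than the surrounding water.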
As fish stocks in the Southern Ocean are already being harvested and put on your plate, this could be something to consider before ordering the Chilean Sea Bass.
“We need to understand more about marine mercury pollution, particularly in a warming climate and when depleted fish stocks mean more seafood companies are looking south,” said John Moreau, lead author on the study and geomicrobiologist at the University of Melbourne, in a press release.
The scientists recognize that mercury pollution can come from both anthropogenic sources, like smelting gold and burning fossil fuels, and natural sources, like volcanic eruptions. But they say that these findings show a continued need to limit mercury pollution, as well as move to more responsible Antarctic fisheries. |
Information for Health Professionals and Parents
- A 2017 CDC study found only 7.1% of U.S. adolescents met fruit intake recommendations and only 2% met vegetable intake recommendations.
- Similarly, a separate study found only about 25% of U.S. adolescents get the recommended amount of 2.5-3 daily servings of dairy.
- Recent studies conducted by the American Academy of Pediatrics (AAP) examined insufficient vitamin D intake among children and adolescents, finding it associated with cardiovascular risk factors including hypertension (high blood pressure), low HDL cholesterol, hyperglycemia (high blood sugar), and metabolic syndrome.
- Studies show low-fat and whole vitamin D-fortified milk (plain or flavored) can help bridge the gap.
Health Benefits of Flavored Milk
Research shows that when any type of milk is on the menu, students:
- Consume more milk, which has more muscle-building protein than juices and sodas
- Drink fewer sodas and soft drinks compared to those who do not drink flavored or unflavored milk
- Meet their calcium needs without consuming more total fat and calories
- Do not have higher total fat or calorie intakes, on average, than unflavored milk drinkers
- Have a body mass index (BMI) lower than or comparable to the BMIs of non-milk drinkers
- Consume more milk than exclusively unflavored milk drinkers
- In general, consume more calcium, phosphorus, magnesium, potassium, and vitamin A than non-milk drinkers
It’s important to remember that 9 out of 10 girls and 7 out of 10 boys aren’t getting the calcium they need for strong bones and healthy bodies. Limiting access to flavored milk may further reduce students’ intake of calcium and other essential nutrients.
The vitamins and minerals in milk help build and repair muscle, making it a great post-exercise beverage and a healthy alternative to sports drinks
The Role of Flavored Milk in Child Nutrition Programs
The Dietary Guidelines for Americans, numerous health organizations, and the latest science support the continued role of milk as a core component of child nutrition programs. Milk is a nutrient-rich beverage that’s good for kids, supported by the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) and the Child and Adult Care Food Program (CACFP).
Offering low-fat or fat-free choices of milk is an excellent way to increase milk consumption among children and make their diets more nutritious. It can also boost overall participation in school meal programs.
Concerns about calories, fat, and sugar as components of individual foods rather than the overall diet have put nutrient-rich milk at risk of not being offered to children. Limiting access to flavored milk, because of its added sugar, may have the undesirable effect of further reducing intakes of essential nutrients provided by milk.
Whether flavored or unflavored, milk provides the same thirteen essential nutrients (calcium, potassium, phosphorus, protein, vitamins A, D, and B12, riboflavin, niacin, iodine, pantothenic acid, selenium, and zinc) and can help kids meet their calcium requirements. Kids who consume milk meet their calcium requirements without consuming significantly more added sugar compared to those who do not consume milk.
Additional Resources on Flavored Milk from the Dairy Alliance
Without a doubt, kids (and adults) love the taste of flavored milk. Here at the Dairy Alliance, we are strong milk advocates because of its delicious creamy taste and all of the wonderful things it can do for our health. Look below for more resources on flavored milk. It’s time to step your milk game up a notch. Flavored milk is where the fun and flavor are at. Cheers to that.
Five Reasons to Raise Your Hand for Flavored Milk
In the early 20th century, Danish biologist Johannes Schmidt solved a puzzle that had confounded European fishermen for generations. Freshwater eels—popular for centuries on menus across northern Europe—were abundant in rivers and creeks, but only as adults, never as babies. So where were they coming from?
In 1922, after nearly two decades of research, Schmidt published the answer: the Sargasso Sea, the center of a massive, swirling gyre in the North Atlantic Ocean. Now regarded as some of the world’s most impressive animal migrators, European eels (Anguilla anguilla) journey westward across the Atlantic to spawning sites in the Sargasso; their eggs hatch into larvae that are carried back across the ocean by the Gulf Stream, arriving two or three years later to...
“Since Johannes Schmidt identified this spawning area in the Sargasso Sea, people have been wondering about that great journey and trying to figure out how to follow the eels,” says Righton, whose work on epic marine migrations includes appropriately titled projects such as CODYSSEY and EELIAD. “But the technology hasn’t been available. . . . They just slip away into the darkness, really, in autumn, and no one knows what happens to them.”
This information gap is of particular concern to conservationists. European eel recruitment (the number of babies added to a population each year) is thought to have declined more than 90 percent in the last 45 years. In 2008, the International Union for Conservation of Nature classified the fish as “critically endangered,” citing a concerning lack of data on their life histories. “We have a black hole, or a ‘blue hole,’ out in the ocean in terms of adult migratory behavior,” says Kim Aarestrup, a senior researcher at the National Institute of Aquatic Resources, Denmark, and one of Righton’s collaborators. “It’s really hard to do population ecology and management if you have a hole in the life cycle.”
In the mid-2000s, with these concerns in mind, Righton, Aarestrup, and other European colleagues set out to fill in the blanks. Taking advantage of recent improvements in animal telemetry, they tagged more than 700 large (therefore female) European eels—no easy task, Aarestrup admits, since “they’re pretty slimy.” The team planned to track their slippery quarry across the Atlantic. As eels swim too deep—usually at least 200 meters down—to be tracked using GPS, tags logged environmental data to provide indirect clues about the eels’ whereabouts. When a tag’s battery died after several months, it detached and floated to the surface, where, depending on the type of tag, it either relayed data via satellite or drifted back to shore for collection.
As the data rolled in, the researchers realized there was more to European eel migration than previously thought. Just a fraction of tagged eels made it to the ocean: only 87 tags collected data beyond the coastline. Many of those 87 were soon separated from their eels by predation, and none made it beyond the Azores—a result that highlights the peril inherent to the transoceanic journey, Righton notes.
The partial trajectories recorded by the tagged eels that did make it to the ocean revealed further surprises. For starters, rather than take a direct route, eels apparently meandered their way to the Sargasso. “They’re taking a much longer route than ‘as the eel swims,’” says Righton, meaning that many models of eel migration are likely inaccurate. What’s more, the team found, eels don’t seem to swim with nearly enough urgency to reach the Sargasso in time for spawning early the following year.
Using eel larvae catch data from the spawning region, the researchers had estimated larval growth rates and extrapolated backwards in time to predict hatching times. From those estimates, they’d calculated that peak spawning must occur as early as February. But comparing this timeline with tag data revealed that eels leaving Europe in the autumn would have trouble reaching their destination fast enough at the pace they were going. “An eel has to go, for example, from Denmark at 55 kilometers a day to make it,” says Aarestrup. Tags instead showed maximum speeds of around 47 kilometers per day for the largest—and presumably strongest swimming—fish in the population.
The findings point to a different story from the one told until now, the researchers argue—at least some of these eels weren’t destined for the first spawning season at all, but the second (Sci Adv, 2:e1501694, 2016). “We can say that it’s highly unlikely that a significant number of the eels leaving Europe will actually make it down to the Sargasso Sea ready for the coming spawning,” explains Aarestrup. “Which then leads us to the conclusion that they’re probably part of the next spawning, which happens 12 months later.”
The claim is certainly a surprising one, says Michael Miller of Nihon University in Japan. “They propose the hypothesis that maybe there’s a mixed strategy—some go quickly and some go slowly,” he says. “It’s a completely new idea.” However, he emphasizes that without full data sets, there may be other explanations that can’t yet be ruled out. As Aarestrup and his coauthors acknowledge, larval growth rates might be underestimated using catch data, due in part to the biasing size-selectivity of fishing nets, meaning that peak spawning could actually occur later in the year. Alternatively, Miller adds, “eels that leave too late might just not succeed in spawning.”
Getting to the bottom of the mystery will probably require better tags that last longer and can be attached to a wider range of eels, not just the largest, notes Righton. “We’re waiting for a revolution in technology,” he says.
In the meantime, the study raises important questions about the ecology of these “amazing fish,” Miller notes. “The hypothesis is most interesting, because it could be true, or it could be not true,” he says. “Either way, they’ve proposed the hypothesis and it’s going to stimulate a lot of interesting research.”
Interpret information presented visually, orally, or quantitatively (e.g., in charts, graphs, diagrams, time lines, animations, or interactive elements on Web pages) and explain how the information contributes to an understanding of the text in which it appears.
Earth Watch: Drowning in Plastic - Comprehension Worksheet
Practice reading comprehension skills and learn about microplastic pollution in our oceans with a reading comprehension activity.
Text Features Flashcards
A set of 17 flashcards to review text features.
Valentines Around the World – Comprehension Worksheet
Explore five global Valentine’s Day traditions with this informational text and comprehension questions activity.
Nonfiction Text Features: Match It Up!
A set of 36 match-up cards to practice identifying nonfiction text features.
Concrete Poem Poster
A poster providing a definition and example of a concrete poem (or shape poem).
Elements of a Biography Poster
Use this biographical writing poster with annotations to help your students understand how to write an engaging biography.
Keynesian 45° Diagram
In this diagram, we can see that the level of aggregate demand is lower than the level of output at full employment; the economy is therefore operating below its potential, and a decline in growth will occur. For example, say the level of current aggregate demand is $500, and aggregate demand at full employment is $600. This means that the value of the deflationary gap is $100; that is, people are spending $100 less than they can technically afford to. This situation may also be shown in the real output diagram below.
The value of the deflationary gap is again $100. This is not good news for countries, as it means that the growth of their output will decrease, and the associated standards of living may decline. In order to deal with this problem, governments may attempt to implement some sort of fiscal policy, which will attempt to increase aggregate demand, or GNP, back to the full-employment level. A fiscal policy is a purposeful budget action designed to stimulate an economy and bring it out of a deflationary period. Fiscal policy may consist of government injections of money into an economy, or decreases in taxes that raise real income. An example of the use of fiscal policy to recover from deflation is the massive public works projects that Japan funded in the mid-1990s, which stimulated the economy and helped to move it out of a depression. One would think that this government spending would be reflected by an increase in taxes to fund such works; however, as in the case of Japan, the government may go into budget deficit to fund works, or may use money left over from budgeting, called a budget surplus.
The fact that a massive increase in government spending stimulates the economy is due to the multiplier effect. In essence, the multiplier shows how increases in planned injections into the economy lead to larger increases in output and income, and therefore GNP and AD rise. This is because an initial money injection sets off spending by other people. For example, a firm spends $200 million on a new warehouse. The firm would pay contractors to build the warehouse. This $200m paid to contractors would be an increase in aggregate demand. The contractors may then use the $200m to pay the workers building the warehouse, who may spend $20m on food. Manufacturers of food would then pay their workers, who would again spend on other goods. John Maynard Keynes argued this point in his 1936 book, stating that every job directly created by increased spending by firms would indirectly create other jobs. Government spending, therefore, sets off rounds of spending and increases in employment of greater value than the initial injection of funds. Decreases in taxes have the same effect, as taxpayers now have more income which they may spend on goods and services, which in turn pays employees, and national income (NY) increases due to the multiplier effect.
When considering how best to close a deflationary gap, the government must look at the size of the multiplier. That is, how large will the increase in National Income be in proportion to the initial spending? The size of the multiplier is determined by the equation
Size of multiplier = 1 / (1 − MPC)
MPC is the marginal propensity to consume. This refers to the proportion of an increase in autonomous (outside the economy) income that is spent on domestic goods and services rather than saved. When autonomous spending increases aggregate expenditure, the increase will raise household income. Some of this income may leak out in the form of savings, taxes, or spending on imports, but the rest will be spent on domestic goods and services. The counterpart of the marginal propensity to consume is the marginal propensity to save (MPS), which refers to the proportion of an autonomous income increase that will be saved as opposed to spent.
For example, if the MPC is 0.5 the multiplier will be 2, which means that an increase in government spending of $100 million will increase NY by $200 million. The multiplier effect and its use by governments to stimulate AD can be shown in the diagram below.
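The multiplier arithmetic above can be sketched in a few lines of code. This is a hypothetical illustration (the function names are my own): it computes the multiplier from the formula 1 / (1 − MPC), and also sums the successive rounds of re-spending to show that they converge to the same total.

```python
def multiplier(mpc: float) -> float:
    """Size of the multiplier for a given marginal propensity to consume."""
    return 1 / (1 - mpc)

def total_income_change(injection: float, mpc: float, rounds: int = 1000) -> float:
    """Sum the successive rounds of induced spending set off by an injection."""
    total = 0.0
    spending = injection
    for _ in range(rounds):
        total += spending
        spending *= mpc  # in each round, only the MPC share is re-spent
    return total

# An MPC of 0.5 gives a multiplier of 2: a $100m injection raises NY by $200m.
print(multiplier(0.5))                # 2.0
print(total_income_change(100, 0.5))  # converges to 200.0
```

Summing the rounds makes the geometric-series logic of the multiplier concrete: each round of spending is a fixed fraction of the last, so the total is finite even though the rounds never strictly end.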
Here's what a star student thought of this essay
Quality of writing
The structure here is fine, having a clear introduction. I would've liked to have seen some attempt at a conclusion, weaving the argument together and posing a justified judgement. Although the question doesn't prompt for an evaluative response, a conclusion is often a good place to make a passing comment. Technical terms are used fluently throughout, and the style allows for a convincing argument. It was a shame that some of the points don't build upon the analysis to answer the question! Spelling, punctuation and grammar are fine.
Level of analysis
The analysis here is sound. There is a strong awareness of how aggregate demand and supply are at equilibrium to give the level of output, and why this is below the level of full employment. Building a good foundation of knowledge early on allows you to move swiftly into analysis. I particularly liked the historic example of Japan investing in infrastructure, as this shows the ability to apply theory to real life situations. It was great to see a perceptive comment that an increase in government spending isn't always matched by increases in taxation, as a government may choose to run a budget deficit. The multiplier is always a tricky concept, and this essay explores it well. There is a clear definition, and some numerical analysis. I would've liked to have seen some discussion of the significance of the multiplier, showing how a small increase in government spending can help reduce the deflationary gap. It seems as if the argument moves away from the question near the end - it is vital to stay on focus.
Response to question
This essay responds to the question well, but only after the first paragraph. The introduction gives an unnecessary definition and summary of inflation, which is irrelevant here. It is a common mistake to think of a question regarding the deflationary gap as a question about inflation and deflation, whereas it is looking for a discussion of how to move output to full employment levels. Past the introduction, this essay does engage well with the task, looking at how the multiplier and government spending can influence output. Diagrams have not transferred over, but judging by the comments, they seem to be relevant. In a question such as this, it is vital to include diagrams, as this is the easiest way to define a deflationary gap and look at policies to reduce it.
Canada is the second largest country in the world, even if large masses of it are covered in ice. We all know that the cultivated or potentially arable parts of this vast territory were fought over following discovery by the French, the English, and other European peoples who had braved the Atlantic Ocean to start a new life in the New World. But first the world needed to know about the legal and political geography of Canada.
Demands were made before 1791 for a legislative assembly, and an Act was passed in that year dividing the province of Quebec into two parts along the Ottawa river. West of this was Ontario, then called ‘Upper Canada’, mostly British, though many settlers in the Thirteen Colonies on the eastern seaboard loyal to Britain had moved there before, during and after the American War of Independence. There were also the original inhabitants – the Iroquois. East of the river was ‘Lower Canada’. Quebec was the centre of the now less powerful French Empire in Canada, where French was universally spoken and the Catholic Church was powerful. Despite this, both were governed by a British-appointed Governor. A legislative parliament was established in each province, but power had to be shared with an appointed, not elected, Upper House. In both provinces land had to be set aside for the Protestant clergy – an idea which managed greatly to anger the French Canadians.
Then, after an uneasy forty-nine years, came the Act of Union in 1840, which united ‘Upper’ (West) and ‘Lower’ (East) Canada into one single province with one legislature, both parts having equal representation. It is clear that this was done to prevent the French Canadians, who one assumes were the majority, from dominating the assembly. Immigration meanwhile was gradually making English-speaking Canadians into a majority, with the result that they developed a sense of grievance!
At last in 1867 the Parliament in London passed the British North America Act (funnily not mentioning the word Canada at all) and thus the Dominion of Canada was formed in the middle of an economic depression. Repeal of the Corn Laws (1846) in Britain had ended Canada’s protected market, even as the new Erie Canal siphoned off trade to New York State which had previously gone through Montreal. But in addition there was a population problem: the English-speaking people of Canada West (originally ‘Upper Canada’) had significantly outgrown the French speakers of Canada East (originally ‘Lower Canada’). The French resisted the English-speakers’ desire to enjoy a larger say in political affairs.
There were even ‘reformers’ requesting the United States to ‘annexe’ Canada, but this was resisted as well by the French Canadians because they thought that Canada – if American – would allow them even less say! It was an expedition to London made by Macdonald, the Conservative leader, with the leader of the Reformers, George Brown, and the French Canadians’ leader Cartier which finally persuaded the British Parliament to pass the British North America Act. It would unite the provinces of Canada West (Ontario) and Canada East (Quebec) with the colonies of New Brunswick and Nova Scotia ‘to form one dominion under the name of Canada’. This Act formed the basis of the Canadian Constitution until 1982. Before then, the Pacific coast province of British Columbia joined the Confederation in 1871 and Prince Edward Island in 1873; Newfoundland, however, did not become a member until 1949.
Cells and Cell Structures
These diagnostic questions and response activities (contained in the zip file) support students in being able to:
Use a light microscope to make and record observations of cells from a range of tissues and organisms.
Apply the idea that organisms are made up of one or more cells.
Identify subcellular structures and their functions.
Use ideas about cell structures and their functions to explain why a cell is a living thing.
Describe the features and the limitations of the animal and plant cell models.
The resources include details of common misconceptions and a summary of the research upon which the resources are based.
Download the zip file for all the questions and activities.
More resources like this can be found on the BEST webpage: Best Evidence in Science Teaching
Show health and safety information
Please be aware that resources have been published on the website in the form that they were originally supplied. This means that procedures reflect general practice and standards applicable at the time resources were produced and cannot be assumed to be acceptable today. Website users are fully responsible for ensuring that any activity, including practical work, which they carry out is in accordance with current regulations related to health and safety and that an appropriate risk assessment has been carried out.
The teacher’s contributions to the factors which affect reading comprehension extend to matters concerning authors. Writers vary considerably in their styles. While some writers produce simple, clear, and easy-to-understand reading material, others produce complex and difficult-to-understand material. If the text is complex and difficult to understand, students will require more effort, better skills, and advanced reading experience to comprehend it. Hence, the choice of reading text, which is partly the teacher’s responsibility, should take into consideration the student’s level of comprehension. After all, learning is a progressive process. The teacher should, therefore, be able to choose an appropriate reading material because it will affect the learner’s reading comprehension.
In this course, an exploration into the details of the factors affecting reading comprehension, and how they may influence the teaching of reading in an EFL context, was undertaken. Factors such as background knowledge and vocabulary knowledge significantly affect reading comprehension. For instance, the nature of the vocabulary used in a reading text may either encourage a learner or frustrate them. If the text contains many unfamiliar words, the reader will find it troublesome to comprehend. As such, appropriate teaching and reading strategies must be implemented to reduce the negative effects whilst taking advantage of the positive ones.
Reading strategies for the English reading class
While there exist numerous reading comprehension techniques that a teacher may exploit while teaching reading, different techniques produce optimal results when applied in specific conditions and for specific objectives. As the teacher approaches the reading class, he should have in mind the reading strategies he intends to teach as well as the method he will utilize while teaching those strategies. Because a good strategy is one which ensures that the reading comprehension objective is attained, the teacher’s choice of strategy is determined by the ability of that strategy to meet the objective. There is, therefore, a close connection between reading objectives and reading strategies.
While objectives emphasize knowledge and performance, techniques provide the means to achieve those objectives. Following the ABCD model for learning objectives, the teacher can create numerous learning objectives to be attained in the reading class. The objective model provides an excellent design through which an assessment of achievement can be drawn. As such, success means that the learner has gained the ability to do something he or she was previously incapable of. These objectives are predefined appropriately using the ABCD model for learning objectives.
The model includes four elements: Audience, Behavior, Condition, and Degree of mastery. Each of these elements is concerned with a specific part which is necessary for determining the learning objective. While the Audience (A) element determines who the learners are, Behavior (B) is concerned with what the teacher expects the learner to do, Condition (C) is concerned with the circumstances in which the learning will occur, and Degree (D) is concerned with the magnitude of achievement and performance. The following example illustrates a well-written objective which may be achievable in a reading comprehension class:
“C: Given reasons why specific birds qualify as endangered and therefore need protection A: the learner B: will from the reading be able to identify protected bird species and give reasons for their protection D: in 150 words or less.”
To achieve this objective, the teacher must equip the learner with the appropriate reading comprehension strategy. In this case, for example, the reading strategy may be one that involves monitoring and cognitive skills. Therefore, the teacher will utilize teaching methods which will encourage the students to be aware of what they understand when reading whilst at the same time identifying what they fail to understand. With the use of other strategies, the students should be able to resolve their failure to understand problems. As well, the teacher will provide specific instructions which promote the student’s ability to achieve the objective.
While the teacher should issue instructions and teach the appropriate strategies for the sake of achieving the well-defined objectives, the teacher is also tasked with developing those strategies. In this course, the teacher is empowered with the ability to access and develop a wide range of reading strategies. While developing a strategy, considerations are made not only about the target objective but also the factors that affect reading comprehension. Because those factors influence a reader’s ability to comprehend from a wide range of scope, strategies may be devised such that negative factors are reduced and positive factors utilized regardless of their levels. For example, while a reader’s background related factors may require schemata driven strategies, vocabulary related problems may require motivation driven strategies.
Many strategies may be developed for different reading classes as well as for different students. Because reading comprehension is a process, different strategies may be utilized within different stages of the process. As such, there are strategies which are most appropriate prior to the reading activity; others are best suited to be utilized during reading, while others work well after reading is completed. In addition to being tasked with choosing the most appropriate strategy for the reader, the teacher is also tasked with teaching the reader the reading strategies. Teaching strategies involves providing instructions concerning the strategy, defining and explaining it, and justifying its effectiveness.
Much has been learned in this course, including but not limited to the nature of the reading comprehension process, the factors affecting reading comprehension, the proper development of reading objectives, and the development and usage of reading strategies. While the knowledge and insight gained is enormous, almost all if not all of it directly relates to teaching reading in an EFL context. The course provides important guidelines with which success in teaching reading in the EFL context may be achieved.
Children and the COVID-19 vaccine: answers to common questions
The Food and Drug Administration (FDA) has authorized the Pfizer COVID-19 vaccine for use in children ages 5-11 and adolescents and teens 12-17. This marks a big development in COVID protection, but it also raises questions for many parents. Here are answers from Andrew Pavia, MD, a nationally recognized expert who is the director of Hospital Epidemiology at Intermountain Primary Children’s Hospital and the chief of the Division of Pediatric Infectious Diseases at University of Utah Health.
Why does my child need a vaccine?
Vaccination helps children with:
- Participating safely in school, sports, activities, and play dates with friends
- Traveling with family
- Protecting others around them, like grandparents, infants, and toddlers
- Preventing the spread of COVID-19 in the community
Children have a lower risk of getting severely sick from COVID-19. However, lower risk does not mean no risk. In Utah in 2021, more than 600 children between 5 and 17 years old were hospitalized with severe symptoms. Two died. More than 100 children developed Multisystem Inflammatory Syndrome in Children (MIS-C), which can cause dangerous inflammation of the heart, lungs, kidneys, brain, skin, eyes, and gastrointestinal organs. Some children who get COVID will have symptoms that last for 12 weeks or longer after the infection. This is called long COVID. Symptoms of long COVID include fatigue (extreme tiredness), muscle and joint pain, sleeplessness, headache, difficulty concentrating, and an uneven heartbeat for extended periods of time.
Mathematics can give students a hard time when it comes to solving complex equations. Students often get lost in the several steps that have to be solved before reaching the correct answer and, therefore, need the constant guidance of teachers and parents to be able to achieve the desired results. This is why our web pages have been designed to provide students with assistance on important topics in Mathematics in an efficient and effective manner. Read on to find out everything you need to know about how to solve linear equations with fractions.
However, before we move on to discuss the steps that are required to solve linear equations with fractions, we must first understand the terms individually and separately. Let us understand what each of these terms means. We also need to learn the math of numbers and how it can help us solve linear equations. It is time to befriend numbers instead of shying away from them.
Linear equations are those equations of the first order that represent lines in a coordinate system. In other words, linear equations are equations of a straight line and are formulated as y = mx + b, where m represents the slope of the line while b stands for the y-intercept.
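The slope-intercept form y = mx + b can be illustrated with a small sketch (the helper function name here is my own invention, not a standard one):

```python
def make_line(m: float, b: float):
    """Return the function y = m*x + b for slope m and y-intercept b."""
    return lambda x: m * x + b

f = make_line(2, 3)  # the line y = 2x + 3
print(f(0))          # 3 -- at x = 0 the line crosses the y-axis at b
print(f(1))          # 5 -- moving 1 step right raises y by the slope, 2
```

Evaluating the line at x = 0 always returns b, which is exactly what "y-intercept" means.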
However, it might also be wise here to look at the history of linear equations and how they evolved to become such an integral component of Mathematics. The evolution of linear equations is very closely linked to developments in linear algebra. The earliest studies of linear equations can be traced back to the French mathematician René Descartes, who introduced coordinates into geometry in 1637. These changes gave rise to a new kind of geometry, termed Cartesian geometry. Since lines and planes were important elements in this kind of geometry, there was an urgent need to devise equations to represent them. This is how linear equations came into being, gradually developing into a complex branch of mathematics, with systems of such equations existing at the intersections of lines and planes that need to be solved.
Now that we have understood what a linear equation is, it is also important to go through the definition of a fraction to get a better grasp of the topic.
Just like in real life, a fraction is a small portion of a larger piece. In mathematics, a fraction is a value that represents parts of a whole. These parts hold equal value, and the top and bottom values are termed the numerator and the denominator, respectively. The former represents the number of parts taken, while the latter stands for the total number of equal parts in the whole.
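Python's standard `fractions` module models exactly this numerator/denominator structure, which makes it a convenient way to experiment with the definitions above:

```python
from fractions import Fraction

half = Fraction(1, 2)   # numerator 1 (parts taken), denominator 2 (equal parts)
third = Fraction(1, 3)

print(half.numerator, half.denominator)  # 1 2
print(half + third)                      # 5/6 -- a common denominator is found automatically
```

Because `Fraction` arithmetic is exact, sums like 1/2 + 1/3 come out as 5/6 rather than a rounded decimal.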
Once again, it is only advisable to be aware of the history of fractions before we proceed further. It is fascinating to learn that work on fractions goes back to the ancient Egyptian civilization. The first evidence of a study on fractions by several Egyptian mathematicians appears around 1600 B.C. in the Rhind Papyrus. However, the fractions that we find in these ancient works are different from our own understanding of fractions. These mathematicians treated fractions more as ratios and unit fractions.
Fractions were also studied and worked on by mathematicians living in ancient India. This version of fractions is closer to how we present fractions today, and is believed by many mathematicians to be the origin from which modern fractions evolved. The first recorded depiction of fractions in ancient India was by Brahmagupta in A.D. 630, who wrote the numerator and denominator on separate lines without the bar.
The bar in fractions is believed to have originated with the Arabs, while the terms for the numerator and denominator were introduced by Latin mathematicians. Methods for adding and subtracting fractions by finding a common denominator had been in use since the seventh century; from the sixteenth century, multiplication was applied to find the common denominator. Division of fractions was formalised later and has continued to date as a common operation on fractions.
Today, the way to find the common multiple in fractions differs greatly from these older operations. But to better understand fractions and their function today, we next need to be well-versed in what a solution is before we begin solving linear equations with fractions.
Mathematically, a solution is an assignment of values to the variables in an equation that makes the equation hold true. This means that when a solution is substituted into an equation, both sides become equal in value, as denoted by the ‘=’ symbol.
Steps to Solve Linear Equations with Fractions
Linear equations with fractions can be solved in a few simple steps that, when applied to an equation, produce a solution that makes both sides equal. These steps are:
- Clear the fractions in the equation by multiplying both sides with the Least Common Denominator (L.C.D).
- Remove parentheses on each side by using the Distributive Property formula, x(y+z)=xy+xz.
- Combine like terms on each side.
- Undo addition or subtraction present in the equation.
- Undo multiplication or division present in the equation to make the coefficient of the variable equal.
- Undoing these actions simplifies both sides of the equation.
- Solve the equation by isolating the variable on one side and the constant on the other side.
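The steps above can be sketched in a few lines of code. The equation x/2 + x/3 = 5 below is a hypothetical example chosen for illustration; Python's fractions module keeps every value exact while we clear the fractions, combine like terms, and isolate the variable.

```python
from fractions import Fraction
from math import lcm

# Solve x/2 + x/3 = 5 using the steps above (hypothetical example).
coeffs = [Fraction(1, 2), Fraction(1, 3)]  # coefficients of x on the left
rhs = Fraction(5)                          # right-hand side

# Step 1: clear the fractions by multiplying both sides by the L.C.D.
lcd = lcm(*(c.denominator for c in coeffs))  # lcm(2, 3) = 6
cleared = [c * lcd for c in coeffs]          # 3x + 2x
rhs = rhs * lcd                              # 30

# Step 2: combine like terms on the left (3x + 2x = 5x).
coefficient = sum(cleared)                   # 5

# Step 3: undo the multiplication to isolate x.
x = rhs / coefficient
print(x)  # 6
```

Substituting x = 6 back in checks the answer: 6/2 + 6/3 = 3 + 2 = 5, so both sides are equal.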
Tips for Solving Linear Equations with Fractions
While we have laid out the steps that can be applied to solve linear equations in the above section, there are also some tips and points that will be useful for students to remember while attempting to solve linear equations with fractions.
- Any changes made on one side of the equation also have to be made on the other side since the left side is always equal to the right side in an equation.
- Single-variable equations can be solved by isolating the unknown variable on one side to find a number that is equal on the other side.
Things to Remember
Many students make common mistakes, such as not multiplying both sides by the Least Common Denominator while solving linear equations with fractions. To avoid making such mistakes, here are the things you should keep in mind:
- If an equation reduces to an impossible equality, such as 0 = 5, it has no solution.
- If the equality holds true for every possible value of the variable, the equation is an identity and every value is a solution.
- To avoid or cancel denominators in an equation, multiply the entire equation with the Least Common Denominator.
- In an equation, parentheses can be removed by multiplying the coefficient in front of them by each of the elements contained within them.
- In the case of nested parentheses, that is, a parenthesis inside another parenthesis, the exterior parenthesis is removed first by multiplying whatever value it contains by the coefficient.
Frequently Asked Questions
Here are the questions that are frequently asked by students studying linear equations and fractions:
Name the three forms of linear equations.
The three forms of linear equations are standard form, slope-intercept form, and point-slope form.
What is the formula for representing the standard form of a linear equation?
The standard form of linear equations is given by:
Ax + By + C = 0,
where A, B, and C are constants, x and y are variables, and A and B are not both zero.
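The standard form can be rearranged into slope-intercept form by solving for y (assuming B ≠ 0): y = (-A/B)x + (-C/B). The numbers below are a hypothetical example used only to illustrate the rearrangement.

```python
from fractions import Fraction

# Rearrange Ax + By + C = 0 (with B != 0) into y = mx + b.
# Hypothetical example: 2x + 4y - 8 = 0.
A, B, C = 2, 4, -8

m = Fraction(-A, B)  # slope: m = -A/B = -1/2
b = Fraction(-C, B)  # y-intercept: b = -C/B = 2
print(m, b)  # -1/2 2
```

So 2x + 4y - 8 = 0 is the same line as y = -x/2 + 2.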
How to represent the slope form of linear equations?
The slope-intercept form of a linear equation is given by:
y = mx + c,
where m stands for the steepness (slope) of the line while c is the y-intercept.
What is the difference between linear and nonlinear equations?
A linear equation represents straight lines, whereas a nonlinear equation does not form a straight line and can be a curve that has a variable slope value.
What are the seven types of fractions?
The seven different types of fractions are proper fractions, improper fractions, mixed fractions, like fractions, unlike fractions, equivalent fractions, and unit fractions.
How would you define proper fractions?
A fraction where the numerator is smaller than the denominator is called a proper fraction.
How would you define improper fractions?
A fraction is called an improper fraction when the numerator is greater than the denominator.
What is a mixed fraction?
A fraction that contains a combination of a whole number and a fraction is called a mixed fraction. |
Young children are vulnerable. They develop resilience when their physical and psychological well-being is protected by adults.
Children develop resilience by being given opportunities to overcome difficulties. Many children require adults' help to develop this important skill. Children demonstrate different levels of resilience, and this can be affected by many factors. The stability of home life and relationships with those around them are really important, as children need to feel emotionally secure. Children observe the behaviour of others, and if they see others giving up easily they may copy these behaviours. As children grow and develop they learn to regulate their emotions, which is a key underpinning skill for resilience. For an adult there is a fine balance between providing children with experiences where they can take risks and keeping them safe.
Stop and Reflect:
Use this ‘Principles into Practice’ card to reflect on the statements below
Use the effective practice section to think about your own practice and identify areas you could improve
Use the reflecting on practice section to think about how each child’s development can be supported through their experiences
Use the challenges and dilemmas section and think about how you might overcome them. https://1drv.ms/b/s!App0hvSdYs8rgdZGSm0cQQhDBq2W-Q?e=6aC2rI
Watch this video and think about the benefits and challenges of risky play.
How does allowing children to take risks support them in understanding safety?
Can children be overprotected?
Why might adults not allow children to take risks? |
A length of wire lying in the magnetic field and in which an EMF is induced is called a coil. The coils used in windings are shown in Figure 1.
Figure 1(a) represents a coil with only one turn in it. Each coil has active and inactive sides, and can in general have any number of turns. A single-turn coil has two active sides, also called conductors. Similarly, a two-turn coil has four conductors and a three-turn coil has six conductors.
Generally, the total number of conductors per coil:
ZC = 2T
and, the total number of conductors for a given machine:
Z = ZC × C
- ZC = total number of conductors per coil
- C = number of coils
- Z = total number of conductors
- T = number of turns per coil
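The two relations can be checked with a short calculation. The turn and coil counts below are hypothetical values chosen only to illustrate the arithmetic:

```python
# Conductor counts from the winding relations above:
#   ZC = 2 * T   (conductors per coil)
#   Z  = ZC * C  (total conductors for the machine)
T = 3    # turns per coil (hypothetical)
C = 24   # number of coils (hypothetical)

ZC = 2 * T    # 6 conductors per coil
Z = ZC * C    # 144 conductors in total
print(ZC, Z)  # 6 144
```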
Figure 1(b) represents a multi-turn coil.
Active side of a coil
It is the part of a coil which lies in the slots under a magnetic pole; an EMF is induced in this part only.
In Figure 1(a), coil sides AB and CD are called active sides. For a double layer winding, the half of the coil drawn with a solid line corresponds to the coil side lying in the top layer of a slot, and the dotted line corresponds to the coil side lying in the bottom layer of another slot.
Inactive side of a coil
The inactive side of a coil consists of two portions, namely the front end side and the back end side. In Figure 1(a), the portion of the conductor which joins the two active sides and is placed around the core is called the back end side of the coil.
The portions which are used to connect to other coils are called the front end sides. These ends have two leads, called the starting end S and the finishing end F of the coil.
In Figure 1(a), AD and BC represents the inactive sides of a coil.
|Title:||Theory of electrical machines and appliances – Government of Tamilnadu|
Jan 25, 2016
Vernacular Architecture in Ghana
The phrase built environment refers to the human-made surroundings that provide the setting for human activity, ranging in scale from personal shelter to neighborhoods to the large-scale civic surroundings. The term is widely used to describe the interdisciplinary field of study which addresses the design, construction, management and use of these man-made surroundings and their relationship to the human activities which take place within them over time.
CEFIKS focuses on indigenous architecture and other built forms in Ghana and the influences brought on these local built forms through contacts with outside world.
Ghana originally consisted of many different tribes and ethnic groups. Their traditional architecture was influenced by factors such as available materials, technological limitations, economic and social relationships within the community, and religious beliefs. This makes the tribes different in many ways, above all in architectural style.
As local vegetation is linked to climate, grouping architecture by climatic region helps to explain construction techniques. While there may be an infinite number of architectural variations between villages, inhabitants are still limited to the availability of materials in their certain climatic zone. So while scale, function and details can vary, construction techniques are broadly similar due to the inherent properties of these materials.
In northern Ghana, individual huts are linked together with mud walls to create compounds. These villages consist of conical moulded huts for sleeping, cooking and washing as well as huts for food and livestock. Everything in these villages is created with mud including walled thresholds, seats and shelves.
The architecture of the Akan people is characterized by the courtyard house, a building type that became a base for all the different types of buildings. The courtyard house was constructed of a timber framework covered in mud, with a steeply pitched thatch roof. The upper part of the building was painted white, and the lower part red. The ground floor was raised, sometimes up to two meters. An example of the typical traditional courtyard house is the “shrine” house, which consists of four buildings enclosing a central courtyard. The courtyard is a place for music, cooking and religion. The courtyard houses were well adapted to the climatic conditions of the area, with ventilated ornamented screen walls and partially roofed outdoor space as shelter from sun and rain.
The Asante Traditional Buildings are the only surviving examples of traditional Ashanti architecture. Their design and construction, consisting of a timber framework filled with clay and thatched with sheaves of leaves, is rare nowadays. All designated sites are shrines, but there were many other buildings in the same architectural style in the past. They have been best preserved in the villages, away from modern construction and warfare. The traditional Asante buildings are listed as a World Heritage property by UNESCO, and are described as impressive in terms of construction, design, cleanliness and comfort.
Tattooed remains have been discovered on over 49 mummies from various parts of the world, such as ancient Egypt, Greece, and Europe.
Otzi the Iceman
In 1991, two German hikers discovered Otzi, an ancient iceman who had been frozen for 5,300 years in the Italian Alps.
Clothing, tools, and weapons from the Iceman show evidence of his active life during the Copper Age. However, his tattooed body gives us more insight into early medicine than any other evidence available today.
Otzi adorned his body with 61 tattoos made up of lines and crosses. These tattoos were believed to have served as primitive acupuncture treatments to alleviate his chronic back pain.
In ancient Egypt, tattoos found on female mummies are believed to have been used to ask the gods for protection during gestation and labor. They date back to 2000 BC and depict bowls used for purification rituals and the Egyptian god Bes.
In Ancient Greece during its Dark Ages, prisoners of war, criminals, and enslaved people often received tattoos displaying their offenses. This practice became widespread over time.
Herodotus recorded that during the sixth century BC, Greeks learned this method of punishment from the Persians. Tattoos were effective deterrents against escape and made people easily identifiable as enslaved people or prisoners.
Tattooing was an increasingly common practice in Ancient Rome and was used for various reasons. It was used to mark soldiers in the army, identify criminals, and distinguish gang members among enslaved people.
Tattooing was also used to make enslaved people more visible to the general public and as punishment for escapees. The prevalence of tattooing in ancient Rome is evidenced by artwork and figurines from as far back as 4000-3500 BC.
Transoceanic Trade Routes
Tattooing has been practiced for millennia. Ancient Egyptian mummies and ancient Chinese and Northern European tribes used tattooing and body painting.
Tattoos were believed to have emerged in Polynesia, where ceremonies were held to initiate young men into chiefly roles and give them tattoos as part of a rite of passage that symbolized community status, respect, and honor.
Tattooing dates back thousands of years to Paleolithic cultures and was widely practiced by the tribal Celts of Ireland, Scotland, and Wales. Archaeological evidence supports this claim.
During the Age of Discovery, Europeans were fascinated by images of tattooed Polynesian natives. Tattoos were considered barbaric when Christianity arrived in Europe but gradually regained popularity with the resumption of transoceanic travel.
In general, a network means connecting two or more things together to share information among them. Networking in computing is the same: two or more computers are connected with each other to form a network. In this tutorial, you will learn what networking is and its types.
What is networking?
Networking is the connection of two or more computing devices, using networking devices, so that information can be shared across the network.
How we are using the network today?
Today it is hard to live even a second without a network, because everyone has at least one mobile phone. Computers have evolved into many forms: mobile phones, laptops, tablets, game consoles, and so on. These devices are connected to each other. For example, a user in India can send a message to a user in America. Using your computer you can send messages, calls, files, videos, images, and more. All of this happens when your device is connected to a network. We discuss networks in more detail under the upcoming headings.
Types of Network
Based on the distance, the network is classified into different types. Some of them are listed below,
PAN, or Personal Area Network, is the connection of devices in a home network. Here the connection is not available to others: the personal devices are connected, and access is available only inside the home. In Figure 1, you can see the PAN connection.
LAN, or Local Area Network, connects computing devices in a small area such as an office, school, or laboratory. The connection can be of two types: wired and wireless.
Wired LAN – here the connection is made using a wired medium, such as Ethernet cables.
Wireless LAN – here the connection is made using a wireless medium, such as WiFi or Bluetooth.
MAN is known as a Metropolitan Area Network. It covers an area larger than a LAN but smaller than a WAN. For example, the interconnection of several LANs forms a Metropolitan Area Network.
A Wide Area Network (WAN) is the widest connection, spanning the whole world. The Internet is an example of a WAN, used all over the world.
What is network connection?
Everyone uses networking daily in the form of the Internet. It is a worldwide connection which is accessed through your SIM card and ISP. If you want to send an image to your friend in another country, you can easily send it through any messaging application like Facebook, WhatsApp, or Telegram.
If you turn on mobile data, it means you are online in your network, so you can receive messages and files and also send them to others.
These are the basics of networking. We will see more about networking in upcoming blogs. Thanks for visiting.
At Oakley Cross our History curriculum has been designed to engage and enthuse our children with the people and events of the past and to develop meaningful skills as historians, enabling them to understand how the past has helped to shape the world they live in today.
Across EYFS, KS1 and KS2, we aim to:
- Develop chronological understanding and nurture the children’s interest in the past.
- Develop their skills in enquiry, analysis, evaluation, and argument, whilst enabling them to interpret and understand the past and communicate historically.
- Overall, develop their interest in the past, arousing their curiosity and motivation to learn.
Our planning for History has a deliberate approach to sequencing the curriculum and the choice of content focus. At all stages, the curriculum links to previous content and concepts and identifies later links with opportunities to revisit key periods of time. At its heart is how the events of the past have shaped the region and country we live in today.
In EYFS, the focus is on the child’s immediate living memory and developing an understanding of old and new, past and present, and identifying change.
In KS1, the sequence of learning moves from history within the child’s living memory to looking at familiar features in the recent past and then gradually beyond living memory. The achievements of significant individuals, including those in the North East, add a further dimension. Pupils’ prior knowledge is built upon and helps lay the foundations for future learning.
In KS2, our curriculum extends the children’s knowledge and understanding of British and world history in line with the National Curriculum. Alongside this, a thread of local history builds year upon year to allow the children to uncover how their home became the place it is today. In our long term overview:
- British history is sequenced chronologically across KS2 from the Stone Age through to the Romans, Anglo Saxons and Vikings.
- The impact and legacy of our mining and railway heritage and the changing face of West Auckland and County Durham is explored.
- In Y3/4 ancient civilisations are also explored through a study of the Ancient Egyptians and the Shang Dynasty.
- In Y4/5 the history of Britain around 1000AD is also contrasted with that of a faraway place including studies of the Mayans and Benin.
- In Y5/6 the chronology is extended beyond 1066 with studies of the Victorians and the Second World War and the impact these had both locally and nationally.
- Thematic studies of aspects of life since 1066 have been placed at the end of Y6 to allow reconnection to prior learning and to fill gaps in learning.
For a copy of our Long Term Plan for History and the key skills we aim for children to acquire over time please follow the link below.
Our children will become confident historians with an enthusiasm for the past and a clearer understanding of how this has moulded their lives today. They will become more confident using their enquiry and literacy skills whilst becoming more respectful of other nations, cultures and traditions. We also want children to question the impact of their learning and have posed the question ‘How did people in the past shape our world?’
Habitat Loss is the number one driver of extinction.
Habitat loss comes in several forms and includes:
Destruction and change in land use (filling in wetlands, dredging rivers, ‘clearcutting’ (logging), mining, agriculture, mowing, fishing with dragnets etc);
Fragmentation (as a result of building roads, dams, fences, housing and other infrastructure etc);
Degradation (due to pollution, invasive species, disruption of the ecosystem, for example erosion of dunes/beaches caused by human traffic).
As the habitat shrinks, the carrying capacity (the number of any given species that can live there) declines. You only have to look at some of the world’s most iconic species to see they are living in fragmented and often isolated pockets – their historical range greatly reduced. African Lions have declined by more than 40% over the last 20 years, with as few as 20,000 living in the wild, occupying only 8% of their historic range. Without habitat connectivity, species like lions cannot safely roam or disperse, which restricts their gene pool and increases the risk of disease. A shrinking and fragmented habitat means wildlife is living in close proximity to humans, and this often results in a chain reaction – with lions, their prey species may be depleted, leading them to predate on livestock, which can in turn result in animosity from the community and retaliatory killings.
Biodiversity is threatened by habitat loss which is recognised as a well-known driver of insect decline. Insects are responsible for pollination, nutrient recycling and control of many pests and are essential to life on Earth.
A report by the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES), a UN committee, written by 145 experts from 50 countries and published in 2019 is considered the most comprehensive assessment of its kind.
Some of its notable findings are shown below:
- Three-quarters of the land-based environment and about 66% of the marine environment have been significantly altered by human actions. On average these trends have been less severe or avoided in areas held or managed by Indigenous Peoples and Local Communities.
- More than a third of the world’s land surface and nearly 75% of freshwater resources are now devoted to crop or livestock production.
- Urban areas have more than doubled since 1992. |
How can we answer questions about the world around us? How can we make decisions about what to do? Over the past years, more and more people have turned to data for help. Huge amounts of data are collected every day from millions of sources. This data has a lot to tell us! But data by itself is mute—it can only help us if we learn to make it speak and tell its story.
In this short free online course, we will introduce basic ideas about collecting data, and techniques for turning data into information we can use. Along the way, we will hear from researchers at Loughborough University about the ways they use data in their work.
Learn to answer questions with data
In the first week, we will start by considering some questions drawn from arts, political science, geography and sport that we want to answer. We will think about what sort of data we might be able to use to answer these questions, and how we might go about finding this data.
Once we have data, we will start to explore it using some visual tools we can either create by hand or using apps online. We will discuss how to understand these visualisations and begin to read what our data has to say.
In the second week, we will follow up with ways to summarise and present data. You will learn how to choose the right summary for the type of data you have collected and the question you are trying to answer.
We will conclude with an article about how to make meaningful comparisons using data, and an explanation of the critical concept of significance. We will look at the data we have collected and use these techniques to see what it has to say about our starting questions.
Throughout the course, we will be collecting, sharing, analysing and discussing our own data and learning what it has to say about some specific questions.
Improve your critical thinking skills
Although there exist very difficult and mathematically complicated methods of analysing data, the fundamentals of data analysis come from general critical thinking, and can be grasped with the basic examples and techniques we will cover. By the end of this course, you will have learned about how data can help answer questions in a variety of disciplines, and have hands-on experience with data collection and analysis.
This course is open to anyone with a primary level education in maths and good critical thinking skills. It is suitable if you are:
- Starting or considering a course in arts, humanities, social sciences or sport
- In a career where data analysis is becoming relevant
- Curious about applications of data to a wide range of disciplines
- Interested in learning about and experimenting with how data can be collected and studied to help answer questions |
Decimals are just another way to express fractions. To
produce a decimal, divide the numerator of a fraction by the denominator.
For example, 1/2 = 1 ÷ 2 = .5.
As with fractions, comparing decimals can be a bit deceptive.
As a general rule, when comparing two decimals such as .3 with .003,
the decimal with more leading zeros is smaller. But if asked to compare .003 with .0009, you might overlook the additional zero and, because 9 is the larger integer, choose .0009 as the larger decimal. That, of course, would be wrong. Take care to avoid such
mistakes. One way is to line up the decimal points of the two decimals:
- .0009 is smaller than .0030
- .000900 is smaller than .000925
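The lining-up trick can be sketched in a few lines. This assumes both numbers are written as pure decimals starting at the decimal point (as in the examples above); padding with trailing zeros then makes the place values align, and Python's decimal module confirms the comparison numerically.

```python
from decimal import Decimal

def line_up(a: str, b: str) -> tuple[str, str]:
    """Pad the shorter decimal string with trailing zeros so the
    decimal points (and place values) line up."""
    width = max(len(a), len(b))
    return a.ljust(width, "0"), b.ljust(width, "0")

# The examples from the text:
print(line_up(".003", ".0009"))     # ('.0030', '.0009')
print(line_up(".0009", ".000925"))  # ('.000900', '.000925')

# Comparing numerically avoids the trap entirely:
assert Decimal(".003") > Decimal(".0009")
```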
Converting Decimals to Fractions
Knowing how to convert decimals into fractions, and fractions
into decimals, is a useful skill. Sometimes you’ll produce a decimal
while solving a question and then have to choose from fractions
for test choices. Other times, it may be easier to work with fractions.
Whatever the case, both conversions can be done easily.
To convert a decimal number to a fraction:
- Remove the decimal point and use the decimal number as the numerator.
- The denominator is the number 1 followed by as many zeros as there are decimal places in the decimal number.
Let’s convert .3875 into a fraction. First, we eliminate the decimal point and make 3875 the numerator.
Since .3875 has four digits after the decimal point, we put four zeros in the denominator, giving 3875/10000.
Then, by finding the greatest common factor of 3875 and 10000, which is 125, we can reduce the fraction to 31/80.
Converting from fractions back to decimals is a cinch. Simply carry out the necessary division on your calculator: for 31/80, compute 31 ÷ 80 = .3875.
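Both conversions can be checked in code. Python's fractions module reduces by the greatest common factor automatically, which mirrors the two-step rule above:

```python
from fractions import Fraction
from math import gcd

# Convert .3875 to a fraction by the two steps in the text.
numerator, denominator = 3875, 10000   # digits over 1 followed by four zeros
g = gcd(numerator, denominator)        # greatest common factor: 125
reduced = Fraction(numerator // g, denominator // g)
print(reduced)             # 31/80

# Fraction performs the same conversion (and reduction) in one step:
print(Fraction("0.3875"))  # 31/80

# And back from fraction to decimal: just carry out the division.
print(31 / 80)             # 0.3875
```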
Optimising experimental design needs a sound understanding of basic principles and a good appreciation of a range of designs and when to use them. More than ten years ago it was clear that among scientists using laboratory animals there was a widespread lack of recognition of the importance of valid comparisons and avoidance of bias, and a reluctance to use efficient designs. This has been emphasised by several publications over the last decade. Education was seen as a key element in addressing this, but teaching experimental design is different from teaching statistics as a subject, and where animals are used design involves more than data collection and valid comparison of experimental groups. Interactive workshops have evolved as a good way of giving researchers the knowledge, understanding and skills to optimise their design for the experimental question being addressed and the constraints within which they have to work.
An optimal design would be one which will produce valid results, be efficient in use of experimental material, and involve least severity for the animals used. The talk will consider what is needed for this, as that determines the content of workshops or other types of teaching on experimental design using animals. It will also consider the range of designs and their usage, and how researchers might acquire the skill to select an appropriate and optimal design. To assist this, tutors could think of an educational 3Rs – repetition, recapitulation and reinforcement. How this approach has been incorporated into an effective mix of presentations and problem solving will be illustrated. |
Physical activity refers to any bodily movement that results in energy expenditure, including a broad range of occupational, leisure and daily activities.
Physical inactivity is a major underlying risk factor for coronary heart disease. Thus, it is advisable to include consistent physical activity or exercise in your daily routine.
Physical activities and cholesterol
Physical activity can play a major role in the treatment of hyperlipidemia. Also, it reduces the risk of heart attack or stroke. It can also help lower your blood pressure, reduce insulin resistance, and favorably influence cardiovascular function.
More than 30% of adults are not sufficiently physically active in their daily life, and activity levels are continuing to decline. Physical activity is one of public health’s “best buys.”
12 Surprising Benefits of Physical Activity
Good evidence suggests simply reducing the sitting time each day reduces death risk independently of other lifestyle factors.
There is incontrovertible evidence that regular physical activity contributes to the prevention of several chronic diseases and reduced risk of premature death.
- Enhances Lipoprotein Profiles – Reduces triglyceride, increases HDL cholesterol, and decreases LDL-to-HDL ratios. Reference: European Journal of Medical Research 1997; 2:259-64. Acta Paediatrica 1994; 83:1258-63.
- Reduce Inflammation - A recent study has shown that exercise training may cause marked reduction in C-reactive protein levels (inflammation marker). Reference: Canadian Medical Association Journal 2005 Apr 26;172(9):1199-209.
- Reduced Cardiovascular Risk - A systematic review revealed an inverse relation between physical activity and the risk of cardiovascular-related death. These protective effects are for as little as 1 hour of walking per week. Reference: American Journal of Preventive Medicine 2004 Jun; 26(5):407-18. The benefits of physical activity and fitness extend to patients with established cardiovascular disease. Regular walking and moderate or heavy gardening were sufficient to achieve this benefit. Reference: Circulation. 2000 Sep 19;102(12):1358-63.
- Type 2 Diabetes - In a large prospective study, each increase of 500 kcal in energy expenditure per week associated with a decreased incidence of type 2 diabetes of 6%. Reference: The New England Journal of Medicine 1991 Jul 18; 325(3):147-52. Physical activity improves insulin action and glucose tolerance in obese individuals. Reference: Medicine & Science in Sports & Exercise 1999;31: S619-23. Physical activity reduces the incidence of type 2 diabetes among high-risk people (overweight). Reference: The New England Journal of Medicine 2001 May 3; 344(18):1343-50. Diabetes. 2005 Jan; 54(1):158-65.
- Metabolic Syndrome - Physical activity helps to prevent and treat metabolic syndrome. Reference: Applied Physiology, Nutrition, and Metabolism 2007 Feb;32(1):76-88. Diabetes Care 2010 Jul; 33(7): 1610-1617.
- Reduced Hypertension Risk - There are consistent findings regarding the protective effects of physical activity in the prevention of hypertension. Studies have shown physical activity is inversely associated with hypertension. Reference: Hypertension. 2010 Jul; 56(1):49-55. Blood Press. 2011 Dec; 20(6):362-9. Physical activity has an independent capacity to lower blood pressure in people with hypertension. Reference: Journal of Clinical Epidemiology 1992 May;45(5):439-47.
- Increased Self-Esteem - Physical activity improves psychological well-being through reduced stress, anxiety and depression. Reference: Medicine & Science in Sports & Exercise June 2001 - Volume 33 - Issue 6 - pp S587-S597.
- Good Life Expectancy - Physical activity has been shown to add years to your life, and life to your years. In one study, people who went from unfit to fit over a 5-year period had a 44% reduction in the relative risk of death. Reference: JAMA. 1995 Apr 12; 273(14):1093-8.
- Supports Healthy Weight – Physical activity helps you stay at or reach a healthy weight, and it is an important component of long-term weight control. Reference: The American Journal of Clinical Nutrition July 2005 vol. 82 no. 1 226S-229S.
- Protects against Osteoporosis - Routine physical activity can improve musculoskeletal fitness, thereby improving overall health status and reducing the risk of chronic disease and disability. Reference: Canadian Journal of Applied Physiology 2001 Apr; 26(2):217-37. Canadian Journal of Applied Physiology 2001 Apr; 26(2):161-216.
- Improved Sleep Quality - Objectively measured physical activity is associated with several improved self-reported sleep parameters. Reference: Mental Health and Physical Activity Volume 4, Issue 2, December 2011, Pages 65–69.
- Cancer Prevention - A series of mechanisms may explain the 46% reduction in cancer rates observed with regular physical activity. Reference: Critical Reviews™ in Oncogenesis 1997; 8(2-3):219-72.
A linear relation exists between the volume of physical activity and health status: the most physically active people tend to be the healthiest.
Increased physical activity results in reduced triglycerides, increased HDL cholesterol, decreased blood pressure, reduced insulin resistance, improved blood glucose control, better immune function, reduced fat storage, increased energy expenditure, and decreased free-radical generation.
Share your physical activities with friends and family, and make them fun rather than a chore.
8 Simple ways to incorporate Physical Activity into your daily routine
Daily physical activity is the cornerstone of a healthy lifestyle. The most obvious first steps are using stairs instead of lifts, and walking or cycling for short journeys.
- Take the stairs when possible - Increasing the number of steps you take is an obvious way to build physical activity into your daily routine. If you have to go up or down a long way, walk the first few floors and take the elevator for the rest.
- Clean the house/car/utensils – Cleaning is a better workout than you might think: it involves plenty of walking, lifting, bending, and stretching, and it is an easy way to burn some extra calories.
- Gardening & yardwork - Yardwork is great because it increases your physical activity and gives you an opportunity to connect with nature. Pulling weeds, mowing the lawn, trimming the hedge, and raking leaves are all physically taxing as well as mentally relaxing.
- Family Walk after Dinner - Research has found that walking after dinner can work wonders for your health. A family walk after dinner improves everyone’s health and provides an opportunity to talk and share time together.
- Walk if Possible - Walking your children to and from school helps them get physical activity. At work, walk over to see a co-worker instead of phoning or emailing. Walk while talking on the phone. Walk the dog; don't just watch it walk. Walk up escalators rather than standing still.
- Cycle when Possible - If possible, cycle to work and to the market. Enjoy cycling with your kids in the evening; it helps encourage them toward a healthy lifestyle.
- Play Regularly – Instead of watching TV, encourage your child to play and join in. Be active with your child: take them to the swimming pool or play in the garden or park. If you are active, your child is likely to be active too; be an example. If possible, involve the whole family and enjoy playing together.
- Dancing – All you need is some lively music, a willingness to forget work pressure, and an interest in dancing. It is fun to dance with your family, especially your kids.
Brain imaging can evaluate tumors, aneurysms, bleeding in the brain, nerve injury, and other problems, such as damage caused by a stroke. A brain MRI can also find problems of the orbits (eyes and optic nerves), internal auditory canals (ears and auditory nerves), and pituitary gland.
The facial orbit is the cavity or socket of the skull in which the eye and its appendages are situated. Orbital imaging is used to discover tumors, infection, chronic diseases, and optic neuropathy. Other conditions evaluated with orbital imaging include papilledema (swelling of the optic nerve head), enlarged eye muscles (seen in thyroid conditions), infection or tumors of the lacrimal gland, and enlargement of vessels that supply and drain the eye area.
The pituitary gland is a pea-sized structure in the middle of your brain, just behind your eyes. It sits inside a bony cavity called the sella turcica. The gland is attached to the brain by a thin stalk called the infundibulum. Pituitary gland MRIs are used to discover abnormalities of the pituitary gland, such as microadenomas, craniopharyngiomas, pituitary apoplexy, and cysts.
MRA, or magnetic resonance angiography, is a type of magnetic resonance imaging (MRI) scan used to evaluate blood vessels and the flow of blood through them. The blood vessels in the neck (carotid and vertebral arteries) and brain are frequently studied by MRA to look for areas of narrowing or dilatation. In the abdomen, the arteries supplying blood to the kidneys are also frequently examined with this technique, and the extremities (arms, legs) can be studied for narrowing as well. MRA scans can find problems of the arteries and veins, such as an aneurysm, a blocked blood vessel, or a torn vessel lining (dissection). Sometimes contrast material is used to see the blood vessels more clearly. Like an MRI, an MRA uses a magnetic field and pulses of radio wave energy to make pictures of blood vessels inside the body.
Musculoskeletal imaging can evaluate virtually all of the bones and joints for internal derangement, infection, inflammation, post-traumatic changes, and tumors. Soft tissue evaluation includes tendons, ligaments, muscles, cartilage, and bone injuries. MRI can also help characterize various mass compositions and vascular pathologies.
Pelvic imaging investigates pelvic pathology, including detection of local invasion by rectal tumors, the anatomy of peri-anal fistulas, infection, and inflammation. For women, pelvic MRI is used to evaluate the ovaries and uterus as a follow-up to an abnormal ultrasound exam, to assess cervical carcinomas, and to evaluate endometrial cancer. For men, pelvic MRI is sometimes used to evaluate prostate cancer.
The brachial plexus represents a complex network of nerves formed from the ventral rami of the lower cervical nerves (C5-C8) and the greater portion of the ventral ramus of the first thoracic nerve (T1). The ventral rami or roots coalesce to form trunks, divisions, cords, and branches, which provide motor and sensory innervation to the upper limbs and thorax. Brachial plexus imaging allows for visualization of a wide range of pathologies such as birth trauma, high-speed trauma, neoplasms (intrinsic or extrinsic masses) or post-radiation injury.
Abdominal imaging is most frequently used to further evaluate an abnormality seen on another test, such as an Ultrasound (US) or Computed Tomography (CT) scan. The MRI exam is usually tailored to look at specific organs or tissues, such as the liver, adrenal glands, kidneys or pancreas, in addition to various tumors, congenital abnormalities, and metabolic disorders.
Magnetic resonance cholangiopancreatography (MRCP) is a special type of magnetic resonance imaging (MRI) exam that produces detailed images of the hepatobiliary and pancreatic systems, including the liver, gallbladder, bile ducts, pancreas, and pancreatic duct. This is used to help evaluate tumors, stones, inflammation or infection, and to evaluate patients with pancreatitis to detect the underlying cause.
Spine imaging covers the anatomy of the cervical, thoracic, lumbar, and lumbosacral regions. MRI scans are most commonly used to evaluate herniated discs or narrowing of the spinal canal (spinal stenosis) in patients with neck, arm, back, and/or leg pain. MRI is also the imaging tool of choice for detecting recurrent disc herniation in patients with a history of back surgery.
African American Vernacular English (AAVE) is also known as Ebonics, which can simply be defined as “black speech”; the term was coined by blending the words ebony (black) and phonics (sounds). It was formulated in 1973 by African American scholars who disliked negative terminologies such as “Nonstandard Negro English,” which had been used since the 1960s. One may argue that AAVE has become popular mostly in the 21st century due to the music industry. However, the language has been in use for many years, and it is only now gaining wider recognition. Most people, however, view it as bad English and an incorrect use of grammar. This paper looks at the history and origin of AAVE and how the language developed over the years into what it is now.
Origin of AAVE
The history of African American Vernacular English has been controversial among linguists, with various theories advanced as to the origin and development of the language. The two theories that hold the most weight are the Anglicist hypothesis and the Creole theory. The two approaches differ in their explanations, but both give definite accounts of the origin and development of the language. Linguists and dialectologists like Rickford have argued over the years about how the dialect came into existence, and it is through such debates that the theories were formulated. It is therefore paramount to examine both theories carefully to gain an understanding of the dialect.
The Creole Theory
The Creole Theory represents Rickford’s position on the origin of AAVE. A creole is a language that originates from other languages and becomes the primary language of the people who speak it. This theory holds that AAVE descends from a creolized form of English spoken on American farms by African slaves, a dialect somewhat correlated with the creolized forms of English spoken in Jamaica, Trinidad, Guyana, and Barbados (Rickford & Rickford, 2000).
For a creole to be created, a pidgin is first formed: a simplified contact language developed with the primary purpose of allowing groups with different languages to communicate. During the slave trade era, the slaves spoke many different West African languages and had to find a way to communicate with each other. They therefore combined English words with some West African vocabulary, using the simple grammar of their native tongues. This form of communication was a pidgin (Rickford & Rickford, 2000). Pidgins are often narrow, specialized, and lacking full grammar. A pidgin can develop into a functional language through the next generation, who grow up learning it from an early age; the final product is known as a creole. This is what happened with the slaves as they passed their form of communication on to their children, creating Ebonics as a language they could communicate in (Rickford & Rickford, 2000). The theory further explains that the creole then went through a process of decreolization, in which some features of the language were replaced with features from English as the slaves migrated and learned more English.
The Anglicist Theory
The Anglicist theory is considered the main competitor of the Creole theory. It is supported by dialectologists such as Hans Kurath and was introduced in the mid-twentieth century; it was the dominant view until the mid-1960s, when other theories began to emerge. According to the Anglicist hypothesis (Rickford, 2006), AAVE developed in much the same way as the languages of other immigrant groups. When the slaves were first transported from Africa to the United States, they spoke different languages (Rickford, 2006). However, as they were exposed to English, they began learning the language. As the slaves had offspring, their native languages were less and less preserved. The same pattern is seen today with immigrants whose children can barely speak their parents’ native language, which means the next generation will not be exposed to the language at all (Rickford, 2006). Through this exposure over several generations, the slaves’ native languages faded away and were replaced by the regional dialects the slaves were exposed to at the time, mostly European American dialects of English, as some slaves were also sold to the British.
This theory is supported by many and tends to refute the Creole theory, because it gives an explanation that seems more reasonable for how AAVE was formed. When the slaves arrived, they were in close contact with their white owners, but the law at the time prohibited the whites from teaching them the language properly (Rickford, 2006). This is why the language sounds like “broken English” with improper grammar.
The Distinct Features of AAVE
While both theories contain believable elements, there are distinct features that characterize the language and show its uniqueness. The features described here are only a fraction of the language; AAVE is very diverse, and they do not capture the entire dialect. There are several unique verbal markers. In the present tense, the verb is not marked, and the same verb form serves all persons; the third-person -s is often absent, for instance ‘he do. She run to the market’ (Standard English: ‘he does. She runs to the market’). There is also a lack of copula in the present tense, as in ‘she walking too fast’ (Rickford & Rickford, 2000), which in Standard English would be ‘She is walking too fast.’ The copula is also deleted when a condition is permanent, for instance ‘she my mother,’ which in Standard English would be ‘she is my mother.’
In the past tense, African American English combines done with the verb form of a word, for instance ‘that’s the first time she done told me that,’ meaning ‘that’s the first time she has told me that.’ Still in the past tense, the language uses been as an equivalent of Standard English has been, for instance ‘my son been ill,’ meaning ‘my son has been ill.’ In the future tense, be is used as will be, for instance ‘she be getting home late,’ meaning ‘she will be getting home late’ (Rickford & Rickford, 2000).
The language also has syntactic and morphosyntactic markers, including negation, genitive, and dative markers. In negation, multiple negators can be used in the same sentence, for instance ‘Ain’t nothing she can’t do,’ meaning ‘there isn’t anything she can’t do.’ In genitive marking, the possessive -‘s marker of Standard English is absent in AAVE, for instance ‘That’s my mama kitchen,’ meaning ‘That is my mother’s kitchen’ (Rickford, & Rickford, 2000). Dative markers include the masculine third-person pronoun hisself, used in place of Standard English himself. Other features considered paramount in African American Vernacular English are phonological markers: for instance, an unstressed syllable may be deleted from the initial or middle position, as in ‘bout’ for about and ‘gov’ment’ for government. Another phonological marker is the change of initial [th], where the voiced fricative becomes [d], as in those - [douz]; these - [diz] (Rickford, & Rickford, 2000).
In conclusion, it is evident that the origin of the African American Vernacular English was from the slaves. When they were brought to America, they were unable to communicate since they were from different West Africa regions. Eventually, they found ways to communicate and various theories explain how this was made possible. The slaves could have combined their native languages with English to create a creole, or they could have indirectly learned English from their white owners even though they were not allowed to teach them proper English. Either way, the language has evolved, or the dialect has been adopted mostly by African Americans.
Rickford, J. R. (2006). The Anglicist/Creolist quest for the roots of AAVE: Historical overview and new evidence from the copula. Studies in contact linguistics: Essays in honor of Glenn G. Gilbert. Bern & New York: Peter Lang.
Rickford, J. R., & Rickford, R. J. (2000). Spoken soul: The story of black English (p. 267). New York: Wiley.
Step 4: Waste Water Treatment
Biogas - An Anaerobic Biogas Reactor is a chamber or vault that facilitates the anaerobic degradation of blackwater, sludge, and/or biodegradable waste. It also facilitates the separation and collection of the biogas that is produced. The residence time of the fluid in the reactor should be a minimum of 15 days in hot climates and 25 days in temperate climates. For highly pathogenic inputs, a residence time of 60 days was factored in for test conditions in the habitat. Thermophilic conditions (i.e., a sustained temperature over 50°C) allow for pathogen disinfection of the waste products within the habitat. Once waste products enter the digestion chamber, gases are formed through fermentation. The gas forms in the sludge but collects at the top of the reactor, mixing the slurry as it rises. The system produces nutrients for the wetlands, algae, and hydroponic gardens.
Hydrostatic Filter - Centrifugal force creates transverse flow patterns in a curved channel, which under certain circumstances manifest themselves as a pair of Dean vortices. As particles flow down the channel, they spiral around the Dean vortex cores while a combination of drag and shear-induced forces move them toward the channel center. Under the correct conditions (specified by channel geometry and flow rate), this dynamic causes the particles to focus into a band near the outside wall. At the end of the length of the channel, the single flow is separated into two flows: the concentrate and effluent outputs.
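As supporting background (the notation below is our addition, not part of the original specification), the strength of these transverse secondary flows is commonly characterized by the dimensionless Dean number, which scales the channel Reynolds number by the square root of the ratio of hydraulic diameter to bend radius:

```latex
De = Re \,\sqrt{\frac{D_h}{2 R_c}}
```

where $Re$ is the channel Reynolds number, $D_h$ the hydraulic diameter, and $R_c$ the radius of curvature of the channel. A higher Dean number corresponds to stronger secondary vortices and hence faster particle focusing for a given channel geometry and flow rate.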
Although HDS technology leverages centrifugal force, it is different than centrifuges and hydrocyclones. Instead of relying on density differences between particles and fluid, HDS technology is solely based on hydrodynamic forces, resulting in a particle size dependent separation that allows for direct concentration of particles of any density, including neutrally buoyant ones.
UV - The UV filtration unit is a small stainless steel tank with an inner housing containing a UV bulb; water passes around the bulb for sanitizing and treatment of the water collection systems.
Wetland - The bottom of the habitat contains a constructed wetland that uses several plant species to consume the nutrients the biogas reactor produces from human excrement, plant wastes, and broken-down consumables and biodegradable habitat waste products.
What Is POEMS?
POEMS is an acronym for Positive Outcome and Experience Management Strategies. The concept behind POEMS is to design, develop and deliver management strategies that allow practitioners to achieve a positive outcome and experience in a given situation.
What is POEMS for Children?
This one day course is unique, innovative and the first of its kind. The course introduces management strategies that allow practitioners to effectively reduce anxiety in children and achieve a positive outcome and experience for the child affected.
Why is the advent of the POEMS For Children course so important?
Up to 80% of children admitted to hospital with chronic illness experience anxiety from hospitalisation alone. Between 50 and 75% of children having surgery experience anxiety in the anaesthetic room. This means that in a major paediatric institution such as Great Ormond Street Hospital in London that admits 22,000 patients and completes 15,294 operative procedures per year (figures for 2007/8), up to 17,600 children experience clinically significant levels of anxiety at some point in their admission and up to 11,470 children experience peri-operative anxiety.
It has been shown that anxiety directly increases the perception of pain for any given stimulus, can adversely affect immune function, and can prolong the healing process. Increased anxiety in surgical patients has been shown to cause a statistically significant increase in post-operative analgesic requirements and in nausea and vomiting, and to lead to a prolonged recovery time. In children, the presence and degree of peri-operative anxiety is directly linked with the appearance and magnitude of post-operative dysfunctional behaviour. Between 24 and 60% of children will display dysfunctional behaviour within the 3 weeks following surgery. This takes the form of eating disorders, problems sleeping, nightmares, temper tantrums, bed-wetting and problems with authority. Between 4 and 12% of children continue to display these patterns of behaviour one full year following surgery. This is an enormous number of children experiencing significant morbidity as a direct consequence of the anxiety they experience. Using the figures above, we can put this into perspective: within a single major paediatric institution, over the period of one year, up to 9,176 children would experience significant anxiety-related morbidity within the 3 weeks following their surgery, and up to 1,835 of these children would still be suffering after one year.
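The estimates in this paragraph follow directly from the percentages and hospital figures quoted earlier. As a quick back-of-envelope check (a sketch of the arithmetic, not part of the original course material), using integer truncation to match the "up to" phrasing:

```python
# Back-of-envelope check of the upper-bound estimates quoted above
# (admission and surgery figures for Great Ormond Street Hospital, 2007/8).
# Integer arithmetic (value * percent // 100) truncates, matching "up to".

admissions = 22_000   # patients admitted per year
operations = 15_294   # operative procedures per year

anxious_admissions   = admissions * 80 // 100  # up to 80% experience anxiety
perioperative        = operations * 75 // 100  # 50-75% anxious peri-operatively
dysfunctional_3weeks = operations * 60 // 100  # 24-60% dysfunctional at 3 weeks
dysfunctional_1year  = operations * 12 // 100  # 4-12% still affected at 1 year

print(anxious_admissions)    # 17600
print(perioperative)         # 11470
print(dysfunctional_3weeks)  # 9176
print(dysfunctional_1year)   # 1835
```

Each printed value matches the corresponding figure cited in the text (17,600; 11,470; 9,176; 1,835).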
So, if we know that the number of children experiencing anxiety is high and the consequence of this anxiety is significant, what is already being done about this problem and what further action needs to be taken? When you consider the numbers involved, it is immediately obvious that only a small proportion of these children are being managed by psychologists, psychiatrists and play specialists.
These teams play a vital role as part of the multi-disciplinary approach to managing the problem and they are extremely effective in tackling complex and severe cases. They are a valuable but limited resource and from a practical and economic perspective can only be utilised in caring for a small number of children affected. You may then ask yourself who is looking after the vast majority of cases that are left. The answer is simple - YOU ARE. This might appear to make little sense. Applying the figures we have from the feedforward surveys from POEMS courses to hospital staff in general, it emerges that more than 50% have no formal training in the management of anxiety in children, while of the rest, less than 5% have received anything more than the occasional lecture on the subject.
So what is the solution?
We believe that the number of children affected is so high that the only solution is for every practitioner involved in the care of children in hospital to have a core competency in anxiety management. This would mean that every member of staff would be trained to identify, manage and potentially prevent anxiety in children under their care. Every member of staff that a child comes into contact with from the moment they step through the hospital doors would be able to significantly reduce the morbidity experienced by that child.
If multiple members of staff, all part of a multi-disciplinary team equally adept with respect to the management of anxiety are caring for the child, the experience and medical outcome for that child will be dramatically improved. Every interaction with members of staff would then represent an opportunity for that child - an opportunity to drastically improve the quality and outcome of their treatment and significantly reduce the emotional pain and morbidity they experience.
The inception of the POEMS For Children anxiety management course finally makes this goal achievable. The course will allow us to achieve this goal by making practitioners aware of the problem, by making practitioners aware of the consequences of anxiety and by offering practical techniques to allow the effective detection, management, reduction and prevention of anxiety in children receiving medical care. The potency of the management strategies introduced on the course hinges on the careful selection of skills and techniques from multiple areas of therapy and practice including hypnotherapy, psychology and psychiatry amongst many others, with subsequent integration of the techniques within a structured, dynamic and patient centred approach. The skills and management strategies gained from the course dove-tail with, complement and enhance the skills the practitioner already has.
Is the course approved by professional bodies?
The course is approved for 5 CPD points.
What is Mindfulness? Rooted in Buddhist meditation, mindfulness is the practice of staying “in the moment.” It involves tuning into your thoughts, feelings, body sensations, and environment. It involves accepting your thoughts and feelings without judging them as right or wrong. As opposed to rehashing the past or worrying about the future, mindfulness means paying attention to what you’re feeling in the present moment. In essence, mindfulness means paying attention mindfully – on purpose.
Mindfulness-Based Stress Reduction
Since 1979, mindfulness practice has increasingly entered the mainstream. Mindfulness-Based Stress Reduction (MBSR), championed by Jon Kabat-Zinn of the University of Massachusetts Medical School, has had its physical and mental health benefits affirmed by thousands of studies. MBSR has since been adapted for use in hospitals, schools, treatment centers, prisons, and more.
In a nutshell, the ABC’s of practicing mindfulness can be summarized as follows:
- Awareness of your present-moment thoughts, feelings, and sensations
- Being with your present experience
- Creating a gap between your experience and reaction to it, which allows you to make wiser choices and to respond versus react
The many proven benefits of mindfulness include:
- Reducing automatic/habitual reactions
- Slowing down/stopping brain chatter
- Becoming more aware
- Fully experiencing the present
- Reducing stress
- Learning to regulate emotions
- Responding more effectively to difficult situations
- Interrupting self-sabotaging patterns of behavior
- Gaining clarity
- Achieving balance
- Increasing focus
- Improving relationships
- Creating new, healthier neural pathways
- Fighting depression
- Improving memory and attention skills
- Serving as a powerful aid to therapy
- Remaining accessible to young and old alike
Mindfulness in Schools
Research shows that teaching mindfulness in schools reduces behavioral problems such as aggression and increases students’ ability to focus and pay attention. Teachers trained in mindfulness also show greater compassion and empathy and experience fewer negative emotions.
Mindfulness in Prisons
In prisons, recent evidence shows that the practice of mindfulness decreases prisoner outbursts, hostility and rage, increases awareness of thoughts and feelings, and is a powerful tool in their rehabilitation and reintegration.
Healthcare Professionals and Mindfulness
Healthcare professionals also report the benefits of mindfulness, including coping with stress, connecting more with clients, reducing negative emotions, increasing capacity for compassion, and improving overall quality of life.
Types of Mindfulness Practices
Types of mindfulness practices include:
- Mindfulness Based Stress Reduction (MBSR)
- Mindfulness Based Cognitive Therapy (MBCT)
- Mindfulness Based Parenting
- Mindfulness Based Childbirth
- Body Scan Exercises
- Body Awareness Exercises
- Learning How the Mind Works
- Mindfulness Meditation
- Walking Meditation
- Loving Kindness Meditation
Learn more about cultivating mindfulness and enjoy a more balanced, peaceful, productive, happy, and healthy life.
In figurative language quails are symbols of heat or, more basically, of love-heat. It will be observed that in China the quail was the bird of the south and of Fire, being the Red Bird, the symbol of Summer. In Chinese astronomy it gave its name to the central star of the Summer Palace.
Notwithstanding, quail symbolism is linked especially to the behaviour of a migratory bird and to its underlying cyclical nature. What is more, this was of a rather strange character, since it led in China to the phoenix being substituted for the quail.
Like the swallow, in Ancient China the quail returned with the fine weather and was believed to change itself into a field-mouse or frog during the Winter. Springtime jousting imitated the mating habits of quail (and partridge and wild goose). This seasonal rhythm, this coming and going of migratory birds, was an image of the alternations of yin and yang, birds (celestials) changing into animals which lived either underground or in the water.
The Vedic myth of the deliverance of the quail by the horse-headed twin gods, the Ashvins, is well known. It would appear to possess a significance of the same order, even if it forms part of a cycle of different scope. Current interpretations link the Ashvins to Heaven and Earth, to day and night. The quail (vartika) which they freed from the jaws of the wolf must therefore be the dawn, the sunlight previously swallowed and shut in the cavern. It will be remembered that the Chinese dawn-clouds have five colours ‘like the quail’s egg’ and also that the quail always flies by night. Christinger observes that vartika means ‘she who returns’ and derives from the same root as ortyx, the Greek name for the bird. Ortygia, ‘Quail Island’, was the birthplace of Artemis (Moon) and Apollo (Sun), whose alternation bears some relation to that of the Ashvins. It goes without saying that this light, set free from the clutch of darkness - or from the Underworld - is not simply that of the rising Sun but also that of the spiritual Sun or, more accurately, the enlightenment which comes from intellectual effort or initiation.
Nor should it be forgotten that, along with manna, quails provided the miraculous food on which the Children of Israel were nourished in the Wilderness.
CSI meets conservation
Scant amounts of DNA reveal conservation clues, Stanford University researchers and their colleagues found
Credit: Prasenjeet Yadav
The key to solving a mystery is finding the right clues. Wildlife detectives aiming to protect endangered species have long been hobbled by the near impossibility of collecting DNA samples from rare and elusive animals. Now, researchers at Stanford and the National Centre for Biological Sciences at India’s Tata Institute of Fundamental Research have developed a method for extracting genetic clues quickly and cheaply from degraded and left-behind materials, such as feces, skin or saliva, and from food products suspected of containing endangered animals.
Their proof of concept – outlined April 10 in Methods in Ecology and Evolution – could revolutionize conservation approaches and policies worldwide, the researchers said.
“It’s CSI meets conservation biology,” said co-author Dmitri Petrov, the Michelle and Kevin Douglas Professor in the School of Humanities and Sciences.
The specter of extinction hangs over more than a quarter of all animal species, according to the best estimate of the International Union for Conservation of Nature, which maintains a list of threatened and extinct species. Conservationists have documented extreme declines in animal populations in every region of Earth.
Clues from DNA
Helping species recover often depends on collecting DNA samples, which can reveal valuable information about details ranging from inbreeding and population history to natural selection and large-scale threats such as habitat destruction and illegal wildlife trade. However, current approaches tend to require relatively large amounts of DNA, or expensive and often inefficient strategies for extracting the material. Getting meaningful information rapidly from lower-concentration, often degraded and contaminated DNA samples requires expensive and specialized equipment.
A solution may lie in an ongoing collaboration between Stanford’s Program for Conservation Genomics, including the labs of Petrov and co-authors Elizabeth Hadly and Stephen Palumbi, with India’s National Centre for Biological Sciences, including the lab of co-author Uma Ramakrishnan, a molecular ecologist and former Fulbright faculty fellow at Stanford.
“I have been working on tiger conservation genetics for over a decade, but have been frustrated at how slow and unreliable the process of generating genetic data can be,” Ramakrishnan said. “Conservation needs answers fast, and our research was not providing them fast enough.”
The researchers looked at endangered wild tigers in India and overfished Caribbean queen conchs, examining tiger feces, shed hair and saliva found on killed prey, as well as fried conch fritters purchased in U.S. restaurants. All of the samples were too impure, mixed or degraded for conventional genetic analysis.
“Our goal was to find extremely different species that had strong conservation needs, and show how this approach could be used generally,” said Palumbi, the Jane and Marshall Steele Jr. Professor of Marine Biology. “The King of the Forest – tigers – and Queen of the Caribbean – conch – were ideal targets.”
Inexpensive and effective
Together, the team improvised a new approach, using a sequencing method that amplifies and reads small bits of DNA with unique differences in each sample. By doing this simultaneously across many stretches of DNA in the same test tubes, the researchers kept the total amount of DNA needed to a minimum. Making the procedure specific to tiger and conch DNA allowed for the use of samples contaminated with bacteria or DNA from other species.
The technology proved highly effective at identifying and comparing genetic characteristics. For example, the method worked with an amount of tiger DNA equivalent to about one-one-hundred-thousandth the amount of DNA in a typical blood sample. The method had a higher failure rate in conchs because the researchers did not have whole genomes at their disposal.
The approach’s effectiveness, speed and affordability – implementation costs could be as low as $5 per sample, according to the researchers – represents a critical advance for wildlife monitoring and forensics, field-ready testing, and the use of science in policy decisions and wildlife trade.
“It is easy to implement and so can be done in labs with access to more or less basic equipment,” said co-author Meghana Natesh of the National Centre for Biological Sciences and Sastra University in India. “If a standard procedure is followed, the data generated should be easy to share and compare across labs. So monitoring populations across states or even countries should be easier.”
The scientists have made their methods freely available.
Petrov is also a member of Stanford Bio-X and the Maternal & Child Health Research Institute, as well as an affiliate of the Stanford Woods Institute for the Environment. Hadly and Palumbi are also members of Stanford Bio-X and senior fellows at the Stanford Woods Institute for the Environment. Ramakrishnan is also a senior fellow at the Wellcome Trust / DBT India Alliance. Other Stanford co-authors include postdoctoral scholars Ryan Taylor and Nathan Truelove. Taylor is currently a consultant at End2End genomics, which is developing the DNA method.
Funding for this research provided by the Wildlife Conservation Trust, the U.S. Department of State, the Wellcome Trust / DBT India Alliance, the Summit Foundation and the Smithsonian Institution.
Fantine Abby July 14, 2020 Worksheets
Learning about numbers includes recognizing written numbers as well as the quantity those numbers represent. Mathematics worksheets should provide a variety of fun activities that teach your child both numbers and quantity. Look for a variety of different ways to present the same concepts. This aids understanding and prevents boredom. Color-by-Numbers pictures are a fun way to learn about numbers and colors too.
This can be done only if the child gets the basics right. Worksheets are a great way of testing a child while he has fun at the same time. Subtraction worksheets should be solved by a child regularly so that he understands the subject well.
If you cannot purchase a math worksheet because you think you may not have time to, then you can create one using your home computer and customize it for your kid. Doing this is easy. All you need is the Microsoft Word application on your computer. Just open the application and start a new document, making sure the new document is based on a template. Then, with your internet connection on, search for the term "math worksheet". You will get templates of all kinds for your worksheet. Choose the one you want and download it.
The math worksheets are specially designed for kids and adults. They are very helpful in improving mathematical aptitude and skills. They can be easily used by school students as well as college goers. They are available from elementary to advanced level. You can also buy customized worksheets. Customized sheets can be planned according to the level of your school going child.
There are many parents, who make use of the writing worksheets for teaching the children about the writing patterns, even before they start their school. There are lots of options available and even online options are prevalent these days. This will make your kids ready for going to high school. Online means are easy for parents and teachers and also interest the children to get the interest in getting the ideas about writing.
You can find several types of sheets online and offline. You can choose among multiplication, Addition, Subtraction, Division, Geometry, Decimal, Shapes and Space worksheets.
Miss Rumphius Literature Guide
What is our responsibility in the world? To Alice Rumphius, her responsibility was to do something to make the world more beautiful. But what does that mean to her and what does it mean to you? Making the world a more beautiful place may seem like a big task, but small simple gestures may have a bigger impact than you imagine. This book inspires the reader to think of small ways to do something for the common good. What will you do to make the world more beautiful?
ASK: What can you do to make the world a beautiful place?
SHOW: Look at the pictures throughout the book. Notice that the character goes to faraway places and grows older.
CONNECT: This is a story that tells about one person from when she is a child to a very old woman. There are three things she wants to do in her life. Let’s find out what they are.
ASK: What are the three things Miss Rumphius wants to do?
SHOW: What job does she have? Where does she travel? How does she feel about other people?
CONNECT: What do you think she will do to make the world beautiful? What would you like to do to make the world beautiful in your lifetime?
ASK: What does the young Alice at the end of the story say she wants to do in her life? Why do you think she wants to do the same thing as her great aunt?
SHOW: Look at the final pictures of the children enjoying the company and flowers of Miss Rumphius. What does that tell you about Miss Rumphius?
CONNECT: Although Miss Rumphius seems to be alone her whole life, she is part of several different communities. What are the different communities? What different communities do you belong to? (A community is a group of people coming together for a common purpose or in a common place.)
- Design and carry out a plan to make the world a more beautiful place. Ideas include cleaning up a park, planting flowers, and organizing a recycling effort.
- Think of several examples of beauty in the world. Sort your ideas into different lists, such as natural things/made by humans or big/small or things to see/actions.
- Think of several examples of things in the world that could be made beautiful. List the steps to make one thing beautiful.
- Brainstorm a list of possible jobs for yourself and other family members. Look at the jobs in the book, jobs of people you know, and jobs related to current interests. When you are ready to choose a job for yourself (as a grown up), what do you think are the important things to consider? Why do you think Miss Rumphius chose the job she did?
- Design a seed packet for one kind of flower. Include labels, pictures, and descriptions. Use words that would appeal to Miss Rumphius.
- Observe a flower carefully. Then do one or more of the following:
- Use a ruler to measure it.
- Write a careful description of the flower.
- Draw a detailed picture of it.
- Write a poem about the flower using descriptions related to several of your senses.
- Discuss the meaning of the word philanthropy (the giving or sharing of one’s time, talent, or treasure for the sake of another or the common good). Talk about how philanthropy is related to the story of Miss Rumphius. Talk about ways that your family is/can be philanthropists with time, with talent, and with treasure. Here are some ways other people are philanthropic:
- Jerry and his mother spend every Thanksgiving at a local soup kitchen helping to serve meals to the homeless.
- Mr. Roberts’ class goes outside once a week to pick up trash around a local neighborhood.
- Bobby gives half of his allowance each week to an organization to support homeless people.
- Sheila baby-sits for her little cousin once a week so her Aunt can study and finish college.
- Katrina and Troy visit an area nursing home once a month to visit the elderly residents and give them seasonal cards and decorations.
- Lakeside Elementary has a mitten and sock clothesline at school to provide new mittens and socks for homeless children.
A new photo from NASA's Hubble Space Telescope shows two star clusters that appear to be in the early stages of merging.
The colliding clusters are 170,000 light-years away in the Large Magellanic Cloud, a small satellite galaxy of our own Milky Way. They're found in the core of a massive star-forming region called 30 Doradus, which is also known as the Tarantula Nebula.
Scientists originally thought the clump of stars was a single cluster, but the new Hubble images suggest there are two distinct groups that differ in age by about 1 million years, scientists said.
The 30 Doradus complex has been actively forming stars for about 25 million years. Researcher Elena Sabbi, of the Space Telescope Science Institute in Baltimore, Md., and her team began looking at the area while searching for fast-moving "runaway stars," which have been booted from the clusters that gave birth to them. [Hubble Telescope Foresees Star Cluster Crash (Video)]
"Stars are supposed to form in clusters, but there are many young stars outside 30 Doradus that could not have formed where they are; they may have been ejected at very high velocity from 30 Doradus itself," Sabbi said in a statement.
The giant gas clouds that condense to form star clusters can fragment into smaller pieces, according to some models. Once these smaller bits begin producing stars, they might then merge to become a bigger system. Sabbi and her team think this may be happening in 30 Doradus.
While perusing the Hubble data, the team noticed something odd about the supposed single cluster at the heart of 30 Doradus. Rather than being spherical as expected, it's elongated in places — just like merging galaxies that get stretched by each other's gravitational pull.
There are also lots of high-speed runaway stars around 30 Doradus, researchers said. They may have been ejected after a process called core collapse, in which huge stars sink to the center of a cluster. This makes the core unstable, and the big stars begin booting each other out into space.
The big cluster in the center of 30 Doradus, known as R136, is too young to have experienced a core collapse, researchers said. But the phenomenon can occur more quickly in small systems, so the runaway stars may have been produced after a smaller cluster merged into R136.
The researchers hope to tease out more of the details via follow-up observations with Hubble and other telescopes. Further study of 30 Doradus could help scientists understand the details of cluster formation and how stars formed in the early universe, researchers said. |
- Is heat from a fire radiation or convection?
- Does heat transfer in a vacuum?
- What does fervor mean?
- What are the methods of heat transfer?
- What is an example of heat transfer by radiation?
- What form of heat transfer is most important?
- What is another name for heat?
- What are four methods of heat loss?
- What is conduction vs convection?
- What is heat transfer formula?
- What happens during heat transfer?
- What’s another word for sun?
- What are the 4 methods of heat loss and give an example of each?
- What are the three types of heat transfer?
- What are the 4 types of heat loss?
- What are 4 examples of radiation?
- What are 4 examples of convection?
- What is a real life example of convection?
- What means pertaining to heat?
- What are two main categories of energy?
- What are the two types of convection?
Is heat from a fire radiation or convection?
The thermal radiation from the fire spreads out in all directions and is able to reach you. This thermal radiation is mostly in the form of infrared waves and visible light. In contrast, the campfire heat transferred via convection shoots straight up into the sky and never reaches you (i.e., hot air billows upwards).
Does heat transfer in a vacuum?
Heat typically travels through three main pathways: conduction, convection and radiation. But radiation (heat transfer via electromagnetic waves) can occur across a vacuum, as in the sun warming the Earth.
What does fervor mean?
Fervor (noun): great warmth and earnestness of feeling, as in "to speak with great fervor"; also, intense heat.
What are the methods of heat transfer?
Heat transfer is classified into various mechanisms, such as thermal conduction, thermal convection, thermal radiation, and transfer of energy by phase changes. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer.
What is an example of heat transfer by radiation?
This type of transfer takes place in a forced-air furnace and in weather systems, for example. Heat transfer by radiation occurs when microwaves, infrared radiation, visible light, or another form of electromagnetic radiation is emitted or absorbed. An obvious example is the warming of the Earth by the Sun.
What form of heat transfer is most important?
Conduction. Heat is transferred by conduction when adjacent atoms vibrate against one another, or as electrons move from one atom to another. Conduction is the most significant means of heat transfer within a solid or between solid objects in thermal contact.
What is another name for heat?
Synonyms for heat include: warmth, fieriness, mellowness, hot weather, dog days, coziness (US), glow, body heat, and thermal reading.
What are four methods of heat loss?
After that point heat is lost to the environment through four mechanisms: radiation, convection, evaporation, and conduction. These are the four types of heat loss from a patient to the relatively cold environment.
What is conduction vs convection?
Conduction is the transfer of thermal energy through direct contact. Convection is the transfer of thermal energy through the movement of a liquid or gas. Radiation is the transfer of thermal energy through thermal emission. Hope this helps!
What is heat transfer formula?
Heat transferred from one system to another is given by the following equation: Q = m × c × ΔT, where m is the mass, c is the specific heat capacity, and ΔT is the change in temperature.
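The formula can be checked with a short calculation. The numbers below (2 kg of water, a specific heat of about 4186 J/(kg·K), a 30 K temperature rise) are illustrative assumptions, not values from the source:

```python
# Sensible heat transfer: Q = m * c * deltaT

def heat_transferred(mass_kg: float, specific_heat: float, delta_t: float) -> float:
    """Return the heat Q in joules needed to change a body's temperature.

    mass_kg       -- mass of the substance in kilograms
    specific_heat -- specific heat capacity in J/(kg*K)
    delta_t       -- temperature change in kelvin (or degrees Celsius)
    """
    return mass_kg * specific_heat * delta_t

# Heating 2 kg of water (c ~ 4186 J/(kg*K)) by 30 K:
q = heat_transferred(2.0, 4186.0, 30.0)
print(f"Q = {q:.0f} J")  # 2 * 4186 * 30 = 251160 J
```

Note that Q scales linearly with each factor: doubling the mass or the temperature change doubles the heat required.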
What happens during heat transfer?
Heat can travel from one place to another in three ways: conduction, convection and radiation. Metal is a good conductor of heat. Conduction occurs when a substance is heated: particles gain more energy and vibrate more. These molecules then bump into nearby particles and transfer some of their energy to them.
What’s another word for sun?
Synonyms for sun include: dawn, sunrise, sunshine, break of dawn, daytime, light of day, sunlight, daylight hours, hours of daylight, and sunlight hours.
What are the 4 methods of heat loss and give an example of each?
Heat can be lost through the processes of conduction, convection, radiation, and evaporation. Conduction is the process of losing heat through physical contact with another object or body. For example, if you were to sit on a metal chair, the heat from your body would transfer to the cold metal chair.
What are the three types of heat transfer?
The first is conduction, which occurs in solids or fluids that are at rest, such as this metal bar. The second form of heat transfer is convection, which occurs in liquids or gases that are in motion. And the third form of heat transfer is radiation, which takes place with no material carrier.
What are the 4 types of heat loss?
The body loses heat through: evaporation of water from your skin if it is wet (sweating); radiation (similar to heat leaving a woodstove); conduction (such as heat loss from sleeping on the cold ground); and convection (similar to sitting in front of a fan or having the wind blow on you).
What are 4 examples of radiation?
Examples of everyday radiation: visible light, infrared light, near ultraviolet light, microwaves, low frequency waves, radio waves, waves produced by mobile phones, and a campfire’s heat.
What are 4 examples of convection?
Examples of convection in everyday life include: breeze (the formation of sea and land breezes are classic examples of convection), boiling water, blood circulation in warm-blooded mammals, air-conditioners, radiators, refrigerators, hot air poppers, and hot air balloons.
What is a real life example of convection?
Everyday examples of convection: a radiator puts warm air out at the top and draws in cooler air at the bottom; the steam you see when drinking a cup of hot tea indicates that heat is being transferred into the air; ice melts because heat moves to the ice from the air.
What means pertaining to heat?
If it has to do with heat, it’s thermal. The Greek word therme, meaning “heat,” is the origin of the adjective thermal. Something that is thermal is hot, retains heat, or has a warming effect.
What are two main categories of energy?
After hundreds of years of observation and experimentation, science has classified energy into two main forms: kinetic energy and potential energy. In addition, potential energy takes several forms of its own. Kinetic energy is defined as the energy of a moving object.
What are the two types of convection?
Basically, there are two types of convection ovens: Convection and True Convection. Convection is your normal oven but with an added fan on the back to circulate air. True Convection, or European Convection, features a heating element behind the fan, allowing for better cooking results than standard convection. |
Presbycusis is a gradual loss of hearing that happens as we age.
The inner ear senses vibrations created by sound. Hair cells in this area change the vibrations into electric signals. These signals move through nerves to the brain so that you can hear. Over time this system can wear down. The normal aging process can cause:
- A wearing down of the inner ear
- Changes in the bone structure of the middle ear
- Changes in the nerves needed for hearing
Other factors that can cause damage over time include:
- Regular exposure to loud noises—damages hair cells of the inner ear that are critical to hearing
- Genetic factors
Presbycusis is more common in:
- People over 75 years old
- People with pale skin
Other factors that may raise your chance of presbycusis are:
- Family history of hearing loss with aging
- Being around loud noises for work or hobbies
- Being overweight
Having health problems, such as:
- Problems with the immune system
- Renal failure
- Cerebrovascular disease
Hearing loss happens slowly over time in both ears. Common symptoms include:
- Problems hearing high sounds, such as female voices, the phone ringing, or birds
- Sounds that are not clear
- Problems hearing people talking, such as in noisy places or while speaking on the phone
- Ringing in one or both ears (tinnitus)
- Background sounds that are too loud or annoying
- Ear fullness with or without vertigo, a feeling of spinning when you are not moving
You will be asked about your symptoms and medical past. A physical exam will be done. The doctor will check your inner ear with a lighted tool. Some basic tests will help to check your hearing.
Other tests may include:
- Audiometry: to check the level and amount of hearing loss
- Weber test: to find out if the hearing loss is only on one side (rule out other causes)
- Rinne test: to test whether the hearing loss is due to nerve problems (rule out other causes)
Hearing loss can't be reversed. The goal of treatment is to decrease the impact of hearing loss on quality of life. Other steps may help to slow further hearing loss. Options include:
Steps that may improve your ability to hear include:
- Stand closer to people and face them when you speak.
- Repeat back what you hear to people speaking to make sure you understood them.
- Ask people to rephrase things they say instead of asking them to repeat them.
- Ask others to speak louder and more clearly.
- Try to lower background noise.
Hearing Aids and Assistive Listening Devices
Talk with a specialist to see if a hearing aid is right for you. An audiologist will then be able to do tests to find the best type of hearing aid for you. You may need to replace hearing aids with other models if your hearing loss gets worse.
There are also devices that can make voices over the phone more loud and clear.
A hearing aid may not be helpful for severe hearing loss. Some people with this type of hearing loss may benefit from a cochlear implant. It may improve the way sound reaches the brain. It can provide partial hearing to the profoundly deaf.
To help reduce your chance of presbycusis:
- Follow treatment plans to manage health problems that may cause hearing loss.
- Avoid being around loud noises and sounds. This includes hazards at work, home, and during activities.
- Protect your ears when you work with loud machinery or are in loud places.
- If you smoke, talk to your doctor about how you can quit.
- Talk to your doctor about supplements that may slow down age-related hearing loss.
- Reviewer: EBSCO Medical Review Board Michael Woods, MD, FAAP
- Review Date: 09/2018
- Update Date: 04/23/2018
Because clotting has a vocabulary of its own, I have included this
Clotting Glossary to help you understand some of the terms found in
the material below.

Typically clots are not called acute and chronic, but you are
correct that fibrin and calcium have roles in the process. Calcium,
also known as Factor IV, is necessary in the clotting scheme, but I've
not heard of clots becoming calcified without an indwelling catheter
of some kind. (I would suppose this could theoretically happen, but it
must not be common.)
There are many types of blood clots that can occur in the veins:
· Deep vein thrombosis (DVT) is a blood clot occurring in a deep vein.
· Pulmonary embolism is a blood clot that breaks loose from a vein and
travels to the lungs.
· Chronic venous insufficiency isn't a blood clot, but a condition
that occurs when damaged vein valves or a DVT causes long-term pooling
of blood and swelling in the legs. If uncontrolled, fluid will leak
into the surrounding tissues in the ankles and feet, and may
eventually cause skin breakdown and ulceration.
Blood Clotting Disorders
Blood clotting disorders are conditions that make the blood more
likely to form blood clots in the arteries and veins. These conditions
may be inherited (congenital, occurring at birth) or acquired during
life and include:
· Elevated levels of factors in the blood which cause blood to clot
(fibrinogen, factor 8, prothrombin)
· Deficiency of natural anticoagulant (blood-thinning) proteins
(antithrombin, protein C, protein S)
· Elevated blood counts
· Abnormal Fibrinolysis (the breakdown of fibrin)
· Abnormal changes in the lining of the blood vessels (endothelium)
Blood clots typically occur when a blood vessel has sustained damage,
through injury, or plaque formation. Platelets are the first to
arrive, sticking to the injured edge of the vessel. The platelets act
not only as a physical "plug" but also release a substance that
attracts more platelets, effectively stopping the bleeding.
Next come clotting factors, which induce the production of fibrin:
strands of protein that stick to the platelets, sealing the wound.
After a few days, the blood vessel injury heals and the clot
dissolves, with the help of other anti-clotting factors.
When a wound occurs, several changes take place to minimize blood
loss. First, the blood vessel slows the flow of blood past the wound
site. Next, platelets collect at the wound site to form a plug.
Finally, fibrin clots form scabs to replace these temporary platelet
plugs. Fibrin clot formation is dependent on adequate function of
clotting factors. Multiple factors can prevent fibrin clot formation.
The platelets are tiny cellular elements, made in the bone marrow,
that travel in the bloodstream waiting for a bleeding problem to
develop. When bleeding occurs chemical reactions change the surface of
the platelet to make it "sticky." Sticky platelets are said to have
become "activated." These activated platelets begin adhering to the
wall of the blood vessel at the site of bleeding, and within a few
minutes they form what is called a "white clot." (A clump of platelets
appears white to the naked eye.)
The thrombin system consists of several blood proteins that, when
bleeding occurs, become activated. The activated clotting proteins
engage in a cascade of chemical reactions that finally produce a
substance called fibrin. Fibrin can be thought of as a long, sticky
string. Fibrin strands stick to the exposed vessel wall, clumping
together and forming a web-like complex of strands. Red blood cells
become caught up in the web, and a ?red clot? forms.
A mature blood clot consists of both platelets and fibrin strands. The
strands of fibrin bind the platelets together, and "tighten" the clot
to make it stable.
In arteries, the primary clotting mechanism depends on platelets. In
veins, the primary clotting mechanism depends on the thrombin system.
But in reality, both platelets and thrombin are involved, to one
degree or another, in all blood clotting.
The platelets produce a substance that combines with calcium ions in
the blood to form thromboplastin, which in turn converts the protein
prothrombin into thrombin in a complex series of reactions. Thrombin,
a proteolytic enzyme, converts fibrinogen, a protein substance, into
fibrin, an insoluble protein that forms an intricate network of minute
threadlike structures called fibrils and causes the blood plasma to
gel. The blood cells and plasma are enmeshed in the network of fibrils
to form the clot. Blood clotting can be initiated by the extrinsic
mechanism, in which substances from damaged tissues are mixed with the
blood, or by the intrinsic mechanism, in which the blood itself is
traumatized. More than 30 substances in blood have been found to
affect clotting; whether or not blood will coagulate depends on a
balance between those substances that promote coagulation
(procoagulants) and those that inhibit it (anticoagulants).
Blood clots (fibrin clots) are the clumps that result from
coagulation of the blood. A blood clot that forms in a vessel or
within the heart and remains there is called a thrombus. A thrombus
that travels from the vessel or heart chamber where it formed to
another location in the body is called an embolus, and the disorder,
an embolism. For example, an embolus that occurs in the lungs is
called a pulmonary embolism.
Sometimes, a piece of atherosclerotic plaque, small pieces of tumor,
fat globules, air, amniotic fluid, or other materials can act in the
same manner as an embolus.
Thrombi and emboli can firmly attach to a blood vessel and partially
or completely block the flow of blood in that vessel. This blockage
deprives the tissues in that location of normal blood flow and oxygen.
This is called ischemia and if not treated promptly can result in
damage or death (infarction or necrosis) of the tissues in that area.
Following damage to a blood vessel, vascular spasm occurs to reduce
blood loss while other mechanisms also take effect:
Blood platelets congregate at the site of damage and amass to form a
platelet plug. This is the beginning of the process of the blood
"breaking down" from its usual liquid form in such a way that its
constituents play their own parts in processes to minimise blood loss.
Blood normally remains in its liquid state while it is within the
blood vessels, but when it leaves them the blood may thicken and form a clot.

Blood clotting (technically "blood coagulation") is the process by
which (liquid) blood is transformed into a solid state.
This blood clotting is a complex process involving many clotting
factors (incl. calcium ions, enzymes, platelets, damaged tissues)
activating each other.
The end result of the clotting pathway is the production of thrombin
for the conversion of fibrinogen to fibrin. Fibrinogen is a dimer
soluble in plasma. Exposure of fibrinogen to thrombin results in rapid
proteolysis of fibrinogen and the release of fibrinopeptide A. The
loss of small peptide A is not sufficient to render the resulting
fibrin molecule insoluble, a process that is required for clot
formation, but it tends to form complexes with adjacent fibrin and
fibrinogen molecules.

A second peptide, fibrinopeptide B, is then cleaved by thrombin, and
the fibrin monomers formed by this second proteolytic cleavage
polymerize spontaneously to form an insoluble gel. The polymerized
fibrin, held together by noncovalent and electrostatic forces, is
stabilized by the transamidating enzyme factor XIIIa, produced by the
action of thrombin on factor XIII. These insoluble fibrin aggregates
(clots), together with aggregated platelets (thrombi), block the
damaged blood vessel and prevent further bleeding."
"Platelet aggregation and fibrin formation both require the
proteolytic enzyme thrombin. Clotting also requires:
· calcium ions (Ca2+) (which is why blood banks use a chelating agent
to bind the calcium in donated blood so the blood will not clot in the
bag)
· about a dozen other protein clotting factors. Most of these
circulate in the blood as inactive precursors. They are activated by
proteolytic cleavage becoming, in turn, active proteases for other
factors in the system.
"What essential function do gamma-carboxyglutamic acid residues endow
upon a protein? There appear to be two major effects:
· First, they enable the protein to bind to membrane surfaces. Much of
blood clotting is a result of blood-clotting proteins assembling into
a complex on the membranes of platelets and endothelial cells; within
these complexes, the factors can efficiently contact one another to
become activated and participate in clot formation. Additionally,
calcium is necessary for the blood clotting reaction.
The proposed mechanism involving carboxylation is that
gamma-carboxyglutamic acid residues strongly chelate calcium, and
positively-charged calcium forms ion bridges to negatively-charged
phosphate head groups of membrane phospholipids.
· Second, gamma-carboxyglutamic acid groups appear to participate in
forming the necessary structure of such proteins by forming
calcium-mediated intrachain interactions that link two
gamma-carboxyglutamic acids to a calcium ion (similar to disulfide
bridges, but much shorter)."
"A blood clot forms as a result of concerted action of some 20
different substances, most of which are plasma glycoproteins."
These are seen on a chart as well as an illustration of the intrinsic
and extrinsic pathways.
"The phenomenon of blood coagulation is traditionally distinguished
into two pathways. These pathways are the intrinsic and the extrinsic
pathways (Figure below). The intrinsic pathway is defined as a cascade
that utilizes only factors that are soluble in the plasma, whereas the
extrinsic pathway consists of some factors that are insoluble in the
plasma, e.g., membrane-bound factors (factor VII). However, the
boundary differentiating these two is becoming more and more blurred."
"Vitamin K is required in the liver biosynthesis of the prothrombin
gamma-carboxyglutamic groups by participating in the carboxylation of
the gamma-carbon of glutamic acid. These carboxy groups are required for
binding calcium to prothrombin, which induces a conformational change in
prothrombin, enabling it to bind to co-factors on the phospholipid
surfaces during its conversion to thrombin by factor Xa, factor V, and
platelet phospholipids in the presence of calcium."
"Atherosclerosis begins when cholesterol and other fatty substances
attach to and infiltrate the endothelial lining of coronary arteries.
These fatty deposits eventually form a large atherosclerotic plaque
that increases in size and juts out into the lumen of the artery. Even
plaques that may appear to be clinically insignificant can precipitate
a thrombotic event; indeed most thrombotic events are caused by small
plaques rupturing and resulting in an occlusive thrombus or blood
clot. Any plaque may rupture due to natural causes, such as shear
forces in the artery or gradual decay. Alternatively, rupture may
result from mechanical injury during a procedure such as percutaneous
transluminal coronary angioplasty (with or without stent placement)."
"When plaque rupture occurs, platelet adhesion, activation, and
aggregation follow. Plaque rupture exposes the endothelium (inner
lining) of the artery to the bloodstream. Platelets that come in
contact with the sub-endothelium will stick to it, binding to von
Willebrand factor (vWf), and begin to form a platelet monolayer that
covers the injured site, which protects against continued exposure of
the sub-endothelium and allows the healing process to begin. The
sub-endothelium also contains collagen, a potent platelet agonist,
which causes the bound platelets to activate and undergo a
conformational shape change. At this time, platelets will each express
70,000 - 100,000 glycoprotein (GP) IIb-IIIa receptors and release
internal pools of signaling agents, including ADP (adenosine
diphosphate), thromboxane A2, serotonin, and epinephrine into the
bloodstream."
Please read the entire page for complete information. There are also
links to animated videos showing clot development.
"A fibrous, soluble protein called fibrinogen ("clot-maker") comprises
about 3% of the protein in blood plasma. Fibrinogen has a sticky
portion near the center of the molecule, but the sticky region is
covered by little amino acid chains with negative charges. Because
like charges repel, these chains keep fibrinogen molecules apart.
When a clot forms, a protease (protein-cutting) enzyme clips off the
charged chains. This exposes the sticky parts of the molecule, and
suddenly, fibrinogens (which are now called fibrins) start to stick
together, beginning the formation of a clot.
The protease that cuts off the charged chains is called thrombin. So,
just like the lobster clotting system, the heart of the reaction
involves just two molecules: fibrinogen and thrombin. But, unlike the
lobster, there's a lot more to this machine. It turns out that
thrombin itself exists in an inactive form called prothrombin. So it,
just like fibrinogen, has to be activated before it can start the
clotting process. What activates prothrombin? Here's where life gets
really interesting. Prothrombin, a protease itself, is activated by
another protease called Factor X, which clips off part of the inactive
protein to produce active, clot-forming thrombin. OK, so what
activates Factor X? Believe it or not, there are still more proteases,
two of them, actually, called Factor VII and Factor IX, that can
switch on Factor X."
The clotting cascade is a detailed and interesting study, far beyond
the scope of this question. There is a chart on this site that
illustrates the Intrinsic Pathway, or cascade.
Everything you wanted to know about clotting!:
"Degradation of fibrin clots is the function of plasmin, a serine
protease that circulates as the inactive proenzyme, plasminogen. Any
free circulating plasmin is rapidly inhibited by a2-antiplasmin.
Plasminogen binds to both fibrinogen and fibrin, thereby being
incorporated into a clot as it is formed. Tissue plasminogen activator
(tPA) and, to a lesser degree, urokinase are serine proteases which
convert plasminogen to plasmin. Inactive tPA is released from vascular
endothelial cells following injury; it binds to fibrin and is
consequently activated. Urokinase is produced as the precursor,
prourokinase, by epithelial cells lining excretory ducts. The role of
urokinase is to activate the dissolution of fibrin clots that may be
deposited in these ducts."
Formation of a clot: here you can see fibrin strands, red blood cells,
and a few platelets.
Looking through a partially occluded vein:
I hope this has adequately answered your question. If not, please
request an Answer Clarification before rating. I will be happy to
assist you further on this question, if possible.
Clotting factor cascade
blood clot scheme
blood clot formation
Calcium + clot formation
Clarification of Answer by
20 Sep 2005 20:59 PDT
Hi again Ekw70,
I'm still a bit confused as to what exactly you mean. Could you
share with me where you got the data showing that clots age per "acute
(<14 days), subacute (2 weeks to 3 months) and chronic thrombus (>3
months) in terms of what the clots are made of."
I can find no evidence that clots are made of anything different than
what was included in my answer. Did you check all the links?
I have found additional information, but I am not sure it is what
you are seeking. Whether a clot forms due to a vessel injury or a
jagged piece of plaque, clots are made up of platelets, fibrin, and red
blood cells, and their formation is facilitated by clotting factors.
Small clots are dissolved by anti-clotting factors, or by therapy such
as heparin or other anti-coagulants. If left alone, they increase in size.
"Most acute coronary syndromes are caused by acute thrombosis,
superimposed on a ruptured or eroded atherosclerotic plaque, with or
without concomitant vasospasm (1,2). Plaque rupture is not a rare
event. It is followed by a variable amount of hemorrhage into the
plaque and luminal thrombosis (often small and non-obstructive),
causing sudden and rapid, but often clinically silent, growth of the
plaque."
"According to a study by researchers at the University of Pittsburgh
Medical Center (UPMC), a thin polymer coating on the inside of
coronary arteries may one day prevent the blood clot formation, called
acute thrombosis, that can follow angioplasty."
"Acute thrombosis occurs in five percent of patients who undergo
angioplasty and can lead to heart attack. This condition also may
necessitate repeat coronary intervention or bypass surgery.
The study attempted to determine if the polymer, polyethylene glycol
diisocyanate, could protect the damaged vascular wall from platelets
in the blood long enough for the inside of the artery to heal and
prevent the acute thrombosis process from beginning."
"Venous thrombosis of the lower extremities is a common mechanism of
disease. Methods for detecting and diagnosing thrombus are ultrasound
duplex imaging and compression of the veins."
"Deep venous thrombosis (DVT) is the principal disease of the deep
system. The thrombus frequently originates in the cusps of valves. As
the clot progresses, blood flow becomes restricted, causing an increase
in venous pressure. The vein walls stretch, causing damage to the
valves. The entire clot or part of it can break loose and cause a
pulmonary embolism, which can be life threatening. Contributing
factors of DVT include venous stasis, trauma, hypercoagulation, age,
heart failure, and previous DVT."
"Venous thrombosis in the superficial system (SVT) is not as
threatening as DVT. SVT can cause superficial phlebitis, an
inflammation of the vessels, resulting in a palpable cord throughout
the vessel."
"Image characteristics will appear differently along with each phase
of the thrombus. Fresh or acute thrombosis refers to clots that are
days up to two weeks old. Acute thrombus appears spongy in texture and
is poorly attached to the vessel wall. The vein will be distended
abnormally larger than usual. Venous distension helps to differentiate
between newly formed and older thrombus. Newly formed clot generates
low level echoes and may be anechoic. This will make imaging
difficult. The use of color Doppler will help indicate blood flow to
the vessel. A lack of flow indicates the presence of thrombus. The
subacute phase refers to clots that are weeks up to two months old.
The thrombus gradually becomes more echogenic. Retraction and lysis of
the thrombus will reestablish patency and occupy less of the vein
lumen. The vein becomes less distended and returns to a normal size.
The chronic phase refers to clots that are months to years old.
Chronic thrombus appears rigid in texture and is well attached to the
vessel wall. Echogenicity is strongly increased in this phase.
Echogenic intraluminal material may resemble plaque and cause acoustic
shadowing."
"In a 54-year-old man who suffered from chronic cardiac insufficiency
with terminal cyanosis, necropsy revealed a late-patent foramen ovale
with massive chronic thrombosis of the trunk and main branches of the
pulmonary artery overlying extensive, partly ulcerous atheromatous
plaques. The thrombosis is considered autochthonous."
"It is generally accepted that platelet adherence to plaques on the
linings of arteries is part of the atherosclerosis cascade. Platelet
adherence is worsened by excess fibrin. Platelets release a
platelet-derived growth factor, causing the smooth muscle cells on the
walls of arteries to proliferate. The resultant smooth muscle cells
have an increased permeability to platelets and lipids, especially
LDL-cholesterol. As LDL increases, it penetrates further into the
arterial wall. Plaque forms in the arterial wall as a benign
neoplastic growth (a monoclonal mutation). Excess fibrin, free
radicals, chronic inflammation, homocysteine, oxidized LDL, and
environmental hydrocarbons, etc. aggravate this mutation.
In the free radical hypothesis, lipid peroxides damage the arterial
walls, further enhancing wall permeability, as well as additionally
increasing the oxidation of lipids, especially LDL. These free
radicals invade the arterial wall and activate cell proliferation and
abnormal cell duplication. The newly mutated cells migrate into the
arterial wall and induce plaque formation. This cell proliferation
increases the surrounding clot growth or thrombus formation. T-cell
antibodies regulate this process. The resulting lesions are
atheromatous plaque. The surrounding thrombi form primarily from
modified smooth muscle cells, LDL, and fibrin.
Naturally occurring thrombolytic enzymes that dissolve clots are
generated in the endothelial cells of blood vessels. As people age,
production of these enzymes slows and the blood is more prone to
coagulation. This results in clotting. However, clots can form at any age.
"Endothelial injury can expose collagen, causing platelet aggregation
and tissue thromboplastin release that, when stasis or
hypercoagulability is present, trigger the coagulation mechanism. Many
factors may contribute to venous thrombosis: injury to the endothelium
of the vein, eg, from indwelling catheters, injection of irritating
substances, thromboangiitis obliterans, and septic phlebitis;
hypercoagulability associated with malignant tumors, blood dyscrasias,
oral contraceptives, and idiopathic thrombophlebitis; and stasis in
postoperative and postpartum states, varicose thrombophlebitis, and
the thrombophlebitis that complicates prolonged bed rest of any
chronic illness, heart failure, stroke, and trauma."
"Most venous thrombi begin in the valve cusps of deep calf veins.
Tissue thromboplastin is released, forming thrombin and fibrin that
trap RBCs and propagate proximally as a red or fibrin thrombus, which
is the predominant morphologic venous lesion (the white or platelet
thrombus is the principal component of most arterial lesions).
Anticoagulant drugs (eg, heparin, the coumarin compounds) can prevent
thrombi from forming or extending. Antiplatelet drugs, despite
intensive study, have not proved effective for prevention."
"Dr. Mark Fisher, director of the UCI Stroke Center, and colleagues
found that in the carotid artery, the primary source of blood to the
brain, plaques form lesions that support the growth of the
stroke-causing blood clots, which can either block the artery or break
off and travel into the brain."
"...plaque can build up in the walls of your arteries. Cholesterol,
calcium, and fibrous tissue make up this plaque. As more plaque builds
up, your arteries narrow and stiffen. This process is called
atherosclerosis, or hardening of the arteries. Eventually, enough
plaque builds up to reduce blood flow through your carotid arteries,
or cause irregularities in the normally smooth inner walls of the
arteries."
Definition of thrombosis:
"Thrombosis: The formation or presence of a blood clot in a blood
vessel. The vessel may be any vein or artery as, for example, in a
deep vein thrombosis or a coronary (artery) thrombosis. The clot
itself is termed a thrombus. If the clot breaks loose and travels
through the bloodstream, it is a thromboembolism. Thrombosis,
thrombus, and the prefix thrombo- all come from the Greek thrombos
meaning a lump or clump, or a curd or clot of milk. See entries also
to: Cavernous sinus thrombosis; Renal vein thrombosis. And see: Deep
Vein Thrombosis and Pulmonary Embolism."
"A disease that persists for a long time. A chronic disease is one
lasting 3 months or more, by the definition of the U.S. National Center
for Health Statistics. Chronic diseases generally cannot be prevented
by vaccines or cured by medication, nor do they just disappear.
Eighty-eight percent of Americans over 65 years of age have at least
one chronic health condition (as of 1998). Health damaging behaviors -
particularly tobacco use, lack of physical activity, and poor eating
habits - are major contributors to the leading chronic diseases."
I feel you may be thinking of chronic thrombosis in the sense that the
disease lasts a long time. Should the clot last a long time, it is
still composed of the same matter: cells, fibrin, and clotting factors.
Please let me know if you require further research. I too would like
to know what you have so far, as it may give me further clues as to
what information you need.
January 17, 2017
Gastrointestinal (GI) bleeding refers to any bleeding that starts in the gastrointestinal tract.
Bleeding may come from any site along the GI tract, but is often divided into:
- Upper GI bleeding: The upper GI tract includes the esophagus (the tube from the mouth to the stomach), stomach, and first part of the small intestine.
- Lower GI bleeding: The lower GI tract includes much of the small intestine, large intestine or bowels, rectum, and anus.
Alternative Names
Lower GI bleeding; GI bleeding; Upper GI bleeding
Considerations
The amount of GI bleeding may be so small that it can only be detected on a lab test such as the fecal occult blood test. Other signs of GI bleeding include:
- Dark, tarry stools
- Larger amounts of blood passed from the rectum
- Small amounts of blood in the toilet bowl, on toilet paper, or in streaks on stool (feces)
- Vomiting blood
Massive bleeding from the GI tract can be dangerous. However, even very small amounts of bleeding that occur over a long period of time can lead to problems such as anemia or low blood counts.
Once a bleeding site is found, many therapies are available to stop the bleeding or treat the cause.
Causes
GI bleeding may be due to conditions that are not serious, including:
However, GI bleeding may also be a sign of more serious diseases and conditions, such as the following cancers of the GI tract:
- Cancer of the colon
- Cancer of the small intestine
- Cancer of the stomach
- Intestinal polyps (a pre-cancerous condition)
Other possible causes of GI bleeding include:
- Abnormal blood vessels in the lining of the intestines (also called angiodysplasias)
- Bleeding diverticulum, or diverticulosis
- Crohn's disease or ulcerative colitis
- Esophageal varices
- Gastric (stomach) ulcer
- Intussusception (bowel telescoped on itself)
- Mallory-Weiss tear
- Meckel's diverticulum
- Radiation injury to the bowel
Home Care
There are home stool tests for microscopic blood that may be recommended for people with anemia or for colon cancer screening.
What to Expect at Your Office Visit
GI bleeding is diagnosed by a doctor -- you may or may not be aware of its presence.
GI bleeding can be an emergency condition requiring immediate medical attention. Treatment may involve:
- Blood transfusions
- Fluids and medicines through a vein
- Esophagogastroduodenoscopy (EGD) - a thin tube with a camera on the end is passed through your mouth into your esophagus, stomach, and small intestine
- A tube is placed through your mouth into the stomach to drain the stomach contents (gastric lavage)
Once your condition is stable, you will have a physical examination, including a detailed abdominal examination.
You will also be asked questions about your symptoms, including:
- When did you first notice symptoms?
- Did you have black, tarry stools or red blood in the stools?
- Have you vomited blood?
- Did you vomit material that looks like coffee grounds?
- Do you have a history of peptic or duodenal ulcers?
- Have you ever had symptoms like this before?
- What other symptoms do you have?
Tests that may be done to find the source of the bleeding include:
- Abdominal CT scan
- Abdominal MRI scan
- Abdominal x-ray
- Bleeding scan (tagged red blood cell scan)
- Blood clotting tests
- Capsule endoscopy (camera pill that is swallowed to look at the small intestine)
- Complete blood count (CBC), clotting tests, platelet count, and other laboratory tests
References
Bjorkman D. GI hemorrhage and occult GI bleeding. In: Goldman L, Ausiello D. Cecil Textbook of Medicine. 23rd ed. Philadelphia, Pa: Saunders Elsevier; 2007: chap 137.
Savides TJ, Jensen DM. Gastrointestinal bleeding. In: Feldman M, Friedman LS, Brandt LJ, eds. Sleisenger and Fordtran's Gastrointestinal and Liver Disease. 9th ed. Philadelphia, Pa: Saunders Elsevier; 2010: chap 19.
Tona Kunz at [email protected] or (630) 252-5560.
LEMONT, Ill. – Scientists broke new ground in the study of deep earthquakes, a poorly understood phenomenon that occurs where the oceanic lithosphere, driven by tectonics, plunges under continental plates – examples are off the coasts of the western United States, Russia and Japan.
This research is a large step toward replicating the full power of these earthquakes to learn what sets them off and how they unleash their violence. It was made possible only by the construction of a one-of-a-kind X-ray facility that can replicate high-pressure and high-temperature while allowing scientists to peer deep into material to trace the propagation of cracks and shock waves.
“We are capturing the physics of deep earthquakes,” said Yanbin Wang, a senior scientist at the University of Chicago who helps run the X-ray facility where the research occurred. “Our experiments show that, for the first time, laboratory-triggered brittle failures during the olivine-spinel (mineral) phase transformation have many features similar to deep earthquakes.”
Wang and a team of scientists from Illinois, California and France simulated deep earthquakes at the U.S. Department of Energy’s Argonne National Laboratory by using pressure of 5 gigapascals, more than double that of previous studies at 2 GPa. For comparison, a pressure of 5 GPa is roughly 49,000 times the atmospheric pressure at sea level.
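The conversion from gigapascals to multiples of sea-level pressure can be checked directly (a quick sketch, taking 1 standard atmosphere as 101,325 Pa):

```python
# Sanity check: express the 5 GPa experimental pressure as a multiple
# of standard sea-level atmospheric pressure (1 atm = 101,325 Pa).
pressure_pa = 5e9        # 5 gigapascals, as used in the experiments
atm_pa = 101_325         # one standard atmosphere, in pascals

ratio = pressure_pa / atm_pa
print(f"{ratio:,.0f}")   # roughly 49,000 times sea-level pressure
```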
At this pressure, rock should be squeezed too tightly to rupture and erupt into violent earthquakes. But it does. And that has puzzled scientists since the phenomenon of deep earthquakes was discovered nearly 100 years ago. Interest spiked with the May 24 eruption in the waters near Russia of the world’s strongest deep earthquake – roughly five times the power of the great San Francisco quake of 1906.
These deep earthquakes occur in older and colder areas of the oceanic plate that gets pushed into the earth’s mantle. It has been speculated that the earthquakes are triggered when a mineral common in the upper mantle, olivine, undergoes a phase transformation that weakens the whole rock temporarily, causing it to fail.
“Our current goal is to understand why and how deep earthquakes happen. We are not at a stage to predict them yet; it is still a long way to go,” Wang said.
“GSECARS is the only beamline in the world that has the combined capabilities of in-situ X-ray diffraction and imaging, controlled deformation, in terms of stress, strain and strain rate, at high pressure and temperature, and acoustic emission detection,” Wang said. “It took us several years to reach this technical capability.”
This new technology is a dream come true for the paper’s coauthor, geologist Harry Green, a distinguished professor of the graduate division at the University of California, Riverside.
More than 20 years ago, he and colleagues discovered a high-pressure failure mechanism that they proposed then was the long-sought mechanism of very deep earthquakes (earthquakes occurring at more than 400 km depth). The result was controversial because seismologists could not find a seismic signal in the earth that could confirm the results.
Seismologists have now found the critical evidence. Indeed, beneath Japan, they have even imaged the tell-tale evidence and showed that it coincides with the locations of deep earthquakes.
In the Sept. 20 issue of the journal Science, Green and colleagues explained how to simulate these earthquakes in a paper titled “Deep-Focus Earthquake Analogs Recorded at High Pressure and Temperature in the Laboratory”.
“We confirmed essentially all aspects of our earlier experimental work and extended the conditions to significantly higher pressure,” Green said. “What is crucial, however, is that these experiments are accomplished in a new type of apparatus that allows us to view and analyze specimens using synchrotron X-rays in the premier laboratory in the world for this kind of experiment — the Advanced Photon Source at Argonne National Laboratory.”
The ability to do such experiments has now allowed scientists like Green to simulate the appropriate conditions within the earth and record and analyze the “earthquakes” in their small samples in real time, thus providing the strongest evidence yet that this is the mechanism by which earthquakes happen at hundreds of kilometers depth.
The origin of deep earthquakes fundamentally differs from that of shallow earthquakes (earthquakes occurring at less than 50 km depth). In the case of shallow earthquakes, theories of rock fracture rely on the properties of coalescing cracks and friction.
“But as pressure and temperature increase with depth, intracrystalline plasticity dominates the deformation regime so that rocks yield by creep or flow rather than by the kind of brittle fracturing we see at smaller depths,” Green explained. “Moreover, at depths of more than 400 kilometers, the mineral olivine is no longer stable and undergoes a transformation resulting in spinel, a mineral of higher density.”
The research team focused on the role that phase transformations of olivine might play in triggering deep earthquakes. They performed laboratory deformation experiments on olivine at high pressure and found the “earthquakes” only within a narrow temperature range that simulates conditions where the real earthquakes occur in earth.
“Using synchrotron X-rays to aid our observations, we found that fractures nucleate at the onset of the olivine to spinel transition,” Green said. “Further, these fractures propagate dynamically so that intense acoustic emissions are generated. These phase transitions in olivine, we argue in our research paper, provide an attractive mechanism for how very deep earthquakes take place.”
“Our next goal is to study the 'real' material, the silicate olivine (Mg,Fe)2SiO4, which requires much higher pressures,” Wang said.
The research was funded by grants from the Institut National des Sciences de l’Univers and L’Agence Nationale de la Recherche and the National Science Foundation. Use of the Advanced Photon Source was funded by U.S. Department of Energy Office of Science.
The authors of the study were Alexandre Schubnel at the Ecole Normale Supérieure, France; Fabrice Brunet at the Université de Grenoble, France; Nadège Hilairet, Julian Gasc and Wang at the University of Chicago; and Green of UC Riverside.
Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation's first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America's scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy's Office of Science.
The Advanced Photon Source at Argonne National Laboratory is one of five national synchrotron radiation light sources supported by the U.S. Department of Energy’s Office of Science to carry out applied and basic research to understand, predict, and ultimately control matter and energy at the electronic, atomic, and molecular levels, provide the foundations for new energy technologies, and support DOE missions in energy, environment, and national security. To learn more about the Office of Science X-ray user facilities, visit the Office of Science website. |
This format allows students to dissect an identified PROBLEM by exploring the possible CAUSES of why the PROBLEM exists. After that, the student must identify a SOLUTION for each of the CAUSES. Remember, the student must SOLVE the CAUSE, not the PROBLEM itself. For example:
Problem: Not enough people wear their seatbelts.
Cause 1: Seatbelt laws are too weak.
Cause 2: People have misconceptions about seatbelts.
Solution 1: We need stricter seatbelt laws.
Solution 2: We must educate the public about seatbelt facts.
Please use the following format to generate and critique ideas.
Carbon dioxide which is already in our atmosphere could continue warming the planet for centuries even if new emissions were entirely halted, scientists claim.
A new analysis of future carbon emission scenarios found that it may take significantly fewer emissions for global temperatures to reach unsafe levels than previously thought.
Carbon dioxide, the most important greenhouse gas, has long-term effects because it can remain in the atmosphere for centuries after it is emitted.
To understand how long its influence on global temperatures will last, scientists produced a computer model of a scenario where all carbon emissions were immediately stopped after 1,800 billion tonnes had been released into the atmosphere.
They found that 40 per cent of the carbon would be absorbed by the oceans or landmasses within 20 years of emissions ceasing, 60 per cent within 100 years and 80 per cent within 1,000 years.
The decreasing levels of carbon in the atmosphere should in theory have a cooling effect, but this would be outweighed by the fact the oceans will absorb less and less heat as time goes on.
Previous studies had suggested that global temperatures would remain steady or decline if emissions were suddenly stopped, but did not account for the declining capacity of the oceans to continue absorbing heat, the scientists claimed.
Eventually the warming effect of heat which is no longer being absorbed by the oceans and is lingering in the atmosphere will outweigh the cooling caused by declining CO2 levels, they said.
Results published in the Nature Climate Change journal suggest that after an initial century of cooling following the stoppage of emissions, the planet would then warm by 0.37C over a 400 year period.
Although the change sounds small, it is almost half the total amount of warming seen since the start of the industrial era which stands at 0.85C.
According to the Intergovernmental Panel on Climate Change, an increase of 2C or more above pre-industrial levels could result in dangerous effects on the climate system.
Experts have previously warned that to keep global temperature rises below 2C, humans must keep the total amount of carbon dioxide emitted in the industrial era below 1,000 billion tonnes, about half of which has already been released.
But the new study suggests the 2C benchmark could be reached with significantly lower carbon emissions.
Dr Thomas Frölicher of Princeton University, who led the study, said: "If our results are correct, the total carbon emissions required to stay below two degrees of warming would have to be three-quarters of previous estimates, only 750 billion tons instead of 1,000 billion tons of carbon.
"Thus, limiting the warming to two degrees would require keeping future cumulative carbon emissions below 250 billion tons, only half of the already emitted amount of 500 billion tons." |
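The budget arithmetic quoted above can be checked in a few lines. This is a quick back-of-the-envelope sanity check; all figures are the ones reported in the article, not independent estimates.

```python
# Sanity-checking the carbon-budget arithmetic reported in the article.
# Units: billions of tonnes of carbon; figures come from the quoted study.

previous_budget = 1000                      # earlier estimate for staying below 2 C
revised_budget = previous_budget * 3 // 4   # study revises it to three-quarters
already_emitted = 500                       # roughly half of the earlier budget

remaining = revised_budget - already_emitted

print(revised_budget)  # 750
print(remaining)       # 250
```

This reproduces Dr Frölicher's numbers: a revised total budget of 750 billion tonnes, of which only 250 billion tonnes remain.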
It has become commonplace to witness a public display of bigoted and offensive language. The latest onslaught, made by the President of the United States in a meeting about immigration, included the use of "hate filled, vile and racist" language when referring to people from Haiti and countries in Africa. As has become the typical news cycle, the latest remarks are reacted to with outrage, denials, repudiations, criticisms and—from far too many—nothing at all.
Educators wonder how to talk about it in schools. Parents worry about their kids hearing it and how they should respond. And many people simply question what can be done, if anything. And then we move on.
But what if we don’t want to move on?
Every day—from the schoolyard to the White House—we hear biased, hurtful language. From racist “jokes” to slurs to stereotypes to blatant bigotry, the impact of standing by and doing nothing is profound. It tacitly condones the words. It sends a message to those who are in that group that they could be targeted next. It contributes to internalized oppression of those who are on the receiving end. And it leads to an increasing escalation of hate, bias and injustice in society. Indeed, the pyramid of hate shows us that when people or institutions treat lower-level biased attitudes and language as acceptable by saying and doing nothing, the bias keeps moving up the pyramid to more serious acts of discrimination and violence.
Since we cannot control what others say or do, especially those in positions of power, let’s talk about what we can do.
We need to hold each other accountable when our friends, acquaintances, classmates, family members, co-workers, neighbors and elected officials either make the comments themselves or remain silent, make excuses or defend biased and stereotypical language. This is especially important as we raise the next generation so that they know, through our words and deeds, that bigoted words are unacceptable. We need to actively challenge biased language and encourage all around us—including young people—to do the same. Here are some ways to do that:
Address jokes and slurs
It can be challenging to be confronted with offensive jokes and slurs. You may feel uncomfortable or unsure what to say in the moment, and you may be accused of having no sense of humor. However, when you remain silent, you are communicating your acceptance of the behavior to others. That is why it is important to say or do something. You can first try talking with the person privately. Let the person know how you feel about what they said, the impact it had on you and others, and why. Request that this “humor” not be used around you, and that they think seriously about whom they use it around if they insist on keeping it up.
“I didn’t mean it like that…”
When someone says something that sounds biased and those around that person confront them, sometimes the response is defensiveness: “that’s not how I meant it” or “you’re taking it the wrong way.” We know that language is always evolving. And we want to assume good will. Everyone makes mistakes, and even when we make sincere efforts to be aware of our language, we may still use insensitive language. While the intent may not be to offend, the impact of those words can deeply affect others, especially the person targeted and others who identify in that same way. While it’s important to acknowledge that everyone has bias, people also need to take responsibility for their own biases, to self-reflect and learn to stop them from escalating. When you notice biased attitudes in yourself, or if they are pointed out by others, ask yourself the following questions: Where does this bias come from? How is it hurtful? What can I do to challenge it in myself? How can I be better at challenging it in others?
Think and then act
Before you say something or act, take a moment, several minutes or even day(s) to reflect on what was said and be clear about what you want to accomplish in your interaction. Do what you can to ensure your own safety online and offline in how you respond. Sometimes you are challenging what was said to understand where the person is coming from, or sometimes to educate the person about the power of their words or explain the context of those words. Sometimes you want to interrupt the bigoted words and leave it at that. Whether it’s done in the moment or later, ask questions and let the person know why their words were offensive and hurtful. Be respectful but don’t sugar coat it. You might say something like: What did you mean by that? That sounded to me like a stereotype. Do you understand why that was so hurtful? You can speak up in-person and online. And if you are seeking a way to amplify your voice on a bigger level, activism can be a way to do that.
Educate yourself, others and promote empathy
Sometimes people use slurs, offensive language, bigoted words and racist “jokes” because they don’t know or understand the origin of those words. Or they make generalizations about people they know little about. You can help to correct inaccuracies by providing reliable information, historical context when relevant and statistics when the example cited is a “single story” rather than a nuanced and complex understanding of people or a whole nation. Help fill in the blanks, dispel stereotypes and contextualize people’s uninformed use of words and language. When you discuss it from an educational perspective, you also provide an opportunity for those to empathize with others. Instead of putting people down, we can use that energy to understand people who are different than we are, assuming there is good will on both ends. Learning to empathize with others’ lives, feelings, experiences and situations is what makes us human and helps us connect with one another. |
Praying mantises, cockroaches, crickets and grasshoppers are among the most familiar egg-laying insects, though only the last two are true orthopterans. Although these insects all lay eggs, they have different methods of carrying out the egg-laying process, and all egg-producing insects have distinct methods of carrying, hatching and raising their young.
In addition to variation in egg-laying habits among species, there is also diversity in habits between members of the same species. In the cockroach family, for instance, females in only one family (Blaberidae) have egg cases that lack clear definition and structure. In some cockroach families, females carry egg sacs called oothecae outside their bodies until it is time for the eggs to emerge, while other females deposit their egg cases in nests several days before the eggs emerge.
The eggs produced by different egg-laying insects vary in size, color and appearance. Eggs of walking stick insects look like small seeds and are usually dispersed loosely on the ground by females before hatching. Crickets, in contrast, lay their eggs within soil or plant material and arrange them in rows prior to hatching. Grasshoppers lay their eggs in soil or deposit them in dead wood or grass clumps, producing large volumes of eggs housed in protective sacs before hatching.
Temporal range: Miocene
[Figure: Oreopithecus bambolii fossil]
Oreopithecus is an extinct primate from the Miocene epoch whose fossils have been found in today's Tuscany and Sardinia in Italy. Oreopithecus (from the Greek ὄρος, oros and πίθηκος, pithekos, meaning "hill-ape") existed in the Tusco-Sardinian area when this territory was an isolated island in a chain of islands stretching from central Europe to northern Africa.[explain 1]
It was one of a large number of European immigrants that settled this area in the Vallesian-Turolian transition and one of few hominoids to survive the so-called Vallesian Crisis together with Sivapithecus in Asia. To date, dozens of individuals have been discovered at the Tuscan localities of Montebamboli, Montemassi, Casteani, Ribolla, and, most notably, in the fossil-rich lignite mine in the Baccinello Basin, making it one of the best-represented fossil apes.
Oreopithecus bambolii was first described by French paleontologist Paul Gervais in 1872. In the 1950s, Swiss paleontologist Johannes Hürzeler discovered a complete skeleton in Baccinello and claimed it was a true hominid — based on its short jaws and reduced canines, at the time considered diagnostic of the hominid family — and a biped, since the short pelvis was closer to those of hominids than to those of chimpanzees and gorillas. However, Oreopithecus′ hominid affinities remained controversial for decades until new analyses in the 1990s reasserted Oreopithecus as directly related to Dryopithecus, with the peculiar cranial and dental features explained as consequences of insular isolation. This new evidence confirmed that Oreopithecus was bipedal, but also revealed that its peculiar form of bipedalism was very different from that of Australopithecus: the hallux formed a 100° angle with the other toes, enabling the foot to act as a tripod in erect postures but not allowing fast bipedal locomotion. When a land bridge finally broke the isolation of the Tusco-Sardinian area, truly large predators such as Machairodus and Metailurus arrived among the new generation of European immigrants, and Oreopithecus faced quick extinction together with other endemic genera. [explain 2]
Known as the "enigmatic hominoid", Oreopithecus can dramatically rewrite the palaeontological map depending on whether it is a descendant of the European ape Dryopithecus or of some African anthropoid. Some have suggested that the unique locomotory behavior of Oreopithecus requires a revision of the current consensus on the timing of bipedality in human developmental history, but there is limited agreement on this point among paleontologists.
Some researchers have related Oreopithecus to the early Oligocene Apidium, a small arboreal proto-ape that lived nearly 34 million years ago in Egypt. It shows strong links to modern apes in its postcranium and, in this respect, it is the most modern Miocene ape below the neck, with closest similarities to the postcranial elements of Dryopithecus, but its dentition is adapted to a leafy diet and a close link is uncertain. Others claim it to be either the sister taxon to Cercopithecoidea or even a direct human ancestor, but it is usually placed in its own subfamily within Hominidae. It could instead be added to the same subfamily as Dryopithecus, perhaps as a distinct tribe (Oreopithecinae).
Oreopithecus bambolii is estimated to have weighed 30–35 kg (66–77 lb). It possessed a relatively short snout, elevated nasal bones, small and globular neurocranium, vertical orbital plane, and gracile facial bones. The shearing crests on its molars suggest a diet specializing in plant leaves. The very robust lower face, with a large attachment surface for the masseter muscle and a sagittal crest for attachment of the temporal muscle, indicates a heavy masticatory apparatus.
Its teeth were small relative to body size. The lack of a diastema (gap) between the second incisor and first premolar of the mandible indicates that Oreopithecus had canines of size comparable to the rest of its dentition. In many primates, small canines correlate with reduced inter-male competition for access to mates and less sexual dimorphism.
Its habitat appears to have been swampy, and not savanna or forest. The postcranial anatomy of Oreopithecus features adaptations for suspensory arborealism. Functional traits related to suspensory locomotion include its broad thorax, short trunk, high intermembral index, long and slender digits, and extensive mobility in virtually all joints. Its fingers and arms seem to show adaptations for climbing and swinging.
Its foot has been described as chimp-like, but it is different from those of extant primates. The habitual line of leverage of the primate foot is parallel to the third metatarsal bone. In Oreopithecus, the lateral metatarsals are permanently abducted so that this line falls between the first and second metatarsals instead. Furthermore, the shape of the tarsus indicates that loads on the foot were transmitted to the medial side of the foot instead of the lateral, unlike in other primates. The metatarsals are short and straight but show an increasingly lateral orientation. Its foot proportions are close to the unusual proportions of Gorilla and Homo but are distinct from those found in specialized climbers. The lack of predators and the limitation of space and resources in Oreopithecus′ insular environment favored a locomotor system optimized for low energy expenditure rather than speed and mobility.
Oreopithecus has been claimed to exhibit features that are adaptations to upright walking, such as the presence of a lumbar curve, in distinction to otherwise similar species known from the same period. Since the fossils have been dated to about 8 million years ago, this would represent an unusually early appearance of upright posture. However, a reevaluation of the spine from a skeleton of Oreopithecus has led to the conclusion that it lacked adaptations for habitual bipedality.
The semicircular canals of the inner ear serve as a sense organ for balance and control the reflex for gaze stabilization. The inner ear has three canals on each side of the head, and each of the six canals encloses a membranous duct that forms an endolymph-filled circuit. Hair cells in the duct’s ampulla pick up endolymph disturbances caused by movement, which register as rotatory head movement. They respond to body sway of frequencies greater than 0.1 Hz and trigger the vestibulocollic (neck) reflex and vestibuloocular (eye) reflex to recover balance and gaze stability. The bony semicircular canals allow estimates of duct arc length and orientation with respect to the sagittal plane.
Across species, the semicircular canals of agile animals have larger arcs than those of slower ones. For example, the rapid leaper Tarsius bancanus has semicircular canals much bigger than those of the slow-climbing Nycticebus coucang. The semicircular canals of brachiating gibbons are bigger than those of arboreal and terrestrial quadrupedal great apes. As a rule of thumb, the arc size of the ducts decreases with increasing body mass and the consequently slower angular head motions, and increases with greater agility and thus more rapid head motions. Modern humans have bigger arcs on their anterior and posterior canals, which reflect greater angular motion along the sagittal plane. The lateral canal has a smaller arc size, corresponding to reduced head movement from side to side.
Allometric measurements on the bony labyrinth of BAC-208, a fragmentary cranium that preserves a complete, undeformed petrosal bone, suggest that Oreopithecus moved with agility comparable to extant great apes. Its anterior and lateral semicircular canal sizes fall within the range for great apes. Its relatively large posterior arc implies that Oreopithecus was more proficient at stabilizing angular head motion along the sagittal plane.
Oreopithecus had hominid-like hand proportions that allowed a firm, pad-to-pad precision grip. Features not present in the hands of extant or fossil apes include hand length, relative thumb length, a deep and large insertion for the flexor pollicis longus, and the shape of the carpometacarpal joint between the metacarpal bone of the index finger and the capitate bone. At the base of the second metacarpal bone, the facet for the capitate is oriented transversally, as in hominids. The capitate, on the other hand, lacks the waisting associated with apes and climbing, still present in Australopithecus. Oreopithecus shares the specialised orientation at the carpometacarpal joint with A. afarensis and the marked groove for the flexor pollicis longus with A. africanus. It is thus likely that the hand morphology of Oreopithecus is derived for apes and convergent for early hominids.
- In what remained of the Tethys Sea or what was becoming the Mediterranean Sea; see also Geology and paleoclimatology of the Mediterranean Basin and Messinian salinity crisis
- A parallel to the Great American Interchange two million years later
- Agustí & Antón 2002, pp. ix, 174–5, 193, 197–9
- Simons 1960
- Delson, Tattersall & Van Couvering 2000, p. 465
- Köhler & Moyà-Solà 1997
- Ghose, Tia (2013-08-05). "Strange Ancient Ape Walked on All Fours". LiveScience.Com. TechMedia Network. Retrieved 2013-08-07.
- Russo, G. A.; Shapiro, L. J. (2013-07-23). "Reevaluation of the lumbosacral region of Oreopithecus bambolii". Journal of Human Evolution. doi:10.1016/j.jhevol.2013.05.004.
- Spoor 2003, pp. 96–7
- Rook et al. 2004, p. 355
- Moyà-Solà, Köhler & Rook 1999
- Agustí, Jordi; Antón, Mauricio (2002). Mammoths, Sabertooths, and Hominids: 65 Million Years of Mammalian Evolution in Europe. New York: Columbia University Press. ISBN 0-231-11640-3.
- Carnieri, E. and F. Mallegni (2003). "A new specimen and dental microwear in Oreopithecus bambolii". Homo 54 (1): 29–35. doi:10.1078/0018-442X-00056. PMID 12968421.
- Delson, Eric; Tattersall, Ian; Van Couvering, John A. (2000). "Dryopithecinae". Encyclopedia of human evolution and prehistory. Taylor & Francis. pp. 464–6. ISBN 978-0-8153-1696-1.
- Harrison, Terry (1990). Coppens, Y; Senut, B, eds. "Origine(s) de la Bipédie chez les Hominidés". Paris: Museum National d’Histoire Naturelle.
- Köhler, Meike; Moyà-Solà, Salvador (October 14, 1997). "Ape-like or hominid-like? The positional behavior of Oreopithecus bambolii reconsidered". PNAS 94 (21): 11747–11750. doi:10.1073/pnas.94.21.11747. PMC 23630. PMID 9326682.
- Moyà-Solà, Salvador; Köhler, Meike; Rook, Lorenzo (January 5, 1999). "Evidence of hominid-like precision grip capability in the hand of the Miocene ape Oreopithecus" (PDF). PNAS 96 (1): 313–317. doi:10.1073/pnas.96.1.313. PMC 15136. PMID 9874815.
- Rook, Lorenzo; Bondioli, Luca; Casali, Franco; Rossi, Massimo; Köhler, Meike; Moyá Solád, Salvador; Macchiarelli, Roberto (2004). "The bony labyrinth of Oreopithecus bambolii" (PDF). Journal of Human Evolution 46 (3): 347–354. doi:10.1016/j.jhevol.2004.01.001. PMID 14984788.
- Rook, Lorenzo; Bondioli, Luca; Köhler, Meike; Moyà-Solà, Salvador; Macchiarelli, Roberto (July 20, 1999). "Oreopithecus was a bipedal ape after all: Evidence from the iliac cancellous architecture" (PDF). PNAS 96 (15): 8795–8799. doi:10.1073/pnas.96.15.8795. PMC 17596. PMID 10411955.
- Rook, L.; Harrison, T.; Engesser, B. (1996). "The taxonomic status and biochronological implications of new finds of Oreopithecus from Baccinello (Tuscany, Italy)" (PDF). Journal of Human Evolution 30: 3–27. doi:10.1006/jhev.1996.0002.
- Simons, Elwyn L. (June 4, 1960). "Apidium and Oreopithecus". Nature 186 (186): 824–826. doi:10.1038/186824a0.
- Spoor, Fred (2003). "The semicircular canal system and locomotor behavior, with special reference to hominin evolution" (PDF). In Franzen, Jens Lorenz; Köhler, Meike; Moyà-Solà, Salvador. Walking Upright: Results of the 13th International Senckenberg Conference at the Werner Reimers Foundation. E. Schweitzerbart'sche Verlagsbuchhandlung. ISBN 3-510-61357-0.
Nowadays, people are accustomed to being sick. Getting through work with a headache or a case of the sniffles has become workplace policy and just one of the many annoyances of daily life. However, when one day of coming down with a cold stretches into a week of being immobilized by the flu, a bit of panic can set in.
Worry has reached a fever pitch throughout the nation as more and more cases of the flu are popping up in local hospitals. The virus has been reported in 41 states so far this year -- 29 of which are reporting high or “severe” levels of infection.
There have been 15,000 reported cases and 18 deaths associated with flu-like symptoms since the beginning of flu season. Keep yourself informed and your family safe with these helpful facts:
The main symptoms of the flu are high fever, joint pain, feeling weak, headache, sore throat and runny nose.
There's a difference between a cold and the flu. More than 100 viruses can cause a cold, but the flu is caused only by influenza virus types A, B and C. A cold will probably bring you down for a little while; the flu will make you feel as if you're being kicked while you're down.
The flu attacks the immune system. If the immune system is already compromised, the body will be less able to fight back against further infection.
Flu is spread when you make direct contact with the virus. For example, inhaling droplets in the air that contain the flu virus, sharing drinks or utensils that have been contaminated or simply handling contaminated items can infect you. You cannot catch the flu simply by walking around outside without a hat or warm jacket. The myth persists because getting chilled can stress the body and weaken the immune system, inhibiting your body's ability to fight off the flu.
Most healthy adults can infect others one day before symptoms arise and up to five to seven days after getting sick. Some, especially young children and people with weakened immune systems, may be infectious for a longer time period.
The flu is so prevalent in the winter due to its ability to thrive in low humidity. In winter, the relative humidity of indoor air is very low in comparison to the outside air. People are usually inside, in closer proximity with each other in the cold winter months, thus increasing the chance of the flu being spread. The flu is spread person to person, most commonly in areas where people are in constant close proximity to each other.
Antibiotics will not be effective against the flu, as they only treat bacterial infections. The flu is caused by a virus, which antibiotics cannot kill.
The best method for preventing the flu is simply using good health habits. Actions as simple as washing your hands, covering your cough and staying home when you're sick can stop the flu from spreading.
Flu shots aren't a guaranteed way to prevent the flu. Although doctors recommend everyone above the age of six months receive a flu shot, the shot prevents the flu about 70 percent of the time. Sometimes a specific strain cannot be accounted for, leaving a person still vulnerable. The seasonal flu vaccine protects against the three influenza viruses that research suggests will be most common. This year, those are influenza A, B and the H1N1 strain.
The flu typically lasts anywhere from a few days up to 2 weeks after it's been contracted.
Dogs can get colds and viral illnesses such as flu, canine parvovirus infection, canine distemper, kennel cough and canine hepatitis. You can treat colds in dogs by keeping the pet in a warm area, exposing it to steam and using antibiotics. Medications and vaccinations are also available for treating and preventing canine viral illnesses.
The symptoms of common cold in canines are coughing, sneezing, runny or stuffy nose and watering eyes. If not by a virus, the cold is likely to be caused by an allergic reaction or a parasite or fungus that has entered the lungs, trachea and/or the heart. The latter can lead to lung tissue scarring and pneumonia.
Providing the dog with healthy food, fresh and clean water and friendly temperatures to live in helps prevent bouts of cold. Consult a veterinarian if the cold symptoms do not improve after several days.
Symptoms of flu, caused by influenza virus, in dogs include fever with temperatures up to 104 to 106 degrees Fahrenheit, fatigue and poor appetite. The best ways to treat canine flu are adequate rest, plenty of fluids, and antibiotics. If left untreated, dog flu might lead to more serious diseases.
Canine parvovirus infection is characterized by fever, dehydration, fatigue, bloody diarrhea and vomiting. Symptoms of canine distemper include sneezing and discharge from the eye. It can lead to pneumonia and encephalopathy that can be fatal.
Kennel cough symptoms include fever, lethargy, poor appetite and deep cough. It can cause pneumonia if left untreated. Symptoms of canine hepatitis, caused by adenovirus, are diarrhea, vomiting and later jaundice.
Except for flu and kennel cough, other viral illnesses require hospitalization with IV fluids, antibiotics, respiratory medicines and other medications for treatment. |
A method of transmission is the movement or the transmission of pathogens from a reservoir to a susceptible host. Once a pathogen has exited the reservoir, it needs a mode of transmission to the host through a portal of entry. Transmission can be by direct or indirect contact or through airborne transmission.
Direct contact is person-to-person transmission of pathogens through touching, biting, kissing, or sexual intercourse. Microorganisms can also be expelled from the body by coughing, sneezing or talking. The organisms travel in droplets over distances of less than 1 metre and are inhaled by a susceptible host.
Indirect contact includes both vehicle-borne and vector-borne contact. A vehicle is an inanimate go-between, an intermediary between the portal of exit from the reservoir and the portal of entry to the host. Inanimate objects such as handkerchiefs and tissues, soiled laundry, and surgical instruments and dressings are common vehicles that can transmit infection. |
Humans who are visually impaired are taking a cue from their mammalian cousin, the bat, when it comes to using sound to see. Bats use a navigation system called echolocation, emitting a high-pitched sound that bounces off objects in the environment. By listening to how the sound returns, the bat can effectively see with its ears. The bat’s brain calculates the amount of time it takes for the sound to return to determine an object's distance, its size and even what direction it is travelling. This technique allows bats to hunt at night, cloaked from predators, at a time when insects are abundant.
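The time-of-flight calculation described above can be sketched in a few lines. This is an illustrative sketch of the principle, not measured bat data; the speed of sound and the example echo delay are assumed values.

```python
# Time-of-flight ranging: the principle behind bat echolocation and FlashSonar.
# The speed of sound and the example delay below are assumed values.

SPEED_OF_SOUND = 343.0  # metres per second in air at about 20 C

def echo_distance(round_trip_seconds: float) -> float:
    """Distance to an object from the round-trip time of its echo.

    The sound travels out to the object and back, so the one-way
    distance is half of the total path covered.
    """
    return SPEED_OF_SOUND * round_trip_seconds / 2.0

# An echo that returns after 20 milliseconds places the object
# roughly 3.4 metres away.
print(echo_distance(0.020))
```

A shorter delay means a closer object, which is why rapid click rates let an echolocator resolve nearby features like columns and balconies.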
The saying, “blind as a bat” is a bit deceiving because in addition to echolocation bats also see with their eyes. Unfortunately, some humans don’t have the same advantage. People who are blind or visually impaired traditionally use other methods to see their environment such as guide dogs or canes. Daniel Kish, who is blind, believes the echolocation used by bats could also be used by humans as well in a technique he calls FlashSonar.
Kish uses a series of clicks which he listens for in order to determine the space around him. Incredibly, not only can he determine what buildings may be around him, but also what features those buildings may have including its angles and whether it has balconies or columns. Rather than keep this technique to himself, Kish is founder and President of a non-profit foundation called World Access for the Blind whose goal is to teach blind children to see the environment around them using FlashSonar. Besides teaching blind children a technique to see, the foundation also teaches a no-limits philosophy to encourage children to do more regardless of their disability. |
Spider legs are extended by hydraulic pressure: a spider pushes fluid into its legs to move them. When the spider dies, that pressure is lost and its legs curl up.
If you've ever killed a spider (without completely squishing it) or come across one that is already dead, you know that its legs curl up beneath it when it dies. You may be too busy being grossed out by this eight-legged creature to wonder just why this is. Humans' limbs don't curl inward after they die. So what is going on with this posthumous tendency?
It turns out that spiders' legs are hydraulic. Or more specifically, two of the six joints in their legs are hydraulic. In order to straighten these joints and subsequently their legs, spiders flex their muscles and increase the pressure. When the spider dies, it obviously loses control over its leg system and relaxes. The legs then curl upward automatically because no pressure is keeping them straightened. |
The epiglottal or pharyngeal ejective is a rare type of consonantal sound, used in some spoken languages. The symbol in the International Phonetic Alphabet that represents this sound is ⟨ʡʼ⟩.
Features of the pharyngeal ejective:
- Its manner of articulation is occlusive, which means it is produced by obstructing airflow in the vocal tract. Since the consonant is also oral, with no nasal outlet, the airflow is blocked entirely, and the consonant is a stop.
- Its place of articulation is epiglottal, which means it is articulated with the aryepiglottic folds against the epiglottis.
- Its phonation is voiceless, which means it is produced without vibrations of the vocal cords.
- It is an oral consonant, which means air is allowed to escape through the mouth only.
- It is a central consonant, which means it is produced by directing the airstream along the center of the tongue, rather than to the sides.
- The airstream mechanism is ejective (glottalic egressive), which means the air is forced out by pumping the glottis upward.
A pharyngeal ejective has been reported in Dargwa, a Northeast Caucasian language. |
CHAPTER 4: Unending Siege
Canada’s Participation in the War
The Political Scene Inside Canada
Canada was distant from the sense of total war felt across the European continent and the British Isles. Nothing here was preventing people from leading peaceful lives. Since 1912, Ontario had been debating the notorious Regulation 17, which sought to eliminate French as a language of instruction in its separate schools, while at the same time, once war broke out, criticizing Quebec Francophones for refusing to fight for the Empire. The Quebec nationalists proclaimed their readiness to fight for the Franco-Ontarian victims of Regulation 17 instead of making an all-out effort overseas.
The pursuit of grand political objectives was not conducted without internal strife. To the Quebec nationalists, conflicts at home were more urgent than those on the battlefield. They asked why they should fight for France when a century and a half earlier France had negotiated away its interest in Quebec; they wondered if the war of 1870, less than half a century earlier, had meant anything at all. |